As our world sinks deeper into recession and nations struggle to infuse cash and confidence into their deflated economies, planetary scientists and aerospace engineers in the U.S. are preparing to launch the first mission specifically designed to find other planets in the Milky Way similar to our own in size and distance from the stars they orbit. On March 5, weather and technology permitting, NASA will send the Kepler spacecraft into orbit around the Sun, beginning NASA Discovery mission #10, the latest in the agency's 15-year-old commitment to explore space with "lower-cost, highly focused planetary science investigations designed to enhance our understanding of the solar system." Kepler's purpose is clear: "The scientific goal of the Kepler Mission is to explore the structure and diversity of planetary systems, with a special emphasis on the detection of Earth-size planets. It will survey the extended solar neighborhood to detect and characterize hundreds of terrestrial and larger planets in or near the 'habitable zone,' defined by scientists as the distance from a star where liquid water can exist on a planet's surface. The results will yield a broad understanding of planetary formation, the structure of individual planetary systems, and the generic characteristics of stars with terrestrial planets." The price tag is clear by now, too: somewhere between $550 million and $600 million. More on this later. Kepler is a fitting name for this unique craft. German-born Johannes Kepler (1571-1630) is one of the giants of early modern astronomy. A contemporary of Galileo and Brahe, Kepler discovered the basic laws governing planetary motion and in doing so created the field of celestial mechanics. He was also among the earliest to defend in print the heliocentric cosmology of Copernicus, and was the first to explain that the Moon was responsible for tides on Earth. His contributions to optics were also important. Because he gave us the ability to know with mathematical precision the past, present, and future positions of planets, as well as our fundamental understanding of how telescopes and human vision work, it's appropriate that his name is on a spacecraft that relies on an ultra-sensitive light meter and a telescope to determine if planets similar to ours exist in potentially habitable locations elsewhere in the galaxy. For planetary scientists, size and distance matter. Up to now, the exoplanets discovered outside our solar system have been gas giants, ice giants, or super-hot bodies orbiting close to stars. With Kepler it should be possible to find rocky ones about the same size as Earth orbiting at distances where conditions are more favorable for maintaining liquid water. Where there is water, there may be life. The Kepler spacecraft was built by Ball Aerospace & Technologies Corp., in Boulder, CO, a major player in space technology which also happens to be a subsidiary of Ball Corporation, the venerable company better known to Americans as the manufacturer of Ball glass canning jars. (FYI, the familiar Ball logo is prominently featured on the aerospace corporation's Web site.) Kepler is a highly sophisticated piece of instrumentation and packaging.
Its data-gathering capability relies on a 0.95-meter-diameter telescope and a collection of 42 light-sensitive microchips called charge-coupled devices (CCDs), together forming the Kepler Focal Plane Array, which will be used to record extremely minute variations in the brightness of stars observed in the Cygnus region of the Milky Way over a period of about four years. The Cygnus region, located in the Orion spur of our galaxy, was selected because this star field isn't obscured by our Sun at any time of the year. The location also avoids occultations, or obscuring transits across the field of view, caused by asteroids and Kuiper Belt objects. In short, Kepler can stay focused on the Cygnus region without interference. This is important, because Kepler will be measuring the incredibly subtle changes in star brightness that occur when small-diameter planets cross, or transit, the face of the stars they orbit. The transit method is how the majority of exoplanets have been detected, but it also explains why the known exoplanets are so large: on Earth, the observations can't be precise or constant enough to detect smaller orbiting bodies. This is because our atmosphere bends the light entering it (why stars twinkle), and transits by Earth-size or even smaller planets could happen over a very short time frame -- perhaps as little as two hours in the span of a year. Even if it were possible to compensate for light bending and detect Earth-size planets, this would require building dedicated telescopes in multiple locations to avoid the changes in light resulting from Earth's own orbit, not to mention 24/7/365 monitoring. The cost would be many times what the U.S. will ultimately spend on Kepler. What makes Kepler relatively cost-effective is its ability to observe a large number of stars from space itself using ultra-sensitive CCDs, something never done before. Initially, Kepler will be looking at about 140,000 stars, but as the projected four-year mission unfolds this number will be narrowed to approximately 100,000. That's still a lot of stars and data to work with. I'm not much of a betting man, but assuming its launch, orbital insertion, and internal systems work properly, I think it's likely that by mission's end, Kepler will have detected many smaller new planets of known size, orbit, and even approximate temperature, including rocky worlds in habitable zones. In light of the economic trends, it's fair to ask if such results are worth $600m. There are as many opinions about that as there are observers of the project, but one thing is clear: if Kepler hadn't been approved back in 2001, prospects for discovering other terrestrial planets in the Milky Way would be on hold indefinitely. This is not to say the project's been immune to cost overruns, sometimes poor management, and other pitfalls of large and complex contract work. Kepler's had its share of difficulties; the project faced possible cancellation in 2005. I'm looking ahead, though, not back. The possibility of discovering one or more terrestrial planets orbiting in habitable zones in what amounts to our own back yard is more than exciting. If it happens, our understanding of planetary systems will take a dramatic step forward. Image captions: Johannes Kepler and the planets known during his lifetime (for editorial use only). Artist's conception of Kepler in orbit with a transiting planet in the background (credit: NASA). The Kepler Focal Plane Array during assembly (credit: Ball Aerospace & Technologies Corp.).
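To get a sense of just how small the brightness dips Kepler has to detect are, here is a rough back-of-the-envelope sketch (mine, not from the article): during a transit, the fractional dimming of the star is approximately the square of the planet-to-star radius ratio.

```python
# Rough illustration (not from the article): approximate transit depth as the
# ratio of the planet's disk area to the star's disk area.

R_SUN_KM = 696_000      # solar radius
R_EARTH_KM = 6_371      # Earth radius
R_JUPITER_KM = 69_911   # Jupiter radius

def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
    """Fractional dimming when the planet crosses the star's disk."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Earth-size transit:   {transit_depth(R_EARTH_KM):.1e}")    # ~8.4e-05 (~84 parts per million)
print(f"Jupiter-size transit: {transit_depth(R_JUPITER_KM):.1e}")  # ~1.0e-02 (~1%)
```

An Earth analog dims a Sun-like star by less than 0.01 percent, which is why this kind of photometry has to be done from space with ultra-sensitive CCDs.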
http://cosmologybus.typepad.com/cosmology_bus/johannes-kepler/
More meteorites have been recovered from Antarctica than from any other place on Earth. The Antarctic is an ideal location to collect meteorites because they are actually concentrated by the action of the ice and can survive for millions of years. (Diagram: the formation of blue ice areas.) When a meteorite falls in Antarctica it becomes buried under snow and will eventually become incorporated into the deep ice cap. The ice cap in Antarctica is continually flowing to the edge of the continent, carrying the meteorites with it. When the ice hits a barrier, such as a mountain range, it is forced upwards. The strong 'katabatic' (downward) winds erode the surface ice, revealing high-pressure 'blue ice' and the meteorites it has brought up from great depths. The meteorites accumulate, as they are too heavy to be carried far by the wind. (Image: collecting meteorites in Antarctica with ANSMET.) As Antarctica is extremely cold, generally below freezing, it is very dry. The lack of moisture means that meteorites weather incredibly slowly and can survive longer than anywhere else on Earth. The oldest meteorite collected in Antarctica has a terrestrial residence age of over 2.5 million years. Scientists have noticed that the really old meteorites from Antarctica differ from more recent finds, suggesting that the number and type of meteorites have changed over the last 2 million years. Both the USA and Japan organise expeditions to collect meteorites in the Antarctic each year and between them have found many thousands of meteorites. Several scientists working at the Natural History Museum, including Dr Sara Russell and Dr Gretchen Benedix, have collected meteorites in the Antarctic with the United States Antarctic Search for Meteorites (US ANSMET) team.
http://www.nhm.ac.uk/nature-online/space/meteorites-dust/collecting-identifying-meteorites/antarctic-meteorites/index.html
Lunar Laser Ranging experiment The ongoing Lunar Laser Ranging Experiment measures the distance between the Earth and the Moon using laser ranging. Lasers on Earth are aimed at retroreflectors planted on the Moon during the Apollo program, and the time for the reflected light to return is determined. Early tests, Apollo, and Lunokhod The first successful tests were carried out in 1962 when a team from the Massachusetts Institute of Technology succeeded in observing reflected laser pulses using a laser with a millisecond pulse length. Similar measurements were obtained later the same year by a Soviet team at the Crimean Astrophysical Observatory using a Q-switched ruby laser. Greater accuracy was achieved following the installation of a retroreflector array on July 21, 1969, by the crew of Apollo 11, while two more retroreflector arrays left by the Apollo 14 and Apollo 15 missions have also contributed to the experiment. Successful lunar laser range measurements to the retroreflectors were first reported by the 3.1 m telescope at Lick Observatory, the Air Force Cambridge Research Laboratories Lunar Ranging Observatory in Arizona, the Pic du Midi Observatory in France, the Tokyo Astronomical Observatory, and McDonald Observatory in Texas. The unmanned Soviet Lunokhod 1 and Lunokhod 2 rovers carried smaller arrays. Reflected signals were initially received from Lunokhod 1, but no return signals were detected after 1971 until a team from the University of California rediscovered the array in April 2010 using images from NASA's Lunar Reconnaissance Orbiter. Lunokhod 2's array continues to return signals to Earth. The Lunokhod arrays suffer from decreased performance in direct sunlight, a factor that was taken into account in the design of the reflectors placed during the Apollo missions. The Apollo 15 array is three times the size of the arrays left by the two earlier Apollo missions. Its size made it the target of three-quarters of the sample measurements taken in the first 25 years of the experiment. Improvements in technology since then have resulted in greater use of the smaller arrays, by sites such as the Côte d'Azur Observatory in Grasse, France; and the Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) at the Apache Point Observatory in New Mexico. The distance to the Moon is calculated approximately using this equation: - Distance = (Speed of light × Time taken for light to reflect) / 2. In actuality, the round-trip time of about 2½ seconds is affected by the relative motion of the Earth and the Moon, the rotation of the Earth, lunar libration, weather, polar motion, propagation delay through Earth's atmosphere, the motion of the observing station due to crustal motion and tides, the velocity of light in various parts of air, and relativistic effects. Nonetheless, the Earth-Moon distance has been measured with increasing accuracy for more than 35 years. The distance continually changes for a number of reasons, but averages about 384,467 kilometers (238,897 miles). At the Moon's surface, the beam is only about 6.5 kilometers (four miles) wide and scientists liken the task of aiming the beam to using a rifle to hit a moving dime 3 kilometers (approximately two miles) away. The reflected light is too weak to be seen with the human eye: out of 10^17 photons aimed at the reflector, only one will be received back on Earth every few seconds, even under good conditions. They can be identified as originating from the laser because the laser is highly monochromatic.
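As a quick numerical check of the ranging equation above (my own sketch, not part of the article), a round-trip time of roughly 2.56 seconds returns the average Earth-Moon distance quoted above:

```python
# Illustrative only: recover the Earth-Moon distance from a measured round-trip time.
C_KM_PER_S = 299_792.458           # speed of light in vacuum

def lunar_distance_km(round_trip_s):
    """Distance = (speed of light x round-trip time) / 2."""
    return C_KM_PER_S * round_trip_s / 2

# A round trip of ~2.5649 s corresponds to the ~384,467 km average quoted above.
print(f"{lunar_distance_km(2.5649):,.0f} km")   # ~384,469 km
```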
This is one of the most precise distance measurements ever made, and is equivalent in accuracy to determining the distance between Los Angeles and New York to one hundredth of an inch. As of 2002, work is progressing on increasing the accuracy of the Earth-Moon measurements to near-millimeter accuracy, though the performance of the reflectors continues to degrade with age. Some of the findings of this long-term experiment are: - The Moon is spiraling away from Earth at a rate of 3.8 cm per year. This rate has been described as anomalously high. - The Moon probably has a liquid core of about 20% of the Moon's radius. - The universal force of gravity is very stable. The experiments have put an upper limit on the change in Newton's gravitational constant G of less than 1 part in 10^11 since 1969. - The likelihood of any "Nordtvedt effect" (a composition-dependent differential acceleration of the Moon and Earth towards the Sun) has been ruled out to high precision, strongly supporting the validity of the Strong Equivalence Principle. - Einstein's theory of gravity (the general theory of relativity) predicts the Moon's orbit to within the accuracy of the laser ranging measurements. The presence of reflectors on the Moon has been used to rebut claims that the Apollo landings were faked. For example, the APOLLO Collaboration's photon pulse return graph has a pattern consistent with a retroreflector array near a known landing site. See also - Apache Point Observatory Lunar Laser-ranging Operation - Apollo Lunar Surface Experiments Package - Tom Murphy (Physicist) (principal investigator of Apollo's reflector experiment) - Carroll Alley (previous principal investigator of Apollo's reflector experiment) - EME (communications) - Lunar distance (astronomy) - Lunokhod programme - Third-party evidence for Apollo Moon landings
http://en.wikipedia.org/wiki/Lunar_Laser_Ranging_Experiment
Glowing silver-blue clouds that sometimes light up summer night skies in polar regions, after sunset and before sunrise, are called noctilucent clouds. Scientists studying these clouds, using data from NASA's AIM (Aeronomy of Ice in the Mesosphere) satellite, have found that year-to-year changes in noctilucent clouds are closely linked to weather and climate across the globe. One major discovery they've made is that weather conditions in one hemisphere can have a profound effect on the other hemisphere. Also known as night-shining clouds, noctilucent clouds form in the highest reaches of the atmosphere, called the mesosphere, as much as 50 miles (80 km) above the Earth's surface. They're usually seen during summer in polar regions. After sunset or before sunrise, when the sun is below the ground horizon but visible from the high altitude of noctilucent clouds, sunlight illuminates these clouds, causing them to glow in the dark night sky. Noctilucent clouds are thought to be made of ice crystals that form on fine dust particles from meteors; they can only form when temperatures are incredibly low and when there's water available to form ice crystals. NASA's AIM satellite was launched on April 25, 2007. Weighing just 430 pounds, AIM was placed into a polar orbit, 373 miles in altitude, using a Pegasus-XL launch vehicle out of Vandenberg Air Force Base in California. Its mission is to observe noctilucent clouds using several onboard instruments to collect information such as temperature, atmospheric gases, ice crystal size, changes in the clouds, as well as the amount of meteoric space dust that enters the atmosphere. Scientists will use the data to study how noctilucent clouds form and why they change over time. (A video about AIM shows the Pegasus-XL rocket launch about 2 minutes 30 seconds in.) James Russell, an atmospheric scientist at Hampton University in Hampton, VA, and Principal Investigator for AIM, said in a press release: "The question people usually ask is why do clouds which require such cold temperatures form in the summer? It's because of the dynamics of the atmosphere. You actually get the coldest temperatures of the year near the poles in summer at that height in the mesosphere." Here's how it works: during summer, air close to the ground gets heated and rises. Since atmospheric pressure decreases with altitude, the rising air expands. When the air expands, it also cools down. This, along with other processes in the upper atmosphere, drives the air even higher, causing it to cool even more. As a result, temperatures in the mesosphere can plunge to as low as -210°F (-134°C). In the northern hemisphere, the mesosphere predictably reaches these temperatures by mid-May, give or take a week. But that's not the case in the southern hemisphere, where the appearance of noctilucent clouds is not as predictable; for instance, in 2010, the clouds arrived one month later than in 2009. Atmospheric scientist Bodil Karlsson, at Stockholm University in Sweden and a member of the AIM team, said in the same press release: "Since the clouds are so sensitive to the atmospheric temperatures, they can act as a proxy for information about the wind circulation that causes these temperatures. They can tell us that the circulation exists first of all, and tell us something about the strength of the circulation." She explained that the appearance of summer noctilucent clouds in the southern hemisphere is linked to an atmospheric phenomenon called the southern stratospheric vortex.
It's a pattern of winter wind circulation above the south pole. In 2010, that vortex persisted into the southern summer season, keeping cold air at lower altitudes, which prevented the formation of noctilucent clouds at higher latitudes until later in the summer. There's also evidence of a connection between atmospheric conditions in the northern and southern hemispheres. The upwelling of air needed to create noctilucent clouds is part of a larger wind circulation loop traveling between the two poles. Wind activity about 13,000 miles (20,920 km) away in the northern hemisphere appears to influence the southern hemisphere. With AIM, scientists have observed a 3 to 10 day lag between low altitude weather in the north (where mountains create complex wind systems) and its effect on the southern mesosphere during the southern hemisphere summers. However, during northern hemisphere summers, the lower atmosphere in the southern polar region has little variability. As a result, it produces calmer, steadier conditions in the northern hemisphere mesosphere, which allows for consistent timing in noctilucent cloud formation. Said Russell: "The real importance of all of that is not only that events down where we live can affect the clouds 50 miles (80 km) above, but that the total atmosphere from one pole to the next is rather tightly connected." It will take additional analysis to understand the details of this complex northern-southern hemisphere atmospheric interaction. AIM data will also help scientists learn more about how noctilucent cloud seasons vary due to changes in the atmosphere, caused by the sun's cycles, as well as by natural and human-induced changes. As more information is gathered about these causes and effects, noctilucent clouds could be used to monitor atmospheric processes that are otherwise difficult to observe directly. Bottom line: NASA's AIM (Aeronomy of Ice in the Mesosphere) satellite has been observing noctilucent clouds since 2007, gathering data that will help scientists understand the characteristics of these extreme high-altitude clouds that are closely linked to weather and climate across the globe. Data have shown, in unprecedented detail, how low altitude weather conditions in the northern hemisphere affect the southern polar mesosphere during the southern summer, and vice versa. A deeper understanding of noctilucent clouds could someday help scientists monitor the effects of climate change in the atmosphere.
http://earthsky.org/earth/nasa-satellite-observations-of-noctilucent-clouds-show-complex-atmospheric-interactions/comment-page-1
Planet Earth began 4.6 billion years ago as humble specks of dust that stuck together within a disk around our newborn sun. A dust experiment aboard a rocket suggests that those specks first assembled into elongated chains of particles, rather than the clumpier structures that some computer simulations had predicted. The results, published in the 9 July PRL, confirm some earlier, more limited findings about fluffy “planetary seedlings” but also include new types of measurements. In computer simulations of the earliest stages of a planet’s growth, particles floating within a gaseous disk collide because of their constant thermal agitation, known as Brownian motion. Gravitational attraction is negligible for tiny dust grains, so they stick only through electrostatic van der Waals forces. The simulations predict loose aggregates of particles with many branches in a complex network, like a portion of a spider’s web. Mathematically, such branched chains have a so-called fractal dimension of just under 2, midway between a linear string of fractal dimension 1.0 and a compact yet porous clump of dimension 3.0. But checking this prediction isn’t easy because it requires low gravity. In 1998 a team led by Jürgen Blum of the Technical University in Braunschweig, Germany, tested the simulation predictions in an enclosed experiment on a space shuttle flight. The team released a cloud of silicon-dioxide dust particles–in essence, micron-sized glass spheres–and took photographs with microscopes as the particles interacted. The results hinted that the dusty structures were unexpectedly open, consisting of long chains with few branched sections and no clumps. But the particles quickly drifted to the side of the canister, limiting the data. The new results come from an adapted version of the same experiment, launched aboard an unmanned rocket from northern Sweden in 1999. The rocket’s dusty cargo was in microgravity for 6 minutes. Blum and colleague Maya Krause of Friedrich Schiller University in Jena, Germany, designed high-speed microscopes that scanned across the cloud of silicate spheres as they floated through the brightly lit container. As the particles stuck together and grew, they produced increasingly darker silhouettes against the lighted background. Krause and Blum determined the masses of the aggregates from the amount of light they absorbed. Then, they deduced the structures’ shapes from the sizes of the blotches in the images: tiny black spots for tight clumps, or extended smudges for long strings. The results confirm the tentative findings from the shuttle flight, Blum says. Dust aggregates had a typical fractal dimension of 1.4, pointing to long chains of silicate spheres with few side branches. “No one has managed to model this extreme fluffiness from first principles,” Blum says. He suspects that the Brownian motions of the growing dust chains make them rotate quickly every few milliseconds, like tiny helicopter blades. Most new dust particles would then get swept up by the ends of the chains before they have a chance to adhere to particles in the middle. The research also went beyond the space shuttle data to show that the dust structures grew at an exponentially increasing rate–so quickly, says Blum, that aggregates of 100 grains or more should be common within one year of the formation of a disk around a new star. 
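To get a feel for what a fractal dimension of 1.4 implies, here is a small sketch of my own (not from the article) using the standard mass-radius scaling for fractal aggregates, N ≈ (R/r0)^D; the 0.5 µm monomer radius is an assumption based on the "micron-sized" spheres described above.

```python
# Illustrative sketch: for a fractal aggregate of N monomers, overall size R
# scales as N ~ (R / r0)**D, so R ~ r0 * N**(1/D).  Lower D means fluffier.

def aggregate_extent_um(n_particles, fractal_dimension, monomer_radius_um=0.5):
    """Characteristic radius (microns) of an aggregate of n_particles monomers."""
    return monomer_radius_um * n_particles ** (1.0 / fractal_dimension)

for d in (1.4, 2.0, 3.0):   # chain-like, branched web, compact clump
    print(f"D = {d}: 100 grains span ~{aggregate_extent_um(100, d):.0f} µm")
# A D = 1.4 aggregate of 100 grains extends to roughly 13 µm, versus ~2 µm for
# a compact (D = 3) clump of the same mass -- hence the "extreme fluffiness".
```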
The new report should help astronomers interpret how large dust grains affect images of planet-forming regions from NASA’s new Spitzer Space Telescope, which uses infrared light to peer into star-forming clouds, says theorist Stuart Weidenschilling of the Planetary Science Institute in Tucson, Arizona. “It’s an ingenious experiment,” he says. However, Weidenschilling adds that Blum and others still must demonstrate whether the same mechanism can assemble planet-building clumps more than a meter wide. Robert Irion is a freelance science writer based in Santa Cruz, CA. - J. Blum et al., “Growth and Form of Planetary Seedlings: Results from a Microgravity Aggregation Experiment,” Phys. Rev. Lett. 85, 2426 (2000).
http://physics.aps.org/story/v14/st2
A Tour of the Cryosphere (Entry ID: SVS_CRYOSPHERE) Abstract: The cryosphere consists of those parts of the Earth's surface where water is found in solid form, including areas of snow, sea ice, glaciers, permafrost, ice sheets, and icebergs. In these regions, surface temperatures remain below freezing for a portion of each year. Since ice and snow exist relatively close to their melting point, they frequently change from solid to liquid and back again due to fluctuations in surface temperature. Although direct measurements of the cryosphere can be difficult to obtain due to the remote locations of many of these areas, using satellite observations scientists monitor changes in the global and regional climate by observing how regions of the Earth's cryosphere shrink and expand. This animation portrays fluctuations in the cryosphere through observations collected from a variety of satellite-based sensors. The animation begins in Antarctica, showing some unique features of the Antarctic landscape found nowhere else on Earth. Ice shelves, ice streams, glaciers, and the formation of massive icebergs can be seen clearly in the flyover of the Landsat Image Mosaic of Antarctica. A time series shows the movement of iceberg B15A, an iceberg 295 kilometers in length which broke off of the Ross Ice Shelf in 2000. Moving farther along the coastline, a time series of the Larsen ice shelf shows the collapse of over 3,200 square kilometers of ice since January 2002. As we depart from the Antarctic, we see the seasonal change of sea ice and how it nearly doubles the apparent area of the continent during the winter. From Antarctica, the animation travels over South America showing glacier locations on this mostly tropical continent. We then move further north to observe daily changes in snow cover over the North American continent. The clouds show winter storms moving across the United States and Canada, leaving trails of snow cover behind. In a close-up view of the western US, we compare the difference in land cover between two years: 2003, when the region received a normal amount of snow, and 2002, when little snow accumulated. The difference in the surrounding vegetation due to the lack of spring melt water from the mountain snow pack is evident. As the animation moves from the western US to the Arctic region, the areas affected by permafrost are visible. As time marches forward from March to September, the daily snow and sea ice recede and reveal the vast areas of permafrost surrounding the Arctic Ocean. The animation shows a one-year cycle of Arctic sea ice followed by the mean September minimum sea ice for each year from 1979 through 2008. The superimposed graph of the area of Arctic sea ice at this minimum clearly shows the dramatic decrease in Arctic sea ice over the last few years. While moving from the Arctic to Greenland, the animation shows the constant motion of the Arctic polar ice using daily measures of sea ice activity. Sea ice flows from the Arctic into Baffin Bay as the seasonal ice expands southward. As we draw close to the Greenland coast, the animation shows the recent changes in the Jakobshavn glacier. Although Jakobshavn receded only slightly from 1964 to 2001, the animation shows significant recession from 2001 through 2009. As the animation pulls out from Jakobshavn, the effect of the increased flow rate of Greenland coastal glaciers is shown by the thinning ice shelf regions near the Greenland coast.
This animation shows a wealth of data collected from satellite observations of the cryosphere and the impact that recent cryospheric changes are making on our planet. [Summary provided by the NASA Scientific Visualization Studio.]
http://gcmd.nasa.gov/KeywordSearch/Metadata.do?Portal=GCMD&KeywordPath=Parameters%7CAGRICULTURE%7CFOREST+SCIENCE%7CFOREST+YIELDS&OrigMetadataNode=GCMD&EntryId=USDA0598&MetadataView=Full&MetadataType=0&lbnode=mdlb3
The cell cycle, or cell-division cycle, is the series of events that take place in a cell leading to its division and duplication (replication). In cells without a nucleus (prokaryotic), the cell cycle occurs via a process termed binary fission. In cells with a nucleus (eukaryotes), the cell cycle can be divided into two periods: interphase, during which the cell grows, accumulating nutrients needed for mitosis and duplicating its DNA, and the mitotic (M) phase, during which the cell splits itself into two distinct cells, often called "daughter cells", ending with cytokinesis, in which the new cells are completely separated. The cell-division cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. The cell cycle consists of four distinct phases: G1 phase, S phase (synthesis), G2 phase (collectively known as interphase) and M phase (mitosis). M phase is itself composed of two tightly coupled processes: mitosis, in which the cell's chromosomes are divided between the two sister cells, and cytokinesis, in which the cell's cytoplasm divides in half forming distinct cells. Activation of each phase is dependent on the proper progression and completion of the previous one. Cells that have temporarily or reversibly stopped dividing are said to have entered a state of quiescence called G0 phase.
- Gap 0 (G0): A resting phase where the cell has left the cycle and has stopped dividing.
- Interphase, Gap 1 (G1): Cells increase in size in Gap 1. The G1 checkpoint control mechanism ensures that everything is ready for DNA synthesis.
- Interphase, Synthesis (S): DNA replication occurs during this phase.
- Interphase, Gap 2 (G2): During the gap between DNA synthesis and mitosis, the cell will continue to grow. The G2 checkpoint control mechanism ensures that everything is ready to enter the M (mitosis) phase and divide.
- Cell division, Mitosis (M): Cell growth stops at this stage and cellular energy is focused on the orderly division into two daughter cells. A checkpoint in the middle of mitosis (the metaphase checkpoint) ensures that the cell is ready to complete cell division.
After cell division, each of the daughter cells begins the interphase of a new cycle. Although the various stages of interphase are not usually morphologically distinguishable, each phase of the cell cycle has a distinct set of specialized biochemical processes that prepare the cell for initiation of cell division. G0 phase The term "post-mitotic" is sometimes used to refer to both quiescent and senescent cells. Nonproliferative cells in multicellular eukaryotes generally enter the quiescent G0 state from G1 and may remain quiescent for long periods of time, possibly indefinitely (as is often the case for neurons). This is very common for cells that are fully differentiated. Cellular senescence occurs in response to DNA damage or degradation that would make a cell's progeny nonviable; it often serves as a biochemical safeguard, since continued division of such a damaged cell could, for example, give rise to cancerous descendants. Some cells enter the G0 phase semi-permanently, e.g., some liver and kidney cells. Before a cell can enter cell division, it needs to take in nutrients. All of the preparations are done during the interphase. Interphase proceeds in three stages, G1, S, and G2. Cell division operates in a cycle; therefore, interphase is preceded by the previous cycle of mitosis and cytokinesis. Interphase is also known as the preparatory phase.
In this stage, division of the nucleus and cytoplasm does not occur. The cell prepares for division. G1 phase The first phase within interphase, from the end of the previous M phase until the beginning of DNA synthesis, is called G1 (G indicating gap). It is also called the growth phase. During this phase the biosynthetic activities of the cell, which had been considerably slowed down during M phase, resume at a high rate. This phase is marked by the use of 20 amino acids to form millions of proteins and later on enzymes that are required in S phase, mainly those needed for DNA replication. Duration of G1 is highly variable, even among different cells of the same species. It is under the control of the p53 gene. S phase The ensuing S phase starts when DNA replication commences; when it is complete, all of the chromosomes have been replicated, i.e., each chromosome has two (sister) chromatids. Thus, during this phase, the amount of DNA in the cell has effectively doubled, though the ploidy of the cell remains the same. During this phase, synthesis is completed as quickly as possible due to the exposed base pairs being sensitive to external factors such as any drugs taken or any mutagens (such as nicotine). G2 phase During the gap between DNA synthesis and mitosis, the cell will continue to grow. The G2 checkpoint control mechanism ensures that everything is ready to enter the M (mitosis) phase and divide. Mitosis (M phase, mitotic phase) The relatively brief M phase consists of nuclear division (karyokinesis). The M phase has been broken down into several distinct phases, sequentially known as:
- prophase
- prometaphase
- metaphase
- anaphase
- telophase
- cytokinesis (strictly speaking, cytokinesis is not part of mitosis but is an event that directly follows mitosis, in which the cytoplasm is divided into two daughter cells)
Mitosis is the process by which a eukaryotic cell separates the chromosomes in its cell nucleus into two identical sets in two nuclei. It is generally followed immediately by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Mitosis and cytokinesis together define the mitotic (M) phase of the cell cycle - the division of the mother cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animals undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus. Prokaryotic cells, which lack a nucleus, divide by a process called binary fission. The process of mitosis is complex and highly regulated. The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. These stages are prophase, prometaphase, metaphase, anaphase and telophase. During the process of mitosis the pairs of chromosomes condense and attach to fibers that pull the sister chromatids to opposite sides of the cell. The cell then divides in cytokinesis, to produce two identical daughter cells. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase".
However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei in a process called endoreplication. This occurs most notably among the fungi and slime moulds, but is found in various groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can either kill a cell through apoptosis or cause mutations that may lead to cancer. Regulation of eukaryotic cell cycle Regulation of the cell cycle involves processes crucial to the survival of a cell, including the detection and repair of genetic damage as well as the prevention of uncontrolled cell division. The molecular events that control the cell cycle are ordered and directional; that is, each process occurs in a sequential fashion and it is impossible to "reverse" the cycle. Role of cyclins and CDKs Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g., cdc25 or cdc20. Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin. When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle. Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells whereas cyclins are synthesised at specific stages of the cell cycle, in response to various molecular signals. General mechanism of cyclin-CDK interaction Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die.
However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells. Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed. Specific action of cyclin-CDK complexes Cyclin D is the first cyclin produced in the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D binds to existing CDK4, forming the active cyclin D-CDK4 complex. Cyclin D-CDK4 complex in turn phosphorylates the retinoblastoma susceptibility protein (Rb). The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in transcription of various genes like cyclin E, cyclin A, DNA polymerase, thymidine kinase, etc. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 to S phase (the G1/S transition). The cyclin B-cdc2 complex initiates the G2/M transition; its activation causes breakdown of the nuclear envelope and initiation of prophase, and subsequently its deactivation causes the cell to exit mitosis. Two families of genes, the cip/kip family (CDK interacting protein/Kinase inhibitory protein) and the INK4a/ARF family (Inhibitor of Kinase 4/Alternative Reading Frame), prevent the progression of the cell cycle. Because these genes are instrumental in prevention of tumor formation, they are known as tumor suppressors. The cip/kip family includes the genes p21, p27 and p57. They halt the cell cycle in G1 phase by binding to, and inactivating, cyclin-CDK complexes. p21 is activated by p53 (which, in turn, is triggered by DNA damage, e.g. due to radiation). p27 is activated by Transforming Growth Factor β (TGF-β), a growth inhibitor. Transcriptional regulatory network Evidence suggests that a semi-autonomous transcriptional network acts in concert with the CDK-cyclin machinery to regulate the cell cycle. Several gene expression studies in Saccharomyces cerevisiae have identified approximately 800 to 1200 genes that change expression over the course of the cell cycle; they are transcribed at high levels at specific points in the cell cycle, and remain at lower levels throughout the rest of the cell cycle. While the set of identified genes differs between studies due to the computational methods and criteria used to identify them, each study indicates that a large portion of yeast genes are temporally regulated. Many periodically expressed genes are driven by transcription factors that are also periodically expressed. One screen of single-gene knockouts identified 48 transcription factors (about 20% of all non-essential transcription factors) that show cell cycle progression defects. Genome-wide studies using high throughput technologies have identified the transcription factors that bind to the promoters of yeast genes, and correlating these findings with temporal expression patterns has allowed the identification of transcription factors that drive phase-specific gene expression.
The expression profiles of these transcription factors are driven by the transcription factors that peak in the prior phase, and computational models have shown that a CDK-autonomous network of these transcription factors is sufficient to produce steady-state oscillations in gene expression. Experimental evidence also suggests that gene expression can oscillate with the period seen in dividing wild-type cells independently of the CDK machinery. Orlando et al. used microarrays to measure the expression of a set of 1,271 genes that they identified as periodic in both wild type cells and cells lacking all S-phase and mitotic cyclins (clb1,2,3,4,5,6). Of the 1,271 genes assayed, 882 continued to be expressed in the cyclin-deficient cells at the same time as in the wild type cells, despite the fact that the cyclin-deficient cells arrest at the border between G1 and S phase. However, 833 of the genes assayed changed behavior between the wild type and mutant cells, indicating that these genes are likely directly or indirectly regulated by the CDK-cyclin machinery. Some genes that continued to be expressed on time in the mutant cells were also expressed at different levels in the mutant and wild type cells. These findings suggest that while the transcriptional network may oscillate independently of the CDK-cyclin oscillator, they are coupled in a manner that requires both to ensure the proper timing of cell cycle events. Other work indicates that phosphorylation, a post-translational modification, of cell cycle transcription factors by Cdk1 may alter the localization or activity of the transcription factors in order to tightly control timing of target genes (Ubersax et al. 2003; Sidorova et al. 1995; White et al. 2009). While oscillatory transcription plays a key role in the progression of the yeast cell cycle, the CDK-cyclin machinery operates independently in the early embryonic cell cycle. Before the midblastula transition, zygotic transcription does not occur and all needed proteins, such as the B-type cyclins, are translated from maternally loaded mRNA. Cell cycle checkpoints are used by the cell to monitor and regulate the progress of the cell cycle. Checkpoints prevent cell cycle progression at specific points, allowing verification of necessary phase processes and repair of DNA damage. The cell cannot proceed to the next phase until checkpoint requirements have been met. Several checkpoints are designed to ensure that damaged or incomplete DNA is not passed on to daughter cells. Two main checkpoints exist: the G1/S checkpoint and the G2/M checkpoint. The G1/S transition is a rate-limiting step in the cell cycle and is also known as the restriction point. An alternative model of the cell cycle response to DNA damage has also been proposed, known as the postreplication checkpoint. p53 plays an important role in triggering the control mechanisms at both G1/S and G2/M checkpoints. Role in tumor formation A dysregulation of the cell cycle components may lead to tumor formation. As mentioned above, some genes like the cell cycle inhibitors, RB, p53 etc., when they mutate, may cause the cell to multiply uncontrollably, forming a tumor. Although the duration of cell cycle in tumor cells is equal to or longer than that of normal cell cycle, the proportion of cells that are in active cell division (versus quiescent cells in G0 phase) in tumors is much higher than that in normal tissue. Thus there is a net increase in cell number as the number of cells that die by apoptosis or senescence remains the same.
Cells that are actively progressing through the cell cycle are targeted in cancer therapy, as their DNA is relatively exposed during cell division and hence susceptible to damage by drugs or radiation. This fact is made use of in cancer treatment: in a process known as debulking, a significant mass of the tumor is removed, which pushes a significant number of the remaining tumor cells from G0 to G1 phase (due to increased availability of nutrients, oxygen, growth factors, etc.). Radiation or chemotherapy following the debulking procedure kills these cells which have newly entered the cell cycle. The fastest cycling mammalian cells in culture, crypt cells in the intestinal epithelium, have a cycle time as short as 9 to 10 hours. Stem cells in resting mouse skin may have a cycle time of more than 200 hours. Most of this difference is due to the varying length of G1, the most variable phase of the cycle. M and S do not vary much. In general, cells are most radiosensitive in late M and G2 phases and most resistant in late S. For cells with a longer cell cycle time and a significantly long G1 phase, there is a second peak of resistance late in G1. The pattern of resistance and sensitivity correlates with the level of sulfhydryl compounds in the cell. Sulfhydryls are natural radioprotectors and tend to be at their highest levels in S and at their lowest near mitosis. See also - Synchronous culture – synchronization of cell cultures References - Cooper GM (2000). "Chapter 14: The Eukaryotic Cell Cycle". The cell: a molecular approach (2nd ed.). Washington, D.C: ASM Press. ISBN 0-87893-106-6. - King, Roger (2006). Cancer Biology. Essex, England: Pearson Education. p. 146. - Rubenstein, Irwin, and Susan M. Wick. "Cell." World Book Online Reference Center. 2008. 12 January 2008 <http://www.worldbookonline.com/wb/Article?id=ar102240> - De Souza CP, Osmani SA (2007). "Mitosis, not just open or closed". Eukaryotic Cell 6 (9): 1521–7. doi:10.1128/EC.00178-07. PMC 2043359. PMID 17660363. - Maton, Anthea; Hopkins, Jean Johnson, Susan LaHart, David, Quon Warner, David, Wright, Jill D (1997). Cells: Building Blocks of Life. New Jersey: Prentice Hall. pp. 70–4. ISBN 0-13-423476-6. - Lilly M, Duronio R (2005). "New insights into cell cycle control from the Drosophila endocycle". Oncogene 24 (17): 2765–75. doi:10.1038/sj.onc.1208610. PMID 15838513. - Nigg EA (June 1995). "Cyclin-dependent protein kinases: key regulators of the eukaryotic cell cycle". BioEssays 17 (6): 471–80. doi:10.1002/bies.950170603. PMID 7575488. - "Press release". Nobelprize.org. - Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, Eisen MB, Brown PO, Botstein D, Futcher B (December 1998). "Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization". Mol. Biol. Cell 9 (12): 3273–97. PMC 25624. PMID 9843569. - Robbins and Cotran; Kumar, Abbas, Fausto (2004). Pathological Basis of Disease. Elsevier. ISBN 81-8147-528-3. - Mahmoudi, Morteza; et al. (2011). "Effect of Nanoparticles on the Cell Life Cycle". Chemical Reviews 111 (5): 3407–3432. doi:10.1021/cr1003166. - Norbury C (1995). "Cdc2 protein kinase (vertebrates)". In Hardie, D. Grahame; Hanks, Steven. Protein kinase factsBook. Boston: Academic Press. p. 184. ISBN 0-12-324719-5. - Presentation on CDC25 PHOSPHATASES: A Potential Target for Novel Anticancer Agents - Pramila T, Wu W, Miles S, Breeden L (August 2006).
"The Forkhead transcription factor Hcm1 regulates chromosome segregation genes and fills the S-phase gap in the transcriptional circuitry of the cell cycle". Genes Dev 20 (16): 2266–227. doi:10.1101/gad.1450606. PMC 1553209. PMID 16912276. - Orlando DA, Lin CY, Bernard A, Wang JY, Socolar JES, Iversen ES, Hartemink AJ, Haase SB (June 2008). "Global control of cell-cycle transcription by coupled CDK and network oscillators". Nature 453 (453): 944–947. Bibcode:2008Natur.453..944O. doi:10.1038/nature06955. - de Lichtenberg U, Jensen LJ, Fausbøll A, Jensen TS, Bork P, Brunak S (April 2005). "Comparison of computational methods for the identification of cell cycle-regulated genes". Bioinformatics 21 (7): 1164–1171. doi:10.1093/bioinformatics/bti093. PMID 15513999. - White MA, Riles L, Cohen BA (February 2009). "A systematic screen for transcriptional regulators of the yeast cell cycle". Genetics 181 (2): 435–46. doi:10.1534/genetics.108.098145. PMC 2644938. PMID 19033152. - Lee T, et. al (October 2002). "Transcriptional Regulatory Networks in Saccharomyces cerevisiae". Science 298 (5594): 799–804. Bibcode:2002Sci...298..799L. doi:10.1126/science.1075090. PMID 12399584. - Simon I, et. al (September 2001). "Serial Regulation of Transcriptional Regulators in the Yeast Cell Cycle". Cell 106 (6): 697–708. doi:10.1016/S0092-8674(01)00494-9. PMID 11572776. - Sidorova JM, Mikesell GE, Breeden LL (December 1995). "Cell cycle-regulated phosphorylation of Swi6 controls its nuclear localization". Mol Biol Cell. 6 (12): 1641–1658. PMC 301322. PMID 8590795. - Ubersax J, et. al (October 2003). "Targets of the cyclin-dependent kinase Cdk1". Nature 425 (6960): 859–864. Bibcode:2003Natur.425..859U. doi:10.1038/nature02062. PMID 14574415. - Morgan DO (2007). "2-3". The Cell Cycle: Principles of Control. London: New Science Press. p. 18. ISBN 0=9539181-2-2. - Stephen J. Elledge (6 December 1996). "Cell Cycle Checkpoints: Preventing an Identity Crisis". Science 274 (5293): 1664–1672. doi:10.1126/science.274.5293.1664. PMID 8939848. Further reading - Morgan DO (2007). The Cell Cycle: Principles of Control. London: Published by New Science Press in association with Oxford University Press. ISBN 0-87893-508-8. - Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2008). "Chapter 17". Molecular Biology of the Cell (5th ed.). New York: Garland Science. ISBN 978-0-8153-4111-6. - Krieger M, Scott MP; Matsudaira PT, Lodish HF, Darnell JE, Zipursky L, Kaiser C; Berk A (2004). Molecular cell biology. New York: W.H. Freeman and CO. ISBN 0-7167-4366-3. - Watson JD, Baker TA, Bell SP, Gann A, Levine M, Losick R (2004). "Chapter 7". Molecular biology of the gene (5th ed.). San Francisco: Pearson/Benjamin Cummings. ISBN 0-8053-4642-2. |Wikimedia Commons has media related to: Cell cycle| - Cell Cycle iBioSeminar by David Morgan (UCSF) - Transcriptional program of the cell cycle: high-resolution timing - Cell cycle and metabolic cycle regulated transcription in yeast - Cell Cycle Animation 1Lec.com - Cell Cycle and Cytokinesis - The Virtual Library of Biochemistry and Cell Biology - Cell Cycle - Cell Cycle Portal - Fucci:Using GFP to visualize the cell-cycle - Science Creative Quarterly's overview of the cell cycle - Cells alive - CCO The Cell-Cycle Ontology - KEGG - Human Cell Cycle - Cell cycle modeling - Drosophila Cell Cycle Genes - The Interactive Fly
http://en.wikipedia.org/wiki/Cell_cycle
13
19
In this video segment from Cyberchase, Matt, Digit and Slider must create an exact copy of a large ring in order to activate a powerful force and foil Hacker. To make sure the copy they make is perfect, they have to know what the “radius” of a circle is and figure out a way to carefully measure it. Here are some Frame, Focus and Follow-up suggestions for using this video in a math lesson. What is Frame, Focus and Follow-up? Frame: All shapes have certain features or properties that help us define them. For example, how do we know when a four-sided shape should be called a square and not a trapezoid? What is it about a circle that makes it a circle? Sometimes measuring helps with the process of defining a shape. Can you think of an example? Focus: As you watch this segment, ask yourself, “What feature or property of a circle helps Matt, Digit and Slider figure out how to create the exact copy of the ring they need? Was measurement involved?” Follow-up: How did Matt, Digit and Slider create the exact copy of the ring? What is the “radius”? How would you find the radius of a circular object like a pie plate or Frisbee®? What other properties of a circle would be helpful to know if you had to create an exact duplicate without tracing? MATT: The fake ring has to look just like the original - the same shape, same size, same everything, or it won't work. DIGIT: Yeah! How you gonna pull that off? SLIDER: With this. It's the plan the original ring was made from. MATT: Whoa! Where'd you get that? SLIDER: My father. We gotta cut out a circle that matches this ring exactly. MATT: We need to cut out two circles! One for the inside edge of the ring - the other for the outside edge. DIGIT: But how do we know how big to make 'em? SLIDER: We need to adjust the radius. DIGIT: Uh huh... What am I missing here? MATT: Don't worry, Didge. The radius is the distance from the center point out to the edge of the circle. To change how big or small a circle is, just change the radius! I'll show you. SLIDER: Never mind. I'll... make it! And I've got just the tool to do that! MATT: Sheesh. What's with this guy? SLIDER: Inner circle radius: 19 cyber units. Outer circle radius: 23 cyber units. We just have to put the markings on it. DIGIT: May I? MATT: An exact copy! Let's make the switch.
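For teachers who want to extend the Follow-up questions with a quick calculation, here is a minimal Python sketch (our own addition, not part of the Cyberchase material) that uses the radii quoted in the transcript, 19 and 23 cyber units, to show how the radius alone determines a circle's diameter, circumference, and area.

```python
import math

def circle_facts(radius):
    """Return the diameter, circumference, and area of a circle with the given radius."""
    return 2 * radius, 2 * math.pi * radius, math.pi * radius ** 2

# Radii quoted in the transcript (in cyber units).
for label, r in [("inner circle", 19), ("outer circle", 23)]:
    d, c, a = circle_facts(r)
    print(f"{label}: radius={r}, diameter={d}, circumference={c:.1f}, area={a:.1f}")
```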
http://www.teachersdomain.org/resource/vtl07.math.measure.cir.trickhack/
13
14
TenMarks teaches you how to apply concepts of similar figures to find actual distances using maps. Full transcript of “Learn about Maps, Models, and Actual Distance”: Let’s learn about actual measurement when it comes to scaling. Let’s look at the question, which says that on a map, the distance between City A and City B is 20 inches. So, the distance A to B on the map is 20 inches. That’s one fact given to us. It is also given to us that the scale on the map is 2 inches = 4.5 miles. Scale is 2 inches to 4.5 miles, or 2 inches = 4.5 miles, same thing. What is scale? Scale is the ratio of a measurement on a drawing or a model to the measurement of the actual object. So, since we know that the ratio is 2 inches to 4.5 miles, the scale ratio equals 2 inches / 4.5 miles. That’s one fact given to us. Let’s create a little bit of extra space and look at the second fact. What we are also given is that the distance between A and B on the map equals 20 inches. Let’s say the actual distance from A to B is Y miles. The ratio between these two, again, is the distance on the map to the actual distance, so that’s also a scale. So, the scale according to the numbers that we’ve been given is 20 inches / Y miles. That’s also a scale because, remember, the scale is the ratio of the measurement on the drawing, which is 20 inches, to the measurement of the actual object, which we said is Y miles. So now we have one scale and a second scale, but these two scales are equal, equivalent, in proportion. They are one and the same. That means 2 inches / 4.5 miles is the same as 20 inches / Y miles. If this is indeed true, then their cross products are equal as well. So, if I remove the units for a minute, 2/4.5 = 20/Y. All I did was remove miles and inches from both sides. If we’ve got that, then the cross products are the same. So, 2 × Y = 4.5 × 20, and dividing both sides by 2, I get Y = 4.5 × 10 (20/2 is 10), which equals 45. If Y equals 45, and Y was the actual distance between A and B in miles, then the actual distance between A and B is 45 miles. That’s the answer we were looking for. To quickly recap what we did here: we looked at the data we were given. If we know that the distance on the map is 20 inches and the scale is given to us as 2 inches : 4.5 miles, then we can use the scale as one ratio. We know that the distance on the map is 20 inches; let’s say the distance in reality is Y miles. We get a second ratio, which is 20 inches / Y miles. Since both of these are the scale, they are the same. Once we have two ratios with one variable, we can easily cross-multiply to get Y = 45, so the distance in real life between Cities A and B is 45 miles.
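The cross-multiplication in the transcript is easy to check in code. Below is a minimal Python sketch (our own illustration, not TenMarks code; the function name is made up) that solves the same proportion, 2 inches / 4.5 miles = 20 inches / y miles.

```python
def actual_distance(map_distance_in, scale_map_in, scale_actual_mi):
    """Solve scale_map_in / scale_actual_mi = map_distance_in / y for y (in miles)."""
    return map_distance_in * scale_actual_mi / scale_map_in

# 2 inches on the map represent 4.5 miles; cities A and B are 20 inches apart on the map.
print(actual_distance(20, 2, 4.5))  # -> 45.0 miles
```

Any other map measurement can be converted the same way by changing the first argument.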
http://www.healthline.com/hlvideo-5min/learn-about-maps-models-and-actual-distance-285026161
13
20
This article is from TOS Vol. 2, No. 1. The full contents of the issue are listed here. Induction and Experimental Method The scientific revolution of the 17th century was made possible by the achievements of ancient Greece. The Greeks were the first to seek natural (as opposed to supernatural) explanations, offer comprehensive theories of the physical world, and develop both deductive logic and advanced mathematics. However, their progress in physical science was impeded by the widely held view that higher knowledge is passively received rather than actively acquired. For many Greek thinkers, perfection was found in the realm of “being,” an eternal and immutable realm of universal truths that can be grasped by the contemplative mind of the philosopher. In contrast, the physical world of activity was often regarded as a realm of “becoming,” a ceaselessly changing realm that cannot be fully understood by anyone. The modern scientist views himself as an active investigator, but such an attitude was rare among the Greeks. This basic difference in mindset—contemplation versus investigation—is one of the great divides between the ancient and modern minds. Modern science began with the full development of its own distinctive method of investigation: experiment. Experimentation is “the method of establishing causal relationships by means of controlling variables.” The experimenter does not merely observe nature; he manipulates it by holding some factor(s) constant while varying others and measuring the results. He knows that the tree of knowledge will not simply drop its fruit into his open mind; the fruit must be cultivated and picked, often with the help of instruments designed for the purpose. Precisely what the Greeks were missing can be seen by examining their closest approach to modern experimental science, which was Claudius Ptolemy’s investigation of refraction. Ptolemy conducted a systematic study in which he measured the angular deflection of light at air/water, air/glass, and water/glass interfaces. This experiment, when eventually repeated in the 17th century, led Willebrord Snell to the sine law of refraction. But Ptolemy himself did not discover the law, even though he did the right experiment and possessed both the requisite mathematical knowledge and the means to collect sufficiently accurate data. Ptolemy’s failure was caused primarily by his view of the relationship between experiment and theory. He did not regard experiment as the means of arriving at the correct theory; rather, the ideal theory is given in advance by intuition, and then experiment shows the deviations of the observed physical world from the ideal. This is precisely the Platonic approach he had taken in astronomy. The circle is the geometric figure possessing perfect symmetry, so Ptolemy and earlier Greek astronomers began with the intuition that celestial bodies orbit in circles at uniform speed. Observations then determined the deviations from the ideal, which Ptolemy modeled using mathematical contrivances unrelated to physical principles (deferents, epicycles, and equants). Similarly, in optics, he began with an a priori argument that the ratio of incident and refracted angles should be constant for a particular type of interface. 
When measurements indicated otherwise, he used an arithmetic progression to model the deviations from the ideal constant ratio.1 Plato had denigrated sense perception and the physical world, exhorting his followers to direct their attention inward to discover thereby the knowledge of the perfect ideas that have their source in a non-physical dimension. Unfortunately, Plato explained, these perfect ideas will correspond only approximately to the ceaselessly changing and imperfect physical world we observe. Ptolemy’s science was superficially anti-Platonic in that he emphasized the role of careful observation. However, at a deeper level, his science was a logical application of Platonism; in astronomy and in optics, he started with the “perfect” model and then merely described without explanation the inherently unintelligible deviations from it. Thus Ptolemy regarded experiment not as a method of discovery but instead as the handmaiden of intuition; he used it to fill in details about a physical world that refuses to behave in perfect accordance with our predetermined ideas. This approach is a recipe for stagnation: The theory is imposed on rather than derived from sensory data; the math is detached from physical principles; and, without an understanding of causes, the scientist is left with no further questions to ask. The birth of modern science required an opposite view: Experiment had to be regarded as the essential method of grasping causal connections. The unique power of this method is revealed by examining how it was used by the geniuses who created the scientific era. . . . 1 “Ptolemy’s Search for a Law of Refraction,” Archive for History of Exact Sciences, vol. 26, 1982, pp. 221–40.
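To make the contrast concrete, here is a small Python sketch (our own illustration, not drawn from the article or from Ptolemy's data) comparing the two models of refraction at an air/water interface: a constant ratio of angles, calibrated at 30° of incidence, versus Snell's sine law with a refractive index of roughly 1.33 for water.

```python
import math

N_WATER = 1.33          # approximate refractive index of water (air taken as 1.0)
CALIBRATION_DEG = 30.0  # incidence angle used to fix the constant ratio

def snell_refraction(incidence_deg):
    """Refraction angle (degrees) from the sine law: sin(i) = n * sin(r)."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / N_WATER))

# A constant-ratio model, calibrated so that both models agree at 30 degrees.
RATIO = snell_refraction(CALIBRATION_DEG) / CALIBRATION_DEG

for i in (10, 30, 50, 70, 80):
    print(f"incidence {i:2d} deg: constant ratio {RATIO * i:5.1f} deg, sine law {snell_refraction(i):5.1f} deg")
```

The two models agree near the calibration angle and drift apart at large angles, which is exactly the kind of systematic deviation Ptolemy was left to describe after the fact.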
http://www.theobjectivestandard.com/issues/2007-spring/induction-experimental-method.asp
13
14
In the previous lesson, you learned how to identify a function by analyzing the domain and range and using the vertical line test. Now we are going to take a look at function notation and how it is used in Algebra. The typical notation for a function is f(x). This is read as "f of x". This does NOT mean f times x. This is a special notation used only for functions! However, f(x) is not the only variable used in function notation! You may see g(x), or h(x), or even b(a). You can use any letters, but they must be in the same format: a variable followed by another variable in parentheses. OK... what does this really mean? Remember when we graphed linear equations? Every equation was written as y = ... Well, now instead of y = , you are going to see f(x) = ... f(x) is just another way of representing the "y" variable in an equation. Let's take a look at an example. Notice that y is replaced with f(x), g(x), even h(a). This is function notation. They all mean exactly the same thing! You graph all of these exactly as you would y = 2x + 3. We are just using a different notation!
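Function notation maps directly onto functions in a programming language. Here is a minimal Python sketch (our own example, reusing the y = 2x + 3 rule mentioned above) showing that f(x), g(x) and h(a) are just different names for the same kind of object: a rule that turns an input into an output.

```python
def f(x):
    return 2 * x + 3   # the same rule as y = 2x + 3

def g(x):
    return 2 * x + 3   # a different name, identical rule

def h(a):
    return 2 * a + 3   # the letter used for the input doesn't matter either

print(f(5), g(5), h(5))  # -> 13 13 13, i.e. "f of 5 is 13"
```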
http://www.algebra-class.com/function-notation.html
13
10
By Tony Stockill In previous articles, 'History of the Computer - How Computers Add' and '- Flip-flops - a basic counter' we looked at adders, and counters. Now we will consider how these building blocks can be used to perform multiplication. In the Decimal system, we can multiply by 10 by adding a 0 to the end of a number. For example 4 with a zero added becomes 40, similarly 346 becomes 3460.We can expand this by adding 2 or 3 zeros to multiply by 100 or 1000. THE SHIFT REGISTER In the Binary system used in computers, we can multiply by 2 by adding a zero at the end of the number. Thus 110 (2+4=6 decimal) becomes 1100 (4+8=12 decimal). Similarly we can add more zeros and multiply by 4,8,16 etc. (decimal). This is one form of multiplication, the process is called shifting as each bit, 1 or 0 is shifted to the next bit position, and a zero is added in at the first bit position. Several different techniques have been used to multiply using logic elements, as before these are usually described in a logic diagram as a 'black box' labeled multiplier. In an even more sophisticated logic diagram, this would be combined with other 'black boxes' such as adders, dividers, square roots, etc. to make one big 'black box' the ALU (arithmetic logical unit). The actual 'works' inside this unit are irrelevant to the overall design of the computer. All the designer needs to know is that if he puts two numbers into the ALU, and tells it to multiply them, he will get an output of the result. Initially these boxes would have been made up physically of vacuum tubes, in a box the size of your bedroom, these have been gradually improved, replaced, miniaturised, until nowadays that will all fit on a chip. However the basic principles are the same. If we analyse the concept of multiplication, we see that it is one of repetition (and we know computers excel at this). Take for example 2X4. This means take 4 lots of 2, and add them together, or 2+2+2+2=8. So to make a multiplier for a computer we can use an adder, which we have, and some method of counting, which we also have, as discussed in the earlier articles we mentioned. For the example we just looked at, 2X4, our multiplier would have one input from the 2 (10 binary) going to a 4-bit adder. The output, or result, from the adder would be looped around to form the second input to the adder. The second of the numbers to be multiplied, 4 (100 binary) sets a flip-flop counter to count down from 4 to 1, with one count pulse every time we add. Thus the counter is 'more than 1, which is the condition for the adder output to be routed to its input. The initial add would be 10 + 10 binary (2+2 decimal), giving 100 binary. This result is returned to the input, gated by the counter 'more than 1' to be added to 10 again, giving 110 binary. We perform another add of 110 + 10 getting a result of 1000. This time the counter has counted down to one, and blocks the adder input. At the same time it allows the adder result output to become the multiplier result. You can see how this simple example could be used in a scaled-up version capable of multiplying multi-bit numbers. All we need is a lot more adders, and a few logic gates to control them, maybe throw in a bit of timing, so that it doesn't all get mixed up! As we've said before, when you're talking in nanoseconds, you can get through a lot of calculations very quickly. 
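The repeated-addition scheme described above is easy to mimic in software. The sketch below is a Python illustration of the idea (not a description of any particular machine): it multiplies two numbers the way the article's adder-plus-counter does, and also shows the shift trick of appending a zero bit to double a binary number.

```python
def multiply_by_repeated_addition(a, b):
    """Multiply a by b the way the adder/counter circuit does: add a, b times."""
    result = 0
    counter = b
    while counter >= 1:      # the flip-flop counter counts down from b to 1
        result += a          # one pass through the adder per count
        counter -= 1
    return result

def shift_left(n, bits=1):
    """Append `bits` zero bits: multiplies n by 2**bits, e.g. 110 -> 1100."""
    return n << bits

print(multiply_by_repeated_addition(2, 4))   # -> 8, i.e. 2+2+2+2
print(bin(6), "->", bin(shift_left(6)))      # 0b110 -> 0b1100 (6 -> 12)
```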
Still to come we will look at how negative numbers are represented in computers, and how they handle very large numbers with something called Floating Point Arithmetic. Tony is an experienced computer engineer. He is currently webmaster and contributor to http://www.what-why-wisdom.com. A set of diagrams accompanying these articles may be seen at http://www.what-why-wisdom.com/history-of-the-computer-0.html.
http://anythingaboutcomputer.blogspot.jp/2008/01/history-of-computer-how-computers.html
13
27
Grades: 2, 3, 4 Related Subjects: English - Language Arts, Mathematics, Visual & Performing Arts Class time required: 1 X 50 minute session Author: SDMA Education Department Download an editable Lesson Plan File Type: RTF (Choose Save-As when dialogue box appears) In this one-session lesson, students will integrate their knowledge of geometric shapes and measuring to create a chart displaying primary and secondary colors. - 11x17 white construction paper or watercolor paper - Red, yellow, and blue paint - Paper plates - Water bowls - Paper towels - Tag board for shape templates - Glossary terms: primary colors, secondary colors, warm colors, cool colors - Print the above images onto overhead transparencies. - Cut 2-inch high right triangles, rectangles, and squares out of tag board for each student. - Create a blank color chart (PDF 84kb) (pencil lines, but no paint) to use as an example. - Place primary colored paint, each on its own separate plate, before starting the art lesson. - Shapes can be placed in plastic Ziploc bags, prior to the activity, for easy dispersal. - A completely painted color chart can be used as an example for young students to copy. This would make the lesson more focused on painting technique, rather than exploring what happens when colors are combined. 1. Begin a discussion with the students about color: Who can name the primary colors? Why are they called primary? What happens when you mix the primary colors together? What are these new colors called? Why are they called secondary colors? Which colors feel warm when you look at them? Which colors feel cool when you look at them? Have the students spend the next minute, working with a partner, finding all of the primary colors in the classroom. 2. Show the students the transparency images. Use the following questions to guide the discussion about the images: What primary colors do you see in this painting? What secondary colors do you see? Does this painting have more warm colors or cool colors? What feelings or emotions do you think the artist was trying to get across? What kind of feelings do you feel when you look at this painting? 3. Explain the objective of the activity: to use geometric shapes to create a color chart. (PDF 84kb) 4. Hand out the shapes, construction paper, and a pencil to each student. 5. Ask the students to place the construction paper in a vertical position. Have them choose the shape with one right angle (triangle). 6. Measure 2-fingers’ width from the top of the left side of the paper and 2-fingers’ width from the left side of the paper. Place the triangle on the paper and trace it. 7. Next, place a pinky finger along the diagonal line of the triangle and place the triangle on the paper so that it faces the first triangle and forms a square. Trace the triangle. 8. Repeat steps 6 and 7 to draw two more sets of triangles below the first set of triangles. 9. Next, have the students choose the shape with four equal sides and four right angles (square). 10. Measure 2-fingers’ width from the top of the right side of the paper and 2-fingers’ width from the right side of the paper. Place the square on the paper and trace it. Repeat this step to draw two more squares underneath the first square. 11. Write “Primary Colors” above the triangles and “Secondary Colors” above the squares. 12. Draw equal signs between the triangles and the squares. 13. Next, have the students choose the shape that has four unequal sides and four right angles (rectangle). 14. 
Measure two-fingers’ width from the bottom left side of the paper and one finger’s width from the left side of the paper. Place the rectangle vertically and trace. 15. Place one-finger’s width along the right side of the rectangle and trace another vertical rectangle. Repeat once more until three vertical rectangles have been traced. 16. Repeat steps 14 and 15 starting from the right side of the paper. 17. Write “Warm Colors” above the left set of rectangles and “Cool Colors” above the right set of rectangles. 18. Demonstrate how to paint the color chart. • The triangles are for the primary colors. • The squares are for the secondary colors. • The rectangles are for warm and cool colors. • Start with the red paint and fill in the first triangle. • When switching paint colors, clean the brush in the water and blot it on a paper towel, making sure that it is clean before moving on. • Have the students name another primary color that red can mix with to create a secondary color (either yellow or blue). Paint this color in the second triangle. Ask the students to predict the secondary color that will be created by mixing these two primary colors. • Show how to mix colors, using the paper plate as a palette. Pick up some red paint on the brush and place it on a blank spot of the palette. Then, pick up another primary color paint and add it to the red to create a secondary color. 19. Hand out the brushes, paint, paper plates, paper towels, and water. Have the students complete their color charts. English-Language Arts: Students can choose one of the pieces of artwork shown in this lesson and create a poem describing the emotions felt while looking at the painting. Some ideas for poems: diamante, cinquain, haiku, or quatrain. English-Language Arts: Students can choose one of the paintings from this lesson and pretend to be inside the painting. Have students write how they feel and what they see, using all of the five senses. Mathematics: Students can use the artwork images from this lesson or look at artwork in books and create a bar graph displaying the number of primary colors/secondary colors found. Visual Arts: Students can create a landscape painting, expressing mood through choice of color. CA Content Standards Second Grade Visual Arts 1.2 Perceive and discuss differences in mood created by warm and cool colors. 1.3 Identify the elements of art in objects in nature, the environment, and works of art, emphasizing line, color, shape/form, texture, and space. 2.2 Demonstrate beginning skill in the use of art media, such as oil pastels, watercolors, and tempera. 2.4 Create a painting or drawing, using warm or cool colors expressively. 3.1 Explain how artists use their work to share experiences or communicate ideas. Third Grade Visual Arts 1.5 Identify and describe elements of art in works of art, emphasizing line, color, shape/form, texture, space, and value. 4.1 Compare and contrast selected works of art and describe them, using appropriate vocabulary of art. 5.2 Write a poem or story inspired by their own works of art. Fourth Grade Visual Arts 1.3 Identify pairs of complementary colors (e.g., yellow/violet; red/green; orange/blue) and discuss how artists use them to communicate an idea or mood. 1.5 Describe and analyze the elements of art (e.g., color, shape/form, line, texture, space, value), emphasizing form, as they are used in works of art and found in the environment. 
Second Grade Mathematics 2.1 Describe and classify plane and solid geometric shapes (e.g., circle, triangle, square, rectangle, sphere, pyramid, cube, rectangular prism) according to the number and shape of faces, edges, and vertices. 2.2 Put shapes together and take them apart to form other shapes (e.g., two congruent right triangles can be arranged to form a rectangle). Third Grade Mathematics 2.1 Identify, describe, and classify polygons (including pentagons, hexagons, and octagons). 2.2 Identify attributes of triangles (e.g., two equal sides for the isosceles triangle, three equal sides for the equilateral triangle, right angle for the right triangle). 2.3 Identify attributes of quadrilaterals (e.g., parallel sides for the parallelogram, right angles for the rectangle, equal sides and right angles for the square). Fourth Grade Mathematics 3.5 Know the definitions of a right angle, an acute angle, and an obtuse angle. Understand that 90°, 180°, 270°, and 360° are associated, respectively, with 1/4, 1/2, 3/4, and full turns. 3.6 Visualize, describe, and make models of geometric solids (e.g., prisms, pyramids) in terms of the number and shape of faces, edges, and vertices; interpret two-dimensional representations of three-dimensional objects; and draw patterns (of faces) for a solid that, when cut and folded, will make a model of the solid. 3.7 Know the definitions of different triangles (e.g., equilateral, isosceles, scalene) and identify their attributes. 3.8 Know the definition of different quadrilaterals (e.g., rhombus, square, rectangle, parallelogram, trapezoid). Second Grade English-Language Arts 2.1 Write brief narratives based on their experiences. Third Grade English-Language Arts 2.1 Write narratives. 2.2 Write descriptions that use concrete sensory details to present and support unified impressions of people, places, things, or experiences. Fourth Grade English-Language Arts 2.1 Write narratives. Paul, Tony. How to Mix and Use Color: the artist’s guide to achieving the perfect color. Cincinnati, OH: North Light Books, 2003. Zelanski, Paul. Color. Upper Saddle River, NJ: Prentice Hall, 2003. Art Basics, San Diego State University The seven formal elements of art are described on this Web site. A Guide to Building Visual Arts Lessons, the J. Paul Getty Museum This comprehensive Web site includes definitions and examples of art elements, as well as a grade-by-grade guide to creating lessons for the classroom. It also includes several CA-standards aligned lesson plans for each grade level that focus on the elements of art. Foundations in Art, University of Delaware An introduction to the elements of art that includes images of artwork and concise explanations. Learning to Look at Art Learn about the elements of art by looking at famous pieces of artwork. This Web site provides background information on the piece of artwork and descriptions of how each piece is an example of an art element (line, color, texture, shape, form, space, and value.) It also includes interactive and printable activities for students. Baxter, Nicola. Amazing Colors. Chicago, IL: Children’s Press, 1996. Court, Rob. Color. Chanhassan, MN: The Child’s World, 2003. Ehlert, Lois. Color Zoo. New York: HarperFestival, 1997. Ehlert, Lois. Planting a Rainbow. San Diego, CA: Harcourt, Inc., 2003. Gogh, Vincent van. Vincent’s Colors. New York: Metropolitan Museum of Art, 2005. Flux, Paul. Color. Chicago, IL: Heinemann Library, 2001. Richardson, Joy. Using Color in Art. Milwaukee: Gareth Stevens, 2000. 
Rodrigue, George. Why is Blue Dog Blue?: a tale of colors. New York: Stewart, Tabori & Chang, 2001 Westray, Kathleen. A Color Sampler. New York: Ticknor & Fields, 1993. The Artist’s Toolkit: Visual Elements and Principles Students can “Explore the Toolkit” to learn about and interact with the elements of art and create their own artwork. Colorworm Explains Color An interactive student Web site that teaches about the visible spectrum, the color wheel, and the painter’s palette.
http://carearts.org/teachers/lesson-plans/a-g/color-chart.html
13
159
(2002-06-23) The Basics: What is a derivative? Well, let me give you the traditional approach first. This will be complemented by an abstract glimpse of the bigger picture, which is more closely related to the way people actually use derivatives, once they are familiar with them. For a given real-valued function f of a real variable, consider the slope (m) of its graph at some point. That is to say, some straight line of equation y = mx+b (for some irrelevant constant b) is tangent to the graph of f at that point. In some definite sense, mx+b is the best linear approximation to f(x) when x is close to the point under consideration... The tangent line at point x may be defined as the limit of a secant line intersecting a curve at point x and point x+h, when h tends to 0. When the curve is the graph of f, the slope of such a secant is equal to [ f(x+h) - f(x) ] / h, and the derivative (m) at point x is therefore the limit of that quantity, as h tends to 0. The above limit may or may not exist, so the derivative of f at point x may or may not be defined. We'll skip that discussion. The popular trivia question concerning the choice of the letter "m" to denote the slope of a straight line (in most US textbooks) is discussed elsewhere. Way beyond this introductory scope, we would remark that the quantity we called h is of a vectorial nature (think of a function of several variables), so the derivative at point x is in fact a tensor whose components are called partial derivatives. Also beyond the scope of this article are functions of a complex variable, in which case the above quantity h is simply a complex number, and the above division by h remains thus purely numerical (albeit complex). However, a complex number h (a point on the plane) may approach zero in a variety of ways that are unknown in the realm of real numbers (points on the line). This happens to severely restrict the class of functions for which the above limit exists. Actually, the only functions of a complex variable which have a derivative are the so-called analytic functions [essentially: the convergent sums of power series]. The above is the usual way the concept of derivative is introduced. This traditional presentation may be quite a hurdle to overcome, when given to someone who may not yet be thoroughly familiar with functions and/or limits. Having defined the derivative of f at point x, we define the derivative function g = f ' = D( f ) of the function f, as the function g whose value g(x) at point x is the derivative of f at point x. We could then prove, one by one, the algebraic rules listed in the first lines of the following table. These simple rules allow most derivatives to be easily computed from the derivatives of just a few elementary functions, like those tabulated below (the above theoretical definition is thus rarely used in practice). In the table, u and v are functions of x, whereas a, b and n are constants.
Function f and its derivative D( f ) = f ':
- Linearity: (a u + b v)' = a u' + b v'
- Product: (u × v)' = u' × v + u × v'
- Quotient: (u / v)' = [ u' × v − u × v' ] / v²
- Composition (chain rule): (u(v))' = v' × u'(v)
- Inversion: if v = u⁻¹ (the inverse function of u), then v' = 1 / u'(v)
- Powers and logarithm: (xⁿ)' = n xⁿ⁻¹ ; (ln |x|)' = 1/x = x⁻¹
- Exponentials: (eˣ)' = eˣ ; (aˣ)' = ln(a) aˣ
- Trigonometric: (sin x)' = cos x ; (cos x)' = −sin x ; (tg x)' = 1 + (tg x)² ; (ln |cos x|)' = −tg x
- Hyperbolic: (sh x)' = ch x ; (ch x)' = sh x ; (th x)' = 1 − (th x)² ; (ln (ch x))' = th x
- Inverse trigonometric: (arcsin x)' = 1 / √(1−x²) ; (arccos x)' = −1 / √(1−x²), since arccos x = π/2 − arcsin x ; (arctg x)' = 1 / (1 + x²)
- Inverse hyperbolic: (argsh x)' = 1 / √(1+x²) ; (argch x)' = 1 / √(x²−1) (for |x|>1) ; (argth x)' = 1 / (1 − x²) (for |x|<1)
- Gudermannian: (gd x)' = 1 / ch x, where gd x = 2 arctg eˣ − π/2 ; (gd⁻¹ x)' = 1 / cos x, where gd⁻¹ x = ln tg (x/2 + π/4)
One abstract approach to the derivative concept would be to bypass (at first) the relevance to slopes, and study the properties of some derivative operator D, in a linear space of abstract functions endowed with an internal product (×), where D is only known to satisfy the following two axioms (which we may call linearity and Leibniz' law, as in the above table): D(a u + b v) = a D(u) + b D(v), and D( u × v ) = D(u) × v + u × D(v). For example, the product rule imposes that D(1) is zero [in the argument of D, we do not distinguish between a function and its value at point x, so that "1" denotes the function whose value is the number 1 at any point x]. The linearity then imposes that D(a) is zero, for any constant a. Repeated applications of the product rule give the derivative of x raised to the power of any integer, so we obtain (by linearity) the correct derivative for any polynomial. (The two rules may also be used to prove the chain rule for polynomials.) A function that has a derivative at point x (defined as a limit) also has arbitrarily close polynomial approximations about x. We could use this fact to show that both definitions of the D operator coincide, whenever both are valid (if we only assume D to be continuous, in a sense which we won't make more precise here). This abstract approach is mostly for educational purposes at the elementary level. For theoretical purposes (at the research level) the abstract viewpoint which has proven to be the most fruitful is totally different: In the Theory of Distributions, a pointwise product like the above (×) is not even defined, whereas everything revolves around the so-called convolution product (*), which has the following strange property concerning the operator D: D( u * v ) = D(u) * v = u * D(v). To differentiate a convolution product (u*v), differentiate either factor! What's the "Fundamental Theorem of Calculus" ? Once known as Barrow's rule, it states that, if f is the derivative of F, then: F(b) − F(a) = ∫ f(x) dx (the integral being taken from x = a to x = b). In this, if f and F are real-valued functions of a real variable, the right-hand side represents the area between the curve y = f(x) and the x-axis (y = 0), counting positively what's above the axis and negatively [negative area!] what's below it. Any function F whose derivative is equal to f is called a primitive of f (all such primitives simply differ by an arbitrary additive constant, often called constant of integration). A primitive function is often called an indefinite integral (as opposed to a definite integral which is a mere number, not a function, usually obtained as the difference of the values of the primitive at two different points). The usual indefinite notation is: ∫ f(x) dx. At a more abstract level, we may also call "Fundamental Theorem of Calculus" the generalization of the above expressed in the language of differential forms, which is also known as Stokes' Theorem.
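As a quick numerical sanity check of Barrow's rule, the following Python sketch (our own illustration) compares a crude Riemann sum for the integral of 3x² from 0 to 1 with F(1) − F(0), where F(x) = x³ is a primitive of the integrand.

```python
def riemann_sum(f, a, b, n=100_000):
    """Left Riemann sum of f over [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: 3 * x ** 2     # f = F'
F = lambda x: x ** 3         # a primitive of f

print(riemann_sum(f, 0.0, 1.0))   # approximately 1.0, up to discretization error
print(F(1.0) - F(0.0))            # exactly 1.0, by the Fundamental Theorem of Calculus
```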
Fundamental Theorem of Calculus (Theorem of the Day #2) by Robin Whitty. Example involving complex exponentials: What is the indefinite integral of cos(2x) e^(3x) ? That function is the real part of a complex function of a real variable: (cos 2x + i sin 2x) e^(3x) = e^(i 2x) e^(3x) = e^((3+2i) x). Since the derivative of exp(ax)/a is exp(ax), we obtain, conversely: ∫ e^((3+2i)x) dx = e^((3+2i)x) / (3+2i) = e^(3x) (cos 2x + i sin 2x) (3−2i) / 13. The relation we were after is obtained as the real part of the above: ∫ cos(2x) e^(3x) dx = (3 cos 2x + 2 sin 2x) e^(3x) / 13. Integration by parts: A useful technique to reduce the computation of one integral to another. This method was first published in 1715 by Brook Taylor (1685-1731). The product rule states that the derivative (uv)' of a product of two functions is u'v + uv'. When the integral of some function f is sought, integration by parts is a minor art form which attempts to use this backwards, by writing f as a product u'v of two functions, one of which (u') has a known integral (u). In which case: ∫ f dx = ∫ u'v dx = u v − ∫ u v' dx. This reduces the computation of the integral of f to that of uv'. The tricky part, of course, is to guess what choice of u would make the latter simpler... The choice u' = 1 (i.e., u = x and v = f ) is occasionally useful. Example: ∫ ln(x) dx = x ln(x) − ∫ (x/x) dx = x ln(x) − x. Another classical example pertains to Laplace transforms ( p > 0 ) and/or Heaviside's operational calculus, where all integrals are understood to be definite integrals from 0 to +∞ (with a subexponential function f ): ∫ f '(t) exp(−pt) dt = −f(0) + p ∫ f(t) exp(−pt) dt. What is the perimeter of a parabolic curve, given the base length and height of [the] parabola? Choose the coordinate axes so that your parabola has equation y = x²/2p for some constant parameter p. The length element ds along the parabola is such that (ds)² = (dx)² + (dy)², or ds/dx = √(1+(dy/dx)²) = √(1 + x²/p²). The length s of the arc of parabola from the apex (0,0) to the point (x, y = x²/2p) is simply the following integral of this (in which we may eliminate x or p, using 2py = x²): s = (x/2) √(1 + x²/p²) + (p/2) ln( √(1 + x²/p²) + x/p ) = y √(1 + p/2y) + (p/2) ln( √(1 + 2y/p) + √(2y/p) ) = (x/2) √(1 + (2y/x)²) + (x²/4y) ln( √(1 + (2y/x)²) + 2y/x ). For a symmetrical arc extending on both sides of the parabola's axis, the length is 2s (twice the above). If needed, the whole "perimeter" is 2s+2x. What's the top height of a (parabolic) bridge? If a curved bridge is a foot longer than its mile-long horizontal span... Let's express all distances in feet (a mile is 5280 ft). Using the notations of the previous article, 2x = 5280, 2s = 5281, u = x/p = 2y/x = y/1320, so that: s / x = 5281 / 5280 = ½ √(1 + u²) + (1/2u) ln( √(1 + u²) + u ). For small values of u, the right-hand side is roughly 1 + u²/6. Solving for u the equation thus simplified, we obtain u ≈ √(6/5280) = 0.0337099931... The height y is thus roughly equal to that quantity multiplied by 1320 ft, or about 44.4972 ft. This approximation is valid for any type of smooth enough curve. It can be refined for the parabolic case using successive approximations to solve for u the above equation. This yields u = 0.0337128658566... which exceeds the above by about 85.2 ppm (ppm = parts per million) for a final result of about 44.5010 ft. The previous solution would have satisfied any engineer before the computer era.
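The refinement step mentioned above is easy to reproduce. Here is a small Python sketch (our own check, using bisection rather than the successive approximations of the original answer) that solves s/x = 5281/5280 for u on the exact arc-length ratio and converts u into the height of the mile-long parabolic bridge.

```python
import math

def arc_ratio(u):
    """s/x for a parabolic arc, with u = 2y/x (half-span x, height y)."""
    return 0.5 * math.sqrt(1 + u * u) + math.log(math.sqrt(1 + u * u) + u) / (2 * u)

target = 5281 / 5280          # the arc is one foot longer than the mile-long span
lo, hi = 1e-6, 1.0            # bisection bracket for u (arc_ratio is increasing in u)
for _ in range(100):
    mid = (lo + hi) / 2
    if arc_ratio(mid) < target:
        lo = mid
    else:
        hi = mid

u = (lo + hi) / 2
print(u)              # about 0.033712865...
print(1320 * u)       # height in feet, about 44.50 ft
```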
(2008-03-27; e-mail) Length of a sagging horizontal cable: How long is a cable which spans 28 m horizontally and sags 300 mm? Answer: Surprisingly, just about 28.00857 m... Under its own weight, a uniform cable without any rigidity (a "chain") would actually assume the shape of a catenary. In a coordinate system with a vertical y-axis and centered on its apex, the catenary has the following cartesian equation: y/a = ch(x/a) − 1 = ½ ( e^(x/a) − 2 + e^(−x/a) ) = 2 sh²(x/2a). Measured from the apex at x = y = 0, the arclength s along the cable is: s = a sh(x/a). Those formulas are not easy to work with, unless the parameter a is given. For example, in the case at hand (a 28 m span with a 0.3 m sag) all we know is: x = 14, y = 0.3. So, we must solve for a (numerically) the transcendental equation: 0.3 / a = 2 sh²(7/a). This yields a = 326.716654425..., and 2s = 2a sh(14/a) = 28.00857... Thus, an 8.57 mm slack produces a 30 cm sag for a 28 m span. In similar cases, the parameter a is also large (it's equal to the radius of curvature at the curve's apex). So, we may find a good approximation to the relevant transcendental equation by equating the sh function to its (small) argument: y = 2a sh²(x/2a) ≈ x²/2a, so that a ≈ x²/2y, whereby s = a sh(x/a) ≈ x ( 1 + x²/6a² ) ≈ x ( 1 + 2y²/3x² ). This gives 2s ≈ 2x ( 1 + 8/3 (y/2x)² ) = 28.0085714... in the above case. Parabolic Approximation: If we plug the values x = 14 and y = 0.3 in the above formula for the exact length of a parabolic arc, we obtain: 2s = 28.0085690686... Circular Approximation: A thin circular arc of width 2x and of height y has a length ((x² + y²)/y) arcsin( 2xy / (x² + y²) ) = 28.00857064... In fact, all smooth enough approximations to a flat enough catenary will have a comparable precision, because this is what results from equating a curve to its osculating circle at the lowest point. The approximative expression we derived above in the case of the catenary is indeed quite general: 2x [ 1 + 8/3 (y/2x)² ]. Find the ratio, over one revolution, of the distance moved by a wheel rolling on a flat surface to the distance traced out by a point on its circumference. As a wheel of unit radius rolls (on the x-axis), the trajectory of a point on its circumference is a cycloid, whose parametric equation is not difficult to establish: x = t − sin(t), y = 1 − cos(t). In this, the parameter t is the abscissa [x-coordinate] of the center of the wheel. In the first revolution of the wheel (one arch of the cycloid), t goes from 0 to 2π. The length of one full arch of a cycloid ("cycloidal arch") was first worked out in the 17th century by Evangelista Torricelli (1608-1647), just before the advent of the calculus. Let's do it again with modern tools: Calling s the curvilinear abscissa (the length along the curve), we have: (dx)² + (dy)² = [ (1−cos(t))² + (sin(t))² ] (dt)², so that (ds/dt)² = 2 − 2 cos(t) = 4 sin²(t/2), and, if 0 ≤ t ≤ 2π: ds/dt = 2 sin(t/2) ≥ 0. The length of the whole arch is the integral of this when t goes from 0 to 2π and it is therefore equal to 8, [since the indefinite integral is −4 cos(t/2)]. On the other hand, the length of the trajectory of the wheel's center (a straight line) is clearly 2π (the circumference of the wheel). In other words, the trajectory of a point on the circumference is 4/π times as long as the trajectory of the center, for any whole number of revolutions (that's about 27.324% longer, if you prefer). The ratio you asked for is the reciprocal of that, namely π/4 (which is about 0.7853981633974...), the ratio of the circumference of the wheel to the length of the cycloidal arch. However, the result is best memorized as: "The length of a cycloidal arch is 4 times the diameter of the wheel."
(from Schenectady, NY. 2003-04-07; e-mail) What is the [indefinite] integral of (tan x)^(1/3) dx ? An obvious change of variable is to introduce y = tan x [ dy = (1+y²) dx ], so the integrand becomes y^(1/3) dy / (1+y²). This suggests a better change of variable, namely: z = y^(2/3) = (tan x)^(2/3) [ dz = (2/3) y^(−1/3) dy ], which yields z dz = (2/3) y^(1/3) dy, and makes the integrand equal to the following rational function of z, which may be integrated using standard methods (featuring a decomposition into 3 easy-to-integrate terms): (3/2) z dz / (1+z³) = ¼ (2z−1) dz / (1−z+z²) + (3/4) dz / (1−z+z²) − ½ dz / (1+z). As (1−z+z²) is equal to the positive quantity ¼ [ (2z−1)² + 3 ], we obtain: ∫ (tan x)^(1/3) dx = ¼ ln(1−z+z²) + (√3/2) arctg( (2z−1)/√3 ) − ½ ln(1+z), where z stands for | tan x |^(2/3). (D. B. of Grand Junction, CO.) A particle moves from right to left along the parabola y = √(−x) in such a way that its x coordinate decreases at the rate of 8 m/s. When x = −4, how fast is the change in the angle of inclination of the line joining the particle to the origin? We assume all distances are in meters. When the particle is at a negative abscissa x, the (negative) slope of the line in question is y/x = √(−x)/x, and the corresponding (negative) angle is thus: a = arctg(√(−x)/x). [In this, "arctg" is the "Arctangent" function, which is also spelled "atan" in US textbooks.] Therefore, a varies with x at a (negative) rate: da/dx = −1/(2√(−x)(1−x)) (rad/m). If x varies with time as stated, we have dx/dt = −8 m/s, so the angle a varies with time at a (positive) rate: da/dt = 4/(√(−x)(1−x)) (rad/s). When x is −4 m, the rate da/dt is therefore 4/(√4 × 5) rad/s = 0.4 rad/s. The angle a, which is always negative, is thus increasing at a rate of 0.4 rad/s when the particle is 4 meters to the left of the origin (rad/s = radian per second). What's the area bounded by the following curves? - y = f(x) = x³ − 9x - y = g(x) = x + 3 The curves intersect when f(x) = g(x), which translates into x³ − 10x − 3 = 0. This cubic equation factors nicely into (x + 3)(x² − 3x − 1) = 0, so we're faced with only a quadratic equation... To find if there's a "trivial" integer which is a root of a polynomial with integer coefficients [whose leading coefficient is ±1], observe that such a root would have to divide the constant term. In the above case, we only had 4 possibilities to try, namely −3, −1, +1, +3. The abscissas A < B < C of the three intersections are therefore: A = −3, B = ½ (3 − √13), C = ½ (3 + √13). Answering an Ambiguous Question: The best thing to do for a "figure 8", like the one at hand, is to compute the (positive) areas of each of the two lobes. The understanding is that you may add or subtract these, according to your chosen orientation of the boundary: - The area of the lobe from A to B (where f(x) is above g(x)) is the integral of f(x) − g(x) = x³ − 10x − 3 [whose primitive is x⁴/4 − 5x² − 3x] from A to B, namely (39√13 − 11)/8, or about 16.202... - The area of the lobe from B to C (where f(x) is below g(x)) is the integral of g(x) − f(x) from B to C, namely (39√13)/4, or about 35.154... The area we're after is thus either the sum (±51.356...) or the difference (±18.952...) of these two, depending on an ambiguous boundary orientation... If you don't switch curves at point B, the algebraic area may also be obtained as the integral of g(x) − f(x) from A to C (up to a change of sign). Signed Planar Areas Consistently Defined: A net planar area is best defined as the apparent area of a 3D loop.
The area surrounded by a closed planar curve may be defined in general terms, even when the curve does cross itself The usual algebraic definition of areas depends on the orientation (clockwise or counterclockwise) given to the closed boundary of a simple planar surface. The area is positive if the boundary runs counterclockwise around the surface, and negative otherwise the positive direction of planar angles is always counterclockwise). In the case of a simple closed curve [without any multiple points] this is often overlooked, since we normally consider only whichever orientation of the curve makes the area of its interior positive... The clear fact that there is such an "interior" bounded by any given closed planar curve is known as "Jordan's Theorem". It's a classical example of an "obvious" fact with a rather However, when the boundary has multiple points (like the center of a "figure 8"), there may be more than two oriented boundaries for it, since we may have a choice at a double point: Either the boundary crosses itself or it does not (in the latter case, we make a sharp turn, unless there's an unusual configuration about the intersection). Not all sets of such choices lead to a complete tracing of the whole loop. At left is the easy-to-prove "coloring rule" for a true self-crossing of the boundary, concerning the number of times the ordinary area is to be counted in the "algebraic area" dicussed here. It's nice to consider a given oriented closed boundary as a projection of a three-dimensional loop whose apparent area is defined as a path integral. x dy - y dx - y dx of Hickory, NC. 2001-04-13/email) [How do you generalize the method] of variation of parameters when solving differential equations (DE) of 3rd and higher order? For example: x''' - 3x'' + 4x = exp(2t) In memory of | taught me this and much more, many years ago. As shown below, a high-order linear DE can be reduced to a system of first-order linear differential equations in several variables. Such a system is of the form: X' = dX/dt = AX + B X is a column vector of n unknown functions of t. The square matrix A may depend explicitely on t. B is a vector of n explicit functions of t, called forcing terms. The associated homogeneous system is obtained by letting B = 0. For a nonconstant A, it may be quite difficult to find n independent solutions of this homogeneous system (an art form in itself) but, once you have them, a solution of the forced system may be obtained by generalizing to n variables the method (called "variation of parameters") commonly used for a single variable. Let's do this using only n-dimensional notations: The fundamental object is the square matrix W formed with the n columns corresponding to the n independent solutions of the homogeneous system. Clearly, W itself verifies the homogeneous equation: W' = AW It's an interesting exercise in the manipulation of determinants to prove that det(W)' = tr(A) det(W) (HINT: Differentiating just the i-th line of W gives a matrix whose determinant is the product of det(W) by the i-th component in the diagonal of the matrix A). Since det(W), the so-called "Wronskian", is thus solution of a first-order linear DE, it's proportional to the exponential of some function and is therefore either nonzero everywhere or zero everywhere. (Also, the Wronskians for different sets of homogeneous solutions must be proportional.) Homogeneous solutions that are linearly independent at some point are therefore independent everywhere and W(t) has an inverse for any t. 
We may thus look for the solution X to the nonhomogeneous system in the form X = WY: AX + B = X' = W'Y + WY' = AWY + WY' = AX + WY'. Therefore, B = WY'. So, Y is simply obtained by integrating W⁻¹B, and the general solution of the forced system may be expressed as follows, with a constant vector K (whose n components are the n "constants of integration"). This looks very much like the corresponding formula for a single variable: X(t) = W(t) [ K + ∫ᵗ W⁻¹(u) B(u) du ]. Linear Differential Equation of Order n: A linear differential equation of order n has the following form (where the aₖ and b are explicit functions of t): x^(n) + a_(n-1) x^(n-1) + ... + a₃ x^(3) + a₂ x" + a₁ x' + a₀ x = b. This reduces to the above system X' = AX + B with the following notations: X is the column vector whose components are x, x', x", ..., x^(n-1); B is the column vector whose components are all zero except the last one, which is b; and A is the companion matrix whose last row holds the coefficients −a₀, −a₁, ..., −a_(n-1). The first n−1 components in the equation X' = AX + B merely define each component of X as the derivative of the previous one, whereas the last component expresses the original high-order differential equation. Now, the general discussion above applies fully with a W matrix whose first line consists of n independent solutions of the homogeneous equation (each subsequent line is simply the derivative of its predecessor). Here comes the Green function... We need not work out every component of W⁻¹ since we're only interested in the first component of X... The above boxed formula tells us that we only need the first component of W(t)W⁻¹(u)B(u), which may be written G(t,u)b(u), by calling G(t,u) the first component of W(t)W⁻¹(u)Z, where Z is a vector whose components are all zero, except the last one, which is one. G(t,u) is called the Green function associated to the given homogeneous equation. It has a simple expression (given below) in terms of a ratio of determinants computed for independent solutions of the homogeneous equation. (Such an expression makes it easy to prove that the Green function is indeed associated to the equation itself and not to a particular set of independent solutions, as it is clearly invariant if you replace any solution by some linear combination in which it appears with a nonzero coefficient.) For a third-order equation with homogeneous solutions A(t), B(t) and C(t), the expression of the Green function (which generalizes to any order) is simply the determinant of the array [ A(u) B(u) C(u) ; A'(u) B'(u) C'(u) ; A(t) B(t) C(t) ] divided by the Wronskian determinant of the array [ A(u) B(u) C(u) ; A'(u) B'(u) C'(u) ; A"(u) B"(u) C"(u) ]. It's also a good idea to define G(t,u) to be zero when u > t, since such values of G(t,u) are not used in the integral ∫ᵗ G(t,u) b(u) du. This convention allows us to drop the upper limit of the integral, so we may write a special solution of the inhomogeneous equation as the definite integral (from −∞ to +∞, whenever it converges): ∫ G(t,u) b(u) du. If this integral does not converge (the issue may only arise when u goes to −∞), we may still use this formal expression by considering that the forcing term b(u) is zero at any time t earlier than whatever happens to be the earliest time we wish to consider. (This is one unsatisfying way to reestablish some kind of fixed arbitrary lower bound for the integral of interest when the only natural one, namely −∞, is not acceptable.) In the case of the equation x''' − 3x" + 4x = exp(2t), three independent solutions are A(t) = exp(−t), B(t) = exp(2t), and C(t) = t exp(2t). This makes the denominator in the above (the "Wronskian") equal to 9 exp(3u), whereas the numerator is exp(4u−t) − (1 + 3u − 3t) exp(2t+u), so that G(t,u) = [ exp(u−t) + (3(t−u) − 1) exp(2(t−u)) ] / 9 for u ≤ t. With those values, the integral of G(t,u) exp(2u) du when u goes from 0 to t turns out to be equal to f(t) = [ (9t² − 6t + 2) exp(2t) − 2 exp(−t) ] / 54, which is therefore a special solution of your equation.
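The claimed special solution is easy to double-check symbolically. The short sympy sketch below (our own verification, not part of the original answer) plugs f(t) = [(9t² − 6t + 2) exp(2t) − 2 exp(−t)]/54 into x''' − 3x'' + 4x and confirms that the result simplifies to exp(2t), with vanishing initial values.

```python
import sympy as sp

t = sp.symbols('t')
f = ((9*t**2 - 6*t + 2) * sp.exp(2*t) - 2 * sp.exp(-t)) / 54

# Left-hand side of x''' - 3x'' + 4x for this candidate solution.
residual = sp.diff(f, t, 3) - 3 * sp.diff(f, t, 2) + 4 * f
print(sp.simplify(residual))                    # -> exp(2*t)
print(f.subs(t, 0), sp.diff(f, t).subs(t, 0))   # -> 0 0 (this particular solution starts at rest)
```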
The general solution may be expressed as: x(t) = (a + b t + t²/6) exp(2t) + c exp(−t) [ a, b and c are constant ]. Clearly, this result could have been obtained without this heavy artillery: Once you've solved the homogeneous equation and realized that the forcing term is a solution of it, it is very natural to look for an inhomogeneous solution of the form z exp(2t) and find that z" = 1/3 works. That's far less tedious than computing and using the associated Green's function. However, efficiency in this special case is not what the question was all about... Convolutions and the Theory of Distributions: An introduction to the epoch-making approach of Laurent Schwartz. The above may be dealt with using the elegant idea of convolution products among distributions. The notorious Theory of Distributions occurred to the late Schwartz (1915-2002) "one night in 1944". For this, he received the first Fields Medal ever awarded to a Frenchman, in 1950. (Schwartz taught me functional analysis in the Fall of 1977.) A linear differential equation with constant coefficients (an important special case) may be expressed as a convolution a * x = b. The convolution operator * is bilinear, associative and commutative. Its identity element is the Delta distribution δ (dubbed Dirac's "function"). Loosely speaking, the Delta distribution δ would correspond to a "function" whose integral is 1, but whose value at every point except zero is zero. The integral of an ordinary function which is zero almost everywhere would necessarily be zero. Therefore, the δ distribution cannot possibly be an ordinary function: Convolutions must be put in the proper context of the Theory of Distributions. A strong case can be made that the convolution product is the notion that gives rise to the very concept of distribution. Distributions had been used loosely by physicists for a long time, when Schwartz finally found a very simple mathematical definition for them: Considering a (very restricted) space D of so-called test functions, a distribution is simply a linear function which associates a scalar to every test function. Although other possibilities have been studied (which give rise to less general distributions), D is normally the so-called Schwartz space of infinitely derivable functions of compact support. These are perfectly smooth functions vanishing outside of a bounded domain, like the function of x which is exp(−1 / (1−x²)) in [−1,+1] and 0 elsewhere. What could be denoted f(g) is written ⟨f, g⟩. This hint of an ultimate symmetry between the rôles of f and g is fulfilled by the following relation, which holds whenever the integral exists for ordinary functions f and g: ( f * g )(t) = ∫ f(t−u) g(u) du. This relation may be used to establish commutativity (switch the variable to v = t−u, going from +∞ to −∞ when u goes from −∞ to +∞). The associativity of the convolution product is obtained by figuring out a double integral. Convolutions have many stunning properties. In particular, the Fourier transform of the convolution product of two functions is the ordinary product of their Fourier transforms.
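That Fourier property is easy to observe numerically. Here is a short numpy sketch (our own illustration) checking that the linear convolution of two finite sequences matches what is obtained by multiplying their zero-padded Fourier transforms and transforming back.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(16), rng.random(16)

direct = np.convolve(a, b)               # linear convolution, length 31
n = len(a) + len(b) - 1                  # zero-pad to avoid circular wrap-around
via_fourier = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

print(np.allclose(direct, via_fourier))  # -> True
```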
Another key property is that the derivative of a convolution product may be obtained by differentiating either one of its factors: D( u * v ) = D(u) * v = u * D(v). This means the derivatives of a function f can be expressed as convolutions, using the derivatives of the δ distribution (strange but useful beasts): f = δ * f, f ' = δ' * f, f " = δ" * f, and so on. If the n-th order linear differential equation discussed above has constant coefficients, we may write it as f * x = b by introducing the distribution f = δ^(n) + a_(n-1) δ^(n-1) + ... + a₃ δ^(3) + a₂ δ" + a₁ δ' + a₀ δ. Clearly, if we have a function g such that f * g = δ, we will obtain a special solution of the inhomogeneous equation as x = g * b. If you translate the convolution product into an integral, what you obtain is thus the general expression involving a Green function G(t,u) = g(t−u), where g(v) is zero for negative values of v. The case where coefficients are constant is therefore much simpler than the general case: Where you had a two-variable integrator, you now have a single-variable one. Not only that, but the homogeneous solutions are well-known (if z is an eigenvalue of multiplicity n+1 for the matrix involved, the product of exp(zt) by any polynomial of degree n, or less, is a solution). In the important special case where all the eigenvalues are distinct, the determinants involved in the expression of G(t,u) = g(t−u) are essentially Vandermonde determinants or Vandermonde cofactors (a Vandermonde determinant is a determinant where each column consists of the successive powers of a particular number). The expression is thus fairly easy to work out and may be put into the following simple form, involving the characteristic polynomial P for the equation (it's also the characteristic polynomial of the matrix we called A in the above). For any eigenvalue z, the derivative P'(z) is the product of all the differences between that eigenvalue and each of the others (which is what Vandermonde expressions entail): g(v) = exp(z1 v) / P'(z1) + exp(z2 v) / P'(z2) + ... + exp(zn v) / P'(zn) [for v ≥ 0]. With this, x = g * b is indeed a special solution of our original equation f * x = b. (Brent Watts of Hickory, NC.) How do you use Laplace transforms to solve this differential system? Initial conditions, for t = 0: w = 0, w' = 1, y = 0, y' = 0, z = −1, z' = 1.
- w" + y + z = −1
- w + y" − z = 0
- −w' − y' + z" = 0
The (unilateral) Laplace transform g(p) of a function f(t) is given by: g(p) = ∫₀^∞ f(t) exp(−pt) dt. This is defined, for a positive p, whenever the integral makes sense. For example, the Laplace transform of a constant k is the function g such that g(p) = k/p. Integrating by parts ∫ f '(t) exp(−pt) dt gives a simple relation, which may be iterated, between the respective Laplace transforms L(f ') and L(f) of f ' and f: L(f ')[p] = −f(0) + p L(f)[p], and L(f ")[p] = −f '(0) + p L(f ')[p] = −f '(0) − p f(0) + p² L(f)[p]. This is the basis of the so-called Operational Calculus, invented by Oliver Heaviside (1850-1925), which translates many practical systems of differential equations into algebraic ones. (Originally, Heaviside was interested in the transient solutions to the simple differential equations arising in electrical circuits.) In this particular case, we may use capital letters to denote Laplace transforms of lowercase functions (W = L(w), Y = L(y), Z = L(z)...)
In this particular case, we may use capital letters to denote the Laplace transforms of the lowercase functions (W = L(w), Y = L(y), Z = L(z)), and your differential system translates into:
- (p²W - 1 - 0p) + Y + Z = -1/p
- W + (p²Y - 0 - 0p) - Z = 0
- -(pW - 0) - (pY - 0) + (p²Z - 1 + p) = 0

In other words:
- p²W + Y + Z = 1 - 1/p
- W + p²Y - Z = 0
- -pW - pY + p²Z = 1 - p

Solve for W, Y and Z and express the results as simple sums (that's usually the tedious part, but this example is clearly designed to be simpler than usual):
- W = 1/(p² + 1)
- Y = p/(p² + 1) - 1/p
- Z = 1/(p² + 1) - p/(p² + 1)

The last step is to go from these Laplace transforms back to the original (lowercase) functions of t, with a reverse lookup using a table of Laplace transforms, similar to the (short) one provided below:
- w = sin(t)
- y = cos(t) - 1
- z = sin(t) - cos(t)

With other initial conditions, solutions may involve various linear combinations of no fewer than 5 different types of functions (namely: sin(t), cos(t), exp(-t), t and the constant 1), which would make a better showcase for Operational Calculus than this particularly simple example...

Below is a small table of Laplace transforms. This table enables a reverse lookup which is more than sufficient to solve the above for any set of initial conditions. Each entry gives f(t) and its transform g(p) = ∫₀∞ f(t) exp(-pt) dt:
- 1 = t⁰  |  1/p
- tⁿ  |  n! / pⁿ⁺¹
- exp(at)  |  1 / (p-a)
- sin(kt)  |  k / (p² + k²)
- cos(kt)  |  p / (p² + k²)
- exp(at) sin(kt)  |  k / ([p-a]² + k²)
- exp(at) cos(kt)  |  [p-a] / ([p-a]² + k²)
- d [Dirac Delta]  |  1
- f '(t)  |  p g(p) - f(0)
- f ''(t)  |  p² g(p) - p f(0) - f '(0)

Brent Watts of Hickory, NC.
1) What is an example of a function for which the integral from -∞ to +∞ of |f(x)| dx exists, but [that of] f(x) dx does not?
2) [What is an example of a function f] for which the opposite is true? The integral from -∞ to +∞ exists for f(x) dx but not for |f(x)| dx.

1) Consider any nonmeasurable set E within the interval [0,1] (the existence of such a set is guaranteed by Zermelo's Axiom of Choice) and define f(x) to be:
- +1 if x is in E
- -1 if x is in [0,1] but not in E
- 0 if x is outside [0,1]
The function f is not Lebesgue-integrable, but its absolute value clearly is (|f(x)| is equal to 1 on [0,1] and to 0 elsewhere). That was for Lebesgue integration. For Riemann integration, you may construct a simpler example by letting the above E be the set of rationals between 0 and 1.

2) On the other hand, the function sin(x)/x is a simple example of a function which is Riemann-integrable over the whole real line (Riemann integration can be defined over an infinite interval, although it's not usually done in basic textbooks), whereas the absolute value |sin(x)/x| is not. Neither function is Lebesgue-integrable over the whole real line, although both are over any finite interval.

Show that:  f (D)[exp(ax) y] = exp(ax) f (D+a)[y] ,  where D is the operator d/dx.

The notation has to be explained to readers not familiar with it. If f (x) is the converging sum of all terms an xⁿ (for some scalar sequence an), f is called an analytic function [about zero], and it can be defined for some nonnumerical things that can be added, scaled or "exponentiated"... The possibility of exponentiation to the power of a nonnegative integer reasonably requires the definition of some kind of multiplication with a neutral element (in order to define the zeroth power), but that multiplication need not be commutative or even associative. The lesser requirement of alternativity suffices (as is observed in the case of the octonions). Here we shall focus on the multiplication of square matrices of finite sizes, which corresponds to the composition of linear functions in a vector space of finitely many dimensions.
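Anticipating the matrix viewpoint developed next, here is a brief sympy sketch (my own illustration; the matrix below is an arbitrary example) showing that applying an analytic function such as exp to a diagonalizable matrix through its power series agrees with applying it to the eigenvalues in a diagonalizing basis:

```python
import sympy as sp

M = sp.Matrix([[2, 1],
               [0, 3]])     # an arbitrary diagonalizable matrix (illustration only)

# f(M) for f = exp, via a truncated power series  sum M**n / n!
series = sum((M**n / sp.factorial(n) for n in range(30)), sp.zeros(2, 2))

# The same thing via the eigenbasis:  f(M) = P f(D) P^(-1)
P, D = M.diagonalize()
via_eigenbasis = P * sp.diag(*[sp.exp(D[i, i]) for i in range(D.rows)]) * P.inv()

print(sp.simplify(via_eigenbasis - M.exp()))   # zero matrix (exact)
print((series - M.exp()).evalf(3))             # entries ~ 0 (truncation error only)
```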
If M is a finite square matrix representing some linear operator (which we shall denote by the same symbol M for convenience), f (M) is defined as a power series of M. If there's a vector basis in which the operator M is diagonal, f (M) is diagonal in that same basis, with f (z) appearing on the diagonal of f (M) wherever z appears in the diagonal of M.

Now, the differential operator D is a linear operator like any other, whether it operates on a space of finitely many dimensions (for example, polynomials of degree 57 or less) or infinitely many dimensions (polynomials, formal series...). f (D) may thus be defined the same way. It's a formal definition which may or may not have a numerical counterpart, as the formal series involved may or may not converge. The same thing applies to any other differential operator, and this is how f (D) and f (D+a) are to be interpreted.

To prove that a linear relation holds when f appears homogeneously (as is the case here), it is enough to prove that it holds for any n when f (x) = xⁿ:
- The relation is trivial for n=0 (the zeroth power of any operator is the identity operator), as the relation translates into exp(ax)y = exp(ax)y.
- The case n=1 is: D[exp(ax)y] = a exp(ax)y + exp(ax)D[y] = exp(ax)(D+a)[y].
- The case n=2 is obtained by differentiating the case n=1, exactly like the case n+1 is obtained by differentiating case n, namely:
Dⁿ⁺¹[exp(ax)y] = D[exp(ax)(D+a)ⁿ(y)] = a exp(ax)(D+a)ⁿ[y] + exp(ax) D[(D+a)ⁿ(y)] = exp(ax) (D+a)[(D+a)ⁿ(y)] = exp(ax) (D+a)ⁿ⁺¹[y].

This completes a proof by induction for any f (x) = xⁿ, which establishes the relation for any analytic function f, through summation of such elementary results.
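As a sanity check (mine, not part of the original answer), the relation can be verified symbolically for any particular power f(x) = xⁿ; since the scalar a commutes with D, (D+a)ⁿ expands by the binomial theorem:

```python
import sympy as sp

x, a = sp.symbols('x a')
y = sp.Function('y')(x)
n = 3                        # any sample exponent

lhs = sp.diff(sp.exp(a*x) * y, x, n)                        # D^n [ exp(ax) y ]
rhs = sp.exp(a*x) * sum(sp.binomial(n, k) * a**(n - k) * sp.diff(y, x, k)
                        for k in range(n + 1))              # exp(ax) (D+a)^n [y]

print(sp.simplify(lhs - rhs))    # -> 0
```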
http://www.numericana.com/answer/calculus.htm
An earthquake (also known as a quake, tremor or temblor) is the result of a sudden release of energy in the Earth's crust that creates seismic waves. The seismicity, seismism or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.

Earthquakes are measured using observations from seismometers. The moment magnitude is the most common scale on which earthquakes larger than approximately 5 are reported for the entire globe. The more numerous earthquakes smaller than magnitude 5 reported by national seismological observatories are measured mostly on the local magnitude scale, also referred to as the Richter scale. These two scales are numerically similar over their range of validity. Earthquakes of magnitude 3 or lower are mostly imperceptible or weak, while earthquakes of magnitude 7 and over potentially cause serious damage over large areas, depending on their depth. The largest earthquakes in historic times have been of magnitude slightly over 9, although there is no limit to the possible magnitude. The most recent large earthquake of magnitude 9.0 or larger was a 9.0 magnitude earthquake in Japan in 2011 (as of October 2012), and it was the largest Japanese earthquake since records began. Intensity of shaking is measured on the modified Mercalli scale. The shallower an earthquake, the more damage to structures it causes, all else being equal.

At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.

In its most general sense, the word earthquake is used to describe any seismic event, whether natural or caused by humans, that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.

Naturally occurring earthquakes

Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, and this leads to a form of stick-slip behaviour. Once the fault has locked, continued relative motion between the plates leads to increasing stress and, therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy.
Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.

Earthquake fault types

There are three main types of fault, all of which may cause an earthquake: normal, reverse (thrust) and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended, such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened, such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.

Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude.

The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet which can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 degrees Celsius flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults, which may break in one go, are approximately 1000 km. Examples are the earthquakes in Chile, 1960; Alaska, 1957; and Sumatra, 2004, all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939) and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.

The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length but the available width, because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus the width of the plane within the top brittle crust of the Earth can become 50 to 100 km (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible. Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust, so earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km.
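To make the link between rupture area and magnitude concrete, here is a small Python sketch using the standard definition of seismic moment, M0 = rigidity x area x slip, and the Hanks-Kanamori moment-magnitude relation. These formulas are standard seismology rather than quotations from the text above, and the rupture dimensions below are round, illustrative numbers, not measurements:

```python
import math

def moment_magnitude(length_km, width_km, slip_m, rigidity_pa=3.0e10):
    """Moment magnitude Mw from rupture dimensions and average slip.

    Seismic moment: M0 = mu * A * D (in N*m), with mu the crustal rigidity.
    Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1).
    """
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = rigidity_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative round numbers (not measured values):
print(round(moment_magnitude(1000, 100, 20), 1))  # a giant subduction rupture -> ~9.1
print(round(moment_magnitude(400, 15, 5), 1))     # a long strike-slip rupture -> ~7.9
```

Doubling the width at fixed length and slip adds only about 0.2 to the magnitude, but the factor-of-20 range in available width noted above is what separates strike-slip faults from the widest subduction ruptures.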
In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that 'pushes' the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass 'escapes' in the direction of the least principal stress, namely upward, lifting the rock mass up, thus the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.

Earthquakes away from plate boundaries

Where plate boundaries occur within continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms.

All tectonic plates have internal stress fields caused by their interactions with neighbouring plates and sedimentary loading or unloading (e.g. deglaciation). These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.

Shallow-focus and deep-focus earthquakes

The majority of tectonic earthquakes originate along the Ring of Fire, at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km are classified as 'shallow-focus' earthquakes, while those with a focal depth between 70 and 300 km are commonly termed 'mid-focus' or 'intermediate-depth' earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 up to 700 kilometers). These seismically active areas of subduction are known as Wadati-Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.

Earthquakes and volcanic activity

Earthquakes often occur in volcanic regions and are caused there both by tectonic faults and by the movement of magma in volcanoes.
Such earthquakes can serve as an early warning of volcanic eruptions, as during the Mount St. Helens eruption of 1980. Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (a device that measures ground slope) and used as sensors to predict imminent or upcoming eruptions.

A tectonic earthquake begins with an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m, while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggests that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also, the effects of strong ground motion make it very difficult to record information close to a nucleation zone.

Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed-mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity, and this is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighbouring coast, as in the 1896 Meiji-Sanriku earthquake.

Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern. An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region as the main shock but always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock. Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time.
They are different from earthquakes followed by a series of aftershocks in that no single earthquake in the sequence is obviously the main shock, so none has a notably higher magnitude than the others. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park. In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.

Sometimes a series of earthquakes occur in a sort of earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century, and has been inferred for older anomalous clusters of large earthquakes in the Middle East.

Size and frequency of occurrence

It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in Mexico, Guatemala, Chile, Peru, Indonesia, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India and Japan, but earthquakes can occur almost anywhere, including New York City, London, and Australia. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are: an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg–Richter law.

The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey (USGS). A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.
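The Gutenberg–Richter law mentioned above is usually written log10 N = a - b·M, where N is the number of events of magnitude M or larger. A minimal sketch of what it implies, assuming the commonly quoted global b-value of about 1 (an assumption, not a figure from the text):

```python
def expected_count(n_above_4, magnitude, b=1.0):
    """Expected count of quakes at or above `magnitude`, given the count at or
    above magnitude 4, using the Gutenberg-Richter scaling N(M) ~ 10**(-b*M).
    With b = 1, each extra unit of magnitude divides the count by ten, which is
    the 'roughly ten times as many' statement in the text."""
    return n_above_4 * 10 ** (-b * (magnitude - 4))

n4 = 10000        # hypothetical yearly count of M >= 4 events (illustration only)
for m in (5, 6, 7):
    print(m, round(expected_count(n4, m)))
# 5 1000
# 6 100
# 7 10
```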
Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000 km long, horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate. Massive earthquakes tend to occur along other plate boundaries, too, such as along the Himalayan Mountains. With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to 3 million people.

While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Four main activities contribute to this phenomenon: storing large amounts of water behind a dam (and possibly building an extremely heavy building), drilling and injecting liquid into wells, coal mining, and oil drilling. Perhaps the best known example is the 2008 Sichuan earthquake in China's Sichuan Province in May; this tremor resulted in 69,227 fatalities and is the 19th deadliest earthquake of all time. The Zipingpu Dam is believed to have altered the pressure on the fault 1,650 feet (503 m) away; this pressure probably increased the power of the earthquake and accelerated the rate of movement of the fault. The greatest earthquake in Australia's history is also claimed to have been induced by human activity, through coal mining. The city of Newcastle was built over a large sector of coal mining areas. The earthquake has been reported to have been spawned by a fault that reactivated due to the millions of tonnes of rock removed in the mining process.

Measuring and locating earthquakes

Earthquakes can be recorded by seismometers up to great distances, because seismic waves travel through the whole of the Earth's interior. The absolute magnitude of a quake is conventionally reported by numbers on the moment magnitude scale (formerly the Richter scale, with magnitude 7 causing serious damage over large areas), whereas the felt magnitude is reported using the modified Mercalli intensity scale (intensity II–XII).

Every tremor produces different types of seismic waves, which travel through rock with different velocities:
- Longitudinal P-waves (shock- or pressure waves)
- Transverse S-waves (both body waves)
- Surface waves (Rayleigh and Love waves)

Propagation velocity of the seismic waves ranges from approx. 3 km/s up to 13 km/s, depending on the density and elasticity of the medium. In the Earth's interior the shock- or P-waves travel much faster than the S-waves (approx. relation 1.7 : 1). The differences in travel time from the epicentre to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also, the depth of the hypocenter can be computed roughly. In solid rock P-waves travel at about 6 to 7 km per second; the velocity increases within the deep mantle to ~13 km/s. The velocity of S-waves ranges from 2–3 km/s in light sediments and 4–5 km/s in the Earth's crust up to 7 km/s in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle. On average, the distance in kilometers to the earthquake is the number of seconds between the P- and S-wave arrivals times 8. Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg.
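The "times 8" rule of thumb quoted above translates directly into a one-line helper. This is a rough illustration only; real locations use travel-time tables and observations from several stations:

```python
def distance_km(sp_seconds):
    """Rough epicentral distance from the S-minus-P arrival-time difference,
    using the rule of thumb above: about 8 km per second of delay."""
    return 8.0 * sp_seconds

print(distance_km(10))   # ~80 km away
print(distance_km(45))   # ~360 km away
```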
Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn-Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions. Standard reporting of an earthquake includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.

Effects of earthquakes

The effects of earthquakes include, but are not limited to, the following:

Shaking and ground rupture

Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration. Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and to effects of seismic energy focalization owing to the typical geometrical setting of the deposits. Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several metres in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure.

Landslides and avalanches

Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.

Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.

Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.

Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600-800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes.
Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them. Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.

A flood is an overflow of any amount of water that reaches land. Floods occur usually when the volume of water within a body of water, such as a river or lake, exceeds the total capacity of the formation, and as a result some of the water flows or sits outside of the normal perimeter of the body. However, floods may be secondary effects of earthquakes, if dams are damaged. Earthquakes may cause landslips that dam rivers; when such a landslide dam collapses, it can cause floods. The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.

An earthquake may cause injury and loss of life, road and bridge damage, general property damage (which may or may not be covered by earthquake insurance), and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, lack of basic necessities, and higher insurance premiums.

One of the most devastating earthquakes in recorded history occurred on 23 January 1556 in the Shaanxi province, China, killing more than 830,000 people (see 1556 Shaanxi earthquake). Most of the population in the area at the time lived in yaodongs, artificial caves in loess cliffs, many of which collapsed during the catastrophe with great loss of life. The 1976 Tangshan earthquake, with a death toll estimated to be between 240,000 and 655,000, is believed to be the largest earthquake of the 20th century by death toll. The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake, which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history. Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.

Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month. However, for well-understood faults the probability that a segment may rupture during the next few decades can be estimated.
Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.

The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.

Ways to Survive an Earthquake - Be Prepared: Before, During and After an Earthquake

Earthquakes do not last for a long time, generally a few seconds to a minute. The 1989 San Francisco earthquake only lasted 15 seconds.
- Securing water heaters, major appliances and tall, heavy furniture to prevent them from toppling are prudent steps. So, too, are storing hazardous or flammable liquids, heavy objects and breakables on low shelves or in secure cabinets.
- If you're indoors, stay there. Get under -- and hold onto -- a desk or table, or stand against an interior wall. Stay clear of exterior walls, glass, heavy furniture, fireplaces and appliances. The kitchen is a particularly dangerous spot. If you're in an office building, stay away from windows and outside walls and do not use the elevator. Stay low and cover your head and neck with your hands and arms. Bracing yourself against a wall or heavy furniture is usually enough protection when weaker earthquakes strike.
- Cover your head and neck. Use your hands and arms. If you have any respiratory disease, make sure that you cover your head with a t-shirt or bandana until all the debris and dust has settled. Inhaling dirty air is not good for your lungs.
- DO NOT stand in a doorway: An enduring earthquake image of California is a collapsed adobe home with the door frame as the only standing part. From this came our belief that a doorway is the safest place to be during an earthquake. That is true only if you live in an old, unreinforced adobe house or some older woodframe houses. In modern houses, doorways are no stronger than any other part of the house, and the doorway does not protect you from the most likely source of injury: falling or flying objects. You also may not be able to brace yourself in the door during strong shaking. You are safer under a table. Many are certain that standing in a doorway during the shaking is a good idea. That's false, unless you live in an unreinforced adobe structure; otherwise, you're more likely to be hurt by the door swinging wildly in a doorway.
- Inspect your house for anything that might be in a dangerous condition. Glass fragments, the smell of gas, or damaged electrical appliances are examples of hazards.
- Do not move. If it is safe to do so, stay where you are for a minute or two, until you are sure the shaking has stopped. Then slowly get out of the house. Wait until the shaking has stopped to evacuate the building carefully.
- PRACTICE THE RIGHT THING TO DO... IT COULD SAVE YOUR LIFE. You will be more likely to react quickly when shaking begins if you have actually practiced how to protect yourself on a regular basis. Regular drills are a great time to practice Drop, Cover, and Hold.
- If you're outside, get into the open.
Stay clear of buildings, power lines or anything else that could fall on you. Broken glass may look smooth and harmless, but even a small piece can injure your foot; this is why you should wear heavy shoes to protect your feet at such times.
- Be aware that items may fall out of cupboards or closets when the door is opened, and also that chimneys can be weakened and fall with a touch. Check for cracks and damage to the roof and foundation of your home.
- Things you'll need: a blanket; sturdy shoes; a dust mask to help filter contaminated air; plastic sheeting and duct tape to shelter in place; basic hygiene supplies, e.g. soap; feminine supplies and personal hygiene items.

From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus, who lived from 625 to 547 BCE, was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) belief that short episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms."

Earthquakes in culture

Mythology and religion

In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.

In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.

In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.

In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1998). A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection after the quake depicts the consequences of the Kobe earthquake of 1995. The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996) and Goodbye California (1977) among other works. Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.
Contemporary depictions of earthquakes in film vary in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones. Disaster mental-health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, and loss of essential supplies and services to maintain survival. Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions. As was observed after other disasters involving destruction and loss of life and their media depictions, such as the 2001 World Trade Center attacks or Hurricane Katrina, and as has been observed more recently in the 2010 Haiti earthquake, it is also important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate these reactions, to support constructive problem-solving, and to encourage reflection as to how one might improve the conditions of those affected.
- "M7.5 Northern Peru Earthquake of 26 September 2005" (PDF). National Earthquake Information Center. 17 October 2005. Retrieved 2008-08-01. - Greene II, H. W.; Burnley, P. C. (October 26, 1989). "A new self-organizing mechanism for deep-focus earthquakes". Nature 341 (6244): 733–737. Bibcode:1989Natur.341..733G. doi:10.1038/341733a0. - Foxworthy and Hill (1982). Volcanic Eruptions of 1980 at Mount St. Helens, The First 100 Days: USGS Professional Paper 1249. - Watson, John; Watson, Kathie (January 7, 1998). "Volcanoes and Earthquakes". United States Geological Survey. Retrieved May 9, 2009. - National Research Council (U.S.). Committee on the Science of Earthquakes (2003). "5. Earthquake Physics and Fault-System Science". Living on an Active Earth: Perspectives on Earthquake Science. Washington D.C.: National Academies Press. p. 418. ISBN 978-0-309-06562-7. Retrieved 8 July 2010. - Thomas, Amanda M.; Nadeau, Robert M.; Bürgmann, Roland (December 24, 2009). "Tremor-tide correlations and near-lithostatic pore pressure on the deep San Andreas fault". Nature 462 (7276): 1048–51. Bibcode:2009Natur.462.1048T. doi:10.1038/nature08654. PMID 20033046. - "Gezeitenkräfte: Sonne und Mond lassen Kalifornien erzittern" SPIEGEL online, 29.12.2009 - Tamrazyan, Gurgen P. (1967). "Tide-forming forces and earthquakes". Icarus 7 (1–3): 59–65. Bibcode:1967Icar....7...59T. doi:10.1016/0019-1035(67)90047-4. - Tamrazyan, Gurgen P. (1968). "Principal regularities in the distribution of major earthquakes relative to solar and lunar tides and other cosmic forces". Icarus 9 (1–3): 574–92. Bibcode:1968Icar....9..574T. doi:10.1016/0019-1035(68)90050-X. - "What are Aftershocks, Foreshocks, and Earthquake Clusters?". - "Repeating Earthquakes". United States Geological Survey. January 29, 2009. Retrieved May 11, 2009. - "Earthquake Swarms at Yellowstone". United States Geological Survey. Retrieved 2008-09-15. - Duke, Alan. "Quake 'swarm' shakes Southern California". CNN. Retrieved 27 August 2012. - Amos Nur; Cline, Eric H. (2000). "Poseidon's Horses: Plate Tectonics and Earthquake Storms in the Late Bronze Age Aegean and Eastern Mediterranean". Journal of Archaeological Science 27 (1): 43–63. doi:10.1006/jasc.1999.0431. ISSN 0305-4403. - "Earthquake Storms". Horizon. 1 April 2003. Retrieved 2007-05-02. - "Earthquake Facts". United States Geological Survey. Retrieved 2010-04-25. - Pressler, Margaret Webb (14 April 2010). "More earthquakes than usual? Not really.". KidsPost (Washington Post: Washington Post). pp. C10. - "Earthquake Hazards Program". United States Geological Survey. Retrieved 2006-08-14. - "Seismicity and earthquake hazard in the UK". Quakes.bgs.ac.uk. Retrieved 2010-08-23. - "Italy's earthquake history." BBC News. October 31, 2002. - "Common Myths about Earthquakes". United States Geological Survey. Retrieved 2006-08-14. - "Earthquake Facts and Statistics: Are earthquakes increasing?". United States Geological Survey. Retrieved 2006-08-14. - The 10 biggest earthquakes in history, Australian Geographic, March 14, 2011. - "Historic Earthquakes and Earthquake Statistics: Where do earthquakes occur?". United States Geological Survey. Retrieved 2006-08-14. - "Visual Glossary — Ring of Fire". United States Geological Survey. Retrieved 2006-08-14. - Jackson, James, "Fatal attraction: living with earthquakes, the growth of villages into megacities, and earthquake vulnerability in the modern world," Philosophical Transactions of the Royal Society, doi:10.1098/rsta.2006.1805 Phil. Trans. R. Soc. 
A 15 August 2006 vol. 364 no. 1845 1911–1925. - "Global urban seismic risk." Cooperative Institute for Research in Environmental Science. - Madrigal, Alexis (4 June 2008). "Top 5 Ways to Cause a Man-Made Earthquake". Wired News (CondéNet). Retrieved 2008-06-05. - "How Humans Can Trigger Earthquakes". National Geographic. February 10, 2009. Retrieved April 24, 2009. - Brendan Trembath (January 9, 2007). "Researcher claims mining triggered 1989 Newcastle earthquake". Australian Broadcasting Corporation. Retrieved April 24, 2009. - "Speed of Sound through the Earth". Hypertextbook.com. Retrieved 2010-08-23. - Geographic.org. "Magnitude 8.0 - SANTA CRUZ ISLANDS Earthquake Details". Gobal Earthquake Epicenters with Maps. Retrieved 2013-03-13. - "On Shaky Ground, Association of Bay Area Governments, San Francisco, reports 1995,1998 (updated 2003)". Abag.ca.gov. Retrieved 2010-08-23. - "Guidelines for evaluating the hazard of surface fault rupture, California Geological Survey". California Department of Conservation. 2002. - "Natural Hazards — Landslides". United States Geological Survey. Retrieved 2008-09-15. - "The Great 1906 San Francisco earthquake of 1906". United States Geological Survey. Retrieved 2008-09-15. - "Historic Earthquakes — 1946 Anchorage Earthquake". United States Geological Survey. Retrieved 2008-09-15. - Noson, Qamar, and Thorsen (1988). Washington Division of Geology and Earth Resources Information Circular 85. Washington State Earthquake Hazards. - MSN Encarta Dictionary. Flood. Retrieved on 2006-12-28. Archived 2009-10-31. - "Notes on Historical Earthquakes". British Geological Survey. Retrieved 2008-09-15. - "Fresh alert over Tajik flood threat". BBC News. 2003-08-03. Retrieved 2008-09-15. - USGS: Magnitude 8 and Greater Earthquakes Since 1900 - "Earthquakes with 50,000 or More Deaths". U.S. Geological Survey - Spignesi, Stephen J. (2005). Catastrophe!: The 100 Greatest Disasters of All Time. ISBN 0-8065-2558-4 - Kanamori Hiroo. "The Energy Release in Great Earthquakes". Journal of Geophysical Research. Retrieved 2010-10-10. - USGS. "How Much Bigger?". United States Geological Survey. Retrieved 2010-10-10. - Earthquake Prediction. Ruth Ludwin, U.S. Geological Survey. - Working Group on California Earthquake Probabilities in the San Francisco Bay Region, 2003 to 2032, 2003, http://earthquake.usgs.gov/regional/nca/wg02/index.php. - "Earthquakes". Encyclopedia of World Environmental History 1. Encyclopedia of World Environmental History. 2003. pp. 358–364. - Sturluson, Snorri (1220). Prose Edda. ISBN 1-156-78621-5. - Sellers, Paige (1997-03-03). "Poseidon". Encyclopedia Mythica. Retrieved 2008-09-02. - Van Riper, A. Bowdoin (2002). Science in popular culture: a reference guide. Westport: Greenwood Press. p. 60. ISBN 0-313-31822-0. - JM Appel. A Comparative Seismology. Weber Studies (first publication), Volume 18, Number 2. - Goenjian, Najarian; Pynoos, Steinberg; Manoukian, Tavosian; Fairbanks, AM; Manoukian, G; Tavosian, A; Fairbanks, LA (1994). "Posttraumatic stress disorder in elderly and younger adults after the 1988 earthquake in Armenia". Am J Psychiatry 151 (6): 895–901. PMID 8185000. - Wang, Gao; Shinfuku, Zhang; Zhao, Shen; Zhang, H; Zhao, C; Shen, Y (2000). "Longitudinal Study of Earthquake-Related PTSD in a Randomly Selected Community Sample in North China". Am J Psychiatry 157 (8): 1260–1266. doi:10.1176/appi.ajp.157.8.1260. PMID 10910788. - Goenjian, Steinberg; Najarian, Fairbanks; Tashjian, Pynoos (2000). 
"Prospective Study of Posttraumatic Stress, Anxiety, and Depressive Reactions After Earthquake and Political Violence". Am J Psychiatry 157 (6): 911–895. doi:10.1176/appi.ajp.157.6.911. - Coates SW, Schechter D (2004). Preschoolers' traumatic stress post-9/11: relational and developmental perspectives. Disaster Psychiatry Issue. Psychiatric Clinics of North America, 27(3), 473–489. - Schechter, DS; Coates, SW; First, E (2002). "Observations of acute reactions of young children and their families to the World Trade Center attacks". Journal of ZERO-TO-THREE: National Center for Infants, Toddlers, and Families 22 (3): 9–13. - Deborah R. Coen. The Earthquake Observers: Disaster Science From Lisbon to Richter (University of Chicago Press; 2012) 348 pages; explores both scientific and popular coverage - Donald Hyndman, David Hyndman (2009). "Chapter 3: Earthquakes and their causes". Natural Hazards and Disasters (2nd ed.). Brooks/Cole: Cengage Learning. ISBN 0-495-31667-9. |Wikimedia Commons has media related to: Earthquake| - Earthquake Hazards Program of the U.S. Geological Survey - European-Mediterranean Seismological Centre a real-time earthquake information website - Seismological Society of America - Incorporated Research Institutions for Seismology - Open Directory - Earthquakes - World earthquake map captures every rumble since 1898 —Mother Nature Network (MNN) (29 June 2012)
http://en.wikipedia.org/wiki/Earthquakes
Richard K. Moore

This document continues to evolve, based on continuing research. The latest version is always maintained at this URL: You can click on any graphic in this document to see a larger image.

Global temperatures in perspective

Let's look at the historical temperature record, beginning with the long-term view. For long-term temperatures, ice cores provide the most reliable data. Let's look first at the very long-term record, using ice cores from Vostok, in the Antarctic. Temperatures are shown relative to 1900, which is shown as zero.

Here we see a very regular pattern of long-term temperature cycles. Most of the time the Earth is in an ice age, and about every 125,000 years there is a brief period of warm temperatures, called an interglacial period. Our current interglacial period has lasted a bit longer than most, indicating that the next ice age is somewhat overdue. These long-term cycles are probably related to changes in the eccentricity of the Earth's orbit, which follows a cycle of about 100,000 years. We also see other cycles of more closely-spaced peaks, and these are probably related to other cycles in the Earth's orbit. There is an obliquity cycle of about 41,000 years, and a precession cycle of about 20,000 years, and all of these cycles interfere with one another in complex ways. Here's a tutorial from NASA that discusses the Earth's orbital variations:

Next let's zoom in on the current interglacial period, as seen in Vostok and Greenland, again using ice-core data. Here we see that the Antarctic emerged from the last ice age about 1,000 years earlier than the Arctic. While the Antarctic has oscillated up and down throughout the interglacial period, the Arctic has been on a steady decline towards the next ice age for the past 3,000 years. As of 1900, in comparison to the whole interglacial period, the temperature was 2°C below the maximum in Vostok, and 3°C below the maximum in Greenland. Thus, as of 1900, temperatures were rather cool for the period in both hemispheres, and in Greenland temperatures were close to a minimum. During this recent interglacial period, temperatures in both Vostok and Greenland have oscillated through a range of about 4°C, although the patterns of oscillation are quite different in each case.

In order to see just how different the patterns are, let's look at Greenland and Vostok together for the interglacial period. Vostok is shown with a dashed line. The patterns are very different indeed. While Greenland has been almost always above the 1900 base line, Vostok has been almost always below. And in the period 1500-1900, while Greenland temperatures were relatively stable, within a range of 0.5°C, Vostok went through a radical oscillation of 3°C, from an extreme high to an extreme low. These dramatic differences between the two arctic regions might be related to the Earth's orbital variations (see the NASA tutorial). On the other hand, we may be seeing a regulatory mechanism, based on the fact that the Southern Hemisphere is dominated by oceans, while most of the land mass is in the Northern Hemisphere. Perhaps incoming heat, though retained by the northern continents, leads to evaporation from the oceans and increased snowfall in the Antarctic. Whatever the reasons, the differences between the two arctic regions are striking.
Let's now look at the average of Greenland and Vostok temperatures over the interglacial period:

[Graph: average of Greenland and Vostok temperatures, 8,500 BC to 1900]

Here we see that the average temperature has followed a more stable pattern, with more constrained oscillations, than either of the hemispheres. The graph shows a relatively smooth arc, rising from the last ice age, and descending steadily over the past 4,000 years toward the next ice age. Here's the average again, together with Vostok and Greenland:

[Graph: the average plotted with the Greenland and Vostok records, 8,500 BC to 1900]

Notice how the average is nearly always nestled between the Arctic and Antarctic temperatures, with the Arctic above and the Antarctic below. It does seem that the Antarctic is acting as a regulatory mechanism, keeping the average temperature always moderate, even when the Arctic is experiencing high temperatures. I don't offer this as a theory, but simply as an observation of a possibility.

We can see that the average temperature tells us very little about what is happening in either arctic region. We cannot tell from the average that Arctic temperatures were 3°C higher in 1500 BC, and that glacier melting might have been a danger then. And the average does not tell us that the Antarctic has almost always been cool, with very little danger of ice-cap melting at any time. In general, the average is a very poor indicator of conditions in either arctic region. If we want to understand warming-related issues, such as tundra melting and glacier melting, we must consider the two polar regions separately. If glaciers melt, they do so either because of high Arctic temperatures or high Antarctic temperatures. Whether or not glaciers are likely to melt cannot be determined by global averages.

Next let's take a closer look at Vostok and Greenland since 500 BC:

[Graph: Greenland and Vostok temperatures, 500 BC to 1900]

Again we see how the Antarctic temperatures balance the Arctic, showing almost a mirror image over much of this period. From 1500 to 1800, while the Arctic was experiencing the Little Ice Age, it seems almost as if the Antarctic was getting frantic, going into radical oscillations in an effort to keep the average up near the base line. Beginning about 1800 we have an unusual situation, where both arctic regions begin warming rapidly at the same time, as each follows its own distinct pattern. This of course means that the average will also be rising.

Keep in mind that everything we've been looking at so far has been before human-caused CO2 emissions were at all significant. Thus, just as human-caused emissions began to increase, around 1900, average temperatures were already rising sharply, from natural causes. There has been a strong correlation between rising average temperature and CO2 levels since 1900, arising from a coincidental alignment of three distinct trends. Whether or not rising CO2 levels have accelerated the natural increase in average temperature remains to be seen.

We'll return to this question of CO2 causation, but first let's look at some other records from the Northern Hemisphere, to find out how typical the Greenland record is of its hemisphere. This first record is from Spain, based on the mercury content in a peat bog, as published in Science, 1999, vol. 284. Note that this graph is backwards, with present day on the left.

[Graph: Spanish peat-bog record, present day (left) to 2,000 BC (right)]

This next record is from the Central Alps, based on stalagmite isotopes, as published in Earth and Planetary Science Letters, 2005, vol. 235.
[Graph: Central Alps record, 0 AD to present day]

And for comparison, here's the Greenland record for the most recent 4,000 years:

[Graph: Greenland record, 2,000 BC to 1900]

While the three records are clearly different, they do share certain important characteristics. In each case we see a staggered rise, followed by a staggered decline: a long-term up-and-down cycle over the period. In each case we see that during the past few thousand years, temperatures have been 3°C higher than 1900 temperatures. And in each case we see a steady descent towards the overdue next ice age. The Antarctic, on the other hand, shares none of these characteristics.

In the Northern Hemisphere, based on the shared characteristics we have observed, temperatures would need to rise at least 3°C above 1900 levels before we would need to worry about things like the extinction of polar bears, the melting of the Greenland ice sheet, or runaway methane release. We know this because none of these things has happened in the past 4,000 years, and temperatures have been 3°C higher during that period. However, such a 3°C rise seems very unlikely to happen, given that all three of our Northern Hemisphere samples show a gradual but definite decline toward the overdue next ice age.

Let's now zoom in on the temperature record since 1900, and see what kind of rise has actually occurred. Let's turn to Jim Hansen's latest article, published on realclimate.org, "2009 temperatures by Jim Hansen". The article includes two graphs. Jim Hansen is of course one of the primary spokespersons for the human-caused-CO2-dangerous-warming theory, and there is some reason to believe these graphs show an exaggerated picture as regards warming. Here is one article relevant to that point, and it is typical of other reports I've seen: "Son of Climategate! Scientist says feds manipulated data". Nonetheless, let's accept these graphs as a valid representation of recent average temperature changes, so as to be as fair as possible to the warming alarmists. We'll be using the red line, which is from GISS, and which does not use the various extrapolations that are included in the green line. We'll return to this topic later, but for now suffice it to say that these extrapolations make little sense from a scientific perspective.

The red line shows a temperature rise of 0.7°C from 1900 to the 1998 maximum, a leveling off beginning in 2001, and then a brief but sharp decline starting in 2005. Let's enter that data into our charting program, using values for each 5-year period that represent the center of the oscillations for that period. Here's what we get for 1900-2008.

In order to estimate how these average changes would be reflected in each of the polar regions, let's look at Greenland and Vostok together, from 1000 AD to 1900 (Vostok shown with a dashed line). Here we can see that in 1900 the Antarctic was warming much faster than the Arctic. As usual, the Antarctic was exhibiting the more extreme oscillations. In the most recent warming shown, from 1850 to 1900, the Arctic increased by only 0.5°C while the Antarctic increased by 0.75°C. As regards the average of these two increases, the Antarctic contributed 60%, while the Arctic contributed 40%.
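A minimal sketch of this apportioning, under the assumptions stated in the text (the two polar changes average to the global change, with the Antarctic taking 60% of the combined change and the Arctic 40%); the function and variable names are illustrative only.

def apportion_to_poles(delta_global_c, antarctic_share=0.6):
    """Split a change in the two-pole average into estimated polar changes.

    If the average of the two polar changes equals delta_global_c, their sum is
    2 * delta_global_c; the shares then divide that sum between the poles.
    """
    total = 2.0 * delta_global_c
    delta_vostok = antarctic_share * total               # Antarctic (Vostok) share
    delta_greenland = (1.0 - antarctic_share) * total    # Arctic (Greenland) share
    return delta_vostok, delta_greenland

# Example: the 1850-1900 figures quoted above. A rise of 0.625 C in the two-pole
# average splits into 0.75 C for Vostok and 0.5 C for Greenland.
print(apportion_to_poles(0.625))   # -> (0.75, 0.5)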
If we assume these trends continue, and changes in the global average are reflected in the polar regions, then we get the following estimate for temperature changes in the two polar regions (based on apportioning the GISS changes, 60% to Vostok and 40% to Greenland; Vostok shown with a dashed line). This is only approximate, of course, but it is probably closer to the truth than apportioning the changes equally to the two polar regions.

Let's now look again at Greenland and Vostok together, for the past 4,000 years, with these apportioned GISS changes appended.

[Graph: Greenland and Vostok, 2,000 BC to 2008, extended by GISS data; Vostok shown with a faint line]

We see here that both polar regions have remained below their maximum for this period. The Arctic has been nearly 2.5°C warmer, and the Antarctic about 0.5°C warmer. Perhaps CO2 is accelerating Antarctic warming, or perhaps Antarctica is simply continuing its erratic oscillations. In the Arctic, however, temperatures are definitely following their long-term pattern, with no apparent influence from increased CO2 levels. The recent warming period has given us a new peak in the Greenland record, one in a series of declining peaks. If you hold a ruler up to the screen, you'll see that the four peaks shown, occurring about every 1,000 years, fall in a straight line. If the natural pattern continues, then the recent warming has reached its maximum in the Northern Hemisphere, and we will soon experience about two centuries of rapid cooling, as we continue our descent to the overdue next ice age. The downturn shown in the GISS data beginning in 2005 fits perfectly with this pattern.

Next let's look at the Greenland-Vostok average temperature for the past 4,000 years, extended by the GISS data.

[Graph: Greenland-Vostok average, 2,000 BC to 2008, extended by GISS data]

Here we see a polar-region subset of the famous hockey stick, on the right end of the graph, and we can see how misleading that is as regards the likelihood of dangerous warming. From the average polar temperature, we get the illusion that temperatures are warmer now at the poles than they've been at any time since year 0. But as our previous graph shows, the Arctic has been about 1.5°C warmer during that period, and the Antarctic has been about 0.5°C warmer. And even the average has been nearly 0.5°C warmer, if we look back to 2,000 BC. So in fact we have not been experiencing alarmingly high temperatures recently in either hemisphere.

Dr. Hansen tells us the recent downturn, starting in 2005, is very temporary, and that temperatures will soon start rising again. Perhaps he is right. However, as we shall see, his arguments for this prediction are seriously flawed. What we know for sure is that a downward trend has begun. How far that trend will continue is not yet known. So everything depends on the next few years. If temperatures turn sharply upwards again, then the IPCC may be right, and human-caused CO2 emissions may have taken control of climate. However, if temperatures continue downward, then climate has been following natural patterns all along in the Northern Hemisphere.

The record-setting cold spells and snows in many parts of the Northern Hemisphere this winter seem to be a fairly clear signal that the trend is continuing downwards. If so, then there has been no evidence of any noticeable influence on northern climate from human-caused CO2, and we are now facing an era of rapid cooling. Within two centuries we could expect temperatures in the Northern Hemisphere to be considerably lower than they were in the recent Little Ice Age.
We don't know for sure which way temperatures will go, rapidly up or rapidly down. But I can make this statement: as of this moment, based on the long-term temperature patterns in the Northern Hemisphere, there is no evidence that human-caused CO2 has had any effect on climate. The rise since 1800, as well as the downward dip starting in 2005, are entirely in line with the natural long-term pattern. If temperatures turn sharply upwards in the next few years, that will be the first-ever evidence for human-caused warming in the Northern Hemisphere.

The illusion of dangerous warming arises from a failure to recognize that global averages are a very poor indicator of actual conditions in either hemisphere. If the downward trend continues in the Northern Hemisphere, as the long-term pattern suggests, we are likely to experience about two centuries of rapid cooling in the Northern Hemisphere, as we continue our descent toward the overdue next ice age.

As regards the recent downturn, here are two other records, both of which show an even more dramatic downturn than the one shown in the GISS data:

[Graph: Dr. John Christy, UAH Monthly Means of Lower Troposphere LT5.2, 2004-2008]
[Graph: RSS MSU Monthly Anomaly, 70S to 82.5N (essentially global), 2004-2008]

Why haven't unusually high levels of CO2 significantly affected temperatures in the Northern Hemisphere? One place to look for answers to this question is in the long-term patterns that we see in the temperature record of the past few thousand years, such as the peaks separated by about 1,000 years in the Greenland data, and other more closely spaced patterns that are also visible. Some forces are causing those patterns, and whatever those forces are, they have nothing to do with human-caused CO2 emissions. Perhaps the forces have to do with cycles in solar radiation and solar magnetism, or cosmic radiation, or something we haven't yet identified. Until we understand what those forces are, how they interfere with one another, and how they affect climate, we can't build useful climate models, except on very short time scales.

We can also look for answers in the regulatory mechanisms that exist within the Earth's own climate system. If an increment of warming happens at the surface, for example, then there is more evaporation from the oceans, which cools the ocean and leads to increased precipitation. While an increment of warming may melt glaciers, it may also cause increased snowfall in the arctic regions. To what extent do these balance one another? Do such mechanisms explain why Antarctic temperatures seem always to balance the Arctic, as we have seen in the data?

It is important to keep in mind that CO2 concentrations in the atmosphere are tiny compared to water-vapor concentrations. A small reduction in cloud formation can more than compensate for a large increase in CO2 concentration, as regards the total greenhouse effect. If there is a precipitation response to CO2 warming, that could be very significant, and we would need to understand it quantitatively, by observing it, not by making assumptions and putting them into our models. Vegetation also acts as a regulatory system. Plants and trees gobble up CO2; that is where their substance comes from. Greater CO2 concentration leads to faster growth, taking more CO2 out of the atmosphere. Until we understand quantitatively how these various regulatory systems function and interact, we can't even build useful models on a short time scale.
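The balance question raised above can be made concrete with the standard linearized feedback relation, in which a no-feedback warming dT0 becomes dT = dT0 / (1 - f) once a net feedback factor f is included. This sketch is illustrative only; the relation and the numbers are standard textbook figures, not taken from this document.

def warming_with_feedback(delta_t0_c, feedback_factor):
    """Equilibrium warming given a no-feedback warming and a net feedback factor f.

    f > 0 amplifies the initial warming (positive feedback);
    f < 0 damps it (negative feedback); f must stay below 1 for this linear model to be stable.
    """
    if feedback_factor >= 1.0:
        raise ValueError("f >= 1 would mean runaway warming in this linear model")
    return delta_t0_c / (1.0 - feedback_factor)

# Roughly 1.1 C of no-feedback warming for doubled CO2 (a commonly cited figure):
for f in (0.5, 0.0, -0.5):        # positive, zero, and negative net feedback
    print(f, "->", round(warming_with_feedback(1.1, f), 2), "C")
# 0.5 -> 2.2 C, 0.0 -> 1.1 C, -0.5 -> 0.73 C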
In fact a lot of research is going on, investigating both lines of inquiry: extraterrestrial forces as well as terrestrial regulation mechanisms. However, in the current public-opinion and media climate, any research not related to CO2 causation is dismissed as the activity of contrarians, deniers, and oil-company hacks. Just as the Bishop refused to look through Galileo's telescope, so today we have a whole society that refuses to look at many of the climate studies that are available.

From observation of the patterns in climate history, the evidence indicates that regulatory mechanisms of some kind are operating. It's not so much the lack of a CO2 effect that provides evidence, but rather the constrained, oscillatory pattern in the average polar temperatures over the whole interglacial period. Whenever you see constrained oscillations in a system, that is evidence of a regulatory mechanism, some kind of thermostat, at work.

Direct evidence for climate-regulation mechanisms

I'd like to draw attention to one example of a scientist who has been looking at one aspect of the Earth's regulatory system. Roy Spencer has been conducting research using the satellite systems that are in place for climate studies. Here are his relevant qualifications: Roy W. Spencer is a principal research scientist for the University of Alabama in Huntsville and the U.S. Science Team Leader for the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA's Aqua satellite. He has served as senior scientist for climate studies at NASA's Marshall Space Flight Center in Huntsville, Alabama. He describes his research in a presentation available on YouTube. In the talk he gives a lot of details, which are quite interesting, but one does need to concentrate and listen carefully to keep up with the pace and depth of the presentation. He certainly sounds like someone who knows what he's talking about.

Permit me to summarize the main points of his research. When greenhouse gases cause surface warming, a response occurs, a feedback response, in the form of changes in cloud and precipitation patterns. The CRU-related climate models all assume the feedback response is a positive one: any increment of greenhouse warming will be amplified by knock-on effects in the weather system. This assumption then leads to the predictions of runaway global warming. Spencer set out to see what the feedback response actually is, by observing what happens in the cloud-precipitation system when surface warming is occurring. What he found, by targeting satellite sensors appropriately, is that the feedback response is negative rather than positive. In particular, he found that the formation of storm-related cirrus clouds is inhibited when surface temperatures are high. Cirrus clouds are themselves a powerful greenhouse agent, and this reduction in cirrus cloud formation compensates for the increase in the CO2 greenhouse effect.

This is the kind of research we need to look at if we want to build useful climate models. Certainly Spencer's results need to be confirmed by other researchers before we accept them as fact, but to simply dismiss his work out of hand is very bad for the progress of climate science. Consider what the popular website SourceWatch says about Spencer. We don't find there any reference to rebuttals to his research, but we are told that Spencer is a global warming skeptic who writes columns for a free-market website funded by Exxon.
They also mention that he spoke at a conference organized by the Heartland Institute, which promotes lots of reactionary, free-market principles. They are trying to discredit Spencer's work on irrelevant grounds, what the Greeks referred to as an ad hominem argument. Sort of like: "If he beats his wife, his science must be faulty." And it's true about beating his wife: Spencer does seem to have a pro-industry philosophy that shows little concern for sustainability. That might even be part of his motivation for undertaking his recent research, hoping to give ammunition to pro-industry lobbyists. But that doesn't prove his research is flawed or that his conclusions are invalid. His work should be challenged scientifically, by carrying out independent studies of the feedback process. If the challenges are restricted to irrelevant attacks, that becomes almost an admission that his results, which are threatening to the climate establishment, cannot be refuted. He does not hide his data, or his code, or his sentiments. The same cannot be said for the warming-alarmist camp.

What are we to make of Jim Hansen's prediction that rapid warming will soon resume?

Once again, I refer you to Dr. Hansen's recent article, "2009 temperatures by Jim Hansen". Jim explains his prediction methodology in this paragraph, emphasis added:

"The global record warm year, in the period of near-global instrumental measurements (since the late 1800s), was 2005. Sometimes it is asserted that 1998 was the warmest year. The origin of this confusion is discussed below. There is a high degree of interannual (year-to-year) and decadal variability in both global and hemispheric temperatures. Underlying this variability, however, is a long-term warming trend that has become strong and persistent over the past three decades. The long-term trends are more apparent when temperature is averaged over several years. The 60-month (5-year) and 132-month (11-year) running mean temperatures are shown in Figure 2 for the globe and the hemispheres. The 5-year mean is sufficient to reduce the effect of the El Niño-La Niña cycles of tropical climate. The 11-year mean minimizes the effect of solar variability; the brightness of the sun varies by a measurable amount over the sunspot cycle, which is typically of 10-12 year duration."

As I've emphasized above, Jim is assuming there is a strong and persistent warming trend, which he of course attributes to human-caused CO2 emissions. And then that assumption becomes the justification for the 5- and 11-year running averages. Those running averages then give us phantom temperatures that don't match actual observations. In particular, if a downward decline is beginning, the running averages will tend to hide the decline, as we see in these alarmist graphs from the article, with their exaggerated hockey stick.

It seems we are looking at a classic case of scientists becoming over-attached to their model. In the beginning there was a theory of human-caused global warming, arising from the accidental convergence of three independent trends, combined with the knowledge that CO2 is a greenhouse gas. That theory has now become an assumption among its proponents, and actual observations are being dismissed as confusion because they don't agree with the model. One is reminded again of the Bishop who refused to look through Galileo's telescope, so as not to be confused about the fact that the Earth is the center of the universe. The climate models have definitely strayed into the land of imaginary epicycles.
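For concreteness, here is a minimal sketch of the kind of running mean quoted above (60-month and 132-month means of a monthly series). The input series below is a synthetic placeholder, not GISS data.

import numpy as np

def running_mean(values, window):
    """Centered running mean; the result is shorter than the input by window-1 points."""
    values = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# monthly_anomalies stands in for a monthly temperature-anomaly series (placeholder values).
monthly_anomalies = np.sin(np.linspace(0, 12, 600)) * 0.1 + np.linspace(0, 0.7, 600)

smooth_5yr = running_mean(monthly_anomalies, 60)    # 60-month (5-year) mean
smooth_11yr = running_mean(monthly_anomalies, 132)  # 132-month (11-year) mean

# A centered mean cannot be computed for the most recent window/2 months, which is one
# reason a smoothed curve says little about the very latest years of a record.
print(len(monthly_anomalies), len(smooth_5yr), len(smooth_11yr))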
The assumption of CO2 causation, plus the preoccupation with an abstract global average, creates a warming illusion that has no connection with reality in either hemisphere. This mathematical abstraction, the global average, is characteristic of nowhere. It creates the illusion of a warming crisis, when in fact no evidence for such a crisis exists. In the context of IPCC warnings about glaciers melting, runaway warming, and so on, the global-average hockey stick serves as deceptive and effective propaganda, but not as science.

As with the Ptolemaic model, there is a much simpler explanation for our recent era of warming, at least in the Northern Hemisphere: long-term temperature patterns are continuing, from natural causes, and natural regulatory mechanisms have compensated for the greenhouse effect of human-caused CO2 emissions. There is no strong reason to believe that CO2 has been affecting the Southern Hemisphere either, given the natural record of rapid and extreme oscillations which often go opposite to northern trends. This simpler explanation is based on actual observations, and requires no abstract mathematical epicycles or averages, but it removes CO2 from the center of the climate debate. And just as politically powerful factions in Galileo's day wanted the Earth to remain the center of the universe, powerful factions today want CO2 to remain at the center of the climate debate, and global warming to be seen as a threat.

What is the real agenda of the politically powerful factions who are promoting global-warming alarmism?

One thing we always need to keep in mind is that the people at the top of the power pyramid in our society have access to the very best scientific information. They control dozens, probably hundreds, of high-level think tanks, able to hire the best minds and carrying out all kinds of research we don't hear about. They have access to all the secret military and CIA research, and a great deal of influence over what research is carried out in think tanks, the military, and universities. Just because they might be promoting faulty science for its propaganda value, that doesn't mean they believe it themselves. They undoubtedly know that global cooling is the most likely climate prognosis, and the actions they are promoting are completely in line with such an understanding.

Cap-and-trade, for example, won't reduce carbon emissions. Rather it is a mechanism that allows emissions to continue, while pretending they are declining by means of a phony market model. You know what a phony market model looks like. It looks like Reagan and Thatcher telling us that lower taxes will lead to higher government revenues due to increased business activity. It looks like globalization, telling us that opening up free markets will raise all boats and make us all prosperous. It looks like Wall Street, telling us that mortgage derivatives are a good deal, and we should buy them. And it looks like Wall Street telling us the bailouts will restore the economy, and that the recession is over. In short, it's a con. It's a fake theory about what the consequences of a policy will be, when the real consequences are known from the beginning.

Cap-and-trade has nothing to do with climate. It is part of a scheme to micromanage the allocation of global resources, and to maximize profits from the use of those resources. Think about it. Our powerful factions decide who gets the initial free cap-and-trade credits.
They run the exchange market itself, and can manipulate the market, create derivative products, sell futures, and so on. They can cause deflation or inflation of carbon credits, just as they can cause deflation or inflation of currencies. They decide which corporations get advance insider tips, so they can maximize their emissions while minimizing their offset costs. They decide who gets loans to buy offsets, and at what interest rate. They decide what fraction of petroleum will go to the global North and the global South. They have their man in the regulation agencies that certify the validity of offset projects, such as replacing rainforests with tree plantations, thus decreasing carbon sequestration. And they make money every which way as they carry out this micromanagement.

In the face of global cooling, this profiteering and micromanagement of energy resources becomes particularly significant. Just when more energy is needed to heat our homes, we'll find that the price has gone way up. Oil companies are actually strong supporters of the global-warming bandwagon, which is very ironic, given that they are funding some of the useful contrary research that is going on. Perhaps the oil barons are counting on the fact that we are suspicious of them, and assume we will discount the research they are funding, as most people are in fact doing. And the recent onset of global cooling explains all the urgency to implement the carbon-management regime: they need to get it in place before everyone realizes that warming alarmism is a scam.

And then there are the carbon taxes. Just as with income taxes, you and I will pay our full share for our daily commute and for heating our homes, while the big corporate CO2 emitters will have all kinds of loopholes, and offshore havens, set up for them. Just as Federal Reserve theory hasn't left us with a prosperous Main Street, despite its promises, so theories of carbon trading and taxation won't give us a happy transition to a sustainable world. Instead of building the energy-efficient transport systems we need, for example, they'll sell us biofuels and electric cars, while most of society's overall energy will continue to come from fossil fuels, and the economy continues to deteriorate. The North will continue to operate unsustainably, and the South will pay the price in the form of mass die-offs, which are already ticking along at the rate of six million children a year dying from malnutrition and disease.

While collapse, suffering, and die-offs of marginal populations will be unpleasant for us, they will give our powerful factions a blank canvas on which to construct their new world order, whatever that might be. And we'll be desperate to go along with any scheme that looks like it might put food back on our tables and warm up our houses.

This document continues to evolve, based on continuing research. The latest version is always maintained at this URL:

The author can be reached here: [email protected]
http://rkmdocs.blogspot.com/2010/01/climate-science-observations-vs-models.html
The Van Allen radiation belt is a torus of energetic charged particles (i.e. a plasma) around Earth, trapped by Earth's magnetic field. The Van Allen belts are closely related to the polar aurora, where particles strike the upper atmosphere and fluoresce. The presence of a radiation belt had been theorized prior to the Space Age, and the belt's existence was confirmed by the Explorer I (January 31, 1958) and Explorer III missions, under Doctor James Van Allen. The trapped radiation was first mapped out by Explorer IV and Pioneer III.

Qualitatively, it is very useful to view this belt as consisting of two belts around Earth, the inner radiation belt and the outer radiation belt. The particles are distributed such that the inner belt consists mostly of protons while the outer belt consists mostly of electrons. Within these belts are particles capable of penetrating about 1 g/cm2 of shielding (e.g., 1 millimetre of lead). The term "Van Allen belts" refers specifically to the radiation belts surrounding Earth; however, similar radiation belts have been discovered around other planets. The Sun does not support long-term radiation belts. The Earth's atmosphere limits the belts' particles to regions above 200-1,000 km, while the belts do not extend past 7 Earth radii (RE). The belts are confined to an area which extends about 65° from the celestial equator.

The large outer radiation belt extends from an altitude of about 10,000-65,000 km and has its greatest intensity between 14,500-19,000 km. The outer belt is thought to consist of plasma trapped by the Earth's magnetosphere. The USSR's Luna 1 reported that there were very few particles of high energy within the outer belt. The gyroradii for energetic protons would be large enough to bring them into contact with the Earth's atmosphere. The electrons here have a high flux, and along the outer edge the flux of electrons with kinetic energy E > 40 keV drops to normal interplanetary levels within about 100 km (a decrease by a factor of 1,000). This drop-off is a result of the solar wind. The particle population of the outer belt is varied, containing electrons and various ions. Most of the ions are in the form of energetic protons, but a certain percentage are alpha particles and O+ oxygen ions, similar to those in the ionosphere but much more energetic. This mixture of ions suggests that ring current particles probably come from more than one source. The outer belt is larger and more diffuse than the inner, surrounded by a low-intensity region known as the ring current. Unlike the inner belt, the outer belt's particle population fluctuates widely and is generally weaker in intensity (less than 1 MeV), rising when magnetic storms inject fresh particles from the tail of the magnetosphere, and then falling off again. There is debate as to whether the outer belt was discovered by the US Explorer IV or the USSR Sputnik II/III.

The inner Van Allen belt extends from roughly 1.1 to 3.3 Earth radii, and contains high concentrations of energetic protons with energies exceeding 100 MeV, trapped by the strong (relative to the outer belt) magnetic fields in the region. It is believed that protons of energies exceeding 50 MeV in the lower belts at lower altitudes are the result of the beta decay of cosmic ray neutrons. The source of lower-energy protons is believed to be proton diffusion due to changes in the magnetic field during geomagnetic storms.

Solar cells, integrated circuits, and sensors can be damaged by radiation.
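The text above notes that the gyroradii of energetic protons can be large enough to bring them into contact with the atmosphere. A minimal sketch of that calculation for a relativistic proton, using the standard relation r = p / (qB); the kinetic energy and the local field strength used here are illustrative assumptions, not values taken from the article.

import math

E0_PROTON_MEV = 938.272               # proton rest energy, MeV
MEV_TO_JOULE = 1.602176634e-13
ELEMENTARY_CHARGE = 1.602176634e-19   # C
SPEED_OF_LIGHT = 2.99792458e8         # m/s

def proton_gyroradius_m(kinetic_energy_mev, b_field_tesla):
    """Gyroradius of a proton of the given kinetic energy in a uniform field B."""
    total_energy = kinetic_energy_mev + E0_PROTON_MEV
    momentum_mev_c = math.sqrt(total_energy**2 - E0_PROTON_MEV**2)   # pc, in MeV
    momentum_si = momentum_mev_c * MEV_TO_JOULE / SPEED_OF_LIGHT     # kg m/s
    return momentum_si / (ELEMENTARY_CHARGE * b_field_tesla)

# A 100 MeV proton in a field of ~4e-6 T (roughly the equatorial dipole field near
# 2 Earth radii, an assumed illustrative value) has a gyroradius of a few hundred km.
print(round(proton_gyroradius_m(100.0, 4e-6) / 1000.0), "km")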
In 1962, the Van Allen belts were temporarily amplified by a high-altitude nuclear explosion (the Starfish Prime test) and several satellites ceased operation. Magnetic storms occasionally damage electronic components on spacecraft. Miniaturization and digitization of electronics and logic circuits have made satellites more vulnerable to radiation, as the charge deposited by an incoming ion can be comparable to the circuit's operating charge. Electronics on satellites must be hardened against radiation to operate reliably. The Hubble Space Telescope, among other satellites, often has its sensors turned off when passing through regions of intense radiation. A satellite shielded by 3 mm of aluminum will receive about 2,500 rem (25 Sv) per year.

Proponents of the Apollo Moon landing hoax have argued that space travel to the Moon is impossible because the Van Allen radiation would kill or incapacitate an astronaut who made the trip. Van Allen himself, still alive and living in Iowa City, has dismissed these ideas. In practice, Apollo astronauts who travelled to the Moon spent very little time in the belts and received a harmless dose. Nevertheless, NASA deliberately timed Apollo launches, and used lunar transfer orbits that only skirted the edge of the belt over the equator, to minimise the radiation. Astronauts who visited the Moon probably have a slightly higher risk of cancer during their lifetimes, but still remain unlikely to become ill because of it.

It is generally understood that the Van Allen belts are a result of the collision of Earth's magnetic field with the solar wind. Radiation from the solar wind then becomes trapped within the magnetosphere. The trapped particles are repelled from regions of stronger magnetic field, where field lines converge. This causes the particles to bounce back and forth between the Earth's poles, where the magnetic field increases. The gap between the inner and outer Van Allen belts is caused by low-frequency radio waves that eject any particles that would otherwise accumulate there. Solar outbursts can pump particles into the gap, but they drain again in a matter of days. The radio waves were originally thought to be generated by turbulence in the radiation belts, but recent work by James Green of the NASA Goddard Space Flight Center, comparing maps of lightning activity collected by the Micro Lab 1 spacecraft with data on radio waves in the radiation-belt gap from the IMAGE spacecraft, suggests that they are actually generated by lightning within Earth's atmosphere. The radio waves strike the ionosphere at the right angle to pass through it only at high latitudes, where the lower ends of the gap approach the upper atmosphere.

The Soviets once accused the U.S. of creating the inner belt as a result of nuclear testing in Nevada. The U.S. has, likewise, accused the USSR of creating the outer belt through nuclear testing. It is uncertain how particles from such testing could escape the atmosphere and reach the altitudes of the radiation belts. Likewise, it is unclear why, if this is the case, the belts have not weakened since atmospheric testing was banned by treaty. Thomas Gold has argued that the outer belt is left over from the aurora, while Dr Alex Dessler has argued that the belt is a result of volcanic activity. In another view, the belts could be considered a flow of electric current that is fed by the solar wind. With the protons being positive and the electrons being negative, the area between the belts is sometimes subjected to a current flow, which "drains" away.
The belts are also thought to drive aurorae, lightning and many other electrical effects. The belts are a hazard for artificial satellites and moderately dangerous for human beings, and they are difficult and expensive to shield against.

There is a proposal by the late Robert L. Forward, called HiVolt, which may be a way to drain at least the inner belt to 1% of its natural level within a year. The proposal involves deploying highly electrically charged tethers in orbit. The idea is that the electrons would be deflected by the large electrostatic fields, intersect the atmosphere and harmlessly dissipate. Some scientists, however, theorize that the Van Allen belts provide some additional protection against the solar wind, so that weakening the belts could harm electronics and organisms; and since the belts may influence the Earth's telluric currents, dissipating them could also influence the behavior of Earth's magnetic poles.

References and Links

NASA Discovers New Radiation Belt Around Earth (Live Science, February 28, 2013): A ring of radiation previously unknown to science fleetingly surrounded Earth last year before being virtually annihilated by a powerful interplanetary shock wave, scientists say. NASA's twin Van Allen space probes, which are studying the Earth's radiation belts, made the cosmic find. The surprising discovery, a new, albeit temporary, radiation belt around Earth, reveals how much remains unknown about outer space, even those regions closest to the planet, researchers added. After humanity began exploring space, the first major find made there was the Van Allen radiation belts, zones of magnetically trapped, highly energetic charged particles first discovered in 1958.
http://www.crystalinks.com/vanallenbelt.html
Coastal zones are of great importance to the country. Many families of fishermen depend on coastal fisheries for subsistence, the tourism industry has developed principally along stretches of sandy beaches, and for the local population coastal zones are important centres of leisure activities. Unfortunately, coastal zones are also recipients of land-based pollution such as untreated domestic and industrial sewage, solid waste from dumps close to the shore and agricultural run-off. The mining of sand, though regulated and limited to selected sites, exerts further pressure on the resources of the coastal zones. The competing and often conflicting demands for access to coastal zones by the population and property developers, the need to preserve the marine and coastal ecology for future generations, and the need to promote sustainable development mean that an integrated approach to coastal zone management is urgently required.

What are coastal zones?

Coastal zones are composed of the coastal plain, the continental shelf and the waters that cover this shelf, and include features such as bays, estuaries, lagoons, small islets and reefs. The coastal zone is also the region where marine and continental processes of erosion and deposition interact, giving rise to different types of land forms.

Morphology of shores and beaches around the island

The Continental Shelf

In spite of the limited extent of the Mauritian coast, barely 323 km in length, it comprises a great variety of different features. The presence of an appreciable and shallow continental shelf all round the island has determined in part the nature of the coastal features seen. For example, the shallow shelf has enabled the development of the coral reef, which mainly thrives in shallow and warm waters. The reef then shapes coastal morphology.

Formation of Land forms

Land forms that develop and persist along the coast result from a combination of processes acting upon the sediments and rocks present in the coastal zone. Waves, currents and tides are the most prominent processes affecting coastal morphology. Climate and gravity are also significant agents of change. Waves moving towards a coast are the most obvious of the coastal processes under consideration. As waves enter shallow waters they interact with the sea bottom. As a result, sediment can become temporarily suspended and is available for movement by sea currents. The larger the wave, the deeper the water in which this process can occur and the larger the particles that can be moved. Generally, small waves cause sediment, usually sand, to be transported toward the coast and deposited along a beach. Larger waves, during a storm for example, can remove sediment from the coast and carry it out into deeper water. Waves erode the bedrock along the coast largely by abrasion. Similarly, suspended sediment particles, pebbles and rock debris have an abrasive effect on a surface. Waves which have considerable force can break up bedrock simply by impact.

Longshore currents

Waves usually approach a coast at an acute angle rather than head on, in a direction perpendicular to the coast. When the waves enter shallow waters at an angle, they are bent (refracted). As this happens, the bent waves generate a current that runs along the shore and parallel to it. This current is called a longshore current. The current's speed depends on the power of the waves and their angle of approach with the shore. It can vary from 10 centimetres per second to over one metre per second under stormy conditions.
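The refraction described above can be sketched with standard linear shallow-water wave theory, where the wave speed is c = sqrt(g * h) and the change of direction follows Snell's law, sin(theta2) / c2 = sin(theta1) / c1. This is a generic illustration; the depths and approach angle below are assumed values, not figures from this text.

import math

GRAVITY = 9.81  # m/s^2

def shallow_water_speed(depth_m):
    """Phase speed of a shallow-water wave over the given depth."""
    return math.sqrt(GRAVITY * depth_m)

def refracted_angle_deg(angle_deg, depth_from_m, depth_to_m):
    """Angle (measured from the shore-normal) after a wave moves between two depths."""
    c1 = shallow_water_speed(depth_from_m)
    c2 = shallow_water_speed(depth_to_m)
    sin_theta2 = math.sin(math.radians(angle_deg)) * c2 / c1
    return math.degrees(math.asin(sin_theta2))

# A wave approaching at 40 degrees over 10 m of water bends toward the shore-normal
# as the depth shallows to 2 m (about 17 degrees), which is the bending that feeds
# the longshore current.
print(round(refracted_angle_deg(40.0, 10.0, 2.0), 1), "degrees")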
Waves and longshore currents together transport large quantities of sediment along the shallow zone adjacent to the shore. Longshore currents may move in either direction along the shore depending upon wave direction. As this is determined in part by wind direction, it follows that the wind is the ultimate factor in determining the direction of longshore currents and the transport of sediment along the shoreline. Typically waves lift up the sediment and longshore currents carry it along the coast. In Mauritius, the coral reefs act as barriers and absorb most of the impact of waves. Those overflowing hit the shore almost orthogonally. However, where the coral reef barrier is absent, at river mouths for example, waves can approach the coast at an angle and produce a longshore current. It appears ( Reference 1 P 272) though that a long shore current exists along the western and south western coasts that causes a drift of sediment. It does not appear to be continuous and its strength has not been measured. High frequency waves can cause the accumulation of considerable volumes of water in the lagoon, raising its level by up to 1.5 metres. This excess water then flows out of the lagoon through gaps in the coral barrier reef thus creating a current called an intra-lagoonal current which may reach up to 3.5 knots. This current can transport loose sediment on the lagoonal floor out to the gaps in the barrier reef. Tides are semi-diurnal and have a mean amplitude of 0.8 metres and generally vary between 0.5 to 1.3 metres. The relatively low tidal amplitude means that tidal currents generated are of low magnitude. Hence their effects on coastal morphology is weak. Climate, Winds and Gravity The climatic elements of importance in the development of land forms are rainfall and wind. Rainfall is important because it provides the run off in the form of streams and is an important factor in producing and transporting sediment to the coast. The importance of wind comes about in its relationship to waves. The presence of strong winds is associated with high energy waves. The direction and intensity of winds determines both the direction and energy of the waves. Cyclones ( Tropical storms ) with their associated strong winds and considerable rain water increase in magnitude the usual processes that affect land forms. Gravity also plays an important role in coastal processes. It is indirectly involved in processes associated with wind and waves and it is directly involved through down slope movement of sediment and rock. This role is particularly evident along shorelines cliffs where waves attack the base of the cliffs and undercut the slope. That results, eventually, in the collapse of rocks into the sea or accumulation of debris at the base of the cliffs. Depositional And Erosional Coasts There are two major types of coastal morphology. One type dominated by erosion and the other by deposition. Generally erosional coasts have little or no sediment in contrast to depositional coasts with abundant sediment accumulation. Sea cliffs and wave cut platforms are characteristic of erosional coasts. Wave cut platforms arise when the face of the sea cliff recedes under wave action. In Mauritius erosional coasts occur mainly where coral reefs are absent. This occurs along part of the western coast at Pointe Aux Caves and Montagne Jacquot and along the southern coast. Waves and wave-generated currents significantly influence the development of depositional land forms. 
Waves crashing on the barrier reef lose most of their energy, but enough is left to permit sediment to be lifted off the reef flat, transported to the shore and deposited there. In Mauritius, beaches are the most common depositional land form found along the coastline, and sandy beaches made up of carbonate sediment are the most frequent forms seen.

The Use of coastal lands and lagoons

Considerable pressure is exerted on coastal zone ecosystems and their resources. It is clear that it is a matter of urgency for the country to determine what forms of coastal development are possible and desirable within the constraints imposed by local conditions. Provided, of course, the aim is to promote sustainable human development and not to maximise returns and profits at all costs for a minority of private operators at the expense of the community.

Coastal Land Use

St Antoine Sugar Estate. Source: Ministry of Land, Housing and Town Planning

The first historical use of coastal zones has been for artisanal fisheries. It is still a very important activity which provides a means of livelihood for thousands of families. Sand mining at selected places has been going on for years and is still going on unabated. Close to 800,000 tonnes are extracted yearly from the lagoon and from inland deposits close to the shore. It is government policy to eliminate this activity completely by the year 2001.

For decades, very few Mauritians were wealthy enough to be able to enjoy the sea for recreational purposes. Few bungalows existed around the coast, and before independence (1968) only a couple of hotels were in operation. The environmental stresses on coastal zones were minimal. The increased affluence of the seventies (due to an increase in sugar prices on the world market), a governmental policy of encouraging tourism as from the eighties, and the success of industrialisation as from the mid-eighties have had the greatest incidence on the use of coastal resources. The above-mentioned factors have resulted in:

(1) more wealthy Mauritians leasing beach frontage for the erection of private bungalows.
(2) a host of new hotels built upon prime beach frontage (more than a hundred hotel complexes currently dot the coastline) to accommodate an ever increasing flow of tourists.
(3) a spectacular increase in the number of Mauritians heading for the beaches for recreational purposes.
(4) a haphazard urbanisation of a number of previously sleepy coastal villages, Grand Baie and Flic en Flac being prime examples.
(5) a spectacular increase in the number of leisure boats operating in the lagoon.
(6) a greater demand for the local varieties of fish.

(1) Private Bungalows

Nearly all of the strip of land around the island from the high-water mark to 81.21 metres inland is known as the Pas Geometriques and is the property of the Government. However, it can be leased for a maximum of 30 years, renewable, against a fee that is ridiculously low. Over the past decades, the different governments have been generous in leasing away most of that land either to individuals or to hotel developers, the result of which is that bungalow sites occupy 52 kilometres of coastal land, representing 16% of the total. Though that does not appear to be such a high proportion, it is important to realise that the vast majority of bungalows are built on lands adjacent to sandy beaches. The erection of bungalows tends to preclude the population from gaining access to those beaches, though this is unintentional in most cases.
But on numerous occasion, owners of bungalows have erected fences and walls in order to prevent access by the public. Laws had to be passed to render illegal fencing off access to the beaches. It is clear that any future governments will find it increasingly difficult to justify leasing off further tracts of Pas Geometriques to private individuals when the public is facing rather crowded public beaches with few if any amenities. In fact, public pressure will soon demand that leases be not renewed and the land so freed be transformed into public beaches with proper amenities. A perfectly reasonable demand. (2) Tourism And Coastal land Use The vast majority of tourists come to the island to enjoy the beaches, the sea and the sun. Hence tourists are concentrated on coastal zones. The north, the west, the south west, the east of the island being the principal tourist zones. Prior to the seventies, few tourists visited the island and there were few hotels. Since independence (1968), it has been Government policy to encourage tourism in order to increase foreign currency reserves and provide much needed employment. It is beyond reasonable discussion that the tourism industry has played a pivotal role in the development of the country. It has boosted foreign reserves and provided employment. The influx of foreign tourists has increased the exposure of the public to the outside world and influences. It has spurred the development of service industries that cater for the need of tourists, like restaurants, travel agencies, car hire services, retail shops, bars & discotheques, and so on. The vast stretches of sandy beaches adjacent to unoccupied Pas Geometriques lands, have enabled the first hotel developers to lease from Government, for a small yearly sum of money, hectares of prime coastal land. In the seventies or even in the eighties, this aroused little attention from the public because few could afford to go regularly and frequently to the beach for a day out. The sugar boom of the seventies, industrialisation of the eighties steadily increased the welfare of the population. Once the basic needs more than satisfied, people naturally looked for better recreational facilities. Inevitably they turned to the sea and its beaches. Furthermore, the increased wealth enabled more people to purchase or erect bungalows from leased lands on the Pas Geometriques. Hence competition for access to sandy beaches inevitably arose among the three groups: hotel developers, bungalows owners and the public. Unfortunately, the pressure to build new hotels directly on the beach frontage is relentless because tourism is one of the few growth areas of the local economy and is highly lucrative. Very powerful commercial interests are at play in this sector. More hotels on the beach means less beach frontage for the public. At the present, hotel sites occupy 41.9 kilometres of coastal zones which represent 13% of the total which does not seem to be considerable but again it must be reminded that hotels tend to be built along the most beautiful stretches of sandy beaches, obviously their share of sandy beaches must be much greater than the above percentage figure. The insistence from property developers to have prime beach frontage and the demand from the public for more public beaches with better amenities will inevitably lead to uneasy situations that could lead to confrontation. 
(3) Recreational Purposes & Public Beaches

Coastal zones have become, over the years, important centres of leisure activities for the local population, and they are expected to grow in importance in the years to come. Currently, public beaches total 26.6 kilometres, which represents 8.2% of coastal land use. It is clear that bungalow and hotel sites, with a combined total of 29%, fare much better than the public with a mere 8.2% of the total. Any government, present or future, will have to come up with more public beaches to dissipate mounting public concern for better access to beaches and better amenities on site. A visit to the hugely popular beaches at Flic en Flac (west coast) on Sundays is sufficient to convince anyone of the urgency of the situation: the public beach there is packed with people, cars and buses. Amenities like toilets and water points are few and far between, and thus totally insufficient. The same scenario repeats itself in the north at Mon Choisy and La Cuvette, two very popular public beaches.

(5) Leisure Boats

Tourism has considerably increased the number of pleasure craft operating in the lagoons round Mauritius, whether motor boats for water skiing or parasailing, or the usual sailing craft. The operation of pleasure craft is regulated by law.

Environmental impacts of human activities in coastal zones

Human activities with impacts on coastal ecology and environment can broadly be divided into (a) activities that are situated in coastal zones and (b) activities occurring elsewhere (principally inland).

Category (a) can be subdivided into the following activities:
- The construction and operation of hotel complexes and bungalows
- Sand mining
- Artisanal fisheries
- The recreational use of beaches
- The operation of pleasure boats

Similarly, category (b) can be subdivided into the following activities:
- The disposal of industrial sewage
- The disposal of domestic sewage & storm water
- The disposal of waste water from sugar mills
- The disposal of solid waste
- Agricultural run-off

Environmental Impacts of Hotel & Bungalow Construction and Operation on Coastal Zones

Apart from occupying beaches and rendering access difficult for the public, the construction of hotels directly on the beach head may have significant environmental impacts. For instance, though hotels with more than 75 rooms must have, by law, a water treatment plant on site, it is not known whether all the different hotels' treatment plants are really adequate to cope with the load, or whether some seepage does occur at times, which could have adverse effects on the lagoon. Furthermore, hotels construct piers or jetties that can severely interfere with the longshore movement of sand, creating sand erosion further down the coast, and can interfere greatly with the free passage of the public up and down the coast.

Sand erosion caused by the construction of piers and by sand mining is beginning to be a significant problem, though no studies are publicly available on the matter. The seriousness of the problem can be gauged by the fact that the Government has, over the past years, built sea defences at certain places round the coast, like Grand Baie, Cap Malheureux and Flic en Flac. The defences consist of placing at selected places gabions, which are wire-netting cages one metre cube in volume filled with rocks. This method is thought to hold the sand in place and permit local accumulation of sand.
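The land-use figures quoted in this and the preceding sections are mutually consistent with the 323 km coastline length given earlier; a quick arithmetic check, using only numbers already quoted in the text:

COASTLINE_KM = 323.0

uses_km = {
    "bungalow sites": 52.0,
    "hotel sites": 41.9,
    "public beaches": 26.6,
}

for name, km in uses_km.items():
    share = 100.0 * km / COASTLINE_KM
    print("%-15s %5.1f km  %4.1f%%" % (name, km, share))

# bungalow sites ~16.1%, hotel sites ~13.0%, public beaches ~8.2%;
# bungalows plus hotels together come to roughly 29% of the coastline.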
The clearing of sea weeds, corals and other rocks in the lagoon close to the shore has regularly been carried out to create suitable bathing areas or ski lanes. Though in some cases the clearing is fairly innocuous, on a couple of occasions it cannot be said to be so. For example, at Balaclava (west coast of Mauritius), where a marine park has just been set up, a couple of hotels obtained permission to create water-skiing lanes by clearing corals over a long stretch of the lagoon. Notably, the Victoria Hotel, in 1995, cleared corals for a water-skiing lane 750 metres long and 30 metres wide, and further proceeded, in 1996, to clear another site for the creation of a bathing area, and this amidst much opposition from local fishermen who feared for their livelihoods. Needless to say, the hotel had the necessary permits and Environmental Impact Assessment reports to back up this operation.

In 1993, the Touessrok Hotel at Trou d'Eau Douce (east coast) carried out very important works in the lagoon with the necessary Environmental Impact Assessment report. The government of that time informed the management that "the ministry has no objection to the implementation of the proposed works in relation to (i) the dredging of the inner cove and of the two channels (ii) dredged material treatment and handling onshore (iii) beach recharging and widening (iv) erection of a groyne and (v) the construction of an artificial breakwater to protect the cove beach, provided that the following conditions are observed" (Le Week End, 20th of June 1993). Though the local fishermen went to court to obtain an injunction, it does not appear that they managed to influence the course of things. Unfortunately, very little is at present known about the impacts of hotel development on the coastal and lagoon ecology.

Bungalows built along the coastline have never been connected to the sewage system, and disposal of sewage is done exclusively through absorption pits or cesspits. It is possible that nutrient enrichment of the lagoon occurs through seepage of sewage into the lagoon, but that is at present purely speculative. At several places, bungalows and even hotels have been built on wetlands or marshy grounds, for example at Flic en Flac or Grand Baie. This has resulted in a drastic reduction of wetlands around the coast; hence wetlands are no longer there to act as natural filtering systems for either sewage or storm water. The water table at Grand Baie has risen significantly, for example, and is now only a metre deep. Flooding and pollution by sewage are now a reality in parts of Grand Baie. At Flic en Flac also, construction of hotels and bungalows has been going on for years on marshy lands, and now certain parts of Flic en Flac are prone to flooding after heavy rains.

Environmental Impact of Recreational Use of Beaches

One of the main impacts of the public's use of beaches on the environment is the fact that a fair proportion of the public fails to use the dust bins provided on the beaches for the proper disposal of solid waste. Hence, at times and on certain beaches, solid waste accumulates on site. This waste, apart from being unsightly and a source of bad smells attracting rodents, can drift into the lagoon waters, polluting them. Furthermore, at certain places, the lagoon is used by some people as a huge and uncontrolled dumping ground.
Regularly, non governmental organisations working in the field of the environment and professional divers team up to remove from the lagoon bottom large quantities of solid waste which found its way there. For example on the 7th of June 1997, during the "World Environment Day" divers removed from the lagoon of Blue Bay ( South of the island ) car and truck tyres, old nets, discarded plastic bags and bottles, broken plates and even radio sets! Anchor damage by pleasure crafts or fishing boats is thought to be a significant factor in the destruction of corals. Coastal zones are undoubtedly under heavy use, and the pressure will not cease in the foreseeable future, on the contrary it can only increase significantly with a greater number of tourists visiting the island every year, with more of the population going to the sea side for leisure activities. It is indeed, high time that a comprehensive policy of coastal management be set up by government before irremediable damage is inflicted upon coastal zones. Already, there are signs that all is not well, a decrease in the catch of fishes over the years, nutrient enrichment of the lagoon due to sewage, sand erosion, industrial pollution are but a few of the problems that have to be addressed fully. As a fair share of the stresses on coastal zones originate inland, it is clear that coastal zone management cannot be seen in isolation from what happens elsewhere, making proper management a challenging and interesting task of supreme importance.
http://library.thinkquest.org/C0110237/Geography__An_overview/Geography/Marine_Resources/Coastal_Zones/coastal_zones.html
Topography, geology and physical properties of space

The universe is made of about 70% vacuum energy, 26% exotic dark matter, 4% ordinary matter (e.g. planets, stars, asteroids) and 0.005% radiation (light, cosmic and gamma rays, X-rays). The existence and properties of empty space can be determined by experiment. Most of the physical properties of space are paradoxical: space is supposed to be empty, yet it is not an absolute vacuum, containing sizeable amounts of matter, energy and radiation; space is an unwelcoming environment, but it offers endless possibilities for life beyond our world.

"Nothing" is a philosophical concept, accessible to logical analysis. Philosophers have been trying to define it since ancient times (Aristotle). We have come to understand that truly empty space cannot exist (that would mean that no matter would be present and that gravitational and electromagnetic fields would be exactly zero). Still, the concept needs further clarification for us to fully understand it. The nineteenth-century Scottish physicist James Clerk Maxwell gave the following definition of vacuum: "The vacuum is that which is left in a vessel after we have removed everything from it." This definition still leaves us with an unanswered question: what can't we remove, and how do we know we have removed "everything we can remove"?

The distinction between matter and void had to be abandoned when it was shown that particles can spontaneously appear or disappear in the void without the presence of any particles causing a powerful interaction. Three particles, a proton (p), an antiproton (p̄) and a pion (π), can form out of nothing and then disappear in the void. According to the theory of fields this type of event occurs all the time. The vacuum is far from being "empty". It contains an unlimited number of particles that are constantly formed and destroyed.

In physics, "something" is quantified by energy. An enclosed space is empty in a physical sense if it has released all the energy it can. According to Einstein's formula E=mc2, air molecules (with mass m) stand for an amount of energy, and that energy is removed from an enclosed space when the air is pumped out. Any system left alone will release all the energy that the surroundings can absorb, assuming a state of minimum energy (e.g. a pendulum will eventually slow to a stop and hang motionless whatever its initial state; it gives off its energy through friction).

In some cases, the physical definition of emptiness may lead to surprising results. For example, a physical system represented by a glass filled with water at 0° Celsius (32° Fahrenheit) will surrender energy in the form of heat when the water passes from the liquid state to the solid (frozen) state. When it melts, it absorbs energy (the heat of melting), which means that the water in its lowest state of energy is solid. According to Einstein's formula E=mc2, taking the ice out of the system would further lower its energy. Is there something that we cannot take away from any system without raising its energy? Fully removing matter and energy from a system is, at the present time, impossible.

Since a pure vacuum contains no matter, temperature does not exist there, as temperature is a measure of the kinetic energy of the particles in a substance. Space is not a perfect vacuum, and temperatures in space vary from just above 0 K (-459.66 °F) to millions of degrees at the centers of stars.

Gravity gives shape to apparently featureless space.
The hills and valleys it creates will be as important to space settlers as geographical features are to terrestrial settlers. For a relatively small body to escape from the surface of a massive body (a planet or moon), it must be lifted through a gravitational well (the more massive the body, the deeper the well). The energy needed to climb out of the Earth's gravitational well is about 22 times greater than the energy needed to escape the Moon's (a short calculation below illustrates this ratio). This will be of importance to space colonists: in deciding where to get their resources, they will have to take into account that matter can be lifted far more easily from the Moon than from the Earth. Lagrangian libration points can also be found in the Earth-Moon system; these are points where the gravitational forces of the two bodies effectively balance, so an object placed there can hold its position relative to both. The primary criteria for choosing the site of the colony are ease of access to resources, communication and low transportation costs. Satisfactory balances among them can be achieved by efficiently exploiting the topography of space.

One of the most important sources of energy in space is solar radiation, whose intensity decreases as the square of the distance from the sun. The sun also emits a stream of charged particles (mostly protons), the solar wind. Another, more constant source of radiation is cosmic radiation, which includes heavier particles (e.g. iron nuclei) arriving from outside the solar system. Radiation on the surface of a planet consists of the solar wind and cosmic radiation that reaches the surface, plus neutrons and gamma-ray photons released when space radiation particles interact with the planet's atmosphere and crust. Outside Earth's atmosphere, the energy flow from the sun is steadier and more intense: about 1390 W of sunlight pass through every square meter of space directly exposed to the sun, while the maximum amount of light reaching the Earth's surface is about 745 W/m². Averaged over time, a square meter of space receives about 7.5 times more energy from the sun than an average square meter on Earth, because of the day-night alternation on Earth and because sunlight does not fall perpendicularly on the surface of the planet. The intensity and wavelength range of unfiltered sunlight is deadly for humans, but it is, at the same time, one of the most valuable energy sources in space.

The Earth's surface is protected from the solar wind and cosmic radiation by the atmosphere and the magnetic field. The atmosphere absorbs both space radiation and the gamma rays that are produced in the Earth's crust. The magnetic field diverts most charged particles toward the poles, creating the auroras. Mars has little atmosphere and no global magnetic field, so the flow of charged particles anywhere on its surface greatly exceeds that on Earth. There is enough atmosphere to create a neutron field (from the interaction of charged particles with the atmosphere and with the crust), but it isn't thick enough to absorb the neutrons before they reach the surface. Some neutrons are reflected back toward the surface after interacting with the planet's crust.

Planets, moons and asteroids make up the main material sources in space. Comets could also be considered material sources, but they are hard to exploit because of their high velocities. Accessibility to these sources is determined by distance and the depth of the gravitational well. The Earth would be an important source of material for a colony situated in its vicinity, especially of hydrogen, nitrogen and carbon, which are not found in sufficient amounts anywhere near our planet.
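As a rough check of the "22 times" figure mentioned above, here is an illustrative sketch (not from the original essay); the masses and radii are approximate reference values, and the specific escape energy is taken as GM/r:

    # Rough comparison of the energy (per kilogram) needed to escape the
    # gravitational wells of the Earth and the Moon.
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

    bodies = {
        # name: (mass in kg, mean radius in m) -- approximate values
        "Earth": (5.972e24, 6.371e6),
        "Moon":  (7.342e22, 1.737e6),
    }

    energies = {}
    for name, (mass, radius) in bodies.items():
        energies[name] = G * mass / radius   # J per kg lifted to escape
        print(f"{name}: {energies[name]:.3e} J/kg to escape")

    print(f"Ratio Earth/Moon: {energies['Earth'] / energies['Moon']:.1f}")

The ratio of specific escape energies comes out close to 22, which is why lifting material from the Moon is so much cheaper in energy terms than lifting it from the Earth.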
The moons of planets usually have shallow gravitational wells, so they offer an attractive source of materials. The Moon can be a good source of aluminum, iron, titanium, oxygen and silicon. These resources, supplemented with small amounts of a few elements from Earth, can supply a colony with all the materials it needs to sustain life. Asteroids also have shallow gravitational wells and move in regular orbits. They may contain sizeable amounts of hydrogen, carbon and nitrogen, as well as other minerals and frozen water.

Recent studies revealed that the Universe is expanding at an increasing rate. This discovery seems to confirm Einstein's idea of a vacuum energy (now called dark energy), which is forcing the expansion of the Universe. After studying this dark energy, professors Andrei Linde and Renata Kallosh of Stanford University say that the Universe will stop expanding in 10 to 20 billion years, and that the influence of dark energy will then become neutral and finally negative, causing a collapse.

In the 1930s, the English physicist Paul Dirac proposed that the vacuum contains electromagnetic fluctuations called "zero point energy", carried by "virtual photons" which appear out of nothing; the energy to create them is borrowed from the vacuum until the virtual photon disappears. According to this theory, there is an infinite number of possible photon modes, so the total zero point energy in the vacuum is infinite. It was once suggested that there is a substance called "ether", present everywhere, even in "empty" space. Energy residing in the ether would be the source of the random appearance and disappearance of particles in the vacuum, but there is nothing that permits the growth of large objects: when the energy increases, the number of participating particles increases, but they cannot be joined together, because they disappear as randomly as they appear. Because an object is uniformly bombarded under most circumstances, the effects of zero point energy in space are not obvious.

Andrei Dan Costea, Flaviu Valentin
http://www.nss.org/settlement/nasa/Contest/Results/2004/winner/html%20only/Chapter%20I.htm
13
11
Fire Science: Every year, fires and other emergencies take thousands of lives and destroy property worth billions of dollars. Fire fighters help protect the public against these dangers by responding to fires and a variety of other emergencies. In addition to putting out fires, they are frequently the first emergency personnel at the scene of a traffic accident or medical emergency and may be called upon to treat injuries or perform other vital functions.

Different Fire Science Positions

During duty hours, fire fighters must be prepared to respond immediately to a fire or other emergency. Fighting fires is dangerous and complex, and therefore requires organization and teamwork. At every emergency scene, fire fighters perform specific duties assigned by a superior officer. At fires, they connect hose lines to hydrants and operate a pump to send water to high-pressure hoses. Some carry hoses, climb ladders, and enter burning buildings—using systematic and careful procedures—to put out fires. At times, they may need to use tools, such as an ax, to make their way through doors, walls, and debris, sometimes with the aid of information about a building's floor plan. Some find and rescue occupants who are unable to leave the building safely without assistance. They also provide emergency medical attention, ventilate smoke-filled areas, and attempt to salvage the contents of buildings. Fire fighters' duties may change several times while the company is in action. Sometimes they remain at the site of a disaster for days at a time, rescuing trapped survivors and assisting with medical treatment.

Fire fighters work in a variety of settings, including metropolitan areas, rural areas with grasslands and forests, airports, chemical plants and other industrial sites. They have also assumed a range of responsibilities, including emergency medical services. In fact, most calls to which fire fighters respond involve medical emergencies. In addition, some fire fighters work in hazardous materials units that are specially trained for the control, prevention, and cleanup of hazardous materials, such as oil spills or accidents involving the transport of chemicals.

Workers specializing in fighting forest fires use different methods and equipment than other fire fighters. In national forests and parks, forest fire inspectors and prevention specialists spot fires from watchtowers and report them to headquarters by telephone or radio. Forest rangers also patrol to ensure that travelers and campers comply with fire regulations. When fires break out, crews of fire fighters are brought in to suppress the blaze with heavy equipment and water hoses. Fighting forest fires, like fighting urban fires, is rigorous work. One of the most effective means of fighting a forest fire is creating fire lines—cutting down trees and digging out grass and all other combustible vegetation in the path of the fire—to deprive it of fuel. Elite fire fighters called smoke jumpers parachute from airplanes to reach otherwise inaccessible areas. This tactic, however, can be extremely hazardous. When they aren't responding to fires and other emergencies, fire fighters clean and maintain equipment, study fire science and fire fighting techniques, conduct practice drills and fire inspections, and participate in physical fitness activities.
They also prepare written reports on fire incidents and review fire science literature to stay informed about technological developments and changing administrative practices and policies. Most fire departments have a fire prevention division, usually headed by a fire marshal and staffed by fire inspectors. Workers in this division conduct inspections of structures to prevent fires by ensuring compliance with fire codes. These inspectors also work with developers and planners to check and approve plans for new buildings and inspect buildings under construction. Some fire fighters become fire investigators, who determine the causes of fires. They collect evidence, interview witnesses, and prepare reports on fires in cases where the cause may be arson or criminal negligence. They often are asked to testify in court. In some cities, these investigators work in police departments, and some are employed by insurance companies. The Fire Science Work environment Fire fighters spend much of their time at fire stations, which are usually similar to dormitories. When an alarm sounds, fire fighters respond, regardless of the weather or hour. Fire fighting involves the risk of death or injury from floors caving in, walls toppling, traffic accidents, and exposure to flames and smoke. Fire fighters also may come into contact with poisonous, flammable, or explosive gases and chemicals and radioactive materials, which may have immediate or long-term effects on their health. For these reasons, they must wear protective gear that can be very heavy and hot. Work hours of fire fighters are longer and more varied than the hours of most other workers. Many fire fighters work more than 50 hours a week, and sometimes they may work longer. In some agencies, fire fighters are on duty for 24 hours, then off for 48 hours, and receive an extra day off at intervals. In others, they work a day shift of 10 hours for 3 or 4 days, a night shift of 14 hours for 3 or 4 nights, have 3 or 4 days off, and then repeat the cycle. In addition, fire fighters often work extra hours at fires and other emergencies and are regularly assigned to work on holidays. Fire lieutenants and fire captains often work the same hours as the fire fighters they supervise. Training & Qualifications Applicants for firefighting jobs are usually required to have at least a high school diploma, but candidates with some education after high school are increasingly preferred. Most municipal jobs require passing written and physical tests. All fire fighters receive extensive training after being hired. Most fire fighters have a high school diploma; however, the completion of community college courses, or in some cases, an associate degree, in fire science may improve an applicant’s chances for a job. A number of colleges and universities offer courses leading to 2- or 4-year degrees in fire engineering or fire science. In recent years, an increasing proportion of new fire fighters have had some education after high school. As a rule, entry-level workers in large fire departments are trained for several weeks at the department’s training center or academy. Through classroom instruction and practical training, the recruits study fire fighting techniques, fire prevention, hazardous materials control, local building codes, and emergency medical procedures, including first aid and cardiopulmonary resuscitation (CPR). They also learn how to use axes, chain saws, fire extinguishers, ladders, and other fire fighting and rescue equipment. 
After successfully completing this training, the recruits are assigned to a fire company, where they undergo a period of probation. Many fire departments have accredited apprenticeship programs lasting up to 4 years. These programs combine formal instruction with on-the-job training under the supervision of experienced fire fighters. Almost all departments require fire fighters to be certified as emergency medical technicians. Although most fire departments require the lowest level of certification, Emergency Medical Technician-Basic (EMT-Basic), larger departments in major metropolitan areas increasingly require paramedic certification. Some departments include this training in the fire academy, whereas others prefer that recruits earn EMT certification on their own but will give them up to 1 year to do it. In addition to participating in training programs conducted by local fire departments, some fire fighters attend training sessions sponsored by the U.S. National Fire Academy. These training sessions cover topics such as executive development, anti-arson techniques, disaster preparedness, hazardous materials control, and public fire safety and education. Some States also have either voluntary or mandatory fire fighter training or certification programs. Many fire departments offer fire fighters incentives such as tuition reimbursement or higher pay for completing advanced training. Applicants for municipal fire fighting jobs usually must pass a written exam; tests of strength, physical stamina, coordination, and agility; and a medical examination that includes a drug screening. Workers may be monitored on a random basis for drug use after accepting employment. Examinations are generally open to people who are at least 18 years of age and have a high school education or its equivalent. Those who receive the highest scores in all phases of testing have the best chances of being hired. Among the personal qualities fire fighters need are mental alertness, self-discipline, courage, mechanical aptitude, endurance, strength, and a sense of public service. Initiative and good judgment also are extremely important because fire fighters make quick decisions in emergencies. Members of a crew live and work closely together under conditions of stress and danger for extended periods, so they must be dependable and able to get along well with others. Leadership qualities are necessary for officers, who must establish and maintain discipline and efficiency, as well as direct the activities of the fire fighters in their companies. Most experienced fire fighters continue studying to improve their job performance and prepare for promotion examinations. To progress to higher level positions, they acquire expertise in advanced firefighting equipment and techniques, building construction, emergency medical technology, writing, public speaking, management and budgeting procedures, and public relations. Opportunities for promotion depend upon the results of written examinations, as well as job performance, interviews, and seniority. Hands-on tests that simulate real-world job situations are also used by some fire departments. Usually, fire fighters are first promoted to engineer, then lieutenant, captain, battalion chief, assistant chief, deputy chief, and, finally, chief. For promotion to positions higher than battalion chief, many fire departments now require a bachelor’s degree, preferably in fire science, public administration, or a related field. 
An associate degree is required for executive fire officer certification from the National Fire Academy. Median annual earnings of fire fighters were $41,190 in May 2006. The middle 50 percent earned between $29,550 and $54,120. The lowest 10 percent earned less than $20,660, and the highest 10 percent earned more than $66,140. Median annual earnings were $41,600 in local government, $41,070 in the Federal Government, and $37,000 in State governments. Median annual earnings of first-line supervisors/managers of fire fighting and prevention workers were $62,900 in May 2006. The middle 50 percent earned between $50,180 and $79,060. The lowest 10 percent earned less than $36,820, and the highest 10 percent earned more than $97,820. First-line supervisors/managers of fire fighting and prevention workers employed in local government earned a median of about $64,070 a year. Median annual earnings of fire inspectors and investigators were $48,050 in May 2006. The middle 50 percent earned between $36,960 and $61,160 a year. The lowest 10 percent earned less than $29,840, and the highest 10 percent earned more than $74,930. Fire inspectors and investigators employed in local government earned a median of about $49,690 a year. According to the International City-County Management Association, average salaries in 2006 for sworn full-time positions were as follows: Fire fighters who average more than a certain number of work hours per week are required to be paid overtime. The hour’s threshold is determined by the department. Fire fighters often earn overtime for working extra shifts to maintain minimum staffing levels or during special emergencies. Fire fighters receive benefits that usually include medical and liability insurance, vacation and sick leave, and some paid holidays. Almost all fire departments provide protective clothing (helmets, boots, and coats) and breathing apparatus, and many also provide dress uniforms. Fire fighters generally are covered by pension plans, often providing retirement at half pay after 25 years of service or if the individual is disabled in the line of duty. In 2006, total paid employment in firefighting occupations was about 361,000. Fire fighters held about 293,000 jobs, first-line supervisors/managers of fire fighting and prevention workers held about 52,000, and fire inspectors and investigators held about 14,000 jobs. These employment figures include only paid career fire fighters—they do not cover volunteer fire fighters, who perform the same duties and may constitute the majority of fire fighters in a residential area. According to the U.S. Fire Administration, about 71 percent of fire companies were staffed entirely by volunteer fire fighters in 2005. About 9 out of 10 fire fighting workers were employed by local government. Some large cities have thousands of career fire fighters, while many small towns have only a few. Most of the remainder worked in fire departments on Federal and State installations, including airports. Private fire fighting companies employ a small number of fire fighters. In response to the expanding role of fire fighters, some municipalities have combined fire prevention, public fire education, safety, and emergency medical services into a single organization commonly referred to as a public safety organization. Some local and regional fire departments are being consolidated into countywide establishments to reduce administrative staffs, cut costs, and establish consistent training standards and work procedures. 
Although employment is expected to grow as fast as the average for all occupations, candidates for these positions are expected to face keen competition because the positions are highly attractive and sought after. Employment of workers in fire fighting occupations is expected to grow by 12 percent over the 2006-2016 decade, which is as fast as the average for all occupations. Most job growth will stem from volunteer fire fighting positions being converted to paid positions. In recent years, it has become more difficult for volunteer fire departments to recruit and retain volunteers. This may be the result of the considerable amount of training and time commitment required. Furthermore, a trend towards more people living in and around cities has increased the demand for fire fighters. When areas develop and become more densely populated, emergencies and fires affect more buildings and more people and therefore require more fire fighters.

Prospective fire fighters are expected to face keen competition for available job openings. Many people are attracted to fire fighting because it is challenging and provides the opportunity to perform an essential public service, a high school education is usually sufficient for entry, and a pension is usually guaranteed after 25 years of work. Consequently, the number of qualified applicants in most areas far exceeds the number of job openings, even though the written examination and physical requirements eliminate many applicants. This situation is expected to persist in coming years. Applicants with the best chances are those who are physically fit and score the highest on physical conditioning and mechanical aptitude exams. Those who have completed some fire fighter education at a community college and have EMT or paramedic certification will have an additional advantage. Like fire fighters, emergency medical technicians, paramedics, and police and detectives respond to emergencies and save lives.

Resources and Additional Links

Information about a career as a fire fighter may be obtained from local fire departments and from either of the following organizations: Information about professional qualifications and a list of colleges and universities offering 2- or 4-year degree programs in fire science or fire prevention may be obtained from:

A special thanks to the U.S. Bureau of Labor Statistics. Source: United States, U.S. Bureau of Labor Statistics, Occupational Outlook Handbook, 2008-2009 Edition, http://www.bls.gov/OCO/.
http://www.directoryofschools.com/Salary-Guides/Fire-Science.htm
13
14
Therefore, the slope of our line is 2. This means that for each positive change of 1 unit in the x variable, the y variable will increase 2 units. Remember, you can choose any two points on the line to calculate the slope. Using the graph above, calculate the slope using the Origin (0, 0) and point R (2, 4). If the Origin (0, 0) is selected to be (x1, y1) and R (2, 4) to be the point (x2, y2), our resulting slope comes out to be: slope = (y2 - y1) / (x2 - x1) = (4 - 0) / (2 - 0) = 4/2 = 2. What is the slope of the line given in the graph?
- Step One: Identify two points on the line. Let's calculate the slope of the line in the graph above using the points A (1, 2) and B (3, 6).
- Step Two: Select one to be (x1, y1) and the other to be (x2, y2). Let's take A (1, 2) to be (x1, y1) and the point B (3, 6) to be the point (x2, y2).
- Step Three: Use the equation to calculate the slope: slope = (y2 - y1) / (x2 - x1) = (6 - 2) / (3 - 1) = 4/2 = 2. Again the slope is 2.
You will find that regardless of which two points you choose on a given straight line to calculate the slope, your answer will always be the same. The slope of a given line is a constant.
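The procedure in these steps translates directly into a few lines of code. Here is a small sketch (the function name and points are illustrative, not part of the original unit):

    def slope(p1, p2):
        """Return the slope of the line through points p1 and p2."""
        (x1, y1), (x2, y2) = p1, p2
        return (y2 - y1) / (x2 - x1)

    # The examples worked above: both pairs of points give the same slope.
    print(slope((0, 0), (2, 4)))   # Origin and R  -> 2.0
    print(slope((1, 2), (3, 6)))   # A and B       -> 2.0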
http://cstl.syr.edu/FIPSE/GraphA/Unit4/Unit4Ex1.html
13
16
Posted: July 22, 2008 Two complementary studies based on data from instruments aboard NASA's Mars Reconnaissance Orbiter point to a watery early Mars. One study, based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and the High Resolution Imaging Science Experiment (HiRISE) and published in the current issue of Nature, shows that extensive regions of the ancient highlands of Mars, which comprise about half the planet, contain clay-like minerals called phyllosilicates, which can form only in the presence of water. These minerals preserve a record of the interaction between water and rock dating back to the very first billion years of Martian history. After the deposition of these water-loving minerals, a drier period followed, dominated by volcanic lavas which buried the clays. But during the period of Heavy Bombardment 3.8-4.6 billion years ago, in which the inner Solar System endured an intense assault from asteroids and comets, and also in later years when the impact flux was more sporadic, these clay minerals were exposed in thousands of impact craters dotted across the Martian surface. Craters act like windows into the past and allow planetary geologists to look at the different layers of minerals and rocks that have built up over time.

This three-dimensional map of a trough in the Nili Fossae region of Mars shows the prolific nature of the phyllosilicate minerals (indicated by magenta and blue hues), largely concentrated on the slopes of steep cliffs and along canyon walls. The abundance of phyllosilicates shows that water played a sizable role in changing the minerals of a variety of terrains in the planet's early history. Image: NASA/JPL/JHUAPL/University of Arizona/Brown University.

It is well known to scientists that organic material can interact strongly with clays, and clays are therefore thought to have played a significant role in the emergence of life on the Earth. But owing to the destructive nature of plate tectonics and erosion on our home planet, the earliest clues to this potential interaction have mostly been destroyed, making the Martian phyllosilicates a unique record of liquid-water environments which may have been suitable for life in the early Solar System. Indeed, two favourable sets of conditions – the mild chemistry that protects organic matter from destruction, and the intensity of erosion and activity of liquid water that would have allowed organics to accumulate – work in favour of discovering potential organic 'cemeteries' in the future.

"The minerals present in Mars' ancient crust show a variety of wet environments," says John Mustard, a CRISM team member from Brown University. "In most locations the rocks are lightly altered by liquid water, but in a few locations they have been so altered that a great deal of water must have flushed through the rocks and soil. This is really exciting because we're finding dozens of sites where future missions can land to understand if Mars was ever habitable and if so, to look for signs of past life."

A colour-enhanced image of a river delta in the now empty lake bed of Jezero Crater. Ancient rivers are thought to have ferried clay-like minerals (shown here in green) into the lake, forming the delta. Clays are ideal for trapping and preserving organic matter, making this location a good place to look for signs of ancient life. Image: NASA/JPL/JHUAPL/MSSS/Brown University.

Another study, published in last month's issue of Nature Geoscience, supports the idea of wet conditions persisting for thousands to millions of years on Mars.
This conclusion comes from the observation that a system of river channels eroded the clay minerals out of the highlands and concentrated them in a delta where the river emptied into the 40 kilometre diameter Jezero crater lake. "The distribution of clays inside the ancient lakebed shows that …"

The team also identified three principal classes of water-related minerals dating to the earliest epoch of Martian history, the Noachian Period: aluminum-phyllosilicates, hydrated silica or opal, and the more common and widespread iron- and magnesium-phyllosilicates. The variations in the minerals across the Martian surface suggest that different processes, or different types of watery environments – such as standing water or flowing water – dominated at different times.

The presence and state of water on the surface of Mars has always been a subject of intense debate because of the direct astrobiological implications, and the recent research that brings clays into the equation makes these deposits very attractive locations for future exploration. Indeed, the results from both studies will be used to compile a list of sites where future missions, such as the Mars Science Laboratory or ExoMars, could land and look for the organic chemistry that could finally determine whether life has ever existed on Mars.
http://astronomynow.com/080722OrganiccemeteriescoulddominateancientMars.html
13
41
A value is one of the fundamental things — like a letter or a number — that a program manipulates. The values we have seen so far are 4 (the result when we added 2 + 2), and "Hello, World!". These values are classified into different classes, or data types: 4 is an integer, and "Hello, World!" is a string, so-called because it contains a string of letters. You (and the interpreter) can identify strings because they are enclosed in quotation marks. If you are not sure what class a value falls into, Python has a function called type which can tell you. >>> type("Hello, World!") <class 'str'> >>> type(17) <class 'int'> Not surprisingly, strings belong to the class str and integers belong to the class int. Less obviously, numbers with a decimal point belong to a class called float, because these numbers are represented in a format called floating-point. At this stage, you can treat the words class and type interchangeably. We’ll come back to a deeper understanding of what a class is in later chapters. >>> type(3.2) <class 'float'> What about values like "17" and "3.2"? They look like numbers, but they are in quotation marks like strings. >>> type("17") <class 'str'> >>> type("3.2") <class 'str'> Strings in Python can be enclosed in either single quotes (') or double quotes ("), or three of each (''' or """) >>> type('This is a string.') <class 'str'> >>> type("And so is this.") <class 'str'> >>> type("""and this.""") <class 'str'> >>> type('''and even this...''') <class 'str'> Double quoted strings can contain single quotes inside them, as in "Bruce's beard", and single quoted strings can have double quotes inside them, as in 'The knights who say "Ni!"'. Strings enclosed with three occurrences of either quote symbol are called triple quoted strings. They can contain either single or double quotes: >>> print('''"Oh no", she exclaimed, "Ben's bike is broken!"''') "Oh no", she exclaimed, "Ben's bike is broken!" >>> Triple quoted strings can even span multiple lines: >>> message = """This message will ... span several ... lines.""" >>> print(message) This message will span several lines. >>> Python doesn’t care whether you use single or double quotes or the three-of-a-kind quotes to surround your strings: once it has parsed the text of your program or command, the way it stores the value is identical in all cases, and the surrounding quotes are not part of the value. But when the interpreter wants to display a string, it has to decide which quotes to use to make it look like a string. >>> 'This is a string.' 'This is a string.' >>> """And so is this.""" 'And so is this.' So the Python language designers usually chose to surround their strings by single quotes. What do think would happen if the string already contained single quotes? When you type a large integer, you might be tempted to use commas between groups of three digits, as in 42,000. This is not a legal integer in Python, but it does mean something else, which is legal: >>> 42000 42000 >>> 42,000 (42, 0) Well, that’s not what we expected at all! Because of the comma, Python chose to treat this as a pair of values. We’ll come back to learn about pairs later. But, for the moment, remember not to put commas or spaces in your integers, no matter how big they are. Also revisit what we said in the previous chapter: formal languages are strict, the notation is concise, and even the smallest change might mean something quite different from what you intended. 
One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value. The assignment statement gives a value to a variable: >>> message = "What's up, Doc?" >>> n = 17 >>> pi = 3.14159 This example makes three assignments. The first assigns the string value "What's up, Doc?" to a variable named message. The second gives the integer 17 to n, and the third assigns the floating-point number 3.14159 to a variable called pi. The assignment token, =, should not be confused with equals, which uses the token ==. The assignment statement binds a name, on the left-hand side of the operator, to a value, on the right-hand side. This is why you will get an error if you enter: >>> 17 = n File "<interactive input>", line 1 SyntaxError: can't assign to literal When reading or writing code, say to yourself “n is assigned 17” or “n gets the value 17”. Don’t say “n equals 17”. A common way to represent variables on paper is to write the name with an arrow pointing to the variable’s value. This kind of figure is called a state snapshot because it shows what state each of the variables is in at a particular instant in time. (Think of it as the variable’s state of mind). This diagram shows the result of executing the assignment statements: If you ask the interpreter to evaluate a variable, it will produce the value that is currently linked to the variable: >>> message 'What's up, Doc?' >>> n 17 >>> pi 3.14159 We use variables in a program to “remember” things, perhaps the current score at the football game. But variables are variable. This means they can change over time, just like the scoreboard at a football game. You can assign a value to a variable, and later assign a different value to the same variable. (This is different from maths. In maths, if you give `x` the value 3, it cannot change to link to a different value half-way through your calculations!) >>> day = "Thursday" >>> day 'Thursday' >>> day = "Friday" >>> day 'Friday' >>> day = 21 >>> day 21 You’ll notice we changed the value of day three times, and on the third assignment we even made it refer to a value that was of a different type. A great deal of programming is about having the computer remember things, e.g. The number of missed calls on your phone, and then arranging to update or change the variable when you miss another call. Variable names can be arbitrarily long. They can contain both letters and digits, but they have to begin with a letter or an underscore. Although it is legal to use uppercase letters, by convention we don’t. If you do, remember that case matters. Bruce and bruce are different variables. The underscore character ( _) can appear in a name. It is often used in names with multiple words, such as my_name or price_of_tea_in_china. There are some situations in which names beginning with an underscore have special meaning, so a safe rule for beginners is to start all names with a letter. If you give a variable an illegal name, you get a syntax error: >>> 76trombones = "big parade" SyntaxError: invalid syntax >>> more$ = 1000000 SyntaxError: invalid syntax >>> class = "Computer Science 101" SyntaxError: invalid syntax 76trombones is illegal because it does not begin with a letter. more$ is illegal because it contains an illegal character, the dollar sign. But what’s wrong with class? It turns out that class is one of the Python keywords. Keywords define the language’s syntax rules and structure, and they cannot be used as variable names. 
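As a quick aside (not part of the original text), Python can report its own keyword list through the standard library's keyword module, which is handy when you suspect a name such as class is reserved. A minimal sketch:

    import keyword

    print(keyword.kwlist)                  # the full list of reserved words
    print(keyword.iskeyword("class"))      # True  -- cannot be a variable name
    print(keyword.iskeyword("bruce"))      # False -- fine as a variable name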
Python has thirty-something keywords (and every now and again improvements to Python introduce or eliminate one or two): You might want to keep this list handy. If the interpreter complains about one of your variable names and you don’t know why, see if it is on this list. Programmers generally choose names for their variables that are meaningful to the human readers of the program — they help the programmer document, or remember, what the variable is used for. Beginners sometimes confuse “meaningful to the human readers” with “meaningful to the computer”. So they’ll wrongly think that because they’ve called some variable average or pi, it will somehow magically calculate an average, or magically know that the variable pi should have a value like 3.14159. No! The computer doesn’t understand what you intend the variable to mean. So you’ll find some instructors who deliberately don’t choose meaningful names when they teach beginners — not because we don’t think it is a good habit, but because we’re trying to reinforce the message that you — the programmer — must write the program code to calculate the average, and you must write an assignment statement to give the variable pi the value you want it to have. A statement is an instruction that the Python interpreter can execute. We have only seen the assignment statement so far. Some other kinds of statements that we’ll see shortly are while statements, for statements, if statements, and import statements. (There are other kinds too!) When you type a statement on the command line, Python executes it. Statements don’t produce any result. An expression is a combination of values, variables, operators, and calls to functions. If you type an expression at the Python prompt, the interpreter evaluates it and displays the result: >>> 1 + 1 2 >>> len("hello") 5 In this example len is a built-in Python function that returns the number of characters in a string. We’ve previously seen the print and the type functions, so this is our third example of a function! The evaluation of an expression produces a value, which is why expressions can appear on the right hand side of assignment statements. A value all by itself is a simple expression, and so is a variable. >>> 17 17 >>> y = 3.14 >>> x = len("hello") >>> x 5 >>> y 3.14 Operators are special tokens that represent computations like addition, multiplication and division. The values the operator uses are called operands. The following are all legal Python expressions whose meaning is more or less clear: 20+32 hour-1 hour*60+minute minute/60 5**2 (5+9)*(15-7) The tokens +, -, and *, and the use of parenthesis for grouping, mean in Python what they mean in mathematics. The asterisk (*) is the token for multiplication, and ** is the token for exponentiation. >>> 2 ** 3 8 >>> 3 ** 2 9 When a variable name appears in the place of an operand, it is replaced with its value before the operation is performed. Addition, subtraction, multiplication, and exponentiation all do what you expect. Example: so let us convert 645 minutes into hours: >>> minutes = 645 >>> hours = minutes / 60 >>> hours 10.75 Oops! In Python 3, the division operator / always yields a floating point result. What we might have wanted to know was how many whole hours there are, and how many minutes remain. Python gives us two different flavors of the division operator. The second, called floor division uses the token //. Its result is always a whole number — and if it has to adjust the number it always moves it to the left on the number line. 
So 6 // 4 yields 1, but -6 // 4 might surprise you! >>> 7 / 4 1.75 >>> 7 // 4 1 >>> minutes = 645 >>> hours = minutes // 60 >>> hours 10 Take care that you choose the correct flavor of the division operator. If you’re working with expressions where you need floating point values, use the division operator that does the division accurately. Here we’ll look at three more Python functions, int, float and str, which will (attempt to) convert their arguments into types int, float and str respectively. We call these type converter functions. The int function can take a floating point number or a string, and turn it into an int. For floating point numbers, it discards the decimal portion of the number — a process we call truncation towards zero on the number line. Let us see this in action: >>> int(3.14) 3 >>> int(3.9999) # This doesn't round to the closest int! 3 >>> int(3.0) 3 >>> int(-3.999) # Note that the result is closer to zero -3 >>> int(minutes / 60) 10 >>> int("2345") # Parse a string to produce an int 2345 >>> int(17) # It even works if arg is already an int 17 >>> int("23 bottles") This last case doesn’t look like a number — what do we expect? Traceback (most recent call last): File "<interactive input>", line 1, in <module> ValueError: invalid literal for int() with base 10: '23 bottles' The type converter float can turn an integer, a float, or a syntactically legal string into a float: >>> float(17) 17.0 >>> float("123.45") 123.45 The type converter str turns its argument into a string: >>> str(17) '17' >>> str(123.45) '123.45' When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. Python follows the same precedence rules for its mathematical operators that mathematics does. The acronym PEMDAS is a useful way to remember the order of operations: Parentheses have the highest precedence and can be used to force an expression to evaluate in the order you want. Since expressions in parentheses are evaluated first, 2 * (3-1) is 4, and (1+1)**(5-2) is 8. You can also use parentheses to make an expression easier to read, as in (minute * 100) / 60, even though it doesn’t change the result. Exponentiation has the next highest precedence, so 2**1+1 is 3 and not 4, and 3*1**3 is 3 and not 27. Multiplication and both Division operators have the same precedence, which is higher than Addition and Subtraction, which also have the same precedence. So 2*3-1 yields 5 rather than 4, and 5-2*2 is 1, not 6. Operators with the same precedence are evaluated from left-to-right. In algebra we say they are left-associative. So in the expression 6-3+2, the subtraction happens first, yielding 3. We then add 2 to get the result 5. If the operations had been evaluated from right to left, the result would have been 6-(3+2), which is 1. (The acronym PEDMAS could mislead you to thinking that division has higher precedence than multiplication, and addition is done ahead of subtraction - don’t be misled. Subtraction and addition are at the same precedence, and the left-to-right rule applies.) Due to some historical quirk, an exception to the left-to-right left-associative rule is the exponentiation operator **, so a useful hint is to always use parentheses to force exactly the order you want when exponentiation is involved: >>> 2 ** 3 ** 2 # The right-most ** operator gets done first! 512 >>> (2 ** 3) ** 2 # Use parentheses to force the order you want! 
64 The immediate mode command prompt of Python is great for exploring and experimenting with expressions like this. In general, you cannot perform mathematical operations on strings, even if the strings look like numbers. The following are illegal (assuming that message has type string):
>>> message - 1        # Error
>>> "Hello" / 123      # Error
>>> message * "Hello"  # Error
>>> "15" + 2           # Error
Interestingly, the + operator does work with strings, but for strings, the + operator represents concatenation, not addition. Concatenation means joining the two operands by linking them end-to-end. For example:
fruit = "banana"
baked_good = " nut bread"
print(fruit + baked_good)
The output of this program is banana nut bread. The space before the word nut is part of the string, and is necessary to produce the space between the concatenated strings. The * operator also works on strings; it performs repetition. For example, 'Fun'*3 is 'FunFunFun'. One of the operands has to be a string; the other has to be an integer. On one hand, this interpretation of + and * makes sense by analogy with addition and multiplication. Just as 4*3 is equivalent to 4+4+4, we expect "Fun"*3 to be the same as "Fun"+"Fun"+"Fun", and it is. On the other hand, there is a significant way in which string concatenation and repetition are different from integer addition and multiplication. Can you think of a property that addition and multiplication have that string concatenation and repetition do not? There is a built-in function in Python for getting input from the user:
n = input("Please enter your name: ")
A sample run of this script in PyScripter would pop up a dialog window like this: The user of the program can enter the name and click OK, and when this happens the text that has been entered is returned from the input function, and in this case assigned to the variable n. Even if you asked the user to enter their age, you would get back a string like "17". It would be your job, as the programmer, to convert that string into an int or a float, using the int or float converter functions we saw earlier. So far, we have looked at the elements of a program — variables, expressions, statements, and function calls — in isolation, without talking about how to combine them. One of the most useful features of programming languages is their ability to take small building blocks and compose them into larger chunks. For example, we know how to get the user to enter some input, we know how to convert the string we get into a float, we know how to write a complex expression, and we know how to print values. Let's put these together in a small four-step program that asks the user to input a value for the radius of a circle, and then computes the area of the circle from the formula area = π r². Firstly, we'll do the four steps one at a time:
response = input("What is your radius? ")
r = float(response)
area = 3.14159 * r**2
print("The area is ", area)
Now let's compose the first two lines into a single line of code, and compose the second two lines into another line of code.
r = float( input("What is your radius? ") )
print("The area is ", 3.14159 * r**2)
If we really wanted to be tricky, we could write it all in one statement:
print("The area is ", 3.14159*float(input("What is your radius?"))**2)
Such compact code may not be the most understandable for humans, but it does illustrate how we can compose bigger chunks from our building blocks.
If you're ever in doubt about whether to compose code or fragment it into smaller steps, try to make it as simple as you can for the human to follow. My choice would be the first case above, with four separate steps. The modulus operator works on integers (and integer expressions) and gives the remainder when the first number is divided by the second. In Python, the modulus operator is a percent sign (%). The syntax is the same as for other operators. It has the same precedence as the multiplication operator.
>>> q = 7 // 3    # This is the integer division operator
>>> print(q)
2
>>> r = 7 % 3
>>> print(r)
1
So 7 divided by 3 is 2 with a remainder of 1. The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another—if x % y is zero, then x is divisible by y. Also, you can extract the right-most digit or digits from a number. For example, x % 10 yields the right-most digit of x (in base 10). Similarly x % 100 yields the last two digits. It is also extremely useful for doing conversions, say from seconds to hours, minutes and seconds. So let's write a program to ask the user to enter some seconds, and we'll convert them into hours, minutes, and remaining seconds.
total_secs = int(input("How many seconds, in total?"))
hours = total_secs // 3600
secs_still_remaining = total_secs % 3600
minutes = secs_still_remaining // 60
secs_finally_remaining = secs_still_remaining % 60
print("Hrs=", hours, " mins=", minutes, "secs=", secs_finally_remaining)
An assignment statement is a statement that assigns a value to a name (variable). To the left of the assignment operator, =, is a name. To the right of the assignment token is an expression which is evaluated by the Python interpreter and then assigned to the name. The difference between the left and right hand sides of the assignment statement is often confusing to new programmers. In the following assignment: n = n + 1, n plays a very different role on each side of the =. On the right it is a value and makes up part of the expression which will be evaluated by the Python interpreter before assigning it to the name on the left.
Exercises:
- Take the sentence: All work and no play makes Jack a dull boy. Store each word in a separate variable, then print out the sentence on one line using print.
- Add parentheses to the expression 6 * 1 - 2 to change its value from 4 to -6.
- Place a comment before a line of code that previously worked, and record what happens when you rerun the program.
- Start the Python interpreter and enter bruce + 4 at the prompt. This will give you an error: NameError: name 'bruce' is not defined. Assign a value to bruce so that bruce + 4 evaluates to 10.
- The formula for computing the final amount if one is earning compound interest is given on Wikipedia as A = P (1 + r/n)^(n t). Write a Python program that assigns the principal amount of $10000 to variable P, assigns to n the value 12, and assigns to r the interest rate of 8%. Then have the program prompt the user for the number of years t that the money will be compounded for. Calculate and print the final amount after t years.
- Evaluate the following numerical expressions in your head, then use the Python interpreter to check your results:
  >>> 5 % 2
  >>> 9 % 5
  >>> 15 % 12
  >>> 12 % 15
  >>> 6 % 6
  >>> 0 % 7
  >>> 7 % 0
  What happened with the last example? Why? If you were able to correctly anticipate the computer's response in all but the last one, it is time to move on. If not, take time now to make up examples of your own.
Explore the modulus operator until you are confident you understand how it works. You look at the clock and it is exactly 2pm. You set an alarm to go off in 51 hours. At what time does the alarm go off? (Hint: you could count on your fingers, but this is not what we’re after. If you are tempted to count on your fingers, change the 51 to 5100.) Write a Python program to solve the general version of the above problem. Ask the user for the time now (in hours), and ask for the number of hours to wait. Your program should output what the time will be on the clock when the alarm goes off.
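One possible sketch of a solution to the alarm-clock exercise above, using the modulus operator on a 24-hour clock (the variable names are my own, not from the text):

    # Clock arithmetic with the modulus operator.
    time_now = int(input("What hour is it now (0-23)? "))
    wait_hours = int(input("How many hours until the alarm? "))

    alarm_time = (time_now + wait_hours) % 24
    print("The alarm will go off at", alarm_time, "o'clock")

    # With the example above: (14 + 51) % 24 == 17, i.e. 5pm.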
http://www.openbookproject.net/thinkcs/python/english3e/variables_expressions_statements.html
13
59
In the theory of relativity, it is convenient to express results in terms of a spacetime coordinate system relative to an implied observer. In many (but not all) coordinate systems, an event is specified by one time coordinate and three spatial coordinates. The time specified by the time coordinate is referred to as coordinate time to distinguish it from proper time. In the special case of an inertial observer in special relativity, by convention the coordinate time at an event is the same as the proper time measured by a clock that is at the same location as the event, that is stationary relative to the observer and that has been synchronised to the observer's clock using the Einstein synchronisation convention.

Coordinate time, proper time, and clock synchronization

Fuller explanation of the concept of coordinate time arises from its relationships with proper time and with clock synchronization. Synchronization, along with the related concept of simultaneity, has to receive careful definition in the framework of general relativity theory, because many of the assumptions inherent in classical mechanics and classical accounts of space and time had to be removed. Specific clock synchronization procedures were defined by Einstein and give rise to a limited concept of simultaneity. Two events are called simultaneous in a chosen reference frame if and only if the chosen coordinate time has the same value for both of them; and this condition allows for the physical possibility and likelihood that they will not be simultaneous from the standpoint of another reference frame. But the coordinate time is not a time that could be measured by a clock located at the place that nominally defines the reference frame, e.g. a clock located at the solar system barycenter would not measure the coordinate time of the barycentric reference frame, and a clock located at the geocenter would not measure the coordinate time of a geocentric reference frame. For non-inertial observers, and in general relativity, coordinate systems can be chosen more freely. For a clock whose spatial coordinates are constant, the relationship between proper time τ (Greek lowercase tau) and coordinate time t, i.e. the rate of time dilation, is given by

\( \frac{d\tau}{dt} = \sqrt{g_{00}} \quad (1) \)

where \(g_{00}\) is the time-time component of the metric tensor at the clock's location. An alternative formulation, correct to the order of terms in \(1/c^2\), gives the relation between proper and coordinate time in terms of more-easily recognizable quantities in dynamics:

\( \frac{d\tau}{dt} \approx 1 - \frac{1}{c^2}\left(U + \frac{v^2}{2}\right) \quad (2) \)

where \(v\) is the velocity of the clock in the chosen reference frame and \(U = \sum_i GM_i/r_i\) is a sum of gravitational potentials due to the masses in the neighborhood, based on their distances \(r_i\) from the clock. This sum of the terms \(GM_i/r_i\) is evaluated approximately, as a sum of Newtonian gravitational potentials (plus any tidal potentials considered), and is represented using the positive astronomical sign convention for gravitational potentials. Equation (2) is a fundamental and much-quoted differential equation for the relation between proper time and coordinate time, i.e. for time dilation. A derivation, starting from the Schwarzschild metric, with further reference sources, is given in Time dilation due to gravitation and motion together. The coordinate times cannot be measured, but only computed from the (proper-time) readings of real clocks with the aid of the time dilation relationship shown in equation (2) (or some alternative or refined form of it).
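As a rough numerical illustration of equation (2) (not part of the original article), the sketch below evaluates the fractional rate difference for a clock at rest on Earth's surface, as seen in a solar-system barycentric frame. The constants are approximate textbook values, and only the Sun's and Earth's potentials are included:

    # Rough evaluation of equation (2): dtau/dt ~= 1 - (U + v**2/2) / c**2
    # for a clock near Earth, in a barycentric frame.  Constants approximate.
    c = 2.998e8            # speed of light, m/s
    GM_sun = 1.327e20      # m^3/s^2
    GM_earth = 3.986e14    # m^3/s^2
    r_sun = 1.496e11       # Earth-Sun distance (1 AU), m
    r_earth = 6.371e6      # Earth radius, m
    v = 2.98e4             # Earth's orbital speed, m/s

    U = GM_sun / r_sun + GM_earth / r_earth     # sum of potentials GM_i / r_i
    rate = (U + v**2 / 2) / c**2                # fractional rate difference

    print(f"dtau/dt differs from 1 by about {rate:.2e}")
    print(f"which is roughly {rate * 3.156e7:.2f} s per year")

The result, about 1.55 x 10^-8 (roughly half a second per year), is of the same order as the rate divergence quoted below for Barycentric Coordinate Time as observed from the Earth.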
Only for explanatory purposes is it possible to conceive a hypothetical observer and trajectory on which the proper time of the clock would coincide with coordinate time: such an observer and clock have to be conceived at rest with respect to the chosen reference frame (v = 0 in (2) above) but also (in an unattainably hypothetical situation) infinitely far away from its gravitational masses (also U = 0 in (2) above). Even such an illustration is of limited use, because the coordinate time is defined everywhere in the reference frame, while the hypothetical observer and clock chosen to illustrate it have only a limited choice of trajectory.

Coordinate time scales

A coordinate time scale (or coordinate time standard) is a time standard designed for use as the time coordinate in calculations that need to take account of relativistic effects. The choice of a time coordinate implies the choice of an entire frame of reference. As described above, a time coordinate can to a limited extent be illustrated by the proper time of a clock that is notionally infinitely far away from the objects of interest and at rest with respect to the chosen reference frame. This notional clock, because it is outside all gravity wells, is not influenced by gravitational time dilation. The proper time of objects within a gravity well will pass more slowly than the coordinate time even when they are at rest with respect to the coordinate reference frame. Gravitational as well as motional time dilation must be considered for each object of interest, and the effects are functions of the velocity relative to the reference frame and of the gravitational potential as indicated in (2).

There are four purpose-designed coordinate time scales defined by the IAU for use in astronomy. Barycentric Coordinate Time (TCB) is based on a reference frame comoving with the barycenter of the Solar system, and has been defined for use in calculating motion of bodies within the Solar system. However, from the standpoint of Earth-based observers, general time dilation including gravitational time dilation causes Barycentric Coordinate Time, which is based on the SI second, to appear when observed from the Earth to have time units that pass more quickly than SI seconds measured by an Earth-based clock, with a rate of divergence of about 0.5 seconds per year. Accordingly, for many practical astronomical purposes, a scaled modification of TCB has been defined, called for historical reasons Barycentric Dynamical Time (TDB), with a time unit that evaluates to SI seconds when observed from the Earth's surface, thus assuring that at least for several millennia TDB will remain within 2 milliseconds of Terrestrial Time (TT), albeit that the time unit of TDB, if measured by the hypothetical observer described above, at rest in the reference frame and at infinite distance, would be very slightly slower than the SI second (by 1 part in 1/L_B = 1 part in 10^8/1.550519768).

Geocentric Coordinate Time (TCG) is based on a reference frame comoving with the geocenter (the center of the Earth), and is defined in principle for use in calculations concerning phenomena on or in the region of the Earth, such as planetary rotation and satellite motions. To a much smaller extent than with TCB compared with TDB, but for a corresponding reason, the SI second of TCG when observed from the Earth's surface shows a slight acceleration relative to the SI seconds realized by Earth-surface-based clocks.
Accordingly, Terrestrial Time (TT) has also been defined as a scaled version of TCG, with the scaling chosen so that on the defined geoid the unit rate is equal to the SI second, albeit that in terms of TCG the SI second of TT is very slightly slower (this time by 1 part in 1/LG, i.e. 1 part in 10^10/6.969290134).
See also
- Absolute time and space
- Introduction to special relativity
- Introduction to the mathematics of general relativity
References
- S A Klioner (1992), "The problem of clock synchronization - A relativistic approach", Celestial Mechanics and Dynamical Astronomy, vol. 53 (1992), pp. 81-109.
- S A Klioner (2008), "Relativistic scaling of astronomical quantities and the system of astronomical units", Astronomy and Astrophysics, vol. 478 (2008), pp. 951-958, at section 5: "On the concept of coordinate time scales", esp. p. 955.
- S A Klioner (2008), cited above, at page 954.
- This is for example equation (6) at page 36 of T D Moyer (1981), "Transformation from proper time on Earth to coordinate time in solar system barycentric space-time frame of reference", Celestial Mechanics, vol. 23 (1981), pages 33-56.
- S A Klioner (2008), cited above, at page 955.
- A graph giving an overview of the rate differences (when observed from the Earth's surface) and offsets between various standard time scales, present and past, defined by the IAU: for description see Fig. 1 (at p. 835) in P K Seidelmann & T Fukushima (1992), "Why new time scales?", Astronomy & Astrophysics, vol. 265 (1992), pages 833-838.
- IAU 2006 resolution 3, see Recommendation and footnotes, note 3.
- These differences between coordinate time scales are mainly periodic, the basis for them explained in G M Clemence & V Szebehely, "Annual variation of an atomic clock", Astronomical Journal, vol. 72 (1967), pp. 1324-1326.
- Scaling defined in IAU 2006 resolution 3.
- Scaling defined in Resolutions of the IAU 2000 24th General Assembly (Manchester), see Resolution B1.9.
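To make the two scale factors quoted above concrete, here is a minimal Python check (my own illustration, not part of the article) that turns the constants LB and LG into approximate rate offsets per year; epochs and constant offsets between the scales are deliberately left out.

```python
# Rate offsets implied by the scaling constants quoted in the text.

L_B = 1.550519768e-8    # 1 part in 10^8/1.550519768 (TCB versus TDB/TT rate)
L_G = 6.969290134e-10   # 1 part in 10^10/6.969290134 (TCG versus TT rate)

SECONDS_PER_YEAR = 365.25 * 86400

print(L_B * SECONDS_PER_YEAR)   # ≈ 0.49 s/yr, matching the ~0.5 s per year divergence quoted above
print(L_G * SECONDS_PER_YEAR)   # ≈ 0.022 s/yr, the much smaller TCG-vs-TT drift
```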
http://en.wikipedia.org/wiki/Coordinate_time
mRNA vs tRNA
DNA and RNA are macromolecules, polymers of nucleotides. Deoxyribonucleic acid (DNA) is responsible for carrying genetic information from generation to generation, while ribonucleic acid (RNA) is mainly involved in protein synthesis. Although DNA is the main genetic material for most living organisms, RNA is the genetic material of some viruses. RNA is composed of a pentose sugar and nitrogenous bases. These bases are categorized as purines and pyrimidines: the purine bases are adenine (A) and guanine (G), and the pyrimidines are cytosine (C) and uracil (U). RNA is normally located in the cytoplasm. There are three classes of RNA involved in protein synthesis using the information in DNA: mRNA, tRNA, and rRNA.
Messenger RNA (mRNA) is the ribonucleotide sequence that encodes the amino acid sequence of proteins; reading this sequence into protein is called translation. In translation, mRNA is read as triplet codons, and the genetic code specifies the amino acid corresponding to each triplet codon. Molecules of mRNA are transcribed from DNA in a process very similar to DNA replication, except that only one DNA strand is transcribed and the base thymine (T) is substituted with uracil (U). In eukaryotes, a single mRNA codes for a single polypeptide chain, while in prokaryotes several polypeptide chains may be coded by a single mRNA strand. Most mRNA molecules have a short life span and a high turnover rate, so they can be synthesized over and over again from the same stretch of template DNA. In eukaryotes, within this short lifetime the mRNA is processed, edited, and transported before translation; during processing, several things occur, such as 5′ cap addition, splicing, editing, and polyadenylation. In prokaryotes this processing does not occur. Because translation and transcription occur in different places in eukaryotes, mRNA also needs to be transported extensively.
The main function of transfer RNA (tRNA) is to carry amino acids to the ribosomes and to interact with the mRNA during translation. tRNAs are 70-90 nucleotides long, and all mature tRNA molecules fold into a secondary structure containing several hairpin loops. At one end, the tRNA carries an anticodon that binds with the mRNA, so the amino acids are joined in the order specified by the mRNA. There is at least one type of tRNA for each amino acid, which is why tRNA is produced in large amounts in a cell. tRNAs are synthesized as precursors in both eukaryotic and prokaryotic cells; tRNA processing involves removal of a short leader sequence from the 5′ end, addition of CCA at the 3′ end in place of two nucleotides of the precursor, chemical modification of certain bases, and excision of an intron.
What is the difference between mRNA and tRNA?
• Transfer RNA (tRNA) carries amino acids to the ribosomes and interacts with the mRNA during translation, while mRNA is transcribed from a DNA template (much as in DNA replication) and encodes the amino acid sequence of proteins.
• mRNA is an unfolded, linear molecule, whereas tRNA has a three-dimensional structure containing several hairpin loops.
• At one end, tRNA has the CCA trinucleotide common to all tRNA molecules; mRNA has no such feature.
• tRNA is produced in large amounts because there is at least one type of tRNA for each amino acid, so the amount of mRNA in a cell is lower than the amount of tRNA.
• In translation, mRNA is read as codons, whereas tRNA is not.
• tRNA has an anticodon, but mRNA does not.
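Since the comparison turns on mRNA being read as triplet codons and tRNA recognizing those codons through its anticodon, here is a tiny Python sketch of both ideas. The codon table is deliberately partial and the sequence is made up; this is an illustration of the reading rules, not a bioinformatics tool.

```python
# Minimal sketch: reading mRNA in triplet codons, and the tRNA anticodon
# that base-pairs with a given codon. Only a few codons are listed here;
# the real genetic code has 64.

CODON_TABLE = {          # partial standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "GCU": "Ala", "UAA": "STOP",
}

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon):
    """tRNA anticodon that pairs antiparallel with an mRNA codon."""
    return "".join(COMPLEMENT[b] for b in reversed(codon))

def translate(mrna):
    """Read an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
print(anticodon("AUG"))            # 'CAU', the anticodon of the initiator tRNA
```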
http://www.differencebetween.com/difference-between-mrna-and-vs-trna/
1. What are the two main functions that prices perform in market economies? How do they address the three main questions: what gets produced, how is it produced, and who gets the products? How do prices transmit information about changing consumer wants and resource availability?
In market economies, prices answer the three main questions that any economic system must address. Prices do this by performing two main functions: rationing the goods and services that are produced and allocating the resources used to produce them. The question of who gets the goods and services that are produced is answered by the rationing function that prices perform: products are rationed according to willingness and ability to pay the market price. The questions of what gets produced and how it is produced are answered by the allocative function that prices perform. Prices transmit information between consumers and producers: changes in consumers' desires and changing resource scarcity are signaled by the changing prices of goods and resources.
2. How do prices ration goods? Why must goods be rationed? What are other means of rationing besides price? Are these other methods fairer than using price?
All scarce goods must be rationed somehow. Because goods are not freely available to everyone who wants them, some people will get certain goods and others will not. Rationing invariably discriminates against someone. Rationing by price discriminates against people with a low ability or willingness to pay the market price. Sometimes other rationing mechanisms are employed, such as queuing.
3. If the price of a good is kept below the market price through the use of a government-imposed price control, how can the total cost end up exceeding the supposedly higher market price?
When employing other ways of rationing goods, we should keep in mind that every rationing mechanism discriminates against someone and can result in wasted resources. For example, queuing often leads to long lines and wasted time and discriminates against people on the basis of the opportunity cost of their time. The total cost (the price paid plus the value of the time spent waiting) will often exceed what would have been paid in a free market.
4. How can supply and demand be used as a tool for analysis?
The basic logic of supply and demand is a powerful tool for analysis. For example, supply and demand analysis shows that an oil import tax will reduce the quantity of oil demanded, increase domestic production, and generate revenues for the government.
5. How is market efficiency related to demand and supply?
Supply and demand curves can be used to illustrate the idea of market efficiency, an important aspect of normative economics.
6. What is Consumer Surplus?
Consumer surplus is the difference between the maximum amount a person is willing to pay for a good and the current market price.
7. What is Producer Surplus?
Producer surplus is the difference between the current market price and the full cost of production at each output level.
8. When are producer and consumer surpluses maximized?
Producer and consumer surpluses are maximized at the free-market equilibrium in competitive markets.
9. What happens to consumer surplus if goods are overproduced or underproduced?
There is a loss in both consumer and producer surplus, and this loss is referred to as a deadweight loss.
10. What is elasticity?
Elasticity is a general measure of responsiveness.
If one variable A changes in response to changes in another variable B, the elasticity of A with respect to B is equal to the percentage change in A divided by the percentage change in B.
11. How is the slope of the demand curve related to responsiveness?
The slope of a demand curve is an inadequate measure of responsiveness, because its value depends on the units of measurement used. For this reason, elasticities are calculated using percentages.
12. What is price elasticity of demand and what are its extremes?
The price elasticity of demand tells us the percentage change we could expect in the quantity demanded of a good for a 1% change in price. Perfectly inelastic demand does not respond to price changes at all; its numerical value is zero. With perfectly elastic demand, the quantity of a product demanded drops to zero when there is even a very small price increase. Unitary elastic demand describes a relationship in which the percentage change in the quantity of a product demanded is the same as the percentage change in price; its numerical value is -1. Elastic demand is demand in which the percentage change in the quantity of a product demanded is larger than the percentage change in price. Inelastic demand is demand in which the percentage change in the quantity of a product demanded is smaller than the percentage change in price.
13. What happens to total revenue if demand is elastic and price increases?
A price increase will cause total revenue to fall, because the quantity demanded falls by a larger percentage than the price rises.
14. What happens to total revenue if demand is elastic and price decreases?
A price decrease will cause total revenue to rise, because the quantity demanded rises by a larger percentage than the price falls.
15. What does the elasticity of demand depend upon?
The elasticity of demand depends upon the availability of substitutes, the importance of the item in individual budgets, and the time frame in question.
16. What are other important elasticity measures?
Other important elasticity measures are: (1) income elasticity of demand, which measures the responsiveness of the quantity demanded with respect to changes in income; (2) cross-price elasticity of demand, which measures the responsiveness of the quantity demanded of one good with respect to changes in the price of another good; (3) elasticity of supply, which measures the responsiveness of the quantity supplied of a good with respect to changes in the price of that good; and (4) elasticity of labor supply, which measures the responsiveness of the quantity of labor supplied with respect to changes in the price of labor (the wage rate).
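As a quick numerical illustration of points 10 through 14 (the demand numbers are made up for the example, not taken from the study guide), the following Python sketch computes an arc elasticity and shows the associated change in total revenue.

```python
# Midpoint (arc) formula for price elasticity of demand, plus the total-revenue check.

def arc_elasticity(q1, q2, p1, p2):
    """Percentage change in quantity divided by percentage change in price,
    using midpoints so the answer is the same in either direction."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Price rises from $10 to $12 and quantity demanded falls from 100 to 70:
e = arc_elasticity(100, 70, 10, 12)
print(round(e, 2))                  # ≈ -1.94, i.e. elastic (|e| > 1)

# With elastic demand, the price increase lowers total revenue:
print(100 * 10, 70 * 12)            # 1000 before versus 840 after
```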
http://wps.prenhall.com/bp_casefair_econf_7e/30/7932/2030659.cw/index.html
There are four types of angles: acute, right, obtuse, and straight. Each name indicates a specific range of degree measurements. Congruent angles have equivalent measures. Adjacent angles share a vertex and a common side.
An angle is something that we use throughout Geometry; we talk about it all the way to the very end, when we're talking about Trigonometry. Well, an angle is formed by two rays that share a common end point. It's measured in degrees, and its measure is between 0 and 180 degrees; if your rotation is over 180, then you're going to subtract that number from 360. So let's say you had 220 degrees: you're going to subtract that from 360, so it's actually a 140 degree angle. If we look at an example where we have angle abc, there are two ways that you could label this. You can write this as angle abc, or, since there are no other adjacent angles (an adjacent angle would be something like this, one that shares that vertex, that common end point), you could also just label this based on the vertex, which is b. Now there's something very specific about the way that I wrote angle abc: whenever you write the angle, its vertex must be the middle letter. But what is the vertex? The vertex is this point that is the common end point of your rays. So I'm going to label this as the vertex, and the rays form what are called the sides. So ray bc is one side of this angle and ray ba is another side. So again, you can label an angle two different ways: one using three letters that make up the two sides and the vertex, making sure that your vertex is the middle letter, or, if there aren't any other adjacent angles, you can just label it based on its vertex.
There are four key types of angles. The first one is acute: if I drew this angle and I said that's angle x, then if it is acute, that means x is less than 90 degrees but also greater than 0 degrees. So it has to be somewhere in between; it cannot be exactly 90 degrees and it cannot be exactly 0 degrees. A right angle, if this is x, is equal to exactly 90 degrees. We're going to label all of our right angles in Geometry using these two little segments, which tell you, the student, that this is a 90 degree angle. The third type is an obtuse angle. So here, if we measured x, x is going to be less than 180 degrees but more than 90 degrees, because if this angle were exactly 90 degrees it would be a right angle, and if it were less than 90 it would be acute. The last one, which is key to a lot of proofs, is a straight angle. If you have a straight angle, it is equal to exactly 180 degrees, which makes sense because the full rotation around any given point is 360 degrees, so if x is a straight angle, then 2x = 360 degrees. So keep this in mind and remember: obtuse is going to be in between 180 and 90, right is going to be exactly 90, acute is going to be between 90 and 0. And remember that there are two different ways of labeling your angles, and the way that will always work is using three letters.
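Here is a minimal sketch of those classification rules in Python. The function name and the fold-back-into-0-to-180 step are my own framing of what the lesson describes, not something from the transcript itself.

```python
# Classify an angle measure as acute, right, obtuse, or straight.

def classify_angle(degrees):
    """Classify an angle, folding rotations over 180 back into the 0-180 range."""
    degrees = degrees % 360
    if degrees > 180:
        degrees = 360 - degrees      # e.g. a 220-degree rotation is the same opening as 140
    if 0 < degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if 90 < degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    return "not an angle between two distinct rays"

for x in (45, 90, 140, 180, 220):
    print(x, classify_angle(x))      # 220 comes out as obtuse (140 degrees)
```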
http://www.brightstorm.com/math/geometry/geometry-building-blocks/angles-types-and-labeling/
Validity and Soundness A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid. A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound. According to the definition of a deductive argument (see the Deduction and Induction), the author of a deductive argument always intends that the premises provide the sort of justification for the conclusion whereby if the premises are true, the conclusion is guaranteed to be true as well. Loosely speaking, if the author’s process of reasoning is a good one, if the premises actually do provide this sort of justification for the conclusion, then the argument is valid. In effect, an argument is valid if the truth of the premises logically guarantees the truth of the conclusion. The following argument is valid, because it is impossible for the premises to be true and the conclusion to nevertheless be false: Either Elizabeth owns a Honda or she owns a Saturn. Elizabeth does not own a Honda. Therefore, Elizabeth owns a Saturn. It is important to stress that the premises of an argument do not have actually to be true in order for the argument to be valid. An argument is valid if the premises and conclusion are related to each other in the right way so that if the premises were true, then the conclusion would have to be true as well. We can recognize in the above case that even if one of the premises is actually false, that if they had been true the conclusion would have been true as well. Consider, then an argument such as the following: All toasters are items made of gold. All items made of gold are time-travel devices. Therefore, all toasters are time-travel devices. Obviously, the premises in this argument are not true. It may be hard to imagine these premises being true, but it is not hard to see that if they were true, their truth would logically guarantee the conclusion’s truth. It is easy to see that the previous example is not an example of a completely good argument. A valid argument may still have a false conclusion. When we construct our arguments, we must aim to construct one that is not only valid, but sound. A sound argument is one that is not only valid, but begins with premises that are actually true. The example given about toasters is valid, but not sound. However, the following argument is both valid and sound: No felons are eligible voters. Some professional athletes are felons. Therefore, some professional athletes are not eligible voters. Here, not only do the premises provide the right sort of support for the conclusion, but the premises are actually true. Therefore, so is the conclusion. Although it is not part of the definition of a sound argument, because sound arguments both start out with true premises and have a form that guarantees that the conclusion must be true if the premises are, sound arguments always end with true conclusions. It should be noted that both invalid, as well as valid but unsound, arguments can nevertheless have true conclusions. One cannot reject the conclusion of an argument simply by discovering a given argument for that conclusion to be flawed. Whether or not the premises of an argument are true depends on their specific content. 
However, according to the dominant understanding among logicians, the validity or invalidity of an argument is determined entirely by its logical form. The logical form of an argument is that which remains of it when one abstracts away from the specific content of the premises and the conclusion, i.e., words naming things, their properties and relations, leaving only those elements that are common to discourse and reasoning about any subject matter, i.e., words such as “all”, “and”, “not”, “some”, etc. One can represent the logical form of an argument by replacing the specific content words with letters used as place-holders or variables. For example, consider these two arguments: All tigers are mammals. No mammals are creatures with scales. Therefore, no tigers are creatures with scales. All spider monkeys are elephants. No elephants are animals. Therefore, no spider monkeys are animals. These arguments share the same form: All A are B; No B are C; Therefore, No A are C. All arguments with this form are valid. Because they have this form, the examples above are valid. However, the first example is sound while the second is unsound, because its premises are false. Now consider: All basketballs are round. The Earth is round. Therefore, the Earth is a basketball. All popes reside at the Vatican. John Paul II resides at the Vatican. Therefore, John Paul II is a pope. These arguments also have the same form: All A’s are F; X is F; Therefore, X is an A. Arguments with this form are invalid. This is easy to see with the first example. The second example may seem like a good argument because the premises and the conclusion are all true, but note that the conclusion’s truth isn’t guaranteed by the premises’ truth. It would have been possible for the premises to be true and the conclusion false. This argument is invalid, and all invalid arguments are unsound. While it is accepted by most contemporary logicians that logical validity and invalidity are determined entirely by form, there is some dissent. Consider, for example, the following arguments: My table is circular. Therefore, it is not square shaped. Juan is a bachelor. Therefore, he is not married. These arguments, at least on the surface, have the form: x is F; Therefore, x is not G. Arguments of this form are not valid as a rule. However, it seems clear in these particular cases that it is, in some strong sense, impossible for the premises to be true while the conclusion is false. Many logicians would respond to these complications in various ways. Some might insist (although this is controversial) that these arguments actually contain implicit premises such as “Nothing is both circular and square shaped” or “All bachelors are unmarried,” which, while themselves necessary truths, nevertheless play a role in the form of these arguments. It might also be suggested, especially with the first argument, that while (even without the additional premise) there is a necessary connection between the premise and the conclusion, the sort of necessity involved is something other than “logical” necessity, and hence that this argument (in the simple form) should not be regarded as logically valid. Lastly, especially with regard to the second example, it might be suggested that, because “bachelor” is defined as “adult unmarried male”, the true logical form of the argument is the following universally valid form: x is F and not G and H; Therefore, x is not G. The logical form of a statement is not always as easy to discern as one might expect.
For example, statements that seem to have the same surface grammar can nevertheless differ in logical form. Take for example the two statements: (1) Tony is a ferocious tiger. (2) Clinton is a lame duck. Despite their apparent similarity, only (1) has the form “x is an A that is F”. From it one can validly infer that Tony is a tiger. One cannot validly infer from (2) that Clinton is a duck. Indeed, one and the same sentence can be used in different ways in different contexts. Consider the statement: (3) The King and Queen are visiting dignitaries. It is not clear what the logical form of this statement is. Either there are dignitaries that the King and Queen are visiting, in which case the sentence (3) has the same logical form as “The King and Queen are playing violins,” or the King and Queen are themselves the dignitaries who are visiting from somewhere else, in which case the sentence has the same logical form as “The King and Queen are sniveling cowards.” Depending on which logical form the statement has, inferences may be valid or invalid. Consider: The King and Queen are visiting dignitaries. Visiting dignitaries is always boring. Therefore, the King and Queen are doing something boring. Only if the statement is given the first reading can this argument be considered to be valid. Because of the difficulty in identifying the logical form of an argument, and the potential deviation of logical form from grammatical form in ordinary language, contemporary logicians typically make use of artificial logical languages in which logical form and grammatical form coincide. In these artificial languages, certain symbols, similar to those used in mathematics, are used to represent those elements of form analogous to ordinary English words such as “all”, “not”, “or”, “and”, etc. The use of an artificially constructed language makes it easier to specify a set of rules that determine whether or not a given argument is valid or invalid. Hence, the study of which deductive argument forms are valid and which are invalid is often called “formal logic” or “symbolic logic”. In short, a deductive argument must be evaluated in two ways. First, one must ask if the premises provide support for the conclusion by examining the form of the argument. If they do, then the argument is valid. Then, one must ask whether the premises are true or false in actuality. Only if an argument passes both these tests is it sound. However, if an argument does not pass these tests, its conclusion may still be true, even though the argument provides no support for its truth. Note: there are other, related, uses of these words that are found within more advanced mathematical logic. In that context, a formula (on its own) written in a logical language is said to be valid if it comes out as true (or “satisfied”) under all admissible or standard assignments of meaning to that formula within the intended semantics for the logical language. Moreover, an axiomatic logical calculus (in its entirety) is said to be sound if and only if all theorems derivable from the axioms of the logical calculus are semantically valid in the sense just described. For a more sophisticated look at the nature of logical validity, see the articles on “Logical Consequence” in this encyclopedia. The articles on “Argument” and “Deductive and Inductive Arguments” in this encyclopedia may also be helpful. The author of this article is anonymous. The IEP is actively seeking an author who will write a replacement article.
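To make the idea that validity is a matter of form concrete, here is a small, self-contained Python sketch (my own illustration, not the article's method) that brute-forces the two argument forms discussed above over a tiny universe of individuals, looking for an assignment in which the premises hold and the conclusion fails.

```python
# Brute-force search for counterexamples to two argument forms.

from itertools import product

UNIVERSE = range(4)   # four anonymous individuals

def extensions():
    """All possible extensions of one predicate over the universe."""
    return [set(i for i, bit in zip(UNIVERSE, bits) if bit)
            for bits in product([0, 1], repeat=len(UNIVERSE))]

def valid(premises, conclusion):
    for A, B, C in product(extensions(), repeat=3):
        if all(p(A, B, C) for p in premises) and not conclusion(A, B, C):
            return False              # true premises, false conclusion: a counterexample
    return True

# Form 1: All A are B; No B are C; therefore No A are C.
print(valid([lambda A, B, C: A <= B,
             lambda A, B, C: not (B & C)],
            lambda A, B, C: not (A & C)))          # True: no counterexample found

# Form 2 (the basketball/pope form): All A are F; x is F; therefore x is an A.
# Model "x" as individual 0 and the predicate F as the set C.
print(valid([lambda A, B, C: A <= C,
             lambda A, B, C: 0 in C],
            lambda A, B, C: 0 in A))               # False: a counterexample exists
```

A search like this can only refute validity by exhibiting a counterexample; here it happens to give the right verdict for both forms, which is enough to show in miniature why the first form is valid and the second is not.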
Last updated: August 27, 2004 | Originally published: August 27, 2004
http://www.iep.utm.edu/val-snd/
May 15, 2007: Astronomers using NASA's Hubble Space Telescope have discovered a ghostly ring of dark matter that formed long ago during a titanic collision between two massive galaxy clusters. The ring's discovery is among the strongest evidence yet that dark matter exists. Astronomers have long suspected the existence of the invisible substance as the source of additional gravity that holds together galaxy clusters. Such clusters would fly apart if they relied only on the gravity from their visible stars. Although astronomers don't know what dark matter is made of, they hypothesize that it is a type of elementary particle that pervades the universe. This Hubble composite image shows the ring of dark matter in the galaxy cluster Cl 0024+17. The ring-like structure is evident in the blue map of the cluster's dark matter distribution. The map was derived from Hubble observations of how the gravity of the cluster Cl 0024+17 distorts the light of more distant galaxies, an optical illusion called gravitational lensing. Although astronomers cannot see dark matter, they can infer its existence by mapping the distorted shapes of the background galaxies. The map is superimposed on a Hubble Advanced Camera for Surveys image of the cluster taken in November 2004. See the rest:
http://www.hubblesite.org/newscenter/archive/releases/exotic/2007/17/results/50/layout/thumb/
Black Holes Bound to Merge Two supermassive black holes have been found to be spiraling toward a merger, astronomers said today. The collision will create a single super-supermassive black hole capable of swallowing material equal to billions of stars, the researchers said. Mergers between black holes are thought to be one way they grow. A handful of similar setups have been observed in which black holes appear inevitably on a merger course. This pair, at the center of a galaxy cluster called Abell 400, was known to be close but their fate hadn't been determined. "The question was: Is this pair of supermassive black holes an old married couple, or just strangers passing in the night?" said Craig Sarazin of the University of Virginia. "We now know that they are coupled, but more like the mating of black widow spiders. One of the black holes invariably will eat the other." Black holes can't be seen. Their presence is inferred by their gravitational effects on their surroundings and by radiation from near the black hole, where a feeding frenzy superheats gas so much that it emits X-rays. Determining that these two black holes will collide involved other indirect evidence, drawing data from NASA's Chandra X-ray Observatory. Each of the black holes in Abell 400 is ejecting a pair of oppositely directed jets of superheated gas called plasma. The movement of the black holes through gas in the galaxy cluster causes the plasma jets to be swept backward. "The jets are similar to the contrails produced by planes as they fly through the air on Earth," Sarazin said. "From the contrails, we can determine where the planes have been, and in which direction they are going. What we see is that the jets are bent together and intertwined, which indicates that the pair of supermassive black holes are bound and moving together." When the objects merge several million years from now, Einstein's theory of relativity predicts they will emit a burst of gravitational waves. Similar mergers could soon be detected by NASA's planned Laser Interferometer Space Antenna (LISA). The results will be published in an upcoming issue of the journal Astronomy & Astrophysics. - Study Supports Idea that Giant Black Holes Merge - Pair of Supermassive Black Holes Inhabit Same Galaxy, Destined to Collide - When Black Holes Merge MORE FROM SPACE.com
http://www.space.com/2258-black-holes-bound-merge.html
Number Theory began as a playground for a few mathematicians who were fascinated by the curious properties of numbers. Today, it has numerous applications, from pencil-and-paper algorithms, to the solving of puzzles, to the design of computer software, to cryptanalysis (the science of code breaking). Number Theory uses the familiar operations of arithmetic (addition, subtraction, multiplication, and division), but more as the starting point of intriguing investigations than as topics of primary interest. Number Theory is more concerned with finding relations, patterns, and structure in numbers. This Number Theory course will cover topics such as the Fundamental Theorem of Arithmetic, Euclid's Algorithm, Pascal's Triangle, Fermat's Last Theorem, and Pythagorean Triples. We will finish the course with a linkage of Number Theory to Cryptography. In today's world of high-speed communication, banks, corporations, law enforcement agencies, and so on need to transmit confidential information over public phone lines or airwaves to a large number of other similar institutions. Prime numbers and composite numbers play a crucial role in many cryptographic schemes. Come taste the flavor of the purest of pure mathematics. This course is open to any student with basic algebra or higher mathematics who enjoys being challenged by puzzles and mathematics problems. It will run for a full semester.
*This course may be appropriate for Gifted and Talented middle school students who meet all course prerequisites.*
**Please Note: This course may not be appropriate for students with specific accessibility limitations as written. Please refer to the VHS Handbook policy on Special Education/Equity for more information on possible modifications. If you need additional assistance, please let us know at service.goVHS.org.
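Since the course lists Euclid's Algorithm and ends with the link to cryptography, here is a minimal Python sketch of that algorithm and of the coprimality check that RSA-style schemes rely on. The specific numbers are toy values chosen for the example, not course material.

```python
# Euclid's Algorithm by repeated remainders, plus a toy RSA-flavored use of it.

def gcd(a, b):
    """Greatest common divisor of a and b."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))   # 21

# RSA key generation needs the public exponent e to be coprime to (p-1)(q-1):
p, q, e = 61, 53, 17
print(gcd(e, (p - 1) * (q - 1)))   # 1, so e = 17 would work for this toy modulus
```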
http://www.govhs.org/vhsweb/coursecatalog12.nsf/9916f71ba35e9520852570ba00582dc0/9f939da4669cff8786257409006f0d48?OpenDocument
Years before it housed aircraft or supercomputers, NCAR was sending balloons into the stratosphere. Bolstered by new space-age technology, this simple but powerful observing strategy gathered critical data from hard-to-reach places. Balloons had been launched for scientific purposes since the 1800s, but the original plans for NCAR did not include a balloon facility. That changed after a 1960 conference—one of NCAR’s first—when experts convinced Walter Orr Roberts that ballooning had a rightful place at the new center. Within months, Roberts had brought expert Vincent Lally on board to head up the project. By 1963 the National Scientific Ballooning Facility (NSBF) was in place near Palestine, Texas, a prairie location where aircraft interference was minimal and good launch weather was frequent. When Lally arrived at NCAR, he dreamed of combining several technologies into a balloon-based system for observing the atmosphere in three dimensions. At the time, satellites were still experimental, and the balloons carrying radiosondes (see page 15) were unable to penetrate far into the stratosphere before bursting. Lally envisioned a set of balloons floating at high altitudes for weeks or months at a time, drifting around the globe and radioing back data around the clock. In order to haul heavy instruments to stratospheric heights for long periods, the new balloons would have to be huge and ultra-strong. “Modern scientific ballooning actually owes its existence to the American housewife,” Lally once observed. He noted that the increased popularity of plastic vegetable bags after World War II made polyethylene film far more affordable. When coated with Mylar, this material was highly resistant to the formation of tiny holes that could sabotage long flights. It was also highly reflective, which reduced the Sun-driven temperature changes that caused most balloons to rise by day and fall by night. The resulting vehicle—a “superpressure” balloon developed largely at the Air Force Cambridge Research Laboratories—was strong enough to withstand intense pressures without leaking or sinking, and could float at a height of constant atmospheric density without having to drop ballast at night simply to stay airborne. As the NSBF grew busy launching instruments, including some built at NCAR, Lally and colleagues looked to other latitudes. The Soviet Union prohibited balloon overflights, so attention turned to the Southern Hemisphere, where the relative lack of land mass meant fewer radiosondes were sampling the atmosphere. NCAR teamed with New Zealand’s weather service and other partners to launch the GHOST program (Global Horizontal Sounding Technique), which kicked off in March 1966 with a launch from Christchurch. Not only did a GHOST balloon become the first to fly around the world, it completed six more circuits before falling to Earth after 51 days aloft. “Those were exciting days, and our achievements were due in no small measure to the excellent cooperation we received from the New Zealand Meteorological Service and the volunteer balloon tracking stations around the Southern Hemisphere,” says NCAR’s Marcel Verstraete, part of the GHOST team. Though it never became the routine monitoring system Lally and others had envisioned, GHOST was a durable success, launching more than 350 balloons over a decade’s time. One flew for 744 days at heights above 6 miles (10 kilometers).
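As a rough illustration of the "float at a height of constant atmospheric density" idea behind superpressure balloons, here is a toy Python estimate using an isothermal atmosphere. The balloon volume, system mass, and scale height are invented for the example and are not GHOST specifications; the mass of the lifting gas is also ignored.

```python
# Toy estimate of a superpressure balloon's float altitude in an isothermal atmosphere.

import math

RHO_SEA_LEVEL = 1.225      # kg/m^3
SCALE_HEIGHT = 8000.0      # m, a simple isothermal value

def float_altitude(balloon_volume_m3, system_mass_kg):
    """Altitude where air density equals the system's bulk density.

    A fully inflated superpressure balloon keeps a fixed volume, so it settles
    where the displaced air mass equals the total system mass.
    """
    bulk_density = system_mass_kg / balloon_volume_m3
    return SCALE_HEIGHT * math.log(RHO_SEA_LEVEL / bulk_density)

# A hypothetical 2,000 m^3 balloon carrying 150 kg of envelope plus payload:
print(round(float_altitude(2000.0, 150.0)))   # ≈ 22,000 m, well into the stratosphere
```

The point is only that a fixed-volume balloon settles where the air it displaces weighs as much as the whole system, which for plausible numbers lands in the stratosphere.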
GHOST balloons provided a unique window on processes far above the southern midlatitudes and the tropics, where temperature data remain scarce to this day. High-altitude launches continue from the Palestine site, which has been managed by NASA contractors since 1987 and is now known as the Columbia Scientific Balloon Facility. Lally kept an unpretentious attitude toward his career at NCAR, which spanned four decades. As he put it, “It’s a nice way to make a living—getting paid for blowing up balloons.” "We're making accurate measurements in very hard-to-reach areas." —David Parsons, University of Oklahoma Each day more than 2,000 radiosondes send weather data to Earth as they ascend via balloons through the lowest few miles of the atmosphere. NCAR engineers have taken this venerable technology and, in a sense, turned it upside down. In 2006, they unveiled the “driftsonde,” based on a balloon that floats across the stratosphere over a week or more. The payload: dozens of instrument packages (dropsondes) that transmit data as they fall from the balloons’ gondola via parachute. Vin Lally and colleagues had contemplated the notion of a driftsonde as far back as the 1970s, but they were limited by weak batteries, inadequate communication links, and heavy instruments. Technology had transcended these roadblocks by the turn of the next century. NCAR engineers and machinists worked together to produce a highly compact instrument package, about the size of a small bottle but weighing only about 140 grams (5 ounces). Driftsondes also had to hold up to the intense sunlight and brutal cold of the stratosphere, often drifting for days in standby mode. “Try letting your car sit at minus 80 Fahrenheit for 14 days, and then try to start it,” said David Parsons, the NCAR lead on the driftsonde project and now director of the University of Oklahoma’s meteorology school. The system proved its durability in the 2006 African Monsoon Multidisciplinary Analysis project, when it sampled incipient tropical cyclones across the eastern Atlantic. Five driftsonde units released more than 200 dropsondes, gathering data on two systems close to tropical storm strength that went on to become hurricanes Florence and Gordon. In 2010, a set of successful launches from the Seychelles paved the way for the driftsondes’ use in late 2010 as part of Concordiasi, an ambitious project of the World Weather Research Program to reduce uncertainties about the present weather and future climate of the Antarctic. Part of THORPEX (The Observing-System Research and Predictability Experiment) and the International Polar Year, Concordiasi is led by scientists from the United States and France, the latter a world leader in ballooning technology and the primary collaborator with NCAR on the driftsonde project. “Retrieving all the useful information from satellite observations is a delicate process, especially over the extreme conditions found at the poles,” says Florence Rabier, head of the observations team in Météo-France’s center for numerical weather prediction. The driftsondes will gather up to 600 atmospheric profiles in and near Antarctica. These unprecedented sorties into rarely sampled regions will be used to calibrate computer models and should help scientists interpret what satellites are observing. The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. 
Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
http://www2.ucar.edu/ucarat50/borne-on-a-balloon
Comparing 2 values should be easy, but …
Comparing values to see which is greater should be straightforward: you just use the less-than comparison operator: IF(A1<A2,TRUE,FALSE). So 1 is less than 2 and A is less than B. And when things are sorted ascending, each value is less than the following value. This all seems obvious and easy, but wait: Is “AA” less than “AAA”? Is TRUE less than A? Is 99 less than B? Is “11” less than “2”? What about special characters (!”£$%^&*()@#~ etc)? And is “a” less than “A”? And then there are all those accented characters: how should they be sorted? And what about Chinese, Japanese, Cyrillic, Arabic, Hebrew …? When sorting street names, should St. James St be next to Saint James Street?
Code Pages enable computers to represent characters
Inside the computer everything works in binary, so the computer designers have to make a choice about how to represent characters in binary, and preferably get everyone to agree about the choices so that one computer can understand another. These choices are known as Code Pages. A single-byte code page has 256 characters corresponding to the 256 binary numbers that can be held in a byte, and in the early days each country/language tended to have its own code page to represent the character set that it used. Of course that does not work very well when you want to send an email from one country to another, so we started to invent multilingual code pages (500, 850, ISO 8859 etc). And Unicode was developed to handle languages such as Japanese that need more than 256 characters anyway. Today Windows uses multilingual Code Page 1252 for Latin-1 based (Western European) languages. This is similar to ISO 8859 (but not identical!).
An Excel Formula does Binary Comparison of characters
OK, now we have a code page where each character is represented by a binary number, so we can use the binary numbers to compare the characters. This is the binary comparison that an Excel formula like IF(A1<A2,TRUE,FALSE) does. You can test this for yourself: in column A put the numbers 1 to 255 in succeeding rows. Then in column B enter =CHAR(A1) and fill down. This will give you in column B the character equivalent of the number in column A. Then in column C enter =IF(B1<B2,TRUE,FALSE) and fill down. The result in C should always be TRUE, and it is, but ihem points out that more complicated rules seem to apply for some strings of more than one character containing accented characters.
Excel SORT does NOT use Binary Comparison
Now copy the characters in column B and use paste special values to put them in column E. Add the numbers 0 to 9 and TRUE and FALSE at the bottom. Then use Excel’s SORT to sort all the values in column E. The result of the Excel sort is a very different sequence: numbers, then special characters, then numbers as text, then lowercase a followed by uppercase A followed by accented a, then b followed by B, and so on, followed by FALSE and then TRUE. Note that the German ß is treated as an accented S. Blank/empty cells are always sorted last in both ascending and descending sorts. Also note that Excel SORT ignores apostrophes and hyphens, except that a string that is identical to another string except that it contains hyphens will be sorted after its hyphen-less twin. This kind of comparison sequence is known as a Collating Sequence. Each combination of language and country (a locale such as EN-US) can have a different collating sequence even when using the same code page.
Some collating sequences have to treat 2 characters as if they were one (digraphs, for example Traditional Spanish ch, ll, rr). Excel’s SORT uses a “stable” sorting algorithm. This means that if there are multiple identical items to be sorted, Excel’s SORT will preserve the original order of those items.
Duplicating Excel’s collating sequence in your own QuickSort routine
If you need to use your own sort routine, for instance to sort arrays using QuickSort, it is difficult to use the same collating sequence as Excel’s SORT. Excel VBA has 2 different comparison methods (Binary and Text) available through StrComp() or Option Compare. But VBA’s Text compare just does a case-insensitive collating compare, not a case-sensitive one. In fact, to do a case-sensitive collating comparison using VBA you need to build your own collating sequence table and do a byte-by-byte comparison using lookups in your collating sequence table. Or you could probably use the Windows API function CompareStringW. Or you could dump the array to a worksheet, use Excel Sort and then read the result back into an array. C++ has string comparison methods (wcscoll() and _wcsicoll() for Unicode strings) that will use the collating sequence of the current locale. I used these to build the COMPARE() function and all the LOOKUP and SORT functions in the FastExcel V3 function library. Note that the usual QuickSort algorithm is NOT a “stable” sort: identical items will often be rearranged from their original sequence.
When do you need a Collating Quicksort?
Most of the time it’s not a problem to use a binary-compare quicksort (and it’s a lot faster). Some exceptions that won’t work well are:
- Comparing your own sorted array with an Excel sorted range.
- Using Excel’s MATCH and LOOKUP functions with approximate match on your own sorted array.
- Case-sensitive sorting.
So have you ever needed to use a collating Quicksort?
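For readers coming at this from outside VBA, here is a small Python analogue of the binary-versus-collating distinction. This is not Excel's or FastExcel's algorithm, just the same idea expressed through the C runtime's locale-aware comparison.

```python
# Binary (code-point) sort versus locale-aware collating sort.

import locale
from functools import cmp_to_key

locale.setlocale(locale.LC_COLLATE, "")   # use the current system locale

words = ["banana", "Apple", "apple", "Éclair", "eclair", "2", "11"]

binary_sort = sorted(words)                                    # plain code-point order
collated_sort = sorted(words, key=cmp_to_key(locale.strcoll))  # collating-sequence order

print(binary_sort)     # digits first, uppercase ASCII before lowercase, 'Éclair' at the very end
print(collated_sort)   # in a typical en_US locale, case and accents interleave instead
```

The exact collated order depends on the locale the script runs under, which is really the point of the post: a "less than" between strings is only meaningful relative to a chosen collating sequence.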
http://fastexcel.wordpress.com/2012/06/09/sorting-and-comparing-the-excel-way-code-pages-and-collating-sequences/
NASA's Exoplanet Exploration Program is leading humankind on a voyage of unprecedented scope and ambition, promising insight into two of our most timeless questions: Where did we come from? Are we alone? The primary goal of the program is to discover and characterize planetary systems and Earth-like planets around nearby stars. The missions are designed to build on each other's success, each providing an essential step forward toward the goal of discovering habitable planets and evidence of life beyond.
The first phase of exploration entails building an understanding of how many and what kinds of planetary systems nature has provided. Much of this work has been done with ground telescopes around the world, pushing the limits of their ability to detect smaller planets through Earth's turbulent atmosphere. The Kepler mission, in the stillness of space, is probing deeper into the galaxy to detect smaller and more Earth-like planets around other stars. Future NASA and international missions, as well as larger and more sensitive ground observatories, will extend this exoplanetary census much farther in the coming years. At the same time, important investigations will tell us about the environments around stars with exoplanets, such as dust and debris in disks that could make further measurements of the planets more difficult.
Ultimately, the goal is to see whether there are exoplanets that show signs of possible life that we know how to interpret. The evidence will be primarily in the form of detailed spectroscopic studies of the atmospheres of extrasolar planets. For a planet to host life, our expectation is that the planet would require liquid water on the surface. We do not assume that the planet would necessarily resemble Earth itself. It would lie in an orbit that is neither too close to nor too far from its star, so that liquid water could exist over geological timescales, and its atmosphere would contain the right balance of gases that could support life. Moreover, the atmosphere of the planet would be altered by the presence of life, such that only the existence of living organisms could account for the unusually high levels of gases in its atmosphere. (It's not that scientists reject any possibility of other life forms than what we know, but we do not yet know what other life forms could exist or how to look for them.)
The volume of space that would be explored would be limited to the closest stars. In this context "nearby" is understood to be stars that lie within approximately 20 parsecs (about 65 light-years) of our Sun. This is roughly the distance we can explore using technologies available in the next decade.
The ultimate stakeholder in the adventure of exoplanet exploration is the public who underwrites it. NASA’s Exoplanet Exploration Program is especially interested in making materials and information available so that all can appreciate and understand the new scientific discoveries and the challenges ahead, and in engaging and inspiring students to take interest in technical and scientific matters.
Keck Interferometer (KI): The Keck Interferometer is part of NASA's overall effort to find planets and ultimately life beyond our solar system. It combines the light from the twin Keck telescopes to make high resolution measurements of stars and galaxies and to measure ...
Kepler: The Kepler Mission, a NASA Discovery mission, is specifically designed to survey our region of the Milky Way Galaxy to detect and characterize hundreds of Earth-size and smaller planets in or near the habitable zone. (Launched March 6, 2009; status: operating.)
Large Binocular Telescope Interferometer (LBTI): The LBTI is part of NASA's effort to find planets and ultimately life beyond our solar system. It combines the light from the twin telescope mirrors to make high resolution measurements of stars and galaxies and ...
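The "neither too close nor too far" language above boils down, at first order, to scaling the Earth-Sun distance by the square root of the star's luminosity so that a planet receives Earth-like stellar flux. The Python sketch below shows only that inverse-square part; real habitable-zone boundaries also depend on the planet's atmosphere and the star's spectrum, and the example luminosities are arbitrary.

```python
# First-order habitable-zone scaling: distance at which a planet receives Earth-like flux.

def earth_equivalent_distance_au(luminosity_solar):
    """Orbital distance (AU) receiving the same stellar flux Earth gets from the Sun."""
    return luminosity_solar ** 0.5

for name, lum in [("Sun-like star", 1.0), ("dim red dwarf", 0.04), ("bright F star", 2.5)]:
    print(name, round(earth_equivalent_distance_au(lum), 2), "AU")
# A star with 4% of the Sun's luminosity puts that distance at 0.2 AU;
# one 2.5 times as luminous pushes it out to about 1.6 AU.
```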
http://science.nasa.gov/about-us/smd-programs/ExEP/
Written by G. Jeffrey Taylor Hawai'i Institute of Geophysics and Planetology Huge circular basins, marked by low regions surrounded by concentric mountain ranges, decorate the Moon. The giant holes may have formed during a short, violent period from about 3.9 to 3.8 billion years ago. Three hundred to 1000 kilometers in diameter, their sizes suggest that fast-moving objects with diameters of 20 to about 150 kilometers hit the Moon. Numerous smaller craters also formed. If most large lunar craters formed between 3.9 and 3.8 billion years ago, where were the impactors sequestered for over 600 million years after the Moon formed? One possibility has been studied with computer simulations by Harold Levison and colleagues from the Southwest Research Institute (Boulder, Colorado), Queen's University (Ontario, Canada), and NASA Ames Research Center in California. The idea, originally suggested in 1975 by George Wetherill (Carnegie Institution of Washington), is that a large population of icy objects inhabited the Solar System beyond Saturn. They were in stable orbits around the Sun for several hundred million years until, for some reason, Neptune and Uranus began to form. As the planets grew by capturing the smaller planetesimals, their growing gravitational attraction began to scatter the remaining planetesimals, catapulting millions of them into the inner Solar System. A small fraction of these objects crashed into the Moon and rocky planets, sculpturing the surfaces with immense craters. Calculations suggest that the bombardment would have lasted less than 100 million years, consistent with the ages of craters and impact basins in the lunar highlands. Reference:Levison, H, F., Dones, L., Chapman, C. R., Stern, S. A., Duncan, M. J., and Zahnle, K. (2001) Could the lunar "late heavy bombardment" have been triggered by the formation of Uranus and Neptune? Icarus, vol. 151, p. 286-306. Basins, Craters, and the Lunar Cataclysm Except for their soft glow in reflected sunlight, there is nothing subtle about the highlands of the Moon. They are a cratered mess, a rubble pile where craters are more tightly packed than commuters at rush hour. There are thousands of craters tens of kilometers across, and about 45 that are larger than 300 kilometers in diameter. The largest is the South Pole-Aitken basin, 2500 kilometers across, the distance from San Francisco to Kansas City. This was a blitzkrieg. Impacts of large, fast-moving projectiles have sculpted the lunar highlands, as shown on the left. The large crater in the center is Tsiolkovsky, about 180 kilometers in diameter. It is flooded with dark mare basalt lava flows. Tsiolkovsky sits in an older, larger crater called the Tsiolkovsky-Stark basin, which is about 700 kilometers in diameter. About 45 craters on the Moon are larger than 300 kilometers across; lunar geologists call these basins. The prominent ring of the Orientale Basin (right) is 930 kilometers in diameter and marks the highest rim of the basin. When did all these craters form? Almost all are clearly older than the lunar maria, which fill low spots in the big basins and contain far fewer craters. The maria have ages younger than about 3.8 billion years, so the intense bombardment of the highlands and the formation of the basins took place before that. Some lunar scientists believe that the bombardment took place between 4.5 billion years (when the Moon and planets formed) to 3.8 billion years. In this view, the bombardment rate decreased drastically from 4.5 to 3.8 billion years ago. 
Experts in the formation of planets from swarms of planetesimals tend to favor this idea. The leader of this school of thought is William Hartmann (Planetary Science Institute, Tucson). An alternative view holds that the impact rate declined very rapidly soon after the Moon formed, but then increased dramatically during a short interval between about 3.9 and 3.8 billion years ago. This idea, dubbed the "lunar cataclysm," was first proposed by Fouad Tera and Gerald Wasserburg (Caltech) in 1975 on the basis of the ages of rocks returned by astronauts from the lunar highlands. The idea suffered benign neglect until Graham Ryder (Lunar and Planetary Institute) revived it in 1990 and has since found additional evidence for it among Apollo samples. The ages of melted chunks of rock in meteorites from the lunar highlands also seem to favor a sharp cutoff at about 3.9 billion years. So, many scientists specializing in the analysis of lunar samples believe this story. (Many, not all. There are naysayers, as explained below.) To test the idea of the lunar cataclysm we must determine the ages of the large basins. To do this we need samples whose ages were completely reset by the impact that blasted out a specific basin. The only samples we can be sure were reset are those that were melted during the impact. Unmelted samples retain a memory of their pre-impact origin, so cannot be used. In fact, most of the material tossed out of a growing crater, although fractured and partly pulverized, is not even heated enough to reset rock ages at all. We need impact melts. And we need them from impact basins. Ancient impact events can be dated only by finding pieces of rock that were melted during the formation of a crater or basin. This one from the Apollo 16 site helps date the Nectaris basin. The photo, 2.9 millimeters across, was taken in polarized light, giving false colors diagnostic of the minerals in the rock. The gray to white one is plagioclase feldspar. The lath-like shapes of the feldspar crystals provide unambiguous evidence for crystallization from molten rock. We can use the number of craters on a planetary surface to determine relative ages. Crater counts allow us to safely say that all the maria are younger than the Orientale basin, the youngest and freshest of the great impact basins. The oldest mare rocks are 3.80 billion years. Thus, all impact basins are older than that age. The Imbrium basin is older than the non-mare lava flows of the Apennine Bench Formation, which samples from the Apollo 15 mission show formed 3.84 billion years ago. (The uncertainty in the age is plus or minus 0.02 billion years. This means that the age of the lava flows of the Apennine Bench Formation is somewhere between 3.82 and 3.86 billion years.) Impact melts from the Apollo 14 and 15 missions can be used to date the Imbrium impact, although none can be proved to have been produced by the event itself. Nevertheless, they were collected in the debris pile thrown out of Imbrium, so either were formed by the event or existed before it. These ages, determined by Brent Dalrymple (Oregon State University) and Graham Ryder (Lunar and Planetary Institute) suggest Imbrium formed 3.85 billion years ago (give or take 0.02 billion years). Samples of impact melt collected during the Apollo 17 mission allow Dalrymple and Ryder to date the Serenitatis basin, as the Apollo 17 landing site is inside that basin. Those ages are 3.893 (plus or minus 0.009) billion years, clearly older than Imbrium. 
The Luna 20 mission landed on the rim of the Crisium basin. It returned lunar regolith (soil), so the samples are all small rock fragments. Nevertheless, analytical capabilities are so good that we can determine the rock type from microscopic study, the bulk chemical composition, and the age on a little rock only a millimeter or two across. Those ages, determined by Tim Swindle (University of Arizona) and colleagues, came out to be 3.85 to 3.90 billion years. The best guess is that Crisium is somewhat less than 3.90 billion years old, perhaps 3.89. Apollo 16 landed on ejecta from the Nectaris basin. One group of impact melt rocks at the site is considered by many lunar scientists to have been produced by the Nectaris impact. Those rocks have ages of less than 3.92 billion years. Graham Ryder argues that the age of Nectaris is likely to be about 3.90 billion years. All these ages indicate that the five dated basins formed between 3.9 and 3.8 billion years ago. In addition, by noting which basins deposited debris on other basins, we can determine the relative ages of lots of basins. Using that geologic data, it is clear that at least seven other basins formed during the same time interval. It is possible that most--maybe almost all--lunar basins formed during that short time interval. As G. J. Wasserburg said in a talk at the Lunar Science Conference in 1975, "It must have been a hell of a show to watch!" As usual in science, not everyone agrees. Larry Haskin (Washington University in St. Louis) and others argue that the impact that blasted out the Imbrium basin distributed materials very widely and reset ages all over the place. The result, they argue, is that the narrow interval of less than 100 million years is only apparent. We are dating just the Imbrium event. Most of us do not agree with this point of view. We point out that the chemical compositions of the impact melts vary from landing site to landing site. More important, the ages are distinguishable from one another-there is a genuine age difference between the rocks collected at Apollo 14 and 15 (Imbrium) compared to those at Apollo 17 (Serenitatis), Lunar 20 (Crisium), and Apollo 16 (Nectaris). We will not resolve this debate until we get more samples from lunar basins. In the meantime, we'll argue with each other. The Moon was not the only object bombarded long ago. The ancient surfaces of Mercury and Mars are cratered severely, with many multi-ring basins. Did the same population of objects that pummeled the Moon also dig up the surfaces of Mercury and Mars? In fact, what about the heavily cratered surfaces of the icy satellites of Jupiter and Saturn? (Venus and Earth are too active geologically to have preserved much of a record of the early bombardment. There are rocks older than 3.8 billion years on Earth and some investigators are trying to use them to understand the early bombardment of the Earth.) The surfaces of Mercury (left) and Mars (right) show the same kind of circular scars as the Moon. The Caloris basin, 1300 kilometers in diameter, on Mercury dominates the left half of the photograph. Numerous craters are visible in the image of Mars, including the basin Schiaparelli (400 kilometers in diameter), located in the top center of this Viking mosaic. Planetary scientists agree that these basins are very old, but their precise ages are unknown. Uranus and Neptune: Late Bloomers There was nothing gentle about planet formation. Dust grains glommed together, making clumps. 
The clumps stuck together to make big chunks, until objects were large enough to begin to attract material with their gravity fields, producing objects the size of asteroids (up to a few hundred kilometers in diameter). This led to a period of runaway growth in which tens of objects much larger than the Moon formed. Finally, these huge objects whacked into each other, creating larger planets, but a smaller number of them. The entire process was dominated by large impacts. The accretion of the planets swept up much of the debris, so it is logical to expect that the impact rate on a given planet would decrease with time. In the case of the Moon, ages of impact melts should cluster towards the time when the impact rate was highest--right after the Moon formed 4.5 billion years ago. Instead, the ages cluster around 3.9 billion years. What gives? Imaginative planetary scientists have proposed several explanations for the dramatic increase in the impact rate at about 3.9 billion years ago. One is the leftovers model. This idea proposes that there were a lot of small bodies left over after the formation of the inner planets, enough to make about a Moon's worth. They were swept up fairly rapidly, but there might have been enough left over to do a lot of planetary pummeling about 3.9 billion years ago. Another idea is that a large asteroid broke up and the fragments showered the inner planets. This requires the break up of a hefty asteroid, one as large as the largest surviving asteroid, Ceres (about 1000 kilometers in diameter). Objects that big are difficult to bust apart. Others have suggested that a passing star could have disturbed the orbits of comets in the Oort cloud, the vast collection of comets far beyond Pluto, and sent them zipping through the solar system. And then there's the delayed formation of Uranus and Neptune, an idea proposed originally in 1975 by George Wetherill (Carnegie Institution of Washington). Harold Levison and his coworkers have begun a series of detailed studies of all these ideas. They begin with the late blooming of Uranus and Neptune. Now you would think that if huge planets were going to form, they would do so early, when we think the inner planets, Jupiter, and Saturn formed. Computer calculations, in fact, predict formation times of about 100 million years--but not for Uranus and Neptune. The problem is that the part of the solar system where Uranus and Neptune now reside was populated by small bodies that were widely spaced. This made it difficult for them to attract each other. Closer to Jupiter and Saturn, however, there were more objects available for planet construction. One controversial idea developed by Levison and his colleagues Edward Thommes (Queen's University, Ontario) and Martin Duncan (Southwest Research Institute, Boulder) is that Uranus and Neptune formed in the region of solar system where Jupiter and Saturn formed, and then were scattered outwards by the immense gravity fields of the gas giants. They end up in their present locations, ready to fling planetesimals towards the inner solar system. Scientists understand so little about the formation of Uranus and Neptune that Levison states, "...the possibilities concerning the formation of Uranus and Neptune are almost endless." That being the case, their delayed formation or their transport from near Jupiter and Saturn are as likely as their early formation in their present locations. 
So, Levison and his colleagues assume that the two planets formed 600 million years after the beginning of the solar system, and examined whether their assembly caused impacts in the inner solar system. Let's set the stage. In the story examined by Levison and coworkers, in the early solar system the planets ended at Saturn. Beyond Saturn there was nothing but a huge number of cold, icy planetesimals in orbit around the Sun. After 600 million years or so, something causes some of them to accrete to a couple of larger objects. This causes a rapid growth of the objects, eventually making Uranus and Neptune. Their large gravity begins to alter the orbits of the remaining planetesimals. Some are flung outwards, others inwards. A small percentage of those hurled inwards smack into the icy satellites of Saturn and Jupiter, and into Mars and the rest of the inner planets, including the Moon. Their calculations indicate that the scattered planetesimals would have bombarded the inner solar system for only a few tens of millions of years--the duration of the lunar cataclysm. The calculations also indicate that only one in about 100 million of the scattered objects hit the Moon. This means that there must have been a lot of material in the region where Uranus and Neptune formed, about 30 Earth masses worth. Most models of the solar system indicate that there was at most only 50 Earth masses way out there, consistent with the calculations. The whole process also scatters asteroids, which add to the impacting population in the inner solar system. Testing the Late Arrival of Uranus and Neptune The overdue birth of Uranus and Neptune seems to provide a satisfying explanation for the spike in the bombardment history of the Moon. However, before we declare this case closed, some additional tests need to be done. One is to determine the ages of more lunar basins. Dating impact melt rocks inside lunar meteorites, as Barbara Cohen and her colleagues have done is a good start. An even better way would be to collect samples from the floors of large basins on the Moon. This could be done with automated sample return missions. It is also crucial to determine the ages of basins on Mars. This, too, will require samples to be returned to Earth because it is impossible to make age measurements remotely to the precision and accuracy needed to see if the basins formed between 3.8 and 3.9 billion years ago. Ages from Mercury would be helpful, too, but sample returns from that planet are very difficult. Returning a sample to Earth requires a gigantic, expensive rocket to blast away from the nearby Sun's huge gravity field. Detailed studies of the satellites of Jupiter will also be important. Levison notes that about 500 basins would have formed on Callisto, the second largest of Jupiter's satellites. The heat generated by the impacts would have melted the surface to a depth of perhaps 150 kilometers, possibly erasing almost all of the basins. The fact that there are only four basins known on Callisto is consistent with the late formation of Uranus and Neptune, but more detailed studies of all the icy satellites need to be done. It will be equally important for those studying how planets form to develop consistent stories for the formation of Uranus and Neptune. Levison and colleagues conclude their paper by noting: "the model presented in this paper must be viewed with skepticism until formation models of Uranus and Neptune are available that are consistent with this late arrival." 
It seems certain that scientists will view it all skeptically! The study of the bombardment history of the Solar System is fundamental to understanding the formation of the planets and their early histories. It also requires an interdisciplinary approach. The bombardment history of each planet and moon must be worked out from geological studies and analyses of samples returned from them. All those data can then be used to test the calculations done by scientists like Levison and his coworkers. Last Updated: 16 February 2011
http://solarsystem.nasa.gov/scitech/display.cfm?ST_ID=465
13
13
Data are organized into two broad categories: ♦ Qualitative Data: Information that is difficult to measure, count or express in numerical terms. An example of qualitative data would be how safe a resident feels in his or her apartment. ♦ Quantitative Data: Information that can be expressed in numerical terms, counted or compared on a scale. An example of quantitative data would be the number of 911 calls received in a month. Qualitative research explores attitudes, behavior and experiences through such methods as interviews or focus groups. It attempts to get an in-depth opinion from participants. As it is attitudes, behavior and experiences that are important, fewer people take part in the research, but the contact with these people tends to last a lot longer. The strength of qualitative research is its ability to provide complex textual descriptions of how people experience a given research issue. It provides information about the “human” side of an issue – that is, the often contradictory behaviors, beliefs, opinions, emotions, and relationships of individuals. Qualitative methods are also effective in identifying intangible factors, such as social norms, socioeconomic status, gender roles, ethnicity, and religion, whose role in the research issue may not be readily apparent. When used along with quantitative methods, qualitative research can help us to interpret and better understand the complex reality of a given situation and the implications of quantitative data. Qualitative Research Methods The most common qualitative methods are participant observation, in-depth interviews, and focus groups. Each method is used to obtain a specific type of data. ♦ Participant observation is appropriate for collecting data on naturally occurring behaviors in their usual contexts. ♦ In-depth interviews are optimal for collecting data on individuals’ personal histories, perspectives, and experiences, particularly when sensitive topics are being explored. ♦ Focus groups are effective in eliciting data on the cultural norms of a group and in generating broad overviews of issues of concern to the cultural groups or subgroups represented. Quantitative research generates statistics through the use of large-scale survey research, using methods such as questionnaires or structured interviews. If a market researcher stops you on the streets, or you fill out a questionnaire that has arrived through the mail, this falls under the umbrella of quantitative research. This type of research reaches many more people, but the contact with those people is much quicker than it is in qualitative research. ♦ The main focus is on measuring 'how much is happening to how many people.' ♦ The main tools are large scale surveys analyzed using statistical techniques. Quantitative measurable indicators relevant to the pre-determined hypotheses are identified and combined into questionnaires. ♦ Questionnaires are then conducted for a random sample or stratified random sample of individuals, often including a control group. ♦ Causality is assessed through comparison of the incidence of the variables under consideration between main sample and control group and/or the degree to which they co-occur. ♦ In large-scale research projects teams are composed of a number of skilled research designers and analysts assisted by teams of local enumerators (©2005 Linda Mayoux). Qualitative vs. Quantitative Inquiry The key difference between quantitative and qualitative methods is their flexibility. 
Generally, quantitative methods are fairly inflexible. With quantitative methods such as surveys and questionnaires, for example, researchers ask all participants identical questions in the same order. The response categories from which participants may choose are “closed-ended” or fixed. The advantage of this inflexibility is that it allows for meaningful comparison of responses across participants and study sites. However, it requires a thorough understanding of the important questions to ask, the best way to ask them, and the range of possible responses. Qualitative methods are typically more flexible – that is, they allow greater spontaneity and adaptation of the interaction between the researcher/evaluator and the study participant. For example, qualitative methods ask mostly “open-ended” questions that are not necessarily worded in exactly the same way with each participant. With open-ended questions, participants are free to respond in their own words, and these responses tend to be more complex than simply “yes” or “no.” In addition, with qualitative methods, the relationship between the researcher/evaluator and the participant is often less formal than in quantitative research. Participants have the opportunity to respond more elaborately and in greater detail than is typically the case with quantitative methods. In turn, researchers have the opportunity to respond immediately to what participants say by tailoring subsequent questions to information the participant has provided. To read more about using data, click here.
http://ncjp.org/research-evaluation/overview/research/data-types-sources
13
10
Physical Sciences Division Seeing the World in a Column of Sand Study helps predict where uranium-tainted groundwater will go, when it will arrive Results: By examining the behavior and characteristics of a small volume of sediment, scientists at Pacific Northwest National Laboratory developed a coupled experimental and computational approach that may better predict the behavior of uranium over relatively large areas in the field. Using experimentation and a computer model, the team accurately scaled the chemical effects they observed for a sediment component to describe the combined physical and chemical effects on uranium transport within the sediment as a whole. The scaling information was derived from experimentation with non-reactive tracers moving through the whole sediment. At the Hanford Site, a former plutonium production complex in Washington State, groundwater contaminated with uranium is a concern. When and how much of this uranium reaches the nearby Columbia River is a question whose resolution could have large impacts on remediation. Why It Matters: The Department of Energy, which manages the Site, must predict where along the riverbank the uranium will arrive and when. An accurate determination of river shore impacts will allow effective and efficient implementation of technologies to capture or stabilize the uranium before it enters the river. Chemical experimentation can determine the reactions responsible for controlling uranium movement, but such results are limited in their ability to predict field-scale movement in the natural environment. The laboratory results must be "scaled up" to include the physical and chemical components of the natural system. Scaling is complicated by the size and inherent variety of materials in the subsurface. At Pacific Northwest National Laboratory, researchers developed a combined experimental and computational approach that may better predict the behavior of uranium in the field. Methods: To understand how contaminants move through the subsurface, researchers collected sediment from the field. They separated the sediment into two fractions: one fraction of grains with a diameter greater than 2 millimeters, and the other of grains with a diameter smaller than 2 millimeters. Contaminants preferentially bonded to the finer grains, while large grains made up most of the soil by weight and volume. The team studied the fine grains to define the chemical behavior of uranium. They used the results to establish a chemical model to describe the behavior of uranium in groundwater associated with the fine fraction of the sediment. Then, the researchers packed about 60 pounds of whole sediment into a see-through plastic column with a diameter of 6 inches, and measured how fast a solution moved through the soil. The solution was designed to mimic uranium-contaminated groundwater, but the role of uranium was played by the nonradioactive chemical pentafluorobenzoic acid, or PFBA, which was chosen because it closely matched the physical behavior of uranium. For example, its rate of diffusion in an aqueous solution was close to the rate for uranium. For whole-sediment studies, the movement of uranium was thus isolated from the effects of chemical reaction. A non-reactive tracer, bromide, was used to determine the movement of the groundwater components other than uranium. The researchers found that the groundwater flow and migration of the uranium analogue were strongly influenced by the distribution of large grains (pebbles and cobbles). 
The results from the large column experiment were used to build a physics-based model to describe uranium migration within the complex sand-pebble-cobble system. To predict the overall behavior of uranium movement in the field, the physics-based model was linked computationally with the chemical model of uranium reaction in the fine-grained portion of the sediment. The combined model was tested against experiments with the same column using uranium-containing solutions: the measured results of uranium migration in the large column (responding to the chemical and physical components of the experimental system) validated the model. The experimental and computational approaches developed in this research may be extended to predict uranium migration in the field. The requirement for field application is direct field measurement of groundwater flow and nonradioactive chemical migration. The field information can then be linked with laboratory measurements of chemical reaction to predict uranium transport in the field.

What's Next? The researchers will perform tests with undisturbed soils and conduct studies in the field at the Hanford Site.

Acknowledgments: The Office of Biological and Environmental Research at DOE funded this research through the Environmental Remediation Science Program. The DOE Office of Environmental Management also supported this work through the Hanford Remediation and Closure Science Project. Chongxuan Liu, John Zachara, Nikolla Qafoku, and Zheming Wang at PNNL performed the research. They conducted the column tests in PNNL's 331 Building and performed the laser-induced fluorescence spectroscopy measurements at DOE's EMSL, a national scientific user facility at PNNL. This work supports PNNL's mission to strengthen U.S. scientific foundations for innovation by developing the tools and understanding required to control chemical and physical processes in complex multiphase environments.

Reference: Liu C, JM Zachara, N Qafoku, and Z Wang. 2008. "Scale-dependent Desorption of Uranium from Contaminated Subsurface Sediments." Water Resources Research 44(8):W08413, doi:10.1029/2007WR006478.
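To make the idea of linking a transport model with a chemical (sorption) model more concrete, here is a minimal, hypothetical sketch -- not the PNNL model described above -- of one-dimensional advection and dispersion with a linear retardation factor standing in for the sorption chemistry. All parameter values are invented for illustration.

# Minimal 1D advection-dispersion sketch with linear retardation (Ruby).
# Hypothetical parameters; illustrates the general idea of coupling transport
# physics with a sorption (chemistry) term, not the model in the highlight.
nx   = 100       # number of grid cells
dx   = 0.01      # cell size (m)
v    = 1.0e-4    # pore-water velocity (m/s), hypothetical
d    = 1.0e-7    # dispersion coefficient (m^2/s), hypothetical
kd_r = 5.0       # retardation factor from linear sorption, hypothetical
dt   = 5.0       # time step (s), small enough for stability

c = Array.new(nx, 0.0)   # dissolved concentration in each cell
c[0] = 1.0               # constant-concentration inlet boundary

2000.times do
  new_c = c.dup
  (1...nx - 1).each do |i|
    advection  = -v * (c[i] - c[i - 1]) / dx                    # upwind difference
    dispersion =  d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
    # Retardation slows the apparent migration of the sorbing solute.
    new_c[i] = c[i] + dt * (advection + dispersion) / kd_r
  end
  new_c[0] = 1.0                  # hold the inlet concentration
  new_c[nx - 1] = new_c[nx - 2]   # simple outflow boundary
  c = new_c
end

puts c.each_slice(10).map { |s| s.first.round(3) }.inspect   # coarse profile of the plume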
http://www.pnnl.gov/science/highlights/highlight.asp?groupid=756&id=524
13
12
1. Students test water quality parameters at a local stream, pond, or other aquatic system. 2. Students return to the aquatic ecosystem to take repeat measurements. 3. Students compare their data with current HR-ECOS data. Day 2: Is Our Water Healthy? Students will decide whether their local stream or the larger Hudson River are healthy, using chemical and physical characteristics, and be able to collect data to support or negate their hypotheses. - Measuring tape - Thermometers (air and water) - Orange or ping-pong ball - Waders or appropriate shoes - Dissecting trays, tweezers, nets to observe benthic material, ID cards (optional) - Test kits for DO, phosphates, nitrates, pH, chloride and other appropriate tests - Goggles, gloves - Data sheets- stream/river, pond/lake, chemistry, hypothesis sheets Preparation: Prepare the students using lesson 1 in this module. You should also decide whether you want to include macroinvertebrates in your survey. If so, use the collection techniques in the lesson titled “An Aquatic Ecosystem” in Module 1. Engage: Show students a map of the local watershed and/or the Hudson River watershed. Ask: What do you know about the water quality here? How could you find out? How often do you have to test? Where? What else would you have to know? Together with students, define baseline data. Review safety procedures for outdoor work. Explore 1: In groups, students will test the water quality and make observations about the physical characteristics of a stream or pond (optional: macroinvertebrate collection). Data sheets are provided for both types of ecosystems. Based on the size of your class, you will want to assign groups different variables to test. Decide as a class how you want to sample the stream; do you want to split groups up to sample different areas, or will everyone work in one area? Visit the stream and allow the students to gather their respective data for about 20 minutes (or when all groups seem finished with the survey). All students should do a detailed site drawing. Explain 1: After you return to the classroom, discuss student findings. What did students notice? If students collected macroinvertebrates, discuss the connections between the organisms that live in/near stream with the physical characteristics of that stream. Explore 2: When students have discussed the initial surveys, allow time in their groups to develop hypotheses. Have the group hypothesize how each stream characteristic that they observed might change (or not) over the course of the year, at different locations, or whatever other variable you decided to use. Conduct the second and subsequent testing during the remainder of the school year. Explain 2: Students may or may not be able to measure physical changes or chemical changes. If possible, return to the stream a few more times to collect more data. Encourage students to determine the validity of their data based on the limitations of a school setting (ie limited class time, inability to measure during a storm, at the source of pollution, etc). While students are writing up their lab reports, they are asked to think about the difference between a ‘bend’ and a ‘break’ in an ecosystem (a temporary vs a permanent change). If this is a difficult concept for students, spend some time discussing what this might mean for a stream versus a larger ecosystem such as a river. Ask students to classify different environmental problems as ‘bends’ or ‘breaks’. 
Extend: Students can create a presentation on their research for community members or other audiences within the school. Evaluate: Students turn in the completed hypotheses and data sheets, along with a lab report.
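If a class wants to quantify how conditions changed between visits, a short script can tabulate the differences. The numbers below are invented placeholders for the values students would record on their data sheets.

# Hypothetical comparison of a baseline visit with a later visit (Ruby).
baseline = { "temperature_C" => 14.0, "dissolved_oxygen_mg_L" => 9.1,
             "pH" => 7.4, "nitrate_mg_L" => 0.8 }
revisit  = { "temperature_C" => 19.5, "dissolved_oxygen_mg_L" => 7.6,
             "pH" => 7.2, "nitrate_mg_L" => 1.3 }

baseline.each do |parameter, first_value|
  change = revisit[parameter] - first_value
  direction = change.positive? ? "increased" : "decreased"
  puts format("%-22s %s by %.1f (from %.1f to %.1f)",
              parameter, direction, change.abs, first_value, revisit[parameter])
end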
http://www.caryinstitute.org/educators/teaching-materials/changing-hudson-project/pollution/day-2-our-water-healthy
13
54
Categorical syllogisms are a special type of argument that has been studied for more than two thousand years, since the time of Aristotle. The categorical syllogism is the centerpiece of Aristotelian logic, and it is still the most visible type of argument in logic courses and textbooks today.

Categorical syllogisms, no matter what they are about, have a rigorous structure:
- There are exactly three categorical propositions.
- Two of those propositions are premises; the other is the conclusion.
- There are exactly three terms, each appearing only twice.

Given these structural requirements, categorical syllogisms are rather cumbersome and unnatural. The structure, however, is transparent, and the structural properties of the syllogism (the relationships asserted between the three terms) determine whether an argument is valid or not. Consider the following argument:

(1) All men are mortal. (All M are P)
(2) Socrates is a man. (Some S are M)
(3) Therefore, Socrates is mortal. (Some S are P)

Each of the three terms in a categorical syllogism occurs in exactly two of the propositions in the argument. For ease of identification and reference, these terms are called the major, minor, and middle terms of the argument. The term which occurs in both premises is called the middle term and is usually represented by the letter M. The term which occurs as the predicate term in the conclusion is called the major term and is usually represented by the letter P. The premise which contains the major term is called the major premise. The subject of the conclusion is called the minor term, represented by the letter S, and the premise with the minor term is called the minor premise. So, in the example above, 'Socrates' is the minor term, 'mortal' is the major term, and 'men' is the middle term.

In the example above, it appears that the conclusion follows from the premises. But how can we be sure? Fortunately, there are two distinct methods available to us for testing the validity of a categorical syllogism. The first method relies on an understanding of the properties of categorical propositions: quantity, quality, and the distribution of terms. Four rules apply to all valid categorical syllogisms:

Rule 1: In a valid categorical syllogism, the middle term must be distributed in at least one premise.
Rule 2: In a valid categorical syllogism, any term that is distributed in the conclusion must be distributed in the premises.
Rule 3: In a valid categorical syllogism, the number of negative premises must be equal to the number of negative conclusions.
Rule 4: In a valid categorical syllogism, a particular conclusion cannot be drawn from exclusively universal premises unless one assumes existential import.

We do not assume existential import, and we will refer to arguments that would be valid if we did as traditionally valid. All and only those arguments that pass each of these tests are valid. Failure to satisfy one or more of the rules renders the argument non-valid. Applying these rules to our argument, we see that the middle term, 'men', is distributed in the first premise (the subject of an A proposition is distributed), so the argument passes the first test. Neither of the terms in the conclusion is distributed (both terms in an I proposition are undistributed), so the argument passes the second test. There are no negative premises and no negative conclusions, and 0 = 0, so the argument passes the third test.
Finally, the second premise is particular, so the argument passes the fourth test even though the conclusion is particular.

Consider another example:

(1) Some logicians wear earrings.
(2) Some persons who wear earrings are not rational.
(3) Therefore, some logicians are not rational.

This, too, is a categorical syllogism. However, this argument is not valid. Even though some logicians wear earrings and some persons who wear earrings are not rational, it does not necessarily follow that some logicians are not rational. In fact, the premises could all be true while the conclusion is false. In terms of the four rules, this argument violates Rule 1: the middle term, 'those who wear earrings', is not distributed in either of the premises.

Fallacies and Rule Violations

Categorical syllogisms that violate one or more of the rules commit a fallacy in reasoning. Different violations are given specific names. An argument that violates Rule 1 commits the fallacy of the undistributed middle. If the minor term is distributed in the conclusion but not in the minor premise, the argument commits the fallacy of an illicit minor. If the major term is distributed in the conclusion but not in the major premise, the argument commits the fallacy of an illicit major. An argument with two negative premises commits the fallacy of two negatives; any other violation of Rule 3 is called the fallacy of negative terms. Finally, an argument that violates Rule 4 commits the existential fallacy.

Quiz yourself on applying the rules to arguments to test for validity. Many people find the rule tests for validity unnatural and cumbersome. Fortunately, there is another method for testing categorical syllogisms for validity that involves Venn diagrams.
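The rule tests lend themselves to a mechanical check. The sketch below is not part of the original tutorial; it assumes each proposition is described by its form (A, E, I, or O) and its subject and predicate terms, and it applies the four rules using the standard distribution table (A distributes its subject, E both terms, I neither, O its predicate).

# A minimal Ruby sketch of the four rule tests (hypothetical helper, not from the tutorial).
# Each proposition is a hash with :form (:A, :E, :I, or :O), :subject, and :predicate.
def distributed?(prop, term)
  subj = prop[:subject] == term
  pred = prop[:predicate] == term
  case prop[:form]
  when :A then subj
  when :E then subj || pred
  when :I then false
  when :O then pred
  end
end

def negative?(prop)
  [:E, :O].include?(prop[:form])
end

def universal?(prop)
  [:A, :E].include?(prop[:form])
end

def violations(major, minor, conclusion)
  s, p = conclusion[:subject], conclusion[:predicate]
  # Assumes a well-formed syllogism: the premises share exactly one term, the middle term.
  m = ([major[:subject], major[:predicate]] & [minor[:subject], minor[:predicate]]).first
  problems = []
  problems << "undistributed middle" unless distributed?(major, m) || distributed?(minor, m)
  problems << "illicit major" if distributed?(conclusion, p) && !distributed?(major, p)
  problems << "illicit minor" if distributed?(conclusion, s) && !distributed?(minor, s)
  premises_neg = [major, minor].count { |pr| negative?(pr) }
  problems << "negative premise/conclusion mismatch" unless premises_neg == (negative?(conclusion) ? 1 : 0)
  problems << "existential fallacy" if universal?(major) && universal?(minor) && !universal?(conclusion)
  problems
end

# The earring example (major premise listed first).
major = { form: :O, subject: "earring wearers", predicate: "rational" }
minor = { form: :I, subject: "logicians",       predicate: "earring wearers" }
concl = { form: :O, subject: "logicians",       predicate: "rational" }
puts violations(major, minor, concl).inspect   # prints ["undistributed middle"]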
http://cstl-cla.semo.edu/hill/PL120/notes/syllogisms.htm
13
115
Multiplication (often denoted by the cross symbol "×") is the mathematical operation of scaling one number by another. It is one of the four basic operations in elementary arithmetic (the others being addition, subtraction and division).

Because the result of scaling by whole numbers can be thought of as consisting of some number of copies of the original, whole-number products greater than 1 can be computed by repeated addition; for example, 3 multiplied by 4 (often said as "3 times 4") can be calculated by adding 4 copies of 3 together:

3 × 4 = 3 + 3 + 3 + 3 = 12

Here 3 and 4 are the "factors" and 12 is the "product". Educators differ as to which number should normally be considered as the number of copies, and whether multiplication should even be introduced as repeated addition. For example, 3 multiplied by 4 can also be calculated by adding 3 copies of 4 together:

3 × 4 = 4 + 4 + 4 = 12

Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths (for numbers generally). The area of a rectangle does not depend on which side is measured first, which illustrates that the order in which numbers are multiplied does not matter. In general, multiplying two measurements gives a result of a new type that depends on the types of the measurements; for instance, multiplying a speed by a time gives a distance.

The inverse operation of multiplication is division. For example, 4 multiplied by 3 equals 12, and 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number.

Multiplication is also defined for other types of numbers (such as complex numbers), and for more abstract constructs such as matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter.

Notation and terminology

The terms for the results of the basic arithmetic operations are:

addend + addend = sum
minuend − subtrahend = difference
multiplicand × multiplier = product
dividend ÷ divisor = quotient
nth root (√): degree √ radicand = root

For example: 2 × 3 = 6 (verbally, "two times three equals six").

There are several other common notations for multiplication. Many of these are intended to reduce confusion between the multiplication sign × and the commonly used variable x:
- The middle dot is standard in the United States, the United Kingdom, and other countries where the period is used as a decimal point. In other countries that use a comma as a decimal point, either the period or a middle dot is used for multiplication. Internationally, the middle dot is commonly connotated with a more advanced or scientific use.
- The asterisk (as in 5*2) is often used in programming languages because it appears on every keyboard. This usage originated in the FORTRAN programming language.
- In algebra, multiplication involving variables is often written as a juxtaposition (e.g., xy for x times y or 5x for five times x). This notation can also be used for quantities that are surrounded by parentheses (e.g., 5(2) or (5)(2) for five times two).
- In matrix multiplication, there is actually a distinction between the cross and the dot symbols. The cross symbol generally denotes a vector multiplication, while the dot denotes a scalar multiplication. A similar convention distinguishes between the cross product and the dot product of two vectors.

The numbers to be multiplied are generally called the "factors" or "multiplicands".
When thinking of multiplication as repeated addition, the number to be multiplied is called the "multiplicand", while the number of multiples is called the "multiplier". In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in 3xy²) is called a coefficient.

The result of a multiplication is called a product, and is a multiple of each factor if the other factor is an integer. For example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.

The common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9); however, one method, the peasant multiplication algorithm, does not. Multiplying numbers to more than a couple of decimal places by hand is tedious and error prone. Common logarithms were invented to simplify such calculations. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early twentieth century, mechanical calculators, such as the Marchant, automated multiplication of up to 10-digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand.

Historical algorithms

The Egyptian method of multiplication of integers and fractions, documented in the Ahmes Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 1 × 21 = 21, 2 × 21 = 42, 4 × 21 = 84, 8 × 21 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence:

13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273.

The Babylonians used a sexagesimal positional number system, analogous to the modern-day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table.

In the mathematical text Zhou Bi Suan Jing, dated prior to 300 BC, and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed rod calculus, involving place value addition, subtraction, multiplication and division. These place value decimal arithmetic algorithms were introduced by Al Khwarizmi to Arab countries in the early 9th century.

Modern method

The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication and division. Henry Burchard Fine, then professor of Mathematics at Princeton University, wrote the following:

- The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously.

Computer algorithms

The standard method of multiplying two n-digit numbers requires n² simple multiplications.
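As a rough illustration of where that n² count comes from, here is a minimal sketch (not from the article) of the schoolbook method operating on arrays of digits; every pairing of a digit from one factor with a digit of the other produces one simple multiplication.

# Schoolbook digit-by-digit multiplication in Ruby (illustrative sketch).
def schoolbook_multiply(a_digits, b_digits)
  # Digits are given least-significant first, e.g. 345 => [5, 4, 3].
  result = Array.new(a_digits.length + b_digits.length, 0)
  a_digits.each_with_index do |a, i|
    carry = 0
    b_digits.each_with_index do |b, j|
      total = result[i + j] + a * b + carry     # one simple multiplication per digit pair
      result[i + j] = total % 10
      carry = total / 10
    end
    result[i + b_digits.length] += carry
  end
  result.pop while result.length > 1 && result.last.zero?
  result
end

p schoolbook_multiply([5, 4, 3], [2, 1])   # prints [0, 4, 1, 4], the digits of 4140, least-significant first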
Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. In particular, for very large numbers, methods based on the Discrete Fourier Transform can reduce the number of simple multiplications to the order of n log₂(n).

Products of measurements

When two measurements are multiplied together, the product is of a type depending on the types of the measurements. The general theory is given by dimensional analysis. This analysis is routinely applied in physics but has also found applications in finance. One can only meaningfully add or subtract quantities of the same type, but can multiply or divide quantities of different types. A common example is multiplying speed by time, which gives distance, so

50 kilometers per hour × 3 hours = 150 kilometers.

Products of sequences

Capital Pi notation

The product of a sequence of terms can be written with the product symbol, which derives from the capital letter Π (Pi) in the Greek alphabet. Unicode position U+220F (∏) contains a glyph for denoting such a product, distinct from U+03A0 (Π), the letter. The meaning of this notation is given by:

∏_{i=m}^{n} x_i = x_m · x_{m+1} · x_{m+2} · ... · x_{n−1} · x_n

The subscript gives the symbol for a dummy variable (i in this case), called the "index of multiplication", together with its lower bound (m), whereas the superscript (here n) gives its upper bound. The lower and upper bound are expressions denoting integers. The factors of the product are obtained by taking the expression following the product operator, with successive integer values substituted for the index of multiplication, starting from the lower bound and incremented by 1 up to and including the upper bound. So, for example:

∏_{i=2}^{6} (1 + 1/i) = (1 + 1/2)(1 + 1/3)(1 + 1/4)(1 + 1/5)(1 + 1/6) = 7/2

In case m = n, the value of the product is the same as that of the single factor x_m. If m > n, the product is the empty product, with the value 1.

Infinite products

One may also consider products of infinitely many terms; these are called infinite products. Notationally, we would replace n above by the lemniscate ∞. The product of such a series is defined as the limit of the product of the first n terms, as n grows without bound. That is, by definition,

∏_{i=m}^{∞} x_i = lim_{n→∞} ∏_{i=m}^{n} x_i

One can similarly replace m with negative infinity, and define:

∏_{i=−∞}^{∞} x_i = ( lim_{m→−∞} ∏_{i=m}^{0} x_i ) · ( lim_{n→∞} ∏_{i=1}^{n} x_i ),

provided both limits exist.

For the natural numbers, integers, fractions, and real and complex numbers, multiplication has certain properties:
- Commutative property - The order in which two numbers are multiplied does not matter: x · y = y · x.
- Associative property - Expressions solely involving multiplication or addition are invariant with respect to order of operations: (x · y) · z = x · (y · z).
- Distributive property - Holds with respect to multiplication over addition. This identity is of prime importance in simplifying algebraic expressions: x · (y + z) = x · y + x · z.
- Identity element - The multiplicative identity is 1; anything multiplied by one is itself. This is known as the identity property: x · 1 = x.
- Zero element - Any number multiplied by zero is zero. This is known as the zero property of multiplication: x · 0 = 0.
- Zero is sometimes not included amongst the natural numbers.

There are a number of further properties of multiplication not satisfied by all types of numbers.
- Negative one times any number is equal to the opposite of that number: (−1) · x = −x.
- Negative one times negative one is positive one: (−1) · (−1) = 1.
- The natural numbers do not include negative numbers.
- Order preservation - Multiplication by a positive number preserves order: if a > 0, then if b > c then ab > ac. Multiplication by a negative number reverses order: if a < 0 and b > c then ab < ac.
- The complex numbers do not have an order predicate.

Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions.

In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication:

x × 0 = 0
x × S(y) = (x × y) + x

Here S(y) represents the successor of y, or the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance, S(0), denoted by 1, is a multiplicative identity because

x × 1 = x × S(0) = (x × 0) + x = 0 + x = x.

The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x, y) as equivalent to x − y when x and y are treated as integers. Thus both (0, 1) and (1, 2) are equivalent to −1. The multiplication axiom for integers defined this way is

(x1, y1) × (x2, y2) = (x1·x2 + y1·y2, x1·y2 + x2·y1).

The rule that −1 × −1 = 1 can then be deduced from

(0, 1) × (0, 1) = (0·0 + 1·1, 0·1 + 1·0) = (1, 0),

which represents 1.

Multiplication with set theory

It is possible, though difficult, to create a recursive definition of multiplication with set theory. Such a system usually relies on the Peano definition of multiplication.

Cartesian product

If the n copies of a are to be combined in disjoint union, then clearly they must be made disjoint; an obvious way to do this is to use either a or n as the indexing set for the other. Then, the members of the disjoint union are exactly those of the Cartesian product n × a. The properties of the multiplicative operation as applying to natural numbers then follow trivially from the corresponding properties of the Cartesian product.

Multiplication in group theory

There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses. A simple example is the set of non-zero rational numbers. Here we have identity 1, as opposed to groups under addition, where the identity is typically 0. Note that with the rationals, we must exclude zero because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example we have an abelian group, but that is not always the case. To see this, look at the set of invertible square matrices of a given dimension over a given field. Here it is straightforward to verify closure, associativity, and the inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, so this group is nonabelian. Another fact worth noting is that the integers under multiplication do not form a group, even if we exclude zero. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1.

Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated a · b or ab.
When referring to a group via the indication of the set and operation, the dot is used; for example, our first example could be indicated by (Q \ {0}, ·).

Multiplication of different kinds of numbers

Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions).

- Integers - N × M is the sum of M copies of N when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by N × (−M) = (−N) × M = −(N × M) and (−N) × (−M) = N × M. The same sign rules apply to rational and real numbers.
- Rational numbers - Generalization to fractions A/B and C/D is by multiplying the numerators and denominators respectively: (A/B) × (C/D) = (A × C)/(B × D). This gives the area of a rectangle A/B high and C/D wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers.
- Real numbers - x × y is the limit of the products of the corresponding terms in certain sequences of rationals that converge to x and y, respectively, and is significant in calculus. This gives the area of a rectangle x high and y wide. See Products of sequences, above.
- Complex numbers - Considering complex numbers z1 and z2 as ordered pairs of real numbers (a1, b1) and (a2, b2), the product is (a1·a2 − b1·b2, a1·b2 + a2·b1). This is the same as for reals, a1·a2, when the imaginary parts b1 and b2 are zero.
- Further generalizations - See Multiplication in group theory, above, and Multiplicative Group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (you can add and multiply polynomials, but polynomials are not numbers in any usual sense).
- Often division, x/y, is the same as multiplication by an inverse, x(1/y). Multiplication for some types of "numbers" may have corresponding division without inverses; in an integral domain x may have no inverse "1/x", but x/y may be defined. In a division ring there are inverses, but multiplication is not commutative (since x(1/y) is not the same as (1/y)x, the expression x/y may be ambiguous).

When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2 × 2 × 2) is "two raised to the third power", and is denoted by 2³, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times to multiply the base by itself, so that the expression aⁿ indicates that the base a is to be multiplied by itself n times.

References and external links
- Makoto Yoshida (2009). "Is Multiplication Just Repeated Addition?"
- Henry B. Fine (1907). The Number System of Algebra – Treated Theoretically and Historically, 2nd edition, with corrections, page 90. http://www.archive.org/download/numbersystemofal00fineuoft/numbersystemofal00fineuoft.pdf
- PlanetMath: Peano arithmetic.
- Boyer, Carl B. (revised by Merzbach, Uta C.) (1991). History of Mathematics. John Wiley and Sons, Inc. ISBN 0-471-54397-7.
- Multiplication and Arithmetic Operations In Various Number Systems at cut-the-knot.
- Modern Chinese Multiplication Techniques on an Abacus.
http://en.wikipedia.org/wiki/Multiplication
13
18
The IO class is the basis for all input and output in Ruby. Objects of this class represent connections to various input and output devices such as hard drives, keyboards, and screens. All Ruby programs have three standard I/O streams:
- the input stream, known as STDIN or $stdin, is set to capture data from the keyboard;
- the output stream, named STDOUT or $stdout, is set to output data to a terminal screen;
- the error stream, called STDERR or $stderr, also outputs to the terminal screen.

Whenever unadorned IO methods are called (for example puts, print, and gets), they are routed to and from the standard output and input streams. In order to send output to the error stream, STDERR must be explicitly specified, for example STDERR.puts "text". To change the routing of any of the standard streams, you can reassign the global variables associated with each one ($stdin, $stdout, $stderr). It is recommended that you leave the constants untouched (STDIN, STDOUT, STDERR) so that you can still access the default input and output devices.

Now let's discuss how IO objects read data streams. IO objects use iterators to read and write data to IO streams. Iterations are delineated by the global input record separator, $/. The default global input record separator is the newline character, "\n", which is why Ruby usually processes data one line at a time. By changing the global input record separator you can change how Ruby iterates through input and output streams.

Before covering how to read and write data to IO objects, let's take a look at the most common IO object in Ruby: the File object. As the name suggests, the File object is used to represent files within Ruby. This object provides functionality that enables files to be opened, read from, written to, and closed.

The most common approaches to creating a file object are the File.open and File.new methods. These methods require one parameter along with several optional parameters. The File.open method has two advantages: it supports an optional code block, and it can be called without being preceded by the class name File. The first parameter is the only one that is mandatory. It accepts a string object that holds the location of the file. The location can be specified using an absolute or relative path. The other parameters can be used to define several options, though the only option I will cover here is the file mode. The file mode determines how a file can be used. The most common file modes are:
- Read mode is identified by "r". This mode only supports reading from a file and is the default mode. If the file does not exist, the method will raise an exception.
- Append mode is identified by "a". This mode supports writing to a file by appending new data to any existing content. If the file does not exist, it will be created.
- Write mode is identified by "w". This mode supports writing to a file by overwriting any existing content. If the file does not exist, it will be created.

The File.new method and the File.open method called without a code block function in the same way: they return a reference to a file. When using either of these approaches to create a file object, it is important to remember to call the close method when you are done. If the optional code block is provided, it will be passed a reference to the file object as an argument, and the file will automatically be closed when the block terminates. In this instance, File.open returns the value of the block.
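A short, hypothetical sketch of these options (the file name is made up):

# File.open with a block: the file is closed automatically when the block ends.
File.open("notes.txt", "w") do |file|       # "w" overwrites any existing content
  file.puts "first line"
end

File.open("notes.txt", "a") { |file| file.puts "appended line" }   # "a" appends

# File.new (or File.open without a block) returns the File object itself,
# so we are responsible for closing it.
file = File.new("notes.txt", "r")           # "r" is the default, read-only mode
puts file.read
file.close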
The open-uri library makes it easy to access remote files over a network using the HTTP and HTTPS protocols. After importing this library with the require keyword, you can open remote files using the open method as though they were local files. Files are downloaded and returned as StringIO objects. These objects enable strings to behave like an IO stream, which means that they can be read using the standard IO stream input methods described below.

In Ruby, IO objects feature a suite of standard input and output methods. We'll take a look at the input methods first. All of these methods can be used with different objects to read input from various sources such as the keyboard, files on a hard drive, or a local or remote server.

First let's take a look at the methods that read input one character at a time. There are four such methods: getc, getbyte, readchar, and readbyte. These can be divided in two different ways based on how they work. First, we can group them by how they deal with being called after reaching the end of a file: getc and getbyte return nil, while readchar and readbyte raise a fatal error. The other, more significant difference relates to the data that is actually returned. The getc and readchar methods return characters, whereas the getbyte and readbyte methods return individual bytes as integers; a character encoded with more than one byte is therefore returned one byte at a time, over successive calls.

Now let's move on to the methods that read data one line at a time. There are three such methods: gets, readline, and each. The first two function similarly to their single-character counterparts: they both read one line at a time, but gets returns nil when it reaches the end of a file, while readline raises a fatal error. The third method functions a bit differently. If you recall, each is a standard iterator method. Therefore, it iterates through the entire file (as it would with any other collection), yielding each line to a code block where it can be processed. This approach is ideal if you plan to process all the lines of a file at once. On the other hand, it does not allow you to walk through a file with the same level of control provided by gets and readline.

Lastly, let's take a look at the two methods that read entire files: read and readlines. These methods are designed to read from files only and are not appropriate for getting input from a keyboard. Usually, these methods are only used to read small files. When reading large files it is best to process them iteratively, as this is a more efficient use of memory and processing power.

Before we cover how to output data, let's briefly review the methods for navigating within a file. First off, the rewind method enables you to jump back to the beginning of a file. The pos accessor attribute provides getter and setter methods that enable you to check your current location and to move to a new absolute position within the file. The seek() method can also be used to change your current position within a file. It enables you to move by specifying a position that is relative to your current position, to the start of the file, or to the end of the file. The first argument is an integer that specifies the distance to be moved, while the second parameter is a constant that specifies the point of origin for this movement.
Here are the constants that can be specified for the second argument: IO::SEEK_SET is the default setting and positions the pointer relative to the beginning of the file; IO::SEEK_CUR positions the pointer relative to the current location; IO::SEEK_END positions the pointer relative to the end of the file.

Now let's take a look at the standard output methods: puts, print, printf, and p. For any of these methods to work, the output stream on which they are called must be opened for output. If these methods are called without specifying a receiver object, they default to the standard output stream, $stdout.

The puts(obj, ...) method accepts multiple objects as arguments and writes them to an IO stream. Any objects that are not strings are converted to strings using their to_s method. A newline character is appended to each object before it is written to the IO stream (unless the object already ends with a newline). If the method is called with an array argument, it writes each element on a new line. If called without any arguments, it outputs a newline character. This method always returns nil.

The print(obj, ...) method accepts multiple objects as arguments and writes them to an IO stream. Any objects that are not strings are converted to strings using their to_s method. If multiple arguments are provided, they are joined using the output field separator, saved in the global variable $,. By default this global variable is empty, which means that the strings are appended back to back. If this method is called without any arguments, it prints the last line of input that was read in your program, saved in the global variable $_. This method always returns nil.

The printf method accepts a format string followed by multiple objects. It uses the format string to determine how to integrate the data from the objects into the output string that is sent to the IO stream. The number and types of objects passed into the method must be consistent with the placeholders in the format string; otherwise an error exception will be raised. The IO capabilities used by this method are based on the print method covered above, which is why this method also returns nil. The formatting capabilities embedded in this method are based on Ruby's format method, which accepts the same parameters and returns a formatted string. For more details on creating format strings, check out the documentation.

The p(obj, ...) method accepts multiple objects as arguments and outputs the return value of each object's inspect method, followed by a newline character. Unlike the other output methods we just reviewed, this one returns the object (or objects) passed to it rather than nil.

When you are working with files, many of the error exceptions you will encounter are system errors. In these cases, Ruby is just a messenger, informing you about errors that happened at the operating system level. Several objects have been created to wrap these system errors and enable Ruby to provide intelligible error exception messages. These error exception objects are part of the Errno namespace, which is why all of these errors are labelled Errno::ERRORNAME.
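A brief sketch pulling these pieces together; the file name and contents are hypothetical:

# Reading, navigating, and writing with IO methods.
File.open("sample.txt", "w") { |f| f.puts "alpha", "beta", "gamma" }

File.open("sample.txt", "r") do |f|
  puts f.gets            # reads one line: "alpha"
  puts f.pos             # prints 6, the byte offset just after the first line
  f.rewind               # back to the beginning of the file
  f.each_line { |line| print line }   # iterate over every line
  f.seek(-6, IO::SEEK_END)            # jump to 6 bytes before the end of the file
  puts f.read                          # prints "gamma"
end

# The output methods differ in what they append and what they return.
puts  "done"              # appends a newline, returns nil
print "done\n"            # writes exactly what it is given, returns nil
printf("%d items\n", 3)   # formatted output, returns nil
p ["done", 3]             # writes the inspect form and returns its argument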
http://julioterra.com/journal/2011/10/
13
14
Rate vs Ratio

Rate and ratio are closely related kinds of numbers. Both express how one quantity compares with another, and both are used in mathematics to understand and communicate proportion or value. Knowing which one you are dealing with makes it easier to compare one quantity with another.

A rate is a relationship between two measurements that have different units. When the unit of one quantity is left unspecified, the rate is usually understood to be per unit of time, although a rate of change can also be expressed per unit of length, mass, or some other quantity. The most common kind of rate involves time, as in heart rate or speed. When describing unit rates, the word "per" is used to separate the two measurements used to compute the rate.

A ratio is a relationship between two numbers of the same kind. It may refer to spoonfuls, units, students, persons, or objects. It is commonly written as a:b, or read as "a is to b". At times it is expressed mathematically as the quotient of the two quantities, which tells how many times the first number contains the second (not necessarily a whole number).

Difference between Rate and Ratio

A rate compares a fixed quantity of one thing with a quantity of a different kind, while a ratio compares quantities of the same kind. A unit rate can be written as 10 kilometers per hour, or 10 km/1 hr; a ratio is written in the form 10:1 and read as "10 is to 1". A rate usually describes a change (how much per unit of something else), while a ratio describes a comparison between amounts. Rates appear most often in physics, chemistry, and measurement generally -- speed, heart rate, literacy rate, and so on -- while a ratio can be formed between any objects, things, students, or persons.

Rates and ratios are very important for explaining how one quantity relates to another; a rate is, in effect, a ratio between quantities with different units. We use both constantly in day-to-day life, for example when calculating bank interest or comparing product costs. Life has been made easier because of these two.

• A rate is a special kind of ratio, comparing quantities with different units.
• A rate is used for measurements.
• A ratio can compare any quantities of the same kind.
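A small, hypothetical example of each: the first computes a unit rate from measurements with different units, the second reduces a ratio of like quantities to lowest terms.

# Hypothetical numbers illustrating the difference (Ruby).
distance_km = 150.0
time_hr     = 3.0
unit_rate   = distance_km / time_hr          # a rate: the units differ (km per hour)
puts "#{unit_rate} km per hour"              # prints "50.0 km per hour"

boys, girls = 10, 4                          # a ratio: the same kind of thing (students)
divisor = boys.gcd(girls)
puts "#{boys / divisor}:#{girls / divisor}"  # prints "5:2"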
http://www.differencebetween.com/difference-between-rate-and-vs-ratio/
13
11
The three measurements (angle, side, angle) determine a unique triangle; proving that two triangles are similar requires only two more measurements (the two angles in the second triangle). Also, as we've seen in Session 4, every polygon can be divided into triangles, which can be regarded as its basic building blocks. Therefore, triangles will work in every situation, which is why we use them instead of any other polygon. Answers will vary, but here is one example: An indirect measurement can be taken when two figures are known to be similar and when a corresponding measurement is known from each figure. The ratio of these measurements establishes a scale factor for any other measurements that compare the two similar figures.
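A minimal sketch of indirect measurement via a scale factor, with hypothetical numbers (a one-metre stick and a tree casting shadows at the same moment, so the two triangles are similar):

```python
# Hypothetical similar-triangle measurement: a 1 m stick casts a 1.5 m shadow
# while a tree casts a 12 m shadow at the same time of day.

stick_height = 1.0          # known measurement on the first figure (m)
stick_shadow = 1.5          # corresponding measurement (m)
tree_shadow = 12.0          # measurable part of the second, similar figure (m)

scale_factor = tree_shadow / stick_shadow       # ratio of corresponding parts
tree_height = stick_height * scale_factor       # the indirect measurement
print(f"scale factor: {scale_factor:.1f}, estimated tree height: {tree_height:.1f} m")
# scale factor: 8.0, estimated tree height: 8.0 m
```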
http://www.learner.org/courses/learningmath/measurement/session5/solutions_a.html
13
12
From the summit of Hawaii’s dormant Mauna Kea volcano, astronomers at the W. M. Keck Observatory probe the local and distant Universe with unprecedented power and precision. Their instruments are the twin Keck telescopes—the world’s largest optical and infrared telescopes. Each telescope stands eight stories tall, weighs 300 tons and operates with nanometer precision. The telescopes’ primary mirrors are 10 meters in diameter and are each composed of 36 hexagonal segments that work in concert as a single piece of reflective glass. In the middle of the Pacific Ocean, Hawai’i Island is surrounded by thousands of miles of thermally stable seas. The 13,796-foot Mauna Kea summit has no nearby mountain ranges to roil the upper atmosphere. Few city lights pollute Hawaiian night skies, and for most of the year, the atmosphere above Mauna Kea is clear, calm and dry. Because of the large size of the 10-meter primary mirrors, the Keck telescopes offer the greatest potential sensitivity and clarity available in astronomy. Their performance, and the performance of all ground-based telescopes, is limited by the turbulence of the Earth’s atmosphere, which distorts astronomical images. Astronomers have recently overcome this atmospheric blurring using a technique called adaptive optics (AO). AO measures the atmospheric turbulence and corrects for the resulting image distortions using a deformable mirror that changes shape 2,000 times per second. In 1999, the Keck II telescope became the first large telescope worldwide to develop and install an AO system. The results provided a tenfold improvement in image clarity compared to what was previously possible with Keck and other large, ground-based telescopes. Initially, adaptive optics relied on the light of a star that was both bright and close to the target celestial object. But there are only enough bright stars to allow adaptive optics correction in about one percent of the sky. In response, astronomers developed laser guide star adaptive optics, which uses a special-purpose laser to excite sodium atoms that sit in an atmospheric layer 90 kilometers above Earth. Exciting the atoms in the sodium layer creates an artificial “star” for measuring atmospheric distortions and allows adaptive optics to produce sharp images of celestial objects positioned nearly anywhere in the sky. In 2004, Keck Observatory deployed the first laser guide star adaptive optics system on a large telescope. The laser guide star AO now routinely produces images with greater crispness and detail than those resulting from the Hubble Space Telescope. The W. M. Keck Foundation funded both the original Keck I telescope and, six years later, its twin, Keck II. The project was managed by the University of California and the California Institute of Technology. The Keck I telescope began science observations in May 1993; Keck II saw first light in October 1996. In 1996, the National Aeronautics and Space Administration (NASA) joined as a one-sixth partner in the Observatory. Today Keck Observatory is a 501(c)3 organization supported by both public funding sources and private philanthropy. The organization is governed by the California Association for Research in Astronomy (CARA), whose Board of Directors includes representatives from the California Institute of Technology and the University of California, with liaisons from NASA and the Keck Foundation. The Observatory endorses and supports the Astro2010 Decadal Survey process.
Its senior staff members and Science Steering Committee, along with colleagues at WMKO’s partner institutions, have provided community input through the following contributions:
http://keckobservatory.org/about/the_observatory
13
68
A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons. A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.

Certain types of triangles

Categorized by angle: The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90° in the triangle; then it is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.

Categorized by sides: If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle. If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°. If all three sides of a triangle are of equal length, then it is called an equilateral triangle and all three of the interior angles must be 60°, making it equiangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, but might not be congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent.

Opposite corners and sides in triangles

If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side. Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner. The sides of a triangle, or their lengths, are typically labeled with lower-case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite the longest side, and vice versa. Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base.
Such a line segment would be considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division would both share the height as one of their sides. The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem.

Area of Triangles

If the base and height of a triangle are known, then the area of the triangle can be calculated by the formula A = (1/2) · b · h (A is the symbol for area). Ways of calculating the area inside of a triangle are further discussed under Area.

The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle. The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle. The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle. The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right triangle. Please note that the centres of an equilateral triangle all coincide at the same point.
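A short sketch in Python, with made-up side lengths, tying together the Triangle Inequality and the area formula above:

```python
import math

def is_triangle(a: float, b: float, c: float) -> bool:
    # Triangle Inequality: every pair of sides must sum to more than the third side.
    return a + b > c and a + c > b and b + c > a

def area(base: float, height: float) -> float:
    # A = (1/2) * b * h, as given above
    return 0.5 * base * height

print(is_triangle(3, 4, 5))        # True
print(is_triangle(1, 2, 5))        # False -- the three points could not enclose any area
print(area(base=6, height=4))      # 12.0

# Cross-check with Heron's formula for the 3-4-5 right triangle (base 4, height 3):
a, b, c = 3, 4, 5
s = (a + b + c) / 2
print(math.sqrt(s * (s - a) * (s - b) * (s - c)))   # 6.0, same as area(4, 3)
```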
http://en.wikibooks.org/wiki/Geometry/Triangle
13
11
A comet is an icy small Solar System body that, when close enough to the Sun, displays a visible coma (a thin, fuzzy, temporary atmosphere) and sometimes also a tail. These phenomena are both due to the effects of solar radiation and the solar wind upon the nucleus of the comet. Comet nuclei are themselves loose collections of ice, dust, and small rocky particles, ranging from a few hundred meters to tens of kilometers across. Comet Elenin is coming to the inner solar system this fall of 2011. Comet Elenin (also known by its astronomical name C/2010 X1) was first detected on Dec. 10, 2010 by Leonid Elenin, an observer in Lyubertsy, Russia, who made the discovery using the ISON-NM observatory near Mayhill, New Mexico. At the time of the discovery, the comet was about 647 million kilometers (401 million miles) from Earth. Over the past four-and-a-half months, the comet has — as comets do — closed the distance to Earth's vicinity as it makes its way closer to perihelion (its closest point to the sun). As of May 4, Elenin's distance is about 170 million miles. It is expected to come as close as 22 million miles. Comets and other extraterrestrial visitors fairly often come relatively close to the Earth. For example, on Oct. 20, 2010, Comet Hartley 2 passed just over 11 million miles from Earth. At the time of discovery Elenin had an apparent magnitude of 19.5, making it about 150,000 times fainter than the naked-eye limit of magnitude 6.5. The discoverer, Leonid Elenin, estimates that the comet nucleus is 3-4 km in diameter. As of April 2011, the comet is around magnitude 15 (roughly the brightness of Pluto), and the coma (expanding tenuous dust atmosphere) of the comet is estimated to be about 80,000 km in diameter. "We're talking about how a comet looks as it safely flies past us," said Don Yeomans of NASA's Near-Earth Object Program Office at the Jet Propulsion Laboratory. "Some cometary visitors arriving from beyond the planetary region — like Hale-Bopp in 1997 — have really lit up the night sky, where you can see them easily with the naked eye as they safely transit the inner solar system. But Elenin is trending toward the other end of the spectrum. You'll probably need a good pair of binoculars, clear skies, and a dark, secluded location to see it even on its brightest night." Comet Elenin should be at its brightest shortly before the time of its closest approach to Earth on Oct. 16, 2011. At its closest point, it will be 22 million miles from the Earth. "Comet Elenin will not only be far away, it is also on the small side for comets," said Yeomans. "And comets are not the most densely-packed objects out there. They usually have the density of something akin to loosely packed icy dirt. So you've got a modest-sized icy dirtball that is getting no closer than 35 million kilometers. It will have an immeasurably miniscule influence on our planet. By comparison, my subcompact automobile exerts a greater influence on the ocean's tides than comet Elenin ever will." "This comet may not put on a great show. Just as certainly, it will not cause any disruptions here on Earth. But there is a cause to marvel," said Yeomans. "This intrepid little traveler will offer astronomers a chance to study a relatively young comet that came here from well beyond our solar system's planetary region. After a short while, it will be headed back out again, and we will not see or hear from Elenin for thousands of years." For further information: http://www.jpl.nasa.gov/news/news.cfm?release=2011-135&rn=news.xml&rst=2989
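As a quick check on the "150,000 times fainter" figure: the standard astronomical magnitude scale relates a magnitude difference Δm to a brightness ratio of 100^(Δm/5). A short Python sketch (the rounding is mine):

```python
# Brightness ratio implied by the magnitudes quoted above
# (19.5 at discovery vs. the naked-eye limit of 6.5).

delta_m = 19.5 - 6.5
ratio = 100 ** (delta_m / 5)           # equivalently 10 ** (0.4 * delta_m)
print(f"{ratio:,.0f} times fainter")   # ~158,489 -- i.e. about 150,000

# The distance figures quoted are consistent too: 22 million miles in kilometres
print(f"{22 * 1.609344:.1f} million km")   # ~35.4 million km
```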
http://www.enn.com/top_stories/article/42661
13
18
Line multiplication is a nice activity for teaching multiplication, especially for numbers with more than one digit. The method is shown in the figure. The horizontal lines represent the number 13: the top line represents the tens digit and the lines below it represent the ones digit. The lines are grouped according to their place value. The same is true for the number 22. To find the final answer, count the number of intersections in each cluster and add the clusters diagonally. Dr. James Tanton produced a video about line multiplication. Click the link to view. James Tanton related this procedure to rectangle multiplication; for example, the problem 13 x 22 can be shown in rectangle multiplication (see the figure). If this is done in class, I would suggest connecting it first to a counting problem before you show rectangle multiplication as an explanation of the process of line multiplication. Instead of counting the points at each cluster one by one, you can ask the class to find a more efficient way of counting the points of intersection. It will not take long for students to think of multiplying the array of points in each cluster. Given time, I'm sure students could even 'invent' rectangle multiplication themselves. Inventing and generalizing procedures are very important math habits of mind. Line multiplication, or counting intersections of sets of parallel lines, is generalizable. You can ask your students to show the product of (a+b)(c+d) using this technique; the answer is shown in the figure below. Note that, like rectangle multiplication, this can be extended to more than two terms in each factor. This is much better than the FOIL method, which is restricted to binomials. I'm not a fan of the FOIL method, especially if it is taught and not discovered by the students themselves. Through this line multiplication activity I think they can discover that shortcut, and I hope you will not introduce "FOIL" because that would only add to the cognitive load and not to their understanding.
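A small Python sketch of the counting idea described above: each cluster of intersections contains (digit of one factor) × (digit of the other), and adding the clusters along the diagonals, carrying where needed, reproduces ordinary multiplication.

```python
# A minimal sketch of line multiplication as a counting problem.
# Each cluster of intersections holds (digit of a) * (digit of b) points;
# adding clusters along the diagonals (with carrying) gives the product.

def line_multiply(a: int, b: int) -> int:
    da = [int(d) for d in str(a)]          # e.g. 13 -> [1, 3]
    db = [int(d) for d in str(b)]          # e.g. 22 -> [2, 2]

    # diagonal_sums[k] collects the clusters whose place value is 10**k
    diagonal_sums = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            place = (len(da) - 1 - i) + (len(db) - 1 - j)
            diagonal_sums[place] += x * y   # intersections in this cluster

    # resolve carries, exactly as you would when adding the diagonals by hand
    total, carry = 0, 0
    for k, s in enumerate(diagonal_sums):
        s += carry
        total += (s % 10) * 10 ** k
        carry = s // 10
    total += carry * 10 ** len(diagonal_sums)
    return total

print(line_multiply(13, 22))               # 286
assert line_multiply(13, 22) == 13 * 22
```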
http://math4teaching.com/2012/06/03/line-multiplication-and-the-foil-method/
13
19
Basic R Commands. Written by: Carsten Friis

The purpose of this exercise is to introduce the free statistical software package R. R is a flexible and powerful tool developed by an extensive open source effort. For more on R, see the R-project home page. Let us start things up by playing around with some variables. Remember that variables are often referred to as objects in R. As this exercise uses several functions not covered in the lecture, you may want to use the help system to familiarize yourself with them. You do this by writing: help(function_name).

- Start up R. On Windows/Mac you merely click the R icon. On UNIX/Linux, you start R by typing 'R' in a command terminal.
- Assign the value 12 to a and the value 5 to b. You can verify the values of a and b by simply typing 'a' or 'b' at the R prompt and then pressing return.
- Add the two variables together using the + operator.
- Now try to add them together using the sum() function. This pretty much accomplishes the same thing as the '+' operator. While writing '+' may seem much simpler than using the sum() function, it is not always so. For instance, if you wanted to add the values in a vector together, using the function is much easier.
- The function rnorm() can generate random numbers from the normal distribution. Use it to create two random vectors x and y with 10000 numbers each. (hint: use rnorm(10000))
- Use the str() function to verify that x and y do indeed contain ten thousand numbers. You could also just write 'x' or 'y', but then you would have to wait while R prints the variables to the screen.
- Plot the two vectors using the plot() function. Do they look random to you? (Like a round fuzzy ball)
- Try to use the function hist() to make histograms of x and y. This is a more useful illustration of the two vectors. Now you should be able to confirm that they are normally distributed. While the hist() function serves us well here, the textbook plot to use when dealing with distributions is a density plot. Let's try to make one.
- Construct an object called xd containing the parameters for a density plot using the density() function.
- Now construct the density plot itself using the plot() function. If you typed the command correctly, the plot() function automatically recognizes that the input is a 'density' object, and acts accordingly.
- Because x and y are normally distributed we can calculate the mean and the variance. Use the var() and mean() functions to do this. (See, functions are nice when working with vectors :-) ) Are they similar?
- Use the t.test() function to statistically test whether they are similar (hint: use t.test(x,y)). Look at the p-value: it gives the probability of seeing a difference between the sample means at least this large if x and y were drawn from the same distribution.
- Fun thing: Write down your p-value; we'll compare them later on...
- Use ls() to get an overview of your objects.
- Generate a new y vector so that it is no longer similar to the x vector. Confirm the difference with the t.test() function (the p-value should become extremely small). Use the help system on rnorm() to figure out how to generate the new vector.
- If you have not already done so, try running the graphics demo (write: demo(graphics)); it's pretty and it'll give you an idea of R's capabilities. You must press return with the window running R in focus (i.e. the active window) to cycle through the different plots. Note that pressing return with the plotting window active will accomplish nothing.
http://www.cbs.dtu.dk/chipcourse/Exercises/Ex_BasicR/Ex_BasicR.php
13
26
Types and Typeclasses Believe the type Previously we mentioned that Haskell has a static type system. The type of every expression is known at compile time, which leads to safer code. If you write a program where you try to divide a boolean type with some number, it won't even compile. That's good because it's better to catch such errors at compile time instead of having your program crash. Everything in Haskell has a type, so the compiler can reason quite a lot about your program before compiling it. Unlike Java or Pascal, Haskell has type inference. If we write a number, we don't have to tell Haskell it's a number. It can infer that on its own, so we don't have to explicitly write out the types of our functions and expressions to get things done. We covered some of the basics of Haskell with only a very superficial glance at types. However, understanding the type system is a very important part of learning Haskell. A type is a kind of label that every expression has. It tells us in which category of things that expression fits. The expression True is a boolean, "hello" is a string, etc. Now we'll use GHCI to examine the types of some expressions. We'll do that by using the :t command which, followed by any valid expression, tells us its type. Let's give it a whirl. ghci> :t 'a' 'a' :: Char ghci> :t True True :: Bool ghci> :t "HELLO!" "HELLO!" :: [Char] ghci> :t (True, 'a') (True, 'a') :: (Bool, Char) ghci> :t 4 == 5 4 == 5 :: Bool Here we see that doing :t on an expression prints out the expression followed by :: and its type. :: is read as "has type of". Explicit types are always denoted with the first letter in capital case. 'a', as it would seem, has a type of Char. It's not hard to conclude that it stands for character. True is of a Bool type. That makes sense. But what's this? Examining the type of "HELLO!" yields a [Char]. The square brackets denote a list. So we read that as it being a list of characters. Unlike lists, each tuple length has its own type. So the expression of (True, 'a') has a type of (Bool, Char), whereas an expression such as ('a','b','c') would have the type of (Char, Char, Char). 4 == 5 will always return False, so its type is Bool. Functions also have types. When writing our own functions, we can choose to give them an explicit type declaration. This is generally considered to be good practice except when writing very short functions. From here on, we'll give all the functions that we make type declarations. Remember the list comprehension we made previously that filters a string so that only caps remain? Here's how it looks like with a type declaration. removeNonUppercase :: [Char] -> [Char] removeNonUppercase st = [ c | c <- st, c `elem` ['A'..'Z']] removeNonUppercase has a type of [Char] -> [Char], meaning that it maps from a string to a string. That's because it takes one string as a parameter and returns another as a result. The [Char] type is synonymous with String so it's clearer if we write removeNonUppercase :: String -> String. We didn't have to give this function a type declaration because the compiler can infer by itself that it's a function from a string to a string but we did anyway. But how do we write out the type of a function that takes several parameters? Here's a simple function that takes three integers and adds them together: addThree :: Int -> Int -> Int -> Int addThree x y z = x + y + z The parameters are separated with -> and there's no special distinction between the parameters and the return type. 
The return type is the last item in the declaration and the parameters are the first three. Later on we'll see why they're all just separated with -> instead of having some more explicit distinction between the return types and the parameters like Int, Int, Int -> Int or something. If you want to give your function a type declaration but are unsure as to what it should be, you can always just write the function without it and then check it with :t. Functions are expressions too, so :t works on them without a problem. Here's an overview of some common types. Int stands for integer. It's used for whole numbers. 7 can be an Int but 7.2 cannot. Int is bounded, which means that it has a minimum and a maximum value. Usually on 32-bit machines the maximum possible Int is 2147483647 and the minimum is -2147483648. Integer stands for, er … also integer. The main difference is that it's not bounded so it can be used to represent really really big numbers. I mean like really big. Int, however, is more efficient. factorial :: Integer -> Integer factorial n = product [1..n] ghci> factorial 50 30414093201713378043612608166064768844377641568960512000000000000 Float is a real floating point with single precision. circumference :: Float -> Float circumference r = 2 * pi * r ghci> circumference 4.0 25.132742 Double is a real floating point with double the precision! circumference' :: Double -> Double circumference' r = 2 * pi * r ghci> circumference' 4.0 25.132741228718345 Bool is a boolean type. It can have only two values: True and False. Char represents a character. It's denoted by single quotes. A list of characters is a string. Tuples are types but they are dependent on their length as well as the types of their components, so there is theoretically an infinite number of tuple types, which is too many to cover in this tutorial. Note that the empty tuple () is also a type which can only have a single value: () What do you think is the type of the head function? Because head takes a list of any type and returns the first element, so what could it be? Let's check! ghci> :t head head :: [a] -> a Hmmm! What is this a? Is it a type? Remember that we previously stated that types are written in capital case, so it can't exactly be a type. Because it's not in capital case it's actually a type variable. That means that a can be of any type. This is much like generics in other languages, only in Haskell it's much more powerful because it allows us to easily write very general functions if they don't use any specific behavior of the types in them. Functions that have type variables are called polymorphic functions. The type declaration of head states that it takes a list of any type and returns one element of that type. Although type variables can have names longer than one character, we usually give them names of a, b, c, d … Remember fst? It returns the first component of a pair. Let's examine its type. ghci> :t fst fst :: (a, b) -> a We see that fst takes a tuple which contains two types and returns an element which is of the same type as the pair's first component. That's why we can use fst on a pair that contains any two types. Note that just because a and b are different type variables, they don't have to be different types. It just states that the first component's type and the return value's type are the same. A typeclass is a sort of interface that defines some behavior. If a type is a part of a typeclass, that means that it supports and implements the behavior the typeclass describes. 
A lot of people coming from OOP get confused by typeclasses because they think they are like classes in object oriented languages. Well, they're not. You can think of them kind of as Java interfaces, only better. What's the type signature of the == function? ghci> :t (==) (==) :: (Eq a) => a -> a -> Bool Interesting. We see a new thing here, the => symbol. Everything before the => symbol is called a class constraint. We can read the previous type declaration like this: the equality function takes any two values that are of the same type and returns a Bool. The type of those two values must be a member of the Eq class (this was the class constraint). The Eq typeclass provides an interface for testing for equality. Any type where it makes sense to test for equality between two values of that type should be a member of the Eq class. All standard Haskell types except for IO (the type for dealing with input and output) and functions are a part of the Eq typeclass. The elem function has a type of (Eq a) => a -> [a] -> Bool because it uses == over a list to check whether some value we're looking for is in it. Some basic typeclasses: Eq is used for types that support equality testing. The functions its members implement are == and /=. So if there's an Eq class constraint for a type variable in a function, it uses == or /= somewhere inside its definition. All the types we mentioned previously except for functions are part of Eq, so they can be tested for equality. ghci> 5 == 5 True ghci> 5 /= 5 False ghci> 'a' == 'a' True ghci> "Ho Ho" == "Ho Ho" True ghci> 3.432 == 3.432 True Ord is for types that have an ordering. ghci> :t (>) (>) :: (Ord a) => a -> a -> Bool All the types we covered so far except for functions are part of Ord. Ord covers all the standard comparing functions such as >, <, >= and <=. The compare function takes two Ord members of the same type and returns an ordering. Ordering is a type that can be GT, LT or EQ, meaning greater than, lesser than and equal, respectively. To be a member of Ord, a type must first have membership in the prestigious and exclusive Eq club. ghci> "Abrakadabra" < "Zebra" True ghci> "Abrakadabra" `compare` "Zebra" LT ghci> 5 >= 2 True ghci> 5 `compare` 3 GT Members of Show can be presented as strings. All types covered so far except for functions are a part of Show. The most used function that deals with the Show typeclass is show. It takes a value whose type is a member of Show and presents it to us as a string. ghci> show 3 "3" ghci> show 5.334 "5.334" ghci> show True "True" Read is sort of the opposite typeclass of Show. The read function takes a string and returns a type which is a member of Read. ghci> read "True" || False True ghci> read "8.2" + 3.8 12.0 ghci> read "5" - 2 3 ghci> read "[1,2,3,4]" ++ [1,2,3,4,3] So far so good. Again, all types covered so far are in this typeclass. But what happens if we try to do just read "4"? ghci> read "4" <interactive>:1:0: Ambiguous type variable `a' in the constraint: `Read a' arising from a use of `read' at <interactive>:1:0-7 Probable fix: add a type signature that fixes these type variable(s) What GHCI is telling us here is that it doesn't know what we want in return. Notice that in the previous uses of read we did something with the result afterwards. That way, GHCI could infer what kind of result we wanted out of our read. If we used it as a boolean, it knew it had to return a Bool. But now, it knows we want some type that is part of the Read class, it just doesn't know which one. 
Let's take a look at the type signature of read. ghci> :t read read :: (Read a) => String -> a See? It returns a type that's part of Read but if we don't try to use it in some way later, it has no way of knowing which type. That's why we can use explicit type annotations. Type annotations are a way of explicitly saying what the type of an expression should be. We do that by adding :: at the end of the expression and then specifying a type. Observe: ghci> read "5" :: Int 5 ghci> read "5" :: Float 5.0 ghci> (read "5" :: Float) * 4 20.0 ghci> read "[1,2,3,4]" :: [Int] [1,2,3,4] ghci> read "(3, 'a')" :: (Int, Char) (3, 'a') Most expressions are such that the compiler can infer what their type is by itself. But sometimes, the compiler doesn't know whether to return a value of type Int or Float for an expression like read "5". To see what the type is, Haskell would have to actually evaluate read "5". But since Haskell is a statically typed language, it has to know all the types before the code is compiled (or in the case of GHCI, evaluated). So we have to tell Haskell: "Hey, this expression should have this type, in case you don't know!". Enum members are sequentially ordered types — they can be enumerated. The main advantage of the Enum typeclass is that we can use its types in list ranges. They also have defined successors and predecesors, which you can get with the succ and pred functions. Types in this class: (), Bool, Char, Ordering, Int, Integer, Float and Double. ghci> ['a'..'e'] "abcde" ghci> [LT .. GT] [LT,EQ,GT] ghci> [3 .. 5] [3,4,5] ghci> succ 'B' 'C' Bounded members have an upper and a lower bound. ghci> minBound :: Int -2147483648 ghci> maxBound :: Char '\1114111' ghci> maxBound :: Bool True ghci> minBound :: Bool False minBound and maxBound are interesting because they have a type of (Bounded a) => a. In a sense they are polymorphic constants. All tuples are also part of Bounded if the components are also in it. ghci> maxBound :: (Bool, Int, Char) (True,2147483647,'\1114111') Num is a numeric typeclass. Its members have the property of being able to act like numbers. Let's examine the type of a number. ghci> :t 20 20 :: (Num t) => t It appears that whole numbers are also polymorphic constants. They can act like any type that's a member of the Num typeclass. ghci> 20 :: Int 20 ghci> 20 :: Integer 20 ghci> 20 :: Float 20.0 ghci> 20 :: Double 20.0 Those are types that are in the Num typeclass. If we examine the type of *, we'll see that it accepts all numbers. ghci> :t (*) (*) :: (Num a) => a -> a -> a It takes two numbers of the same type and returns a number of that type. That's why (5 :: Int) * (6 :: Integer) will result in a type error whereas 5 * (6 :: Integer) will work just fine and produce an Integer because 5 can act like an Integer or an Int. To join Num, a type must already be friends with Show and Eq. Integral is also a numeric typeclass. Num includes all numbers, including real numbers and integral numbers, Integral includes only integral (whole) numbers. In this typeclass are Int and Integer. Floating includes only floating point numbers, so Float and Double. A very useful function for dealing with numbers is fromIntegral. It has a type declaration of fromIntegral :: (Num b, Integral a) => a -> b. From its type signature we see that it takes an integral number and turns it into a more general number. That's useful when you want integral and floating point types to work together nicely. 
For instance, the length function has a type declaration of length :: [a] -> Int instead of having a more general type of (Num b) => length :: [a] -> b. I think that's there for historical reasons or something, although in my opinion, it's pretty stupid. Anyway, if we try to get a length of a list and then add it to 3.2, we'll get an error because we tried to add together an Int and a floating point number. So to get around this, we do fromIntegral (length [1,2,3,4]) + 3.2 and it all works out. Notice that fromIntegral has several class constraints in its type signature. That's completely valid and as you can see, the class constraints are separated by commas inside the parentheses.
http://learnyouahaskell.com/types-and-typeclasses
13
14
Layer 2 - Data Link Layer

The Data Link layer provides a powerful and complete set of functions for message transfer between hosts. Protocols of this layer provide the interface between the physical network and the protocols of the upper layers. The core elements are a frame of a special format that encapsulates the data of the network-layer protocol, and a mechanism that regulates access to a shared medium. The physical medium can be unavailable when it is busy, for example when a large number of computers transfer information simultaneously. In this situation, the rules defined at the second layer determine how the conflict is resolved. The Data Link layer checks the availability of the transmission medium, a function related to flow control. Another task of the second layer is to use its detection mechanisms to find and correct mistakes, otherwise known as error notification. The bits are grouped into sets called frames. This layer provides an accurate transfer of each frame: it places a special bit sequence at the beginning and at the end of the frame to mark its boundaries, calculates a control sum (usually called a checksum) by processing all bits of the frame in a specific way, and adds the control sum to the frame. On reception of the frame, the control sum of the received data is calculated again and the result is compared to the control sum carried in the frame. If they match exactly, the frame is considered correct and is accepted. If they do not coincide, an error is detected. By retransmitting the damaged frame, the Data Link layer can correct the mistakes. However, this function of correcting mistakes is absent in some protocols of the second layer, such as Ethernet and Frame Relay. A method of addressing between hosts is incorporated in protocols of the second layer for a LAN medium. The Data Link layer provides delivery of the frames between any two LAN units if the network topology is typical. Typical topologies are the bus, ring and star, and their hybrid versions. Examples of the LAN protocols served are Ethernet, Token Ring, FDDI and 100VG-AnyLAN. In LANs, the protocols of the second layer are used by computers, NICs, bridges, switches and routers. In WAN networks, which usually do not have a regular topology, the second layer provides an exchange of messages between only two neighboring hosts; an example is the PPP (point-to-point) protocol. In this case, delivery of the messages through the entire network uses the facilities of the upper layers. Sometimes the functions of the second layer are indistinct because they are combined with functions of the network layer in a single protocol; an example is ATM (Asynchronous Transfer Mode).
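A simplified Python sketch of the framing-plus-checksum idea described above. The frame layout, the FLAG marker, and the additive checksum are illustrative assumptions, not a real layer-2 format (Ethernet, for instance, uses a 32-bit CRC):

```python
# Toy frame: FLAG + payload + checksum byte + FLAG. The sender appends a checksum,
# the receiver recomputes it and compares; a mismatch means the frame was damaged.

FLAG = b"\x7e"   # hypothetical boundary marker placed at the start and end of the frame

def checksum(payload: bytes) -> int:
    # Toy additive checksum over all payload bytes (real protocols use a CRC).
    return sum(payload) % 256

def build_frame(payload: bytes) -> bytes:
    return FLAG + payload + bytes([checksum(payload)]) + FLAG

def frame_is_valid(frame: bytes) -> bool:
    body = frame[1:-1]                    # strip the boundary markers
    payload, received = body[:-1], body[-1]
    return checksum(payload) == received  # recompute and compare

frame = build_frame(b"network-layer packet")
print(frame_is_valid(frame))                  # True
damaged = frame[:5] + b"X" + frame[6:]        # flip one byte "in transit"
print(frame_is_valid(damaged))                # False -> retransmission would be needed
```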
http://www.certiguide.com/apfr/cg_apfr_Layer2DataLinkLayer.htm
13
19
The familiar trigonometric functions can be geometrically derived from a circle. But what if, instead of the circle, we used a regular polygon? In this animation, we see what the “polygonal sine” looks like for the square and the hexagon. The polygon is scaled so that its inscribed circle has radius 1. We’ll keep using the angle from the x-axis as the function’s input, instead of the distance along the shape’s boundary. (These are only the same value in the case of a unit circle!) This is why the square does not trace a straight diagonal line, as you might expect, but a segment of the tangent function. In other words, the speed of the dot around the polygon is no longer constant; instead, the angle the dot makes changes at a constant rate. Since these polygons are not perfectly symmetrical like the circle, the function will also depend on the orientation of the polygon. More on this subject and derivations of the functions can be found in this other post. Now you can also listen to what these waves sound like. This technique is general for any polar curve. Here’s a heart’s sine function, for instance.
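A short Python sketch of one way to formalize this, consistent with the description above but an assumption on my part: take the value at angle t to be the y-coordinate of the point where the ray at angle t from the centre meets the boundary of a square whose inscribed circle has radius 1.

```python
import math

def square_sine(t: float) -> float:
    # For an axis-aligned square with inradius 1, the boundary point along the ray at
    # angle t lies at distance r = 1 / max(|cos t|, |sin t|) from the centre.
    r = 1.0 / max(abs(math.cos(t)), abs(math.sin(t)))
    return r * math.sin(t)          # y-coordinate of that boundary point

# Sample part of one period; near t = 0 the values follow tan(t), which is why the
# square traces tangent segments rather than straight diagonal lines.
for k in range(9):
    t = k * math.pi / 8
    print(f"t = {t:.3f}  square_sine = {square_sine(t):+.3f}  sin = {math.sin(t):+.3f}")
```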
http://morielle.tumblr.com/
13
14
As a student of psychology you will undoubtedly be required at some point to conduct an experiment. Like other sciences, psychology utilizes the scientific method by formulating a hypothesis and deducing its consequences. Accomplishing this will require experimental design and execution. The first step is to identify and define the problem. This can be as simple as observing everyday life with an eye for cause and effect. An observed behavior is chosen, and a cause and/or a correlation with another measurable behavior is then postulated. Alternatively, you can search the psychology literature on the web, in journals, or in books and find a subject of interest. The goal of this research is to identify what you consider unanswered questions. Having found your question, you must then develop a hypothesis comprising a specific, testable prediction of the expected result. You can predict based on the correlation between the observed behavior and your variable. For example, a study designed to look at the relationship between a student's marital status and study habits could present a hypothesis that married people have different study habits than their single counterparts. In order to determine whether the results of the study are significant, it is essential to also state a null hypothesis, i.e. the possible negative result. The next step in conducting a psychology experiment is to outline an experimental design. The design must specify what the variables are and how they will be measured and interpreted, often a difficult task in the social sciences. It is essential to compare apples to apples or, if you are comparing apples to oranges, to know which is which. This is particularly important when choosing subjects. They must represent a random sample of a significant number of participants from a group, or randomly selected participants from different subsets of the population. These subsets could be based on geographic location, age, sex, race, socioeconomic status, or other criteria. Finally, data collection can begin using your defined testing procedures and selected participants. When data collection is complete, the next step is to analyze the results of your experiment. Statistical analysis will determine whether the results of the study support the original hypothesis. Typically, in the social sciences a chi-square or ANOVA analysis is carried out. Both produce a p-value (probability) that measures how likely the observed relationship between the control and the variable(s) defined in your hypothesis would be if there were no real effect. In this simple case, the values will be identical with either method. In the social sciences, significant results should have a p of 0.05 or less. A substantially higher value of p, 0.1 or above, means the data fail to reject the null hypothesis. However, to accept the null hypothesis outright is to suggest that something is true simply because you did not find any evidence to the contrary, a logical fallacy that should be avoided in scientific research. Finally, after your psychology experiment is finished, it is time to write up your results. A good start is to consult the Publication Manual of the American Psychological Association, 4th Edition, pages 258-264, and the web. Drawing upon our more than 30-year history of granting degrees in professional psychology, Argosy University has developed a curriculum that focuses on interpersonal skills and practical experience alongside academic learning.
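As a sketch of the analysis step, here is a hypothetical two-group comparison in Python using SciPy. The study-hours numbers are invented; the text above mentions chi-square and ANOVA, and for a two-group comparison like the marital-status example a t-test serves the same purpose.

```python
# Hypothetical data: weekly study hours for married vs. single students.
from scipy import stats

married = [12, 15, 9, 14, 11, 16, 13, 10]   # invented sample
single  = [8, 11, 7, 9, 12, 6, 10, 9]       # invented sample

t_stat, p_value = stats.ttest_ind(married, single)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Conventional decision rule in the social sciences:
if p_value < 0.05:
    print("Reject the null hypothesis: the groups appear to differ.")
else:
    print("Fail to reject the null hypothesis.")
```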
http://directory.leadmaverick.com/Helping-Psychology/DallasFort-WorthArlington/TX/10/9075/index.aspx
13
11
Working with Named Pipes

Pipes allow processes to communicate with each other. A pipe may also be known as a "FIFO" (First In, First Out). The advantage over using files as a means of communication is that processes are synchronized by pipes: a process writing to a pipe blocks if there is no reader, and a process reading from a pipe blocks if there is no writer.

The following is an example of how to create a named pipe, named here as pipe1:

cd /tmp
mkfifo pipe1

To send a message to the pipe, use:

echo "hello" > pipe1

The process will appear to be hung at this point. There is no other process running to collect the data, so the kernel suspends the process. The process is said to be "blocked" at this stage. On another terminal, it is possible to collect the data from the pipe, as follows:

cat pipe1

The data from the pipe will now be read by cat (and written to the terminal), and the "blocked" writer process will be free to resume. For some more information, see Bash FAQ #85.

Synchronous bidirectional Client - Server example

Here is a small example of a server process communicating with a client process. The server sends commands to the client, and the client acknowledges each command:

# server - communication example

# Create a FIFO. Some systems don't have a "mkfifo" command, but use
# "mknod pipe p" instead
mkfifo pipe

while sleep 1
do
    echo "server: sending GO to client"

    # The following command will cause this process to block (wait)
    # until another process reads from the pipe
    echo GO > pipe

    # A client read the string! Now wait for its answer. The "read"
    # command again will block until the client wrote something
    read answer < pipe

    # The client answered!
    echo "server: got answer: $answer"
done

# client

# We cannot start working until the server has created the pipe...
until [ -p pipe ]
do
    sleep 1; # wait for server to create pipe
done

# Now communicate...
while sleep 1
do
    echo "client: waiting for data"

    # Wait until the server sends us one line of data:
    read data < pipe

    # Received one line!
    echo "client: read <$data>, answering"

    # Now acknowledge that we got the data. This command
    # again will block until the server read it.
    echo ACK > pipe
done

Write both examples to files server and client respectively, and start them concurrently to see it working:

$ chmod +x server client
$ server & client &
server: sending GO to client
client: waiting for data
client: read <GO>, answering
server: got answer: ACK
server: sending GO to client
client: waiting for data
client: read <GO>, answering
server: got answer: ACK
server: sending GO to client
client: waiting for data
[...]
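For comparison, here is a rough Python analogue of the first example (my own addition, not part of the original page; it assumes a POSIX system). It shows the same blocking behaviour: each open() call waits until the other end of the FIFO is opened.

```python
import os, sys

path = "/tmp/pipe_demo"          # hypothetical path
if not os.path.exists(path):
    os.mkfifo(path)              # same role as the mkfifo command above

if os.fork() == 0:
    # Child acts as the reader (like `cat pipe1`); open() blocks until a writer appears.
    with open(path) as fifo:
        print("reader got:", fifo.read().strip())
    sys.exit(0)
else:
    # Parent acts as the writer (like `echo hello > pipe1`); open() blocks until a reader appears.
    with open(path, "w") as fifo:
        fifo.write("hello\n")
    os.wait()                    # wait for the reader to finish
```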
http://mywiki.wooledge.org/NamedPipes
13
14
Who travelled faster than light NEUTRINOS possess a seemingly endless capacity to discombobulate. First the elusive particles, which theorists believe to be as abundant in the universe as photons, but which almost never interact with anything, turned out to have mass. That discovery, made at Japan's Super-Kamiokande detector in 1998, flew in the face of the Standard Model, a 40-year-old rulebook of particle physics which predicted they ought to be massless (and which has since been tweaked to accommodate the result). Now researchers at CERN, the world's main particle-physics laboratory, report that their neutrinos appear to confound what is, if anything, an even bigger theoretical colossus: Albert Einstein's special theory of relativity. They did it by apparently travelling faster than the speed of light. Physicists from OPERA, one of the experiments at CERN, send beams of neutrinos from the organisation's headquarters on the outskirts of Geneva, through the Earth's crust to an underground laboratory 730km away underneath Gran Sasso, a mountain in the Apennines. They use fancy kit like high-precision GPS and atomic clocks to measure the distance the neutrinos travel to within 20cm and their time of flight to within ten nanoseconds (billionths of a second). The neutrinos in question appear to be reaching the detector 60 nanoseconds faster than light would take to cover the same distance. That translates to a speed 0.002% higher than the 299,792,458 metres per second at which light zaps through a vacuum. The result, published in arXiv, an online database, is based on data from 15,000 neutrinos detected at Gran Sasso over three years. If it holds up it would be the first chink in what has until now been the impenetrable armour of special relativity, a theory which has been tested—and confirmed—time and again since its publication in 1905. The theory states that as an object speeds up, time slows down until it stops altogether on hitting the speed of light. Anything going faster than light would, in other words, be moving backwards in time. A violation of special relativity that affects only neutrinos would be very weird indeed. To confuse matters further, observations of neutrinos emitted by a supernova observed in 1987 established that the particles travel at just below the speed of light through the vacuum of space to a precision four orders of magnitude better than the OPERA claim. That means that the OPERA neutrinos would have to be interacting with matter in some bizarre way that violates special relativity. The odds, it must be admitted, are that a mistake has been made somewhere in the long chain of timing measurements required to compare the moment when neutrinos are created at CERN by smashing a beam of protons into a target, and their detection in Gran Sasso, though OPERA's researchers have done their best to account for all possible instrumental quirks. What makes the result slightly less than incredible is that an experiment in America, called MINOS, detected a similar anomaly in 2007. MINOS's researchers dismissed that result as a mismeasurement. Now, though, the experiment has ten times more data than it did four years ago, as well as ideas about how to make the necessary calculations more accurate. (A proposed upgrade called MINOS+, which could start collecting data in 2013, might be able to determine the flight time to within one nanosecond.) 
Physicists working on another neutrino experiment in Japan, known as T2K, are holding a meeting next week and the OPERA result will be high on the agenda. The effect may be too small to spot in the data recorded before T2K was damaged by the earthquake in March. Moreover, T2K's detector is located just 295km from the neutrino source, so the effect would be just 25 nanoseconds, if it were real. T2K hopes to start taking data again in 2012. If the Japanese and American experiments do see the same strange result, it would be the greatest revolution in physics since, well, special relativity burst onto the scene. And it would be fair to say of a neutrino what a wag once quipped about a lady named Bright: that it went away, in a relative way, and came back on the previous night.
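A quick Python check of the figures quoted above (the speed of light is standard; the rounding is mine):

```python
C = 299_792_458.0            # speed of light in vacuum, m/s

baseline = 730e3             # CERN -> Gran Sasso distance, m
light_time = baseline / C    # time light needs for the trip
early = 60e-9                # neutrinos reported ~60 ns early

excess = early / light_time
print(f"light travel time: {light_time * 1e3:.3f} ms")       # ~2.435 ms
print(f"fractional speed excess: {excess:.5%}")              # ~0.0025%, of order the 0.002% quoted

# Scaling the same fractional effect to T2K's 295 km baseline:
t2k_early_ns = excess * (295e3 / C) * 1e9
print(f"expected early arrival at T2K: {t2k_early_ns:.0f} ns")   # ~24 ns, consistent with the ~25 ns quoted
```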
http://www.economist.com/blogs/babbage/2011/09/neutrinos?fsrc=scn%2Ftw%2Fte%2Fbl%2Ftherewasaneutrinonamedbright
13
34
When an asteroid or comet impacts a planetary body, it releases a tremendous amount of energy. Except for objects smaller than a few meters, the impacting asteroid or comet is obliterated by the energy of the impact. The impactor material is mixed with the target material (the rock on the planet's surface) and dispersed in the form of vapor, melt, and rock fragments. During the impact, sulfur in the impactor or in sulfur-containing target rocks can be injected into the atmosphere in a vapor-rich impact plume. In some impact events, such as Chicxulub, the rocks hit by the impactor contain sulfur. Sedimentary rocks hit by an impactor sometimes include large amounts of evaporites. Evaporites are rocks that are formed from minerals that precipitated from evaporating water, such as halite (rock salt) and calcite (calcium carbonate). Two other very common evaporite minerals are gypsum (CaSO4 + H2O) and anhydrite (CaSO4), both of which contain sulfur (S). Projectiles also contain sulfur-bearing minerals, particularly the mineral troilite (FeS), which is obliterated in an impact event. This material releases its sulfur, which is then injected into the stratosphere. The amount of sulfur injected into the stratosphere depends partly on the composition of the projectile, which can vary from one crater to another. Using chemical traces of the projectiles left at impact craters, scientists can determine the type of meteoritic material involved. Using this data, scientists can then calculate the amount of sulfur each specific impact injects into the stratosphere. The amount of this sulfur can be substantial, because meteoritic materials contain up to 6.25 weight percent sulfur. Consequently, even if the asteroid or comet does not hit an S-rich target, it can still cause dramatic increases in the total amount of atmospheric sulfur. Once vaporized, this sulfur can react with water to form sulfate (or sulfuric acid) particles. These particles can greatly reduce the amount of sunlight that penetrates to the surface of the earth for a period of up to several years. Over time, the sulfate particles will settle out of the stratosphere (upper atmosphere) into the troposphere (lower atmosphere), where they can form acid rain, which can have additional environmental and biological effects.

FAQ About The Table
- What are the projectile types and how are they determined?
- What does enhancement mean?
- What does Ir in ejecta mean?
- What are the Eltanin and Australasian impact events? Do they have craters associated with them?

The table below shows calculations of the abundances of sulfur added to the atmosphere during known large impact events.
These calculations are based on the amount of sulfur in the projectile only, and do not take into account the sulfur present in the target rocks.

Crater | Age (Ma) | Ir in ejecta (g) | Projectile type | Projectile mass (g) | Sulfur injected (g) | Enhancement
Botsumtwi | 1.3 ± 0.2 | - | iron | 4 x 10^13 - 2 x 10^14 | 1 x 10^10 - 2 x 10^12 | 0.05-10
New Quebec | 1.4 ± 0.1 | - | chondrite | 2 x 10^12 - 9 x 10^12 | 3 x 10^10 - 5 x 10^11 | 0.15-2.5
Eltanin | ~2.3 | 6 x 10^7 | mesosiderite | - | 10^12 - 10^13 | 5-50
Popigai | 35 ± 5 | - | chondrite | 1 x 10^17 - 6 x 10^17 | 2 x 10^15 - 4 x 10^16 | 10000-10^5
Wanapitei | 37 ± 2 | - | LL (chondrite) | 1 x 10^13 - 6 x 10^13 | 2 x 10^11 - 2 x 10^12 | 1-10
Chicxulub* | 65 | 2 x 10^11 | - | - | 10^14 - 10^16 | 500-10^5
Kamensk | 65 ± 2 | - | chondrite | 1 x 10^15 - 5 x 10^15 | 1 x 10^13 - 3 x 10^14 | 50-1500
Kara | 73 ± 3 | - | chondrite | 3 x 10^16 - 1 x 10^17 | 5 x 10^14 - 8 x 10^15 | 2500-40000
Ust-Kara | 73 ± 3 | - | chondrite | 1 x 10^15 - 5 x 10^15 | 1 x 10^13 - 3 x 10^14 | 50-1500
Lappajarvi | 73.3 ± 0.4 | - | chondrite | 2 x 10^14 - 1 x 10^15 | 4 x 10^12 - 7 x 10^13 | 20-350
Boltysh | 88 ± 3 | - | chondrite | 1 x 10^15 - 5 x 10^15 | 1 x 10^13 - 3 x 10^14 | 50-1500
Obolon | 215 ± 25 | - | iron | 1 x 10^14 - 6 x 10^14 | 4 x 10^10 - 6 x 10^12 | 0.2-30
Clearwater East | 290 ± 20 | - | CI | 5 x 10^14 - 3 x 10^15 | 7 x 10^12 - 2 x 10^14 | 35-1000
Ilyinets | 395 ± 5 | - | iron | 2 x 10^12 - 8 x 10^12 | 5 x 10^8 - 9 x 10^10 | 0.0025-0.45
Brent | 450 ± 3 | - | L or LL (chondrite) | 3 x 10^12 - 1 x 10^13 | 5 x 10^10 - 3 x 10^11 | 0.25-1.5
Saaksjarvi | 514 ± 12 | - | chondrite | 3 x 10^12 - 2 x 10^13 | 5 x 10^10 - 9 x 10^11 | 0.25-4.5

The majority of geological information about asteroids comes from meteorites, the rock fragments associated with them. Meteorites, and asteroids by association, are classified based on their chemical composition. Given below are descriptions of the various meteorite types.

Ordinary Chondrites (H, L, LL): These stony meteorites are the most common meteorites. They are composed mostly of silicate minerals (olivine, pyroxene, plagioclase) and represent undifferentiated primitive material from the solar nebula, dating back over 4.5 billion years. Chondrites are characterized by small, globular, millimeter-sized inclusions called chondrules. If you could remove chondrules from the meteorites, they would roll across a table like a marble. These meteorites also contain several percent metal. Both the chondrules and the metal content of chondrites can be seen in the photo below. The sulfur in chondrites is primarily in the form of troilite (FeS) - a sulfide mineral. (Above) The ordinary chondrite - Dos Cabezas. The S-type asteroid 243 Ida. While the link between specific types of meteorites and asteroids is uncertain, some scientists have suggested that S-type asteroids like Ida are composed of ordinary chondrite material.

Iron meteorites are composed of a nickel-iron alloy along with trace amounts of non-metallic minerals and sulfides. Some iron meteorites are thought to be fragments of the iron core of a differentiated asteroid. The sulfur in iron meteorites is primarily in the form of troilite (FeS) - a sulfide mineral. (Above) The iron meteorite - Bagdad. Shape model rendering from radar data of the M-type asteroid 216 Kleopatra (NASA/JPL). The reflectance characteristics of M-type asteroids like Kleopatra suggest that they may be composed of iron-nickel, which hints at a possible source for iron meteorites.

Carbonaceous Chondrites (CI, CM, CV, CO, CK, and others): Carbonaceous chondrites are very rare and primitive meteorites. These meteorites contain organic compounds as well as hydrous silicates (water-bearing minerals).
The sulfur in carbonaceous chondrites can take the form of sulfide minerals such as troilite (FeS), elemental sulfur, or water-soluble sulfate. The Allende meteorite (CV) shown below is approximately 2.1% sulfur by mass, while CI carbonaceous chondrites have ~6.25% sulfur. (Above) The carbonaceous chondrite Allende. The C-type asteroid 253 Mathilde (NASA/JHUAPL). While the link between specific types of meteorites and asteroids is uncertain, some scientists have suggested that C-type asteroids like Mathilde are composed of carbonaceous chondrite material.

Stony Iron - This is an unusual type of meteorite that is composed of nearly equal amounts of metals and silicates. Breccia is a rock type that contains broken rock fragments welded into a finer-grained matrix. Mesosiderites probably represent the shattered regolith of an asteroid that has been the target of several asteroid-asteroid collisions. Pieces of this regolith can be blasted off the surface of a larger body and eventually reach the Earth as meteorites or, if large enough, as impacting bolides. The sulfur in mesosiderites is primarily in the form of troilite (FeS) - a sulfide mineral. (Above) The mesosiderite - Clover Springs. The S-type asteroid 951 Gaspra (NASA). While the link between specific types of meteorites and asteroids is uncertain, some scientists have suggested that S-type asteroids like Gaspra are composed of stony-iron meteoritic material, possibly including mesosiderite material.

What does enhancement mean? The enhancement value listed in the table is a calculation of how many times greater the overall sulfur content of the stratosphere would be following a large impact event. The baseline for this calculation is the background sulfur present in our current atmosphere. This number likely fluctuated to a small degree over geologic time, especially following periods of extreme volcanism.

What does Ir in ejecta mean? This is the amount of the rare trace element iridium (Ir), a platinum-group metal, sampled in an impact crater's ejecta. The importance of iridium in impact ejecta comes from its very low concentration in the Earth's rocks and its relatively high concentration in meteorites, comets, and asteroids. Anomalously high levels of iridium in a thin clay layer at the Cretaceous-Tertiary boundary are what led Luis Alvarez (a Nobel Prize-winning physicist) and his son Walter (a geologist) to propose the impact hypothesis for the K-T mass extinction.

What are the Eltanin and Australasian impact events? Do they have craters associated with them? The Eltanin event occurred over two million years ago when a 1-4 km asteroid impacted in the Southern Ocean between the southernmost tip of South America and Antarctica. The impact evidence for Eltanin stems from iridium anomalies in ocean drilling cores (see above). Although the impact was very large, no submarine crater has been found. The Australasian impact is inferred from the huge number of tektites found over thousands of kilometers of southeast Asia and Australia. Tektites are small teardrop- or button-shaped rocks that are formed by the solidification of molten droplets. The droplets were terrestrial rocks and dirt that were superheated during the impact, ejected from their source crater, and then rained down on the land and sea. The source crater for the 700,000-year-old Australasian tektite strewn field has not been found.
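To make the 6.25 weight-percent figure concrete, here is a back-of-the-envelope Python sketch. The projectile diameter and density below are illustrative assumptions, not values taken from the table.

```python
import math

diameter_m = 1_000                     # assumed 1 km chondritic impactor
density = 3_300                        # kg/m^3, assumed typical chondrite density
sulfur_fraction = 0.0625               # "up to 6.25 weight percent sulfur"

volume = (4 / 3) * math.pi * (diameter_m / 2) ** 3   # treat the impactor as a sphere
mass_kg = density * volume
sulfur_g = mass_kg * 1_000 * sulfur_fraction         # convert kg -> g before scaling

print(f"projectile mass: {mass_kg:.2e} kg")          # ~1.7e12 kg
print(f"upper-limit sulfur released: {sulfur_g:.2e} g")   # ~1e14 g, comparable to mid-table entries
```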
Return to Environmental Effects Main Page This web site is based on information originally created for the NASA/UA Space Imagery Center’s Impact Cratering Series. Concept and content by David A. Kring and Jake Bailey. Design, graphics, and images by Jake Bailey and David A. Kring. Any use of the information and images requires permission of the Space Imagery Center and/or David A. Kring (now at LPI).
http://www.lpi.usra.edu/science/kring/epo_web/impact_cratering/enviropages/atmossulphur/sulphurweb.html
By using a rotating hoop for a space elevator, objects sliding along Rotating Space Elevator (RSE) strings do not require internal engines or propulsion to be transported from the Earth's surface into outer space. (H/T Tom Craver) A previous article had noted that the strength of the space elevator tether and the power of the engines driving the climbers are interrelated in determining how feasible a space elevator is. Removing the need for powered climbers could therefore improve the overall feasibility of space elevators.

Physorg has more information as well:

To initiate the double rotational motion, the string system is given an initial spin. Other than this initial spin, the RSE moves purely under the influence of inertia and gravity. In simulations, Golubović and Knudsen show how a load starting at rest near the Earth spontaneously oscillates between its starting point near Earth and a turning point in outer space (close to the top of the string). Using a specially chosen variation of the tapered elevator cable cross-sectional area, the scientists could ensure that the RSE string will indefinitely maintain its initial looped shape. Golubović said that, as far as he knew, this type of motion does not occur in any other areas of physics or astronomy.

Golubović and Knudsen also proposed a slightly different form of the RSE, which combines an RSE with an LSE (an ellipse-like rotating string is attached to a linear string). This "uniform stress RSE" (USRSE) could be designed with its loop positioned above the Earth's surface, which might have advantages for launching satellites. The scientists also show that stacking several USRSE loops could create pathways reaching deeply into outer space, and loads could cross from string to string at intersection points.

The RSEs are rapid outer space transportation systems that require no internal engines for the climbers sliding along the elevator strings. RSE strings exhibit interesting nonlinear dynamics and statistical physics phenomena. The RSE's action employs basic natural phenomena: gravitation and inertial forces. Satellites and spacecraft carried by sliding climbers can be released (launched) along RSEs. RSE strings can host space stations and research posts. Sliding climbers can then be used to transport useful loads and humans from the Earth to these outer space locations.

Strings and membranes play prominent roles in modern-day investigations in statistical physics, nonlinear dynamics, biological physics, and applied physical sciences. Technologically achievable celestial-size strings are no exception to this. Ever since an early dream of Tsiolkovsky, the vision of the space elevator, a giant string connecting the Earth with the heavens, has intrigued diverse researchers as well as science fiction writers. The space elevator reaches beyond the geosynchronous satellite orbit. In its equilibrium state, the space elevator is straight and at rest in the non-inertial frame associated with the rotating planet, thanks to a balance between gravity and the centrifugal force acting on the long elevator string. A major shortcoming of this traditional linear space elevator (LSE) is that significant energy must be supplied locally (by internal engines, propulsion, or laser light pressure) to climbers creeping along the LSE string to allow them to leave the gravitational potential trap of the Earth. This study opens a new avenue in the physics of strings and membranes.
We introduce for the first time a novel class of nonlinear dynamical systems, Rotating Space Elevators (RSE). The RSEs are multiply rotating systems of strings. Remarkably, useful loads and humans sliding along RSE strings do not require internal engines or propulsion to be rapidly transported (sledded) away from the Earth's surface into outer space. The nonlinear dynamics and statistical physics of RSE strings are shown here to be interesting in their own right as well. Our RSE is a double-rotating floppy string. In its quasi-periodic-like state, the RSE motion is nearly a geometrical superposition of: a) geosynchronous (one-day period) rotation around the Earth, and b) yet another rotational motion of the string, which is typically much faster (with a period of roughly tens of minutes) and goes on around a line perpendicular to the Earth at its equator. This second, internal rotation plays a very special role: it provides the dynamical stability of the RSE shape and, importantly, it also provides a mechanism for the climbing of objects free to slide along the RSE string. The RSE can be envisioned in various shapes. As revealed here, for a given RSE shape, by a special (magical) choice of the mass distribution of the RSE string, the simple double-rotating geometrical motion can be (under some conditions) made to represent an approximate yet exceedingly accurate solution to the exact equations of the RSE string dynamics.

Classical and statistical mechanics of celestial-scale spinning strings: Rotating space elevators (full 6-page article)

Figure 1. (Color online) In (a) and (d), respectively, the elliptical RSE and the USRSE (attached to an LSE) discussed in the text. In these figures we also include the equipotentials of the effective potential. Sliding climbers oscillate between two turning points (indicated by arrows) that are on the same equipotential. From our simulations: the R1(t) coordinate of the climber is shown in (b) and (e) on the floppy RSEs with initial shapes in, respectively, (a) and (d). (With sliding friction (not included here), climbers would eventually stop at the RSE point minimizing the effective potential, which occurs close to the RSE point maximizing R2 in (a) and (d).) The magical mass distributions derived by eq. (8) are shown in (c) and (f) for the RSEs in, respectively, (a) and (d).
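The "geometrical superposition" described above can be visualized with a toy calculation. The sketch below is only a crude two-dimensional epicycle picture of a point carried by a slow geosynchronous rotation plus a much faster loop rotation; it ignores the loop's real three-dimensional orientation and the string dynamics, and every numerical value in it is an assumed, illustrative one rather than a figure from the paper.

```python
import math

# Crude 2-D "epicycle" toy of the double rotation described above: the loop's center
# is carried around the Earth once per sidereal day, while a point on the string
# circles that center with a much shorter period. This ignores the loop's true 3-D
# orientation and all string dynamics; every value here is an assumption.

SIDEREAL_DAY = 86164.0   # s
R_CENTER = 4.2e7         # assumed distance of the loop's center from Earth's center, m
LOOP_RADIUS = 1.0e7      # assumed loop radius, m
T_INTERNAL = 1800.0      # assumed internal rotation period (~30 minutes), s

def point_on_string(t):
    """Toy position of one point on the string at time t, Earth-centered frame."""
    w_geo = 2 * math.pi / SIDEREAL_DAY
    w_int = 2 * math.pi / T_INTERNAL
    x = R_CENTER * math.cos(w_geo * t) + LOOP_RADIUS * math.cos(w_int * t)
    y = R_CENTER * math.sin(w_geo * t) + LOOP_RADIUS * math.sin(w_int * t)
    return x, y

for t in (0.0, 450.0, 900.0, 1350.0):
    x, y = point_on_string(t)
    print(f"t = {t:6.0f} s   distance from Earth's center = {math.hypot(x, y)/1e3:8.0f} km")
```

Even this toy shows the qualitative behavior the authors describe: the point's distance from Earth swings by roughly the loop diameter every internal period, without any engine doing work on it.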
http://nextbigfuture.com/2009_05_17_archive.html
A statistical hypothesis test is a method of making statistical decisions from and about experimental data. Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible." This is done by asking and answering a hypothetical question. One use is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.

As an example, consider determining whether a suitcase contains some radioactive material. Placed under a Geiger counter, it produces 10 counts per minute. The null hypothesis is that no radioactive material is in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects in the suitcase. We can then calculate how likely it is that we would observe 10 counts per minute if the null hypothesis were true. If that is likely, for example if the null hypothesis predicts on average 9 counts per minute with a standard deviation of 1 count per minute, we say that the suitcase is compatible with the null hypothesis (which does not imply that there is no radioactive material; we simply cannot tell from this measurement). On the other hand, if the null hypothesis predicts, for example, 1 count per minute with a standard deviation of 1 count per minute, then the suitcase is not compatible with the null hypothesis, and some other factor is likely responsible for the measurements.

The test described here is more fully the null-hypothesis statistical significance test. The null hypothesis is a conjecture that exists solely to be falsified by the sample. Statistical significance is a possible finding of the test: that the sample is unlikely to have occurred by chance given the truth of the null hypothesis. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: reject or do not reject (which is not the same as accept). A calculated value is compared to a threshold.

One may be faced with the problem of making a definite decision with respect to an uncertain hypothesis which is known only through its observable consequences. A statistical hypothesis test, or more briefly, hypothesis test, is an algorithm to choose between the alternatives (for or against the hypothesis) which minimizes certain risks.

This article describes the commonly used frequentist treatment of hypothesis testing. From the Bayesian point of view, it is appropriate to treat hypothesis testing as a special case of normative decision theory (specifically a model selection problem), and it is possible to accumulate evidence in favor of (or against) a hypothesis using concepts such as likelihood ratios, known as Bayes factors.

There are several preparations we make before we observe the data.
- The null hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. For example: The mean response to the treatment being tested is equal to the mean response to the placebo in the control group. Both responses have the normal distribution with this unknown mean and the same known standard deviation ... (value).
- A test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. In the example given above, it might be the numerical difference between the two sample means, m1 − m2.
- The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). In this example, the difference between sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor √(1/n1 + 1/n2), where n1 and n2 are the sample sizes.
- Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the null hypothesis is correct is called the alpha value (or size) of the test.
- The probability that the test statistic falls in the critical region when the parameter value is θ, where θ is a value allowed by the alternative hypothesis, is called the power of the test at θ. The power function of a critical region is the function that maps each θ to the power of the test at θ.

After the data are available, the test statistic is calculated and we determine whether it is inside the critical region. If the test statistic is inside the critical region, then our conclusion is one of the following:
- Reject the null hypothesis. (Therefore the critical region is sometimes called the rejection region, while its complement is the acceptance region.)
- An event of probability less than or equal to alpha has occurred.

The researcher has to choose between these logical alternatives. In the example we would say: the observed response to treatment is statistically significant.

If the test statistic is outside the critical region, the only conclusion is that there is not enough evidence to reject the null hypothesis. This is not the same as evidence in favor of the null hypothesis; that we cannot obtain using these arguments, since lack of evidence against a hypothesis is not evidence for it. On this basis, statistical research progresses by eliminating error, not by finding the truth.

Definition of terms

Following the exposition in Lehmann and Romano, we shall make some definitions:
- Simple hypothesis - Any hypothesis which specifies the population distribution completely.
- Composite hypothesis - Any hypothesis which does not specify the population distribution completely.
- Statistical test - A decision function that takes its values in the set of hypotheses.
- Region of acceptance - The set of values for which we fail to reject the null hypothesis.
- Region of rejection / Critical region - The set of values of the test statistic for which the null hypothesis is rejected.
- Power of a test (1 − β) - The test's probability of correctly rejecting the null hypothesis: the complement of the false negative rate, β.
- Size / Significance level of a test (α) - For simple hypotheses, this is the test's probability of incorrectly rejecting the null hypothesis: the false positive rate. For composite hypotheses this is the upper bound of the probability of rejecting the null hypothesis over all cases covered by the null hypothesis.
- Most powerful test - For a given size or significance level, the test with the greatest power.
- Uniformly most powerful test (UMP) - A test with the greatest power for all values of the parameter being tested.
- Unbiased test - For a specific alternative hypothesis, a test is said to be unbiased when the probability of rejecting the null hypothesis is not less than the significance level when the alternative is true, and is less than or equal to the significance level when the null hypothesis is true.
- Uniformly most powerful unbiased (UMPU) - A test which is UMP in the set of all unbiased tests.

Common test statistics
- One-sample z-test: z = (x̄ − μ0) / (σ / √n). Conditions: (normal distribution or n > 30) and σ known. (z is the distance from the mean in standard deviations. It is possible to calculate a minimum proportion of a population that falls within n standard deviations; see Chebyshev's inequality.)
- Two-sample z-test: z = ((x̄1 − x̄2) − d0) / √(σ1²/n1 + σ2²/n2). Conditions: normal distributions, independent observations, and σ1 and σ2 known.
- One-sample t-test: t = (x̄ − μ0) / (s / √n), df = n − 1. Conditions: (normal population or n > 30) and σ unknown.
- Paired t-test: t = (d̄ − d0) / (s_d / √n), df = n − 1. Conditions: (normal population of differences or n > 30) and σ unknown.
- One-proportion z-test: z = (p̂ − p0) / √(p0(1 − p0)/n). Conditions: n·p > 10 and n(1 − p) > 10.
- Two-proportion z-test, equal variances (pooled): z = (p̂1 − p̂2) / √(p̂(1 − p̂)(1/n1 + 1/n2)), where p̂ = (x1 + x2)/(n1 + n2). Conditions: n1·p1 > 5, n1(1 − p1) > 5, n2·p2 > 5, n2(1 − p2) > 5, and independent observations.
- Two-proportion z-test, unequal variances: z = ((p̂1 − p̂2) − d0) / √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2). Conditions: n1·p1 > 5, n1(1 − p1) > 5, n2·p2 > 5, n2(1 − p2) > 5, and independent observations.
- Two-sample pooled t-test: t = ((x̄1 − x̄2) − d0) / (s_p √(1/n1 + 1/n2)), df = n1 + n2 − 2, where s_p² = ((n1 − 1)s1² + (n2 − 1)s2²)/(n1 + n2 − 2). Conditions: (normal populations or n1 + n2 > 40), independent observations, σ1 = σ2, and σ1 and σ2 unknown.
- Two-sample unpooled t-test: t = ((x̄1 − x̄2) − d0) / √(s1²/n1 + s2²/n2), with df from the Welch–Satterthwaite approximation. Conditions: (normal populations or n1 + n2 > 40), independent observations, σ1 ≠ σ2, and σ1 and σ2 unknown.

Definition of symbols: n = sample size; x̄ = sample mean; μ0 = population mean; σ = population standard deviation; s = sample standard deviation; t = t statistic; df = degrees of freedom; n1 = sample 1 size; n2 = sample 2 size; s1 = sample 1 std. deviation; s2 = sample 2 std. deviation; d̄ = sample mean of differences; d0 = population mean difference; s_d = std. deviation of differences; p̂1 = proportion 1; p̂2 = proportion 2; μ1 = population 1 mean; μ2 = population 2 mean; min(n1, n2) = minimum of n1 or n2.

Hypothesis testing is largely the product of Ronald Fisher, Jerzy Neyman, Karl Pearson and (son) Egon Pearson. Fisher was an agricultural statistician who emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an (extended) hybrid of the Fisher and Neyman/Pearson formulations, methods and terminology developed in the early 20th century.

The following example is summarized from Fisher. Fisher thoroughly explained his method in a proposed experiment to test a Lady's claimed ability to determine the means of tea preparation by taste. The article is less than 10 pages in length and is notable for its simplicity and completeness regarding terminology, calculations and design of the experiment. The example is loosely based on an event in Fisher's life. The Lady proved him wrong.
- The null hypothesis was that the Lady had no such ability.
- The test statistic was a simple count of the number of successes in 8 trials.
- The distribution associated with the null hypothesis was the binomial distribution familiar from coin-flipping experiments.
- The critical region was the single case of 8 successes in 8 trials, based on a conventional probability criterion (< 5%).
- Fisher asserted that no alternative hypothesis was (ever) required.

If, and only if, the 8 trials produced 8 successes was Fisher willing to reject the null hypothesis, effectively acknowledging the Lady's ability with >98% confidence (but without quantifying her ability). Fisher later discussed the benefits of more trials and repeated tests.
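A quick check of the arithmetic in this simplified summary (8 independent trials with chance-level success probability 1/2 under the null) shows why the single case of 8 successes clears the conventional 5% criterion. This is a sketch of the simplified description given here, not of Fisher's original design with four cups of each kind.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Null hypothesis: the Lady is guessing, so each of the 8 trials succeeds with p = 0.5.
p_critical = binom_pmf(8, 8, 0.5)   # chance of landing in the critical region by luck alone
print(p_critical)                   # 1/256 ~= 0.0039, comfortably below the 0.05 criterion
```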
Little criticism of the technique appears in introductory statistics texts. Criticism is of the application or of the interpretation rather than of the method. Criticism of null-hypothesis significance testing is available in other articles (null-hypothesis and statistical significance) and their references. Attacks and defenses of the null-hypothesis significance test are collected in Harlow et al.

The original purpose of Fisher's formulation, as a tool for the experimenter, was to plan the experiment and to easily assess the information content of the small sample. There is little criticism, Bayesian in nature, of the formulation in its original context. In other contexts, complaints focus on flawed interpretations of the results and over-dependence on, or over-emphasis of, one test. Numerous attacks on the formulation have failed to supplant it as a criterion for publication in scholarly journals. The most persistent attacks have originated from the field of psychology. After review, the American Psychological Association did not explicitly deprecate the use of null-hypothesis significance testing, but adopted enhanced publication guidelines which implicitly reduced the relative importance of such testing. The International Committee of Medical Journal Editors recognizes an obligation to publish negative (not statistically significant) studies under some circumstances. The applicability of null-hypothesis testing to the publication of observational (as contrasted to experimental) studies is doubtful.

Some statisticians have commented that pure "significance testing" has what is actually a rather strange goal of detecting the existence of a "real" difference between two populations. In practice a difference can almost always be found given a large enough sample; the typically more relevant goal of science is a determination of causal effect size. The amount and nature of the difference, in other words, is what should be studied. Many researchers also feel that hypothesis testing is something of a misnomer. In practice a single statistical test in a single study never "proves" anything.

"Hypothesis testing: generally speaking, this is a misnomer since much of what is described as hypothesis testing is really null-hypothesis testing."

"Statistics do not prove anything."

"Billions of supporting examples for absolute truth are outweighed by a single exception."

"...in statistics, we can only try to disprove or falsify."

Even when you reject a null hypothesis, effect sizes should be taken into consideration. If the effect is statistically significant but the effect size is very small, then it is a stretch to consider the effect theoretically important.

Philosophical criticism

Philosophical criticism of hypothesis testing includes consideration of borderline cases. Any process that produces a crisp decision from uncertainty is subject to claims of unfairness near the decision threshold. (Consider close election results.) The premature death of a laboratory rat during testing can impact doctoral theses and academic tenure decisions. Clotho, Lachesis and Atropos yet spin, weave and cut the threads of life under the guise of Probability.

"... surely, God loves the .06 nearly as much as the .05"

The statistical significance required for publication has no mathematical basis, but is based on long tradition.
"It is usual and convenient for experimenters to take 5% as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results." Fisher, in the cited article, designed an experiment to achieve a statistically significant result based on sampling 8 cups of tea. Ambivalence attacks all forms of decision making. A mathematical decision-making process is attractive because it is objective and transparent. It is repulsive because it allows authority to avoid taking personal responsibility for decisions. Pedagogic criticism Edit Pedagogic criticism of the null-hypothesis testing includes the counter-intuitive formulation, the terminology and confusion about the interpretation of results. "Despite the stranglehold that hypothesis testing has on experimental psychology, I find it difficult to imagine a less insightful means of transiting from data to conclusions." Students find it difficult to understand the formulation of statistical null-hypothesis testing. In rhetoric, examples often support an argument, but a mathematical proof "is a logical argument, not an empirical one". A single counterexample results in the rejection of a conjecture. Karl Popper defined science by its vulnerability to dis-proof by data. Null-hypothesis testing shares the mathematical and scientific perspective rather the more familiar rhetorical one. Students expect hypothesis testing to be a statistical tool for illumination of the research hypothesis by the sample; It is not. The test asks indirectly whether the sample can illuminate the research hypothesis. Students also find the terminology confusing. While Fisher disagreed with Neyman and Pearson about the theory of testing, their terminologies have been blended. The blend is not seamless or standardized. While this article teaches a pure Fisher formulation, even it mentions Neyman and Pearson terminology (Type II error and the alternative hypothesis). The typical introductory statistics text is less consistent. The Sage Dictionary of Statistics would not agree with the title of this article, which it would call null-hypothesis testing. "...there is no alternate hypothesis in Fisher's scheme: Indeed, he violently opposed its inclusion by Neyman and Pearson." In discussing test results, "significance" often has two distinct meanings in the same sentence; One is a probability, the other is a subject-matter measurement (such as currency). The significance (meaning) of (statistical) significance is significant (important). There is widespread and fundamental disagreement on the interpretation of test results. "A little thought reveals a fact widely understood among statisticians: The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is almost always false in the real world.... If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what's the big deal about rejecting it?" (The above criticism only applies to point hypothesis tests. If one were testing, for example, whether a parameter is greater than zero, it would not apply.) 
"How has the virtually barren technique of hypothesis testing come to assume such importance in the process by which we arrive at our conclusions from our data?" Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible." Null-hypothesis significance testing does not determine the truth or falseness of claims. It determines whether confidence in a claim based solely on a sample-based estimate exceeds a threshold. It is a research quality assurance test, widely used as one requirement for publication of experimental research with statistical results. It is uniformly agreed that statistical significance is not the only consideration in assessing the importance of research results. Rejecting the null hypothesis is not a sufficient condition for publication. "Statistical significance does not necessarily imply practical significance!" Practical criticism Edit Practical criticism of hypothesis testing includes the sobering observation that published test results are often contradicted. Mathematical models support the conjecture that most published medical research test results are flawed. Null-hypothesis testing has not achieved the goal of a low error probability in medical journals. "Contradiction and initially stronger effects are not unusual in highly cited research of clinical interventions and their outcomes." "Most Research Findings Are False for Most Research Designs and for Most Fields" Jones and Tukey suggested a modest improvement in the original null-hypothesis formulation to formalize handling of one-tail tests. Fisher ignored the 8-failure case (equally improbable as the 8-success case) in the example tea test which altered the claimed significance by a factor of 2. Killeen proposed an alternative statistic that estimates the probability of duplicating an experimental result. It "provides all of the information now used in evaluating research, while avoiding many of the pitfalls of traditional statistical inference." - Comparing means test decision tree - Confidence limits (statistics) - Multiple comparisons - Omnibus test - Behrens-Fisher problem - Bootstrapping (statistics) - Fisher's method for combining independent tests of significance - Null hypothesis testing - Predictability (measurement) - Prediction errors - Statistical power - Statistical theory - Statistical significance - Theory formulation - Theory verification - Type I error, Type II error - ↑ 1.0 1.1 1.2 1.3 The Sage Dictionary of Statistics, pg. 76, Duncan Cramer, Dennis Howitt, 2004, ISBN 076194138X - ↑ Testing Statistical Hypotheses, 3E. - ↑ 3.0 3.1 Fisher, Sir Ronald A. (1956). "Mathematics of a Lady Tasting Tea" James Roy Newman The World of Mathematics, volume 3. - ↑ What If There Were No Significance Tests? (Harlow, Mulaik & Steiger, 1997, ISBN 978-0-8058-2634-0 - ↑ The Tao of Statistics, pg. 91, Keller, 2006, ISBN 1-4129-2473-1 - ↑ Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284. - ↑ 7.0 7.1 Loftus, G.R. 1991. On the tyranny of hypothesis testing in the social sciences. Contemporary Psychology 36: 102-105. - ↑ 8.0 8.1 Cohen, J. 1990. Things I have learned (so far). American Psychologist 45: 1304-1312. - ↑ Introductory Statistics, Fifth Edition, 1999, pg. 521, Neil A. Weiss, ISBN 0-201-59877-9 - ↑ Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218-228. 
- ↑ Ioannidis JPA (2005). Why most published research findings are false. PLoS Med 2(8): e124.
- ↑ Jones & Tukey (2000). A sensible formulation of the significance test. Psychological Methods, Vol. 5, No. 4, pg. 411-414.
- ↑ Killeen (2005). An alternative to null-hypothesis significance tests. Psychol Sci. 16(5): 345-353.

External links
- A Guide to Understanding Hypothesis Testing - A good introduction
- Bayesian critique of classical hypothesis testing
- Critique of classical hypothesis testing highlighting long-standing qualms of statisticians
- Analytical argumentations of probability and statistics
- Laws of Chance Tables - used for testing claims of success greater than what can be attributed to random chance

This page uses Creative Commons Licensed content from Wikipedia.
http://psychology.wikia.com/wiki/Statistical_test
Duration: 1 - 2 class periods
Prerequisites: Ability to add like terms; experience in extending patterns
Ohio Standards Alignment
Materials: Overhead transparency, activity sheet, and Pascal's Triangle template (all provided)
Topics: Simplifying expressions, Pascal's Triangle, generalizing patterns

Students complete a triangular array by inserting numbers into the top row and adding pairs of numbers for subsequent rows to achieve a final sum at the bottom. After discovering that the position of a number in the top row affects the final sum, they are challenged to find five different numbers that lead to a sum of 100 in the bottom row. They then analyze the diagram algebraically to determine a formula for the final result in terms of the starting numbers. Next, they compare the coefficients in their formula to the corresponding row of Pascal's triangle and use that pattern to predict and complete a larger array with a specified end sum.

Objectives:
- To add polynomials, collect like terms, and simplify expressions.
- To discern and apply a pattern through Pascal's triangle.
- To generalize and extend a pattern.

A triangular array (see Overhead) shows five circles in the top row, four in the next row, and so on, down to one circle in the last row. Hand out the Activity Sheet and ask students to fill in the rows in the first array in this way:
1. Write five different numbers in the top row.
2. For the next row, in each circle write the sum of the two numbers in the top row diagonally above the circle.
3. Complete the next three rows in the same way until you have the result in the bottom circle.

Can you write five different numbers in the top row so the result at the bottom is 100? How does the arrangement of the numbers in the first row affect the sum at the bottom?

Analyzing the Pattern
To discover a formula for the result in the bottom circle, have students fill in the letters a, b, c, d, e in the top row of the diagram and write the polynomial sum in each circle in the second row, continuing until they complete the diagram. Does your "formula" for the result at the bottom work for the numbers you used in the Critical Question above? Does it work for other numbers that you choose for the five starting numbers in row 1?

Pascal's Triangle is a triangular array of numbers (inverted relative to the activity's array) that starts with 1 at the top. Each new row starts and ends with 1. Any numbers in between are the sum of the two numbers diagonally above them in the preceding row. Here are the first five rows:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

What do you observe about the numbers in the fifth row and the formula you discovered above?

Generalizing the Pattern
Have students complete two more rows of Pascal's Triangle, then ask the question: How can you use the numbers in row 7 to complete a triangular array with single-digit whole numbers in row 1 so the result in the bottom circle is 253? To answer this question, students could follow these guidelines:
a. Show Pascal's Triangle for 7 rows.
b. Draw a triangular array with 7 circles in the top row (7 rows in all) and write a formula for the result in the bottom circle.
c. Choose 7 different whole numbers for the circles in the top row and show how your completed diagram results in the number 253 at the bottom.

At the conclusion of this activity, discuss with students how algebra explains how numbers "behave" by using letters to represent generalized numbers and then following those general numbers through the various steps to determine how each step affects the end result.
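For anyone who wants a quick way to check student arrays against the Pascal's Triangle pattern, here is a small sketch; the function names are ours, not part of the lesson materials. It collapses a top row by repeated pairwise addition and compares the result with the sum weighted by the binomial coefficients from the matching row of Pascal's Triangle.

```python
from math import comb

def collapse(row):
    """Repeatedly add adjacent pairs until one number remains (the bottom circle)."""
    while len(row) > 1:
        row = [a + b for a, b in zip(row, row[1:])]
    return row[0]

def pascal_weighted(row):
    """Same result using the Pascal's Triangle weights (1, 4, 6, 4, 1 for a 5-number row)."""
    n = len(row) - 1
    return sum(comb(n, k) * value for k, value in enumerate(row))

top = [2, 7, 11, 6, 9]                      # any five different starting numbers
print(collapse(top), pascal_weighted(top))  # both print 129
```

The same check works for the seven-number array in the Generalizing the Pattern step, where the weights become 1, 6, 15, 20, 15, 6, 1.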
In this case, the observed pattern provides a formula for the final result, and it is much easier to adjust trial numbers in the formula than to work through the whole diagram. Pascal's Triangle is an abstract generalization for the binomial behavior of the array, and one that is frequently encountered in mathematics. An extension of this activity is to look for the many patterns that can be discerned from Pascal's Triangle and how these patterns correspond to particular kinds of numbers (e.g., triangular numbers, pentagonal numbers). Solutions to the questions can be found here.

See the following URLs for many intriguing patterns that can be found in Pascal's Triangle:
Link to Pascal's Petals: http://mathforum.org/workshops/usi/pascal/petals_pascal.html
Link to Twelve Days of Christmas and Pascal's Triangle: http://dimacs.rutgers.edu/~judyann/LP/lessons/12.days.pascal.html
Link to Pascal's Triangle lessons: http://mathforum.org/workshops/usi/pascal/
Check this applet: http://www.ies.co.jp/math/java/misc/PascalTriangle/PascalTriangle.html

From the teaching files of Steven P. Meiring.
http://www.ohiorc.org/pm/math/richproblemmath.aspx?pmrid=78
Asteroid Impact Craters on Earth as Seen From Space

Asteroid impact craters are among the most interesting geological structures on any planet. Many other planets and moons in our solar system, including our own moon, are pock-marked with loads of craters. But because Earth has a protective atmosphere and is geologically active — with plate tectonics and volcanic eruptions, mostly relatively young oceanic crust, and harsh weathering from wind and water — impact structures don't last long and can be tough to come by. But on a few old pieces of continent, especially in arid deserts, the marks of asteroids have been preserved. One well-known example is our own Barringer crater, also known as Meteor Crater, in Arizona. The images here show some of the biggest, oldest and most interesting impact craters on the planet.

Aorounga crater, pictured above and below, is one of the best-preserved impact craters on Earth, thanks in part to its location in the Sahara Desert in Chad. The 10-mile-wide crater is probably around 350 million years old. The stripes are alternating rock ridges and sand layers, known as yardangs, caused by persistent unidirectional wind. The image above was taken by astronauts on the International Space Station in July. The radar image below, taken from the space shuttle in 1994, reveals that Aorounga may be one of two or three craters.

The Shoemaker crater in Western Australia, formerly known as the Teague crater, was renamed in honor of the planetary geologist Eugene Shoemaker, for whom Comet Shoemaker-Levy 9 is also named. The age of the crater is unclear, but it could be 1.7 billion years old, which would make it the oldest known impact in Australia. The brightly colored splotches are seasonal salt-water lakes. This image was taken by the Landsat 7 satellite. Image: NASA/USGS, 2000.

The Manicouagan Crater in northern Canada is one of the largest impact craters known on Earth. The impact occurred around 210 million years ago at the end of the Triassic period and may have caused a mass extinction that killed around 60 percent of all species. Though the crater has mostly eroded, Lake Manicouagan outlines what is left of the 43-mile-wide impact structure. The asteroid that created the crater is thought to have been about three miles wide. Today the lake is a reservoir and popular salmon fishing location. Images: 1) NASA, STS 9 Crew, 1983. 2) NASA/GSFC/LaRC/JPL, MISR Team, 2001.

This crazy-looking structure in Western Australia is called the Spider crater, for obvious reasons. Geologists determined it was an impact crater when they found shatter cones, telltale cone-shaped, grooved rocks found only around impacts. The crazy-looking legs surrounding the impact are mostly due to erosion of different rock layers. The harder sandstone ridges withstood the weathering of wind and water better than the softer intervening layers. The crater is somewhere between 600 million and 900 million years old, and the raised area at its center is around 1,600 feet wide. The image above is a false-color image taken by the Advanced Spaceborne Thermal Emission and Reflection Radiometer on NASA's Terra satellite. The true-color image below was captured by Taiwan's Formosat-2 satellite. Images: 1) NASA/GSFC/METI/ERSDAC/JAROS, U.S./Japan ASTER Science Team, 2008. 2) Cheng-Chien Liu, National Cheng-Kung University and Dr. An-Ming Wu, National Space Organization, Taiwan, 2008.

The Gosses Bluff crater in Australia's Northern Territory sits between two mountain ranges.
The raised circular feature in the center of the structure is around 2.8 miles across, but the original crater rim was probably at the edge of the surrounding grayish area. The crater was formed by an asteroid impact around 140 million years ago.

Clearwater Lakes in Quebec, Canada, were formed by a pair of asteroid impacts. The impacts probably occurred simultaneously around 290 million years ago. The larger crater measures 22 miles across.

The Roter Kamm crater in Namibia is hard to see in visible light, but shows up more clearly with imaging radar. The crater's 1.5-mile-wide rim stands out as a bright ring in the lower left of the image, which was taken by a radar instrument on board the space shuttle. Irregular surfaces show up brightly and smooth areas are dark. The white splotch at the bottom of the photo is a rocky hill, the darkest areas are wind-blown sand dunes, the blue around the crater's rim may be material that was ejected during the impact, the red is limestone outcrop, and the green areas are mostly vegetation. The impact occurred around 5 million years ago. Image: NASA/JPL, 1994.

The Lonar crater in Maharashtra, India, is around 6,000 feet wide and 500 feet deep and contains a saltwater lake. Scientists determined the structure was caused by an asteroid through clues such as the presence of maskelynite, a glass that is only formed by extremely high-velocity impacts. The impact occurred around 50,000 years ago. This image was captured by the Advanced Spaceborne Thermal Emission and Reflection Radiometer on NASA's Terra satellite. It is a simulated true-color image. Image: NASA, 2004.

The Vredefort Dome in South Africa is possibly the oldest and largest clearly visible asteroid impact structure on Earth. The 155-mile-wide crater is approximately 60 miles southwest of Johannesburg and was formed around 2 billion years ago.

We here at Wired Science love a really good, cheesy artist's rendering of a space object or phenomenon, so we couldn't resist this beauty. This is one man's idea of what a 300-mile-wide asteroid would look like impacting Earth. This event probably would have killed everything on the planet. Fortunately, NASA says nothing this big is headed our way anytime soon. Image: NASA/Don Davis
http://www.wired.com/wiredscience/2009/08/impactcraters/all/1
The Evolution Deceit

In order for a single protein to form, five separate conditions that demolish the very foundations of Darwinists' theories need to be met simultaneously:

There are more than 200 amino acids in nature. Only 20 specific amino acids need to be selected in order for proteins to form. If any other amino acid apart from these 20 enters the equation, no protein will result. Following the selection of these special 20 amino acids, it is essential they be set out in a specific sequence. Even if all the conditions are completely fulfilled, just a single amino acid being in the wrong place in the sequence will prevent the protein from forming.

The amino acids constituting protein all have to be left-handed.
• Although right- and left-handed amino acids have all the same characteristics, they are mirror images of one another, like right- and left-handed gloves.
• There is not a single right-handed amino acid in living structures.
• If just one right-handed amino acid enters the equation, that protein will be incapable of being used.
• The probability of a small protein being formed from left-handed amino acids alone is 1 in 10^210.

The well-known chemist Walter T. Brown makes this statement on the subject: "… the amino acids that comprise the proteins found in living things, including plants, animals, bacteria, molds, and even viruses, are essentially all left-handed. No known natural process can isolate either the left- or the right-handed variety. The mathematical probability that chance processes could produce just one tiny protein molecule with only left-handed amino acids is virtually zero."1

Amino acids are bound by "peptide bonds" alone.
- As scientists discovered amino acids, they established that the amino acids constituting proteins are connected in a very interesting way, different from anything else observed in nature. That bond is a special chemical bond known as the peptide bond.
- The atoms in molecules are generally connected together by covalent bonds; only amino acids are bound together by special peptide bonds.
- Peptide bonds can only be broken at high temperature or by prolonged exposure to powerful acids or alkalis. It is these peptide bonds that make proteins very strong and resistant.

• The amino acid sequence that forms a protein has to be linear.
• In other words, the amino acid chain must not be a structure that branches out and develops lateral chains; it has to have a straight structure, with amino acids following one after the other.
• Sidney Fox conducted an experiment using amino acids to try to produce protein. He heated dry amino acid compounds in an atmosphere of nitrogen at 160-180 degrees for several hours.
• Amino acids bound to one another, but not in a linear manner. They were not connected by peptide bonds and branched out rather than being linear.
• Fox called these proteinoids, though they were in reality nothing more than irregular strings with nothing to do with proteins or life.
• These sequences made the amino acids used in the experiment non-functional.
• The experiment in question was invalid in many other respects. You can obtain detailed information about this here.

All of the preconditions listed above have to be fully met in order for a single protein to form. And the probability of all these conditions being met and giving rise to a single protein is ONE IN 10^950.

1. Walter T. Brown, In the Beginning (1989), p. 8
http://evolutiondeceit.com/en/works/17333/The-five-essential-conditions-for-protein-formation
Using Living Math Materials and Plans with Structured, Incremental or Various Other Math Teaching Approaches Note: This article was written for families using the Living Math Lesson plans in response to questions by families wanting to integrate the material with their math approach. Most of the comments, however, are applicable to families using various living math resources, and are not limited to lesson plans users. The Living Math lesson plans were written in a format to facilitate teaching mathematics to all ages within a framework of its historical development. Materials facilitating this have been available for advanced high school levels on up, but I have yet to run across materials beyond the Luetta Reimer "Mathematicians Are People, Too" support materials, Historical Connections in Mathematics available from AIMS Education, that attempt to provide more than tidbits to students who have not mastered high school level algebra and geometry. Because the lesson plans follow math development through history, they refer to more and more complex ideas. This structure / organization does not directly facilitate the sequence of elementary math learning topics that a traditional math curriculum does. It becomes more naturally sequential by the advanced / high school levels, because the math skills and cognitive development necessary to understand advanced ideas have more likely been attained. As such, the materials aren't written to be used back to back in levels. The Primary Level plans suggest readings and activities appropriate for early elementary students, but it would be entirely appropriate to use the materials over a three to four year period,often repetitively as will be explained below, rather than used once through in order and then assumed to be ready to move on to the Intermediate Level. The same goes for Intermediate and even to a degree to Advanced. High School is different, in that the skills needed to complete this level of work may be attained to allow for a sequential, college-model study of mathematics in the full context of history. Therefore the comments below contain more for the elementary years than beyond. When I taught weekly co-op classes using the plans and activities, parents approached the material in several ways. Unschoolers, or advocates of delayed formal academics, tend to find it easiest to adapt the plan materials, because the philosophy of this learning style is not generally incremental learning based. For a child who enjoys reading aloud with a parent and doing hands on activities, it can work quite well to provide math exposure in a wide range of real life situations. It can also provide the parent with many experiences to enrich their ability to stimulate interest in mathematics, especially if they enjoy history. Relaxed or "eclectic" schoolers often took the opportunity of the classes or lesson plans to take a completely different approach to math for a given period of time. For many families, it was a break from a structured approach to help shift attitudes in a more positive direction toward math learning. Some families continued with math curriculum on certain days of the week, and did "living math" on other days. Others immersed themselves in the math history studies and left curriculum aside for months, if not a year. If their children enjoyed the materials, this was a beneficial process. 
Many went back to curriculum work after a time, reporting their children enjoyed it more, and together they had "hooks" to place ideas they were encountering in the curriculum on that were simply abstract before. Some families continued a fully structured approach to math, supplementing with Living Math readings and activities for enrichment. They still used a daily curriculum such as Saxon, but reported that they could reduce the number of problems they assigned their children because they were "getting" the math more quickly after the classes or doing activities and readings at home. Many of these families chose to use the Living Math plans as their history curriculum, not worrying about the order of math concepts reported, as they continued to use the curriculum for sequential learning. The most challenging model to teach and/or describe would be one wherein the sequential teaching is done through the Living Math plan materials, without using a formal math curriculum as the base. This is, however, in essence how I myself use the materials with my own children now, although we still use texts and workbooks that appeal to my children. Living math has been a feature of our home for so many years, my children have had read to them and/or have read to themselves the math readers that are scheduled in the lesson plans, often repeatedly. We do the activities – the best ones are often repeated at least once, if not more times over the course of several years. We as a family read the historical and thematic readers and refer to them as we experience things that relate to what we've read. The reason we can repeat an activity in as short a period of a year is the fact we are not trying to limit the learning objective. The emphasis of a repeated activity tends to follow whatever concepts I know my children are working on learning at a given point in their math development. This natural approach took a few years to develop, and required me to be familiar with the tools that were out there to use. I will give examples below. Dialoguing with families about the various ways they used the materials, and the experiences I've had with my own children, have given me confidence that the lesson plan materials can be used with a wide range of homeschooling and teaching approaches. I hope to give parents ideas of ways the materials can be adapted in various situations. Primary/ Elementary Levels (approximately ages 6 to 8) In order to use the Living Math materials as the primary basis to teach incrementally, it requires the parent to be familiar with the math activities available, and be able to identify when activities can be used to facilitate learning of the concepts the parent wants to emphasize in a given period. In other words, get to know your toolkit. When I first began using these materials myself, I had no guides or manuals as to how to teach math through history at the pre-high school levels. The first step I took years ago was that of keeping us on a curriculum through the week, but having math history days wherein we would not work on curriculum at all, but rather read the materials and follow any bunny paths they led us on. This allowed me to keep the structure I felt I needed, but blend in the other materials for interest and relevancy. The bunny paths we took often involved math concepts the kids hadn't yet fully learned, yet they wanted to keep going. 
We found ourselves spending hours and hours on math ideas and activities, whereas had we set that time aside for working math curriculum we would have spent far less time. Interest and relevancy provided energy to spend many more hours on math learning, and in context. Most of the plan activities in the early lesson plans came from these bunny path explorations. I became more and more educated myself about these ideas to be able to naturally insert them when the opportunity came up with a younger child, or if a different concept that was linked to it came up. As we spent more and more time on bunny paths, we cut back on the curriculum use, as it was simply becoming unnecessary and a distraction from our our highly productive and enjoyable studies. I also realized many of these activities did not require as advanced math skills as I assumed. Many times doing an activity with my 9 year old and 6 year old, I could see that the difference was only in how much of the work I myself did to complete the activity, or whether we completed the activity at all. It was okay to stop when substantial learning had occurred and interest began to wane with a younger child, just like leaving some food on your plate when you're full. Similarly, I might be able to do an activity written for an older child with a younger child if I rounded every number to numbers the younger child could comprehend, or if the fractions we encountered were rounded to whole numbers, or to easy fractions they could easily work with. To give an example, in the Pythagoras lessons, ideas of number theory are introduced. Number theory is usually considered a high-level, complex mathematical area. But the simple idea of even and odd numbers is number theory that originated with Pythagoras. And while simple, it is profound; math theorists often rely on even and odd properties when constructing complex proofs. While I might spend time with my youngest working on even and odd numbers, and introduce figurate numbers to them in concrete / pictorial ways, I would investigate further with my older child relationships between figurate numbers to the level they could understand. We would simply stop when it was clear the child could not comprehend anything more. In the Ancient Mathematics units different number bases are introduced. Again, in the traditional curriculum, number bases are usually considered middle school level math. But in co-op classes with children as young as 4 or 5 years old, I could demonstrate the Mayan base 20 system quite effectively, if they understood ones, fives and tens, as the numbers are written pictorially. Binary would be more difficult as it is more abstract, but it could be presented concretely with objects, games and activities. More advanced number bases would be an idea for older children. The relationships between numbers can be analyzed by very young children in terms of their additive properties, or their multiplicative properties – big words, but concepts that are shown in math picture books quite easily. The doubling sequence is so prevalent in the history of mathematics it comes up in many activities and stories. The ubiquity of the idea itself communicates to a child to the notion that it is a very important idea. I've had young children who could recall and chant the binary sequence as well or better than they could skip counting by 2s. A young child might go as far as 1, 2, 4, 8 . . . . my older child may go up to 32 . . . and the oldest as far as they like. 
And of course, a middle school child on up can understand that this is an exponential pattern, and relate it to other bases. I would not attempt to teach exponents to a younger child, relying on concrete examples, but it they may in fact pick it up themselves, especially if it is compared to our base 10 system. Having taught my oldest children math using more standard math programs such as Math U See and Singapore Math, I found that the activities over time were working the same principles that were presented in the curriculum. But it was up to me as the math mentor to bring the concept teaching in when it was appropriate for their level. As I worked with the materials and ideas more and more, I became better at identifying when a specific activity might work well with my child to work on a concept they were learning. I became an expert at adapting an activity to a child's level, because by exposing myself to these repetitively, I began to intuitively see the basic math structure underneath them. When you yourself actually do an activity with your child, you can observe and participate in the process required to do it. When you realize that multiplication is simply fast addition, any activity involving multiplication can be converted to an activity in addition, by bringing the numbers down to simple terms the child can add, or ending the activity when the terms become too large. Any simple multiplication activity can be used if a child can skip count,reinforcing the upcoming link to multiplication. Once I'd read through the historical materials sequentially with my children, I did not need to stick to the plan order anymore for activities. We can read Hans Magnus Enzensberger's The Number Devil: A Mathematical Adventure and decide to go back to Pythagoras ideas we visited a few months or even a year earlier. If you read the book and did the activity with your children, you can make the connections with them. We can read Theoni Pappas's Penrose the Mathematical Cat books, and revisit the numerous activities and ideas we covered in other units. Repetition in these activities is usually well tolerated, and even welcomed, if it isn't immediately after the first exposure, because in the repetition they see and understand things they didn't understand the first time. Because the activities take more time than a typical math worksheet, they are often remembered for a long time, but even more so if they are repeated at a later date. The "ah-ha" moments are very empowering, showing them how much more they are able to understand than when they saw an idea presented before. This happens to my children very often when a younger child reads an older reader repeatedly over years. Here are some examples of blending living math materials with incremental teaching at younger levels, and how I've identified opportunities quite often by teaching to older children. My 7th grader was going through the Harold Jacobs Elementary Algebra text with a friend of his who planned on going to high school the next year. As such, his friend's goal was to complete the course, and my son committed to the same goal as long as he is able to keep up. We met twice a week to learn concepts and work problems, and they did homework between our meetings. As my son was familiar with the math history topics from the younger levels, we blended in the advanced reading materials and some activities as they match the ideas in the algebra course, vs. strictly following the lesson plans. 
One activity suggested in the Harold Jacobs text to demonstrate the idea of a direct variation function involved experimenting with dropping bouncy balls from various heights, and recording the data in tables to generate an algebraic formula. It was an extremely effective activity for these middle school boys. We completed the experiments and generated formulas to describe the direct variation between the height we dropped the ball from and the bounce height. I observed that this experiment was similar to one I had done in the Galileo lesson in our math history studies, but it was different in that we were measuring the bounce height rather than the time. I realized this was easier for younger children to measure. Removing the more abstract aspect of the formula, I realized I could do this with my fourth grade daughter's math group. The girls were working on multiple-digit multiplication, easy division, and easy fractions and proportions in word problems. We were using a Singapore word problem book to provide a sequential framework for them to work on these skills between our meetings. So whatever activity we did, I emphasized the math skills they were working on, even as other math skills and many logical reasoning skills may have come into play. One day the girls were doing some Hands On Equations work which involved solving equations with "x" and "(-x)." One of the girls wanted to know, is there such a thing as a "y" or "z"? What a great lead-in, I thought, to the bouncy ball experiment I had already planned. I could say, yes, we'll get a "y" in there today. So the girls did the same activity the boys did – dropping the balls, recording the heights, and finding patterns. They estimated the relationships between the two different balls – one ball bounced on average about 2/3 of the way up, the other bounced three fourths of the way up. If we were careful with our measurements, the relationships were strikingly accurate. We converted the bounces into percentages of the original drop height, using calculators at first. They have not technically learned percentages, but we've encountered them many times in activities, and I put percentage ideas in terms of cents and dollars, which they do understand – i.e., three quarters is the same as 75 cents out of a dollar. We put up a table of their results where they could see that no matter what height they dropped the ball from, the bounces were about the same fraction of the drop height: two thirds for the first ball, three quarters for the second. I made sure we were rounding the figures to significant numbers they could understand. When presented this way, my 9 year old daughter could easily answer the question: If the ball is dropped from 10 feet, how high will the bounce be? Initially she said 7 feet, drew the picture up to three fourths, and then, realizing it was 7-1/2 feet, corrected her answer. She also could figure out that if I dropped the other ball from 9 feet, it would bounce up to 6 feet high. She could do this if I kept the numbers round and simple. Now she has another concrete "hook" to continue to refer to as we work on these skills. The key with activities like this with younger children is to keep the numbers simple and intuitive, so they do not have to rely on more complex algorithms such as long division to get the answers, confusing the lesson to be learned.
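For parents who like to double-check the numbers another way, here is a tiny sketch of the direct-variation idea behind the bouncy-ball activity; the 3/4 ratio and the drop heights are simply the illustrative values discussed above.

```python
# Direct variation from the bouncy-ball activity: bounce height = k * drop height.
# The 3/4 ratio and the drop heights are the illustrative values discussed above.
k = 3 / 4
for drop_ft in (4, 8, 9, 10):
    print(f"dropped from {drop_ft} ft -> bounces to about {k * drop_ft:.1f} ft")
```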
When children can begin to comprehend simple relationships such as basic fractions of halves, quarters and thirds (and many children can begin to comprehend these in terms of dividing up food or items by age 5 or 6), and can do simple addition, they can begin doing these activities, and the parent need not worry that they can't complete every step on their own. I allow younger children to use calculators to complete activities when the math is beyond their comprehension, again to facilitate what they are to learn without confusing it with what they aren't ready for. The Cheops Pyramid activity in the Thales lesson (from Mark Wahl's Mathematical Mystery Tour) is one I have done successfully many times, but the math can become complicated for all but middle schoolers on up. If I give younger kids a calculator, though, they can complete it. It gives them an idea of how to use a calculator, the importance of a decimal point, and experience with rounding. For these kids, the learning objective isn't how to do division with repeating decimals. It's to see that mathematical relationships can be built into spatial objects. Once the calculations are done, they compare the results and see that the numbers are very similar. As for how powerful some of these lessons can be in terms of retention of ideas, my oldest, who is entering his junior year in high school, still recalls many of the concrete lessons he learned. He homeschooled for 9 years before entering high school two years ago. Recently he saw the pyramid my daughter made and commented, "Oh, that's the pyramid that has pi built into it, because it's basically a half sphere, right?" He was 11 years old when he first did the pyramid activity in one of our co-op classes. He tells me that he recalls the formula for the circumference of a circle because he remembers our Egyptian rope-stretching activities. The circumference of a circle is the diameter times three and a "little bit" – a funny idea from his beloved Murderous Maths – and he routinely expresses that in the abstract form, C = d × pi or C = 2r × pi, while recalling its meaning; it's not just a formula to him. I have done the rope stretching activity twice now with my 9 year old and will likely do it again before she gets to this point. Each time we've done it, she enjoys it and learns something more from it. The last time we did this activity, we practiced the division factors of 12 to get the proportions of the right triangle in place. We also practiced multiplication when finding Pythagorean triples. My 12 year old now has a series of these memorized from doing the activity and then extending it to a chapter in Ed Zaccaro's Challenge Math where he solved a number of right triangle problems that used Pythagorean triples to keep the answers in whole numbers. Another rather obvious example of early elementary incremental learning is reading math readers or playing around with manipulatives for fun and exposure, while filing away in your mind what the lesson of the activity is if they aren't ready to master it. My youngest daughter was working on addition with regrouping when she was 7 years old. At 5, she read the "A Fair Bear Share" MathStart reader, which focuses on regrouping, and has read it many times since then. We also worked quite a bit with an abacus at one time. While she could follow along a year ago or so, she couldn't reproduce the process if given a problem in a workbook. We later encountered regrouping again reading "Mr. Base Ten Invents Mathematics."
She exhibited more conceptual understanding in following it on the page, but no interest in attempting to do it herself on paper. Then, at 7, she encountered regrouping in a Singapore workbook. We brought out the Fair Bear Share book, Mr. Base Ten and the abacus again, and used these old friends to help learn the concept as it was presented in her Singapore book. The books turned out to be more effective than the abacus for her at this point, as she is a print oriented learner. We could refer to the characters and objects in the books when going through the idea and developing a process for her to figure out her answer. She quickly developed her own personal notation to make sure she did not lose track of the ones to be carried over, and in a matter of a couple of days she had this idea fully mastered. A month later, she mastered regrouping of tens and hundreds, realizing that the same idea applied as she had learned for her ones. She moved from numbers and quantities she could concretely understand to more abstract numbers and quantities. The idea here is that when we first pulled out these materials in the context of the ideas presented in the math history lessons, I was not attempting to teach her the lesson to mastery, nor did I wait to introduce materials to her because she wasn't ready to master the concepts. She was having fun and enjoying the ideas presented. When she was ready, we pulled out these same materials she was already familiar with, and the lesson was very quickly learned to mastery. I filed away in my mind that her next logical step could be multiple place values of regrouping, which she herself discovered in the Singapore book a few days later. One more lesson to show her that the same process applies to other place values, and she understood it, in large part because she really does understand place value concretely through many hours of exploration with base 10 blocks. Therefore she understands that she is carrying tens, or hundreds, not ones – a point that confuses many children when they are taught the regrouping algorithm. If she did not appear to be ready to understand this, I would have waited and kept her supplied with other math activities. Her "next" learning objective might be what she encounters in her Singapore book again, or it might be what she is learning with Hands On Equations, a program that teaches algebraic ideas in a logical, sequential manner. I am prepared with the materials that will blend well with what I see her working on next. My goal, and what I hope will be a benefit to others using these materials, is to become a better and better math mentor to my children through this constant exposure to math in contextual and interesting applications that are far off the page of what I was taught. For many parents, elementary math concepts are no longer routine, but become exciting and interesting in these contexts, giving us a fresh and new perspective to share with our children. If one wants to teach incrementally using activities, the Primary Level readers and lesson plans contain multiple activities for all basic concepts in early elementary. One would need to separate, however, the activities from the readings to present them incrementally. This is fine; the plans are written as guides, not a strict methodology. In fact, the only setting in which the plans really would be followed strictly is a classroom, where everyone needs to be on the same page. In a home setting, you have total control over how to use the materials.
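Going back to the regrouping example above, here is a minimal sketch of the carrying idea with the place values kept explicit (my own illustration, not taken from the books or workbooks mentioned):

```python
# Regrouping 47 + 38 one place value at a time, the way base 10 blocks show it.
ones = 7 + 8                      # 15 ones
carry, ones = divmod(ones, 10)    # regroup: 15 ones = 1 ten and 5 ones
tens = 4 + 3 + carry              # 8 tens, including the regrouped ten
print(tens * 10 + ones)           # 85 -- the "carried 1" really is a ten, not a one
```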
Your own comfort level in working activities that contain ideas you yourself never really learned may be a factor. My ability to teach with these materials has improved dramatically over the years because I have come to understand them well myself. I did not have anything more than a typical math education until a few years ago. I had never heard of Fibonacci numbers, Pascal's Triangle, or Pythagorean triples before embarking on this study with my children. I could not naturally and comfortably present this material to my kids unless I had read, investigated and understood it myself to some degree. In understanding it myself, I could see the underpinnings of the math ideas – that Pascal's Triangle is built on a very simple addition process that a first grader can understand up to a certain level, and that I can go even further if we make it into an art activity, because visual representations of relationships in the triangle become apparent. But if I don't understand it myself, I can't see these underpinnings. So just as with any study, learning ahead of your kids will make you much more comfortable presenting the material to them. As you know your own child, you'll see connections they are likely to make based on their current development. And they'll likely make many more connections than you expect, as long as you don't limit them by withholding ideas beyond where you assess their development to be. If you enjoy the material, you will be much more likely to inspire them with your own enthusiasm. Knowing your children is also instrumental in deciding how much of each type of resource to use and when to use it. For wiggly children who do not have long attention spans, abbreviated readings make sense, and you may linger more on hands-on activities, or reserve the more challenging history readings for bedtime, when they can become quite attentive, especially if it means they might be able to go to bed a little later :o) If readings are too challenging, consider putting the book away for six months or a year, rather than letting a child develop a negative attitude toward the book that will make them resist future exposure. Focus on the kinds of readers or activities your child is enjoying. In homeschooling, it seems to me, timing is everything. Seasoned homeschoolers will tell you: what is "wrong" for a child now may be totally "right" a year later.
Middle School / Pre-Algebra Level
Middle school often tends to be a period where curricula and classrooms keep kids in pre-algebra territory until they seem ready for algebra, recycling concepts in progressively more difficult settings. This can be a wise strategy in terms of delaying formal algebra instruction until they have all the tools necessary to complete a full course, but it can be boring for a child who has essentially learned basic mathematics but has not yet fully developed the level of "proportional reasoning" needed to move to the abstract level formal algebra requires. This is a level at which the Living Math Plans lend themselves quite well to use as written. The course provides a review of all basic pre-algebra ideas from counting and place value on up, but in contexts they've likely never seen before. Many of the activities are designed to exercise pre-algebra skills in real contexts. When algebra is referred to, it is usually possible to get the answers without it. Decimals, percents, and ratios are used extensively in activities.
Exponents, radicals and other important concepts for algebra success are woven in. Links between geometry and algebra are brought in to give students more of an idea of where all this math they are learning is heading. The plans can just as easily be used, however, in a similar way as the Primary Level if a family desires, especially if a child is borderline for the level, or highly asynchronous in their reading and math skills. Historical readings can be scheduled while the activities are done in a different order based on the child's skill level. Readings can be done through the week and a family can schedule an activity day, since the amount of time required to complete these activities increases with the skill level. After a year of co-op classes, my middle schoolers grazed on multi-concept books such as the Murderous Maths series by Kjartan Poskitt and the Penrose series by Theoni Pappas. My oldest son had read these before, but understood the math in them much better after having gone through the math history activities.
Advanced Level and Up: Algebra and Beyond
This level offers a number of ways to use the Living Math materials as well. If an advanced level student has never encountered the ideas in the plans, as is the case for many parents, then working through all the material at a comfortable pace is beneficial. There are many opportunities to learn the algebraic ideas that are embedded in the plans. It can be used as a precursor to a formal algebra course, as in fact this level was for my oldest son, who in his pre-secondary years was more language oriented than math oriented (this has changed in his high school years to being evenly balanced in skill and interest). He completed a formal algebra course with ease after spending nearly two years with the Living Math plans. The materials can also be used as a conceptual review after an algebra course is completed, as the contexts will be different from most algebra texts; this was the situation for some of my co-op students as well. Finally, a challenging algebra book is suggested through the lessons (Gelfand's Algebra) for students wanting to learn algebra in a problem-solving framework that is not a typical textbook. Even students who have had an algebra course may find this challenging; it is recommended by the Art of Problem Solving staff for gifted students and students who really want to understand algebra, vs. learn it procedurally. If a student is ready for algebra and will be working through an algebra course concurrently, the pace will need to be slow enough to make room for the time required to learn the algebra material. One way to accomplish this is to make the math history lessons the basis of social studies, and treat the readings and activities as that subject. It might mean that the algebra course would take more than the usual year if you wish to get the full benefit out of both programs. Homeschooling allows us to pace a course this way. My middle son went through the Intermediate Level math history materials a couple of years ago, and even that was his second round, as we'd done quite a lot of reading and activities since he was 5 or 6 years old. He has been doing an abbreviated version of the Advanced Level plans, going through materials such as String, Straightedge, and Shadow that he never read before. He is picking up ideas he did not fully understand, or ideas he forgot, and the familiarity of the previous exposure makes them feel like old friends.
His choice to work on a formal algebra course last year with his friend was due to the opportunity created by his friend's goal to be ready for high school the next year, his own realization that he was enjoying algebra after going through Hands On Equations the past few months, and his learning style, which is less print oriented than his brother's – he learns better with me teaching than by trying to teach himself. After half a year of formal algebra, we decided to table that until next year, and he picked up Ed Zaccaro's Challenge Math and 25 Real Life Math Investigations books for the rest of his 7th grade year. So in his situation, we used a fully sequential math textbook as the basis of his math learning for part of the year, laying over it the math history reading and activities that match up to it. The Harold Jacobs Algebra was a good choice for this, since Jacobs brings in a lot of number patterns and other tools for learning algebra in an analytical way, vs. simply learning via rote practice of the processes introduced. Taking the time to work on the binary system worked well with understanding the difference between exponential growth vs. pure multiplicative or additive growth – and these ideas are presented early in the text as students learn to differentiate between different sorts of functions and their graphs. If you would like a look at how I've blended these in this fashion, I posted a tentative syllabus here: http://www.livingmath.net/JacobsAlgebraYear/tabid/1000/Default.aspx I could do the same thing with a geometry text. Two years before my oldest son took Algebra I, he took a high school Euclidean geometry class. This provided structure for his math learning that we laid our math history studies on as well.
Incremental Learning Objective Tagging
It is a goal of mine to go through all the activities and "tag" them with the concepts they focus on. This is very time consuming, but the project is moving along. I have a concern that activities might be tagged as only being beneficial for teaching certain concepts. In reality, numerous activities can present ideas a kindergartner can learn as well as a high schooler, if they have never been exposed to the idea (the Egyptian rope stretching is a great example, or the King's Chessboard, etc.). Parents exposed to these ideas for the first time can understand this. In the meantime, the Primary levels have grown considerably since I originally wrote them, to provide a suggested rotation of all primary level concepts through a cycle of lesson plans. It is not possible, however, for every child to be working on the same concepts at the same time. So it is up to parental discretion how much of the various concepts they cover with the child and how they do it. Book lists are posted for each unit, which include extensive reader lists by concept. So even if you only purchased the first unit, if your child is working on skip counting, going through all four units of reading lists for skip counting resources is fine. If they are working on division, look at the Unit 2 list of readers and incorporate those into your living math studies. You won't ruin a scheduled reader by reading it ahead of time, as most Primary children enjoy reading math picture books multiple times. A to-do of mine is to include a list of the basic concepts included in each unit; while this can be derived by looking at the book lists, it would be easier if the rotation could be seen visually on the website.
It's one of the projects I am fitting in as I can with my own homeschooling.
http://www.livingmath.net/LessonPlans/UsingLivingMathArticle/tabid/1026/language/en-US/Default.aspx
The scale of the planets is tiny compared to the scale of the Solar System. The distance from Earth to the Moon is 384 thousand kilometers, or 9.6 times Earth's equatorial circumference. The Sun is 150 million kilometers away, or 390 times the distance of the Moon from Earth, and 3,743 times Earth's circumference. When we speak of the distances between the planets, we are speaking of a scale that dwarfs not only the scale of the planets, but also the scale of a planet's system of moons. The basic unit of distance for the Solar System is the Astronomical Unit (AU). Roughly speaking, this is the distance of Earth from the center of the Sun. More precisely, the AU is the length of the semimajor axis of the Earth-Moon system's orbit around the Sun. The AU is approximately 1.50×10⁸ km (a more precise value, accurate to 50 meters, is given in the Basic Values table). This is 100 billion times our human scale, or the ratio of the size of a house to the size of an atom. In terms of Earth's orbit, Earth's equatorial radius is 4.0×10⁻⁵ AU, Jupiter's equatorial radius is 4.8×10⁻⁴ AU, and the Sun's equatorial radius is 4.6×10⁻³ AU. The size of each planet's system of moons is also much smaller than an AU. The Moon is on our doorstep at 2.56×10⁻³ AU, so the Earth-Moon system could fit inside the Sun, comfortably if it weren't so hot. The outermost of the four Galilean moons of Jupiter has an orbit with a semimajor axis of 0.013 AU. Saturn's moon Titan is in an orbit with a semimajor axis of 8.1×10⁻³ AU, and the outer edge of Saturn's A ring, the outermost of the rings that is easily seen from Earth, is 9.1×10⁻⁴ AU in radius, which is little more than one-third of the Moon's distance from Earth. The terrestrial planets have distances from the Sun in the AU range: the semimajor axis of Mercury is 0.39 AU, that of Venus is 0.72 AU, and that of Mars is 1.52 AU. Once we turn to the giant planets, we jump to a length scale of tens of AU. Jupiter's semimajor axis is 5.20 AU, Saturn's is 9.54 AU, Uranus's is 19.19 AU, and Neptune's is 30.07 AU. The Kuiper belt of planetoids ranges from 30 to 50 AU. Finally, the Oort cloud, which gives birth to the long-period comets, is more than 100 AU from the Sun. The illusion that light moves instantaneously is lifted for light traveling through the solar system. Light travels 1 AU in 499 seconds (8 minutes and 19 seconds). As a consequence, large time delays occur in communicating with interplanetary spacecraft. The time delay for communicating one-way with the Mars Rover ranges between 4 and 21 minutes, and this delay for communicating with the Cassini spacecraft at Saturn ranges between 71 and 88 minutes. We can traverse interplanetary distances by spacecraft, but we are unable to achieve velocities that are dramatically larger than the orbital velocities of the planets. As a consequence, travel to any planet takes months or years. Travel to Mars takes 6 months. It took 7 years for the Cassini spacecraft to travel to Saturn in a trip that required gravitational boosts through close passages of Earth, Venus, and Jupiter.
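A short sketch of the light-travel arithmetic above, converting the quoted semimajor axes into one-way light times (standard values for the AU and the speed of light; the distances are measured from the Sun, so Earth-to-planet delays vary with orbital geometry):

```python
# One-way light-travel time for distances quoted above, measured from the Sun.
AU_KM = 1.496e8           # kilometers per astronomical unit (approximate)
C_KM_S = 299_792.458      # speed of light in km/s

def light_minutes(distance_au):
    return distance_au * AU_KM / C_KM_S / 60

for name, au in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.20),
                 ("Saturn", 9.54), ("Neptune", 30.07)]:
    print(f"{name:8s} {light_minutes(au):6.1f} light-minutes from the Sun")
# Earth comes out at about 8.3 minutes, matching the 499 seconds in the text.
```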
http://www.astrophysicsspectator.com/topics/overview/DistanceSolarSystemScale.html
Perpendicular lines have the property that the product of their slopes is −1. Mathematically, we say that if one line has slope m1 and another line has slope m2, then the lines are perpendicular if m1 × m2 = −1. For example, if two lines have slopes 2 and −0.5, then 2 × (−0.5) = −1, so the lines are perpendicular. Another way of finding the slope of a perpendicular line is to take the opposite reciprocal of the slope of the original line. In plain English, this means turn the original slope upside down and take the negative. What if one of the lines is parallel to the y-axis? For example, the line y = 3 is parallel to the x-axis and has slope 0. The line x = 3.6 is parallel to the y-axis and has an undefined slope. The lines are clearly perpendicular, but we cannot find the product of their slopes. In such a case, we cannot draw a conclusion from the product of the slopes, but we can see immediately from a graph that the lines are perpendicular. The same situation occurs with the x- and y-axes: they are perpendicular, but we cannot calculate the product of the two slopes, since the slope of the y-axis is undefined. Exercises: 1. A line L has slope m = 4. a) What is the slope of a line parallel to L? b) What is the slope of a line perpendicular to L? 2. A line passes through (−3, 9) and (4, 4). Another line passes through (9, −1) and (4, −8). Are the lines parallel or perpendicular?
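One way to check the second exercise numerically; a small sketch (not part of the original page) that computes each slope from its pair of points and applies the product-of-slopes test:

```python
# Slope test for parallel / perpendicular lines, using the points from exercise 2.
def slope(p, q):
    (x1, y1), (x2, y2) = p, q
    if x2 == x1:
        return None                 # vertical line: slope undefined, handle separately
    return (y2 - y1) / (x2 - x1)

m1 = slope((-3, 9), (4, 4))
m2 = slope((9, -1), (4, -8))

if m1 == m2:
    print("parallel")
elif abs(m1 * m2 + 1) < 1e-9:       # product of slopes equals -1, allowing for rounding
    print("perpendicular")
else:
    print("neither")
```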
http://www.intmath.com/plane-analytic-geometry/1d-perpendicular-lines.php
An optical telescope is a telescope which is used to gather and focus light, mainly from the visible part of the electromagnetic spectrum, to directly view a magnified image, make a photograph, or collect data through electronic image sensors. There are three primary types of optical telescope: refractors, which use lenses (dioptrics); reflectors, which use mirrors (catoptrics); and catadioptric telescopes, which use both lenses and mirrors in combination. A telescope's light gathering power and ability to resolve small detail are directly related to the diameter (or aperture) of its objective (the primary lens or mirror that collects and focuses the light). The larger the objective, the more light the telescope can collect and the finer detail it can resolve. The telescope is more a discovery of optical craftsmen than an invention of scientists. The lens and the properties of refracting and reflecting light had been known since antiquity, and theories of how they worked were developed by ancient Greek philosophers, preserved and expanded on in the medieval Islamic world, and had reached a significantly advanced state by the time of the telescope's invention in early modern Europe. But the most significant step cited in the invention of the telescope was the development of lens manufacture for spectacles, first in Venice and Florence in the thirteenth century, and later in the spectacle making centers in both the Netherlands and Germany. It was in the Netherlands in 1608 that the first recorded optical telescopes (refracting telescopes) appeared. The invention is credited to the spectacle makers Hans Lippershey and Zacharias Janssen in Middelburg, and the instrument-maker and optician Jacob Metius of Alkmaar. Galileo greatly improved upon these designs the following year and is generally credited with being the first to use a telescope for astronomical purposes. Galileo's telescope used Hans Lippershey's design of a convex objective lens and a concave eye lens, and this design has come to be called a Galilean telescope. Johannes Kepler proposed an improvement on the design that used a convex eyepiece, often called the Keplerian telescope. The next big step in the development of refractors was the advent of the achromatic lens in the early 18th century, which corrected the chromatic aberration seen in Keplerian telescopes up to that time, allowing for much shorter instruments with much larger objectives. For reflecting telescopes, which use a curved mirror in place of the objective lens, theory preceded practice. The theoretical basis for curved mirrors behaving similarly to lenses was probably established by Alhazen, whose theories had been widely disseminated in Latin translations of his work. Soon after the invention of the refracting telescope, Galileo, Giovanni Francesco Sagredo, and others, spurred on by their knowledge that curved mirrors had similar properties to lenses, discussed the idea of building a telescope using a mirror as the image-forming objective. The potential advantages of using parabolic mirrors (primarily a reduction of spherical aberration with elimination of chromatic aberration) led to several proposed designs for reflecting telescopes, the most notable of which was published in 1663 by James Gregory and came to be called the Gregorian telescope, but no working models were built.
Isaac Newton has been generally credited with constructing the first practical reflecting telescope, the Newtonian telescope, in 1668, although due to their difficulty of construction and the poor performance of the speculum metal mirrors used, it took over 100 years for reflectors to become popular. Advances in reflecting telescopes included the perfection of parabolic mirror fabrication in the 18th century, silver-coated glass mirrors in the 19th century, long-lasting aluminum coatings in the 20th century, segmented mirrors to allow larger diameters, and active optics to compensate for gravitational deformation. A mid-20th century innovation was catadioptric telescopes such as the Schmidt camera, which uses both a lens (corrector plate) and a mirror as primary optical elements, mainly for wide field imaging without spherical aberration. The basic scheme is that the primary light-gathering element, the objective (1) (the convex lens or concave mirror used to gather the incoming light), focuses the light from the distant object (4) to a focal plane where it forms a real image (5). This image may be recorded or viewed through an eyepiece (2), which acts like a magnifying glass. The eye (3) then sees an inverted, magnified virtual image (6) of the object.
Inverted images
Most telescope designs produce an inverted image at the focal plane; these are referred to as inverting telescopes. In fact, the image is both inverted and reverted, or rotated 180 degrees from the object orientation. In astronomical telescopes the rotated view is normally not corrected, since it does not affect how the telescope is used. However, a mirror diagonal is often used to place the eyepiece in a more convenient viewing location, and in that case the image is erect but everted (reversed left to right). In terrestrial telescopes such as spotting scopes, monoculars and binoculars, prisms (e.g., Porro prisms) or a relay lens between objective and eyepiece are used to correct the image orientation. There are telescope designs that do not present an inverted image, such as the Galilean refractor and the Gregorian reflector. These are referred to as erecting telescopes.
Design variants
Many types of telescope fold or divert the optical path with secondary or tertiary mirrors. These may be an integral part of the optical design (Newtonian telescope, Cassegrain reflector or similar types), or may simply be used to place the eyepiece or detector at a more convenient position. Telescope designs may also use specially designed additional lenses or mirrors to improve image quality over a larger field of view.
Angular resolution
Ignoring blurring of the image by turbulence in the atmosphere (atmospheric seeing) and optical imperfections of the telescope, the angular resolution of an optical telescope is determined by the diameter of the primary mirror or lens gathering the light (also termed its "aperture"): for visible light the diffraction-limited resolution is approximately α = 138/D, where α denotes the resolution limit in arcseconds and D is in millimeters. In the ideal case, the two components of a double star system can be discerned even if separated by slightly less than α. This is taken into account by the Dawes limit, R = 116/D. The equations show that, all else being equal, the larger the aperture, the better the angular resolution. The resolution is not given by the maximum magnification (or "power") of a telescope. Telescopes marketed by giving high values of the maximum power often deliver poor images.
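A quick numerical illustration of the two resolution formulas above; the apertures are arbitrary examples, and the Rayleigh figure assumes a 550 nm (visible light) wavelength:

```python
# Diffraction-limited (Rayleigh) and Dawes-limit resolution for example apertures in mm.
def rayleigh_arcsec(aperture_mm, wavelength_nm=550):
    # 1.22 * wavelength / diameter, converted from radians to arcseconds
    return 1.22 * (wavelength_nm * 1e-6) / aperture_mm * 206265

def dawes_arcsec(aperture_mm):
    return 116.0 / aperture_mm

for d_mm in (100, 200, 1000):
    print(f"{d_mm:5d} mm  Rayleigh {rayleigh_arcsec(d_mm):.2f}\"  Dawes {dawes_arcsec(d_mm):.2f}\"")
```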
For large ground-based telescopes, the resolution is limited by atmospheric seeing. This limit can be overcome by placing the telescopes above the atmosphere, e.g., on the summits of high mountains, on balloons and high-flying airplanes, or in space. Resolution limits can also be overcome by adaptive optics, speckle imaging or lucky imaging for ground-based telescopes. Recently, it has become practical to perform aperture synthesis with arrays of optical telescopes. Very high resolution images can be obtained with groups of widely spaced smaller telescopes, linked together by carefully controlled optical paths, but these interferometers can only be used for imaging bright objects such as stars or for measuring the bright cores of active galaxies; starspots on Betelgeuse, for example, have been imaged this way.
Focal length and f-ratio
The focal length determines how wide an angle the telescope can view with a given eyepiece or size of CCD detector or photographic plate. The f-ratio (or focal ratio, or f-number) of a telescope is the ratio between the focal length and the diameter (i.e., aperture) of the objective. Thus, for a given objective diameter, low f-ratios indicate wide fields of view. Wide-field telescopes (such as astrographs) are used to track satellites and asteroids, for cosmic-ray research, and for astronomical surveys of the sky. It is more difficult to reduce optical aberrations in telescopes with a low f-ratio than in telescopes with a larger f-ratio.
Light-gathering power
The light-gathering power of an optical telescope is proportional to the area of the objective lens or mirror, or equivalently to the square of the diameter (or aperture). For example, a telescope with a lens three times the diameter of another will have nine times the light-gathering power. A bigger telescope has an advantage over a smaller one because sensitivity increases as the square of the entrance diameter: a 7 meter telescope is (7/2.4)² ≈ 8.5 times, or nearly ten times, more sensitive than a 2.4 meter telescope. For a survey of a given area, however, the field of view is just as important as raw light gathering power. Survey telescopes such as the Large Synoptic Survey Telescope therefore try to maximize the product of mirror area and field of view (or etendue) rather than raw light gathering ability alone.
Imperfect images
No telescope can form a perfect image. Even if a reflecting telescope could have a perfect mirror, or a refracting telescope a perfect lens, the effects of aperture diffraction are unavoidable. In reality, perfect mirrors and perfect lenses do not exist, so image aberrations in addition to aperture diffraction must be taken into account. Image aberrations can be broken down into two main classes: monochromatic and polychromatic. In 1857, Philipp Ludwig von Seidel (1821–1896) decomposed the first order monochromatic aberrations into five constituent aberrations. They are now commonly referred to as the five Seidel aberrations.
The five Seidel aberrations
- Spherical aberration - The difference in focal length between paraxial rays and marginal rays, proportional to the square of the objective diameter.
- Coma - A most objectionable defect by which points are imaged as comet-like asymmetrical patches of light with tails, which makes measurement very imprecise. Its magnitude is usually deduced from the optical sine theorem.
- Astigmatism - The image of a point forms focal lines at the sagittal and tangential foci and, in between (in the absence of coma), an elliptical shape.
- Curvature of field - The Petzval field curvature means that the image, instead of lying in a plane, actually lies on a curved surface described as hollow or round. This causes problems when a flat imaging device is used, e.g. a photographic plate or CCD image sensor.
- Distortion - Either barrel or pincushion, a radial distortion which must be corrected for if multiple images are to be combined (similar to stitching multiple photos into a panoramic photo).
They are always listed in the above order since this expresses their interdependence as first order aberrations via moves of the exit/entrance pupils. The first Seidel aberration, spherical aberration, is independent of the position of the exit pupil (as it is the same for axial and extra-axial pencils). The second, coma, changes as a function of pupil distance and spherical aberration, hence the well-known result that it is impossible to correct the coma in a lens free of spherical aberration by simply moving the pupil. Similar dependencies affect the remaining aberrations in the list.
The chromatic aberrations
- Longitudinal chromatic aberration: As with spherical aberration, this is the same for axial and oblique pencils.
- Transverse chromatic aberration (chromatic aberration of magnification)
Astronomical research telescopes
Optical telescopes have been used in astronomical research since the time of their invention in the early 17th century. Many types have been constructed over the years depending on the optical technology, such as refracting and reflecting, the nature of the light or object being imaged, and even where they are placed, such as space telescopes. Some are classified by the task they perform, such as solar telescopes.
Large reflectors
Nearly all large research-grade astronomical telescopes are reflectors. Some reasons are:
- In a lens the entire volume of material has to be free of imperfections and inhomogeneities, whereas in a mirror only one surface has to be perfectly polished.
- Light of different colors travels through a medium other than vacuum at different speeds. This causes chromatic aberration.
- Reflectors work in a wider spectrum of light, since certain wavelengths are absorbed when passing through glass elements like those found in a refractor or catadioptric.
- There are technical difficulties involved in manufacturing and manipulating large-diameter lenses. One of them is that all real materials sag in gravity. A lens can only be held by its perimeter. A mirror, on the other hand, can be supported by the whole side opposite to its reflecting face.
Most large research reflectors operate at different focal planes, depending on the type and size of the instrument being used. These include the prime focus of the main mirror, the Cassegrain focus (light bounced back down behind the primary mirror), and even external to the telescope altogether (such as the Nasmyth and coudé foci). A new era of telescope making was inaugurated by the Multiple Mirror Telescope (MMT), with a mirror composed of six segments synthesizing a mirror of 4.5 meters diameter. This has now been replaced by a single 6.5 m mirror. Its example was followed by the Keck telescopes with 10 m segmented mirrors. The largest current ground-based telescopes have a primary mirror of between 6 and 11 meters in diameter. In this generation of telescopes, the mirror is usually very thin, and is kept in an optimal shape by an array of actuators (see active optics).
This technology has driven new designs for future telescopes with diameters of 30, 50 and even 100 meters. Relatively cheap, mass-produced ~2 meter telescopes have recently been developed and have made a significant impact on astronomy research. These allow many astronomical targets to be monitored continuously, and large areas of sky to be surveyed. Many are robotic telescopes, computer controlled over the internet (see e.g. the Liverpool Telescope and the Faulkes Telescope North and South), allowing automated follow-up of astronomical events. Initially the detector used in telescopes was the human eye. Later, the sensitized photographic plate took its place, and the spectrograph was introduced, allowing the gathering of spectral information. After the photographic plate, successive generations of electronic detectors, such as the charge-coupled device (CCD), have been perfected, each with more sensitivity and resolution, and often with a wider wavelength coverage. Current research telescopes have several instruments to choose from, such as:
- imagers of different spectral responses
- spectrographs, useful in different regions of the spectrum
- polarimeters, which detect light polarization.
The phenomenon of optical diffraction sets a limit to the resolution and image quality that a telescope can achieve: a point source is imaged as an Airy disc of finite size, which limits how close two such discs can be placed. This absolute limit is called the diffraction limit (and may be approximated by the Rayleigh criterion, Dawes limit or Sparrow's resolution limit). This limit depends on the wavelength of the studied light (so that the limit for red light comes much earlier than the limit for blue light) and on the diameter of the telescope mirror. This means that a telescope with a certain mirror diameter can theoretically resolve up to a certain limit at a certain wavelength. For conventional telescopes on Earth, the diffraction limit is not relevant for telescopes bigger than about 10 cm. Instead, the seeing, or blur caused by the atmosphere, sets the resolution limit. But in space, or if adaptive optics are used, then reaching the diffraction limit is sometimes possible. At this point, if greater resolution is needed at that wavelength, a wider mirror has to be built or aperture synthesis performed using an array of nearby telescopes. In recent years, a number of technologies to overcome the distortions caused by the atmosphere on ground-based telescopes have been developed, with good results. See adaptive optics, speckle imaging and optical interferometry.
http://en.wikipedia.org/wiki/Optical_telescope
APEC/CANR, University of Delaware
In 1763-67 Charles Mason and Jeremiah Dixon surveyed and marked most of the boundaries between Maryland, Pennsylvania and the Three Lower Counties that became Delaware. The survey, commissioned by the Penn and Calvert families to settle their long-running boundary dispute, provides an interesting reference point in the region's history. This paper summarizes the historical background of the boundary dispute, the execution of Mason and Dixon's survey, and the symbolic role of the Mason-Dixon Line in American civil rights history. English claims to North America originated with John Cabot's letters patent from King Henry VII (1496) to explore and claim territories for England. ("John Cabot" was actually a Venetian named Giovanni Caboto.) Cabot almost certainly sailed past Cape Breton, Nova Scotia, and Newfoundland. He stepped ashore only once in North America, in the summer of 1497 at an unknown location, to claim the region for England. It is highly unlikely that Cabot came anywhere near the mid-Atlantic coast, however. The first Europeans to explore the Chesapeake Bay in the 1500s were Spanish explorers and Jesuit missionaries. But based on Cabot's prior claim, Queen Elizabeth I granted Sir Walter Ralegh a land patent in 1584 to establish the first English colony in America. The first English colonists settled on Roanoke Island inside the Outer Banks of North Carolina in 1585 (see colonist John White's map). Most of the colonists returned to England the following year; the remaining settlers had disappeared when White's re-supply ship finally returned to the island in 1590. The Virginia Company of London, a joint stock venture, established the first permanent English colony at "James Fort," aka Jamestown, in 1607. The fort was erected on a peninsula on the James River. The colony was supposed to extract gold from the Indians, or mine for it, and it only survived by switching its economic focus to tobacco (introduced to England by John Rolfe), furs, etc. John Smith published his famous Map of Virginia (1612) based on his 1608 exploration of the Chesapeake. Smith's map includes the "Smyths fales" on the Susquehanna River (now the Conowingo dam), "Gunters Harbour" (North East, MD), and "Pergryns mount" (Iron Hill near Newark, DE). Notice that the latitude markings at the top of the map are surprisingly accurate.
The Maryland colony
When George Calvert, England's Secretary of State under King James I, publicly declared his Catholicism in 1625, English law required that he resign. James awarded him an Irish barony, making him the first Lord Baltimore. Although Calvert was an investor in the Virginia Company, he was barred from Virginia because of his religion. He then started his own "Avalon" colony in Newfoundland, but the climate proved inhospitable. So Calvert persuaded James's successor, Charles I, to grant his family the land north of the Virginia colony that became Maryland. The 1632 grant gave the Calverts everything north of the Potomac to the 40th parallel, and from the Atlantic west to the source of the Potomac. George Calvert died later in 1632, and his sons started the Maryland colony, named in honor of Charles I's consort Henrietta Maria. On May 27th, 1634, Leonard Calvert and about 300 settlers arrived in the Chesapeake Bay at St. Mary's. George Alsop published a "Land-skip" map of the new colony (1666).
But while the Calverts were settling on the Chesapeake Bay, Dutch and Swedish colonists were settling on the Delaware Bay (named by Captain Samuel Argall in honor of Lord De La Warr, the governor of the Virginia colony). At the bottom of the Delaware Bay, Dutch colonists established a settlement at Zwaanendael (now Lewes) and a trading post at Fort Nassau in 1631, although these settlers were killed in a dispute with local Indians within a year. Peter Minuit, who would later lead the Swedish colonists, had purchased Manhattan Island in 1626 for the Dutch West India Company and directed the New Netherlands colony, including New Amsterdam (New York), from 1626 until 1633, when he was dismissed from the Company. He then negotiated with the Swedish government to create the New Sweden colony on the Delaware River. Minuit and a first group of Swedish colonists on two Swedish ships, the Kalmar Nyckel and the Fogel Grip, arrived at Swedes Landing in 1638 and established Fort Christina (Wilmington) as the principal town in the new colony. The political and economic chaos of the English Civil Wars (1642-51) and the Commonwealth and Protectorate periods stalled English colonial expansion. Charles I had not called a Parliament for a decade, until the Bishops' War in Scotland (1639) bankrupted the crown and forced him to call a new Parliament in 1640. This "Long Parliament" (which lasted eight years!) could only be adjourned by itself. Having lost control of it, Charles left London, raised a Royalist army and sought help from Scottish and Irish Catholic sympathizers. After a series of battles with Parliamentarian forces, Charles was imprisoned in 1648. The "Rump Parliament," under the control of the "New Model Army," ordered his trial for treason. He was convicted and beheaded in 1649. A Parliamentary Commonwealth (1649-53) was replaced by the Protectorate under Oliver Cromwell. After military campaigns in Scotland and Ireland, Cromwell had to deal with the first and second Dutch Wars (1652-54 and 1655-57). After Cromwell died in 1658, the army replaced his son Richard with another Parliamentary Commonwealth under a dysfunctional Rump Parliament (1659-60) before restoring the monarchy and inviting Charles II back from exile. While England was in chaos, the Dutch kept expanding their American colonies. Colonial governor Peter Stuyvesant purchased the land between the Christina River and Bombay Hook from the Indians, and established Fort Casimir at what is now New Castle in 1651. The Swedes, just a few miles up the river, captured Fort Casimir in 1654, but Dutch soldiers from New Amsterdam (Manhattan) took control of the entire New Sweden colony in 1655. After the Restoration brought Charles II to the English throne, English colonial expansion resumed. In 1664 the Duke of York, Charles II's brother James, captured New Amsterdam, renaming it New York, and he seized the Swedish-Dutch colonies on the Delaware River as well. The Dutch briefly recaptured New York in 1673, but after their 1674 defeat in Europe in the third Dutch War, they ceded all their American claims to England in the Treaty of Westminster. Having regained his American territories, the Duke of York granted the land between the Hudson and Delaware rivers to his friends George Carteret and John Berkeley in 1675, and they established the colony of New Jersey.
The Pennsylvania colony
Sir William Penn had served the Duke of York in the Dutch wars, and had loaned the crown about £16,000.
His son William Penn, who had become a Quaker, petitioned Charles for a grant of land north of the Maryland colony as repayment of the debt. In 1681 Charles granted Penn all the land extending five degrees west from the Delaware River between the 40th and 43rd parallels, excluding the lands held by the Duke of York within a "twelve-mile circle" centered on New Castle, plus the lands to the south that had been ceded by the Dutch. Was this to be a twelve-mile radius circle, a twelve-mile diameter circle, or maybe a twelve-mile circumference circle? The language was unclear, and ultimately beside the point: even a twelve-mile radius circle centered on New Castle lies entirely below the 40th parallel. The Calvert family had ample opportunity to get the 40th parallel surveyed and marked, but never bothered to do so. Philadelphia was established at the upper limit of deep-water navigability on the Delaware, although it was below the 40th parallel, and Pennsylvania colonists settled areas west and south of the city with no resistance from the Calverts. Penn needed to get his colony better access to the Atlantic, and in 1682 he leased the Duke of York's lands from New Castle down to Cape Henlopen. Penn arrived in New Castle in October 1682 to take official possession of the "Three Lower Counties" on the Delaware Bay. He renamed St. Jones County to Kent County, and Deale County to Sussex County, and the Three Lower Counties were annexed to the Pennsylvania colony. Penn negotiated with the third Lord Baltimore at the end of 1682 at Annapolis, and in April 1683 at New Castle, to establish and mark a formal boundary between Maryland and Pennsylvania, including the Three Lower Counties. The Calverts wanted to determine the 40th parallel by astronomical survey, while Penn suggested measuring northward from the southern tip of the Delmarva peninsula (about 37° 5' N), assuming 60 miles per degree as Charles II had suggested. (The true length of one degree of latitude is about 69 miles.) This would have given Pennsylvania the uppermost part of the Chesapeake Bay. After the negotiations failed, Penn took his case to the Commission for Trade and Plantations. In 1685 the Commission determined that the land lying north of Cape Henlopen between the Delaware Bay and the Chesapeake should be divided equally; the western half belonged to the Calverts, while the eastern half belonged to the crown, i.e., to the Duke of York, and thus to Pennsylvania under Penn's lease. So the north-south boundary between Maryland and the Three Lower Counties was now legally defined, but the east-west boundary between Pennsylvania and Maryland remained unresolved. Charles II died in 1685, and the Duke of York, a Catholic convert, succeeded him as James II. But three years later, William of Orange, the Dutch grandson of Charles I and husband of James II's protestant daughter Mary, seized the English throne in the "Glorious Revolution." The Calverts lost control of their Maryland holdings, and Maryland was declared a royal colony. Penn's ownership of Pennsylvania and the Lower Three Counties was also suspended from 1691 to 1694. The Calverts did not regain their proprietorship of Maryland until 1713, when Charles Calvert, the fifth Lord Baltimore, renounced Catholicism. Penn revisited America in 1699-1701, and reluctantly granted Pennsylvania and the Lower Three Counties separate elected legislatures under the Charter of Privileges.
He also commissioned local surveyors Thomas Pierson and Isaac Taylor to survey and demarcate the twelve-mile radius arc boundary between New Castle and Chester counties. Pierson and Taylor completed the survey in ten days using just a chain and compass. The survey marks were tree blazes, and once these disappeared, the location of the arc boundary was mostly a matter of fuzzy recall and conjecture. Geodetic science was in its infancy. Latitude could be estimated reasonably accurately with sextant and compass, but longitude was largely guesswork. As England's naval power and colonial holdings continued to expand, the demand for better maps and navigation intensified. Parliament set a prize of £20,000 for a solution to the "longitude problem" in 1714. The challenge was to determine a longitude in the West Indies on board a ship with less than half a degree of longitude error. Dava Sobel's book Longitude (1996) details how clock-maker John Harrison eventually won the prize with his "H4" precision chronometer. Penn died in 1718, disinheriting his alcoholic eldest son William Jr., and leaving the colonies to his second wife Hannah, who transferred the lands to her sons Thomas, John, Richard and Dennis. Thomas outlived the others and accumulated a two-thirds interest in the holdings. In 1731, the fifth Lord Baltimore petitioned King George II for an official resolution of the boundary dispute. In the ensuing negotiations the Calverts tried to hold out for the 40th parallel, but Pennsylvania colonists had settled enough land to the west and south of Philadelphia that this was no longer practical. In 1732 the parties agreed that the boundary line should run west from Cape Henlopen to the midpoint of the peninsula, then north to a tangency with the west side of the twelve-mile radius arc around New Castle, then around the arc to its northernmost point, then due north to an east-west line 15 miles south of Philadelphia. It was a bad deal for the Calverts. The east-west line would turn out to be about 19 miles south of the 40th parallel and, as the map appended to the agreement shows (Senex, 1732), would intersect the arc. The map placed "Cape Hinlopen" at what is now Fenwick Island, almost 20 miles to the south as well; this error was an attempt at deception, not ignorance (compare the 1670 map from more than 50 years earlier). But litigation over interpretation and details dragged on. The border conflict led to sporadic local violence. In 1736 a mob of Pennsylvanians attacked a Maryland farmstead. A survey party commissioned by the Calverts was run off by another mob in 1743. In 1750, the Court of Chancery established a bipartisan commission to survey and mark the boundaries per the 1732 agreement. The commissioners hired local surveyors to mark an east-west transpeninsular line from Fenwick Island to the Chesapeake in 1750-51, and then determine the middle point of this line, which would mark the southwest corner of the Three Lower Counties. As the survey team worked westward from Fenwick Island, the rivers, swamps and dense vegetation made the work difficult, and there were continuing disputes: should distances be determined by horizontal measures or on the slopes of the terrain? Should the transpeninsular line stop at the Slaughter Creek estuary or continue across that peninsula, known as Taylor's Island, to the open Chesapeake? Should the line stop at the inundated marsh line of the Chesapeake or at open water?
The transpeninsular survey and its middle point were not officially approved in London until 1760. In 1761, the colonial surveyors began running the north-south "tangency" line from the middle point toward a target tangent point on the twelve-mile arc. With poor equipment and some miscalculations, their first try at a tangency line passed a half-mile east of the target point on the arc. Their second try was 350 yards to the west. The disputants required much higher standards of accuracy, and they consulted the royal astronomer James Bradley at the Greenwich observatory for advice on getting the survey done right.
The Mason and Dixon survey
Bradley recommended Charles Mason and Jeremiah Dixon to complete the boundary survey. Mason was Bradley's assistant at the observatory, an Anglican widower with two sons. Dixon was a skilled surveyor from Durham, a Quaker bachelor whose Meeting had ousted him for his unwillingness to abstain from liquor. In 1761 Mason and Dixon had sailed together for Sumatra to record a transit of Venus across the sun, in support of the Royal Society's parallax calculations of the distance between the Earth and sun, but they only made it to the Cape of Good Hope. Their major tasks in America would be to survey the exact tangent line northward from the middle point of the transpeninsular line to the twelve-mile arc, and survey the east-west boundary five degrees westward along a line of latitude passing fifteen miles south of the southernmost part of Philadelphia (Figure 6). It would be one of the great technological feats of the century. Mason and Dixon arrived in Philadelphia on November 15th 1763 during a tense period. The Seven Years' War had spilled over to North America as the French and Indian War, and although the Treaty of Paris, signed in February 1763, had put an official end to the hostilities, conflicts between colonists and Indians continued. The Iroquois League, or Six Nations (Mohawk, Onondaga, Cayuga, Seneca, Oneida and Tuscarora), had supported the British against their longtime enemies, the Cherokee, Huron, Algonquin and Ottawa, whom the French had supported in their attacks on colonists. Pontiac, chief of the Ottawa, had organized a large-scale attack on Fort Detroit on May 5th 1763, and some 200 settlers were massacred along the western frontier. Local reaction to the news was brutal. In Lancaster, Pennsylvania, a mob of mostly Scots-Irish immigrants known as the "Paxton Boys" attacked a small Conestoga Indian village in December, hacking their victims to death and scalping them. The remaining Conestogas were brought to the town jail for protection, but when the mob attacked the jail the regiment assigned to protect the Indians did nothing to stop them. The helpless Indians—men, women and children—were all hacked to pieces and scalped in their cells. The Paxton Boys then went after local Moravian Indians, who were taken to Philadelphia for protection. Enraged that the government would "protect Indians but not settlers," about 500 Paxton Boys actually invaded Philadelphia on February 6, 1764, although Benjamin Franklin was able to calm the mob. Mason and Dixon were shocked at the violence, and Mason would visit the scene of the Lancaster murders a year later. As the survey progressed, racial violence and the relentless dispossession of Indians were frequent background themes. Mason had brought along state-of-the-art equipment for the survey.
This included a "transit and equal altitude instrument," a telescope with cross-hairs, mounted with precision adjustment screws, to sight exact horizontal points using a mounted spirit level, and also to determine true north by tracking stars to their maximum heights in the sky where they crossed the meridian. The famous "zenith sector," built by London instrument-maker John Bird, was a six-foot telescope mounted on a six-foot radius protractor scale, with fine tangent screws to adjust its position; it was used to measure the angles of reference stars from the zenith of the sky as they crossed the meridian. These measurements could be compared against published measurements of the same stars' angles of declination at the equator to determine latitude. These were more reliable than measurements of azimuth against a plumb bob, which were already known to be subject to local gravitational anomalies. The zenith sector traveled on a mattress laid on a cart with a spring suspension. Mason and Dixon also brought a Hadley quadrant, used to measure angular distances; high-quality survey telescopes; 66-foot long Gunter chains composed of 100 links each (1 chain = 4 rods; 1 chain × 10 chains = 43,560 square feet = 1 acre; 80 chains = 1 mile), along with a precision brass measure to calibrate the chain lengths; and wood measuring rods or "levels" to measure level distances across sloping ground. A large wooden chest contained a collection of star almanacs, seven-figure logarithm tables, trigonometric tables and other reference materials; Mason was skilled at spherical trigonometry. Mason had acquired a precision clock so that the local times of predicted astronomical events could be compared against published Greenwich times. Each one-minute difference in local time implies a fifteen-arcminute difference in longitude (fifteen arcseconds per second of clock time). John Harrison's "H4" chronometer had sailed to Jamaica and back in 1761, losing only 39 seconds on the round trip; the longitude calculations in Jamaica based on his clock were well within the accuracy standards Parliament had set for the £20,000 longitude prize. But Nevil Maskelyne, who had succeeded Bradley as royal astronomer, and the Royal Society remained skeptical about the reliability of chronometers in complementing astronomical calculations of longitude. Maskelyne insisted on the superiority of a purely astronomical approach, a computationally complex "lunar distance" method based on angular distances between the moon and various reference stars. Harrison wouldn't collect his entire prize until 1773. Mason and Dixon would test the reliability of chronometric positioning, although Mason was skeptical of it. The southernmost part of Philadelphia was determined by the survey commissioners to be the north wall of a house on the south side of Cedar Street (the address is now 30 South Street) near Second Street. Mason and Dixon had a temporary observatory erected 55 yards northwest of the house, and after detailed celestial observations and calculations, they determined the latitude of the house wall to be 39°56'29.1"N. Since going straight south would take them through the Delaware River, they then surveyed and measured an arbitrary distance (31 miles) west to a farm owned by John Harland in Embreeville, Pennsylvania, at the "Forks of the Brandywine." They negotiated with Harland to set up an observatory, and set a reference stone, now known as the Stargazer's Stone, at the same latitude.
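For readers who want to check the unit relationships quoted above (chains, acres, miles, and clock time versus longitude), here is a short Python sketch; it is a modern convenience using only the figures given in the text, not anything Mason and Dixon would have written down.

    # Quick check of the surveyors' units and the time-to-longitude conversion
    # described above (simple arithmetic, not from Mason's journal).
    CHAIN_FT = 66.0                            # one Gunter chain, in feet
    acre_sq_ft = CHAIN_FT * (10 * CHAIN_FT)    # 1 chain x 10 chains
    mile_in_chains = 5280.0 / CHAIN_FT         # chains per statute mile

    # The Earth turns 360 degrees in 24 hours:
    deg_per_hour = 360.0 / 24.0                        # 15 degrees of longitude per hour
    arcmin_per_clock_minute = deg_per_hour * 60 / 60   # 15 arcminutes per clock minute
    arcsec_per_clock_second = arcmin_per_clock_minute  # 15 arcseconds per clock second

    print(acre_sq_ft)          # 43560.0 square feet = 1 acre
    print(mile_in_chains)      # 80.0 chains = 1 mile
    print(arcmin_per_clock_minute, arcsec_per_clock_second)  # 15.0 15.0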
They spent the winter at Harland's farm making astronomical observations on clear nights and enjoying local taverns on cloudy nights. The Harland house still stands at the intersection of Embreeville and Stargazer Roads, and the Stargazers' Stone is in a stone enclosure just up Stargazer Road on the right. Its latitude is 39°56'18.9"N, which they calculated to be 356.8 yards south of the parallel determined in Philadelphia. At Harland's they observed and timed predicted transits of Jupiter's moons, as well as a lunar eclipse on March 17th 1764. The average local (sun) time of these events at the Stargazers' Stone was 5 hours 12 minutes and 54 seconds earlier than the published predicted times for the Paris observatory (longitude 2°20'14"E). So they were able to estimate their longitude as (5:12:54)/(24:00:00) × 360° = 78°13'30" west of Paris, and thus 78°13'30" - 2°20'14" = 75°53'16" west of Greenwich. They published these findings in the Royal Society's Philosophical Transactions in 1769. The clock used in this experiment was actually 37 seconds fast, so at fifteen arc seconds of longitude per clock second, their calculated longitude was 9'15" or about eight miles too far west. That was more accurate than Parliament's longitude prize had required, but the margin of error was still a thousand times larger than the margin of error in their latitude calculations. Fortunately, Mason and Dixon's principal tasks involved more local positioning than global positioning. They proposed measuring a degree of longitude for the Royal Society as part of their survey of the parallel between Pennsylvania and Maryland; although the Society never funded that project, it would fund their measurement of a degree of latitude in 1768. In the spring of 1764 the survey party ran a line due south from Harland's farm, measured with the survey chains and levels, with a team of axmen clearing a "visto" or line of sight eight or nine yards wide the entire way. They arrived in April 1764 at a farm field owned by Alexander Bryan in what is now the Possum Hill section of Delaware's White Clay Creek State Park. They placed an oak post called "Post mark'd West" at a latitude of 39°43'26.4"N, after verifying that this point was exactly 15 miles below the 39°56'29.1"N latitude they had determined in Philadelphia. This point is now marked by a stone monument accessible by a short spur trail off the Bryan's Field trail, about 600 yards downhill (due south) from the ruins of the farmstead. The easiest access point is from the east (gravel road) parking lot at Possum Hill off Paper Mill Road. The Post mark'd West would be the eastern origin and reference latitude point for the west line. Mason and Dixon then headed south to the middle point of the transpeninsular line that the colonial surveyors had marked, and they spent the rest of 1764 surveying the north-south boundary line. With a team of axmen clearing the vistos ahead of them, they resurveyed and marked the tangency line northward from the middle point toward the target tangency point on the twelve-mile arc 82 miles to the north. They crossed the Nanticoke River, Marshyhope Creek, Choptank River, Bohemia River, and Broad Creek. Where their survey chains could not span a river, they measured the river width by triangulation, using the Hadley quadrant on its side to calculate the angle between two points on the opposite side. They arrived at the 82-mile point in August 1764.
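The longitude estimate quoted earlier in this section (78°13'30" west of Paris, and the 9'15" error from the fast clock) can be reproduced in a few lines of Python; the inputs are the figures given in the text, so this is only a check of the arithmetic, not new data.

    # Reproducing the longitude arithmetic quoted above.
    dt_s = 5*3600 + 12*60 + 54              # 5h 12m 54s earlier than the Paris predictions
    lon_west_of_paris = dt_s / 86400 * 360  # degrees
    paris_east_of_greenwich = 2 + 20/60 + 14/3600
    lon_west_of_greenwich = lon_west_of_paris - paris_east_of_greenwich

    clock_error_s = 37                      # the clock ran 37 seconds fast
    lon_error_arcsec = clock_error_s * 15   # 15 arcseconds of longitude per clock second

    print(round(lon_west_of_paris, 4))      # 78.225  -> 78 deg 13' 30"
    print(round(lon_west_of_greenwich, 4))  # 75.8878 -> about 75 deg 53'
    print(lon_error_arcsec)                 # 555 arcseconds = 9' 15"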
Mason and Dixon then ran an exact twelve-mile line from the New Castle courthouse to the tangency line, setting the tangent point marker at the 82-mile point of the tangency line; this is located by a small drainage pond at the edge of an apartment complex, about 600 meters south of the Delaware-Maryland boundary on Elkton Road, about 100 meters north of the rail lines. It was 17 chains and 25 links west of the tangency point targeted by the 1761 survey. Since the tangency line runs slightly west of true north, the tangent point lies south and slightly east of the arc's westernmost point. After joining the tangency line perpendicularly to the twelve-mile radius line from New Castle in August 1764, they returned south to the middle point, checking and correcting marks as they went. On this re-check, their final error at the middle point, after 82 miles, was 26 inches. They returned north again, making final placements of the marks into November. During this phase of the survey, their base of operations in Delaware was St. Patrick's Tavern in Newark, where the Deer Park Tavern now stands. Tavern scenes in Thomas Pynchon's 1997 novel Mason & Dixon are consistent with at least one contemporary account of their enjoyment of the taproom. In January 1765 Mason visited Lancaster (and the jail where the Conestogas had been slaughtered) and "Pechway" (Pequa). In February, he toured Princeton NJ and New York. Mason and Dixon began the survey of the west line from the "Post mark'd West" in April 1765. The Arc Corner Monument, located at the north side of the W.S. Carpenter Recreation Area of White Clay Creek State Park, just off Hopkins Bridge Road, marks the intersection of the west line with the 12-mile arc, and is the start of the actual Maryland-Pennsylvania boundary line. Mason and Dixon spent the next couple of years surveying this line westward. Again, their axmen cleared vistos, generally eight yards wide. They would survey straight 12-mile line segments, starting at headings about 9 minutes northward of true west and sighting linear chords to the true latitude curve, then make detailed astronomical calculations to adjust the intermediate mile marks southward to the exact 39°43'17.4"N latitude. The true latitude is not a straight line: looking westward in the northern hemisphere it gradually curves to the right. It was exacting work. The survey crossed the two branches of the Christina Creek, the Elk River, and the winding Octoraro several times. The survey party reached the Susquehanna in May 1765. At the end of May they interrupted the survey of the west line, and returned to Newark to survey the north line from the tangent point through the western edge of the 12-mile arc to its intersection with the west line. From the tangent point, the survey proceeded due north, intersecting the arc boundary again about a mile and a half further up at a point now marked by an "intersection stone." The location is behind the DuPont Company's Stine-Haskell labs north of Elkton Road very near the Conrail rail line. The north line ended at a perpendicular intersection with the west line in a tobacco field owned by Captain John Singleton. This is the northeast corner of Maryland. The boundaries between Maryland and the Three Lower Counties were now complete. The locations of the final mile points on the tangent and north lines, and the discernible inflections of the Maryland/Delaware boundary at the tangent point, are shown on the Newark West 7.5-minute USGS topographic map.
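The curvature that forced those mile-by-mile adjustments along the 12-mile chords described above can be estimated with a simple spherical-Earth approximation. The sketch below is my own illustration, using round numbers for the Earth's radius and the latitude of the west line; it is not Mason's calculation, but it shows why the midpoint of a straight sight line ends up roughly 20 feet away from the true parallel.

    import math

    # Rough spherical-Earth estimate of how far the parallel of latitude bows away
    # from a straight 12-mile sight line (an illustration, not Mason's own figures).
    R_MILES = 3959.0                  # mean Earth radius, miles
    lat = math.radians(39 + 43/60)    # approximate latitude of the west line
    segment = 12.0                    # length of each straight chord, in miles

    # A parallel of latitude has geodesic curvature tan(lat)/R on a sphere, so the
    # midpoint of a straight segment of length L lies about L^2 * tan(lat) / (8R)
    # from the parallel.
    offset_miles = segment**2 * math.tan(lat) / (8 * R_MILES)
    print(round(offset_miles * 5280, 1))   # roughly 20 feet at the segment midpoint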
The tri-state marker is located about 150 meters east of Rt. 896 behind a blue industrial building at the MD/PA boundary. The thin sliver of land (the secant) west of the North Line but within the 12-mile arc was assigned to New Castle County (PA, now DE) per the 1732 agreement. The "Wedge" between the North Line and the 12-mile arc just below the West Line was assigned to Chester County, PA, but later ceded to Delaware. In June 1765 Mason and Dixon reported their progress to the survey commissioners representing the Penn and Calvert families at Christiana Bridge (now the village of Christiana). They then resumed the survey of the west line from the Susquehanna. As they went along, the locals learned whether they were Marylanders or Pennsylvanians. They reached South Mountain (mile 61) at the end of August, crossed Antietam Creek and the Potomac River in late September, and continued westward to North (aka Cove) Mountain near Mercersburg PA in late October, completing a total of 117 miles of the west line that year. From the summit of North Mountain they could see that their west line would pass about two miles north of the northernmost bend in the Potomac. Had the Potomac looped further north into Pennsylvania, the western piece of Maryland would have been cut off from the rest of the colony. The survey party stored their instruments at the house of a Captain Shelby near North Mountain, and returned east in the fall, checking and resetting their marks. In November 1765 they returned to the middle point to place the first 50 mile markers along the tangent line. These had been quarried and carved in England, and were delivered via the Nanticoke and Choptank rivers. They spent January 1766 at the Harland farm. In February and March, Mason traveled "for curiosity" to York, PA; Frederick and Annapolis, MD; and Alexandria, Port Royal and Williamsburg, VA. The survey party rendezvoused at North Mountain in March 1766 and resumed the survey from there, reaching Sideling Hill at mile 135 at the end of April. There were long periods of rain and snow through late April. West of Sideling Hill was almost unbroken wilderness, and the wagons with the marker stones couldn't make it over the mountain, so they marked with oak posts from there onward. They reached mile 165 in June, near the eastern continental divide, and spent the rest of the summer backtracking for corrections and final placement of marks. Mason noted the gradual curvature of the visto along the latitude, as seen from several summits including the top of South Mountain:
From any Eminence in the Line where 15 or 20 Miles of the Visto can be seen (of which there are many): The said Line or Visto very apparently shows itself to form a Parallel of Northern Latitude. The Line is measured Horizontal; the Hills and Mountains measured with a 16 ½ feet Level and besides the Mile Posts; we have set Posts in the true Line, marked W, on the West side, all along the Line opposite the Stationary Points where the Sector and Transit Instrument stood. The said Posts stand in the Middle of the Visto; which in general is about Eight yards wide. The number of Posts set in the West Line is 303. (Journal entry for 25 September 1766)
Back in Newark in October, they got permission from the commissioners to measure the distance of a degree of latitude as a side project for the Royal Society.
They returned to the middle point of the transpeninsular line for astronomical observations in preparation for this, then returned to Newark and began setting 100 stone mile markers along the tangent and west lines. The stones in the west line were set at mile intervals starting from the northeast corner of Maryland. At the end of November, at the request of the commissioners, they measured the eastward extension of the west line from the Post mark'd West across Pike, Mill, Red Clay and Christiana (Christina) creeks to the Delaware River. The southern boundary of Pennsylvania was to extend 5 degrees of longitude west from this point. They spent parts of the winter of 1766-67 at Harland's farm making astronomical observations, using a clock on loan from the Royal Society to time predicted astronomical events. Mason spent the late winter and early spring traveling through Pennsylvania, Maryland and Virginia. He met the chief of the Tuscaroras in Williamsburg. The survey was supposed to extend a full five degrees of longitude (about 265 miles) to the west, but the Iroquois wanted the survey stopped. Negotiations between the Six Nations and William Johnson, the superintendent of Indian affairs, lasted well into 1767. After a payment of £500 to the Indians, Mason and Dixon finally got authorization in June 1767 to continue the survey from the forks of the Potomac near Cumberland. They started out with more than 100 men that summer, including an Indian escort party and a translator, Hugh Crawford, as they continued the survey westward from mile 162. A.H. Mason's edition of the survey journal (1969) includes a long undated memorandum written by Mason describing the terrain crossed by the west line. West of the Monongahela they met Catfish, a Delaware; then a party of Seneca warriors on a raid against the Cherokees; then Prisqueetom, an 86-year-old Delaware who "had a great mind to go and see the great King over the Waters; and make a perpetual Peace with him; but was afraid he should not be sent back to his own Country." The memorandum includes Crawford's detailed descriptions of the Allegheny, Ohio and Mississippi rivers and many of their tributaries. As the survey party opened the visto further westward, the Indians grew increasingly resentful of the intrusion into their lands. The survey team reached mile 219 at the Monongahela River in September. Twenty-six men quit the crew in fear of reprisals from Indians, leaving only fifteen axmen to continue clearing vistos for the survey until additional axmen could be sent from Fort Cumberland. On October 9th, 231 miles from the Post mark'd West, the survey crossed the Great Warrior Path, the principal north-south Indian footpath in eastern North America. The Mohawks accompanying the survey said the warpath marked the western limit of the agreement with the chiefs of the Six Nations, and insisted the survey be terminated there. Realizing they had gone as far as they could, Mason and Dixon set up their zenith sector and corrected their latitude, and backtracked about 25 miles to reset their last marks. They left a stone pyramid at the westernmost point of their survey, 233 miles 17 chains and 48 links west of the Post mark'd West in Bryan's field. Mason and Dixon returned east, arriving back at Bryan's farm on December 9th 1767, and reported their work to the commissioners at Christiana Bridge later that month. They had hoped the Royal Society would sponsor a measurement of a degree of longitude along the west line, but that proposal was never approved.
Mason calculated that if the earth were a perfect spheroid of uniform density (which it is not), a degree of longitude along the west line would measure 53.5549 miles. They spent about four months in early 1768 working on the latitude measurement project for the Royal Society, using high-precision measuring levels with adjustments for temperature. They worked their way southward from the tangent point, reaching the middle point in early June 1768, then working northward again. In Mason's final calculation, published in the Royal Society's Philosophical Transactions in 1768, a degree of latitude on the Delmarva Peninsula from the middle point northward was 68.7291 miles. On August 16th 1768 they delivered 200 printed copies of the map of their surveys, as drawn by Dixon, to the commissioners at a meeting at New Town on the Chester River. They were elected to the American Philosophical Society in April 1768. After settling their accounts, they enjoyed ten days of socializing in Philadelphia and then left for New York, sailing on the Halifax Packet to Falmouth, England, on September 11th 1768. Mason and Dixon never worked together again. In May 1769 the Royal Society sent Dixon to Hammerfest, above the Arctic Circle in Norway, and Mason to Cavan, Ireland, to record the June 4th transit of Venus, which occurred simultaneously with a lunar eclipse. David Rittenhouse and members of the American Philosophical Society conducted simultaneous observations in America. Dixon was elected a fellow of the Royal Society in 1773. He remained a bachelor, retired to Cockfield, Durham, and died in 1779 at age 45. Mason remarried in 1770, and continued to work for Nevil Maskelyne at the Royal Observatory, although he was never elected to the Royal Society. He returned to Philadelphia with his second wife and eight children in July 1786, died there on October 25th, and was buried in the Christ Church burial ground on Arch Street. His widow and her six children returned to England. His two sons from his first marriage remained in America. Less than a decade after the 1763-67 survey settled their long-running boundary dispute, the Penns and Calverts lost their colonies to the American Revolution. On June 15, 1776, the "Assembly of the Lower Counties of Pennsylvania" declared that New Castle, Kent and Sussex Counties would be separate and independent of both Pennsylvania and Britain. So Mason and Dixon's tangent, north and west lines became the boundaries between the three new states of Delaware, Maryland and Pennsylvania.
The Mason-Dixon Line
The west line would not become famous as the "Mason-Dixon Line" for another fifty years as America slowly and haltingly addressed longstanding inequities in civil rights. In the east, the piedmont Lenni Lenape tribes of Delaware and Pennsylvania were completely dispossessed, and the remnants of the tribes were eventually relocated by a series of forced marches: to Ohio, Indiana, Missouri, Kansas, and finally to the Indian Territory which became Oklahoma. Hannah Freeman (1730-1802), known as "Indian Hannah," was the last of the Lenni Lenape in Chester County, Pennsylvania. The tidewater Nanticoke communities were dispersed from Delaware and Maryland by 1750, and the last tribal speaker of the Nanticoke, Lydia Clark, died before 1850. Some migrated as far north as Canada and were assimilated into other tribes, and some were relocated west. The remnant community that remains in the area holds an inter-tribal pow-wow each September in Sussex County.
With Indians almost entirely displaced from the eastern states, the national debate focused on slavery and abolition, and whether new states entering the Union should be free or slave states. The Missouri Compromise of 1820 designated Mason and Dixon's west line as the national divide between the "free" and "slave" states east of the Ohio River, and the line suddenly acquired new significance. Delaware's 1776 state constitution had banned the importation of slaves, and state legislation in 1797 effectively stopped the export of slaves by declaring exported slaves automatically free. The state's population in the 1790 census was 15 percent black, and only 30 percent of these were free blacks. By the 1820 census, 78 percent of Delaware's blacks were free. By 1840, 87 percent were free. Both escaped slaves and legally free blacks living anywhere near the line were vulnerable to kidnapping by slave-catchers operating out of Maryland. One of the most famous kidnappers was Patty Cannon, a notoriously violent woman who, with her son-in-law Joe Johnson, ran a tavern on the Delaware-Maryland line near the Nanticoke River. The Cannon-Johnson gang seized blacks as far north as Philadelphia and transported them south for sale, hiding them in her house or, reportedly, shackled to trees on a small island in the Nanticoke River, and then transporting them across on the Woodland ferry or loading them onto a schooner to be shipped down the Nanticoke for eventual sale in Georgia. In 1829 Cannon and Johnson were arrested and charged with kidnapping, and Cannon was charged with several murders, including the murder of a slave buyer for his money. Johnson was flogged, and Cannon died in jail before trial, reportedly a suicide by poison. Her skull is kept in a hatbox at the Dover Public Library. It does not circulate via inter-library loan. For free blacks in Delaware, freedom was quite restricted. Blacks could not vote or testify in court against whites. After Nat Turner's 1831 rebellion in Virginia triggered rumors and panic about a black insurrection in Sussex County, the Delaware legislature banned blacks from owning weapons or meeting in groups larger than twelve. Through the first half of the 19th century the Mason-Dixon Line represented the line of freedom for tens of thousands of blacks escaping slavery in the south. The Underground Railroad provided food and temporary shelter at secret way-stations, and guided or sometimes transported northbound slaves across the Line. The spirituals sung by these slaves included coded references for escapees: the song "Follow the drinking gourd" referred to the Big Dipper, from which runaways could sight the North Star; the River Jordan was the Mason-Dixon Line; Pennsylvania was the Promised Land. After the Fugitive Slave Act of 1850 allowed slave owners to pursue their escaped slaves into the north, the line of freedom became the Canadian border, "Canaan" in the spirituals, and abolitionists created Underground Railroad stops all the way to Canada. Thomas Garrett, a member of Wilmington's Quaker community, was one of the most prominent conductors on the Underground Railroad. In 1813, while Garrett was still living in Upper Darby, Pennsylvania, a free black employee of his family's was kidnapped and taken into Maryland. Garrett succeeded in rescuing her, but the experience reportedly made him a committed abolitionist, and he dedicated the next fifty years of his life to helping others escape slavery.
Garrett moved to Wilmington in 1822 and lived at 227 Shipley Street, where he ran a successful iron business. He befriended and helped Harriet Tubman as she brought group after group of escaping slaves over the line; his house was the final step to freedom. Garrett was caught in 1848, prosecuted and convicted, forthrightly telling the court he had helped over 1,400 slaves escape. Judge Roger Taney ordered Garrett to reimburse the owners of slaves he was known to have helped, and it bankrupted him, but he continued in his work, assisting approximately 1,300 more slaves to freedom by 1864. Taney, who had been Chief Justice of the US Supreme Court since 1836, wrote the majority decision in Dred Scott v. Sandford (1857), declaring that no blacks, slave or free, could ever be US citizens, and striking down the Missouri Compromise. In the buildup to the Civil War, Delaware was a microcosm of the country, sharply split between abolitionists in New Castle County and pro-slavery interests in Sussex County. A series of abolition bills were defeated in the state legislature by a single vote. Like other Union border states, Delaware remained a slave state during the war, although its slave population had fallen to only a few hundred. President Abraham Lincoln offered a federal reimbursement of $500 per slave (far more than their market value) to Delaware slave-owners if Delaware would abolish slavery, but the state legislature stubbornly refused. Lincoln's January 1st 1863 Emancipation Proclamation abolished slavery in the Confederate states, but not in the Union border states. After the Civil War, Maryland, Pennsylvania, West Virginia, Ohio, Indiana, Illinois, Missouri and Arkansas outlawed slavery on or before their individual ratifications of the Thirteenth Amendment in 1865. New Jersey had technically abolished slavery in 1846, although it only ratified the Amendment in 1866. So as the Thirteenth Amendment neared ratification by 27 of the 36 states on December 6th 1865, America's last two remaining slave states were Kentucky and Delaware. Delaware didn't ratify the Thirteenth, Fourteenth or Fifteenth amendments until 1901. In the middle of the 20th century the Mason-Dixon Line was the backdrop for one of the five school desegregation cases that were eventually consolidated into the US Supreme Court's Brown v. Board of Education of Topeka case. Until 1952, public education in Delaware was strictly segregated. Since the late 19th century, property taxes paid by whites in Delaware had funded whites-only schools, while property taxes paid by blacks funded blacks-only schools. In the 1910's, P.S. duPont had financed the construction of schools for black children throughout Delaware, and effectively shamed the Legislature into providing better school facilities for whites as well. There was only one high school for black children in the entire state—Howard High School. Persistent income disparities between blacks and whites ensured persistent inequalities in public education. In 1950 the Bulah family had a vegetable stand at the corner of Valley Road and Limestone Road, and Shirley Bulah attended Hockessin Colored Elementary School 107, which had no bus service. The bus to Hockessin School 29, the white school, went right past the Bulah farm, and the Bulahs merely asked if Shirley could ride the bus to her own school. But Delaware law prohibited black and white children from riding the same school bus. Shirley's mother Sarah Bulah contacted Wilmington lawyer Louis Redding, who had recently won the Parker v.
University of Delaware case forcing the University to admit blacks. In 1950, the Wilmington chapter of the NAACP had launched an effort to get black parents in and around Wilmington to register their children in white schools, but the children were turned away. Redding chose the Bulahs as plaintiffs in one of two test cases, and convinced Sarah Bulah to sue in Delaware’s Chancery Court for Shirley’s right to attend the white school (Bulah v. Gebhart). Parents of eight black children from Claymont filed a parallel suit (Belton v. Gebhart). The complaints argued that the school system violated the "separate but equal" clause in Delaware’s Constitution (taken from Plessy v. Ferguson) because the white and black schools clearly were not equal. Redding knew that a court venue on the Mason-Dixon Line, with its local legacies of slavery and abolitionism, would be most likely to support integration. He argued the cases pro bono and the Wilmington NAACP paid the court costs. In 1952, Judge Collins Seitz found that the plaintiffs’ black schools were not equal to the white schools, and ordered the white schools to admit the plaintiff children. The Bulah v. Gebhart decision did not challenge the "separate but equal" doctrine directly, but it was the first time an American court found racial segregation in public schools to be unconstitutional. The state appealed Seitz’s decision to the Delaware Supreme Court, where it was upheld. The state’s appeal to the US Supreme Court was consolidated into the Brown v. Board case, which also upheld the decision. The town of Milford, Delaware, had riots when it integrated its schools immediately after the Brown decision. Elsewhere in Delaware, school integration proceeded slowly; the resistance to it was passive but pervasive. A decade after Brown, Delaware still had seventeen blacks-only school districts. As Wilmington’s schools were integrated, upscale families, both black and white, were moving to the suburbs, leaving behind high-poverty, black-majority city neighborhoods. Wilmington’s public school system, now serving a predominantly black, low-income population, was mired in corruption and failure. Following a second round of civil rights litigation in the 1970’s, the US Third Circuit court imposed a desegregation plan on New Castle County in 1976, under which schools in Wilmington would teach grades 4, 5 and 6 for all children in the northern half of the county, while suburban schools would teach grades 1-3 and 7-12. Wilmington children would have nine years of busing to the suburbs; suburban children would have three years of busing to Wilmington. After the 1976 desegregation order, a spate of new private schools popped up in the suburbs. One third of all schoolchildren living within four districts around Wilmington now attend non-public schools. In 1978 the Delaware legislature split the northern half of New Castle County into four large suburban districts, each to include a slice of Wilmington. The Brandywine, Red Clay Consolidated and Colonial districts are contiguous to Wilmington and serve adjacent city neighborhoods. The Christina district has two non-contiguous areas: the large Newark-Bear-Glasgow area and a high-poverty section of Wilmington about 10 miles distant on I-95. In 1995, the federal court lifted the desegregation order, declaring that the county had achieved "unitary status." Wilmington’s poorest communities remain predominantly black, but the urbanized Newark-New Castle corridor now has far more minority households than Wilmington. 
The school districts are committed to reducing black-white school achievement gaps as mandated under the federal No Child Left Behind Act (the 2001 reauthorization of the Elementary and Secondary Education Act). Louis Redding and Collins Seitz both died in 1998. The city government building at 800 North French St. in Wilmington is named in Redding's honor. The 800-acre triangular area known as the Wedge lies below the eastern end of Mason and Dixon's West line, bounded by Mason and Dixon's North line on the west and the 12-mile arc on the east. It is located west of Wedgewood Road, and is intersected by Rt. 896 in Delaware just before the road crosses the very northeast tip of Maryland into Pennsylvania. Although the Delaware legislature had representatives from the Wedge in the mid-19th century, jurisdiction over the Wedge remained ambiguous. A joint Delaware-Pennsylvania commission assigned it to Delaware in 1889, and Pennsylvania ratified the assignment in 1897, but Delaware, sensitive to Wedge residents who considered themselves Pennsylvanians, didn't vote to accept the Wedge as part of Delaware until 1921. Congress ratified the compact in 1921. Through most of the 19th century the Wedge was a popular hideout for criminals, and a place for duels, prize-fighting, gambling and other recreations, conveniently outside any official jurisdiction. A historic marker on Rt. 896 summarizes its history. An 1849 stone marker replaced the stone Mason and Dixon used to mark the intersection of the North line with the West line; when the Wedge was annexed to Delaware in 1921 this became the MD/PA/DE tri-state marker. Until fairly recently, the area around Rising Sun, Maryland, had sporadic activity from a local Ku Klux Klan group whose occasional requests for parade permits attracted a lot of media attention. In his book Walkin' the Line, William Ecenbarger recounts watching a Klan rally in Rising Sun in 1995. Local Klan leader Chester Doles served a prison sentence for assault, and then left Cecil County for Georgia. Whatever Klan is left in this area has been very quiet since. The Mason-Dixon Trail is a 193-mile hiking trail, marked in light blue paint blazes. It begins at the intersection of Pennsylvania Route 1 and the Brandywine River in Chadds Ford, PA; runs southwest through Hockessin and Newark, DE; westward through Elkton to Perryville and Havre de Grace, MD (although pedestrians are not allowed on the Rt. 40 bridge!); then northward up the west side of the Susquehanna into York County, PA, and proceeding northwest through York County through Gifford Pinchot State Park to connect with the Appalachian Trail at Whiskey Springs. The Mason-Dixon Trail does not actually follow any line that Mason and Dixon surveyed, but it's an interesting trail over diverse terrain. The stone markers used in the Mason-Dixon survey were quarried and carved in England and shipped to America. The ordinary mile markers placed by the survey party are inscribed with "M" and "P" on opposite sides. Every fifth mile was marked with a crownstone with the Calvert and Penn coats of arms on opposite sides. The locations of many of these markers are noted on USGS 7.5-minute topographic maps. Roger Nathan and William Ecenbarger have both explored these markers and written readable histories of them. Many markers are lost, but some are still accessible (with landowner permission).
Cope, Thomas D. 1949. Degrees along the west line, the parallel between Maryland and Pennsylvania. Proceedings of the American Philosophical Society 93(2):127-133 (May 1949). Thomas Cope, a physics professor at the University of Pennsylvania, published a number of articles about the survey.
Cummings, Hubertis Maurice. 1962. The Mason and Dixon Line: Story for a Bicentenary, 1763-1963. Commonwealth of Pennsylvania, Dept. of Internal Affairs, Harrisburg, PA. Written for the bicentennial of the survey, this book provides a good mix of technical detail and narrative.
Danson, Edwin. 2001. Drawing the Line: How Mason and Dixon Surveyed the Most Famous Border in America. John Wiley & Sons, New York. Provides the clearest technical explanations of the survey along with a readable narrative of it.
Ecenbarger, William. 2000. Walkin' the Line: A Journey from Past to Present along the Mason-Dixon. M. Evans, New York. Ecenbarger describes his tour of the tangent, north and west lines, and intertwines local vignettes of slavery and civil rights with brief descriptions of the actual survey.
Latrobe, John H. B. 1882. "The History of Mason and Dixon's Line," contained in an address delivered by John H. B. Latrobe of Maryland before the Historical Society of Pennsylvania, November 8, 1854. G. Bower, Oakland, DE.
Mason, A. H. (ed.). 1969. Journal of Charles Mason [1728-1786] and Jeremiah Dixon [1733-1779]. Memoirs of the American Philosophical Society, vol. 76. American Philosophical Society, Philadelphia. The survey journal, written in Mason's hand, was lost for most of a century, turning up in Halifax, Nova Scotia, in 1860; the original is now in the National Archives in Washington, DC. A transcription edited by A. Hughlett Mason was published in 1969 by the American Philosophical Society. The journal is mostly technical notes of the survey, with letters received and comments by Mason on his travels interspersed. An abridged fair copy of the journal, titled "Field Notes and Astronomical Observations of Charles Mason and Jeremiah Dixon," is in Maryland's Hall of Records in Annapolis.
Nathan, Roger E. 2000. East of the Mason-Dixon Line: A History of the Delaware Boundaries. Delaware Heritage Press, Wilmington, DE. Focuses on the history of Delaware's boundaries, in which Mason and Dixon played the largest part.
Pynchon, Thomas. 1997. Mason & Dixon. Henry Holt, New York. Pynchon's novel mixes historically accurate details with wild fantasies. Mason and Dixon are portrayed as naïve, picaresque characters, the Laurel and Hardy of the 18th century, surrounded by an odd cast including a talking dog, a mechanical duck in love with an insane French chef, an electric eel, a renegade Chinese Jesuit feng-shui master, and a narrator who swallowed a perpetual motion watch. Mason and Dixon personify America's confused moral compass, slowly realizing how their survey line defiles a wild, innocent landscape, and opens the west to the violence and moral ambiguities that accompany "civilization."
Sobel, Dava. 1996. Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time. Walker & Co., New York.
http://www.udel.edu/johnmack/mason_dixon/
SO LITTLE WAS KNOWN of Mercury before the epic voyage of Mariner that the mission was virtually man's first look at this innermost planet of the Solar System. The science objectives for the mission were to explore Mercury as thoroughly as possible with seven experiments: television imaging, infrared radiometry, extreme ultraviolet spectroscopy, magnetometer, plasma, charged particles, and radio wave propagation. The same experiments were used to explore Venus, adding to knowledge gained from earlier U.S. and U.S.S.R. flights. Exploration of Venus was restricted somewhat by the trajectory requirements for reaching the prime target, Mercury. These requirements made it necessary, for example, for Mariner to follow a trajectory that did not produce a Sun occultation at Venus, so the ultraviolet occultation experiment (see page 24) could not be conducted at that planet. To obtain the best science results, the objectives of each experiment were established and the space near Mercury was evaluated for aiming points and trajectories that would satisfy them. Of major importance was a flight path that would place the planet between the spacecraft and the Sun, and also between the spacecraft and the Earth, i.e., solar and Earth occultation, respectively. Study of the planet's effect on the Sun's plasma gas and magnetic fields ("solar wind") required a solar occultation, as did the sounding of Mercury's atmosphere by the ultraviolet occultation experiment. By observing the decrease in intensity of solar ultraviolet radiation as Mercury and its atmosphere blocked it out, a measure of this atmosphere could be obtained. Earth occultation was needed to observe the passage of radio signals from the spacecraft to Earth until cut off by the planet, and again on emergence from behind the planet. This would provide information concerning the radius of the planet, its atmosphere and ionosphere. To provide the greatest amount of information obtainable with remote sensing devices, Mariner Venus/Mercury carried more science instruments (Fig. 3-1) than most previous Mariner spacecraft. A magnetometer measured magnetic fields, a plasma analyzer measured the ions and electrons of the solar wind, and cosmic ray telescopes provided information on solar and galactic cosmic rays. The main objective of these instruments was to learn about a planet by studying its effects on the interplanetary medium. An infrared radiometer measured temperatures of the clouds of Venus and the surface of Mercury. Two independent ultraviolet instruments (measuring light beyond the violet end of the spectrum) analyzed the planetary atmospheres. One instrument was fixed to the body of the spacecraft and was used at Mercury to search for traces of atmosphere along the edges of the visible disc of the planet. A second instrument, mounted along with the television cameras on a scan platform, could be pointed on command. This "airglow" spectrometer was used to scan both of the planets, searching for evidence of hydrogen, helium, argon, neon, oxygen, and carbon. At Venus, it searched for specific gases, and during the cruise phase it looked for sources of ultraviolet radiation coming from hot stars and gas clouds in the galaxy. Measurements were also made of the gaseous envelope surrounding the comet Kohoutek. A complex of two television cameras, each with an eight-position filter wheel, was the basis of the imaging experiment. These cameras were capable of taking both narrow- and wide-angle views of Venus and Mercury.
Sharing the scan platform with the airglow spectrometer, the imaging complex was directed by command from Earth. As well as taking pictures in different colors of light, these cameras also measured how the light was polarized, observations intended to provide information on the composition of the clouds of Venus and the surface of Mercury. A radio experiment used the signals transmitted from the spacecraft to Earth. By tracking the spacecraft signals, experimenters determined how the spacecraft was affected by the gravitational fields of the planets. From this information they determined the shape of each planet and whether there were anomalies in its gravitational field. By analysis of what happened to the radio signals as they passed close to the limb or edge of the planet, experimenters were able to probe the atmosphere of Venus and check for an atmosphere of Mercury. To take full advantage of the Venus occultation, which bent the radio signals appreciably, the high-gain antenna on the spacecraft was steered so as to compensate partially for the bending of the radio signal. In this way, information was obtained at deeper levels of the atmosphere than was possible with earlier flyby spacecraft. The science experiments were selected from proposals submitted to NASA in response to the announcement of the Mariner Venus/Mercury flight opportunity. The infrared radiometer measured temperatures on the surface of Mercury and the clouds of Venus by sampling thermal (infrared) radiation. Observations of thermal emission from Mercury were expected to provide information on the average thermal properties, large-scale and small-scale surface anomalies, and surface roughness. It was known that temperature variations on Mercury would be large, owing to intensive heating of the day side and the slow rotation period of 58.6 days, which allows the night side to radiate away most of its heat. Measurement of heat absorption and loss across the terminator (shadow line) regions could provide indirect evidence of the nature of the surface material, such as whether it is sand, gravel, or rock. At Venus the instrument was expected to provide cloud top brightness temperatures at higher resolution than can be achieved from Earth or had been achieved by earlier spacecraft. The infrared radiometer was fixed to the body of the spacecraft on the sunlit side, with apertures shielded from the direct sunlight under a thermal blanket. The instrument (Fig. 3-2) was based upon earlier radiometers flown on Mariner Mars 1969 and 1971, but instead of the reflecting optics of the earlier radiometers, the new instrument made use of two Cassegrain telescopes with special long-wavelength filters. This allowed observations at longer wavelengths and also increased sensitivity. Two 1/2-deg fields of view separated by 120 deg were used to scan Mercury, the angular separation being obtained by a three-position scan mirror (see Fig. 3-3). The forward and aft viewing beams thus ensured that there could be both a planet viewing beam and a black space reference beam for all the observations. The instrument measured surface brightness temperature in the two spectral bands 34 to 55 and 7.5 to 14 micrometers, which correspond to temperature ranges of 80 to 340 and 200 to 700 K, respectively. Observations of the velocity and the directional distributions of the normal solar wind constituents in the vicinity of Mercury were required to understand the interaction of the solar wind with the planet.
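As a rough illustration of the radiometer bands quoted above, Wien's displacement law shows why the cold night side of Mercury calls for the longer-wavelength channel; the numbers below are back-of-the-envelope values of my own, not figures from the mission documentation (the flight bands were set by detector sensitivity, not by peak wavelength alone).

    # Wien's displacement law: peak emission wavelength shifts to longer
    # wavelengths as a surface gets colder (illustrative calculation only).
    WIEN_UM_K = 2898.0    # Wien constant, micrometer-kelvins

    for temp_k in (80, 200, 340, 700):
        peak_um = WIEN_UM_K / temp_k
        print(temp_k, "K peaks near", round(peak_um, 1), "micrometers")
    # 80 K  -> ~36 um  (hence the 34 to 55 micrometer band for the night side)
    # 700 K -> ~4 um   (bright across the 7.5 to 14 micrometer band)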
Observations of the solar wind inside the orbit of Venus were also important, since no previous spacecraft had penetrated this region. Therefore, continuous measurements were planned from the orbit of the Earth to the orbit of Mercury. Additionally, an objective of the experiment was to verify and extend previous observations of the solar wind's interactions with Venus and to clarify the role of electrons in these interactions. Instrumentation for the experiment consisted of two detectors on a motor-driven platform (Fig. 3-4). The principal detector, facing sunward, consisted of a pair of electrostatic analyzers. The auxiliary detector, facing away from the Sun, was a single electrostatic analyzer. The forward-looking device was called the scanning electrostatic analyzer, while the backward-looking device was called the scanning electron spectrometer. The former measured positive ions and electrons, the latter only electrons. The importance of investigating the interaction of the solar wind with the planets and the variation of the wind with distance inside the orbit of Venus was evidenced by the large team of investigators selected from seven research organizations for this experiment. The solar wind is an extension of the Sun's corona into interplanetary space. It is a fully ionized gas which consists of equal numbers of positively charged particles (mostly protons) and negative electrons. This ionized gas or plasma moves radially outward from the Sun at a very high velocity, hundreds of kilometers per second. The magnetic field of the Sun is carried outward by the plasma and is bent into a spiral configuration by a combination of the radial motion of the plasma and the rotation of the Sun. If one thinks of the plasma as a hot, ionized gas, the ions and electrons have two sorts of motions: a bulk velocity because they are both streaming outward from the Sun, and a thermal velocity because the gas is hot. For the protons, the bulk velocity is much higher than the average thermal velocity, by about a factor of 10; for electrons, the situation is exactly reversed. To an observer on the spacecraft, the positive ions appear to come almost directly from the Sun, whereas the electrons come almost uniformly from all directions. To study the properties of the plasma, the combined experiments were mounted at the end of a short boom, on a platform which allowed the plasma experiment to scan right or left through an angle of 60 deg above and below the spacecraft-Sun line.
Magnetic Field Experiment
The magnetic field experiment consisted of two 3-axis sensors located at different positions along a 6.1-m (20-ft) boom. Figure 3-5 shows a magnetometer mounted on the boom, together with a cutaway view of a sensor. The two sensors carried on the boom were triaxial fluxgate magnetometers. Each sensor was protected from direct solar radiation by a sunshade and a thermal blanket. The purpose of the two sensors was to permit the simultaneous measurement (at different distances from the spacecraft) of the magnetic field, which is the sum of the weak magnetic field in space (and near the planets) and the magnetic field of the spacecraft itself. The inboard magnetometer, being approximately twice as close to the spacecraft as the outboard sensor, was more sensitive to changes in the magnetic field of the spacecraft, with the result that these perturbations could be isolated and removed from the outboard sensor measurements.
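A minimal sketch of how a two-sensor arrangement lets the spacecraft's own field be separated from the ambient field is given below. It assumes the spacecraft field falls off like a dipole (as 1/r cubed) and takes the roughly 2:1 spacing of the sensors mentioned above; the function, its made-up readings, and the single-axis treatment are illustrative only, not the mission's actual data-reduction procedure.

    # Dual-magnetometer separation sketch, assuming a dipole (1/r^3) falloff of the
    # spacecraft's own field and an outboard sensor twice as far out as the inboard one
    # (both assumptions for illustration only).
    def separate_fields(b_inboard, b_outboard, distance_ratio=2.0):
        """Return (ambient_field, spacecraft_field_at_inboard) for one axis, in gamma."""
        k = distance_ratio ** 3          # dipole falloff factor (8 for a 2:1 spacing)
        # b_inboard  = ambient + s_in
        # b_outboard = ambient + s_in / k
        s_in = (b_inboard - b_outboard) * k / (k - 1)
        ambient = b_inboard - s_in
        return ambient, s_in

    # Example with made-up single-axis readings:
    print(separate_fields(b_inboard=10.0, b_outboard=6.5))   # (6.0, 4.0)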
In interplanetary space, the magnetic field is typically about 6 gamma (compared with a strength of about 30,000 gamma at Earth's surface at the equator). By contrast, the field of the spacecraft, as measured at the outboard sensor, was observed to vary in direction and intensity quite considerably during the mission, swinging from 1 to 4 gamma. This variation in the spacecraft field demonstrated the importance of having two sensors to remove the spacecraft field from the measured field. In addition to the planetary observations, magnetic field observations were important in studying how the interplanetary plasma varies with distance from the Sun and how this plasma moves outward from the Sun. The measurements of plasma and magnetic fields were mutually supporting, and their correlation was an important and sensitive test of consistency between the two scientific instruments. The charged particle experiment was designed to observe high-energy charged particles (atomic nuclei) over a wide range in energy and atomic number. The instrument had two parts, a main telescope and a low-energy telescope, both mounted on the body of the spacecraft. During cruise the charged particle experiment measured solar and galactic cosmic rays with the objective of determining the effect of the Sun's extended atmosphere (heliosphere) on cosmic rays coming into the Solar System from elsewhere in the galaxy. During encounter with Mercury, the experiment was to search for charged particles in the vicinity of Mercury. The effect of solar flares on the flux of charged particles was correlated with measurements made from Pioneer spacecraft in the inner and outer Solar System as well as IMP (Interplanetary Monitoring Platform) spacecraft circling the Earth to determine how solar particles propagate in interplanetary space. The instrument is shown in Fig. 3-6. The two telescopes looked 45 to 50 deg west of the line from spacecraft to the Sun, with a 70-deg field of view. The low-energy telescope allowed the separate detection of relatively low-energy protons in the range 0.4 to 9 MeV (million electron volts) and alpha particles (helium nuclei) in the range 1.6 to 25 MeV. The high-energy telescope detected electrons in the range 200 keV (thousand electron volts) to 30 MeV, protons of energy greater than 0.55 MeV, and uniquely detected alpha particles with energy greater than 40 MeV. Both telescopes were able to detect energetic nuclei of atomic numbers up to oxygen. The telescopes were very similar to those flown in Pioneer 10 and 11 to the outer Solar System. In fact, when Mariner reached Mercury for a first encounter, Pioneer 10 was more than five times the Earth's distance from the Sun, and Pioneer 11 was 3.5 times that distance. Thus the three spacecraft provided an unprecedented range of radial measurements of the modulation of the cosmic ray flux by the heliosphere. The extreme ultraviolet spectroscopy experiment consisted of two independent instruments: a fixed solar-looking occultation spectrometer, mounted on the body of the spacecraft, and an airglow instrument, mounted on the scan platform. The aim of the experiment was to analyze planetary atmospheres, and, during cruise, to measure the distribution of hydrogen and helium Lyman-alpha radiation emanating from outside the Solar System. The search for an atmosphere on Mercury represented a primary scientific objective of this experiment. The extreme ultraviolet spectrometers provided two approaches to this search.
The first observed the occultation of the Sun by the disc of Mercury; the other scanned through the atmosphere on both bright and dark limbs in search of emission from the neutral constituents hydrogen, helium, carbon, oxygen, argon and neon, at wavelengths ranging from 304 to 1659 angstroms. These elements were selected for study on the basis of theoretical prediction of the most likely constituents of the presumably tenuous atmosphere of Mercury. The occultation spectrometer (Fig. 3-7) was set to be responsive at four spectral bands, centered at 475, 740, 810, and 890 angstroms, where the relatively high solar ultraviolet intensity and the large absorption cross section of all gases in this spectral region would combine to provide highly sensitive measurements of the atmosphere of Mercury, independent of its composition. The airglow experiment (Fig. 3-8), in addition to providing a measurement of the relative abundances of the constituents sought in the atmosphere of Mercury, also made important observations at Venus and during the cruise phase between the planets. The angular dimension of the field of view of the airglow instrument was selected to allow resolution to about one scale height of the heaviest expected atmospheric constituent at the limb of the planet (argon), thereby providing data on the structure as well as the composition of the planetary atmosphere.
Celestial Mechanics and Radio Science
These experiments relied upon mathematical analysis of the radio signals coming from the spacecraft, based upon radio tracking of the spacecraft and analysis of the effects of the planetary atmospheres on the radio signal. In the celestial mechanics experiment the mass and gravitational characteristics of both Mercury and Venus were to be determined from the effect of each planet on the predicted trajectory of the spacecraft. These data would also provide estimates of the internal composition and density of the planets. The occultation experiment (Fig. 3-9) observed changes to the radio waves from the spacecraft transmitter as they passed through the atmospheres of Venus and Mercury en route to the Earth-based receivers as Mariner passed behind the planets as viewed from Earth. Gases in an atmosphere refract and scatter a radio signal, and by measuring these effects scientists can calculate the pressure and temperature of the atmosphere. The presence of an ionosphere is revealed by its special effects upon the characteristics of the radio signal. The cutoff of the radio signal as it grazes the surface of the planet provides a measurement for accurately determining the radius of the planet. Because the thick atmosphere of Venus bends the radio signal and traps it in a path around the planet, the high-gain antenna of Mariner was steered along the limb to compensate for the expected bending so as to allow deeper penetration of the radio waves through the atmosphere. The experiment used two frequencies to provide more accurate information about Venus's atmosphere and the interplanetary medium than is obtainable with a single frequency. The television system centered on two vidicon cameras, each equipped with an eight-position filter wheel. The vidicons were attached to telescopes mounted on a scan platform that allowed movement in vertical and horizontal directions for precise targeting on the planetary surfaces. These folded optics (Cassegrain) telescopes were required to provide narrow-angle, high-resolution photography (Fig. 3-10).
They were powerful enough for newspaper classified ads to be read from a distance of 400 meters (a quarter of a mile). An auxiliary optical system mounted on each camera allowed the acquisition of a wide-angle, lower-quality image. Changing to the wide-angle photography was done by moving a mirror on the filter wheel to a position in the optical path of the auxiliary system. In addition to wide-angle capability, the filter wheels included blue bandpass filters, ultraviolet polarizing filters, minus ultraviolet high-pass filters, clear apertures, ultraviolet bandpass filters, defocussing lenses for calibration, and yellow bandpass filters. A shutter blade controlled the exposure of the 9.8- by 12.3-mm image face of the vidicon for an interval that could be varied from 3 msec to 12 sec. The light image formed on the photosensitive surface of the vidicon produced an electrostatic charge proportional to the relative brightness of points within the image. During vidicon readout, an electron beam scanned the back side of the vidicon and neutralized part of the charge so as to produce electric current variations proportional to the point charge being scanned at the time. These analog signals produced from the vidicon readout process were electronically digitized as 832 discrete dots or picture elements (pixels) per scan line, and presented to the flight data system in the form of 8-bit elements for transmission. Each TV frame-one picture-consisted of 700 of these vidicon scan lines. All timing and control signals, such as frame start, line start/stop, frame erase, shutter open/close, and filter wheel step, were provided by the systems on board the spacecraft. The television experiment had the objectives of providing data to permit the following scientific studies of Mercury: gross physiography, radius and shape of the planet, morphology of local features, rotation and cartography, photometric properties, and regional color differences. For Venus the experiment aimed at obtaining data on the visual cloud structure, scale and stratification, and the ultraviolet markings and their structure and motions. The television experiment also searched for satellites of Mercury and Venus and was used for targets of opportunity such as Comet Kohoutek.
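The imaging figures quoted above (832 pixels per scan line, 700 lines per frame, 8 bits per pixel) lend themselves to a quick back-of-the-envelope calculation of how much data one TV frame represented. The sketch below simply multiplies out those numbers; the variable names are my own.

    # Rough data-volume estimate for one Mariner 10 TV frame, using the figures
    # quoted in the text: 832 pixels per scan line, 700 lines, 8 bits per pixel.
    PIXELS_PER_LINE = 832
    LINES_PER_FRAME = 700
    BITS_PER_PIXEL = 8

    bits_per_frame = PIXELS_PER_LINE * LINES_PER_FRAME * BITS_PER_PIXEL
    print(f"pixels per frame: {PIXELS_PER_LINE * LINES_PER_FRAME:,}")  # 582,400
    print(f"bits per frame:   {bits_per_frame:,}")                     # 4,659,200
    print(f"bytes per frame:  {bits_per_frame // 8:,}")                # 582,400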
http://history.nasa.gov/SP-424/ch3.htm
13
21
Astronomers study light from all across the electromagnetic spectrum to piece together the story of the universe. X-ray astronomy looks at high energy, short wavelength light – over 40 times smaller than the shortest wavelength our eyes can detect. This light, emitted by gas heated to millions of degrees, provides a glimpse into extreme environments like black holes, neutron stars, and colliding galaxies. Million degree gas can be found throughout the universe. In x-ray binary systems, a neutron star or black hole – the very dense remnant of a deceased massive star – is orbiting another star and stealing gas from its companion. The stolen gas gets caught up in a disk that spirals around the stellar remnant. The intense gravity of a neutron star or black hole accelerates the spiraling gas to high speeds, heating the material in the disk to extreme temperatures, and causing it to glow in x-ray light. Any time interstellar gas is rapidly compressed, it can be heated enough to emit x-rays. The shock front from a supernova can send a wave of x-ray emission rippling through space. X-rays also permeate galactic clusters – the largest structures in the universe. In a galactic cluster, thousands of galaxies dance around one another, drawn together by their mutual gravitational attraction. Collisions between member galaxies are fairly common. The energy released in these titanic clashes is enough to heat the tenuous gas that permeates the cluster. When observed with x-ray telescopes, galactic clusters appear bathed in a diffuse x-ray glow. Studying the x-ray emission can tell astronomers a lot about the evolution of galaxies and the nature of the elusive “dark matter” that binds the cluster together. The trouble with cosmic x-rays is they never make it to Earth’s surface. Our planet’s atmosphere is very effective at absorbing incoming x-rays. That’s good news for us since sustained exposure to such high-energy light is lethal. But it does mean that if you want to study the x-ray universe, you have to get above the atmosphere. The first attempt at detecting extraterrestrial sources of x-rays came with a 1949 rocket launch in the deserts of New Mexico. Detectors in the rocket picked up x-rays coming from the sun. Now, the sun itself is actually a very weak emitter of x-rays. At a relatively cool temperature of “only” 6000 degrees Celsius, most of its energy comes out as visible light. What the rocket had detected was the million degree plasma bubble that surrounds the sun: its corona. Why the gas around the sun is hotter than the sun itself is a long-standing question in astrophysics. There are many ideas, such as electric currents generated by magnetic fields, but none are fully satisfactory. More rockets launched in the early 1960’s stumbled upon x-rays coming from well outside the solar system. An experiment in 1962 registered x-rays coming from somewhere in the constellation Scorpius. The source, dubbed Scorpius X-1, turned out to be a neutron star, 9000 light-years away, orbiting another star. Superheated gas falling onto the neutron star was releasing 60,000 times more energy just in x-rays than all the wavelengths of light emitted by the sun! Sounding rockets in 1964 found another very unusual x-ray object in the constellation Cygnus the Swan. Cygnus X-1 was not just an x-ray binary, but the first confirmed observation of a black hole—the remnant core of a supermassive star whose gravity is so intense that it can no longer emit light. 
At a distance of 6100 light-years from Earth, Cygnus X-1 is the black hole companion to a blue supergiant. By measuring how quickly the blue star is being whipped around in space, astronomers were able to figure out that the black hole contains the mass of 15 suns! Since black holes don’t emit any light of their own, this is one of the only ways that astronomers can locate and study these very strange and poorly understood creatures. The problem with sounding rockets is they are above the atmosphere for only a few minutes. This limits astronomers to only getting a quick peek at the x-ray sky. The introduction of x-ray telescopes on Earth-orbiting satellites in the late 1970’s changed all that. In the intervening decades, researchers have discovered a sky littered with pinpoints of x-ray light: the sites of neutron stars and black holes. Closer to home, satellites have revealed an x-ray glow emanating from all across the sky. What they’re seeing is the inside of a gargantuan gas bubble – 300 light-years across – in which the solar system resides. Dubbed “The Local Bubble”, it is most likely the very ancient marker of a supernova explosion that shook the region approximately 20 million years ago. What would that have looked like to our ancestors on a more primitive Earth? X-ray telescopes reveal a hidden, and very energetic universe. They trace interstellar and intergalactic streams of gas heated to millions of degrees. Through neutron stars and black holes, supernova shock waves and colliding galaxies, the relatively recent discovery of extraterrestrial x-ray sources lets astronomers explore some of the most extreme environments in our cosmos.
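The claim above that gas heated to millions of degrees glows in x-rays can be sanity-checked with Wien's displacement law, which the article itself does not mention; the two temperatures below are illustrative round numbers rather than values drawn from the text.

    # Rough check of why million-degree gas glows in x-rays, using Wien's
    # displacement law (peak wavelength = b / T).  Not taken from the article.
    WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvin

    def peak_wavelength_nm(temperature_k: float) -> float:
        """Peak blackbody wavelength in nanometres for a temperature in kelvin."""
        return WIEN_B / temperature_k * 1e9

    print(peak_wavelength_nm(6_000))      # ~483 nm: visible light (a Sun-like surface)
    print(peak_wavelength_nm(1_000_000))  # ~2.9 nm: x-rays (million-degree gas)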
http://earthsky.org/astronomy-essentials/x-rays-reveal-the-violent-side-of-the-universe/comment-page-1
13
48
Ancient India's Contribution to Mathematics

"India was the motherland of our race and Sanskrit the mother of Europe's languages. India was the mother of our philosophy, of much of our mathematics, of the ideals embodied in Christianity... of self-government and democracy. In many ways, Mother India is the mother of us all." - Will Durant - American Historian 1885-1981

Mathematics represents a high level of abstraction attained by the human mind. In India, mathematics has its roots in Vedic literature which is nearly 4000 years old. Between 1000 B.C. and 1000 A.D. various treatises on mathematics were authored by Indian mathematicians in which were set forth for the first time the concept of zero, the techniques of algebra and algorithm, square root and cube root. As in the applied sciences like production technology, architecture and shipbuilding, Indians in ancient times also made advances in abstract sciences like Mathematics and Astronomy. It has now been generally accepted that the technique of algebra and the concept of zero originated in India. But it would be surprising for us to know that even the rudiments of Geometry, called Rekha-Ganita in ancient India, were formulated and applied in the drafting of Mandalas for architectural purposes. They were also displayed in the geometric patterns used in many temple motifs. Even the technique of calculation, called algorithm, which is today widely used in designing software programs (instructions) for computers, was also derived from Indian mathematics. In this chapter we shall examine the advances made by Indian mathematicians in ancient times.

ALGEBRA - THE OTHER MATHEMATICS?

In India around the 5th century A.D. a system of mathematics that made astronomical calculations easy was developed. In those times its application was limited to astronomy, as its pioneers were astronomers. Astronomical calculations are complex and involve many variables that go into the derivation of unknown quantities. Algebra is a short-hand method of calculation and by this feature it scores over conventional arithmetic. In ancient India conventional mathematics termed Ganitam was known before the development of algebra. This is borne out by the name - Bijaganitam, which was given to the algebraic form of computation. Bijaganitam means 'the other mathematics' (Bija means 'another' or 'second' and Ganitam means mathematics). The fact that this name was chosen for this system of computation implies that it was recognised as a parallel system of computation, different from the conventional one which was used since the past and was till then the only one. Some have interpreted the term Bija to mean seed, symbolizing origin or beginning, and from this the inference is derived that Bijaganitam was the original form of computation. Credence is lent to this view by the existence of mathematics in the Vedic literature which was also a shorthand method of computation. But whatever the origin of algebra, it is certain that this technique of computation originated in India and was current around 1500 years back. Aryabhatta, an Indian mathematician who lived in the 5th century A.D., has referred to Bijaganitam in his treatise on mathematics, Aryabhattiya. An Indian mathematician-astronomer, Bhaskaracharya, has also authored a treatise on this subject. The treatise, which is dated around the 12th century A.D., is entitled 'Siddhanta-Shiromani', of which one section is entitled Bijaganitam. Thus the technique of algebraic computation was known and was developed in India in earlier times.
From the 13th century onwards, India was subject to invasions from the Arabs and other Islamised communities like the Turks and Afghans. Along with these invaders came chroniclers and critics like Al-Beruni who studied Indian society and polity. The Indian system of mathematics could not have escaped their attention. It was also the age of the Islamic Renaissance, and the Arabs generally improved upon the arts and sciences that they imbibed from the lands they overran during their great Jehad. The system of mathematics they observed in India was adapted by them and given the name 'Al-Jabr' meaning 'the reunion of broken parts'. 'Al' means 'The' and 'Jabr' means 'reunion'. This name given by the Arabs indicates that they took it from an external source and amalgamated it with their concepts about mathematics.

Between the 10th and 13th centuries, the Christian kingdoms of Europe made numerous attempts to reconquer the birthplace of Jesus Christ from its Mohammedan-Arab rulers. These attempts, called the Crusades, failed in their military objective, but the contacts they created between oriental and occidental nations resulted in a massive exchange of ideas. The technique of algebra could have passed on to the west at this time. During the Renaissance in Europe, followed by the industrial revolution, the knowledge received from the east was further developed. Algebra as we know it today has lost any characteristics that betray its eastern origin save the fact that the term 'algebra' is a corruption of the term 'Al-Jabr' which the Arabs gave to Bijaganitam. Incidentally, the term Bijaganit is still used in India to refer to this subject.

In the year 1816, an Englishman by the name James Taylor translated Bhaskara's Leelavati into English. A second English translation appeared in the following year (1817) by the English astronomer Henry Thomas Colebrooke. Thus the works of this Indian mathematician-astronomer were made known to the western world nearly 700 years after he had penned them, although his ideas had already reached the west through the Arabs many centuries earlier. In the words of the Australian Indologist A.L. Basham (A.L. Basham; The Wonder That Was India): "... the world owes most to India in the realm of mathematics, which was developed in the Gupta period to a stage more advanced than that reached by any other nation of antiquity. The success of Indian mathematics was mainly due to the fact that Indians had a clear conception of the abstract number as distinct from the numerical quantity of objects or spatial extension." Thus Indians could take their mathematical concepts to an abstract plane and with the aid of a simple numerical notation devise a rudimentary algebra, as against the Greeks or the ancient Egyptians who, due to their concern with the immediate measurement of physical objects, remained confined to Mensuration and Geometry.

GEOMETRY AND ALGORITHM

But even in the area of Geometry, Indian mathematicians had their contribution. There was an area of mathematical applications called Rekha Ganita (Line Computation). The Sulva Sutras, which literally mean 'Rule of the Chord', give geometrical methods of constructing altars and temples. The temple layouts were called Mandalas. Some of the important works in this field are by Apastamba, Baudhayana, Hiranyakesin, Manava, Varaha and Vadhula.
The Arab scholar Mohammed Ibn Jubair al Battani studied the Indian use of ratios from Rekha Ganita and introduced it among Arab scholars like Al Khwarazmi, Washiya and Abe Mashar, who incorporated the newly acquired knowledge of algebra and other branches of Indian mathematics into the Arab ideas about the subject. The chief exponent of this Indo-Arab amalgam in mathematics was Al Khwarazmi, who evolved a technique of calculation from Indian sources. This technique, which was named by westerners after Al Khwarazmi as "Algorismi", gave us the modern term Algorithm, which is used in computer software. An algorithm is a process of calculation based on decimal notation numbers. This method was deduced by Khwarazmi from the Indian techniques of geometric computation which he had studied. Al Khwarazmi's work was translated into Latin under the title "De Numero Indico", which means 'of Indian Numerals', thus betraying its Indian origin. This translation, which belongs to the 12th century A.D., is credited to one Adelard who lived in a town called Bath in Britain. Thus Al Khwarazmi and Adelard could be looked upon as pioneers who transmitted Indian numerals to the west. Incidentally, according to the Oxford Dictionary, the word algorithm which we use in the English language is a corruption of the name Khwarazmi, which literally means '(a person) from Khawarizm', which was the name of the town where Al Khwarazmi lived. Today, unfortunately, the original Indian texts that Al Khwarazmi studied are lost to us; only the translations are available. The Arabs borrowed so much from India in the field of mathematics that even the subject of mathematics in Arabic came to be known as Hindsa, which means 'from India', and a mathematician or engineer in Arabic is called Muhandis, which means 'an expert in Mathematics'. The word Muhandis is possibly derived from the Arabic term for mathematics, viz. Hindsa.

The Concept of Zero

The concept of zero also originated in ancient India. This concept may seem to be a very ordinary one and a claim to its discovery may be viewed as queer. But if one gives a hard thought to this concept it would be seen that zero is not just a numeral. Apart from being a numeral, it is also a concept, and a fundamental one at that. It is fundamental because terms to identify visible or perceptible objects do not require much ingenuity. But a concept and symbol that connotes nullity represents a qualitative advancement of the human capacity of abstraction. In the absence of a concept of zero there could have been only positive numerals in computation; the inclusion of zero in mathematics opened up a new dimension of negative numerals and gave a cut-off point and a standard in the measurability of qualities whose extremes are as yet unknown to human beings, such as temperature. In ancient India this numeral was used in computation; it was indicated by a dot and was termed Pujyam. Even today we use this term for zero along with the more current term Shunyam meaning a blank. But queerly the term Pujyam also means holy. Param-Pujya is a prefix used in written communication with elders. In this case it means respected or esteemed. The reason why the term Pujya - meaning blank - came to be sanctified can only be guessed. Indian philosophy has glorified concepts like the material world being an illusion (Maya), the act of renouncing the material world (Tyaga) and the goal of merging into the void of eternity (Nirvana). Herein could lie the reason how the mathematical concept of zero got a philosophical connotation of reverence.
It is possible that, like the technique of algebra, the concept of zero also reached the west through the Arabs. In ancient India the terms used to describe zero included Pujyam, Shunyam and Bindu; the concept of a void or blank was termed Shukla and Shubra. The Arabs refer to the zero as Siphra or Sifr, from which we have the English terms Cipher or Cypher. In English the term Cipher connotes zero or any Arabic numeral. Thus it is evident that the term Cipher is derived from the Arabic Sifr, which in turn is quite close to the Sanskrit term Shubra.

The ancient Indian astronomer Brahmagupta is credited with having put forth the concept of zero for the first time. Brahmagupta is said to have been born in the year 598 A.D. at Bhillamala (today's Bhinmal) in Gujarat, Western India. Not much is known about Brahmagupta's early life. We are told that his name as a mathematician was well established when King Vyaghramukha of the Chapa dynasty made him the court astronomer. Of his two treatises, Brahma-sphuta-siddhanta and Karanakhandakhadyaka, the first is more famous. It was a corrected version of the old astronomical text, Brahma-siddhanta. It was in his Brahma-sphuta-siddhanta that, for the first time ever, he formulated the rules of the operation of zero, foreshadowing the decimal system of numeration. With the integration of zero into the numerals it became possible to note higher numerals with limited characters.

In the earlier Roman and Babylonian systems of numeration, a large number of characters were required to denote higher numerals. Thus enumeration and computation became unwieldy. For instance, as per the Roman system of numeration, the number thirty would have to be written as XXX, while as per the decimal system it would be 30; further, the number thirty-three would be XXXIII as per the Roman system and 33 as per the decimal system. Thus it is clear how the introduction of the decimal system made possible the writing of numerals having a high value with limited characters. This also made computation easier.

Apart from developing the decimal system based on the incorporation of zero in enumeration, Brahmagupta also arrived at solutions for indeterminate equations of the type ax²+1=y², and thus can be called the founder of the higher branch of mathematics called numerical analysis. Brahmagupta's treatise Brahma-sphuta-siddhanta was translated into Arabic under the title Sind Hind. For several centuries this translation remained a standard text of reference in the Arab world. It was from this translation of an Indian text on mathematics that the Arab mathematicians perfected the decimal system and gave the world its current system of enumeration which we call the Arab numerals, which are originally Indian numerals.
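Brahmagupta's indeterminate equations of the type ax² + 1 = y² can be explored with a simple brute-force search. The sketch below is purely illustrative; the function name and the search limit are my own choices and are not drawn from any of the texts discussed above.

    # Brute-force search for the indeterminate equation a*x^2 + 1 = y^2
    # (the Pell-type problem attributed to Brahmagupta above).  Illustrative only.
    from math import isqrt

    def smallest_solution(a: int, x_limit: int = 1_000_000):
        """Smallest positive (x, y) with a*x*x + 1 == y*y, or None if not found."""
        for x in range(1, x_limit + 1):
            rhs = a * x * x + 1
            y = isqrt(rhs)
            if y * y == rhs:
                return x, y
        return None

    for a in (2, 3, 5, 8):
        print(a, smallest_solution(a))
    # prints: 2 (2, 3)   3 (1, 2)   5 (4, 9)   8 (1, 3)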
http://mathemajik.tripod.com/article/mathematics.html
13
28
We're on problem 36. It says: which of the following sentences is true about the graphs of y = 3(x - 5)² + 1 and y = 3(x + 5)² + 1? So let's just do something very similar to what we did in the past, and if you think about it, in both of these equations y is going to be one or greater. Let me just, you know, let me just analyze this a little bit, right. This term right here, let me, this term right here, since we're squaring, is always going to be positive, right, even if what's inside the parentheses becomes negative for, you know, some value of x. Inside the parentheses, this becomes negative, but when you square it, it always becomes positive, and you're going to multiply 3 times a positive number, so you're getting a positive number. So the lowest value of this could be zero. So the lowest value that y could be is actually one, and same thing here. The lowest value that, you know, this number can become very negative, but when you square it, it's going to become positive, so this expression with the square here is going to be positive, and when you multiply by 3 it's going to be positive. So the lowest value here is always going to be zero when you include this whole term. So similarly the lowest value y can be is one. I just want to think about it a little bit, just to give you a little bit of intuition, and let's think of this in the context of what we talked about and learned last time with the shifting.

So let me draw it in a color that you can see. So if that is the y-axis, and I'll just draw mainly the positive area. So let's see, so if I were to just draw y = x² + 1, it would look like this, where this is 1, well that's y = 1, and the graph will look something along this. y = x² + 1, oh, that's a horrible drawing. Normally I wouldn't redo it but that was just atrocious. y = x² + 1 looks something like that, it's symmetric, you know, you get the idea. You have seen this problem before. This is y = x² + 1. Now if we were to do (x - 5)² + 1, what happens to it? Well let me think, what is 3x² + 1? Well then it just increases a little bit faster. So if I were to say y = 3x² + 1, it might look something like this. It will just increase a little bit faster, 3 times as fast actually. So that would be 3x² + 1, right. The rate of increase in both directions just goes faster because you have this constant term three out there multiplying the numbers.

Okay, now what happens when you shift it? When you shift this, when you go to x - 5, so where x = 0 was the minimum point before, now if we substitute five here that will be our minimum point, right, because then that whole term becomes zero. So this vertex will now be shifted to the right. We'll do it in another color. So if this is the point 5, now this would be the graph. You just took this graph and you shifted it over to the right by five. I won't draw the whole thing. That graph right there would be 3(x - 5)² + 1, and remember the y shift is always intuitive: if you add one, you're shifting it up; if you subtract one, you're shifting it down. The x shift isn't. We subtracted 5, x - 5. We replaced x with x - 5 but we shifted to the right, and the intuition is there because now plus 5 makes this expression zero. So that's 3(x - 5)², and then by the same logic, 3(x + 5)², it's going to be in here, plus 1. So if we shift it, that's going to be shifted to the, let me pick a good color, to the left. So it's going to look something like this, it's going to be this blue graph shifted to the left. So this is minus five. So this is the graph right here of 3(x + 5)² + 1. Now hopefully you have an intuition, so let's read their statements and see which one makes sense. Which of the following is true?
There vertices are maximum, no that’s not true of any of these because the vertices is that point right there and there actually at some minimum point, right. A maximum point will look something like that and we know that because you just go positive. This term can only be positive; if this was a -3 then will would flip it over. I guess its not choice A. The graphs have the same shape with different vertices. Yeah, both of these graphs have the shape of 3x² but they have one vertices just 10 to the left of the other one. So I think B is our choice, so let’s read the other one. The graphs have different shapes with different vertices. No they have the same shape. They definitely have the same shape. I mean they both have this 3x² shape. One graph has a vertex that has a maximum while the other has graph, no that’s not right. They both are upward facing, so they both have minimum points, so its choice B. Next problem, problem 37, let me see what it says. What are the X intercepts? Let me copy and paste that, okay and I'll paste it there. What are the X intercepts of the graph of that? Well the X intercepts whatever this graph looks like. I don’t know exactly what it looks like. I mean I don’t know what this graphs looks like something like this. Actually I have no idea what it looks like until I solve it. It’s going to look like something like this. When they X intercepts they’re like where does it intersect the X axis? So that’s like there and there. I don’t know of those are the actual points, right and to do that. We set the function equal to zero because this is the point Y=0. You’re essentially saying, when does this function equal zero because that’s the X axis. When Y=0, so you set Y=0 and you get 0=12x²-5x-2. And whatever I have a coefficient larger than one in front of the x² term. I find that very hard to just eyeball and factor, so I use a quadratic equation. So -B, this is B. B is minus 5, so --5 is plus 5, right. -B plus or minus, the square root of B², -5² is 25-4×A which is 12×C which is ,minus 2. So let just make that ×+2 and put the plus out there, right minus times minus is a plus. All of that over 2A, all of that over 24, 2×A, so that is equal to, let me see, 5+ or minus the square root. Let see 25+4×12×2, right because that was a -2 when we have a minus there before, so 8×12=96, 96 all of that over 24. What’s 25+96=121, right? This is 121 which is 11², so this becomes 5+ or minus 11/24. So and remember these are the points where these are X values where that original function will equal zero. So it’s always important to remember what were even doing. So lets see, so if x=5+11/24 that is equal to 16/24 which is equal to 2/3 that’s one potential intercept. So you know maybe that’s right here, right that x=2/3 and y=0 and the other value is x=5-11/24 and that’s what -6/24 which is equal to -¼ which could be this point. I actually drew graph that far off of what it could be, so this would be x=-¼ and those are the X intercepts of that graph. So let see 2/3 and -¼ is choice C on the test. We have time for at least one more. Let see where, oh boy they drew us all these graphs, so which is the graph? Let me shrink it. I want to be able to fit all the graphs, so let me copy and paste their graphs. So this is one where the clipboard is definitely going to come in useful. Okay that’s good enough. Okay so they want to know which is the graph of, so let me--. I’ve never done something this graphical, let see. 
So the graph, they say, is y = -2(x - 1)² + 1, so that's what we have to find the graph of. So immediately when you look at it, you say, okay, this is like the same thing as y = -2x² + 1 but they shifted the x to the right. They shifted the x to the right by one. I know it says a -1 but think about it: when x = +1, this is equal to zero. So it's going to be shifted to the right by one, right, plus one. We know that, we know it's going to be shifted up by one, right, so up plus one, and then we have to think: is it going to be opening upwards or downwards? Think of it this way: if this was y = 2x² + 1, then this term would always be positive and it would just become more and more positive as you get further and further away from zero. So it would open up. But if you put a negative number there, if you say y = -2x² + 1, then you're going to open downward. You're just going to get more and more negative as you get away from your vertex, right. So we're shifted to the right by one, we're shifted up by one, and we're going to be opening downwards. So if we look at our choices, only these two are opening downwards, and both of them are shifted up by one. Their vertex is at y = 1, but this one is shifted one to the right and this one is shifted one to the left. And remember we said it was (x - 1)², so the vertex happens when this whole expression is equal to zero, and this whole expression is equal to zero when x - 1 is zero, when x = +1. So that's right here, so it's actually choice C. And that's probably, when you're shifting graphs, one of the kind of hardest things to ingrain, but I just really encourage you to explore graphs, practice with graphs with a graphing calculator, and really try to plot points, and try to get a really good feel for why, when you go from -2x² + 1 to -2(x - 1)² + 1, or when you're replacing x with an x - 1, this shifts the graph to the right by one.
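The two answers worked out above (the shifted vertices in problem 36 and the x-intercepts 2/3 and -1/4 in problem 37) can be double-checked numerically. This is only a sketch; the helper names are mine, not the narrator's.

    # Numerical check of the two problems worked through above.
    from math import sqrt

    def vertex(a, h, k):
        """Vertex of y = a*(x - h)**2 + k is simply (h, k)."""
        return (h, k)

    def x_intercepts(a, b, c):
        """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
        disc = sqrt(b * b - 4 * a * c)
        return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

    print(vertex(3, 5, 1), vertex(3, -5, 1))  # (5, 1) (-5, 1): same shape, different vertices
    print(x_intercepts(12, -5, -2))           # (0.666..., -0.25), i.e. x = 2/3 and x = -1/4
    print(-2 * (1 - 1) ** 2 + 1)              # y = -2(x-1)^2 + 1 at its vertex x = 1 gives 1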
http://on.aol.com/video/learn-about-algebra-ii--shifting-quadratic-graphs-99176124
13
47
Suppose that we are asked to find the area enclosed by a circle of given radius. A simple way to go about this is to draw such a circle on graph paper and count the number of small squares within it. Then area contained ≈ number of small squares within circle × area of a small square.

A circle drawn on graph paper - the area inside is approximately the number of small squares times the area of each small square.

If we doubled all the lengths involved then the new circle would have 4 times the area contained in the old circle.

We notice that if we doubled all the lengths involved then the new circle would have twice the radius and each of the small squares would have four times the area. Thus the area contained in the new circle ≈ number of small squares × area of a new small square = number of small squares × 4 × area of an old small square ≈ 4 × area contained in the old circle. By imagining what would happen if we used finer and finer graph paper, we conclude that doubling the radius of a circle increases the area by a factor of 4. The same argument shows that, if we multiply the radius of a circle by k, its area is multiplied by k². Thus the area contained is proportional to the square of the radius.

We can now play another trick. Consider our circle of radius r as a cake and divide it into a large number of equal slices. By reassembling the slices so that their pointed ends are alternately up and down we obtain something which looks like a rectangle of height r and width half the length of the circle. The area covered by the cake is unchanged by cutting it up and moving the pieces about, and the area of a rectangle is width × height. So the area contained in the circle = r × half the length of the circle.

Approximating Pi

One approximation goes back to the ancient Greeks who looked at the length of a regular polygon inscribed in a circle of unit radius. As we increase the number of sides of the polygon the length of the polygon will get closer and closer to the length of the circle, that is, to 2π. Can you compute the total length of an inscribed square? Of an inscribed equilateral triangle? Many of the ideas in this article go back to Archimedes, but developments in mathematical notation and computation enabled the great 16th century mathematician Vieta to put them in a more accessible form. (Among other things, Vieta broke codes for the King of France. The King of Spain, who believed his codes to be unbreakable, complained to the Pope that black magic was being employed against his country.)

We can approximate the circle with an n-sided polygon, in this case a hexagon with n = 6.

If you calculated the length of the perimeter for an inscribed square or triangle, does our general formula for an n-sided polygon agree for n = 3 and 4? If you try to use your calculator to calculate the first few approximations, you'll observe that the results aren't a very good approximation for π. There are two problems with our formula for π. The first is that we need to take n large to get a good approximation to π. The second is that we cheated when we used our calculator to evaluate it, since the calculator uses hidden mathematics substantially more complicated than occurs in our discussion.

Doubling sides

How can we calculate the approximation in practice? The answer is that we cannot with the tools presented here. However, if instead of trying to calculate it for all n, we concentrate on values of n that double each time (in other words we double the number of sides each time), we begin to see a way forward. Ideally, we would like to know how to calculate the perimeter of the polygon with 2n sides from the perimeter of the polygon with n sides.
We cannot quite do that, but we do know the formulae, from the standard trigonometric identities, that relate the two. We can make the pattern of the calculation clear by writing things algebraically.

Vieta's formula and beyond

Although Vieta lived long before computers, this approach is admirably suited to writing a short computer program. The definitions lead to simple rules for passing from one stage to the next. Nowadays we leave computation of square roots to electronic calculators, but, already in Vieta's time, there were methods for calculating square roots by hand which were little more complicated than long division. Vieta used his method to calculate π to 10 decimal places. The result is Vieta's formula for π as an infinite product of nested square roots. From an elementary point of view this formula is nonsense, but it is beautiful nonsense, and the theory of the calculus shows that, from a more advanced standpoint, we can make complete sense of such formulae.

The Rhind Papyrus, from around 1650 BC, is thought to contain the earliest approximation of pi, as 3.16. The accuracy of this approximation increases fairly steadily, as you will have seen if you used your calculator to compute the successive values. Roughly speaking the number of correct decimal places after the nth step is proportional to n. Other methods were developed using calculus, but it remained true for these methods that the number of correct decimal places after the nth step was proportional to n. In the 1970s the mathematical world was stunned by the discovery by Brent and Salamin of a method which roughly doubled the number of correct decimal places at each step. To show that it works requires hard work and first year university calculus. However, the method is simple enough to be given here: starting from two suitably chosen numbers, one repeatedly replaces them with their arithmetic and geometric means. Since then even faster methods have been discovered. Although π has now been calculated to a trillion (that is to say 1,000,000,000,000) places, the hunt for better ways to do the computation will continue.

About the author

Tom Körner is a lecturer in the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, and Director of Mathematical Studies at Trinity Hall, Cambridge. His popular mathematics book, The Pleasures of Counting, was reviewed in Issue 13 of Plus. He works in the field of classical analysis and has a particular interest in Fourier analysis. Much of his research deals with the construction of fairly intricate counter-examples to plausible conjectures.
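The two methods described in the article can be sketched in a few lines of code. The polygon step below uses the standard half-angle recurrence for the side of an inscribed polygon, and the Brent-Salamin step follows the usual published form of that algorithm; the variable names and exact formulas are my own rendering of these standard methods, not necessarily the notation used in the article.

    # Sketches of the two approaches to approximating pi described above.
    from math import sqrt

    def pi_by_polygon_doubling(doublings: int) -> float:
        """Half-perimeter of a regular polygon inscribed in a unit circle,
        starting from a square and doubling the number of sides each time."""
        n, s = 4, sqrt(2.0)                    # inscribed square: 4 sides of length sqrt(2)
        for _ in range(doublings):
            s = sqrt(2.0 - sqrt(4.0 - s * s))  # half-angle recurrence for the new side length
            n *= 2
        return n * s / 2.0

    def pi_by_brent_salamin(steps: int) -> float:
        """Arithmetic-geometric mean iteration; roughly doubles the correct digits per step."""
        a, b, t, p = 1.0, 1.0 / sqrt(2.0), 0.25, 1.0
        for _ in range(steps):
            a, b, t, p = (a + b) / 2.0, sqrt(a * b), t - p * ((a - b) / 2.0) ** 2, 2.0 * p
        return (a + b) ** 2 / (4.0 * t)

    print(pi_by_polygon_doubling(10))  # ~3.1415923, steady but slow convergence
    print(pi_by_brent_salamin(3))      # 3.14159265358979..., already at double precision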
http://plus.maths.org/content/os/issue43/features/korner/index
13
20
How Fractals are Generated, Part One

This section will show you the process, but not the difficult math, behind generating fractals. When you see a fractal image, you should think of the screen as a plane (a flat surface) made up of many, many points, or pixels. Each pixel has an x-coordinate and a y-coordinate, which determine its position on the screen. Since each pixel is in a different place, each pixel's coordinates are different from the rest. To generate the fractal, first we need a function. A function is a bunch of math that can be performed on any pair of coordinates, and it will give you two new coordinates. So to start, a pixel is selected. Then, a function is iterated on the point. This brings us to a new x-coordinate and a new y-coordinate. So we "move" that point to the new location specified by these coordinates. To iterate a function means to keep applying it over and over, so that's what we do. We take the new coordinates, and use our function again. This brings us to a new set of coordinates for that point. And then we use the function on those coordinates, and so on. As we do this, one of two things happens. The point may move around when we iterate the function, but never leave the screen. Or, it may stay on the screen for a while, and then leave, never to be seen again. Which of these occurs determines the different colors that you see in fractal pictures. If the point never leaves the screen, then we go back to the first coordinates for that point, and make the pixel there a certain color. The points that never leave the screen are all colored the same color. If eventually the point does leave the screen, however, then we count how many times we had to iterate our function to make it leave, and use that number to color the pixel at the original coordinates. For example, if it takes one iteration to make the point leave the screen, then maybe we color it blue. If it takes two, then maybe it's red. And so on. When you see smoothly shaded fractals, it simply means that one iteration makes a light blue, two makes a little darker blue, and so on. The following chart shows the movement of the point labeled 0 under the function that produces the Mandelbrot set. Each of the numbered points shows the new location after another iteration. So after one iteration of the function it has moved to position 1. After two, it has moved to position 2, and so on. As you can see, this point leaves the screen after 5 iterations, so it will be colored with color #5. In the case of the Mandelbrot set, the pixels inside the strangely shaped boundary are the ones that never leave the screen. The next image shows the iteration of a point inside this boundary, and as you can see, it keeps looping back to itself. This point never leaves the screen no matter how much you iterate it. So we color it with whichever color we are using for such points. Now we repeat this process with every pixel in the image. There may be 800,000 or more pixels in one picture, so that's why making fractals can take a long time! So here's an example. We start with a pixel whose x-coordinate is 4 and whose y-coordinate is 5. We will write this as (4,5). Say that when we use this function, the new coordinates are (3,9). That point is still on our screen, so we continue. When we iterate the function again, our coordinates are now (4,5). That's where we started! So now we know that if we iterate this again, it will go back to (3,9), then back to (4,5), and so on. This point will never leave the screen because of this.
So we go to the original coordinates, (4,5), and color it, say, black. From now on any point that doesn't leave the screen will be black. Now say we pick the next pixel, with coordinates (5,5). When we iterate this, maybe we get (5, 13). When we do it again, we get (324, 573457), which is not on our screen! So it took two iterations to make this point leave the screen. So we go back to (5,5), and color that point with color #2. We continue with the next pixel, and we do this until we have done every pixel on the screen! Then, step back, and look at what we have. A fractal! Which one? Well, that's determined by the function. If we use one function, we may get the Mandelbrot set, and if we use another, we may get a Julia set. There are an infinite number of functions, and therefore there are an infinite number of fractals! You can keep reading about how fractals are generated in Part Two.
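A minimal escape-time sketch of the process just described, using the function that produces the Mandelbrot set. The grid size, the iteration limit, and the test |z| > 2 for "leaving the screen" are the usual simplifications and are my own choices, not details taken from the article.

    # Escape-time colouring, as described above, for the function that
    # produces the Mandelbrot set.
    WIDTH, HEIGHT, MAX_ITER = 60, 30, 50

    def escape_count(cx: float, cy: float) -> int:
        """How many iterations before the point 'leaves the screen' (diverges)?
        Returns MAX_ITER if it never leaves within the limit."""
        x = y = 0.0
        for i in range(MAX_ITER):
            x, y = x * x - y * y + cx, 2 * x * y + cy  # one iteration of the function
            if x * x + y * y > 4.0:                    # the point has escaped
                return i + 1
        return MAX_ITER

    for row in range(HEIGHT):
        cy = -1.2 + 2.4 * row / (HEIGHT - 1)
        line = ""
        for col in range(WIDTH):
            cx = -2.2 + 3.2 * col / (WIDTH - 1)
            n = escape_count(cx, cy)
            # points that never leave get one colour; the rest are shaded by count
            line += "#" if n == MAX_ITER else " .:-=+*%@"[min(n // 6, 8)]
        print(line)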
http://library.thinkquest.org/3288/gnrte1.html
13
10
A spatial point is a concept used to define an exact location in space. It has no volume, area or length, making it a zero-dimensional object. Points are used in the basic language of geometry, physics, vector graphics (both 2D and 3D), and many other fields. In mathematics generally, particularly in topology, any form of space is considered as made up of an infinite amount of points as basic elements.

Points in Euclidean geometry

In Euclidean geometry, points are one of the fundamental objects. Originally defined by Euclid as "that which has no part," this essentially means that it has no length, width, depth or any higher dimensional measure of value. In two-dimensional space, a point is represented by an ordered pair (a1, a2) of numbers, where a1 conventionally represents its location on the x-axis, and a2 represents its location on the y-axis. For higher dimensions, a point is represented by an ordered collection of n elements, (a1, a2, ..., an), where n is the dimension of the space. Euclid both postulated and asserted many key ideas about points. His first postulate is that it was possible to draw a straight line from any point to any other point. This is confirmed in modern-day set theory in two dimensions, with higher-dimensional analogues existing for any given dimension. Euclid sometimes implicitly assumed facts that did not follow from the axioms (for example about the ordering of points on lines, and occasionally about the existence of points distinct from a finite list of points). Therefore the traditional axiomatization of point was not entirely complete and definitive. Observe that there are also approaches to geometry in which the points are not primitive notions. The notion of "region" is primitive and the points are defined by suitable "abstraction processes" from the regions (see Whitehead's point-free geometry).

Points in topology

In topology, a point is simply an element of the underlying set of a topological space. Similar usage holds for similar structures such as uniform spaces, metric spaces, and so on. The point, being often characterized as "infinitely small," is the geometrical representation of the inwards infinitude, a greater natural principle spread throughout every mathematical field, where any finite value, part of a greater infinite value, is itself formed by infinite finite values. Likewise, the point, though immeasurable, is the basic element of any measurable form. It is so for, even having no dimensions, neither height, width nor length, its association causes the existence of such. (Two zero-dimensional points can form a one-dimensional line; two lines can form a two-dimensional surface; two surfaces can form a three-dimensional object.) As it is, the point, in geometry, is the basic visual (imaginable) representation for the minimal structure of existence. Measurability of immeasurable elements' associations, or limited infinitude, is what makes it, for many people, in common language, so "abstract" and hard to understand (like trying to picture a point), but inwards infinitude appears, for instance, within every irrational number, such as pi, and complies with every rule of existence, matter or not, the point being one possible interpretation of what would be the basis of it.

- Affine space
- Arnone, Wendy. 2001. Geometry for Dummies. Hoboken, NJ: For Dummies (Wiley). ISBN 0764553240
- Euclid. 1956. The Thirteen Books of Euclid's Elements. V. 1, Introduction and Books I, II. New York: Dover Publications. ISBN 0486600882
- Hartshorne, Robin. 2002. Geometry: Euclid and Beyond. Undergraduate Texts in Mathematics. New York: Springer. ISBN 0387986502
http://www.newworldencyclopedia.org/entry/Point_(geometry)
13
14
Modulus, Poisson’s Ratio and Elongation

Modulus and elongation are two important properties of a material, which are determined by tensile tests. A stress-strain curve plays an important role in determining these material properties. The slope of the stress-strain curve, where the stress is proportional to strain, is called the Young’s Modulus or Modulus of Elasticity. It is a measure of the stiffness of the material. Since strain is unitless, modulus is measured in the same units as stress. Generally, the epoxy modulus is expressed in psi or MPa units. Epoxies that are very stiff and rigid have a higher modulus as compared to flexible ones. The modulus value also varies depending on whether the system is filled or unfilled.

Poisson’s ratio is another important physical property of a material. It is defined as the negative of the ratio of lateral to axial strain. When a material is deformed in one direction, deformation occurs in the other two directions as well; Poisson’s ratio quantifies this effect. Poisson’s ratio is unitless, since it is a ratio of strains. The Poisson’s ratio also varies depending on whether the system is filled or unfilled.

Elongation is the measure of ductility of a material. It is expressed as a percentage and it is the ratio of change in axial length to the original length of the specimen. The elongation percentage of a flexible epoxy or a silicone is much higher than that of a rigid epoxy.

Master Bond offers products with varying modulus, Poisson’s ratio and elongation. The modulus varies from very low values for flexible epoxies to very high (around 500,000 psi) for rigid and filled epoxies. Elongation varies from as low as a few percent for rigid epoxies to more than 150% for flexible products. Poisson’s ratio for Master Bond’s epoxies typically varies from 0.29 to 0.34, depending on whether the system is filled or unfilled.
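A small sketch of how the three quantities described above are computed from tensile-test data. All of the input numbers are hypothetical, chosen only to land in the ranges mentioned in the text; they are not Master Bond product data.

    # Young's modulus, Poisson's ratio and elongation from a hypothetical tensile test.
    stress_psi = 2500.0        # applied stress in the linear (elastic) region
    axial_strain = 0.005       # dimensionless axial strain at that stress
    lateral_strain = -0.0016   # dimensionless lateral strain (contraction)
    original_length_mm = 50.0
    length_at_break_mm = 65.0

    youngs_modulus_psi = stress_psi / axial_strain       # slope of the stress-strain curve
    poissons_ratio = -lateral_strain / axial_strain      # negative of lateral/axial strain
    elongation_pct = 100.0 * (length_at_break_mm - original_length_mm) / original_length_mm

    print(f"Young's modulus: {youngs_modulus_psi:,.0f} psi")  # 500,000 psi (a stiff, rigid epoxy)
    print(f"Poisson's ratio: {poissons_ratio:.2f}")           # 0.32 (within the 0.29-0.34 range)
    print(f"Elongation:      {elongation_pct:.0f}%")          # 30%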
http://www.masterbond.com/techtips/modulus-poisson%E2%80%99s-ratio-and-elongation
13
10
Not only is the Earth warming at the high end of predicted models, but now human-produced carbon dioxide emissions are accumulating in greater amounts in the upper reaches of the atmosphere, according to the results of a new study of data captured by a Canadian satellite. That’s the key finding of a team at the University of Waterloo in Canada and the U.S. Naval Research Laboratory’s Space Science Division, relayed in a new paper published Sunday online in the journal Nature Geoscience. The team analyzed eight years’ worth of atmospheric carbon dioxide (CO2) data collected by the Canadian Space Agency’s Atmospheric Chemistry Experiment (ACE), a satellite launched in 2003 that circles the Earth in a 74-degree orbit, taking spectra measurements and images of the atmosphere. What the scientists found from looking at the ACE’s data from 2004 through 2012 was troubling: Carbon dioxide levels in the upper atmosphere increased eight percent over the period, from 209 parts per million in 2004 to 225 parts per million in 2012. Check out the following graph from the U.S. Naval Research Laboratory (NRL):

As the NRL described in a news release on the findings on Sunday: “The scientists estimate that the concentration of carbon near 100 km [approximately 62 miles] altitude is increasing at a rate of 23.5 ± 6.3 parts per million (ppm) per decade, which is about 10 ppm/decade faster than predicted by upper atmospheric model simulations.”

At lower altitudes, carbon dioxide emissions make the Earth warmer by trapping heat. But at higher altitudes, the reverse is true: In the mesosphere (between 31 miles and 55 miles up) and the thermosphere (above 55 miles up), carbon dioxide’s density is thinner and it is less effective at trapping infrared radiation. In fact, CO2 at these altitudes is something of a heat sink, allowing infrared radiation to escape back out into space. But this isn’t a good thing. On the contrary, the thinning, cooling trend at this level due to increasing CO2 is likely to have detrimental effects on human spacefaring activity, something of a bitter irony given that a satellite was the reason we know about the increased CO2 levels in the first place. As the U.S. NRL explained: “The enhanced cooling produced by the increasing CO2 should result in a more contracted thermosphere, where many satellites, including the International Space Station, operate. The contraction of the thermosphere will reduce atmospheric drag on satellites and may have adverse consequences for the already unstable orbital debris environment, because it will slow the rate at which debris burn up in the atmosphere.”

In other words, rather than trapping heat, the increased CO2 levels in the upper atmosphere are likely to result in longer-lasting debris, and thus a greater proportion of debris over time as humans continue to launch objects into space. Already, NASA’s Orbital Debris Program, which tracks the overall amount of space junk around the planet, reports that there are at least 500,000 objects orbiting the Earth between 1 and 10 centimeters in size, and another 21,000 larger than 10 centimeters.
Other scientists have previously warned that Earth is collectively approaching a “tipping point” when it comes to space junk, where one piece of space junk colliding with another could set off a chain reaction of cascading collisions that would make it prohibitively risky to launch anything else into space, a phenomenon known as the “Kessler effect” or the “Kessler syndrome” after the scientist who first proposed it in 1978. Space junk has become such a looming problem that the U.S. NRL has concocted a plan to reduce some of it by shooting clouds of dust into space to increase the drag on debris and bring them plummeting back to Earth, to burn up in the atmosphere. That idea remains just a proposal, for now.
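A quick back-of-the-envelope check of the growth figures quoted earlier in the article (209 ppm in 2004 rising to 225 ppm in 2012); the arithmetic below is mine, not the study's.

    # Check of the quoted upper-atmosphere CO2 figures.
    ppm_2004, ppm_2012 = 209.0, 225.0
    years = 2012 - 2004

    percent_increase = 100.0 * (ppm_2012 - ppm_2004) / ppm_2004
    rate_per_decade = (ppm_2012 - ppm_2004) / years * 10.0

    print(f"Increase over the period: {percent_increase:.1f}%")   # ~7.7%, i.e. "eight percent"
    print(f"Rate of increase: {rate_per_decade:.0f} ppm/decade")  # ~20 ppm/decade, in the same
                                                                  # ballpark as the quoted
                                                                  # 23.5 +/- 6.3 ppm/decade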
http://idealab.talkingpointsmemo.com/2012/11/carbon-dioxide-emissions-reaching-upper-atmosphere-canadian-space-satellite-finds.php?ref=fpnewsfeed
13
18
This article describes the formula syntax and usage of the COUNTA function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Excel. The COUNTA function counts the number of cells that are not empty in a range (range: Two or more cells on a sheet. The cells in a range can be adjacent or nonadjacent.). COUNTA(value1, [value2], ...) The COUNTA function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.): - value1 Required. The first argument representing the values that you want to count. - value2, ... Optional. Additional arguments representing the values that you want to count, up to a maximum of 255 arguments. - The COUNTA function counts cells containing any type of information, including error values and empty text (""). For example, if the range contains a formula that returns an empty string, the COUNTA function counts that value. The COUNTA function does not count empty cells. - If you do not need to count logical values, text, or error values (in other words, if you want to count only cells that contain numbers), use the COUNT function. - If you want to count only cells that meet certain criteria, use the COUNTIF function or the COUNTIFS function. Use the embedded workbook shown here to work with examples of this function. You can inspect and change existing formulas, enter your own formulas, and read further information about how the function works. This example uses the COUNTA function to count the number of nonblank cells in a range. To work in-depth with this workbook, you can download it to your computer and open it in Excel. For more information, see the article Download an embedded workbook from SkyDrive and open it on your computer.
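Since this page describes behaviour rather than showing a worked example, here is a small Python analogue of the counting rule described above; it is only an illustration of the semantics (with None standing in for an empty cell), not Excel's actual implementation.

    # A Python analogue of the COUNTA behaviour described above: count values
    # that are not empty, where an empty cell is modelled here as None.
    # Note that an empty string "" (e.g. returned by a formula) still counts.
    def counta(*values) -> int:
        """Count arguments that are not empty (None); "" and error values count."""
        return sum(1 for v in values if v is not None)

    print(counta(39790, 19, 22.24, "sales", None, True))  # 5 (the empty cell is not counted)
    print(counta("", 0, None))                            # 2 ("" counts, None does not)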
http://office.microsoft.com/en-us/excel-help/counta-function-HP010342344.aspx?CTT=5&origin=HA010351132
13
78
The Delivery System

Our first objective is to describe why the orbits of Uranus and Neptune came to be located 1.8 and 2.8 billion miles from the Sun. These distances are 19.2 and 30.1 a.u. from the Sun. Some groundwork must be laid. Earlier, evidence was presented in the form of twin spins that both planets co-orbited in space somewhere beyond 900 a.u. from the Sun. How did planets such as Neptune and Uranus relocate from there to "here" in the visible, "inner" solar system? Our question is one of relocation, not one of creation. A delivery by implication involves either a one-time delivery, or more often, a delivery system. If a package is dropped on the doorstep with a "United Parcel" delivery label, that implies a delivery system. If such deliveries happen repeatedly, it implies a repeating route, and perhaps a periodic delivery schedule. Postal deliveries fall into this periodic pattern. So do deliveries in our cosmological theory.

A Comparison with Early 20th Century Cosmologies

In the 1910's, it occurred to James Jeans that the total of the planetary mass was only 0.14% of the Sun's mass. Yet these nine planets contained the bulk of the angular momentum in the Solar System. In fact, Jupiter alone has 60%. The four giant planets carry 98% of the angular momentum of all matter this side of Pluto. The gigantic Sun carries only 2%. The law of conservation of angular momentum would seem to suggest that if the Sun and its plasma were the genesis of the Solar System, then the Sun should retain most of the angular momentum that is observed. The Sun isn't even close to conforming. This is why Jeans and others began searching for an extra-solar source, a passing star perhaps. This principle is demonstrated by an ice skater, who spins with her arms extended. As her arms are withdrawn, she spins all the faster. But if that were an example of the Sun, the Sun is one of the most slowly spinning bodies of the Solar System. Its photosphere requires 26.8 days for a rotation at its equator. Even more strange, the Sun's photosphere displays even slower rotation rates in its higher latitudes. At 60° latitude, it spins once in 30.8 days and at 75° latitude, it spins in 31.8 days. Although the Nebular Hypothesis is still being taught as cosmological fact to many students, Press and Siever make a damaging assessment of it.

To address this defect, Moulton and Chamberlin proposed that the Sun was approached by a much larger star, to a proximity of 4 or 5 billion miles. That huge star allegedly pulled out from the Sun a filament of material, providing material eventually condensing into the planets, into planet satellites, into spin rates, etc. In the 1930's, Henry N. Russell recognized further defects. Instead, Russell postulated the Sun had been approached by a pair of stars, a binary system. Together they did the job whereas, he felt, Jeans' theory and Moulton's both failed. Note that early 20th century cosmologists continued to assume that the planets were formed from solar ejecta, billions of years ago. This was a major conceptual mistake. Our concept of a delivery system contains the idea that the planets were delivered to the Sun. They were delivered from a region more distant than 1,000 a.u. The planets of this Solar System never were part of a gaseous filament pulled out of the Sun (or any other star). Our concept is different and the details of our cosmology are far different from the "standard fare" of the 19th and 20th century.

DIFFERENCE # 1.
One difference is that their approaching star, or binary pair of stars, approached from interstellar space, from beyond the nearer stars. We offer that the Sun was approached from a region less than 5% of the distance to the nearest star, that is, from between 1,000 and 2,000 a.u., or possibly from 3,000 a.u.

DIFFERENCE # 2. A second difference is that our delivery system body was not, and is not, luminous. Their concept was one or two co-rotating luminous stars, at least one of which was larger than the Sun. Had such a luminous star been in the neighborhood four billion years ago, it could still be seen and its path charted. No trace of such can be seen in the Milky Way.

DIFFERENCE # 3. A third difference is that our delivery system body is only 3% to 4% as massive as the Sun, + or - 1%. The view of Jeans, Russell and Lyttleton was that the approaching star or pair of stars was much heftier than the Sun.

DIFFERENCE # 4. A fourth difference is that in our delivery system, the delivering body came much closer to the Sun. They suggest such a theoretical approach was several billion miles from the Sun, and considerably beyond Neptune's orbit. Neptune is three billion miles distant. Evidence exists that the intruder approached as close as 15,000,000 miles from the Sun, more than twice as close as Mercury. This evidence will be cited and presented, along with its ramifications, in chapters 7 through 10.

DIFFERENCE # 5. A fifth difference is conceptual. Traditional 18th, 19th and 20th century cosmologies have the Sun as the mother of the planets in a natal sense. In a natal sense, the planets are like afterbirth material which, for some reason, the Sun expelled. In contrast, we offer that the Sun is the mother of the nine planets only in an adoptive sense, not a natal sense. Some writers, including science fiction authors, have speculated on Solar System disturbances coming in from beyond the realm of visibility, beyond the orbit of Pluto. Those writers have given their fictitious intruders such names as "Planet X" and "the Nemesis Star", etc. We choose the name "Little Brother" since it evidently was 3% to 4% as massive as the Sun, and it penetrated deeply into the hot, inner region.

DIFFERENCE # 6. A sixth difference, if our hypothesis is correct, is that Little Brother continues to orbit the Sun. And on its own schedule, whatever that is, it will return in due time. When it returns, it will realign any planet that gets in its path. And when it returns, it could bring in a new package of planets and drop them off in the Inner Solar System. Evidence, not science fiction writers, indicates Little Brother exists. We choose this name because the Sun is "Big Brother." Its "nickname" is "L. B." This nickname has nothing to do with any prominent politician from Texas.

If the Sun stripped the planets away from Little Brother, and if Little Brother delivered them, then that capture must have followed certain mathematical constraints. One constraint is that the planets Mercury through Neptune were all dropped off on the same plane, the orbit plane of "L. B." A second constraint is the "Radius of Action" principle, the zone of control of Little Brother. As "L. B." approached the Sun, this zone inexorably kept shrinking; and as it returned to its aphelion, perhaps 1,000 to 3,000 a.u. distant, that zone expanded again. This is 5 to 15 light days distant. How expansive would be Little Brother's "zone of control" out there where the Sun's attraction is so faint?
How extensive is that "zone of control" which allowed, at three billion miles from the Sun, for Neptune and Uranus to begin to get away? The Sun's mass is 332,000 times as massive as the Earth, and 1,050 times as massive as Jupiter. Thus, Little Brother, if our analysis is correct, is about 30 to 40 times as massive as the giant Jupiter.

Control versus capture in our Solar System follows a principle which Gerard Kuiper called the "radius of action". As geographers and engineers, we prefer to call it the "zone of control". It is the same thing. For instance, in our present orbit, the Earth's zone of control, its radius of action, extends out to 750,000 miles. At this distance, theoretically, the Earth would automatically lose any satellite forever to the Sun. At this distance, a satellite merely exchanges the prime focus of its orbit for the Sun.

For math buffs, Kuiper's equation is an approximation, not a rule, not a mathematical law. It is an approximation that merits some elaboration and qualifications. The approximation of a zone of control anywhere in this solar system is

R_A ≈ a × μ^(2/5)

In this equation, R_A is the radius of action. "μ" is the mass of the planet (Little Brother) divided by the total of the mass of the Sun and Little Brother. "a" is the distance between the two bodies, expressed in astronomical units; one astronomical unit is 93,000,000 miles.F2

The Capture of the Neptune-Uranus System

Neptune and Uranus had to co-orbit in two long, narrow, highly eccentric orbits. Both revolved around a barycenter, a point that is the common center of mass. Most of the time, Neptune and Uranus were co-orbiting with a considerable distance in between. But with highly eccentric elliptical orbits, fast flybys and sharp spasms of catastrophism occurred every few years. As was discussed earlier, each flyby increased the spin rates of each planet, and the increase was in a reciprocal manner.

We suggest that when these two planets were co-orbiting, their barycenter orbited Little Brother at a distance of about 600,000,000 miles. At about 2,500,000,000 miles (or 27 a.u.) from the Sun, the Sun stripped this binary away from "L. B." It captured them and dispersed, or separated, them. Uranus was sent nearer, Neptune farther. In this cosmology, Little Brother performed the job of a delivery service. The Sun proceeded to separate Neptune from Uranus and redirect them into new, virgin, capture orbits. Uranus ended up 1.8 billion miles from the Sun and Neptune 2.8 billion miles. Their twin spins are clues of their former co-orbiting relationship, when they were much, much deeper in dark, frigid, remote, debris-strewn space.

When were the planets Neptune and Uranus dropped off at their present location? That is the $64,000 question for which we do not have the answer. However, there is no evidence for such an event being billions of years ago. There is evidence friendly to the thought that they were delivered less than 100,000 years ago. This evidence involves the capture of other planets, satellites and icy ring systems. We are tempted to get ahead of our story.

The story of the Sun's capture of the Neptune system and the Uranian system is story 6 in our new skyscraper cosmology. It involves planetary catastrophism deep in space, before capture by the Sun. We cannot agree that these planets are so far out because of "chance" or "coincidence". They are so remote from the Sun because they were co-orbiting at a similarly remote distance from Little Brother. That "similarly remote" distance from L. B. was some 600,000,000 miles, compared to their present remoteness of 1.8 and 2.8 billion miles.
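For readers who want to check the arithmetic, the short Python sketch below evaluates the radius-of-action approximation and the mass bookkeeping quoted above. The μ^(2/5) form is the standard Laplace/Kuiper sphere-of-influence expression and is our reading of the equation referenced in the text; the masses and distances are the round figures used in this chapter, so treat the output as an order-of-magnitude check rather than a precise result.

```python
# Sketch: radius-of-action and mass bookkeeping for the figures quoted above.
# Assumes R_A ~ a * mu**(2/5), the usual sphere-of-influence approximation.

AU_MILES = 93_000_000          # one astronomical unit, as used in the text
SUN_IN_EARTHS = 332_000        # Sun's mass in Earth masses (text figure)
SUN_IN_JUPITERS = 1_050        # Sun's mass in Jupiter masses (text figure)

def radius_of_action(sep_au: float, mass_ratio: float) -> float:
    """Radius of action in miles for a body whose mass is `mass_ratio`
    times the Sun's, orbiting at `sep_au` astronomical units."""
    mu = mass_ratio / (1.0 + mass_ratio)
    return sep_au * AU_MILES * mu ** 0.4

# Earth at 1 a.u.: the formula gives roughly 575,000 miles, the same order
# of magnitude as the 750,000-mile zone of control cited in the text.
print(round(radius_of_action(1.0, 1 / SUN_IN_EARTHS)))

# Little Brother at 3% to 4% of the Sun's mass:
for frac in (0.03, 0.04):
    print(f"{frac:.0%} of Sun = {frac * SUN_IN_JUPITERS:.0f} Jupiters "
          f"= {frac * SUN_IN_EARTHS:.0f} Earths")
# -> about 32 to 42 Jupiter masses, i.e. roughly 10,000 to 13,000 Earth masses,
#    in line with the 30-to-40-Jupiter figure used in the chapter.

# Its zone of control at 27 a.u., where the text has Neptune-Uranus stripped away:
print(round(radius_of_action(27.0, 0.035)))   # several hundred million miles
```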
The Capture of the Jupiter-Saturn System

In a like manner, Little Brother once "owned" the Jupiter-Saturn binary, a co-orbiting pair whose barycenter was perhaps 200,000,000 miles from "L. B." That is about as far as the asteroid belt is today from the Sun. When Little Brother approached the Sun to a distance of some 600,000,000 to 700,000,000 miles (6.5 to 7.5 a.u.), the gravitational competition increased. It increased to the threshold where "L. B." was no longer able to retain this second co-orbiting binary either. First, Saturn was stripped from Jupiter, and then from L. B. also. Following that loss, Jupiter was next, stripped away from L. B. as it inexorably kept approaching the Sun. As it was with Saturn, so also with Jupiter; each planet was wrenched from L. B. together with its satellite system.

As Jupiter and Saturn were lost by the Little Brother system, that incoming system was stripped of about 0.3% of its mass; it also lost a similar amount of energy. Its orbit shifted just a wiggle. The L. B. system, now separated, also lost a little angular momentum. That angular momentum in the bodies of Jupiter and Saturn was relocated inward from one to three thousand a.u. all the way down to five to ten a.u.

The two captures by the Sun, Neptune-Uranus and Jupiter-Saturn, did not necessarily occur during the same incoming flyby of Little Brother. But it is a distinct possibility. There is evidence that Little Brother has made either one or just a very few such "delivery trips" to the doorstep of the Sun. That evidence will be presented in a few chapters. The evidence of a paucity of trips by L. B. around the Sun is critical to our understanding as to how, and how recently, this Solar System was organized. Thus we will present evidence suggesting that the Solar System is recent, less than a million years, less than a hundred thousand years perhaps. From the gradualist dogma of four billion years, this could be a reduction in time requirements of 99.975%.

The question becomes, "Did Little Brother acquire the Jupiter-Saturn binary during the same orbit into deep space that it acquired the Neptune-Uranus binary?" If the answer were "yes", it follows that the Sun acquired all four planet systems within the same score of years. If the answer were "no", it follows that they were captured in different eras. Which is more probable? At this point in time, we do not know. What we are sure of is that Jupiter and Saturn once co-orbited as a binary pair in remote space. Their twin spins are a solid pair of clues. Coupled with the spins of Uranus and Neptune, we now gather two pairs of solid clues. The Earth's acquisition of the Moon, likely in remote space, is a fifth clue that our "capture cosmology" is the best approach.

The seventh story of our catastrophic cosmology is the acquisition of their modern orbits by Jupiter and Saturn. It will be noticed that Uranus and Neptune were closely related in remote space. Today, Uranus and Neptune are still fairly closely related. They are the seventh and eighth closest planets. They still are next to each other, though not as closely as when under the dominion of Little Brother. Jupiter and Saturn are our fifth and sixth closest planets to the Sun. Although they were separated by the Sun, they also, as Inner Solar System planets, still remain next to each other, although they are not as close as when under Little Brother's dominion.
Their place in the solar system, next to each other, fifth and sixth, is no accident, no coincidence, not a result of chance. The neighboring relationships of both Neptune-Uranus and Saturn-Jupiter are vestiges of the former age when they co-orbited and created reciprocal spins.

A Dating Clue: The Icy Rings of Saturn

The ice in the icy rings of Saturn does effervesce away constantly into space. Various estimates have been made of the rate of thinning of these icy rings. So far as we are aware, there is no consensus. It could make an exciting study to examine the celestial brilliance of the rings of Saturn as they were on early photographic plates over 100 years ago. Comparing them to the present reflectivity, one could then estimate the minute rate of effervescence, the thinning of the ring system. Estimates have been made as to how long into the future Saturn's rings will last. Those estimates of the remaining life span of the icy rings range from 10,000 to 100,000 years.

The icy rings had a genesis when an icy satellite, an ice ball, was rerouted too close to Saturn. "Too close" is 2.5 radii, as defined by Roche's Limit. In 1850, Edouard Roche studied the tidal effects of two planets, or a planet and a moon, that theoretically were on a collision course. He found that, due to the sudden internal tidal stresses that would be generated, the smaller of two planets on a collision course would fragment before collision. He calculated the distance of fragmentation at 2.44 radii. He assumed two bodies of equal density, and with circular orbits. Saturn has a radius of almost 36,000 miles. Thus, for an ice ball, and given Saturn's low density, its "Roche Limit" is about 85,000 miles. This distance is also the outer boundary of the icy ring system, a confirmation of the case for an icy fragmentation. Most likely, this ice ball was redirected during a close Saturn-Jupiter flyby in the former age when they co-orbited Little Brother. The icy rings would begin to effervesce when Saturn was delivered to the Inner Solar System.

We don't know how much ice originally orbited Saturn; we do know the masses of its inner ice balls. They may be similar in size to the fragmented ice ball. In this way, the icy rings of Saturn suggest some degree of recentness for Saturn's delivery to its present orbit. After more study, if the estimate of 100,000 years for the present longevity of Saturn's rings holds up, then it points to a Solar System that is "shockingly youthful" (to gradualists).

For Saturn's icy moons, it was chaos to experience a close flyby of Jupiter in the previous era. Mimas is Saturn's innermost surviving satellite, at 115,000 miles. It is just beyond the Saturnian Roche Limit. As was mentioned earlier, Mimas is an icy satellite pocked with craters and pitlets. Mimas has one crater whose diameter is one-third of Mimas' own diameter. Perhaps some of its craters came from hits by icy debris from Saturn's icy fragmentation. The density of little Mimas is 1.2 compared to water, at a density of 1.0. No one knows, or even wonders, how much ice formerly was in the ring system of Saturn. Little Mimas has an orbit radius of some 115,000 miles. It has a physical radius of about 121 miles. Largely composed of ice, its volume exceeds 7,000,000 cubic miles. No one knows whether the ice ball that did fragment was of a similar size, but it is a reasonable conjecture. Mimas might be an indication of how much ice originally might have been in Saturn's icy fragmentation.
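The two figures doing the work in this passage, the roughly 85,000-mile Roche distance and Mimas' 7,000,000-plus cubic miles of ice, are easy to reproduce. The sketch below does so in Python; the density-ratio correction, Saturn's mean density of about 0.69 and an ice density of about 0.93 (relative to water) are our assumed inputs, not numbers taken from the text.

```python
# Sketch: checking the Roche-limit distance and Mimas' ice volume quoted above.
# The 2.44-radii coefficient, the 36,000-mile Saturn radius, and the 121-mile
# Mimas radius come from the text; the densities are assumed values.
import math

SATURN_RADIUS_MILES = 36_000
MIMAS_RADIUS_MILES = 121

# Roche's classical result for two bodies of equal density:
equal_density_limit = 2.44 * SATURN_RADIUS_MILES
print(round(equal_density_limit))          # about 88,000 miles

# With the usual density correction d = 2.44 * R * (rho_planet / rho_moon)**(1/3):
rho_saturn, rho_ice = 0.69, 0.93
corrected_limit = 2.44 * SATURN_RADIUS_MILES * (rho_saturn / rho_ice) ** (1 / 3)
print(round(corrected_limit))              # about 79,500 miles
# The text's working figure of ~85,000 miles sits between these two values,
# and the outer edge of the ring system lies in the same general range.

# Volume of Mimas, treated as a sphere of radius 121 miles:
mimas_volume = 4 / 3 * math.pi * MIMAS_RADIUS_MILES ** 3
print(f"{mimas_volume:,.0f} cubic miles")  # about 7.4 million cubic miles
```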
A fraction of that amount of ice settled into Saturn's resplendent icy rings. As was the case with the capture of Neptune and Uranus, the dating of L. B.'s delivery of Saturn and Jupiter to the doorstep of the Sun is a $64,000 question. The icy rings of Saturn, and their rate of effervescence, are an indication of recentness as gradualist astronomers assess time past. Saturn's rings point to our first theme, planetary catastrophism. These icy rings also point toward our second theme, a shockingly young solar system. And, Dr. Watson, the plot is about to thicken. The clue of Saturn's rings, and their rate of effervescence, is story 8 of the new, catastrophic cosmology. That leaves some 62 stories yet to be erected.

The Capture of the Four Inner Planets: The Most Recent of the Snatches

We have modeled that the Uranus and Neptune pair formerly co-orbited L. B., as did the Saturn-Jupiter pair. The model of capture of the inner planets by the Sun also works very well if we model Venus and the Earth formerly co-orbiting in orbits of low eccentricity. Venus has a mass 81.5% that of the Earth. It has a density of 5.24 compared to 5.52 for the Earth. Venus has a polar diameter of 7,517 miles compared to the Earth's 7,900 miles (polar). Physically, Venus is the Earth's twin. Mars, Mercury and the Moon, on the other hand, have masses, with reference to the Earth, of only .107, .055 and .012 respectively.

The Original Quintet in the Pre-Capture Era

In size, the Venus and Earth pair are virtually twin planets. However, there are no twin spins, which means no close flybys in the previous age for Venus and the Earth. On the other hand, Mars has a twin spin with the Earth, indicating a third case of repeated planetary catastrophism in the former era. The model also works wonderfully well if we assume that, in orbiting L. B., Venus co-orbited with the Earth and, like the Moon, Venus was spinless - it showed the same face constantly to the Earth. In addition, our model functions best if Venus co-orbited the Earth in the clockwise direction (as viewed from Polaris). This is the same direction it slowly rotates today, backward. All of the nine planets today orbit the Sun counter-clockwise. And eight of the nine rotate in the counter-clockwise mode, all except Venus. Today, although Venus hardly rotates at all, what little spin it does have is backward.

The model of delivery by Little Brother and capture by the Sun included a package of five small bodies - Earth, Venus, Mars, Mercury and the Moon. The model works best if the conditions of this little group of five were the following:

Subset A. Originally in deep space, the Moon orbited the Earth in a roundish orbit at a distance of roughly 250,000 miles. This is similar to today. In so doing, the Moon rotated so as always to show the same face to the Earth. It still does. From the Earth's viewpoint, the Moon does not rotate. But from the Sun's viewpoint, the Moon rotates once in 29+ days, with one side always facing the Earth. However, it has no independent spin of its own.

Subset B. Second, in deep space, in the former age, Mercury orbited Venus, also in a roundish orbit, at a distance of some 300,000 miles. It also behaved like the Moon; it showed one face and one face only to Venus. It, too, lacked any independent spin.

Subset C. Third, in deep space, Mars orbited the Earth on a slightly different plane than the Moon. The orbit of Mars must have been long and narrow, i.e. highly eccentric.
This is evident because the two developed reciprocal twin spins, just like Neptune-Uranus and Jupiter-Saturn. Twin spins developed from those close flybys long before the two planets were delivered to the doorstep of the Sun. We model Mars in deep space in the previous age coming within 30,000 miles of the Earth but retreating to a distance of several million miles.

Subset D. In the previous era, perhaps 1,000 a.u. from the Sun, Venus and the Earth co-orbited at a distance of perhaps 950,000 miles to 1,000,000 miles from each other. Venus' slow, backward rotation today corresponds to a slow, circular, backward revolving around the Earth in the previous age. The direction of co-orbiting was clockwise for Venus. Thus, Venus orbited the Earth in the opposite direction that Little Brother orbited the Sun. This we call "retrograde" (uncommon) or "clockwise," as it is viewed from Polaris, the North Star.

Thus, in deep space, the Earth had a co-orbiting partner (Venus) plus two satellites, Mars and the Moon. Its partner, Venus, also had a non-rotating satellite some 300,000 miles distant, Mercury. This was a sticky little quintet. This quintet also was relatively close in to Little Brother (compared to Jupiter-Saturn and Neptune-Uranus). Hence, when stripping time came, if all the planet-stripping was done in one flyby, the quintet was the last system to be stripped off "L. B." and dismembered by the Sun, the Moon excepted. Hence these five comprise what some consider to be the "inner solar system" of today. All orbit within 160,000,000 miles of the Sun, compared to Jupiter's 480,000,000 miles.

In this last capture process, for the sticky quintet, first the Sun separated Mars from the Earth. Shortly, perhaps within days, the Earth was divorced from its co-orbiting partner, spinless Venus. Within a couple of weeks more, as "L. B." inexorably approached the Sun, little spinless Mercury was stripped from Venus. Venus was deposited on the brink of Hell's Kitchen, while Mercury, as it was separated, was sent into an orbit inside Hell's Kitchen itself, where temperatures rise to 700° and 800° F. Only the Earth-Moon system had survived the process of dismemberment and realignment around the Sun. This process of capture can be modeled. Figure 3 illustrates the last, and the nearest to the Sun, of the three packages of celestial captives.

The Delivery Orbit for Mars

Story 9 is about vestiges and the geographical relationships of the planets today. Mars was delivered to the inner Solar System. Evidently, it was delivered with a long, narrow, highly eccentric orbit, and it maintained that orbital trait into its second and even its third age. The "First Orbit of Mars" was when it orbited the Earth in the remote region, 1,000 a.u. or more from the Sun. The "Second Orbit of Mars" ended when Mars met Astra in space, some 230,000,000 miles from the Sun. Astra fragmented. Mars gained a little mass, and some angular momentum. But it lost some energy in the crisis. But we are getting ahead of ourselves. "The Scars of Mars" is the title for Volume II, where the details of the Second Orbit, the Third Orbit, and the Fourth Orbit of Mars are analyzed, along with the reasons for the shifts.

To summarize, somewhere between 150,000,000 and 200,000,000 miles from the Sun, both Little Brother and the Earth lost Mars. About 92,000,000 miles from the Sun, Little Brother lost the Earth, and shortly after, at some 67,000,000 miles, Venus (already stripped from the Earth) was also lost by "L. B."
Finally, some 35,000,000 miles from the Sun, Mercury, already stripped from Venus, was also lost by Little Brother. Little Brother was picked clean of its satellite systems. Figure 3 illustrates.

Earlier, it was noted that Neptune and Uranus once co-orbited, and they are still in the same neighborhood in the Solar System, still next to each other. This is a vestige of the ancient age. Next it was noted that Saturn and Jupiter once co-orbited, and they also are still next to each other, a second vestige of the primordial age. Now, we see that Mars and the Earth once co-orbited in the remote frigid region. And when the Sun stripped them, they continued to be next to each other. This is a third vestige. In addition, Venus and the Earth co-orbited, and they are still next to each other, a fourth vestige. Finally, little Mercury was a satellite of Venus, and after it was stripped from Venus, it also settled down into an orbit next to Venus. Such is our fifth vestige. All five of these vestiges are part of geographical catastrophism, the geography of the cosmos.

This series of separations from L. B., and repositionings of the various planets, may seem complicated. It isn't. There are only three bunches of planets that were separated from L. B. and/or delivered to the Sun. First was Uranus-Neptune with satellites, second was Jupiter-Saturn with satellites, and third was Venus-Earth, of which two of the three satellites were stripped off, Mars and Mercury. Because of their greater distance from the Sun, Neptune, Uranus, Saturn and Jupiter each retained their satellite systems. But because Venus and the Earth were separated so close to the Sun (within 100,000,000 miles), two of their three satellites, Mercury and Mars, were stripped off. Today we call them planets, tiny ones to be sure.

In science, there is a maxim that is almost always valid. When science is faced with two explanations for a phenomenon, a simple answer and a convoluted one, the simple answer is almost always the correct one. It is known as "Occam's Razor." In the 1300's, William of Occam (Ockham) wrote, "Entia non sunt multiplicanda praeter necessitatem." Loosely translated, it says that complications ought not to be multiplied except out of necessity. In his century, Occam was scientifically quite correct, although he was politically incorrect (and he paid the price of that age). Our relatively simple, straightforward theory of the capture of three clusters of planets needs to be compared with, and contrasted to, the many convolutions, and revision after revision, of the nebular hypothesis. The nebular hypothesis, still a favorite of gradualists some 200 years later, tries to affirm that all planetary components were extruded from the Sun. More on this convoluted, "popular" (frequently taught) approach is reserved for Chapter 10.

The Placement of the Dismembered Venus-Earth Binary

Earlier, it was noted that Uranus and Neptune were separated from each other, but in that separation they still remain somewhat close by each other. The same can be said for the Jupiter-Saturn binary; they too are still somewhat close by each other. Now, once again, we note that even though Venus and the Earth were separated from each other, they still remain fairly close to each other, side by side in the order of the planets. These are vestiges of delivery and capture; they are not three coincidences. The Earth formed a 360-day orbit, its "second orbit." Its first orbit was around Little Brother.
This second orbit around the Sun was some 92,250,000 miles from the Sun, almost 1% closer than the present arrangement. The "second orbit" of Mars in our model has Mars in a new, capture orbit where it may have come in to a region some 64,000,000 miles from the Sun yet returned out to approximately 230,000,000 miles. Today this region, 230,000,000 miles from the Sun, is known as the heart of the asteroid belt. As was mentioned earlier, the "Second Orbit of Mars" will be discussed at some length in the next volume. Then, and in that context, occurred its sudden conversion, or deterioration, into the infamous "Third Orbit of Mars." In the process Mars acquired an interesting display of scars on its surface.

Thus, the Sun's radius of action broke up both the ancient Earth-Venus and the Earth-Mars relationships. It also broke up the Venus-Mercury relationship. But it did not break up the Earth-Moon relationship, merely because the Moon originally was so close to the Earth. The Moon never ventured out anywhere near 750,000 miles from the Earth, where it, too, could have been picked off. Figure 3 illustrates this original quintet as the group orbited L. B. Mars, the Earth-Moon system, Venus, Mercury: such was the order of separation and delivery to the Sun, or, put another way, it was the "order of capture" by the Sun. It was something like an adoption agency sending five siblings, all from the same family, off in four different directions, allowing only the smallest of the five siblings to stay with the largest. Mars was separated from the Earth, yet it maintained its ancient long, narrow orbit. Gravities attract, and Mars continued to cross the Earth's orbit. In a sense, Mars continued to "search for" and to seek the Earth, its former major focus. But with little success, at first.

The Backward Slow Rotation of Venus

The slow, backward rotation of Venus has been a mammoth-like conundrum, probably the greatest conundrum of all for gradualists during the 20th century. Venus, deep in the Inner Solar System, together with Mercury, is right there where accretion from the Sun's ejecta was supposed to have condensed to the maximum, but instead somehow it has functioned to the minimum. This failure can no longer be swept under the rug. Somehow, some way, Venus, the morning star, rotates backward. And ever so slowly. Its backward (retrograde) rotation rate is once in 243.01 days - once in 5,832 hours. Its equatorial rotation measures only 4.05 mph, walking speed. The Earth rotates 1,037.6 mph in the other direction, prograde, counter-clockwise.

This conundrum is easily solved. All that is needed is a well-thought-out model of capture and delivery. The key is the pre-capture era. Venus in the pre-capture era co-orbited with the Earth, at a distance of almost 1,000,000 miles. Both moved from the outer solar region to the inner region by orbiting, or revolving, around L. B. clockwise - backward, or retrograde. In addition, Venus did not rotate, but, like the Moon, its ancient "face" "looked at" the Earth constantly. The model works best if the two planets co-orbited around "L. B." in the clockwise mode, opposite to the mode in which "L. B." orbited the Sun. Given this model, Venus would be picked off by the Sun, separating it first from the Earth, and second from "L. B." Venus was sent into an orbit with an average radius of some 68,000,000 miles. With its retrograde direction of orbiting, at the moment of separation and capture, Venus kept facing the Earth.
After separation, even today, Venus still in fact looks back to the Earth. (Gradualists, please note). Venus was like a lover being separated from her husband during World War II. He, the soldier, boarded the troop train, or the boat, leaving forever. She, the wife left behind on the dock, slowly threw BACKWARD a final kiss to her departing beloved. This kind of thing happened many times to GIs and their brides in the early 1940's. And many soldiers in fact never did return.

The liberty (and ability) to think in terms of planetary catastrophism frees us, as cosmologists, from the straitjacket and the jail cell of gradualism. This is something like Copernicus and Kepler being freed from the tragedy of geocentricity. Copernicus and Kepler went on to provide the first two birth pangs of something entirely new to the history of man. It was the discovery of a system of natural law, which we now call "science". The tenth story in our cosmological construction is the acquisition of backward rotation by Venus. If gradualists choose to refute planetary catastrophism, this is where they should begin. This is certainly one of their biggest dilemmas, and we know a secret. It, the backward, slow rotation of Venus, will continue to be their foremost dilemma until they chuck gradualism.

The Prograde Slow Rotation of Mercury

Story 11 of our skyscraper is concerned with the ever-so-slow rotation of Mercury, prograde. Mercury rotates once in 58.65 days. At Mercury's equator, it rotates 6.7 mph. It compares to Venus, which rotates at 4.1 mph at its equator. Both have such slow rotations because they were non-rotating satellites in the primordial age, when they revolved around "L. B." Mercury was dropped off into Hell's Kitchen because it was the last of the satellites to be stripped. This means that Little Brother approached at least as close as 28,000,000 miles to the Sun, because such is Mercury's distance today. Mercury's orbit period is 87.97 days. For reasons presently unknown, Mercury's rotation and its orbit period are in 3:2 resonance. Recently, it was determined that Mercury is not a liquid planet with a crust like Venus, the Earth and Mars. It is a solid planet. This is an indication that Mercury's center was very cold when it was delivered, and it hasn't warmed up a great deal since the time of delivery.

How the Earth-Moon System Acquired Its Ancient 360-Day Orbit

The twelfth story of our celestial skyscraper of cosmology concerns how the Earth-Moon system acquired its ancient 360-day orbit, some 92,250,000 miles from the Sun. This location for the Earth is in the middle of a 15,000,000-mile slot in the Solar System. In this narrow slot, and only in this slot, water neither boils constantly (as on Venus) nor freezes permanently (as on Mars). This "slot" happens to be the one and only favorable location in the solar system where chloroplasts and chlorophyll can function. And where a planet can be greened. But we are getting ahead of our story. Whether by chance or design, the Earth was dropped into that marvelous, advantageous slot. It was 92,250,000 miles from the Sun, just 25,000,000 miles from Venus, where surface temperatures rise to 700° F. The Earth was dropped off into "the slot" due to its previous distance from Little Brother and due to the geometry (and geography) of capture by the Sun. Our age, in part framed by gradualist dogma, is the age of the vanity of humanity. Our good fortune for our planet's location "in the slot" is not widely appreciated.
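The rotation figures quoted in the last two sections are simple to recompute. The sketch below checks the equatorial speeds of Venus and Mercury and the 3:2 spin-orbit ratio; the equatorial diameters (about 7,521 miles for Venus and 3,032 miles for Mercury) are our assumed inputs rather than figures from the text.

```python
# Sketch: recomputing the slow equatorial rotation speeds and Mercury's 3:2
# spin-orbit resonance. Diameters are assumed values, not taken from the text.
import math

def equatorial_speed_mph(diameter_miles: float, rotation_days: float) -> float:
    """Equatorial circumference divided by the rotation period, in mph."""
    return math.pi * diameter_miles / (rotation_days * 24.0)

print(round(equatorial_speed_mph(7_521, 243.01), 2))  # Venus   -> about 4.05 mph
print(round(equatorial_speed_mph(3_032, 58.65), 2))   # Mercury -> about 6.8 mph, near the text's 6.7

# Mercury's spin-orbit resonance: 87.97-day orbit vs. 58.65-day rotation.
print(round(87.97 / 58.65, 3))   # -> 1.5, i.e. the 3:2 ratio noted above
```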
Compared to vanity, humility is better, and an age of humility would be best. Job learned this, before it was too late, long ago. Job, viewing the grandeur of creation in a new light, was utterly speechless.

The sixth story in our skyscraper is how the orbits of Neptune and Uranus came to be, and why Neptune and Uranus still are neighbors. Their ancient spin rates were nearly identical before separation and still are. The seventh story is how and why, if not when, Jupiter and Saturn were picked off and captured by the Sun, some 480,000,000 miles and 880,000,000 miles respectively from the Sun. Part of the story is why they, too, are still neighbors. Like the Neptune-Uranus case, their spin rates also were nearly identical and still are.

The eighth story is related to the seventh. It is probable that Saturn already had its icy rings before it was ferried close to the Sun and delivered. Since then, the solar radiation has been effervescing away those splendid rings; they are a mere shadow of what they once were. The rings of Saturn are one kind of dating mechanism for the origin of the solar system, and as such, they indicate recentness. The eighth story is the breakup of the quintet in the inner region of the solar system. A two-planet co-orbiting binary, with three satellites, revolving around "L. B." was converted to four planets and just one satellite, all revolving around the Sun.

The ninth story of our skyscraper is how an early orbit of Mars and Earth was changed; Mars was liberated first from the Earth and next from Little Brother. It features the new second orbit of Mars. It was still long and narrow, but Mars now orbited the Sun instead of L. B. In the earlier age, Mars had sought the Earth repeatedly. And next, despite having been separated from the Earth, it continued to seek our planet.

The tenth story of our skyscraper of cosmology addresses why Venus rotates so slowly, and why it rotates in the backward mode. Gradualists have pondered this for 100 years and have yet to gain even an inkling. This story also reveals why Venus orbits on the other side, on the edge of Hell's Kitchen, only 65,000,000+ miles from the Sun. We are pleased to announce that understanding this condition is merely a matter of understanding Venus' previous co-orbiting of the Earth and related conditions. Given a good model, the unique, backward, slow rotation of Venus isn't that hard to solve.

The eleventh story of our skyscraper is why tiny Mercury rotates so slowly. The reason is that formerly it had no spin at all when it orbited Venus in the previous, primordial era. Once non-rotating, its geometry of capture dictated a very, very slow prograde rotation. Mercury rotates once in 58.65 days.

The twelfth story in our catastrophic cosmology is the observation that the Earth-Moon system was dropped off in "the slot." It acquired a new orbit around the Sun, one conquered neither by superheated waters like Venus, nor by perpetual ices like Mars. Coincidence? Perhaps. By design? More likely.

For some 200+ years, gradualists have always looked to the Sun for cosmic supplies to stock the Solar System. Having admitted their error in part (after over 100 years), they now have settled for claiming the Sun and planets formed simultaneously out of a cloud. Sublime in misdirection, the gradualists have been looking in just exactly the wrong region for the origin of the planets, the region of inner space, near to and in "Hell's Kitchen." They should have been looking to the region 1,000 a.u. or so from the Sun, in dark, remote, frigid space.
However, evidence indicates that spin rates were acquired in a remote region at or beyond 1,000 a.u. from the Sun. So were satellite systems and craters in abundance. A delivery into the Inner Solar System requires a properly modeled delivery system, and a logical route. The logical route is simply the ecliptic plane. The delivery system is some super-planet along the lines we have modeled, a super-planet 30 to 40 times as massive as Jupiter, or 9,000 to 12,000 times the Earth's mass. If its density is similar to the Earth's, Little Brother could have a diameter of 190,000 miles. If these eight planets were delivered to the Sun, into the inner Solar System, there must be a delivery system, a United Parcel Service of the cosmos. The deliveries of the Neptune-Uranus pair and the Saturn-Jupiter pair are two items in evidence. The delivery of the quintet is a third indication that a delivery system exists. But is there more evidence?

PREVIEW. Surprisingly, as one proceeds into an analysis of the planets in "Hell's Kitchen," including the Sun itself, three or four more scars, or clues, can be observed, scars of Little Brother's last flyby around the Sun. Read on. We offer our model with logic interspersed with various kinds of evidence.

PREVIEW. As so often happens in good movies, while the previously mentioned clues are good evidence of this delivery system, the United Parcel Service of the cosmos, nevertheless the best clues are left for last. Those clues, three or four of them, can be seen inside the orbit of Venus. Read on.

F2 Kuiper, Gerard P., Planets and Satellites. Chicago: University of Chicago Press, 1961, pp. 577-578.

The Recent Organization of The Solar System by Patten & Windsor
http://www.creationism.org/patten/PattenRecOrgSolSys/PattenRootssCh06.html
The Substitution Method

Even relatively untrained algebra learners can find the answer to a simple system of equations using the guess and check method. If time permits, graphing the lines is an alternate way to find the solutions. It is important to familiarize yourself with these two methods of solving systems before moving on to tackle more heavy-duty methods. Almost every system of two equations can be solved using the algebraic methods of substitution or elimination. This lesson demonstrates how to substitute to find the solutions. Obviously, this method is called the substitution method.

Substituting is a concept that has been presented many times since the beginning of algebra. First, you learned how to substitute for a single variable. Next, you learned how to substitute for several variables. In this lesson, the first step to solving a system of equations is to substitute for an entire side of the equation.

When there are many ways to do a particular problem, it makes sense to choose a method that gives you the best chance to get the answer. If two methods both give you a "best" chance at the correct answer, then choose the one that requires the least amount of work. The difficulty level of a system of equations may range from "easy" up to "very hard." When doing an easy problem, guess and check is a good method to use. More difficult problems may be nearly impossible to guess, though, so they are best done using another method. When one of the equations of a system has a variable that can be easily isolated, a good method to use is substitution.

Substituting in a system of equations simply means that you will replace a variable in one of the equations with whatever that variable equals. The substitution in example 1 is clearly marked so that you can see how the substitution worked.

Example 1: Find the solution to the system. Here, the first equation was substituted into the second equation, resulting in an answer of y = 4. Since there are two variables, you must also find the value of x in your solution. Since y = 4, you can substitute this value into the equation x = 2y – 1. Once you have found the value of both variables, rewrite them in a single location… this is your solution.

When both equations already have a variable isolated, then the problem can be done in much the same way. This type of problem is confusing to many people learning how to solve systems because there is a choice of how to substitute. You can actually do the problem by substituting either equation into the other one.

Example 2: Find the solution to the system. It would be fine to use a = 400 – 2b and substitute (400 – 2b) in for the value of a in the second equation. The answer works out to be the same this way. The only advantage that (b + 100) has over (400 – 2b) is that it is a little simpler to work with.

There are times when neither equation has a term that is isolated. In this case, determine whether one of the equations can be manipulated to isolate one term. If so, then substitution may be the best method for solving the system. If it is not easy to isolate a term, then substitution is not a good option and you should consider another method (like elimination).

Example 3: Find the solution to the system. Since there are so many work steps in these equations, it may be hard to tell if you made a mistake. Luckily, it is very easy to check your work on these problems… just plug your solution into each equation and see if it works.
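To make the substitute-and-back-substitute bookkeeping concrete, here is a short Python sketch using the SymPy library. The worked steps and equation images from the original lesson are not reproduced above, so the system used here is reconstructed from the two expressions quoted in Example 2 (a = b + 100 and a = 400 – 2b); treat it as an illustrative sketch rather than the lesson's own worked example.

```python
# Sketch of the substitution method, using the system reconstructed from
# the expressions quoted in Example 2: a = b + 100 and a = 400 - 2b.
from sympy import symbols, Eq, solve

a, b = symbols("a b")

eq1 = Eq(a, b + 100)      # first equation: a is already isolated
eq2 = Eq(a, 400 - 2 * b)  # second equation: a is also isolated

# Substitution step: replace a in eq2 with the right-hand side of eq1.
substituted = eq2.subs(a, eq1.rhs)   # (b + 100) = 400 - 2*b
b_value = solve(substituted, b)[0]   # -> b = 100

# Back-substitute to recover a.
a_value = eq1.rhs.subs(b, b_value)   # -> a = 200

print(f"b = {b_value}, a = {a_value}")

# Check the answer by plugging it into both original equations.
assert eq1.subs({a: a_value, b: b_value})
assert eq2.subs({a: a_value, b: b_value})
```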
It is worth the time and effort needed to check your answer. Here is a check of the answer to example #3. Another tip is to make sure you leave yourself enough room for all your work. Efficient students can do about 6 of these problems on one piece of paper. Avoid the temptation to squeeze in work or leave out steps in order to save paper. After you have spent all that time doing the problem, make sure you got it right by checking your answer. If it doesn't check, try the problem again. Only after you have tried a second and third time and still are coming up with a solution that doesn't work should you consider getting a second opinion on how to do the problem. Remember that you must think for yourself in order to improve in mathematics and feel really confident about your ability. Don't rob yourself of that opportunity by being impatient as you complete your assignment.

Looking for a different lesson on systems of equations? Try the links below. Before attempting to learn systems of equations, one should be comfortable solving a variety of (single) equations.

- Two Step Equations
- Multi-Step Equations
- Equations with Variables on Both Sides
- Absolute Value Equations
http://www.freemathresource.com/lessons/algebra/64-the-substitution-method
The term fascism was first used by Italian dictator Benito Mussolini in 1919. The term comes from the Italian word fascio, which means “union” or “league.” It also refers to the ancient Roman symbol of power, the fasces, a bundle of sticks bound to an ax, which represented civic unity and the authority of Roman officials to punish wrongdoers. Fascist movements surfaced in most European countries and in some former European colonies in the early 20th century. Fascist political parties and movements capitalized on the intense patriotism that emerged as a response to widespread social and political uncertainty after World War I (1914-1918) and the Russian Revolution of 1917. With the important exceptions of Italy and Germany, however, fascist movements failed in their attempts to seize political power. In Italy and Germany after World War I, fascists managed to win control of the state and attempted to dominate all of Europe, resulting in millions of deaths in the Holocaust and World War II (1939-1945). Because fascism had a decisive impact on European history from the end of World War I until the end of World War II, the period from 1918 to 1945 is sometimes called the fascist era. Fascism was widely discredited after Italy and Germany lost World War II, but persists today in new forms. Some scholars view fascism in narrow terms, and some even insist that the ideology was limited to Italy under Mussolini. When the term is capitalized as Fascism, it refers to the Italian movement. But other writers define fascism more broadly to include many movements, from Italian Fascism to contemporary neo-Nazi movements in the United States. This article relies on a very broad definition of fascism, and includes most movements that aim for total social renewal based on the national community while also pushing for a rejection of liberal democratic institutions.

II. Major Elements

Scholars disagree over how to define the basic elements of fascism. Marxist historians and political scientists (that is, those who base their approach on the writings of German political theorist Karl Marx) view fascism as a form of politics that is cynically adopted by governments to support capitalism and to prevent a socialist revolution. These scholars have applied the label of fascism to many authoritarian regimes that came to power between World War I and World War II, such as those in Portugal, Austria, Poland, and Japan. Marxist scholars also label as fascist some authoritarian governments that emerged after World War II, including regimes in Argentina, Chile, Greece, and South Africa. Some non-Marxist scholars have dismissed fascism as a form of authoritarianism that is reactionary, responding to political and social developments but without any objective beyond the exercise of power. Some of these scholars view fascism as a crude, barbaric form of nihilism, asserting that it lacks any coherent ideals or ideology. Many other historians and political scientists agree that fascism has a set of basic traits—a fascist minimum—but tend to disagree over what to include in the definition. Scholars disagree, for example, over issues such as whether the concept of fascism includes Nazi Germany and the Vichy regime (the French government set up in southern France in 1940 after the Nazis had occupied the rest of the country). Beginning in the 1970s, some historians and political scientists began to develop a broader definition of fascism, and by the 1990s many scholars had embraced this approach.
This new approach emphasizes the ways in which fascist movements attempt revolutionary change and their central focus on popularizing myths of national or ethnic renewal. Seen from this perspective, all forms of fascism have three common features: anticonservatism, a myth of ethnic or national renewal, and a conception of a nation in crisis. Fascist movements usually try to retain some supposedly healthy parts of the nation's existing political and social life, but they place more emphasis on creating a new society. In this way fascism is directly opposed to conservatism—the idea that it is best to avoid dramatic social and political change. Instead, fascist movements set out to create a new type of total culture in which values, politics, art, social norms, and economic activity are all part of a single organic national community. In Nazi Germany, for example, the fascist government in the 1930s tried to create a new Volksgemeinschaft (people's community) built around a concept of racial purity. A popular culture of Nazi books, movies, and artwork that celebrated the ideal of the so-called new man and new woman supported this effort. With this idealized people's community in mind, the government created new institutions and policies (partly as propaganda) to build popular support. But the changes were also an attempt to transform German society in order to overcome perceived sources of national weakness. In the same way, in Italy under Mussolini the government built new stadiums and held large sporting events, sponsored filmmakers, and financed the construction of huge buildings as monuments to fascist ideas. Many scholars therefore conclude that fascist movements in Germany and Italy were more than just reactionary political movements. These scholars argue that these fascist movements also represented attempts to create revolutionary new modern states.

B. Myth of National or Ethnic Renewal

Even though fascist movements try to bring about revolutionary change, they emphasize the revival of a mythical ethnic, racial, or national past. Fascists revise conventional history to create a vision of an idealized past. These mythical histories claim that former national greatness has been destroyed by such developments as the mixing of races, the rise of powerful business groups, and a loss of a shared sense of the nation. Fascist movements set out to regain the heroic spirit of this lost past through radical social transformations. In Nazi Germany, for example, the government tried to "purify" the nation by killing millions of Jews and other minority groups. The Nazis believed they could create a harmonious community whose values were rooted in an imaginary past in which there were no differences of culture, "deviant" ideologies, or "undesirable" genetic traits. Because fascist ideologies place great value on creating a renewed and unified national or ethnic community, they are hostile to most other ideologies. In addition to rejecting conservatism, fascist movements also oppose such doctrines as liberalism, individualism, materialism, and communism. In general, fascists stand against all scientific, economic, religious, academic, cultural, and leisure activities that do not serve their vision of national political life.

C. Idea of a Nation in Crisis

A fascist movement almost always asserts that the nation faces a profound crisis.
Sometimes fascists define the nation as the same as a nation-state (country and people with the same borders), but in other cases the nation is defined as a unique ethnic group with members in many countries. In either case, the fascists present the national crisis as resolvable only through a radical political transformation. Fascists differ over how the transformation will occur. Some see a widespread change in values as coming before a radical political transformation. Others argue that the radical political transformation will come first, followed by a change in values. Fascists claim that the nation has entered a dangerous age of mediocrity, weakness, and decline. They are convinced that through their timely action they can save the nation from itself. Fascists may assert the need to take drastic action against a nation's "inner" enemies. Fascists promise that with their help the national crisis will end and a new age will begin that restores the people to a sense of belonging, purpose, and greatness. The end result of the fascist revolution, they believe, will be the emergence of a new man and new woman. This new man and new woman will be fully developed human beings, uncontaminated by selfish desires for individual rights and self-expression and devoted only to an existence as part of the renewed nation's destiny.

III. How Fascist Movements Differ

Because each country's history is unique, each fascist movement creates a particular vision of an idealized past depending on the country's history. Fascist movements sometimes combine quasi-scientific racial and economic theories with these mythical pasts to form a larger justification for the fascist transformation, but also may draw on religious beliefs. Even within one country, separate fascist movements sometimes arise, each creating its own ideological variations based on the movement's particular interpretation of politics and history. In Italy after World War I, for example, the Fascist Party led by Benito Mussolini initially faced competition from another fascist movement led by war hero Gabriele D'Annunzio.

A. Intellectual Foundations

The diversity of fascist movements means that each has its own individual intellectual and cultural foundation. Some early fascist movements were inspired in part by early 20th century social and political thought. In this period the French philosopher Georges Sorel built on earlier radical theories to argue that social change should be brought about through violent strikes and acts of sabotage organized by trade unions. Sorel's emphasis on violence seems to have influenced some proponents of fascism. The late 19th and early 20th century also saw an increasing intellectual preoccupation with racial differences. From this development came fascism's tendency toward ethnocentrism—the belief in the superiority of a particular race. The English-born German historian Houston Stewart Chamberlain, for example, proclaimed the superiority of the German race, arguing that Germans descended from genetically superior bloodlines. Some early fascists also interpreted Charles Darwin's theory of evolution to mean that some races of people were inherently superior. They argued that this meant that the “survival of the fittest” required the destruction of supposedly inferior peoples. But these philosophical influences were not the main inspiration for most fascist movements. Far more important was the example set by the fascist movements in Germany and Italy.
Between World War I and World War II fascist movements and parties throughout Europe imitated Italian Fascism and German Nazism. Since 1945 many racially inclined fascist organizations have been inspired by Nazism. These new Nazi movements are referred to as neo-Nazis because they modify Nazi doctrine and because the original Nazi movement inspires them. B. Views on Race Though all fascist movements are nationalist, some fascist ideologies regard an existing set of national boundaries as an artificial constraint on an authentic people or ethnic group living within those boundaries. Nazism, for example, sought to extend the frontiers of the German state to include all major concentrations of ethnic Germans. This ethnic concept of Germany was closely linked to an obsession with restoring the biological purity of the race, known as the Aryan race, and the destruction of the allegedly degenerate minorities. The result was not only the mass slaughter of Jews and Gypsies (Roma), but the sterilization or killing of hundreds of thousands of ethnic Germans who were members of religious minorities or mentally or physically disabled, or for some other reason deemed by self-designated race experts not to have lives worth living. The Nazis' emphasis on a purified nation also led to the social exclusion or murder of other alleged deviants, such as Communists, homosexuals, and Jehovah's Witnesses. The ultranationalism and ethnocentrism of fascist ideologies makes all of them racist. Some forms of fascism are also anti-Semitic (hostile to Jews) or xenophobic (fearful of foreign people). Some fascist movements, such as the Nazis, also favor eugenics—attempts to supposedly improve a race through controlled reproduction. But not all fascist movements have this hostility toward racial and ethnic differences. Some modern forms of fascism, in fact, preach a “love of difference” and emphasize the need to preserve distinct ethnic identities. As a result, these forms of fascism strongly oppose immigration in order to maintain the purity of the nation. Some scholars term this approach differentialism, and point to right-wing movements in France during the 1990s as examples of this form of fascism. Some modern fascist variants have broken with the early fascist movements in another important way. Many early fascist movements sought to expand the territory under their control, but few modern fascist movements take this position. Instead of attempting to take new territory, most modern fascists seek to racially purify existing nations. Some set as their goal a Europe of ethnically pure nations or a global Aryan solidarity. C. Attitudes Toward Religion In addition, fascist movements do not share a single approach to religion. Nazism was generally hostile to organized religion, and Hitler's government arrested hundreds of priests in the late 1930s. Some other early fascist movements, however, tried to identify themselves with a national church. In Italy, for example, the Fascists in the 1930s attempted to gain legitimacy by linking themselves to the Catholic Church. In the same way, small fascist groups in the United States in the 1980s and 1990s combined elements of neo-Nazi or Aryan paganism with Christianity. In all these cases, however, the fascist movements have rejected the original spirit of Christianity by celebrating violence and racial purity. D. Emphasis on Militarism Fascist movements also vary in their reliance on military-style organization. 
Some movements blend elite paramilitary organizations (military groups staffed by civilians) with a large political party led by a charismatic leader. In most cases, these movements try to rigidly organize the lives of an entire population. Fascism took on this military or paramilitary character partly because World War I produced heightened nationalism and militarism in many countries. Even in these movements, however, there were many purely intellectual fascists who never served in the military. Nazi Germany and Italy under Mussolini stand as the most notable examples of a paramilitary style of organization. Since the end of World War II, however, the general public revulsion against war and anything resembling Nazism created widespread hostility to paramilitary political organizations. As a result, fascist movements since the end of World War II have usually relied on new nonparamilitary forms of organization. There have been some fascist movements that have paramilitary elements, but these have been small compared to the fascist movements in Germany and Italy of the 1930s and 1940s. In addition, most of the paramilitary-style fascist movements formed since World War II have lacked a single leader who could serve as a symbol of the movement, or have even intentionally organized themselves into leaderless terrorist cells. Just as most fascist movements in the postwar period downplayed militarism, they have also abandoned some of the more ambitious political programs created in Nazi Germany and Fascist Italy. Specifically, recent movements have rejected the goals of corporatism (government-coordinated economics), the idea that the state symbolizes the people and embodies the national will, and attempts to include all social groups in a single totalitarian movement. E. Use of Political Rituals Another feature of fascism that has largely disappeared from movements after World War II is the use of quasi-religious rituals, spectacular rallies, and the mass media to generate mass support. Both Nazism and Italian Fascism held rallies attended by hundreds of thousands, created a new calendar of holidays celebrating key events in the regime's history, and conducted major sporting events or exhibitions. All of this was intended to convince people that they lived in a new era in which history itself had been transformed. In contrast to what fascists view as the absurdity and emptiness of life under liberal democracy, life under fascism was meant to be experienced as historical, life-giving, and beautiful. Since 1945, however, fascist movements have lacked the mass support to allow the staging of such theatrical forms of politics. The movements have not, however, abandoned the vision of creating an entirely new historical era. IV. Compared to Other Radical Right-Wing Ideologies Although fascism comes in many forms, not all radical right-wing movements are fascist. In France in the 1890s, for example, the Action Française movement started a campaign to overthrow the democratic government of France and restore the king to power. Although this movement embraced the violence and the antidemocratic tendencies of fascism, it did not develop the fascist myth of revolutionary rebirth through popular power. There have also been many movements that were simply nationalist but with a right-wing political slant. In China, for example, the Kuomintang (The Chinese National People's Party), led by Chiang Kai-shek, fought leftist revolutionaries until Communists won control of China in 1949. 
Throughout the 20th century this type of right-wing nationalism was common in many military dictatorships in Latin America, Africa, and Asia. Fascism should also be distinguished from right-wing separatist movements that set out to create a new nation-state rather than to regenerate an existing one. This would exclude cases such as the Nazi puppet regime in Croatia during World War II. This regime, known as the Ustaše government, relied on paramilitary groups to govern, and hoped that their support for Nazism would enable Croatia to break away from Yugoslavia. This separatist goal distinguishes the Ustaše from genuine fascist movements. Fascism also stands apart from regimes that are based on racism but do not pursue the goal of creating a revolutionary new order. In the 1990s some national factions in Bosnia and Herzegovina engaged in ethnic cleansing, the violent removal of targeted ethnic groups with the objective of creating an ethnically pure territory. In 1999 the Serbian government's insistence upon pursuing this policy against ethnic Albanians in the province of Kosovo led to military intervention by the North Atlantic Treaty Organization (NATO). But unlike fascist movements, the national factions in Yugoslavia did not set out to destroy all democratic institutions. Instead these brutal movements hoped to create ethnically pure democracies, even though they used violence and other antidemocratic methods. Another example of a racist, but not fascist, organization was the Ku Klux Klan in the 1920s, which became a national mass movement in the United States. Although racial hatred was central to the Klan's philosophy, its goals were still reactionary rather than revolutionary. The Klan hoped to control black people, but it did not seek to build an entirely new society, as a true fascist movement would have. Since 1945, however, the Klan has become increasingly hostile to the United States government and has established links with neo-Nazi groups. In the 1980s and 1990s this loose alliance of antigovernment racists became America's most significant neo-fascist movement. V. The Origins of Fascism Despite the many forms that fascism takes, all fascist movements are rooted in two major historical trends. First, in late 19th-century Europe mass political movements developed as a challenge to the control of government and politics by small groups of social elites or ruling classes. For the first time, many countries saw the growth of political organizations with membership numbering in the thousands or even millions. Second, fascism gained popularity because many intellectuals, artists, and political thinkers in the late 19th century began to reject the philosophical emphasis on rationality and progress that had emerged from the 18th-century intellectual movement known as the Enlightenment. These two trends had many effects. For example, new forms of popular racism and nationalism arose that openly celebrated irrationality and vitalism—the idea that human life is self-directed and not subject to predictable rules and laws. This line of thinking led to calls for a new type of nation that would overcome class divisions and create a sense of historical belonging for its people. For many people, the death and brutality of World War I showed that rationality and progress were not inherent in humanity, and that a radically new direction had to be taken by Western civilization if it was to survive. World War I also aroused intense patriotism that continued after the war. 
These sentiments became the basis of mass support for national socialist movements that promised to confront the disorder in the world. Popular enthusiasm for such movements was especially strong in Germany and Italy, which had only become nation-states in the 19th century and whose parliamentary traditions were weak. Despite having fought on opposite sides, both countries emerged from the war to face political instability and a widespread feeling that the nation had been humiliated in the war and by the settlement terms of the Treaty of Versailles. In addition, many countries felt threatened by Communism because of the success of the Bolsheviks during the Russian Revolution. VI. The First Fascist Movement: Italy A. Mussolini's Fasci The first fascist movement developed in Italy after World War I. Journalist and war veteran Benito Mussolini served as the guiding force behind the new movement. Originally a Marxist, by 1909 Mussolini was convinced that a national rather than an international revolution was necessary, but he was unable to find a suitable catalyst or vehicle for the populist revolutionary energies it demanded. At first he looked to the Italian Socialist Party and edited its newspaper Avanti! (Forward!). But when war broke out in Europe in 1914, he saw it as an opportunity to galvanize patriotic energies and create the spirit of heroism and self-sacrifice necessary for the country's renewal. He thus joined the interventionist campaign, which urged Italy to enter the war. In 1914, as Italian leaders tried to decide whether to enter the war, Mussolini founded the newspaper Il Popolo d'Italia (The People of Italy) to encourage Italy to join the conflict. After Italy declared war on Austria-Hungary in May 1915, Mussolini used Il Popolo d'Italia to persuade Italians that the war was a turning point for their country. Mussolini argued that when the frontline combat soldiers returned from the war, they would form a new elite that would bring about a new type of state and transform Italian society. The new elite would spread community and patriotism, and introduce sweeping changes in every part of society. Mussolini established the Fasci Italiani di Combattimento (Italian Combat Veterans' League) in 1919 to channel the revolutionary energies of the returning soldiers. The group's first meeting assembled a small group of war veterans, revolutionary syndicalists (socialists who worked for a national revolution as the first step toward an international one), and futurists (a group of poets who wanted Italian politics and art to fuse in a celebration of modern technological society's dramatic break with the past). The Fasci di Combattimento, sometimes known simply as the Fasci, initially adopted a leftist agenda, including democratic reform of the government, increased rights for workers, and a redistribution of wealth. In the elections of 1919 Fascist candidates won few votes. Fascism gained widespread support only in 1920, after the Socialist Party organized militant strikes in Turin and Italy's other northern industrial cities. The Socialist campaign caused chaos through much of the country, leading to concerns that further Socialist victories could damage the Italian economy. Fear of the Socialists spurred the formation of hundreds of new Fascist groups throughout Italy. Members of these groups formed the Blackshirts—paramilitary squadre (squads) that violently attacked Socialists and attempted to stifle their political activities.
B. Mussolini's Rise to Power The Fascists gained widespread support as a result of their effective use of violence against the Socialists. Prime Minister Giovanni Giolitti then gave Mussolini's movement respectability by including Fascist candidates in his government coalition bloc that campaigned in the May 1921 elections. The elections gave the newly formed National Fascist Party (PNF) 35 seats in the Italian legislature. The threat from the Socialists weakened, however, and the Fascists seemed to have little chance of winning more power until Mussolini threatened to stage a coup d'état in October 1922. The Fascists showed their militant intentions in the March on Rome, in which about 25,000 black-shirted Fascists staged demonstrations throughout the capital. Although the government prepared to crush the protest, King Victor Emmanuel III refused to sign a decree that would have imposed martial law and enabled the military to destroy the Fascists. Instead the king invited Mussolini to form a coalition government. Mussolini accepted, but it was another two years before Fascism became an authoritarian regime. Early in 1925 Mussolini seized dictatorial powers during a national political crisis sparked by the Blackshirts' murder of the socialist Giacomo Matteotti, Mussolini's most outspoken parliamentary critic. C. Fascist Consolidation of Power Between 1925 and 1931, the Fascists consolidated power through a series of new laws that provided a legal basis for Italy's official transformation into a single-party state. The government abolished independent political parties and trade unions and took direct control of regional and local governments. The Fascists sharply curbed freedom of the press and assumed sweeping powers to silence political opposition. The government created a special court and police force to suppress so-called anti-Fascism. In principle Mussolini headed the Fascist Party and, as head of the government, ruled in consultation with the Fascist Grand Council. In reality, however, he increasingly became an autocrat answerable to no one. Mussolini was able to retain power because of his success in presenting himself as an inspired Duce (Leader) sent by providence to make Italy great once more. The Fascist government soon created mass organizations to regiment the nation's youth as well as adult leisure time. The Fascists also established a corporatist economic system, in which the government, business, and labor unions collectively formulated national economic policies. The system was intended to harmonize the interests of workers, managers, and the state. In practice, however, Fascist corporatism retarded technological progress and destroyed workers' rights. Mussolini also pulled off a major diplomatic success when he signed the Lateran Treaty with the Vatican in 1929, which settled a long-simmering dispute over the Catholic Church's role in Italian politics. This marked the first time in Italian history that the Catholic Church and the government agreed on their respective roles. Between 1932 and 1934 millions of Italians attended the Exhibition of the Fascist Revolution in Rome, staged by the government to mark Fascism's first ten years in power. By this point the regime could plausibly boast that it had completed the work of national unification begun in the Risorgimento (the 19th-century Italian unification movement) and had turned Italy into a nation that enjoyed admiration and respect abroad.
For a time it seemed that Italy had recovered from the national humiliation, political chaos, and social division following World War I and was managing to avoid the global economic and political crises caused by the Great Depression. Mussolini could claim that he had led the country through a true revolution with a minimum of bloodshed and repression, restoring political stability, national pride, and economic growth. All over the country, Mussolini's speeches drew huge crowds, suggesting that most Italians supported the Fascist government. Many countries closely watched the Italian corporatist economic experiment. Some hoped that it would prove to be a Third Way—an alternative economic policy between free-market capitalism and communism. Mussolini won the respect of diplomats all over the world because of his opposition to Bolshevism, and he was especially popular in the United States and Britain. To many, the Fascist rhetoric of Italy's rebirth seemed to be turning into a reality. D. The Fall of Italian Fascism Two events can be seen as marking the turning point in Fascism's fortunes. First, Adolf Hitler became chancellor of Germany in January 1933, which meant that Mussolini had the support of a powerful fascist ally. Second, Italy invaded Ethiopia in October 1935 (see Italy: The Ethiopian Campaign). In less than a year the Fascist army crushed the poorly equipped and vastly outnumbered Ethiopians. Mussolini's power peaked at this point, as he seemed to be making good on his promise to create an African empire worthy of the descendants of ancient Rome. The League of Nations condemned the invasion and voted to impose sanctions on Italy, but this only made Mussolini a hero of the Italian people, as he stood defiant against the dozens of countries that opposed his militarism. But the Ethiopian war severely strained Italy's military and economic resources. At the same time, international hostility to Italy's invasion led Mussolini to forge closer ties with Hitler, who had taken Germany out of the League of Nations. As Hitler and Mussolini worked more closely together, they became both rivals and allies. Hitler seems to have dictated Mussolini's foreign policy. Both Germany and Italy sent military assistance to support General Francisco Franco's quasi-fascist forces during the Spanish Civil War, which broke out in 1936. The Italian troops in Spain suffered several dramatic losses, however, undermining Mussolini's claim that his Fascist army made Italy a military world power. Then in November 1936 Mussolini announced the existence of the Rome-Berlin Axis—a formal military alliance with Nazi Germany. Fascism, once simply associated with Italy's resolution of its domestic problems, had become the declared enemy of Britain, France, and the United States, and of many other democratic and most communist countries. Italian Fascism was fatally linked with Hitler's bold plans to take control of much of Europe and Russia. The formation of the pact with Hitler further isolated Italy internationally, leading Mussolini to move the country closer to a program of autarky (economic self-sufficiency without foreign trade). As Italy prepared for war, the government's propaganda became more belligerent, the tone of mass rallies more militaristic, and Mussolini's posturing more vain and delusional. Italian soldiers even started to mimic the goose-step marching style of their Nazi counterparts, though it was called the Roman step. 
Although the Italian Fascists had ridiculed Nazi racism and declared that Italy had no “Jewish problem,” in 1938 the government suddenly issued Nazi-style anti-Semitic laws. The new laws denied that Jews could be Italian. This policy eventually led the Fascist government of the Italian Social Republic—the Nazi puppet government in northern Italy—to give active help to the Nazis when they sent about 8,000 Italian Jews to their deaths in extermination camps, beginning in the fall of 1943. Mussolini knew his country was ill-prepared for a major European war, and he tried to use his influence to broker peace in the years before World War II. But he had become a prisoner of his own militaristic rhetoric and myth of infallibility. When Hitler's armies swept through Belgium into France in the spring of 1940, Mussolini abandoned neutrality and declared war against France and Britain. In this way he locked Italy into a hopeless war against a powerful alliance that eventually comprised the British empire, the Union of Soviet Socialist Republics (USSR), and the United States. Italy's armed forces were weak and unprepared for war, despite Mussolini's bold claims of invincibility. Italian forces suffered humiliating defeats in 1940 and 1941, and Mussolini's popularity in Italy plummeted. In July 1943, faced with imminent defeat at the hands of the Allies despite Nazi reinforcements, the Fascist Grand Council passed a vote of no confidence against Mussolini, removing him from control of the Fascist Party. The king ratified this decision, dismissed Mussolini as head of the government, and had him arrested. Most Italians were overjoyed at the news that the supposedly infallible Mussolini had been deposed. The popular consensus behind the regime had evaporated, leaving only the fanaticism of the intransigenti (hard-liners). Nevertheless, Nazi Schutzstaffel (SS) commandos rescued Mussolini from his mountaintop prison, and Hitler then put him in control of the Italian Social Republic—the Nazi puppet government in northern Italy. The Nazis kept Mussolini under tight control, however, using him to crush partisans (anti-Fascist resistance fighters) and to delay the defeat of Germany. Partisans finally shot Mussolini as he tried to flee in disguise to Switzerland in April 1945. Meanwhile hundreds of thousands of Italian soldiers endured terrible suffering, either forced to fight alongside the Nazis in Italy or on the Russian front, or to work for the Nazi regime as slave labor. The rise and fall of Fascism in Italy showed several general features of fascism. First, Italian Fascism fed off a profound social crisis that had undermined the legitimacy of the existing system. Many Europeans supported fascism in the 1930s because of a widespread perception that the parliamentary system of government was fundamentally corrupt and inefficient. Thus it was relatively easy for Italians to support Mussolini's plans to create a new type of state that would transform the country into a world power and restore Italy to the prominence it enjoyed during the Roman Empire and the Renaissance. Second, Italian Fascism was an uneasy blend of elitism and populism. A revolutionary elite imposed Fascist rule on the people. In order to secure power the movement was forced to collaborate with conservative ruling elites—the bourgeoisie (powerful owners of business), the army, the monarchy, the Church, and state officials.
At the same time, however, the Fascist movement made sustained efforts to generate genuine popular enthusiasm and to revolutionize the lives of the Italian people. Third, Fascism was a charismatic form of politics that asserted the extraordinary capabilities of the party and its leader. The main tool for the Fascistization (conversion to Fascism) of the masses and the creation of the new Fascist man was not propaganda, censorship, education, or terror, or even the large fascist social and military organizations. Instead, the Fascists relied on the extensive use of a ritualized, theatrical style of politics designed to create a sense of a new historical era that abolished the politics of the past. In this sense Fascism was an attempt to confront urbanization, class conflict, and other problems of modern society by making the state itself the object of a public cult, creating a sort of civic religion. Fourth, Italy embraced the fascist myth that national rebirth demanded a permanent revolution—a constant change in social and political life. To sustain a sense of constant renewal, Italian Fascism was forced by its own militarism to pursue increasingly ambitious foreign policy goals and ever more unrealizable territorial claims. This seems to indicate that any fascist movement that identifies rebirth with imperialist expansion and manages to seize power will eventually exhaust the capacity of the nation to win victory after victory. In the case of Italian Fascism, this exhaustion set in quickly. A fifth feature of Italian Fascism was its attempt to achieve a totalitarian synthesis of politics, art, society, and culture, although this was a conspicuous failure. Italian Fascism never created a true new man. Modern societies have a mixture of people with differing values and experiences. This diversity can be suppressed but not reversed. The vast majority of Italians may have temporarily embraced Fascist nationalism because of the movement's initial successes, but the people were never truly Fascistized. In short, in its militarized version between World War I and World War II, the fascist vision was bound to lead in practice to a widening gap between rhetoric and reality, between goals and achievements. Finally, the fate of Italian Fascism illustrates how the overall goal of a fascist utopia has always turned into a nightmare. Tragically for Italy and the international community, Mussolini embarked on his imperial expansion just as Hitler began his efforts to reverse the Versailles Treaty and reestablish Germany as a major military power. This led to the formation of the Axis alliance, which gave Hitler a false sense of security about the prospects for his imperial schemes. The formation of this alliance helped lead to World War II, and it committed Mussolini to unwinnable military campaigns that resulted in the Allied invasion of Italy in 1943. The death, destruction, and misery of the fighting in Italy were inflicted on a civilian population that had come to reject the Fascist vision of Italian renewal, but whose public displays of enthusiasm for the regime before the war had kept Mussolini in power. VII. Fascism in Germany: National Socialism The only fascist movement outside Italy that came to power in peacetime was Germany's National Socialist German Workers Party—the Nazis. The core of the National Socialist program was an ideology and a policy of war against Germany's supposed moral and racial decay and a struggle to begin the country's rebirth.
This theme of struggle and renewal dominates the many ideological statements of Nazism, including Adolf Hitler's book Mein Kampf (My Struggle, 1939), speeches by propaganda minister Joseph Goebbels, and Leni Riefenstahl's propaganda film Triumph des Willens (Triumph of the Will, 1935). All of the Nazi government's actions served this dual purpose of destroying the supposed sickness of the old Germany and creating a healthy new society. The government abolished democratic freedoms and institutions because they were seen as causing national divisions. In their place the government created an authoritarian state, known as the Third Reich, that would serve as the core of the new society. The Nazis promoted German culture, celebrated athleticism and youth, and tried to ensure that all Germans conformed physically and mentally to an Aryan ideal. But in order to achieve these goals, the Nazi regime repressed supposedly degenerate books and paintings, sterilized physically and mentally disabled people, and enslaved and murdered millions of people who were considered enemies of the Reich or "subhuman." This combination of renewal and destruction was symbolized by the pervasive emblem of Nazism, the swastika—a cross with four arms broken at right angles. German propaganda identified the swastika with the rising sun and with rebirth because the bars of the symbol suggest perpetual rotation. To its countless victims, however, the swastika came to signify cruelty, death, and terror. A. Main Features There were two features specific to Nazism that combined to make it so extraordinarily destructive and barbaric once in power. The first feature was the Nazi myth of national greatness. This myth suggested that the country was destined to become an imperial and great military power. Underpinning this myth was a concept of the nation that blended romantic notions about national history and character with pseudo-scientific theories of race, genetics, and natural selection. It led naturally to a foreign policy based on the principle of first uniting all ethnic Germans within the German nation, and then creating a vast European empire free of racial enemies. These ideas led to international wars of unprecedented violence and inhumanity. The second important feature of Nazism was that it developed in the context of a modern economy and society. Even after Germany's defeat in World War I, the country was still one of the most advanced nations in the world in terms of infrastructure, government efficiency, industry, economic potential, and standards of education. Germany also had a deep sense of national pride, belonging, and roots, and a civic consciousness that stressed duty and obedience. In addition, the nation had a long tradition of anti-Semitism and imperialism, and of respect for gifted leaders. The institutions of democracy had only weak roots in Germany, and after World War I democracy was widely rejected as un-German. B. Hitler's Rise to Power The dangerous combination of Germany's modernity and its racist, imperialist ultranationalism became apparent after the economic and political failure of the Weimar Republic, the parliamentary government established in Germany following World War I. Unlike Mussolini, Hitler took control of a country that had a strong industrial, military, and governmental power base that was merely dormant after World War I. 
Hitler also became more powerful than Mussolini because the Nazis simply radicalized and articulated widely held prejudices, whereas the Fascists of Italy had to create new ones. Although the Nazi Party won only the largest share of seats, not a majority, in the democratic elections of 1932, Hitler was appointed chancellor in January 1933 and quickly suspended constitutional rights; after President Paul von Hindenburg died in 1934, Hitler abolished the presidency and declared himself Germany's Führer (leader). Once in control, Hitler was able to insert his fascist vision of the new Germany into a highly receptive political culture. The Third Reich quickly created the technical, organizational, militaristic, and social means to implement its far-reaching schemes for the transformation of Germany and large parts of Europe. The Nazis' attempts to build a new German empire led to the systematic killings of about six million civilians during the 1940s, and the deaths of millions more as the result of Nazi invasion and occupation—a horror rivaled only by Josef Stalin's rule in the Soviet Union during the 1930s. The Nazis primarily killed Jews, but also targeted homosexuals, people with disabilities, and members of religious minorities such as Jehovah's Witnesses. All of this killing and destruction stemmed from the Nazis' conviction that non-Germans had sapped the strength of the German nation. At the same time, the Nazis attempted to take control of most of Europe in an effort to build a new racial empire. This effort led to World War II and the deaths of millions of soldiers and civilians. After early successes in the war, Germany found itself facing defeat on all sides. German forces were unable to overcome the tenacity and sheer size of the Soviet military in Eastern Europe, while in Western Europe and North Africa they faced thousands of Allied aircraft, tanks, and ships. Facing certain defeat, Hitler killed himself in April 1945, and Germany surrendered to the Allies in the following month. Although scholars generally view Italy under Mussolini as the benchmark for understanding fascism in general, the German case shows that not all fascist movements were exactly alike. German National Socialism differed from Italian Fascism in important ways. The most important differences were Nazism's commitment to a more extreme degree of totalitarian control, and its racist conception of the ideal national community. Hitler's visionary fanaticism called for the Gleichschaltung (coordination) of every possible aspect of life in Germany. The totalitarianism that resulted in Germany went further than that of Italy, although not as far as Nazi propaganda claimed. Italian Fascism lacked the ideological fervor to indulge in systematic ethnic cleansing on the scale seen in Germany. Although the Italian Fascist government did issue flagrantly anti-Semitic laws in 1938, it did not contemplate mass extermination of its Jewish population. In Italy, Fascism was also marked by pluralism, compromise, and inefficiency as compared to Nazism. As a result, in Fascist Italy far more areas of personal, social, and cultural life escaped the intrusion of the state than in Nazi Germany. Nevertheless, both Italian Fascism and German National Socialism rested on the same brutal logic of rebirth through what was seen as creative destruction. In Italy this took form in attempts by the Fascist Party to recapture Roman qualities, while in Germany it led the Nazis to attempt to re-Aryanize European civilization.
When Nazism is compared to other forms of fascism, it becomes clear that Nazism was not just a peculiar movement that emerged from Germany's unique history and culture. Instead, Nazism stands as a German variant of a political ideology that was popular to varying degrees throughout Europe between World War I and World War II. As a result of this line of thinking, some historians who study Nazism no longer speculate about what elements of German history led to Nazism. Instead, they try to understand which conditions in the German Weimar Republic allowed fascism to become the country's dominant political force in 1932, and the process by which fascists were able to gain control of the state in 1933. The exceptional nature of the success of fascism in Germany and Italy is especially clear when compared to the fate of fascism in some other countries. VIII. Fascism in Other Countries from 1919 to 1945 World War I and the global economic depression of the 1930s destabilized nearly all liberal democracies in Europe, even those that had not fought in the war. Amidst this social and political uncertainty, fascism gained widespread popularity in some countries but consistently failed to overthrow any parliamentary system outside of Italy and Germany. In many countries fascism attracted considerable attention in newspaper and radio reports, but the movement never really threatened to disturb the existing political order. This was the case in countries such as Czechoslovakia, Denmark, England, Holland, Iceland, Ireland, Norway, Sweden, and Switzerland. Fascism failed to take root in these countries because no substantial electoral support existed there for a revolution from the far right. In France, Finland, and Belgium, far-right forces with fascistic elements mounted a more forceful challenge in the 1930s to elected governments, but democracy prevailed in these political conflicts. In the Communist USSR, the government was so determined to crush any forms of anticommunist dissent that it was impossible for a fascist movement to form there. But fascism did represent a significant movement in a handful of European countries. A review of the countries where fascism saw some success but ultimately failed helps explain the more general failure of fascism. These countries included Spain, Portugal, Austria, France, Hungary, and Romania. In these countries fascism was denied the political space in which to grow and take root. Fascist movements were opposed by powerful coalitions of radical right-wing forces, which either crushed or absorbed them. Some conservative regimes adopted features of fascism to gain popularity. Spain's fascist movement, the Falange Española (Spanish Phalanx) was hobbled by the country's historical lack of a coherent nationalist tradition. The strongest nationalist sentiments originated in Basque Country in north central Spain and in Catalonia in the northeast. But in both areas the nationalists favored separation rather than the unification of Spain as a nation. The Falange gained some support in the 1930s, but it was dominated by the much stronger coalition of right-wing groups led by General Francisco Franco. The Falangists fought alongside Franco's forces against the country's Republican government during the Spanish Civil War in 1936 and 1937. But the Falange was too small to challenge the political supremacy of Franco's coalition of monarchists (supporters of royal authority), Catholics, and conservative military forces. 
The Republican government killed the Falangist leader José Antonio Primo de Rivera in November 1936. With the loss of this key leader, Franco managed to absorb fascism into his movement by combining the Falange with the Carlists, a monarchist group that included a militia known as the Requetés (Volunteers). The fascism of the Falange retained some influence when Franco became dictator in 1939, but this was primarily limited to putting a radical and youthful face on Franco's repressive regime. Franco's quasi-fascist government controlled Spanish politics until Franco's death in 1975. Franco's reign marked the longest-lived form of fascist political control, but fascist ideology took second place to Franco's more general goal of protecting the interests of Spain's traditional ruling elite. In Portugal the dictator António de Olivera Salazar led a right-wing authoritarian government in the 1930s that showed fascist tendencies, but was less restrictive than the regimes of other fascist countries. Salazar sought to create a quasi-fascist Estado Novo (New State) based on strict government controls of the economy, but his government was relatively moderate compared to those in Italy, Germany, and Spain. Salazar's conservative authoritarianism was opposed by another movement with fascist tendencies, the National Syndicalists, which hoped to force a more radical fascist transformation of Portugal. But Salazar's government banned the National Syndicalist movement in 1934 and sent its leader, Rolão Preto, into exile in Spain. Salazar continued to rule as the dictator of Portugal until 1968. In the wake of World War I, Marxist forces on the left and quasi-fascist groups on the right increasingly polarized Austrian politics. Some right-wing forces organized the paramilitary Heimwehr (Home Defense League) to violently attack members of the Socialist Party. Other right-wing forces created an Austrian Nazi party, but this group rejected many basic elements of fascism. The somewhat less extreme Christian Social Party led by Engelbert Dollfuss won power in 1932 through a parliamentary coalition with the Heimwehr. Once in power, Dollfuss created a quasi-fascist regime that resisted incorporation into Hitler's Germany and emphasized the government's ties with the Catholic Church. Dollfuss was killed when the Austrian Nazis attempted a putsch (takeover) in 1934, but the Nazis failed in this effort to take control of the government. The government then suppressed the Nazi party, eliminating the threat of extreme fascism in Austria until Nazi Germany annexed the country in 1938. The Vichy regime in France stood as one of the most radical quasi-fascist governments during World War II. The regime took its name from the town of Vichy, which was the seat of the pro-German government controlled by the Nazis from 1940 until 1945. The Vichy government shared many characteristics with Nazism, including an official youth organization, a brutal secret police, a reliance on the political rituals of a "civic religion," and vicious anti-Semitic policies that led to the killing of an estimated 65,000 French Jews. The Vichy regime was headed by Henri Philippe Pétain, a fatherly figure who ensured that genuine fascists gained little popular support for their radical plans to rejuvenate France. At the same time, fascists in other parts of the country supported the Nazi occupation, but the Germans never granted real power to these radical forces. Fascism had a mixed impact on Hungarian politics in the 1920s and 1930s. 
Some Hungarian leaders hoped that an alliance with Nazi Germany would bring the return of Transylvania, Croatia, and Slovakia—territories that Hungary had lost after World War I. At the same time, however, many Hungarians feared that Germany would try to regain its historical military dominance of the region. Right-wing nationalist groups who favored close ties to Germany flourished in the 1930s, and by 1939 the fascist Arrow Cross movement had become the strongest opposition party. Under the leadership of the radical army officer Ferenc Szálasi, the Arrow Cross sought to enlarge Hungary and hoped to position the country along with Italy and Germany as one of Europe's great powers. The Hungarian government led by Miklós Horthy de Nagybánya supported Hitler's overall regional ambitions and maintained close ties with the Nazi government, but the regime felt threatened by the Arrow Cross's challenge to its authority. Horthy clamped down on the Arrow Cross, even though his own government had fascist tendencies. During World War II Hungary sent about 200,000 soldiers to fight alongside the German army on the Russian front, and about two-thirds of the Hungarian force was killed. As the war turned against Germany, Hungary began to curtail its support for the Nazis, leading Hitler to send troops to occupy Hungary in 1944. The Nazis installed Szálasi as the head of a puppet government that cooperated with the SS when it began rounding up the country's Jewish population for deportation to Nazi extermination camps. By the end of World War II, fascist Hungarian forces and the Nazis had killed an estimated 550,000 Hungarian Jews. The Arrow Cross party collapsed after the war, and some of its leaders were tried as war criminals. To the east of Hungary, Romanian fascist forces nearly won control of the government. The Iron Guard, the most violent and anti-Semitic movement in the country, grew rapidly when the Romanian economy was battered by the global depression of the 1930s. As the Iron Guard became more powerful, Romanian ruler King Carol II withdrew his initial support for the movement and in 1938 ordered the execution of its top leaders. Romanian general Ion Antonescu, who was backed by the Iron Guard and by Nazi Germany, demanded that Carol II abdicate his rule. After the king left the country, Antonescu set up a quasi-fascist military dictatorship that included members of the Iron Guard. Intent upon creating their own new order, the Iron Guard assassinated political enemies and seized Jewish property. But the campaign led to economic and political chaos, which convinced Nazi officials that the Iron Guard should be eliminated. In 1941, amidst rumors that the Iron Guard was planning a coup, Antonescu crushed the movement with Nazi approval. Antonescu's army then cooperated with Nazi soldiers to exterminate Jews in the eastern portion of the country in 1941, and thousands more died when the fascist forces expelled them to a remote eastern region of the country. By the end of the war an estimated 364,000 Jews had died in the Romanian Holocaust as a result of this alliance of conservative and fascist forces. IX. Fascism after World War II After the world became fully aware of the enormous human suffering that occurred in Nazi concentration camps and extermination centers, many people came to see the defeat of fascism as a historic victory of humanity over barbarism.
World War II discredited fascism as an ideology, and after the war most of the world saw levels of sustained economic growth that had eluded most countries in the years after World War I. The economic and political turmoil that had spurred fascist movements in the years after World War I seemed to have disappeared. At the same time fascism could not take root in the conditions of tight social and political control in the USSR. Government controls also prevented fascism from gaining a foothold in Soviet client states in Eastern Europe. But fascism proved resilient, and new movements adapted the ideology to the changed political environment. Some support for a revival of fascism came from the movement's supporters who were disappointed by the defeat of the Axis powers. In addition, a new generation of ultranationalists and racists who grew up after 1945 hoped to rebuild the fascist movement and were determined to continue the struggle against what they saw as decadent liberalism. During the Cold War, in which the United States and the Soviet Union vied for global dominance, these new fascists focused their efforts on combatting Communism, the archenemy of their movement. Since 1945 fascism has spread to other countries, notably the United States. In several countries fascist groups have tried to build fascist movements based on historical developments such as fear of immigration, increased concern over ecological problems, and the Cold War. Along with the change in ideology, fascists have adopted new tools, such as rock music and the Internet, to spread their ideas. Some fascist groups have renounced the use of paramilitary groups in favor of a "cultural campaign" for Europeans to recover their "true identity." Fundamentally, contemporary fascism remains tightly linked to its origins in the early 20th century. Fascism still sets as its goal the overthrow of liberal democratic institutions, such as legislatures and courts, and keeps absolute political power as its ultimate aim. Fascism also retains its emphasis on violence, sometimes spurring horrific incidents. For instance, fascist beliefs motivated the 1995 bombing of the federal building in Oklahoma City, Oklahoma, that killed 168 people and wounded more than 500 others. In Germany, fascist groups in the early 1990s launched scores of firebomb attacks against the homes of immigrants, sometimes killing residents. In 1999, inspired by Nazi ideals of ethnic cleansing, fascist groups conducted a series of bomb attacks in London. The attacks were directed against ethnic minorities, gays, and lesbians. After World War II, only South Africa saw the emergence of a significant fascist movement that followed the prewar pattern. In South Africa the white supremacist paramilitary movement Afrikaner Weerstandsbeweging (Afrikaner Resistance Movement) organized radical white South Africans to create a new hard-line racial state. Most white South Africans supported the system of racial and economic exploitation of the black majority known as apartheid, but only a small fraction went so far as to support the Afrikaner Resistance Movement. The movement carried out repeated acts of violence and sabotage in the 1980s and especially the 1990s, but remained a minor political force. South Africa's political reforms in the 1990s led to the further reduction in support for the Afrikaner Resistance Movement. 
In other countries, widespread hostility to fascism made it impossible to create a mass movement coordinated by a paramilitary political party, as Nazi Germany's National Socialists or Romania's Iron Guard had been. As a result, fascists have relied on a number of new strategies to keep the prospect of national revolution open. X. New Fascist Strategies Fascist groups have developed many new strategies since World War II, but they have virtually no chance of winning control of the government in any country. Citizens in all countries hope for political stability and economic prosperity, and do not see fascism as a realistic way of achieving these goals. Even in countries where ethnic tensions are strong, such as in some areas that were once part of the USSR or under its control, there is no mass support for visions of a reborn national community based on self-sacrifice, suppression of individualism, and isolation from global culture and trade. A. Reliance on Dispersed Small Groups One of the most important new fascist strategies is to form small groups of ideologically committed people willing to dedicate their lives to the fascist cause. In some cases these minor groups turn to terrorism. Since 1945, fascists in Western Europe and the United States formed many thousands of small groups, with memberships ranging from a few hundred to less than ten. These small groups can be very fragile. Many of them are dissolved or change names after a few years, and members sometimes restlessly move through a number of groups or even belong to several at once. Although the groups often use bold slogans and claim that their forces will create a severe social crisis, in practice they remain unable to change the status quo. These groups remain ineffective because they fail to attract mass support, failing even to win significant support from their core potential membership of disaffected white males. Despite their weaknesses, these small fascist groups cannot be dismissed as insignificant. Some of them have been known to carry out acts of violence against individuals. In 1997 in Denmark, for example, a fascist group was accused of sending bombs through the mail to assassinate political opponents. In the United States, fascists have assaulted and killed African Americans, Jews, and other minorities, and set off scores of bombs. Small fascist groups also present a threat because the fliers they distribute and the marches and meetings they hold can create a local climate of racial intolerance. This encourages discrimination ranging from verbal abuse to murder. In addition, the small size and lack of centralized organization that weakens these groups also makes them nearly impossible for governments to control. If a government stops violence by arresting members of a few groups, the larger fascist network remains intact. This virtually guarantees that the ideology of fascism will survive even if government authorities clamp down on some organizations. B. Shift to Electoral Politics In addition to organizing through small groups, some fascists have tried to participate in mainstream party-based electoral politics. In contrast to the first fascist movements, these new fascist parties do not rely on a military branch to fight their opponents, and they tend to conceal their larger fascist agenda. To make fascist ideas seem acceptable, some parties water down their revolutionary agenda in order to win voter support even from people who do not want radical change and a fascist regime. 
Instead of emphasizing their long-term objectives for change, the fascist parties focus on issues such as the threat of Communism, crime, global economic competition, the loss of cultural identity allegedly resulting from mass immigration, and the need for a strong, inspiring leader to give the nation a direction. Italy, for example, saw this type of quasi-democratic fascism with the 1946 formation of the Movimento Sociale Italiano (MSI), which hoped to keep fascist ideals alive. In the mid-1990s the MSI managed to widen its support significantly when it renounced the goals of historic Italian Fascism and changed its name to the National Alliance (Alleanza Nazionale, or AN). Although the AN presents itself as comparable to other right-wing parties, its programs still retain significant elements of their fascist origins. During the 1990s several other extreme-right parties gained significant mass support, including the Republicans (Die Republikaner) in Germany, the National Front (Front National, or FN) in France, the Freedom Movement (Die Freiheitlichen) in Austria, the Flemish Bloc (Vlaams Blok) in Belgium, and the Liberal Democratic Party in Russia. All of these groups have some fascistic elements, but reject the revolutionary radicalism of true fascism. C. Emphasis on Cultural Change Since World War II, some fascist movements have also shifted their goal from the political overthrow of democratic governments to a general cultural transformation. These movements hope that a cultural transformation will create the necessary conditions to achieve a radical political change. This form of fascism played an important role in the formative phase of the New Right. In the 1960s and 1970s New Right intellectuals criticized both liberal democratic politics and communism, arguing that societies should be organized around ethnic identity. Unlike earlier fascist movements, the New Right agenda did not require paramilitary organizations, uniforms, or a single unifying leader. As a result of their emphasis on culture and ethnicity, the New Right argues that it is important to maintain a diversity of cultures around the world. But since it favors the preservation of ethnic cultures, the New Right strongly opposes the mixing of cultures that is increasingly common in the United States, Canada, and Europe. As a result, New Right thinkers attack the rise of global culture, the tendencies toward closer ties between countries, and all other trends that encourage the loss of racial identity. These thinkers argue that people who oppose racism in fact want to allow racial identity to be destroyed and are therefore promoting racial hatred. Known as differentialists, these fascists proclaim their love of all cultures, but in practice attack the multiculturalism and tolerance that lies at the heart of liberal democracy. Some political scientists and historians therefore argue that differentialism is really just a thinly disguised form of racism and fascism. Since the 1980s some leading New Right intellectuals have moved away from the fascist vision of a new historical era. However, the ideas that form the basis of the New Right movement continue to exert considerable influence on fascist activists who wish to disguise their true agenda. One example is "Third Positionists," who claim to reject capitalism and communism in their search for a "third way" based on revolutionary nationalism. D. 
Attempts to Build a Global Movement Fascists since World War II have also reshaped fascist ideology by attempting to create an international fascist movement. New Rightists and Third Positionists in Europe condemn cultural and ethnic mixing, and strive to unite fascist forces in Britain, Denmark, France, Italy, and other countries behind a shared vision of a reborn Europe. These fascists thus break with the narrow nationalism that characterized the first fascist movements. At the same time, neo-Nazi groups worldwide have embraced the myth of Aryan superiority, which German fascists used as the basis for war against the rest of humanity. The neo-Nazis hope to build a global movement, and rely on this central element of racism to create a doctrine of white supremacy for all of Europe, Canada, the United States, and other places with substantial populations of white people. The new international character of fascism can also be seen in the pseudo-scholarly industry that publishes propaganda in an academic style to play down, trivialize, or excuse the horrors of Nazism. This approach is sometimes called historical revisionism, although it is separate from a much more general and mainstream approach to history known as revisionism. Some of these self-styled scholars manufacture or distort documentary evidence to “prove” that the Nazis did not create extermination camps that killed millions of Jews during the Holocaust. All professional historians completely reject any attempt to show that the Holocaust never happened, but there continues to be a loosely knit international community of fascist writers who make such claims. The Internet has made it much easier for these writers to spread their ideas and propaganda in a way that is practically impossible to censor. While fascism has no prospect of returning to its former influence, it is set to be a continuous source of ideological and physical attacks on liberal society for the foreseeable future, and a permanent component of many democracies.
In its most general sense, gravitational lensing is a collective term for all effects of a gravitational field on the propagation of electromagnetic radiation, with the latter usually described in terms of rays. According to general relativity, the gravitational field is coded in a metric of Lorentzian signature on the 4-dimensional spacetime manifold, and the light rays are the lightlike geodesics of this spacetime metric. From a mathematical point of view, the theory of gravitational lensing is thus the theory of lightlike geodesics in a 4-dimensional manifold with a Lorentzian metric. The first observation of a ‘gravitational lensing’ effect was made when the deflection of starlight by our Sun was verified during a Solar eclipse in 1919. Today, the list of observed phenomena includes the following:
Multiple quasar images. The gravitational field of a galaxy (or a cluster of galaxies) bends the light from a distant quasar in such a way that the observer on Earth sees two or more images of the quasar.
Rings. An extended light source, like a galaxy or a lobe of a galaxy, is distorted into a closed or almost closed ring by the gravitational field of an intervening galaxy. This phenomenon occurs in situations where the gravitational field is almost rotationally symmetric, with observer and light source close to the axis of symmetry. It is observed primarily, but not exclusively, in the radio range.
Arcs. Distant galaxies are distorted into arcs by the gravitational field of an intervening cluster of galaxies. Here the situation is less symmetric than in the case of rings. The effect is observed in the optical range and may produce “giant luminous arcs”, typically of a characteristic blue color.
Microlensing. When a light source passes behind a compact mass, the focusing effect on the light leads to a temporal change in brightness (energy flux). This microlensing effect has been routinely observed since the early 1990s by monitoring a large number of stars in the bulge of our Galaxy, in the Magellanic Clouds and in the Andromeda galaxy. Microlensing has also been observed on quasars.
Image distortion by weak lensing. In cases where the distortion effect on galaxies is too weak for producing rings or arcs, it can be verified with statistical methods. By evaluating the shape of a large number of background galaxies in the field of a galaxy cluster, one can determine the surface mass density of the cluster. By evaluating fields without a foreground cluster one gets information about the large-scale mass distribution.
Observational aspects of gravitational lensing and methods of how to use lensing as a tool in astrophysics are the subject of the Living Review by Wambsganss. There the reader may also find some notes on the history of lensing. The present review is meant as complementary to the review by Wambsganss. While all the theoretical methods reviewed there rely on quasi-Newtonian approximations, the present review is devoted to the theory of gravitational lensing from a spacetime perspective, without such approximations. Here the terminology is as follows: “Lensing from a spacetime perspective” means that light propagation is described in terms of lightlike geodesics of a general-relativistic spacetime metric, without further approximations. (The term “non-perturbative lensing” is sometimes used in the same sense.) “Quasi-Newtonian approximation” means that the general-relativistic spacetime formalism is reduced by approximative assumptions to essentially Newtonian terms (Newtonian space, Newtonian time, Newtonian gravitational field).
The quasi-Newtonian approximation formalism of lensing comes in several variants, and the relation to the exact formalism is not always evident because sometimes plausibility and ad-hoc assumptions are implicitly made. A common feature of all variants is that they are “weak-field approximations” in the sense that the spacetime metric is decomposed into a background (“spacetime without the lens”) and a small perturbation of this background (“gravitational field of the lens”). For the background one usually chooses either Minkowski spacetime (isolated lens) or a spatially flat Robertson–Walker spacetime (lens embedded in a cosmological model). The background then defines a Euclidean 3-space, similar to Newtonian space, and the gravitational field of the lens is similar to a Newtonian gravitational field on this Euclidean 3-space. Treating the lens as a small perturbation of the background means that the gravitational field of the lens is weak and causes only a small deviation of the light rays from the straight lines in Euclidean 3-space. In its most traditional version, the formalism assumes in addition that the lens is “thin”, and that the lens and the light sources are at rest in Euclidean 3-space, but there are also variants for “thick” and moving lenses. Also, modifications for a spatially curved Robertson–Walker background exist, but in all variants a non-trivial topological or causal structure of spacetime is (explicitly or implicitly) excluded. At the center of the quasi-Newtonian formalism is a “lens equation” or “lens map”, which relates the position of a “lensed image” to the position of the corresponding “unlensed image”. In the most traditional version one considers a thin lens at rest, modeled by a Newtonian gravitational potential given on a plane in Euclidean 3-space (“lens plane”). The light rays are taken to be straight lines in Euclidean 3-space except for a sharp bend at the lens plane. For a fixed observer and light sources distributed on a plane parallel to the lens plane (“source plane”), the lens map is then a map from the lens plane to the source plane. In this way, the geometric spacetime setting of general relativity is completely covered behind a curtain of approximations, and one is left simply with a map from a plane to a plane. Details of the quasi-Newtonian approximation formalism can be found not only in the above-mentioned Living Review , but also in the monographs of Schneider, Ehlers, and Falco and Petters, Levine, and Wambsganss . The quasi-Newtonian approximation formalism has proven very successful for using gravitational lensing as a tool in astrophysics. This is impressively demonstrated by the work reviewed in . On the other hand, studying lensing from a spacetime perspective is of relevance under three aspects: The theoretical foundations of lensing can be properly formulated only in terms of the full formalism of general relativity. Working out examples with strong curvature and with non-trivial causal or topological structure demonstrates that, in principle, lensing situations can be much more complicated than suggested by the quasi-Newtonian formalism. General theorems on lensing (e.g., criteria for multiple imaging, characterizations of caustics, etc.) should be formulated within the exact spacetime setting of general relativity, if possible, to make sure that they are not just an artifact of approximative assumptions. 
For those results which do not hold in arbitrary spacetimes, one should try to find the precise conditions on the spacetime under which they are true. There are some situations of astrophysical interest to which the quasi-Newtonian formalism does not apply. For instance, near a black hole light rays are so strongly bent that, in principle, they can make arbitrarily many turns around the hole. Clearly, in this situation it is impossible to use the quasi-Newtonian formalism, which would treat these light rays as small perturbations of straight lines. The present review tries to elucidate all three aspects. More precisely, the following subjects will be covered: This introduction ends with some notes on subjects not covered in this review:
In the electromagnetic theory, light is described by wavelike solutions to Maxwell's equations. The ray-optical treatment used throughout this review is the standard high-frequency approximation (geometric optics approximation) of the electromagnetic theory for light propagation in vacuum on a general-relativistic spacetime (see, e.g., , § 22.5 or , Section 3.2). (Other notions of vacuum light rays, based on a different approximation procedure, have been occasionally suggested, but will not be considered here. Also, results specific to spacetime dimensions other than four or to gravitational theories other than Einstein's are not covered.) For most applications to lensing the ray-optical treatment is valid and appropriate. An exception, where wave-optical corrections are necessary, is the calculation of the brightness of images if a light source comes very close to the caustic of the observer's light cone (see Section 2.6).
Light propagation in matter. If light is directly influenced by a medium, the light rays are no longer the lightlike geodesics of the spacetime metric. For an isotropic non-dispersive medium, they are the lightlike geodesics of another metric which is again of Lorentzian signature. (This "optical metric" was introduced by Gordon. For a rigorous derivation, starting from Maxwell's equations in an isotropic non-dispersive medium, see Ehlers.) Hence, the formalism used throughout this review still applies to this situation after an appropriate re-interpretation of the metric. In anisotropic or dispersive media, however, the light rays are not the lightlike geodesics of a Lorentzian metric. There are some lensing situations where the influence of matter has to be taken into account. For instance, for the deflection of radio signals by our Sun the influence of the plasma in the Solar corona (to be treated as a dispersive medium) is very well measurable. However, such situations will not be considered in this review. For light propagation in media on a general-relativistic spacetime, see and references cited therein.
As an alternative to the (geometric optics approximation of) electromagnetic theory, light can be treated as a photon gas, using the formalism of kinetic theory. This has relevance, e.g., for the cosmic background radiation. For basic notions of general-relativistic kinetic theory see, e.g., . Apart from some occasional remarks, kinetic theory will not be considered in this review.
Derivation of the quasi-Newtonian formalism. It is not satisfactory if the quasi-Newtonian formalism of lensing is set up with the help of ad-hoc assumptions, even if the latter look plausible.
From a methodological point of view, it is more desirable to start from the exact spacetime setting of general relativity and to derive the quasi-Newtonian lens equation by a well-defined approximation procedure. In comparison to earlier such derivations [298, 293, 302] more recent effort has led to considerable improvements. For lenses embedded in a cosmological model, see Pyne and Birkinshaw who consider lenses that need not be thin and may be moving on a Robertson–Walker background (with positive, negative, or zero spatial curvature). For the non-cosmological situation, a Lorentz covariant approximation formalism was derived by Kopeikin and Schäfer . Here Minkowski spacetime is taken as the background, and again the lenses need not be thin and may be moving.
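The quasi-Newtonian lens map discussed above can be made concrete with the simplest textbook case, a single point mass in the thin-lens, small-angle approximation. The short Python sketch below is purely illustrative and is not taken from the review; it assumes the standard quasi-Newtonian lens equation for a point-mass lens, beta = theta - theta_E^2/theta, where theta_E is the Einstein angle.

```python
import math

def lens_map(theta, theta_E):
    # Point-mass thin-lens map: image angle theta -> unlensed source angle beta.
    # Angles are in the same small-angle units (e.g., arcseconds).
    return theta - theta_E**2 / theta

def image_positions(beta, theta_E):
    # Invert the point-mass lens map: the two image angles for a source at beta.
    disc = math.sqrt(beta**2 + 4.0 * theta_E**2)
    return (beta + disc) / 2.0, (beta - disc) / 2.0

# Example: a source offset by 0.3 Einstein angles produces two images,
# one outside the Einstein ring and one inside it on the opposite side.
print(image_positions(beta=0.3, theta_E=1.0))   # approximately (1.16, -0.86)
```

For a source exactly behind the lens (beta = 0) the two images merge into an Einstein ring. None of this captures the strong-deflection behavior near a black hole described above, which is precisely where the exact spacetime formalism is needed.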
http://relativity.livingreviews.org/Articles/lrr-2004-9/articlese1.html
Structure and Physiology Bone Composition and Structure. Our skeleton may seem an inert structure, but it is an active organ, made up of tissue and cells in a continual state of activity throughout a lifetime. Bone tissue is comprised of a mixture of minerals deposited around a protein matrix, which together contribute to the strength and flexibility of our skeletons. Sixty-five percent of bone tissue is inorganic mineral, which provides the hardness of bone. The major minerals found in bone are calcium and phosphorus in the form of an insoluble salt called hydroxyapatite (HA) [chemical formula: Ca10(PO4)6(OH)2]. HA crystals lie adjacent and bound to the organic protein matrix. Magnesium, sodium, potassium, and citrate ions are also present, conjugated to HA crystals rather than forming distinct crystals of their own (1). The remaining 35% of bone tissue is an organic protein matrix, 90-95% of which is type I collagen. Collagen fibers twist around each other and provide the interior scaffolding upon which bone minerals are deposited (1). Types of Bone. There are two types of bone tissue: cortical (compact) bone and trabecular (spongy or cancellous) bone (2). Eighty percent of the skeleton is cortical bone, which forms the outer surface of all bones. The small bones of the wrists, hands, and feet are entirely cortical bone. Cortical bone looks solid but actually has microscopic openings that allow for the passage of blood vessels and nerves. The other 20% of the skeleton is trabecular bone, found within the ends of long bones and inside flat bones (skull, pelvis, sternum, ribs, and scapula) and spinal vertebrae. Both cortical and trabecular bone have the same mineral and matrix components but differ in their porosity and microstructure: trabecular bone is much less dense, has a greater surface area, and undergoes more rapid rates of turnover (see Bone Remodeling/Turnover below). There are three phases of bone development: growth, modeling (or consolidation), and remodeling (see figure). During the growth phase, the size of our bones increases. Bone growth is rapid from birth to age two, continues in spurts throughout childhood and adolescence, and eventually ceases in the late teens and early twenties. Although bones stop growing in length by about 20 years of age, they change shape and thickness and continue accruing mass when stressed during the modeling phase. For example, weight training and body weight exert mechanical stresses that influence the shape of bones. Thus, acquisition of bone mass occurs during both the growth and modeling/consolidation phases of bone development. The remodeling phase consists of a constant process of bone resorption (breakdown) and formation that predominates during adulthood and continues throughout life. Beginning around age 34, the rate of bone resorption exceeds that of bone formation, leading to an inevitable loss of bone mass with age (3). Peak Bone Mass. Bone mass refers to the quantity of bone present, both matrix and mineral. Bone mass increases through adolescence and peaks in the late teen years and into our twenties. The maximum amount of bone acquired is known as peak bone mass (PBM) (see figure) (4, 5). Achieving one’s genetically determined PBM is influenced by several environmental factors, discussed more extensively below (see Determinants of Adult Bone Health below).
Technically, we cannot detect the matrix component of bone, so bone mass cannot be measured directly. We can, however, detect bone mineral by using dual X-ray absorptiometry (DEXA). In this technique, the absorption of photons from an X-ray is a function of the amount of mineral present in the path of the beam. Therefore, bone mineral density (BMD) measures the quantity of mineral present in a given section of bone and is used as a proxy for bone mass (6). Although BMD is a convenient clinical marker to assess bone mass and is associated with osteoporotic fracture risk, it is not the sole determinant of fracture risk. Bone quality (architecture, strength) and propensity to fall (balance, mobility) also factor into risk assessment and should be considered when deciding upon an intervention strategy (see Osteoporosis). Bone Remodeling/Turnover. Bone tissue, both mineral and organic matrix, is continually being broken down and rebuilt in a process known as remodeling or turnover. During remodeling, bone resorption and formation are always “coupled”—osteoclasts first dissolve a section of bone and osteoblasts then invade the newly created space and secrete bone matrix (6). The goal of remodeling is to repair and maintain a healthy skeleton, adapt bone structure to new loads, and regulate calcium concentration in extracellular fluids (7). The bone remodeling cycle, which refers to the time required to complete the entire series of cellular events from resorption to final mineralization, lasts approximately 40 weeks (8, 9). Additionally, remodeling units cycle at staggered stages. Thus, any intervention that influences bone remodeling will affect newly initiated remodeling cycles at first, and there is a lag time, known as the “bone remodeling transient,” until all remodeling cycles are synchronized to the treatment exposure (8). Considering the bone remodeling transient and the length of time required to complete a remodeling cycle, a minimum of two years is needed to realize steady-state treatment effects on BMD (10). The rates of bone tissue turnover differ depending on the type of bone: trabecular bone has a faster rate of turnover than cortical bone. Osteoporotic fracture manifests in trabecular bone, primarily as fractures of the hip and spine, and many osteoporotic therapies target remodeling activities in order to alter bone mass (11). Bone Cells. The cells responsible for bone formation and resorption are osteoblasts and osteoclasts, respectively. Osteoblasts prompt the formation of new bone by secreting the collagen-containing component of bone that is subsequently mineralized (1). The enzyme alkaline phosphatase is secreted by osteoblasts while they are actively depositing bone matrix; alkaline phosphatase travels to the bloodstream and is therefore used as a clinical marker of bone formation rate. Osteoblasts have receptors for vitamin D, estrogen, and parathyroid hormone (PTH). As a result, these hormones have potent effects on bone health through their regulation of osteoblastic activity. Once they have finished secreting matrix, osteoblasts either die, become lining cells, or transform into osteocytes, a type of bone cell embedded deep within the organic matrix (9, 12). Osteocytes make up 90-95% of all bone cells and are very long-lived (up to decades) (12). They secrete soluble factors that influence osteoclastic and osteoblastic activity and play a central role in bone remodeling in response to mechanical stress (9, 12, 13). 
Osteoclasts erode the surface of bones by secreting enzymes and acids that dissolve bone. More specifically, enzymes degrade the organic matrix and acids solubilize bone mineral salts (1). Osteoclasts work in small, concentrated masses and take approximately three weeks to dissolve bone, at which point they die and osteoblasts invade the space to form new bone tissue. In this way, bone resorption and formation are always “coupled.” End products of bone matrix breakdown (hydroxyproline and amino-terminal collagen peptides) are excreted in the urine and can be used as convenient biochemical measures of bone resorption rates. Maximum Attainment of Peak Bone Mass. The majority of bone mass is acquired during the growth phase of bone development (see figure) (4, 6). Attaining one’s peak bone mass (PBM) (i.e., the maximum amount of bone) is the product of genetic, lifestyle, and environmental factors (5, 14). Sixty to 80% of PBM is determined by genetics, while the remaining 20-40% is influenced by lifestyle factors, primarily nutrition and physical activity (15). In other words, diet and exercise are known to contribute to bone mass acquisition but can only augment PBM within an individual’s genetic potential. Acquisition of bone mass during the growth phase is sometimes likened to a “bone bank account” (4, 5). As such, maximizing PBM is important when we are young in order to protect against the consequences of age-related bone loss. However, improvements in bone mineral density (BMD) generally do not persist once a supplement or exercise intervention is terminated (16, 17). Thus, attention to diet and physical activity during all phases of bone development is beneficial for bone mass accrual and skeletal health. Rate of Bone Loss with Aging. Bone remodeling is a lifelong process, with resorption and formation linked in space and time. Yet the scales tip such that bone loss outpaces bone gain as we age. Beginning around age 34, the rate of bone resorption exceeds the rate of bone formation, leading to an inevitable loss of bone mass with age (see figure) (18). Age-related estrogen reduction is associated with increased bone remodeling activity—both resorption and formation—in both sexes (13). However, the altered rate of bone formation does not match that of resorption; thus, estrogen deficiency contributes to loss of bone mass over time (9, 13). The first three to five years following the onset of menopause ('early menopause') are associated with an accelerated, self-limiting loss of bone mass (3, 18, 19). Subsequent postmenopausal bone loss occurs at a linear rate as we age (3). As we continue to lose bone, we near the threshold for osteoporosis and are at high-risk for fractures of the hip and spine. Osteomalacia. Osteomalacia, also known as “adult rickets,” is a failure to mineralize bone. Stereotypically, osteomalacia results from vitamin D deficiency (serum 25-hydroxyvitamin D levels <20 nmol/L or <8 ng/mL) and the associated inability to absorb dietary calcium and phosphorus across the small intestine. Plasma calcium concentration is tightly controlled, and the body has a number of mechanisms in place to adjust to fluctuating blood calcium levels. In response to low blood calcium, PTH levels increase and vitamin D is activated. The increase in PTH stimulates bone remodeling activity—both resorption and formation, which are always coupled. 
Thus, osteoclasts release calcium and phosphorus from bone in order to restore blood calcium levels, and osteoblasts mobilize to replace the resorbed bone. During osteomalacia, however, the deficiency of calcium and phosphorus results in incomplete mineralization of the newly secreted bone matrix. In severe cases, newly formed, unmineralized bone loses its stiffness and can become deformed under the strain of body weight. Osteopenia. Simply put, osteopenia and osteoporosis are varying degrees of low bone mass. Whereas osteomalacia is characterized by low-mineral and high-matrix content, osteopenia and osteoporosis result from low levels of both. As defined by the World Health Organization (WHO), osteopenia precedes osteoporosis and occurs when one’s bone mineral density (BMD) is between 1 and 2.5 standard deviations (SD) below that of the average young adult (30 years of age) woman (see figure). Osteoporosis. Osteoporosis is a condition of increased bone fragility and susceptibility to fracture due to loss of bone mass. Clinically, osteoporosis is defined as a BMD that is greater than 2.5 SD below the mean for young adult women (see figure). It has been estimated that fracture risk in adults is approximately doubled for each SD reduction in BMD (6). Common sites of osteoporotic fracture are the hip, femoral neck, and vertebrae of the spinal column—skeletal sites rich in trabecular bone. BMD, the quantity of mineral present per given area/volume of bone, is only a surrogate for bone strength. Although it is a convenient biomarker used in clinical and research settings to predict fracture risk, the likelihood of experiencing an osteoporotic fracture cannot be predicted solely by BMD (6). The risk of osteoporotic fracture is influenced by additional factors, including bone quality (microarchitecture, geometry) and propensity to fall (balance, mobility, muscular strength). Other modifiable and non-modifiable factors also play into osteoporotic fracture risk, and they are generally additive (21). The WHO Fracture Risk Assessment Tool was designed to account for some of these additional risk factors. Once you have your BMD measurement, visit the WHO Web site to calculate your 10-year probability of fracture, taking some of these additional risk factors into account. Paying attention to modifiable risk factors for osteoporosis is an important component of fracture prevention strategies. For more details about individual dietary factors and osteoporosis, see the Micronutrient Information Center's Disease Index and the LPI Research Newsletter article by Dr. Jane Higdon. Micronutrient supply plays a prominent role in bone health. Several minerals have direct roles in hydroxyapatite (HA) crystal formation and structure; other nutrients have indirect roles as cofactors or as regulators of cellular activity (22, 23). Table 1 below lists the dietary reference intakes (DRIs) for micronutrients important to bone health. The average dietary intake of Americans (aged 2 years and older) is also provided for comparative purposes (24).
Table 1. DRIs for micronutrients important to bone health: the RDA or AI*, the tolerable upper intake level (UL) for adults ≥19 years, and the mean intake of Americans aged ≥2 years from all food sources (24), for calcium, phosphorus, fluoride, magnesium, sodium, vitamin D, vitamin A, vitamin K, vitamin C, vitamin B6, folate, and vitamin B12. Selected values:
Calcium: RDA 1,000 mg/d (men 19-70 y; women 19-50 y) and 1,200 mg/d (men >70 y; women >50 y); UL 2,500 mg/d (19-50 y) and 2,000 mg/d (>50 y)
Phosphorus: UL 4 g/d (19-70 y) and 3 g/d (>70 y)
Fluoride: AI* 4 mg/d for men and 3 mg/d for women
Magnesium: RDA 400 mg/d (men 19-30 y), 420 mg/d (men >31 y), 310 mg/d (women 19-30 y), and 320 mg/d (women >31 y)
Sodium: AI* 1.5 g/d (19-50 y), 1.3 g/d (51-70 y), and 1.2 g/d (>70 y)
Vitamin D: RDA 15 mcg (600 IU)/d (19-70 y) and 20 mcg (800 IU)/d (>70 y)
Vitamin A: UL 3,000 mcg (10,000 IU)/d (b)
Vitamin K: UL not determinable (ND)
Vitamin B6: RDA 1.3 mg/d (19-50 y) and 1.7 mg/d (>50 y) for men; 1.3 mg/d (19-50 y) and 1.5 mg/d (>50 y) for women
Abbreviations: RDA, recommended dietary allowance; AI, adequate intake; UL, tolerable upper intake level; y, years; d, day; g, gram; mg, milligram; mcg, microgram; IU, international units; ND, not determinable. (a) Applies only to the supplemental form. (b) Applies only to preformed retinol. (c) Applies to the synthetic form in fortified foods and supplements.
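The WHO definitions of osteopenia and osteoporosis given earlier are usually expressed as T-scores: the number of standard deviations by which a measured BMD falls below the young-adult reference mean. As a simple illustration of those cutoffs, the Python sketch below classifies a measurement; the reference mean and SD are hypothetical placeholder numbers, not values from this article.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    # T-score: how many SDs the measured BMD lies above or below the young-adult mean.
    return (bmd - young_adult_mean) / young_adult_sd

def who_category(t):
    # WHO cutoffs described above: osteopenia between 1 and 2.5 SD below the
    # young-adult mean, osteoporosis more than 2.5 SD below it.
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Hypothetical example values in g/cm2, for illustration only.
t = t_score(bmd=0.70, young_adult_mean=0.94, young_adult_sd=0.12)   # t = -2.0
print(who_category(t))   # "osteopenia"
print(2 ** (-t))         # ~4: fracture risk roughly doubles per SD reduction (6)
```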
Calcium. Calcium is the most common mineral in the human body. About 99% of the calcium in the body is found in bones and teeth, while the other 1% is found in blood and soft tissues. Calcium levels in the blood must be maintained within a very narrow concentration range for normal physiological functioning, namely muscle contraction and nerve impulse conduction. These functions are so vital to survival that the body will demineralize bone to maintain normal blood calcium levels when calcium intake is inadequate. In response to low blood calcium, parathyroid hormone (PTH) is secreted. PTH targets three main axes in order to restore blood calcium concentration: (1) vitamin D is activated (see the section on vitamin D below), (2) filtered calcium is retained by the kidneys, and (3) bone resorption is induced (1). It is critical to obtain enough dietary calcium in order to balance the calcium taken from our bones in response to fluctuating blood calcium concentrations. Several randomized, placebo-controlled trials (RCTs) have tested whether calcium supplementation reduces age-related bone loss and fracture incidence in postmenopausal women. In the Women’s Health Initiative (WHI), 36,282 healthy, postmenopausal women (aged 50 to 79 years; mean age 62 years) were randomly assigned to receive placebo or 1,000 mg calcium carbonate and 400 IU vitamin D3 daily (25). After a mean of seven years of follow-up, the supplement group had significantly less bone loss at the hip. A 12% reduction in the incidence of hip fracture in the supplement group did not reach statistical significance, possibly due to the low rates of absolute hip fracture in the 50 to 60 year age range. The main adverse event reported in the supplement group was an increased proportion of women with kidney stones. Another RCT assessed the effect of 1,000 mg of calcium citrate versus placebo on bone density and fracture incidence in 1,472 healthy postmenopausal women (aged 74±4 years) (26). Calcium had a significant beneficial effect on bone mineral density (BMD) but an uncertain effect on fracture rates. The high incidence of constipation with calcium supplementation may have contributed to poor compliance, which limits data interpretation and clinical efficacy.
Hip fracture was significantly reduced in an RCT involving 1,765 healthy, elderly women living in nursing homes (mean age 86±6 years) given 1,200 mg calcium (as tricalcium phosphate) and 800 IU vitamin D3 daily for 18 months (27). The number of hip fractures was 43% lower and the number of nonvertebral fractures was 32% lower in women treated with calcium and vitamin D3 supplements compared to placebo. While there is a clear treatment benefit in this trial, the institutionalized elderly population is known to be at high risk for vitamin deficiencies and fracture rates and may not be representative of the general population. Overall, the majority of calcium supplementation trials (and meta-analyses thereof) show a positive effect on BMD, although the size of the effect is modest (3, 7, 28, 29). Furthermore, the response to calcium supplementation may depend on habitual calcium intake and age: those with chronic low intakes will benefit most from supplementation (7, 29), and women within the first five years after menopause are somewhat resistant to calcium supplementation (7, 10). The current recommendations in the U.S. for calcium are based on a combination of balance data and clinical trial evidence, and they appear to be set at levels that support bone health (see table 1 above) (30, 31). Aside from the importance of meeting the RDA, calcium is a critical adjuvant for therapeutic regimens used to treat osteoporosis (7, 11). The therapy (e.g., estrogen replacement, pharmaceutical agent, and physical activity) provides a bone-building stimulus that must be matched by raw materials (nutrients) obtained from the diet. Thus, calcium supplements are a necessary component of any osteoporosis treatment strategy. A recent meta-analysis (32) and prospective study (33) have raised concern over the safety of calcium supplements, either alone or with vitamin D, on the risk of cardiovascular events. Although these analyses raise an issue that needs further attention, there is insufficient evidence available at this time to definitively refute or support the claims that calcium supplementation increases the risk of cardiovascular disease. For more extensive discussion of this issue, visit the LPI Spring/Summer 2012 Research Newsletter or the LPI News Article. Phosphorus. More than half the mass of bone mineral is comprised of phosphorus, which combines with calcium to form HA crystals. In addition to this structural role, osteoblastic activity relies heavily on local phosphate concentrations in the bone matrix (11, 34). Given its prominent functions in bone, phosphorus deficiency could contribute to impaired bone mineralization (34). However, in healthy individuals, phosphorus deficiency is uncommon, and there is little evidence that phosphorus deficiency affects the incidence of osteoporosis (23). Excess phosphorus intake has negligible effects on calcium excretion and has not been linked to a negative impact on bone (35). Fluoride. Fluoride has a high affinity for calcium, and 99% of our body fluoride is stored in calcified tissues, i.e., teeth and bones (36). In our teeth, very dense HA crystals are embedded in collagen fibers. The presence of fluoride in the HA crystals (fluoroapatite) enhances resistance to destruction by plaque bacteria (1, 36), and fluoride has proven efficacy in the prevention of dental caries (37). While fluoride is known to stimulate bone formation through direct effects on osteoblasts (38), high-dose fluoride supplementation may not benefit BMD or reduce fracture rates (39, 40).
The presence of fluoride in HA increases the crystal size and contributes to bone fragility; thus, uncertainties remain about the quality of newly formed bone tissue with fluoride supplementation (9, 23). Chronic intake of fluoridated water, on the other hand, may benefit bone health (9, 36). Two large prospective studies comparing fracture rates between fluoridated and non-fluoridated communities demonstrate that long-term, continuous exposure to fluoridated water (1 mg/L) is safe and associated with reduced incidence of fracture in elderly individuals (41, 42). Magnesium. Magnesium (Mg) is a major mineral with essential structural and functional roles in the body. It is a critical component of our skeleton, with 50-60% of total body Mg found in bone where it colocalizes with HA, influencing the size and strength of HA crystals (23). Mg also serves a regulatory role in mineral metabolism. Mg deficiency is associated with impaired secretion of PTH and end-organ resistance to the actions of PTH and 1,25-dihydroxyvitamin D3 (43). Low dietary intake of Mg is common in the U.S. population (24), and it has therefore been suggested that Mg deficiency could impair bone mineralization and represent a risk factor for osteoporosis. However, observational studies of the association between Mg intake and bone mass or bone loss have produced mixed results, with most showing no association (34). The effect of Mg supplementation on trabecular bone density in postmenopausal women was assessed in one controlled intervention trial (44). Thirty-one postmenopausal women (mean age, 57.6±10.6 years) received two to six tablets of 125 mg each magnesium hydroxide (depending on individual tolerance levels) for six months, followed by two tablets daily for another 18 months. Twenty-three age-matched osteoporotic women who refused treatment served as controls. After one year of Mg supplementation, there was either an increase or no change in bone density in 27 out of 31 patients; bone density was significantly decreased in controls after one year. Although encouraging, this is a very small study, and only ten Mg-supplemented patients persisted into the second year. Sodium. Sodium is thought to influence skeletal health through its impact on urinary calcium excretion (34). High-sodium intake increases calcium excretion by the kidneys. If the urinary calcium loss is not compensated for by increased intestinal absorption from dietary sources, bone calcium will be mobilized and could potentially affect skeletal health. However, even with the typical high sodium intakes of Americans (2,500 mg or more per day), the body apparently increases calcium absorption efficiency to account for renal losses, and a direct connection between sodium intake and abnormal bone status in humans has not been reported (34, 45). Nonetheless, compensatory mechanisms in calcium balance may diminish with age (11), and keeping sodium within recommended levels is associated with numerous health benefits. Vitamin A. Both vitamin A deficiency and excess can negatively affect skeletal health. Vitamin A deficiency is a major public health concern worldwide, especially in developing nations. In growing animals, vitamin A deficiency causes bone abnormalities due to impaired osteoclastic and osteoblastic activity (46). These abnormalities can be reversed upon vitamin A repletion (47). In animals, vitamin A toxicity (hypervitaminosis A) is associated with poor bone growth, loss of bone mineral content, and increased rate of fractures (22). 
Case studies in humans have indicated that extremely high vitamin A intakes (100,000 IU/day or more, several-fold above the tolerable upper intake level [UL]; see table 1 above) are associated with hypercalcemia and bone resorption (48-50). The question remains, however, if habitual, excessive vitamin A intake has a negative effect on bone (22, 51, 52). There is some observational evidence that high vitamin A intake (generally in supplement users and at intake levels >1,500 mcg [5,000 IU]/day) is associated with an increased risk of osteoporosis and hip fracture (53-55). However, methods to assess vitamin A intake and status are notoriously unreliable (56), and the observational studies evaluating the association between vitamin A status or vitamin A intake with bone health report inconsistent results (57, 58). At this time, striving for the recommended dietary allowance (RDA) for vitamin A (see table 1 above) is an important and safe goal for optimizing skeletal health. Vitamin D. The primary function of vitamin D is to maintain calcium and phosphorus absorption in order to supply the raw materials of bone mineralization (9, 59). In response to low blood calcium, vitamin D is activated and promotes the active absorption of calcium across the intestinal cell (59). In conjunction with PTH, activated 1,25-dihydroxyvitamin D3 retains filtered calcium by the kidneys. By increasing calcium absorption and retention, 1,25-dihydroxyvitamin D3 helps to offset calcium lost from the skeleton. Low circulating 25-hydroxyvitamin D3 (the storage form of vitamin D3) triggers a compensatory increase in PTH, a signal to resorb bone. The Institute of Medicine determined that maintaining a serum 25-hydroxyvitamin D3 level of 50 nmol/L (20 ng/ml) benefits bone health across all age groups (31). However, debate remains over the level of serum 25-hydroxyvitamin D3 that corresponds to optimum bone health. Based on a recent review of clinical trial data, the authors concluded that serum 25-hydroxyvitamin D3 should be maintained at 75-110 nmol/L (30-44 ng/ml) for optimal protection against fracture and falls with minimal risk of hypercalcemia (60). The level of intake associated with this higher serum 25-hydroxyvitamin D3 range is 1,800 to 4,000 IU per day, significantly higher than the current RDA (see table 1 above) (60). As mentioned in the Calcium section above, several randomized controlled trials (and meta-analyses) have shown that combined calcium and vitamin D supplementation decreases fracture incidence in older adults (29, 61-63). The efficacy of vitamin D supplementation may depend on habitual calcium intake and the dose of vitamin D used. In combination with calcium supplementation, the dose of vitamin D associated with a protective effect is 800 IU or more per day (29, 64). In further support of this value, a recent dosing study performed in 167 healthy, postmenopausal, white women (aged 57 to 90 years) with vitamin D insufficiency (15.6 ng/mL at baseline) demonstrated that 800 IU/d of vitamin D3 achieved a serum 25-hydroxyvitamin D3 level greater than 20 ng/mL (65). The dosing study, which included seven groups ranging from 0 to 4,800 IU per day of vitamin D3 plus calcium supplementation for one year, also revealed that serum 25-hydroxyvitamin D3 response was curvilinear and plateaued at approximately 112 nmol/L (45 ng/mL) in subjects receiving more than 3,200 IU per day of vitamin D3. Some trials have evaluated the effect of high-dose vitamin D supplementation on bone health outcomes.
In one RCT, high-dose vitamin D supplementation was no better than the standard dose of 800 IU/d for improving bone mineral density (BMD) at the hip and lumbar spine (66). In particular, 297 postmenopausal women with low bone mass (T-score ≤-2.0) were randomized to receive high-dose (20,000 IU vitamin D3 twice per week plus 800 IU per day) or standard-dose (placebo plus 800 IU per day) for one year; both groups also received 1,000 mg elemental calcium per day. After one year, both groups had reduced serum PTH, increased serum 25-hydroxyvitamin D3, and increased urinary calcium/creatinine ratio, although to a significantly greater extent in the high-dose group. BMD was similarly unchanged or slightly improved in both groups at all measurement sites. In the Vital D study, 2,256 elderly women (aged 70 years and older) received a single annual dose of 500,000 IU of vitamin D3 or placebo administered orally in the autumn or winter for three to five years (67). Calcium intake was quantified annually by questionnaire; both groups had a median daily calcium intake of 976 mg. The vitamin D group experienced significantly more falls and fractures compared to placebo, particularly within the first three months after dosing. Not only was this regimen ineffective at lowering risk, it suggests that the safety of infrequent, high-dose vitamin D supplementation warrants further study. The RDAs for calcium and vitamin D go together, and the requirement for one nutrient assumes that the need for the other nutrient is being met (31). Thus, the evidence supports the use of combined calcium and vitamin D supplements in the prevention of osteoporosis in older adults. Vitamin K. The major function of vitamin K1 (phylloquinone) is as a cofactor for a specific enzymatic reaction that modifies proteins to a form that facilitates calcium-binding (68). Although only a small number of vitamin-K-dependent proteins have been identified, four are present in bone tissue: osteocalcin (also called bone GLA protein), matrix GLA protein (MGP), protein S, and Gas 6 (68, 69). The putative role of vitamin K in bone biology is attributed to its role as cofactor in the carboxylation of these glutamic acid (GLA)-containing proteins (70). There is observational evidence that diets rich in vitamin K are associated with a decreased risk of hip fracture in both men and women; however, the association between vitamin K intake and BMD is less certain (70). It is possible that a higher intake of vitamin K1, which is present in green leafy vegetables, is a marker of a healthy lifestyle that is responsible for driving the beneficial effect on fracture risk (68, 70). Furthermore, a protective effect of vitamin K1 supplementation on bone loss has not been confirmed in randomized controlled trials (69-71). Vitamin K2 (menaquinone) at therapeutic doses (45 mg/day) is used in Japan to treat osteoporosis (see the Micronutrient Information Center’s Disease Index). Although a 2006 meta-analysis reported an overall protective effect of menaquinone-4 (MK-4) supplementation on fracture risk at the hip and spine (72), more recent data have not corroborated a protective effect of MK-4 and may change the outcome of the meta-analysis if included in the dataset (70). A double-blind, placebo-controlled intervention performed in 2009 observed no effect of either vitamin K1 (1 mg/d) or MK-4 (45 mg/d) supplementation on markers of bone turnover or BMD among healthy, postmenopausal women (N=381) receiving calcium and vitamin D supplements (69). 
In the Postmenopausal Health Study II, the effect of supplemental calcium, vitamin D, and vitamin K (in fortified dairy products) and lifestyle counseling on bone health was examined in healthy, postmenopausal women (73, 74). One hundred fifty women (mean age 62 years) were randomly assigned to one of four groups: (1) 800 mg calcium plus 10 mcg vitamin D3 (N=26); (2) 800 mg calcium, 10 mcg vitamin D3, plus 100 mcg vitamin K1 (N=26); (3) 800 mg calcium, 10 mcg vitamin D3, plus 100 mcg MK-7 (N=24); and (4) control group receiving no dietary intervention or counseling. Supplemental nutrients were delivered via fortified milk and yoghurt, and subjects were advised to consume one portion of each on a daily basis and to attend biweekly counseling sessions during the one-year intervention. BMD significantly increased in all three treatments compared to controls. Between the three diet groups, a significant effect of K1 or MK-7 on BMD remained only at the lumbar spine (not at hip and total body) after controlling for serum vitamin D and calcium intake. Overall, the positive influence on BMD was attributed to the combined effect of diet and lifestyle changes associated with the intervention, rather than with an isolated effect of vitamin K or MK-7 (73). We often discuss the mineral aspect of bone, but the organic matrix is also an integral aspect of bone quality and health. Collagen makes up 90% of the organic matrix of bone. Type I collagen fibers twist around each other in a triple helix and become the scaffold upon which minerals are deposited. Vitamin C is a required cofactor for the hydroxylation of lysine and proline during collagen synthesis by osteoblasts (75). In guinea pigs, vitamin C deficiency is associated with defective bone matrix production, both quantity and quality (76). Unlike humans and guinea pigs, rats can synthesize ascorbic acid on their own. Using a special strain of rats with a genetic defect in ascorbic acid synthesis (Osteogenic Disorder Shionogi [ODS] rats), researchers can mimic human scurvy by feeding these animals a vitamin C-deficient diet (77). Ascorbic acid-deficient ODS rats have a marked reduction in bone formation with no defect in bone mineralization (78). More specifically, ascorbic acid deficiency impairs collagen synthesis, the hydroxylation of collagenous proline and lysine residues, and osteoblastic adhesion to bone matrix (78). In observational studies, vitamin C intake and status is inconsistently associated with bone mineral density and fracture risk (22). A double-blind, placebo-controlled trial was performed with the premise that improving the collagenous bone matrix will enhance the efficacy of mineral supplementation to counteract bone loss (75). Sixty osteopenic women (35 to 55 years of age) received a placebo comprised of calcium and vitamin D (1,000 mg calcium carbonate plus 250 IU vitamin D) or this placebo plus CB6Pro (500 mg vitamin C, 75 mg vitamin B6, and 500 mg proline) daily for one year. In contrast to controls receiving calcium plus vitamin D alone, there was no bone loss detected in the spine and femur in the CB6Pro group. High levels of a metabolite known as homocysteine (hcy) are an independent risk factor for cardiovascular disease (CVD) (see the Disease Index) and may also be a modifiable risk factor for osteoporotic fracture (22). A link between hcy and the skeleton was first noted in studies of hyperhomocysteinuria, a metabolic disorder characterized by exceedingly high levels of hcy in the plasma and urine. 
Individuals with hyperhomocysteinuria exhibit numerous skeletal defects, including reduced bone mineral density (BMD) and osteopenia (79). In vitro studies indicate that a metabolite of hcy inhibits lysyl oxidase, an enzyme involved in collagen cross-linking, and that elevated hcy itself may stimulate osteoclastic activity (80-82). The effect of more subtle elevations of plasma hcy on bone health is more difficult to demonstrate, and observational studies in humans report conflicting results (79, 83). Some report an association between elevated plasma hcy and fracture risk (84-86), while others find no relationship (87-89). A recent meta-analysis of 12 observational studies reported that elevated plasma homocysteine is associated with increased risk of incident fracture (90). Folate, vitamin B12, and vitamin B6 help keep blood levels of hcy low; thus, efforts to reduce plasma hcy levels by meeting recommended intake levels for these vitamins may benefit bone health (83). Few intervention trials evaluating hcy-lowering therapy on bone health outcomes have been conducted. In one trial, 5,522 participants (aged 55 years and older) in the Heart Outcomes Prevention Evaluation (HOPE) 2 trial were randomized to receive daily hcy level-lowering therapy (2.5 mg folic acid, 50 mg vitamin B6, and 1 mg vitamin B12) or placebo for a mean duration of five years (91). Notably, HOPE 2 participants were at high-risk for cardiovascular disease and have preexisting CVD, diabetes mellitus, or another CVD risk factor. Although plasma hcy levels were reduced in the treatment group, there were no significant differences between treatment and placebo on the incidence of skeletal fracture. A randomized, double-blind, placebo-controlled intervention is under way that will assess the effect of vitamin B12 and folate supplementation on fracture incidence in elderly individuals (92). During the B-PROOF (B-vitamins for the Prevention Of Osteoporotic Fracture) trial, 2,919 subjects (65 years and older) with elevated hcy (≥12 micromol/L) will receive placebo or a daily tablet with 500 mcg B12 plus 400 mcg folic acid for two years (both groups also receive 15 mcg [600 IU] vitamin D daily). The first results are expected in 2013 and may help clarify the relationship between hcy, B-vitamin status, and osteoporotic hip fracture. Smoking. Cigarette smoking has an independent, negative effect on bone mineral density (BMD) and fracture risk in both men and women (93, 94). Several meta-analyses have been conducted to assess the relationship between cigarette smoking and bone health. After pooling data from a number of similar studies, there is a consistent, significant reduction in bone mass and increased risk of fracture in smokers compared to non-smokers (95-97). The effects were dose-dependent and had a strong association with age. Smoking cessation may slow or partially reverse the bone loss caused by years of smoking. Unhealthy lifestyle habits and low body weight present in smokers may contribute to the negative impact on bone health (93, 94). Additionally, smoking leads to alterations in hormone (e.g., 1,25-dihydroxyvitamin D3 and estrogen) production and metabolism that could affect bone cell activity and function (93, 94). The deleterious effects of smoking on bone appear to be reversible; thus, efforts to stop smoking will benefit many aspects of general health, including bone health. Alcohol. Chronic light alcohol intake is associated with a positive effect on bone density (98). 
If one standard drink contains 10 g ethanol, then this level of intake translates to one drink per day for women and two drinks per day for men (98). The effect of higher alcohol intakes (11-30 g ethanol per day) on BMD is more variable and may depend on age, gender, hormonal status, and type of alcoholic beverage consumed (98). At the other end of the spectrum, chronic alcoholism has a documented negative effect on bone and increases fracture risk (98). Alcoholics consuming 100-200 g ethanol per day have low bone density, impaired osteoblastic activity, and metabolic abnormalities that compromise bone health (98, 99). Physical Activity. Physical activity is highly beneficial to skeletal health across all stages of bone development. Regular resistance exercise helps to reduce osteoporotic fracture risk for two reasons: it both directly and indirectly increases bone mass, and it reduces falling risk by improving strength, balance, and coordination (100). Physical activity increases bone mass because mechanical forces imposed on bone induce an adaptive osteogenic (bone-forming) response. Bone adjusts its strength in proportion to the degree of bone stress (1), and the intensity and novelty of the load, rather than number of repetitions or sets, matter for building bone mass (101). The American College of Sports Medicine suggests that adults engage in the following exercise regimen in order to maintain bone health (see table 2 below) (100):
Table 2. Exercise recommendations for bone health according to the American College of Sports Medicine
Mode: weight-bearing endurance activities (e.g., tennis, stair climbing, jogging); activities that involve jumping (e.g., volleyball, basketball); and resistance exercise (e.g., weight lifting)
Intensity: moderate to high
Frequency: weight-bearing endurance activities 3-5 times per week; resistance exercise 2-3 times per week
Duration: 30-60 minutes per day of a combination of weight-bearing endurance activities, activities that involve jumping, and resistance exercise that targets all major muscle groups
Additionally, the ability of the skeleton to respond to physical activity can be either constrained or enabled by nutritional factors. For example, calcium insufficiency diminishes the effectiveness of mechanical loading to increase bone mass, and highly active people who are malnourished are at increased fracture risk (2, 100). Thus, exercise can be detrimental to bone health when the body is not receiving the nutrients it needs to remodel bone tissue in response to physical activity. Micronutrients play a prominent role in bone health. The emerging theme with supplementation trials seems to be that habitual intake influences the efficacy of the intervention. In other words, correcting a deficiency and meeting the RDAs of micronutrients involved in bone health will improve bone mineral density (BMD) and benefit the skeleton (see table 1). To realize lasting effects on bone, the intervention must persist throughout a lifetime. At all stages of life, high impact and resistance exercise in conjunction with adequate intake of nutrients involved in bone health are critical factors in maintaining a healthy skeleton and minimizing bone loss. The preponderance of clinical trial data supports supplementation with calcium and vitamin D in older adults as a preventive strategy against osteoporosis. Habitual, high intake of vitamin A at doses >1,500 mcg (5,000 IU) per day may negatively impact bone.
Although low dietary vitamin K intake is associated with increased fracture risk, RCTs have not supported a direct role for vitamin K1 (phylloquinone) or vitamin K2 (menaquinone) supplementation in fracture risk reduction. The other micronutrients important to bone health (phosphorus, fluoride, magnesium, sodium, and vitamin C) have essential roles in bone, but clinical evidence in support of supplementation beyond recommended levels of intake to improve BMD or reduce fracture incidence is lacking. Many Americans, especially the elderly, are at high risk for deficiencies of several micronutrients (24). Some of these nutrients are critical for bone health, and the LPI recommends supplemental calcium, vitamin D, and magnesium for healthy adults (see the LPI Rx for Health). Written in August 2012 by: Giana Angelo, Ph.D. Linus Pauling Institute Oregon State University Reviewed in August 2012 by: Connie M. Weaver, Ph.D. Distinguished Professor and Department Head Department of Nutrition Science This article was underwritten, in part, by a grant from Bayer Consumer Care AG, Basel, Switzerland.
http://lpi.oregonstate.edu/infocenter/bonehealth.html
Logarithms are exponents that are relative to a given base. Calculations involving multiplication, division, raising to powers and extraction of roots can usually be carried out more easily with the use of logarithms. Logarithms contain three parts: the number, the base, and the logarithm. In the following example, log10(1000) = 3, the number is 1000, the base is 10 and the logarithm is 3. There are two types of logarithms that appear most often. The first type has a base of ten, like the example. The second type has a base of e, where e ≈ 2.718. Since these logarithms appear so often, they are abbreviated. For a logarithm with a base of 10, the base is not written and it is assumed, so log(x) means log10(x). For a logarithm with a base of e, it is abbreviated to ln, also with no written base.
Rules of logarithms
It is possible to change the base of a logarithm using the rule logb(x) = log(x)/log(b). This is helpful when using bases that are not the two most common bases. This makes it possible to change the base of the logarithm so that it can be calculated using a calculator, since most calculators only have the two bases. It is also possible to split a logarithm apart: log(a*b) = log(a) + log(b) and log(a/b) = log(a) - log(b). This becomes more useful when variables are involved. It also becomes useful later when graphing data on a log scale and finding an equation for the line. If you evaluate this rule in terms of the multiplication rule for exponents, b^x * b^y = b^(x+y), it becomes easy to see why this rule is true: when numbers are multiplied, their exponents (logarithms) add. Using the rule of multiplication, logarithms can be evaluated with exponents. If the number contains an exponent, it is possible to pull that outside of the logarithm: log(a^n) = n*log(a).
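As a quick numerical check of these rules, the short Python sketch below (not part of the original lesson) evaluates a common logarithm, a natural logarithm, a change of base, and the product and power rules using the standard math module.

```python
import math

print(math.log10(1000))                    # 3.0  (base-10 logarithm)
print(math.log(math.e))                    # 1.0  (natural logarithm, ln)

# Change of base: log base 2 of 8, computed from base-10 logarithms.
print(math.log10(8) / math.log10(2))       # ~3.0

# Product rule: log(x*y) = log(x) + log(y)
x, y = 20.0, 50.0
print(math.log10(x * y), math.log10(x) + math.log10(y))   # both ~3.0

# Power rule: log(a**n) = n * log(a)
print(math.log10(5**4), 4 * math.log10(5))                 # both ~2.796
```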
http://www.nde-ed.org/EducationResources/Math/Math-Logs.htm
The brain's executive function is a kind of internal "air traffic control system": a group of skills that helps us focus on multiple streams of information at the same time, monitor errors, make decisions in light of available information, revise plans as necessary, and resist the urge to let frustration lead to hasty actions. The development of solid executive function is one of the key learning tasks of early childhood, and a significant contributor to later success in life. In his recent webinar on the topic, Scientific Learning Chief Scientific Officer and Co-Founder Dr. William Jenkins dug deep into the three interrelated skills which comprise this air traffic control system: working memory, inhibitory control, and cognitive/mental flexibility. These three skills help us keep information in mind, master our impulses, and remain flexible in the face of change—and are crucial building blocks for the development of both cognitive and social interaction skills in young children. Dr. Jenkins outlined a number of reasons that parents should take an interest in helping their children develop sound executive function skills in early childhood: 1. Strong executive function skills provide the best possible foundation for school readiness. In many ways, executive function skills could be called the "biological foundation" for school readiness. It has been shown that children with strong working memory, inhibitory control, and cognitive/mental flexibility skills make greater gains in academic areas than peers with weaker executive function skills. Coming to school with these foundational skills well-developed is just as important, if not more important, than fluency with letters and numbers. 2. Executive function skills begin at home. Executive function skills are not automatic. These skills are built over time through practice, and can be observed in infants as early as six months, when some infants can understand and obey a simple directive such as "don't touch that plate." Parents can support (or "scaffold") the development of these skills from early childhood by teaching and reinforcing common concepts such as taking turns and using "inside" and "outside" voices. In addition to the home, executive function skills continue to be developed in childcare programs, pre-schools, elementary school classrooms, and other social settings, into adolescence. As Dr. Jenkins notes in the webinar, elementary school teachers are keenly aware of the importance of executive function. Parents who are actively, consciously participating in the development of their child’s executive function skills will have a richer understanding of the importance of all activities and expectations revolving around classroom life, from the way one lines up for lunch to the way one studies for a spelling test. This has the potential for a dynamic, integrated educational experience for the student, teacher, and parents, working together to build a better brain for each child. 4. Executive function skills help lay the foundation for the kind of student, citizen, and social being a child will become. Ultimately, the skills that cohere into executive function are the skills we use to navigate family, school, and work settings for our entire lives: retaining and using information, filtering thoughts and impulses, focusing on a task at hand, recognizing errors, changing plans, and understanding how different rules apply in different settings are all skills that require stewardship from birth to adulthood.
Parents armed with this knowledge are more apt to take an active part in the development of these skills from an early age. 5. Understanding executive function gives parents a fuller understanding of a child who is struggling. It is a mistake to immediately brand a child who struggles with things like inhibitory control as a "bad kid". Understanding the concepts behind executive function gives parents a fuller picture of what is happening with their child when he or she is having difficulty controlling impulses, focusing on a given task, or understanding that different rules may apply at different times. This will help parents decide if outside help may be needed to help their child (studies show there is at least short-term effectiveness in interventions that support executive function development). Interested in learning more? Listen to Dr. Jenkins’ webinar here for more in-depth information on all aspects of executive function and its importance in early childhood development and brain fitness.
http://www.scilearn.com/blog/5-reasons-why-every-parent-should-be-familiar-with-executive-function-skills.php
Heat transfer coefficient
The heat transfer coefficient, in thermodynamics and in mechanical and chemical engineering, is used in calculating the heat transfer, typically by convection or phase transition between a fluid and a solid:
Q = h · A · ΔT
- Q = heat flow (input or lost heat flow), J/s = W
- h = heat transfer coefficient, W/(m2K)
- A = heat transfer surface area, m2
- ΔT = difference in temperature between the solid surface and the surrounding fluid area
From the above equation, the heat transfer coefficient is the proportionality coefficient between the heat flux, that is, heat flow per unit area, q/A, and the thermodynamic driving force for the flow of heat (i.e., the temperature difference, ΔT). The heat transfer coefficient has SI units of watts per square meter kelvin: W/(m2K). There are numerous methods for calculating the heat transfer coefficient in different heat transfer modes, different fluids, flow regimes, and under different thermohydraulic conditions. Often it can be estimated by dividing the thermal conductivity of the convection fluid by a length scale. The heat transfer coefficient is often calculated from the Nusselt number (a dimensionless number). There are also online calculators available specifically for heat transfer fluid applications. An understanding of convection boundary layers is necessary for understanding convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperatures differ. A temperature profile exists due to the energy exchange resulting from this temperature difference. The heat transfer rate can then be written as
q = h · A · (Ts - T∞),
and because heat transfer at the surface is by conduction,
q = -kf · A · (∂T/∂y) at y = 0.
These two terms are equal; thus
h = -kf · (∂T/∂y)|y=0 / (Ts - T∞).
Making it dimensionless by multiplying by a representative length L,
h · L / kf = -L · (∂T/∂y)|y=0 / (Ts - T∞).
The right-hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left-hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu.
Alternative Method (A simple method for determining the overall heat transfer coefficient)
A simple method for determining an overall heat transfer coefficient that is useful to find the heat transfer between simple elements such as walls in buildings or across heat exchangers is shown below. Note that this method only accounts for conduction within materials; it does not take into account heat transfer through methods such as radiation. The method is as follows:
1 / (U · A) = 1 / (h1 · A1) + dxw / (k · A) + 1 / (h2 · A2)
- U = the overall heat transfer coefficient (W/(m2 K))
- A = the contact area for each fluid side (m2) (with A1 and A2 expressing either surface)
- k = the thermal conductivity of the material (W/(m K))
- h = the individual convection heat transfer coefficient for each fluid (W/(m2 K))
- dxw = the wall thickness (m)
As the areas for each surface approach being equal, the equation can be written as the transfer coefficient per unit area, as shown below:
1 / U = 1 / h1 + dxw / k + 1 / h2
NOTE: Often the value for dxw is referred to as the difference of two radii, where the inner and outer radii are used to define the thickness of a pipe carrying a fluid; however, this figure may also be considered as a wall thickness in a flat plate transfer mechanism or other common flat surfaces such as a wall in a building when the area difference between each edge of the transmission surface approaches zero.
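As a concrete illustration of the per-unit-area formula just above, the following Python sketch computes the overall coefficient U and the resulting heat flux for a single wall between two fluids. The film coefficients and wall properties are made-up example numbers, not values from this article.

```python
def overall_u(h_inner, h_outer, wall_thickness, wall_k):
    # Overall heat transfer coefficient per unit area, W/(m2 K), for
    # convection - conduction - convection in series across a flat wall:
    # 1/U = 1/h1 + dxw/k + 1/h2
    resistance = 1.0 / h_inner + wall_thickness / wall_k + 1.0 / h_outer
    return 1.0 / resistance

# Hypothetical example: room air, a 0.2 m brick-like wall, outside air.
h1, h2 = 8.0, 25.0        # W/(m2 K), assumed film coefficients
dx_w, k = 0.20, 0.7       # wall thickness (m) and conductivity (W/(m K)), assumed
U = overall_u(h1, h2, dx_w, k)
q = U * (20.0 - 0.0)      # heat flux (W/m2) for a 20 K temperature difference
print(U, q)               # U ~ 2.2 W/(m2 K), q ~ 44 W/m2
```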
In the walls of buildings the above formula can be used to derive the formula commonly used to calculate the heat through building components. Architects and engineers call the resulting values either the U-Value or the R-Value of a construction assembly such as a wall. The two types of value (R and U) are related as the inverse of each other, such that R-Value = 1/U-Value, and both are more fully understood through the concept of an overall heat transfer coefficient described in a lower section of this document.
Convective heat transfer correlations
Although convective heat transfer can be derived analytically through dimensional analysis, exact analysis of the boundary layer, approximate integral analysis of the boundary layer and analogies between energy and momentum transfer, these analytic approaches may not offer practical solutions to all problems when there are no mathematical models applicable. As such, many correlations were developed by various authors to estimate the convective heat transfer coefficient in various cases including natural convection, forced convection for internal flow and forced convection for external flow. These empirical correlations are presented for their particular geometry and flow conditions. As the fluid properties are temperature dependent, they are evaluated at the film temperature Tf, which is the average of the surface temperature Ts and the surrounding bulk temperature T∞: Tf = (Ts + T∞) / 2.
Natural convection
External flow, vertical plane. The Churchill and Chu correlation for natural convection adjacent to vertical planes is
NuL = {0.825 + 0.387 · RaL^(1/6) / [1 + (0.492/Pr)^(9/16)]^(8/27)}^2.
NuL applies to all fluids for both laminar and turbulent flows. L is the characteristic length with respect to the direction of gravity, and RaL is the Rayleigh number with respect to this length. For laminar flows in the range RaL ≤ 10^9, the result can be further improved using
NuL = 0.68 + 0.670 · RaL^(1/4) / [1 + (0.492/Pr)^(9/16)]^(4/9).
External flow, vertical cylinders. For cylinders with their axes vertical, the expressions for plane surfaces can be used provided the curvature effect is not too significant. This represents the limit where boundary layer thickness is small relative to cylinder diameter D. The correlations for vertical plane walls can be used when D/L ≥ 35 / GrL^(1/4), where GrL is the Grashof number.
External flow, horizontal plates. W.H. McAdams suggested the following correlations. The induced buoyancy will be different depending upon whether the hot surface is facing up or down. For a hot surface facing up or a cold surface facing down, Nu = 0.54 · Ra^(1/4); for a hot surface facing down or a cold surface facing up, Nu = 0.27 · Ra^(1/4). The characteristic length is the ratio of the plate surface area to perimeter. If the plane surface is inclined at an angle θ, the equations for a vertical plane by Churchill and Chu may be used for θ up to about 60°. When boundary layer flow is laminar, the gravitational constant g is replaced with g cosθ for calculating Ra in the equation for laminar flow.
External flow, horizontal cylinder. For cylinders of sufficient length and negligible end effects, Churchill and Chu give the following correlation for RaD ≤ 10^12:
NuD = {0.60 + 0.387 · RaD^(1/6) / [1 + (0.559/Pr)^(9/16)]^(8/27)}^2.
External flow, spheres. For spheres, T. Yuge has the following correlation for Pr ≃ 1 and 1 ≤ RaD ≤ 10^5:
NuD = 2 + 0.43 · RaD^(1/4).
Forced convection
Internal flow, laminar flow. Sieder and Tate give the following correlation for laminar flow in tubes:
Nu = 1.86 · (Re · Pr · D/L)^(1/3) · (μb/μw)^0.14,
where D is the internal diameter, L is the tube length, μb is the fluid viscosity at the bulk mean temperature, and μw is the viscosity at the tube wall surface temperature.
Internal flow, turbulent flow. The Dittus-Boelter correlation (1930) is a common and particularly simple correlation useful for many applications.
This correlation is applicable when forced convection is the only mode of heat transfer; i.e., there is no boiling, condensation, significant radiation, etc. The accuracy of this correlation is anticipated to be ±15%. For a fluid flowing in a straight circular pipe with a Reynolds number between 10 000 and 120 000 (in the turbulent pipe flow range), when the fluid's Prandtl number is between 0.7 and 120, for a location far from the pipe entrance (more than 10 pipe diameters; more than 50 diameters according to many authors) or other flow disturbances, and when the pipe surface is hydraulically smooth, the heat transfer coefficient between the bulk of the fluid and the pipe surface can be expressed as: - - thermal conductivity of the bulk fluid - - - Hydraulic diameter - Nu - Nusselt number - (Dittus-Boelter correlation) - Pr - Prandtl number - Re - Reynolds number - n = 0.4 for heating (wall hotter than the bulk fluid) and 0.33 for cooling (wall cooler than the bulk fluid). The fluid properties necessary for the application of this equation are evaluated at the bulk temperature thus avoiding iteration Forced convection, External flow In analyzing the heat transfer associated with the flow past the exterior surface of a solid, the situation is complicated by phenomena such as boundary layer separation. Various authors have correlated charts and graphs for different geometries and flow conditions. For Flow parallel to a Plane Surface, where x is the distance from the edge and L is the height of the boundary layer, a mean Nusselt number can be calculated using the Colburn analogy. Thom correlation There exist simple fluid-specific correlations for heat transfer coefficient in boiling. The Thom correlation is for flow boiling of water (subcooled or saturated at pressures up to about 20 MPa) under conditions where the nucleate boiling contribution predominates over forced convection. This correlation is useful for rough estimation of expected temperature difference given the heat flux: - is the wall temperature elevation above the saturation temperature, K - q is the heat flux, MW/m2 - P is the pressure of water, MPa Note that this empirical correlation is specific to the units given. Heat transfer coefficient of pipe wall The resistance to the flow of heat by the material of pipe wall can be expressed as a "heat transfer coefficient of the pipe wall". However, one needs to select if the heat flux is based on the pipe inner or the outer diameter. where k is the effective thermal conductivity of the wall material and x is the wall thickness. If the above assumption does not hold, then the wall heat transfer coefficient can be calculated using the following expression: where di and do are the inner and outer diameters of the pipe, respectively. The thermal conductivity of the tube material usually depends on temperature; the mean thermal conductivity is often used. Combining heat transfer coefficients For two or more heat transfer processes acting in parallel, heat transfer coefficients simply add: For two or more heat transfer processes connected in series, heat transfer coefficients add inversely: For example, consider a pipe with a fluid flowing inside. The rate of heat transfer between the bulk of the fluid inside the pipe and the pipe external surface is: - q = heat transfer rate (W) - h = heat transfer coefficient (W/(m2·K)) - t = wall thickness (m) - k = wall thermal conductivity (W/m·K) - A = area (m2) - = difference in temperature. 
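As a rough illustration of the Dittus-Boelter correlation quoted above, the following Python sketch computes Nu = 0.023·Re^0.8·Pr^n and converts it to a heat transfer coefficient via h = Nu·k/D. The exponent n follows the text (0.4 for heating, 0.33 for cooling; some references use 0.3 for cooling), and the fluid properties are assumed, water-like values rather than data from the article.

```python
# Hedged sketch of the Dittus-Boelter correlation as described above:
# Nu = 0.023 * Re**0.8 * Pr**n, with n = 0.4 (heating) or 0.33 (cooling),
# then h = Nu * k / D. Property values below are illustrative assumptions.

def dittus_boelter_h(re, pr, k_fluid, d_hydraulic, heating=True):
    """Return the convective coefficient h in W/(m^2 K) for turbulent pipe flow.

    Raises ValueError outside the validity range quoted in the text
    (Re between 10,000 and 120,000, Pr between 0.7 and 120).
    """
    if not (10_000 <= re <= 120_000):
        raise ValueError("Reynolds number outside the correlation's stated range")
    if not (0.7 <= pr <= 120):
        raise ValueError("Prandtl number outside the correlation's stated range")
    n = 0.4 if heating else 0.33
    nu = 0.023 * re**0.8 * pr**n
    return nu * k_fluid / d_hydraulic

if __name__ == "__main__":
    # Roughly water-like properties at moderate temperature (assumed values).
    re, pr = 50_000, 4.0
    k_fluid = 0.6       # W/(m K)
    d = 0.025           # m, a 25 mm pipe
    h = dittus_boelter_h(re, pr, k_fluid, d, heating=True)
    print(f"Dittus-Boelter estimate: h ~ {h:.0f} W/(m^2 K)")
```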
Overall heat transfer coefficient The overall heat transfer coefficient is a measure of the overall ability of a series of conductive and convective barriers to transfer heat. It is commonly applied to the calculation of heat transfer in heat exchangers, but can be applied equally well to other problems. For the case of a heat exchanger, can be used to determine the total heat transfer between the two streams in the heat exchanger by the following relationship: - = heat transfer rate (W) - = overall heat transfer coefficient (W/(m²·K)) - = heat transfer surface area (m2) - = log mean temperature difference (K) The overall heat transfer coefficient takes into account the individual heat transfer coefficients of each stream and the resistance of the pipe material. It can be calculated as the reciprocal of the sum of a series of thermal resistances (but more complex relationships exist, for example when heat transfer takes place by different routes in parallel): - R = Resistance(s) to heat flow in pipe wall (K/W) - Other parameters are as above. The heat transfer coefficient is the heat transferred per unit area per kelvin. Thus area is included in the equation as it represents the area over which the transfer of heat takes place. The areas for each flow will be different as they represent the contact area for each fluid side. The thermal resistance due to the pipe wall is calculated by the following relationship: - x = the wall thickness (m) - k = the thermal conductivity of the material (W/(m·K)) - A = the total area of the heat exchanger (m2) This represents the heat transfer by conduction in the pipe. As mentioned earlier in the article the convection heat transfer coefficient for each stream depends on the type of fluid, flow properties and temperature properties. Some typical heat transfer coefficients include: - Air - h = 10 to 100 W/(m2K) - Water - h = 500 to 10,000 W/(m2K) Thermal resistance due to fouling deposits Surface coatings can build on heat transfer surfaces during heat exchanger operation due to fouling. These add extra thermal resistance to the wall and may noticeably decrease the overall heat transfer coefficient and thus performance. (Fouling can also cause other problems.) The additional thermal resistance due to fouling can be found by comparing the overall heat transfer coefficient determined from laboratory readings with calculations based on theoretical correlations. They can also be evaluated from the development of the overall heat transfer coefficient with time (assuming the heat exchanger operates under otherwise identical conditions). This is commonly applied in practice, e.g. The following relationship is often used: - = overall heat transfer coefficient based on experimental data for the heat exchanger in the "fouled" state, - = overall heat transfer coefficient based on calculated or measured ("clean heat exchanger") data, - = thermal resistance due to fouling, See also - Convective heat transfer - Heat sink - Churchill-Bernstein Equation - Heat pump - Heisler Chart - Thermal conductivity - Fourier number - Nusselt number - James R. Welty; Charles E. Wicks; Robert E. Wilson; Gregory L. Rorrer., "Fundamentals of Momentum, Heat and Mass transfer" 5th edition, John Wiley and Sons - S.S. Kutateladze and V.M. Borishanskii, A Concise Encyclopedia of Heat Transfer, Pergamon Press, 1966. - F. Kreith (editor), "The CRC Handbook of Thermal Engineering", CRC Press, 2000. - W.Rohsenow, J.Hartnet, Y.Cho, "Handbook of Heat Transfer", 3rd edition, McGraw-Hill, 1998. 
- This relationship is similar to the harmonic mean; however, note that it is not multiplied by the number n of terms.
- Coulson and Richardson, "Chemical Engineering", Volume 1, Elsevier, 2000.
- Turner C.W.; Klimas S.J.; Brideau M.G., "Thermal resistance of steam-generator tube deposits under single-phase forced convection and flow-boiling heat transfer", Canadian Journal of Chemical Engineering, 2000, vol. 78, No. 1, pp. 53-60.
http://en.wikipedia.org/wiki/Heat_transfer_coefficient
13
11
Linear Theory of Ocean Surface Waves Looking out to sea from the shore, we can see waves on the sea surface. Looking carefully, we notice the waves are undulations of the sea surface with a height of around a meter, where height is the vertical distance between the bottom of a trough and the top of a nearby crest. The wavelength, which we might take to be the distance between prominent crests, is around 50m - 100m. Watching the waves for a few minutes, we notice that wave-height and wave-length are not constant. The heights vary randomly in time and space, and the statistical properties of the waves, such as the mean height averaged for a few hundred waves, change from day to day. These prominent offshore waves are generated by wind. Sometimes the local wind generates the waves, other times distant storms generate waves which ultimately reach the coast. For example, waves breaking on the Southern California coast on a summer day may come from vast storms offshore of Antarctica 10,000km away. If we watch closely for a long time, we notice that sea level changes from hour to hour. Over a period of a day, sea level increases and decreases relative to a point on the shore by about a meter. The slow rise and fall of sea level is due to the tides, another type of wave on the sea surface. Tides have wavelengths of thousands of kilometers, and they are generated by the slow, very small changes in gravity due to the motion of the sun and the moon relative to Earth. Surface waves are inherently nonlinear: The solution of the equations of motion depends on the surface boundary conditions, but the surface boundary conditions are the waves we wish to calculate. How can we proceed? We begin by assuming that the amplitude of waves on the water surface is infinitely small so the surface is almost exactly a plane. To simplify the mathematics, we can also assume that the flow is 2-dimensional with waves traveling in the x-direction. We also assume that the Coriolis force and viscosity can be neglected. If we retain rotation, we get Kelvin waves. With these assumptions, the sea-surface elevation of a wave traveling in the direction is: where is wave frequency in radians per second, is the wave frequency in Hertz (Hz), is wave number, is wave period, is wave-length, and where we assume, as stated above, that . The wave period is the time it takes two successive wave crests or troughs to pass a fixed point. The wave-length is the distance between two successive wave crests or troughs at a fixed time. where is the water depth and is the acceleration of gravity. Two approximations are especially useful. - Deep-water approximation is valid if the water depth is much greater than the wave-length . In this case, >> , >> 1, and . - Shallow-water approximation is valid if the water depth is much less than a wavelength. In this case, , << 1, and . For these two limits of water depth compared with wavelength the dispersion relation reduces to: for for the Deep-water dispersion relation and , for the Shallow-water dispersion relation. The stated limits for give a dispersion relation accurate within 10%. Because many wave properties can be measured with accuracies of 5-10%, the approximations are useful for calculating wave properties. Later we will learn to calculate wave properties as the waves propagate from deep to shallow water. The phase velocity c is the speed at which a particular phase of the wave propagates, for example, the speed of propagation of the wave crest. 
In one wave period the crest advances one wave-length \lambda and the phase speed is . Thus, the definition of phase speed is: The direction of propagation is perpendicular to the wave crest and toward the positive direction. The deep- and shallow-water approximations for the dispersion relation give: The approximations are accurate to about 5% for limits stated above. In deep water, the phase speed depends on wave-length or wave frequency. Longer waves travel faster. Thus, deep-water waves are said to be dispersive. In shallow water, the phase speed is independent of the wave; it depends only on the depth of the water. Shallow-water waves are non-dispersive. The concept of group velocity is fundamental for understanding the propagation of linear and nonlinear waves. First, it is the velocity at which a group of waves travels across the ocean. More importantly, it is also the propagation velocity of wave energy. Whitham 1974 ( §1.3 and §11.6) gives a clear derivation of the concept and the fundamental equation. The definition of group velocity in two dimensions is: Using the approximations for the dispersion relation: For ocean-surface waves, the direction of propagation is perpendicular to the wave crests in the positive x direction. In the more general case of other types of waves, such as Kelvin and Rossby waves, the group velocity is not necessarily in the direction perpendicular to wave crests. Notice that a group of deep-water waves moves at half the phase speed of the waves making up the group. How can this happen? If we could watch closely a group of waves crossing the sea, we would see waves crests appear at the back of the wave train, move through the train, and disappear at the leading edge of the group. Each wave crest moves at twice the speed of the group. Do real ocean waves move in groups governed by the dispersion relation? Yes. Munk et al. 1963 in a remarkable series of experiments in the 1960s showed that ocean waves propagating over great distances are dispersive, and that the dispersion could be used to track storms. They recorded waves for many days using an array of three pressure gauges just offshore of San Clemente Island, 60 miles due west of San Diego, California. Wave spectra were calculated for each day's data. (The concept of a spectra is discussed below.) From the spectra, the amplitudes and frequencies of the low-frequency waves and the propagation direction of the waves were calculated. Finally, they plotted contours of wave energy on a frequency-time diagram (Figure 1). To understand the figure, consider a distant storm that produces waves of many frequencies. The lowest-frequency waves (smallest w) travel the fastest and they arrive before other, higher-frequency waves. The further away the storm, the longer the delay between arrivals of waves of different frequencies. The ridges of high wave energy seen in the Figure are produced by individual storm . The slope of the ridge gives the distance to the storm in degrees along a great circle; and the phase information from the array gives the angle to the storm. The two angles give the storm's location relative to San Clemente. Thus waves arriving from 15 to 18 September produce a ridge indicating the storm was 115° away at an angle of 205° which is south of new Zealand near Antarctica. The locations of the storms producing the waves recorded from June through October 1959 were compared with the location of storms plotted on weather maps and in most cases the two agreed well. 
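The typeset equations in this section did not survive extraction, so the sketch below uses the standard linear-theory forms: the dispersion relation ω² = g·k·tanh(kd), phase speed c = ω/k, and group velocity c_g = (c/2)(1 + 2kd/sinh 2kd), whose deep- and shallow-water limits reduce to the approximations described above. It is a minimal Python illustration with an assumed wave period and depths, not a reconstruction of the source's own numbers.

```python
# Hedged sketch of the linear dispersion relation discussed above:
# omega^2 = g * k * tanh(k * d).
import math

G = 9.81  # m/s^2

def wavenumber(period, depth, tol=1e-10):
    """Solve omega^2 = g * k * tanh(k * d) for the wavenumber k."""
    omega = 2.0 * math.pi / period
    k = omega**2 / G                      # deep-water first guess
    for _ in range(200):
        k_new = omega**2 / (G * math.tanh(k * depth))
        if abs(k_new - k) < tol:
            return k_new
        k = 0.5 * (k + k_new)             # damped update for robust convergence
    return k

def phase_and_group_speed(period, depth):
    """Phase speed c = omega/k and group speed cg = (c/2)(1 + 2kd / sinh 2kd)."""
    omega = 2.0 * math.pi / period
    k = wavenumber(period, depth)
    c = omega / k
    kd = k * depth
    if kd > 20.0:                         # deep-water limit; avoids huge sinh arguments
        cg = 0.5 * c
    else:
        cg = 0.5 * c * (1.0 + 2.0 * kd / math.sinh(2.0 * kd))
    return c, cg

if __name__ == "__main__":
    T = 10.0                              # s, a typical swell period (assumed)
    for d in (4000.0, 100.0, 5.0):        # open ocean, shelf, near-shore depths in m
        c, cg = phase_and_group_speed(T, d)
        print(f"depth {d:6.0f} m:  c = {c:5.1f} m/s   cg = {cg:5.1f} m/s   cg/c = {cg/c:.2f}")
    # In deep water cg/c -> 1/2 (dispersive); in shallow water cg/c -> 1 and
    # c -> sqrt(g*d), independent of wave period (non-dispersive).
```

Running the sketch shows the ratio c_g/c moving from one half in deep water toward one as the depth shrinks, which is exactly the behaviour of wave groups described above.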
The wave energy density in Joules per square meter is related to the variance of sea-surface displacement by: where is water density, is gravity, and the brackets denote a time average. Note that this formula requires that there is quasi steady state so that the average kinetic and potential energies are equal and is only valid for linear waves. Although the formula in theory will give different energy densities for different locations (e.g. for a standing wave there will be locations where the displacement, hence also the energy density, will always be zero), it will in practice give a good result which doesn't vary much from location to location. What do we mean by wave-height? If we look at a wind-driven sea, we see waves of various heights. Some are much larger than most, others are much smaller (Figure 2). A practical definition that is often used is the height of the highest 1/3 of the waves, . The height is computed as follows: measure wave-height for a few minutes, pick out say 120 wave crests and record their heights. Pick the 40 largest waves and calculate the average height of the 40 values. This is for the wave record. The concept of significant wave-height was developed during the World War II as part of a project to forecast ocean wave-heights and periods. Wiegel 1964: p. 198 reports that work at the Scripps Institution of Oceanography showed ... wave-height estimated by observers corresponds to the average of the highest 20 to 40 per cent of waves... Originally, the term significant wave-height was attached to the average of these observations, the highest 30 percent of the waves, but has evolved to become the average of the highest one-third of the waves, (designated or ) More recently, significant wave-height is calculated from measured wave displacement. If the sea contains a narrow range of wave frequencies, is related to the standard deviation of sea-surface displacement (NAS 1963: 22; Hoffman and Karst 1975) where is the standard deviation of surface displacement. This relationship is much more useful, and it is now the accepted way to calculate wave-height from wave measurements The material in this page has come from Introduction to Physical Oceanography by Robert Stewart.
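As a small worked illustration of the quantities just defined, the Python sketch below builds a synthetic displacement record and estimates the significant wave height as the usual four times the standard deviation of sea-surface displacement, together with the energy density ρg⟨η²⟩ (the standard linear-theory form, since the typeset formula is missing from the text). Every number in the record is made up for illustration.

```python
# Hedged sketch: estimating significant wave height and energy density from a
# (synthetic) record of sea-surface displacement. The record below is invented
# purely for illustration; real estimates would use measured displacement.
import math
import random
import statistics

random.seed(1)

# Build a fake 20-minute displacement record sampled at 1 Hz: a few swell and
# wind-wave components with random phases plus a little measurement noise.
components = [(1.0, 12.0), (0.6, 8.0), (0.3, 5.0)]   # (amplitude m, period s), assumed
phases = [random.uniform(0, 2 * math.pi) for _ in components]
eta = []
for t in range(20 * 60):
    value = sum(a * math.sin(2 * math.pi * t / T + p)
                for (a, T), p in zip(components, phases))
    eta.append(value + random.gauss(0.0, 0.05))

sigma = statistics.pstdev(eta)            # standard deviation of displacement
hs_spectral = 4.0 * sigma                 # Hs estimate from the variance

rho, g = 1025.0, 9.81                     # sea-water density (kg/m^3), gravity
variance = statistics.pvariance(eta)
energy_density = rho * g * variance       # J/m^2, E = rho * g * <eta^2>

print(f"sigma of displacement : {sigma:.2f} m")
print(f"Hs ~ 4*sigma          : {hs_spectral:.2f} m")
print(f"energy density        : {energy_density:.0f} J/m^2")
```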
http://www.wikiwaves.org/Linear_Theory_of_Ocean_Surface_Waves
13
11
Sticky, in the social sciences and particularly economics, describes a situation in which a variable is resistant to change. Sticky prices are an important part of macroeconomic theory since they may be used to explain why markets might not reach equilibrium in the short run or even possibly the long-run. Nominal wages may also be sticky. Market forces may reduce the real value of labour in an industry, but wages will tend to remain at previous levels in the short run. This can be due to institutional factors such as price regulations, legal contractual commitments (e.g. office leases and employment contracts), labour unions, human stubbornness, human needs, or self-interest. Stickiness may apply in one direction. For example, a variable that is "sticky downward" will be reluctant to drop even if conditions dictate that it should. However, in the long run it will drop to the equilibrium level. Economists tend to cite four possible causes of price stickiness: menu costs, money illusion, imperfect information with regard to price changes, and fairness concerns.Robert Hall cites incentive and cost barriers on the part of firms to help explain stickiness in wages. Examples of stickiness Many firms, during recessions, lay off workers. Yet many of these same firms are reluctant to begin hiring, even as the economic situation improves. This can result in slow job growth during a recovery. Wages, prices, and employment levels can all be sticky. Normally, a variable oscillates according to changing market conditions, but when stickiness enters the system, oscillations in one direction are favored over the other, and the variable exhibits "creep"—it gradually moves in one direction or another. This is also called the "ratchet effect". Over time a variable will have ratcheted in one direction. For example, in the absence of competition, firms rarely lower prices, even when production costs decrease (i.e. supply increases) or demand drops. Instead, when production becomes cheaper, firms take the difference as profit, and when demand decreases they are more likely to hold prices constant, while cutting production, than to lower them. Therefore, prices are sometimes observed to be sticky downward, and the net result is one kind of inflation. Prices in an oligopoly can often be considered sticky-upward. The kinked demand curve, resulting in elastic price elasticity of demand above the current market clearing price, and inelasticity below it, requires firms to match price reductions by their competitors to maintain market share. Note: For a general discussion of asymmetric upward- and downward-stickiness with respect to upstream prices see Asymmetric price transmission. Modeling sticky prices Economists have tried to model sticky prices in a number of ways. These models can be classified as either time-dependent, where firms change prices with the passage of time and decide to change prices independently of the economic environment, or state-dependent, where firms decide to change prices in response to changes in the economic environment. The differences can be thought of as differences in a two-stage process: In time-dependent models, firms decide to change prices and then evaluate market conditions; In state-dependent models, firms evaluate market conditions and then decide how to respond. In time-dependent models price changes are staggered exogenously, so a fixed percentage of firms change prices at a given time. There is no selection as to which firms change prices. 
Two commonly used time-dependent models based on papers by John B. Taylor and Guillermo Calvo. In Taylor (1980), firms change prices every nth period. In Calvo (1983), firms change prices at random. In both models the choice of changing prices is independent of the inflation rate. The Taylor model is one where firms set the price knowing exactly how long the price will last (the duration of the price spell). Firms are divided into cohorts, so that each period the same proportion of firms reset their price. For example, with two period price-spells, half of the firm reset their price each period. Thus the aggregate price level is an average of the new price set this period and the price set last period and still remaining for half of the firms. In general, if price-spells last for n periods, a proportion of 1/n firms reset their price each period and the general price is an average of the prices set now and in the preceding n-1 periods. At any point in time, there will be a uniform distribution of ages of price-spells: (1/n) will be new prices in their first period, 1/n in their second period, and so on until 1/n will be n periods old. The average age of price-spells will be (n+1)/2 (if you count the first period as 1). In the Calvo staggered contracts model, there is a constant probability h that the firm can set a new price. Thus a proportion h of firms can reset their price in any period, whilst the remaining proportion (1-h) keep their price constant. In the Calvo model, when a firm sets its price, it does not know how long the price-spell will last. Instead, the firm faces a probability distribution over possible price-spell durations. The probability that the price will last for i periods is (1-h)(i-1), and the expected duration is h-1. For example, if h=0.25, then a quarter of firms will rest their price each period, and the expected duration for the price-spell is 4. There is no upper limit to how long price-spells may last: although the probability becomes small over time, it is always strictly positive. Unlike the Taylor model where all completed price-spells have the same length, there will at any time be a distribution of completed price-spell lengths. In state-dependent models the decision to change prices is based on changes in the market and are not related to the passage of time. Most models relate the decision to change prices changes to menu costs. Firms change prices when the benefit of changing a price becomes larger than the menu cost of changing a price. Price changes may be bunched or staggered over time. Prices change faster and monetary shocks are over faster under state dependent than time. Examples of state-dependent models include the one proposed by Golosov and Lucas and one suggested by Dotsey, King and Wolman Significance in macroeconomics Sticky prices play an important role in Keynesian, macroeconomic theory, especially in new Keynesian thought. Keynesian macroeconomists suggest that markets fail to clear because prices fail to drop to market clearing levels when there is a drop in demand. Economists have also looked at sticky wages as an explanation for why there is unemployment. Huw Dixon and Claus Hansen showed that even if only part of the economy has sticky prices, this can influence prices in other sectors and lead to prices in the rest of the economy becoming less responsive to changes in demand. Thus price and wage stickiness in one sector can "spill over" and lead to the economy behaving in a more Keynesian way. 
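To make the Calvo assumption above concrete, here is a small Python simulation (not from the source) with a quarterly reset probability h = 0.25, as in the example in the text. It checks that the mean completed price-spell length comes out near 1/h and that spell lengths follow the geometric distribution (1−h)^(i−1)·h, which is how the flattened exponents in the text should be read.

```python
# Hedged sketch of the Calvo pricing assumption described above: each period a
# firm may reset its price with fixed probability h, so completed price-spell
# lengths are geometrically distributed with mean 1/h. Numbers are illustrative.
import random

random.seed(0)

def simulate_price_spells(h, n_spells=100_000):
    """Draw completed price-spell lengths: the price survives another period
    with probability (1 - h); the spell ends when the firm gets to reset."""
    lengths = []
    for _ in range(n_spells):
        length = 1
        while random.random() > h:        # price survives another period
            length += 1
        lengths.append(length)
    return lengths

if __name__ == "__main__":
    h = 0.25                              # quarterly reset probability, as in the text
    spells = simulate_price_spells(h)
    mean_len = sum(spells) / len(spells)
    print(f"simulated mean spell length: {mean_len:.2f} periods (theory: {1/h:.2f})")
    # The probability a price lasts exactly i periods should be (1-h)**(i-1) * h.
    for i in range(1, 6):
        empirical = spells.count(i) / len(spells)
        theory = (1 - h) ** (i - 1) * h
        print(f"  P(spell = {i}) ~ {empirical:.3f}   theory {theory:.3f}")
```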
Mathematical example: a little price stickiness can go a long way. To see how a small sector with a fixed price can affect the way rest of the flexible prices behave, suppose that there are two sectors in the economy: a proportion a with flexible prices Pf and a proportion 1-a that are affected by menu costs with sticky prices Pm. Suppose that the flexible price sector price Pf has the market clearing condition of the following form: where is the aggregate price index (which would result if consumers had Cobb-Douglas preferences over the two goods). The equilibrium condition says that the real flexible price equals some constant (for example could be real marginal cost). Now we have a remarkable result: no matter how small the menu cost sector, so long as a<1, the flexible prices get "pegged" to the fixed price. Using the aggregate price index the equilibrium condition becomes which implies that What this result says is that no matter how small the sector affected by menu-costs, it will tie down the flexible price. In macroeconomic terms all nominal prices will be sticky, even those in the potentially flexible price sector, so that changes in nominal demand will feed through into changes in output in both the meno-cost sector and the flexible price sector. Now, this is of course an extreme result resulting from the real rigidity taking the form of a constant real marginal cost. For example, if we allowed for the real marginal cost to vary with aggregate output Y, then we would have so that the flexible prices would vary with output Y. However, the presence of the fixed prices in the menu-cost sector would still act to dampen the responsiveness of the flexible prices, although this would now depend upon the size of the menu-cost sector a, the sensitivity of to Y and so on. Sticky information is a term used in macroeconomics to refer to the fact that agents at any particular time may be basing their behavior on information that is old and does not take into account recent events. The first model of Sticky information was developed by Stanley Fischer in his 1977 article. He adopted a "staggered" or "overlapping" contract model. Suppose that there are two unions in the economy, who take turns to choose wages. When it is a union's turn, it chooses the wages it will set for the next two periods. In contrast to John B. Taylor's model where the nominal wage is constant over the contract life, in Fischer's model the union can choose a different wage for each period over the contract. The key point is that at any time t, the union setting its new contract will be using the up to date latest information to choose its wages for the next two periods. However, the other union is still choosing its wage based on the contract it planned last period, which is based on the old information. The importance of sticky information in Fischer's model is that whilst wages in some sectors of the economy are reacting to the latest information, those in other sectors are not. This has important implications for monetary policy. A sudden change in monetary policy can have real effects, because of the sector where wages have not had a chance to adjust to the new information. The idea of Sticky information was later developed by N. Gregory Mankiw and Ricardo Reis. This added a new feature to Fischer's model: there is a fixed probability that you can replan your wages or prices each period. 
Using quarterly data, they assumed a value of 25%: that is, each quarter 25% of randomly chosen firms/unions can plan a trajectory of current and future prices based on current information. Thus if we consider the current period: 25% of prices will be based on the latest information available; the rest on information that was available when they last were able to replan their price trajectory. Mankiw and Reis found that the model of sticky information provided a good way of explaining inflation persistence. Evaluation of sticky information models Sticky information models do not have nominal rigidity: firms or unions are free to choose different prices or wages for each period. It is the information that is sticky, not the prices. Thus when a firm gets lucky and can re-plan its current and future prices, it will choose a trajectory of what it believes will be the optimal prices now and in the future. In general, this will involve setting a different price every period covered by the plan. This is at odds with the empirical evidence on prices, . There are now many studies of price rigidity in different countries: the US, the Eurozone, the UK and others. These studies all show that whilst there are some sectors where prices change frequently, there are also other sectors where prices remain fixed over time. The lack of sticky prices in the sticky information model is inconsistent with the behavior of prices in most of the economy. This has led to attempts to formulate a "dual Stickiness" model that combines sticky information with sticky prices. - Taylor, John B. (1980), “Aggregate Dynamics and Staggered Contracts,” Journal of Political Economy. 88(1), 1-23. - Calvo, Guillermo A. (1983), “Staggered Prices in a Utility-Maximizing Framework,” Journal of Monetary Economics. 12(3), 383-398. - Oleksiy Kryvtsov and Peter J. Klenow. "State-Dependent or Time-Dependent Pricing: Does It Matter For Recent U.S. Inflation?" The Quarterly Journal of Economics, MIT Press, vol. 123(3), pages 863-904, August. - Mikhail Golosov & Robert E. Lucas Jr., 2007. "Menu Costs and Phillips Curves," Journal of Political Economy, University of Chicago Press, vol. 115, pages 171-199. - Dotsey M, King R, Wolman A State-Dependent Pricing And The General Equilibrium Dynamics Of Money And Output, Quarterly Journal of Economics, volume 114, pages 655-690. - Dixon, Huw and Hansen, Claus A mixed industrial structure magnifies the importance of menu costs, European Economic Review, 1999, pages 1475–1499. - Dixon, Huw Nominal wage flexibility in a partly unionised economy, The Manchester School of Economic and Social Studies, 1992, 60, 295-306. - Dixon, Huw Macroeconomic Price and Quantity responses with heterogeneous Product Markets, Oxford Economic Papers, 1994, vol. 46(3), pages 385-402, July. - Dixon (1992), Proposition 1 page 301 - Fischer, S. (1977): “Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule,” Journal of Political Economy, 85(1), 191–205. - Mankiw, N.G. and R. Reis (2002) "Sticky Information Versus Sticky Prices: A Proposal To Replace The New Keynesian Phillips Curve," Quarterly Journal of Economics, 117(4), 1295–1328 - V. V. Chari, Patrick J. Kehoe, Ellen R. McGrattan (2008), New Keynesian Models: Not Yet Useful for Policy Analysis, Federal Reserve Bank of Minneapolis Research Department Staff Report 409 - Edward S. Knotec II. (2010), A Tale of Two Rigidities: Sticky Prices in a Sticky-Information Environment. Journal of Money, Credit and Banking 42:8, 1543–1564 - Peter J. 
Klenow & Oleksiy Kryvtsov, 2008. "State-Dependent or Time-Dependent Pricing: Does It Matter for Recent U.S. Inflation?," The Quarterly Journal of Economics, MIT Press, vol. 123(3), pages 863-904, - Luis J. Álvarez & Emmanuel Dhyne & Marco Hoeberichts & Claudia Kwapil & Hervé Le Bihan & Patrick Lünnemann & Fernando Martins & Roberto Sabbatini & Harald Stahl & Philip Vermeulen & Jouko Vilmunen, 2006. "Sticky Prices in the Euro Area: A Summary of New Micro-Evidence," Journal of the European Economic Association, MIT Press, vol. 4(2-3), pages 575-584, - Philip Bunn & Colin Ellis, 2012. "Examining The Behaviour Of Individual UK Consumer Prices," Economic Journal, Royal Economic Society, vol. 122(558), pages F35-F55 - Knotec (2010) - Bill Dupor, Tomiyuki Kitamura, Takayuki Tsuruga, Integrating Sticky Prices and Sticky Information, Review of Economics and Statistics, August 2010, Vol. 92, No. 3, Pages 657-669 - Arrow, Kenneth J.; Hahn, Frank H. (1973). General competitive analysis. Advanced textbooks in economics 12 (1980 reprint of (1971) San Francisco, CA: Holden-Day, Inc. Mathematical economics texts. 6 ed.). Amsterdam: North-Holland. ISBN 0-444-85497-5. MR 439057. - Fisher, F. M. (1983). Disequilibrium foundations of equilibrium economics. Econometric Society Monographs (1989 paperback ed.). New York: Cambridge University Press. p. 248. ISBN 978-0-521-37856-7. - Gale, Douglas (1982). Money: in equilibrium. Cambridge economic handbooks 2. Cambridge, U.K.: Cambridge University Press. p. 349. ISBN 978-0-521-28900-9. - Gale, Douglas (1983). Money: in disequilibrium. Cambridge economic handbooks. Cambridge, U.K.: Cambridge University Press. p. 382. ISBN 978-0-521-26917-9. - Grandmont, Jean-Michel (1985). Money and value: A reconsideration of classical and neoclassical monetary economics. Econometric Society Monographs 5. Cambridge University Press. p. 212. ISBN 978-0-521-31364-3. MR 934017. - Grandmont, Jean-Michel, ed. (1988). Temporary equilibrium: Selected readings. Economic Theory, Econometrics, and Mathematical Economics. Academic Press. p. 512. ISBN 0-12-295146-8, ISBN 978-0-12-295146-6 Check |isbn=value (help). MR 987252. - Herschel I. Grossman, 1987.“monetary disequilibrium and market clearing” in The New Palgrave: A Dictionary of Economics, v. 3, pp. 504–06. - The New Palgrave Dictionary of Economics, 2008, 2nd Edition. Abstracts: - "monetary overhang" by Holger C. Wolf. - "non-clearing markets in general equilibrium" by Jean-Pascal Bénassy. - "fixprice models" by Joaquim Silvestre. "inflation dynamics" by Timothy Cogley. - "temporary equilibrium" by J.-M. Grandmont.
http://en.m.wikipedia.org/wiki/Sticky_(economics)
13
11
Mathematical Notation and Schools – 11 Expressions and Formulas: A Larger Deviation from the Standard Notation In this series, we’ve done a review of mathematical notation, with an eye on how each notation helps or hinders student learning. The focus here is modest: on what teachers can do, even if their textbook sticks to the standard notation, to help disambiguate the standard notation for students. In the last several posts, we’ve been looking at notations for expressions. In the previous post, I reviewed slight variations on the standard notation for expressions; in this post I’ll offer a more drastic variation. This drastic variation doesn’t exhaust the subject – we’ll return for more later. Let’s start with an expression given in standard notation: , and remind ourselves that its meaning is derived from the sequence of operations that are indicated: some values are to be multiplied, others added, etc. This same meaning is expressed above in the form of a tree, and this tree is a particular kind of notation for the expression. (Some useful vocabulary: the tree is said to have nodes and branches. One node is called the root, and is shown at the bottom. Other nodes are called leaf nodes and represent the starting points: e.g. 3, the value of a. The remaining nodes are called internal nodes, and represent operations.) Here, “*” represents multiplication of two numbers, and “+” represents addition of two numbers. Somewhere along the way, students become familiar with the idea that the operations of addition and multiplication are special compared to subtraction, division and exponentiation: when you add a bunch of numbers, it doesn’t matter in which order the addition is done. If I need to add a and b and c, I can add a + b and then add c to the intermediate result; or I can add b+c first and then add a to it; or I could add c+a and then add b to it. In formal terms, addition is both commutative and associative. Multiplication is commutative and associative as well. Recognizing the special role of multiplication and addition allows for a very useful re-writing of the tree shown above, as follows: This notation can peacefully coexist with the standard notation in the textbooks. Students get the pictorial form without any trouble, and can manipulate the expression trees with ease. They tend to think of the internal nodes as little machines that are sitting there waiting for inputs, and then producing an output. I intend to come back to this notational scheme and other variations in future posts – I want to take an excursion first to look at notation for functions: I think this will prove very fruitful.
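Since the post's own example expression did not survive the page extraction, the short Python sketch below uses 2·a + 3·b + 4 purely as a stand-in to show the tree idea: leaves hold values, internal nodes are the little "machines" waiting for inputs, and the "+" and "*" nodes are allowed any number of children, reflecting the commutativity and associativity point made above.

```python
# Hedged sketch of the expression-tree notation described in the post. The
# post's own example expression is not available, so the tree below encodes
# 2*a + 3*b + 4 as a stand-in. '+' and '*' nodes are n-ary, reflecting the
# point that the order of addition or multiplication does not matter.
from math import prod

class Node:
    """An internal node: an operation waiting for the values of its children."""
    def __init__(self, op, *children):
        self.op = op                  # '+' or '*'
        self.children = children      # sub-trees or leaves (numbers / variable names)

def evaluate(tree, env):
    """Evaluate a tree whose leaves are numbers or variable names."""
    if isinstance(tree, Node):
        values = [evaluate(child, env) for child in tree.children]
        return sum(values) if tree.op == '+' else prod(values)
    if isinstance(tree, str):         # a leaf naming a variable, e.g. 'a'
        return env[tree]
    return tree                       # a numeric leaf

# Root is '+', with three branches: two '*' machines and the leaf 4.
expr = Node('+', Node('*', 2, 'a'), Node('*', 3, 'b'), 4)
print(evaluate(expr, {'a': 3, 'b': 5}))   # 2*3 + 3*5 + 4 = 25
```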
http://unlearningmath.com/2011/06/29/mathematical-notation-and-schools-11/
13
10
A team of astronomers at the American space agency recently combined data from two space telescopes to produce one of the most amazing images ever captured of the renowned Helix Nebula. The cosmic structure is located around 650 light-years away from Earth, in the constellation of Aquarius, and is known among astronomers as NGC 7293. Officially, it is cataloged as a planetary nebula, a name given by early astronomers whose telescopes showed these objects as small, planet-like disks. In order to capture this image, experts used infrared data from the NASA Spitzer Space Telescope, which is operated by the NASA Jet Propulsion Laboratory (JPL), and the Galaxy Evolution Explorer (GALEX), which is currently operated by the California Institute of Technology (Caltech). The progenitor star for this nebula is currently dying, astronomers say. As a result, it is shedding the outer layers of its atmosphere, producing a dusty shell that is heavily irradiated, at ultraviolet wavelengths, by the enduring, hot stellar core. The latter is known as a white dwarf. The Sun will reach the same state in about 5 billion years or so. White dwarfs are the cores of former stars that have exhausted their hydrogen fuel supplies and switched to burning helium, an element that is itself a byproduct of nuclear fusion: hydrogen nuclei fuse at the cores of stars to create helium and vast amounts of energy. When a star around the size of the Sun dies, it switches from one fuel to the other, and the leftover core can endure as a white dwarf for more than 3 billion years. Our own parent star will produce a planetary nebula when it reaches the end of its life, after leaving the main sequence. It will begin to burn helium and convert it into heavier elements, including nitrogen, carbon and oxygen. “Eventually, the helium will also be exhausted, and the star dies, puffing off its outer gaseous layers and leaving behind the tiny, hot, dense core [… which] is about the size of Earth, but has a mass very close to that of the original star; in fact, a teaspoon of [material] would weigh as much as a few elephants!” NASA reports. The background of this image was collected by the NASA Wide-field Infrared Survey Explorer (WISE) spacecraft, which produced a map of the Universe at IR wavelengths. “The white dwarf star itself is a tiny white pinprick right at the center of the nebula,” NASA experts conclude.
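As a rough, back-of-envelope check of the "teaspoon of material" quote above (not a calculation from the article), the Python sketch below packs one solar mass into an Earth-sized sphere and weighs five millilitres of the result; the solar mass, Earth radius, teaspoon volume and elephant mass are all assumed round numbers.

```python
# Hedged back-of-envelope check of the quote above: pack roughly one solar
# mass into an Earth-sized sphere and weigh a teaspoon of it. All the input
# numbers are rough assumptions, not figures from the article.
import math

M_SUN = 1.99e30          # kg
R_EARTH = 6.37e6         # m
TEASPOON = 5e-6          # m^3 (about 5 mL)
ELEPHANT = 5000.0        # kg, a rough adult-elephant mass

volume = (4.0 / 3.0) * math.pi * R_EARTH**3
density = M_SUN / volume                 # roughly 1e9 kg/m^3
teaspoon_mass = density * TEASPOON

print(f"white-dwarf density ~ {density:.1e} kg/m^3")
print(f"one teaspoon        ~ {teaspoon_mass / 1000:.0f} tonnes "
      f"(~{teaspoon_mass / ELEPHANT:.0f} elephants)")
```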
http://news.softpedia.com/news/NASA-Releases-Amazing-Composite-Image-of-the-Helix-Nebula-296798.shtml
13
23
An introduction to the access technologies that allow multiple users to share a common communications channel. Access methods are multiplexing techniques that provide communications services to multiple users in a single-bandwidth wired or wireless medium. Communications channels, whether they’re wireless spectrum segments or cable connections, are expensive. Communications services providers must engage multiple paid users over limited resources to make a profit. Access methods allow many users to share these limited channels to provide the economy of scale necessary for a successful communications business. There are five basic access or multiplexing methods: frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), orthogonal frequency division multiple access (OFDMA), and spatial division multiple access (SDMA). Table Of Contents FDMA is the process of dividing one channel or bandwidth into multiple individual bands, each for use by a single user (Fig. 1). Each individual band or channel is wide enough to accommodate the signal spectra of the transmissions to be propagated. The data to be transmitted is modulated on to each subcarrier, and all of them are linearly mixed together. 1. FDMA divides the shared medium bandwidth into individual channels. Subcarriers modulated by the information to be transmitted occupy each subchannel. The best example of this is the cable television system. The medium is a single coax cable that is used to broadcast hundreds of channels of video/audio programming to homes. The coax cable has a useful bandwidth from about 4 MHz to 1 GHz. This bandwidth is divided up into 6-MHz wide channels. Initially, one TV station or channel used a single 6-MHz band. But with digital techniques, multiple TV channels may share a single band today thanks to compression and multiplexing techniques used in each channel. This technique is also used in fiber optic communications systems. A single fiber optic cable has enormous bandwidth that can be subdivided to provide FDMA. Different data or information sources are each assigned a different light frequency for transmission. Light generally isn’t referred to by frequency but by its wavelength (λ). As a result, fiber optic FDMA is called wavelength division multiple access (WDMA) or just wavelength division multiplexing (WDM). One of the older FDMA systems is the original analog telephone system, which used a hierarchy of frequency multiplex techniques to put multiple telephone calls on single line. The analog 300-Hz to 3400-Hz voice signals were used to modulate subcarriers in 12 channels from 60 kHz to 108 kHz. Modulator/mixers created single sideband (SSB) signals, both upper and lower sidebands. These subcarriers were then further frequency multiplexed on subcarriers in the 312-kHz to 552-kHz range using the same modulation methods. At the receiving end of the system, the signals were sorted out and recovered with filters and demodulators. Original aerospace telemetry systems used an FDMA system to accommodate multiple sensor data on a single radio channel. Early satellite systems shared individual 36-MHz bandwidth transponders in the 4-GHz to 6-GHz range with multiple voice, video, or data signals via FDMA. Today, all of these applications use TDMA digital techniques. TDMA is a digital technique that divides a single channel or band into time slots. 
Each time slot is used to transmit one byte or another digital segment of each signal in sequential serial data format. This technique works well with slow voice data signals, but it’s also useful for compressed video and other high-speed data. A good example is the widely used T1 transmission system, which has been used for years in the telecom industry. T1 lines carry up to 24 individual voice telephone calls on a single line (Fig. 2). Each voice signal usually covers 300 Hz to 3000 Hz and is digitized at an 8-kHz rate, which is just a bit more than the minimal Nyquist rate of two times the highest-frequency component needed to retain all the analog content. 2. This T1 digital telephony frame illustrates TDM and TDMA. Each time slot is allocated to one user. The high data rate makes the user unaware of the lack of simultaneity. The digitized voice appears as individual serial bytes that occur at a 64-kHz rate, and 24 of these bytes are interleaved, producing one T1 frame of data. The frame occurs at a 1.536-MHz rate (24 by 64 kHz) for a total of 192 bits. A single synchronizing bit is added for timing purposes for an overall data rate of 1.544 Mbits/s. At the receiving end, the individual voice bytes are recovered at the 64-kHz rate and passed through a digital-to-analog converter (DAC) that reproduces the analog voice. The basic GSM (Global System of Mobile Communications) cellular phone system is TDMA-based. It divides up the radio spectrum into 200-kHz bands and then uses time division techniques to put eight voice calls into one channel. Figure 3 shows one frame of a GSM TDMA signal. The eight time slots can be voice signals or data such as texts or e-mails. The frame is transmitted at a 270-kbit/s rate using Gaussian minimum shift keying (GMSK), which is a form of frequency shift keying (FSK) modulation. 3. This GSM digital cellular method shows how up to eight users can share a 200-kHz channel in different time slots within a frame of 1248 bits. CDMA is another pure digital technique. It is also known as spread spectrum because it takes the digitized version of an analog signal and spreads it out over a wider bandwidth at a lower power level. This method is called direct sequence spread spectrum (DSSS) as well (Fig. 4). The digitized and compressed voice signal in serial data form is spread by processing it in an XOR circuit along with a chipping signal at a much higher frequency. In the cdma IS-95 standard, a 1.2288-Mbit/s chipping signal spreads the digitized compressed voice at 13 kbits/s. 4. Spread spectrum is the technique of CDMA. The compressed and digitized voice signal is processed in an XOR logic circuit along with a higher-frequency coded chipping signal. The result is that the digital voice is spread over a much wider bandwidth that can be shared with other users using different codes. The chipping signal is derived from a pseudorandom code generator that assigns a unique code to each user of the channel. This code spreads the voice signal over a bandwidth of 1.25 MHz. The resulting signal is at a low power level and appears more like noise. Many such signals can occupy the same channel simultaneously. For example, using 64 unique chipping codes allows up to 64 users to occupy the same 1.25-MHz channel at the same time. At the receiver, a correlating circuit finds and identifies a specific caller’s code and recovers it. 
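The following Python sketch illustrates the direct-sequence spreading idea described above using two toy 8-chip orthogonal codes; these are stand-ins chosen for clarity, not the actual Walsh or PN codes used in IS-95 or WCDMA, and the "channel" is an idealized noiseless sum of the two users' chip streams.

```python
# Hedged sketch of direct-sequence spreading as described above. The two
# 8-chip codes are toy orthogonal sequences chosen for illustration only.

CODE_A = [+1, +1, +1, +1, -1, -1, -1, -1]
CODE_B = [+1, -1, +1, -1, +1, -1, +1, -1]   # orthogonal to CODE_A

def spread(bits, code):
    """Spread each data bit (0/1 mapped to -1/+1) over len(code) chips.
    With +/-1 values, chip-wise multiplication plays the role of the XOR."""
    chips = []
    for bit in bits:
        symbol = 1 if bit else -1
        chips.extend(symbol * chip for chip in code)
    return chips

def despread(chips, code):
    """Correlate the received chips with one user's code, one bit at a time."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        correlation = sum(r * c for r, c in zip(chips[i:i + n], code))
        bits.append(1 if correlation > 0 else 0)
    return bits

if __name__ == "__main__":
    data_a = [1, 0, 1, 1]
    data_b = [0, 0, 1, 0]
    # Both users transmit at the same time in the same band: the channel
    # simply adds their chip streams together.
    channel = [a + b for a, b in zip(spread(data_a, CODE_A), spread(data_b, CODE_B))]
    print("user A recovered:", despread(channel, CODE_A))
    print("user B recovered:", despread(channel, CODE_B))
```

Because the codes are orthogonal, correlating against one user's code cancels the other user's contribution, which is the essence of how many spread signals can share one channel.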
The third generation (3G) cell-phone technology called wideband CDMA (WCDMA) uses a similar method with compressed voice and 3.84-Mbit/s chipping codes in a 5-MHz channel to allow multiple users to share the same band. OFDMA is the access technique used in Long-Term Evolution (LTE) cellular systems to accommodate multiple users in a given bandwidth. Orthogonal frequency division multiplexing (OFDM) is a modulation method that divides a channel into multiple narrow orthogonal bands that are spaced so they don’t interfere with one another. Each band is divided into hundreds or even thousands of 15-kHz wide subcarriers. The data to be transmitted is divided into many lower-speed bit streams and modulated onto the subcarriers. Time slots within each subchannel data stream are used to package the data to be transmitted (Fig. 5). This technique is very spectrally efficient, so it provides very high data rates. It also is less affected by multipath propagation effects. 5. OFDMA assigns a group of subcarriers to each user. The subcarriers are part of the large number of subcarriers used to implement OFDM for LTE. The data may be voice, video, or something else, and it’s assembled into time segments that are then transmitted over some of the assigned subcarriers. To implement OFDMA, each user is assigned a group of subchannels and related time slots. The smallest group of subchannels assigned is 12 and called a resource block (RB). The system assigns the number of RBs to each user as needed. SDMA uses physical separation methods that permit the sharing of wireless channels. For instance, a single channel may be used simultaneously if the users are spaced far enough from one another to avoid interference. Known as frequency reuse, the method is widely used in cellular radio systems. Cell sites are spaced from one another to minimize interference. In addition to spacing, directional antennas are used to avoid interference. Most cell sites use three antennas to create 120° sectors that allow frequency sharing (Fig. 6a). New technologies like smart antennas or adaptive arrays use dynamic beamforming to shrink signals into narrow beams that can be focused on specific users, excluding all others (Fig. 6b). 6. SDMA separates users on shared frequencies by isolating them with directional antennas. Most cell sites have three antenna arrays to separate their coverage into isolated 120° sectors (a). Adaptive arrays use beamforming to pinpoint desired users while ignoring any others on the same frequency (b). One unique variation of SDMA, polarization division multiple access (PDMA), separates signals by using different polarizations of the antennas. Two different signals then can use the same frequency, one transmitting a vertically polarized signal and the other transmitting a horizontally polarized signal. The signals won’t interfere with one another even if they’re on the same frequency because they’re orthogonal and the antennas won’t respond to the oppositely polarized signal. Separate vertical and horizontal receiver antennas are used to recover the two orthogonal signals. This technique is widely used in satellite systems. Polarization is also used for multiplexing in fiber optic systems. The new 100-Gbit/s systems use dual polarization quadrature phase shift keying (DP-QPSK) to achieve high speeds on a single fiber. The high-speed data is divided into two slower data streams, one using vertical light polarization and the other horizontal light polarization. 
Polarization filters separate the two signals at the transmitter and receiver and merge them back into the high-speed stream. A unique and widely used method of multiple access is carrier sense multiple access with collision detection (CSMA-CD). This is the classical access method used in Ethernet local-area networks (LANs). It allows multiple users of the network to access the single cable for transmission. All network nodes listen continuously. When they want to send data, they listen first and then transmit if no other signals are on the line. For instance, the transmission will be one packet or frame. Then the process repeats. If two or more transmissions occur simultaneously, a collision occurs. The network interface circuitry can detect a collision, and then the nodes will wait a random time before retransmitting. A variation of this method is called carrier sense multiple access with collision avoidance (CSMA-CA). This method is similar to CSMA-CD. However, a special scheduling algorithm is used to determine the appropriate time to transmit over the shared channel. While the CSMA-CD technique is most used in wired networks, CSMA-CA is the preferred method in wireless networks. - Frenzel, Louis E., Principles of Electronic Communication Systems, 3rd Edition, McGraw Hill, 2008. - Gibson, Jerry D., Editor, The Communications Handbook, CRC Press, 1997. - Skylar, Bernard, Digital Communications, 2nd Edition, Prentice Hall, 2001. - Tomasi, Wayne, Advanced Electronic Communications Systems, 4th Edition, Prentice Hall, 1998.
http://electronicdesign.com/communications/fundamentals-communications-access-technologies-fdma-tdma-cdma-ofdma-and-sdma
13
20
Computer software is developed either to automate tasks or to solve problems. Either way, a piece of software achieves its goal through the logic that its developer writes. That logic needs common services such as computing the length of a string or opening a file. Standard services like these are provided by functions or calls that exist for exactly this purpose: for calculating string length there is a standard function like strlen(), and for opening a file there are functions like open() and fopen(). We call these standard functions because any application can use them. These standard functions fall into two major categories:
- Library function calls.
- System function calls.
In this article, we will discuss the concepts behind system and library calls as a series of points and, wherever required, point out the differences between the two.
1. Library functions vs system calls
The functions which are part of the standard C library are known as library functions. For example, the standard string manipulation functions like strcmp() and strlen() are all library functions. The functions which change the execution mode of the program from user mode to kernel mode are known as system calls. These calls are required whenever the program needs a service from the kernel. For example, changing the system date and time or creating a network socket are services only the kernel can provide, so these cases require system calls; socket() is a system call.
2. Why do we need system calls?
System calls act as the entry point to the OS kernel. Certain tasks, such as interacting with hardware, can only be performed while a process is running in kernel mode. So if a process wants to perform such a task, it needs to be running in kernel mode, which is made possible by system calls.
3. Types of library functions
Library functions can be of two types:
- Functions which do not make any system call.
- Functions that make a system call.
There are library functions that do not make any system call; the string manipulation functions like strlen() fall under this category. There are also library functions that in turn make system calls, for example the fopen() function, which is a standard library function but internally uses the open() system call.
4. Interaction between components
The original article includes a diagram showing how library functions, system calls and application code interact: application code can call either library functions or system calls, and a library function may itself call a system function, but only system calls have access to the kernel, which in turn can access the computer hardware.
5. fopen() vs open()
Some of us may ask why there are two functions for the same operation, i.e. opening a file. The answer is that fopen() is a library function which provides buffered I/O services for opening a file, while open() is a system call that provides non-buffered I/O services. Although open() is also available for applications to use, applications should avoid calling it directly.
In general, if a library function corresponding to a system call exists, then applications should use the library function, because:
- Library functions are portable, which means an application using standard library functions will run on all systems, while an application relying on the corresponding system call may not run on every system, since the system-call interface can vary from system to system.
- Sometimes the corresponding library function reduces the load on the system call, resulting in less frequent switches from user mode to kernel mode. For example, if an application reads data from a file very frequently, then using fread() instead of read() provides buffered I/O, which means that not every call to fread() results in a call to the system call read(). A single fread() may read a larger chunk of data (more than the user asked for) in one go, so subsequent fread() calls will not require a call to the system function read().
6. Is malloc() a system call?
This is a very popular misconception. Let's make it clear that malloc() is not a system call. The function call malloc() is a library function call that in turn uses the brk() or sbrk() system call for memory allocation.
7. System calls: switching execution modes
Traditionally, the mechanism was to raise an interrupt ('int $0x80') to the kernel. After trapping the interrupt, the kernel processes it and changes the execution mode from user to kernel mode. Today, the sysenter/sysexit instructions are used for switching the execution mode.
8. Some other differences
Besides all of the above, here are a few more differences between a system call and a library call:
- A library function is linked to the user program and executes in user space, while a system call is not linked to the user program and executes in kernel space.
- A library function's execution time is counted as user-level time, while a system call's execution time is counted as part of system time.
- Library functions can be debugged easily using a debugger, while system calls cannot be debugged the same way because they are executed by the kernel.
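The article is about the C library, but the same buffered-versus-unbuffered distinction can be sketched in Python, where the built-in open() plays the role of fopen()/fread() and os.open()/os.read() are thin wrappers over the corresponding system calls. The sketch below is an analogy rather than the article's own example; running it under strace would show the buffered version issuing far fewer read() system calls.

```python
# Hedged Python analogy to the fopen()/open() distinction above: the built-in
# open() goes through a buffered layer (like fopen/fread in C), while
# os.open()/os.read() are thin wrappers around the open/read system calls.
# Try running under `strace -e trace=read python3 this_script.py` to compare
# how many read() system calls each loop actually generates.
import os
import tempfile

# Make a small scratch file to read back.
payload = b"x" * 100_000
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

# Buffered: we ask for 100 bytes at a time, but the buffered layer pulls data
# from the kernel in much larger chunks behind the scenes.
buffered_requests = 0
with open(path, "rb") as f:
    while f.read(100):
        buffered_requests += 1

# Unbuffered: every os.read() here is (essentially) one read() system call.
fd = os.open(path, os.O_RDONLY)
syscall_reads = 0
while os.read(fd, 100):
    syscall_reads += 1
os.close(fd)
os.unlink(path)

print(f"buffered layer satisfied {buffered_requests} read(100) requests")
print(f"unbuffered loop issued   {syscall_reads} os.read(100) calls")
```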
http://www.thegeekstuff.com/2012/07/system-calls-library-functions/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheGeekStuff+%28The+Geek+Stuff%29
13
15
National Strategies Secondary Maths Collection - ICT Supporting Mathematics: Geometry and Measures Collection Author: Craig Barton - Maths AST and creator of www.mrbartonmaths.com (TES Name: mrbartonmaths) The teaching of geometry has been revolutionised by the widespread availability of dynamic geometry software, which allows us to explore constructions, transformations, loci, measures and geometrical reasoning in new and dynamic ways. Static diagrams can be replaced by models that move and allow pupils to explore and deepen their understanding of the principles governing the movements. There are many commercially available materials that support the teaching of geometry, and we have also provided in this strand some free to use ready-prepared activities. However, it is important to remember that pupils also need to learn how to use geometry software as one of their ICT tools to help solve problems. 1. Transformations and Co-ordinates Dynamic geometry software has the power to create moving models and images to help pupils gain an understanding of transformations. Some spreadsheets and graphing software also have the capacity to demonstrate movement, sometimes in a simpler more accessible form. On the following pages are some examples of geometry on a spreadsheet, with lesson notes and guidance on their use. - The resource consists of a spreadsheet that models rotation, enlargement, reflection and translation within a pair of axes using all four quadrants. Each transformation is dynamic and the initial image can change size and position. - In this exercise pupils are encouraged to explore some strange maps of the world that are drawn in a ‘proportional’ style. Pupils then go on to draw a map of the UK based on time taken to make journeys from London to other cities rather than on distances. 2. Measures and Mensuration The ability to manipulate shapes using dynamic geometry software can help pupils explore relationships between shapes and their areas. For example, in one of the examples provided, a parallelogram can be changed to form a rectangle while conserving its area, enabling pupils to understand the connection between the formula for the area of a rectangle and the area of a parallelogram. - These resources, provided in two formats of dynamic geometry software, allow teachers to help pupils develop an understanding for the formulae of areas of shapes such as parallelograms. - This resource uses dynamic geometry to look at the geometric properties of a penny-farthing. The associated worksheet asks questions regarding circumference, scale factors and arcs and touches on the tangent-radius property.
http://www.tes.co.uk/article.aspx?storyCode=6088345
13
12
Oct. 17, 2008 About three times a second, a 10,000-year-old stellar corpse sweeps a beam of gamma-rays toward Earth. This object, known as a pulsar, is the first one known to "blink" only in gamma rays, and was discovered by the Large Area Telescope (LAT) onboard NASA's Fermi Gamma-ray Space Telescope, a collaboration with the U.S. Department of Energy (DOE) and international partners. "This is the first example of a new class of pulsars that will give us fundamental insights into how stars work," says Stanford University's Peter Michelson, principal investigator for the LAT. The LAT data is processed by the DOE's Stanford Linear Accelerator Center and analyzed by the International LAT Collaboration. The gamma-ray-only pulsar lies within a supernova remnant known as CTA 1, which is located about 4,600 light-years away in the constellation Cepheus. Its lighthouse-like beam sweeps Earth's way every 316.86 milliseconds and emits 1,000 times the energy of our sun. These results appear in the Oct. 16 edition of Science Express. A pulsar is a rapidly spinning neutron star, the crushed core left behind when a massive sun explodes. Astronomers have cataloged nearly 1,800 pulsars. Although most were found through their pulses at radio wavelengths, some of these objects also beam energy in other forms, including visible light and X-rays. Unlike previously discovered pulsars, the source in CTA 1 appears to blink only in gamma-ray energies, offering researchers a new way to study the stars in our universe. Scientists think CTA 1 is only the first of a large population of similar objects. "The LAT provides us with a unique probe of the galaxy's pulsar population, revealing objects we would not otherwise even know exist," says Fermi Gamma-ray Space Telescope Project Scientist Steve Ritz, at NASA's Goddard Space Flight Center in Greenbelt, Md. The pulsar in CTA 1 is not located at the center of the remnant's expanding gaseous shell. Supernova explosions can be asymmetrical, often imparting a "kick" that sends the neutron star careening through space. Based on the remnant's age and the pulsar's distance from its center, astronomers believe the neutron star is moving at about a million miles per hour--a typical speed. The LAT scans the entire sky every 3 hours and detects photons with energies ranging from 20 million to over 300 billion times the energy of visible light. The instrument sees about one gamma ray each minute from CTA 1. That's enough for scientists to piece together the neutron star's pulsing behavior, its rotation period, and the rate at which it's slowing down. A pulsar's beams arise because neutron stars possess intense magnetic fields and rotate rapidly. Charged particles stream outward from the star's magnetic poles at nearly the speed of light to create the gamma-ray beams the telescope sees. Because the beams are powered by the neutron star's rotation, they gradually slow the pulsar's spin. In the case of CTA 1, the rotation period is increasing by about one second every 87,000 years. This measurement is also vital to understanding the dynamics of the pulsar's behavior and can be used to estimate the pulsar's age. From the slowing period, researchers have determined that the pulsar is actually powering all the activity in the nebula where it resides. "This observation shows the power of the LAT," Michelson says. "It is so sensitive that we can now discover new types of objects just by observing their gamma-ray emissions." 
NASA's Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership, developed in collaboration with the U.S. Department of Energy, along with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States.
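The spin-down figure quoted above can be turned into numbers directly. The sketch below uses only the article's values (a 316.86-millisecond period slowing by about one second every 87,000 years) plus the textbook characteristic-age estimate tau = P/(2*Pdot), which is not taken from the article; it lands in the same ten-thousand-year range as the remnant's quoted age.

```cpp
// Back-of-the-envelope spin-down arithmetic for the CTA 1 pulsar.
#include <cstdio>

int main() {
    const double P    = 0.31686;              // rotation period [s] (from the article)
    const double yr   = 3.156e7;              // seconds per year
    const double Pdot = 1.0 / (87000 * yr);   // "one second every 87,000 years"

    const double tau_s  = P / (2.0 * Pdot);   // textbook characteristic age [s]
    const double tau_yr = tau_s / yr;

    std::printf("Pdot  ~ %.2e s/s\n", Pdot);     // ~3.6e-13 seconds per second
    std::printf("tau_c ~ %.0f years\n", tau_yr); // ~1.4e4 yr, same order as the
                                                 // ~10,000-year age of the remnant
    return 0;
}
```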
http://www.sciencedaily.com/releases/2008/10/081016141421.htm
13
10
103: Introduction to Logic Syllogistic Fallacies: Exclusive Premisses Abstract: The Fallacy of Two Negative Premisses or Exclusive Premisses is illustrated and explained. I. We continue our study of fallacies with a fifth fallacy. Consider the following argument: "No internal combustion engines are nonpolluting power plants, and no nonpolluting power plants are safe devices. Therefore, no internal combustion engines are safe devices." A. First, let's put the argument in standard form: No [nonpolluting power plants] are [safe devices]. No [internal combustion engines] are [nonpolluting power plants]. Therefore, no [internal combustion engines] are [safe devices]. 1. The Venn diagram shows this argument to be invalid. 2. Note that both premisses are negative. As most people are intuitively aware, statements about what a thing is not do not carry much information about what that thing is. If I say I am thinking of something that is not a tree, you would not know very much about what I am thinking. 3. By referring to the mnemonic of the mechanism of the syllogism sketched here, we can surmise that the basis of the syllogism is captured by noting that two things related to the same thing should be somehow related to each other, if at least one of them is totally related. 4. However, when both premisses are negative, our mnemonic shows the classes are not related in some way to each other, and this information is of no use in seeing how the terms in the conclusion are related. This state of affairs can be illustrated as follows. B. This Rule of Quality states that no standard-form syllogism with two negative premisses is valid. 1. The fallacy is called either the Fallacy of Exclusive Premisses or the Fallacy of Two Negative Premisses. 2. Reason: When a syllogism has exclusive premisses, all that is being asserted is that S is wholly or partially excluded from part or all of the M class, and likewise for the P class; but this tells us nothing about how the S class is related to the P class, so no conclusion about S and P follows. 3. Note that you can detect the fallacy of Exclusive Premisses merely by inspecting the mood of the syllogism. Test yourself on the following examples.
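One concrete way to see why exclusive premisses prove nothing is to hunt for a counterexample: an assignment of members to the S, M, and P classes that makes both negative premisses true while the conclusion "No S are P" is false. The brute-force sketch below is illustrative only; the encoding of classes as bitmask subsets of a two-element universe is my own device, not part of the original page.

```cpp
// Brute-force counterexample search for the form:
//   No M are P.  No S are M.  Therefore, no S are P.
// Each class is a subset of a 2-element universe {0, 1}, encoded as a
// 2-bit mask. If both premisses can be true while the conclusion is
// false, the form is invalid.
#include <cstdio>

bool noOverlap(unsigned a, unsigned b) { return (a & b) == 0; }

int main() {
    for (unsigned S = 0; S < 4; ++S)
        for (unsigned M = 0; M < 4; ++M)
            for (unsigned P = 0; P < 4; ++P) {
                bool premiss1   = noOverlap(M, P);  // No M are P
                bool premiss2   = noOverlap(S, M);  // No S are M
                bool conclusion = noOverlap(S, P);  // No S are P
                if (premiss1 && premiss2 && !conclusion) {
                    // e.g. S = {0}, M = {1}, P = {0}: both premisses hold,
                    // yet some S are P, so the form is invalid.
                    std::printf("Counterexample: S=%u M=%u P=%u (bitmask sets)\n", S, M, P);
                    return 0;
                }
            }
    std::printf("No counterexample found (the form would be valid).\n");
    return 0;
}
```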
http://philosophy.lander.edu/logic/exclusive_fall.html
13
21
The genome of an organism is the sum total of its genetic information. The genome is not only a blueprint for the organism; it also contains historical notes on the evolution of the organism. The ability to determine the sequence of deoxyribonucleic acid (DNA) and thus read the messages in the genome is of immense biological importance because it not only describes the organism in detail but also indicates its evolutionary history. DNA is a linear chain of four nucleotides: adenosine (A), thymidine (T), cytidine (C), and guanosine (G). The genetic information in DNA is encoded in the sequence of these nucleotides much like the information in a word is encoded in a sequence of letters. The technique for determining the sequence of nucleotides in DNA is based on the same mechanism by which DNA is replicated in the cell. DNA is composed of two complementary strands in which the As of one strand are paired with the Ts of the complementary strand and the Cs of one strand are paired with the Gs of the complementary strand. When DNA is replicated, a new DNA strand (primer strand) is extended by using the information in the complementary (template) strand. The DNA has a direction (polarity); the growing end of a DNA strand is the 3' end and the other end is the 5' end. An enzyme, DNA polymerase, replicates DNA by adding nucleotides, which complement the template strand, to the 3' end of the primer strand (Figure 2). DNA polymerase has an absolute requirement for a hydroxyl group (OH) on the 3' end of the primer strand. If the 3' hydroxyl group is missing, no further nucleotides can be added to the primer strand. This termination of the elongation of the primer strand is the basis for determining the DNA sequence. If the DNA polymerase is presented with a mixture of nucleotides, some of which have 3' OH groups and others of which have no 3' OH group (and are bound to a colored dye), both types of nucleotides are added to the growing primer strand. When a nucleotide with no OH group is added to the primer strand, elongation is terminated with the colored dye at the 3' end of the strand. All essential elements for determining the sequence of nucleotides in the primer DNA strand are in place. A DNA synthesis reaction is set up in a test tube (in vitro), including DNA polymerase, a template DNA strand, a short uniform primer DNA strand, and a mixture of the four nucleotides (A, T, C, and G). The short primer DNA strands are synthesized chemically and are identical, so they pair with a specific sequence in the template DNA strand. Each of the nucleotides is present in two forms: the normal form with a 3' hydroxyl group and the terminating form with a colored dye and no 3' hydroxyl group. Each different terminating nucleotide (A, T, C, and G) has a different colored dye attached. The amount of normal nucleotides present in the reaction is much larger than that of the terminating nucleotides, so that DNA synthesis proceeds almost normally, and only occasionally is the elongation of the primer strand terminated by the incorporation of a dye-labeled nucleotide lacking a 3' hydroxyl group. However, eventually all of the primer strands do incorporate a dye-labeled nucleotide and their elongation is terminated. Thus, at the end of the reaction there is a vast collection of primer strands of varying lengths, each terminated with a nucleotide that has a colored dye specific to the terminal nucleotide. All of the primer strands start at the same point, specified by the sequence of the short uniform primer DNA.
Thus, the length of the primer strand corresponds to the position of the terminal nucleotide in the DNA sequence relative to the starting position of the primer DNA strand. The color of the dye on the primer strand identifies the terminal nucleotide as an A, T, C, or G. Once the primer strands are arranged according to length, the DNA sequence will be indicated by the series of colors on progressively longer primer strands. The DNA strands can be readily separated according to length by acrylamide gel electrophoresis (see Figure 1). The acrylamide gel is a loose matrix of fibers through which the DNA can migrate. The DNA molecules have a large negative charge and thus are pulled toward the plus electrode in an electric field. The whole collection of primer strand DNA molecules is placed in a well at the top of an acrylamide gel with the plus electrode at the bottom of the gel. When the electric field is applied, the DNA molecules are drawn toward the plus electrode, with shorter molecules passing through the gel matrix more easily than longer molecules. Thus the smaller DNA molecules move the fastest. After a fixed period of time, the DNA molecules are separated according to length, with the shortest molecules moving furthest down the gel. All of the molecules of a given length will form a band and will have the same terminal nucleotide and thus the same color. The DNA sequence can be read from the colors of the bands. One reads the sequence of the DNA from the 5' end, starting at the bottom of the gel, to the 3' end at the top of the gel. In practice the whole process is automated; the bands are scanned with a laser as they pass a specific point in the gel. These scans produce profiles for each nucleotide, as shown in the lower portion of Figure 3. A computer program then determines the DNA sequence from these colored profiles, as shown in the upper portion of Figure 3. A single automated DNA sequencing instrument can determine more than 100,000 nucleotides of DNA sequence per day, and a large sequencing facility can often produce over 10 million nucleotides of sequence per day. This high sequencing capacity has made it feasible to determine the complete DNA sequence of large genomes, including the human genome.
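The read-out logic described above, fragments of every possible length each tagged by the identity of its final base and then sorted by size, can be mimicked in a few lines. The sketch below is purely illustrative; the template sequence is invented and each "dye" is just the terminal character itself.

```cpp
// Toy model of dye-terminator sequencing: every prefix of the newly
// synthesized strand ends in a labeled base; sorting fragments by length
// and reading the terminal labels recovers the sequence.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Hypothetical template, written 3'->5'; the synthesized strand is its
    // complement, written 5'->3'.
    const std::string templ = "TACGGTCA";
    auto complement = [](char b) {
        switch (b) { case 'A': return 'T'; case 'T': return 'A';
                     case 'C': return 'G'; default:  return 'C'; }
    };

    // One terminated fragment per position: lengths 1..N, each ending in a
    // "dye-labeled" base.
    std::vector<std::string> fragments;
    std::string growing;
    for (char b : templ) {
        growing.push_back(complement(b));
        fragments.push_back(growing);
    }

    // Electrophoresis separates by length (shortest runs farthest).
    std::sort(fragments.begin(), fragments.end(),
              [](const std::string& a, const std::string& b) { return a.size() < b.size(); });

    // Read the gel from the bottom (shortest) up: terminal base of each band.
    std::string readout;
    for (const auto& f : fragments) readout.push_back(f.back());

    std::printf("template (3'->5'): %s\n", templ.c_str());
    std::printf("sequence (5'->3'): %s\n", readout.c_str());  // ATGCCAGT
    return 0;
}
```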
http://www.biologyreference.com/Co-Dn/DNA-Sequencing.html
13
12
Mar. 6, 2003 Because of Earth's dynamic climate, winds and atmospheric pressure systems experience constant change. These fluctuations may affect how our planet rotates on its axis, according to NASA-funded research that used wind and satellite data. NASA's Earth Science Enterprise (ESE) mission is to understand the Earth system and its response to natural and human-induced changes for better prediction of climate, weather and natural hazards, such as the atmospheric changes or El Niño events that may have contributed to the effect on Earth's rotation. "Changes in the atmosphere, specifically atmospheric pressure around the world, and the motions of the winds that may be related to such climate signals as El Niño are strong enough that their effect is observed in the Earth's rotation signal," said David A. Salstein, an atmospheric scientist from Atmospheric and Environmental Research, Inc., of Lexington, Mass., who led a recent study. From year to year, winds and air pressure patterns change, causing different forces to act on the solid Earth. During El Niño years, for example, the rotation of the Earth may slow ever so slightly because of stronger winds, increasing the length of a day by a fraction of a millisecond (thousandth of a second). Isaac Newton's laws of motion explain how those quantities are related to the Earth's rotation rate (leading to a change in the length of day) as well as the exact position in which the North Pole points in the heavens (known also as polar motion, or Earth wobble). To understand the concept of angular momentum, visualize the Earth spinning in space. Given Earth's overall mass and its rotation, it contains a certain amount of angular momentum. When an additional force acting at a distance from the Earth's rotational axis occurs, referred to as a torque, such as changes in surface winds or the distribution of high and low pressure patterns, especially near mountains, it can act to change the rate of the Earth's rotation or even the direction of the rotational axis. Because of the law of "conservation of angular momentum," small but detectable changes in the Earth's rotation and those in the rotation of the atmosphere are linked. The conservation of angular momentum is a law of physics that states that the total angular momentum of a rotating object with no outside force remains constant regardless of changes within the system. An example of this principle occurs when a skater pulls his or her arms inward during a spin, changing the mass distribution to one nearer the rotation axis and reducing the "moment of inertia": because the moment of inertia goes down, the spin rate must increase (the skater speeds up) to keep the total angular momentum of the system unchanged. "The key is that the sum of the angular momentum (push) of the solid Earth plus atmosphere system must stay constant unless an outside force (torque) is applied," Salstein said. "So if the atmosphere speeds up (stronger westerly winds) then the solid Earth must slow down (length-of-day increases). Also if more atmosphere moves to a lower latitude (further from the axis of rotation), and atmospheric pressure increases, it also gains angular momentum and the Earth would slow down as well." Other motions of the atmosphere, such as a larger mass in one hemisphere than the other, can lead to a wobble (like a washing machine with clothes off-balance) and the poles move, in accordance with the law of the conservation of angular momentum.
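The bookkeeping behind "a fraction of a millisecond" follows from the conservation law itself: whatever angular momentum the atmosphere gains, the solid Earth loses, and the fractional change in the length of day equals the fractional change in the Earth's spin angular momentum. The sketch below uses an approximate textbook value for Earth's moment of inertia and an assumed El Niño-scale atmospheric gain of about 5 x 10^25 kg m^2/s; neither number comes from the article.

```cpp
// Conservation of angular momentum, Earth + atmosphere:
//   L_earth + L_atm = const  =>  dL_earth = -dL_atm
// A small change dL in the solid Earth's angular momentum changes the
// length of day (LOD) by roughly  dLOD = LOD * dL / L_earth.
#include <cstdio>

int main() {
    const double PI      = 3.141592653589793;
    const double I_earth = 8.0e37;                 // Earth's moment of inertia [kg m^2], approx.
    const double LOD     = 86400.0;                // length of day [s]
    const double omega   = 2.0 * PI / LOD;         // spin rate [rad/s]
    const double L_earth = I_earth * omega;        // ~5.8e33 kg m^2/s

    const double dL_atm  = 5.0e25;                 // assumed El Nino-scale gain by the winds
    const double dLOD    = LOD * dL_atm / L_earth; // the solid Earth loses dL_atm

    std::printf("Earth spin angular momentum ~ %.2e kg m^2/s\n", L_earth);
    std::printf("Change in length of day     ~ %.2f ms\n", dLOD * 1000.0); // a fraction of a millisecond
    return 0;
}
```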
Salstein looked at wind and pressure measurements from a National Weather Service analysis that makes use of a combination of ground-based, aircraft, and space-based observations. The measurements of the Earth's motions come from a variety of space-based techniques, including satellites like those in the Global Positioning System (GPS), geodetic satellites (including records from NASA's older LAGEOS satellite), and observations of distant astronomical objects using a technique known as Very Long Baseline Interferometry. Understanding the atmospheric pressure patterns, moreover, is essential to interpret results from NASA's Gravity Recovery and Climate Experiment (GRACE). The fact that the two vastly different systems, namely the meteorological and the astronomical, are in good agreement according to the conservation of angular momentum gives us assurance that both these types of measurements must be accurate. It shows, moreover, that changes in climate signals can have global implications for Earth's overall rotation. NASA's ESE research focuses on the changes and variability in the Earth system, including atmospheric, oceanic, and geodetic areas. This research was recently presented at the annual meeting of the American Meteorological Society in Long Beach, Calif.
http://www.sciencedaily.com/releases/2003/03/030306075514.htm
13
35
The sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the function sine, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time (t) is y(t) = A sin(2πft + φ) = A sin(ωt + φ), where: - A, the amplitude, is the peak deviation of the function from zero. - f, the ordinary frequency, is the number of oscillations (cycles) that occur each second of time. - ω = 2πf, the angular frequency, is the rate of change of the function argument in units of radians per second. - φ, the phase, specifies (in radians) where in its cycle the oscillation is at t = 0. - When φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds. A negative value represents a delay, and a positive value represents an advance. [Audio sample: 5 seconds of a 220 Hz sine wave.] The sine wave is important in physics because it retains its waveshape when added to another sine wave of the same frequency and arbitrary phase and magnitude. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique. In general, the function may also have: - a spatial dimension, x (aka position), with wavenumber k - a non-zero center amplitude, D giving the generalized form y(x, t) = A sin(kx - ωt + φ) + D. The wavenumber is related to the angular frequency by k = ω/v = 2πf/v = 2π/λ, where v is the speed of propagation and λ is the wavelength. This equation gives a sine wave for a single dimension; thus the generalized equation given above gives the amplitude of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. A cosine wave is said to be "sinusoidal", because cos(ωt) = sin(ωt + π/2), which is also a sine wave with a phase-shift of π/2. Because of this "head start", it is often said that the cosine function leads the sine function or the sine lags the cosine. The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics; some sounds that approximate a pure sine wave are whistling, a crystal glass set to vibrate by running a wet finger around its rim, and the sound made by a tuning fork. In 1822, Joseph Fourier, a French mathematician, discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform, including square waves. Fourier used it as an analytical tool in the study of waves and heat flow. It is frequently used in signal processing and the statistical analysis of time series. Traveling and standing waves Since sine waves propagate without changing form in distributed linear systems, they are often used to analyze wave propagation. Sine waves traveling in two directions can be represented as y(x, t) = A sin(kx - ωt) and y(x, t) = A sin(kx + ωt). When two waves having the same amplitude and frequency, and traveling in opposite directions, superpose each other, then a standing wave pattern is created.
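The phase relationship between sine and cosine is easy to check numerically: sampling A sin(ωt + π/2) reproduces A cos(ωt) exactly. A minimal sketch, with an arbitrary 220 Hz tone chosen to match the audio example above:

```cpp
// Sample y(t) = A*sin(2*pi*f*t + phi) and confirm that phi = pi/2
// turns the sine into a cosine (the cosine "leads" the sine).
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const double A = 1.0, f = 220.0;        // amplitude, frequency [Hz]
    const double omega = 2.0 * PI * f;

    for (int n = 0; n < 5; ++n) {
        double t = n / (8.0 * f);           // eight samples per cycle
        double shifted = A * std::sin(omega * t + PI / 2.0);
        double cosine  = A * std::cos(omega * t);
        std::printf("t=%.6f  sin(wt+pi/2)=% .6f  cos(wt)=% .6f\n",
                    t, shifted, cosine);    // the two columns agree
    }
    return 0;
}
```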
- Crest (physics) - Fourier transform - Harmonic series (mathematics) - Harmonic series (music) - Helmholtz equation - Instantaneous phase - Pure tone - Sawtooth wave - Simple harmonic motion - Sinusoidal model - Square wave - Triangle wave - Wave (physics) - Wave equation
http://en.wikipedia.org/wiki/Sine_wave
13
48
Measurements of AC magnitude So far we know that AC voltage alternates in polarity and AC current alternates in direction. We also know that AC can alternate in a variety of different ways, and by tracing the alternation over time we can plot it as a “waveform.” We can measure the rate of alternation by measuring the time it takes for a wave to evolve before it repeats itself (the “period”), and express this as cycles per unit time, or “frequency.” In music, frequency is the same as pitch, which is the essential property distinguishing one note from another. However, we encounter a measurement problem if we try to express how large or small an AC quantity is. With DC, where quantities of voltage and current are generally stable, we have little trouble expressing how much voltage or current we have in any part of a circuit. But how do you grant a single measurement of magnitude to something that is constantly changing? One way to express the intensity, or magnitude (also called the amplitude), of an AC quantity is to measure its peak height on a waveform graph. This is known as the peak or crest value of an AC waveform: Figure below Peak voltage of a waveform. Another way is to measure the total height between opposite peaks. This is known as the peak-to-peak (P-P) value of an AC waveform: Figure below Peak-to-peak voltage of a waveform. Unfortunately, either one of these expressions of waveform amplitude can be misleading when comparing two different types of waves. For example, a square wave peaking at 10 volts is obviously a greater amount of voltage for a greater amount of time than a triangle wave peaking at 10 volts. The effects of these two AC voltages powering a load would be quite different: Figure below A square wave produces a greater heating effect than the same peak voltage triangle wave. One way of expressing the amplitude of different waveshapes in a more equivalent fashion is to mathematically average the values of all the points on a waveform's graph to a single, aggregate number. This amplitude measure is known simply as the average value of the waveform. If we average all the points on the waveform algebraically (that is, to consider their sign, either positive or negative), the average value for most waveforms is technically zero, because all the positive points cancel out all the negative points over a full cycle: Figure below The average value of a sinewave is zero. This, of course, will be true for any waveform having equal-area portions above and below the “zero” line of a plot. However, as a practical measure of a waveform's aggregate value, “average” is usually defined as the mathematical mean of all the points' absolute values over a cycle. In other words, we calculate the practical average value of the waveform by considering all points on the wave as positive quantities, as if the waveform looked like this: Figure below Waveform seen by AC “average responding” meter. Polarity-insensitive mechanical meter movements (meters designed to respond equally to the positive and negative half-cycles of an alternating voltage or current) register in proportion to the waveform's (practical) average value, because the inertia of the pointer against the tension of the spring naturally averages the force produced by the varying voltage/current values over time. 
Conversely, polarity-sensitive meter movements vibrate uselessly if exposed to AC voltage or current, their needles oscillating rapidly about the zero mark, indicating the true (algebraic) average value of zero for a symmetrical waveform. When the "average" value of a waveform is referenced in this text, it will be assumed that the "practical" definition of average is intended unless otherwise specified. Another method of deriving an aggregate value for waveform amplitude is based on the waveform's ability to do useful work when applied to a load resistance. Unfortunately, an AC measurement based on work performed by a waveform is not the same as that waveform's "average" value, because the power dissipated by a given load (work performed per unit time) is not directly proportional to the magnitude of either the voltage or current impressed upon it. Rather, power is proportional to the square of the voltage or current applied to a resistance (P = E²/R, and P = I²R). Although the mathematics of such an amplitude measurement might not be straightforward, the utility of it is. Consider a bandsaw and a jigsaw, two pieces of modern woodworking equipment. Both types of saws use a thin, toothed, motor-powered metal blade to cut wood. But while the bandsaw uses a continuous motion of the blade to cut, the jigsaw uses a back-and-forth motion. The comparison of alternating current (AC) to direct current (DC) may be likened to the comparison of these two saw types: Figure below Bandsaw-jigsaw analogy of DC vs AC. The problem of trying to describe the changing quantities of AC voltage or current in a single, aggregate measurement is also present in this saw analogy: how might we express the speed of a jigsaw blade? A bandsaw blade moves with a constant speed, similar to the way DC voltage pushes or DC current moves with a constant magnitude. A jigsaw blade, on the other hand, moves back and forth, its blade speed constantly changing. What is more, the back-and-forth motion of any two jigsaws may not be of the same type, depending on the mechanical design of the saws. One jigsaw might move its blade with a sine-wave motion, while another with a triangle-wave motion. To rate a jigsaw based on its peak blade speed would be quite misleading when comparing one jigsaw to another (or a jigsaw with a bandsaw!). Despite the fact that these different saws move their blades in different manners, they are equal in one respect: they all cut wood, and a quantitative comparison of this common function can serve as a common basis for which to rate blade speed. Picture a jigsaw and bandsaw side-by-side, equipped with identical blades (same tooth pitch, angle, etc.), equally capable of cutting the same thickness of the same type of wood at the same rate. We might say that the two saws were equivalent or equal in their cutting capacity. Might this comparison be used to assign a "bandsaw equivalent" blade speed to the jigsaw's back-and-forth blade motion; to relate the wood-cutting effectiveness of one to the other? This is the general idea used to assign a "DC equivalent" measurement to any AC voltage or current: whatever magnitude of DC voltage or current would produce the same amount of heat energy dissipation through an equal resistance: Figure below An RMS voltage produces the same heating effect as the same DC voltage. In the two circuits above, we have the same amount of load resistance (2 Ω) dissipating the same amount of power in the form of heat (50 watts), one powered by AC and the other by DC.
Because the AC voltage source pictured above is equivalent (in terms of power delivered to a load) to a 10 volt DC battery, we would call this a “10 volt” AC source. More specifically, we would denote its voltage value as being 10 volts RMS. The qualifier “RMS” stands for Root Mean Square, the algorithm used to obtain the DC equivalent value from points on a graph (essentially, the procedure consists of squaring all the positive and negative points on a waveform graph, averaging those squared values, then taking the square root of that average to obtain the final answer). Sometimes the alternative terms equivalent or DC equivalent are used instead of “RMS,” but the quantity and principle are both the same. RMS amplitude measurement is the best way to relate AC quantities to DC quantities, or other AC quantities of differing waveform shapes, when dealing with measurements of electric power. For other considerations, peak or peak-to-peak measurements may be the best to employ. For instance, when determining the proper size of wire (ampacity) to conduct electric power from a source to a load, RMS current measurement is the best to use, because the principal concern with current is overheating of the wire, which is a function of power dissipation caused by current through the resistance of the wire. However, when rating insulators for service in high-voltage AC applications, peak voltage measurements are the most appropriate, because the principal concern here is insulator “flashover” caused by brief spikes of voltage, irrespective of time. Peak and peak-to-peak measurements are best performed with an oscilloscope, which can capture the crests of the waveform with a high degree of accuracy due to the fast action of the cathode-ray-tube in response to changes in voltage. For RMS measurements, analog meter movements (D'Arsonval, Weston, iron vane, electrodynamometer) will work so long as they have been calibrated in RMS figures. Because the mechanical inertia and dampening effects of an electromechanical meter movement makes the deflection of the needle naturally proportional to the average value of the AC, not the true RMS value, analog meters must be specifically calibrated (or mis-calibrated, depending on how you look at it) to indicate voltage or current in RMS units. The accuracy of this calibration depends on an assumed waveshape, usually a sine wave. Electronic meters specifically designed for RMS measurement are best for the task. Some instrument manufacturers have designed ingenious methods for determining the RMS value of any waveform. One such manufacturer produces “True-RMS” meters with a tiny resistive heating element powered by a voltage proportional to that being measured. The heating effect of that resistance element is measured thermally to give a true RMS value with no mathematical calculations whatsoever, just the laws of physics in action in fulfillment of the definition of RMS. The accuracy of this type of RMS measurement is independent of waveshape. For “pure” waveforms, simple conversion coefficients exist for equating Peak, Peak-to-Peak, Average (practical, not algebraic), and RMS measurements to one another: Figure below Conversion factors for common waveforms. In addition to RMS, average, peak (crest), and peak-to-peak measures of an AC waveform, there are ratios expressing the proportionality between some of these fundamental measurements. The crest factor of an AC waveform, for instance, is the ratio of its peak (crest) value divided by its RMS value. 
The form factor of an AC waveform is the ratio of its RMS value divided by its average value. Square-shaped waveforms always have crest and form factors equal to 1, since the peak is the same as the RMS and average values. Sinusoidal waveforms have an RMS value of 0.707 of peak (the reciprocal of the square root of 2) and a form factor of 1.11 (0.707/0.636). Triangle- and sawtooth-shaped waveforms have RMS values of 0.577 of peak (the reciprocal of the square root of 3) and form factors of 1.15 (0.577/0.5). Bear in mind that the conversion constants shown here for peak, RMS, and average amplitudes of sine waves, square waves, and triangle waves hold true only for pure forms of these waveshapes. The RMS and average values of distorted waveshapes are not related by the same ratios: Figure below Arbitrary waveforms have no simple conversions. This is a very important concept to understand when using an analog D'Arsonval meter movement to measure AC voltage or current. An analog D'Arsonval movement, calibrated to indicate sine-wave RMS amplitude, will only be accurate when measuring pure sine waves. If the waveform of the voltage or current being measured is anything but a pure sine wave, the indication given by the meter will not be the true RMS value of the waveform, because the degree of needle deflection in an analog D'Arsonval meter movement is proportional to the average value of the waveform, not the RMS. RMS meter calibration is obtained by "skewing" the span of the meter so that it displays a small multiple of the average value, which will be equal to the RMS value for a particular waveshape and a particular waveshape only. Since the sine-wave shape is most common in electrical measurements, it is the waveshape assumed for analog meter calibration, and the small multiple used in the calibration of the meter is 1.1107 (the form factor, 0.707/0.636, the ratio of RMS divided by average for a sinusoidal waveform). Any waveshape other than a pure sine wave will have a different ratio of RMS and average values, and thus a meter calibrated for sine-wave voltage or current will not indicate true RMS when reading a non-sinusoidal wave. Bear in mind that this limitation applies only to simple, analog AC meters not employing "True-RMS" technology. - The amplitude of an AC waveform is its height as depicted on a graph over time. An amplitude measurement can take the form of peak, peak-to-peak, average, or RMS quantity. - Peak amplitude is the height of an AC waveform as measured from the zero mark to the highest positive or lowest negative point on a graph. Also known as the crest amplitude of a wave. - Peak-to-peak amplitude is the total height of an AC waveform as measured from maximum positive to maximum negative peaks on a graph. Often abbreviated as "P-P". - Average amplitude is the mathematical "mean" of all a waveform's points over the period of one cycle. Technically, the average amplitude of any waveform with equal-area portions above and below the "zero" line on a graph is zero. However, as a practical measure of amplitude, a waveform's average value is often calculated as the mathematical mean of all the points' absolute values (taking all the negative values and considering them as positive). For a sine wave, the average value so calculated is approximately 0.637 of its peak value. - "RMS" stands for Root Mean Square, and is a way of expressing an AC quantity of voltage or current in terms functionally equivalent to DC.
For example, 10 volts AC RMS is the amount of voltage that would produce the same amount of heat dissipation across a resistor of given value as a 10 volt DC power supply. Also known as the “equivalent” or “DC equivalent” value of an AC voltage or current. For a sine wave, the RMS value is approximately 0.707 of its peak value. - The crest factor of an AC waveform is the ratio of its peak (crest) to its RMS value. - The form factor of an AC waveform is the ratio of its RMS value to its average value. - Analog, electromechanical meter movements respond proportionally to the average value of an AC voltage or current. When RMS indication is desired, the meter's calibration must be “skewed” accordingly. This means that the accuracy of an electromechanical meter's RMS indication is dependent on the purity of the waveform: whether it is the exact same waveshape as the waveform used in calibrating.
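The conversion constants quoted above are straightforward to verify numerically: sample one cycle of each waveshape and compute the peak, practical average (mean of absolute values), and RMS directly from their definitions. A small sketch of that check, for unit-peak sine, square, and triangle waves:

```cpp
// Verify peak / average / RMS relationships for sine, square, and triangle
// waves of unit peak amplitude, straight from the definitions.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const int N = 100000;                      // samples over one full cycle

    const char* names[3] = {"sine", "square", "triangle"};
    for (int w = 0; w < 3; ++w) {
        double sumAbs = 0.0, sumSq = 0.0, peak = 0.0;
        for (int i = 0; i < N; ++i) {
            double t = (double)i / N;          // one period, 0 <= t < 1
            double y;
            if (w == 0)      y = std::sin(2.0 * PI * t);
            else if (w == 1) y = (t < 0.5) ? 1.0 : -1.0;
            else             y = 1.0 - 4.0 * std::fabs(t - 0.5);  // triangle, peak 1
            sumAbs += std::fabs(y);
            sumSq  += y * y;
            peak = std::fmax(peak, std::fabs(y));
        }
        double avg = sumAbs / N;               // practical average
        double rms = std::sqrt(sumSq / N);     // root mean square
        std::printf("%-8s peak=%.3f avg=%.3f rms=%.3f crest=%.3f form=%.3f\n",
                    names[w], peak, avg, rms, peak / rms, rms / avg);
    }
    // Expected: sine     avg~0.637  rms~0.707  form~1.11
    //           square   avg=rms=1, crest=form=1
    //           triangle avg~0.5    rms~0.577  form~1.15
    return 0;
}
```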
http://www.allaboutcircuits.com/vol_2/chpt_1/3.html
13
13
Private members: functions and variables to which only the class member functions (and friends) have access. Public members: functions, and rarely non-constant variables, that are directly accessible through an object. Protected: the protected keyword behaves the same as the private keyword, with the exception that protected variables are directly accessible from within derived classes. Object: an object is an instance of a class; it is a variable with all the functionality specified in the class's definition. Data member: a data member is a variable declared in a class definition. Member functions: functions that belong to a class and operate on its data members. Constructor: the constructor of a class is the function that is called automatically when a new object is created. It should initialize the class's data members and allocate any necessary memory. Destructor: a destructor is the function called when an object goes out of scope. It should free memory dynamically allocated for the object's data members. Friend function: a friend function is a function that has access to all the class's data members and member functions, including those under the private and protected keywords. Inheritance: inheritance is the property exhibited when a subclass is derived from a superclass. In particular it refers to the fact that an instance of the subclass has all of the data members and member functions of the superclass (and possibly more). Base class: a base class is a class from which another class, called a derived class, is derived. Derived class: a derived class is a class which has inherited the components of another class, called the base class. Template: a class which has one or more data members (and functions) of some unspecified data type. By defining a template, the programmer can create an object using any data type or types. Composition: composition is the use of an object as a member variable of another class as an alternative to creating a subclass. Virtual: a C++ keyword used to qualify functions and inheritance.
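A compact class hierarchy can touch most of these terms at once. The sketch below is illustrative only; the class names and numbers are invented:

```cpp
// Illustrates: access specifiers, data members, member functions,
// constructor/destructor, friend, inheritance, and the virtual keyword.
#include <cstdio>

class Shape {                       // base class (superclass)
public:
    Shape(double s) : side(s) {}    // constructor: initializes the data member
    virtual ~Shape() {}             // destructor, virtual so derived cleanup runs
    virtual double area() const { return side * side; }      // overridable member function
    friend void debugPrint(const Shape& sh);                  // friend: may touch non-public members
protected:
    double side;                    // accessible to derived classes, not to outside code
};

void debugPrint(const Shape& sh) {
    std::printf("side = %.2f\n", sh.side);   // allowed because debugPrint is a friend
}

class Triangle : public Shape {     // derived class (subclass) inherits side and area()
public:
    Triangle(double s) : Shape(s) {}
    double area() const override { return 0.433 * side * side; }  // equilateral approximation
};

int main() {
    Triangle t(2.0);                // an object: an instance of the class
    const Shape& s = t;
    std::printf("area = %.2f\n", s.area());  // virtual dispatch calls Triangle::area
    debugPrint(t);
    return 0;
}
```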
http://www.sparknotes.com/cs/c-plus-plus-fundamentals/classes/terms.html
13
32
Triangles are polygons with three sides; that is, a triangle is a closed figure with three sides. Triangles are classified according to the measures of the lengths of their sides. Triangles in which all the sides have the same measure are called equilateral triangles. Now we will talk about isosceles triangles: an isosceles triangle has two equal sides and a third, unequal side, which works as the base of the triangle. If we look at the special properties of the isosceles triangle, we see: an isosceles triangle has two sides of the same length; as two sides of the triangle are the same, the angles opposite the equal sides are also equal; and the median of the isosceles triangle drawn to the unequal side is also the perpendicular bisector of that side. As we know that the sum of the three angles of a triangle is 180 degrees, if the vertex angle (the angle between the two equal sides) is known, we are able to find the remaining two angles of the isosceles triangle. Let us see how: if the triangle has a vertex angle of 70 degrees and we want to find the measures of the two equal base angles, let the measure of each of those angles be x degrees. Then 70 + x + x = 180, so 2x + 70 = 180, 2x = 180 - 70, 2x = 110, x = 55, and the other two angles of the triangle are 55 and 55 degrees. Similarly, if we know the two equal angles of the triangle, then we double that measure and subtract it from 180 degrees to get the third angle of the triangle. The points shown above are the special features of isosceles triangles. Special Features of Isosceles Triangles. Triangles are classified according to the lengths of their line segments and as per their angles. We know that if we have triangles classified as per the lengths of their line segments, then the triangles are of the following types: 1. Equilateral triangle 2. Isosceles triangle 3. Scalene triangle. Here we are going to study about an Iso... A median of a triangle is defined as the line segment which joins a vertex of a triangle to the midpoint of the opposite side of the triangle. There are three vertices present in a triangle, so three medians are present in a triangle. Which median is drawn depends on the vertex chosen. There are some properties of an isosceles triangle with a median which ...
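The two rules worked through above, base angles from the vertex angle and the vertex angle from a base angle, fit in a few lines (a small illustrative sketch):

```cpp
// Isosceles triangle angle arithmetic: the angles sum to 180 degrees.
#include <cstdio>

// Given the vertex (unequal) angle, return each of the two equal base angles.
double baseAngle(double vertexAngle) { return (180.0 - vertexAngle) / 2.0; }

// Given one of the two equal base angles, return the vertex angle.
double vertexAngle(double base) { return 180.0 - 2.0 * base; }

int main() {
    std::printf("vertex 70 -> base angles %.0f and %.0f\n",
                baseAngle(70.0), baseAngle(70.0));              // 55 and 55, as in the example
    std::printf("base   55 -> vertex angle %.0f\n", vertexAngle(55.0));  // 70
    return 0;
}
```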
http://www.tutorcircle.com/special-features-of-isosceles-triangles-t4HAp.html
13
10
Einstein's Cosmological Considerations of the General Theory of Relativity - Einstein's paper Cosmological Considerations of the General Theory of Relativity was yet another key paper by Einstein that changed our view of the universe forever. Albert Einstein, in 1917, published a paper entitled Cosmological Considerations of the General Theory of Relativity. No longer was he concerned with the way gravity affects starlight or causes the precession of planetary orbits. Instead, he turned his attention to the role of gravity on the largest cosmic scale. The Cosmological Principle Einstein made his task easier by making the assumption now known as Einstein's cosmological principle. - Einstein's cosmological principle states that the universe is more or less the same everywhere. That is, it is homogeneous and isotropic. A homogeneous universe is one whose composition is the same everywhere. One conclusion to draw from this is that the Earth does not have a privileged position. The same sorts of elements, space-time parameters, and other physical entities are found on and around the Earth as anywhere else in the universe. Isotropic means "looks the same in every direction": an isotropic universe looks the same in every direction. Homogeneity does not imply isotropy, e.g. if galaxies were arranged in north-to-south lines the universe would look very different if you tilted your head to one side, but everywhere in the universe could have the same curious arrangement of galaxies. Isotropy does not imply homogeneity. Just because the density of galaxies looks the same in every direction from Earth does not imply such symmetry applies elsewhere (although it would be very strange if it did not). The number of distant galaxies is observed to be the same in every direction - the universe is isotropic on the large scale. Measurements of the microwave background radiation also justify the presumption of isotropy. Einstein's gravity formula, like Newton's universal law of gravitation, implies that every object in the universe is pulled toward every other. This might eventually lead to a big crunch. But in 1917 Albert Einstein and the scientific establishment believed the universe was static. So Einstein changed his gravity formula to include a cosmological constant that "imbued empty space with an inherent pressure that pushed the universe apart" (Singh, p. 148). Big Bang by Simon Singh provides more detailed discussion of Albert Einstein's cosmological principle and other cosmological considerations of the general theory of relativity.
http://www.321books.co.uk/biography/einstein/cosmology.htm
13
10
ABSTRACT: Children use different semantic functions to express ideas. This is evident in single word utterances known as holophrases. A holophrase is a single word – used by infants up to the age of 2;00 years – which has the force of a whole phrase which would typically be made up of several (adult) words. Beyond the so-called One Word Stage, children relate different semantic categories to create meaningful, longer utterances. Recall that it is at the One Word Stage that we can appropriately talk about a child's expressive language. Recall also that at the later One Word Stage (14-24 months) children begin to use a range of single words to refer to things in their environment or to actions, i.e. they have a referential meaning (see Language Development for an explanation of Word Stages). Now, consider the following interaction between a father and his 2;00 year old daughter. |child:||mummy| |father:||that's right, mummy's coming home soon| |child:||mummy| |father:||er...you want to play with mummy?| |child:||mummy [points to mummy's hat]| |father:||oh, you want mummy's hat?| |[child smiles as father passes the hat]| Sequences like these are not uncommon, the young child repeatedly using just one word and the adult taking the burden of the conversation through a series of guesses until the correct response is made. Difficulties arise because, as adults, we tend to have fairly specific meanings for each of the words we use. For most adults mummy will probably refer to a female person. So if I say, 'mummy' you will most likely think that I am referring to this person. In the above sequence, however, mummy had more to do with possession of an object than the name of a person, i.e. the child pointed to a hat that belonged to mummy and uttered, 'mummy.' Thus, in this instance, the word mummy is not referring directly to the person but, rather, to an object that mummy possesses – the hat. We would, therefore, describe this child's utterance as having the function of possession. It is possible, of course, that the child could use the word mummy to refer to a person. Had the child intended this in the above sequence then the father's first response, 'that's right, mummy's coming home soon' might have been correct. In this instance the word mummy would have had the function of naming. Single words used by infants to mean many different things are called holophrases. For, although the child is only producing one word at a time, the words often have a composite meaning. We have noted, for example, that the word mummy could be used for naming a person or for indicating possession of an object. The confusion for the adult listener in interpreting holophrases arises, in part, because the child utters just one word and the adult's interpretation of this utterance is largely dependent upon the context in which it is spoken. In our example, the child's father only understood what was meant when the child eventually pointed to the hat, i.e. the child supported her utterance through the use of non-verbal communication. Some indication of the various meanings of holophrases may, therefore, come from the child's facial expression, gesture, actions, and so on. Further examples of so-called semantic functions (such as 'possession' and 'naming') include the following: The child may use a single word like more to indicate, 'I want some more cheese' or 'I want you to play with me some more.' Words that express the meaning, 'I want...' or 'I need...' are considered to function instrumentally.
A word such as car may have a regulatory function if its intended meaning is, 'let's play with the car'. Words are considered to have this function if they are interpretable as the child indicating to the adult, 'do as I tell you', i.e. they regulate an adult's behavior. This function is represented by words that can be interpreted by the adult as meaning, 'let's pretend'. So, for example, if the child says, woof! this may be intended to mean, 'let's pretend that we are dogs'. There are many more functions than the ones I have described here but these few examples should give a flavour of the various meanings that a single word can express. It should also help to explain why adults often misinterpret the talk of children up to the age of about 2;00 years. Over Extension and Under Extension Another possible reason for adults misinterpreting infants' talk is that some children use the same word to refer to many different objects. For example, the word dog may be used to mean all animals whether or not the animal being referred to is a cat, a horse, a pig, or whatever. This is known as over extension and it is common during the One Word Stage. In the same way that some children use a particular word to refer to many objects or people, others restrict their use of a word that could appropriately be applied to many objects, people, and so on, to only one or two things. For example, a child may use the word drink to refer solely to orange squash and not use the word at all to refer to milk, tea, lemonade, and so on. This is known as under extension and this is also a common feature of the One Word Stage. Beyond the Holophrase The elegance of language development is that it is so logical. If I can express a meaning with just one word then, surely, I can express more with two? If I say mummy this may be ambiguous. But if I say mummy gone then the meaning is more specific. The two-word utterances that children produce are not, however, the product of random combinations of words. Rather, children are systematic and logical in the way they combine words to express meaning. Children appear to produce two-word utterances by relating so-called semantic categories. An explanation of this is beyond the scope of this article but a simple example to illustrate how this functions is as follows. Children usually make a distinction between animate beings and inanimate objects. Animate beings are capable of acting voluntarily: each is an AGENT (e.g. mummy, daddy). Children also categorise ACTION words, e.g. kick, run, bark. Now, a common relation found in the majority of children between the ages of about 20-30 months is that of AGENT + ACTION, e.g. daddy go (where daddy represents an animate being (AGENT) and go expresses the meaning of an ACTION). Further examples include: dog bark, mummy run and bird sing. Children will, therefore, extend the length of their utterances beyond the holophrase and up to the Complex Utterance Stage (see Language Development) in a similar fashion, by combining words drawn from relevant semantic categories. Beyond the Two and Three Word Stages, children also increase the length of their utterances through the repeated use of 'and' as a connector. Between 25 and 35 months at least four semantic functions develop: additive, temporal, causal and adversative. There is a cumulative effect to using these functions, each function being dependent upon the function that precedes it.
http://speech-therapy-information-and-resources.com/holophrase.html
13