Any number line can be drawn with positive and negative numbers. On a number line, 0 is the number at the center; numbers decrease as we move to the left and increase as we move to the right. Let us learn how to represent fractions on a number line. First we check whether the given fraction is proper or improper: a fraction with numerator < denominator is a proper fraction, and a fraction with denominator < numerator is an improper fraction. If the fraction is improper, dividing the numerator by the denominator gives a whole number and a remainder; the remainder becomes the new numerator and the denominator stays the same, giving a mixed number. If the fraction is proper, it lies between 0 and 1. To plot a proper fraction, we divide the interval between 0 and 1 into a number of equal parts equal to the denominator. For example, take the fraction 3/7. It is a proper fraction, so we divide the line between 0 and 1 into 7 equal parts (the denominator) and mark the third division to represent 3/7. To represent an improper fraction on the number line, we first convert it into a mixed number, after which marking it becomes easy.
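The procedure can also be scripted as a quick check. Below is a minimal Python sketch (not part of the original lesson; the function name is just illustrative) that splits a fraction into its whole part and remainder and lists the equal divisions to draw between the two neighbouring whole numbers:

```python
from fractions import Fraction

def mark_on_number_line(n, d):
    """Follow the procedure above: split n/d into a whole part plus a
    remainder, then divide the unit interval it falls in into d equal
    parts and mark the remainder-th division."""
    whole, remainder = divmod(n, d)            # n/d = whole + remainder/d
    ticks = [whole + Fraction(k, d) for k in range(d + 1)]
    marked = whole + Fraction(remainder, d)    # the division to mark
    return whole, remainder, float(marked), [float(t) for t in ticks]

print(mark_on_number_line(3, 7))   # proper fraction: marked at 3/7, between 0 and 1
print(mark_on_number_line(10, 7))  # improper: whole part 1, remainder 3, marked at 1 3/7
```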
http://www.tutorcircle.com/fractions-on-a-number-line-fNXwq.html
Epipolar geometry is the geometry of stereo vision. When two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. These relations are derived based on the assumption that the cameras can be approximated by the pinhole camera model. Epipole or epipolar point: a 3D point X is seen in both the left and right images, and the two cameras have distinct focal points. Since the two focal points of the cameras are distinct, each focal point projects onto a distinct point in the other camera's image plane. These two image points are called the epipoles or epipolar points. Both epipoles, in their respective image planes, and both focal points lie on a single 3D line. The line from the left focal point to X is seen by the left camera as a point because it is directly in line with that camera's focal point. However, the right camera sees this line as a line in its image plane. That line in the right camera's image is called an epipolar line. Symmetrically, the line from the right focal point to X, which the right camera sees as a point, is seen as an epipolar line by the left camera. An epipolar line is a function of the 3D point X, i.e. there is a set of epipolar lines in both images if we allow X to vary over all 3D points. Since the 3D line through X passes through the left camera's focal point, the corresponding epipolar line in the right image must pass through the right epipole (and correspondingly for epipolar lines in the left image). This means that all epipolar lines in one image must intersect the epipolar point of that image. In fact, any line which intersects the epipolar point is an epipolar line, since it can be derived from some 3D point X.

I want to calculate the 3D coordinate of a point from its 2D coordinates, using epipolar geometry and capturing two frames of the same scene point with two cameras. I know how to reconstruct a 3D point from 2D points. The cameras are located on a circular region of radius 7 feet and are mounted at exactly opposite ends of the circle's diameter: if camera 1 is mounted at 0 degrees of the circular area, then camera 2 is located at 180 degrees of the same area. If I take a picture of a point that coincides with the epipoles in the image frames of camera 1 and camera 2, can epipolar geometry still be used when the cameras are located at two opposite sides of a region? Sorry if I'm wrong, but won't the epipolar line degenerate to a point in this case? Can anyone kindly tell me whether epipolar geometry can be used to find the 3D coordinate in this special case? If not, is there another method that I can use to calculate the 3D coordinate of this 2D point?
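For the reconstruction step mentioned in the question, here is a minimal sketch of standard linear (DLT) triangulation (not the poster's own code), assuming the two 3x4 projection matrices P1 and P2 are already known from calibration. The singular values it returns also show why the configuration described in the question is problematic:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.

    P1, P2 : 3x4 camera projection matrices (assumed known from calibration)
    x1, x2 : (u, v) pixel coordinates of the same point in each image
    Returns the 3D point and the singular values of the design matrix.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 for the homogeneous point X via SVD.
    _, s, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3], s
```

If the imaged point coincides with the epipoles, every point along the baseline reproduces both observations, so the two smallest singular values of A drop toward zero and the depth along the baseline cannot be recovered from these two cameras alone; a third camera placed off the baseline (or a point off the baseline) removes the ambiguity.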
http://mathhelpforum.com/advanced-algebra/190064-2d-3d-image-point-projection-epipolar-geometry-epiline-question.html
ASTR 1210 (O'Connell) Study Guide 15. MERCURY AND VENUS

[Radar map of Venus' surface, from the Magellan Mission. The red color is artificial, intended to represent the effects of Venus' thick clouds. Click for enlargement.]

A. The "Inferior" Planets

Mercury and Venus are called "inner" or "inferior" planets because they are closer to the Sun than is Earth. Both revolve around the Sun in shorter times than the Earth (88 and 225 days, respectively).
- Elongation is the angular distance of a planet from the Sun as viewed from Earth. The term "configurations" refers to the various characteristic elongations possible for planets, as shown in the figure above.
- See the illustration above. As viewed from the Earth, the two planets inside the Earth's orbit can never appear at large angles from the Sun. Mercury and Venus always stay within 27° and 48°, respectively, of the Sun. These are their "maximal elongations."
- Copernicus showed that in his heliocentric model, the sizes of the orbits of Mercury and Venus (relative to Earth's orbit) could be deduced from these angles. In the Ptolemaic model, there was no simple geometric method for determining the sizes of the planetary orbits.
- Consequently, Venus and Mercury are visible in the sky only near sunset or sunrise. Venus is the most common evening or morning "star."
- Because Venus' orbital period is similar to Earth's, it tends to linger in the sky near the horizon for many weeks at a time. [Recall the planetarium simulations shown during our discussion of the Maya obsession with Venus.]
- Because of its proximity to Earth and the high albedo (~70%) produced by its thick cloud layers, Venus is the brightest object in the sky other than the Sun or the Moon. Its intense brightness and white color make it look artificial.
- ===> Venus is the classic "Unidentified Flying Object" (UFO). See Guide 18 for more discussion.
- The planets outside Earth's orbit ("superior" planets), starting with Mars, can be seen at up to 180° from the Sun. At that point they are highest in the sky at midnight and are said to be at "opposition" with respect to the Sun.
- As the figure shows, when a planet is at opposition, it is also nearest the Earth and therefore brightest. It will also be undergoing its fastest "retrograde motion" at that point.

B. Mercury

[Image of the Caloris Basin on Mercury taken by the MESSENGER spacecraft. Color coding is for different mineral types.]

Mercury is hard to observe from Earth because it is above the nighttime horizon only for brief periods. It has been less well studied than most other planets. Until 2007 there had been only 2 spacecraft visits, both flybys, in contrast to Venus, which has been a major destination of space missions. Here is a hemispheric view of Mercury from Mariner 10 (1974). MESSENGER is an ongoing mission to study Mercury at close range in 3 flybys followed by long-term in-orbit observations. Mercury has a high average density of 5.4 grams/cc, like Earth, but Mercury's mass (and therefore gravity compression) is smaller. That implies Mercury is richer in heavy elements than Earth. Mercury's surface is similar to Earth's Moon (impact-driven terrain), but with important differences (e.g. shallower craters) due to slower cooling and higher gravity. See the image at right and compare to the Earth's Moon topography. Mercury's orbit is an important test of General Relativity, the revised interpretation of gravity proposed by Einstein in 1916. Mercury's perihelion (the closest point to the Sun in its elliptical orbit) shifts 43 arc-seconds/century more than predicted for Newtonian gravity.
This is only 1/10 millionth of its total orbital motion per orbit, but it can be measured highly accurately over many orbits. The extra shift is predicted exactly by Einstein's GR.

[Venusian cloud layers in UV/optical bands (image from Mariner 10, 1974).]

C. Venus: Introduction

Venus is a near "twin" of Earth in global properties: diameter (95%); mass (82%); distance from Sun (0.7 AU). But unlike Earth, thick cloud layers completely obscure its surface. See image above (click for enlargement). USSR & USA space missions to Venus have included flybys, orbiters, atmospheric probes, and landers. Results from these missions, as well as Earth-based radio-wave observations, quickly demolished the notion of a Venusian tropical paradise:
- The surface of Venus cannot be studied from outside its atmosphere at optical wavelengths.
- Clouds in planetary atmospheres are composed of liquid droplets or ice crystals and are distinct from the atmosphere (gas) in which they float.
- Therefore, we can't determine cloud composition by spectroscopy (easy only for vapors).
- The naive presumption, given Venus' appearance and overall similarity to the Earth, was that the clouds were made of water and that the planet probably hosted a flourishing, wet, jungle-like biosphere.

Radio and infrared measurements from early flyby and lander missions (1962-72) showed that Venus' surface temperature was almost 500° C (900° F) and the lower atmosphere was crushingly dense. Landers returned images of a bleak, lava-covered surface:

[Above is a wide-angle color image of Venus' surface returned by the USSR Venera 13 lander (1982). It shows a lava-strewn plain, extending to the horizon at right. Color is produced by the thick cloud layer. Click for enlargement.]

D. Venus: Surface/Topography

For Venus, the only feasible mapping technique was to use radar to penetrate the thick clouds. Radar systems emit a short burst of radio waves and then detect the reflected burst to determine a target's distance and (through the Doppler effect) motion.

[Radar map of Venus (Pioneer Mission, 1981).]

The image above is a relief map of Venus derived from radar observations with the Pioneer mission. Best mapping coverage was from the Magellan Mission (radar orbiter, 1990-94). The overall topography is flatter than Earth's. There are only two "continent"-like features (Ishtar and Aphrodite in the map above). Continents and dome-like features are evidence of modest tectonic activity, but this is much less conspicuous than on Earth, as can be seen in the comparison images above. Given the surface temperature, there are obviously no oceans on Venus. Vast lava flows cover 85% of the surface, but there are no large basins, neither impact (like the Moon's maria) nor tectonic-related (like Earth's ocean beds). Most flow regions are smooth. There is little current eruptive activity. There are many dormant volcanoes, from 500 km diameter to tiny vents; 3000 over 20 km diameter; 100,000 altogether! Over 160 are larger than the largest volcano on Earth (Hawaii). There are also many impact craters, but fewer per unit area than on the Moon or Mercury. This implies a younger surface than those planets.

[This radar image shows four overlapping volcanic domes. They average about 16 miles in diameter, with maximum heights of 2,500 feet. They were produced by eruptions of thick lava coming from vents on the relatively level ground, allowing the lava to flow in an even pattern. Click for enlargement.]
Surprisingly, Venus shows a uniform distribution of craters across its surface.
- Shown at right is a radar image of a 30-mile-diameter impact crater surrounded by a bright "splash blanket" of ejecta. Lighter-toned regions on radar images are rougher; darker-toned are smoother. Click for a larger view.

This situation is unique in the Solar System (see discussions of the Moon and the outer satellites in other Guides). It implies the whole surface formed at one time. Judged by the density of impact craters, the surface is relatively young---only about 500 Myr old, unlike the 4+ Byr-old surfaces of the Moon, Mars, etc. The combined evidence indicates that the entire planet underwent sudden catastrophic melting & resurfacing, possibly induced by heat trapping under a thick crust. This process could be cyclic, repeating after sufficient interior heat builds up.
- Venus' surface history will be discussed in the video "Venus Unveiled."

E. Venus: Atmosphere

Venus' atmosphere is dense, hot, dry, and corrosive. It is entirely hostile to Earth-like life. Despite the dense and corrosive atmosphere, there is little weathering of surface features on Venus because wind speeds are very low (and the sulfuric acid rain evaporates at high altitude before reaching the ground).
- The bulk of the atmosphere is carbon dioxide (CO2).
- H2O vapor has only 1/10,000 of its abundance on Earth, and there is no liquid water on the surface. A desiccated planet.
- We will find later (Study Guide 19) that the absence of water is a key to the bizarre properties of the Venusian atmosphere.
- Lack of liquid water, which on Earth is a lubricant for the outer layers of the interior, may also help to inhibit tectonic activity on Venus.
- The Venusian cloud decks? The clouds are sulfuric acid (!!) droplets.
- They originate from volcanic outgassing in the absence of rainfall.
- See the atmospheric profile chart at right:
- Remarkable differences from Earth's atmosphere.
- Temperatures and pressures like those at Earth's surface occur at an altitude of 50 km in Venus' atmosphere. Below that, pressures and temperatures are much higher than on Earth.
- Surface temperature ~ 750 K (480° C)!
- Venus' surface is hotter than Mercury's, despite its larger distance from the Sun!
- Surface pressure = 90x Earth's. This implies Venus' atmosphere is 90x more massive than Earth's!

The Greenhouse Effect

Venus would be warmer than the Earth simply because it is nearer the Sun. But the extraordinarily high Venusian temperature is not caused by higher solar input. Instead, it is produced by the Greenhouse Effect, an atmospheric process which inhibits surface cooling.
- The Greenhouse Effect was first recognized in the 1820's, and the first quantitative discussion was published by Arrhenius in 1896.
- The main heat input to any planetary atmosphere (including Earth's) is from the Sun. This occurs mainly at visible wavelengths, where the Sun is brightest.
- Cooling from the surface is by radiation to space. Because the temperature of planetary surfaces is (fortunately for us!) much lower than the Sun's, this occurs not at visible but instead at infrared wavelengths. (Review Guides II and III to remind yourself of the characteristics of radiation from dense objects like planets.)
- The final temperature is determined by the equilibrium point, where the heating rate balances the cooling rate.
- Certain "Greenhouse gases" (H2O, CO2, CH4) act like a blanket to "trap the heat." They preferentially absorb infrared radiation and reflect it back to the surface, thereby reducing radiative cooling. See sketch above right (click for enlargement).
- This causes a significant temperature rise to the point where the surface can radiate as much energy to space (through the Greenhouse blocking) as it receives from the Sun. The situation is like the level of a lake adjusting to the increased height of its outlet dam.
- Because all of the surface cooling must take place in the infrared, any gas that can impede IR radiation is an effective Greenhouse agent. Even the tiny amounts of Greenhouse gases in the Earth's atmosphere can have a big effect. Here is a chart that shows the radiative input, output, and Greenhouse gas blocking as a function of wavelength.
- On Earth, where the Greenhouse gases are only "trace" constituents of the atmosphere (CO2 totals only 0.04% of the atmosphere's mass), the Greenhouse temperature increase is a modest 30° C (or 54° F), which is just enough to keep Earth's surface "comfortable" by human standards and prevent the oceans from freezing over.
- But on Venus, where the atmosphere is almost pure CO2 and massive enough to block large regions of the infrared spectrum, the temperature rise is 400° C.

F. Venus and Earth

Venus is a sobering lesson in comparative planetology. The incredible differences between terrestrial and Venusian conditions were a great shock to astronomers. How can the atmospheres of Venus and Earth, despite their similarities in size, mass, and distance from the Sun, be so different? The culprit is probably the seemingly small difference in distance to the Sun (30%), as we will see in Study Guide 19. Venus is totally unsuitable for a biosphere for two entirely different reasons: its hostile atmosphere and its episodes of catastrophic resurfacing (both related to heat-trapping). It is ironic that this horrific world was named in many cultures for the Goddess of Love. The Maya, who believed it was a vicious god bent on destruction, were closer to the truth. Here is another astronomical touchstone for human societies: it was the recognition of the power of the Greenhouse Effect on Venus that first led atmospheric scientists to become concerned about global warming on Earth.

[Spaceman Spiff zooms past Venus on his way to Mars --- next]

Reading for this lecture: Bennett textbook: pp. 203-204; Secs. 9.3, 9.5. Study Guide 15.
Viewing: video shown in class: "NOVA: Venus Unveiled." If you missed the class, the video can be viewed in Clemons Library. Its call number is VHS 13769.
Reading for next lecture: Bennett textbook: p. 206; Sec. 9.4. Study Guide 16.
March 2013 by rwo. Venus images copyright © 1997, Calvin J. Hamilton. Atmosphere profile copyright © Harcourt, Inc. Greenhouse effect drawing copyright © Toby Smith. Text copyright © 1998-2013 Robert W. O'Connell. All rights reserved. These notes are intended for the private, noncommercial use of students enrolled in Astronomy 1210 at the University of Virginia.
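The equilibrium argument in the Greenhouse Effect section above can be written as a single energy balance. The sketch below is not part of the study guide; it assumes the standard relation (1 − A)·S/4 = σT⁴ and round values for the solar constant and albedos, and shows that without greenhouse blocking both planets would settle near 250 K:

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0     # solar constant at 1 AU, W m^-2 (round number)

def equilibrium_temp(solar_constant, albedo):
    """Temperature at which absorbed sunlight balances infrared cooling:
    (1 - albedo) * S / 4 = sigma * T**4, with no greenhouse blocking."""
    return ((1 - albedo) * solar_constant / (4 * SIGMA)) ** 0.25

S_VENUS = S_EARTH / 0.72 ** 2            # Venus is at ~0.72 AU, so S scales as 1/r^2
print(equilibrium_temp(S_EARTH, 0.30))   # ~255 K for Earth
print(equilibrium_temp(S_VENUS, 0.70))   # ~243 K for Venus (high albedo!)
# The observed Venusian surface temperature (~750 K) is hundreds of degrees
# hotter than this bare equilibrium value; that gap is the Greenhouse Effect
# described above.
```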
http://www.astro.virginia.edu/class/oconnell/astr1210/guide15.html
CATEGORIZING CELESTIAL OBJECTS
Background, Activities and Critical Analysis
Adnaan Wasey, Online NewsHour
Subjects: Space science, earth science, general science
Time: One 45-minute period with options to extend
Level: 9-12 (lesson can be modified for lower grades)

Objectives:
1. Use astronomy concepts to develop and test a classification system for planets.
2. Participate in a class vote on planet classification.
3. Read an article about astronomers' response to the planetary categorization.
4. Discuss and write an essay about the scientist's role as decision-makers for the public.

These lessons tie to National Standards (see the correlation below).

Materials:
- Copies of an Online NewsHour article about the debate over Pluto's planetary status, "Pluto Debate Eclipsing More Important Research, Some Say" (if students do not have Internet access, a printer-friendly PDF)
- Copies of the "What is a planet?" instructions, data sheet, worksheet, rubric and homework instructions (printer-friendly PDF) (one per student) and transparency (optional)
- Copies of "How does your definition compare?" instructions and worksheet (one per student or one transparency)
- Copies of the rubric (printer-friendly PDF)
- Copies of "Are these planets?" (printer-friendly PDF)
- Pen and paper for each student, calculators (optional)
- (optional, for transparencies)
- Solar system posters or astronomy text books (if available)
- Download of all materials as one file (printer-friendly PDF)

Background:
- Full coverage of the "Pluto Debate" by the Online NewsHour, including NewsHour with Jim Lehrer TV segments, is available at the Online NewsHour Web site.
- In August 2006, the International Astronomical Union voted to demote Pluto to "dwarf planet" status, leaving eight full "planets" in the solar system.
- The new "planet" definition is based on the ability of a celestial object to keep a spherical shape and to dominate its orbit around the Sun. Before this decision, there was no official definition of a "planet."
- Other celestial bodies are now given the "dwarf planet" distinction: one an object discovered in 2004 in the Kuiper Belt, a region at the edge of the solar system, and another a body in the solar system's main asteroid belt, discovered 200 years ago.
- Discoveries in recent years (Eris, and planet-like objects in other star systems) have brought the question of what a planet is to the forefront in the space science community.

Procedure:
1. Ask students for examples of scientific classification schemes (taxonomy in biology, geologic timescales, planet categorization, etc.). Ask the students how they think scientists create these classification systems.
2. Tell the students about the IAU's August 2006 decision to demote Pluto from "planet" to "dwarf planet," then introduce the planet categorization exercise. Students will examine scientific data, develop a scientifically based definition of a planet in small groups, then present their definition to the class so the class can vote for their favorite.
3. Distribute the "What is a planet?" handouts containing the assignment instructions, celestial body data sheet, a worksheet, rubric and homework instructions. You may also choose to display data on the projector or blackboard.
4. Review the assignment and rubric with the students and answer any clarification questions. Tell the students that astronomers were considering many definitions, that science is always evolving, and that the IAU realizes that their current "planet" definition may need to change.
5. Divide the students into appropriate groups, or have the students work on their own, to examine the data sheets and develop their planetary definition.
6. As the students work, refer to the "Are these planets?" page to help the students refine their definitions.
7. Once the students have created their definitions, collect their worksheets. Ask for a representative from each group, or volunteers from the class, to present their definition and reasoning to the class. You may also want to read the definitions yourself to maintain anonymity.
8. Take a poll to see which definition the students like best.
9. Show the class the International Astronomical Union's actual August 2006 planetary definition, "How does your definition compare?", by distributing handouts, showing a transparency, or reading the definition aloud. Discuss similarities and differences between the definition the class chose and the one chosen by the IAU.
10. Ask the students to read the article "Pluto Debate Eclipsing More Important Research, Some Say" (http://www.pbs.org/newshour/indepth_coverage/science/pluto/news.html; if students do not have Internet access, a PDF) in class or as a homework assignment. Students may also visit the "Pluto Debate" Web site at the Online NewsHour.

(Extension Activity) Download the homework assignment (printer-friendly PDF). Lead a discussion about the role of the scientist in their community as a precursor to an essay, or ask students to investigate the subject on their own as part of their homework assignment. Students may examine one or more of the following questions:
- Is voting on a scientific concept in keeping with the ethical tradition of science and the scientist's search for the truth?
- Is voting a valid way to get the opinion of the scientific community?
- Should scientists accept a "best available explanation" though they know it may not be correct?
- Do you think the IAU's planetary definition decision was made by examining enough data, using logical arguments, and using an appropriate amount of time?
- What new scientific evidence would help resolve the planetary definition debate?

Resources for Teachers: The NewsHour's in-depth coverage of planetary categorization, "The Pluto Debate," including segments from the NewsHour with Jim Lehrer TV segments, is available at the Online NewsHour Web site.

Correlation to National Science Standards (from the National Science Education Standards site at http://books.nap.edu/html/nses/html/6e.html): Content Standard G: History and Nature of Science.

Compendium of K-12 Standards Addressed:
Standard 11. Understands the nature of scientific knowledge
Standard 12. Understands the nature of scientific inquiry
Standard 13. Understands the scientific enterprise
Standard 1. Uses a variety of strategies in the problem-solving process
Standard 2. Understands and applies basic and advanced properties of the concepts of numbers
Standard 4. Understands and applies basic and advanced properties of the concepts of measurement
Standard 9. Understands the general nature and uses of mathematics
Standard 1. Uses the general skills and strategies of the writing process
Standard 3. Uses grammatical and mechanical conventions in written compositions
Standard 5. Uses the general skills and strategies of the reading process
Standard 7. Uses reading skills and strategies to understand and interpret a variety of informational texts
Standard 8. Uses listening and speaking strategies for different purposes
Standard 1. Contributes to the overall effort of a group
Standard 3. Works well with diverse individuals and in diverse situations
Standard 4. Displays effective interpersonal communication skills
Standard 2. Uses various information sources, including those of a technical nature, to accomplish specific tasks
Standard 1. Understands and applies the basic principles of presenting
Standard 2. Understands and applies basic principles of logic and reasoning
Standard 3. Effectively uses mental processes that are based on identifying similarities and differences
Standard 6. Applies decision-making techniques

About the Author: Adnaan Wasey is an associate editor at the Online NewsHour. He has degrees in Engineering and Chemistry and taught high school science before joining the NewsHour. To find out more about opportunities to contribute to this site, contact Leah Clapman at [email protected].
http://www.pbs.org/newshour/extra/teachers/lessonplans/science/planet_categorization.html
When (V^P) itself is raised to the Qth power

What is [(10^2)]^3? Obviously, we need to multiply 10 by itself many times. Since 10^2 = 10·10, [(10^2)]^3 means multiplying that product by itself a total of 3 times: in all 2·3 = 6 factors of 10. In other words, if 10 is raised to the second power, and the result is then raised to the 3rd power, that is the same as raising 10 to the (2·3) = 6th power. Indeed, the same argument can be made for any numbers, as long as they are raised to powers which are whole numbers. For any choice of whole numbers P and Q, if you raise some number V to power P and then raise the result to power Q, the result is like raising the number V to power (P·Q):

    (V^P)^Q = V^(P·Q)

Say V = 10^P, so that P = log V. Raising V to power Q and keeping the same rules found for integers P and Q:

    V^Q = (10^P)^Q = 10^(P·Q)

Taking the logarithm of both sides:

    log(V^Q) = P·Q = Q·log V

In this manner, given a number V, logarithms help calculate V^Q, even if Q is not a whole number (as discussed below). The prescription:
--- Take the number V.
--- Find its logarithm P = log V.
--- Multiply P by Q to get Q·log V. Say that is the number U.
--- Find the number whose logarithm is U, that is, find 10^U. That is the value of V^Q.

But how does one interpret raising a number to the power of an arbitrary real number?

(1) We may start with a simple example. Suppose V is a number satisfying V·V = 10. If any powers a and b satisfy 10^a · 10^b = 10^(a+b), then V behaves the way one expects 10^(1/2) or 10^0.5 to do. Namely, multiplying it by itself:

    10^(1/2) · 10^(1/2) = 10^(1/2 + 1/2) = 10^1 = 10

(2) Similarly for the Qth root of 10--the number which must be multiplied by itself Q times to get 10. If written as 10^(1/Q), it would satisfy the equation for x^(a+b), with 10^(1/Q) multiplied by itself Q times. (We do not go now into the question of how the Qth root is derived: there exist methods.) This can also be expressed using the earlier equation (in the box above). If 10^(1/Q) is raised to the Qth power and the earlier relation still holds, then

    (10^(1/Q))^Q = 10^(Q/Q) = 10^1 = 10

Furthermore, if the general relation (x^a)^b = x^(a·b) holds for any two factors a and b, then raising 10^(1/Q) to the Pth power gives

    (10^(1/Q))^P = 10^(P/Q)

This lets one formally define the power (P/Q), i.e. any rational number: first raise V to power 1/Q (that is, take the Qth root of V), then raise all that to the Pth power, by multiplying it by itself P times.

(3) There is no good way to raise 10 to a power which is "irrational," a number which cannot be written as a fraction P/Q; for instance, raising 10 to power √2 or π. However, even though such numbers can never be exactly expressed as a fraction, there exist ways of approaching them through a series of fractions F1, F2, F3 ... which approximate them ever more closely; π, for instance, is near 22/7 and even closer to 355/113, and by continuing this sequence (or more conventionally, using a sequence of decimal fractions), one can approximate it as closely as one wishes. In general, the numbers N1, N2, N3 ... equal to 10 raised to the powers F1, F2, F3 approaching the irrational number will also get closer to each other. One guesses that if the process is taken far enough, the result represents 10 to the irrational power as accurately as one may wish.

One example of raising numbers to power 3/2, which also shows a graphical application of logarithms, is in Kepler's third law. Suppose we seek to examine that law. We have the list of average distances r of planets from the Sun and of corresponding orbital periods T, and they both grow together, though not in strict proportion. We suspect that T is some power of r--but how can we check this, and how can we find what that power is?
The data (see the section on Kepler's laws) can be tabulated for each planet, listing r, T and their logarithms. We already know of course that Kepler found T^2 was proportional to r^3 (so columns 4 and 5 of that table should be equal, except the values used here are not accurate)--Kepler's famous 3rd law. It can be written

    T = r^(3/2)

with r measured in units of the Earth-Sun distance and T in years.

Exploring Further

Other fractional exponents arise from the gas laws. You may have learned that in a gas the pressure P of a given quantity of gas (say, one gram) is inversely proportional to its volume V:

    P·V = constant

That, however, only holds true if the temperature stays the same. In fact, when you pump gas into a container of half the volume, it not only generates higher pressure, but you invest energy overcoming the pressure of the gas already there. As a result (as users of bicycle pumps know well) the gas heats up, and its pressure increases more than twice. It turns out that if heat is not allowed to flow in or out, a good approximation for the gas law is

    P·V^k = constant

where the exponent k is a fraction larger than 1 (for air, about 7/5).

Author and Curator: Dr. David P. Stern
Mail to Dr. Stern: stargaze("at" symbol)phy6.org
Last updated 10 November 2007
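A short script makes both points above concrete. It is only an illustration (the orbit values are rough textbook numbers in AU and years, not the table referred to in the text): it computes V^Q as 10^(Q·log V), then checks that the slope of log T against log r for the planets comes out near 3/2.

```python
import math

def power_via_logs(V, Q):
    """The prescription above: V**Q = 10**(Q * log10(V))."""
    P = math.log10(V)      # P = log V
    U = Q * P              # U = Q log V
    return 10 ** U         # the number whose logarithm is U

print(power_via_logs(10, 0.5))        # ~3.1623, i.e. the square root of 10

# Kepler check: if T = r**1.5, then log T = 1.5 * log r, so the slope of
# log T against log r should be close to 3/2.
r = [0.387, 0.723, 1.0, 1.524, 5.203]     # Mercury, Venus, Earth, Mars, Jupiter (AU)
T = [0.241, 0.615, 1.0, 1.881, 11.86]     # orbital periods (years)
for i in range(1, len(r)):
    slope = ((math.log10(T[i]) - math.log10(T[0]))
             / (math.log10(r[i]) - math.log10(r[0])))
    print(round(slope, 3))                # each value is close to 1.500
```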
http://www-istp.gsfc.nasa.gov/stargaze/Slog3.htm
Video: Tips for Creating 4th Grade Level Word Problems, with Scott

Word problems help develop different skill sets for young math students. Learn how to create fourth grade level word problems to help your child understand math better. See Transcript.

Transcript: Tips for Creating 4th Grade Level Word Problems

Hi, I'm Scott for About.com; today I have a few tips for you on how to create word problems, from Math.About.com. Word problems are a great way to teach children how the math they are learning in school applies to everyday practical situations. More specifically, word problems show students that math is not just about dry number calculations, but about real-life problems. Word problems, or story problems, help children develop critical thinking skills.

Review Basic Concepts in the Word Problems
Tip 1: Start by becoming familiar with the general concepts your child is learning in math. Students in the fourth grade should be learning how to understand basic patterns and algebra, data management and probability, number concepts, and basic geometry concepts and types of measurement. For an overview of fourth grade math skills, go to Math.About.com.

Word Problems Should Relate to the Student's Life
Tip 2: Once you've decided the type of word problem you're going to create, relate the problem to a real-life situation. Children in the 4th grade should be able to understand basic patterns and algebra, data management and probability, number concepts, and basic geometry concepts and types of measurement. Fourth grade word problems should test these specific skills, grounding abstract math concepts in practical situations that fourth graders can relate to, such as classroom or play situations. When creating word problems for fourth graders, use people, objects, places, or concepts that they are familiar with. In the following word problem, familiar everyday circumstances such as preparing to go to school help the student relate to abstract fourth grade number concepts. Here is a sample problem: Kerri has to be at school by 8:30. It takes her 5 minutes to brush her teeth, 10 minutes to shower, 20 minutes to dry her hair, 10 minutes to eat breakfast and 25 minutes to walk to school. What time will she need to get up?

Create a Strategy to Solve the Word Problem
Tip 3: Math is all about problem solving. One of the best ways to help children learn math is to present them with a problem in which they have to devise their own strategies to find the solution(s). There is usually more than one way to solve a math problem, so try to devise a problem that can be solved in two or more different ways using math concepts familiar to the child. Here is a sample problem: You and two friends are ready to share your birthday cake. Just before you cut the cake, a 4th friend comes to join you. Show and explain what you will do. Here, the question can be solved using the concept of fractions. At first, the child must cut the cake into thirds (as there were three people in total); but in the end there are four, so the cake must be cut into fourths. The answer could be arrived at using geometry concepts just as well: since in the end there are four people, you could cut the cake into two half circles, then cut each half circle into quarter circles to get four equal pieces.

Show How the Math Problem is Solved
Tip 4: Have the students justify their solutions. You can find word problem worksheets for each grade on Math.About.com, along with more practical problem solving tips. Thank you for watching.
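For completeness, a tiny check of the sample "Kerri" problem (this snippet is illustrative, not part of the video): total the morning routine and subtract it from the arrival time.

```python
from datetime import datetime, timedelta

# Kerri must be at school by 8:30 and her routine takes
# 5 + 10 + 20 + 10 + 25 = 70 minutes.
arrival = datetime(2024, 1, 1, 8, 30)              # the date is a placeholder
routine = timedelta(minutes=5 + 10 + 20 + 10 + 25)
wake_up = arrival - routine
print(wake_up.strftime("%H:%M"))                   # 07:20
```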
For more information, visit Math.About.com.
http://video.about.com/math/Tips-for-Creating-4th-Grade-Level-Word-Problems.htm
We have this transcript available for download as a PDF.

Battle of the Bulge offers insights into American history topics including World War II, military strategy, the importance of technology in war, first-person accounts of war, unilateralism or multilateralism in foreign policy, and the role of the military in a democratic society. You can use part or all of the film, or delve into the rich resources available on this website to learn more, either in a classroom or on your own. The following activities are grouped into 4 categories: geography, economics, history, and civics. You can also read a few helpful hints below for completing the activities.

Geography
1. Divide the class into groups of three persons each. Within each group, one person should create a map showing Germany's borders as of 1938; one should create a map showing the territory controlled by Germany as of 1942; and one should create a map showing the postwar division of Germany. Post these maps around the class and discuss the events that caused Germany's expansion and later defeat and division. List these events on a timeline on the board.
2. As the film notes, a major factor in the Allied victory in the Battle of the Bulge was air power. Today, air power remains a vital part of American military strategy. Prepare a brief report comparing a specific element of the United States' use of air power in World War II and more recent conflicts, such as those over Kosovo and Afghanistan. For example, you might compare the capabilities (for example, range, speed, and armament) of American military aircraft in these two eras, or you might examine how technological advances have improved the accuracy of bombing since World War II.

Economics
1. What percentage of its economy, usually expressed as its Gross Domestic Product (GDP), do you think the United States devoted to the armed forces during World War II? What percentage of its economy do you think the United States devotes to the armed forces today? Take an informal poll of five classmates, friends, or family members on these two questions. Combine your results with those of your classmates, keeping in mind that no person should respond to the poll more than once. What is the range of guesses for each question? What is the average? Now look up the answers to these questions. How accurate were people?
2. The greatest "cost" of World War II was in lives. Find out the number of persons (both soldiers and civilians) killed in the war in each of the major nations that took part in the war. (a) Present this data in two forms: as a bar graph and as a pie chart. (b) Divide the number of American deaths by the total U.S. population at the start of the war to find out roughly what percentage of the total U.S. population died in the war. Now, multiply this percentage by the current U.S. population. American deaths during World War II were equivalent (as a share of population) to how many deaths today?

History
1. What if Hitler's gamble at the Battle of the Bulge had succeeded and Germany had permanently stopped the Allied advance in the West? Imagine that you are a historian in 1960 looking back on the battle. Write a brief article in which you describe Germany's victory in the battle and the effects that the victory had on subsequent events.
2. The Battle of the Bulge was neither the first nor the last time that U.S. military forces faced a desperate situation.
Divide the class into 6 groups and assign each group one of the following: Valley Forge (1777-1778), the Battle of Chancellorsville (1863), the Battle of the Little Bighorn (1876), the Japanese invasion of the Philippines (1941), the North Korean invasion of South Korea (1950), and the Tet Offensive (1968). Each group should prepare a brief oral report for the class, answering the following questions: Why was the U.S. situation desperate? How did the U.S. forces respond? What was the outcome, both in the short term and over the course of the conflict as a whole?

Civics
1. As the film notes, one of the heroes of the Battle of the Bulge was General George Patton. View the Academy Award-winning 1970 film "Patton," which dramatizes events from his life (including his role in the Battle of the Bulge) and shows some of the contrasts between Patton and other American generals. Write an essay explaining what you do and do not admire about Patton as he is portrayed in the film. What does his story tell you about the special challenges of being a military leader in a democratic society like the United States?
2. The United States fought World War II as part of an international coalition. Similarly, the United States has sought allies in its current war against terrorism. Divide the class into two groups and hold a debate on the following question: Should the United States generally pursue its foreign policy goals by cooperating with other nations, or should it generally act on its own? Support your view with specific examples from past and current events.

Helpful Hints

Geography:
1. You also should make sure students are aware of Germany's reunification in 1990 following the collapse of East Germany. (You might want to compare Germany's present-day borders to those of 1938.)
2. Students also might want to explore the debate, which has continued in one form or another for decades, about the potential of air power to partially or even completely eliminate the need for ground troops to achieve military objectives.

Economics:
1. During World War II, military spending accounted for more than 35 percent of GDP, according to a Congressional Research Service report. Today it accounts for about 3.5 percent of GDP, according to an analysis (table 9) of President Bush's fiscal year 2003 request done by the nonprofit Center for Strategic and Budgetary Assessment.
2a. If students are having trouble with the pie chart, explain that the total pie represents the total number of deaths suffered by all major warring nations.
2b. You also might have students compare Soviet deaths during World War II with the current U.S. population so they can get a sense of the enormous cost of the war to the Soviet Union.

History:
1. Though there is no way to know what the consequences of a German victory would have been, students might imagine that the United States ended the war by dropping atomic bombs on Germany, or that the Soviet Union eventually defeated Germany and occupied much or all of continental Europe.
2. After the presentations, ask students how, in the cases where U.S. forces were defeated in the battle, the United States nevertheless was able to win the war.

Civics:
1. If students need help getting started, ask them to think about the tension between discipline and hierarchy, which the armed forces need in order to function effectively, and individual freedom and equality, which are highly valued in the United States.
You also might use the essays as the starting point of a larger discussion about the role of the military in the United States and to what degree that role has changed as the United States has become a world power.
2. Students who favor cooperation might point out that other nations can provide various forms of support to promote American goals. Students who favor unilateral action might point out that working with allies often forces a nation to make compromises in its policies.
http://www.pbs.org/wgbh/americanexperience/features/teachers-resources/bulge-teachers-guide/
- This page is about the measurement using water as a reference. For a general use of specific gravity, see relative density. See intensive property for the property implied by "specific."

Specific gravity is the ratio of the density of a substance compared to the density (mass of the same unit volume) of a reference substance. Apparent specific gravity is the ratio of the weight of a volume of the substance to the weight of an equal volume of the reference substance. The reference substance is nearly always water for liquids or air for gases. Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm equal to 101.325 kPa. Temperatures for both sample and reference vary from industry to industry. In British brewing practice the specific gravity as specified above is multiplied by 1000. Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines, hydrocarbons, sugar solutions (syrups, juices, honeys, brewers wort, must etc.) and acids.

Specific gravity, as it is a ratio of densities, is a dimensionless quantity. Specific gravity varies with temperature and pressure; reference and sample must be compared at the same temperature and pressure, or corrected to a standard reference temperature and pressure. Substances with a specific gravity of 1 are neutrally buoyant in water, those with SG greater than one are denser than water, and so (ignoring surface tension effects) will sink in it, and those with an SG of less than one are less dense than water, and so will float. In scientific work the relationship of mass to volume is usually expressed directly in terms of the density (mass per unit volume) of the substance under study. It is in industry where specific gravity finds wide application, often for historical reasons.

True specific gravity can be expressed mathematically as

    SG_true = ρ_sample / ρ_H2O

where ρ_sample is the density of the sample and ρ_H2O is the density of water. The apparent specific gravity is simply the ratio of the weights of equal volumes of sample and water in air:

    SG_apparent = W_A,sample / W_A,H2O

where W_A,sample represents the weight of the sample and W_A,H2O the weight of water, both measured in air. It can be shown that true specific gravity can be computed from different properties:

    SG_true = (g V ρ_sample) / (g V ρ_H2O) = W_V,sample / W_V,H2O

where g is the local acceleration due to gravity, V is the volume of the sample and of water (the same for both), ρ_sample is the density of the sample, ρ_H2O is the density of water and W_V represents a weight obtained in vacuum.

The density of water varies with temperature and pressure as does the density of the sample so that it is necessary to specify the temperatures and pressures at which the densities or weights were determined. It is nearly always the case that measurements are made at nominally 1 atmosphere (1013.25 mb ± the variations caused by changing weather patterns) but as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products) variations in density caused by pressure are usually neglected at least where apparent specific gravity is being measured. For true (in vacuo) specific gravity calculations air pressure must be considered (see below). Temperatures are specified by the notation T_s/T_r, with T_s representing the temperature at which the sample's density was determined and T_r the temperature at which the reference (water) density is specified.
For example SG (20°C/4°C) would be understood to mean that the density of the sample was determined at 20 °C and of the water at 4 °C. Taking into account different sample and reference temperatures, we note that while SG_H2O = 1.000000 (20°C/20°C), it is also the case that SG_H2O = 0.998203/0.999972 ≈ 0.99823 (20°C/4°C). Here temperature is being specified using the current ITS-90 scale and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale the densities at 20 °C and 4 °C are, respectively, 0.9982071 and 0.9999720, resulting in an SG (20°C/4°C) value for water of 0.9982343.

As the principal use of specific gravity measurements in industry is determination of the concentrations of substances in aqueous solutions, and these are found in tables of SG vs concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity. For example, in the brewing industry, the Plato table, which lists sucrose concentration by weight against true SG, was originally (20°C/4°C), i.e. based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C, which is very close to the temperature at which water has its maximum density, ρ(H2O) equal to 0.999972 g·cm−3 (999.972 kg·m−3 in SI units, or 62.43 lbm·ft−3 in United States customary units). The ASBC table in use today in North America, while it is derived from the original Plato table, is for apparent specific gravity measurements at (20°C/20°C) on the IPTS-68 scale, where the density of water is 0.9982071 g·cm−3. In the sugar, soft drink, honey, fruit juice and related industries sucrose concentration by weight is taken from a table prepared by A. Brix which uses SG (17.5°C/17.5°C). As a final example, the British SG units are based on reference and sample temperatures of 60 °F and are thus (15.56°C/15.56°C).

Given the specific gravity of a substance, its actual density can be calculated by rearranging the above formula:

    ρ_sample = SG × ρ_H2O

Occasionally a reference substance other than water is specified (for example, air), in which case specific gravity means density relative to that reference.

Measurement: apparent and true specific gravity

Specific gravity can be measured in a number of ways. The following illustration involving the use of the pycnometer is instructive. A pycnometer is simply a bottle which can be precisely filled to a specific, but not necessarily accurately known volume, V. Placed upon a balance of some sort it will exert a force

    F_b = g (m_b − ρ_a m_b / ρ_b)

where m_b is the mass of the bottle and g the gravitational acceleration at the location at which the measurements are being made, ρ_a is the density of the air at the ambient pressure, and ρ_b is the density of the material of which the bottle is made (usually glass), so that the second term is the mass of air displaced by the glass of the bottle whose weight, by Archimedes' Principle, must be subtracted. The bottle is, of course, filled with air, but as that air displaces an equal amount of air the weight of that air is canceled by the weight of the air displaced. Now we fill the bottle with the reference fluid, e.g. pure water. The force exerted on the pan of the balance becomes:

    F_w = g (m_b − ρ_a m_b / ρ_b + V ρ_w − V ρ_a)

If we subtract the force measured on the empty bottle from this (or tare the balance before making the water measurement) we obtain

    F_w,n = g V (ρ_w − ρ_a)

where the subscript n indicates that this force is net of the force of the empty bottle. The bottle is now emptied, thoroughly dried and refilled with the sample. The force, net of the empty bottle, is now:

    F_s,n = g V (ρ_s − ρ_a)

where ρ_s is the density of the sample.
The ratio of the sample and water forces is:

    SG_A = F_s,n / F_w,n = (ρ_s − ρ_a) / (ρ_w − ρ_a)

This is called the Apparent Specific Gravity, denoted by subscript A, because it is what we would obtain if we took the ratio of net weighings in air from an analytical balance or used a hydrometer (the stem displaces air). Note that the result does not depend on the calibration of the balance. The only requirement on it is that it read linearly with force. Nor does SG_A depend on the actual volume of the pycnometer. Further manipulation, and finally substitution of the true specific gravity SG_V = ρ_s/ρ_w (the subscript V is used because this is often referred to as the specific gravity in vacuo), gives the relationship between apparent and true specific gravity:

    SG_A = (SG_V ρ_w − ρ_a) / (ρ_w − ρ_a)

In the usual case we will have measured weights and want the true specific gravity. This is found from

    SG_V = SG_A − (ρ_a / ρ_w)(SG_A − 1)

Since the density of dry air at 1013.25 mb at 20 °C is 0.001205 g·cm−3 and that of water is 0.998203 g·cm−3, the difference between true and apparent specific gravities for a substance with specific gravity (20°C/20°C) of about 1.100 would be 0.000120. Where the specific gravity of the sample is close to that of water (for example dilute ethanol solutions) the correction is even smaller.

Digital density meters

Hydrostatic Pressure-based Instruments: This technology relies upon Pascal's Principle, which states that the pressure difference between two points within a vertical column of fluid is dependent upon the vertical distance between the two points, the density of the fluid and the gravitational force. This technology is often used for tank gauging applications as a convenient means of liquid level and density measure.

Vibrating Element Transducers: This type of instrument requires a vibrating element to be placed in contact with the fluid of interest. The resonant frequency of the element is measured and is related to the density of the fluid by a characterization that is dependent upon the design of the element. In modern laboratories precise measurements of specific gravity are made using oscillating U-tube meters. These are capable of measurement to 5 to 6 places beyond the decimal point and are used in the brewing, distilling, pharmaceutical, petroleum and other industries. The instruments measure the actual mass of fluid contained in a fixed volume at temperatures between 0 and 80 °C but, as they are microprocessor based, can calculate apparent or true specific gravity and contain tables relating these to the strengths of common acids, sugar solutions, etc. The vibrating fork immersion probe is another good example of this technology. This technology also includes many coriolis-type mass flow meters which are widely used in the chemical and petroleum industry for high accuracy mass flow measurement and can be configured to also output density information based on the resonant frequency of the vibrating flow tubes.

Ultrasonic Transducer: Ultrasonic waves are passed from a source, through the fluid of interest, and into a detector which measures the acoustic spectroscopy of the waves. Fluid properties such as density and viscosity can be inferred from the spectrum.

Radiation-based Gauge: Radiation is passed from a source, through the fluid of interest, and into a scintillation detector, or counter. As the fluid density increases, the detected radiation "counts" will decrease. The source is typically the radioactive isotope cesium-137, with a half-life of about 30 years.
A key advantage for this technology is that the instrument is not required to be in contact with the fluid – typically the source and detector are mounted on the outside of tanks or piping.

Buoyant Force Transducer: The buoyancy force produced by a float in a homogeneous liquid is equal to the weight of the liquid that is displaced by the float. Since buoyancy force is linear with respect to the density of the liquid within which the float is submerged, the measure of the buoyancy force yields a measure of the density of the liquid. One commercially available unit claims the instrument is capable of measuring specific gravity with an accuracy of +/- 0.005 SG units. The submersible probe head contains a mathematically characterized spring-float system. When the head is immersed vertically in the liquid, the float moves vertically and the position of the float controls the position of a permanent magnet whose displacement is sensed by a concentric array of Hall-effect linear displacement sensors. The output signals of the sensors are mixed in a dedicated electronics module that provides an output voltage whose magnitude is a direct linear measure of the quantity to be measured.

In-Line Continuous Measurement: Slurry is weighed as it travels through the metered section of pipe using a patented, high resolution load cell. This section of pipe is of optimal length such that a truly representative mass of the slurry may be determined. This representative mass is then interrogated by the load cell 110 times per second to ensure accurate and repeatable measurement of the slurry.

Examples:
- Helium gas has a density of 0.164 g/L; it is 0.139 times as dense as air.
- Air has a density of 1.18 g/L.
- Ethyl alcohol has a specific gravity of 0.789, so it is 0.789 times as dense as water.
- Water has a specific gravity of 1.
- Table salt has a specific gravity of 2.17, so it is 2.17 times as dense as water.
- Aluminum has a specific gravity of 2.7, so it is 2.7 times as dense as water.
- Iron has a specific gravity of 7.87, so it is 7.87 times as dense as water.
- Lead has a specific gravity of 11.35, so it is 11.35 times as dense as water.
- Mercury has a specific gravity of 13.56, so it is 13.56 times as dense as water.
- Gold has a specific gravity of 19.3, so it is 19.3 times as dense as water.
- Osmium, the densest naturally occurring chemical element, has a specific gravity of 22.59, so it is 22.59 times as dense as water.
- Urine normally has a specific gravity between 1.003 and 1.035.
- Blood normally has a specific gravity of ~1.060.
(Samples may vary, so most of these figures are approximate.)

References
- Hough, J.S., Briggs, D.E., Stevens, R. and Young, T.W. Malting and Brewing Science, Vol. II: Hopped Wort and Beer, Chapman and Hall, London, 1991, p. 881.
- Bettin, H.; Spieweck, F.: "Die Dichte des Wassers als Funktion der Temperatur nach Einführung der Internationalen Temperaturskala von 1990," PTB-Mitteilungen 100 (1990), pp. 195–196.
- ASBC Methods of Analysis, Preface to Table 1: Extract in Wort and Beer, American Society of Brewing Chemists, St. Paul, 2009.
- ASBC Methods of Analysis, op. cit., Table 1: Extract in Wort and Beer.
- DIN 51757 (04.1994): Testing of mineral oils and related materials; determination of density.
- Density – VEGA Americas, Inc. Ohmartvega.com. Retrieved on 2011-11-18.
- Process Control Digital Electronic Hydrometer. Gardco. Retrieved on 2011-11-18.
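As a quick numerical check of the apparent-to-true correction derived earlier (this snippet is illustrative, not part of the article), using the air and water densities quoted in the text:

```python
# SG_true = SG_app - (rho_air / rho_water) * (SG_app - 1), densities in g/cm^3 at 20 C.
RHO_AIR = 0.001205
RHO_WATER = 0.998203

def true_sg(sg_apparent):
    return sg_apparent - (RHO_AIR / RHO_WATER) * (sg_apparent - 1)

sg_app = 1.100
print(sg_app - true_sg(sg_app))   # ~0.000120, the difference quoted in the text
```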
http://en.wikipedia.org/wiki/Specific_gravity
How black holes change gear

Astronomers found that black holes do not necessarily come with two different engines, but that each black hole can run in two different regimes, like two gears of the same engine. June 18, 2012

Black holes are extremely powerful and efficient engines that not only swallow up matter, but also return a lot of energy to the universe in exchange for the mass they eat. When black holes attract mass, they also trigger the release of intense X-ray radiation and power strong jets. But not all black holes do this the same way. This has long baffled astronomers. By studying two active black holes, researchers at the Netherlands Institute for Space Research (SRON) have now gathered evidence that suggests that each black hole can change between two different regimes, like changing the gears of an engine.

[Artist's impression of a black hole in one gear... Credit: P. Jonker/Rob Hynes]

Black hole jets — lighthouse-like beams of material that race outward at close to the speed of light — can have a major impact on the evolution of their environment. For example, jets from the supermassive black holes found at the center of galaxies can blow huge bubbles in and heat the gas found in clusters of galaxies.

[...and in its other gear. Credit: P. Jonker/Rob Hynes]

Another stunning example of what black hole jets can do is known as Hanny's Voorwerp, a cloud of gas where stars started forming after it was hit by the jet-beam of a black hole in a neighboring galaxy. These phenomena demonstrate the importance of research into the way black holes produce and distribute energy, but until recently, much of this has remained uncertain. In 2003, it became clear from astronomical observations that there is a connection between the X-ray emission from a black hole and its jet outflow. This connection needs to be explained if scientists want to understand how the black hole engine works. In the first years after this connection was discovered, it seemed that it was the same for all feeding black holes, but soon oddballs were found. These unusual examples still have a clear connection between the energy released in the X-ray emission and that put in the jet ejection. But the proportion differs from that in the "standard" black holes. As the number of oddballs grew, it started to appear that there were two groups of black hole engines working in a slightly different way, as if one were running on petrol and the other on diesel. For years, astronomers struggled to justify this difference based on the properties of the two groups of black holes, but to no avail. Recently, a step forward was made: A team of astronomers led by Michael Coriat from the University of Southampton in the United Kingdom found a black hole that seemed to switch between the two types of X-ray/jet connections, depending on its brightness change. This suggested that black holes do not necessarily come with two different engines, but that each black hole can run in two different regimes, like two gears of the same engine. Peter Jonker and Eva Ratti from the SRON have taken an important step forward in the attempts to solve this puzzle. Using X-ray observations from the Chandra X-ray Observatory and radio observations from the Expanded Very Large Array in New Mexico, they watched two black hole systems until their feeding frenzies ended. "We found that these two black holes could also 'change gear,' demonstrating that this is not an exceptional property of one peculiar black hole," said Ratti.
"Our work suggests that changing gear might be common among black holes. We also found that the switch between gears happens at a similar X-ray luminosity for all three black holes." These discoveries provide new and important input to theoretical models that aim to explain both the functioning of the black hole engine itself and its impact on the surrounding environment.
http://astronomy.com/en/News-Observing/News/2012/06/How%20black%20holes%20change%20gear.aspx
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why? Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48. Investigate the sum of the numbers on the top and bottom faces of a line of three dice. What do you notice? Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number? For this challenge, you'll need to play Got It! Can you explain the strategy for winning this game with any target? How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results? Ben’s class were making cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see? What happens if you join every second point on this circle? How about every third point? Try with different steps and see if you can predict what will happen. In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest? List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it? Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread? The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers. . . . Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him? Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces? A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . . Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way? Try adding together the dates of all the days in one week. Now multiply the first date by 7 and add 21. Can you explain what An investigation that gives you the opportunity to make and justify In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square? Can you dissect an equilateral triangle into 6 smaller ones? 
What number of smaller equilateral triangles is it NOT possible to dissect a larger equilateral triangle into? Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws? How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...? Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores. Find the sum of all three-digit numbers each of whose digits is Can you find an efficient method to work out how many handshakes there would be if hundreds of people met? Imagine we have four bags containing numbers from a sequence. What numbers can we make now? We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes? This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning. Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter. A collection of games on the NIM theme Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153? Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning? Try entering different sets of numbers in the number pyramids. How does the total at the top change? Can you tangle yourself up and reach any fraction? It starts quite simple but great opportunities for number discoveries and patterns! How many moves does it take to swap over some red and blue frogs? Do you have a method? If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable. This challenge asks you to imagine a snake coiling on itself. What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters. Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total? Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? A package contains a set of resources designed to develop pupils’ mathematical thinking. This package places a particular emphasis on “generalising” and is designed to meet the. . . . Imagine you have a large supply of 3kg and 8kg weights. How many of each weight would you need for the average (mean) of the weights to be 6kg? What other averages could you have? Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes. The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know?
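A quick computational check of the first puzzle above (the repeated three-digit block) can make the divisibility argument concrete. The sketch below is illustrative only and uses my own function names; the key fact is that writing the block as n gives the six-digit number n × 1001, and 1001 = 7 × 11 × 13. It also verifies the stated 2a + 3b + c test for divisibility by 7, which works because 100 ≡ 2 and 10 ≡ 3 (mod 7):

# Illustrative sketch (hypothetical helper names): why 594594-style numbers are
# divisible by 7, 11 and 13, and why the 2a+3b+c test detects divisibility by 7.

def repeat_block(n: int) -> int:
    """Turn a three-digit block such as 594 into 594594."""
    return n * 1000 + n      # equivalently n * 1001, and 1001 = 7 * 11 * 13

def weighted_digit_test(n: int) -> bool:
    """For a three-digit number abc, check whether 2a + 3b + c is divisible by 7."""
    a, b, c = n // 100, (n // 10) % 10, n % 10
    return (2 * a + 3 * b + c) % 7 == 0

if __name__ == "__main__":
    print(repeat_block(594), repeat_block(594) // 1001)        # 594594 594
    assert all(repeat_block(n) % d == 0
               for n in range(100, 1000) for d in (7, 11, 13))
    # The weighted test agrees with ordinary divisibility by 7 for every
    # three-digit number, because 100 = 2 (mod 7) and 10 = 3 (mod 7).
    assert all(weighted_digit_test(n) == (n % 7 == 0) for n in range(100, 1000))
    print("both claims check out for all three-digit numbers")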
http://nrich.maths.org/public/leg.php?code=72&cl=2&cldcmpid=6966
The average power (often simply called "power" when the context makes it clear) is the average amount of work done or energy transferred per unit time. The instantaneous power is then the limiting value of the average power as the time interval Δt approaches zero. When the rate of energy transfer or work is constant, all of this can be simplified to

P = W/t = E/t,

where W and E are, respectively, the work done or energy transferred in time t (usually measured in seconds). This is often summarized by saying that work is equal to the force acting on an object times its displacement (how far the object moves while the force acts on it). Note that only motion that is along the same axis as the force "counts", however; motion in the same direction as the force gives positive work, and motion in the opposite direction gives negative work, while motion perpendicular to the force yields zero work. Differentiating by time gives that the instantaneous power is equal to the force times the object's velocity v(t):

P(t) = F · v(t).

This formula is important in characterizing engines—the power put out by an engine is equal to the force it exerts times its velocity. The average power over a time interval Δt is therefore

P_avg = W/Δt = (1/Δt) ∫ F · v(t) dt.

The instantaneous electrical power P delivered to a component is given by

P(t) = v(t) · i(t).

If the component is a resistor, then:

P(t) = i(t)² · R = v(t)²/R.

If the component is reactive (e.g. a capacitor or an inductor), then the instantaneous power is negative when the component is giving stored energy back to its environment, i.e., when the current and voltage are of opposite signs. The average power consumed by a sinusoidally-driven linear two-terminal electrical device is a function of the root mean square (rms) values of the voltage across the terminals and the current through the device, and of the phase angle between the voltage and current sinusoids. That is,

P_avg = V_rms · I_rms · cos φ.

The amplitudes of sinusoidal voltages and currents, such as those used almost universally in mains electrical supplies, are normally specified in terms of root mean square values. This makes the above calculation a simple matter of multiplying the two stated numbers together. This figure can also be called the effective power, as compared to the larger apparent power, which is expressed in volt-amperes (VA) and does not include the cos φ term due to the current and voltage being out of phase. For simple domestic appliances or a purely resistive network, the cos φ term (called the power factor) can often be assumed to be unity, and can therefore be omitted from the equation. In this case, the effective and apparent power are assumed to be equal. More generally, the average power over a period T is

P_avg = (1/T) ∫ v(t) · i(t) dt,

where v(t) and i(t) are, respectively, the instantaneous voltage and current as functions of time. For purely resistive devices, the average power is equal to the product of the rms voltage and rms current, even if the waveforms are not sinusoidal. The formula works for any waveform, periodic or otherwise, that has a mean square; that is why the rms formulation is so useful. For devices more complex than a resistor, the average effective power can still be expressed in general as a power factor times the product of rms voltage and rms current, but the power factor is no longer as simple as the cosine of a phase angle if the drive is non-sinusoidal or the device is not linear.
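As a numerical illustration of the rms relation just described, the short Python sketch below (with assumed example values for the peak voltage, peak current and phase angle — none of them come from the text) averages the instantaneous power v(t) · i(t) over one cycle and compares it with V_rms · I_rms · cos φ and with the apparent power:

# A small numerical sketch (example values are my own assumptions): averaging
# the instantaneous power p(t) = v(t) * i(t) over one full cycle of a sinusoidal
# supply reproduces P = Vrms * Irms * cos(phi).
import numpy as np

f = 50.0                      # supply frequency in Hz (assumed)
Vpeak, Ipeak = 325.0, 14.1    # peak voltage and current (assumed values)
phi = np.deg2rad(30.0)        # phase angle between voltage and current (assumed)

t = np.linspace(0.0, 1.0 / f, 100_000, endpoint=False)   # one full cycle
v = Vpeak * np.sin(2 * np.pi * f * t)
i = Ipeak * np.sin(2 * np.pi * f * t - phi)

p_avg_numeric = np.mean(v * i)                    # (1/T) * integral of v(t) i(t) dt
v_rms, i_rms = Vpeak / np.sqrt(2), Ipeak / np.sqrt(2)
p_avg_formula = v_rms * i_rms * np.cos(phi)       # effective (real) power
apparent = v_rms * i_rms                          # apparent power in VA

print(f"numeric average power : {p_avg_numeric:10.2f} W")
print(f"Vrms*Irms*cos(phi)    : {p_avg_formula:10.2f} W")
print(f"apparent power        : {apparent:10.2f} VA")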
In the case of a periodic signal p(t) of period T, like a train of identical pulses, the instantaneous power is also a periodic function of period T. The peak power is simply defined by:

P_0 = max[p(t)].

The peak power is not always readily measurable, however, and the measurement of the average power is more commonly performed by an instrument. If one defines the energy per pulse as:

ε_pulse = ∫ p(t) dt (integrated over one period T),

then the average power is:

P_avg = ε_pulse / T.

One may define the pulse length τ such that

P_0 · τ = ε_pulse,

so that the ratios

P_avg / P_0 = τ / T

are equal. These ratios are called the duty cycle of the pulse train. In optics, or radiometry, the term power sometimes refers to radiant flux, the average rate of energy transport by electromagnetic radiation, measured in watts. The term "power" is also, however, used to express the ability of a lens or other optical device to focus light. It is measured in dioptres (inverse metres), and equals the inverse of the focal length of the optical device.
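Returning to the pulse-train relations above, here is a small sketch with made-up pulse parameters (peak power, pulse length and period are assumptions, not values from the text) showing how energy per pulse, average power and duty cycle tie together:

# A quick sketch (made-up numbers) of the pulse-train quantities above:
# peak power P0, energy per pulse, average power, and the duty cycle tau / T.

P0 = 2.0e3        # peak power of each pulse in watts (assumed)
tau = 100e-9      # pulse length in seconds (assumed)
T = 1.0e-3        # pulse period in seconds, i.e. repetition rate 1/T = 1 kHz (assumed)

energy_per_pulse = P0 * tau           # for a flat-topped pulse, the integral of p(t) dt
average_power = energy_per_pulse / T  # epsilon_pulse / T
duty_cycle = tau / T                  # equals average_power / P0

print(f"energy per pulse : {energy_per_pulse:.2e} J")
print(f"average power    : {average_power:.3f} W")
print(f"duty cycle       : {duty_cycle:.1e}")
print(f"P_avg / P0       : {average_power / P0:.1e}")   # matches the duty cycle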
http://www.reference.com/browse/Apparent+power
In mathematics, a group is an algebraic structure consisting of a set together with an operation that combines any two of its elements to form a third element. To qualify as a group, the set and the operation must satisfy a few conditions called group axioms, namely associativity, identity and invertibility. While these are familiar from many mathematical structures, such as number systems—for example, the integers endowed with the addition operation form a group—the formulation of the axioms is detached from the concrete nature of the group and its operation. This allows one to handle entities of very different mathematical origins in a flexible way, while retaining essential structural aspects of many objects in abstract algebra and beyond. The ubiquity of groups in numerous areas—both within and outside mathematics—makes them a central organizing principle of contemporary mathematics.

Groups share a fundamental kinship with the notion of symmetry. A symmetry group encodes symmetry features of a geometrical object: it consists of the set of transformations that leave the object unchanged, and the operation of combining two such transformations by performing one after the other. Such symmetry groups, particularly the continuous Lie groups, play an important role in many academic disciplines. Matrix groups, for example, can be used to understand fundamental physical laws underlying special relativity and symmetry phenomena in molecular chemistry.

The concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—a very active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely (its group representations), both from a theoretical and a computational point of view. A particularly rich theory has been developed for finite groups, which culminated with the monumental classification of finite simple groups completed in 1983. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory.

One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, .... The properties of integer addition (the sum of two integers is again an integer; addition is associative; 0 is an identity element; every integer a has an inverse −a) serve as a model for the abstract group axioms given in the definition below. The integers, together with the operation "+", form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures without dealing with every concrete case separately, the following abstract definition is developed to encompass the above example along with many others, one of which is the symmetry group detailed below. A group is a set, G, together with an operation "•" that combines any two elements a and b to form another element denoted a • b. The symbol "•" is a general placeholder for a concretely given operation, such as the addition above.
To qualify as a group, the set and operation, (G, •), must satisfy four requirements known as the group axioms: closure, associativity, the existence of an identity element, and the existence of inverse elements.

The order in which the group operation is carried out can be significant. In other words, the result of combining element a with element b need not yield the same result as combining element b with element a; the equation a • b = b • a may not always be true. This equation does always hold in the group of integers under addition, because a + b = b + a for any two integers (commutativity of addition). However, it does not always hold in the symmetry group below. Groups for which the equation a • b = b • a always holds are called abelian (in honor of Niels Abel). Thus, the integer addition group is abelian, but the following symmetry group is not.

The elements of the symmetry group of the square (D4) are:
- id (keeping it as is)
- r1 (rotation by 90° right)
- r2 (rotation by 180° right)
- r3 (rotation by 270° right)
- fv (vertical flip)
- fh (horizontal flip)
- fd (diagonal flip)
- fc (counter-diagonal flip)
[Figure caption: The elements of the symmetry group of the square (D4). The vertices are colored and numbered only to visualize the operations.]

The group operation is composition: b • a means "apply the symmetry b after performing the symmetry a" (the right-to-left notation stems from composition of functions). The group table on the right lists the results of all such compositions possible. For example, rotating by 270° right (r3) and then flipping horizontally (fh) is the same as performing a reflection along the diagonal (fd). Using the above symbols, highlighted in blue in the group table: fh • r3 = fd.

[Table caption: The elements id, r1, r2, and r3 form a subgroup, highlighted in red (upper left region). A left and right coset of this subgroup is highlighted in green (in the last row) and yellow (last column), respectively.]

Checking the group axioms for D4:
- Closure: for example, r3 • fh = fc, i.e. rotating 270° right after flipping horizontally equals flipping along the counter-diagonal (fc). Indeed every other combination of two symmetries still gives a symmetry, as can be checked using the group table.
- Associativity: (a • b) • c = a • (b • c) means that the composition of the three elements is independent of the priority of the operations, i.e. composing first a after b, and c to the result thereof, amounts to performing a after the composition of b and c. For example, (fd • fv) • r2 = fd • (fv • r2) can be checked using the group table:
  (fd • fv) • r2 = r3 • r2 = r1, which equals
  fd • (fv • r2) = fd • fh = r1.
- Identity: id • a = a, a • id = a.
- Inverses: for example, fh • fh = id, and r3 • r1 = r1 • r3 = id.

In contrast to the group of integers above, where the order of the operation is irrelevant, it does matter in D4: fh • r1 = fc but r1 • fh = fd. In other words, D4 is not abelian, which makes the group structure more difficult than the integers introduced first.

See main article: History of group theory. The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy.
Arthur Cayley's On the theory of groups, as depending on the symbolic equation θn = 1 (1854) gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss' number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer led early attempts to prove Fermat's Last Theorem to a climax by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) gave the first statement of the modern definition of an abstract group. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was pushed by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by pivotal work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, classified all finite simple groups in 1982. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory is still a highly active mathematical branch crucially impacting many other fields. See main article: Elementary group theory. Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of a • b • c = (a • b) • c = a • (b • c)generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. The axioms may be weakened to assert only the existence of a left identity and left inverses. Both can be shown to be actually two-sided, so the resulting definition is equivalent to the one given above. Two important consequences of the group axioms are the uniqueness of the identity element and the uniqueness of inverse elements. There can be only one identity element in a group, and each element in a group has exactly one inverse element. Thus, it is customary to speak of the identity, and the inverse of an element. To prove the uniqueness of an inverse element of a, suppose that a has two inverses, denoted l and r. 
Then

l = l • e         (as e is the identity element)
  = l • (a • r)   (because r is an inverse of a, so e = a • r)
  = (l • a) • r   (by associativity, which allows rearranging the parentheses)
  = e • r         (since l is an inverse of a, i.e. l • a = e)
  = r             (for e is the identity element)

Hence the two extremal terms l and r are connected by a chain of equalities, so they agree. In other words, there is only one inverse element of a.

In groups, it is possible to perform division: given elements a and b of the group G, there is exactly one solution x in G to the equation x • a = b. In fact, right multiplication of the equation by a⁻¹ gives the solution x = x • a • a⁻¹ = b • a⁻¹. Similarly there is exactly one solution y in G to the equation a • y = b, namely y = a⁻¹ • b. In general, x and y need not agree.

The following sections use mathematical symbols such as X = {x, y, z} to denote a set X containing elements x, y, and z, or alternatively x ∈ X to restate that x is an element of X. The notation f : X → Y means f is a function assigning to every element of X an element of Y.

See also: Glossary of group theory.

To understand groups beyond the level of mere symbolic manipulations as above, more structural concepts have to be employed. There is a conceptual principle underlying all of the following notions: to take advantage of the structure offered by groups (which for example sets—being "structureless"—don't have), constructions related to groups have to be compatible with the group operation. This compatibility manifests itself in the following notions in various ways. For example, groups can be related to each other via functions called group homomorphisms. By the mentioned principle, they are required to respect the group structures in a precise sense. The structure of groups can also be understood by breaking them into pieces called subgroups and quotient groups. The principle of "preserving structures"—a recurring topic in mathematics throughout—is an instance of working in a category, in this case the category of groups.

See main article: Group homomorphism. Group homomorphisms are functions that preserve group structure. A function a: G → H between two groups is a homomorphism if the equation a(g • k) = a(g) • a(k) holds for all elements g, k in G, i.e. the result is the same when performing the group operation after or before applying the map a. This requirement ensures that a(1G) = 1H, and also a(g)⁻¹ = a(g⁻¹) for all g in G. Thus a group homomorphism respects all the structure of G provided by the group axioms. Two groups G and H are called isomorphic if there exist group homomorphisms a: G → H and b: H → G, such that applying the two functions one after another (in each of the two possible orders) equals the identity function of G and H, respectively. That is, a(b(h)) = h and b(a(g)) = g for any g in G and h in H. From an abstract point of view, isomorphic groups carry the same information. For example, proving that g • g = 1 for some element g of G is equivalent to proving that a(g) • a(g) = 1, because applying a to the first equality yields the second, and applying b to the second gives back the first.

See main article: Subgroup. Informally, a subgroup is a group H contained within a bigger one, G. Concretely, the identity element of G is contained in H, and whenever h1 and h2 are in H, then so are h1 • h2 and h1⁻¹, so the elements of H, equipped with the group operation on G restricted to H, form indeed a group. In the example above, the identity and the rotations constitute a subgroup R =
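To make the D4 example and the axioms above concrete, the following Python sketch represents each symmetry of the square as a permutation of the four vertex indices and checks closure, associativity, identity, inverses, non-commutativity and the rotation subgroup by brute force. The labelling is my own (matching it to the article's fv/fh/fd/fc names would depend on the vertex-numbering convention), so treat it as an illustration rather than a reproduction of the group table discussed above.

# A minimal sketch (my own labelling, not the article's exact table) of the
# symmetry group of the square: each symmetry is a permutation of the four
# vertex indices 0..3, and the group operation b * a means "apply a, then b".
from itertools import product

def compose(b, a):
    """Composition b after a: (b * a)[i] = b[a[i]]."""
    return tuple(b[a[i]] for i in range(4))

identity = (0, 1, 2, 3)
r1 = (1, 2, 3, 0)                      # rotation by 90 degrees
rotations = [identity, r1, compose(r1, r1), compose(r1, compose(r1, r1))]
m = (1, 0, 3, 2)                       # one reflection (which axis depends on the vertex numbering)
reflections = [compose(m, r) for r in rotations]
D4 = rotations + reflections           # eight symmetries in total

def inverse(p):
    q = [0] * 4
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# The four group axioms, checked by brute force over all elements.
assert all(compose(a, b) in D4 for a, b in product(D4, D4))                        # closure
assert all(compose(compose(a, b), c) == compose(a, compose(b, c))
           for a, b, c in product(D4, D4, D4))                                     # associativity
assert all(compose(identity, a) == a == compose(a, identity) for a in D4)          # identity
assert all(compose(inverse(a), a) == identity == compose(a, inverse(a)) for a in D4)  # inverses

# The group is not abelian: some pairs do not commute (e.g. a rotation and a flip).
noncommuting = [(a, b) for a, b in product(D4, D4) if compose(a, b) != compose(b, a)]
print("number of non-commuting ordered pairs:", len(noncommuting))

# The four rotations are closed under composition, so they form a subgroup of order 4.
assert all(compose(a, b) in rotations for a, b in product(rotations, rotations))
print("rotations form a subgroup of order", len(rotations))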
http://everything.explained.at/Group_(mathematics)/
Send a Note to Zig   | Table of content Chapter 9 : Analysis of Trusses - 9.1 Definition of trusses - 9.2 Properties of 2-force members - 9.3 Method of Joints - 9.4 Method of Sections - 9.5 Compound Trusses - 9.6 Trusses in 3-D - 9.7 Summary - 9.8 Self-Test and computer Program TRUSS Trusses are structures consisting of two or more straight, slender members connected to each other at their endpoints. Trusses are often used to support roofs, bridges, power-line towers, and appear in many other applications. Here is a collection of various structures involving trusses I have come across. Object of our calculations is to determine the external support forces as well as the forces acting on each of the members for given external In order to make calculations possible a few assumptions are made which in most cases reflect reality sufficiently close so that our theoretical results match experimentally determined ones sufficiently accurate. These assumptions pertain to two- as well as three-dimensional trusses. The three assumptions (or maybe better called idealizations) If doubt arises that for a given design any of the three assumptions may not reflect reality accurately enough a more advanced analysis should be conducted. - Each joint consists of a single pin to which the respective members are connected individually. In reality we of course find that members are connected by a variety of means : bolted, welded, glued, rivited or they are joined by gusset plates. Here are some photos of real-life joints. - No member extends beyond a joint. In Fig. 9.1a the schematic of a 2-dimensional truss is shown. That truss consists of 9 members and 6 joints. There is a member from joint A to B, another from joint B to C, and a third from joint C to D. In reality we may have a single beam extended all the way from joint A to D, but if this beam is slender (long in comparison to a lenght representing the size of its cross section) it is permissible to think of this long beam being represented by individual members going just from joint to Fig. 9.1a :  Example of 2-D Truss - Support forces (R1 and R2) and external loads (P1 and P2) are only applied at joints. In reality this may not quite be the case. But if for example the weight of a member has to be taken into account we could represent that by two forces each equal to half the weight acting at either end point. In similar fashion one can assign snow loads on roofs to single forces acting at the joints. Click here for a glimpse at some commonly employed trusses. The three assumptions brought in the previous chapter render each individual member of a truss to be what is called a "2-force member", that is a member with only two points (usually the end-points) at which forces As an example let's look at the member CE extracted from Fig. 9.1a 2-force member plus forces as shown in Fig. 9.2a. I also show the joints C and E with the arrows representing the forces exerted by the connected members onto each joint. In red are entered the forces exerted by the member CE onto the two joints. Acting on member CE we have the two forces FCE and FEC, respectively, which by the principle of action=reaction, are exactly equal but oppositely directed to the (red) forces the member exerts on the two joints it is connected to. Member CE has to be in equilibrium and therefore : - In order for the sum of the moments about point C to be zero, the line of action of force FEC has to go through point C. 
- In order for the sum of the moments about point E to be zero, the line of action of force FCE has to go through point E. - In order for the sum of the forces in the direction of line CE to be zero the two forces, FCE and FEC, have to be equal but oppositely directed. Note that the three points mentioned above pertain equally to two- and 2-force member with forces Fig. 9.2b shows this in graphical form. The two forces acting on member CE either pull or push at either endpoint in opposite direction with equal strength. If they pull we say that the member is under tension, if they push, it is said to be under compression. For the case inbetween, when the forces at either endpoint are zero, we speak of a zero-force member. This distinction is of great importance and you never should forget to indicate tension, compression, and zero-force clearly for each member of a truss when asked to determine the forces. The reason for this distinction is a consequence of the different ways a particular member of a truss can fail. If a member is under tension the only failure mode occurs when the forces trying to pull so hard that somewhere along the beam adjacent molecules/atoms cannot hold onto each other any longer and separate. If a member is under compression two different types of failures can occur : if the member is somewhat short and stubby molecules/atoms will not be able to resists the external forces and the member will start to crumble or deform to a shorter piece of material. If on the other side the member is long and slender a phenomenon called buckling may set in way before "crumbling" occurs. The member simply does not want to stay straight anymore. To prevent buckling we often employ Nominally these members do not carry any load but they prevent a member under compression from buckling by providing lateral support. The Method of Joints makes use of the properties of 2-force members as derived in section 9.2 in an interesting way which I demonstrate using the sample truss from section 9.1. For two-dimensional this method results in a sequence of sets of two ( three) linear equations. Fig. 9.3a shows this truss again with its geometry given in terms of the angles alpha, beta, and gamma as well as the length a,b, and c. The members AB, BC, CD, and EF are parallel to each other. Method of joints Assume that the forces P1 and P2 are known I also entered the as of yet unknown support forces. Because at point A we have a roller-type connection the support force R1 has only a vertical component. At point D we have a pin/hole type connection which gives rise to a vertical as well as horizontal component for the support force R2. Furthermore, I entered all forces ( in purple ) the 2-force members exert on their respective joints. Remember that each member pulls/pushes with equal force on its two joints. In the figure I labelled these forces according to the labels of the joints involved and assumed that each member pulls on each joint. I have done this just for the purpose of easy book-keeping. For those members actually under compression the value for the respective force will then come out to be negative. (no need to go back into the drawing and change the direction of the arrow, everybody in the business will see the negative sign of the answer and look at your drawing and knows what's going on.) Principle of Method In the Method of Joints we consider now the equilibrium of each joint. 
- For a 2-dimensional truss as shown here that gives us two equations for each joint : sum of the forces in horizontal and sum of the forces in the vertical direction for example. In the above example we have 6 joints and therefore get a total of For a 3-D truss we have to satify 3 equilibrium equations for each joint. - As far as unknowns is concerned we have one unknown force for each of the 9 members and 3 unknown support forces for a total of 12 unknowns for our example. A truss (2-D or 3-D) is statically determined only if the number of unknown forces (one per member plus unkonowns stemming from the support forces) is equal to the number of available equations ( 2 (3) times the number of joints). 3 foot notes - If the number of unknown forces exceeds the number of available equations the truss is said to be statically undetermined, one needs more information (usually about the way individual members deform under influence of forces) to determine the forces. - If the number of unknown forces is less than the number of available equations the truss will collapse. - On first sight one is tempted to think that by considering the equilibrium of the entire truss more equations can be derived and hence the number of unknowns can be increased correspondingly. Unfortunately, as it turns out, these new equations are linearly dependent on the equilibrium equations on all joints and therefore are automatically satified once the equilibrium equations on all joints are satisfied. On the good side, this redundancy can be used to tests your calculations and/or to solve the system of equations faster. Feel free to test your abilities to write out such equilibrium equations and check against mine. For the truss shown in Fig. 9.3a I looked at the equilibrium of each joint individually, just click on the latter in the following list and compare my sketches and equations with yours : Solving the Equation System As an example I have summarized all 12 equations representing the equilibrium conditions on the joints of the truss shown in Fig. 9.1a. Click here for a closer look. Mathematicians would classify this system as a system of linear equations with constant coefficients ( the values of cos , sin of the various angles) in which the forces are the unknowns. To solve such equation systems various methods are available, many of them based on the Gauss-elimination method or various matrix methods. I have written such a program in a web-based format. ( Program Truss , 2-D version , 3-D version ). For many trusses, the example in Fig. 9.1a being no exception, it is possible to solve for the unknowns forces "manually" by considering the joints in a particular order which can be detected by inspection. Often it is necessary to involve also the equilibrium equations for the entire truss as shown here. The principle of this method is to find by inspection ( of Fig. 9.3a if you like to work along ) a joint which is acted upon by forces of which at most 2 forces (3 forces in 3-D) Solve the equlibrium equations for this joint and repeat. If you are lucky you can solve for all the unknown forces and then use the equlibrium equations of the entire truss to check up on your results. Quite often you will get "stuck" though (or even don't get started in the first place). 
Don't dispair, here are two tricks which might help you out and "deliver" a joint with only two unknown forces : If you employ one or both of the above tricks and then solve subsequently for the remaining unknown forces you will be left with at least one joint the equilibrium of which you do not need to consider. My recommendation : check the equilibrium of this final joint anyway with your previously obtained values of the forces. (Hey, that little bit of checking is better than a bridge collapsing). - Solve as many of the overall equilibrium equations as you can. - Find zero-force members. Click here if you want to find out how to do that (might save you later ?!). If you like, click here to see the order in which I would solve for the forces of the truss in Fig. 9.1a and read some more useful info. Well, does the manual version of the Method of Joints, including the two tricks, always work ? The answer is unfortunately NO, and here is an example. Problem 9.3a : 2-member truss Problem 9.3b : 4-member truss Problem 9.3c : 7-member truss Problem 9.3d : Roof-truss, Fink, snow load Problem 9.3e : Roof-truss, Howe, snow load One disadvantage of the Method of Joints when employed without the help of computer programs like Program TRUSS is its sequential nature. That is, in order to calculate forces based on the equilibrium equations on a particular joint we have to use results of preceeding calculations. Hence errors propagate and way too often get magnified in the process. In contrast to that, the Method of Sections aims at calculating the force of selected members directly and can therefore be used to check results obtained by the Method of Joints (my favorite usage). Additionally, in the absence of computer programs you find yourself sometimes in the position that you have to jump-start the Method of Principle of Method The Method of Joints was used to analyze the forces in a truss by looking at the equilibrium of its individual members ( discovering the properties of two-force members ) and individual joints (to find equations to be solved for the values of the forces individual members exert and the forces supporting the truss). In the Method of Sections we consider the equilibrium of a selected part of a truss consisting of any number of members and joints. Often this is done after the overall equilibrium equations have been solved. Here I describe the method as it applies to two-dimensional trusses which usually means that you will have to solve three equilibrium equations which still can be done "manually". For three-dimensional trusses this would result in six such equations. As example, assume that our task is to find the force in the member CE of the sample truss shown in Fig. 9.4a. Also, assume that the geometry of the truss, the external loads and support forces are known. Our strategy is now to "mentally" remove three members according to the following two rules : - One of the members is the one the force of which you wish to calculate. - The removal of the three members has to divide the truss into two separate sections. Often you will have several equivalent choices. For the truss in Fig. 9.4a there is only one, namely removal of the three members BC, CE, and EF. You also might think of these three members as pieces which hold the two sections together and exert onto them just enough forces ( again only in the direction of these members) to hold each section in equilibrium. In Fig. 9.b we see the two resulting parts in terms of their respective Free-Body-Diagrams. 
Each part is exposed to the external loads/support forces as well as the forces the three members exert onto it. Sample Truss, 3 members removed Solving now the equilibrium equations of either part ( the choice is yours ) you obtain the forces in the three removed members. In the above example we could look at the sum of the force in vertical direction on the left section of the truss : R1 - P1 - FCE cos( β ) = 0 If you happen to be interested in the force FBC the sum of the moments about point E (of either the left or the right part) would be just fine because it contains only FBC as unknown. And for FEF ? FOOTNOTE : In many text books you find instead of "removing three members" the phrases "cut three members" or "section the truss". The latter is probably the origin of the title "Method of Section". Problem 9.4a : Roof-truss, Fink, snow load Problem 9.4b : Roof-truss, Howe, snow load Problem 9.4c : Escalator Support Problem 9.4d : Stadium Roof, I Problem 9.4e : Stadium Roof, II Compound Trusses are trusses which one can divide into two or more sub-trusses. This might help in the determination of internal forces. Whether a truss is a compound truss depends very much on who is looking. Fig. 9.5a is an example of a compound truss. The members 12, 13, 23, 24, and 34 could be viewed as comprising one sub-truss, let's call this the sub-truss 1234. The other members making up a second sub-truss, called 4567. This division can help us in this case because each of the two sub-trusses is actually a 2-force member, that is each sub-truss has only two points at which forces are acting (joint 1 and 4 for the left sub-truss and joint 7 and 4 for the right sub-truss. I tried to convey this in Fig. 9.5b. For known load P and geometry we now can determine the forces FL and FR from the equilibrium equation on joint 4. Forces in Compound Truss After determining FL and FR by analyzing the equilibrium of joint 4 all external forces on the two sub-trusses are known and each sub-truss can be analyzed separately. The analysis of 3-dimensional trusses (extremely wide-spread in practice) is usually not content of an introductory course into statics although the underlying principles for their analysis are identical to that of We have the same restrictions on the location of the loads, joints are now of ball/socket type and support forces may have now 3,2, or only 1 unknown component depending on the type of support employed. All members are still 2-force members with the forces they exert on the joints at their two endpoints stil equal but oppositely directly and in line with the line connecting the two endpoints. Hence, these forces have now in general three components and we have three equlibrium equations A 3-dimensional truss is statically determined only if the number of unknown forces (one per member plus support forces) is equal to the number number of available equations ( 3 times the number of joints). Setting up the equilibrium equations and solving them is though an order of magnitude (at least) more tedious than for 2-dimensional trusses. Fig. 9.6a is a simple example where the truss consists of a single tetrahedron with vertices A, B, C, and D. A single load P (having x-, y-, and z-components) is applied at joint D. 3-D Truss, example The support forces are chosen such that the tetrahedron (think of it as a solid body) cannot move away nor rotate in any which way. In 3 dimensions this necessitates 6 components of support forces. 
At joint A I have specified a ball/socket connection ( 3 unknown components), at point C we have a roller-type connection (2 unknown components) and at point B single component in the We can solve for the unknown forces in the 6 members and the 6 components of support forces by applying the method of joints in the following order : - Joint D : 3 equations for the three forces in the members AD, BD, and CD. - Joint B : 3 equations for the single support force component and the forces in members AB and BC. - Joint C : 3 equations for the two support force components and the force in member AC. - Joint A : 3 equations for the three support force components. You then can use the overall equilibirum equations for a check-up. It is nearly impossible to do these calculations without vector notation. Some more examples of 3-dimensional trusses can be found as sample cases for a 3-D truss program. In this chapter we were concerned with the determination of support forces and forces internal members are exposed to. The structures we could investigate were called trusses which have the properties of : - Consisting only of 2-force members. - Loads and support forces act only on joints. Two principle methods are available to obtain the desired forces : - The Method of Joints which provides us with two ( three in 3-D cases) equations per joint leading to a system of linear equations for the unknown forces. If the truss is statically determined this system can always be solved by a computer program (like Program TRUSS) or in many cases by inspecting the truss as to the order in which these equations must be solved. Depending on the truss geometry this approach is not always possible but solving the overall equilibrium equations and/or looking at the truss as a compound truss might help. When solving for the forces without a computer the sequential nature of the Method of Joints is a disadvantage because errors made initially affect subsequent calculations. - The Method of Sections can also be used to "jump-start" the method of joints. It is very useful when the force of only a few internal members are to be determined. The principle is here to remove 3 members (in 2-dimensional cases) with one member being the one of which we wish to determine the forces. The removal of the 3 members has to divide the truss into two separate parts. The study of the equilibrium of either part yields the forces of the 3 removed members. The self-test is a multiple-choice test. It allows you to ascertain your knowledge of the definition of terms and your understanding of Click here to do the test. Computer Program TRUSS This program is based on the Method of Joints. The user specifies the geometry of the truss in terms of the location of all joints and how these are connected by members and then specifies given external forces and finally provides information concerning the support forces acting on the truss. For more information follow the links below : A warning in particular to my students. Usage of a computer program (except for special parameter studies of which we will do one or the other) does not teach you anything more than just how to use that particular program. The real juice lies in the understanding of the different methods employed and evaluating whether the obtained results make sense. Send a Note to Zig   | Table of content Zig Herzog, [email protected] Last revised: 08/21/09
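As a programming counterpart to the Method of Joints (in the spirit of Program TRUSS, though this is my own minimal sketch and not that program), the example below assembles the two equilibrium equations per joint for a small three-member truss of my own choosing — pin at A, roller at B, a 10 kN downward load at C — into one linear system and solves it with NumPy. Positive member forces mean tension, negative mean compression:

# A sketch (my own small example, not the chapter's Fig. 9.1a truss) of the
# Method of Joints written as one linear system: two equilibrium equations per
# joint, unknowns = one axial force per member plus the support reactions.
# Sign convention: a positive member force is tension (the member pulls on its joints).
import numpy as np

joints = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (2.0, 3.0)}
members = [("A", "B"), ("A", "C"), ("B", "C")]
loads = {"C": (0.0, -10.0)}              # 10 kN pulling joint C straight down
# Support reactions: pin at A (x and y components), roller at B (y only).
reactions = [("A", "x"), ("A", "y"), ("B", "y")]

names = list(joints)
rows = {(j, axis): i for i, (j, axis) in enumerate((j, a) for j in names for a in "xy")}
n_unknowns = len(members) + len(reactions)
M = np.zeros((2 * len(joints), n_unknowns))
rhs = np.zeros(2 * len(joints))

# Member-force columns: a tension force pulls each end joint toward the other one.
for col, (j, k) in enumerate(members):
    (xj, yj), (xk, yk) = joints[j], joints[k]
    L = np.hypot(xk - xj, yk - yj)
    ux, uy = (xk - xj) / L, (yk - yj) / L
    M[rows[(j, "x")], col] += ux
    M[rows[(j, "y")], col] += uy
    M[rows[(k, "x")], col] -= ux
    M[rows[(k, "y")], col] -= uy

# Support-reaction columns.
for col, (j, axis) in enumerate(reactions, start=len(members)):
    M[rows[(j, axis)], col] = 1.0

# External loads go to the right-hand side of the joint equilibrium equations.
for j, (px, py) in loads.items():
    rhs[rows[(j, "x")]] -= px
    rhs[rows[(j, "y")]] -= py

forces = np.linalg.solve(M, rhs)
for (j, k), F in zip(members, forces):
    state = "tension" if F > 0 else "compression"
    print(f"member {j}{k}: {F:+7.2f} kN ({state})")
for (j, axis), R in zip(reactions, forces[len(members):]):
    print(f"reaction {j}{axis}: {R:+7.2f} kN")

The statically-determinate count works out as the chapter describes: 3 joints give 6 equations, balancing 3 member forces plus 3 reaction components, so the matrix is square and solvable in one step.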
http://mac6.ma.psu.edu/em211/p09a.html
A Basic Introduction to the Science Underlying WHAT IS A GENOME? Life is specified by genomes. Every organism, including humans, has a genome that contains all of the biological information needed to build and maintain a living example of that organism. The biological information contained in a genome is encoded in its deoxyribonucleic acid (DNA) and is divided into discrete units called genes. Genes code for proteins that attach to the genome at the appropriate positions and switch on a series of reactions called gene expression. |In 1909, Danish botanist Wilhelm Johanssen coined the word gene for the hereditary unit found on a chromosome. Nearly 50 years earlier, Gregor Mendel had characterized hereditary units as factors— observable differences that were passed from parent to offspring. Today we know that a single gene consists of a unique sequence of DNA that provides the complete instructions to make a functional product, called a protein. Genes instruct each cell type— such as skin, brain, and liver—to make discrete sets of proteins at just the right times, and it is through this specificity that unique organisms arise. The Physical Structure of the Human Genome Inside each of our cells lies a nucleus, a membrane-bounded region that provides a sanctuary for genetic information. The nucleus contains long strands of DNA that encode this genetic information. A DNA chain is made up of four chemical bases: adenine (A) and guanine (G), which are called purines, and cytosine (C) and thymine (T), referred to as pyrimidines. Each base has a slightly different composition, or combination of oxygen, carbon, nitrogen, and hydrogen. In a DNA chain, every base is attached to a sugar molecule (deoxyribose) and a phosphate molecule, resulting in a nucleic acid or nucleotide. Individual nucleotides are linked through the phosphate group, and it is the precise order, or sequence, of nucleotides that determines the product made from that gene. Figure 1. The four DNA bases. Each DNA base is made up of the sugar 2'-deoxyribose linked to a phosphate group and one of the four bases depicted above: adenine (top left), cytosine (top right), guanine (bottom left), and thymine (bottom right). |A DNA chain, also called a strand, has a sense of direction, in which one end is chemically different than the other. The so-called 5' end terminates in a 5' phosphate group (-PO4); the 3' end terminates in a 3' hydroxyl group (-OH). This is important because DNA strands are always synthesized in the 5' to 3' direction. The DNA that constitutes a gene is a double-stranded molecule consisting of two chains running in opposite directions. The chemical nature of the bases in double-stranded DNA creates a slight twisting force that gives DNA its characteristic gently coiled structure, known as the double helix. The two strands are connected to each other by chemical pairing of each base on one strand to a specific partner on the other strand. Adenine (A) pairs with thymine (T), and guanine (G) pairs with cytosine (C). Thus, A-T and G-C base pairs are said to be complementary. This complementary base pairing is what makes DNA a suitable molecule for carrying our genetic information—one strand of DNA can act as a template to direct the synthesis of a complementary strand. In this way, the information in a DNA sequence is readily copied and passed on to the next generation Not all genetic information is found in nuclear DNA. Both plants and animals have an organelle—a "little organ" within the cell— called the mitochondrion. 
Each mitochondrion has its own set of genes. Plants also have a second organelle, the chloroplast, which also has its own DNA. Cells often have multiple mitochondria, particularly cells requiring lots of energy, such as active muscle cells. This is because mitochondria are responsible for converting the energy stored in macromolecules into a form usable by the cell, namely, the adenosine triphosphate (ATP) molecule. Thus, they are often referred to as the power generators of the cell. Unlike nuclear DNA (the DNA found within the nucleus of a cell), half of which comes from our mother and half from our father, mitochondrial DNA is only inherited from our mother. This is because mitochondria are only found in the female gametes or "eggs" of sexually reproducing animals, not in the male gamete, or sperm. Mitochondrial DNA also does not recombine; there is no shuffling of genes from one generation to the other, as there is with nuclear genes. |Large numbers of mitochondria are found in the tail of sperm, providing them with an engine that generates the energy needed for swimming toward the egg. However, when the sperm enters the egg during fertilization, the tail falls off, taking away the father's mitochondria. Why Is There a Separate Mitochondrial Genome? The energy-conversion process that takes place in the mitochondria takes place aerobically, in the presence of oxygen. Other energy conversion processes in the cell take place anaerobically, or without oxygen. The independent aerobic function of these organelles is thought to have evolved from bacteria that lived inside of other simple organisms in a mutually beneficial, or symbiotic, relationship, providing them with aerobic capacity. Through the process of evolution, these tiny organisms became incorporated into the cell, and their genetic systems and cellular functions became integrated to form a single functioning cellular unit. Because mitochondria have their own DNA, RNA, and ribosomes, this scenario is quite possible. This theory is also supported by the existence of a eukaryotic organism, called the amoeba, which lacks mitochondria. Therefore, amoeba must always have a symbiotic relationship with an aerobic bacterium. Why Study Mitochondria? There are many diseases caused by mutations in mitochondrial DNA (mtDNA). Because the mitochondria produce energy in cells, symptoms of mitochondrial diseases often involve degeneration or functional failure of tissue. For example, mtDNA mutations have been identified in some forms of diabetes, deafness, and certain inherited heart diseases. In addition, mutations in mtDNA are able to accumulate throughout an individual's lifetime. This is different from mutations in nuclear DNA, which has sophisticated repair mechanisms to limit the accumulation of mutations. Mitochondrial DNA mutations can also concentrate in the mitochondria of specific tissues. A variety of deadly diseases are attributable to a large number of accumulated mutations in mitochondria. There is even a theory, the Mitochondrial Theory of Aging, that suggests that accumulation of mutations in mitochondria contributes to, or drives, the aging process. These defects are associated with Parkinson's and Alzheimer's disease, although it is not known whether the defects actually cause or are a direct result of the diseases. However, evidence suggests that the mutations contribute to the progression of both diseases. 
In addition to the critical cellular energy-related functions, mitochondrial genes are useful to evolutionary biologists because of their maternal inheritance and high rate of mutation. By studying patterns of mutations, scientists are able to reconstruct patterns of migration and evolution within and between species. For example, mtDNA analysis has been used to trace the migration of people from Asia across the Bering Strait to North and South America. It has also been used to identify an ancient maternal lineage from which modern man evolved. |In addition to mRNA, DNA codes for other forms of RNA, including ribosomal RNAs (rRNAs), transfer RNAs (tRNAs), and small nuclear RNAs (snRNAs). rRNAs and tRNAs participate in protein assembly whereas snRNAs aid in a process called splicing —the process of editing of mRNA before it can be used as a template for protein synthesis. Just like DNA, ribonucleic acid (RNA) is a chain, or polymer, of nucleotides with the same 5' to 3' direction of its strands. However, the ribose sugar component of RNA is slightly different chemically than that of DNA. RNA has a 2' oxygen atom that is not present in DNA. Other fundamental structural differences exist. For example, uracil takes the place of the thymine nucleotide found in DNA, and RNA is, for the most part, a single-stranded molecule. DNA directs the synthesis of a variety of RNA molecules, each with a unique role in cellular function. For example, all genes that code for proteins are first made into an RNA strand in the nucleus called a messenger RNA (mRNA). The mRNA carries the information encoded in DNA out of the nucleus to the protein assembly machinery, called the ribosome, in the cytoplasm. The ribosome complex uses mRNA as a template to synthesize the exact protein coded for by the gene. |"DNA makes RNA, RNA makes protein, and proteins make us." Although DNA is the carrier of genetic information in a cell, proteins do the bulk of the work. Proteins are long chains containing as many as 20 different kinds of amino acids. Each cell contains thousands of different proteins: enzymes that make new molecules and catalyze nearly all chemical processes in cells; structural components that give cells their shape and help them move; hormones that transmit signals throughout the body; antibodies that recognize foreign molecules; and transport molecules that carry oxygen. The genetic code carried by DNA is what specifies the order and number of amino acids and, therefore, the shape and function of the protein. The "Central Dogma"—a fundamental principle of molecular biology—states that genetic information flows from DNA to RNA to protein. Ultimately, however, the genetic code resides in DNA because only DNA is passed from generation to generation. Yet, in the process of making a protein, the encoded information must be faithfully transmitted first to RNA then to protein. Transferring the code from DNA to RNA is a fairly straightforward process called transcription. Deciphering the code in the resulting mRNA is a little more complex. It first requires that the mRNA leave the nucleus and associate with a large complex of specialized RNAs and proteins that, collectively, are called the ribosome. Here the mRNA is translated into protein by decoding the mRNA sequence in blocks of three RNA bases, called codons, where each codon specifies a particular amino acid. 
In this way, the ribosomal complex builds a protein one amino acid at a time, with the order of amino acids determined precisely by the order of the codons in the mRNA. |In 1961, Marshall Nirenberg and Heinrich Matthaei correlated the first codon (UUU) with the amino acid phenylalanine. After that, it was not long before the genetic code for all 20 amino acids A given amino acid can have more than one codon. These redundant codons usually differ at the third position. For example, the amino acid serine is encoded by UCU, UCC, UCA, and/or UCG. This redundancy is key to accommodating mutations that occur naturally as DNA is replicated and new cells are produced. By allowing some of the random changes in DNA to have no effect on the ultimate protein sequence, a sort of genetic safety net is created. Some codons do not code for an amino acid at all but instruct the ribosome when to stop adding new amino acids. Table 1. RNA triplet codons and their corresponding amino | AAU Asparagine | AGU Serine A translation chart of the 64 RNA codons. The Core Gene Sequence: Introns and Exons Genes make up about 1 percent of the total DNA in our genome. In the human genome, the coding portions of a gene, called exons, are interrupted by intervening sequences, called introns. In addition, a eukaryotic gene does not code for a protein in one continuous stretch of DNA. Both exons and introns are "transcribed" into mRNA, but before it is transported to the ribosome, the primary mRNA transcript is edited. This editing process removes the introns, joins the exons together, and adds unique features to each end of the transcript to make a "mature" mRNA. One might then ask what the purpose of an intron is if it is spliced out after it is transcribed? It is still unclear what all the functions of introns are, but scientists believe that some serve as the site for recombination, the process by which progeny derive a combination of genes different from that of either parent, resulting in novel genes with new combinations of exons, the key to evolution. Figure 2. Recombination. Recombination involves pairing between complementary strands of two parental duplex DNAs (top and middle panel). This process creates a stretch of hybrid DNA (bottom panel) in which the single strand of one duplex is paired with its complement from the other duplex. Gene Prediction Using Computers When the complete mRNA sequence for a gene is known, computer programs are used to align the mRNA sequence with the appropriate region of the genomic DNA sequence. This provides a reliable indication of the beginning and end of the coding region for that gene. In the absence of a complete mRNA sequence, the boundaries can be estimated by ever-improving, but still inexact, gene prediction software. The problem is the lack of a single sequence pattern that indicates the beginning or end of a eukaryotic gene. Fortunately, the middle of a gene, referred to as the core gene sequence--has enough consistent features to allow more reliable predictions. From Genes to Proteins: Start to Finish We just discussed that the journey from DNA to mRNA to protein requires that a cell identify where a gene begins and ends. This must be done both during the transcription and the translation process. Transcription, the synthesis of an RNA copy from a sequence of DNA, is carried out by an enzyme called RNA polymerase. This molecule has the job of recognizing the DNA sequence where transcription is initiated, called the promoter site. 
In general, there are two "promoter" sequences upstream from the beginning of every gene. The location and base sequence of each promoter site vary for prokaryotes (bacteria) and eukaryotes (higher organisms), but they are both recognized by RNA polymerase, which can then grab hold of the sequence and drive the production of an mRNA.

Eukaryotic cells have three different RNA polymerases, each recognizing a different class of genes. RNA polymerase II is responsible for synthesis of mRNAs from protein-coding genes. This polymerase requires a sequence resembling TATAA, commonly referred to as the TATA box, which is found 25-30 nucleotides upstream of the beginning of the gene, referred to as the initiator sequence.

Transcription terminates when the polymerase stumbles upon a termination, or stop, signal. In eukaryotes, this process is not fully understood. Prokaryotes, however, tend to have a short region composed of G's and C's that is able to fold in on itself and form complementary base pairs, creating a stem in the new mRNA. This stem then causes the polymerase to trip and release the nascent, or newly formed, mRNA.

The beginning of translation, the process in which the genetic code carried by mRNA directs the synthesis of proteins from amino acids, differs slightly for prokaryotes and eukaryotes, although both processes always initiate at a codon for methionine. For prokaryotes, the ribosome recognizes and attaches at the sequence AGGAGGU on the mRNA, called the Shine-Dalgarno sequence, that appears just upstream from the methionine (AUG) codon. Curiously, eukaryotes lack this recognition sequence and simply initiate translation at the amino acid methionine, usually coded for by the bases AUG, but sometimes GUG. Translation is terminated for both prokaryotes and eukaryotes when the ribosome reaches one of the three stop codons.

Structural Genes, Junk DNA, and Regulatory Sequences

Over 98 percent of the genome is of unknown function. Although this DNA is often referred to as "junk" DNA, scientists are beginning to uncover the function of many of these intergenic sequences—the DNA found between genes.

Sequences that code for proteins are called structural genes. Although it is true that proteins are the major components of structural elements in a cell, proteins are also the real workhorses of the cell. They perform such functions as transporting nutrients into the cell; synthesizing new DNA, RNA, and protein molecules; and transmitting chemical signals from outside to inside the cell, as well as throughout the cell—both critical to the process of signal transduction.

A class of sequences called regulatory sequences makes up a numerically insignificant fraction of the genome but provides critical functions. For example, certain sequences indicate the beginning and end of genes, sites for initiating replication and recombination, or provide landing sites for proteins that turn genes on and off. Like structural genes, regulatory sequences are inherited; however, they are not commonly referred to as genes.

Other DNA Regions

Forty to forty-five percent of our genome is made up of short sequences that are repeated, sometimes hundreds of times. There are numerous forms of this "repetitive DNA", and a few have known functions, such as stabilizing the chromosome structure or inactivating one of the two X chromosomes in developing females, a process called X-inactivation.
The most highly repeated sequences found so far in mammals are called "satellite DNA" because their unusual composition allows them to be easily separated from other DNA. These sequences are associated with chromosome structure and are found at the centromeres (or centers) and telomeres (ends) of chromosomes. Although they do not play a role in the coding of proteins, they do play a significant role in chromosome structure, duplication, and cell division. The highly variable nature of these sequences makes them an excellent "marker" by which individuals can be identified based on their unique pattern of their satellite DNA. Figure 3. A chromosome. A chromosome is composed of a very long molecule of DNA and associated proteins that carry hereditary information. The centromere, shown at the center of this chromosome, is a specialized structure that appears during cell division and ensures the correct distribution of duplicated chromosomes to daughter cells. Telomeres are the structures that seal the end of a chromosome. Telomeres play a critical role in chromosome replication and maintenance by counteracting the tendency of the chromosome to otherwise shorten with each round of replication. Another class of non-coding DNA is the "pseudogene", so named because it is believed to be a remnant of a real gene that has suffered mutations and is no longer functional. Pseudogenes may have arisen through the duplication of a functional gene, followed by inactivation of one of the copies. Comparing the presence or absence of pseudogenes is one method used by evolutionary geneticists to group species and to determine relatedness. Thus, these sequences are thought to carry a record of our evolutionary history. How Many Genes Do Humans Have? In February 2001, two largely independent draft versions of the human genome were published. Both studies estimated that there are 30,000 to 40,000 genes in the human genome, roughly one-third the number of previous estimates. More recently scientists estimated that there are less than 30,000 human genes. However, we still have to make guesses at the actual number of genes, because not all of the human genome sequence is annotated and not all of the known sequence has been assigned a particular position in the genome. So, how do scientists estimate the number of genes in a genome? For the most part, they look for tell-tale signs of genes in a DNA sequence. These include: open reading frames, stretches of DNA, usually greater than 100 bases, that are not interrupted by a stop codon such as TAA, TAG or TGA; start codons such as ATG; specific sequences found at splice junctions, a location in the DNA sequence where RNA removes the non-coding areas to form a continuous gene transcript for translation into a protein; and gene regulatory sequences. This process is dependent on computer programs that search for these patterns in various sequence databases and then make predictions about the existence of a gene. From One Gene–One Protein to a More Global Perspective Only a small percentage of the 3 billion bases in the human genome becomes an expressed gene product. However, of the approximately 1 percent of our genome that is expressed, 40 percent is alternatively spliced to produce multiple proteins from a single gene. Alternative splicing refers to the cutting and pasting of the primary mRNA transcript into various combinations of mature mRNA. Therefore the one gene–one protein theory, originally framed as "one gene–one enzyme", does not precisely hold. 
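As a rough illustration of the open-reading-frame heuristic mentioned in the gene-counting discussion above, here is a toy Python sketch. It scans a single strand in three reading frames for an ATG start codon followed by an in-frame stop codon (TAA, TAG or TGA) and keeps only stretches above a minimum length. Real gene-prediction software also weighs splice junctions and regulatory sequences, so this is only a sketch; the demo sequence and the lowered length threshold are invented.

```python
# Toy open-reading-frame (ORF) scanner: looks for ATG ... stop-codon stretches
# longer than a minimum length, one of the tell-tale signs of a gene described above.
# It scans only the given strand and ignores introns, so it is illustrative only.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna: str, min_length: int = 100):
    orfs = []
    for frame in range(3):                          # three reading frames
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if codon == "ATG" and start is None:    # possible start of an ORF
                start = i
            elif codon in STOP_CODONS and start is not None:
                if i + 3 - start >= min_length:     # keep only long ORFs
                    orfs.append((start, i + 3))
                start = None
    return orfs

# Invented example sequence, with a lowered threshold for demonstration.
demo = "CCATGAAATTTGGGCCCAAATAGCC"
print(find_orfs(demo, min_length=12))   # [(2, 23)]
```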
With so much DNA in the genome, why restrict transcription to a tiny portion, and why make that tiny portion work overtime to produce many alternate transcripts? This process may have evolved as a way to limit the deleterious effects of mutations. Genetic mutations occur randomly, and the effect of a small number of mutations on a single gene may be minimal. However, an individual having many genes each with small changes could weaken the individual, and thus the species. On the other hand, if a single mutation affects several alternate transcripts at once, it is more likely that the effect will be devastating—the individual may not survive to contribute to the next generation. Thus, alternate transcripts from a single gene could reduce the chances that a mutated gene is transmitted. Gene Switching: Turning Genes On and Off The estimated number of genes for humans, less than 30,000, is not so different from the 25,300 known genes of Arabidopsis thaliana, commonly called mustard grass. Yet, we appear, at least at first glance, to be a far more complex organism. A person may wonder how this increased complexity is achieved. One answer lies in the regulatory system that turns genes on and off. This system also precisely controls the amount of a gene product that is produced and can further modify the product after it is made. This exquisite control requires multiple regulatory input points. One very efficient point occurs at transcription, such that an mRNA is produced only when a gene product is needed. Cells also regulate gene expression by post-transcriptional modification; by allowing only a subset of the mRNAs to go on to translation; or by restricting translation of specific mRNAs to only when the product is needed. At other levels, cells regulate gene expression through DNA folding, chemical modification of the nucleotide bases, and intricate "feedback mechanisms" in which some of the gene's own protein product directs the cell to cease further protein production. Promoters and Regulatory Sequences Transcription is the process whereby RNA is made from DNA. It is initiated when an enzyme, RNA polymerase, binds to a site on the DNA called a promoter sequence. In most cases, the polymerase is aided by a group of proteins called "transcription factors" that perform specialized functions, such as DNA sequence recognition and regulation of the polymerase's enzyme activity. Other regulatory sequences include activators, repressors, and enhancers. These sequences can be cis-acting (affecting genes that are adjacent to the sequence) or trans-acting (affecting expression of the gene from a distant site), even on another chromosome. Globin Genes: An Example of Transcriptional Regulation An example of transcriptional control occurs in the family of genes responsible for the production of globin. Globin is the protein that complexes with the iron-containing heme molecule to make hemoglobin. Hemoglobin transports oxygen to our tissues via red blood cells. In the adult, red blood cells do not contain DNA for making new globin; they are ready-made with all of the hemoglobin they will need. During the first few weeks of life, embryonic globin is expressed in the yolk sac of the egg. By week five of gestation, globin is expressed in early liver cells. By birth, red blood cells are being produced, and globin is expressed in the bone marrow. Yet, the globin found in the yolk is not produced from the same gene as is the globin found in the liver or bone marrow stem cells. 
In fact, at each stage of development, different globin genes are turned on and off through a process of transcriptional regulation called "switching". To further complicate matters, globin is made from two different protein chains: an alpha-like chain coded for on chromosome 16; and a beta-like chain coded for on chromosome 11. Each chromosome has the embryonic, fetal, and adult form lined up on the chromosome in a sequential order for developmental expression. The developmentally regulated transcription of globin is controlled by a number of cis-acting DNA sequences, and although there remains a lot to be learned about the interaction of these sequences, one known control sequence is an enhancer called the Locus Control Region (LCR). The LCR sits far upstream on the sequence and controls the alpha genes on chromosome 16. It may also interact with other factors to determine which alpha gene is turned on. Thalassemias are a group of diseases characterized by the absence or decreased production of normal globin, and thus hemoglobin, leading to decreased oxygen in the system. There are alpha and beta thalassemias, defined by the defective gene, and there are variations of each of these, depending on whether the embryonic, fetal, or adult forms are affected and/or expressed. Although there is no known cure for the thalassemias, there are medical treatments that have been developed based on our current understanding of both gene regulation and cell differentiation. Treatments include blood transfusions, iron chelators, and bone marrow transplants. With continuing research in the areas of gene regulation and cell differentiation, new and more effective treatments may soon be on the horizon, such as the advent of gene transfer therapies. The Influence of DNA Structure and Binding Domains Sequences that are important in regulating transcription do not necessarily code for transcription factors or other proteins. Transcription can also be regulated by subtle variations in DNA structure and by chemical changes in the bases to which transcription factors bind. As stated previously, the chemical properties of the four DNA bases differ slightly, providing each base with unique opportunities to chemically react with other molecules. One chemical modification of DNA, called methylation, involves the addition of a methyl group (-CH3). Methylation frequently occurs at cytosine residues that are preceded by guanine bases, oftentimes in the vicinity of promoter sequences. The methylation status of DNA often correlates with its functional activity, where inactive genes tend to be more heavily methylated. This is because the methyl group serves to inhibit transcription by attracting a protein that binds specifically to methylated DNA, thereby interfering with polymerase binding. Methylation also plays an important role in genomic imprinting, which occurs when both maternal and paternal alleles are present but only one allele is expressed while the other remains inactive. Another way to think of genomic imprinting is as "parent of origin differences" in the expression of inherited traits. Considerable intrigue surrounds the effects of DNA methylation, and many researchers are working to unlock the mystery behind this concept. Translation is the process whereby the genetic code carried by an mRNA directs the synthesis of proteins. Translational regulation occurs through the binding of specific molecules, called repressor proteins, to a sequence found on an RNA molecule. 
Repressor proteins prevent a gene from being expressed. As we have just discussed, the default state for a gene is that of being expressed via the recognition of its promoter by RNA polymerase. Close to the promoter region is another cis-acting site called the operator, the target for the repressor protein. When the repressor protein binds to the operator, RNA polymerase is prevented from initiating transcription, and gene expression is blocked.

Translational control plays a significant role in the process of embryonic development and cell differentiation. Upon fertilization, an egg cell begins to multiply to produce a ball of cells that are all the same. At some point, however, these cells begin to differentiate, or change into specific cell types. Some will become blood cells or kidney cells, whereas others may become nerve or brain cells. When all of the cells formed are alike, the same genes are turned on. However, once differentiation begins, various genes in different cells must become active to meet the needs of that cell type. In some organisms, the egg houses a store of immature mRNAs that become translationally active only after fertilization. Fertilization then serves to trigger mechanisms that initiate the efficient translation of mRNA into proteins. Similar mechanisms serve to activate mRNAs at other stages of development and differentiation, such as when specific protein products are needed.

Mechanisms of Genetic Variation and Heredity

Does Everyone Have the Same Genes?

When you look at the human species, you see evidence of a process called genetic variation, that is, there are immediately recognizable differences in human traits, such as hair and eye color, skin pigment, and height. Then there are the not so obvious genetic variations, such as blood type. These expressed, or phenotypic, traits are attributable to genotypic variation in a person's DNA sequence. When two individuals display different phenotypes of the same trait, they are said to have two different alleles for the same gene. This means that the gene's sequence is slightly different in the two individuals, and the gene is said to be polymorphic, "poly" meaning many and "morph" meaning shape or form. Therefore, although people generally have the same genes, the genes do not have exactly the same DNA sequence. These polymorphic sites influence gene expression and also serve as markers for genomic research.

The cell cycle is the process that a cell undergoes to replicate. Most genetic variation occurs during the phases of the cell cycle when DNA is duplicated. Mutations in the new DNA strand can manifest as base substitutions, such as when a single base gets replaced with another; deletions, where one or more bases are left out; or insertions, where one or more bases are added. Mutations can either be synonymous, in which the variation still results in a codon for the same amino acid, or non-synonymous, in which the variation results in a codon for a different amino acid. Mutations can also cause a frame shift, which occurs when the variation bumps the reference point for reading the genetic code down a base or two and results in loss of part, or sometimes all, of that gene product. DNA mutations can also be introduced by toxic chemicals and, particularly in skin cells, exposure to ultraviolet radiation.

The manner in which a cell replicates differs with the various classes of life forms, as well as with the end purpose of the cell replication.
Cells that compose tissues in multicellular organisms typically replicate by organized duplication and spatial separation of their cellular genetic material, a process called mitosis. Meiosis is the mode of cell replication for the formation of sperm and egg cells in plants, animals, and many other multicellular life forms. Meiosis differs significantly from mitosis in that the cellular progeny have their complement of genetic material reduced to half that of the parent cell.

Mutations that occur in somatic cells—any cell in the body except gametes and their precursors—will not be passed on to the next generation. This does not mean, however, that somatic cell mutations, sometimes called acquired mutations, are benign. For example, as your skin cells prepare to divide and produce new skin cells, errors may be inadvertently introduced when the DNA is duplicated, resulting in a daughter cell that contains the error. Although most defective cells die quickly, some can persist and may even become cancerous if the mutation affects the ability to regulate cell growth.

Mutations and the Next Generation

There are two places where mutations can be introduced and carried into the next generation. In the first stages of development, a sperm cell and egg cell fuse. They then begin to divide, giving rise to cells that differentiate into tissue-specific cell types. One early type of differentiated cell is the germ line cell, which may ultimately develop into mature gametes. If a mutation occurs in the developing germ line cell, it may persist until that individual reaches reproductive age. Now the mutation has the potential to be passed on to the next generation. Mutations may also be introduced during meiosis, the mode of cell replication for the formation of sperm and egg cells. In this case, the germ line cell is healthy, and the mutation is introduced during the actual process of gamete replication. Once again, the sperm or egg will contain the mutation, and during the reproductive process, this mutation may then be passed on to the offspring.

One should bear in mind that not all mutations are bad. Mutations also provide a species with the opportunity to adapt to new environments, as well as to protect a species from new pathogens. Mutations are what lie behind the popular saying of "survival of the fittest", the basic theory of evolution proposed by Charles Darwin in 1859. This theory proposes that as new environments arise, individuals carrying certain mutations that enable an evolutionary advantage will survive to pass this mutation on to their offspring. It does not suggest that a mutation is derived from the environment, but that survival in that environment is enhanced by a particular mutation.

Some genes, and even some organisms, have evolved to tolerate mutations better than others. For example, some viral genes are known to have high mutation rates. Mutations serve the virus well by enabling adaptive traits, such as changes in the outer protein coat so that it can escape detection and thereby destruction by the host's immune system. Viruses also produce certain enzymes that are necessary for infection of a host cell. A mutation within such an enzyme may result in a new form that still allows the virus to infect its host but that is no longer blocked by an anti-viral drug. This will allow the virus to propagate freely in its environment.
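Returning to the synonymous versus non-synonymous distinction described earlier, here is a small Python sketch that classifies a single-base substitution by checking whether the encoded amino acid changes. The codon table is only a tiny excerpt, and the example codons are chosen arbitrarily for illustration.

```python
# Classify a single-base substitution as synonymous (same amino acid) or
# non-synonymous (different amino acid), per the distinction described above.
# Only a small excerpt of the DNA codon table is included here for illustration.
DNA_CODONS = {
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser", "TCG": "Ser",
    "TTT": "Phe", "TTC": "Phe", "TTA": "Leu", "TTG": "Leu",
}

def classify_substitution(codon: str, position: int, new_base: str) -> str:
    mutant = codon[:position] + new_base + codon[position + 1:]
    before, after = DNA_CODONS.get(codon), DNA_CODONS.get(mutant)
    if before is None or after is None:
        return "codon not in this small table"
    return "synonymous" if before == after else "non-synonymous"

print(classify_substitution("TCT", 2, "C"))  # TCT -> TCC, still Ser: synonymous
print(classify_substitution("TCT", 1, "T"))  # TCT -> TTT, Ser -> Phe: non-synonymous
```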
Mendel's Laws—How We Inherit Our Genes

In 1866, Gregor Mendel studied the transmission of seven different pea traits by carefully test-crossing many distinct varieties of peas. Studying garden peas might seem trivial to those of us who live in a modern world of cloned sheep and gene transfer, but Mendel's simple approach led to fundamental insights into genetic inheritance, known today as Mendel's Laws. Mendel did not actually know or understand the cellular mechanisms that produced the results he observed. Nonetheless, he correctly surmised the behavior of traits and the mathematical predictions of their transmission, the independent segregation of alleles during gamete production, and the independent assortment of genes. Perhaps as amazing as Mendel's discoveries was the fact that his work was largely ignored by the scientific community for over 30 years!

Principles of Genetic Inheritance

Law of Segregation: Each of the two inherited factors (alleles) possessed by the parent will segregate and pass into separate gametes (eggs or sperm) during meiosis, which will each carry only one of the factors.

Law of Independent Assortment: In the gametes, alleles of one gene separate independently of those of another gene, and thus all possible combinations of alleles are equally likely.

Law of Dominance: Each trait is determined by two factors (alleles), inherited one from each parent. These factors each exhibit a characteristic dominant, co-dominant, or recessive expression, and those that are dominant will mask the expression of those that are recessive.

How Does Inheritance Work?

Our discussion here is restricted to sexually reproducing organisms where each gene in an individual is represented by two copies, called alleles—one on each chromosome pair. There may be more than two alleles, or variants, for a given gene in a population, but only two alleles can be found in an individual. Therefore, the probability that a particular allele will be inherited is 50:50, that is, alleles randomly and independently segregate into daughter cells, although there are some exceptions to this rule.

The term diploid describes a state in which a cell has two sets of homologous chromosomes, or two chromosomes that are the same. The maturation of germ line stem cells into gametes requires the diploid number of each chromosome be reduced by half. Hence, gametes are said to be haploid—having only a single set of homologous chromosomes. This reduction is accomplished through a process called meiosis, where one chromosome in a diploid pair is sent to each daughter gamete. Human gametes, therefore, contain 23 chromosomes, half the number of somatic cells—all the other cells of the body.

Because the chromosome in one pair separates independently of all other chromosomes, each new gamete has the potential for a totally new combination of chromosomes. In humans, the independent segregation of the 23 chromosome pairs can lead to more than 8 million (2^23) different combinations in one individual's gametes. Only one of these gametes will combine with one of the more than 8 million possible combinations from the other parent, generating a staggering potential for individual variation. Yet, this is just the beginning. Even more variation is possible when you consider the recombination between sections of chromosomes during meiosis as well as the random mutation that can occur during DNA replication. With such a range of possibilities, it is amazing that siblings look so much alike!
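The Law of Segregation can be illustrated with a short Python sketch of a monohybrid cross. The allele symbols and the heterozygous parents are hypothetical; the point is simply that pairing each gamete from one parent with each gamete from the other reproduces the familiar 1:2:1 genotype ratio.

```python
# Monohybrid cross illustrating Mendel's Law of Segregation: each parent
# contributes one of its two alleles to a gamete with equal probability.
from collections import Counter
from itertools import product

def cross(parent1: str, parent2: str) -> Counter:
    """Count offspring genotypes from two parents, e.g. 'Bb' x 'Bb'."""
    offspring = Counter()
    for a1, a2 in product(parent1, parent2):     # every gamete pairing
        genotype = "".join(sorted((a1, a2)))     # 'bB' and 'Bb' are the same genotype
        offspring[genotype] += 1
    return offspring

# Two heterozygous parents (hypothetical dominant allele 'B', recessive 'b'):
print(cross("Bb", "Bb"))   # Counter({'Bb': 2, 'BB': 1, 'bb': 1}) -> the classic 1:2:1 ratio
```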
Expression of Inherited Genes

Gene expression, as reflected in an organism's phenotype, is based on conditions specific for each copy of a gene. As we just discussed, for every human gene there are two copies, and for every gene there can be several variants or alleles. If both alleles are the same, the gene is said to be homozygous. If the alleles are different, they are said to be heterozygous. For some alleles, their influence on phenotype takes precedence over all other alleles. For others, expression depends on whether the gene appears in the homozygous or heterozygous state. Still other phenotypic traits are a combination of several alleles from several different genes.

Determining the allelic condition used to be accomplished solely through the analysis of pedigrees, much the way Mendel carried out his experiments on peas. However, this method can leave many questions unanswered, particularly for traits that are a result of the interaction between several different genes. Today, molecular genetic techniques exist that can assist researchers in tracking the transmission of traits by pinpointing the location of individual genes, identifying allelic variants, and identifying those traits that are caused by multiple genes.

Nature of Alleles

A dominant allele is an allele that is almost always expressed, even if only one copy is present. Dominant alleles express their phenotype even when paired with a different allele, that is, when heterozygous. In this case, the phenotype appears the same in both the heterozygous and homozygous states. Just how the dominant allele overshadows the other allele depends on the gene, but in some cases the dominant gene produces a gene product that the other allele does not. Well-known dominant alleles occur in the human genes for Huntington disease, a form of dwarfism called achondroplasia, and polydactylism (extra fingers and toes).

On the other hand, a recessive allele will be expressed only if there are two identical copies of that allele, or for a male, if one copy is present on the X chromosome. The phenotype of a recessive allele is only seen when both alleles are the same. When an individual has one dominant allele and one recessive allele, the trait is not expressed because it is overshadowed by the dominant allele. The individual is said to be a carrier for that trait. Examples of recessive disorders in humans include sickle cell anemia and Tay-Sachs disease, among others.

A particularly important category of genetic linkage has to do with the X and Y sex chromosomes. These chromosomes not only carry the genes that determine male and female traits, but also those for some other characteristics as well. Genes that are carried by either sex chromosome are said to be sex linked. Men normally have an X and a Y combination of sex chromosomes, whereas women have two X's. Because only men inherit Y chromosomes, they are the only ones to inherit Y-linked traits. Both men and women can have X-linked traits because both inherit X chromosomes.

X-linked traits not related to feminine body characteristics are primarily expressed in the phenotype of men. This is because men have only one X chromosome. Consequently, genes on that chromosome that do not code for gender are expressed in the male phenotype, even if they are recessive. In women, a recessive allele on one X chromosome is often masked in their phenotype by a dominant normal allele on the other. This explains why women are frequently carriers of X-linked traits but more rarely have them expressed in their own phenotypes.
In humans, at least 320 genes are X-linked. These include the genes for hemophilia, red–green color blindness, and congenital night blindness. There are at least a dozen Y-linked genes, in addition to those that code for masculine physical traits.

It is now known that one of the X chromosomes in the cells of human females is completely, or mostly, inactivated early in embryonic life. This is a normal self-preservation action to prevent a potentially harmful double dose of genes. Recent research points to the "Xist" gene on the X chromosome as being responsible for a sequence of events that silences one of the X chromosomes in women. The inactivated X chromosomes become highly compacted structures known as Barr bodies. The presence of Barr bodies has been used at international sport competitions as a test to determine whether an athlete is a male or a female.

Exceptions to Mendel's Laws

There are many examples of inheritance that appear to be exceptions to Mendel's laws. Usually, they turn out to represent complex interactions among various allelic conditions. For example, co-dominant alleles both contribute to a phenotype. Neither is dominant over the other. Control of the human blood group system provides a good example of co-dominant alleles.

Four Basic Blood Types

There are four basic blood types, and they are O, A, B, and AB. We know that our blood type is determined by the "alleles" that we inherit from our parents. For the blood type gene, there are three basic blood type alleles: A, B, and O. We all have two alleles, one inherited from each parent. The possible combinations of the three alleles are OO, AO, BO, AB, AA, and BB. Blood types A and B are "co-dominant" alleles, whereas O is "recessive". A co-dominant allele is apparent even if only one is present; a recessive allele is apparent only if two recessive alleles are present. Because blood type O is recessive, it is not apparent if the person inherits an A or B allele along with it. So, the possible allele combinations result in a particular blood type in this way:

OO = blood type O
AO = blood type A
BO = blood type B
AB = blood type AB
AA = blood type A
BB = blood type B

You can see that a person with blood type B may have a B and an O allele, or they may have two B alleles. If both parents are blood type B and both have a B and a recessive O, then their children will either be BB, BO, or OO. If the child is BB or BO, they have blood type B. If the child is OO, he or she will have blood type O. (A short scripted version of these combinations appears at the end of this section.)

Pleiotropism, or pleiotropy, refers to the phenomenon in which a single gene is responsible for producing multiple, distinct, and apparently unrelated phenotypic traits, that is, an individual can exhibit many different phenotypic outcomes. This is because the gene product is active in many places in the body. An example is Marfan's syndrome, where there is a defect in the gene coding for a connective tissue protein. Individuals with Marfan's syndrome exhibit abnormalities in their eyes, skeletal system, and cardiovascular system.

Some genes mask the expression of other genes just as a fully dominant allele masks the expression of its recessive counterpart. A gene that masks the phenotypic effect of another gene is called an epistatic gene; the gene it subordinates is the hypostatic gene. The gene for albinism in humans is an epistatic gene. It is not part of the interacting skin-color genes.
Rather, its dominant allele is necessary for the development of any skin pigment, and its recessive homozygous state results in the albino condition, regardless of how many other pigment genes may be present. Because of the effects of an epistatic gene, some individuals who inherit the dominant, disease-causing gene show only partial symptoms of the disease. Some, in fact, may show no expression of the disease-causing gene, a condition referred to as nonpenetrance. The individual in whom such a nonpenetrant mutant gene exists will be phenotypically normal but still capable of passing the deleterious gene on to offspring, who may exhibit the full-blown disease.

Then we have traits that are multigenic, that is, they result from the expression of several different genes. This is true for human eye color, in which at least three different genes are responsible for determining eye color. A brown/blue gene and a central brown gene are both found on chromosome 15, whereas a green/blue gene is found on chromosome 19. The interaction between these genes is not well understood. It is speculated that there may be other genes that control other factors, such as the amount of pigment deposited in the iris. This multigenic system explains why two blue-eyed individuals can have a brown-eyed child.

Speaking of eye color, have you ever seen someone with one green eye and one brown eye? In this case, somatic mosaicism may be the culprit. This is probably easier to describe than explain. In multicellular organisms, every cell in the adult is ultimately derived from the single-cell fertilized egg. Therefore, every cell in the adult normally carries the same genetic information. However, what would happen if a mutation occurred in only one cell at the two-cell stage of development? Then the adult would be composed of two types of cells: cells with the mutation and cells without. If a mutation affecting melanin production occurred in one of the cells in the cell lineage of one eye but not the other, then the eyes would have different genetic potential for melanin synthesis. This could produce eyes of two different colors.

Penetrance refers to the degree to which a particular allele is expressed in a population phenotype. If every individual carrying a dominant mutant gene demonstrates the mutant phenotype, the gene is said to show complete penetrance.

Molecular Genetics: The Study of Heredity, Genes, and DNA

As we have just learned, DNA provides a blueprint that directs all cellular activities and specifies the developmental plan of multicellular organisms. Therefore, an understanding of DNA, gene structure, and function is fundamental for an appreciation of the molecular biology of the cell. Yet, it is important to recognize that progress in any scientific field depends on the availability of experimental tools that allow researchers to make new scientific observations and conduct novel experiments. The last section of the genetic primer concludes with a discussion of some of the laboratory tools and technologies that allow researchers to study cells and their DNA.
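As noted in the blood-type discussion above, here is a small Python sketch of the ABO rules spelled out there: A and B are co-dominant, O is recessive. The cross at the end reproduces the BO x BO example from the text; this is only an illustration of the logic, not a clinical tool.

```python
# ABO blood types from two inherited alleles: A and B are co-dominant, O is recessive.
from itertools import product

def blood_type(allele1: str, allele2: str) -> str:
    alleles = {allele1, allele2}
    if alleles == {"A", "B"}:
        return "AB"                     # co-dominance: both are expressed
    if "A" in alleles:
        return "A"                      # A masks the recessive O
    if "B" in alleles:
        return "B"                      # B masks the recessive O
    return "O"                          # only two O alleles give type O

# The BO x BO cross walked through in the text:
for child in product("BO", "BO"):       # each parent passes on either B or O
    print("".join(child), "->", blood_type(*child))
# BB -> B, BO -> B, OB -> B, OO -> O
```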
http://www.ncbi.nlm.nih.gov/About/primer/genetics_genome.html
This tutorial continues with the discussion of what components make a computer network work. TCP/IP is likely the most commonly used pair of protocols in the world and will be the focus of this tutorial.

Transmission Control Protocol (TCP) is software that resides on the computer; when the program you are running needs to send data over the network, it hands that data over to TCP. TCP is used throughout many Internet applications including browsers, e-mail, File Transfer Protocol (FTP) transfers and even some streaming media applications. TCP is used to guarantee data delivery. The receiving computer's TCP will notify the sender's TCP that each packet was received correctly, and if the sender does not get a confirmation that the packet was received, then another packet is sent replacing the lost one. A timer is also used in case the sent packet goes astray. Because both the Ethernet and the Internet are two-way data channels, no forward error correction is used, because each packet of data can be acknowledged.

For example, when a file is to be transmitted via FTP, the entire file, which could be any size from a few kilobytes to several gigabytes, is handed off to TCP, where it divides the file into segments that in turn are the payload of the IP packets. TCP also includes the port numbers for the packet header. Port numbers tell a computer what sort of service this data is associated with, e.g., e-mail, FTP, HTTP or Network Time Protocol. Ports allow a program to see only data intended for it.

The Internet Protocol (IP) part encapsulates these segments into an IP packet and adds a header with the destination and source IP address as well as other information about the packet. This IP packet is then encapsulated into an Ethernet frame for transport over the local Ethernet network. This Ethernet frame includes the MAC address (see below) of both the source and destination. Then this Ethernet frame is sent to the network interface card, and the data is sent over Cat 5 cabling. (See Figure 1.)

Each device that plugs into a network must have a unique Media Access Control (MAC) address. These are 48-bit numbers that identify each network port; if a device has 16, 24 or 48 network ports, it has as many MAC addresses. Many network protocols use the MAC address to direct IP traffic. When a computer is first plugged into the local network switch, its MAC address is not known to any other devices on the network. When this new computer attempts to send data to another computer using its IP address, it must broadcast a request for the MAC address of the computer with that IP address. Within this request is the IP and MAC address of the new computer so the responding device will know how to reach it. The network switch (which only works with MAC addresses) will broadcast this request to all of its ports. When the request reaches the computer with the correct IP address, it responds back to the originating computer with its MAC address. The network switch also sees this exchange and adds the MAC addresses for both computers to its ARP cache (see below). At this point, the new computer is finally able to send the data.

This exchange is how both computers and the network switch are able to know how to address and direct the Ethernet frame through the network to its destination. Each time data is sent, all this information must be known. To speed things up and keep the amount of network traffic down, each device remembers this information the first time it happens so it can use it the next time.
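To make the TCP and port-number discussion concrete, here is a minimal Python sketch that opens a TCP connection to a web server on port 80 and sends a bare-bones HTTP request. The host name is just a placeholder; the operating system's TCP/IP stack handles the segmenting, acknowledgments and retransmissions described above.

```python
# Minimal TCP client: the application hands bytes to TCP, which guarantees
# delivery (acknowledgments, retransmission) as described in the text.
import socket

HOST, PORT = "example.com", 80          # port 80 = the well-known HTTP port

with socket.create_connection((HOST, PORT), timeout=5) as conn:
    # A minimal HTTP/1.0 request; TCP splits these bytes into segments for us.
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = conn.recv(1024)             # first chunk of the server's response

print(reply.decode(errors="replace").splitlines()[0])   # e.g. 'HTTP/1.0 200 OK'
```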
Network switches are used to connect the various computers on a network. Switches direct data via a computer’s MAC address. When data is sent out from a computer to the network switch, it carries with it the IP and MAC address of the sender and the IP and MAC address of the intended recipient. The network switch looks at the MAC address and directs the data to the correct port of the switch. It does this by keeping a record of which port is connected to which MAC-addressed equipment. This record is kept in a content addressable memory (CAM) table within the switch, which operates at very high speeds and expedites the movement of Ethernet frames through the switch. The network switch monitors all traffic flowing through it, looking for MAC addresses and keeping a record of them. When a computer needs to send data to another computer, it must first know its IP address and then its MAC address. If it does not already know the MAC address, it sends out an Address Resolution Protocol (ARP) request over the network to all devices asking for the MAC address of the owner of this particular IP address. When the computer with that IP address responds, this data is stored on the requesting computer’s ARP cache so it can use it again. The IP and MAC addresses of the requesting computer are also stored on all the computers on the network that received the request. The MAC addresses for both computers, requesting and responding, are stored on the network switch as well. Getting an IP address Each device on a LAN must have a unique IP address, and there are only two ways to get one. The first is to have it assigned manually when the TCP/IP of the computer is first set up. The TCP/IP control panel of most computers requires the following: IP address, subnet mask, default gateway or router and primary and secondary DNS. IP addresses and subnet masks were covered in previous tutorials. The default gateway is the IP address of the local router, which enables the computer to know where to send IP packets with IP addresses that fall outside the range of the subnet mask. An easier way is to use Dynamic Host Configuration Protocol (DHCP). This allows computers on a network to be automatically assigned all the information needed to communicate over the network. DHCP works when a new computer sends out a broadcast over the network looking for a DHCP server. When the DHCP server responds, it sends all the IP configuration information the new computer needs. The exchange ends when the computer accepts the IP information. DHCP IP addresses are leased to the requesting computer for a certain time frame, which can be hours or days. DHCP servers are allocated a range of IP address within the subnet of the network. DHCP servers can be computers or even routers, but there can only be one DHCP server on any subnet or network. IP addresses can be static (manually assigned) or dynamic (using DHCP). For most computers, a dynamic IP address is easiest, while for other devices such as servers, routers and printers, a static IP address makes it easier for other computers to find these frequently accessed devices. Domain Name Servers (DNS) are computers that reside on wide area networks that translate human-readable addresses into IP addresses. For example, the URL is the human-readable address such as www.broadcastengineering.com. These names are much easier to remember than a string of numbers for an IP address, but an IP address is required to locate the sites Web servers on the Internet. 
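Before moving on to DNS, the CAM-table behavior described above can be modeled with a toy Python class: the switch learns which MAC address is reachable on which port by watching source addresses, forwards frames to a known port, and floods frames whose destination it has not learned yet. The MAC addresses and port numbers are invented for illustration.

```python
# Toy learning switch: learn source-MAC -> port from each frame (the CAM table),
# then forward to a known port or flood to all other ports when the destination is unknown.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.cam = {}                              # MAC address -> port number

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> list:
        self.cam[src_mac] = in_port                # learn/refresh the sender's port
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]             # known destination: one port
        return [p for p in range(self.num_ports) if p != in_port]   # flood

switch = LearningSwitch(num_ports=4)
print(switch.handle_frame("aa:aa", "bb:bb", in_port=0))  # bb:bb unknown -> flood [1, 2, 3]
print(switch.handle_frame("bb:bb", "aa:aa", in_port=2))  # aa:aa learned  -> [0]
print(switch.handle_frame("aa:aa", "bb:bb", in_port=0))  # now known      -> [2]
```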
When you type in a name like www.broadcastengineering.com, your Internet browser knows to send a request for the IP address of BroadcastEngineering.com. TCP/IP uses the DNS server's IP address, from the computer's IP configuration, to send the request. The DNS computer looks up the name BroadcastEngineering.com and returns the IP address associated with it. The computer now has the correct IP address and can send messages to BroadcastEngineering.com asking for its main Web page. This happens each time you type a new URL into your browser.

The request described in the earlier example was an ARP request, which is the method devices use to find out another device's MAC address when only its IP address is known. As a device receives an ARP request or an answer to its own ARP request, it builds an ARP cache, a memory of the IP addresses and their associated MAC addresses. Network switches monitor all traffic and build their own CAM table, so they know which MAC-addressed device is attached to which of their ports. Because network devices can be swapped out, and to keep the cache from becoming outdated, ARP caches are cleared on a regular basis and new ARP requests must be sent out to obtain the MAC address again. Remember, on an Ethernet network each IP packet sent has to have the IP and MAC address of its destination, and one data transfer may require anywhere from one to thousands of packets to send all the data. Once the first packet is addressed correctly, all the subsequent ones are addressed similarly using the ARP cache.

Here are several tests you can try on your computer to actually see some of the things covered in the last few newsletters. Checking the ARP cache and the IP configuration of your computers can be a valuable troubleshooting tool. Checking both is quite easy on a PC or Macintosh computer.

- ARP on the PC: Under the Start menu, select Run. A new window will open; type in "CMD" then hit enter. Type "ARP -A" with a space between "ARP" and "-A." You will see a list of all known IP addresses with their associated MAC address. (See Figure 2.) (Note: This only works properly on a wired network.)

- IP configuration on the PC: Under the Start menu, select Run. A new window will open; type in "CMD" then hit enter. Type "ipconfig." Now you will see the IP configuration of this computer with IP address, subnet mask, default gateway, primary DNS and secondary DNS. (See Figure 3.)

- ARP on the Macintosh: Start the Terminal program from the Utilities folder in Applications. Type "ARP -A" and you will see a list of all known IP addresses with their associated MAC addresses. (See Figure 4.)

- IP configuration on the Macintosh: Start the Network Utility program from the Utilities folder in Applications. From the bar at the top of the window, choose Info and you will see the present configuration of all the network interfaces on the computer including IP and MAC address.

Pinging is a simple test you can perform to find out if a particular IP address is in use on a network. When you ping an IP address, you are asking the device that uses it to respond; this lets you know it is on the network and working. Some devices can be programmed to not respond to a ping, but this is uncommon.

- Ping on a PC: Under the Start menu select Run. A new window will open; type in "CMD" and hit enter. Type "ping ###.###.##.###." Replace the # with the IP address you are looking for. Try an IP address from the ARP command above. (See Figure 5.)

- Ping on a Macintosh: Start the Network Utility program from the Utilities folder in Applications.
From the bar at the top of the window, choose Ping and type in the IP address and hit start. (See Figure 6.) The next “Transition to Digital” tutorial will explore troubleshooting a real-world computer network problem.
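As a follow-up to the DNS and ping walk-throughs above, here is a short Python sketch that performs the same two checks from a script: resolve a host name to an IP address, then test whether a TCP connection succeeds. The host name and port are placeholders, and note that this checks TCP reachability on one port rather than sending true ICMP echo (ping) packets, which normally require elevated privileges.

```python
# Resolve a name to an IP (a DNS lookup) and test reachability with a TCP connect.
import socket

def lookup_and_check(hostname: str, port: int = 80) -> None:
    ip = socket.gethostbyname(hostname)            # ask the configured DNS server
    print(f"{hostname} resolves to {ip}")
    try:
        with socket.create_connection((ip, port), timeout=3):
            print(f"TCP connection to {ip}:{port} succeeded")
    except OSError as err:
        print(f"Could not reach {ip}:{port} -> {err}")

lookup_and_check("example.com")    # placeholder host name
```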
http://broadcastengineering.com/infrastructure/computer-networks-part-iii-0716
Formal logic is the classical or traditional system of determining the validity or invalidity of a conclusion (inference) deduced from two or more statements (premises). Based on the theory of syllogism of the Greek philosopher Aristotle (384-322 BC), systematized in his book 'Organon,' its focus is not on what is stated (the content) but on the structure (form) of the argument and the validity of the inference drawn from the premises of the argument: if the premises are true, then the inference (also called the logical consequence) must also be true. The basic principles of formal logic are (1) the principle of identity: if a statement is true, then it is true; (2) the principle of excluded middle: a statement is either true or false; and (3) the principle of contradiction: no statement can be both true and false at the same time. Also called Aristotelian logic.

An inference rule is a method of deriving conclusions from premises. When inference rules for a formal language are codified, it becomes a formal logic. For a language to become a logic it must be capable of expressing propositions, for which purpose there must be an appropriate syntactic category, e.g. sentences, or formulae. Examples of languages which fail to be suitable for logics for lack of such a category are the lambda calculus and most programming languages, which are designed for defining algorithms or functions, not for making or proving assertions. In the case of a typed language a distinguished type may suffice, a type of propositions or truth values, e.g. BOOL in HOL. This technique was used by Alonzo Church to make a logic (his simple theory of types) from the typed lambda calculus.

To define the logic it is next necessary to determine the axioms of the logic. The axioms are those sentences which are to be considered true without proof. Next the inference rules must be defined. Inference rules determine how new theorems can be derived from one or more previously proven theorems. It is normal that the definitions of the axioms and inference rules be effective in the sense that a computer could be used to check what is an axiom or to check the correctness of an application of an inference rule.

The logic is consistent (in the sense of Emil Post) if there are some sentences of the language which are not provable from the axioms using the inference rules. If the language has a semantics, this will usually determine a subset of the sentences of the language which are "true"; these are the sentences which should be provable in the logic. If all the sentences which can be proven using the logic are true, then the logic is sound. If all the true sentences are provable, then the logic is complete.
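To illustrate the idea that a valid inference never leads from true premises to a false conclusion, here is a small Python sketch that checks a propositional argument by brute-force truth tables. The modus ponens example and the invalid "affirming the consequent" example are standard; the sketch covers only propositional logic, not Aristotle's syllogistic or predicate logic.

```python
# Brute-force validity check for propositional arguments: an argument is valid
# when no assignment of truth values makes every premise true and the conclusion false.
from itertools import product

def is_valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False                 # found a counterexample
    return True

# Modus ponens: from "P implies Q" and "P", infer "Q".
premises = [lambda e: (not e["P"]) or e["Q"],   # P -> Q, written as not-P or Q
            lambda e: e["P"]]
conclusion = lambda e: e["Q"]
print(is_valid(premises, conclusion, ["P", "Q"]))         # True: the inference is valid

# Affirming the consequent: from "P implies Q" and "Q", infer "P" (invalid).
premises2 = [lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]]
print(is_valid(premises2, lambda e: e["P"], ["P", "Q"]))  # False
```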
http://www.absoluteastronomy.com/topics/Formal_logic
Mirages are generally images of terrestrial objects, so we think of terrestrial refraction as being responsible for them. On the other hand, the ray bending involved in terrestrial refraction is just a part of the whole astronomical refraction — the part produced by the lowest layers of the atmosphere. So we'd like to see how these phenomena fit together. In addition, there's a curious difference between the two refractions, with respect to their magnifications. In astronomical refraction, the magnification at the horizon, which explains the flattening of the setting Sun, depends mainly on the lapse rate below the observer. But in terrestrial refraction, a constant lapse rate is usually considered to produce only a displacement, not a change in the angular size of objects — a result first obtained in 1759 by Lambert. So the lapse rate affects the apparent size of astronomical objects, but not the apparent size of terrestrial ones. But if the terrestrial refraction is just part of the astronomical refraction, shouldn't the two behave similarly? People often compare the action of the refracting atmosphere to that of a prism. A prism that produces deviation comparable to the horizontal refraction (about half a degree) is just a thin wedge of glass, so it hardly produces any detectable distortion. That makes the prism behave like the terrestrial refraction, not the astronomical refraction; that's why I don't use this comparison in discussing astronomical refraction. If, instead of using the prism analogy, you think of atmospheric refraction in terms of bending, you can see that the amount of bending is proportional to the distance the ray travels through the dense lower atmosphere. In particular, if you know that Wegener's principle makes the contribution to astronomical refraction from air above eye level almost constant near the horizon, you can see that the increase in astronomical refraction below the astronomical horizon — and hence, its rate of change with apparent altitude, which is the astronomical magnification — depends entirely on the increasing path length in the air below eye level with increasing depression below the astronomical horizon. (This relation between path length and total bending underlies Laplace's extinction theorem.) But, when we think of terrestrial refraction, we're usually dealing with an object (like a lighthouse, or a distant mountain) whose extent along the line of sight is much smaller than its height. In effect, we're thinking in terms of a vertical target. Now, the distance from the eye to all parts of a vertical object is nearly the same for all parts of the object. That means that the angular bending of the rays is nearly the same for both the top and bottom of the object. And that means that the object is displaced vertically by terrestrial refraction, but not distorted. In other words, the prism analogy works for terrestrial refraction, even though it doesn't work for astronomical refraction. In fact, this argument was used by Lambert to argue that mirages are impossible! He claimed that the constant density gradient of the lower atmosphere could only displace, but not distort, the images of distant terrestrial objects. Of course, Lambert assumed a fixed density gradient. And this just produces a fixed bending, which produces (nearly) the same refractive displacement for all parts of a terrestrial object, because they're all at essentially the same distance from the observer. 
If, instead, we assume a density gradient that varies with height, we'll have a vertical displacement that varies with height, and consequently a distorted image. If the size varies, the magnification differs from unity. And if the lapse rate varies enough, we can even get negative magnification — i.e., the inverted image of a mirage. So one feature that makes terrestrial refraction behave differently from astronomical refraction is just the amount of bending atmosphere in the line of sight. Remember Laplace's theorem: near the horizon, the astronomical refraction is proportional to the air mass in the line of sight. For terrestrial objects, the amount of air we look through is nearly the same for all parts of the object, and so is the refraction. So, how does the terrestrial refraction depend on atmospheric structure? A constant lapse rate (i.e., a linear temperature profile) produces a constant angular displacement for all parts of a vertical target. A parabolic temperature profile produces a constant magnification of that target. (As Biot showed, this can already be enough to produce the inverted image of a mirage.) And a more complicated temperature profile produces more complicated distortions. A constant lapse rate corresponds to a constant terrestrial refraction coefficient, which is the ratio k of curvatures of the ray and the Earth. Surveyors use this to correct their observed angles for refraction. And, of course, their correction is proportional to the distance of the object — i.e., to the amount of air in the line of sight. But just a minute. If the terrestrial refraction is part of the astronomical refraction, shouldn't we be able to use the surveyor's refraction coefficient to correct the astronomical refraction near the horizon as well? Indeed, shouldn't such a correction be required? This notion has already occurred to several people. For example, Hervé Faye suggested in 1854 that astronomical refraction could be improved near the horizon by making use of the terrestrial refraction coefficient. This suggestion was extremely controversial, being criticized first by Biot and then by several other astronomers, after Faye refused to listen. In 1976, the same idea was raised by Livieratos, who proposed that it be used the other way around: use astronomical observations of refraction to improve geodesy! Rather than repeat all the arguments, let's look at what's missing here. If there are no superior mirages, the terrestrial refraction is unaffected by air above both the observer and the objects observed; so unless those objects are tall mountains, we get no information about the lower atmosphere from the terrestrial refraction. Even when we have superior mirages, so that the eye is inside a duct, the terrestrial refraction is not usually affected by air above the top of the duct. If it were permissible to extrapolate the lapse rate near the ground into the lower troposphere, we could fill in the gap. But the lapse rate near the ground is restricted to the bottom of the boundary layer; the air above that is basically unconnected to the air lower down. As the boundary layer is usually several hundred meters thick, there's a kilometer or two of air above it that strongly affects astronomical refraction near the horizon, but has no influence on terrestrial refraction. So there's no way to infer the temperature distribution above the boundary layer from terrestrial refraction observations. 
And that's what's needed to calculate astronomical refraction near the horizon, and hence to connect terrestrial and astronomical refraction. To be sure, there is a unique relation between horizontal ray curvature and the local density gradient (which, for a given temperature and pressure at eye level, depends only on the local lapse rate), and this unique relation extends to the terrestrial refraction coefficient. And the local lapse rate also tells you the magnification of astronomical objects at the horizon, through Biot's magnification theorem. So the gradient of the astronomical refraction at the horizon is uniquely determined; and it's connected to the terrestrial refraction, through the local lapse rate. Unfortunately, this isn't enough to determine the actual amount of astronomical refraction at the horizon. That also depends on the thickness of the boundary layer through which this special lapse rate extends. And of course, the rate at which the perturbed refraction blends into the region (above about 15° altitude) where the astronomical refraction becomes insensitive to the lapse rate, and depends only on the temperature and pressure at the observer, also depends on the thickness of this boundary layer, as well as on the temperature distribution for some distance above the boundary layer. You can see some examples of how the refraction near the horizon depends on the thickness of the boundary layer, and the lapse rate in it, in my 2004 paper on refraction near the horizon. (I adopted the standard-atmosphere lapse rate above the boundary layer.) Because the refraction is uniquely determined by the temperature profile, it's tempting to think we could use observations of astronomical refraction to determine the temperature profile. But that turns out to be an ill-posed problem above the astronomical horizon: the temperature distribution is so smeared out in the refraction as a function of zenith distance that errors in the observations are enormously amplified in the retrieved temperature distribution. So it's impractical to try to infer the temperature profile from refraction observations above the horizon. On the other hand, below the astronomical horizon, the layer where the ray is horizontal is so heavily weighted in the refraction integral that the problem becomes well-posed — but only for the part of the profile below eye level. Then a retrieval of the temperature distribution below eye level is possible (see the papers by Bruton and Kattawar in 1997 and 1998 for an example, and the history of the problem, respectively.) However, it's just the part of the profile below eye level that can be determined this way. That's enough to determine the terrestrial refraction of objects that are also below eye level, but not objects above it. So this isn't a general solution to the terrestrial-refraction problem. However, as it is at least possible to infer the lapse rate below eye level from the gradient of astronomical refraction at the horizon — e.g., from the flattening of the Sun or Moon — you might imagine that we could use these flattenings to provide the geodesists and surveyors with the refraction coefficient for terrestrial refraction. But the lapse rate near the ground is changing rapidly near sunrise and sunset, because of the rapid transition between solar heating of the surface during the day and radiative cooling of the surface at night. 
And there's not much else that's regularly observable at the horizon: the Moon's flattening is difficult to measure accurately because of phase effects, and stars aren't observable in the daytime against the bright horizon sky. About all that leaves is the possibility of estimating the terrestrial refraction at night from stellar observations. But it's hard to see terrestrial features at night. (Livieratos tried using lasers as terrestrial markers at night.) A further problem in connecting terrestrial and astronomical refraction is the effect of distance. When we view astronomical objects, which are optically at infinity, any ray bending anywhere in the atmosphere produces an equal angular displacement of the object's apparent position. But when the object is at a finite distance, the angular displacement is smaller than the amount of ray bending. The closer the object, the less is the resulting displacement. This effect is often called parallactic refraction. It's important in reducing observations of artificial satellites and sounding rockets. You'll find just a few references to it in my bibliography. The geodesists usually assume the ray path is a circular arc, which means the bending is uniformly distributed between target and observer. Then only half of the total ray bending appears as angular displacement of the target. So, while the whole bending is the same as the astronomical refraction, only half of the bending between observer and target is the angular displacement — the terrestrial refraction — of the terrestrial target. It's as though all the bending were concentrated at the point halfway between target and observer. If the bending occurs even closer to the target, its angular displacement is still less. That's why a ship appears undistorted against a highly distorted sunset in Plate XIX of O'Connell's book: the Sun is distorted by the atmospheric layers in which the ship is embedded, but the ship is not. This same effect plays an important role in mirages: objects are more distorted, the farther they are beyond the horizon. Those at or within the horizon are not distorted appreciably at all. So, while terrestrial objects beyond the horizon can be distorted, the setting Sun is usually distorted even more. So we need to know the exact distribution of the bending along the line of sight to compute its effects on terrestrial objects, because the displacement of terrestrial objects depends on their distance from the distorting medium. You can regard the dip of the horizon as the sum of the geometric dip and the terrestrial refraction at the apparent horizon. The dip depends on the average lapse rate between eye level and the surface at the apparent horizon. Of course, the dip affects the time of sunset. But notice that the dip is much less affected by refraction than is the apparent position of the Sun. First, the line of sight to the setting Sun traverses the atmosphere below eye level twice, while the line of sight to the apparent terrestrial horizon only goes through it once. Second, the refractive displacement of the horizon is only about half of the total bending in this part of the path. So the refractive displacement of the horizon is only about one fourth of the part of the Sun's refraction produced by the layers below eye level. (Of course, the Sun is also refracted by the air above eye level, which does not affect the sea horizon at all.) 
So, even though refraction raises both the Sun and the apparent horizon above their geometric positions, these effects don't cancel out. But, even though the refractive displacement of the horizon is small compared to the Sun's, one should take these variations into account in comparing observed and computed sunset times. As a practical matter, we may as well forget about trying to tie terrestrial and astronomical refraction together quantitatively, although terrestrial mirages at the horizon certainly promise even greater distortions of the sunset. As the geodesists have worked out good rules of thumb for the variation of their refraction coefficient as a function of time of day, and the boundary-layer meteorologists have also some understanding of the diurnal variations in the boundary layer, it's not beyond the realm of possibility that someone might put all these pieces together, and eventually come up with a way to predict the astronomical refraction near the horizon with improved accuracy. But, until then, astronomers are content to stay away from the horizon, and work in the region where Oriani's theorem promises a refraction that depends only on the local temperature and pressure. Just don't expect average refraction tables to be very close to the actual refraction at a particular time and place, when you're looking close to the horizon. Copyright © 2006 – 2007, 2010 Andrew T. Young
http://mintaka.sdsu.edu/GF/explain/atmos_refr/terrestrial.html
13
21
Lagrange's theorem (group theory)
Lagrange's theorem, in the mathematics of group theory, states that for any finite group G, the order (number of elements) of every subgroup H of G divides the order of G. The theorem is named after Joseph-Louis Lagrange.
Proof of Lagrange's theorem
This can be shown using the concept of left cosets of H in G. The left cosets are the equivalence classes of a certain equivalence relation on G and therefore form a partition of G. Specifically, x and y in G are related if and only if there exists h in H such that x = yh. If we can show that all cosets of H have the same number of elements, then each coset of H has precisely |H| elements. We are then done, since the order of H times the number of cosets is equal to the number of elements in G, thereby proving that the order of H divides the order of G. Now, if aH and bH are two left cosets of H, we can define a map f : aH → bH by setting f(x) = ba⁻¹x. This map is bijective because its inverse is given by f⁻¹(y) = ab⁻¹y. This proof also shows that the quotient of the orders |G| / |H| is equal to the index [G : H] (the number of left cosets of H in G), so we can write this statement as |G| = [G : H] · |H|.
Using the theorem
A consequence of the theorem is that the order of any element a of a finite group (i.e. the smallest positive integer k with aᵏ = e, where e is the identity element of the group) divides the order of that group, since the order of a is equal to the order of the cyclic subgroup generated by a. If the group has n elements, it follows that aⁿ = e.
Existence of subgroups of given order
Lagrange's theorem raises the converse question as to whether every divisor of the order of a group is the order of some subgroup. This does not hold in general: given a finite group G and a divisor d of |G|, there does not necessarily exist a subgroup of G with order d. The smallest example is the alternating group G = A4, which has 12 elements but no subgroup of order 6. A CLT group is a finite group with the property that for every divisor of the order of the group, there is a subgroup of that order. It is known that a CLT group must be solvable and that every supersolvable group is a CLT group; however, there exist solvable groups that are not CLT (for example A4, the alternating group of degree 4) and CLT groups that are not supersolvable (for example S4, the symmetric group of degree 4). There are partial converses to Lagrange's theorem. For general groups, Cauchy's theorem guarantees the existence of an element, and hence of a cyclic subgroup, of order any prime dividing the group order; Sylow's theorem extends this to the existence of a subgroup of order equal to the maximal power of any prime dividing the group order. For solvable groups, Hall's theorems assert the existence of a subgroup of order equal to any unitary divisor of the group order (that is, a divisor coprime to its cofactor).
Lagrange did not prove Lagrange's theorem in its general form. He stated, in his article Réflexions sur la résolution algébrique des équations, that if a polynomial in n variables has its variables permuted in all n! ways, the number of different polynomials that are obtained is always a factor of n!. (For example, if the variables x, y, and z are permuted in all 6 possible ways in the polynomial x + y − z, then we get a total of 3 different polynomials: x + y − z, x + z − y, and y + z − x. Note that 3 is a factor of 6.) The number of such polynomials is the index in the symmetric group Sn of the subgroup H of permutations that preserve the polynomial.
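To make the coset argument and Lagrange's own polynomial example concrete, here is a small Python sketch (not part of the original article): it builds S3 as permutations of the variables (x, y, z), takes the subgroup H that preserves the polynomial x + y − z, and checks that the left cosets of H all have |H| elements and partition S3, so |H| divides |S3|.

```python
from itertools import permutations

# Elements of S3: p maps position i to position p[i].
S3 = list(permutations(range(3)))

def compose(p, q):
    """Composition p∘q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

# The polynomial x + y - z has coefficients (+1, +1, -1) on (x, y, z).
coeffs = (1, 1, -1)

def permute_coeffs(p, c):
    """Coefficients of the polynomial after sending variable i to variable p[i]."""
    out = [0, 0, 0]
    for i in range(3):
        out[p[i]] = c[i]
    return tuple(out)

# H = permutations that leave x + y - z unchanged: the identity and the swap of x and y.
H = [p for p in S3 if permute_coeffs(p, coeffs) == coeffs]

# Left cosets aH, collected as sets of group elements.
cosets = {frozenset(compose(a, h) for h in H) for a in S3}

assert all(len(c) == len(H) for c in cosets)   # every coset has exactly |H| elements
assert sum(len(c) for c in cosets) == len(S3)  # the cosets partition S3

distinct_polys = {permute_coeffs(p, coeffs) for p in S3}
print(len(S3), len(H), len(cosets), len(distinct_polys))  # 6 2 3 3: |S3| = [S3 : H] * |H|
```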
(For the example of x + y − z, the subgroup H in S3 contains the identity and the transposition (xy).) So the size of H divides n !. With the later development of abstract groups, this result of Lagrange on polynomials was recognized to extend to the general theorem about finite groups which now bears his name. Lagrange did not prove his theorem; all he did, essentially, was to discuss some special cases. The first complete proof of the theorem was provided by Gauss and published in his Disquisitiones Arithmeticae in 1801. - Lagrange, J. L. (1771) "Réflexions sur la résolution algébrique des équations" [Reflections on the algebraic solution of equations] (part II), Nouveaux Mémoires de l’Académie Royale des Sciences et Belles-Lettres de Berlin, pages 138-254; see especially pages 202-203. Available on-line (in French, among Lagrange's collected works) at: http://math-doc.ujf-grenoble.fr/cgi-bin/oeitem?id=OE_LAGRANGE__3_205_0 [Click on "Section seconde. De la résolution des équations du quatrième degré 254-304"]. - Bray, Henry G. (1968), "A note on CLT groups", Pacific J. Math. 27 (2): 229–231 - Gallian, Joseph (2006), Contemporary Abstract Algebra (6th ed.), Boston: Houghton Mifflin, ISBN 978-0-618-51471-7 - Dummit, David S.; Foote, Richard M. (2004), Abstract algebra (3rd ed.), New York: John Wiley & Sons, ISBN 978-0-471-43334-7, MR 2286236 - Roth, Richard R. (2001), "A History of Lagrange's Theorem on Groups", Mathematics Magazine 74 (2): 99–108, doi:10.2307/2690624, JSTOR 2690624
http://en.wikipedia.org/wiki/Lagrange's_theorem_(group_theory)
13
12
Creating models for sets of data is a key skill for the businessperson, scientist, politician, or anyone else who uses data to make decisions. This is because modeling enables one to make predictions. The graphing calculator can help with this task. Given a set of data, one can create a model that is a linear, quadratic, cubic, quartic, exponential, logarithmic, sinusoidal, logistical, or power function. Determining which model is the most appropriate model is addressed in statistics courses, although a fundamental rule of thumb has to do with the plot of the residuals of the regression. This appendix shows how to use the two most popular graphing calculators—the TI 8xx series (83+, 84, and 84+) and the TI Nspire—in working through three problems: a linear regression based on data obtained from a group of students reading a tongue twister, a quadratic regression based on the heights of a bouncing ball measured with a motion detector, and an exponential regression using the maximum heights to which the ball bounces. The keystrokes and screen shots are given so that you can use the tool.
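For readers working outside a TI calculator, the same three kinds of fit can be sketched in a few lines of Python with NumPy; the data arrays below are made-up placeholders, not the tongue-twister or bouncing-ball measurements described above.

```python
import numpy as np

# Hypothetical (x, y) data standing in for the measured examples in this appendix.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Linear model y = a*x + b (degree-1 polynomial fit).
a, b = np.polyfit(x, y, 1)

# Quadratic model y = c2*x**2 + c1*x + c0, as for the bouncing-ball heights.
c2, c1, c0 = np.polyfit(x, y, 2)

# Exponential model y = A*exp(k*x): fit a line to log(y), then transform back.
k, logA = np.polyfit(x, np.log(y), 1)
A = np.exp(logA)

# Residuals of the linear fit; plotting these is the usual rule of thumb for judging a model.
residuals = y - (a * x + b)
print(a, b, residuals)
```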
http://www.netplaces.com/algebra-guide/regression-with-graphing-calculators/
13
12
We are exposed to nuclear radiation every day. Some radiation comes from natural sources and some is anthropogenic (i.e., a result of human activity). Natural sources include cosmic radiation, radiation from lighter, unstable nuclei produced by the bombardment of the atmosphere by cosmic radiation, and radiation from heavy, unstable nuclei produced by the decay of long-lived nuclides in the earth's crust. Artificial sources include medical procedures, commercial products, and fallout from nuclear testing. Nuclear radiation can cause biological damage because it is highly energetic. Nuclear radiation loses its energy when it passes through matter by ionizing the absorbing material. For this reason, nuclear radiation is called ionizing radiation. In the ionization process, neutral atoms in the absorbing material lose electrons, forming positive ions. Frequently, the ejected electrons possess sufficient energy to cause other atoms to ionize. The average amount of energy required to ionize an atom is 35 electron volts (eV). (1 eV is the amount of energy acquired by an electron accelerated through an electric potential of 1 V; 1 eV is equal to 1.6 × 10⁻¹⁹ joules (J).) The energy of a single particle from a nuclear decay can be as high as 8 million electron volts (8 MeV), and one 8 MeV particle can produce 2 × 10⁵ ions. The magnitude of the radioactivity in a sample can be expressed as an activity, exposure, or absorbed dose. Activity is the number of nuclei that decay (disintegrate) per unit time. The most common unit of activity is the curie (Ci), which is defined as 3.7 × 10¹⁰ disintegrations per second. (Marie Curie discovered Ra-226, and 1 Ci is the activity of 1 gram of Ra-226.) The SI unit of activity is 1 disintegration per second, or becquerel. Exposure is the amount of ionization caused by radioactive material. One roentgen is the amount of radiation that produces ions with a total charge of 1 electrostatic unit in 1 cm³ of dry air. (1 roentgen is equal to 2.58 × 10⁻⁴ coulomb/kg of air.) Absorbed dose is the amount of energy absorbed by a substance exposed to ionizing radiation. One radiation absorbed dose, or rad, is equal to 1 × 10⁻⁵ J/g. Different kinds of radiation cause different biological effects for the same amount of energy absorbed. For this reason, the roentgen equivalent in man, or rem, was introduced. One rem is equal to one rad multiplied by a factor, Q, that accounts for the relative biological effect of radiation on humans. For X-rays, Q ~ 1, while for alpha particles and fast neutrons, Q ~ 20. The ionizing power of radiation depends on the type of radiation. An alpha particle, which is a helium nucleus, ⁴He²⁺, is relatively massive and ionizes virtually every atom in its path. However, alpha particles lose most of their energy after traveling only a few centimeters in air or less than 0.005 mm in aluminum. A beta particle, which is an electron, is relatively light and ionizes only a fraction of the atoms in its path. However, beta particles can travel more than a meter in air or several millimeters in aluminum. For most people, cosmic radiation is the major source of absorbed radiation. At sea level, the average person absorbs 26 millirem (mrem) per year. The atmosphere shields the surface of the earth from cosmic radiation; however, for each 100-meter increase in elevation, the dosage absorbed increases by about 1.5 mrem per year. A person traveling by commercial jet aircraft on a long flight, such as Los Angeles to London, can receive as much as 10 mrem during the flight.
When cosmic radiation interacts with gases in the atmosphere, it causes nuclear transformations that release neutrons and protons. These neutrons and protons interact with other nuclei in the atmosphere, producing radioactive nuclei, such as carbon-14 and tritium (³H). Carbon-14 is responsible for less than 1 mrem per year of absorbed radiation in humans, and tritium about 1 microrem. Long-lived radioisotopes in the earth's crust are also a source of radiation. Potassium is one of the most abundant elements, and an essential component of food. Potassium-40 makes up 0.019% of all potassium, and has a half-life of 1.3 × 10⁹ years. The average absorbed dose for a person from external potassium-40 is about 12 mrem per year, while that from internal potassium-40 is about 20 mrem per year. For more information about environmental radiation, see "Radioactivity in Everyday Life," in the May, 1997, issue of the Journal of Chemical Education, page 501. How old is the Earth, the solar system, or a piece of charcoal from an ancient campfire? Until the beginning of the 20th Century, geologists had no method by which to determine the absolute age of a material. The age of the earth was believed to be at most tens of millions of years. Not long after the discovery of radioactivity in 1896, scientists realized that radioactive decay constitutes a "clock" capable of measuring absolute geologic time. By 1907, the discovery that lead was the final product of uranium decay provided evidence that geologic time should be measured in billions of years. Uranium occurs in numerous minerals, such as pitchblende (UO₃·UO₂·PbO) and carnotite (K₂O·2UO₃·V₂O₅·3H₂O), and is more plentiful in the Earth's crust than mercury or silver. Uranium was first isolated in 1841 by the reduction of uranium(IV) chloride with potassium. Uranium is sufficiently radioactive to expose a photographic plate in one hour. Naturally occurring uranium contains 14 isotopes, all of which are radioactive. The three most abundant are U-238 (99.28%), U-235 (0.71%), and U-234 (0.006%). In contrast to chemical reactions, where the isotopes of an element behave similarly, in nuclear reactions, isotopes behave quite differently. For example, of the three most abundant uranium isotopes, only U-235 undergoes fission. U-238 decays by alpha emission to Th-234:
²³⁸U → ²³⁴Th + ⁴He    (t½ = 4.5 × 10⁹ years)
The product of this reaction, Th-234, is also radioactive and undergoes beta decay:
²³⁴Th → ²³⁴Pa + e⁻    (t½ = 24 days)
Protactinium-234 also decays by emitting a beta particle. These two reactions are the beginning of a series of 14 nuclear decay steps, referred to as the uranium decay series. After the emission of 8 alpha particles and 6 beta particles, the stable isotope Pb-206 is produced. The intermediate isotopes are called "daughters", and have half-lives that range from 1.6 × 10⁻⁴ seconds for Po-214 to 2.5 × 10⁵ years for U-234. Two other radioactive series occur in nature, one that starts with U-235 and the other with Th-232. The uranium decay series has been used to estimate the age of the oldest rocks in the Earth's crust. The ratio of U-238 to Pb-206 in a rock changes slowly as the U-238 in the rock decays. Because the half-life of U-238 is 20,000 times that of the next longest half-life in the series, the rate of decay of U-238 is the rate-determining step in the conversion of U-238 to Pb-206. The rate of radioactive decay is first order in the amount of decaying isotope. Two other radioactive clocks are used for dating geologic time.
One is potassium-40, which decays by electron capture to argon-40:
⁴⁰K + e⁻ → ⁴⁰Ar    (t½ = 1.3 × 10⁹ years)
The other is rubidium-87, which emits a beta particle to form Sr-87:
⁸⁷Rb → ⁸⁷Sr + e⁻    (t½ = 5.7 × 10¹⁰ years)
These radioactive "clocks" are more useful for dating rock samples than uranium because potassium and rubidium are more widely distributed in rock samples. All radiochemical methods of dating have uncertainties associated with them. Several assumptions are made in determining an age. The most significant assumption is that the sample is a closed system, which is to say that no parent or daughter isotopes were gained or lost by the sample over time. Another assumption involves the amount of daughter isotope present at the time the sample was formed. For rare isotopes, this is generally assumed to be zero. The strongest evidence for the age of a sample is obtained when two different radiochemical dating methods produce the same result. Because the chemical properties of daughter products are very different, any geological transformation of a rock sample will have very different effects on the sample's daughter isotope content. Potassium and rubidium frequently occur together in rock samples, making this pair particularly important for radiochemical dating. Radiochemical dating of samples from the Earth's crust yields a maximum age of about 3.5 × 10⁹ years; however, the earth is believed to be older than this. The oldest meteorites and moon rocks are 4.5 × 10⁹ years old. If these other members of the solar system were formed at the same time, then the Earth may also have formed 4.5 billion years ago. The isotopic composition of lead supports this conclusion. Of the four lead isotopes, only Pb-204 is not produced by radioactive decay of parent U-238, U-235, or Th-232. Comparing the isotopic composition of lead in the Earth's crust to that of meteorites free of uranium and thorium indicates that about 4.5 billion years of U and Th decay would be required to produce the Pb isotope ratios found on Earth.
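As a numerical illustration of how such a radioactive "clock" is read, here is a short Python sketch (not from the original article). It applies first-order decay to a hypothetical rock: given the measured ratio of daughter Pb-206 to parent U-238, and assuming no initial daughter and a closed system, the age follows from t = ln(1 + D/P) / λ.

```python
import math

HALF_LIFE_U238 = 4.5e9                    # years, from the text
LAMBDA = math.log(2) / HALF_LIFE_U238     # first-order decay constant, 1/years

def age_from_ratio(daughter_per_parent):
    """Age of a closed-system sample with no initial daughter isotope.

    N_parent(t) = N0 * exp(-lambda * t) and N_daughter = N0 - N_parent,
    so D/P = exp(lambda * t) - 1  =>  t = ln(1 + D/P) / lambda.
    """
    return math.log(1.0 + daughter_per_parent) / LAMBDA

# Hypothetical measurements of the Pb-206 / U-238 atom ratio.
print(age_from_ratio(1.0))   # equal numbers of daughter and parent -> about one half-life, ~4.5e9 years
print(age_from_ratio(0.1))   # a much younger rock
```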
http://scifun.chem.wisc.edu/CHEMWEEK/radiation/radiation.html
13
10
It is widely believed among planetary scientists that carbon dioxide, not ammonia and methane, dominated Earth's earliest atmosphere. The primal, early conditions, essentially Earth as a molten ball with an atmosphere of carbon dioxide, nitrogen, and other volcanic emissions, may not have favored widespread synthesis of organic molecules. The fact that life appeared soon after the termination of the heavy bombardment about 3.8 billion years ago makes it seem more reasonable that the incoming comets and asteroids delivered the compounds essential for life. The 2005 Deep Impact mission to Comet Tempel 1 discovered a mixture of organic and clay particles inside the comet that shows it is overwhelmingly likely that life began in space, according to research by Cardiff University scientists, professor Chandra Wickramasinghe and colleagues at the University’s Center for Astrobiology. One theory for the origins of life proposes that clay particles acted as a catalyst, converting simple organic molecules into more complex structures. The 2004 Stardust Mission to Comet Wild 2 found a range of complex hydrocarbon molecules - potential building blocks for life. The Cardiff team proposes the controversial theory that radioactive elements can keep water in liquid form in comet interiors for millions of years, making them potentially ideal “incubators” for early life. They also point out that the billions of comets in our solar system and across the galaxy contain far more clay than the early Earth did. The researchers calculate the odds of life starting on Earth rather than inside a comet at one trillion trillion (10 to the power of 24) to one against. Professor Wickramasinghe said: “The findings of the comet missions, which surprised many, strengthen the argument for panspermia. We now have a mechanism for how it could have happened. All the necessary elements - clay, organic molecules and water - are there. The longer time scale and the greater mass of comets make it overwhelmingly more likely that life began in space than on earth.” In his essay, Extraterrestrials: A Modern View, University of Washington professor Guillermo Gonzalez wrote: "The kind of origin of life theory a scientist holds seems to depend on his/her field of specialty: oceanographers like to think it began in a deep sea thermal vent, biochemists like Stanley Miller prefer a warm tidal pool on the Earth's surface, astronomers insist that comets played an essential role by delivering complex molecules, and scientists who write science fiction part time imagine that the Earth was "seeded" by interstellar microbes." Posted by Casey Kazan, via Cardiff University: "Comet probes reveal evidence of origin of life, scientists claim".
http://www.dailygalaxy.com/my_weblog/2010/04/are-comets-incubators-of-life-new-research-points-to-yes.html
13
31
Step 4: Displaying Information
One of the most powerful ways to communicate data is by using graphs. Data which is presented in a graph can be quick and easy to understand. A graph should:
- be simple and not too cluttered
- show data without changing the data’s message
- clearly show any trend or differences in the data
- be accurate in a visual sense (e.g. if one value is 15 and another 30, then 30 should appear to be twice the size of 15).
Different graphs are useful for different types of information, and it is important that the right graph for the type of data is selected.
A bar graph may be either horizontal or vertical. To differentiate between the two, a vertical bar graph is called a column graph. An important point about bar graphs is the length of the bars: the greater the length, the greater the value. A column graph usually represents discrete data. Note that a column graph has a gap between each column or set of columns.
Figure 1: Example column graph (Source: Australian Bureau of Statistics, 2001 Census of Population and Housing)
The above graph is a multiple column graph. It makes comparisons between males and females easier to understand. It is important that each graph has a heading, the axes are labelled and there is a key. Notice that each axis is evenly divided.
Horizontal Bar Graph
The advantage of a horizontal bar graph over a column graph is that the category labels in a horizontal bar graph can be fully displayed, making the graph easier to read.
Figure 2: Example horizontal bar graph (Source: Australian Bureau of Statistics, ABS Publication 1331.0 Statistics: A Powerful Edge)
A histogram is similar to a column graph; however, there is no gap between columns. The frequency is measured by the area of the column. Generally, a histogram will have equal width columns, although when class intervals vary in size this will not be the case. Choosing the appropriate width of the columns for a histogram is very important.
Figure 3: Example histogram (Source: Australian Bureau of Statistics, ABS Publication 1331.0 Statistics: A Powerful Edge)
Line graphs should always be used when you are trying to display a trend in data over time. When drawing line graphs, it is important to use a consistent scale on each axis, otherwise the line's shape can give incorrect impressions about the information.
Figure 4: Example line graph (Source: Australian Demographic Statistics, cat. no. 3101.0; Australian Demographic Trends, cat. no. 3102.0; Official Year Book of the Commonwealth of Australia, 1901-1910)
Pie charts (often called pie graphs, sector graphs or sector charts) are used for simple comparisons of a small number of categories that make up the total number of responses. Using more than five categories will make a pie chart difficult to read. It is very important to label the slices with their actual values to make the comparison easier.
Figure 5: Example pie chart (Source: Australian Bureau of Statistics, ABS Publication 1331.0 Statistics: A Powerful Edge)
A dot chart is able to convey a lot of information in a simple way without clutter. It contrasts values very clearly, and can display many data values.
Figure 6: Example dot chart (Source: Australian System of National Accounts, 2005-06, cat. no. 5204.0)
To represent the population age structure, the ABS uses an age pyramid. Age pyramids are a very effective way of showing change in a country’s age structure over time, or for comparing different countries.
Estimates and projections of Australia's population from 1971 to 2050 are available on the ABS Animated Age Pyramid page.
Figure 7: Example age pyramid (Source: Australian Bureau of Statistics)
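As a rough sketch of how such graphs can be produced outside a statistical package, the following Python/Matplotlib snippet draws a simple column graph and a pie chart from made-up category counts; the categories and values are placeholders, not ABS data.

```python
import matplotlib.pyplot as plt

# Hypothetical category counts; real data would come from a survey or census table.
categories = ["Walk", "Bus", "Car", "Bicycle"]
counts = [12, 30, 45, 13]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Column graph: one bar per discrete category, with a gap between columns.
ax1.bar(categories, counts)
ax1.set_title("Method of travel (column graph)")
ax1.set_ylabel("Number of students")

# Pie chart: suitable here because there are only a few categories.
ax2.pie(counts, labels=categories, autopct="%1.0f%%")
ax2.set_title("Method of travel (pie chart)")

plt.tight_layout()
plt.show()
```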
http://abs.gov.au/websitedbs/cashome.nsf/51c9a3d36edfd0dfca256acb00118404/d7e1433e95c21f3cca2572fe001e58b7!OpenDocument
13
16
This is a more basic article, in that most people dealing with 3D graphics probably have a pretty good idea how addition works, but it may still provide some insight into how to visualize vector addition. I thought it was pretty cool myself as I was drawing the figures. One of the most basic laws about adding numbers is that if A + B = C , then B + A = C is also true. Likewise, if A + B + C = D , then A + C + B = D , and B + A + C = D , and B + C + A = D , and C + B + A = D , and C + A + B = D . No matter which order you add the numbers, they always add up to D. The same rule holds true when adding Vectors (groups of numbers). The way you add vectors is you take the first components and add them together to produce the first component of the resulting vector, take the second components of the vectors and add them together, and continue until all components are added. Vectors are commonly only 2 component or 3 component, to represent points in 2D (flat) and 3D space, but they can be more. To add (6, 2) with (11, 3) , the result is (6 + 11, 2 + 3) or (17, 5) Now comes the fun part, visualization. This first picture shows 4 two-dimensional vectors, all coming from the origin (point (0, 0) The following images show how to visualize adding the four vectors together. If you connect all the vectors together, head to tail, head to tail, starting at the origin and progressing through space, they will end up at a point which when considered as a vector itself starting at the origin, is the result vector of the addition. As with normal addition, the order you connect the vectors head to tail does not matter at all. In the three examples, the order is always different, but the final point always lands on the tip of the new vector E , which is the result of the addition. This visualization trick works in 3D as well as 2D. Vector Subtraction can be visualized in the same way, except that when you subtract a vector, you stick its head to the point where its tail would normally attach, and as with normal subtraction, the order in which you subtract vectors DOES matter. You can use visualization to help make sure you're subtracting vectors in the right order. Copyright 1998-1999 by Seumas McNally. No reproduction may be made without the author's written consent. Courtesy Of Longbow Digital Artists
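A minimal Python sketch of the componentwise rule described above (not part of the original article); it also checks that reordering the vectors does not change the sum, while reordering a subtraction does.

```python
def vec_add(*vectors):
    """Add any number of same-length vectors componentwise."""
    return tuple(sum(components) for components in zip(*vectors))

def vec_sub(a, b):
    """Subtract b from a componentwise (order matters)."""
    return tuple(x - y for x, y in zip(a, b))

A, B, C, D = (6, 2), (11, 3), (-4, 7), (0, -5)

print(vec_add(A, B))                                  # (17, 5), as in the text
assert vec_add(A, B, C, D) == vec_add(D, C, B, A)     # order doesn't matter for addition
assert vec_sub(A, B) != vec_sub(B, A)                 # but it does matter for subtraction
```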
http://www.gamedev.net/page/resources/_/technical/math-and-physics/visualizing-vector-addition-r1129
13
10
Physics with Calculus/Modern/Special Relativity
One of Einstein's most famous theories was the theory of special relativity. Relativity explains the dynamics of systems in motion relative to one another. One popular example is that of a person on a train watching a stationary person on the ground. When you view the problem in one light, you may think that the person on the train is moving (this is from the stationary person's vantage point). Now, think of the problem from the viewpoint of the person on the moving train--he thinks that the person on the ground is moving. Ultimately, you may choose to view the problem in either way--there is no right or wrong answer. Everything just depends on your viewpoint. All of special relativity can be derived from two principles:
1) Physics is the same in any inertial reference frame. If you're riding in your car on the highway (with the windows up) and you drop a ball, it falls in the same way as if you were stopped at a stop sign and dropped the same ball.
2) Nothing can travel faster than the speed of light (which should really be called the speed of a massless particle). This speed is approximately c = 3.00 × 10⁸ m/s.
Strictly speaking, information cannot travel faster than the speed of light. It can be shown that if a signal travels faster than light, in some reference frame the information is received before it is sent, which is nonsense. However, there is no restriction on other things. For example, if I had a projector very far away from a screen, and I wave my hand in front of it very quickly, the shadow will travel faster than light. Also, the phase velocity of light waves in some circumstances will be greater than the speed of light. However, it is impossible to transmit any signal faster than light with these objects. We could derive all of relativity from the two postulates, but that would not be very enlightening, because it is a rather technical proof. Instead, consider that Alice is in a frame moving at velocity v with respect to Bob. Let's call Alice's frame A', and Bob's frame A. It follows that (for v along the x-axis)
x' = γ(x − vt), y' = y, z' = z, t' = γ(t − vx/c²), where γ = 1/√(1 − v²/c²).
Compare this to the Galilean transform,
x' = x − vt, y' = y, z' = z, t' = t.
The startling thing about the first set of equations, called the Lorentz transform, is that if substituted into Maxwell's equations, they leave them unchanged! If you try to substitute the second set of equations in, you get different equations. Anywhere there is electricity and magnetism, the Galilean transform must be wrong. It is possible to derive all the results in relativity from the Lorentz transforms, much like all of non-relativistic mechanics comes from the Galilean transform and Newton's laws. Let's look at some properties of the Lorentz transform. First, the inverse of the transform (solving for the unprimed components in terms of the primed ones) gives exactly the same thing except with v replaced with -v, as we would expect. If there is a burst of light, defined by x² + y² + z² = c²t², it remains unchanged: x'² + y'² + z'² = c²t'². With new notation, β = v/c and γ = 1/√(1 − β²). Define a four-vector to be a 4-tuple with components (a₁, a₂, a₃, a₄) such that when transformed to a new frame with characteristic velocity v, the components transform as above. For example, (x, y, z, ct) is a four-vector. If (a₁, a₂, a₃, a₄) is a four-vector, then when changing frames (when v is in the x direction), we have
a₁' = γ(a₁ − βa₄), a₂' = a₂, a₃' = a₃, a₄' = γ(a₄ − βa₁).
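A small numerical sketch (using the standard form of the boost written above, which was reconstructed from context): it boosts an event by velocity v along x and checks that x² + y² + z² − c²t² is unchanged, which is the invariance used for the burst of light.

```python
import math

C = 3.0e8  # speed of light, m/s

def lorentz_boost_x(x, y, z, t, v):
    """Transform an event to a frame moving with speed v along +x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    xp = gamma * (x - v * t)
    tp = gamma * (t - v * x / C**2)
    return xp, y, z, tp

def interval(x, y, z, t):
    """The invariant combination x^2 + y^2 + z^2 - (c*t)^2."""
    return x**2 + y**2 + z**2 - (C * t)**2

event = (4.0e8, 1.0e8, 0.0, 2.0)               # an arbitrary event (metres, seconds)
boosted = lorentz_boost_x(*event, v=0.6 * C)

print(interval(*event), interval(*boosted))    # the two values agree (up to rounding)
```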
http://en.m.wikibooks.org/wiki/Physics_with_Calculus/Modern/Special_Relativity
13
18
Proof of the area of a trapezoid
A first good way to start off with the proof of the area of a trapezoid is to draw a trapezoid and turn the trapezoid into a rectangle. Look at the trapezoid ABCD above. How would you turn this into a rectangle? Draw the average base (shown in red), which connects the midpoints of the two sides that are not parallel. Then, make 4 triangles as shown below. Let's call the two parallel sides in blue (the bases) b1 and b2. Since triangles EDI and CFI are congruent or equal, and triangles KAJ and RBJ are equal, you could make a rectangle by rotating triangle EDI around point I, 180 degrees counterclockwise, and by rotating triangle KAJ clockwise, but still 180 degrees, around point J. Because you could make a rectangle with the trapezoid, both figures have the same area. The reason that triangle EDI is equal to triangle IFC is that we can find two angles inside the triangles that are the same. If two angles are the same, then the third or last angle must be the same. The angles that are the same are shown below. They are in red and green. The angles in green are right angles. The angles in red are vertical angles. This is important because if these two triangles are not congruent or the same, we cannot make the rectangle with the trapezoid by rotating triangle EDI. It would not fit properly. Again, this same argument applies for the two triangles on the left. Therefore, if we can find the area of the rectangle, the trapezoid will have the same area. Let us find the area of the rectangle. We will need the following figure again. First, make these important observations:
b1 = RC
BF = BR + b1 + CF
b2 = AD
KE = AD − AK − ED, so KE = b2 − AK − ED
AK = BR and ED = CF
Notice also that you can find the length of the line in red (the average base) by taking the average of length BF and length KE. Since the length of the line in red is the same as the base of the rectangle, we can just multiply that by the height to get the area of the trapezoid. Finally, we get:
Area of trapezoid = ((b1 + b2)/2) × h
An alternative proof of the area of a trapezoid could be done this way. Start with the same trapezoid. Draw heights from vertices B and C. This will break the trapezoid down into 3 shapes: 2 triangles and a rectangle. Label the base of the small triangle x and the base of the bigger triangle y. Label the shorter base of the trapezoid b1 and the longer base b2. Then b1 = b2 − (x + y), so x + y = b2 − b1. The area of the rectangle is b1 × h, and the areas of the triangles with bases x and y are:
(x × h)/2 and (y × h)/2
To get the total area, just add these areas together:
Total area = b1 × h + (x × h)/2 + (y × h)/2 = h × (b1 + (x + y)/2) = h × (b1 + (b2 − b1)/2) = ((b1 + b2)/2) × h
The proof of the area of a trapezoid is complete.
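A quick numeric check of the formula derived above, using the second decomposition (rectangle plus two right triangles); the dimensions are arbitrary.

```python
def trapezoid_area(b1, b2, h):
    """Area from the average-base formula: (b1 + b2) / 2 * h."""
    return (b1 + b2) / 2 * h

# Decomposition check: shorter base b1, longer base b2 = b1 + x + y, height h.
b1, x, y, h = 5.0, 2.0, 3.0, 4.0
b2 = b1 + x + y

rectangle = b1 * h
triangles = x * h / 2 + y * h / 2

assert abs(trapezoid_area(b1, b2, h) - (rectangle + triangles)) < 1e-12
print(trapezoid_area(b1, b2, h))   # 30.0
```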
http://www.basic-mathematics.com/proof-of-the-area-of-a-trapezoid.html
13
27
In astronomy, star classification or stellar classification is a classification of stars based initially on photospheric temperature and its associated spectral characteristics, and subsequently refined in terms of other characteristics. Stellar temperatures can be classified by using Wien's displacement law; but this poses difficulties for distant stars. stellar spectroscopy offers a way to classify stars according to their absorption lines; particular absorption lines can be observed only for a certain range of temperatures because only in that range are the involved atomic energy levels populated. An early scheme (from the 19th century) ranked stars from A to Q, which is the origin of the currently used spectral classes. Morgan-Keenan stellar classification. This stellar classification is the most commonly used. The common classes are normally listed from hottest to coldest (with mass, radius and luminosity compared to the Sun) and are given in the following table. The colors in this table are greatly exaggerated for illustration. The actual color of the listed stars is mostly white with a very faint tint of the color indicated; often stars' colors are too subtle to notice and may be affected by their proximity to the horizon (from the perspective of the observer). |Class||Temperature||Star colour||Mass||Radius||Luminosity||Hydrogen lines| |O||30,000 - 60,000 K||Bluish ("blue")||60||15||1,400,000||Weak| |B||10,000 - 30,000 K||Bluish-white ("blue-white")||18||7||20,000||Medium| |A||7,500 - 10,000 K||White with bluish tinge ("white")||3.2||2.5||80||Strong| |F||6,000 - 7,500 K||White ("yellow-white")||1.7||1.3||6||Medium| |G||5,000 - 6,000 K||Light yellow ("yellow")||1.1||1.1||1.2||Weak| |K||3,500 - 5,000 K||Light orange ("orange")||0.8||0.9||0.4||Very weak| |M||2,000 - 3,500 K||Reddish orange ("red")||0.3||0.4||0.04||Very weak| The sizes listed for each class are appropriate only for stars on the main sequence portion of their lives and so are not appropriate for red giants. A popular mnemonic for remembering the order is "Oh Be A Fine Girl, Kiss Me" (there are many variants of this mnemonic). This scheme was developed in the 1900s, by Annie J. Cannon and the Harvard College Observatory. The Hertzsprung-Russell diagram relates stellar classification with Absolute magnitude, luminosity, and surface temperature. While these descriptions of stellar colors are traditional in astronomy, they really describe the light after it has been scattered by the atmosphere. The Sun is not in fact a yellow star, but has essentially the color temperature of a black body of 5780 K; this is a white with no trace of yellow which is sometimes used as a definition for standard white. The reason for the odd arrangement of letters is historical. When people first started taking spectra of stars, they noticed that stars had very different Hydrogen spectral lines strengths, and so they classified stars based on the strength of the hydrogen balmer series lines from A (strongest) to Q (weakest). Other lines of neutral and ionized species then came into play (H&K lines of calcium, Sodium D lines etc). Later it was found that some of the classes were actually duplicates and those classes were removed. It was only much later that it was discovered that the strength of the hydrogen line was connected with the surface temperature of the star. The basic work was done by the "girls" of Harvard College Observatory, primarily Annie Jump Cannon and Antonia Maury, based on the work of Williamina Fleming. 
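A small sketch (not from the source, with temperature ranges taken from the table above) that assigns a Morgan-Keenan letter from an effective temperature and, via Wien's displacement law, reports the wavelength at which such a star's blackbody spectrum peaks.

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

# Lower temperature bound (K) for each class, from the table above, hottest first.
CLASS_BOUNDS = [
    ("O", 30_000), ("B", 10_000), ("A", 7_500), ("F", 6_000),
    ("G", 5_000), ("K", 3_500), ("M", 2_000),
]

def spectral_class(temperature_k):
    """Return the MK letter whose temperature range contains temperature_k."""
    for letter, lower in CLASS_BOUNDS:
        if temperature_k >= lower:
            return letter
    return None  # cooler than class M (e.g. L/T objects discussed later)

def wien_peak_nm(temperature_k):
    """Peak wavelength of a blackbody at this temperature, in nanometres."""
    return WIEN_B / temperature_k * 1e9

for t in (35_000, 5_800, 3_000):
    print(t, spectral_class(t), round(wien_peak_nm(t)))
# e.g. 5800 K -> class G, peak near 500 nm (roughly the Sun)
```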
Spectral classes are further subdivided by Arabic numerals (0-9). For example, A0 denotes the hottest stars in the A class and A9 denotes the coolest ones. The sun is classified as G2. O, B, and A spectra are sometimes misleadingly called "early spectra", while K and M stars are said to have "late spectra". This stems from an early 20th century theory, now obsolete, that stars start their lives as very hot "early type" stars, and then gradually cool down, thereby evolving into "late type" stars. We now know that this theory is entirely wrong (see: stellar evolution). Spectral types of stellar classification. The following illustration represents star classes with the colors very close to those actually percieved by the the human eye. The relative sizes are for main sequence or "dwarf" stars. Stellar classification: Class O Class O stars are very hot and very luminous, being bluish in colour; in fact, most of their output is in the ultraviolet range. These are the rarest of all main sequence stars, constituting as few as 1 in 32,000. (LeDrew) O-stars shine with a power over a million times our Sun's output. These stars have prominent ionized and neutral helium lines and only weak hydrogen lines. Because they are so huge, Class O stars burn through their hydrogen fuel very quickly, and are the first stars to leave the main sequence. Recent observations by the Spitzer space telescope indicate that planetary formation does not occur within the vicinity of an O class star due to the Photo evaporation effect. Class B stars are extremely luminous and blue. Their spectra have neutral helium and moderate hydrogen lines. As O and B stars are so powerful, they live for a very short time. They do not stray far from the area in which they were formed as they don't have the time. They therefore tend to cluster together in what we call OB1 associations, which are associated with giant Molecular clouds. The Orion OB1 association is an entire Spiral arm of our Galaxy (brighter stars make the spiral arms look brighter, there aren't more stars there) and contains all of the constellation of Orion. They constitute about 0.13% of main sequence stars -- rare, but much more common than those of class O.(LeDrew) Class A stars are amongst the more common naked eye stars. As with all class A stars, they are white or bluish-white. They have strong hydrogen lines and also lines of ionized metals. They comprise perhaps 0.63% of all main sequence stars.(LeDrew) Class F stars are still quite powerful but they tend to be main sequence stars. Their spectra is characterized by the weaker hydrogen lines and ionized metals, their colour is white with a slight tinge of yellow. These represent 3.1% of all main sequence stars.(LeDrew) Class G stars are probably the best known, if only for the reason that our Sun is of this class. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. G is host to the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the G classification as this is an extremely unstable place for a supergiant to be. These are about 8% of all main sequence stars. Class K are orangish stars which are slightly cooler than our Sun. Some K stars are giants and supergiants, such as Arcturus while others like Alpha Centauri B are main sequence stars. They have extremely weak hydrogen lines, if they are present at all, and mostly neutral metals. 
These make up some 13% of main sequence stars.(LeDrew) Class M is by far the most common class. Over 78% of stars are red dwarfs, such as Proxima Centauri (LeDrew). M is also host to most giants and some supergiants such as Antares and Betelgeuse, as well as Mira variables. The spectrum of an M star shows lines belonging to molecules and all neutral metals but hydrogen are usually absent. Titanium oxide can be strong in M stars. Extended Spectral types A number of new spectral types have been taken into use from newly discovered types of stars. Class W or WR represents the superluminous Wolf-Rayet stars, notably unusual since they have mostly helium in their atmospheres instead of hydrogen. They are thought to be dying supergiants with their hydrogen layer blown away by hot stellar winds caused by their high temperatures, thereby directly exposing their hot helium shell. Class W is subdivided into subclasses WN and WC according to the dominance of nitrogen or carbon in their spectra (and outer layers). Class L, dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. L does not mean Lithium Dwarf; a large fraction of these stars do not have lithium in their spectra. Some of these objects are of substellar mass (do not support fusion) and some are not, so collectively this class of objects should be referred to as "L dwarfs", not "L stars." They are a very dark red in colour and brightest in Infrared. Their gas is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. Class T stars, (methane dwarfs), are very young and low density stars often found in the interstellar clouds they were born in. These are stars barely big enough to be stars and others that are substellar, being of the brown dwarf variety. They are black, emitting little or no visible light but being strongest in Infrared. Their surface temperature is a stark contrast to the fifty thousand kelvins or more for Class O stars, being merely up to 1,000 K. Complex molecules can form, evidenced by the strong methane lines in their spectra. Class T and L could be more common than all the other classes combined, if recent research is accurate. From studying the number of proplyds (protoplanetary discs, clumps of gas in nebulae from which stars and solar systems are formed) then the number of stars in the Galaxy should be several orders of magnitude higher than what we know about. It's theorised that these proplyds are in a race with each other. The first one to form will become a proto-star, which are very violent objects and will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main sequence stars or brown dwarf stars of the L and T classes, but quite invisible to us. Since they live so long, these smaller stars will accumulate over time. Class Y stars, (ultra-cool dwarfs), are much cooler than T-dwarfs. None have been found as of yet. Originally classified as R and N stars, these are also known as 'carbon stars'. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid G to late M. These have more recently been remapped into a unified carbon classifier C, with N0 starting at roughly C6. 
Class S stars have ZrO lines in addition to (or, rarely, instead of) those of TiO, and are in between the Class M stars and the carbon stars. S stars have excess amounts of zirconium and other elements produced by the S-process, and have their carbon and oxygen abundances closer to equal than is the case for M stars. The latter condition results in both C and O being locked up almost entirely in CO molecules. For stars cool enough for CO to form that molecule tends to "eat up" all of whichever element is less abundant, resulting in "leftover oxygen" (which becomes available to form TiO) in stars of normal composition, "leftover carbon" (which becomes available to form the diatomic carbon molecules) in carbon stars, and "leftover nothing" in the S stars. In reality the relation between these stars and the traditional main sequence suggest a rather large continuum of carbon abundance and if fully explored would add another dimension to the stellar classification system. Class P & Q Finally, the classes P and Q are occasionally used for certain non-stellar objects. Type P objects are Planetary nebulae and type Q objects are Novae. White dwarf classifications The class D is sometimes used for white dwarfs, the state most stars end their life in. Class D is further divided into classes DA, DB, DC, DO, DZ, and DQ. The letters are not related to the letters used in the classification of true stars, but instead indicate the composition of the white dwarf's outer layer or "atmosphere". The white dwarf classes are as follows: All class D stars use the same sequence from 1 to 9, with 1 indicating a temperature above 37,500 K and 9 indicating a temperature below 5,500 K. (The number is by definition equal to 50,400/T, where T is the effective temperature of the star.) Extended White Dwarf Class Stellar classification: Spectral peculiarities. Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum. 
|:||blended and/or uncertain spectral value|
|…||undescribed spectral peculiarities exist|
|e||emission lines present|
|[e]||"forbidden" emission lines present|
|er||"reversed" center of emission lines weaker than edges|
|ep||emission lines with peculiarity|
|eq||emission lines with P Cygni profile|
|ev||spectral emission that exhibits variability|
|f||NIII and HeII emission|
|(f)||weak emission lines of He|
|((f))||no emission of He|
|He wk||weak He lines|
|k||spectra with interstellar absorption features|
|m||enhanced metal features|
|n||broad ("nebulous") absorption due to spinning|
|nn||very broad absorption features due to spinning very fast|
|neb||a nebula's spectrum mixed in|
|p||peculiar spectrum, strong spectral lines due to metal|
|pq||peculiar spectrum, similar to the spectra of novae|
|q||red- and blue-shifted lines present|
|s||narrowly "sharp" absorption lines|
|ss||very narrow lines|
|v||variable spectral feature (also "var")|
|w||weak lines (also "wl" & "wk")|
|d Del||type A and F giants with weak calcium H and K lines, as in prototype delta Delphini|
|d Sct||type A and F stars with spectra similar to that of the short-period variable delta Scuti|
|Code||If the star has enhanced metal features|
|Si||abnormally strong Silicon lines|
|Ba||abnormally strong Barium|
|Cr||abnormally strong Chromium|
|Eu||abnormally strong Europium|
|He||abnormally strong Helium|
|Hg||abnormally strong Mercury|
|k||abnormally strong Calcium line|
|Mn||abnormally strong Manganese|
|Sr||abnormally strong Strontium|
For example, Epsilon Ursae Majoris is listed as spectral type A0pCr, indicating general classification A0 with an unspecified peculiarity and strong spectral lines of the element chromium.
Yerkes spectral classification.
The Yerkes spectral classification, also called the MKK system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Phillip C. Keenan and Edith Kellman of Yerkes Observatory. This classification is based on spectral lines sensitive to stellar surface gravity, which is related to luminosity, as opposed to the Harvard classification, which is based on surface temperature. Since the radius of a giant star is much larger than that of a dwarf star while their masses are roughly comparable, the gravity and thus the gas density and pressure on the surface of a giant star are much lower than for a dwarf. These differences manifest themselves in the form of luminosity effects which affect both the width and the intensity of spectral lines, which can then be measured. Denser stars with higher surface gravity will exhibit greater pressure broadening of spectral lines. A number of different luminosity classes are distinguished: 0 or Ia+ (hypergiants), Ia (luminous supergiants), Ib (less luminous supergiants), II (bright giants), III (normal giants), IV (subgiants), V (main-sequence dwarf stars), VI (subdwarfs) and VII (white dwarfs). Marginal cases are allowed; for instance a star classified as Ia-0 would be a very luminous supergiant, verging on hypergiant. Examples are below. The spectral type of the star is not a factor.
|-||G I-II||The star is between supergiant and bright giant.|
|+||O Ia+||The star is on the verge of being a hypergiant star.|
|/||M IV/V||The star is either a subgiant or a dwarf star.|
Stars can also be classified using photometric data from any photometric system. For example, we can calibrate colour index diagrams U−B, B−V in the UBV system according to spectral and luminosity classes.
Nevertheless, this calibration is not straightforward, because many effects are superimposed in such diagrams: metallicity, interstellar reddening, and binary and multiple stars. The more colours and the narrower the passbands we use in a photometric system, the more precisely we can derive a star's class (and, hence, its physical parameters). The best are, of course, spectral measurements, but we do not always have enough time to get high-quality spectra with a high signal-to-noise ratio.
http://www.universe-galaxies-stars.com/Stellar_classification_print.html
13
16
What are interference fringes? What shape are they? Why do they occur in the wild? If these are questions that you have pondered/are pondering, then you have come to the right page! To understand more about interference and the fringes that result, one must understand wave motion. In our case, the waves are light, but the following arguments work for any kind of wave, whether it be a wave on a string, a sound wave, or a wave in a pool. Each wave originates from a source, whether they be the same source or two different ones. Two independent waves from different sources, when they interfere with each other, will interfere only in intensity, NOT amplitude. Due to the differences in phase of the independent sources, when the waves reach a point of intersection it will not be possible to combine them into stationary waves. This type of wave is described as incoherent or, as you may have heard the term, incoherent light. Now, if we have light emitted from a single source, the phase of the light will be the same at all times because as the source changes phase the light emitted instantaneously changes phase with the source. Therefore, the waves from the same source have the ability to superpose because their amplitudes combine, NOT their intensities. This kind of wave or light is called coherent. Thus, it is only possible to produce stationary waves/effects by using two or more waves (in our case light beams) that originate from the same point on the same source. This superposition of waves results in what we know as fringes. As simple as that sounds, the mathematics behind fringes takes some thought. To understand the fringes on the Michelson Interferometer, we know we have the following situation: In the above picture, M1 and M2' are the plane images produced by the separate paths of the Michelson Interferometer, inclined at an angle alpha. The eye of the observer is positioned at O, and R is the foot of the perpendicular from O onto the plane of M1. The ray OR lies along the z-axis, thus P is a point (x, y). We will assume the light the observer sees at O originates at P, so the angle between OR and OP is theta. The path difference of the rays from the planes M1 and M2' is 2d cos(theta), where d is the separation of the planes at P. From geometry, we arrive at the following relationship: cos(theta) = z / sqrt(x^2 + y^2 + z^2). Thus, if the film thickness at R is e, the thickness at P is approximately e + alpha*x for small alpha. Knowing a little about constructive interference, specifically that it occurs at 2d cos(theta) = n*lambda (where n is the order of interference, d is the thickness at P, and lambda is the wavelength), we can substitute in the above relationships. The substitutions result in the following equation responsible for the shapes of the fringes: n*lambda = 2(e + alpha*x)*z / sqrt(x^2 + y^2 + z^2). Now that we have a relationship that governs the behavior of the fringes, we can look at the special cases! The first case involves the two planes M1 and M2' intersecting at the perpendicular R. If this were to occur, the thickness e at R would be zero. Our equation therefore reduces to: n*lambda = 2*alpha*x*z / sqrt(x^2 + y^2 + z^2). This relation further reduces as follows: for z much larger than x and y, n*lambda ≈ 2*alpha*x, so x = n*lambda / (2*alpha). The result is an equation that is linear, or a line, when the angle alpha is small. The fringes are equidistant from each other and appear localized on the plane M1. This condition only holds for z >> x + y. As it turns out, these fringes tend to be fairly weak and hard to photograph. Here is a picture of fairly linear fringes from a HeNe laser: The above fringes are almost linear and very difficult to distinguish.
This brings us to our second special case, when the two planes are parallel. When parallel, the angle alpha between the two planes is zero. Our fringe governing equation then reduces to: n*lambda = 2*e*z / sqrt(x^2 + y^2 + z^2). If we do a little manipulation and squaring of terms, we get a nice result: x^2 + y^2 = z^2 * [(2e / (n*lambda))^2 − 1]. If the observer is at a fixed z, the right hand side of the above equation is a constant, which yields the equation of a circle! Hence, our fringes are circular when the surfaces are parallel. Here are some pictures of circular fringes from a HeNe laser (left) and sulfur light (two on right): Clearly, the fringes above are circular, although much more clearly shown by the HeNe laser light. Another characteristic of amplitude-combining waves is beats. Beats result from the superposition of two or more waves with slightly different frequencies travelling in the same direction. Hence, the waves are in and out of phase periodically, which results in an alternation between constructive and destructive interference, or temporal interference. Now, it becomes clear why one cannot observe beat patterns using a HeNe laser in the Michelson Interferometer, because the light is only of one wavelength (632.8 nm)! However, when one uses sodium or the "ever so elusive" white light, one can observe these beat patterns. They are characterized by the fringes going from really distinct to blurry. To view an .AVI file that shows one of these coveted sodium beats, click here (....if you have a lot of time, that is....the file is approximately 1 meg and takes a while to download if you have a modem! Otherwise, if you have a T1 connection you might as well splurge!).
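Assuming the parallel-plate relation above (n*lambda = 2*e*z / sqrt(x^2 + y^2 + z^2)), here is a short Python sketch that lists the radii of the first few circular fringes for a hypothetical plate separation, wavelength, and viewing distance; the numbers are illustrative only, not measurements from this setup.

```python
import math

wavelength = 632.8e-9   # HeNe laser wavelength, m
e = 0.05e-3             # separation of the plane images, m (hypothetical)
z = 0.5                 # distance from the fringe plane to the observer, m (hypothetical)

n_max = int(2 * e / wavelength)   # highest order, at the centre of the pattern

# For order n, x^2 + y^2 = z^2 * ((2e / (n*lambda))^2 - 1): a circle of radius r_n.
for n in range(n_max, n_max - 5, -1):
    ratio = 2 * e / (n * wavelength)
    r = z * math.sqrt(ratio**2 - 1)
    print(n, r)   # fringe order and radius in metres; radii grow for lower orders
```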
http://www.phy.davidson.edu/stuhome/cabell_f/interferometer/pages/interference_pictures.htm
13
28
Introduction to coaxial cables

A coaxial cable is one that consists of two conductors that share a common axis. The inner conductor is typically a straight wire, either solid or stranded, and the outer conductor is typically a shield that might be braided or a foil. Coaxial cable is a cable type used to carry radio signals, video signals, measurement signals and data signals. Coaxial cables exist because we can't run open-wire line near metallic objects (such as ducting) or bury it. We trade signal loss for convenience and flexibility.

Coaxial cable consists of an insulated center conductor which is covered with a shield. The signal is carried between the cable shield and the center conductor. This arrangement gives quite good shielding against noise from outside the cable, keeps the signal well inside the cable and keeps the cable characteristics stable. Coaxial cables and the systems connected to them are not ideal, so there is always some signal radiating from a coaxial cable. Hence, the outer conductor also functions as a shield to reduce coupling of the signal into adjacent wiring. More shield coverage means less radiation of energy (but it does not necessarily mean less signal attenuation).

Coaxial cables are typically characterized by their impedance and cable loss. The length has nothing to do with a coaxial cable's impedance. Characteristic impedance is determined by the size and spacing of the conductors and the type of dielectric used between them. For ordinary coaxial cable used at reasonable frequencies, the characteristic impedance depends on the dimensions of the inner and outer conductors. The characteristic impedance of a cable (Zo) is determined by the formula 138 log b/a, where b represents the inside diameter of the outer conductor (read: shield or braid), and a represents the outside diameter of the inner conductor.

The most common coaxial cable impedances in use in various applications are 50 ohms and 75 ohms. 50 ohms cable is used in radio transmitter antenna connections, many measurement devices and in data communications (Ethernet). 75 ohms coaxial cable is used to carry video signals, TV antenna signals and digital audio signals. There are also other impedances in use in some special applications (for example 93 ohms). It is possible to build cables at other impedances, but those mentioned earlier are the standard ones that are easy to get. There is usually no point in trying to get something slightly different for some marginal benefit, because standard cables are easy to get, cheap and generally very good.

Different impedances have different characteristics. For maximum power handling, somewhere between 30 and 44 ohms is the optimum. Impedance somewhere around 77 ohms gives the lowest loss in a dielectric-filled line. 93 ohms cable gives low capacitance per foot. It is practically very hard to find any coaxial cables with impedance much higher than that.

Here is a quick overview of common coaxial cable impedances and their main uses:

- 50 ohms: 50 ohms coaxial cable is very widely used in radio transmitter applications. It is used here because it matches nicely to many common transmitter antenna types, can quite easily handle high transmitter power and is traditionally used in this type of application (transmitters are generally matched to 50 ohms impedance).
In addition to this, 50 ohm coaxial cable can be found in coaxial Ethernet networks, electronics laboratory interconnections (for example high frequency oscilloscope probe cables) and high frequency digital applications (for example ECL and PECL logic matches nicely to 50 ohms cable). Commonly used 50 ohm constructions include RG-8 and RG-58.

- 60 ohms: Europe chose 60 ohms for radio applications around the 1950s. It was used in both transmitting applications and antenna networks. The use of this cable has been pretty much phased out, and nowadays RF systems in Europe use either 50 ohms or 75 ohms cable depending on the application.

- 75 ohms: The characteristic impedance 75 ohms is an international standard, based on optimizing the design of long distance coaxial cables. 75 ohms video cable is the coaxial cable type widely used in video, audio and telecommunications applications. Generally all baseband video applications that use coaxial cable (both analogue and digital) are matched for 75 ohm impedance cable. Also RF video signal systems like antenna signal distribution networks in houses and cable TV systems are built from 75 ohms coaxial cable (those applications use very low loss cable types). In the audio world, digital audio (S/PDIF and coaxial AES/EBU) uses 75 ohms coaxial cable, as do radio receiver connections at home and in the car. In addition to this, some telecom applications (for example some E1 links) use 75 ohms coaxial cable. 75 ohms is the telecommunications standard because, in a dielectric-filled line, somewhere around 77 ohms gives the lowest loss. Common 75 ohm cables are RG-6, RG-11 and RG-59.

- 93 ohms: This is not much used nowadays. 93 ohms was once used for short runs such as the connection between computers and their monitors, because its low capacitance per foot would reduce the loading on circuits and allow longer cable runs. In addition, this was used in some digital communication systems (IBM 3270 terminal networks) and some early LAN systems.

The characteristic impedance of a coaxial cable is determined by the relation of the outer conductor diameter to the inner conductor diameter and by the dielectric constant of the insulation. The impedance of a coaxial cable changes somewhat with frequency: it varies until resistance is a minor effect and the dielectric constant is stable. Where it levels out is the "characteristic impedance". The frequency where the impedance settles to the characteristic impedance varies somewhat between different cables, but this generally happens at around 100 kHz (can vary).

Essential properties of coaxial cables are their characteristic impedance and its regularity, their attenuation, as well as their behaviour concerning the electrical separation of cable and environment, i.e. their screening efficiency. In applications where the cable is used to supply voltage for active components in the cabling system, the DC resistance has significance. Also, the cable velocity information is needed in some applications. The coaxial cable velocity of propagation is determined by the dielectric and is expressed as a percentage of the speed of light. Here is some data on common coaxial cable insulation materials and their velocities:

Polyethylene (PE) 66%
Teflon 70%
Foam 78..86%
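As a quick sanity check on the 138 log b/a formula quoted earlier, here is a minimal Python sketch. The bare formula assumes an air dielectric; dividing by the square root of the dielectric constant is the standard extension for solid insulation, and the example dimensions are illustrative rather than taken from any particular datasheet.

```python
# Minimal sketch of the impedance formula quoted above, Zo = 138*log10(b/a).
# That form assumes an air dielectric; for a solid dielectric the standard
# extension divides by sqrt(er). Example dimensions are illustrative only.
import math

def coax_impedance(b_mm, a_mm, er=1.0):
    """b = inside diameter of the shield, a = outside diameter of the inner
    conductor, er = relative dielectric constant of the insulation."""
    return (138.0 / math.sqrt(er)) * math.log10(b_mm / a_mm)

print(coax_impedance(3.7, 0.6))          # air dielectric: ~109 ohms
print(coax_impedance(3.7, 0.6, er=2.3))  # solid polyethylene: ~72 ohms
```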
Return loss is one number which shows cable performance, meaning how well the cable matches its nominal impedance. Poor cable return loss can indicate cable manufacturing defects and installation defects (cable damaged during installation). With a good quality coaxial cable in good condition you generally get better than -30 dB return loss, and you should generally not get much worse than -20 dB. Return loss is the same thing as the VSWR term used in the radio world, only expressed differently (15 dB return loss = 1.43:1 VSWR, 23 dB return loss = 1.15:1 VSWR, etc.).

Often used coaxial cable types

General data on some commonly used coaxial cables compared (most data from http://dct.draka.com.sg/coaxial_cables.htm, http://www.drakausa.com/pdfsDSC/pCOAX.pdf and http://users.viawest.net/~aloomis/coaxdat.htm):

Cable type RG-6 RG-59 B/U RG-11 RG-11 A/U RG-12 A/U RG-58 C/U RG-213U RG-62 A/U
Impedance (ohms) 75 75 75 75 75 50 50 93
Conductor material Bare Copper Bare Tinned Tinned Bare Copper Copper Plated Copper Copper Copper Copper Copper Plated Steel Steel
Conductor strands 1 1 1 7 7 19 7 1
Conductor area (mm2) 0.95 0.58 1.63 0.40 0.40 0.18 0.75 0.64
Conductor diameter 0.028" 0.023" 0.048" 0.035" 0.089" 0.025" 21AWG 23AWG 18AWG 20AWG 13AWG 22AWG
Insulation material Foam PE PE Foam PE PE PE PE PE PE (semi-solid)
Insulation diameter 4.6 mm 3.7 mm 7.24 mm 7.25 mm 9.25 mm 2.95 7.25 3.7 mm
Outer conductor Aluminium Bare Aluminium Bare Bare Tinned Bare Bare polyester copper polyester copper copper copper copper copper tape and wire tape and wire wire wire wire wire tin copper braid tin copper braid braid braid braid braid
Coverage Foil 100% 95 % Foil 100% 95% 95% 95% 97% 95% braid 61% Braid 61%
Outer sheath PVC PVC PVC PVC PE PVC PVC PVC
Outside diameter 6.90 mm 6.15 mm 10.3 mm 10.3 mm 14.1 mm 4.95 mm 10.3 6.15 mm
Capacitance per meter 67 pF 67 pF 57 pF 67 pF 67 pF 100 pF 100 pF
Capacitance per feet 18.6 20.5 16.9 20.6 20.6 pF 28.3 pF 30.8 13.5 pF
Velocity 78% 66% 78% 66% 66% 66% 66% 83%
Weight (g/m) 59 56 108 140 220 38
Attenuation db/100m
50 MHz 5.3 8 3.3 4.6 4.6 6.3
100 MHz 8.5 12 4.9 7 7 16 7 10
200 MHz 10 18 7.2 10 10 23 9 13
400 MHz 12.5 24 10.5 14 14 33 14 17
500 MHz 16.2 27.5 12.1 16 16 20
900 MHz 21 39.5 17.1 24 24 28.5

NOTE: The comparison table above is for information only. There is no guarantee of the correctness of the data presented. When selecting cable for a certain application, check the cable data supplied by the cable manufacturer. There can be some differences in the performance and specifications of different cables from different manufacturers. For example, the insulation rating of cables varies: many PE insulated coax cables can handle several kilovolts, while some foam insulated coax cables can handle only 200 volts or so.

NOTE: Several of the cables mentioned above are available with foam insulation material. This changes the capacitance to a somewhat lower value and gives a higher velocity (typically around 0.80).
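The return loss / VSWR equivalence mentioned just before the table is a one-line conversion; here is a minimal Python sketch, from which the 15 dB ≈ 1.43:1 and 23 dB ≈ 1.15:1 figures quoted above drop straight out.

```python
# Minimal sketch of the return-loss / VSWR equivalence mentioned above.
# Return loss in dB maps to a reflection coefficient, which maps to VSWR.

def return_loss_to_vswr(rl_db):
    gamma = 10 ** (-rl_db / 20.0)      # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

for rl in (15, 20, 23, 30):
    print(f"{rl} dB return loss -> VSWR {return_loss_to_vswr(rl):.2f}:1")
```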
Cable type RG-6 RG-59 B/U RG-11 RG-11 A/U RG-12 A/U TELLU 13 Tasker RGB-75
Impedance (ohms) 75 75 75 75 75 75 75
Impedance accuracy +-2 ohms +-3 ohms +-2 ohms +-3%
Conductor material Bare Copper Bare Tinned Tinned Bare Bare Copper Plated Copper Copper Copper Copper Copper Steel
Conductor strands 1 1 1 7 7 1 10
Conductor strand(mm2) 0.95 0.58 1.63 0.40 0.40 1mm diameter 0.10mm diameter
Resistance (ohm/km) 44 159 21 21 22 210
Insulation material Foam PE PE Foam PE PE PE Foam PE
Insulation diameter 4.6 mm 3.7 mm 7.24 mm 7.25 mm 9.25 mm
Outer conductor Aluminium Bare Aluminium Bare Bare Copper Tinned polyester copper polyester copper copper foil under copper tape and wire tape and wire wire bare copper tin copper braid tin copper braid braid braid braid braid
Coverage Foil 100% 95 % Foil 100% 95% 95% Foil ~95% braid 61% Braid 61% Braid 66%
Resistance (ohm/km) 6.5 8.5 4 4 12 ~40
Outer sheath PVC PVC PVC PVC PE PVC (white) PVC
Outside diameter 6.90 mm 6.15 mm 10.3 mm 10.3 mm 14.1 mm 7.0 mm 2.8 mm
Capacitance per meter 67 pF 67 pF 57 pF 67 pF 67 pF 55 pF ~85 pF
Capacitance per feet 18.6 20.5 16.9 20.6 20.6 pF
Velocity 78% 66% 78% 66% 66% 80% 66%
Screening factor 80 dB
Typical voltage (max) 2000V 5000V 1500V
Weight (g/m) 59 56 108 140 220 58
Attenuation db/100m
5 MHz 2.5 1.5
50 MHz 5.3 8 3.3 4.6 4.6 4.7 19.5
100 MHz 8.5 12 4.9 7 7 6.2 28.5
200 MHz 10 18 7.2 10 10 8.6 35.6
400 MHz 12.5 24 10.5 14 14 12.6 60.0
500 MHz 16.2 27.5 12.1 16 16 ~14 ~70
900 MHz 21 39.5 17.1 24 24 19.2 90.0
2150 MHz 31.6
3000 MHz 37.4

NOTE: The numbers with a ~ mark in front of them are approximations calculated and/or measured from cables or cable data. Those numbers are not from manufacturer literature.

NOTE2: Several of the cables mentioned above are available in special versions with foam insulation material. This changes the capacitance to a somewhat lower value and gives a higher velocity (typically around 0.80).

General coaxial cable details

The dielectric of a coaxial cable serves but one purpose - to maintain physical support and a constant spacing between the inner conductor and the outer shield. In terms of efficiency, there is no better dielectric material than air. In most practical cables, cable companies use a variety of hydrocarbon-based materials such as polystyrene, polypropylenes, polyolefins and other synthetics to maintain structural integrity.

Sometimes coaxial cables are also used for carrying low frequency signals, like audio signals or measurement device signals. In audio applications especially, the coaxial cable impedance does not matter much (it is a high frequency property of the cable). Generally coaxial cable has a certain amount of capacitance (50 pF/foot is typical) and a certain amount of inductance, but it has very little resistance.

General characteristics of cables:

- A typical 50 ohm coaxial cable is pretty much 30 pF per foot (doesn't apply to miniature cables or big transmitter cables; check a cable catalogue for more details). 50 ohms coaxial cables are used in most radio applications, in coaxial Ethernet and in many instrumentation applications.

- A typical 75 ohm coaxial cable is about 20 pF per foot (doesn't apply to miniature cables or big transmitter cables; check a cable catalogue for more details). 75 ohms cable is used for all video applications (baseband video, monitor cables, antenna networks, cable TV, CCTV etc.), for digital audio (S/PDIF, coaxial AES/EBU) and for telecommunication applications (for example for E1 coaxial cabling).
- A typical 93 ohm cable is around 13 pF per foot (does not apply to special cables). This cable type is used for some special applications.

Please note that these are general statements. A specific 75 ohm cable could be 20 pF/ft; another 75 ohm cable could be 16 pF/ft. There is no exact correlation between characteristic impedance and capacitance. In general, a constant-impedance cable (including connectors), when terminated at both ends with the correct load, presents a purely resistive load. Thus, cable capacitance is immaterial for video and digital applications.

Typical coaxial cable constructions are:

- Flexible (Braided) Coaxial Cable is by far the most common type of closed transmission line because of its flexibility. It is a coaxial cable, meaning that both the signal and the ground conductors are on the same center axis. The outer conductor is made from fine braided wire, hence the name "braided coaxial cable". This type of cable is used in practically all applications requiring complete shielding of the center conductor. The effectiveness of the shielding depends upon the weave of the braid and the number of braid layers. One of the drawbacks of braided cable is that the shielding is not 100% effective, especially at higher frequencies. This is because the braided construction can permit small amounts of short wavelength (high frequency) energy to radiate. Normally this does not present a problem; however, if a higher degree of shielding is required, semirigid coaxial cable is recommended. In some high frequency flexible coaxial cables the outer shield consists of a normal braid and an extra aluminium foil shield to give better high frequency shielding.

- Semirigid Coaxial Cable uses a solid tubular outer conductor, so that all the RF energy is contained within the cable. For applications using frequencies higher than 30 GHz a miniature semirigid cable is recommended.

- Ribbon Coaxial Cable combines the advantages of both ribbon cable and coaxial cable. It consists of many tiny coaxial cables placed physically side by side to form a flat cable. Each individual coaxial cable consists of the signal conductor, dielectric, a foil shield and a drain wire which is in continuous contact with the foil. The entire assembly is then covered with an outer insulating jacket. The major advantage of this cable is the speed and ease with which it can be mass terminated with the insulation displacement technique.

Often you will hear the term shielded cable. This is very similar to coaxial cable except that the spacing between center conductor and shield is not carefully controlled during manufacture, resulting in non-constant impedance.

If the cable impedance is critical enough to worry about correctly choosing between 50 and 75 ohms, then the capacitance will not matter. The reason this is so is that the cable will be either load terminated or source terminated, or both, and the distributed capacitance of the cable combines with its distributed inductance to form its impedance. A cable with a matched termination resistance at the other end appears in all respects resistive, no matter whether it is an inch long or a mile. The capacitance is not relevant except insofar as it affects the impedance, already accounted for. In fact, there is no electrical measurement you could make, at just the end of the cable, that could distinguish a 75 ohm (ideal) cable with a 75 ohm load on the far end from that same load without intervening cable.
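The pF-per-foot figures quoted above follow from the impedance together with the velocity factor of an ideal line, which is why impedance alone does not pin down the capacitance. Here is a minimal Python sketch using the generic cable values discussed in the text rather than datasheet numbers.

```python
# Minimal sketch: for an ideal line, C per unit length = 1 / (Zo * v),
# with v = velocity_factor * c. This reproduces the rough pF/ft figures
# discussed above; the inputs are generic values, not datasheet data.
C_LIGHT = 299_792_458.0        # speed of light, m/s
M_PER_FOOT = 0.3048

def capacitance_pf_per_foot(z0_ohms, velocity_factor):
    c_per_meter = 1.0 / (z0_ohms * velocity_factor * C_LIGHT)   # farads/m
    return c_per_meter * M_PER_FOOT * 1e12                      # pF/ft

print(capacitance_pf_per_foot(75, 0.66))   # ~20 pF/ft (solid PE 75 ohm cable)
print(capacitance_pf_per_foot(50, 0.66))   # ~30 pF/ft (solid PE 50 ohm cable)
print(capacitance_pf_per_foot(93, 0.84))   # ~13 pF/ft (93 ohm semi-air-spaced type)
```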
Given that the line is terminated with a proper 75 ohm load (and if it's not, it damn well should be!), the load is 75 ohms resistive, and the lumped capacitance of the cable is irrelevant. The same applies to cables of other impedances when they are terminated to their nominal impedance.

There exists an effect whereby the characteristic impedance of a cable changes with frequency. If this frequency-dependent change in impedance is large enough, the cable will be impedance-matched to the load and source at some frequencies, and mismatched at others. Characteristic impedance is not the only detail in a cable, however: there is another effect that can cause loss of detail in fast-risetime signals, namely frequency-dependent losses in the cable. There is also a property of controlled impedance cables known as dispersion, where different frequencies travel at slightly different velocities and with slightly different loss.

In some communications applications a pair of 50 ohm coaxial cables is used to transmit a differential signal on two non-interacting pieces of 50-ohm coax. The total voltage between the two coaxial conductors is double the single-ended voltage, but the net current in each is the same, so the differential impedance between two coax cables used in a differential configuration would be 100 ohms. As long as the signal paths don't interact, the differential impedance is always precisely twice the single-ended impedance of either path.

RF coax(ial) connectors are a vital link in any system which uses coaxial cables and high frequency signals. Coax connectors are often used to interface two units, such as the antenna to a transmission line, a receiver or a transmitter. The proper choice of a coax connector will facilitate this interface. Coax connectors come in many impedances, sizes, shapes and finishes. There are also female and male versions of each. As a consequence, there are thousands of models and variations, each with its advantages and disadvantages. Coax connectors are usually referred to by series designations. Fortunately there are only about a dozen or so groupings or series designations, and each has its own important characteristics. The most popular RF coax connector series, not in any particular order, are UHF, N, BNC, TNC, SMA, 7-16 DIN and F. Here is a quick introduction to those connector types:

- "UHF" connector: The "UHF" connector is the old industry standby for frequencies above 50 MHz (during World War II, 100 MHz was considered UHF). The UHF connector is primarily an inexpensive all-purpose screw-on type that is not truly 50 ohms. Therefore, it is primarily used below 300 MHz. Power handling of this connector is 500 watts through 300 MHz. The frequency range is 0-300 MHz.

- "N" connectors: "N" connectors were developed at Bell Labs soon after World War II, so it is one of the oldest high performance coax connectors. It has good VSWR and low loss through 11 GHz. Power handling of this connector is 300 watts through 1 GHz. The frequency range is 0-11 GHz.

- "BNC" connector: "BNC" connectors have a bayonet-lock interface which is suitable for uses where numerous quick connect/disconnect insertions are required. BNC connectors are for example used in various laboratory instruments and radio equipment. The BNC connector has a much lower cutoff frequency and higher loss than the N connector. BNC connectors are commonly available in 50 ohm and 75 ohm versions. Power handling of this connector is 80 watts at 1 GHz. The frequency range is 0-4 GHz.
- "TNC" connectors are an improved version of the BNC with a threaded interface. Power handling of this connector is 100 watts at 1 GHz. The frequency range is 0-11 GHz.

- "SMA" connector: "SMA" or miniature connectors became available in the mid 1960s. They are primarily designed for semi-rigid small diameter (0.141" OD and less) metal jacketed cable. Power handling of this connector is 100 watts at 1 GHz. The frequency range is 0-18 GHz.

- "7-16 DIN" connector: "7-16 DIN" connectors were recently developed in Europe. The part number represents the size in metric millimeters and the DIN specification. This quite expensive connector series was primarily designed for high power applications where many devices are co-located (like cellular poles). Power handling of this connector is 2500 watts at 1 GHz. The frequency range is 0-7.5 GHz.

- "F" connector: "F" connectors were primarily designed for very low cost, high volume 75 ohm applications such as TV and CATV. In this connector the center wire of the coax becomes the center pin of the connector.

- "IEC antenna connector": This is a very low-cost, high volume 75 ohm connector used for TV and radio antenna connections around Europe.

Tomi Engdahl <[email protected]>
http://www.epanorama.net/documents/wiring/coaxcable.html
13
14
Active Galactic Nuclei

When astronomers first recognized in the 1920s that galaxies are huge stellar islands comprising up to one hundred billion stars, they also realized that some galaxies are much brighter than others at certain wavelengths. These were called "active galaxies." As observations expanded into the radio, X-ray, and gamma-ray wavelength bands, astronomers noticed that certain active galaxies emit huge amounts of radiation outside the visible spectrum. Over time, these objects were classified according to their observed properties. Some have strong radio emission extending over broad "lobes" (radio galaxies); others are very luminous and are physically compact (quasars and blazars); still others have intensely bright nuclei (Seyfert galaxies).

Nowadays we think that all kinds of active galaxies -- whether radio galaxies, blazars, or Seyferts -- function according to the same physical principles. In the center of each active galactic nucleus (AGN) sits a supermassive black hole. The black hole siphons matter from nearby stars, spinning it into a rapidly rotating accretion disk. The material in the accretion disk is heated to extremely high energies, and the high-energy matter and radiation are eventually ejected along the rotational axis of the disk in jets that extend far from the nucleus.

[Figure: The unified model of active galactic nuclei (AGN). Surrounding the supermassive black hole is a large accretion disk of orbiting dust and gas. There is often a jet (or two back-to-back jets) of accelerated particles aligned with the axis of rotation of the black hole. The type of AGN seen from Earth depends on the viewing angle. Image credit: NASA.]

Hence, the radiation observed from an AGN is driven by the "black hole engine" at its center. Depending on our viewing angle with respect to the galaxy, we may observe a different class of object. Viewing the jet edge-on, we see extended radio and Seyfert galaxies. If we observe the jet face-on, we observe a compact, intense source (quasars and blazars).

Cosmic ray acceleration in AGNs

Cosmic rays are charged particles of extraordinarily high energies. They are accelerated when they cross shock waves in the hot gas of interstellar space. Because cosmic rays are charged, their trajectories will be bent by magnetic fields. Therefore, a strong magnetic field can "confine" a cosmic ray to a small region of space. If a cosmic ray is confined to a region near a shock wave, it can cross the shock repeatedly and gain a tremendous amount of energy. At some point, the particle gains so much energy that it escapes the magnetic field; and eventually, we detect it at Earth as a cosmic ray.

There is considerable evidence that the accretion disks and jets in AGNs contain very large magnetic fields and very intense shocks. Hence, AGNs are strong candidates for cosmic ray acceleration. However, the exact mechanism of cosmic ray acceleration in AGNs is still not known. In addition, the environment around an AGN is thick with visible, X-ray, and gamma-ray radiation. Charged particles are expected to lose considerable energy as they move out of the radiation field and into space. Understanding how cosmic rays are accelerated in such an environment remains a considerable technical problem.

The identification of AGNs as sites of charged particle acceleration raises many fascinating and important questions. These are among the many we still have to answer before we can truly solve the mystery of ultrahigh energy cosmic rays.
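As a rough back-of-envelope illustration of the "magnetic confinement" idea described above, the gyroradius of an ultra-relativistic charged particle is r = E / (Z e B c); once this radius exceeds the size of the acceleration region, the particle escapes. The field strength and energy in the sketch below are illustrative assumptions, not measured AGN values.

```python
# Rough sketch of the confinement scale: gyroradius r = E / (Z * e * B * c)
# for an ultra-relativistic particle. Inputs are illustrative assumptions.
E_CHARGE = 1.602176634e-19     # coulombs
C_LIGHT = 299_792_458.0        # m/s
METERS_PER_KPC = 3.086e19

def gyroradius_kpc(energy_eV, B_tesla, charge_number=1):
    """Gyroradius of an ultra-relativistic particle of the given energy."""
    energy_joules = energy_eV * E_CHARGE
    r_m = energy_joules / (charge_number * E_CHARGE * B_tesla * C_LIGHT)
    return r_m / METERS_PER_KPC

# A proton at 10^18 eV in a microgauss-scale field (1e-10 T):
print(f"{gyroradius_kpc(1e18, 1e-10):.2f} kpc")   # ~1 kpc
```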
http://www.auger.org/news/PRagn/about_AGN.html
13
25
* In the 1960s there was an international competition to build a supersonic transport (SST), which resulted in the development of two supersonic airliners, the Anglo-French "Concorde" and the Soviet Tupolev "Tu-144". Although the SST was seen as the way of the future, that wasn't how things actually turned out. This document provides a short history of the rise and fall of the supersonic transport. * With the push towards supersonic combat aircraft during the 1950s, aircraft manufacturers began to think about developing a supersonic airliner, what would eventually become known as a "supersonic transport (SST)". In 1961, Douglas Aircraft publicized a design study for an SST that would be capable of flying at Mach 3 at 21,350 meters (71,000 feet) and could be flying by 1970. Douglas forecast a market for hundreds of such machines. At the time, such a forecast seemed realistic. During the 1950s, commercial air transport had made a radical shift from piston-powered airliners to the new jetliners like the Boeing 707. Going to an SST was simply the next logical step. In fact, as discussed in the next section, Europe was moving even faster down this road than the US. In 1962 the British and French signed an agreement to actually build an SST, the "Concorde". With the Europeans committed to the SST, of course the Americans had to follow, and the US Federal Aviation Administration (FAA) set up a competition for an SST that would be faster, bigger, and better than the Concorde. In 1964, SST proposals from North American, Lockheed, and Boeing were selected as finalists. Although North American had built the two XB-70 Valkyrie experimental Mach 3 bombers, which had a configuration and performance similar to that of an SST and were used as testbeds for SST concepts, the company was eliminated from the competition in 1966. Lockheed proposed the "L-2000", a double-delta machine with a capacity of 220 passengers, but the winner was Boeing's "Model 2707", the name obviously implying a Mach 2 aircraft that would be as significant as the classic Boeing 707. Boeing was awarded a contract for two prototypes on 1 May 1967. The 2707 was to be a large aircraft, about 90 meters (300 feet) long, with a maximum load of 350 passengers. It would be able to cruise at Mach 2.7 over a range of 6,440 kilometers (4,000 miles) with 313 passengers. At first, the 2707 was envisioned as fitted with variable geometry "swing wings" to permit efficient high-speed flight -- with the wings swept back -- and good low-speed handling -- with the wings extended. Powerplants were to be four General Electric GE-J5P afterburning turbojet engines, derived from the GE J93 engines used on the XB-70, with a maximum afterburning thrust of 267 kN (27,200 kgp / 60,000 lbf) each. The engines were to be fitted into separate nacelles under the wing. Further work on the design demonstrated that the swing-wing configuration was simply too heavy, and so Boeing engineers came up with a new design, the "2707-300", that had fixed wings. However, America in the late 1960s was all but overwhelmed by social upheaval that involved questioning the need to come up with something bigger and better, as well as much increased concerns over the environment. Critics massed against the SST, voicing worries about its sonic booms and the possible effects of its high-altitude cruise on the ozone layer. The US Congress finally zeroed funds for the program on 24 March 1971 after the expenditure of about a billion USD on the project. 
There were 121 orders on the books for the aircraft when it was canceled. SST advocates were dismayed, but later events would prove that -- even ignoring the arguments over environmental issues -- the SST was simply not a good business proposition, and proceeding with the project would have been a big mistake.

* As mentioned, the British and French were actually ahead of the US on SST plans. In 1955, officials of the British aviation industry and British government agencies had discussions on the notion of an SST, leading to the formation of the "Supersonic Transport Aircraft Committee (STAC)" in 1956. STAC conducted a series of design studies, leading to the Bristol company's "Bristol 198", which was a slim, delta-winged machine with eight turbojet engines designed to cross the Atlantic at Mach 2. This evolved into the somewhat less ambitious "Bristol 223", which had four engines and 110 seats. In the meantime, the French had been conducting roughly similar studies, with Sud-Aviation of France coming up with a design surprisingly similar to the Bristol 223, named the "Super Caravelle" after the innovative Caravelle twinjet airliner developed by Sud-Aviation in the 1950s.

Given the similarity in the designs and the high cost of developing an SST, British and French government and industry officials began talks in September 1961 to see if the two nations could join hands for the effort. After extensive discussions, on 29 November 1962, the British and French governments signed a collaborative agreement to develop an Anglo-French SST, which became the "Concorde". It was to be built by the British Aircraft Corporation (BAC), into which Bristol had been absorbed in the meantime, and Rolls-Royce in the UK; and Sud-Aviation and the SNECMA engine firm in France. The original plan was to build a 100-seat long-range aircraft for transoceanic operations and a 90-seat mid-range aircraft for continental flights. In fact, the mid-range aircraft would never be built.

The initial contract specified the construction of two flight prototypes, two static-test prototypes, and two preproduction aircraft. Development and production work was split between the two partners, with BAC and Sud-Aviation each responsible for different sections and systems of the aircraft. Design of the automatic flight control system was subcontracted by Aerospatiale to Marconi (now GEC-Marconi) in Britain and SFENA (now Sextant Avionique) in France. Final assembly of British Concordes was at Filton and of French Concordes was at Toulouse.

Airlines began to place options for purchase of Concordes in June 1963, with service deliveries originally expected to begin in 1968. That proved a bit over-optimistic. Prototype construction began in February 1965. The initial "001" prototype was rolled out at Toulouse on 11 December 1967, but it didn't perform its first flight for another 15 months, finally taking to the air on 2 March 1969, with a flight crew consisting of Andre Turcat, Jacques Guignard, Michel Retif, and Henri Perrier. The first flight of the "002" prototype took place from Filton on 9 April 1969. Flight trials showed the design to be workable, though it was such a "bleeding edge" machine that there were a lot of bugs to be worked out. First supersonic flight by 001 wasn't until 1 October 1969, and its first Mach 2 flight wasn't until 4 November 1970. The first preproduction machine, "101", performed its initial flight from Toulouse on 17 December 1971, followed by the second, "102", which performed its initial flight from Filton on 10 January 1973.
The first French production aircraft, "201", performed its initial flight from Toulouse on 6 December 1973, by which time Sud-Aviation had been absorbed into Aerospatiale. The first British production machine, "202", performed its initial flight from Filton on 13 February 1974, both machines well exceeding Mach 1 on their first flight. These two production machines were used for flight test and never entered commercial service. 14 more production machines were built, the last performing its initial flight on 20 April 1979, with seven Concordes going into service with British Airways and seven into service with Air France. The Concorde received French certification for passenger operations on 13 October 1975, followed by British certification on 5 December 1975. Both British Airways and Air France began commercial flights on 21 January 1976. The Concorde was finally in service. There has never been a full accounting of how much it cost the British and French governments to get it there, but one modern estimate is about 1.1 billion pounds in 1976 values, or about 11 billion pounds or $18.1 billion USD in 2003 values. Of the 20 Concordes built, six never carried any paying passengers. In fact, only nine of the production machines were sold at "list value". The other five were simply given to British Airways and Air France for literally pocket change, apparently just to get them out of the factories. * The initial routes were London to Bahrain, and Paris to Rio de Janiero via Dakar. Service to Washington DC began on 24 May 1976, followed by flights to New York City in December 1977. Other routes were added later, and there were also large numbers of charter flights, conducted mostly by British Airways. The manufacturers had obtained options for 78 Concordes, most prominently from the US carrier Pan-American, but by the time the aircraft was ready to enter service interest had evaporated. Sonic boom ensured that it could not be operated on overland routes, a consideration that had helped kill off the mid-range Concorde, and even on the trans-Atlantic route the thundering noise of the four Olympus engines led to restrictions on night flights to New York City, cutting the aircraft's utilization on the prime trans-Atlantic route in half. The worst problem, however, was that the 1970s were characterized by rising fuel prices that rendered the thirsty SST clearly uneconomical to operate. It required 3.5 times more fuel to carry a passenger in the Concorde than in a Boeing 747 with its modern, fuel-efficient high-bypass turbofans. The Americans had been sensible to kill off the Boeing 2707-300: even if the environmental threat of the machine had been greatly exaggerated, the 2707-300 would have never paid itself off. There was some muttering in Britain and France that Pan-Am's cancellation of its Concorde orders and the restrictions on night flights into New York City were part of a jealous American conspiracy to kill the Concorde, but Pan-Am brass had simply done the numbers and wisely decided the Concorde didn't make business sense. Pan Am had analyzed use of the Concorde on trans-Pacific flights, such as from San Francisco to Tokyo, and quickly realized that its relatively limited range meant refueling stops in Honolulu and Wake Island. A Boeing 747 could make the long-haul trip without any stops, and in fact would get to Tokyo faster than the Concorde under such circumstances. First-class customers would also have a much more comfortable ride on the 747. 
The Port Authority of New York & New Jersey was mainly worried about irate townspeople raising hell over noisy Concordes waking them up in the middle of the night. These "townspeople" were assertive New Yorkers, after all, and they had been pressuring the Port Authority with various complaints, justified or not, over aircraft operations from Idlewild / Kennedy International Airport since 1958. In fact there were few jetliners noisier than the Concorde, and in another unfortunate irony the new high-bypass turbofans used by airliners such as the 747 were not only much more fuel-efficient than older engines, they were much quieter, making the Concorde look all that much worse in comparison.

Some Europeans were not surprised by the Concorde's problems. In 1966, Henri Ziegler, then head of Breguet Aviation of France, commented with classic French directness: "Concorde is a typical example of a prestige program hastily launched without the benefit of detailed specifications studied in partnership with airlines." Ziegler would soon become the first boss of Airbus Industries, which would rise to effectively challenge mighty Boeing for the world's airliner market. Airbus was established on the basis of such consultations between aircraft manufacturers and airlines. The Concorde program would have important lessons for Airbus, though mostly along the lines of how not to do things. The full duplication of Concorde production lines in the UK and France was seen as a particular blunder that substantially increased program costs. Airbus took the more sensible strategy of having different elements built in different countries, then transporting them to Toulouse for final assembly and flight check.

* The Concorde was a long, dartlike machine with a low-mounted delta wing and four Olympus afterburning turbojets, with two mounted in a pod under each wing. It was mostly made of aircraft aluminum alloys plus some steel assemblies, but featured selective high-temperature elements fabricated from Inconel nickel alloy. It was designed for a cruise speed of Mach 2.2. Higher speeds would have required much more extensive use of titanium and other high-temperature materials.

The pilot and copilot sat side-by-side, with a flight engineer behind on the right, and provision for a fourth seat. The crew flew the aircraft with an automatic flight control system, guiding their flight with an inertial navigation system backed up by radio navigation systems. Avionics also included a suite of radios, as well as a flight data recorder. The nose was drooped hydraulically to improve the forward view during takeoff and landing. A retractable transparent visor covered the forward windscreen during supersonic cruise flight. There were short "strake" flight surfaces beneath the cockpit, just behind the drooping nose, apparently to help ensure airflow over the tailfin when the aircraft was flying at high angles of attack.

Each of the four Rolls-Royce / SNECMA Olympus 593 Mark 10 engines was rated at 169.3 kN (17,255 kgp / 38,050 lbf) thrust with 17% afterburning. The engine inlets had electrical de-icing, variable ramps on top of the inlet throat, and auxiliary inlet / outlet doors on the bottom. Each engine was fitted with a bucket-style variable exhaust / thrust reverser. The Olympus had been originally developed in a non-afterburning form for the Avro Vulcan bomber, and a Vulcan had been used in trials of the Concorde engines.
The Concorde used afterburner to get off the ground and up to operating speed and altitude, and then cruised at Mach 2 on dry (non-afterburning) thrust. It was one of the first, possibly the first, operational aircraft to actually cruise continuously at supersonic speeds. Interestingly, at subsonic speeds the aircraft was inefficient, requiring high engine power that drained the fuel tanks rapidly. Total fuel capacity was 119,786 liters (26,350 Imperial gallons / 31,645 US gallons), with four tanks in the fuselage and five in each wing. Fuel trim was maintained by an automatic system that shuttled fuel between trim tanks, one in the tail and a set in the forward section of the wings, to maintain the proper center of gravity in different flight phases.

The wing had an elegantly curved "ogival" form factor and a thickness-to-chord ratio of 3% at the wing root, and featured six hydraulically-operated elevon control surfaces on each wing, organized in pairs. The tailfin featured a two-section rudder, apparently to provide redundancy and improve safety. The Concorde had tricycle landing gear, with a twin-wheel steerable nosewheel retracting forward, and four-wheel bogies in a 2-by-2 arrangement for the main gear, retracting inward. The landing gear featured carbon disk brakes and an antiskid system. There was a retractable tail bumper wheel to protect the rear of the aircraft on takeoff and landing.

Maximum capacity was in principle 144 passengers with a high-density seating layout, but in practice seating was not more than 128, and usually more like 100. Of course all accommodations were pressurized and climate-controlled, and the soundproofing was excellent, resulting in a smooth and quiet ride. There were toilets at the front and middle of the fuselage, and galleys front and back. Customer service on the flights placed substantial demands on the stewards and stewardesses because, at cruise speed, the Concorde would reach the limit of its range in three hours.

AEROSPATIALE-BAC CONCORDE:
   _____________________   _________________   _______________________
   spec                    metric              english
   _____________________   _________________   _______________________
   wingspan                25.56 meters        83 feet 10 inches
   wing area               385.25 sq_meters    3,856 sq_feet
   length                  62.10 meters        203 feet 9 inches
   height                  11.40 meters        37 feet 5 inches
   empty weight            78,700 kilograms    173,500 pounds
   MTO weight              185,065 kilograms   408,000 pounds
   max cruise speed        2,180 KPH           1,345 MPH / 1,175 KT
   service ceiling         18,300 meters       60,000 feet
   range                   6,580 kilometers    4,090 MI / 3,550 NMI
   _____________________   _________________   _______________________

The two prototypes had been slightly shorter and had been fitted with less powerful Olympus engines. A "Concorde B" was considered, with airframe changes -- including leading edge flaps, wingtip extensions, modified control surfaces, and 4.8% more fuel capacity -- plus significantly improved Olympus engines that provided incrementally better fuel economy, allowing a nonstop trans-Pacific flight, and greater dry thrust, allowing takeoffs without noisy afterburner. However, the Concorde B still couldn't operate over land and it still couldn't compete with modern subsonic jetliners in terms of fuel economy. It never got off the drawing board.

* On 25 July 2000, an Air France Concorde was departing from the Charles de Gaulle airport outside Paris when one of its tires hit a piece of metal lying on the runway. The tire disintegrated and a piece of rubber spun off and hit the aircraft, setting up a shockwave that ruptured a fuel tank.
The airliner went down in flames and crashed near the town of Gonesse, killing all 109 people aboard and four people who had the bad luck to be in the impact area. All 12 surviving Concordes were immediately grounded pending an investigation. Safety modifications were made to all seven British Airways and all five surviving Air France Concordes. The bottom of the fuel tanks, except those in the wing outboard of the engines, was fitted with flexible Kevlar-rubber liners to provide them with a limited "self sealing" capability; minor safety modifications were made to some electrical systems; and new "no blowout" tires developed by Michelin were fitted. British Airways also implemented a previously planned update program to fit their seven aircraft with new passenger accommodations. The Concorde returned to flight status on 7 November 2001, but it was a hollow triumph. The economics of even operating the Concorde, let alone developing it, were marginal, and with the economic slump of the early 21st century both Air France and British Airways were losing money on Concorde flights. In the spring of 2003, Air France announced that they would cease Concorde operations as of 31 May 2003, while British Airways would cease flights by the end of October 2003. The announcement led to unprecedented levels of passenger bookings for the final flights. Air France's most worked aircraft, named the "Fox Alpha", had performed 5,845 flights and accumulated 17,723 flight hours. One Air France technical manager claimed that the British and French Concorde fleets had accumulated more supersonic time than all the military aircraft ever built. That may be an exaggeration -- how anyone could compile and validate such a statistic is a good question -- but it does illustrate the unique capabilities of the aircraft. Interestingly, spares were never a problem, despite the age and small numbers of Concordes, since large inventories of parts had been stockpiled for the machines. It was a sign of the Concorde's mystique that the aircraft were in great demand as museum pieces. Air France CEO Jean-Cyril Spinetta said: "We had more requests for donations than we have aircraft." One ended up on display at the Charles de Gaulle Airport near Paris, while another found a home at the US National Air & Space Museum's Steven F. Udvar-Hazy Center at Dulles International Airport in Washington DC. In something of an irony, one of the British Concordes was given to the Museum of Flight at Boeing Field in Seattle, Washington. The last operational flight of the Concorde was on 24 October 2003, with a British Airways machine flying from New York to London. British aviation enthusiasts flocked to Heathrow to see the arrival. As it taxied off the runway it passed under an honorary "water arch" created by the water cannons of two fire engines. During the type's lifetime, Air France had racked up 105,000 hours of commercial flight operations with the Concorde, while British Airways had run up a tally of 150,000 hours. On 25 November 2003, a Concorde that had landed at Kennedy on 10 November was hauled up the Hudson river on a barge past the Statue of Liberty for display at New York City's Intrepid Air Museum. New Yorkers turned out along the waterfront to greet the arrival. The very last flight of a Concorde was on 26 November 2003, when a British Airways Concorde took off from Heathrow, performed a ceremonial loop over the Bay of Biscay and then flew back to Filton, where it was to be put on display. 
The aircraft performed a "photo op" by flying over Isambard Kingdom Brunel's famous chain suspension bridge at Clifton, not far from Filton; as the crew taxied the airliner after landing, they hung Union Jacks out the windows and raised the nose up and down to please the crowd of 20,000 that was on hand. When the Olympus engines were shut down for the very last time, the crew got out and handed over the flight logs to HRH Prince Andrew in a formal ceremony.

* Of course, during the 1960s the Soviets and the West were in competition, and anything spectacular the West wanted to do, the Soviets wanted to do as well. That included an SST. The Soviet Tupolev design bureau developed the USSR's answer to the Concorde, the Tupolev "Tu-144", also known by the NATO codename "Charger". The Tu-144 prototype performed its first flight on 31 December 1968, with test pilot Eduard Elyan at the controls, beating the Concorde by three months. 17 Tu-144s were built, the last one coming off the production line in 1981. This sum includes one prototype; two "Tu-144C" preproduction aircraft; and 14 full production machines, including nine initial-production "Tu-144S" aircraft, and five final production "Tu-144Ds" with improved engines.

* The Tu-144 got off to a terrible start, the second Tu-144C preproduction machine breaking up in midair during a demonstration at the Paris Air Show on 9 June 1973 and the debris falling into the village of Goussainville. All six crew in the aircraft and eight French citizens on the ground were killed, 15 houses were destroyed, and 60 people were injured. Since the initial reaction of the crowd watching the accident was that hundreds of people were likely to have been killed, there was some small relief that the casualties were relatively light. The entire ghastly accident was captured on film.

The details of the incident remain murky. The Concorde had put on a flight display just before the takeoff of the Tu-144, and a French air force Dassault Mirage fighter was in the air, observing the two aircraft. The Concorde crew had been alerted that the fighter was in the area, but the Tu-144 crew had not. The speculation is that the pilot of the Tu-144, M.V. Kozlov, saw the Mirage shadowing him. Although the Mirage was keeping a safe distance, Kozlov might have been surprised and nosed the Tu-144 down sharply to avoid a collision. Whatever the reason for the nosedive, it flamed out all of the engines. Kozlov put the aircraft into a dive so he could get a relight and overstressed the airframe when he tried to pull out. This scenario remains speculation. Other scenarios suggest that Kozlov was trying too hard to outperform the Concorde and took the machine out of its envelope. After a year's investigation, the French and Soviet governments issued a brief statement saying that the cause of the accident could not be determined. Some suspect a cover-up, but it is impossible to make a credible judgement given the muddy trail, particularly since the people who could have told exactly what had happened weren't among the living any more.

* The Tu-144 resembled the Concorde, sometimes being called the "Concordski", and there were accusations that it was a copy.
Many Western observers pointed out that there were also similarities between the Concorde and American SST proposals, and there was no reason to believe the resemblances between the Concorde and the Tu-144 were much more than a matter of the normal influence of published design concepts on organizations -- as well as "convergent evolution", or the simple fact that two machines designed separately to do the same task may out of simple necessity look alike. The truth was muddier. Building an SST was an enormous design challenge for the Soviet Union. As a matter of national prestige, it had to be done, with the Soviet aircraft doing it first, and since the USSR was behind the West's learning curve the logical thing to do was steal. An organization was established to collect and analyze open-source material on SSTs from the West, and Soviet intelligence targeted the Concorde effort for penetration. In 1964, French counterintelligence got wise to this game and sent out an alert to relevant organizations to beware of snoops and to be careful about releases of information. They began to keep tabs on Sergei Pavlov, the head of the Paris office of Aeroflot, whose official job gave him legitimate reasons for obtaining information from the French aviation industry and put him in an excellent position to spy on the Concorde effort. Pavlov was not aware that French counterintelligence was on to him, and so the French fed him misinformation to send Soviet research efforts down dead ends. Eventually, on 1 February 1965, the French arrested him while he was going to a lunch date with a contact, and found that he had plans for the Concorde's landing gear in his briefcase. Pavlov was thrown out of the country. However, the Soviets had another agent, Sergei Fabiew, collecting intelligence on the Concorde effort, and French counterintelligence knew nothing about him. His cover was finally blown in 1977 by a Soviet defector, leading to Fabiew's arrest. Fabiew had been highly productive up to that time. In the documents they seized from him, they found a congratulations from Moscow for passing on a complete set of Concorde blueprints. * Although the Soviets did obtain considerable useful intelligence on the Concorde, they were traditionally willing to use their own ideas or stolen ideas on the basis of which seemed the best. They could make good use of fundamental research obtained from the Concorde program to avoid dead ends and get a leg up, and they could leverage designs of Concorde subsystems to cut the time needed to build subsystems for the Tu-144. In other words, the Tu-144 was still by no means a straight copy of the Concorde. The general configuration of the two aircraft was similar, both being dartlike delta-type aircraft with four afterburning engines paired in two nacelles; a drooping nose to permit better view on takeoff and landing; and a flight crew of three. Both were mostly built of conventional aircraft alloys. However, there were many differences in detail: The Tu-144 was powered by four Kuznetsov NK-144 afterburning turbofans with 196.2 kN (20,000 kgp / 44,100 lbf) afterburning thrust each. The engines had separate inlet ducts in each nacelle and variable ramps in the inlets. The Tu-144D, which performed its first flight in 1978, was fitted with Kolesov RD-36-51 engines that featured much improved fuel economy and apparently uprated thrust. Production machines seem to have had thrust reversers, but some sources claim early machines used drag parachutes instead. 
TUPOLEV TU-144:
   _____________________   _________________   _______________________
   spec                    metric              english
   _____________________   _________________   _______________________
   wingspan                28.80 meters        94 feet 6 inches
   wing area               438.00 sq_meters    4,715 sq_feet
   length                  65.70 meters        215 feet 6 inches
   height                  12.85 meters        42 feet 2 inches
   empty weight            85,000 kilograms    187,395 pounds
   MTO weight              180,000 kilograms   396,830 pounds
   max cruise speed        2,500 KPH           1,555 MPH / 1,350 KT
   service ceiling         18,300 meters       60,000 feet
   range                   6,500 kilometers    4,040 MI / 3,515 NMI
   _____________________   _________________   _______________________

The Tu-144 prototype was a bit shorter and had ejection seats, though production aircraft did not, and the prototype also lacked the retractable canards. The engines fitted to the prototype had a lower thrust rating and were fitted into a single engine box, not a split box as in the production machines. Pictures of the preproduction machines show them to have had a production configuration, though no doubt they differed in minor details.

* The Tu-144 was not put into service until 26 December 1976, and then only for cargo and mail transport by Aeroflot between Moscow and Alma Ata, Kazakhstan, for operational evaluation. The Tu-144 didn't begin passenger service until 1 November 1977, and then apparently it was a cramped and uncomfortably noisy ride. Operating costs were unsurprisingly high and apparently the aircraft's reliability left something to be desired, which would not be surprising given its "bleeding edge" nature and particularly the haste in which it was developed. The next year, on 23 May 1978, the first Tu-144D caught fire, had to perform an emergency landing, and was destroyed with some fatalities. The program never recovered. The Tu-144 only performed a total of 102 passenger-carrying flights. Some flight research was performed on two of the aircraft up to 1990, when the Tu-144 was finally grounded.

That was not quite the end of the story. As discussed in the next section, even though the Concorde and Tu-144 were clearly not money-making propositions, interest in building improved SSTs lingered on through the 1980s and 1990s. The US National Aeronautics & Space Administration (NASA) conducted studies on such aircraft, and in June 1993 officials of the Tupolev organization met with NASA officials at the Paris Air Show to discuss pulling one of the Tu-144s out of mothballs to be used as an experimental platform for improved SST design. The meeting had been arranged by British intermediaries. In October 1993, the Russians and Americans announced that they would conduct a joint advanced SST research effort. The program was formalized in an agreement signed by American Vice-President Al Gore and Russian Prime Minister Viktor Chernomyrdin at Vancouver, Canada, in June 1994. This agreement also formalized NASA shuttle flights to the Russian Mir space station.

The final production Tu-144D was selected for the tests, since it had only 83 flight hours when it was mothballed. Tupolev performed a major refurbishment on it, providing new uprated engines; strengthening the wing to handle the new engines; updating the fuel, hydraulic, electrical, and avionics systems; and adding about 500 sensors feeding a French-designed digital data-acquisition system. The modified Tu-144D was redesignated the "Tu-144LL", where "LL" stood for "Letnoya Laboritoya (Flying Laboratory)", a common Russian suffix for testbeds.
The new engines were Kuznetsov NK-321 turbofans, used on the huge Tupolev Tu-160 "Blackjack" bomber, replacing the Tu-144's Kolesov RD-36 engines. The NK-321 provided about 20% more power than the RD-36-51 and still better fuel economy. Each NK-321 had a max dry thrust of 137.3 kN (14,000 kgp / 31,000 lbf) and an afterburning thrust of 245.2 kN (25,000 kgp / 55,000 lbf). The details of the NK-321s were secret, and the Western partners in the venture were not allowed to inspect them. A sequence of about 26 test flights was conducted in Russia with officials from the NASA Langley center at the Zhukovsky Flight Test Center from 1996 into 1999. Two NASA pilots, including NASA space shuttle pilot C. Gordon Fullerton, flew the machine during the course of the trials. As also discussed in the next section, the whole exercise came to nothing, but it was at least nice to get the machine back in the air one last time.

* Although the US had given up on the Boeing 2707-300 in 1971, NASA continued to conduct paper studies on SSTs, and in 1985 US President Ronald Reagan announced that the US was going to develop a high-speed transport named the "Orient Express". The announcement was a bit confusing because it blended an attempt to develop a hypersonic spaceplane, which emerged as the dead-end "National Aerospace Plane (NASP)" effort, with NASA studies for an improved commercial SST. By the early 1990s, NASA's SST studies had emerged as the "High Speed Research (HSR)" effort, a collaboration with US aircraft industries to develop a "High Speed Civil Transport (HSCT)" that would carry up to 300 passengers at speeds from Mach 2 to 3 over a distance of 10,500 kilometers (6,500 miles), with a ticket price only 20% more than that of a conventional subsonic airliner. The fact that an SST could move more people in a shorter period of time was seen as a possible economic advantage. The NASA studies focused heavily on finding solutions to the concerns over high-altitude air pollution, airport vicinity noise levels, and sonic boom that had killed the 2707-300. Other nations also conducted SST studies, with Japan flying large rocket-boosted scale models in the Australian outback, and there was an interest in international collaborative development efforts.

The biggest non-environmental obstacle was simple development cost. While it might have been possible to develop an SST with reasonable operating costs -- though obviously not as low as those of a subsonic fanjet airliner -- given the high development costs it was hard to see how such a machine could be offered at a competitive price and achieve the sales volumes needed to make it worthwhile to build. Some aerospace firms took a different approach on the matter, proposing small "supersonic business jets (SSBJs)". The idea was that there is a market of people who regard time as money and who would be willing to pay a high premium to shave a few hours for a trip across the ocean. Development costs of such a machine would be relatively modest, and the business model of serving a wealthy elite, along with delivering small volumes of urgent parcels in the cargo hold, seemed realistic. Firms such as Dassault in France, Gulfstream in the US, and Sukhoi in Russia came up with concepts in the early 1990s, but the idea didn't go anywhere at the time.

* Although the NASA HSR program did put the Tu-144LL back in the air, the study was finally axed in 1999.
NASA, in good bureaucratic form, kept the program's cancellation very quiet, in contrast to the grand press releases that had accompanied the effort. That was understandable since NASA has to be wary of politicians out to grab headlines by publicly attacking government boondoggles, but in a sense the agency had nothing to hide: NASA studied the matter front to back, and one official stated off the record that in the end nobody could figure out how to make money on the HSCT. From an engineering point of view, a conclusive negative answer is as useful as a conclusive positive answer -- but few politicians have an engineering background and understand such things.

Some aircraft manufacturers didn't give up on SST research after the fall of the HSCT program. One of the major obstacles to selling an SST was the fact that sonic booms prevented it from being operated at high speed over land, limiting its appeal, and of course an SST that didn't produce a sonic boom would overcome that obstacle. Studies showed that sonic boom decreased with aircraft length and with reduction in aircraft size. There was absolutely no way the big HSCT, which was on a scale comparable to that of the Boeing 2707-300, could fly without generating a sonic boom, and so current industry notional configurations envision an SSBJ or small supersonic airliner.

Gulfstream released a notional configuration of a "Quiet Supersonic Jet (QSJ)" that would seat 24 passengers, have a gross takeoff weight of 68,000 kilograms (150,000 pounds), a length of 49 meters (160 feet), and swing wings. Gulfstream officials projected a market of 180 to 400 machines over ten years, and added that the company had made a good profit building machines in production runs as small as 200 aircraft. Other manufacturers have envisioned small SSTs with up to 50 seats.

* In 2005 Aerion Corporation, a startup in Reno, Nevada, announced concepts for an SSBJ designed to carry 8 to 12 passengers, with a maximum range of 7,400 kilometers (4,000 NMI) at Mach 1.5, a length of 45.18 meters (149 feet 2 inches), a span of 19.56 meters (64 feet 2 inches), and a maximum takeoff weight of 45,350 kilograms (100,000 pounds). The machine is technologically conservative in most respects, with no flashy features such as swing wings or drooping nose. Current configurations envision a dartlike aircraft, with wedge-style wings fitted with long leading-edge strakes, a steeply swept tailfin with a center-mounted wedge-style tailplane, and twin engines mounted on stub pylons on the rear of the wings. A fly-by-wire system will provide controllability over a wide range of flight conditions. The wings are ultra-thin, to be made of carbon composite materials, and feature full-span trailing-edge flaps to allow takeoffs on typical runways.

The currently planned engines are Pratt & Whitney JT8D-219 turbofans, each derated to 80.1 kN (8,165 kgp / 18,000 lbf). The JT8Ds are non-afterburning and use a fixed supersonic inlet configuration. The power-to-weight ratio at normal operating weights is expected to be about 40%, about the same as a Northrop F-5 fighter in afterburner. The Aerion SSBJ will be able to operate efficiently at high subsonic or low supersonic speeds over populated areas, where sonic boom would be unacceptable. The company believes there is a market for 250 to 300 SSBJs, and opened up the books for orders at the Dubai air show in 2007.
The company claims to have dozens of orders on the books, but no prototype has flown yet and there is no indication of when one will fly.

* Even with the final grounding of the Concorde, the idea of the SST continues to flicker on. In 2011, the European Aeronautic Defence & Space (EADS) group released a concept for a "Zero-Emissions Hypersonic Transport (ZEHST)" that could carry up to 100 passengers at Mach 4 using turbofan / ramjet / rocket propulsion. It was nothing more than an interesting blue-sky concept with no prospect of entering development any time soon. Most agree that the SST is a sexy idea; few are confident that it can be made to pay.

* In hindsight, the SST mania that produced the Concorde sounded persuasive at the time, but it suffered from a certain lack of realism. Although the Concorde was a lovely, magnificent machine and a technological marvel even when it was retired, it was also a testimony to a certain naivete that characterized the 1950s and 1960s, when people thought that technology could accomplish anything and set out on unbelievably grand projects. Some of these projects they incredibly pulled off, but some of them turned out very differently than expected. It's still hard not to admire their dash.

There's also a certain perverse humor to the whole thing. The French and the British actually built the Concorde, while the Americans, in typical grand style, cooked up a plan to build a machine that was twice as big and faster -- and never got it off the ground. The irony was that the Americans made the right decision when they killed the 2707-300. The further irony was that they did it for environmental reasons that, whether they were right or wrong, were irrelevant given the fact they would have lost their shirts on it.

Development and purchase costs were almost guaranteed to have been greater than those of a subsonic airliner, the SST being much more like a combat aircraft; maintenance costs were a good bet to have been higher as well; an SST would have only been useful for transcontinental operations and would have been absurd as a bulk cargo carrier, meaning production volumes would have been relatively low; and by the example of the Concorde, which had about three times the fuel burn per passenger-mile of a Boeing 747, there's no doubt that the costs of fuel would have made a 300-seat SST hopelessly uncompetitive to operate for a mass market. It would have been very interesting to have fielded a 12-seat supersonic business jet, a much less challenging proposition from both the technical and commercial points of view, in the 1970s, but people simply could not think small.

* Incidentally, interest in SSTs from the late 1950s through much of the 1960s was so great that most companies that came up with large supersonic combat aircraft also cooked up concepts for SST derivatives. General Dynamics considered a "stretched" derivative of the company's B-58 Hustler bomber designated the "Model 58-9", and the MiG organization of the USSR even came up with an SSBJ derivative of the MiG-25 "Foxbat" interceptor. Of course, none of these notions ever amounted to much more than "back of envelope" designs.

* Sources include: The information on the Soviet effort to penetrate the Concorde program was obtained from "Supersonic Spies", an episode of the US Public Broadcasting System's NOVA TV program, released in early 1998.
NASA's website also provided some useful details on the Tu-144LL test program and the Tu-144 in general, as did the surprisingly good Russian Monino aviation museum website.

* Revision history:

   v1.0.0 / 01 aug 03 / gvg
   v1.0.1 / 01 nov 03 / gvg / Cleanup, comments on final Concorde flight.
   v1.0.2 / 01 dec 03 / gvg / Comments on QSP.
   v1.0.3 / 01 jan 04 / gvg / A few minor tweaks on the Concorde.
   v1.0.4 / 01 feb 04 / gvg / Very last flight of Concorde.
   v1.0.5 / 01 dec 05 / gvg / Cosmetic changes, SSBJ efforts.
   v1.0.7 / 01 nov 07 / gvg / Review & polish.
   v1.0.8 / 01 jan 09 / gvg / Review & polish.
   v1.0.9 / 01 nov 09 / gvg / Corrected Paris accident details.
   v1.1.0 / 01 oct 11 / gvg / Review & polish.
http://www.airvectors.net/avsst.html
13
37
1. Why is adaptive optics needed?

Turbulence in the Earth's atmosphere limits the performance of ground-based astronomical telescopes. In addition to making a star twinkle, turbulence spreads out the light from a star so that it appears as a fuzzy blob when viewed through a telescope. This blurring effect is so strong that even the largest ground-based telescopes, the two 10-m Keck Telescopes in Hawaii, have no better spatial resolution than a modest 8-inch backyard telescope! One of the major motivations for launching telescopes into space is to overcome this blurring due to the Earth's atmosphere, so that images will have higher spatial resolution than has been possible to date from the ground. The Figure below illustrates the blurring effect of the atmosphere in a long-exposure image (left) and a short "snapshot" image (center). When the effects of turbulence in the Earth's atmosphere are corrected, this distant star would look like the image on the right.

Bright Star (Arcturus) Observed with Lick Observatory's 1-m Telescope. Image credit: Lawrence Livermore National Laboratory and NSF Center for Adaptive Optics. Graphic can be obtained at the Center for Adaptive Optics, University of California at Santa Cruz, (831) 459-5592 or [email protected].

2. How adaptive optics works.

Adaptive optics technology can correct for the blurring caused by the Earth's atmosphere, and can make Earth-bound telescopes "see" almost as clearly as if they were in space. The principles behind adaptive optics technology are illustrated in the Figure below. Assume that you wish to observe a faint galaxy. The first step is to find a relatively bright star close to the galaxy.

a) Light from both this "guide star" and the galaxy passes through the telescope's optics. The star's light is sent to a special high-speed camera, called a "wavefront sensor," that can measure hundreds of times a second how the star's light is distorted by the turbulence.

b) This information is sent to a fast computer, which calculates the shape to apply to a special "deformable mirror" (usually placed behind the main mirror of the telescope). This mirror cancels out the distortions due to turbulence.

c) Light from both the "guide star" and the galaxy is reflected off the deformable mirror. Both are now sharpened because the distortions due to turbulence have been removed.

Image credits: Lawrence Livermore National Laboratory and NSF Center for Adaptive Optics. Graphics can be obtained at the Center for Adaptive Optics, University of California at Santa Cruz, (831) 459-5592 or [email protected].

In practice, the process described schematically in the top Figure above is a continuous one, and involves a "feedback loop" or "control system" that is always sending small corrections to the shape of the deformable mirror. This is illustrated in the bottom Figure.

3. The role of laser guide stars.

Until the past few years, astronomical adaptive optics relied exclusively on the presence of a bright star positioned on the sky very close to the astronomical object being observed. Typically, for infrared observations this "guide star" must lie within about 30 seconds of arc of the astronomical target. (One second of arc corresponds to a dime viewed from a distance of about 2 kilometers.) For visible-light observations the "guide star" must be within about 10 seconds of arc. These limitations are quite severe, and mean that the fraction of objects in the sky that are favored with a suitable guide star is a few percent or less.
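Returning briefly to the feedback loop described in section 2, here is a minimal toy sketch in Python of how such a control loop behaves. Everything in it is invented for illustration (the number of actuators, the loop gain, and the turbulence model are not taken from any real adaptive optics system): the wavefront sensor effectively sees the residual error left after the deformable mirror's current correction, and a simple integrator controller keeps nudging the mirror toward that residual.

    import numpy as np

    # Toy adaptive-optics feedback loop (illustrative only).
    # The "wavefront" is modeled as a small array of phase errors (in microns)
    # across the telescope pupil; a real system measures hundreds of points
    # hundreds of times per second.

    rng = np.random.default_rng(0)
    n_actuators = 16                 # hypothetical number of deformable-mirror actuators
    gain = 0.5                       # integrator gain of the control loop
    mirror = np.zeros(n_actuators)   # current deformable-mirror correction

    def atmosphere(t):
        """Slowly evolving turbulence-induced wavefront error (stand-in model)."""
        return 0.5 * np.sin(0.1 * t + np.arange(n_actuators)) + 0.05 * rng.standard_normal(n_actuators)

    for t in range(200):                      # 200 loop iterations
        incoming = atmosphere(t)              # distorted wavefront from the guide star
        residual = incoming - mirror          # what the wavefront sensor actually sees
        mirror += gain * residual             # integrator: nudge the mirror toward the error
        if t % 50 == 0:
            print(f"step {t}: rms residual = {np.sqrt(np.mean(residual**2)):.3f} um")

Real systems do the same thing with hundreds or thousands of actuators, a matrix reconstructor in place of the one-to-one mapping used here, and loop rates of hundreds of corrections per second.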
Of course astronomers would like to be able to look anywhere in the sky, not just at those lucky locations that have a bright guide star very nearby. To accomplish this, a laser can be used to make an "artificial star" almost anywhere in the sky. The concept of "laser guide stars" was suggested in the early 1980's in both the military and astronomical communities. A laser beam is mounted on the telescope and pointed at the object to be observed. One concept is shown in the Figure on the left below. In this "sodium laser guide star" concept, the laser light is tuned to a yellow color (similar to the color of low-pressure sodium street lights) that excites a layer of sodium atoms about 60 miles up in the atmosphere. The sodium atoms glow in a small spot on the sky, making an "artificial star" that can be used to measure atmospheric turbulence. This concept is currently implemented on the ALFA system at Calar Alto in Spain, and at the Lick Observatory in California (Figure below right).

Image credits: Lawrence Livermore National Laboratory and NSF Center for Adaptive Optics. Graphics can be obtained at the Center for Adaptive Optics, University of California at Santa Cruz, (831) 459-5592 or [email protected].

An alternative laser scheme that has been successfully implemented on an Air Force 1.5 m telescope at the Starfire Optical Range by Robert Fugate uses a pulsed green laser focused at an altitude of about 15 km. The green laser light scatters from molecules in the air, and the detector measuring the turbulence is timed so as to observe the scattered laser light at just the time when a laser pulse has traveled up and back from the desired 15-km altitude. This concept is called a "Rayleigh beacon" (named after the type of scattering from air molecules), and is currently being implemented for astronomy at the Mt. Wilson 100-inch telescope by Laird Thompson (University of Illinois).

4. The heritage of adaptive optics technology.

Adaptive optics development contains threads from both the astronomical and military communities. The concept was first proposed in a 1953 paper by astronomer Horace Babcock (then at Mt. Wilson and Palomar Observatories, now renamed The Carnegie Observatories). However 1950's technologies were not ready to deal with the exacting requirements needed for a successful adaptive optics system. In the late 1960's and early 1970's, the military and aerospace communities built the first significant adaptive optics systems, and began a sustained effort that has now led to very sophisticated Air Force systems at the Starfire Optical Range in Albuquerque NM (contact: Robert Fugate) and at the Advanced Electro-Optical Facility on Maui in Hawaii (contact: Paul Kervin). Luis Alvarez's research group at DOE's Lawrence Berkeley Laboratory performed one of the first astronomy experiments, in which they built a simple deformable mirror that corrected only in one dimension but demonstrated that it could sharpen the image of a star. Early theoretical work on the capabilities and limitations of adaptive optics systems was done by Freeman Dyson (Institute for Advanced Study), Francois Roddier (University of Hawaii), and John Hardy (Itek Corporation, now retired).

Today the technical emphases of military and astronomical adaptive optics systems are rather different. Military applications typically need high adaptive optics performance, operate at visible wavelengths, require very rapid response times for their turbulence measurements, and utilize bright objects as "guide stars."
In contrast, astronomical systems are limited by the fact that bright natural stars are rare. Until laser guide stars are in broad use, astronomical systems must operate with guide stars that are fainter. As a consequence, their response times are slower (in order to take longer exposures and gather more light during turbulence measurements), they choose to operate at infrared wavelengths (because these are easier for obtaining good performance), and the objects being imaged are fainter (requiring long time-exposures to obtain clear images or spectra).

Today's adaptive optics systems include the two Air Force facilities mentioned above (3-m class telescopes at the Starfire Optical Range in Albuquerque and the Advanced Electro-Optical Facility on Maui), and about half a dozen 3-5 m class astronomical telescopes around the world. The newest additions are systems for the new generation of 8 - 10 m astronomical telescopes: the Keck I and II Telescopes and the Gemini North Telescope, all atop the Mauna Kea volcano in Hawaii.

Image credit: W. M. Keck Observatory. Image credit: Gemini Observatory.

5. Key components of an adaptive optics system.

The most distinctive components of an adaptive optics system are the "deformable mirror" which actually makes the optical corrections, and the "wavefront sensor" which measures the turbulence hundreds of times a second. These are connected together by a high-speed computer. Today, deformable mirrors for astronomy are usually made of a very thin sheet of glass with a diameter of several inches. Attached to the back of the glass are various kinds of "actuators": devices which expand or contract in length in response to a voltage signal, bending the thin sheet of glass locally. The schematic below illustrates in very simplified form how such a deformable mirror is able to correct a distorted beam of light from a star, by straightening out its wavefront. Light is incident on the mirror from the left, and is reflected moving back to the left. If the deformable mirror has a depression that is half the depth of the initial distortion in the wavefront's shape, then by the time the light has reflected from the mirror and gone back the other way, the rest of the wavefront will have caught up with the "notched" section and the wavefront will be flat, or perfect.

1) Hubble has very broad wavelength coverage, whereas today's astronomical adaptive optics systems on 8 - 10 m telescopes are designed to work only when observing in infrared light. Hubble has clear superiority for observations using ultra-violet and visible light, which cannot be done on today's largest telescopes even with adaptive optics.

2) Today, Hubble's infrared camera (NICMOS) is no longer operational. Ground-based adaptive optics systems on 8 - 10 m telescopes are thus the "only game in town" for high-resolution infrared observations. The Keck II Telescope's adaptive optics system currently yields spatial resolution in the infrared (e.g. 0.04 seconds of arc) comparable to the resolution which Hubble achieves in visible light (see www2.keck.hawaii.edu:3636/realpublic/ao/aolight.html).

3) NASA plans to revive Hubble's NICMOS infrared camera by the end of 2001. At that time, there will be high-spatial-resolution infrared capabilities both on the ground and in space. How will these compare?
Hubble will have the following advantages, relative to ground-based 8 - 10 m telescopes with adaptive optics observing in infrared light:

On the other hand, 8 - 10 m ground-based telescopes with adaptive optics will have several advantages over Hubble:

This work was performed in part under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405Eng-48. This work has been supported in part by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement No. AST - 0043302.

8. Contact information.

Claire Max, [email protected]
Center for Adaptive Optics
University of California
1156 High Street
Santa Cruz CA 95064
http://www.ucolick.org/~max/max-web/History_AO_Max.htm
13
10
The scientific importance of these first samples from the Galaxy can't be overstated. The major form of heavy elements in interstellar space is in dust. This interstellar dust plays a major role in the formation of new stars and planetary systems. Our own Solar System formed from gas and dust in the interstellar medium 4.6 billion years ago. The heavy elements making up Earth and our bodies were once interstellar dust. In the words of Joni Mitchell, "We are Stardust." But we don't even know what the typical interstellar dust grain looks like! We are extremely excited about the prospect of studying directly the first contemporary interstellar dust.

Interstellar dust was first discovered flowing across the Solar System by dust detectors aboard the Ulysses spacecraft in 1993 and was later confirmed by the Galileo mission to Jupiter. The particles were identified as coming from a location in the sky in the Constellation Ophiuchus, looking toward the center of the Milky Way Galaxy. What we have learned about interstellar dust comes from remote observations of how the dust absorbs, scatters, polarizes, and even emits light. Also, some ancient interstellar dust has been identified in meteorites found on Earth.

Interstellar dust grains are small, ranging in size from 0.01 microns all the way up to 20 microns. They are made of different minerals such as silicates, graphitic carbon, hydrogenated amorphous carbon, alumina, and even diamond carbon. Interstellar dust grains form by condensation in the regions around stars that are coming to the end of their life cycle: red giants, planetary nebulae, white dwarfs, novae, and supernovae. The dust grains mix with the interstellar medium (the stuff between the stars) and slowly experience chemical and isotopic changes from interactions with the gas and radiation in interstellar space. Dust grains do not last for very long in the interstellar medium before being dissociated by radiation, maybe a few hundreds of millions of years. This is why we say that the dust collected by the Stardust mission is contemporary dust: it must be only a few hundred million years old at most, whereas dust recovered in meteorites would have been incorporated into them at the time of the formation of the Solar System (4.6 billion years ago).

While finding the interstellar dust grains captured in Stardust's aerogel collectors is the goal of the Stardust@home project, the identification of these grains is only the first step. The next step is the analysis. Once we have a few examples to examine, a committee of experts will decide on the next steps. Because they are so small and so precious, each track is worth about a million dollars if there turn out to be 100 of them! The analysis of these particles will have to be done extremely carefully and will take many years. Many types of analysis destroy the samples, so we will have to start with the gentlest techniques and proceed very carefully. The great advantage of this type of sample-return mission is that one can take advantage of the improvements in analytical techniques for years or even decades to come. Analytical techniques improved dramatically even during the seven years between the launch and the return of Stardust, and there is no sign of a slowdown in progress. So no matter what, some of these interstellar dust particles will be set aside for our great-grandchildren to analyze.

More on interstellar dust from the JPL Stardust website.

Aerogel is one of the strangest materials ever developed.
It is a solid, yet is only a few times as dense as air. If you hold it in your hand, you can only barely feel its weight, and it looks bluish and ghostly like solid smoke. While it looks blue, it casts an orange shadow. It does this for the same reason that the sky is blue and sunsets are red! Aerogel has extremely bizarre properties. It is a solid, glassy nanofoam, yet weighs next to nothing. Aerogel has the almost magical property that it can capture particles moving at very high speeds (several miles per second or more) better than any other material. In some cases, particles can be captured in a nearly pristine state. Particles moving at these speeds vaporize if they hit any other material.

More on aerogel from the JPL Stardust website.

An article from the Planetary Society: Aerogel: The "Frozen Smoke" That Made Stardust Possible
http://stardustathome.ssl.berkeley.edu/a_science.php
13
72
Schematic diagram of a high-bypass turbofan engine

A turbofan is a type of aircraft gas turbine engine that provides thrust using a combination of a ducted fan and a jet exhaust nozzle. Part of the airstream from the ducted fan passes through the core, providing oxygen to burn fuel to create power. However, the rest of the air flow bypasses the engine core and mixes with the faster stream from the core, significantly reducing exhaust noise. The rather slower bypass airflow produces thrust more efficiently than the high-speed air from the core, and this reduces the specific fuel consumption. A few designs work slightly differently and have the fan blades as a radial extension of an aft-mounted low-pressure turbine unit.

Turbofans have a net exhaust speed that is much lower than a turbojet. This makes them much more efficient at subsonic speeds than turbojets, and somewhat more efficient at supersonic speeds up to roughly Mach 1.6, but they have also been found to be efficient when used with continuous afterburner at Mach 3 and above. However, the lower exhaust speed also reduces thrust at high flight speeds. All of the jet engines used in currently manufactured commercial jet aircraft are turbofans. They are used commercially mainly because they are highly efficient and relatively quiet in operation. Turbofans are also used in many military jet aircraft, such as the F-15 Eagle.

Unlike a reciprocating engine, a turbojet undertakes a continuous-flow combustion process. In a single-spool (or single-shaft) turbojet, which is the most basic form and the earliest type of turbojet to be developed, air enters an intake before being compressed to a higher pressure by a rotating (fan-like) compressor. The compressed air passes on to a combustor, where it is mixed with a fuel (e.g. kerosene) and ignited. The hot combustion gases then enter a windmill-like turbine, where power is extracted to drive the compressor. Although the expansion process in the turbine reduces the gas pressure (and temperature) somewhat, the remaining energy and pressure is employed to provide a high-velocity jet by passing the gas through a propelling nozzle. This process produces a net thrust opposite in direction to that of the jet flow.

After World War II, 2-spool (or 2-shaft) turbojets were developed to make it easier to throttle back compression systems with a high design overall pressure ratio (i.e., combustor inlet pressure/intake delivery pressure). Adopting the 2-spool arrangement enables the compression system to be split in two, with a Low Pressure (LP) Compressor supercharging a High Pressure (HP) Compressor. Each compressor is mounted on a separate (co-axial) shaft, driven by its own turbine (i.e. HP Turbine and LP Turbine). Otherwise a 2-spool turbojet is much like a single-spool engine.

Modern turbofans evolved from the 2-spool axial-flow turbojet engine, essentially by increasing the relative size of the Low Pressure (LP) Compressor to the point where some (if not most) of the air exiting the unit actually bypasses the core (or gas-generator) stream, and hence the main combustor. This bypass air either expands through a separate propelling nozzle, or is mixed with the hot gases leaving the Low Pressure (LP) Turbine, before expanding through a Mixed Stream Propelling Nozzle. Owing to a lower jet velocity, a modern civil turbofan is quieter than the equivalent turbojet. Turbofans also have a better thermal efficiency, which is explained later in the article.
In a turbofan, the LP Compressor is often called a fan. Civil-aviation turbofans usually have a single fan stage, whereas most military-aviation turbofans (e.g. combat and trainer aircraft applications) have multi-stage fans. It should be noted, however, that modern military transport turbofan engines are similar to those that propel civil jetliners.

Turboprop engines are gas-turbine engines that deliver almost all of their power to a shaft to drive a propeller. Turboprops remain popular on very small or slow aircraft, such as small commuter airliners, for their fuel efficiency at lower speeds, as well as on medium military transports and patrol planes, such as the C-130 Hercules and P-3 Orion, for their high takeoff performance and mission endurance benefits respectively.

If the turboprop is better at moderate flight speeds and the turbojet is better at very high speeds, it might be imagined that at some speed range in the middle a mixture of the two is best. Such an engine is the turbofan (originally termed bypass turbojet by the inventors at Rolls Royce). Another name sometimes used is ducted fan, though that term is also used for propellers and fans used in vertical-flight applications.

The difference between a turbofan and a propeller, besides direct thrust, is that the intake duct of the former slows the air before it arrives at the fan face. As both propeller and fan blades must operate at subsonic inlet velocities to be efficient, ducted fans allow efficient operation at higher vehicle speeds. Depending on specific thrust (i.e. net thrust/intake airflow), ducted fans operate best from about 400 to 2000 km/h (250 to 1300 mph), which is why turbofans are the most common type of engine for aviation use today in airliners as well as subsonic/supersonic military fighter and trainer aircraft. It should be noted, however, that turbofans use extensive ducting to force incoming air to subsonic velocities (thus reducing shock waves throughout the engine).

The noise of any type of jet engine is strongly related to the velocity of the exhaust gases, typically being proportional to the eighth power of the jet velocity. High-bypass-ratio (i.e., low-specific-thrust) turbofans are relatively quiet compared to turbojets and low-bypass-ratio (i.e., high-specific-thrust) turbofans. A low-specific-thrust engine has a low jet velocity by definition, as the following approximate equation for net thrust implies:

    F_n ≈ m_dot · (v_j - v_a)

where m_dot is the intake airflow, v_j the fully expanded jet velocity and v_a the flight velocity. Rearranging the above equation, specific thrust is given by:

    F_n / m_dot ≈ v_j - v_a

So for zero flight velocity, specific thrust is directly proportional to jet velocity. Relatively speaking, low-specific-thrust engines are large in diameter to accommodate the high airflow required for a given thrust. Jet aircraft are often considered loud, but a conventional piston engine or a turboprop engine delivering the same thrust would be much louder.

Early turbojet engines were very fuel-inefficient, as their overall pressure ratio and turbine inlet temperature were severely limited by the technology available at the time. The very first running turbofan was the German Daimler-Benz DB 670 (also known as 109-007), which was operated on its testbed on April 1, 1943. The engine was later abandoned as the war went on and its problems could not be solved. The British wartime Metrovick F.2 axial flow jet was given a fan to create the first British turbofan.
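As a rough numerical illustration of the net-thrust and specific-thrust relations above, the following Python sketch compares a high-specific-thrust (turbojet-like) cycle with a low-specific-thrust (turbofan-like) one at the same intake airflow. The figures used are invented round numbers, not data for any actual engine, and the eighth-power noise scaling is applied only as the crude rule of thumb quoted above.

    # Rough illustration of the net-thrust relation discussed above,
    # F_n ~ m_dot * (v_j - v_a). All numbers are invented round figures,
    # not data for any particular engine.

    def net_thrust(mdot, v_jet, v_flight):
        """Approximate net thrust in newtons (mass flow in kg/s, velocities in m/s)."""
        return mdot * (v_jet - v_flight)

    v_flight = 250.0   # flight speed, m/s (roughly Mach 0.8 at cruise altitude)
    mdot = 500.0       # intake airflow, kg/s

    for label, v_jet in [("high-specific-thrust (turbojet-like)", 900.0),
                         ("low-specific-thrust (turbofan-like)", 350.0)]:
        thrust = net_thrust(mdot, v_jet, v_flight)
        specific_thrust = thrust / mdot            # = v_jet - v_flight
        # Jet noise rises very steeply with jet velocity (roughly the 8th power),
        # so the low-velocity cycle is dramatically quieter.
        relative_noise = (v_jet / 900.0) ** 8
        print(f"{label}: thrust {thrust/1000:.0f} kN at {mdot:.0f} kg/s, "
              f"specific thrust {specific_thrust:.0f} N per kg/s, "
              f"relative noise ~{relative_noise:.3f}")

    # At the same airflow the low-specific-thrust cycle gives far less thrust, which is
    # why such engines are "large in diameter": they need much more airflow (a bigger
    # fan) to deliver a given total thrust.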
Improved materials, and the introduction of twin compressors such as in the Pratt & Whitney JT3C engine, increased the overall pressure ratio and thus the thermodynamic efficiency of engines, but they also led to a poor propulsive efficiency, as pure turbojets have a high specific thrust/high velocity exhaust better suited to supersonic flight. The original low-bypass turbofan engines were designed to improve propulsive efficiency by reducing the exhaust velocity to a value closer to that of the aircraft. The Rolls-Royce Conway, the first production turbofan, had a bypass ratio of 0.3, similar to the modern General Electric F404 fighter engine. Civilian turbofan engines of the 1960s, such as the Pratt & Whitney JT8D and the Rolls-Royce Spey, had bypass ratios closer to 1, but were not dissimilar to their military equivalents.

The unusual General Electric CF700 turbofan engine was developed as an aft-fan engine with a 2.0 bypass ratio. It was derived from the General Electric J85/CJ610 turbojet (2,850 lbf or 12,650 N) used on the T-38 Talon and the Learjet, to power the larger Rockwell Sabreliner 75/80 model aircraft, as well as the Dassault Falcon 20, with about a 50% increase in thrust (4,200 lbf or 18,700 N). The CF700 was the first small turbofan in the world to be certified by the Federal Aviation Administration (FAA). There are now over 400 CF700 aircraft in operation around the world, with an experience base of over 10 million service hours. The CF700 turbofan engine was also used to train Moon-bound astronauts in Project Apollo as the powerplant for the Lunar Landing Research Vehicle.

A high specific thrust/low bypass ratio turbofan normally has a multi-stage fan, developing a relatively high pressure ratio and, thus, yielding a high (mixed or cold) exhaust velocity. The core airflow needs to be large enough to give sufficient core power to drive the fan. A smaller core flow/higher bypass ratio cycle can be achieved by raising the (HP) turbine rotor inlet temperature.

Imagine a retrofit situation where a new low bypass ratio, mixed exhaust, turbofan is replacing an old turbojet, in a particular military application. Say the new engine is to have the same airflow and net thrust (i.e. same specific thrust) as the one it is replacing. A bypass flow can only be introduced if the turbine inlet temperature is allowed to increase, to compensate for a correspondingly smaller core flow. Improvements in turbine cooling/material technology would facilitate the use of a higher turbine inlet temperature, despite increases in cooling air temperature, resulting from a probable increase in overall pressure ratio. Efficiently done, the resulting turbofan would probably operate at a higher nozzle pressure ratio than the turbojet, but with a lower exhaust temperature to retain net thrust. Since the temperature rise across the whole engine (intake to nozzle) would be lower, the (dry power) fuel flow would also be reduced, resulting in a better specific fuel consumption (SFC).

A few low-bypass ratio military turbofans (e.g. F404) have Variable Inlet Guide Vanes, with piano-style hinges, to direct air onto the first rotor stage. This improves the fan surge margin (see compressor map) in the mid-flow range. The swing-wing F-111 achieved a very high range/payload capability by pioneering the use of an afterburning turbofan (the Pratt & Whitney TF30), and that engine was also the heart of the famous F-14 Tomcat air superiority fighter, which used the same engines in a smaller, more agile airframe to achieve efficient cruise and Mach 2 speed.
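The remark above about improving propulsive efficiency by bringing the exhaust velocity closer to the flight speed can be quantified with the standard ideal (Froude) propulsive-efficiency relation, eta_p = 2 / (1 + v_jet / v_flight). That formula is not quoted in the article itself, so the sketch below should be read as a textbook approximation with made-up speeds rather than a statement about any particular engine.

    # Sketch of why bringing the jet velocity closer to the flight speed improves
    # propulsive efficiency. Uses the standard ideal (Froude) relation
    # eta_p = 2 / (1 + v_jet / v_flight), which is a textbook idealization,
    # not a figure taken from the article above.

    def propulsive_efficiency(v_jet, v_flight):
        return 2.0 / (1.0 + v_jet / v_flight)

    v_flight = 250.0   # m/s, notional cruise speed
    for v_jet in (900.0, 600.0, 350.0):   # progressively lower exhaust velocities
        print(f"v_jet = {v_jet:.0f} m/s -> propulsive efficiency ~ "
              f"{propulsive_efficiency(v_jet, v_flight):.0%}")

With these made-up numbers the efficiency climbs from roughly 40% to over 80% as the jet velocity drops toward the flight speed, which is the basic reason the bypass stream is worth having.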
Since the 1970s, most jet fighter engines have been low/medium bypass turbofans with a mixed exhaust, afterburner and variable area final nozzle. An afterburner is a combustor located downstream of the turbine blades and directly upstream of the nozzle, which burns fuel from afterburner-specific fuel injectors. When lit, prodigious amounts of fuel are burnt in the afterburner, raising the temperature of exhaust gases by a significant degree, resulting in a higher exhaust velocity/engine specific thrust. The variable geometry nozzle must open to a larger throat area to accommodate the extra volume flow when the afterburner is lit. Afterburning is often designed to give a significant thrust boost for take-off, transonic acceleration and combat maneuvers, but is very fuel intensive. Consequently afterburning can only be used for short portions of a mission. However the Mach 3 SR-71 was designed for continuous operation and to be efficient with the afterburner lit.

Unlike the main combustor, where the downstream turbine blades must not be damaged by high temperatures, an afterburner can operate at the ideal maximum (stoichiometric) temperature (i.e. about 2100 K / 3780 °R / 3320 °F). At a fixed total applied fuel:air ratio, the total fuel flow for a given fan airflow will be the same, regardless of the dry specific thrust of the engine. However, a high specific thrust turbofan will, by definition, have a higher nozzle pressure ratio, resulting in a higher afterburning net thrust and, therefore, a lower afterburning specific fuel consumption. However, high specific thrust engines have a high dry SFC. The situation is reversed for a medium specific thrust afterburning turbofan: i.e. poor afterburning SFC/good dry SFC. The former engine is suitable for a combat aircraft which must remain in afterburning combat for a fairly long period, but only has to fight fairly close to the airfield (e.g. cross-border skirmishes). The latter engine is better for an aircraft that has to fly some distance, or loiter for a long time, before going into combat. However, the pilot can only afford to stay in afterburning for a short period, before his/her fuel reserves become dangerously low.

Modern low-bypass military turbofans include the Pratt & Whitney F119, the Eurojet EJ200 and the General Electric F110 and F414, all of which feature a mixed exhaust, afterburner and variable area propelling nozzle. Non-afterburning engines include the Rolls-Royce/Turbomeca Adour (afterburning in the SEPECAT Jaguar) and the unmixed, vectored thrust, Rolls-Royce Pegasus.

The low specific thrust/high bypass ratio turbofans used in today's civil jetliners (and some military transport aircraft) evolved from the high specific thrust/low bypass ratio turbofans used in such aircraft back in the 1960s. Low specific thrust is achieved by replacing the multi-stage fan with a single stage unit. Unlike some military engines, modern civil turbofans do not have any stationary inlet guide vanes in front of the fan rotor. The fan is scaled to achieve the desired net thrust.

The core (or gas generator) of the engine must generate sufficient core power to at least drive the fan at its design flow and pressure ratio. Through improvements in turbine cooling/material technology, a higher (HP) turbine rotor inlet temperature can be used, thus facilitating a smaller (and lighter) core and (potentially) improving the core thermal efficiency.
Reducing the core mass flow tends to increase the load on the LP turbine, so this unit may require additional stages to reduce the average stage loading and to maintain LP turbine efficiency. Reducing core flow also increases bypass ratio (5:1, or more, is now common). Further improvements in core thermal efficiency can be achieved by raising the overall pressure ratio of the core. Improved blade aerodynamics reduces the number of extra compressor stages required. With multiple compressors (i.e. LPC, IPC, HPC) dramatic increases in overall pressure ratio have become possible. Variable geometry (i.e. stators) enables high pressure ratio compressors to work surge-free at all throttle settings.

The first high-bypass turbofan engine was the General Electric TF39, designed in the mid-1960s to power the Lockheed C-5 Galaxy military transport aircraft. The civil General Electric CF6 engine used a derived design. Other high-bypass turbofans are the Pratt & Whitney JT9D, the three-shaft Rolls-Royce RB211 and the CFM International CFM56. More recent large high-bypass turbofans include the Pratt & Whitney PW4000, the three-shaft Rolls-Royce Trent, the General Electric GE90/GEnx and the GP7000, produced jointly by GE and P&W.

High-bypass turbofan engines are generally quieter than the earlier low bypass ratio civil engines. This is not so much due to the higher bypass ratio, as to the use of a low-pressure-ratio, single-stage fan, which significantly reduces specific thrust and, thereby, jet velocity. The combination of a higher overall pressure ratio and turbine inlet temperature improves thermal efficiency. This, together with a lower specific thrust (better propulsive efficiency), leads to a lower specific fuel consumption. For reasons of fuel economy, and also of reduced noise, almost all of today's jet airliners are powered by high-bypass turbofans. Although modern combat aircraft tend to use low bypass ratio turbofans, military transport aircraft (e.g. the C-17) mainly use high bypass ratio turbofans (or turboprops) for fuel efficiency.

Because of the implied low mean jet velocity, a high bypass ratio/low specific thrust turbofan has a high thrust lapse rate (with rising flight speed). Consequently the engine must be over-sized to give sufficient thrust during climb/cruise at high flight speeds (e.g. Mach 0.83). Because of the high thrust lapse rate, the static (i.e. Mach 0) thrust is consequently relatively high. This enables heavily laden, wide body aircraft to accelerate quickly during take-off and consequently lift-off within a reasonable runway length. The turbofans on twin engined airliners are further over-sized to cope with losing one engine during take-off, which reduces the aircraft's net thrust by 50%. Modern twin engined airliners normally climb very steeply immediately after take-off. If one engine is lost, the climb-out is much shallower, but sufficient to clear obstacles in the flightpath.

The Soviet Union's engine technology was less advanced than the West's and its first wide-body aircraft, the Ilyushin Il-86, was powered by low-bypass engines. The Yakovlev Yak-42, a medium-range, rear-engined aircraft seating up to 120 passengers introduced in 1980, was the first Soviet aircraft to use high-bypass engines.
same airflow, bypass ratio, fan pressure ratio, overall pressure ratio and HP turbine rotor inlet temperature), the choice of turbofan configuration has little impact upon the design point performance (e.g. net thrust, SFC), as long as overall component performance is maintained. Off-design performance and stability is, however, affected by engine configuration. As the design overall pressure ratio of an engine cycle increases, it becomes more difficult to throttle the compression system, without encountering an instability known as compressor surge. This occurs when some of the compressor aerofoils stall (like the wings of an aircraft) causing a violent change in the direction of the airflow. However, compressor stall can be avoided, at throttled conditions, by progressively: 1) opening interstage/intercompressor blow-off valves (inefficient) 2) closing variable stators within the compressor Most modern American civil turbofans employ a relatively high pressure ratio High Pressure (HP) Compressor, with many rows of variable stators to control surge margin at part-throttle. In the three-spool RB211/Trent the core compression system is split into two, with the IP compressor, which supercharges the HP compressor, being on a different coaxial shaft and driven by a separate (IP) turbine. As the HP Compressor has a modest pressure ratio it can be throttled-back surge-free, without employing variable geometry. However, because a shallow IP compressor working line is inevitable, the IPC requires at least one stage of variable geometry. Although far from common, the Single Shaft Turbofan is probably the simplest configuration, comprising a fan and high pressure compressor driven by a single turbine unit, all on the same shaft. The SNECMA M53, which powers Mirage fighter aircraft, is an example of a Single Shaft Turbofan. Despite the simplicity of the turbomachinery configuration, the M53 requires a variable area mixer to facilitate part-throttle operation. One of the earliest turbofans was a derivative of the General Electric J79 turbojet, known as the CJ805, which featured an integrated aft fan/low pressure (LP) turbine unit located in the turbojet exhaust jetpipe. Hot gas from the turbojet turbine exhaust expanded through the LP turbine, the fan blades being a radial extension of the turbine blades. This Aft Fan configuration was later exploited in the General Electric GE-36 UDF (propfan) Demonstrator of the early 80's. One of the problems with the Aft Fan configuration is hot gas leakage from the LP turbine to the fan. Many turbofans have the Basic Two Spool configuration where both the fan and LP turbine (i.e. LP spool) are mounted on a second (LP) shaft, running concentrically with the HP spool (i.e. HP compressor driven by HP turbine). The BR710 is typical of this configuration. At the smaller thrust sizes, instead of all-axial blading, the HP compressor configuration may be axial-centrifugal (e.g. General Electric CFE738), double-centrifugal or even diagonal/centrifugal (e.g. Pratt & Whitney Canada PW600). Higher overall pressure ratios can be achieved by either raising the HP compressor pressure ratio or adding an Intermediate Pressure (IP) Compressor between the fan and HP compressor, to supercharge or boost the latter unit helping to raise the overall pressure ratio of the engine cycle to the very high levels employed today (i.e. greater than 40:1, typically). All of the large American turbofans (e.g. 
General Electric CF6, GE90 and GEnx plus Pratt & Whitney JT9D and PW4000) feature an IP compressor mounted on the LP shaft and driven, like the fan, by the LP turbine, the mechanical speed of which is dictated by the tip speed and diameter of the fan. The high bypass ratios (i.e. fan duct flow/core flow) used in modern civil turbofans tends to reduce the relative diameter of the attached IP compressor, causing its mean tip speed to decrease. Consequently more IPC stages are required to develop the necessary IPC pressure rise. Rolls-Royce chose a Three Spool configuration for their large civil turbofans (i.e. the RB211 and Trent families), where the Intermediate Pressure (IP) compressor is mounted on a separate (IP) shaft, running concentrically with the LP and HP shafts, and is driven by a separate IP Turbine. Consequently, the IP compressor can rotate faster than the fan, increasing its mean tip speed, thereby reducing the number of IP stages required for a given IPC pressure rise. Because the RB211/Trent designs have a higher IPC pressure rise than the American engines, the HPC pressure rise is less resulting in a shorter, lighter engine. However, three spool engines are harder to both build and maintain. As bypass ratio increases, the mean radius ratio of the fan and LP turbine increases. Consequently, if the fan is to rotate at its optimum blade speed the LP turbine blading will spin slowly, so additional LPT stages will be required, to extract sufficient energy to drive the fan. Introducing a (planetary) reduction gearbox, with a suitable gear ratio, between the LP shaft and the fan, enables both the fan and LP turbine to operate at their optimum speeds. Typical of this configuration are the long-established Honeywell TFE731, the Honeywell ALF 502/507, and the recent Pratt & Whitney PW1000G. Most of the configurations discussed above are used in civil turbofans, while modern military turbofans (e.g. SNECMA M88) are usually Basic Two Spool. Most civil turbofans use a high efficiency, 2-stage HP turbine to drive the HP compressor. The CFM56 uses an alternative approach: a single stage, high-work unit. While this approach is probably less efficient, there are savings on cooling air, weight and cost. In the RB211 and Trent series, Rolls-Royce split the two stages into two discrete units; one on the HP shaft driving the HP compressor; the other on the IP shaft driving the IP (Intermediate Pressure) Compressor. Modern military turbofans tend to use single stage HP turbines. Modern civil turbofans have multi-stage LP turbines (e.g. 3, 4, 5, 6, 7). The number of stages required depends on the engine cycle bypass ratio and how much supercharging (i.e. IP compression) is on the LP shaft, behind the fan. A geared fan may reduce the number of required LPT stages in some applications. Because of the much lower bypass ratios employed, military turbofans only require one or two LP turbine stages. Consider a mixed turbofan with a fixed bypass ratio and airflow. Increasing the overall pressure ratio of the compression system raises the combustor entry temperature. Therefore, at a fixed fuel flow there is an increase in (HP) turbine rotor inlet temperature. Although the higher temperature rise across the compression system implies a larger temperature drop over the turbine system, the mixed nozzle temperature is unaffected, because the same amount of heat is being added to the system. 
There is, however, a rise in nozzle pressure, because overall pressure ratio increases faster than the turbine expansion ratio, causing an increase in the hot mixer entry pressure. Consequently, net thrust increases, whilst specific fuel consumption (fuel flow/net thrust) decreases. A similar trend occurs with unmixed turbofans.

So turbofans can be made more fuel efficient by raising overall pressure ratio and turbine rotor inlet temperature in unison. However, better turbine materials and/or improved vane/blade cooling are required to cope with increases in both turbine rotor inlet temperature and compressor delivery temperature. Increasing the latter may require better compressor materials.

Overall pressure ratio can be increased by improving fan (or) LP compressor pressure ratio and/or HP compressor pressure ratio. If the latter is held constant, the increase in (HP) compressor delivery temperature (from raising overall pressure ratio) implies an increase in HP mechanical speed. However, stressing considerations might limit this parameter, implying, despite an increase in overall pressure ratio, a reduction in HP compressor pressure ratio.

According to simple theory, if the ratio of turbine rotor inlet temperature to (HP) compressor delivery temperature is maintained, the HP turbine throat area can be retained. However, this assumes that cycle improvements are obtained, whilst retaining the datum (HP) compressor exit flow function (non-dimensional flow). In practice, changes to the non-dimensional speed of the (HP) compressor and cooling bleed extraction would probably make this assumption invalid, making some adjustment to HP turbine throat area unavoidable. This means the HP turbine nozzle guide vanes would have to be different from the original! In all probability, the downstream LP turbine nozzle guide vanes would have to be changed anyway.

Thrust growth is obtained by increasing core power. There are two basic routes available:

a) hot route: increase HP turbine rotor inlet temperature
b) cold route: increase core mass flow

Both routes require an increase in the combustor fuel flow and, therefore, the heat energy added to the core stream. The hot route may require changes in turbine blade/vane materials and/or better blade/vane cooling. The cold route can be obtained in several ways, all of which increase both overall pressure ratio and core airflow. Alternatively, the core size can be increased, to raise core airflow, without changing overall pressure ratio. This route is expensive, since a new (upflowed) turbine system (and possibly a larger IP compressor) is also required. Changes must also be made to the fan to absorb the extra core power.

On a civil engine, jet noise considerations mean that any significant increase in take-off thrust must be accompanied by a corresponding increase in fan mass flow (to maintain a T/O specific thrust of about 30 lbf/lb/s), usually by increasing fan diameter. On military engines, the fan pressure ratio would probably be increased to improve specific thrust, jet noise not normally being an important factor.

The turbine blades in a turbofan engine are subject to high heat and stress, and require special fabrication. New material construction methods and material science have allowed blades, which were originally polycrystalline (regular metal), to be made from aligned metallic crystals and more recently mono-crystalline (i.e. single crystal) blades, which can operate at higher temperatures with less distortion.
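As a very rough illustration of why raising overall pressure ratio improves core thermal efficiency, as discussed above, the following sketch evaluates the ideal Brayton-cycle relation eta_th = 1 - OPR^(-(gamma - 1)/gamma). This is a textbook idealization, not something taken from the article: it ignores component losses, cooling bleeds, and real-gas effects, so real engines sit well below these numbers, but the trend with OPR is the point.

    # Very rough illustration of why raising overall pressure ratio (OPR) improves
    # core thermal efficiency, using the ideal Brayton-cycle result
    # eta_th = 1 - OPR**(-(gamma - 1)/gamma). Textbook simplification only.

    GAMMA = 1.4   # ratio of specific heats for air

    def ideal_thermal_efficiency(opr):
        return 1.0 - opr ** (-(GAMMA - 1.0) / GAMMA)

    for opr in (10, 20, 30, 40, 50):
        print(f"OPR {opr:2d}: ideal thermal efficiency ~ {ideal_thermal_efficiency(opr):.0%}")

The ideal figures climb steadily but with diminishing returns at higher pressure ratios, which is consistent with the text's point that further gains increasingly depend on better materials and cooling rather than on pressure ratio alone.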
Nickel-based superalloys are used for HP turbine blades in almost all of the modern jet engines. The temperature capabilities of turbine blades have increased mainly through four approaches: the manufacturing (casting) process, cooling path design, thermal barrier coating (TBC), and alloy development. Although turbine blade (and vane) materials have improved over the years, much of the increase in (HP) turbine inlet temperatures is due to improvements in blade/vane cooling technology. Relatively cool air is bled from the compression system, bypassing the combustion process, and enters the hollow blade or vane. After picking up heat from the blade/vane, the cooling air is dumped into the main gas stream. If the local gas temperatures are low enough, downstream blades/vanes are uncooled and solid. Strictly speaking, cycle-wise the HP Turbine Rotor Inlet Temperature (after the temperature drop across the HPT stator) is more important than the (HP) turbine inlet temperature. Although some modern military and civil engines have peak RITs of the order of 3300 °R (2840 °F) or 1833 K (1560 °C), such temperatures are only experienced for a short time (during take-off) on civil engines. The turbofan engine market is dominated by General Electric, Rolls-Royce plc and Pratt & Whitney, in order of market share. GE and SNECMA of France have a joint venture, CFM International which, as the 3rd largest manufacturer in terms of market share, fits between Rolls Royce and Pratt & Whitney. Rolls Royce and Pratt & Whitney also have a joint venture, International Aero Engines, specializing in engines for the Airbus A320 family, whilst finally, Pratt & Whitney and General Electric have a joint venture, Engine Alliance marketing a range of engines for aircraft such as the Airbus A380. Williams International is the world leader in smaller business jet turbofans. GE Aviation, part of the General Electric Conglomerate, currently has the largest share of the turbofan engine market. Some of their engine models include the CF6 (available on the Boeing 767, Boeing 747, Airbus A330 and more), GE90 (only the Boeing 777) and GEnx (developed for the Boeing 747-8 & Boeing 787 and proposed for the Airbus A350, currently in development) engines. On the military side, GE engines power many U.S. military aircraft, including the F110, powering 80% of the US Air Force's F-16 Fighting Falcons and the F404 and F414 engines, which power the Navy's F/A-18 Hornet and Super Hornet. Rolls Royce and General Electric are jointly developing the F136 engine to power the Joint Strike Fighter. CFM International is a joint venture between GE Aircraft Engines and SNECMA of France. They have created the very successful CFM56 series, used on Boeing 737, Airbus A340, and Airbus A320 family aircraft. Rolls-Royce plc is the second largest manufacturer of turbofans and is most noted for their RB211 and Trent series, as well as their joint venture engines for the Airbus A320 and Boeing MD-90 families (IAE V2500 with Pratt & Whitney and others), the Panavia Tornado (Turbo-Union RB199) and the Boeing 717 (BR700). Rolls Royce, as owners of the Allison Engine Company, have their engines powering the C-130 Hercules and several Embraer regional jets. Rolls-Royce Trent 970s were the first engines to power the new Airbus A380. It was also Rolls-Royce Olympus/SNECMA jets that powered the now retired Concorde although they were turbojets rather than turbofans. 
The famous thrust vectoring Pegasus engine is the primary powerplant of the Harrier "Jump Jet" and its derivatives.

Pratt & Whitney is third behind GE and Rolls-Royce in market share. The JT9D has the distinction of being chosen by Boeing to power the original Boeing 747 "Jumbo jet". The PW4000 series is the successor to the JT9D, and powers some Airbus A310, Airbus A300, Boeing 747, Boeing 767, Boeing 777, Airbus A330 and MD-11 aircraft. The PW4000 is certified for 180-minute ETOPS when used in twinjets. The first family has a 94 inch fan diameter and is designed to power the Boeing 767, Boeing 747, MD-11, and the Airbus A300. The second family is the 100 inch (2.5 m) fan engine developed specifically for the Airbus A330 twinjet, and the third family has a diameter of 112 inches, designed to power the Boeing 777. The Pratt & Whitney F119 and its derivative, the F135, power the United States Air Force's F-22 Raptor and the international F-35 Lightning II, respectively. Rolls Royce are responsible for the lift fan which will provide the F-35B variants with a STOVL capability. The F100 engine was first used on the F-15 Eagle and F-16 Fighting Falcon. Newer Eagles and Falcons also come with the GE F110 as an option, and the two are in competition.

Aviadvigatel (Russian: Авиационный Двигатель) is the Russian aircraft engine company that succeeded the Soviet Soloviev Design Bureau. They have one engine on the market, the Aviadvigatel PS-90. The engine is used on the Ilyushin Il-96-300, -400, T, Tupolev Tu-204, Tu-214 series and the Ilyushin Il-76-MD-90. Later, the company changed its name to Perm Engine Company.

Ivchenko-Progress is the Ukrainian aircraft engine company that succeeded the Soviet Ivchenko Design Bureau. Some of their engine models include the Progress D-436, available on the Antonov An-72/74, Yakovlev Yak-42, Beriev Be-200, Antonov An-148 and Tupolev Tu-334, and the Progress D-18T, which powers two of the world's largest airplanes, the Antonov An-124 and Antonov An-225.

In the 1970s Rolls-Royce/SNECMA tested an M45SD-02 turbofan fitted with variable pitch fan blades to improve handling at ultra low fan pressure ratios and to provide thrust reverse down to zero aircraft speed. The engine was aimed at ultra quiet STOL aircraft operating from city centre airports.

In a bid for increased efficiency with speed, a development of the turbofan and turboprop known as a propfan engine was created that had an unducted fan. The fan blades are situated outside of the duct, so that it appears like a turboprop with wide scimitar-like blades. Both General Electric and Pratt & Whitney/Allison demonstrated propfan engines in the 1980s. Excessive cabin noise and relatively cheap jet fuel prevented the engines from being put into service.

The Unicode standard includes a turbofan character, U+274B, in the Dingbats range. Its official name is "HEAVY EIGHT TEARDROP-SPOKED PROPELLER ASTERISK", with the annotation "= turbofan".
http://www.thefullwiki.org/Turbofan
13
12
The Lorentz transformation (LT), named after its discoverer, the Dutch physicist and mathematician Hendrik Antoon Lorentz (1853-1928), forms the basis for the special theory of relativity, which was introduced to remove contradictions between the theories of electromagnetism and classical mechanics. Under these transformations, the speed of light is the same in all reference frames, as postulated by special relativity. Although the equations are associated with special relativity, they were developed before special relativity and were proposed by Lorentz in 1904 as a means of explaining the Michelson-Morley experiment through contraction of lengths. This is in contrast to the more intuitive Galilean transformation, which is sufficient at non-relativistic speeds. It can be used (for example) to calculate how a particle trajectory looks if viewed from an inertial reference frame that is moving with constant velocity (with respect to the initial reference frame). It replaces the earlier Galilean transformation. The velocity of light, c, enters as a parameter in the Lorentz transformation. If v is low enough with respect to c then γ ≈ 1, and the Galilean transformation is recovered, so it may be identified as a limiting case. The Lorentz transformation is a group transformation that is used to transform the space and time coordinates (or in general any four-vector) of one inertial reference frame, S, into those of another one, S', with S' traveling at a relative speed of v to S along the x-axis. If an event has space-time coordinates of (t,x,y,z) in S and (t',x',y',z') in S', then these are related according to the Lorentz transformation in the following way: - t' = γ(t - vx/c²) - x' = γ(x - vt) - y' = y - z' = z where γ = 1/√(1 - v²/c²) is the Lorentz factor. The four equations above can be expressed together in matrix form: with coordinates (t, x, y, z) the boost acts on (t, x) through the block [[γ, -γv/c²], [-γv, γ]], or alternatively, with coordinates (ct, x, y, z) and β = v/c, through the symmetric block [[γ, -γβ], [-γβ, γ]]; the y and z coordinates are untouched in both forms. The first matrix formulation has the advantage of being easily seen to collapse to the Galilean transformation in the limit c → ∞. The second matrix formulation has the advantage of being easily seen to preserve the spacetime interval ds² = (c dt)² - dx² - dy² - dz², which is a fundamental invariant in special relativity. These equations only work if v is pointed along the x-axis of S. In cases where v does not point along the x-axis of S, it is generally easier to perform a rotation so that v does point along the x-axis of S than to bother with the general case of the Lorentz transformation. For a boost in an arbitrary direction it is convenient to decompose the spatial vector r into components perpendicular and parallel to the velocity v: r = r⊥ + r∥. Only the component in the direction of v is warped by the factor γ: t' = γ(t - r·v/c²), r∥' = γ(r∥ - vt), r⊥' = r⊥. These equations can also be expressed in matrix form, with spatial block I + (γ - 1)vvᵀ/v². Another restriction of the above transformation is that the "position" of the origins must coincide at 0. What this means is that (0,0,0,0) in frame S must be the same as (0,0,0,0) in S'. A generalization of Lorentz transformations that relaxes this restriction is the Poincaré transformations. More generally, a transformation X' = ΛX, where Λ is any 4 × 4 matrix that preserves the spacetime interval and X is the 4-vector describing spacetime displacements, is the most general Lorentz transformation. Matrices Λ defined in this way form a representation of the group SO(3,1), also known as the Lorentz group. Lorentz discovered in 1900 that the transformation preserved Maxwell's equations. Lorentz believed the luminiferous aether hypothesis; it was Albert Einstein who developed the theory of relativity to provide a proper foundation for its application. The Lorentz transformations were first published in 1904, but their formalism was at the time imperfect.
Henri Poincaré, the French mathematician, revised Lorentz's formalism to make the four equations into the coherent, self-consistent whole we know today.
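As a quick numeric illustration of the boost equations above, the short Python sketch below applies a one-dimensional Lorentz boost to an event and checks that the spacetime interval is preserved. The sample event and the velocity 0.6c are arbitrary values chosen only for the example.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_boost_x(t, x, y, z, v):
    """Boost an event (t, x, y, z) into a frame moving at speed v along +x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2), gamma * (x - v * t), y, z

def interval(t, x, y, z):
    """Spacetime interval s^2 = (ct)^2 - x^2 - y^2 - z^2."""
    return (C * t) ** 2 - x**2 - y**2 - z**2

event = (1.0, 2.0e8, 0.0, 0.0)                  # t in seconds, x in metres
boosted = lorentz_boost_x(*event, v=0.6 * C)
print(boosted)
print(interval(*event), interval(*boosted))     # equal up to floating-point rounding
```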
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Lorentz_transformation
13
21
Logical Reasoning is our guide to good decisions. It is also a guide to sorting out truth from falsehood. Like every subject, Logic has its own vocabulary and it is important that you understand the meanings of some important words/terms on which problems are usually framed in the Common Admission Test. Once you have become familiar with the vocabulary of Logic, it will be imperative that you also understand some rules/principles on which questions can be solved. Some of the important types and styles of problems in logic are: a. Problems based on ‘Propositions and their Implications’ These problems typically have a proposition followed by either a deductive or an inductive argument. An argument has a minimum of two statements — a premise and a conclusion, in any order.It may also have more than one premise (statement) and more than one conclusion. The information in the premise(s) either makes the conclusion weak or makes it strong. The examinee is usually required to: i. identify the position of the premise(s) vis-à-vis the conclusion, that is, is the premise weakening or strengthening the conclusion ii. identify if the conclusion drawn based on the premise(s) is correct or incorrect iii. identify if only one conclusion follows, either conclusion follows, or neither conclusion follows, or both the conclusions follow (assuming the problem has two premises and two conclusions) iv. identify an option in which the third statement is implied by the first two statements; this type of question is called a — Syllogism v. identify the correct ordered pair where the first statement implies the second statement and both these statements are logically consistent with the main proposition (assuming, each question has a main proposition followed by four statements A, B, C, D) vi. identify the set in which the statements are most logically related (assuming, each question has six statements and there are four options listed as —sets of combinations of three statements ABD, ECF, ABF, BCE etc.) vii. identify the option where the third segment can be logically deduced from the preceding two (assuming, each question has a set of four statements and each of these statements has three segment, for example: A. Tedd is beautiful; Robo is beautiful too; Ted is Robo. B. Some apples are guavas; Some guavas are oranges; Oranges are apples. C. Tedd is beautiful; Robo is beautiful too; Tedd may be Robo. D. Apples are guavas; Guavas are oranges; Oranges are grapes. (a) Only C (b) Only A (c) A and C (d) Only B The answer to the above question is option (c) The above is in no way an exhaustive list of problems on logic, but it gives a fair view of the types and styles of questions that one may face.
http://www.jagranjosh.com/articles/cat-logical-reasoning-format-syllabus-and-types-of-problem-1338317908-1
13
17
Part 3: More on the Tolerance Interval In this article series, we continue to develop and review the statistical interval concept focusing on the tolerance interval. We also continue to use the example from Parts 1 and 2 demonstrating the idea of a tolerance interval to show a direct comparison. Q: What is a tolerance interval? A: A tolerance interval is an interval constructed using a set of sample data so as to contain a specified proportion of all future observation(s) with some stated confidence. Another way to state this is to say that at least 100p percent of a population will fall within the tolerance interval with some stated confidence (0 < p < 1). Recall that the prediction type interval is only used for forecasting several (say k ≥ 1) future observations. Note the difference; the tolerance interval is stating that at least some proportion p of all future observations will be contained within the interval. We continue to require that all samples used for this analysis were selected under the same conditions and from the same population or process and that the sample was random, or the process was in a state of statistical control. In the same manner as was used for prediction, tolerance intervals can be constructed for one-sided as well as two-sided interval cases. The two-sided case takes the form of the number pair [L, U], where we state that at least some proportion, p, from the future output of the process or the population will fall within this interval. For one-sided cases, we use one of the forms (-∞, U] or [L, ∞). We can also use a specified distribution such as the normal or the exponential, or we can construct nonparametric tolerance intervals where we are uncertain about the distribution that may apply. In addition, we can have additional variations as, for example, when we know the value of a parameter such as a standard deviation in the normal case. Suppose we have a random sample of n observations X1, X2, ..., Xn, and we assume the data came from a normal distribution. It is very common for practitioners to construct intervals of the form x̄ ± S, x̄ ± 2S and x̄ ± 3S. Such intervals are claimed to capture or include 68, 95 and 99.7 percent of the population, respectively, which the data represent. Most often, there is no discussion of sample size or confidence in the stated result. The interval is just used as-is. What the novice may fail to understand is that this interval is only true when we use the true mean μ and standard deviation σ. We really should be using μ ± kσ when we make such a claim. In practice, we never know the values of μ and σ, and so these must be estimated by the sample average x̄ and standard deviation S. In sample estimation there is uncertainty in any stated interval such as the ones described above. This uncertainty arises due to the fact that we are using statistics in the interval construction and that a small sample size may have been used. Tolerance intervals were invented to remove this uncertainty. For the normal distribution case, the two-sided tolerance interval construction is of the form x̄ ± kS (Equation 1), where k is a function of the sample size n, the confidence C and the minimum proportion p, claimed to be captured by the interval. The mathematics for determining the value of k for unknown μ and σ is complicated and must be done using numerical analysis. Approximation formulas are available, however, and many texts use such formulas. The interested reader should consult References 1 through 3 for details. Table 1 contains such factors for selected common values of C, p and n. 
Table 1 — Factors (k) for Constructing Tolerance Limits for a Normal Distribution For the one-sided case where σ can be assumed known, we do have a closed form formula that only requires a table of the standard normal distribution. We illustrate this for the lower bound case. The form of the lower bound is x̄ - kσ. The value of k is given by k = Z0 + ZC/√n (Equation 2). For confidence level C and proportion captured p, Z0 and ZC are determined using the standard normal distribution where P(Z > Z0) = 1 - p and P(Z < ZC) = C. For example, with C = 0.95 and p = 0.99, we find Z0 = 2.326 and ZC = 1.64. If a sample of n = 12 is used, k is determined using Equation 2 as k = 2.80. Then if a sample of n = 12 is taken we can be 95 percent confident that the population that produced the data has at least 99 percent of its future output at or above x̄ - 2.80σ. We turn next to an illustration where both μ and σ are unknown. In Parts 1 and 2 of this series, we considered n = 22 tensile adhesion test results made on U-700 alloy specimens. The load at failure had x̄ = 13.71 and S = 3.55, which produced a 95 percent confidence interval for μ of 12.14 ≤ μ ≤ 15.28. We now want to determine a tolerance interval for the load at failure, which will include 90 percent of the population values with 95 percent confidence. We can use Table 1 to find k for n = 22, p = 0.90 and confidence C = 0.95. The value of k is 2.264. Using Equation 1, the desired tolerance interval is x̄ ± kS = 13.71 ± 2.264(3.55), or [5.67, 21.75]. Our interpretation of this tolerance interval is that we can be 95 percent confident that at least 90 percent of the load at failure values for the U-700 alloy will lie between 5.67 and 21.75 megapascals. This tolerance interval is much wider than the 95 percent confidence interval for the mean. It is also interesting to note that as n ➝ ∞, the k value approaches the Z value corresponding to the desired level of p for the normal distribution. As an example, suppose we desire p = 0.90 for a two-sided tolerance interval. In this case, k approaches Z1-(1-p)/2 = Z0.95 = 1.645 as n ➝ ∞. In fact, as n ➝ ∞, the 100p percent prediction interval for a (k = 1) future value approaches the tolerance interval that contains 100p percent of the distribution. To illustrate this, in Part 2, the 95 percent prediction interval for a future value was found to be 6.16 ≤ X23 ≤ 21.26, which is slightly smaller in width than the 95 percent tolerance interval shown in this article for n = 22 values. 1. Natrella, Mary Gibbons, Experimental Statistics, Handbook 91, U.S. Department of Commerce, 1963. 2. NIST/SEMATECH, e-Handbook of Statistical Methods. 3. Proschan, Frank, "Confidence and Tolerance Intervals for the Normal Distribution," Journal of the American Statistical Association, Vol. 48, No. 263, 1953. Stephen N. Luko, Hamilton Sundstrand, Windsor Locks, Conn., is the immediate past chairman of Committee E11 on Quality and Statistics, and a fellow of ASTM International. Dean V. Neubauer, Corning Inc., Corning, N.Y., is an ASTM fellow; he serves as vice chairman of Committee E11 on Quality and Statistics, chairman of Subcommittee E11.30 on Statistical Quality Control, chairman of E11.90.03 on Publications and coordinator of the DataPoints column. Statistics play an important role in the ASTM International standards you write, and a panel of experts is ready to answer your questions about how to use statistical principles in ASTM standards. Please send your questions to SN Editor in Chief Maryann Gorman at [email protected] or ASTM International, 100 Barr Harbor Drive, P.O. 
Box C700, West Conshohocken, PA 19428-2959. This article appears in the issue of Standardization News.
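For readers who want to reproduce the arithmetic in the worked example above, here is a short Python sketch that recomputes the one-sided k for known σ (Equation 2) and the two-sided interval for the U-700 data. The two-sided factor k = 2.264 is taken from Table 1 rather than computed, since the exact two-sided factor requires numerical analysis.

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

# One-sided lower bound with sigma known (Equation 2): k = Z0 + ZC / sqrt(n)
C, p, n = 0.95, 0.99, 12
k_one_sided = z(p) + z(C) / sqrt(n)
print(round(k_one_sided, 2))             # ~2.80, matching the article

# Two-sided tolerance interval for the U-700 tensile data (k from Table 1)
xbar, s, k_two_sided = 13.71, 3.55, 2.264
lower, upper = xbar - k_two_sided * s, xbar + k_two_sided * s
print(round(lower, 2), round(upper, 2))  # 5.67 and 21.75 MPa
```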
http://www.astm.org/standardization-news/data-points/statistical-intervals-part-3-nd11.html
13
12
Seaward border of a continental shelf. The world's combined continental slope is about 200,000 mi (300,000 km) long and descends at an average angle of about 4° from the edge of the continental shelf to the beginning of the ocean basins at depths of 330–10,500 ft (100–3,200 m). The slope is most gradual off stable coasts without major rivers and is steepest off coasts with young mountain ranges and narrow continental shelves. Slopes off mountainous coastlines and narrow shelves commonly have outcrops of rock. The dominant sediments of continental slopes are muds; there are smaller amounts of sediments of sand or gravel. The continental shelf is the extended perimeter of each continent and associated coastal plain, which is covered during interglacial periods such as the current epoch by relatively shallow seas (known as shelf seas) and gulfs. The continental rise is below the slope, but landward of the abyssal plains. Its gradient is intermediate between the slope and the shelf, on the order of 0.5-1°. Extending as far as 500 km from the slope, it consists of thick sediments deposited by turbidity currents from the shelf and slope. Sediment cascades down the slope and accumulates as a pile of sediment at the base of the slope, called the continental rise. Though the continental shelf is treated as a physiographic province of the ocean, it is not part of the deep ocean basin proper, but the flooded margins of the continent. Passive continental margins such as most of the Atlantic coasts have wide and shallow shelves, made of thick sedimentary wedges derived from long erosion of a neighboring continent. Active continental margins have narrow, relatively steep shelves, due to frequent earthquakes that move sediment to the deep sea. Sediments usually become increasingly fine with distance from the coast; sand is limited to shallow, wave-agitated waters, while silt and clays are deposited in quieter, deep water far offshore. These shelf sediments accumulate at an average rate of 30 cm/1000 years, with a range from 15-40 cm. Though slow by human standards, this rate is much faster than that for deep-sea pelagic sediments. Combined with the sunlight available in shallow waters, the continental shelves teem with life compared to the biotic desert of the oceans' abyssal plain. The pelagic (water column) environment of the continental shelf constitutes the neritic zone, and the benthic (sea floor) province of the shelf is the sublittoral zone. The relatively accessible continental shelf is the best understood part of the ocean floor. Most commercial exploitation from the sea, such as metallic-ore, non-metallic ore, and hydrocarbon extraction, takes place on the continental shelf. Sovereign rights over their continental shelves up to 350 nautical miles from the coast were claimed by the marine nations that signed the Convention on the Continental Shelf drawn up by the UN's International Law Commission in 1958 partly superseded by the 1982 United Nations Convention on the Law of the Sea.
http://www.reference.com/browse/continental+slope
13
12
(MeSH) Double-stranded DNA of MITOCHONDRIA. In eukaryotes, the mitochondrial GENOME is circular and codes for ribosomal RNAs, transfer RNAs, and about 10 proteins. Mitochondrial DNA (mtDNA or mDNA) is the DNA located in organelles called mitochondria, structures within eukaryotic cells that convert the chemical energy from food into a form that cells can use, adenosine triphosphate (ATP). Most other DNA present in eukaryotic organisms is found in the cell nucleus. Mitochondrial DNA can be regarded as the smallest chromosome. Human mitochondrial DNA was the first significant part of the human genome to be sequenced. In most species, including humans, mtDNA is inherited solely from the mother. The DNA sequence of mtDNA has been determined from a large number of organisms and individuals (including some organisms that are extinct), and the comparison of those DNA sequences represents a mainstay of phylogenetics, in that it allows biologists to elucidate the evolutionary relationships among species. It also permits an examination of the relatedness of populations, and so has become important in anthropology and field biology. Nuclear and mitochondrial DNA are thought to be of separate evolutionary origin, with the mtDNA being derived from the circular genomes of the bacteria that were engulfed by the early ancestors of today's eukaryotic cells. This theory is called the endosymbiotic theory. Each mitochondrion is estimated to contain 2-10 mtDNA copies. In the cells of extant organisms, the vast majority of the proteins present in the mitochondria (numbering approximately 1500 different types in mammals) are coded for by nuclear DNA, but the genes for some of them, if not most, are thought to have originally been of bacterial origin, having since been transferred to the eukaryotic nucleus during evolution. In most multicellular organisms, mtDNA is inherited from the mother (maternally inherited). Mechanisms for this include simple dilution (an egg contains 100,000 to 1,000,000 mtDNA molecules, whereas a sperm contains only 100 to 1000), degradation of sperm mtDNA in the fertilized egg, and, at least in a few organisms, failure of sperm mtDNA to enter the egg. Whatever the mechanism, this single parent (uniparental) pattern of mtDNA inheritance is found in most animals, most plants and in fungi as well. In sexual reproduction, mitochondria are normally inherited exclusively from the mother. The mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. Also, most mitochondria are present at the base of the sperm's tail, which is used for propelling the sperm cells. Sometimes the tail is lost during fertilization. In 1999 it was reported that paternal sperm mitochondria (containing mtDNA) are marked with ubiquitin to select them for later destruction inside the embryo. Some in vitro fertilization techniques, particularly injecting a sperm into an oocyte, may interfere with this. The fact that mitochondrial DNA is maternally inherited enables researchers to trace maternal lineage far back in time. (Y-chromosomal DNA, paternally inherited, is used in an analogous way to trace the agnate lineage.) 
This is accomplished on human mitochondrial DNA by sequencing one or more of the hypervariable control regions (HVR1 or HVR2) of the mitochondrial DNA, as with a genealogical DNA test. HVR1 consists of about 440 base pairs. These 440 base pairs are then compared to the control regions of other individuals (either specific people or subjects in a database) to determine maternal lineage. Most often, the comparison is made to the revised Cambridge Reference Sequence. Vilà et al. have published studies tracing the matrilineal descent of domestic dogs to wolves. The concept of the Mitochondrial Eve is based on the same type of analysis, attempting to discover the origin of humanity by tracking the lineage back in time. Because mtDNA is not highly conserved and has a rapid mutation rate, it is useful for studying the evolutionary relationships - phylogeny - of organisms. Biologists can determine and then compare mtDNA sequences among different species and use the comparisons to build an evolutionary tree for the species examined. Because mtDNA is transmitted from mother to child (both male and female), it can be a useful tool in genealogical research into a person's maternal line. It has been reported that mitochondria can occasionally be inherited from the father in some species such as mussels. Paternally inherited mitochondria have additionally been reported in some insects such as fruit flies, honeybees, and periodical cicadas. Evidence supports rare instances of male mitochondrial inheritance in some mammals as well. Specifically, documented occurrences exist for mice, where the male-inherited mitochondria was subsequently rejected. It has also been found in sheep, and in cloned cattle. It has been found in a single case in a human male. While many of these cases involve cloned embryos or subsequent rejection of the paternal mitochondria, others document in vivo inheritance and persistence under lab conditions. In most multicellular organisms the mtDNA is organized as a circular, covalently closed, double-stranded DNA. But in many unicellular (e.g. the ciliate Tetrahymena or the green alga Chlamydomonas reinhardtii) and in rare cases also in multicellular organisms (e.g. in some species of Cnidaria) the mtDNA is found as linearly organized DNA. Most of these linear mtDNA's possess telomerase independent telomers (i.e. the ends of the linear DNA) with different modes of replication, which have made them interesting objects of research, as many of these unicellular organisms with linear mtDNA are known pathogens. For human mitochondrial DNA (and probably for that of metazoans in general), 100-10,000 separate copies of mtDNA are usually present per cell (egg and sperm cells are exceptions). In mammals, each double-stranded circular mtDNA molecule consists of 15,000-17,000 base pairs. The two strands of mtDNA are differentiated by their nucleotide content with the guanine rich strand referred to as the heavy strand, and the cytosine rich strand referred to as the light strand. The heavy strand encodes 28 genes, and the light strand encodes 9 genes for a total of 37 genes. Of the 37 genes, 13 are for proteins (polypeptides), 22 are for transfer RNA (tRNA) and two are for the small and large subunits of ribosomal RNA (rRNA). This pattern is also seen among most metazoans, although in some cases one or more of the 37 genes is absent and the mtDNA size range is greater. 
Even greater variation in mtDNA gene content and size exists among fungi and plants, although there appears to be a core subset of genes that are present in all eukaryotes (except for the few that have no mitochondria at all). Some plant species have enormous mtDNAs (as many as 2,500,000 base pairs per mtDNA molecule) but, surprisingly, even those huge mtDNAs contain the same number and kinds of genes as related plants with much smaller mtDNAs. The genome of the mitochondrion of the cucumber (Cucumis sativus) consists of three circular chromosomes (lengths 1556, 84 and 45 kilobases), which are entirely or largely autonomous with regard to their replication. mtDNA is replicated by the DNA polymerase gamma complex which is composed of a 140 kDa catalytic DNA polymerase encoded by the POLG gene and a 55 kDa accessory subunit encoded by the POLG2 gene. During embryogenesis, replication of mtDNA is strictly down-regulated from the fertilized oocyte through the preimplantation embryo. At the blastocyst stage, the onset of mtDNA replication is specific to the cells of the trophectoderm. In contrast, the cells of the inner cell mass restrict mtDNA replication until they receive the signals to differentiate to specific cell types. mtDNA is particularly susceptible to reactive oxygen species generated by the respiratory chain due to its proximity. Though mtDNA is packaged by proteins and harbors significant DNA repair capacity, these protective functions are less robust than those operating on nuclear DNA and are therefore thought to contribute to enhanced susceptibility of mtDNA to oxidative damage. The outcome of mutation in mtDNA may be alteration in the coding instructions for some proteins, which may have an effect on organism metabolism and/or fitness. Mutations of mitochondrial DNA can lead to a number of illnesses including exercise intolerance and Kearns-Sayre syndrome (KSS), which causes a person to lose full function of heart, eye, and muscle movements. Some evidence suggests that they might be major contributors to the aging process and age-associated pathologies. Unlike nuclear DNA, which is inherited from both parents and in which genes are rearranged in the process of recombination, there is usually no change in mtDNA from parent to offspring. Although mtDNA also recombines, it does so with copies of itself within the same mitochondrion. Because of this and because the mutation rate of animal mtDNA is higher than that of nuclear DNA, mtDNA is a powerful tool for tracking ancestry through females (matrilineage) and has been used in this role to track the ancestry of many species back hundreds of generations. The low effective population size and rapid mutation rate (in animals) makes mtDNA useful for assessing genetic relationships of individuals or groups within a species and also for identifying and quantifying the phylogeny (evolutionary relationships; see phylogenetics) among different species, provided they are not too distantly related. To do this, biologists determine and then compare the mtDNA sequences from different individuals or species. Data from the comparisons is used to construct a network of relationships among the sequences, which provides an estimate of the relationships among the individuals or species from which the mtDNAs were taken. This approach has limits that are imposed by the rate of mtDNA sequence change. 
In animals, the high mutation rate makes mtDNA most useful for comparisons of individuals within species and for comparisons of species that are closely or moderately-closely related, among which the number of sequence differences can be easily counted. As the species become more distantly related, the number of sequence differences becomes very large; changes begin to accumulate on changes until an accurate count becomes impossible. Mitochondrial DNA was discovered in the 1960s by Margit M. K. Nass and Sylvan Nass by electron microscopy as DNase-sensitive thread inside mitochondria, and by Ellen Haslbrunner, Hans Tuppy and Gottfried Schatz by biochemical assays on highly purified mitochondrial fractions.
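The genealogical comparison described above, counting the positions at which a sampled HVR1 region differs from a reference such as the revised Cambridge Reference Sequence, is simple to sketch in code. The sequences below are made-up stand-ins, not real mtDNA data; the point is only the mechanics of the comparison.

```python
def hvr1_differences(sample: str, reference: str):
    """Return the positions (1-based) where two aligned HVR1 sequences differ."""
    if len(sample) != len(reference):
        raise ValueError("sequences must be pre-aligned to the same length")
    return [i + 1 for i, (a, b) in enumerate(zip(sample, reference)) if a != b]

# Toy, pre-aligned fragments standing in for a sample and the reference (rCRS)
reference = "CACAGGTCTATCACCCTATTAACCACTCACG"
sample    = "CACAGGTCTATCATCCTATTAACCACTTACG"

diffs = hvr1_differences(sample, reference)
print(diffs)                                           # [14, 28] -> two mismatches
print(len(diffs), "differences out of", len(reference), "positions")
```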
http://dictionnaire.sensagent.com/Mitochondrial_DNA/en-en/
13
21
The next part of designing your science fair project is to run your study. First do a trial run to check your procedure. Then start collecting data. (This article is the seventh in a series of 10 which walks a student and his parent through creating a science fair project. See the end of this post for links to the other articles. ) Run Your Study You have worked hard to get this far in your project. Now it is time for the fun part. First you need do a trial run of your procedure. The purpose of a trial run is to check your procedure. You need to make sure it works as planned before you begin recording data. If there are any problems during your trial run, you can adjust your procedure. After you have worked out any problems in your procedure, you can run your study and begin collecting your data. You should follow your procedure without changing it. You can record data in your log book. Have somebody take lots of pictures of you running your procedure. There are many limits on what can be included on your display, but you can always use lots of pictures to show what you did. It is better to have many pictures to choose from than to wish you had taken a picture later when you are creating your display. In general, the more data you collect, the better. So if you are doing a collection, collect as many specimens as you can. If you are doing an experiment, run the experiment several times to see if you get the same results each time. If you are doing a model, make lots of notes in your log book about how you designed and constructed the model. If you are doing an observation, do as many trials of your procedure as is practical. If you are doing an invention, do plenty of trials to see if your invention really meets your specifications. Your assignment for this step is to do a dry run to perfect your procedure and then begin collecting data. Don’t forget to get lots of photographs. Note to Parents For this step you will be a mentor. Check the procedure with your child and use questions to help the student see any problem areas. Remember not to tell the child what to do. Just use questions to guide such as “Do you need another step before you ___________?” or “Is that step you just performed written in your procedure?” Remember to let the student make the final decisions, even if you disagree.This article is part of the Creating a Science Fair Project series. See the list below for links to the other articles in this series. - Creating a Science Fair Project Step 1 - The Log Book - Creating a Science Fair Project Step 2 – Choosing a Topic - Creating a Science Fair Project Step 3 – Collect Information - Creating a Science Fair Project Step 4 – Problem and Hypothesis - Creating a Science Fair Project Step 5 – DesignType Category - Creating a Science Fair Project Step 6 – The Procedure - Creating a Science Fair Project Step 7 – Run Your Study - Creating a Science Fair Project Step 8 – Analyze and Interpret Your Results - Creating a Science Fair Project Step 9 – Arrive at a Conclusion - Creating a Science Fair Project Step 10 – Create Your Display
http://the-science-mom.com/16/creating-a-science-fair-project-step-7-run-your-study/
13
29
Identities are equations true for any value of the variable. Since a right triangle drawn in the unit circle has a hypotenuse of length 1, we define the trigonometric identities x = cos(theta) and y = sin(theta). In the same triangle, tan(theta) = y/x, so substituting we get tan(theta) = sin(theta)/cos(theta), the tangent identity. Another key trigonometric identity, sin²(theta) + cos²(theta) = 1, comes from using the unit circle and the Pythagorean Theorem. I want to talk about trigonometric identities. Now recall an identity is an equation that is true for all applicable values of the variable. Here are 2 examples. x squared minus 1, the difference of squares, is x minus 1 times x plus 1, and ln of e to the x equals x. I want to find some trigonometric identities and the unit circle can help me out. So I've got the unit circle drawn over here. Remember the unit circle is the circle with equation x squared plus y squared equals 1. So it's got radius 1, centered at the origin, and the sine and cosine are defined in the following way: x equals the cosine of theta and y equals the sine of theta. And remember the tangent of theta is y over x, and therefore tangent equals sine over cosine. Sine over cosine. Note also that combining these 2 sets of information, we get the Pythagorean identity, cosine squared theta plus sine squared theta equals 1. So these are going to be really important identities and this idea of using the unit circle to derive new identities is what we're going to use in upcoming episodes.
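A quick numeric check of the two identities on the unit circle, at a few arbitrary angles (chosen away from the points where cos(theta) = 0 so the tangent is defined):

```python
import math

for theta in (0.3, 1.1, 2.0, 3.8):                 # arbitrary angles in radians
    x, y = math.cos(theta), math.sin(theta)        # point on the unit circle
    assert math.isclose(x**2 + y**2, 1.0)          # Pythagorean identity
    assert math.isclose(math.tan(theta), y / x)    # tangent identity
print("both identities hold at every sampled angle")
```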
http://brightstorm.com/math/precalculus/trigonometric-functions/trigonometric-identities/
13
13
During their formation, many planetary bodies in our solar system melted significantly, allowing denser materials to sink to their centers in a process known as differentiation. But how widespread this process was when it came to another class of early solar system body, asteroids, remains unclear. New findings published in the latest issue of the journal Nature suggest that for at least two of our solar system's major asteroids, melting was dramatic. By measuring the types and amounts of different isotopes present in a range of meteorites, Richard C. Greenwood of Open University in Milton Keynes, U.K. and his colleagues reconstructed their histories. The studied samples were formed from their parent asteroids--the 530-kilometer wide Vesta and a second unnamed asteroid--more than four billion years ago. The researchers determined that all the meteorites from Vesta exhibit the same ratio of oxygen isotopes, as did the meteorites from the second source. The find suggests that both asteroids experienced widespread melting with more than 50 percent of each object becoming liquid. In the magma oceans, other elements in the asteroids would have separated out according to mass, the researchers report. The resulting layered composition of such an asteroid could have contributed to the uneven distribution of elements among the planets, they say, if developing protoplanets crashed into the asteroid once it had cooled. In this scenario, the elements abundant in the crust would be transferred to one planet and those present in its core would end up on another. According to the report, Earth's high magnesium to silicon ratio is one anomalous feature that could be explained under these circumstances.
http://www.scientificamerican.com/article.cfm?id=magma-oceans-covered-earl
13
11
Presentation courtesy of William Blair, JHU. A hydrogen atom has a single proton and a single electron. If one adds a neutron to the nucleus, the atom is still hydrogen, but the atom will have a different atomic weight. This is called an isotope of hydrogen. Other elements have isotopes, too, but hydrogen's isotope is so important that it has been given its own name: deuterium. Deuterium is also known as "heavy hydrogen" because of the extra neutron in the atomic nucleus. [Hydrogen has a third isotope called tritium, with a proton and two neutrons, but it radioactively decays in a short time and is unimportant for the observations described here.] Interestingly, astronomers think that the only significant source of the deuterium isotope was the Big Bang itself. Big Bang nucleosynthesis models predict the relative amounts of the lightest elements and isotopes, which makes the relative amount of deuterium one of the key measurements needed for understanding the Big Bang. But there are complications. Deuterium is a relatively fragile isotope, and the amount of deuterium has not stayed constant over time. In particular, when stars form out of gas, any deuterium in the star is just about the first thing to get destroyed. And so, over the history of the Universe, as material has been processed through stars, astronomers expect the relative amount of deuterium to decrease. Astronomers refer to this decrease in deuterium abundance as astration. In addition though, processing of gas through stars and back into the interstellar medium also changes the abundances of other elements such as oxygen, nitrogen, etc., so there is a way to test this idea. Understanding how much deuterium was created in the Big Bang and how much has been destroyed over time are two of the Holy Grails of modern astrophysics. Caption: (left) The graceful arcs of the Vela supernova remnant are seen against the rich star field of the Milky Way. This image shows material cast off from an exploded star after having been "processed" in the star's nuclear furnace. Such material has lower deuterium and higher heavy element abundance fractions. Over time, as shown at right, the cumulative effects are visible. (Image © Anglo-Australian Observatory.) This is where FUSE and other telescopes enter the picture. The spectral fingerprints of deuterium are very prevalent in the FUSE spectral range, especially for objects in and near our galaxy (low redshift). However, the deuterium fingerprints are only slightly separated from those of normal hydrogen, and so one needs the high spectral resolution of FUSE in addition to the spectral coverage. FUSE has the right combination to tackle deuterium measurements in the local universe. Caption: A small portion of the FUSE spectrum of the white dwarf star WD 1634-573. Each panel shows the spectral region near a hydrogen absorption line. The blue regions indicate the spectral fingerprint of deuterium (marked "D I"). The depth and shape of these fingerprints, compared to others such as oxygen and regular hydrogen, tell astronomers the relative abundance of deuterium in the gas being sampled on the sight line to the star. Note that the x-axis has been converted from wavelength into a velocity scale. (Graphic courtesy of JHU FUSE project.) But sampling deuterium abundances in the local universe is a tough job. Even with FUSE, the measurements are very difficult, and the analysis complicated. 
Furthermore, we can't just measure deuterium toward one or two stars and get "the answer" we are after. It requires a compilation of data for MANY sight lines to stars of differing distances from the sun. This guarantees that we sample a range of conditions and don't get tricked by looking at any single objects that may be peculiar. Even now, more than five years after launch, many new results are still being analyzed and published. We are still trying to get to the bottom line, but some very intriguing results are taking shape. What has FUSE Found? The ensemble of FUSE observations is showing several interesting results. First, by observing numerous stars in the region directly around the sun (within 100 parsecs or about 325 light years), FUSE observations have very strongly confirmed that the ratio of deuterium to hydrogen (D/H) is constant, at a value of about 15 parts per million (ppm) of D relative to H. Researchers have been working toward confirming this idea and quantifying the local value of D/H for decades! This important result means that this region, which is often referred to as the "Local Bubble," is chemically well-mixed and reasonably uniform. Second, FUSE observations for stars at greater distances are finding a broad range of D/H ratios, some as high as about 25 ppm and some as low as only 7 ppm. The figure below summarizes this result graphically. This raises some very important questions that astronomers now need to address, such as: What is the cause of the observed variation in D/H ratio?", and What is the true local universe D/H ratio for comparison to astration models?" Caption: The deuterium-to-hydrogen ratio is plotted versus distance (left) and versus the neutral hydrogen column density (right). A flat line fits the points on the left side of each panel, but then the points scatter (both up and down) for points in the middle region of each panel. There is some indication that the D/H ratio for the most distant sightlines sampled (at far right in each panel) are uniformly low, but this is still being stidied. (Graphic courtesy of Wood et al. (2004). Full reference below.) As is so often the case in science, we start out asking one question, but by obtaining new state-of-the-art data, we get pointed in another direction and toward new questions. There are already groups testing both hypotheses, that (a) the higher values of D/H are the real local universe values, or (b) the lower values are more representative. The interesting thing is that confirming EITHER of these hypotheses will carry with it the need to answer further questions about the chemical history of our Galaxy and the universe in which we live. Wood, B.E., Linsky, J.L., Hébrard, G., Williger, G.M., Moos, H.W., & Blair, W.P. 2004, The Astrophysical Journal, Vol. 609, 838-853 "Two New Low Galactic D/H Measurements from the Far Ultraviolet Spectroscopic Explorer"
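As a rough illustration of how an abundance like "15 parts per million of D relative to H" is expressed, the sketch below turns a pair of column densities into a D/H ratio in ppm. The column-density values are invented placeholders for a hypothetical sight line, not FUSE measurements.

```python
def d_to_h_ppm(n_d: float, n_h: float) -> float:
    """D/H ratio in parts per million, given D I and H I column densities in the same units."""
    return 1e6 * n_d / n_h

# Hypothetical sight line: column densities in atoms per cm^2 (illustrative only)
N_DI = 3.0e15
N_HI = 2.0e20
print(f"D/H = {d_to_h_ppm(N_DI, N_HI):.1f} ppm")   # 15.0 ppm for these made-up numbers
```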
http://fuse.pha.jhu.edu/wpb/sci_d2h.html
13
25
A router is a device that forwards data packets between computer networks, creating an overlay internetwork. A router is connected to two or more data lines from different networks. When a data packet comes in one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node. The most familiar type of routers are home and small office routers that simply pass data, such as web pages, email, IM, and videos between the home computers and the Internet. An example of a router would be the owner's cable or DSL modem, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone. Though routers are typically dedicated hardware devices, use of software-based routers has grown increasingly common. When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections, (such as copper cables, fiber optic, or wireless transmission). It also contains firmware for different networking Communications protocol standards. Each network interface uses this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnets addresses recorded in the router do not necessarily map directly to the physical interface connections. A router has two stages of operation called planes: - Control plane: A router records a routing table listing what route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured addresses, called static routes. - Forwarding plane: The router forwards data packets between incoming and outgoing interface connections. It routes it to the correct network type using information that the packet header contains. It uses data recorded in the routing table control plane. Routers may provide connectivity within enterprises, between enterprises and the Internet, and between internet service providers (ISPs) networks. The largest routers (such as the Cisco CRS-1 or Juniper T1600) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the costs of introducing networking cables into buildings. All sizes of routers may be found inside enterprises. The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever increasing demands of intranet data traffic. 
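The routing-table lookup described above amounts to a longest-prefix match: every prefix that contains the destination address is a candidate, and the most specific one wins. The toy table, interface names, and addresses below are illustrative only; real routers do this in specialized data structures (tries, TCAMs) rather than a linear scan.

```python
import ipaddress

# Toy routing table: (destination prefix, next hop / outgoing interface)
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),   "default via eth0"),
    (ipaddress.ip_network("10.0.0.0/8"),  "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
]

def lookup(dst: str) -> str:
    """Return the next hop for the longest (most specific) prefix containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)  # longest prefix wins
    return hop

print(lookup("10.1.2.3"))    # eth2 (both 10/8 and 10.1/16 match; /16 is more specific)
print(lookup("10.9.9.9"))    # eth1
print(lookup("192.0.2.1"))   # default via eth0
```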
A three-layer model is in common use, not all of which need be present in smaller networks. Access routers, including 'small office/home office' (SOHO) models, are located at customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmwares like Tomato, OpenWrt or DD-WRT. Distribution routers aggregate traffic from multiple access routers, either at the same site, or to collect the data streams from multiple sites to a major enterprise location. Distribution routers are often responsible for enforcing quality of service across a WAN, so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks. External networks must be carefully considered as part of the overall security strategy. Separate from the router may be a firewall or VPN handling device, or the router may include these and other security functions. Many companies produced security-oriented routers, including Cisco Systems' PIX and ASA5500 series, Juniper's Netscreen, Watchguard's Firebox, Barracuda's variety of mail-oriented devices, and many others. In enterprises, a core router may provide a "collapsed backbone" interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of Edge Routers. Internet connectivity and internal use Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). RFC 4098 standard defines the types of BGP-protocol routers according to the routers' functions: - Edge router: Also called a Provider Edge router, is placed at the edge of an ISP network. The router uses External BGP to EBGP protocol routers in other ISPs, or a large enterprise Autonomous System. - Subscriber edge router: Also called a Customer Edge router, is located at the edge of the subscriber's network, it also uses EBGP protocol to its provider's Autonomous System. It is typically used in an (enterprise) organization. - Inter-provider border router: Interconnecting ISPs, is a BGP-protocol router that maintains BGP sessions with other BGP protocol routers in ISP Autonomous Systems. - Core router: A core router resides within an Autonomous System as a back bone to carry traffic between edge routers. - Within an ISP: In the ISPs Autonomous System, a router uses internal BGP protocol to communicate with other ISP edge routers, other intranet core routers, or the ISPs intranet provider border routers. - "Internet backbone:" The Internet no longer has a clearly identifiable backbone, unlike its predecessor networks. See default-free zone (DFZ). The major ISPs system routers make up what could be considered to be the current Internet backbone core. ISPs operate all four types of the BGP-protocol routers described here. An ISP "core" router is used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching protocols. - Port forwarding: Routers are also used for port forwarding between private internet connected servers. 
- Voice/Data/Fax/Video Processing Routers: Commonly referred to as access servers or gateways, these devices are used to route and process voice, data, video, and fax traffic on the internet. Since 2005, most long-distance phone calls have been processed as IP traffic (VOIP) through a voice gateway. Voice traffic that the traditional cable networks once carried[clarification needed]. Use of access server type routers expanded with the advent of the internet, first with dial-up access, and another resurgence with voice phone service. Historical and technical information The very first device that had fundamentally the same functionality as a router does today, was the Interface Message Processor (IMP); IMPs were the devices that made up the ARPANET, the first packet network. The idea for a router (called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, later that year it became a subcommittee of the International Federation for Information Processing. These devices were different from most previous packet networks in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts (this particular idea had been previously pioneered in the CYCLADES network). The idea was explored in more detail, with the intention to produce a prototype system, as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture in use today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system, due to corporate intellectual property concerns it received little attention outside Xerox for years. Some time after early 1974 the first Xerox routers became operational. The first true IP router was developed by Virginia Strazisar at BBN, as part of that DARPA-initiated effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet. The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s. Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking, when protocols other than TCP/IP were in use. Modern Internet routers that handle both IPv4 and IPv6 are multiprotocol, but are simpler devices than routers processing AppleTalk, DECnet, IP, and Xerox protocols. From the mid-1970s and in the 1980s, general-purpose mini-computers served as routers. Modern high-speed routers are highly specialized computers with extra hardware added to speed both common routing functions, such as packet forwarding, and specialised functions such as IPsec encryption. There is substantial use of Linux and Unix software based machines, running open source routing code, for research and other applications. Cisco's operating system was independently designed. 
Major router operating systems, such as those from Juniper Networks and Extreme Networks, are extensively modified versions of Unix software. For pure Internet Protocol (IP) forwarding function, a router is designed to minimize the state information associated with individual packets. The main purpose of a router is to connect multiple networks and forward packets destined either for its own networks or other networks. A router is considered a Layer 3 device because its primary forwarding decision is based on the information in the Layer 3 IP packet, specifically the destination IP address. This process is known as routing. When each router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the network addresses in the routing table. Once a match is found, the packet is encapsulated in the Layer 2 data link frame for that outgoing interface. A router does not look into the actual data contents that the packet carries, but only at the layer 3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, QoS. Once a packet is forwarded, the router does not retain any historical information about the packet, but the forwarding action can be collected into the statistical data, if so configured. Forwarding decisions can involve decisions at layers other than layer 3. A function that forwards based on layer 2 information is properly called a bridge. This function is referred to as layer 2 bridging, as the addresses it uses to forward the traffic are layer 2 addresses (e.g. MAC addresses on Ethernet). Besides making decision as which interface a packet is forwarded to, which is handled primarily via the routing table, a router also has to manage congestion, when packets arrive at a rate higher than the router can process. Three policies commonly used in the Internet are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented; the router simply drops packets once the length of the queue exceeds the size of the buffers in the router. RED probabilistically drops datagrams early when the queue exceeds a pre-configured portion of the buffer, until a pre-determined max, when it becomes tail drop. WRED requires a weight on the average queue size to act upon when the traffic is about to exceed the pre-configured size, so that short bursts will not trigger random drops. Another function a router performs is to decide which packet should be processed first when multiple queues exist. This is managed through quality of service (QoS), which is critical when Voice over IP is deployed, so that delays between packets do not exceed 150ms to maintain the quality of voice conversations. Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made. These functions may be performed through the same internal paths that the packets travel inside the router. Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid overhead caused by multiple CPU cycles, and others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC. - "Overview Of Key Routing Protocol Concepts: Architectures, Protocol Types, Algorithms and Metrics". Tcpipguide.com. 
Retrieved 15 January 2011. - Requirements for IPv4 Routers,RFC 1812, F. Baker, June 1995 - Requirements for Separation of IP Control and Forwarding,RFC 3654, H. Khosravi & T. Anderson, November 2003 - "Setting uo Netflow on Cisco Routers". MY-Technet.com date unknown. Retrieved 15 January 2011. - "Windows Home Server: Router Setup". Microsoft Technet 14 Aug 2010. Retrieved 15 January 2011. - Oppenheimer, Pr (2004). Top-Down Network Design. Indianapolis: Cisco Press. ISBN 1-58705-152-4. - "Windows Small Business Server 2008: Router Setup". Microsoft Technet Nov 2010. Retrieved 15 january 2011. - "Core Network Planning". Microsoft Technet May 28, 2009. Retrieved 15 January 2011. - Terminology for Benchmarking BGP Device Convergence in the Control Plane,RFC 4098, H. Berkowitz et al.,June 2005 - "M160 Internet Backbone Router". Juniper Networks Date unknown. Retrieved 15 January 2011. - "Virtual Backbone Routers". IronBridge Networks, Inc. September, 2000. Retrieved 15 January 2011. - BGP/MPLS VPNs,RFC 2547, E. Rosen and Y. Rekhter, April 2004 - Davies, Shanks, Heart, Barker, Despres, Detwiler, and Riml, "Report of Subgroup 1 on Communication System", INWG Note #1. - Vinton Cerf, Robert Kahn, "A Protocol for Packet Network Intercommunication", IEEE Transactions on Communications, Volume 22, Issue 5, May 1974, pp. 637 - 648. - David Boggs, John Shoch, Edward Taft, Robert Metcalfe, "Pup: An Internetwork Architecture", IEEE Transactions on Communications, Volume 28, Issue 4, April 1980, pp. 612- 624. - Craig Partridge, S. Blumenthal, "Data networking at BBN"; IEEE Annals of the History of Computing, Volume 28, Issue 1; January–March 2006. - Valley of the Nerds: Who Really Invented the Multiprotocol Router, and Why Should We Care?, Public Broadcasting Service, Accessed August 11, 2007. - Router Man, NetworkWorld, Accessed June 22, 2007. - David D. Clark, "M.I.T. Campus Network Implementation", CCNG-2, Campus Computer Network Group, M.I.T., Cambridge, 1982; pp. 26. - Pete Carey, "A Start-Up's True Tale: Often-told story of Cisco's launch leaves out the drama, intrigue", San Jose Mercury News, December 1, 2001. |Wikimedia Commons has media related to: Network routers| |Wikibooks has a book on the topic of: Communication Networks/Routing| |Look up router in Wiktionary, the free dictionary.| - Internet Engineering Task Force, the Routing Area last checked 21 January 2011. - Internet Corporation for Assigned Names and Numbers - North American Network Operators Group - Réseaux IP Européens (European IP Networks) - American Registry for Internet Numbers - Router Default IP and Username Database - Router clustering - Asia-Pacific Network Information Center - Latin American and the Caribbean Network Information Center - African Region Internet Registry
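To make the layer-3 forwarding behaviour described above concrete, here is a minimal longest-prefix-match sketch against a toy routing table. It is an illustration only: the prefixes, next hops, and interface names are invented, and real routers use specialised lookup structures (tries, TCAMs) rather than a Python list.

```python
import ipaddress

# A toy routing table: (destination network, next hop, outgoing interface).
ROUTING_TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"),      "203.0.113.1", "ge-0/0/0"),  # default route
    (ipaddress.ip_network("10.0.0.0/8"),     "10.255.0.1",  "ge-0/0/1"),
    (ipaddress.ip_network("10.1.0.0/16"),    "10.1.255.1",  "ge-0/0/2"),
    (ipaddress.ip_network("192.168.1.0/24"), "192.168.1.1", "ge-0/0/3"),
]

def forward(dst_ip):
    """Return (next hop, interface) for the most specific matching route."""
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(net, nh, iface) for net, nh, iface in ROUTING_TABLE if dst in net]
    # The longest prefix (largest prefix length) wins; the /0 default matches everything.
    net, nh, iface = max(candidates, key=lambda entry: entry[0].prefixlen)
    return nh, iface

if __name__ == "__main__":
    for ip in ("10.1.2.3", "10.9.9.9", "8.8.8.8"):
        print(ip, "->", forward(ip))
```

For 10.1.2.3 the /16 entry wins over the broader /8, which is the "best match" rule described above; once the outgoing interface is chosen, queue-management policies such as tail drop or RED decide what happens to the packet under congestion.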
http://en.wikipedia.org/wiki/Router_(computing)
13
24
Mathematics » High School: Number and Quantity » Vector & Matrix Quantities Standards in this domain: Represent and model with vector quantities. - CCSS.Math.Content.HSN-VM.A.1 (+) Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., v, |v|, ||v||, v). - CCSS.Math.Content.HSN-VM.A.2 (+) Find the components of a vector by subtracting the coordinates of an initial point from the coordinates of a terminal point. - CCSS.Math.Content.HSN-VM.A.3 (+) Solve problems involving velocity and other quantities that can be represented by vectors. Perform operations on vectors. - CCSS.Math.Content.HSN-VM.B.4 (+) Add and subtract vectors. - CCSS.Math.Content.HSN-VM.B.4a Add vectors end-to-end, component-wise, and by the parallelogram rule. Understand that the magnitude of a sum of two vectors is typically not the sum of the magnitudes. - CCSS.Math.Content.HSN-VM.B.4b Given two vectors in magnitude and direction form, determine the magnitude and direction of their sum. - CCSS.Math.Content.HSN-VM.B.4c Understand vector subtraction v – w as v + (–w), where –w is the additive inverse of w, with the same magnitude as w and pointing in the opposite direction. Represent vector subtraction graphically by connecting the tips in the appropriate order, and perform vector subtraction component-wise. - CCSS.Math.Content.HSN-VM.B.5 (+) Multiply a vector by a scalar. - CCSS.Math.Content.HSN-VM.B.5a Represent scalar multiplication graphically by scaling vectors and possibly reversing their direction; perform scalar multiplication component-wise, e.g., as c(vx, vy) = (cvx, cvy). - CCSS.Math.Content.HSN-VM.B.5b Compute the magnitude of a scalar multiple cv using ||cv|| = |c|v. Compute the direction of cv knowing that when |c|v ≠ 0, the direction of cv is either along v (for c > 0) or against v (for c < 0). Perform operations on matrices and use matrices in applications. - CCSS.Math.Content.HSN-VM.C.6 (+) Use matrices to represent and manipulate data, e.g., to represent payoffs or incidence relationships in a network. - CCSS.Math.Content.HSN-VM.C.7 (+) Multiply matrices by scalars to produce new matrices, e.g., as when all of the payoffs in a game are doubled. - CCSS.Math.Content.HSN-VM.C.8 (+) Add, subtract, and multiply matrices of appropriate dimensions. - CCSS.Math.Content.HSN-VM.C.9 (+) Understand that, unlike multiplication of numbers, matrix multiplication for square matrices is not a commutative operation, but still satisfies the associative and distributive properties. - CCSS.Math.Content.HSN-VM.C.10 (+) Understand that the zero and identity matrices play a role in matrix addition and multiplication similar to the role of 0 and 1 in the real numbers. The determinant of a square matrix is nonzero if and only if the matrix has a multiplicative inverse. - CCSS.Math.Content.HSN-VM.C.11 (+) Multiply a vector (regarded as a matrix with one column) by a matrix of suitable dimensions to produce another vector. Work with matrices as transformations of vectors. - CCSS.Math.Content.HSN-VM.C.12 (+) Work with 2 × 2 matrices as a transformations of the plane, and interpret the absolute value of the determinant in terms of area.
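As an illustration of several of the (+) standards above — finding components, adding vectors, scalar multiplication, magnitude, and a 2 × 2 matrix acting on a vector with its determinant scaling area — here is a small sketch. The particular points, vectors, and matrix are invented examples and are not part of the standards document.

```python
import math

def components(initial, terminal):
    # HSN-VM.A.2: subtract initial-point coordinates from terminal-point coordinates.
    return (terminal[0] - initial[0], terminal[1] - initial[1])

def add(v, w):
    # HSN-VM.B.4: component-wise addition.
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    # HSN-VM.B.5a: c(vx, vy) = (c*vx, c*vy).
    return (c * v[0], c * v[1])

def magnitude(v):
    return math.hypot(v[0], v[1])

def mat_vec(m, v):
    # HSN-VM.C.11: multiply a 2x2 matrix by a column vector.
    (a, b), (c, d) = m
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

v = components((1, 2), (4, 6))        # (3, 4)
w = (-1, 2)
print(add(v, w), scale(2, v))         # (2, 6) and (6, 8)
# HSN-VM.B.4a: the magnitude of a sum is typically NOT the sum of the magnitudes.
print(magnitude(add(v, w)), magnitude(v) + magnitude(w))
m = ((2, 0), (0, 3))                  # stretches x by 2 and y by 3
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
print(mat_vec(m, v), abs(det))        # image of v, and the factor (6) by which areas scale
```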
http://www.corestandards.org/Math/Content/HSN/VM
13
22
Introduction to Tension An introduction to tension. Solving for the tension(s) in a set of wires when a weight is hanging from them. Introduction to Tension ⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles. - I will now introduce you to the concept of tension. - So tension is really just the force that exists either - within or applied by a string or wire. - It's usually lifting something or pulling on something. - So let's say I had a weight. - Let's say I have a weight here. - And let's say it's 100 Newtons. - And it's suspended from this wire, which is right here. - Let's say it's attached to the ceiling right there. - Well we already know that the force-- if we're on this - planet that this weight is being pull down by gravity. - So we already know that there's a downward force on - this weight, which is a force of gravity. - And that equals 100 Newtons. - But we also know that this weight isn't accelerating, - it's actually stationary. - It also has no velocity. - But the important thing is it's not accelerating. - But given that, we know that the net force on it must be 0 - by Newton's laws. - So what is the counteracting force? - You didn't have to know about tension to say well, the - string's pulling on it. - The string is what's keeping the weight from falling. - So the force that the string or this wire applies on this - weight you can view as the force of tension. - Another way to think about it is that's also the force - that's within the wire. - And that is going to exactly offset the force of gravity on - this weight. - And that's what keeps this point right here stationery - and keeps it from accelerating. - That's pretty straightforward. - Tension, it's just the force of a string. - And just so you can conceptualize it, on a guitar, - the more you pull on some of those higher-- what was it? - The really thin strings that sound higher pitched. - The more you pull on it, the higher the tension. - It actually creates a higher pitched note. - So you've dealt with tension a lot. - I think actually when they sell wires or strings they'll - probably tell you the tension that that wire or string can - support, which is important if you're going to build a bridge - or a swing or something. - So tension is something that should be hopefully, a little - bit intuitive to you. - So let's, with that fairly simple example done, let's - create a slightly more complicated example. - So let's take the same weight. - Instead of making the ceiling here, let's - add two more strings. - Let's add this green string. - Green string there. - And it's attached to the ceiling up here. - That's the ceiling now. - And let's see. - This is the wall. - And let's say there's another string right here - attached to the wall. - So my question to you is, what is the tension in these two - strings So let's call this T1 and T2. - Well like the first problem, this point right here, this - red point, is stationary. - It's not accelerating in either the left/right - directions and it's not accelerating in the up/down - So we know that the net forces in both the x and y - dimensions must be 0. - My second question to you is, what is - going to be the offset? - Because we know already that at this point right here, - there's going to be a downward force, which is the force of - gravity again. - The weight of this whole thing. - We can assume that the wires have no weight for simplicity. 
- So we know that there's going to be a downward force here, - this is the force of gravity, right? - The whole weight of this entire object of weight plus - wire is pulling down. - So what is going to be the upward force here? - Well let's look at each of the wires. - This second wire, T2, or we could call it w2, I guess. - The second wire is just pulling to the left. - It has no y components. - It's not lifting up at all. - So it's just pulling to the left. - So all of the upward lifting, all of that's going to occur - from this first wire, from T1. - So we know that the y component of T1, so let's - call-- so if we say that this vector here. - Let me do it in a different color. - Because I know when I draw these diagrams it starts to - get confusing. - Let me actually use the line tool. - So I have this. - Let me make a thicker line. - So we have this vector here, which is T1. - And we would need to figure out what that is. - And then we have the other vector, which is its y - component, and I'll draw that like here. - This is its y component. - We could call this T1 sub y. - And then of course, it has an x component too, and I'll do - that in-- let's see. - I'll do that in red. - Once again, this is just breaking up a force into its - component vectors like we've-- a vector force into its x and - y components like we've been doing in the last several - problems. And these are just trigonometry problems, right? - We could actually now, visually see that this is T - sub 1 x and this is T sub 1 sub y. - Oh, and I forgot to give you an important property of this - problem that you needed to know before solving it. - Is that the angle that the first wire forms with the - ceiling, this is 30 degrees. - So if that is 30 degrees, we also know that this is a - parallel line to this. - So if this is 30 degrees, this is also - going to be 30 degrees. - So this angle right here is also going to be 30 degrees. - And that's from our-- you know, we know about parallel - lines and alternate interior angles. - We could have done it the other way. - We could have said that if this angle is 30 degrees, this - angle is 60 degrees. - This is a right angle, so this is also 30. - But that's just review of geometry - that you already know. - But anyway, we know that this angle is 30 degrees, so what's - its y component? - Well the y component, let's see. - What involves the hypotenuse and the opposite side? - Let me write soh cah toa at the top because this is really - just trigonometry. - soh cah toa in blood red. - So what involves the opposite and the hypotenuse? - So opposite over hypotenuse. - So that we know the sine-- let me switch to the sine of 30 - degrees is equal to T1 sub y over the tension in the string - going in this direction. - So if we solve for T1 sub y we get T1 sine of 30 degrees is - equal to T1 sub y. - And what did we just say before we kind of - dived into the math? - We said all of the lifting on this point is being done by - the y component of T1. - Because T2 is not doing any lifting up or down, it's only - pulling to the left. - So the entire component that's keeping this object up, - keeping it from falling is the y component of - this tension vector. - So that has to equal the force of gravity pulling down. - This has to equal the force of gravity. - That has to equal this or this point. - So that's 100 Newtons. - And I really want to hit this point home because it might be - a little confusing to you. - We just said, this point is stationery. - It's not moving up or down. 
- It's not accelerating up or down. - And so we know that there's a downward force of 100 Newtons, - so there must be an upward force that's being provided by - these two wires. - This wire is providing no upward force. - So all of the upward force must be the y component or the - upward component of this force vector on the first wire. - So given that, we can now solve for the tension in this - first wire because we have T1-- what's sine of 30? - Sine of 30 degrees, in case you haven't memorized it, sine - of 30 degrees is 1/2. - So T1 times 1/2 is equal to 100 Newtons. - Divide both sides by 1/2 and you get T1 is - equal to 200 Newtons. - So now we've got to figure out what the tension in this - second wire is. - And we also, there's another clue here. - This point isn't moving left or right, it's stationary. - So we know that whatever the tension in this wire must be, - it must be being offset by a tension or some other force in - the opposite direction. - And that force in the opposite direction is the x component - of the first wire's tension. - So it's this. - So T2 is equal to the x component of the - first wire's tension. - And what's the x component? - Well, it's going to be the tension in the first wire, 200 - Newtons times the cosine of 30 degrees. - It's adjacent over hypotenuse. - And that's square root of 3 over 2. - So it's 200 times the square root of 3 over 2, which equals - 100 square root of 3. - So the tension in this wire is 100 square root of 3, which - completely offsets to the left and the x component of this - wire is 100 square root of 3 Newtons to the right. - Hopefully I didn't confuse you. - See you in the next video.
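As a quick check of the arithmetic in the transcript, the short script below reproduces the worked example: the y component of T1 must balance the 100 N weight, and T2 must balance the x component of T1. The 100 N weight and the 30° angle are the values used in the video.

```python
import math

weight = 100.0              # N, force of gravity pulling down on the junction point
angle = math.radians(30)    # angle the first wire makes with the ceiling

# Vertical equilibrium: T1 * sin(30 deg) = weight  ->  T1 = weight / sin(30 deg)
T1 = weight / math.sin(angle)

# Horizontal equilibrium: T2 equals the x component of T1, i.e. T1 * cos(30 deg)
T2 = T1 * math.cos(angle)

print(f"T1 = {T1:.1f} N")                     # 200.0 N, as in the video
print(f"T2 = {T2:.1f} N")                     # ~173.2 N
print(f"100*sqrt(3) = {100 * math.sqrt(3):.1f} N")   # the same value, written as in the video
```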
http://www.khanacademy.org/science/physics/forces-newtons-laws/tension-tutorial/v/introduction-to-tension
13
10
|Figure 1. Click to see an animation of a tsunami generated by an earthquake. Tsunami is a set of ocean waves caused by any large, abrupt disturbance of the sea-surface. If the disturbance is close to the coastline, local tsunamis can demolish coastal communities within minutes. A very large disturbance can cause local devastation AND export tsunami destruction thousands of miles away. The word tsunami is a Japanese word, represented by two characters: tsu, meaning, "harbor", and nami meaning, "wave". Tsunamis rank high on the scale of natural disasters. Since 1850 alone, tsunamis have been responsible for the loss of over 420,000 lives and billions of dollars of damage to coastal structures and habitats. Most of these casualties were caused by local tsunamis that occur about once per year somewhere in the world. For example, the December 26, 2004, tsunami killed about 130,000 people close to the earthquake and about 58,000 people on distant shores. Predicting when and where the next tsunami will strike is currently impossible. Once the tsunami is generated, forecasting tsunami arrival and impact is possible through modeling and measurement Generation. Tsunamis are most commonly generated by earthquakes in marine and coastal regions. Major tsunamis are produced by large (greater than 7 on the Richer scale), shallow focus (< 30km depth in the earth) earthquakes associated with the movement of oceanic and continental plates. They frequently occur in the Pacific, where dense oceanic plates slide under the lighter continental plates. When these plates fracture they provide a vertical movement of the seafloor that allows a quick and efficient transfer of energy from the solid earth to the ocean (try the animation in Figure 1). When a powerful earthquake (magnitude 9.3) struck the coastal region of Indonesia in 2004, the movement of the seafloor produced a tsunami in excess of 30 meters (100 feet) along the adjacent coastline killing more than 240,000 people. From this source the tsunami radiated outward and within 2 hours had claimed 58,000 lives in Thailand, Sri Lanka, and India. Underwater landslides associated with smaller earthquakes are also capable of generating destructive tsunamis. The tsunami that devastated the northwestern coast of Papua New Guinea on July 17, 1998, was generated by an earthquake that registered 7.0 on the Richter scale that apparently triggered a large underwater landslide. Three waves measuring more than 7 meter high struck a 10-kilometer stretch of coastline within ten minutes of the earthquake/slump. Three coastal villages were swept completely clean by the deadly attack leaving nothing but sand and 2,200 people dead. Other large-scale disturbances of the sea -surface that can generate tsunamis are explosive volcanoes and asteroid impacts. The eruption of the volcano Krakatoa in the East Indies on Aug. 27, 1883 produced a 30-meter tsunami that killed over 36,000 people. In 1997, scientists discovered evidence of a 4km diameter asteroid that landed offshore of Chile approximately 2 million years ago that produced a huge tsunami that swept over portions of South America and Antarctica. 2. Click to see the propagation of the December 24, 2004 Sumatra Wave Propagation.Because earth movements associated with large earthquakes are thousand of square kilometers in area, any vertical movement of the seafloor immediately changes the sea-surface. 
The resulting tsunami propagates as a set of waves whose energy is concentrated at wavelengths corresponding to the earth movements (~100 km), at wave heights determined by vertical displacement (~1m), and at wave directions determined by the adjacent coastline geometry. Because each earthquake is unique, every tsunami has unique wavelengths, wave heights, and directionality (Figure 2 shows the propagation of the December 24, 2004 Sumatra tsunami.) From a tsunami warning perspective, this makes the problem of forecasting tsunamis in real time daunting. Warning Systems. Since 1946, the tsunami warning system has provided warnings of potential tsunami danger in the pacific basin by monitoring earthquake activity and the passage of tsunami waves at tide gauges. However, neither seismometers nor coastal tide gauges provide data that allow accurate prediction of the impact of a tsunami at a particular coastal location. Monitoring earthquakes gives a good estimate of the potential for tsunami generation, based on earthquake size and location, but gives no direct information about the tsunami itself. Tide gauges in harbors provide direct measurements of the tsunami, but the tsunami is significantly altered by local bathymetry and harbor shapes, which severely limits their use in forecasting tsunami impact at other locations. Partly because of these data limitations, 15 of 20 tsunami warnings issued since 1946 were considered false alarms because the tsunami that arrived was too weak to cause damage. 3. Click to see a real-time deep ocean tsunami detection system responding to a tsunami generated by seismic activity. Forecasting impacts. Recently developed real-time, deep ocean tsunami detectors (Figure 3) will provide the data necessary to make tsunami forecasts. The November 17, 2003, Rat Is. tsunami in Alaska provided the most comprehensive test for the forecast methodology. The Mw 7.8 earthquake on the shelf near Rat Islands, Alaska, generated a tsunami that was detected by three tsunameters located along the Aleutian Trench-the first tsunami detection by the newly developed real-time tsunameter system. These real-time data combined with the model database (Figure 4) were then used to produce the real-time model tsunami forecast. For the first time, tsunami model predictions were obtained during the tsunami propagation, before the waves had reached many coastlines. The initial offshore forecast was obtained immediately after preliminary earthquake parameters (location and magnitude Ms = 7.5) became available from the West Coast/Alaska TWC (about 15-20 minutes after the earthquake). The model estimates provided expected tsunami time series at tsunameter locations. When the closest tsunameter recorded the first tsunami wave, about 80 minutes after the tsunami, the model predictions were compared with the deep-ocean data and the updated forecast was adjusted immediately.. These offshore model scenarios were then used as input for the high-resolution inundation model for Hilo Bay. The model computed tsunami dynamics on several nested grids, with the highest spatial resolution of 30 meters inside the Hilo Bay (Figure 5). None of the tsunamis produced inundation at Hilo, but all of them recorded nearly half a meter (peak-to-trough) signal at Hilo gage. Model forecast predictions for this tide gage are compared with observed data in Figure 5. The comparison demonstrates that amplitudes, arrival time and periods of several first waves of the tsunami wave train were correctly forecasted. 
More tests are required to ensure that the inundation forecast will work for every likely-to-occur tsunami. When implemented, such forecast will be obtained even faster and would provide enough lead time for potential evacuation or warning cancellation for Hawaii and the U.S. West Coast. |Figure 4. Rat Island, Alaska Tsunami of November 17, 2003, as measured at the tsunameter located at 50 N 171 W in 4700 m water depth. ||Figure 5. Coastal forecast at Hilo, HI for 2003 Rat island, showing comparison of the forecasted (red line) and measured (blue line) gage data. Reduction of impact. The recent development of real-time deep ocean tsunami detectors and tsunami inundation models has given coastal communities the tools they need to reduce the impact of future tsunamis. If these tools are used in conjunction with a continuing educational program at the community level, at least 25% of the tsunami related deaths might be averted. By contrasting the casualties from the 1993 Sea of Japan tsunami with that of the 1998 Papua New Guinea tsunami, we can conclude that these tools work. For the Aonae, Japan case about 15% of the population at risk died from a tsunami that struck within 10 minutes of the earthquake because the population was educated about tsunamis, evacuation plans had been developed, and a warning was issued. For the Warapa, Papua New Guinea case about 40% of the at risk population died from a tsunami that arrived within 15 minutes of the earthquake because the population was not educated, no evacuation plan was available, and no warning system existed. Eddie N. Bernard Bernard, E.N. (1998): Program aims to reduce impact of tsunamis on Pacific states. Eos Trans. AGU, 79(22), 258, 262-263. Bernard, E.N. (1999): Tsunami. Natural Disaster Management, Tudor Rose, Leicester, England, 58-60. Synolakis, C., P. Liu, G. Carrier, H. Yeh, Tsunamigenic Sea-Floor Deformations, Science, 278, 598-600, 1997. Dudley, Walter C., and Min Lee (1998): Tsunami! Second Edition, University of Hawai'i Press, Honolulu, Hawaii.
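The arrival-time forecasting discussed above rests on the fact that tsunami wavelengths (~100 km) are far longer than the ocean is deep, so the waves travel at roughly the shallow-water speed √(g·depth). That formula is a standard result rather than something stated in this article, and the depth and distance used below are illustrative assumptions only.

```python
import math

G = 9.81  # m/s^2

def tsunami_speed(depth_m):
    """Shallow-water wave speed c = sqrt(g*h), valid when wavelength >> depth."""
    return math.sqrt(G * depth_m)

def travel_time_hours(distance_km, depth_m):
    return (distance_km * 1000.0) / tsunami_speed(depth_m) / 3600.0

# Illustrative numbers only: ~4000 m average deep-ocean depth,
# and a 1600 km distance, roughly the scale of Sumatra to Sri Lanka.
depth = 4000.0
print(f"speed   ~ {tsunami_speed(depth):.0f} m/s ({tsunami_speed(depth) * 3.6:.0f} km/h)")
print(f"1600 km ~ {travel_time_hours(1600, depth):.1f} hours of travel time")
```

The roughly two-hour travel time this gives for a distance on that scale is the same order as the arrival times quoted above, which is why a deep-ocean tsunameter detection can be turned into a usable coastal forecast before the waves arrive.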
http://www.tsunami.noaa.gov/tsunami_story.html
13
25
Bacteria are ubiquitous and some of them are real survival specialists – a property, which is particularly challenging for space missions. The spacecraft that are sent on their long journey into space should be as clean as possible and considerably reduced in microbial burden, since the risk of biological contamination of other planets is high. Such a contamination could affect the detection of extraterrestrial life or make it even impossible. For this reason, spacecraft are assembled in so-called "clean rooms" under the most stringent controls for bio-contamination. Nevertheless, microorganisms exist that can deal with the prevailing extreme conditions such as dryness, lack of nutrients or presence of disinfectants. Space agencies have defined standards by which to measure the bioburden and diversity of microbial species in the clean rooms and on the spacecraft. The DSMZ now offers, in cooperation with the European Space Agency (ESA), the first public collection of extremotolerant bacterial strains adapted to the harsh conditions within the clean rooms. This collection is an important resource for research institutes and industry to investigate adaptive mechanisms of bacteria (for instance, resistance to heat, UV radiation, ionizing radiation, desiccation, disinfectants). The journal "Astrobiology" reports about this culture collection in its current issue. "For any space mission an upper limit of bioburden is defined,” says Dr. Rüdiger Pukall, a microbiologist at DSMZ. "Bioburden measurements are a crucial part of Planetary Protection requirements”. This concept includes all activities that prevent contamination of planets and other celestial bodies by terrestrial forms of life, such as microorganisms, in the context of inter-planetary space missions. It is of particular importance to evaluate the biodiversity of microbial communities on spaceflight hardware and in associated assembly facilities. The objective in doing so is to develop appropriate decontamination strategies to avoid microbial hitchhikers during the next mission to Mars." Selected surface areas within European spacecraft associated clean rooms as well as the surface of the Herschel Space Observatory located therein were sampled between 2007 until 2009 by microbiologists from the Leibniz-Institute DSMZ, the German Aerospace Center (DLR) and the University Regensburg. The Herschel Space Observatory was constructed at various locations around the globe: clean rooms in Friedrichshafen (Germany), Noordwijk (Netherlands), and at Europe’s spaceport in Kourou (French Guiana). „A clean room is a particularly extreme habitat for microbial survivalists", explains Rüdiger Pukall. "The nutrient-poor environment, controlled moisture and temperature, air filtering and frequent decontamination of surfaces create a special habitat for spore-forming, autotrophic, multi-resistant, facultatively or obligate anaerobic bacteria." Even taking a sample of the bacteria in the clean rooms is a special challenge for the researchers. "In order not to introduce any foreign microorganisms or particles, the microbiologists have to wear protective suits and face masks ", reports Rüdiger Pukall. “Here the samples were taken with special swabs and wipes, according to defined standard procedures from ESA. The samples were subsequently analysed at the University Regensburg and the DLR in Köln and the bacteria were isolated using diverse cultivation strategies." Then Dr. 
Rüdiger Pukall's team at DSMZ in Braunschweig identified the bacterial strains by 16S rRNA gene sequence analysis. For long term storage, the bacteria were freeze-dried and stored in liquid nitrogen. Bacteria that could not be cultivated were also identified via sequencing after extraction of the total genomic DNA from the samples. A collection of "survivalists" The core of this special collection consists of about 300 bacterial strains that were isolated from various clean rooms. All bacteria belong to Risk group 1 or 2. A large portion of the isolates can be assigned to the Gram-positive bacteria, whereby spore-forming bacteria from the species Bacillus as well as Micrococcus- and Staphylococcus-species are represented. Gram-negative bacteria are predominantly represented by the species Acinetobacter, Pseudomonas and Stenotrophomonas. Recently, an additional set of 60 isolates affiliated to these genera were added to the ESA collection. The isolates derived from samples taken in 2003 and 2004 within an ESA founded project in cooperation of DLR and DSMZ during preparation of the missions SMART-1 (interplanetary satellite, lunar mission) and ROSETTA (exploration of comets) in Noordwijk and Kourou. Five clean room isolates were provided by the NASA Jet Propulsion Laboratory (USA) in addition. About 30 per cent of the microbes within the ESA collection are still unknown and have now been made available for research purposes. Some of these isolates have been described recently as a novel species, such as Paenibacillus purispatii (DSM 22991) or Tersicoccus phoenicis (KO_PS43, DSM 30849), a representative of a new bacterial genus. Significance of the ESA collection The collection of extremotolerant microbes which are adapted to the artificial biotope of the clean rooms offers an extraordinary valuable and beneficial source for industry and research. For ESA, the culture collection is an essential tool to understand the biological contamination and its potential risk and to evaluate novel biological decontamination procedures and disinfection strategies. The collection will be expanded within the next three years by including more extremotolerant bacteria that could be of interest for industry and research. Link to the collection: 1)Moissl-Eichinger C, Rettberg P, Pukall R. (2012). The first collection of spacecraft-associated microorganisms: a public source for extremotolerant microorganisms from spacecraft assembly clean rooms. Astrobiology 12(11):1024-1034. 2)Stieglmeier M, Rettberg P, Barczyk S, Bohmeier M, Pukall R, Wirth R, Moissl-Eichinger C. (2012). Abundance and diversity of microbial inhabitants in European spacecraft-associated clean rooms. Astrobiology 12(6):572-585. 1)Behrendt U, Schumann P, Stieglmeier M, Pukall R, Augustin J, Spröer C, Schwendner P, Moissl-Eichinger C, Ulrich A. (2010). Characterization of heterotrophic nitrifying bacteria with respiratory ammonification and denitrification activity--description of Paenibacillus uliginis sp. nov., an inhabitant of fen peat soil and Paenibacillus purispatii sp. nov., isolated from a spacecraft assembly clean room. Syst Appl Microbiol. 2010 Oct; 33(6):328-36. 2)Vaishampayan P, Moissl-Eichinger C, Pukall R, Schumann P, Spröer C, Augustus A, Hayden Roberts A, Namba G, Cisneros J, Salmassi T, Venkateswaran K. (2012). Description of Tersicoccus phoenicis gen. nov., sp. nov. isolated from spacecraft assembly clean room environments. Int J Syst Evol Microbiol. 2012 Dec 7. 
[Epub ahead of print] Planetary protection is the term that describes the aim of protecting solar system bodies from contamination by terrestrial life, and protecting Earth from possible life forms that may be returned from other solar system bodies. Regulations are based on obligations identified in the United Nations Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and other Celestial Bodies, and advice provided by the Committee on Space Research (COSPAR). Please find the press release, image material and further information for download at http://www.dsmz.de/de/start/details/entry/microbes-possible.html Head of press and communication Leibniz-Institut DSMZ – Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH Inhoffenstraße 7 B Deutschland / Germany About the Leibniz-Institute DSMZ The Leibniz-Institute DSMZ – Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH [German Collection of Microorganisms and Cell Cultures] is an establishment of the Leibniz Gemeinschaft [Leibniz Association], and with its comprehensive scientific services and broad spectrum of biological materials has been a worldwide partner for research and industry for decades. As one of the largest biological resource centres of its kind, the DSMZ was confirmed as compliant with the globally accepted quality standard ISO 9001:2008. As a patent depository, the DSMZ offers the singular possibility throughout the whole of Germany to gather biological material according to the requirements of the Budapest Treaty. Aside from the scientific service, the collection-based research forms the second pillar of the DSMZ. The collection, with headquarters in Braunschweig, has existed for 42 years and houses more than 32,000 cultures and biomaterials. The DSMZ is the most diverse collection worldwide: in addition to fungi, yeasts, bacteria and archaea, human and animal cell cultures as well plant viruses and plant cell cultures are researched and archived there. www.dsmz.de The Leibniz Gemeinschaft [Leibniz Association] connects 86 independent research establishments. Their focus encompasses the sciences of nature, engineering and environment as well as economics, aerospace and social science – and even the humanities. The Leibniz-Institutes deal with issues that are relevant from a societal, economic and ecologic perspective. They conduct knowledge- and application-oriented fundamental research. They maintain scientific infrastructures and offer research-based services. The focus of the Leibniz-Gemeinschaft is knowledge transfer in the fields of politics, science, economics and the public. The Leibniz-Institutes maintain close cooperation with institutions of higher education – among others in the form of science campuses –, with industry and other partners in-country and abroad. They are subject to a standard-setting transparent and independent evaluation process. Based on their significance for the states and nation, the federation and countries support the institutes of the Leibniz-Gemeinschaft together. The Leibniz-Institutes have about 16,500 employees, among them 7,700 male and female scientists. The total budget of the Institute is 1.4 billion euros. 
Susanne Thiele | Source: Leibniz-Institut DSMZ | Further information: www.dsmz.de
http://www.innovations-report.com/html/reports/life_sciences/microbes_hitchhikers_space_208853.html
13
12
Proportions can be applied in several forms and have applications in several places. We will discuss some forms of application and then the application of proportions to similar triangles.
Direct Proportion / Direct Variation
If two quantities change or vary in such a manner that their ratio always remains constant, the quantities are said to be in direct proportion. In other words, the two quantities are related in such a manner that a change in one quantity leads to a proportionate change of the same kind in the other quantity. We represent this proportionality using 'α' (read as 'is proportional to'). Thus x α y is read as 'x is directly proportional to y'. If y α x, then y = kx, where k is a non-zero constant called the constant of proportionality. This equation is called the equation of direct proportionality.
Inverse Proportion / Inverse Variation
If two quantities change or vary in such a manner that an increase or decrease in one quantity leads to a proportional decrease or increase, respectively, in the other quantity, they are said to be in inverse proportion. If y α (1/x), then y = k/x, where k is again a non-zero constant of proportionality.
Combined variation involves both types of variation, direct and inverse. If x α (1/y) and x α z, then combining them we can say x α (z/y), so x = kz/y.
Some more results on proportions:
- If a α b and b α c, then a α c.
- If a α b and c α d, then ac α bd.
- If a α b, then ap α bp, where p is any non-zero constant.
- If a α b, then a^n α b^n.
- If a α b and c α b, then (a ± c) α b.
Application of proportions in similar triangles
Similar triangles have the property that their corresponding sides are in proportion. If ΔABC is similar to ΔXYZ, then the corresponding sides are AB -> XY, BC -> YZ, and CA -> ZX, and these sides are in proportion: AB/XY = BC/YZ = CA/ZX. If we know four of these six sides, we can calculate the remaining two sides using the two equations that can be formed.
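A small sketch of the two ideas above — finding the constant of proportionality in y = kx, and using AB/XY = BC/YZ = CA/ZX to recover missing sides of similar triangles. The particular side lengths are made-up examples.

```python
def direct_variation_k(x, y):
    """Constant of proportionality k for the direct variation y = kx."""
    return y / x

def missing_sides_similar(ab, bc, ca, xy):
    """Given all sides of triangle ABC and side XY of a similar triangle XYZ,
    use AB/XY = BC/YZ = CA/ZX to find the other two sides of XYZ."""
    ratio = ab / xy          # the common ratio of corresponding sides
    yz = bc / ratio
    zx = ca / ratio
    return yz, zx

# Direct variation: if y = 12 when x = 3, then k = 4, so y = 4x everywhere.
k = direct_variation_k(3, 12)
print("k =", k, " y(7) =", k * 7)

# Similar triangles: ABC has sides 6, 8, 10 and XY = 3 (half of AB),
# so YZ and ZX must come out as 4 and 5.
print(missing_sides_similar(6, 8, 10, 3))
```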
http://www.algebra.com/algebra/homework/proportions/proportions-app.lesson
13
10
Bubbles Theme Page This "Theme Page" has links to two types of resources related to the study of Bubbles. Students and teachers will find curricular resources (information, content...) to help them learn about this topic. In addition, there are also links to instructional materials (lesson plans) which will help teachers provide instruction in this theme. Please read our disclaimer. - Bubbles exist in air and are a thin film of liquid surrounding air. Antibubbles exist in liquids and are a thin film of air surrounding a liquid. This site explores antibubbles' properties, how to make them, and tricks using them. - This site has detailed instructions on how to create antibubbles. Art and Science of Bubbles - This Soap and Detergent Agency site is full of online and offline activities for kids. Included are tips on creating bigger and better bubbles, magic tricks, bubble art & sculpture, and scientific experiments. (You may experience a delay in making a connection - be patient.) - Three lesson plans on bubbles from the AskERIC database. Bubbles are used to teach primary students about the colour spectrum. For developing primary students' observational skills. - The Exploratorium looks at the forces that mold bubbles. Their web site explains the science and mathematics behind the 'stickiness' of bubbles, their shape, bubble combinations, the role of soap in making bubbles, and the origin of colour. They also provide bubble formulae. - This site will definitely dispel the myth that bubbles are simple things. From bubble history to games to bubble adventures, there is something for every bubble enthusiast. - A brief lesson plan in which bubble experimentation can be used to help students in grades 4-8 learn about the scientific method. - Another single page site with information on formulae, tools, and bubble a Soap Bubble Company - Designed for K-1 students, this lesson plan has students compare commercial soap bubbles to home made bubbles, experiment with producing their own formula, and then investigate the costs of producing their own product and forming their own soap bubble company. - This experiment allows students to examine the properties of soap bubbles closely, by floating them on a layer of carbon dioxide. - Various home-based experiments/activities from the Exploratorium. Blow-Up Students discover that not all bubbles are made from soap. Instructions on how to make an observatory to examine colours in bubbles. Bomb An experiment in which children, under adult supervision use baking soda and vinegar to pop a plastic bag. In addition to the instructions for the experiment, the site also contains a brief explanation of the underlying science as well as a number of variations that students can - "Snacks" from the Exploratorium are miniature science exhibits that teachers can make using common, inexpensive, easily available materials. These links are to two activities on bubbles. - A single page with bubble formulae and experiments. Note: The sites listed above will serve as a source of curricular content in Bubbles. For other resources in Science (e.g., curricular content), or for lesson plans and theme pages, click the "previous screen" button below. Or, click here if you wish to return directly to the CLN menu which will give you access to educational resources in all of our subjects.
http://www.cln.org/themes/bubbles.html
13
20
1. Draw a circuit that will cause output D to go true when switch A and switch B are closed or when switch C is closed. 2. Draw a circuit that will cause output D to be on when push button A is on, or either B or C are on. 3. Design a circuit for a car that considers the variables below to control the motor M. Also add a second output that uses any outputs not used for motor control. 4. Make a simple circuit that will turn on the outputs with the binary patterns when the corresponding buttons are pushed. Inputs X, Y, and Z will never be on at the same time. 5. Convert the following Boolean equation to the simplest possible circuit. 6. Simplify the following boolean equations. 7. Simplify the following Boolean equations, 8. Simplify the Boolean expression below. 9. Given the Boolean expression a) draw a digital circuit and b) simplify the expression. 10. Simplify the following Boolean equation and write a corresponding circuit. 11. For the following Boolean equation, a) Write the logic circuit for the unsimplified equation. b) Simplify the equation. c) Write the circuit for the simplified equation. 12. a) Write a Boolean equation for the following truth table. (Hint: do this by writing an expression for each line with a true output, and then ORing them together.) b) Write the results in a) in a Boolean equation. c) Simplify the Boolean equation in b) 13. Simplify the following Boolean equation, and create the simplest circuit. 14. Simplify the following boolean equation with Boolean algebra and write the corresponding circuit. 15. a) Develop the Boolean expression for the circuit below. b) Simplify the Boolean expression. c) Draw a simpler circuit for the equation in b). 16. Given a system that is described with the following equation, a) Simplify the equation using Boolean Algebra. b) Implement the original and then the simplified equation with a digital circuit. 17. Simplify the following and implement the original and simplified equations with gates. 18. Simplify the following Boolean equation and implement it in a circuit. 19. Use Boolean equations to develop simplified circuit for the following truth table where A, B, C and D are inputs, and X and Y are outputs. 20. Convert the truth table below to a Boolean equation, and then simplify it. The output is X and the inputs are A, B, C and D. 21. Simplify the following Boolean equation. Convert both the unsimplified and simplified equations to a circuit.
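When simplifying the Boolean equations above, an easy way to check an answer is to compare truth tables: two expressions are equivalent exactly when they agree for every combination of inputs. The helper below does that exhaustively; the sample expressions are generic illustrations, not the solutions to any particular exercise.

```python
from itertools import product

def equivalent(f, g, names):
    """Return True if Boolean functions f and g agree for every input combination."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if f(**env) != g(**env):
            return False
    return True

# Example: A*B + A*not(B) should simplify to A (distribution, complement, identity).
original   = lambda A, B: (A and B) or (A and not B)
simplified = lambda A, B: A
print(equivalent(original, simplified, ["A", "B"]))   # True

# Example: a wrong "simplification" is caught.
wrong = lambda A, B: A or B
print(equivalent(original, wrong, ["A", "B"]))        # False
```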
http://engineeronadisk.com/V2/book_analysis/engineeronadisk-124.html
13
30
Temperature sensors, thermocouples, thermistors, and power transistor heat design Temperature and heat play a large role in electronics. It's heat that makes PN junctions and transistors work, and gives us the relation I = Io exp (qV/kT) for the current across a forward-biased PN junction. T is the absolute temperature, q the charge on the electron, and k Boltzmann's constant (the gas constant per molecule). kT/q = 25 mV at room temperature, and is called the thermal voltage. 3kT/2 is the average kinetic energy of a free molecule, and kT the average energy of a harmonic oscillator at temperature T. The absolute temperature T is usually measured in kelvin, K, and 0°C is 273.15K. Temperature is actually not a fundamental concept, just the rate of change of energy with entropy of a system, but is important because it specifies thermal equilibrium, and the direction of heat flow, on which our personal comfort also depends. The unit of temperature is arbitrary, as is zero point of temperature scales linearly related to absolute temperature, such as Celsius, Fahrenheit or Réaumur. None is any more "metric" than another, but the Celsius scale is the most widely used, and its degree is the same as the Kelvin degree. The Fahrenheit degree is 5/9 of the Celsius degree, and is 32 when Celsius is zero. All these things are familiar. It is not surprising that PN junctions can be used for thermometry. The differential voltage vd between the bases of a differential amplifier is related to the currents in the transistors by vd = (kT/q) ln (I1/I2). If the ratio of the currents can be held constant, then this voltage is proportional to the absolute temperature, and gives a linear thermometer. This principle is used in the LM335 temperature sensor, whose input stage is shown at the right. A voltage divider makes vd exactly 1/50 of the voltage across the device. The difference in the collector currents of the two transistors in the differential amplifier stage is fed back to a circuit that adjusts the voltage across the device until a constant ratio of collector currents results. Since this voltage is proportional to vd, it is proportional to the absolute temperature. The current ratio is selected to make the proportionality constant 10 mV/K. The LM335 is used as in the diagram on the left. The 2k resistor programs the current to about 1 mA, since the voltage across the LM335 is about 3 V at room temperature. The 10k potentiometer calibrates the device to exactly 10 mV/K by a single adjustment, which serves for all temperatures, since the output is linear and proportional to absolute temperature. The potentiometer can usually be omitted, since even a random device is fairly well calibrated, and will be in error by only a degree or two. The LM335 is specified for -40 to 100 °C. Its cousin LM235 for -40 to 125 °C, and the LM135 over the full range of -55 to 150 °C. Each of the types can be used intermittently at higher temperatures, up to 200 °C for the LM135, but the life is reduced. At lower temperatures, of course, silicon stops being a semiconductor. The greatest restriction on semiconductor temperature sensors is the limited temperature range, but this range includes most environmental temperatures, so it is very useful. The best thing about them is that they are linear, and no calibration curves are necessary. If you want a voltage proportional to Celsius temperatures, you must subtract a constant 2.7315 V from the output of the sensor. This can be done with op-amp voltage references, but is a bother. 
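Because the LM335 output is simply 10 mV per kelvin, the subtraction of 2.7315 V can also be done in software once the voltage has been read (for example by an ADC). The sketch below assumes you already have the sensor voltage as a number; the example readings are invented.

```python
def lm335_to_temperature(v_out):
    """Convert an LM335 output voltage (10 mV per kelvin) to kelvin and Celsius."""
    kelvin = v_out / 0.010
    celsius = kelvin - 273.15
    return kelvin, celsius

# Example: a reading of 2.982 V corresponds to 298.2 K, about 25 degC.
k, c = lm335_to_temperature(2.982)
print(f"{k:.1f} K = {c:.1f} degC")

# Single-point calibration at a known temperature (the 10k pot in the circuit does the
# same job in hardware): since the output is proportional to absolute temperature,
# scaling by one known point calibrates all temperatures.
def calibrated(v_out, v_ref, t_ref_kelvin):
    return t_ref_kelvin * (v_out / v_ref)

print(f"{calibrated(3.012, 2.982, 298.2) - 273.15:.1f} degC")
```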
The easy way is to use an LM35 Celsius temperature sensor, that includes all of this internally. The LM35 comes in the same TO-92 package as the LM335, but the connections are quite different, as shown in the diagram on the right. The LM35 needs no programming resistor, and is returned to ground. The output may be pulled below ground by a resistor that sinks 50 μA from the output, if you want to measure negative temperatures. The circuit is dead simple, as shown at the left. These sensors are very well calibrated. A random example gave 0.277 V output, or 27.7 °C, when a good mercury thermometer read 27.5 °C, and this is much closer than required for government work. The LM35 is an excellent choice for a thermometer. There is a related Fahrenheit sensor, the LM34. The speed of response of a sensor depends on how rapidly its temperature agrees with that of its surroundings, whose temperature is desired. In still air, the TO-92 package approaches 90% of its final value in 2 minutes, and is practically in equilibrium after 4 minutes. In a stirred oil bath, these times are reduced to 2 seconds and 4 seconds, respectively. The thermal time constants quoted are 80s in still air, 10s in 100 ft/min air, and 1s in stirred oil. These times will be affected by any covering of the TO-92 to protect it from liquids. To measure the temperature of a solid, the package can be cemented to the surface to ensure good thermal contact. The usual slip-on heat sinks can also reduce the equilibration time. In many cases, a desire to measure temperature to fractions of a degree is relatively useless, since there may be no uniformity or equilibration in the surroundings. Suppose two different conducting wires are brought into contact at the ends, making a closed loop. Charge carriers can move back and forth between the two conductors at each contact. Usually the rates in the two directions are not the same, so one conductor loses electrons and becomes positive, and the other gains electrons and becomes negative. The difference in potential adjusts the rates so that they are equal in equilibrium. The result is a contact potential. The minute you try to measure this contact potential, you are frustrated, because you introduce other contacts. All the conductors assume their potentials, but no current flows in equilibrium. However, if the two contacts are at different temperatures, the corresponding charge carrier rates of flow are usually different, so the contact potential is different, and a current flows. An electron picks up energy at the end with greater contact potential, and releases less energy at the end with the lesser contact potential; the difference appears as resistive heat. This is a kind of heat engine, which is called the Peltier effect, while the difference in potentials is called the Seebeck effect. The Seebeck voltage can be used to measure temperature. These thermoelectric effects occur whenever two different conductors are joined, not just when we want to measure temperature, and can be the source of strange behavior in some circuits. A thermocouple is a junction between two dissimilar metals, usually welded or brazed. If you take two such thermocouples, and connect the same metals on one side, and measure the voltage between the same metals on the other side, you will find the Seebeck voltage if the two junctions are at different temperatures. The junctions with the copper measurement wires exactly cancel out, if they are at the same temperature. The circuit is shown in the diagram at the right. 
One junction is usually held at 0 °C in an ice-water bath, and is called the cold junction. The other is the hot junction. The combinations generally used are standardized. A chromel (90 Ni, 10 Cr) - alumel (96 Ni, 2 Mn, 2 Al) couple is called type K, and gives about 40.28 μV/K at room temperature, and is useful up to about 1370 °C. An iron - constantan (55 Cu, 45 Ni) couple is called type J, gives about 51.45 μV/K, and is good up to 760 °C. A copper - constantan couple is called type T, gives 40.28 μV/K, and is good to 400 °C. Sometimes seen is the type S couple, platinum - 90 Pt 10 Rh, good up to 1750 °C, but giving only 5.88 μV/K. The voltage change with temperature is not linear, but tables are available to correlate voltage and temperature. There are many variations in the alloys, and the numbers in the references do not always agree exactly. For rough estimation, 40 μV/K can be used for types K and T thermocouples, which gives 1.00 mV at 25 °C. It is somewhat inconvenient to mess with an ice-water bath, and the bath must be stirred for good accuracy as well. The cold junction can be replaced by a cold junction compensation circuit that furnishes the same change in voltage that the couple itself would. If the ambient temperature is above 0 °C, the extra voltage is supplied to make the output the same as if an actual cold junction was used. At 0 °C, the circuit must cancel the emf from the junctions of the thermocouple metals to copper, to give a zero output. At other temperatures, it should give the same mV difference as a cold junction. A circuit that will do this is shown at the left, with values for a type T couple. This is not the most convenient circuit, but it shows the principle clearly. The upper section uses an LM335 to make a voltage that increases at the same rate as the type T thermocouple voltage, about 40 μV/K. The voltage divider resistors (200k and 856, 200k and 315) should be 1% resistors. The lower section uses an LM329 6.9V reference to provide a constant voltage equal to the ouput of the LM335 section at 0 °C., plus the thermovoltage, about 11 mV. My LM329 gave 6.97V (specs give 6.7 to 7.2 V). Whatever the voltage, it will not change with temperature (no more than 20 ppm/K for the B version). Then, when the hot junction is also at 0 °C, the output is zero, as it should be. The zero adjustment on the LM329 side is very useful.When the ambient temperature increases, the equivalent voltage from that temperature to zero is added in, just as if a cold junction were present. All the copper connections should be at the same temperature to avoid unwanted thermovoltages. Simply to test the circuit, the resistors need not be exact values. 820 and 300 ohms give reasonable results. I tested the circuits with what I believed to be a type K thermocouple. Putting the thermocouple in ice tea, the output was adjusted to 0 mV with the 10k pot. The thermocouple by itself gave -1.4 mV in ice tea when measured by the DMM (some references seem to think this will be zero, but it's not). When the thermocouple was brought to room temperature (which was about 25 °C, since it was summer), the DMM showed 1.0 mV. This was on the edge of the meter's sensitivity, so the measurement was rough, showing only that the circuit was probably working. If you do not get reasonable results, interchange the thermocouple leads. They look the same, but there are two possibilities, and Murphy's Law says that you will initially make the wrong assumption. 
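The cold-junction compensation arithmetic can also be sketched in software: measure the reference-junction temperature (with an LM335, say), add the voltage the couple would have produced between that temperature and 0 °C, and convert the total back to temperature. The sketch uses the rough 40 µV/K figure quoted above for type K and T couples, which is only a near-room-temperature approximation; real instruments use the standard thermocouple tables or polynomials instead.

```python
SEEBECK_UV_PER_K = 40.0   # rough figure for type K / type T near room temperature

def thermocouple_temperature(v_measured_mV, t_cold_junction_C):
    """Approximate hot-junction temperature using linear cold-junction compensation.

    v_measured_mV     : voltage read between the couple's copper leads, in millivolts
    t_cold_junction_C : temperature of the reference (cold) junction, e.g. from an LM335
    """
    # Voltage the couple would produce between the cold-junction temperature and 0 degC:
    v_cjc_mV = SEEBECK_UV_PER_K * t_cold_junction_C / 1000.0
    v_total_mV = v_measured_mV + v_cjc_mV
    # Linear conversion of the compensated voltage back to temperature:
    return v_total_mV * 1000.0 / SEEBECK_UV_PER_K

# As in the text: ~1.0 mV measured with the reference zeroed at 0 degC is roughly 25 degC.
print(f"{thermocouple_temperature(1.0, 0.0):.0f} degC")
# With the cold junction sitting at 22 degC instead of in ice water:
print(f"{thermocouple_temperature(0.12, 22.0):.0f} degC")
```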
Thermocouples can be purchased from Omega Corporation in packages of four. Working with this circuit will show you how cold junction compensation works. Next winter, I shall take it outdoors to actually test the compensation. Arrays of thermocouples can be used for heating or cooling, taking advantage of the Peltier heat. These devices require large currents at low voltages, and are not very efficient because of heat conduction between the hot and cold sides, which are close together and connected by metal. Nevertheless, these coolers (or heaters) are useful in unusual applications. Thermocouples can also be used to generate electricity from heat. A thousand in series taking advantage of a 100 °C temperature difference will give 4.0 V. Thermocouples, unfortunately, have low voltages. The conductivity of a semiconductor may be affected greatly by thermal activation of charge carriers, which causes the conductivity to increase and the resistance to decrease. A device making use of this effect is called a thermistor. In the usual negative-coefficient thermistor, the resistance is given pretty well by the formula R = R0 exp(A/T), where T is the absolute temperature. If the natural log of R is plotted against 1/T, a fairly good straight line results, from which the constant A can be determined. For a small blue thermistor of resistance 10k at 25 °C, I found A = 3225 K from the table of values furnished with the thermistor. The resulting formula R = 0.166 exp(3225/T) gave its resistance pretty well over a wide range. By the DMM, I found the resistance 10.16k at room temperature, and 25.41k in ice tea. This gives an idea of the range of variation, which is quite large. Thermistors are not linear, and probably not accurate, but they are very good for rough temperature sensing and very simple to use. Beware of the heating of the thermistor by the current passing through it. There are also thermistors with a positive temperature coefficient that use some different kinds of material, and the temperature dependence of the resistance follows some other law than the one given above. Ice tea and room temperature are two calibration points. If you use boiling water, remember that water boils below 100 °C at high altitudes. An immersion heater might provide the boiling, as well as intermediate temperatures as the water cools down. While we are at it, it is good to remember that the electrical resistance of copper depends on temperature according to the equation R = Ro(1 + αot), where t is the Celsius temperature and αo is 0.00427. When you go from freezing to boiling, the resistance of copper wire increases by 43%, which is actually quite a lot. The temperature coefficient of resistance can be defined as α = (1/R)(dR/dT). Even for exact linear expansion, it is a function of temperature. A thermistor obeying the exponential law given above has a coefficient of -A/T². For the blue thermistor, this was -0.0376 at 20 °C, while the coefficient of copper at the same temperature is +0.00393. The thermistor's change is ten times faster than copper's. The usual 1/4W resistors are carbon film resistors. Lacking information on the temperature coefficient, I measured a 10k resistor at room temperature and in ice tea. The resistance increased slightly in the ice tea (as expected for carbon), by about 60Ω, giving a coefficient of -0.00026. This is a rough measurement, but shows that the coefficient is low, about 260 ppm/K. The temperature change of resistance is used to measure temperature over a wide range. 
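The exponential thermistor law is easy to experiment with numerically. The sketch below uses the figures quoted above for the small blue thermistor (10k at 25 °C, A = 3225 K); it is an illustration of the R = R0 exp(A/T) model, not a calibration of any particular part, and the derived R0 comes out near 0.2 rather than the quoted 0.166 because it is forced through the 10k point.

```python
# Sketch of the negative-coefficient thermistor model R = R0 * exp(A/T),
# using the figures quoted above (about 10 k at 25 C, A = 3225 K).
import math

A_KELVIN = 3225.0          # material constant from the thermistor's table
R_25C = 10_000.0           # resistance at 25 C
T_25C = 298.15             # 25 C in kelvin

R0 = R_25C / math.exp(A_KELVIN / T_25C)   # ~0.20 ohm (the text quotes 0.166)

def resistance(temp_C):
    """Thermistor resistance at temp_C from the exponential model."""
    T = temp_C + 273.15
    return R0 * math.exp(A_KELVIN / T)

def temperature_C(resistance_ohm):
    """Invert the model: temperature implied by a measured resistance."""
    return A_KELVIN / math.log(resistance_ohm / R0) - 273.15

def temp_coefficient(temp_C):
    """alpha = (1/R) dR/dT = -A / T^2 for the exponential model."""
    T = temp_C + 273.15
    return -A_KELVIN / T**2

if __name__ == "__main__":
    print(resistance(0.0))        # ~27 k; compare 25.41 k measured in ice tea,
                                  # which is a little warmer than 0 C
    print(temperature_C(10_160))  # ~24.6 C for the 10.16 k room-temperature reading
    print(temp_coefficient(20.0)) # ~-0.0375 per kelvin (the text quotes -0.0376)
```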
A resistance thermometer may be made of the inert metal platinum, and tables are available of its behavior. Heat and electrostatic discharge are the enemies of semiconductor devices. Heat usually is Joule heat, and its rate of production is P = VI W. Amount of heat Q is measured in joules, J, and a watt is a joule per second. The heat capacity C of a body is the ratio of the heat transferred to it, divided by the change in temperature, or Q = CΔT, and is measured in J/K. The specific heat capacity of water is 4.186 J/g-K. The calorie is 4.186 J, so the heat capacity of water is 1 cal/g-K (by definition). The usual food calorie is the kilocalorie, 1000 cal. The Btu is the heat required to raise one pound of water 1°F, or 1054 J. You may often find the heat capacity of materials, or other thermal quantities, specified in calories or Btu, which is why they are mentioned here. Heat is random motion of the molecular constituents of matter, better thought of as a transfer of energy to this form, since it can be transformed into other forms of energy and is not a quantity, as if it were a fluid. There is no confusion when we are considering the heat produced in electronics. Heat is transferred by conduction, convection or radiation. Conduction obeys the law q = (kA/L)(t1 - t2), where q is the heat flow in W, A is the cross-sectional area, and L the length at whose ends the temperatures are t1 and t2. The coefficient k is the heat conductivity, in W/cm/K. Copper has a very high heat conductivity, 3.86 W/cm/K. Nonmetallic substances have conductivities in the region of 0.02 W/cm/K, organic liquids around 0.002 W/cm/K. Water has the highest thermal conductivity of any liquid, 0.0056 W/cm/K. Thermal resistance R is the ratio of temperature difference to heat flow, K/W. We see that for conduction, R = L/kA, just as for electrical resistance, where k takes the place of the electrical conductivity. A 1 cm. length of #22 Cu wire has a thermal resistance of 79.6 K/W. The analogy between thermal and electrical resistance gives many interesting results. Consider a body of heat capacity C at a temperature To at t = 0, in surroundings at a constant temperature T', with a thermal resistance of R between the body and its surroundings. If T(t) is the temperature of the body, then q = (T - T')/R, and also q = -C dT/dt, since the body loses heat as it cools. Therefore, (T' - T) = RC(dT/dt), or dT/dt + T/RC = T'/RC. This is analogous to an RC circuit, and the solution is T - T' = (To - T')exp(-t/RC), or the temperature difference decreases exponentially to zero. The product RC is the thermal time constant in seconds. The rule that the heat loss is proportional to the temperature difference is called Newton's Law of Cooling, and is well obeyed when the heat transfer is by conduction. In fluids, heat is more readily transferred by convection, where the heat is carried away by the moving fluid. In natural convection, the density differences due to temperature differences drive the fluid motion. In forced convection the fluid is moved by a fan or other means. The rate of convective heat transfer is q = hA(t1 - t2), where t1 is the temperature of the body, t2 the temperature of the fluid away from the body, and h is the film coefficient in W/cm²/K. For one side of a vertical plate in natural convection in air, an approximate value of h is 1.78 × 10⁻⁴(t1 - t2)^0.25. There are different empirical formulas for each case, so no general rules can be given. 
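Two of the numbers above are easy to reproduce. The sketch below computes the thermal resistance of 1 cm of #22 copper wire from R = L/kA and evaluates the exponential cooling curve from the thermal-RC analogy; the #22 AWG diameter (about 0.064 cm) is a standard value I have assumed, not a figure from the text, and the R and C values in the cooling example are made up.

```python
# Sketch: thermal resistance of a copper wire from R = L / (k A), and the
# exponential cooling implied by the thermal-RC analogy described above.
# The #22 AWG diameter (~0.064 cm) is an assumed standard value.
import math

K_COPPER = 3.86            # W / cm / K, as quoted above
DIAM_22AWG_CM = 0.0644     # assumed diameter of #22 wire, cm

def conduction_resistance(length_cm, area_cm2, k=K_COPPER):
    """Thermal resistance R = L / (k A), in K/W."""
    return length_cm / (k * area_cm2)

def cooling_curve(T0, T_ambient, R_thermal, C_heat, t_seconds):
    """Temperature after t seconds: T - T' = (T0 - T') exp(-t / RC)."""
    tau = R_thermal * C_heat
    return T_ambient + (T0 - T_ambient) * math.exp(-t_seconds / tau)

if __name__ == "__main__":
    area = math.pi * (DIAM_22AWG_CM / 2) ** 2          # ~3.26e-3 cm^2
    print(conduction_resistance(1.0, area))            # ~79.5 K/W (text: 79.6)
    # A body at 100 C in 20 C surroundings, R = 50 K/W, C = 2 J/K -> tau = 100 s
    print(cooling_curve(100.0, 20.0, 50.0, 2.0, 100))  # ~49.4 C after one time constant
```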
We see that the thermal resistance is a function of the temperature difference, not a constant, and Newton's Law is not obeyed exactly. The third heat transfer process is radiation. The hot body radiates into its surroundings, and the surroundings radiate back to the hot body. The net heat radiated is proportional to the difference in the fourth powers of the absolute temperature. The rate of radiation is q = 5.67 × 10⁻¹²eT⁴ W/cm², called the Stefan-Boltzmann Law. e is the emissivity of the surface. For polished metals, it can be quite small, 0.04 for Cu and 0.05 for Al. For an oxidized surface, the emissivity may rise to 0.6 or larger. Most nonmetallic surfaces, including paint, have high emissivity, which can be taken as 0.9. To see the effect of radiation, consider a vertical plate of 1 cm² area at 100 °C, with surroundings at 20 °C. For one side of the plate, the convection transfer is about 0.425 W, while the radiation transfer is 0.061 W (assuming e = 0.9). About 87% is by convection, 13% by radiation. The usual TO-92 package transistor is rated at 600 mW at 25 °C, and is rated to be used without any cooling help. The permissible power above room temperature is found by linearly decreasing the power to zero at 150 °C, the maximum chip temperature. This corresponds to a thermal resistance of about 208 °C/W, junction to ambient (the heat producing element is called the junction). The package is assumed to be cooled by natural convection, and by conduction down the leads. If a "heat sink" is added, which for the TO-92 are usually fins, convectional transfer is improved. This may help in getting the maximum current from small voltage regulators, or in lengthening the life of transistors that run hot. Power transistors are meant to be used with a separate heat sink, and must be properly mounted. One package is the TO-3, the metal package originally designed for power transistors. The collector of the transistor is connected to the metal case to facilitate heat transfer. An example is the 2N3055, with maximum IC of 15A, maximum IB of 7A and maximum VCE 60 V, dissipating up to 115W. Beta is 20 at IC = 4 A, 5 at 10 A. This is a popular transistor for rugged work, such as power supply pass transistors. The thermal resistance from junction to case is 1.52 °C/W, and the junction temperature should not exceed 200 °C. For the full 115W, the thermal resistance to ambient at 25 °C must not exceed 1.52 °C/W, which not surprisingly is the quoted value. If the heat sink has a thermal resistance of 2.0 °C/W from case to ambient, then the total thermal resistance would be 3.52 °C/W, and the maximum power would be 175/3.52 = 50W. This calculation must be carried out in each case to determine the permissible power dissipation, and shows the importance of the heat sink. When making these calculations, consider the thermal resistances from junction to case, case to heat sink, and heat sink to ambient, and the maximum junction (usually 150 °C in a plastic package) and ambient temperatures. A more modern and cheaper package is the TO-220, where the collector is brought out from the plastic encapsulation to a metal tab for cooling. An example equivalent to the 2N3055 is the 2N6487, with maximum IC = 15A, IB = 5A and VCE = 60V. The thermal resistance, junction to tab, is 1.67 °C/W, which means a maximum power dissipation of 75W, assuming a maximum junction temperature of 150 °C. 
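All of these permissible-power figures come from the same relation: maximum dissipation equals the allowed junction-to-ambient temperature rise divided by the total thermal resistance. A minimal sketch (the function name is mine):

```python
# Sketch of the heat-sink arithmetic used above:
# maximum dissipation = (max junction temp - ambient) / total thermal resistance.

def max_power_W(t_junction_max, t_ambient, *thermal_resistances):
    """Sum the junction-to-case, case-to-sink and sink-to-ambient resistances
    (all in degrees C per watt) and return the allowable dissipation in watts."""
    r_total = sum(thermal_resistances)
    return (t_junction_max - t_ambient) / r_total

if __name__ == "__main__":
    # 2N3055 (TO-3): 200 C junction limit, 25 C ambient,
    # 1.52 C/W junction-to-case plus a 2.0 C/W heat sink -> about 50 W.
    print(max_power_W(200, 25, 1.52, 2.0))
    # 2N6487 (TO-220) with no heat sink: 70 C/W junction-to-ambient -> about 1.8 W.
    print(max_power_W(150, 25, 70))
```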
The smaller TIP29 with a maximum IC of 1A, and a beta of 15 at this current, has a thermal resistance of 4.16 °C/W, which gives a maximum dissipation of 30W. To show the effect of a heat sink, the 2N6487 has a thermal resistance from junction to ambient of 70 °C/W, which permits a dissipation of only 1.8W. The TIP29 has a thermal resistance of 62.5 °C/W to ambient. 2W is about the maximum power that can be dissipated by a TO-220 used without a heat sink. There is also the smaller TO-202 package with a smaller tab with a "waist" for lower power. Heat sinks are available in a great number of forms at low cost, and can be made from sheet metal for special cases. It is often necessary to insulate the case or tab from the metallic heat sink, and this is usually done with 0.002" mica insulators, which have a low thermal resistance, aided by compounds of zinc powder in silicone grease. Grease should always be used with mica insulators. Special insulating pads with low thermal resistance are also available that eliminate the mess of silicone grease. There are kits for the TO-3 and the TO-220 that contain all that is required for mounting. The 6-32 screws should be tightened to 8 in-lb torque. Metal-to-metal, the thermal resistance will be about 1 °C/W, 1.6 °C/W for mica, both greased, for TO-220. For the TO-3, the figures are 0.1 and 0.36, respectively (there is a larger area). With insulation, transistors can use a metal box or chassis as a heat sink. Where thermal resistance values are available, the calculations are easy. Sometimes heat sinks are specified for so many watts, and this could mean a variety of things. Actual temperature measurements can help to make things clear. If possible, the whole heat sink can be insulated, so that insulators are not necessary between the transistor and the heat sink. The leads on TO-220 packages are not designed to support the packages, so the tab should always be supported. This rule is often broken. Collector junctions can break down under heavy currents and high voltages, before the power limitations are reached. This is called "second breakdown" and should be considered in critical cases, in addition to junction temperature. You are always safe if VCE is reduced sufficiently below the maximum value. Transistor data sheets give graphs showing safe operating areas. BJT's cannot be paralleled without special considerations, because a hotter junction has a lower voltage drop, which hogs the current and causes even more heating. A series resistor is generally necessary to equalize the currents safely. Incidentally, FET's have the opposite characteristic, and can be safely paralleled for higher current. FET's require little drive, and so are often used, although fragile and inferior to bipolar transistors for power. Composed by J. B. Calvert Created 11 August 2001 Last revised 14 August 2001
http://mysite.du.edu/~etuttle/electron/elect23.htm
13
16
May 26, 1999 The Hubble Space Telescope Key Project team has announced that it has completed efforts to measure precise distances to far-flung galaxies, an essential ingredient needed to determine the age, size and fate of the universe. "Before Hubble, astronomers could not decide if the universe was 10 billion or 20 billion years old. The size scale of the universe had a range so vast that it didn't allow astronomers to confront with any certainty many of the most basic questions about the origin and eventual fate of the cosmos," said team leader Wendy Freedman, of the Observatories of the Carnegie Institution of Washington. "After all these years, we are finally entering an era of precision cosmology. Now we can more reliably address the broader picture of the universe's origin, evolution and destiny." The team's precise measurements are the key to learning about the expansion rate of the universe, called the Hubble constant. Measuring the Hubble constant was one of the three major goals for NASA's Hubble Space Telescope before it was launched in 1990. For the past 70 years astronomers have sought a precise measurement of the Hubble constant, ever since astronomer Edwin Hubble realized that galaxies were rushing away from each other at a rate proportional to their distance, i.e. the farther away, the faster the recession. For many years, right up until the launch of the Hubble telescope - the range of measured values for the expansion rate was from 50 to 100 kilometers per second per megaparsec (a megaparsec, or mpc, is 3.26 million light-years). The team measured Hubble's constant at 70 km/sec/mpc, with an uncertainty of 10 percent. This means that a galaxy appears to be moving 160,000 miles per hour faster for every 3.3 million light-years away from Earth. "The truth is out there, and we will find it," said Dr. Robert Kirshner, of Harvard University. "We used to disagree by a factor of 2; now we are just as passionate about 10 percent. A factor of two is like being unsure if you have one foot or two. Ten percent is like arguing about one toe. It's a big step forward." Added Dr. Robert Kennicutt of the University of Arizona, a co-leader of the team: "Things are beginning to add up. The factor of two controversy is over." The team used the Hubble telescope to observe 18 galaxies out to 65 million light-years. They discovered almost 800 Cepheid variable stars, a special class of pulsating star used for accurate distance measurement. Although Cepheids are rare, they provide a very reliable "standard candle" for estimating intergalactic distances. The team used the stars to calibrate many different methods for measuring distances. "Our results are a legacy from Hubble telescope that will be used in a variety of future research," said Dr. Jeremy Mould, of the Australian National University, also a co-leader of the team. "It's exciting to see the different methods of measuring galaxy distances converge, calibrated by the Hubble Space Telescope." Combining the Hubble constant measurement with estimates for the density of the universe, the team determined that the universe is approximately 12 billion years old - similar to the oldest stars. This discovery clears up a nagging paradox that arose from previous age estimates. The researchers emphasize that the age estimate holds true if the universe is below the so-called "critical density" where it is delicately balanced between expanding forever or collapsing. 
Or, the universe is pervaded by a mysterious force pushing the galaxies farther apart, in which case the Hubble measurements point to an even older universe. The universe's age is calculated using the expansion rate from precise distance measurements, and the calculated age is refined based on whether the universe appears to be accelerating or decelerating, given the amount of matter observed in space. A rapid expansion rate indicates the universe did not require as much time to reach its present size, and so it is younger than if it were expanding more slowly. The Hubble Space Telescope Key Project team is an international group of 27 astronomers from 13 different U.S. and international institutions. The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, Inc. for NASA, under contract with NASA's Goddard Space Flight Center in Greenbelt, MD. - end -
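As a quick check of the arithmetic in the release, the sketch below converts H0 = 70 km/s/Mpc into the "miles per hour per 3.3 million light-years" figure and into the simple expansion age 1/H0. This is my own back-of-the-envelope addition, not part of the release; 1/H0 ignores deceleration or acceleration, which is why it differs from the quoted 12 billion years.

```python
# Quick check of the Hubble-constant arithmetic in the release above.

KM_PER_MPC = 3.0857e19          # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7
KM_PER_MILE = 1.609344

H0 = 70.0                       # km/s per Mpc

# Speed increase per megaparsec (3.26 million light-years), in miles per hour:
mph_per_Mpc = H0 / KM_PER_MILE * 3600.0
print(round(mph_per_Mpc))       # ~156,600 mph, which the release rounds to
                                # 160,000 mph per 3.3 million light-years

# Hubble time 1/H0 in billions of years:
hubble_time_s = KM_PER_MPC / H0
print(hubble_time_s / SECONDS_PER_YEAR / 1e9)   # ~13.97 billion years
```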
http://www.sciencedaily.com/releases/1999/05/990526061527.htm
13
24
Are Black Holes Fuzzballs? A string theory description of the insides of black holes is surprising scientists. NASA artist concept of how a black hole might look if it were surrounded by a disk of hot gas and a large doughnut, or torus, of cooler gas and dust. The light blue ring on the back of the torus comes from the fluorescence of iron atoms excited by X-rays from the hot gas disk. [NASA/CXC/SAO photo] Most cosmologists used to believe that anything that entered a black hole ceased to exist. That is, the interior of a black hole was not changed by particles that entered it. Now, physicists at Ohio State University understand that all particles in the Universe are made of tiny vibrating strings. Their equations suggest things that enter a black hole continue to exist and, in fact, become bound up in a giant tangle of strings that fills a black hole from its core to its surface. In other words, black holes are not smooth and featureless as had been thought. Instead, they are stringy fuzzballs. How black holes are born. Astronomers believe a black hole forms when a supermassive object – a dying giant star – collapses in on itself to form a very small point of infinite gravity. That point is called a singularity. A special region in space surrounds the singularity. The border of that region is known as the event horizon. Any object that crosses the event horizon is pulled into the black hole, never to return. That means that not even light can escape from a black hole. How big is a black hole? The diameter of the event horizon depends on the mass of the object that formed it. For instance: - If the Sun were to collapse into a singularity, its event horizon would be about 1.9 miles across. - If the collapsing body were Earth, its event horizon would be only 0.4 inches across. So far, physicists have not known what lies between the event horizon and the singularity. That is, they don't know what is inside the black hole. What is a Singularity? The center of a black hole is a singularity. NASA's Goddard Space Flight Center defines it as, "A place where spacetime becomes so strongly curved that the laws of Einstein's general relativity break down and quantum gravity must take over." It also has been defined as: - the point where the curvature of space-time is infinite. - the point at which spacetime becomes compressed to the point of being infinitely dense and infinitely small. - the zero-dimensional point at the center of a black hole or other significant object – such as the Universe at the instant of the Big Bang – at which all conceptions of space and time break down and become incomprehensible. - the dimensionless point at the center of a black hole, where all the mass of the collapsing star has shrunk to infinite density. - the object of zero radius into which the matter in a black hole is believed to fall. Previously, they had thought there was no structure or anything measurable inside the event horizon. The inside of a black hole was uniform and featureless throughout. What was wrong with the old theory? The problem with the old theory was that physicists must be able to trace the end product of any process, including the process that makes a black hole, back to the conditions that created it. 
But, if all black holes are the same, then no black hole can be traced back to its unique beginning, and any information about the particles that created it is lost forever at the moment the hole forms. Now the Ohio State researchers have described the structure of black holes in a new way. According to string theory, all the fundamental particles of the Universe – protons, neutrons, electrons – are made of different combinations of strings. But as tiny as strings are, they can form large black holes through a phenomenon known as fractional tension. Strings can be stretched, but each carries a certain amount of tension, like a guitar string. With fractional tension, the tension decreases as the string gets longer. Just as a long guitar string is easier to pluck than a short guitar string, a long strand of quantum mechanics strings joined together is easier to stretch than a single string, according to the new theory. So, when a great many strings join together, as they would in order to form the many particles necessary for a very massive object like a black hole, the combined ball of string is very stretchy, and expands to a wide diameter. When the Ohio State physicists derived their formula for the diameter of a fuzzy black hole made of strings, they found that it matched the diameter of the black hole event horizon suggested by the old theory. Since the new theory suggests strings continue to exist inside a black hole, and the nature of the strings depends on the particles that made up the original source material, then each black hole is as unique as are the stars, planets, or galaxy that formed it. The strings from any subsequent material that enters the black hole would remain traceable. That means a black hole can be traced back to its original conditions, and information about what entered survives. SOURCE: OHIO STATE UNIVERSITY Copyright 2004 Space Today Online
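The event-horizon sizes quoted earlier follow from the standard Schwarzschild-radius formula r = 2GM/c². The short sketch below is my addition, not part of the article; note that the article's 1.9-mile and 0.4-inch figures are close to the radius for the Sun and the Earth.

```python
# Schwarzschild radius r = 2 G M / c^2, to check the event-horizon sizes
# quoted in the article.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s

M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C**2

print(schwarzschild_radius_m(M_SUN) / 1609.344)   # ~1.8 miles (radius, Sun)
print(schwarzschild_radius_m(M_EARTH) * 39.37)    # ~0.35 inches (radius, Earth)
```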
http://www.spacetoday.org/DeepSpace/Stars/BlackHoles/BlackHoleFuzzball.html
13
157
Ratio And Proportion Activities DOC Proportion Activity (One Foot Tale) Class: Algebra I. Summary: In this lesson, students apply proportions to examine real life questions. ... Write a ratio using your height in inches: 12” : _____ (actual height) A ratio is a comparison of two quantities that tells the scale between them. Ratios can be expressed as quotients, fractions, decimals, percents, or in the form of a:b. Here are some examples: The ratio of girls to boys on the swim team is 2:3 or . Number / Rate Ratio Proportion / Video Interactive / Print Activity. Name: ( adding or subtracting multiplying or dividing. Title: Rate Ratio Proportion PRINT ACTIVITY Author: Alberta Learning Last modified by: Mike.Olsson Created Date: This can be done playing concentration with the words and definitions or similar activities. (D) Ratio, Proportion, Scale: What is the connection? Using the FWL (Fifty Words or Less) strategy (attachment 8) the students will explain the connection between ratio, proportion, and scale. Activities (P/E ratio) Definition. The P/E ratio of a stock is the price of a company's stock divided by the company's earnings per share. ... Tutorial (RATIO AND PROPORTION) Author: 1234 Last modified by: vtc Created Date: 9/15/2003 1:39:00 AM Group activities. Question & Answer during the session.. Learner engagement during session. Worksheet Linked Functional Skills: ... Ratio, Proportion and Scale (Sample Lesson Plan) Functional Skills: Mathematics Level 2. Scale Card 2. Ratio and Proportion Chapter 10 in the Impact Text. Home Activities. Title: Ratio and Proportion Chapter 10 in the Impact Text Author: DOE Last modified by: DOE Created Date: 11/15/2012 3:03:00 PM Company: DOE Other titles: Ratio (Proportion), Percent (Share), and Rate. 1) Ratio and Proportion. ... When you examine “GDP Composition by Economic Activities in Major Countries (2003)” in page 18, how could you describe the main character of US GDP composition? A: To know ratio and proportion. References to the Framework: Year 5 - Numbers and the number system, Ratio and Proportion, p26 – To solve simple problems involving ratio and proportion. Activities: Organisation. Whole class for mental starter and teacher input; Unit 10 Ratio, proportion, ... 6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Find pairs of numbers with a sum of 100; ... Ratio for one serving, (i.e. if the recipe uses 1 cup of sugar, and the recipe serves 8, the ratio for one serving equals 1/8 c. sugar). Proportion used to increase recipe to 30 servings. 1/8 servings=x/30 servings. Show the work to solve proportion. Ratios and Proportion Online activities 3.25. ... a subset : set ratio of 4:9 can be expressed equivalently as 4/9 = 0.‾ 4 ˜ 44.44%) Balance the blobs 5.0 understand ratio as both set: set comparison (for example, number of boys : ... RATIO/PROPORTION WORD PROBLEMS A . ratio/rate. is the comparison of two quantities. Examples: The ratio of 12 miles to 18 miles = 12 mi / 18 mi = 2 mi / 3 mi. The rate of $22.50 for 3 hours = $22.50 / 3 hrs. NOTE: 1 ... If we’re interested in the proportion of all M&Ms© that are orange, what are the parameter,, and the statistic,? What is your value of ? p = proportion of all M&Ms that are orange = proportion of M&Ms in my sample of size n that are orange = x/n. Write the proportion. 8 = 192 . 3 n. 2. Write the cross products 8 * n = 192 * 3. 3. Multiply 8n = 576. 4. Undo ... 
the male to female ratio is 6:6. If there are 160 players in the league, how many are female? 22. RATE/RATIO/PROPORTION /% UNIT. Day 1: Monday. Objective: Students will be able to find the unit rate in various situations. Warm-Up: (Review from last week) Put on board or overhead. Express each decimal as a fraction in simplest form:.60 2) 1.25 3) .35. Students complete Independent Practice activities that serve as a summative assessment, since instructional feedback is not provided at this level. ... Ratio and Proportion Level 1- Level 6 (Individually) http://www.thinkingblocks.com/TB_Ratio/tb_ratio1.html . Fractions, decimals and percentages, ratio and proportion. Year 6 Autumn term Unit ... 6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Understand the ratio concept and use ratio reasoning to solve related problems. ... John A. Van de Walle’s Developing Concepts of Ratio and Proportion gives the instructor activities for the development of proportional reasoning in the student. The activities which follow include a good introduction to graphing as well as a great application of ratio and proportion. Directions: What’s the Story, What’s the Math . Proportion. Ratio. Similarity. Generic. Activities: Visit Little Studio Lincoln Room. View Standing Lincoln Exhibit. Examine Resin cast of Volk’s life mask of Lincoln. ... Additional Activities: Visit Atrium Gallery. View Bust Cast from Standing Lincoln Statue in 1910. Activities: Divide the class into small groups. Have the students create in Geogebra create Fibonacci Rectangles and the Shell Spirals. ... which is the Golden ratio, which has many applications in the human body, architecture, and nature. Activities: The following exercises meet the Gateway Standards for Algebra I – 3.0 (Patterns, Functions and Algebraic Thinking) ... The exterior of the Parthenon likewise fits into the golden proportion so that the ratio of the entablature ... Concept Development Activities . 7- 1 Ratio and proportion: Restless rectangles . activity require students to compare and contrast rectangles of the same shapes but different sizes and make a discovery of their lengths and width to discover the properties of similar rectangles. Definitions of ratio, proportion, extremes, means and cross products Write and simplify ratios The difference between a ratio and a proportion ... Learning Activities ... Once completed, students should calculate the ratio of the length to the width by dividing. In both cases, students should calculate a ratio of L :W approximately equal to 1.6 if rounded to the nearest tenth. Ratio, Proportion, Data Handling and Problem Solving. Five Daily Lessons. Unit ... Year Group 5/6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities / Focus Questions Find pairs of numbers ... Apply the concept of ratio, proportion, and similarity* in problem-solving situations*. 4.5.a. ... Planning Daily Lesson Activities that Incorporate the MYP: (AOIs, LP, rigor, holistic learning, communication, internationalism, etc) Activities 1) Use guided practice to refresh the concept of ratio and proportion, and the process for solving proportions for a missing value. 2) ... ... Gulliver's Travels Swift 7-12 Ratio, proportion, measurement Webpage Holes Sachar 6-8 Ratio, proportion, data collection , percent ... 
Activities, Stories, Puzzles, and Games Gonzales, Mitchell & Stone Mathematical activities, puzzles, stories & games from history Moja ... Apply knowledge of ratio and proportion to solve relationships between similar geometric figures. ... Digital Cameras Activities– Heather Sparks. Literature: If you Hopped Like a Frog and Lesson. Materials: Jim and the Bean Stalk Book. They calculate the ratio of circumference to diameter for each object in an attempt ... The interactive Paper Pool game provides an opportunity for students to develop their understanding of ratio, proportion, ... The three activities in this investigation center on situations involving rational ... LessonTitle: Proportion Activities (One Foot Tale, ... Application of Ratio and Proportion Vocabulary Focus. Proportion Materials . A literature book such as Gulliver’s Travels or If You Hopped Like a Frog, Catalogues, Measuring tools. Assess ratio. is a comparison of two numbers. How much sugar do you put in your favorite cookie recipe? How much flour? ... What is the ratio of browns to rainbows? Proportion. A . proportion. is two equal ratios. Look at the first example on page 1 again. Algebra: Ratio & proportion, formulas (Statistics & Prob.: ... questions, and student activities associated with the delivery of the lesson. Nothing should be left to the imagination. Other teachers should be able to reproduce this exact lesson using this lesson plan. Math Forum: Clearinghouse of ratio and proportion activities for 6th grade. http://mathforum.org/mathtools/cell/m6,8.9,ALL,ALL/ Middle School Portal: Here you will find games, problems, ... What is a ratio and how do you use it to solve problems? ... Read and write a proportion. Determining how to solve proportions by cross multiplying. ... Activities. Day 1. Jumping Jacks: the test of endurance: • Is the approach to ratio, proportion and percentages compatible with work in mathematics? ... Athletic activities use measurement of height, distance and time, and data-logging devices to quantify, explore, and improve performance. This material can also be used in everyday problem solving that stems from activities such as baking. Goals and Standards. ... I also expect students to be familiar with the word ratio, ... Write the proportion you find from number 1 in 4 different ways. (Use properties 2-4) (Write and solve a proportion using the scale as one ratio.) ... CDGOALS\BK 6-8\Chp3\AA\Activities\Making a Scale Drawing (n = 1( x 4; n = 14 ft. 2 cm n cm. 25 km 80 km. Title: GRADE SIX-CONTENT STANDARD #4 Author: Compaq Last modified by: Unit 5 Fractions, decimals, percentages, ratio and proportion Term: Summer Year Group: 4/5 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Y4 ... ... Algebra, 5701, Trade and Industrial, Measurement, Circle, Area, Estimation, Ratio, Proportion, Scale. June, 2005 Career and Technical Education Sample Lesson Plan Format. Title: Constructing a Holiday Wreath. ... Activities: Sell stock. Purchase supplies. Identify the audience in writing ... ACTIVITIES: ICT: RESOURCES: MATHSWATCH: Clip 101 Estimating Answers. Clip 160 Upper & Lower Bounds Difficult Questions. B-A. ... Solve simple ratio and proportion problems such as finding the ratio of teachers to students in a school. Ratio Method of Comparison Significance ... cash plus cash equivalents plus cash flow from operating activities Average Collection Period. ... 
Total Assets Shows proportion of all assets that are financed with debt Long Term Debt to Total Capitalization Long Term Debt. Ratio, proportion, fraction, equivalent, lowest terms, simplify, percentage, ... ACTIVITIES ICT RESOURCES How to get more pupils from L3 to L5 in mathematics part 2: Learning from misconceptions: Fractions and Decimals Resource sheet A5. formulate how and when a ratio is used . write appropriate labels . apply knowledge of ratios to the project (ie. Holocaust Ratio Project) identify basic rates . differentiate between rates and ratios. ... Proportion. Differentiated Learning Activities. KS3 Framework reference Targeted activities for the introduction or plenary part of lesson Activity Ref: Simplify or transform linear expressions by collecting like terms; multiply a single term over a bracket. ... Ratio & Proportion Date: 2 HRS Ratio and proportion; e. Scale factor; f. Dilations; g. Real-life examples of similarity and congruency; h. Angle measures; j. ... Activities exploring similarity and congruence in three-dimensional figures and analyze the relationship of the area, ... Ratio and proportion. Topic/Sub-topic: Proportions of the human body. Foundational objective(s): ... contribute positively in group learning activities, and treat with respect themselves, others, and the learning materials used (PSVS) UEN- Lesson “Ratio, Rate, and Proportion” Activities 1 and 2 from . http://mypages.iit.edu/~smart/dvorber/lesson3.htm. Sample Formative Assessment Tasks Skill-based task. Identify (given examples) the difference between a ratio and a rate. Problem Task.
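Several of the excerpts above walk through solving a proportion by cross-multiplying (for example, 8/3 = 192/n gives 8n = 576). A tiny illustrative sketch of that arithmetic, not taken from any of the listed documents:

```python
# Solving a proportion a/b = c/x for x by cross-multiplying,
# as in the "8/3 = 192/n" worked example excerpted above.
from fractions import Fraction

def solve_proportion(a, b, c):
    """Return x such that a/b = c/x  (cross products: a*x = b*c)."""
    if a == 0:
        raise ValueError("a must be non-zero")
    return Fraction(b * c, a)

print(solve_proportion(8, 3, 192))   # 72, since 8*72 = 3*192 = 576
```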
http://freepdfdb.com/doc/ratio-and-proportion-activities
13
28
The NCTE Committee on Critical Thinking and the Language Arts defines critical thinking as "a process which stresses an attitude of suspended judgment, incorporates logical inquiry and problem solving, and leads to an evaluative decision or action." In a new monograph copublished by the ERIC Clearinghouse on Reading and Communication Skills, Siegel and Carey (1989) emphasize the roles of signs, reflection, and skepticism in this process. Ennis (1987) suggests that "critical thinking is reasonable, reflective thinking that is focused on deciding what to believe or do." However defined, critical thinking refers to a way of reasoning that demands adequate support for one's beliefs and an unwillingness to be persuaded unless the support is forthcoming. Why should we be concerned about critical thinking in our classrooms? Obviously, we want to educate citizens whose decisions and choices will be based on careful, critical thinking. Maintaining the right of free choice itself may depend on the ability to think clearly. Yet, we have been bombarded with a series of national reports which claim that "Johnny can't think" (Mullis, 1983; Gardner, 1983; Action for Excellence, 1983). All of them call for schools to guide students in developing the higher level thinking skills necessary for an informed society. Skills needed to begin to think about issues and problems do not suddenly appear in our students (Tama, 1986; 1989). Teachers who have attempted to incorporate higher level questioning in their discussions or have administered test items demanding some thought rather than just recall from their students are usually dismayed at the preliminary results. Unless the students have been prepared for the change in expectations, both the students and the teacher are likely to experience frustration. What is needed to cultivate these skills in the classroom? A number of researchers claim that the classroom must nurture an environment providing modeling, rehearsal, and coaching, for students and teachers alike, to develop a capacity for informed judgments (Brown, 1984; Hayes and Alvermann, 1986). Hayes and Alvermann report that this coaching led teachers to acknowledge students' remarks more frequently and to respond to the students more elaborately. It significantly increased the proportion of text-connected talk students used as support for their ideas and/or as cited sources of their information. In addition, students' talk became more inferential and analytical. A summary of the literature on the role of "wait time," (the time a teacher allows for a student to respond as well as the time an instructor waits after a student replies) found that it had an impact on students' thinking (Tobin, 1987). In this review of studies, Tobin found that those teachers who allowed a 3-5 second pause between the question and response permitted students to produce cognitively complex discourse. Teachers who consciously managed the duration of pauses after their questioning and provided regular intervals of silence during explanation created an environment where thinking was expected and practiced. However, Tobin concludes that "wait time" in and of itself does not insure critical thinking. A curriculum which provides students with the opportunity to develop thinking skills must be in place. Interestingly, Tobin found that high achievers consistently were permitted more wait time than were less skilled students, ndicating that teachers need to monitor and evaluate their own behavior while using such strategies. 
Finally, teachers need to become more tolerant of "conflict," or confrontation, in the classroom. They need to raise issues which create dissonance and refrain from expressing their own bias, letting the students debate and resolve problems. Although content area classroom which encourages critical thinking can promote a kind of some psychological discomfort in some students as conflicting accounts of information and ideas are argued and debated, such feelings may motivate them to resolve an issue (Festinger, 1957). They need to get a feel for the debate and the conflict it involves. Isn't there ample everyday evidence of this: Donahue, Geraldo Rivera, USA Today? Authors like Frager (1984) and Johnson and Johnson (1979) claim that to really engage in critical thinking, students must encounter the dissonance of conflicting ideas. Dissonance, as discussed by Festinger, 1957 promotes a psychological discomfort which occurs in the presence of an inconsistency and motivates students to resolve the issue. To help students develop skills in resolving this dissonance, Frager (1984) offers a model for conducting critical thinking classes and provides samples of popular issues that promote it: for example, banning smoking in public places, the bias infused in some sports accounts, and historical incidents written from both American and Russian perspectives. If teachers feel that their concept of thinking is instructionally useful, if they develop the materials necessary for promoting this thinking, and if they practice the procedures necessary, then the use of critical thinking activities in the classroom will produce positive results. Matthew Lipman (1988) writes, "The improvement of student thinking--from ordinary thinking to good thinking--depends heavily upon students' ability to identify and cite good reasons for their opinions." Training students to do critical thinking is not an easy task. Teaching which involves higher level cognitive processes, comprehension, inference, and decision making often proves problematic for students. Such instruction is often associated with delays in the progress of a lesson, with low success and completion rates, and even with direct negotiations by students to alter the demands of work (Doyle, 1985). This negotiation by students is understandable. They have made a career of passive learning. When met by instructional situations in which they may have to use some mental energies, some students resist that intellectual effort. What emerges is what Sizer (1984) calls "conspiracy for the least," an agreement by the teacher and students to do just enough to get by. Despite the difficulties, many teachers are now promoting critical thinking in the classroom. They are nurturing this change from ordinary thinking to good thinking admirably. They are 1) promoting critical thinking by infusing instruction with opportunities for their students to read widely, to write, and to discuss; 2) frequently using course tasks and assignments to focus on an issue, question, or problem; and 3) promoting metacognitive attention to thinking so that students develop a growing awareness of the relationship of thinking to reading, writing, speaking, and listening. (See Tama, 1989.) 
Another new ERIC/RCS and NCTE monograph (Neilsen, 1989) echoes similar advice, urging teachers to allow learners to be actively involved in the learning process, to provide consequential contexts for learning, to arrange a supportive learning environment that respects student opinions while giving enough direction to ensure their relevance to a topic, and to provide ample opportunities for learners to collaborate. Action for Excellence. A Comprehensive Plan to Improve Our Nation's Schools. Denver: Education Commission of the States, 1983. 60pp. [ED 235 588] Brown, Ann L. "Teaching students to think as they read: Implications for curriculum reform." Paper commissioned by the American Educational Research Association Task Force on Excellence in Education, October 1984. 42pp. [ED 273 567] Doyle, Walter. "Recent research on classroom management: Implications for teacher preparation." Journal of Teacher Education, 36 (3), 1985, pp. 31-35. Ennis, Robert. "A taxonomy of critical thinking dispositions and abilities." In Joan Baron and Robert Sternberg (Eds.) Teaching Thinking Skills: Theory and Practice. New York: W.H. Freeman, 1987. Festinger, Leon. A Theory of Cognitive Dissonance. Evanston, Illinois: Row Peterson, 1957. Frager, Alan. "Conflict: The key to critical reading instruction." Paper presented at annual meeting of The Ohio Council of the International Reading Association Conference, Columbus, Ohio, October 1984. 18pp. [ED 251 806] Gardner, David P., et al. A Nation at Risk: The Imperative for Educational Reform. An Open Letter to the American People. A Report to the Nation and the Secretary of Education. Washington, DC: National Commission on Excellence in Education, 1983. 72pp. [ED 226 006] Hayes, David A., and Alvermann, Donna E. "Video assisted coaching of textbook discussion skills: Its impact on critical reading behavior." Paper presented at the annual meeting of the American Research Association, San Francisco: April 1986. 11pp. [ED 271 734] Johnson, David W., and Johnson, Roger T. "Conflict in the classroom: Controversy and learning," Review of Educational Research, 49, (1), Winter 1979, pp. 51-70. Lipman, Matthew. "Critical thinking--What can it be?" Educational Leadership, 46 (1), September 1988, pp. 38-43. Mullis, Ina V. S., and Mead, Nancy. "How well can students read and write?" Issuegram 9. Denver: Education Commission of the States, 1983. 9pp. [ED 234 352] Neilsen, Allan R., Critical Thinking and Reading: Empowering Learners to Think and Act. Monographs on Teaching Critical Thinking, Number 2. Bloomington, Indiana: ERIC Clearinghouse on Reading and Communication Skills and The National Council of Teachers of English, Urbana, Illinois, 1989. [Available from ERIC/RCS and NCTE.] Siegel, Marjorie, and Carey, Robert F. Critical Thinking: A Semiotic Perspective. Monographs on Teaching Critical Thinking, Number 1. Bloomington, Indiana: ERIC Clearinghouse on Reading and Communication Skills and the National Council of Teachers of English, Urbana, Illinois, 1989. [Available from ERIC/RCS and NCTE.] Sizer, Theodore. Horace's Compromise: The Dilemma of the American High School. Boston: Houghton-Mifflin, 1984. [ED 264 171; not available from EDRS.] Tama, M. Carrol. "Critical thinking has a place in every classroom," Journal of Reading 33 (1), October 1989. Tama, M. Carrol. "Thinking skills: A return to the content area classroom." Paper presented at the annual meeting of the International Reading Association, 1986. 19pp. [ED 271 737] Tobin, Kenneth. 
"The role of wait time in higher cognitive level learning," Review of Educational Research, 57 (1), Spring 1987, pp. 69-95.
http://www.ericdigests.org/pre-9211/critical.htm
13
38
Perpendicular lines (Coordinate Geometry): when two lines are perpendicular, the slope of one is the negative reciprocal of the other. If the slope of one line is m, the slope of the other is -1/m. Drag points C or D. Note the slopes when the lines are at right angles to each other. When two lines are perpendicular to each other (at right angles or 90°), their slopes have a particular relationship to each other. If the slope of one line is m then the slope of the other line is the negative reciprocal of m, or -1/m. So for example in the figure above, the line AB has a slope of 0.5, meaning it goes up by a half for every one across. The line CD if it is perpendicular to AB has a slope of -1/0.5 or -2. Adjust points C or D to make CD perpendicular to AB and verify this result. Fig 1. Lines are still perpendicular Remember that the equation works both ways, so it doesn't matter which line you start with. In the figure above the slope of CD is -2. So the slope of AB when perpendicular is -1/(-2), or 0.5. Note too that the lines do not have to intersect to be perpendicular. In Fig 1, the two lines are perpendicular to each other even though they do not touch. The slope relationship still holds. Example 1. Are two lines perpendicular? Fig 1. Are these lines perpendicular? In Fig 1, the line AB and a line segment CD appear to be at right angles to each other. Determine if this is true. To do this, we find the slope of each line and then check to see if one slope is the negative reciprocal of the other. If the lines are perpendicular, each slope will be the negative reciprocal of the other. It doesn't matter which line we start with, so we will pick AB. Working from the coordinates, the slope of CD is -2.22, and the negative reciprocal of the slope of AB is -2.79. These are not the same, so the lines are not perpendicular, even though they look it. If you look carefully at the diagram, you can see that the point C is a little too far to the left for the lines to be perpendicular. Example 2. Define a line through a point perpendicular to a line In Fig 1, find a line through the point E that is perpendicular to CD. The point E is on the y-axis and so is the y-intercept of the desired line. Once we know the slope of the line, we can express it using its equation in slope-intercept form y=mx+b, where m is the slope and b is the y-intercept. First find the slope of the line CD; from the coordinates it is about -2.22. The line we seek will have a slope which is the negative reciprocal of this, -1/(-2.22), or about 0.45. The intercept is 10, the point where the line will cross the y-axis. Substituting these values into the equation, the line we need is described by the equation y = 0.45x + 10 This is one of the ways a line can be defined and so we have solved the problem. If we wanted to plot the line, we would find another point on the line using the equation and then draw the line through that point and the intercept. For more on this see Equation of a Line (slope - intercept form) Things to try In the diagram at the top of the page, press 'reset'. Note that because the slope of one line is the negative reciprocal of the other, the lines are perpendicular. Adjust one of the points C,D. The lines are no longer perpendicular. Click on "hide details". Determine the slope of both lines and prove they are not perpendicular. Click "show coordinates" if you wish to know them accurately. Click "show details" to verify. (C) 2009 Copyright Math Open Reference. All rights reserved
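A small numerical sketch of the relationship described on this page: compute the slopes of two segments from their endpoint coordinates, test whether the product of the slopes is -1, and build the perpendicular line through a given point in slope-intercept form. The coordinates in the first example are made up; the y = 0.45x + 10 case from Example 2 is reproduced at the end.

```python
# Sketch: check whether two segments are perpendicular by comparing slopes,
# and build the line through a point perpendicular to a given slope.
# Coordinates below are made-up examples, not the ones in the applet.

def slope(p, q):
    (x1, y1), (x2, y2) = p, q
    if x2 == x1:
        raise ValueError("vertical segment: slope undefined")
    return (y2 - y1) / (x2 - x1)

def perpendicular(p1, p2, q1, q2, tol=1e-9):
    """True if segment p1p2 is perpendicular to segment q1q2
    (product of slopes equals -1, within a tolerance)."""
    return abs(slope(p1, p2) * slope(q1, q2) + 1.0) < tol

def perpendicular_through(point, m):
    """Slope-intercept form (m', b) of the line through `point`
    perpendicular to a line of slope m."""
    m_perp = -1.0 / m
    x0, y0 = point
    return m_perp, y0 - m_perp * x0

print(perpendicular((0, 0), (4, 2), (1, 3), (2, 1)))   # True: slopes 0.5 and -2
print(perpendicular_through((0, 10), -2.22))           # (~0.45, 10.0), i.e. y = 0.45x + 10
```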
http://www.mathopenref.com/coordperpendicular.html
13
190
An angle is acute if it is less than 90°. A triangle with three internal angles, each less than 90°, is called an acute-angled triangle. The amplitude of a wave is half the vertical distance from a crest to a trough. An angle is a measure of turning. Angles are measured in degrees. The symbol for an angle is ∠. Under a rotation about a centre, C, an original point, A, will produce the image, A', with angle ACA' being the angle of rotation. The part of a circle or curve lying between two points on the curve. Area is a measure of surface size and is calculated in square units (e.g. cm², m², sq.ft.). ASA stands for angle-side-angle and refers to the known details of a possible triangle. A back-bearing provides a return direction for a given bearing, e.g. 080° is the back-bearing of 260°. A bearing is the direction (as an angle measured clockwise from north) that a point lies from a given location. It is usually given as a three-figure bearing, e.g. 087°. To bisect something is to cut it in half. In mathematics, lines and angles are often bisected. Two axes that intersect at right angles are used to identify points in a plane. Conventionally, the horizontal axis is labelled as the x axis and the vertical axis as the y axis. This creates a numbered grid on which points are defined by an ordered pair of numbers (x, y). The system is named after the French scholar, René Descartes. Under a rotation, an original point, A, and its image, A', are always the same distance away from the centre of rotation, C. The angle ACA' is the angle of rotation. A straight line segment joining two points of a circle (or any curve). The perimeter (or length of perimeter) of a circle. A circle is circumscribed about a shape if each vertex of the shape lies on the circumference of the circle. The shape is said to be inscribed in the circle. A closed curve is one that is continuous and that begins and ends in the same place. A numerical value which multiplies a term of an expression is called a coefficient. For example, the coefficient of x in the expression 4x + 7 is 4. A factor that is shared by two or more different numbers is called a common factor. If two angles sum to 90° then they are complementary angles. A polygon is concave if one or more of its interior angles is greater than 180°. A solid with a circular base. All points on the circumference of the base are joined to a vertex in a different plane than the base. Two shapes are congruent if their lengths and angles are identical (i.e. if one can 'fit' exactly over the other). Mirror images, for example, are congruent. A constant is a quantity (such as a number or symbol) that has a fixed value, in contrast to a variable. The difference between consecutive terms of a linear sequence is called the constant difference. A polygon is convex if all of its interior angles are less than 180°. A pair of numbers that determine the location of a point on the x-y plane are called coordinates. Any set of points, lines, curves and/or shapes are coplanar if they exist in the same plane. For any right-angled triangle, the cosine ratio for an angle of x° is the length of the side adjacent to the angle divided by the length of the hypotenuse. A counting number is a positive whole number greater than zero: 1, 2, 3, … Counting numbers are also called natural numbers. Algebraically, a number multiplied by itself three times is called a cube. The cube root of a number yields that number when multiplied by itself three times; that is, when it is cubed. For example, 2 is the cube root of 8. A cuboid is a solid with six rectangular faces. 
A polygon is cyclic if all of its vertices lie on a circle. A solid with a circular base and a parallel circular top whose every parallel slice in between is also a congruent circle. This (d.p.) is the standard abbreviation for decimal places: the number of digits that appear after the decimal point in a decimal number. A decagon is a ten-sided polygon. A decimal number is a number that includes tenths, hundredths, thousandths and so on, represented by digits after a decimal point. Decimal places are the digits representing tenths, hundredths, thousandths and so on that appear after the decimal point in a decimal number. In a fraction, the denominator is the number written below the dividing line. A straight line segment joining two points on the circumference of a circle and passing through its centre (or the length of this line). Equal to twice the radius length. One of the symbols 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Numbers are made up of one or more digits. For example, the number 72 has 2 digits and the number 1.807 has 4 digits. A dodecagon is a 12-sided polygon. An enlargement is a type of transformation in which lengths are multiplied whilst directions and angles are preserved. The transformation is specified by a scale factor of enlargement and a centre of enlargement. For every point in the original shape, the transformation multiplies the distance between the point and the centre of enlargement by the scale factor. An equation is a mathematical statement that two expressions have equal value. The expressions are linked with the 'equals' symbol (=). Items that are at an equal distance from an identified point, line or plane are said to be equidistant from it. An equilateral triangle has three sides of the same length and hence three angles of 60°. A fraction with the same value as another fraction is called an equivalent fraction. To evaluate a numerical or an algebraic expression means to find its value. The number written as a superscript above another number is called the exponent. It indicates the number of times the first number is to be multiplied by itself. This is also known as the index or the power of the first number. In mathematics, an expression is a combination of known symbols including variables, constants and/or numbers. Algebraic equations are made up of expressions. Examples of expressions include 3x + 2, x² - 5y, and 7(a + b). When the side of a convex polygon is produced (lengthened), the exterior angle is the angle between this line and an adjacent side. A factor is any of two or more numbers (or other quantities) that are multiplied together. For example, 2 and 5 are factors of 10. A formula is a general equation that shows the algebraic connection between related quantities. The gradient is the mathematical measure of slope. This (HCF) is the standard abbreviation for the highest common factor: the factor of highest value that is shared by two or more different numbers. A heptagon is a seven-sided polygon. A hexagon is a six-sided polygon. The factor of highest value that is shared by two or more different numbers is called the highest common factor. This is often abbreviated to HCF. A horizontal line runs parallel to the Earth's surface and at right angles to a vertical line. The horizontal axis of a graph runs from left to right. The hypotenuse of a right-angled triangle is the side opposite the right angle. A shape that is the result of a transformation on the coordinate plane. The number written as a superscript above another number is called the index. 
It indicates the number of times the first number is to be multiplied by itself. This is also known as the exponent or the power of the first number. Index notation is a shorthand way of writing a number repeatedly multiplied by itself. For example, 3 × 3 × 3 × 3 can be written as 3⁴ (in words: 3 to the power 4). A shape is inscribed within a circle if each vertex of the shape lies on the circumference of the circle. The circle is then said to be circumscribed about the shape. An integer is any of the natural numbers plus zero and the negative numbers: …, -3, -2, -1, 0, 1, 2, 3, … In the Cartesian coordinate system, an intercept is the positive or negative distance from the origin to the point where a line or curve cuts a given axis. An interior angle is the angle between adjacent sides at a vertex of a polygon. To intersect is to have a common point or points. For example, two lines intersect at a point and two planes intersect at a straight line. The point at which two or more lines intersect is called a vertex. Any number that cannot be expressed as the ratio of two integers is an irrational number. For example, the square root of 2 and π are both irrational numbers. An isosceles triangle is one that has two sides of equal length and hence two angles of equal size. The latitude measurement of a point on the Earth's surface is the angular distance north or south of the Equator. This is the standard abbreviation for the least common multiple: the smallest-value multiple that is shared by two different numbers. The smallest-value multiple that is shared by two different numbers is called the least common multiple. This is often abbreviated to LCM. A line segment is the set of points on the straight line between any two points, including the two endpoints themselves. In a linear sequence, consecutive terms of the sequence are generated by adding or subtracting the same number each time. A locus is a set of points that all satisfy a particular condition. For instance, the two-dimensional locus of points equidistant from two points A and B is the perpendicular bisector of the line segment AB. The longitude measurement of a point on the Earth's surface is the angular distance measured east to west from the zero line at Greenwich, England. The lowest common multiple of the denominators of two or more fractions is called the lowest common denominator. In the context of vectors, magnitude means the length of a vector. The bigger of the two arcs formed by two points on a circle. The midpoint of a line is the point halfway along it. The smaller of the two arcs formed by two points on a circle. A multiple of an integer is the product of that number and another integer. An n-gon is an n-sided polygon. A natural number is a positive whole number greater than zero: 1, 2, 3, … Natural numbers are also called counting numbers. Any number that is less than zero is a negative number. A net is a flat pattern of polygons which, when folded up, creates a polyhedron (a many-sided solid). A nonagon is a nine-sided polygon. Mathematical notation is a convention for writing down ideas in mathematics. Some examples are fraction notation, vector notation and index notation. A line on which numbers are represented graphically is called a number line. In a fraction, the numerator is the number written above the dividing line. An angle is obtuse if it is over 90° but less than 180°. A triangle with one internal angle of between 90° and 180° is called an obtuse-angled triangle. An octagon is an eight-sided polygon. 
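A short worked example of the highest common factor and least common multiple defined above; the numbers 12 and 18 are illustrative choices for this sketch, not values from the glossary.

```latex
% HCF and LCM of 12 and 18 found from their prime factorisations.
\[
  12 = 2^2 \times 3, \qquad 18 = 2 \times 3^2
\]
\[
  \mathrm{HCF}(12, 18) = 2 \times 3 = 6, \qquad
  \mathrm{LCM}(12, 18) = 2^2 \times 3^2 = 36
\]
```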
The origin is the point of intersection of the x axis and the y axis. It has the coordinates (0,0). Two lines, curves or planes are said to be parallel if the perpendicular distance between them is always the same. A parallelogram is a quadrilateral with two sets of parallel sides. A pentagon is a five-sided polygon. A perfect square is a natural number which is equal to the square of another natural number. For example, 4, 9 and 16 are perfect squares as they are equal to 2, 3 and 4 squared respectively. The perimeter of a shape is the total length of its outside edge(s). A function has a period, p, if f(x + p) = f(x) for all values of x. So the period of the cosine function is 360° since cos(x° + 360°) = cos x°. A periodic function is one for which f(x + p) = f(x) for all values of x (for some particular value of p). For instance, sin x° is a periodic function since sin(x° + 360°) = sin x° for all values of x. Two lines or planes are perpendicular if they are at right angles to one another. A perpendicular bisector is a line that cuts in half a given line segment and forms a 90° angle with it. The irrational number pi represents the ratio of the lengths of the circumference to the diameter of a circle. It has the approximate value 3.14159265 and is always written using the symbol π. A planar figure is one that exists in a single plane. A plane has position, length and width but no height. It is an object with two dimensions. A point has no properties except position. It is an object with zero dimensions. Points in the x-y plane can be specified using x and y coordinates. A polygon is a closed, planar figure bounded by straight line segments. A number is raised to a particular power when it is multiplied by itself that number of times. Powers are written as a superscript above the number that is multiplied by itself. For example, 3⁴ means 3 to the power of 4, or 3 × 3 × 3 × 3. Any factor of a number that is a prime number is called a prime factor. A prime number is a positive integer that has exactly two factors: itself and 1. A solid whose ends are two parallel congruent polygons. Similar points in each shape are joined by a straight line. When a line segment is produced it is extended in the same direction. The result of multiplying one quantity by another. A proof is an argument consisting of a sequence of justified conclusions that is used to universally demonstrate the validity of a statement. A pyramid has a polygon as a base and one other vertex in another plane. This vertex is joined to each of the polygon's edges by a triangle. Pythagoras' theorem states that, for any right-angled triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides. A Pythagorean triple is a set of three positive integers where the square of the largest is equal to the sum of the squares of the other two numbers. Such a set of numbers conform to Pythagoras' theorem and could therefore be the lengths of the sides of a right-angled triangle. An example of a Pythagorean triple is (3, 4, 5). A plane is divided into quarters called quadrants by the axes in a Cartesian coordinate system. The quadrants are numbered first, second, third and fourth in an anticlockwise direction starting in the upper right quadrant. A quadratic equation has the general form ax² + bx + c = 0, where a, b and c are constants. A quadrilateral is a polygon with four sides. A straight line segment joining the centre of a circle with a point on its circumference (or the length of this line). A ratio compares two quantities. The ratio of a to b is often written a:b. 
For example, if the ratio of the width to the length of a swimming pool is 1:3, the length is three times the width. Any number that can be written as the exact ratio of two integers is a rational number. For example, -5, 12 and 2/3 are all rational numbers. A ray is the set of all points lying on a given side of a certain point on a straight line. The reciprocal of a number is 1 divided by that number. For example, 1/3 is the reciprocal of 3. The product of any number and its reciprocal is always 1. A rectangle is a quadrilateral with four interior angles of 90°. A decimal number with an infinitely repeating digit or group of digits is called a recurring decimal. The repeating group is indicated by a dot above the first and last digit. For example, 3.125 written with a dot above the 1 and above the 5 means 3.125125125… A reflex angle is over 180° but less than 360°. A regular polygon has sides of equal length and interior angles of the same size. The amount left over when an integer is divided by another integer is called the remainder. If an integer is divided by one of its factors, then the remainder is zero. An angle of 90°. A right pyramid has a regular polygon as its base and its vertex is directly over the centre of the base. A triangle with one internal angle of 90° is called a right-angled triangle. A transformation specified by a centre and angle of rotation. To round a number is to express it to a required degree of accuracy. In general, a rule is a procedure for performing a process. In the context of sequences, a rule describes the sequence and can be used to generate or extend it. This is the standard abbreviation for significant figures: digits used to write a number to a given accuracy, rather than to denote place value. SAS stands for side-angle-side and refers to the known details of a possible triangle. In the context of vectors, a scalar is a quantity that has magnitude only. The scale factor is the ratio of distances between equivalent points on two geometrically similar shapes. A scalene triangle is one with no equal-length sides and therefore no equal-size angles. The region of a circle bounded by an arc and the two radii joining its end points to the centre. The region of a circle bounded by an arc and the chord joining its two end points. A sequence is a set of numbers in a given order. All the numbers in a sequence are generated by applying a rule. Significant figures are the digits used to write a number to a given accuracy, rather than to denote place value. Two shapes are similar if one is congruent to an enlargement of the other. All squares are similar, as are all circles. Simplifying a fraction means to rewrite it so that the numerator and denominator have as small a value as possible. For any right-angled triangle, the sine ratio for an angle of x° is: sin x° = opposite/hypotenuse. Three-dimensional shapes are often referred to as 'solids'. A sphere is a ball-shaped solid whose points are all equidistant from a central point. A square number is a natural number which is equal to the square of another natural number. For example, 4, 9 and 16 are square numbers as they are equal to 2, 3 and 4 squared respectively. A square root is a number whose square is equal to the given number. For example, 2 is the square root of 4. SSA stands for side-side-angle and refers to the known details of a possible triangle. SSS stands for side-side-side and refers to the known details of a possible triangle. A straight line is a set of points related by an equation of the form y = ax + c. It has length and position, but no breadth and is therefore one-dimensional. 
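Using the recurring decimal 3.125125125… given above as an example, the following short derivation shows how a recurring decimal can be rewritten as a fraction (the method of multiplying by a power of 10 and subtracting is standard, but is added here purely as an illustration):

```latex
% Converting the recurring decimal 3.125125125... into a fraction.
\[
  x = 3.125125125\ldots \quad\Rightarrow\quad 1000x = 3125.125125125\ldots
\]
\[
  1000x - x = 999x = 3122 \quad\Rightarrow\quad x = \frac{3122}{999}
\]
```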
In algebra, to substitute means to replace a given symbol in an expression by its numerical value. For example, substituting 5 for n in the expression x = 3n gives x = 3 × 5 = 15. An angle is subtended by a line, A, if the lines forming the angle extend from the endpoints of line A. If two angles sum to 180° then they are supplementary angles. A letter or other mark that represents a quantity. The symbol x is often used to denote a variable quantity, while other letters are used to represent constant numbers. A plane figure has symmetry if the effect of a reflection or rotation is to produce an identical-looking figure in the same position. For any right-angled triangle, the tangent ratio for an angle of x° is: tan x° = opposite/adjacent. A line that touches a circle or curve at only one point. Each of the numbers in a sequence is called a term. In the sequence 3, 6, 9,... 6 is the second term and 9 is the third term. A decimal number that has a finite number of digits is called a terminating decimal. All terminating decimals can be expressed as fractions in which the denominator is a multiple of 2 and/or 5. A four-sided solid shape. A transformation on a shape is any operation which alters the appearance of the shape in a well defined manner. Geometric translation is a transformation on the coordinate plane which changes the position of a shape while retaining its lengths, angles, and orientation. A transversal line intersects two or more coplanar lines. A trapezium is a quadrilateral with one set of parallel sides and one set of non-parallel sides. A triangle is a three-sided polygon. For any right-angled triangle, the trigonometric ratios for an angle of x° are: sin x° = opposite/hypotenuse, cos x° = adjacent/hypotenuse and tan x° = opposite/adjacent. A turning point on a curve is a point at which the gradient is 0 but the points either side have a non-zero gradient. So a quadratic curve always has a minimum or maximum point which is a turning point. An undecagon is an 11-sided polygon. A fraction in which the numerator is equal to 1 is called a unitary fraction. A variable is a non-specified number which can be represented by a letter. The letters x and y are commonly used to represent variables. A vector is a quantity that has magnitude and direction. A vertical line runs at a right angle to a horizontal line. The vertical axis of a graph runs from top to bottom of a page. The x-y plane is a two-dimensional grid on which points and curves can be plotted. The x axis is normally the horizontal axis and the y axis the vertical one.
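As a closing worked example, the Pythagorean triple (3, 4, 5) mentioned above can be used to check Pythagoras' theorem and to read off the three trigonometric ratios for the angle x° opposite the side of length 3 (the numerical angle is an approximation added here for illustration):

```latex
% Pythagoras' theorem for the (3, 4, 5) triple and the resulting trigonometric ratios.
\[
  3^2 + 4^2 = 9 + 16 = 25 = 5^2
\]
\[
  \sin x^\circ = \frac{3}{5}, \qquad
  \cos x^\circ = \frac{4}{5}, \qquad
  \tan x^\circ = \frac{3}{4}, \qquad x \approx 36.9^\circ
\]
```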
http://www.absorblearning.com/mathematics/glossary.html
Russian-U.S. Arctic Census 2012
Expedition Purpose
Why Are Scientists Exploring Arctic Marine Ecosystems?
A key purpose of NOAA’s Ocean Exploration Initiative is to investigate the more than 95 percent of Earth’s underwater world that until now has remained virtually unknown and unseen. Such exploration may reveal clues to the origin of life on Earth, cures for human diseases, answers to how to achieve sustainable use of resources, links to our maritime history, and information to protect endangered species. The Bering Strait is a narrow body of water that separates the western-most point of Alaska from the eastern-most point of Russia, and it provides the only connection between the Pacific and Arctic Oceans. Water flowing through the Strait brings heat, nutrients, and freshwater into the Arctic. Although the Bering Strait is relatively small (about 85 km wide and 50 m deep), this flow has a strong influence on the Arctic Ocean ecosystem, and may also affect the deep ocean thermohaline circulation (the “global conveyor belt” that connects all of Earth’s oceans; see More About the Deep Ocean Thermohaline Circulation, below). Despite its importance, relatively little is known about the processes that affect the Bering Strait throughflow, or about how these processes will respond to rapid changes now being observed in the Arctic climate. [Note: A “strait” is defined as a narrow, navigable channel of water that connects two larger navigable bodies of water. The key feature of a strait is that it provides a way for ships to sail past obstacles that separate two bodies of water. Usually, the obstacles are land masses, but they may also be reefs, shallow water, or other features interfering with navigation. For a map of the Bering Strait and surrounding land masses, visit this page.] To improve our understanding of Arctic ecosystems and the impacts of climate change, the Russian-American Long-term Census of the Arctic (RUSALCA) was established in 2003 as a cooperative project of the National Oceanic and Atmospheric Administration (NOAA) and the Russian Academy of Sciences (“rusalca” means “mermaid” in the Russian language). The overall purpose of this project is to provide ways to detect and measure changes in Arctic Ocean ecosystems. In 2004, the first RUSALCA expedition began investigations of ecosystems in the Bering and Chukchi Seas. A key component of these investigations was the installation of instrument packages attached to moored buoys to measure chemical and physical properties of water flowing through the Bering Strait (this was part of a multi-year measuring program begun in 1990). In addition, RUSALCA scientists studied hydrothermal systems, atmospheric conditions, and Arctic marine life including fishes, plankton, bottom communities, and food webs. Links to reports and photographs from previous RUSALCA expeditions are available online. These studies showed that nutrients (needed by marine plants for photosynthesis) are highly concentrated in the western portions of the Bering Sea, with extremely low nutrient concentrations near the coast of Alaska. Rates of photosynthesis were highest just north of the Bering Strait and in the central Chukchi Sea. Farther north in the Chukchi Sea, photosynthetic rates declined, probably due to lower nutrient concentrations. High nutrient levels provide a foundation for ecosystems that contain large amounts of living organisms. 
Such ecosystems are said to have “high biological productivity.” In addition to nutrient availability, the productivity of Arctic marine ecosystems is also strongly affected by the presence of sea ice, water temperature, and current patterns. Food webs in these ecosystems tend to be less complex compared to other marine ecosystems, so that changes near the bottom of the food web (primary producers and primary consumers) can quickly affect animals near the top of the food web such as whales, seals, walruses, and sea birds. Benthic (bottom-dwelling) animals include clams, snails, polychaete worms, amphipods, echinoderms, crabs, and fishes. Filter-feeders obtain food from particulate material in the water, while deposit feeders consume organic material from sediments and the remains of other organisms that settle to the bottom. Both groups are important to the recycling of nutrients from degrading organic matter back into the water column, and changes in the distribution of these species may be an indication of changing environmental conditions. Particulate organic matter (POM) is a major food base for marine ecosystems in the study area. High nutrient concentrations in the western Bering Sea contribute to high levels of primary production, which in turn produces an abundant supply of POM, some of which is consumed by benthic organisms. Food webs in this area tend to be less complex (primary producer > POM > benthic consumer), because there is an ample supply of food. In the eastern Bering Sea, nutrient concentrations are lower and there is less primary production and less POM. In this area, most of the POM has already been processed by pelagic organisms by the time it reaches the bottom, so food webs are more complex (primary producer > POM > pelagic consumer > pelagic consumer > benthic consumer). So, changes in food web structure could be used to detect changes in nutrient content and other characteristics of the surrounding water. The 2004 RUSALCA expedition identified two groups of fishes that may also be useful as indicators of climate change: species that are native to the Chukchi Sea; and species that originate in the North Pacific or Bering Sea that are rarely found in the Chukchi Sea (native species are called “autochthonous” species, while those that originate elsewhere are called “allochthonous” species). To be useful as indicators of climate change, autochthonous species need to be fairly abundant (so that enough individuals can be captured to provide a statistically valid sample) as well as relatively non-mobile (so that changes in the distribution of the species can be easily detected). The Arctic Staghorn Sculpin, Gymnocanthus tricuspis, meets both requirements: it was the most abundant species found in samples collected by the RUSALCA 2004 expedition, and it spends much of its time burrowed in the mud of the Chukchi Sea and is not particularly mobile (the Staghorn Sculpin together with the Shorthorn Sculpin, Bering Flounder, and Arctic Cod accounted for 80 percent of fishes captured). So if the range of the Arctic Staghorn Sculpin were to change, this would suggest a possible change in climatic conditions. Similarly, an increase in the numbers of allochthonous species in the Chukchi Sea could also suggest that waters in the Chukchi Sea had become warmer, making them more suitable for species that originate in more southern areas. 
In addition to providing baseline information about fish distribution for comparison with future studies, the 2004 RUSALCA expedition identified physical characteristics that affect fish species composition and distribution. This is important for identifying ecosystem change, because physical characteristics can be measured and analyzed more quickly (for example, from moored buoys) than fish can be collected and identified. Understanding the relationship between fishes and physical characteristics of water masses provides important background for using information about physical characteristics to make inferences about ecosystems in the northern Bering and Chukchi Seas.
Major scientific questions guiding the Russian-U.S. Arctic Census 2012:
- What are the major physical and chemical properties of water flowing through the Bering Strait?
- What is the velocity of water flowing through the Bering Strait, and how does this velocity vary from month-to-month and year-to-year?
- What organisms are native to this area, and what species are migrating into the area from elsewhere?
- What are the optimum methods for monitoring environmental conditions and change in other Arctic regions?
The key technology component of the RUSALCA 2012 Expedition is moored instruments that measure physical and chemical properties of water flowing through the Bering Strait. Making these measurements is particularly challenging, because the Bering Strait region is covered by ice during the winter, and floating sea-ice can damage equipment. RUSALCA moorings are designed to survive encounters with sea ice by allowing the instruments to be knocked down by sea-ice or very strong currents, and then bounce back when the ice or currents are gone. A typical mooring includes a large (about 1 m diameter) float attached to a length of chain that is anchored in place by a heavy weight (such as railroad wheels). The chain is connected to the weight by an “acoustic release” that will disconnect the chain from the weight when scientists transmit a coded sound signal from the research vessel. Instruments are attached at certain points along the chain to collect data from specific depths. Instruments used in the RUSALCA expeditions include current meters, pressure gauges (to measure variations in water depth), temperature and salinity sensors, nutrient analyzers, transmissometers (to measure water turbidity), fluorometers (to measure chlorophyll concentrations that provide a measure of photosynthesis), and CTDs (see below). For more information about RUSALCA moorings, visit http://psc.apl.washington.edu/HLD/Bstrait/bstrait.html#Basics. A CTD is used to collect data on seawater conductivity, temperature, and depth. These data can be used to determine salinity of the seawater, which is a key indicator of different water masses. A CTD may be carried on a submersible, or attached to a water-sampling array known as a rosette that can be deployed from a research ship, or attached to a moored buoy. For more information on CTDs and rosettes, visit http://oceanexplorer.noaa.gov/technology/
More About Studying Food Webs Using Isotope Ratios
RUSALCA scientists study food webs in Arctic ecosystems using a tool called “stable isotope analysis.” Isotopes are forms of an element that have different numbers of neutrons. For example, carbon-13 (13C) contains one more neutron than carbon-12 (12C). Both forms occur naturally, but carbon-12 is more common. 
When an animal eats food that contains both carbon isotopes, carbon-12 is selectively metabolized, so the ratio of carbon-13 to carbon-12 in the tissues of the animal is higher than the ratio of these isotopes in the food they consumed. In other words, carbon-13 is “enriched” in the animal’s tissues. If this animal is eaten by another consumer, the enrichment process will be repeated. So the ratio of carbon-13 to carbon-12 increases with each increase in trophic level (i.e., with each step up the food chain). For additional discussion of stable isotope analysis, see “Who Is Eating Whom?” (http://oceanexplorer.noaa.gov/explorations/
More About the Deep Ocean Thermohaline Circulation
The deep ocean thermohaline circulation is driven by changes in seawater density. Two factors affect the density of seawater: temperature (the ‘thermo’ part) and salinity (the ‘haline’ part). Major features of the THC include:
- In the Northeastern Atlantic Ocean, atmospheric cooling increases the density of surface waters. Decreased salinity due to freshwater influx partially offsets this increase (since reduced salinity lowers the density of seawater), but temperature has a greater effect, so there is a net increase in seawater density. The formation of sea ice may also play a role, as freezing removes water but leaves salt behind, causing the density of the unfrozen seawater to increase. The primary locations of dense water formation in the North Atlantic are the Greenland-Iceland-Nordic Seas and the Labrador Sea.
- The dense water sinks into the Atlantic to depths of 1000 m and below, and flows south along the east coasts of North and South America.
- As the dense water sinks, it is replaced by warm water flowing north in the Gulf Stream and its extension, the North Atlantic Drift (note that the Gulf Stream is primarily a wind-driven current and is part of a subtropical gyre that is separate from the THC).
- The deep south-flowing current combines with cold, dense waters formed near Antarctica and flows clockwise in the Deep Circumpolar Current. Some of the mass deflects to the north to enter the Indian and Pacific Oceans.
- Some of the cold water mass is warmed as it approaches the equator, causing density to decrease. Upwelling of deep waters is difficult to observe, and is believed to occur in many places, particularly in the Southern Ocean in the region of the Antarctic Circumpolar Current.
- In the Indian Ocean, the water mass gradually warms and turns in a clockwise direction until it forms a west-moving surface current that moves around the southern tip of Africa into the South Atlantic Ocean.
- In the Pacific, the deepwater mass flows to the north on the western side of the Pacific Basin. Some of the mass mixes with warmer water, warms, and dissipates in the North Pacific. The remainder of the mass continues a deep, clockwise circulation. A warm, shallow current also exists in the Pacific, which moves south and west, through the Indonesian archipelago, across the Indian Ocean, around the southern tip of Africa, and into the South Atlantic.
- Evaporation increases the salinity of the current, which flows toward the northwest, joins the Gulf Stream, and flows toward the Arctic regions where it replenishes dense sinking water to begin the cycle again.
The processes outlined above are greatly simplified. In reality, the deep ocean THC is much more complex, and is not fully understood. 
Our understanding of the connections between the deep ocean THC and Earth’s ecosystems is similarly incomplete, but most scientists agree that:
- The THC affects almost all of the world’s ocean (and for this reason, it is often called the ‘global conveyor belt’);
- The THC plays an important role in transporting dissolved oxygen and nutrients from surface waters to biological communities in deep water; and
- The THC is at least partially responsible for the fact that countries in northwestern Europe (Britain and Scandinavia) are about 9°C warmer than other locations at similar latitudes.
In recent years, changes in the Arctic climate have led to growing concerns about the possible effects of these changes on the deep ocean THC. In the past 30 years, parts of Alaska and Eurasia have warmed by about six degrees Celsius. In the last 20 years, the extent of Arctic sea ice has decreased by 5 percent, and in some areas, sea ice thickness has decreased by 40 percent. Overall, the Arctic climate is warming more rapidly than elsewhere on Earth. Reasons for this include:
- When snow and ice are present, as much as 80% of solar energy that reaches Earth’s surface is reflected back into space. As snow and ice melt, surface reflectivity (called ‘albedo’) is reduced, so more solar energy is absorbed by Earth’s surface;
- Less heat is required to warm the atmosphere over the Arctic because the Arctic atmosphere is thinner than elsewhere;
- With less sea ice, the heat absorbed by the ocean in summer is more easily transferred to the atmosphere in winter.
Dense water sinking in the North Atlantic Ocean is one of the principal forces that drives the circulation of the global conveyor belt. Since an increase in freshwater inflow (such as from melting ice) or warmer temperatures in these areas would weaken the processes that cause seawater density to increase, these changes could also weaken the global conveyor belt.
For More Information
Director, Education Programs, NOAA Office of Ocean Exploration and Research
Other lesson plans developed for this Web site are available in the Education Section.
http://oceanexplorer.noaa.gov/explorations/12arctic/background/edu/purpose.html
The immune system protects the body against infection and disease. It is a complex and integrated system of cells, tissues, and organs that have specialized roles in defending against foreign substances and pathogenic microorganisms, including bacteria, viruses, and fungi. The immune system also functions to guard against the development of cancer. For these actions, the immune system must recognize foreign invaders as well as abnormal cells and distinguish them from self (1). However, the immune system is a double-edged sword in that host tissues can be damaged in the process of combating and destroying invading pathogens. A key component of the immediate immune response is inflammation, which can cause damage to host tissues, although the damage is usually not significant (2). Inflammation is discussed in a separate article; this article focuses on nutrition and immunity. Cells of the immune system originate in the bone marrow and circulate to peripheral tissues through the blood and lymph. Organs of the immune system include the thymus, spleen, and lymph nodes (3). T-lymphocytes develop in the thymus, which is located in the chest directly above the heart. The spleen, which is located in the upper abdomen, functions to coordinate secretion of antibodies into the blood and also removes old and damaged red blood cells from the circulation (4). Lymph nodes serve as local sentinel stations in tissues throughout the body, trapping antigens and infectious agents and promoting organized immune cell activation. The immune system is broadly divided into two major components: innate immunity and adaptive immunity. Innate immunity involves immediate, nonspecific responses to foreign invaders, while adaptive immunity requires more time to develop its complex, specific responses (1). Innate immunity is the first line of defense against foreign substances and pathogenic microorganisms. It is an immediate, nonspecific defense that does not involve immunologic memory of pathogens. Because of the lack of specificity, the actions of the innate immune system can result in damage to the body’s tissues (5). A lack of immunologic memory means that the same response is mounted regardless of how often a specific antigen is encountered (6). The innate immune system is comprised of various anatomical barriers to infection, including physical barriers (e.g., the skin), chemical barriers (e.g., acidity of stomach secretions), and biological barriers (e.g., normal microflora of the gastrointestinal tract) (1). In addition to anatomical barriers, the innate immune system is comprised of soluble factors and phagocytic cells that form the first line of defense against pathogens. Soluble factors include the complement system, acute phase reactant proteins, and messenger proteins called cytokines (6). The complement system, a biochemical network of more than 30 proteins in plasma and on cellular surfaces, is a key component of innate immunity. The complement system elicits responses that kill invading pathogens by direct lysis (cell rupture) or by promoting phagocytosis. Complement proteins also regulate inflammatory responses, which are an important part of innate immunity (7-9). Acute phase reactant proteins are a class of plasma proteins that are important in inflammation. 
Cytokines secreted by immune cells in the early stages of inflammation stimulate the synthesis of acute phase reactant proteins in the liver (10). Cytokines are chemical messengers that have important roles in regulating the immune response; some cytokines directly fight pathogens. For example, interferons have antiviral activity (6). These soluble factors are important in recruiting phagocytic cells to local areas of infection. Monocytes, macrophages, and neutrophils are key immune cells that engulf and digest invading microorganisms in the process called phagocytosis. These cells express pattern recognition receptors that identify pathogen-associated molecular patterns (PAMPs) that are unique to pathogenic microorganisms but conserved across several families of pathogens (see figure). For more information about the innate immune response, see the article on Inflammation. Adaptive immunity (also called acquired immunity), a second line of defense against pathogens, takes several days or weeks to fully develop. However, adaptive immunity is much more complex than innate immunity because it involves antigen-specific responses and immunologic “memory.” Exposure to a specific antigen on an invading pathogen stimulates production of immune cells that target the pathogen for destruction (1). Immunologic “memory” means that immune responses upon a second exposure to the same pathogen are faster and stronger because antigens are “remembered.” Primary mediators of the adaptive immune response are B lymphocytes (B cells) and T lymphocytes (T cells). B cells produce antibodies, which are specialized proteins that recognize and bind to foreign proteins or pathogens in order to neutralize them or mark them for destruction by macrophages. The response mediated by antibodies is called humoral immunity. In contrast, cell-mediated immunity is carried out by T cells, lymphocytes that develop in the thymus. Different subgroups of T cells have different roles in adaptive immunity. For instance, cytotoxic T cells (killer T cells) directly attack and kill infected cells, while helper T cells enhance the responses and thus aid in the function of other lymphocytes (5, 6). Regulatory T cells, sometimes called suppressor T cells, suppress immune responses (12). In addition to its vital role in innate immunity, the complement system modulates adaptive immune responses and is one example of the interplay between the innate and adaptive immune systems (7, 13). Components of both innate and adaptive immunity interact and work together to protect the body from infection and disease. Nutritional status can modulate the actions of the immune system; therefore, the sciences of nutrition and immunology are tightly linked. In fact, malnutrition is the most common cause of immunodeficiency in the world (14), and chronic malnutrition is a major risk factor for global morbidity and mortality (15). More than 800 million people are estimated to be undernourished, most in the developing world (16), but undernutrition is also a problem in industrialized nations, especially in hospitalized individuals and the elderly (17). Poor overall nutrition can lead to inadequate intake of energy and macronutrients, as well as deficiencies in certain micronutrients that are required for proper immune function. Such nutrient deficiencies can result in immunosuppression and dysregulation of immune responses. 
In particular, deficiencies in certain nutrients can impair phagocytic function in innate immunity and adversely affect several aspects of adaptive immunity, including cytokine production as well as antibody- and cell-mediated immunities (18, 19). Overnutrition, a form of malnutrition where nutrients, specifically macronutrients, are provided in excess of dietary requirements, also negatively impacts immune system functions (see Overnutrition and Obesity below). Impaired immune responses induced by malnutrition can increase one’s susceptibility to infection and illness. Infection and illness can, in turn, exacerbate states of malnutrition, for example, by reducing nutrient intake through diminished appetite, impairing nutrient absorption, increasing nutrient losses, or altering the body’s metabolism such that nutrient requirements are increased (19). Thus, states of malnutrition and infection can aggravate each other and lead to a vicious cycle (14). Protein-energy malnutrition (PEM; also sometimes called protein-calorie malnutrition) is a common nutritional problem that principally affects young children and the elderly (20). Clinical conditions of severe PEM are termed marasmus, kwashiorkor, or a hybrid of these two syndromes. Marasmus is a wasting disorder that is characterized by depletion of fat stores and muscle wasting. It results from a deficiency in both protein and calories (i.e., all nutrients). Individuals afflicted with marasmus appear emaciated and are grossly underweight and do not present with edema (21). In contrast, a hallmark of kwashiorkor is the presence of edema. Kwashiorkor is primarily caused by a deficiency in dietary protein, while overall caloric intake may be normal (21, 22). Both forms are more common in developing nations, but certain types of PEM are also present in various subgroups in industrialized nations, such as the elderly and individuals who are hospitalized (17). In the developed world, PEM more commonly occurs secondary to a chronic disease that interferes with nutrient metabolism, such as inflammatory bowel disease, chronic renal failure, or cancer (22). Regardless of the specific cause, PEM significantly increases susceptibility to infection by adversely affecting aspects of both innate immunity and adaptive immunity (15). With respect to innate immunity, PEM has been associated with reduced production of certain cytokines and several complement proteins, as well as impaired phagocyte function (20, 23, 24). Such malnutrition disorders can also compromise the integrity of mucosal barriers, increasing vulnerability to infections of the respiratory, gastrointestinal, and urinary tracts (21). With respect to adaptive immunity, PEM primarily affects cell-mediated aspects instead of components of humoral immunity. In particular, PEM leads to atrophy of the thymus, the organ that produces T cells, which reduces the number of circulating T cells and decreases the effectiveness of the memory response to antigens (21, 24). PEM also compromises functions of other lymphoid tissues, including the spleen and lymph nodes (20). While humoral immunity is affected to a lesser extent, antibody affinity and response is generally decreased in PEM (24). It is important to note that PEM usually occurs in combination with deficiencies in essential micronutrients, especially vitamin A, vitamin B6, folate, vitamin E, zinc, iron, copper, and selenium (21). 
Experimental studies have shown that several types of dietary lipids (fatty acids) can modulate the immune response (25). Fatty acids that have this role include the long-chain polyunsaturated fatty acids (PUFAs) of the omega-3 and omega-6 classes. PUFAs are fatty acids with more than one double bond between carbons. In all omega-3 fatty acids, the first double bond is located between the third and fourth carbon atom counting from the methyl end of the fatty acid (n-3). Similarly, the first double bond in all omega-6 fatty acids is located between the sixth and seventh carbon atom from the methyl end of the fatty acid (n-6) (26). Humans lack the ability to place a double bond at the n-3 or n-6 positions of a fatty acid; therefore, fatty acids of both classes are considered essential nutrients and must be derived from the diet (26). More information is available in the article on Essential fatty acids. Alpha-linolenic acid (ALA) is a nutritionally essential n-3 fatty acid, and linoleic acid (LA) is a nutritionally essential n-6 fatty acid; dietary intake recommendations for essential fatty acids are for ALA and LA. Other fatty acids in the n-3 and n-6 classes can be endogenously synthesized from ALA or LA (see the figure in a separate article on essential fatty acids). For instance the long-chain n-6 PUFA, arachidonic acid, can be synthesized from LA, and the long-chain n-3 PUFAs, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), can be synthesized from ALA (26). However, synthesis of EPA and, especially, DHA may be insufficient under certain conditions, such as during pregnancy and lactation (27, 28). EPA and DHA, like other PUFAs, modulate cellular function, including immune and inflammatory responses (29). Long-chain PUFAs are incorporated into membrane phospholipids of immune cells, where they modulate cell signaling of immune and inflammatory responses, such as phagocytosis and T-cell signaling. They also modulate the production of eicosanoids and other lipid mediators (29, 30). Eicosanoids are 20-carbon PUFA derivatives that play key roles in inflammatory and immune responses. During an inflammatory response, long-chain PUFAs (e.g., arachidonic acid [AA] of the n-6 series and EPA of the n-3 series) in immune cell membranes can be metabolized by enzymes to form eicosanoids (e.g., prostaglandins, leukotrienes, and thromboxanes), which have varying effects on inflammation (29). Eicosanoids derived from AA can also regulate B- and T-cell functions. Resolvins are lipid mediators derived from EPA and DHA that appear to have anti-inflammatory properties (30). To a certain extent, the relative production of these lipid mediators can be altered by dietary and supplemental intake of lipids. In those who consume a typical Western diet, the amount of AA in immune cell membranes is much greater than the amount of EPA, which results in the formation of more eicosanoids derived from AA than EPA. However, increasing n-3 fatty acid intake dose-dependently increases the EPA content of immune cell membranes. The resulting effect would be increased production of eicosanoids derived from EPA and decreased production of eicosanoids derived from AA, leading to an overall anti-inflammatory effect (30, 31). While eicosanoids derived from EPA are less biologically active than AA-derived eicosanoids (32), supplementation with EPA and other n-3 PUFAs may nevertheless have utility in treating various inflammatory diseases. 
This is a currently active area of investigation; see the article on Essential fatty acids. While n-3 PUFA supplementation may benefit individuals with inflammatory or autoimmune diseases, high n-3 PUFA intakes could possibly impair host-defense mechanisms and increase vulnerability to infectious disease (for more information, see the article on Essential fatty acids) (25, 33). In addition to PUFAs, isomers of LA called conjugated linoleic acid (CLA) have been shown to modulate immune function, mainly in animal and in vitro studies (34). CLA is found naturally in meat and milk of ruminant animals, but it is also available as a dietary supplement that contains two isomers, cis-9,trans-11 CLA and trans-10,cis-12 CLA. One study in 28 men and women found that CLA supplementation (3 g/day of a 50:50 mixture of the two main CLA isomers) was associated with an increase in plasma levels of IgA and IgM (35), two classes of antibodies. CLA supplementation was also associated with a decrease in levels of two pro-inflammatory cytokines and an increase in levels of an anti-inflammatory cytokine (35). Similar effects on the immune response have been observed in some animal studies (36, 37); however, a few other human studies have not found beneficial effects of CLA on various measures of immune status and function (38-40). More research is needed to understand the effects of CLA on the human immune response. Further, lipids in general have a number of other roles in immunity besides being the precursors of eicosanoids and similar immune mediators. For instance, lipids are metabolized by immune cells to generate energy and are also important structural and functional components of cell membranes. Moreover, lipids can regulate gene expression through stimulation of membrane receptors or through modification of transcription factor activity. Further, lipids can covalently modify proteins, thereby affecting their function (30). Deficiencies in select micronutrients (vitamins and nutritionally-essential minerals) can adversely affect aspects of both innate and adaptive immunity, increasing vulnerability to infection and disease. Micronutrient inadequacies are quite common in the general U.S. population, but especially in the poor, the elderly, and those who are obese (see Overnutrition and Obesity below) (41, 42). According to data from the U.S. National Health and Nutrition Examination Survey (NHANES), 93% of the U.S. population do not meet the estimated average requirement (EAR) for vitamin E, 56% for magnesium, 44% for vitamin A, 31% for vitamin C, 14% for vitamin B6, and 12% for zinc (43). Moreover, vitamin D deficiency is a major problem in the U.S. and elsewhere; it has been estimated that 1 billion people in the world have either vitamin D deficiency or insufficiency (44). Because micronutrients play crucial roles in the development and expression of immune responses, selected micronutrient deficiencies can cause immunosuppression and thus increased susceptibility to infection and disease. The roles of several micronutrients in immune function are addressed below. Vitamin A and its metabolites play critical roles in both innate and adaptive immunity. In innate immunity, the skin and mucosal cells of the eye and respiratory, gastrointestinal, and genitourinary tracts function as a barrier against infections. Vitamin A helps to maintain the structural and functional integrity of these mucosal cells. 
Vitamin A is also important to the normal function of several types of immune cells important in the innate response, including natural killer (NK) cells, macrophages, and neutrophils. Moreover, vitamin A is needed for proper function of cells that mediate adaptive immunity, such as T and B cells; thus, vitamin A is necessary for the generation of antibody responses to specific antigens (45). Most of the immune effects of vitamin A are carried out by vitamin A derivatives, namely isomers of retinoic acid. Isomers of retinoic acid are steroid hormones that bind to retinoid receptors that belong to two different classes: retinoic acid receptors (RARs) and retinoid X receptors (RXRs). In the classical pathway, RAR must first heterodimerize with RXR and then bind to small sequences of DNA called retinoic acid response elements (RAREs) to initiate a cascade of molecular interactions that modulate the transcription of specific genes (46). More than 500 genes are directly or indirectly regulated by retinoic acid (47). Several of these genes control cellular proliferation and differentiation; thus, vitamin A has obvious importance in immunity. Vitamin A deficiency is a major public health problem worldwide, especially in developing nations, where availability of foods containing preformed vitamin A is limited (for information on sources of vitamin A, see the separate article on Vitamin A). Experimental studies in animal models, along with epidemiological studies, have shown that vitamin A deficiency leads to immunodeficiency and increases the risk of infectious diseases (45). In fact, deficiency in this micronutrient is a leading cause of morbidity and mortality among infants, children, and women in developing nations. Vitamin A-deficient individuals are vulnerable to certain infections, such as measles, malaria, and diarrheal diseases (45). Subclinical vitamin A deficiency might increase risk of infection as well (48). Infections can, in turn, lead to vitamin A deficiency in a number of different ways, for example, by reducing food intake, impairing vitamin absorption, increasing vitamin excretion, interfering with vitamin utilization, or increasing metabolic requirements of vitamin A (49). Many of the specific effects of vitamin A deficiency on the immune system have been elucidated using animal models. Vitamin A deficiency impairs components of innate immunity. As mentioned above, vitamin A is essential in maintaining the mucosal barriers of the innate immune system. Thus, vitamin A deficiency compromises the integrity of this first line of defense, thereby increasing susceptibility to some types of infection, such as eye, respiratory, gastrointestinal, and genitourinary infections (50-56). Vitamin A deficiency results in reductions in both the number and killing activity of NK cells, as well as the function of neutrophils and other cells that phagocytose pathogens like macrophages. Specific measures of functional activity affected appear to include chemotaxis, phagocytosis, and immune cell ability to generate oxidants that kill invading pathogens (45). In addition, cytokine signaling may be altered in vitamin A deficiency, which would affect inflammatory responses of innate immunity. Additionally, vitamin A deficiency impairs various aspects of adaptive immunity, including humoral and cell-mediated immunity. In particular, vitamin A deficiency negatively affects the growth and differentiation of B cells, which are dependent on retinol and its metabolites (57, 58). 
Vitamin A deficiency also affects B cell function; for example, animal experiments have shown that vitamin A deficiency impairs antibody responses (59-61). With respect to cell-mediated immunity, retinol is important in the activation of T cells (62), and vitamin A deficiency may affect cell-mediated immunity by decreasing the number or distribution of T cells, altering cytokine production, or by decreasing the expression of cell-surface receptors that mediate T-cell signaling (45). Vitamin A supplementation enhances immunity and has been shown to reduce the infection-related morbidity and mortality associated with vitamin A deficiency. A meta-analysis of 12 controlled trials found that vitamin A supplementation in children decreased the risk of all-cause mortality by 30%; this analysis also found that vitamin A supplementation in hospitalized children with measles was associated with a 61% reduced risk of mortality (63). Vitamin A supplementation has been shown to decrease the severity of diarrheal diseases in several studies (64) and has also been shown to decrease the severity, but not the incidence, of other infections, such as measles, malaria, and HIV (45). Moreover, vitamin A supplementation can improve or reverse many of the abovementioned, untoward effects on immune function, such as lowered antibody production and an exacerbated inflammatory response (65). However, vitamin A supplementation is not beneficial in those with lower respiratory infections, such as pneumonia, and supplementation may actually aggravate the condition (45, 66, 67). Because of potential adverse effects, vitamin A supplements should be reserved for undernourished populations and those with evidence of vitamin A deficiency (64). For information on vitamin A toxicity, see the separate article on Vitamin A. Like vitamin A, the active form of vitamin D, 1,25-dihydroxyvitamin D3, functions as a steroid hormone to regulate expression of target genes. Many of the biological effects of 1,25-dihydroxyvitamin D3 are mediated through a nuclear transcription factor known as the vitamin D receptor (VDR) (68). Upon entering the nucleus of a cell, 1,25-dihydroxyvitamin D3 associates with the VDR and promotes its association with the retinoid X receptor (RXR). In the presence of 1,25-dihydroxyvitamin D3, the VDR/RXR complex binds small sequences of DNA known as vitamin D response elements (VDREs) and initiates a cascade of molecular interactions that modulate the transcription of specific genes. More than 200 genes in tissues throughout the body are known to be regulated either directly or indirectly by 1,25-dihydroxyvitamin D3 (44). In addition to its effects on mineral homeostasis and bone metabolism, 1,25-dihydroxyvitamin D3 is now recognized to be a potent modulator of the immune system. The VDR is expressed in several types of immune cells, including monocytes, macrophages, dendritic cells, and activated T cells (69-72). Macrophages also produce the 25-hydroxyvitamin D3-1-hydroxylase enzyme, allowing for local conversion of vitamin D to its active form (73). Studies have demonstrated that 1,25-dihydroxyvitamin D3 modulates both innate and adaptive immune responses. Antimicrobial peptides (AMPs) and proteins are critical components of the innate immune system because they directly kill pathogens, especially bacteria, and thereby enhance immunity (74). AMPs also modulate immune functions through cell-signaling effects (75). 
The active form of vitamin D regulates an important antimicrobial protein called cathelicidin (76-78). Vitamin D has also been shown to stimulate other components of innate immunity, including immune cell proliferation and cytokine production (79). Through these roles, vitamin D helps protect against infections caused by pathogens. Vitamin D has mainly inhibitory effects on adaptive immunity. In particular, 1,25-dihydroxyvitamin D3 suppresses antibody production by B cells and also inhibits proliferation of T cells in vitro (80-82). Moreover, 1,25-dihydroxyvitamin D3 has been shown to modulate the functional phenotype of helper T cells as well as dendritic cells (75). T cells that express the cell-surface protein CD4 are divided into two subsets depending on the particular cytokines that they produce: T helper (Th)1 cells are primarily involved in activating macrophages and inflammatory responses and Th2 cells are primarily involved in stimulating antibody production by B cells (12). Some studies have shown that 1,25-dihydroxyvitamin D3 inhibits the development and function of Th1 cells (83, 84) but enhances the development and function of Th2 cells (85, 86) and regulatory T cells (87, 88). Because these latter cell types are important regulators in autoimmune disease and graft rejections, vitamin D is suggested to have utility in preventing and treating such conditions (89). Studies employing various animal models of autoimmune diseases and transplantation have reported beneficial effects of 1,25-dihydroxyvitamin D3 (reviewed in 84). Indeed, vitamin D deficiency has been implicated in the development of certain autoimmune diseases, such as insulin-dependent diabetes mellitus (IDDM; type 1 diabetes mellitus), multiple sclerosis (MS), and rheumatoid arthritis (RA). Autoimmune diseases occur when the body mounts an immune response against its own tissues instead of a foreign pathogen. The targets of the inappropriate immune response are the insulin-producing beta-cells of the pancreas in IDDM, the myelin-producing cells of the central nervous system in MS, and the collagen-producing cells of the joints in RA (90). Some epidemiological studies have found the prevalence of various autoimmune conditions increases as latitude increases (91). This suggests that lower exposure to ultraviolet-B radiation (the type of radiation needed to induce vitamin D synthesis in skin) and the associated decrease in endogenous vitamin D synthesis may play a role in the pathology of autoimmune diseases. Additionally, results of several case-control and prospective cohort studies have associated higher vitamin D intake or serum levels with decreased incidence, progression, or symptoms of IDDM (92), MS (93-96), and RA (97). For more information, see the separate article on Vitamin D. It is not yet known whether vitamin D supplementation will reduce the risk of certain autoimmune disorders. Interestingly, a recent systematic review and meta-analysis of observational studies found that vitamin D supplementation during early childhood was associated with a 29% lower risk of developing IDDM (98). More research is needed to determine the role of vitamin D in various autoimmune conditions. Vitamin C is a highly effective antioxidant that protects the body’s cells against reactive oxygen species (ROS) that are generated by immune cells to kill pathogens. 
Primarily through this role, the vitamin affects several components of innate and adaptive immunity; for example, vitamin C has been shown to stimulate both the production (99-103) and function (104, 105) of leukocytes (white blood cells), especially neutrophils, lymphocytes, and phagocytes. Specific measures of functions stimulated by vitamin C include cellular motility (104), chemotaxis (104, 105), and phagocytosis (105). Neutrophils, which attack foreign bacteria and viruses, seem to be the primary cell type stimulated by vitamin C, but lymphocytes and other phagocytes are also affected (106). Additionally, several studies have shown that supplemental vitamin C increases serum levels of antibodies (107, 108) and C1q complement proteins (109-111) in guinea pigs, which—like humans—cannot synthesize vitamin C and hence depend on dietary vitamin C. However, some studies have reported no beneficial changes in leukocyte production or function with vitamin C treatment (112-115). Vitamin C may also protect the integrity of immune cells. Neutrophils, mononuclear phagocytes, and lymphocytes accumulate vitamin C to high concentrations, which can protect these cell types from oxidative damage (103, 116, 117). In response to invading microorganisms, phagocytic leukocytes release non-specific toxins, such as superoxide radicals, hypochlorous acid (“bleach”), and peroxynitrite; these ROS kill pathogens and, in the process, can damage the leukocytes themselves (118). Vitamin C, through its antioxidant functions, has been shown to protect leukocytes from such effects of autooxidation (119). Phagocytic leukocytes also produce and release cytokines, including interferons, which have antiviral activity (120). Vitamin C has been shown to increase interferon levels in vitro (121). Further, vitamin C regenerates the antioxidant vitamin E from its oxidized form (122). It is widely thought by the general public that vitamin C boosts the function of the immune system, and accordingly, may protect against viral infections and perhaps other diseases. While some studies suggest the biological plausibility of vitamin C as an immune enhancer, human studies published to date are conflicting. Controlled clinical trials of appropriate statistical power would be necessary to determine if supplemental vitamin C boosts the immune system. For a review of vitamin C and the common cold, see the separate article on Vitamin C. Vitamin E is a lipid-soluble antioxidant that protects the integrity of cell membranes from damage caused by free radicals (123). In particular, the alpha-tocopherol form of vitamin E protects against peroxidation of polyunsaturated fatty acids, which can potentially cause cellular damage and subsequently lead to improper immune responses (124). Several studies in animal models as well as humans indicate that vitamin E deficiency impairs both humoral and cell-mediated aspects of adaptive immunity, including B and T cell function (reviewed in 124). Moreover, vitamin E supplementation in excess of current intake recommendations has been shown to enhance immunity and decrease susceptibility to certain infections, especially in elderly individuals. Aging is associated with immune senescence (125). For example, T-cell function declines with increasing age, evidenced by decreased T-cell proliferation and decreased T-cell production of the cytokine, interleukin-2 (126). Studies in mice have found that vitamin E ameliorates these two age-related, immune effects (127, 128). 
Similar effects have been observed in some human studies (129). A few clinical trials of alpha-tocopherol supplementation in elderly subjects have demonstrated improvements in immunity. For example, elderly adults given 200 mg/day of synthetic alpha-tocopherol (equivalent to 100 mg of RRR-alpha-tocopherol or 150 IU of RRR-alpha-tocopherol; RRR-alpha-tocopherol is also referred to as "natural" or d-alpha-tocopherol) for several months displayed increased formation of antibodies in response to hepatitis B vaccine and tetanus vaccine (130). However, it is not known if such enhancements in the immune response of older adults actually translate to increased resistance to infections like the flu (influenza virus) (131). A randomized, placebo-controlled trial in elderly nursing home residents reported that daily supplementation with 200 IU of synthetic alpha-tocopherol (equivalent to 90 mg of RRR-alpha-tocopherol) for one year significantly lowered the risk of contracting upper respiratory tract infections, especially the common cold, but had no effect on lower respiratory tract (lung) infections (132). Yet, other trials have not reported an overall beneficial effect of vitamin E supplements on respiratory tract infections in older adults (133-136). More research is needed to determine whether supplemental vitamin E may protect the elderly against the common cold or other infections. Vitamin B6 is required in the endogenous synthesis and metabolism of amino acids—the building blocks of proteins like cytokines and antibodies. Animal and human studies have demonstrated that vitamin B6 deficiency impairs aspects of adaptive immunity, including both humoral and cell-mediated immunity. Specifically, deficiency in this micronutrient has been shown to affect lymphocyte proliferation, differentiation, and maturation as well as cytokine and antibody production (137-139). Correcting the vitamin deficiency restores the affected immune functions (139). The B vitamin, folate, is required as a coenzyme to mediate the transfer of one-carbon units. Folate coenzymes act as acceptors and donors of one-carbon units in a variety of reactions critical to the endogenous synthesis and metabolism of nucleic acids (DNA and RNA) and amino acids (140, 141). Thus, folate has obvious importance in immunity. Folate deficiency results in impaired immune responses, primarily affecting cell-mediated immunity. However, antibody responses of humoral immunity may also be impaired in folate deficiency (142). In humans, vitamin B12 functions as a coenzyme for two enzymatic reactions. One of the vitamin B12-dependent enzymes is involved in the synthesis of the amino acid, methionine, from homocysteine. Methionine in turn is required for the synthesis of S-adenosylmethionine, a methyl group donor used in many biological methylation reactions, including the methylation of a number of sites within DNA and RNA. The other vitamin B12-dependent enzyme, L-methylmalonyl-CoA mutase, converts L-methylmalonyl-CoA to succinyl-CoA, a compound that is important in the production of energy from fats and proteins as well as in the synthesis of hemoglobin, the oxygen carrying pigment in red blood cells (143). Patients with diagnosed vitamin B12 deficiency have been reported to have suppressed natural killer cell activity and decreased numbers of circulating lymphocytes (144, 145). One study found that these immunomodulatory effects were corrected by treating the vitamin deficiency (144). 
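Before moving on to the minerals, a quick arithmetic aside on the alpha-tocopherol dose equivalences quoted above. The sketch below reproduces those figures using the conventional conversion factors, which are my own assumptions rather than values stated in the text: roughly 0.5 mg of RRR-alpha-tocopherol activity per mg of synthetic all-rac-alpha-tocopherol, about 0.67 mg of RRR-alpha-tocopherol per IU of the natural form, and about 0.45 mg of RRR-equivalent per IU of the synthetic form. Treat the results as approximate.

```python
# Rough sanity check of the alpha-tocopherol equivalences cited in the trials above.
# The conversion factors below are conventional assumptions, not values from the text.

def synthetic_mg_to_rrr_mg(mg):
    """mg of synthetic all-rac-alpha-tocopherol -> approx. mg RRR-alpha-tocopherol activity."""
    return 0.5 * mg

def rrr_mg_to_iu(mg):
    """mg of RRR-alpha-tocopherol -> approx. IU (natural-form units)."""
    return mg / 0.67

def synthetic_iu_to_rrr_mg(iu):
    """IU of synthetic alpha-tocopherol -> approx. mg RRR-alpha-tocopherol equivalent."""
    return 0.45 * iu

print(synthetic_mg_to_rrr_mg(200))   # 100.0 mg RRR  (the "200 mg synthetic ~ 100 mg RRR" figure)
print(round(rrr_mg_to_iu(100)))      # ~149 IU       (the "~150 IU" figure)
print(synthetic_iu_to_rrr_mg(200))   # 90.0 mg RRR   (the "200 IU synthetic ~ 90 mg RRR" figure)
```

Running this reproduces the equivalences cited in the two trials: 200 mg of the synthetic form corresponds to roughly 100 mg RRR (about 150 IU), and 200 IU of the synthetic form to roughly 90 mg RRR.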
Zinc is critical for normal development and function of cells that mediate both innate and adaptive immunity (146). The cellular functions of zinc can be divided into three categories: 1) catalytic, 2) structural, and 3) regulatory (see Function in the separate article on zinc) (147). Because zinc is not stored in the body, regular dietary intake of the mineral is important in maintaining the integrity of the immune system. Thus, inadequate intake can lead to zinc deficiency and compromised immune responses (148). With respect to innate immunity, zinc deficiency impairs the complement system, cytotoxicity of natural killer cells, phagocytic activity of neutrophils and macrophages, and immune cell ability to generate oxidants that kill invading pathogens (149-151). Zinc deficiency also compromises adaptive immune function, including lymphocyte number and function (152). Even marginal zinc deficiency, which is more common than severe zinc deficiency, can suppress aspects of immunity (148). Zinc-deficient individuals are known to experience increased susceptibility to a variety of infectious agents (see the separate article on Zinc). Adequate selenium intake is essential for the host to mount a proper immune response because it is required for the function of several selenium-dependent enzymes known as selenoproteins (see the separate article on Selenium). For example, the glutathione peroxidases (GPx) are selenoproteins that function as important redox regulators and cellular antioxidants, which reduce potentially damaging reactive oxygen species, such as hydrogen peroxide and lipid hydroperoxides, to harmless products like water and alcohols by coupling their reduction with the oxidation of glutathione (see the diagram in the article on selenium) (153). These roles have implications for immune function and cancer prevention. Selenium deficiency impairs aspects of innate as well as adaptive immunity (154, 155), adversely affecting both humoral immunity (i.e., antibody production) and cell-mediated immunity (156). Selenium deficiency appears to enhance the virulence or progression of some viral infections (see separate article on Selenium). Moreover, selenium supplementation in individuals who are not overtly selenium deficient appears to stimulate the immune response. In two small studies, healthy (157, 158) and immunosuppressed individuals (159) supplemented with 200 micrograms (mcg)/day of selenium as sodium selenite for eight weeks showed an enhanced immune cell response to foreign antigens compared with those taking a placebo. A considerable amount of basic research also indicates that selenium plays a role in regulating the expression of cytokines that orchestrate the immune response (160). Iron is an essential component of hundreds of proteins and enzymes that are involved in oxygen transport and storage, electron transport and energy generation, antioxidant and beneficial pro-oxidant functions, and DNA synthesis (see Function in the article on iron) (161-163). Iron is required by the host in order to mount effective immune responses to invading pathogens, and iron deficiency impairs immune responses (164). Sufficient iron is critical to several immune functions, including the differentiation and proliferation of T lymphocytes and generation of reactive oxygen species (ROS) that kill pathogens. However, iron is also required by most infectious agents for replication and survival. 
During an acute inflammatory response, serum iron levels decrease while levels of ferritin (the iron storage protein) increase, suggesting that sequestering iron from pathogens is an important host response to infection (162, 165). Moreover, conditions of iron overload (e.g., hereditary hemochromatosis) can have detrimental consequences to immune function, such as impairments in phagocytic function, cytokine production, complement system activation, and T and B lymphocyte function (164). Further, data from the first National Health and Nutrition Examination Survey (NHANES), a U.S. national survey, indicate that elevated iron levels may be a risk factor for cancer and death, especially in men (167). For men and women combined, there were significant trends for increasing risk of cancer and mortality with increasing transferrin saturation, with risks being higher in those with transferrin saturation >40% compared to ≤30% (167). Despite the critical functions of iron in the immune system, the nature of the relationship between iron deficiency and susceptibility to infection, especially with respect to malaria, remains controversial. High-dose iron supplementation of children residing in the tropics has been associated with increased risk of clinical malaria and other infections, such as pneumonia. Studies in cell cultures and animals suggest that the survival of infectious agents that spend part of their life cycle within host cells, such as plasmodia (malaria) and mycobacteria (tuberculosis), may be enhanced by iron therapy. Controlled clinical studies are needed to determine the appropriate use of iron supplementation in regions where malaria is common, as well as in the presence of infectious diseases, such as HIV, tuberculosis, and typhoid (168). Copper is a critical functional component of a number of essential enzymes known as cuproenzymes (see the separate article on Copper). The mineral plays an important role in the development and maintenance of immune system function, but the exact mechanism of its action is not yet known. Copper deficiency results in neutropenia, an abnormally low number of neutrophils (169), which may increase one’s susceptibility to infection. Adverse effects of insufficient copper on immune function appear most pronounced in infants. Infants with Menkes disease, a genetic disorder that results in severe copper deficiency, suffer from frequent and severe infections (170, 171). In a study of 11 malnourished infants with evidence of copper deficiency, the ability of certain white blood cells to engulf pathogens increased significantly after one month of copper supplementation (172). Immune effects have also been observed in adults with low intake of dietary copper. In one study, 11 men on a low-copper diet (0.66 mg copper/day for 24 days and 0.38 mg/day for another 40 days) showed a reduced proliferation response when white blood cells, called mononuclear cells, were isolated from blood and presented with an immune challenge in cell culture (173). While it is known that severe copper deficiency has adverse effects on immune function, the effects of marginal copper deficiency in humans are not yet clear (174). However, long-term high intakes of copper can result in adverse effects on immune function (175). Probiotics are usually defined as live microorganisms that, when administered in sufficient amounts, benefit the overall health of the host (176). 
Common examples belong to the Lactobacilli and Bifidobacteria species; these probiotics are consumed in yogurt and other fermented foods. Ingested probiotics that survive digestion can transiently inhabit the lower part of the gastrointestinal tract (177). Here, they can modulate immune functions by interacting with various receptors on intestinal epithelial cells and other gut-associated immune cells, including dendritic cells and M-cells (178). Immune modulation requires regular consumption because probiotics have not been shown to permanently alter intestinal microflora (179). Probiotics have been shown to benefit both innate and adaptive immune responses of the host (180). For example, probiotics can strengthen the gut epithelial barrier—an important innate defense—through a number of ways, such as by inhibiting apoptosis and promoting the survival of intestinal epithelial cells (181). Probiotics can also stimulate the production of antibodies and T lymphocytes, which are critical in the adaptive immune response (180). Several immune effects of probiotics are mediated through altering cell-signaling cascades that modify cytokine and other protein expression (181). However, probiotics exert diverse effects on the immune system that are dependent not only on the specific strain but also on the dose, route, and frequency of delivery (182). Probiotics may have utility in the prevention of inflammatory bowel disorders, diarrheal diseases, allergic diseases, gastrointestinal and other types of infections, and certain cancers. However, more clinical research is needed in order to elucidate the health effects of probiotics (180). Overnutrition is a form of malnutrition where nutrients are supplied in excess of the body’s needs. Overnutrition can create an imbalance between energy intake and energy expenditure and lead to excessive energy storage, resulting in obesity (15). Obesity is a major public health problem worldwide, especially in industrialized nations. Obese individuals are at increased risk of morbidity from a number of chronic diseases, including hypertension and cardiovascular diseases, type 2 diabetes, liver and gallbladder disease, osteoarthritis, sleep apnea, and certain cancers (183). Obesity has also been linked to increased risk of mortality (184). Overnutrition and obesity have been shown to alter immunocompetence. Obesity is associated with macrophage infiltration of adipose tissue; macrophage accumulation in adipose tissue is directly proportional to the degree of obesity (185). Studies in mouse models of genetic and high-fat diet-induced obesity have documented a marked up-regulation in expression of inflammation and macrophage-specific genes in white adipose tissue (186). In fact, obesity is characterized by chronic, low-grade inflammation, and inflammation is thought to be an important contributor in the pathogenesis of insulin resistance—a condition that is strongly linked to obesity. Adipose tissue secretes fatty acids and other molecules, including various hormones and cytokines (called adipocytokines or adipokines), that trigger inflammatory processes (185). Leptin is one such hormone and adipokine that plays a key role in the regulation of food intake, body weight, and energy homeostasis (187, 188). Leptin is secreted from adipose tissue and circulates in direct proportion to the amount of fat stores. Normally, higher levels of circulating leptin suppress appetite and thereby lead to a reduction in food intake (189). 
Leptin has a number of other functions as well, such as modulation of inflammatory responses and aspects of humoral and cell-mediated responses of the adaptive immune system (187, 190). Specific effects of leptin, elucidated in animal and in vitro studies, include the promotion of phagocytic function of immune cells; stimulation of pro-inflammatory cytokine production; and regulation of neutrophil, natural killer (NK) cell, and dendritic cell functions (reviewed in 190). Leptin also affects aspects of cell-mediated immunity; for example, leptin promotes T helper (Th)1 immune responses and thus may have implications in the development of autoimmune disease (191). Th1 cells are primarily involved in activating macrophages and inflammatory responses (12). Obese individuals have been reported to have higher plasma leptin concentrations compared to lean individuals. However, in the obese, the elevated leptin signal is not associated with the normal responses of reduced food intake and increased energy expenditure, suggesting obesity is associated with a state of leptin resistance. Leptin resistance has been documented in mouse models of obesity, but more research is needed to better understand leptin resistance in human obesity (189). Obese individuals may exhibit increased susceptibility to various infections. Some epidemiological studies have shown that obese patients have a higher incidence of postoperative and other nosocomial infections compared with patients of normal weight (192, 193; reviewed in 194). Obesity has been linked to poor wound healing and increased occurrence of skin infections (195-197). A higher body mass index (BMI) may also be associated with increased susceptibility to respiratory, gastrointestinal, liver, and biliary infections (reviewed in 194). In obesity, the increased vulnerability, severity, or complications of certain infections may be related to a number of factors, such as select micronutrient deficiencies. For example, one study in obese children and adolescents associated impairments in cell-mediated immunity with deficiencies in zinc and iron (198). Deficiencies or inadequacies of other micronutrients, including the B vitamins and vitamins A, C, D, and E, have also been associated with obesity (41). Overall, immune responses appear to be compromised in obesity, but more research is needed to clarify the relationship between obesity and infection-related morbidity and mortality. Written in August 2010 by: Victoria J. Drake, Ph.D. Linus Pauling Institute Oregon State University Reviewed in August 2010 by: Adrian F. Gombart, Ph.D. Department of Biochemistry and Biophysics Principal Investigator, Linus Pauling Institute Oregon State University Reviewed in August 2010 by: Malcolm B. Lowry, Ph.D. Department of Microbiology Oregon State University This article was underwritten, in part, by a grant from Bayer Consumer Care AG, Basel, Switzerland. Last updated 9/2/10 Copyright 2010-2013 Linus Pauling Institute The Linus Pauling Institute Micronutrient Information Center provides scientific information on the health aspects of dietary factors and supplements, foods, and beverages for the general public. The information is made available with the understanding that the author and publisher are not providing medical, psychological, or nutritional counseling services on this site. The information should not be used in place of a consultation with a competent health care or nutrition professional. 
The information on dietary factors and supplements, foods, and beverages contained on this Web site does not cover all possible uses, actions, precautions, side effects, and interactions. It is not intended as nutritional or medical advice for individual problems. Liability for individual actions or omissions based upon the contents of this site is expressly disclaimed.
http://lpi.oregonstate.edu/infocenter/immunity.html
13
12
Johann Carl Friedrich Gauss Johann Carl Friedrich Gauss (April 30, 1777 - February 23, 1855) was a German mathematician, astronomer, and physicist. Although Gauss made many contributions to science and to the understanding of the nature of electricity and magnetism, his true passion was mathematics. He referred to math as the “queen of sciences” and his influence on the field of mathematics was extraordinary. Gauss was, for example, the first mathematician to prove the fundamental theorem of algebra, and he proved it four different ways over the course of his lifetime. Gauss is widely celebrated as one of the greatest mathematicians in history. Gauss was born in Brunswick, Germany into a working class family. His parents had little or no formal education, but their son went to school at age seven and immediately distinguished himself as a math prodigy who could compute complex mathematical solutions in his head. He learned German and Latin and received a scholarship from the Duke of Brunswick to attend an academy where he studied astronomy, math, and geometry. On his own as a teenager he began to discover advanced mathematical principles, and in 1795 – at the age of 18 – Gauss became the first person to prove the Law of Quadratic Reciprocity, a result in number theory that allows us to determine whether quadratic congruences can be solved. The same year he entered Göttingen University. While at the university, he made one of his most important discoveries. Using a ruler and compass, he constructed a regular 17-sided polygon or heptadecagon. While investigating the underlying theory behind this construction, Gauss revealed an important connection between algebra and geometrical shapes that successfully finalized work first begun by classical Greek mathematicians. Gauss thus changed the world of modern mathematics, while also adding to research begun by the 17th century French philosopher and mathematician René Descartes. After three years at the university, Gauss left without earning a diploma, and returned to Brunswick. Gauss completed a doctorate degree by submitting a thesis about algebra through the University of Helmstedt. In 1801, Gauss wrote a paper that attempted to predict the orbital path of the dwarf planet or asteroid Ceres, which was newly discovered at the time. His conclusions were radically different from those submitted by other experts in the field of astronomy, but turned out to be the most accurate. To calculate the trajectory of Ceres, Gauss used the method of “least squares” which he had discovered but had not yet revealed to others. His least squares method was officially published in 1809, was widely embraced, and is used today by all branches of science to control and minimize the effect of measurement errors. In 1805, Gauss married Johanna Osthoff, and in 1807 they moved to Göttingen from Brunswick, where he became the director of the Göttingen Observatory. Gauss was very happy at that time in his life. They had three children, but soon tragedy struck and left him grief stricken. In 1808, Gauss’ father died; in 1809, Gauss’ new wife died; and Johanna’s death was followed immediately by the death of Gauss’ second son. Gauss suffered from depression following this chain of events but later remarried and had three children with Minna Waldeck. In 1818, Gauss began work that led to research in the field of differential geometry and the writing of significant theories related to the nature of curves and curvature. 
He published over 70 papers over the next 12 years, including one that won the Copenhagen University Prize. In 1831, Gauss began to collaborate with Wilhelm Weber, a physicist. Gauss and Weber did extensive research into the nature of electricity and magnetism, creating a simple telegraph machine and discovering Kirchhoff's laws, a set of rules that apply to electrical circuits. The two men also developed the magnetometer and the electrodynamometer, instruments for measuring magnetic fields and electric current. They also created innovative systems of units for electricity and magnetism. The term “gauss” came to describe a unit of magnetic flux density or magnetic induction. Also in 1831, Gauss's second wife died after a long illness. He continued to live with his daughter, who took care of Gauss for the rest of his life. Johann Carl Friedrich Gauss died February 23, 1855, in Göttingen, Germany. Copyright 2010 - National Imports LLC
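The method of least squares mentioned in the biography above is easy to illustrate. The following minimal sketch fits a straight line to noisy measurements by minimizing the sum of squared residuals; the data points are invented for illustration and have nothing to do with Gauss's Ceres calculations.

```python
# A minimal sketch of the least-squares idea (illustrative only; made-up data).

def fit_line(points):
    """Fit y = a + b*x by minimizing the sum of squared residuals."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Noisy measurements scattered around y = 1 + 2x:
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
print(fit_line(data))  # roughly (1.0, 2.0)
```

The closed-form slope and intercept used here are the standard normal-equation solution for a one-variable fit; measurement errors in the data pull the estimates only slightly away from the true values, which is exactly the error-controlling property the article attributes to the method.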
http://www.rare-earth-magnets.com/t-johann-carl-friedrich-gauss.aspx
13
11
Syllogistic Fallacy. Type: Formal Fallacy. The categorical syllogism is part of the oldest system of formal logic, invented by the first formal logician, Aristotle. There are several techniques devised to test syllogistic forms for validity, including sets of rules, diagrams, and even mnemonic poems. More importantly for us, there are sets of fallacies based upon the rules such that any syllogism which does not commit any of the fallacies will have a validating form. The subfallacies of Syllogistic Fallacy are fallacies of this rule-breaking type. If a categorical syllogism commits none of the subfallacies below, then it has a validating form. To understand these subfallacies, it is necessary to understand some basic terminology about categorical syllogisms: A Short Introduction to Categorical Syllogisms: The four types of categorical proposition are: A ("All S are P"), E ("No S are P"), I ("Some S are P"), and O ("Some S are not P"). The variables, S and P, are place-holders for terms which pick out a class (or category) of thing; hence the name "categorical" proposition. In a categorical syllogism there are three terms, two in each premiss, and two occurrences of each term in the entire argument, for a total of six occurrences. The S and P which occur in its conclusion (the Subject and Predicate terms) are also called the "minor" and "major" terms, respectively. The major term occurs once in one of the premisses, which is therefore called the "major" premiss. The minor term also occurs once in the other premiss, which is for this reason called the "minor" premiss. The third term occurs once in each premiss, but not in the conclusion, and is called the "middle" term. The notion of distribution plays a role in some of the syllogistic fallacies: the terms in a categorical proposition are said to be "distributed" or "undistributed" in that proposition, depending upon what type of proposition it is, and whether the term is the subject or predicate term. Specifically, the subject term is distributed in the A and E type propositions, and the predicate term is distributed in the E and O type propositions. The other terms are undistributed. In summary: the A form distributes only its subject term, the E form distributes both terms, the I form distributes neither term, and the O form distributes only its predicate term. Finally, the A and I type propositions are called "affirmative" propositions, while the E and O type are "negative", for reasons which should be obvious. Now, you should be equipped to understand the following types of syllogistic fallacy. Irving Copi & Carl Cohen, Introduction to Logic (Tenth Edition) (Prentice Hall, 1998), Chapter 8.
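To make the rule-based test described above concrete, here is a minimal sketch of my own (not code from the source) that checks a categorical syllogism against the standard rules whose violations correspond to the subfallacies: the middle term must be distributed at least once, a term distributed in the conclusion must be distributed in its premiss, and the usual restrictions on negative premisses.

```python
# Illustrative checker for categorical syllogisms (hypothetical helper, not from the source).
# Propositions are (form, subject, predicate) with form in {"A", "E", "I", "O"},
# and the syllogism is assumed to be well-formed (exactly three terms).

DISTRIBUTES_SUBJECT = {"A", "E"}      # subject term distributed in A and E propositions
DISTRIBUTES_PREDICATE = {"E", "O"}    # predicate term distributed in E and O propositions
NEGATIVE = {"E", "O"}

def distributed(term, prop):
    form, subj, pred = prop
    return (term == subj and form in DISTRIBUTES_SUBJECT) or \
           (term == pred and form in DISTRIBUTES_PREDICATE)

def check_syllogism(major, minor, conclusion):
    """Return a list of rule violations (an empty list means a validating form)."""
    _, s, p = conclusion                       # minor (S) and major (P) terms
    terms = {major[1], major[2], minor[1], minor[2]}
    m = (terms - {s, p}).pop()                 # the middle term
    faults = []
    if not (distributed(m, major) or distributed(m, minor)):
        faults.append("undistributed middle")
    if distributed(p, conclusion) and not distributed(p, major):
        faults.append("illicit major")
    if distributed(s, conclusion) and not distributed(s, minor):
        faults.append("illicit minor")
    negatives = sum(prop[0] in NEGATIVE for prop in (major, minor))
    if negatives == 2:
        faults.append("exclusive premisses")
    if negatives == 1 and conclusion[0] not in NEGATIVE:
        faults.append("negative premiss, affirmative conclusion")
    if negatives == 0 and conclusion[0] in NEGATIVE:
        faults.append("affirmative premisses, negative conclusion")
    return faults

# Barbara (AAA-1): All M are P; All S are M; therefore All S are P -- no faults.
print(check_syllogism(("A", "M", "P"), ("A", "S", "M"), ("A", "S", "P")))  # []
# All P are M; All S are M; therefore All S are P -- undistributed middle.
print(check_syllogism(("A", "P", "M"), ("A", "S", "M"), ("A", "S", "P")))
```

The first example, the traditionally valid form Barbara, returns no faults; the second commits the fallacy of the undistributed middle, since M is the predicate of two A propositions and so is never distributed.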
http://www.fallacyfiles.org/syllfall.html
13
11
Carbon dioxide, ammonia, silver and more were found in a lunar crater. But where did they come from? Analysis of a plume rising from the impact of a spent rocket motor into a lunar crater showed a rich supply of water and a complex mix of other compounds. How the compounds, which include hydrocarbons and light metals, got to the moon is a mystery. About 5.6 percent of the mass of the crater can be attributed to water ice alone, the scientists estimate. A NASA prospecting mission to sample the frozen contents of a lunar crater found not only a rich supply of water, but a tapestry of other minerals, origins unknown. The compounds, which include ammonia, carbon dioxide, carbon monoxide, sodium and, surprisingly, silver, could have come from the Earth, from the moon's interior, or from comets and asteroids. Whatever the source, a rich milieu ended up in Cabeus, a small crater in permanent shadow on the moon's south pole. "It's not like anything we were anticipating. The water by itself was significant. All the other stuff was really startling," Anthony Colaprete, lead scientist for NASA's Lunar Crater Observation and Sensing Satellite (LCROSS) mission, told Discovery News. LCROSS was devised to follow up on previous detections of hydrogen in the lunar soil made by orbiting spacecraft, to determine if it's bound with oxygen as water, if it's hydrated minerals, or if it's solar protons just stuck in the dirt. To draw their sample, scientists chose a crater in an area with strong signs of hydrogen and smashed the empty rocket motor that delivered NASA's Lunar Reconnaissance Orbiter (LRO) to the moon last year into it. The impact carved a hole 70 to 100 feet in diameter and tossed up a fountain of lunar material from as deep as six feet. A plume of debris, dust and vapor rose a half-mile off the ground, high enough to clear the crater's walls and bathe in sunlight. Strategically positioned for optimal viewing was a shepherding spacecraft, outfitted with instruments to pick apart the plume's chemistry for nearly four minutes, until it too crashed into the crater. LRO and ground-based telescopes also analyzed the plume. The mission was sponsored by NASA's Exploration division, which is interested in developing missions that make use of indigenous resources. If LCROSS had turned up 1 percent water, NASA figures it might be economically viable to extract water from a crater, rather than try to coax it from the soil, where water exists in minuscule amounts. In Cabeus, approximately 5.6 percent of the mass inside the crater could be water ice alone, Colaprete and colleagues report in a series of papers in this week's Science. "We've always been told the moon is bone dry and that was the legacy of Apollo and that's true -- in all the samples we picked up," Brown University planetary geologist Peter Schultz told Discovery News. "It's now a new moon to me. We know there are places we can explore that can tell us brand new things," Schultz said.
http://news.discovery.com/space/moon-crater-water-compounds.htm
13
20
On board each was a ‘lander’, which descended to the surface and sent back spectacular pictures of the Red Planet. Now, more than ever, space probes are providing us with thrilling glimpses of worlds beyond our own. Human explorers need food, water, air, sanitation and shelter from the extremely harsh and airless environment of space. Unmanned spacecraft only need a source of electricity, which can be generated by radioactive heating or solar cells. There are many different types of unmanned craft, including communications and weather satellites, satellites for observing Earth and space telescopes. Most exciting are space probes, which make long journeys to other planets and moons. No humans have been more than a few hundred kilometres from Earth since the last Apollo moon mission in 1972. But unmanned probes have travelled huge distances in our solar system. They have visited the sun, moon and every planet except Pluto. They have visited other moons, plus comets and asteroids. The most basic mission is a fly-by. As it passes close to another world, a fly-by probe takes photographs and makes other measurements such as the magnetic field strength and surface temperature. This data is beamed back to Earth via radio signals. Alternatively, a probe may be put into orbit around a planet, where it can do the same over months or years. Then there are missions that send probes to make a landing millions of kilometres from Earth. Sometimes, one mission has both an orbiter and a lander. The orbiter can make useful measurements from high altitude while acting as a relay station for signals to and from the lander on the surface. NASA’s Viking programme, launched in 1975, was just such a mission. The landers on Viking I and Viking II had sophisticated robot arms designed to collect Martian soil samples and deposit them in self-contained mini-laboratories. One aim was to look for evidence of life, but none was found. The landers also had a ‘meteorology boom’ – a small weather station that measured wind speed, and atmospheric temperature and pressure. They each had several cameras too. The Viking orbiters also carried scientific apparatus and cameras and surveyed most of the planet’s surface. Among the many thousands of photographs they took, one particularly caught the public’s imagination. Nicknamed ‘the Face on Mars’, it is a geological feature that in certain light resembles a human face. Some people suggested it was evidence of an ancient Martian civilisation. Unmanned probes – from NASA and, increasingly, other space agencies – continue to make incredible strides. NASA’s Voyager 2 made its closest approach to Uranus in 1986 and Neptune in 1989. Its goal is to leave the solar system – and it is expected to transmit data into the 2030s. The European Space Agency’s Huygens probe surveyed Saturn before landing on its moon Titan in 2005. It sent back data for 90 minutes. The ESA’s Venus Express is currently orbiting and photographing Venus. Mars is being orbited by three spacecraft and has NASA’s two Mars Exploration Rovers on its surface, though one stopped working in 2010. NASA also landed a car-sized remote-controlled probe named Curiosity in August 2012, intending to study the planet's climate and whether it can support life. The eventual aim is to land humans on Mars. A NASA mission named Messenger began orbiting Mercury in 2011 and NASA’s Juno spacecraft, launched the same year, will orbit Jupiter by 2016. 
Pluto, the dwarf planet, is the destination for NASA’s New Horizons probe, which launched in January 2006 and should arrive in 2015.
http://www.thesun.co.uk/sol/homepage/hold_ye_front_page/science/2684585/em1976em-Beyond-the-moonto-Mars.html
13
15
What are they? For us to see them from such great distances, quasars must be producing enormous energy -- they are about 1000 times brighter than an average galaxy! And, to make them even more amazing, the energy originates in a region smaller than a single star! Physicists had not thought about mechanisms to produce such energy from such a small volume until they were confronted with quasars. One of their choices for the energy source of quasars was a black hole. As matter falls into a black hole, it is squeezed in such a way that the friction between in-falling particles makes the matter hot; light is emitted from this heated material, and the hotter the matter gets, the higher the frequency of light emitted. As analogy, consider what happens at the entrance to a cinema: If movie-goers do not queue properly, a crowd of people forms as all try to pass through the narrow doors. The heat in the crowd rises first due to the proximity of one person to another and second because of the friction between the bodies. On the one hand, it may seem difficult to think that mere friction between bits of matter could account for the energy of a quasar, but, on the other hand, gravity is unimaginably strong near a black hole. In fact, the black hole solution to the quasar energy problem is the simplest one. Some researchers were somewhat skeptical about black holes altogether (although there is very much indirect evidence for their existence, no black hole has been really seen in a conclusive manner) and tried to find an alternative explanation for quasars. In one such explanation, it was proposed that a large number of supernovae occurring simultaneously over a very long time (as long as a quasar shines) can produce quasars' observed properties. At the present time, it seems that perhaps a combination of both explanations may be acceptable. A black hole in the center could be responsible for producing the frictional energy as well as for triggering a large number of supernovae. Let us now have a closer look at the surroundings of that black hole in the center. How does matter actually rush towards the black hole? The key word to answer this is accretion. It is nothing else than the accumulation of mass onto an object from its surroundings. Like a rolling snowball. But there is something special about astrophysical accretion, something that makes it different from rolling a bigger snowball. Accretion in outer space tends to take place in a disk-like structure that surrounds the "accreting" object. Does this not remind you of the Solar System? Indeed, the most accepted theory on how the Solar System formed includes the process of accretion of matter onto the dense pocket of matter that eventually formed the Sun. Now you may think that in the case of quasars and their central black holes the same physical process may take place, but scaled up a lot. Well, this is not exactly true: Black holes are very small objects, and, as I said, the energy of a quasar comes from a region which may be smaller than a single star. What is the difference then? It is the density, and, therefore, the gravitational pull due to the black hole which makes the difference - remember, gravity is not related to size, but to mass and density! The Unified Model for Active Galactic Nuclei One might ask oneself, does the infalling matter near a black hole not hide the region where the energy forms? How, then, can we see quasars shine so bright? It is true, in fact, that there is matter that hides the quasar's bright center. 
Before actually getting into the relatively small accretion disk, matter gathers in a large, not-quite-flat "doughnut" of material surrounding the black hole. Thus, depending on one's viewing angle, this torus -- astronomers call it this rather than "doughnut" -- may hide the direct sight of the energy-producing accretion disk. How does a quasar appear to us if we cannot see the accretion disk surrounding its black hole? The answer to this question is part of what makes the whole issue of quasars very interesting: Throughout the modern astronomical age, we have observed many objects in the Universe that have defied us in our attempts to classify them. The modern quasar model offers a quite natural explanation for these seemingly disparate objects. Striking things like radio galaxies and blazars can be explained as the very same objects as quasars, but being viewed from different angles. Such explanatory power makes a theory very attractive to scientists, and they tend to accept such theories as true very quickly - too quickly sometimes! In any case, at the present time, this accreting-black-hole model is accepted as an explanation for the behavior of quasars, radio galaxies, and blazars, known collectively as active galactic nuclei (AGNs), since they are thought to be always in the nucleus of a galaxy. And such galaxies are part of a class known as "active galaxies." Galactic Nuclei: Old Quasars? One question remains unanswered by astronomers: Because we see quasars exclusively at great distances -- and, therefore, the light we detect from them comes from a long time ago -- does this mean that all galaxies have had an AGN in their center at some stage, or is it the case that only a few of them did, and only during a short time in the past? Try to build your own theory on this by first obtaining information on the relative numbers of normal galaxies and quasars at high redshift and nearby. Figure: Lenses through the HST. Two examples of gravitational lensing captured by the Hubble Space Telescope: HST 14164+5215 (left) is a pair of faint lensed images on either side of a brighter galaxy, while HST 15433+5352 (right) is a lensed source visible in this image as an extended arc about the elliptical lensing galaxy. Images courtesy of K. Ratnatunga (Carnegie Mellon Univ.) and NASA. Erik Stengler studied physics in Cologne, Germany, and completed an M.Phil. and a Ph.D. in astronomy at Cambridge University in the United Kingdom. After several years of research in this field, he started a second Ph.D. in science didactics at the University of La Laguna in Spain and recently joined the team in charge of creating the new Interactive Science Museum in San Sebastian, Spain, to be opened in Spring 2000. He can be reached via email at [email protected].
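As a back-of-envelope footnote to the accretion argument in this article (the figures below are standard textbook values, not numbers given by the author): the power released by matter falling onto a compact object can be estimated as

$$ L \approx \eta \, \dot{M} c^{2}, $$

where $\dot{M}$ is the accretion rate and $\eta$ is the efficiency with which rest-mass energy is converted to radiation. For accretion onto a black hole, $\eta$ is commonly taken to be of order 0.1, compared with roughly 0.007 for hydrogen fusion in stars. This is why an accretion flow confined to a region smaller than a single star can outshine an entire galaxy while consuming only a few solar masses of gas per year.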
http://www.astrosociety.org/edu/publications/tnl/46/quasars2.html
13
11
Where did Earth's oceans come from? Astronomers have long contended that icy comets and asteroids delivered the water for them during an epoch of heavy bombardment that ended about 3.9 billion years ago. But a new study suggests that Earth supplied its own water, leaching it from the rocks that formed the planet. The finding may help explain why life on Earth appeared so early, and it may indicate that other rocky worlds are also awash in vast seas. Our planet has always harbored water. The rubble that coalesced to form Earth contained trace amounts—tens to hundreds of parts per million—of the stuff. But scientists didn't believe that was enough to create today's oceans, and thus they looked to alien origins for our water supply. Geologist Linda Elkins-Tanton of the Massachusetts Institute of Technology in Cambridge didn't think researchers needed to look that far. To make her case, she conducted a chemical and physical analysis of Earth's library of meteorites—a useful analogue for the building blocks of our planet. She then plugged the data into a computer simulation of early Earth-like planets. Her models show that a large percentage of the water in the molten rock would quickly form a steam atmosphere before cooling and condensing into an ocean. The process would take tens of millions of years, meaning that oceans were sloshing around on Earth by as early as 4.4 billion years ago. Even the scant amount of water in the mantle, which is much drier than the sand in the Sahara, should produce oceans hundreds of meters deep, Elkins-Tanton reports in an upcoming paper in Astrophysics and Space Science. Astrobiologists have been continually surprised by how quickly life evolved on Earth—within 600 million years after the planet's formation, or about 3.9 billion years ago. Elkins-Tanton's findings may help explain why. "If water oceans were present shortly after the impact that formed the moon [some 4.45 billion years ago]," says Dirk Schulze-Makuch, an astrobiologist at Washington State University, Pullman, "much more time would be available for the evolution of life, and it would explain why life was already relatively complex when we find the first traces of it in the rock record." Pin Chen, a planetary scientist at NASA's Jet Propulsion Laboratory in Pasadena, California, says Elkins-Tanton presents a compelling scientific story that oceans form very early in the history of a terrestrial-type planet. Chen notes that the work also supports the suggestion that early Mars had a wetter climate than it does today and thus might have supported life. So, too, might a number of Earth-like planets that astronomers are just beginning to discover, says Schulze-Makuch. Even so, Max Bernstein, an astrochemist at NASA Headquarters in Washington, D.C., notes that Elkins-Tanton's models don't include the possibility that the huge asteroid and comet impacts prevalent during the formation of our solar system boiled off the water. "Just because there was an ocean early on," he says, "doesn't mean that it stuck around long enough for life." Elkins-Tanton counters that even a huge impact would not cause Earth-like planets to lose more than half of their oceans.
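A rough back-of-envelope check of the claim that trace water in Earth's rock could supply deep oceans; the constants below are my own round numbers, not values taken from the study.

```python
# Depth of a global ocean if a given fraction (in ppm by mass) of Earth's rock
# outgassed its water. Rough, illustrative constants only.

EARTH_MASS_KG = 5.97e24
SURFACE_AREA_M2 = 5.1e14      # total surface area of Earth
WATER_DENSITY = 1000.0        # kg per cubic meter

def ocean_depth_m(water_ppm):
    """Depth (m) of a uniform global ocean from water_ppm of Earth's mass as water."""
    water_mass = EARTH_MASS_KG * water_ppm * 1e-6
    volume = water_mass / WATER_DENSITY
    return volume / SURFACE_AREA_M2

for ppm in (10, 50, 100):
    print(ppm, "ppm ->", round(ocean_depth_m(ppm)), "m")
# 10 ppm -> ~117 m, 50 ppm -> ~585 m, 100 ppm -> ~1171 m
```

Even a few tens of parts per million, if fully outgassed and spread evenly over the surface, corresponds to a global ocean on the order of a hundred meters deep, which is consistent with the "oceans hundreds of meters deep" figure quoted above.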
http://news.sciencemag.org/sciencenow/2010/11/earth-oceans-were-homegrown.html
13
10
Sample lesson (Kindergarten: algebra): Explore simple pictorial, rhythmic and symbolic linear patterns at the Penguin Parade. 3+2 = 2+3: K5 introduces kids to algebra. K5 introduces algebraic thinking to kids to broaden and deepen their understanding of math concepts. Kindergarten and grade 1 algebraic thinking involves recognizing, describing and extending patterns. By grade 5 we have introduced the concept of functions and variables, and laid the foundation for later studies in algebra. Why algebra for kids? Early algebra is a generalization of the arithmetic skills learned by kids in elementary school. Early exposure to algebraic thinking can deepen the understanding of math concepts. Beginning Algebra Activities: Our online algebra lessons help kids learn to: identify and extend pictorial or symbolic patterns; find the missing element in patterns and identify the repeating part of the pattern; understand numerical relationships; understand the commutative principle (i.e., 3+2 = 2+3); represent relationships as numerical equations; and understand variables, use variables to represent unknown quantities, and write equations with variables. Bite-sized lessons provide flexibility: Lessons are broken into 5-10 minute segments so that study sessions can be short and flexible. Most kids learn better with more frequent but shorter study sessions. What is K5? K5 Learning is an online reading and math program for kids in kindergarten to grade 5. Kids work at their own level and their own pace through over 3,000 lessons and activities. Designed principally for after school study and summer study, K5 is also used by homeschoolers, special needs, and gifted kids. K5 helps your children build good study habits and excel in school.
http://www.k5learning.com/math/algebra
13
14
Structural Biochemistry/Cell Organelles/Plant Cell Plants are eukaryotes, multicellular organisms that have membrane-bound organelles. Unlike prokaryotic cells, eukaryotic cells have a membrane-bound nucleus. A plant cell is different from other eukaryotic cells in that it has a rigid cell wall, a central vacuole, plasmodesmata, and plastids. Plant cells take part in photosynthesis to convert sunlight, water, and carbon dioxide into glucose, oxygen, and water. Plants are producers that provide food for themselves (making them autotrophs) and other organisms. These are some of the parts common to plant cells: •Cell Wall- tough, rigid layer that provides shape and protection from osmotic swelling. •Cell (Plasma) Membrane- it is composed of a phospholipid bilayer (including polar hydrophilic heads facing outside and hydrophobic tails facing each other inside) that makes it semipermeable and thus capable of selectively allowing certain ions and molecules in/out of the cell. •Cytoplasm- it consists of the jelly-like cytosol where the organelles are located. •Cytoskeleton- is made up of microtubules. It provides the shape of the cell and helps in transporting materials in and out of the cell. •Golgi Apparatus (body/complex)- it is the site where membrane-bound vesicles are packed with proteins and carbohydrates. These vesicles will leave the cell. •Vacuole- stores metabolites and degrades and recycles macromolecules. •Mitochondria- is the powerhouse of the cell, responsible for cellular respiration by converting the energy stored in glucose into ATP. •Ribosome- contains RNA for protein synthesis. One type is embedded in rough ER and another type puts proteins directly into the cytoplasm. •Rough Endoplasmic Reticulum (rough ER)- covered with ribosomes, it stores, separates, and transports materials through the cell. It also produces proteins in cisternae, which then go to the Golgi apparatus or insert into the cell membrane. •Smooth Endoplasmic Reticulum (smooth ER)- it has no ribosomes embedded in its surface. Lipids and proteins are produced and digested here. Smooth ER buds off from rough ER to move newly-synthesized proteins and lipids. The proteins and lipids are transported to the Golgi apparatus (where they are made ready for export) and membranes. •Peroxisome- is involved in metabolizing certain fatty acids and producing and degrading hydrogen peroxide. •Nuclear Membrane (envelope)- the membrane that surrounds the nucleus. Its many openings allow traffic in/out of the nucleus. •Nucleus - it contains DNA in the form of chromosomes and controls protein synthesis. •Nucleolus - it is the site of ribosomal RNA synthesis. •Centrosome- consisting of a dense center and radiating tubules, it organizes the microtubules into a mitotic spindle during cell division. •Chloroplast- conducts photosynthesis and produces ATP and carbohydrates from captured light energy. •Starch Granule- temporarily stores produced carbohydrates from photosynthesis. Exclusive to Plant Cells: Cell Wall. The cell wall is a tough, usually flexible but fairly rigid layer that surrounds the plant cells. It is located just outside the cell membrane and it provides the cells with structural support and protection. The cell wall also acts as a filtering mechanism. A major function of the cell wall is to act as a pressure vessel, preventing over-expansion when water enters the plant cells. The strongest component of the cell wall is a carbohydrate called cellulose, a polymer of glucose. 
The cell wall gives rigidity and strength to the plant cells which offers protection against mechanical stress. It also permits the plant to build and hold its shape. It limits the entry of large molecules that may be toxic to the cell. It also creates a stable osmotic environment by helping to retain water, which helps prevent osmotic lysis. While the cell wall is rigid, it is still flexible and so it bends rather than holding a fixed shape due to its tensile strength. The rigidity of primary plant tissues is due to turgor pressure and not from rigid cell walls. This is evident in plants that wilt since the stems and leaves begin to droop and in seaweed that bends in water currents. This proves that the cell wall is indeed flexible. The rigidity of healthy plants is due to a combination of the cell wall construction and turgor pressure. The rigidity of the cell wall is also affected by the inflation of the cell it contains. This inflation is a result of the passive uptake of water. Cell rigidity can be increased by the presence of a second cell wall, which is a thicker additional layer of cellulose. This additional layer can be formed containing lignin in xylem cell walls, or containing suberin in cork cell walls. These compounds are rigid and waterproof, making the secondary cell wall very stiff. Secondary cell walls are present in both wood and bark cells of trees. The primary cell wall of most plant cells is semi-permeable so that small molecules and proteins are allowed passage into and out of the cell. Key nutrients, such as water and carbon dioxide, are distributed throughout the plant from cell wall to cell wall via apoplastic flow. The major carbohydrates that make up the primary cell wall are cellulose, hemicellulose and pectin. The secondary cell wall contains a wide range of additional compounds that modify its mechanical properties and permeability. Plant cell walls also contain numerous enzymes, such as hydrolases, esterases, peroxidases, and transglycosylases, that cut, trim and cross-link wall polymers. The relative composition of carbohydrates, secondary compounds and protein varies between plants and between the cell type and age. There are up to three strata, or layers, that can be found in plant cell walls: The middle lamella, which is a layer rich in pectins. This is the outermost layer that forms the interface between adjacent plant cells and keeps them together. The primary cell wall which is generally a thin, flexible layer that is formed when the cell is growing. The secondary cell wall which is a thick layer that is formed inside the primary cell wall after the cell is fully grown. It is only found in some cell types. Vacuole. The vacuole is essentially an enclosed compartment that is filled with water containing inorganic and organic molecules including various enzymes in solution. Vacuoles are formed by the fusion of multiple membrane vesicles and are effectively just larger forms of these vesicles. This organelle does not have a basic shape or size since its structure is determined by the needs of the cell. The functions of the vacuole in the plant cell include isolating materials that may be harmful to the cell, containing waste products, maintaining internal hydrostatic pressure within the cell, maintaining an acidic internal pH, containing small molecules, exporting unwanted substances from the cell, and allowing plants to support structures such as leaves and flowers. 
Vacuoles also play an important role in maintaining a balance between biogenesis and degradation of many substances and cell structures in the organism. Vacuoles aid in the destruction of invading bacteria or of misfolded proteins that are building up within the cell. They have the function of storing food and assist in the digestive and waste management process for the cell. Most mature plant cells have a single large central vacuole that takes up approximately 30% of the cell's volume. It is surrounded by a membrane called the tonoplast, which is the cytoplasmic membrane separating the vacuolar contents from the cell's cytoplasm. It is involved in regulating the movements of ions around the cell, and isolating substances that may be harmful to the cell. Other than storage, the main function of the central vacuole is to maintain turgor pressure against the cell wall. The proteins that are found in the tonoplast control the flow of water into and out of the vacuole through active transport, pumping potassium ions into and out of the vacuolar interior. Because of osmosis, water will flow into the vacuole placing pressure on the cell wall. If there is a significant amount of water loss, there is a decline in turgor pressure and the cell will plasmolyse. Turgor pressure exerted by the vacuole is required for cellular elongation as well as for supporting plants in the upright position. Another function of the vacuole is to push all contents of the cell's cytoplasm against the cellular membrane which helps keep the chloroplasts closer to light. Plasmodesmata are microscopic channels that traverse the cell walls of plant cells enabling the transport and communication between the cells. Plasmodesmata enable direct, regulated intercellular transport of substances between the cells. There are two forms of plasmodesmata, primary ones that form during cell division and secondary ones that form between mature cells. They are formed when a portion of the endoplasmic reticulum is trapped across the middle lamella as a new cell wall is laid down between two newly divided plant cells and this eventually becomes the cytoplasmic connection between the two cells. It is here that the cell wall is thickened no further and depressions or thin areas known as pits are formed in the walls. Pits usually pair up between adjacent cells. Plasmodesmata are constructed of three main layers, the plasma membrane, the cytoplasmic sleeve, and the desmotubule. The plasma membrane part of the plasmodesmata is an extension of the cell membrane and it is similar in structure to the cellular phospholipid bilayers. The cytoplasmic sleeve is a fluid-filled space that is enclosed by the plasma membrane and is an extension of the cytosol. The trafficking of molecules and ions through the plasmodesmata occurs through this passage. Smaller molecules, such as sugars and amino acids, and ions can pass through the plasmodesmata via diffusion without the need for additional chemical energy. Proteins can also pass through the cytoplasmic sleeve but it is not yet known just how they are able to pass through. Finally, the desmotubule is a tube of compressed endoplasmic reticulum that runs between adjacent cells. There are some molecules that are known to pass through this tube but it is not the main route for plasmodesmatal transport. The plasmodesmata have been shown to transport proteins, short interfering RNA, messenger RNA, and viral genomes from cell to cell. 
The size of the molecules that can pass through the plasmodesmata is determined by the size exclusion limit. This limit is highly variable and is subject to active modification. Several models have been proposed for the active transport through the plasmodesmata. One suggestion is that such transport is mediated by the interactions with proteins that are localized on the desmotubule, and/or by chaperones partially unfolding proteins which allows them to fit through the narrow passage. Plastids are the site of manufacture and storage of important chemical compounds that are used by the cell. They often contain pigments used in photosynthesis and the types of pigments present can change or determine the color of the cell. Plastids are responsible for photosynthesis and the storage of products like starch, and they have the ability to differentiate between these and other forms. All plastids derive from proplastids, which are present in the meristematic regions of the plant. In plants, plastids may differentiate into several forms depending on what function they need to play in the cell. Undifferentiated plastids, the proplastids, can develop into the following types of plastids: •Chloroplasts: for photosynthesis •Chromoplasts: for pigment synthesis and storage •Leucoplasts: for monoterpene synthesis Chloroplasts are the organelles that conduct photosynthesis. They capture light energy to conserve free energy in the form of ATP and reduce NADP to NADPH. They are observed as flat discs usually 2 to 10 micrometers in diameter and 1 micrometer thick. The chloroplast is contained by an envelope that consists of an inner and outer phospholipid membrane. Between these layers is the intermembrane space. The material within the chloroplast is called the stroma and it contains one or more molecules of small, circular DNA. Within the stroma are stacks of thylakoids, which are the site of photosynthesis. The thylakoids are arranged in stacks called grana. A thylakoid has a flattened disk shape and has an empty space called the thylakoid space or lumen. The process of photosynthesis takes place on the thylakoid membrane. Embedded in the thylakoid membrane are antenna complexes that consist of the light-absorbing pigments, such as chlorophyll and carotenoids, as well as the proteins that bind the pigments. These complexes increase the surface area for light capture and allow the capture of photons with a wider range of wavelengths. The energy of the incident photons is absorbed by the pigments and funneled to the reaction center of the complex through resonance energy transfer. From there, two chlorophyll molecules are ionized, which produces an excited electron which passes on to the photochemical reaction center. Chromoplasts are responsible for pigment synthesis and storage. They are found in the colored organs of plants such as fruit and floral petals, to which they give their distinctive colors. This is always associated with a massive increase in the accumulation of carotenoid pigments. Chromoplasts synthesize and store pigments such as orange carotene, yellow xanthophylls and various other red pigments. The most probable main evolutionary role of chromoplasts is to act as an attractant for pollinating animals or for seed dispersal via the eating of colored fruits. They allow for the accumulation of large quantities of water-insoluble compounds in otherwise watery parts of plants. 
In chloroplasts, some carotenoids are used as accessory pigments in the process of photosynthesis, where they act to increase the efficiency of chlorophyll in harvesting light energy. When leaves change color during autumn, it is because of the loss of green chlorophyll unmasking these carotenoids that are already present in the leaves. The term "chromoplast" is used to include any plastid that has pigment, mainly to emphasize the contrast with leucoplasts, which are plastids that have no pigments. Leucoplasts lack pigments and so they are not green. They are located in roots and non-photosynthetic tissues of plants. They can become specialized for bulk storage of starch, lipid or protein and are then known as amyloplasts, elaioplasts, or proteinoplasts, respectively. In many cell types, though, leucoplasts do not have a major storage function and are present to provide a wide range of essential biosynthetic functions, including the synthesis of fatty acids, many amino acids, and tetrapyrrole compounds such as haem. Extensive networks of stromules interconnecting leucoplasts have been observed in epidermal cells of roots, hypocotyls and petals.
http://en.wikibooks.org/wiki/Structural_Biochemistry/Cell_Organelles/Plant_Cell
13
27
Glossary of Motor and Motion Related Terms This motor terminology glossary is a guide to explain and define a variety of terms and characteristics that apply to AC and DC electric motors and motion control related terms. AC (Alternating Current) - The commonly available electric power supplied by an AC generator and distributed in single- or three-phase forms. AC current changes its direction of flow (cycles). Acceleration - Rate of increase in velocity with respect to time; equal to net torque divided by inertia. Accuracy - Difference between the actual value and the measured or expected value. Actuator - A device that creates mechanical motion by converting various forms of energy to rotating or linear mechanical energy. Alternating Current (AC) - The standard power supply available from electric utilities. Ambient temperature - Temperature of the surroundings. The standard NEMA rating for ambient temperature is not to exceed 40 degrees C. Ampere (Amp) - The standard unit of electric current. The current produced by a pressure of one volt in a circuit having a resistance of one ohm. Amplifier - Electronics that convert low level inputs to high level outputs. Armature - The rotating part of a DC or universal motor. Armature Current - Armature current is the DC current required by a DC motor to produce torque and drive a load. The maximum safe, continuous current is stamped on the motor nameplate. This can only be exceeded for initial acceleration, and for short periods of time. Armature current is proportional to the amount of torque being produced; therefore, it rises and falls as the torque demand rises and falls. Armature Reaction - The current that flows in the armature winding of a DC motor tends to produce magnetic flux in addition to that produced by the field current. This effect, which reduces the torque capacity, is called armature reaction and can affect the commutation and the magnitude of the motor's generated voltage. Axial Movement - Often called "endplay." The endwise movement of motor or gear shafts. Usually expressed in thousandths of an inch. Axial Thrust - The force or loads that are applied to the motor shaft in a direction parallel to the axis of the shaft (such as from a fan or pump). Back-EMF - Electromotive force generated when a conductor passes through a magnetic field. In a motor it is generated any time the armature is moving in the field, whether the motor is under power or not. The term "back" or "counter" EMF refers to the polarity of the voltage and the direction of the current flow as being opposed to the supply voltage and current to the motor under power. Back EMF constant - [mV/rpm] The constant corresponding to the relationship between the induced voltage in the rotor and the speed of rotation. In brushless motors the back-EMF constant is the constant corresponding to the relationship between the induced voltage in the motor phases and the rotational speed. Backlash - The typically undesirable quality of "play" or "slop" in a mechanical system. Gearboxes, depending on the level of precision of the parts and the type of gearing system involved, can have varying degrees of backlash internally. Usually expressed in thousandths of an inch and measured at a specific radius at the output shaft. Back of a Motor - The back of a motor is the end which carries the coupling or driving pulley (NEMA). This is sometimes called the drive end (D.E.) or pulley end (P.E.) 
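The Acceleration entry above (net torque divided by inertia) can be illustrated numerically. The following is a minimal Python sketch, not part of the original glossary; the torque and inertia figures are hypothetical.

```python
import math

def angular_acceleration(net_torque_nm: float, inertia_kg_m2: float) -> float:
    """Acceleration = net torque / inertia, in rad/s^2 (per the Acceleration entry)."""
    return net_torque_nm / inertia_kg_m2

# Hypothetical load: 2 N*m of net torque acting on a 0.05 kg*m^2 inertia
alpha = angular_acceleration(2.0, 0.05)                       # 40 rad/s^2
print(f"{alpha:.1f} rad/s^2 = {alpha * 60 / (2 * math.pi):.0f} rpm per second")
```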
Base Speed - Base speed is the manufacturer's nameplate rating where the motor will develop rated HP at rated load and voltage. With DC drives, it is commonly the point where full armature voltage is applied with full rated field current. Bearings - Bearings reduce friction and wear while supporting rotating elements. When used in a motor, they must provide a relatively rigid support for the output shaft. Bearings act as the connection point between the rotating and stationary elements of a motor. There are various types such as roller, ball, sleeve (journal) and needle. Ball bearings are used in virtually all types and sizes of electric motors. They exhibit low friction loss, are suited for high-speed operation and are compatible with a wide range of temperatures. Bifilar winding - Indicates two distinct windings in the same physical arrangement; these windings are usually wired together, either in series or in parallel, to form one phase. Bipolar chopper drive - A drive that uses the switch mode method to control motor current and polarity. Braking - Braking provides a means of stopping an AC or DC motor and can be accomplished in several ways - A. Dynamic Braking slows the motor by applying a resistive load across the armature leads after disconnection from the DC supply. This must be done while the motor field is energized. The motor then acts as a generator until the energy of the rotating armature is dissipated. This is not a holding brake. B. Regenerative Braking is similar to Dynamic Braking, but is accomplished electronically. The generated power is returned to the line through the power converter. It may also be just dissipated as losses in the converter (within its limitations). Breakdown Torque - The maximum torque a motor can achieve with rated voltage applied at rated frequency, without a sudden drop in speed or stalling. Breakaway Torque - The torque required to start a machine from standstill. It is always greater than the torque needed to maintain motion. Bridge Rectifier - A full-wave rectifier that conducts current in only one direction of the input current. AC applied to the input results in approximate DC at the output. Bridge Rectifier (Diode, SCR) - A diode bridge rectifier is a non-controlled full wave rectifier that produces a constant rectified DC voltage. An SCR bridge rectifier is a full wave rectifier with an output that can be controlled by switching on the gate control element. Brush - A brush is a conductor, usually composed of some element of carbon, serving to maintain an electrical connection between stationary and moving parts of a machine (commutator of a DC motor). The brush is mounted in a spring-loaded holder and positioned tangent to the commutator segments against which it "brushes". Pairs of brushes are equally spaced around the circumference of the commutator. Brushed DC motor - A class of motors that has a permanent magnet stator and a wound iron-core armature, as well as mechanical brushes for commutation; capable of variable speed control, but not readily adaptable to different environments. Brushless servomotor - A class of servomotors that uses electrical feedback rather than mechanical brushes for commutation; durable and adaptable to many different environments. Canadian Standards Association (CSA) - The agency that sets safety standards for motors and other electrical equipment used in Canada. Capacitance - As the measure of electrical storage potential of a capacitor, the unit of capacitance is the farad, but typical values are expressed in microfarads. 
Capacitor - A device that stores electrical energy. Used on single-phase motors, a capacitor can provide a starting "boost" or allow lower current during operation. Capacitor Motor - A single-phase induction motor with a main winding arranged for direct connection to the power source, and an auxiliary winding connected in series with a capacitor. There are three types of capacitor motors - capacitor start, in which the capacitor phase is in the circuit only during starting; permanent-split capacitor, in which the capacitor remains in the circuit while running; and two-value capacitor, which uses different capacitance values for starting and running. Capacitor Start - The capacitor start single-phase motor is basically the same as the split phase start, except that it has a capacitor in series with the starting winding. The addition of the capacitor provides better phase relation and results in greater starting torque with much less power input. As in the case of the split phase motor, this type can be reversed at rest, but not while running unless special starting and reversing switches are used. When properly equipped for reversing while running, the motor is much more suitable for this service than the split phase start since it provides greater reversing ability at less watts input. Case temperature rating - Maximum temperature the motor case can reach without the inside of the motor exceeding its internal temperature rating. Center Distance - A basic measurement or size reference for worm gear reducers, measured from the centerline of the worm to the centerline of the worm wheel. Centrifugal Cutout Switch - A centrifugally operated automatic mechanism used in conjunction with split phase and other types of single-phase induction motors. Centrifugal cutout switches will open or disconnect the starting winding when the rotor has reached a predetermined speed and reconnect it when the motor speed falls below it. Without such a device, the starting winding would be susceptible to rapid overheating and subsequent burnout. Closed-loop - Describes a system where a measured output value is compared to a desired input value and corrected accordingly (e.g., a servomotor system). Cogging - A condition in which a motor does not rotate smoothly but "steps" or "jerks" from one position to another during shaft revolution. Cogging is most pronounced at low motor speeds and can cause objectionable vibrations in the driven machine. Commutation - A term that refers to the action of steering currents or voltage to the proper motor phases so as to produce optimum motor torque. In brush type motors, commutation is done electromechanically via the brushes and commutator. In brushless motors, commutation is done by the switching electronics using rotor position information typically obtained by Hall sensors, a tachsyn, a resolver, or an encoder. Commutator - The commutator is a mechanical device in a brushed DC or universal motor that passes current from the brushes to the windings; it is fastened to the motor shaft and is considered part of the armature assembly. It consists of segments or "bars" that are electrically connected to two ends of one (or more) armature coils. Current flows from the power supply through the brushes, to the commutator and hence through the armature coils. The arrangement of commutator segments is such that the magnetic polarity of each coil changes a number of times per revolution (the number of times depends on the number of poles in the motor). Continuous stall current - Amount of current applied to the motor to achieve the continuous stall torque. Continuous stall torque - Maximum amount of torque a motor can provide at zero speed without exceeding its thermal capacity. 
Continuous Duty (CONT) - A motor that can continue to operate within the insulation temperature limits after it has reached normal operating (equilibrium) temperature. Controller - used to describe collective group of electronics that control the motor (e.g. drive, indexer, etc.). Converter - The process of changing AC to DC. This is accomplished through use of a diode rectifier or thyristor rectifier circuit. The term “converter” may also refer to the process of changing AC to DC to AC (e.g. adjustable frequency drive). A “frequency converter”, such as that found in an adjustable frequency drive, consists of a Rectifier, a DC Intermediate Circuit, and Inverter and a Control Unit Current, AC - The standard power supply available from electric utilities or alternators. Current, DC - The power supply available from batteries, generators (not alternators), or a rectified source used for special purpose applications. Coupling - The mechanical connector joining the motor shaft to the equipment to be driven. Current - The flow of electrons through a conducting material. By convention, current is considered to flow from positive to negative potential. The electrons, however, actually flow in the opposite direction. The unit of measurement is the Ampere and 1 Amp is defined as the constant current produced between two straight infinitely long parallel conductors with negligible cross section diameter and spaced one meter apart in a vacuum. Current Constant - The constant corresponding to the relationship between motor current and motor output torque. Current at peak torque - amount of current required to produce peak torque. DC (Direct Current) - Is the type of current where all electrons are flowing in the same direction continuously. If the flow of electrons reverses periodically, the current is called AC (Alternating Current). Deceleration - rate of decrease in velocity with respect to time. Decibel (dB) - A logarithmic measurement of gain. If G is a system gain (ratio of output to input) then 20 log G = gain in decibels (dB). Demagnetization (Current) - When a permanent magnet DC motor is subjected to high current pulses at which the motor permanent magnets will be demagnetized. This is an irreversible effect which will alter the motor characteristics and degrade performance. Detent torque - torque that is present in a non-energized motor. Drive - amplifier that converts step and direction input to motor currents and voltages. Drive Controller - (also called a Variable Speed Drive) An electronic device that can control the speed, torque horsepower and direction of an AC or DC motor. Drive, PWM - A motor drive utilizing Pulse-Width Modulation techniques to control power to the motor. Typically a high efficiency drive that can be used for high response applications. Drive, SCR - A DC motor drive which utilizes internal silicon controlled rectifiers as the power control elements. Usually used for low bandwidths, higher power applications. Drive, Servo - A motor drive which utilizes internal feedback loops for accurate control of motor current and/or velocity. Drive, Stepper - Electronics which convert step and direction inputs to high power currents and voltages to drive a stepping motor. The stepping motor driver is analogous to the servo motor amplifier. Duty Cycle - The relationship between the operating and rest times or repeatable operation at different loads. 
A motor which can continue to operate within the temperature limits of its insulation system after it has reached normal operating (equilibrium) temperature is considered to have a continuous duty (CONT.) rating. A motor which neverreaches equilibrium temperature but is permitted to cool down between operations, is operating under intermittent (INT) duty. Conditions such as a crane and hoist motor are often rated 15 or 30 minute intermittent duty. Dynamic Braking - A passive technique for stopping a permanent magnet brush or brushless motor. The motor windings are shorted together through a resistor which results in motor braking with an exponential decrease in speed. Eddy Current - Localized currents induced in an iron core by alternating magnetic flux. These currents translate into losses (heat) and their minimization is an important factor in lamination design. Efficiency - Ratio of mechanical output to electrical input indicated by a percent. In motors, it is the effectiveness with which a motor converts electrical energy into mechanical energy. EMF - The initials of electromotive force which is another term for voltage or potential difference. In DC adjustable speed drives, voltage applied to the motor armature from power supply is the EMF and the voltage generated by the motor is the counter-EMF or CEMF. EMI (Electro-Magnetic Interference) - EMI is noise which, when coupled into sensitive electronic circuits, may cause problems. Enclosure - The term used to describe the motor housing. The most common industrial types are Open Drip Proof (ODP), Totally Enclosed Fan Cooled (TEFC), Totally Enclosed Non-Ventilated (TENV), and Totally Enclosed Air Over (TEAO). Encoder - A type of feedback device which converts mechanical motion into electrical signals to indicate actuator position. Typical encoders are designed with a printed disc and a light source. As the disc turns with the actuator shaft, the light source shines through the printed pattern onto a sensor. The light transmission is interrupted by the pattern on the disc. These interruptions are sensed and converted to electric pulses. By counting the pulses, actuator shaft position is determined. End play - amount of axial displacement resulting from the application of a load equal to the stated maximum axial load. End shield - The part of a motor that houses the bearing supporting the rotor and acts as a protective guard to the internal parts of the motor; sometimes called endbell, endplate or end bracket. Error - Difference between the set point signal and the feedback signal. An error is necessary before a correction can be made in a controlled system. Feedback - The element of a control system that provides an actual operation signal for comparison with the set point to establish an error signal used by the regulator circuit. Field Weakening - The action of reducing the current applied to a DC motor shunt field. This action weakens the strength of the magnetic field and thereby increases the motor speed. Filter - A device that passes a signal or a range of signals and eliminates all others. Floating Ground - A circuit whose electrical common point is not at earth ground potential or the same ground potential as circuitry it is associated with. A voltage difference can exist between the floating ground and earth ground. Force - The tendency to change the motion or position of an object with a push or pull. Force is measured in ounces or pounds. 
Form Factor - A figure of merit which indicates how much rectified current deviates from pur (nonpulsating) DC. A large departure from unity form factor (pure DC) increases the heating effect of the motor. Mathematically, it is expressed as Irms/Iav (Motor heating current / Torque producing current). Four-Quadrant Operation - The four combinations of forward and reverse rotation and forward and reverse torque of which a regenerative drive is capable. The four combinations are - 1. Forward rotation / forward torque (motoring). 2. Forward rotation / reverse torque (regeneration). 3. Reverse rotation / reverse torque (motoring). 4. Reverse rotation / forward torque (regeneration). Frame - The supporting structure for the stator parts of an AC motor. In a DC motor, the frame usually forms a part of the magnetic coil. The frame also determines mounting. Frequency - Alternating electric current frequency is an expression of how often a complete cycle occurs. Cycles per second describe how many complete cycles occur in a given time increment. Hertz (hz) has been adopted to describe cycles per second so that time as well as number of cycles is specified. The standard power supply in North America is 60 hz. Most of the rest of the world has 50 hz power. Friction Torque - The sum of torque losses independent of motor speed. These losses include those caused by static mechanical friction of the ball bearings and magnetic hysteresis of the stator. Front of a Motor - The end opposite the coupling or driving pulley (NEMA). This is sometimes called the opposite pulley end (O.P.E.) or commutator end (C.E.). Full Load Amperes - Line current (amperage) drawn by a motor when operating at rated load and voltage on motor nameplate. Important for proper wire size selection, and motor starter or drive selection. Also called full load current. Full Load Torque - The torque a motor produces at its rated horsepower and full-load speed. Generator - Any machine that converts mechanical energy into electrical energy. Grounded Circuit - An electrical circuit coupled to earth ground to establish a reference point. An electric circuit malfunction caused by insulation breakdown, allowing current flow to ground rather than through the intended circuit. Horsepower - A measure of the amount of work that a motor can perform in a given period of time. Hysteresis Loss - The resistance offered by materials to becoming magnetized (magnetic orientation of molecular structure) results in energy being expended and corresponding loss. Hysteresis loss in a magnetic circuit is the energy expended to magnetize and demagnetize the core. Inductance - The characteristic of an electric circuit by which varying current in it produces a varying magnetic field which causes voltages in the same circuit or in a nearby circuit Induction Motor - The simplest and most rugged electric motor, it consists of a wound stator and a rotor assembly. The AC induction motor is named because the electric current flowing in its secondary member (the rotor) is induced by the alternating current flowing in its primary member (the stator). The power supply is connected only to the stator. The combined electromagnetic effects of the two currents produce the force to create rotation. Inertia - A measure of a body’s resistance to changes in velocity, whether the body is at rest or moving at a constant velocity. The velocity can be either linear or rotational. 
The moment of Inertia (WK²) is the product of the weight (W) of an object and the square of the radius of gyration (K²). The radius of gyration is a measure of how the mass of the object is distributed about the axis of rotation. WK² is usually expressed in units of lb-ft². Insulation - In motors, classified by maximum allowable operating temperature. NEMA classifications include - Class A = 105°C, Class B = 130°C, Class F = 155°C and Class H = 180°C. Integral Horsepower Motor - A motor built in a frame having a continuous rating of 1 HP or more. Intermittent Duty (INT) - A motor that never reaches equilibrium temperature, but is permitted to cool down between operations. For example, a crane, hoist or machine tool motor is often rated for 15 or 30 minute duty. International Electrotechnical Commission (IEC) - The worldwide organization that promotes international unification of standards or norms. Its formal decisions on technical matters express, as nearly as possible, an international consensus. Inverter - An electronic device that converts fixed frequency and fixed voltages to variable frequency and voltage. Enables the user to electrically adjust the speed of an AC motor. IR Compensation - A way to compensate for the voltage drop across resistance of the AC or DC motor circuit and the resultant reduction in speed. This compensation also provides a way to improve the speed regulation characteristics of the motor, especially at low speeds. Drives that use a tachometer-generator for speed feedback generally do not require an IR Compensation circuit because the tachometer will inherently compensate for the loss of speed. Laminations - The steel portion of the rotor and stator cores is made up of a series of thin laminations (sheets) which are stacked and fastened together by cleats, rivets or welds. Laminations are used instead of a solid piece in order to reduce eddy-current losses. Locked Rotor Current - Measured current with the rotor locked and with rated voltage and frequency applied to the motor. Locked Rotor Torque - Measured torque with the rotor locked and with rated voltage and frequency applied to the motor. Megger Test - A test used to measure an insulation system's resistance. This is usually measured in megohms and tested by passing a high voltage at low current through the motor windings and measuring the resistance of the various insulation systems. Motor - A device that takes electrical energy and converts it into mechanical energy to turn a shaft. Mechanical Time Constant - [ms] The time required by the motor to reach a speed of 63% of its final no-load speed from standstill. NEMA - The National Electrical Manufacturers Association is a nonprofit organization organized and supported by manufacturers of electrical equipment and supplies. Some of the standards NEMA specifies are - HP ratings, speeds, frame sizes and dimensions, torques and enclosures. Nameplate - The plate on the outside of the motor describing the motor horsepower, voltage, speed, efficiency, design, enclosure, etc. Nominal Voltage - [V DC] The voltage applied to the armature at which the nominal motor specifications are measured or calculated. No-load speed - [rpm] The maximum speed the motor attains with no additional torque load at a given voltage. This value varies according to the voltage applied to the motor. No-load current - [A] The current consumption of the motor at nominal voltage and under no-load conditions. 
This value varies proportionally to speed and is influenced by temperature Open Loop - A control system that lacks feedback Output Power - [W] The mechanical power that the motor generates based on a given input power. Mechanical power can be calculated in a few different ways. For motors, one common way is the multiplication of the output speed and torque and conversion factor. Power - Work done per unit of time. Measured in horsepower or watts - 1 HP = 33,000 ft-lb / min. = 746 watts. Plugging - A method of braking a motor that involves applying partial or full voltage in reverse in order to bring the motor to zero speed. Power Factor - A measurement of the time phase difference between the voltage and current in an AC circuit. It is represented by the cosine of the angle of this phase difference. Power factor is the ratio of Real Power (kW) to total kVA or the ratio of actual power (W) to apparent power (volt-amperes). PID - Proportional-Integral-Derivative. An acronym that describes the compensation structure that can be used in a closed-loop system. PMDC Motor - A motor consisting of a permanent magnet stator and a wound iron-core rotor. These are brush type motors and are operated by application of DC current. Prime Mover - In industry, prime mover is most often an electric motor. Occasionally engines, hydraulic or air motors are used. Special application considerations are called for when other than an electric motor is the prime mover. Pull Out Torque - Also called breakdown torque or maximum torque, this is the maximum torque a motor can deliver without stalling. Pull Up Torque - The minimum torque delivered by a motor between zero and the rated RPM, equal to the maximum load a motor can accelerate to rated RPM. PWM - Pulse width modulation. An acronym which describes a switch-mode control technique used in amplifiers and drivers to control motor voltage and current. This control technique is used in contrast to linear control and offers the advantages of greatly improved efficiency. Rectifier - A device that transforms alternating-current to direct-current. Regeneration - The characteristic of a motor to act as a generator when the CEMF is larger than the drive’s applied voltage (DC drives) or when the rotor synchronous frequency is greater than the applied frequency (AC drives). Reluctance - The characteristics of a magnetic field which resist the flow of magnetic lines of force through it. Resistor - A device that resists the flow of electrical current for the purpose of operation, protection or control. There are two types of resistors - fixed and variable. A fixed resistor has a fixed value of ohms while a variable resistor is adjustable. Resolution - The smallest distinguishable increment into which a quantity can be divided (e.g. position or shaft speed). It is also the degree to which nearly equal values of a quantity can be discriminated. For encoders, it is the number of unique electrically identified positions occurring in 360 degrees of input shaft rotation. Ramping - The acceleration and deceleration of a motor. May also refer to the change in frequency of the applied step pulse signal. Regeneration - The action during motor braking, in which the motor acts as a generator and takes kinetic energy from the load, converts it to electrical energy, and returns it to the amplifier. Resistance - [Ohm] It is the measure of opposition to current flow through a given medium. 
Substances with high resistances are called insulators and those with low resistances are called conductors. Those in between are known as semiconductors. The unit is the Ohm. 1 Ohm is defined as the resistance between two points on a conductor when an electric potential difference of one volt applied between those points produces a current of one Amp and when that conductor is not the source of any electro motive force. Resonance - The effect of a periodic driving force that causes large amplitude increases at a particular frequency. (Resonance frequency.) RFI - Radio frequency interference. Rise Time - The time required for a signal to rise from 10% of its final value to 90% of its final value. RMS Current - Root mean square current. In an intermittent duty cycle application, the RMS current is equal to the value of steady state current which would produce the equivalent resistive heating over a long period of time. RMS Torque - Root mean square torque. For an intermittent duty cycle application, the RMS torque is equal to the steady state torque which would produce the same amount of motor heating over long periods of time. Rotor - The rotating component of an induction AC motor. It is typically constructed of a laminated, cylindrical iron core with slots for cast-aluminum conductors. Short-circuiting end rings complete the "squirrel cage," which rotates when the moving magnetic field induces a current in the shorted conductors. Self-Locking - The inability of a reducer to be driven backwards by its load. As a matter of safety, no LEESON reducer should be considered self-locking. Servo System - An automatic feedback control system for mechanical motion in which the controlled or output quantity is position, velocity, or acceleration. Servo systems are closed loop systems. Service Factor - When used on a motor nameplate, a number which indicates how much above the nameplate rating a motor can be loaded without causing serious degradation (i.e. A motor with 1.15 S-F can produce 15% greater torque than one with 1.0 S-F). When used in applying motors or gear motors, it is a figure of merit which is used to adjust measured loads in an attempt to compensate for conditions which are difficult to measure or define. Settling Time - The time required for a step response of a system parameter to stop oscillating or ringing and reach its final value. Silicon Controlled Rectifier (SCR) A solid-state switch, sometimes referred to as a thyristor. The SCR has an anode, cathode and control element called the gate. The device provides controlled rectification since it can be turned on at will. The SCR can rapidly switch large currents at high voltages. They are small in size and low in weight. Shock Load - The load seen by a clutch, brake or motor in a system which transmits high peak loads. This type of load is present in crushers, separators, grinders, conveyors, winches and cranes. Short Circuit - A fault or defect in a winding causing part of the normal electrical circuit to be bypassed, frequently resulting in overheating of the winding and burnout. Shunt Resistor - A device located in a servo amplifier for controlling regenerative energy generated when braking a motor. This device dissipates or "dumps" the kinetic energy as heat. Skew - The arrangement of laminations on a rotor or armature to provide a slight angular pattern of their slots with respect to the shaft axis. 
This pattern helps to eliminate low speed cogging in an armature and minimize induced vibration in a rotor as well as reduce associated noise. Slip - The difference between RPM of the rotating magnetic field and RPM of the rotor in an induction motor. Slip is expressed as a percentage and may be calculated by the following formula - Slip (%) = (Synchronous Speed - Running Speed) x 100 / Synchronous Speed. Speed constant - [rpm/V] The speed variation per Volt applied to the motor phases at constant load. Speed Range - The speed minimum and maximum at which a motor must operate under constant or variable torque load conditions. Speed Regulation - In adjustable speed drive systems, speed regulation measures the motor and control's ability to maintain a constant preset speed despite changes in load from zero to 100%. It is expressed as a percentage of the drive system's rated full load speed. Stall torque - The torque developed by the motor at zero speed and nominal voltage. Starting Current - Amount of current drawn at the instant a motor is energized--in most cases much higher than that required for running. Same as locked rotor current. Starting Torque - The torque or twisting force delivered by a motor at the instant it is energized. Starting torque is often higher than rated running or full load torque. Stator - The non-rotating part of a magnetic structure. In a motor the stator usually contains the mounting surface, bearings, and non-rotating windings or permanent magnets. Stiffness - The ability of a device to resist deviation due to load change. Terminal inductance, phase to phase - [µH] The inductance measured between two phases at 1 kHz. Terminal resistance, phase to phase - The resistance measured between two motor phases. The coil temperature directly affects the value. Thermal resistance Rth 1 / Rth 2 - [K/W] Rth 1 corresponds to the value between the coil and housing. Rth 2 corresponds to the value between the housing and the ambient air. Rth 2 can be reduced by enabling exchange of heat between the motor and the ambient air (for example using a heat sink or forced air cooling). Thermal Protector - A device, sensitive to current and heat, which protects the motor against overheating due to overload or failure to start. Basic types include automatic reset, manual reset and resistance temperature detectors. Thrust Load - Force imposed on a shaft parallel to a shaft's axis. Thrust loads are often induced by the driven machine. Take care to be sure the thrust load rating of the reducer is sufficient that its shafts and bearings can absorb the load without premature failure. Torque - A turning force applied to a shaft, tending to cause rotation. Torque Constant (in-lbs) - This motor parameter provides a relationship between input current and output torque. For each ampere of current applied to the rotor, a fixed amount of torque will result. Torque Control - A method of using current limit circuitry to regulate torque instead of speed. Transducer - A device that converts one energy form to another (e.g. mechanical to electrical). Also, a device that when actuated by signals from one or more systems or media, can supply related signals to one or more other systems or media. Transient - A momentary deviation in an electrical or mechanical system. Transistor - A solid-state three-terminal device that allows amplification of signals and can be used for switching and control. The three terminals are called the emitter, base and collector. 
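The slip formula defined above can be checked numerically. The sketch below is an illustrative Python example, not part of the glossary; the 4-pole, 60 Hz motor running at 1750 rpm is a hypothetical case, and the 120 x frequency / poles relation for synchronous speed is the standard textbook formula rather than an entry from this glossary.

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed of an AC induction motor: 120 * frequency / number of poles."""
    return 120.0 * frequency_hz / poles

def slip_percent(synchronous_rpm: float, running_rpm: float) -> float:
    """Slip (%) = (Synchronous Speed - Running Speed) * 100 / Synchronous Speed."""
    return (synchronous_rpm - running_rpm) * 100.0 / synchronous_rpm

ns = synchronous_speed_rpm(60.0, 4)                  # 1800 rpm for a 4-pole, 60 Hz motor
print(f"slip = {slip_percent(ns, 1750.0):.2f} %")    # about 2.78 % at 1750 rpm
```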
Totally Enclosed Enclosure - A motor enclosure, which prevents free exchange of air between the inside and the outside of the enclosure but is not airtight. Different methods of cooling can be used with this enclosure. Totally Enclosed Non-Ventilated (TENV) - No vent openings, tightly enclosed to prevent the free exchange of air, but not airtight. Has no external cooling fan and relies on convection for cooling. Suitable for use where exposed to dirt or dampness, but not for hazardous (explosive) locations. Totally Enclosed Fan Cooled (TEFC) - Same as the TENV except has external fan as an integral part of the motor, to provide cooling by blowing air around the outside frame of the motor. Underwriters Laboratories (UL) - Independent United States testing organization that sets safety standards for motors and other electrical equipment. Voltage - The force that causes a current to flow in an electrical circuit. The unit is the Volt. 1 Volt is defined as the difference of electric potential between two points on a conductor that is carrying a constant current of one ampere when the power dissipated between those points is one watt. Watt - The amount of power required to maintain a current of 1 ampere at a pressure of one volt when the two are in phase with each other. One horsepower is equal to 746 watts. Work - A force moving an object over a distance. Work = force x distance.
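The Output Power, Power and Torque entries above relate torque, speed, watts and horsepower. The following Python sketch is illustrative and not from the glossary; the "conversion factor" mentioned in the Output Power entry is taken here to be the rpm-to-rad/s factor, and the example torque and speed values are hypothetical.

```python
import math

def output_power_watts(torque_nm: float, speed_rpm: float) -> float:
    """Mechanical output power = torque * angular speed; 2*pi/60 converts rpm to rad/s."""
    return torque_nm * speed_rpm * 2.0 * math.pi / 60.0

def watts_to_hp(watts: float) -> float:
    """1 HP = 746 W, as stated in the Power entry above."""
    return watts / 746.0

# Hypothetical motor producing 10 N*m at 1750 rpm
p = output_power_watts(10.0, 1750.0)
print(f"{p:.0f} W  ~  {watts_to_hp(p):.2f} HP")      # roughly 1833 W, about 2.46 HP
```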
http://www.sdtdrivetechnology.co.uk/glossary-of-related-terms/
13
11
In NGC 6240, a galaxy located 400 million light years from the Milky Way, two supermassive black holes are locked in a battle that will eventually end in a mammoth collision. In fact, the collision of two supermassive black holes is as energetic as it can get, generating powerful ripples through the fabric of space. The two supermassive black holes in NGC 6240 are expected to merge into one super-supermassive black hole once the dust has eventually settled. The energetic ripples are known as gravitational waves, and astrophysicists are spending a lot of time and money trying to detect them (although they've had little luck so far). The two black holes in NGC 6240 are currently orbiting each other at a distance of only 3000 light years. But how did two supermassive black holes — both with the mass of millions of suns — get so close? After all, these gravitational monsters usually evolve in the centers of galaxies, alone. NGC 6240 is in fact two galaxies that collided and merged into one. The black holes from each of the galactic nuclei started to orbit around one another about 30 million years ago, and they are expected to make contact tens or hundreds of millions of years from now. In this striking image, optical light (from the Hubble Space Telescope) and X-ray data (from the Chandra X-ray Observatory) have been combined, highlighting the two supermassive black holes as they stare at each other across the chaos of disturbed stars, dust and hot gas in the center of NGC 6240.
http://news.discovery.com/space/history-of-space/supermassive-black-hole-collision.htm
13
21
Coordinate Geometry, or the system of coordinate geometry, is derived from the correspondence between points on the number line and the real numbers. Every real number can be located at a unique point on the number line, and every point corresponds to a unique real number, so the number line represents the whole set of real numbers. By convention the number line is drawn horizontally, with the positive numbers placed on the right hand side, the negative numbers on the left hand side, and zero separating them in the middle. A second number line is created along the vertical axis; here negative coordinates lie below the horizontal line and positive coordinates lie above it. This system of combining two number lines creates a Rectangular Coordinate System. Any point on the plane is represented by two coordinates: its distance from the y-axis (the x-coordinate) and its distance from the x-axis (the y-coordinate). Suppose we have the coordinate (3, 5): the 3 is measured along the x-axis, meaning the point is 3 units from the origin in the x direction, and the 5 is measured along the y-axis, meaning it is 5 units from the origin in the y direction. The introduction of coordinate geometry, or the coordinate system, helped mathematicians to connect algebra and geometry. The representation of points, lines, curves, and various other geometric drawings and constructions became easy because of coordinate geometry. Among the important applications of the coordinate system is the mapping of the Earth into latitudes and longitudes; the first mapping of longitudes and latitudes is often attributed to Amerigo Vespucci. Another practical application is in the field of computer and television screens, with the invention of the pixel. This is all about the introduction to coordinate geometry. If we want to describe the location of a point or body in space, then we need some parameters to show its location in space. These parameters are called coordinates. Thus, we can say that coordinates are those quantities which are used to express the position of a point in space, and they are represented on a coordinate plane.
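As a small illustration of the ideas above, a point in the rectangular coordinate system can be modelled as an ordered pair, and its distances from the two axes and its quadrant follow directly from the two coordinates. This is a minimal Python sketch, not part of the original text; the point (3, 5) is the example used above.

```python
# A point in the rectangular coordinate system is just an (x, y) pair.
point = (3, 5)
x, y = point

print(f"{point} lies {abs(x)} units from the y-axis and {abs(y)} units from the x-axis")

# Which quadrant (or axis) the point falls in follows from the signs of x and y.
if x > 0 and y > 0:
    print("first quadrant")
elif x < 0 and y > 0:
    print("second quadrant")
elif x < 0 and y < 0:
    print("third quadrant")
elif x > 0 and y < 0:
    print("fourth quadrant")
else:
    print("on an axis")
```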
http://www.tutorcircle.com/introduction-to-coordinate-geometry-t9vvp.html
13
22
In this section we're going to be exploring Pythagoras's theorem. You will understand how to use Pythagoras's theorem and be able to find the distance between 2 points in a coordinate grid. Pythagoras was a Greek mathematician who lived about 2,700 years ago and who discovered a method of calculating the length of the hypotenuse of a right angled triangle without drawing it. What he did was take a triangle and draw squares on each side of the triangle. Pythagoras analysed the areas of the squares drawn on each side of the triangle and concluded that: the area of the square on the hypotenuse is equal to the sum of the areas of the squares on the shortest two sides of the triangle. In other words, the areas of the two smaller squares drawn on the triangle sum to give the area of the largest square, the one on the hypotenuse. The following shows an example of finding the length of the longest side, the hypotenuse. Find the length of the hypotenuse marked x. Here we imagine that there are three squares drawn on the sides of the right angled triangle. The area of the smallest square A is 36, and the area of the other smaller square B is 64. The area of the largest square C is the sum of the areas A and B, so C = 36 + 64 = 100. If the area of the largest square C is 100, that must mean that x² = 100, so x = 10. We could have done this more quickly and simply: x² = 6² + 8² = 36 + 64 = 100, so x = 10. Above we have found the longest length, the hypotenuse. Suppose we knew the hypotenuse and had to find one of the other smaller lengths. Here is an example: we know that x² + 8² = hypotenuse², so we have to make x² the subject to work it out. Move 8² to the other side and you get x² = hypotenuse² − 8². We have managed to find the smaller side of the right angled triangle using Pythagoras's theorem. Finding distance between 2 points. Suppose we wanted to find the distance between two points on a grid. A has the coordinate (3, 3) and B has the coordinate (9, 7); suppose we wanted to find the length AB. We can take advantage of Pythagoras here. Looking at the grid we can draw a right angled triangle with AB as its hypotenuse. We need the lengths of the two shorter sides, and to find them we look at the coordinates: the horizontal side is 9 − 3 = 6 and the vertical side is 7 − 3 = 4. Now we can use Pythagoras: AB² = 6² + 4² = 36 + 16 = 52, so AB = √52 ≈ 7.2. There we have managed to find the length of AB by drawing the triangle on the grid in connection with the given coordinates and then using Pythagoras to solve the unknown length.
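The two calculations above — finding a hypotenuse and finding the distance between two grid points — can be written as a short program. This Python sketch is illustrative and not part of the original tutorial; the numbers are the ones from the worked examples.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Length of the hypotenuse from the two shorter sides: c = sqrt(a^2 + b^2)."""
    return math.sqrt(a ** 2 + b ** 2)

def distance(p, q):
    """Distance between two grid points, applying Pythagoras to the
    horizontal and vertical differences of their coordinates."""
    dx = q[0] - p[0]
    dy = q[1] - p[1]
    return hypotenuse(dx, dy)

print(hypotenuse(6, 8))          # 10.0 -- the worked example above
print(distance((3, 3), (9, 7)))  # sqrt(52), about 7.21 -- the length AB
```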
http://www.mathsrev.com/pythagoras%E2%80%99s-theorem/
13
10
Every day, more than 100 million tons of carbon dioxide are drawn from the atmosphere into the ocean by billions of microscopic ocean plants called phytoplankton during photosynthesis. In addition to playing a big role in removing greenhouses gases from the atmosphere, phytoplankton are the foundation of the ocean food chain. For nearly a decade, the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) has been making global observations of phytoplankton productivity. On December 6, 2006, NASA-funded scientists announced that warming sea surface temperatures over the past decade have caused a global decline in phytoplankton productivity. This pair of images shows changes in sea surface temperature (top) and phytoplankton productivity (bottom) between 2000 and 2004, after the last strong El Niño event, which occurred between 1997-1998. Places where temperatures rose between 2000 and 2004 (red areas, top image) are the same places where productivity dropped (red areas, bottom image). In general, the reverse situation was also true: where temperatures cooled, productivity rose. The sea surface temperature map is based on data collected by the Advanced Very High Resolution Radiometer (AVHRR) sensors onboard several National Oceanic and Atmosphere Administration satellites. Why do warmer temperatures have a negative influence on phytoplankton growth? The most likely explanation is that the warmer the surface waters become, the less mixing there is between those waters and deeper, more nutrient-rich water. As nutrients become scarce at the surface, where phytoplankton grow, productivity declines. The effect is most obvious in the part of the world’s oceans that scientists describe as the permanently stratified ocean, bounded by black lines in the images. “Permanently stratified” means that rather than being well-mixed, there is already a distinct difference in the density of warmer, fresher water at the surface and colder, saltier water deeper down. Seasonal stratification occurs in other parts of the ocean, but in the labeled area, the stratification exists year-round. In this situation, with “lighter” (i.e., less dense) water on top, and “heavier” (denser) water below, there is little vertical mixing, and nutrients can’t move to the surface. As surface water warms, the stratification, or layering, becomes even more pronounced, suppressing mixing even further. As a result, nutrient transfer from deeper water to surface waters declines, and so does phytoplankton productivity. “Rising levels of carbon dioxide in the atmosphere play a big part in global warming,” said lead author Michael Behrenfeld of Oregon State University, Corvallis. “This study shows that as the climate warms, phytoplankton growth rates go down and along with them the amount of carbon dioxide these ocean plants consume. That allows carbon dioxide to accumulate more rapidly in the atmosphere, which would produce more warming.” “The evidence is pretty clear that the Earth’s climate is changing dramatically, and in this NASA research we see a specific consequence of that change,” said oceanographer Gene Carl Feldman of NASA’s Goddard Space Flight Center. “It is only by understanding how climate and life on Earth are linked that we can realistically hope to predict how the Earth will be able to support life in the future.” - Behrenfeld., M., O’Malley, R., Siegel, D., McClain, C., Sarmiento, J., Feldman, G., Milligan, A., Falkowski, P., Letelier, R., and Boss, E. (2006). Climate-driven trends in contemporary ocean productivity. 
Nature, 444, 752-755. NASA images by Jesse Allen, based on data provided by Robert O'Malley, Oregon State University.
http://earthobservatory.nasa.gov/IOTD/view.php?id=7187
13
15
Definition: A right angle is an angle whose measure is exactly 90°. (In the original interactive figure, the angle ∠ABC can be adjusted by dragging a point; it is a right angle only when its measure is exactly 90°.) Right angles are one of the most common and interesting angles in all of geometry, and the right angle even has its own symbol: instead of a small arc, the angle is marked with a small square. If you see this symbol, the angle measure in degrees is usually omitted. Note also that right triangles are those where one interior angle is a right angle. Talking about right angles: when ∠ABC is a right angle, we say that the line segments are 'at right angles' to each other. Sometimes we say that these lines are 'orthogonal' or 'normal' to each other, which means the same thing. Types of angle: altogether, there are six types of angle (acute, right, obtuse, straight, reflex and full).
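One common numerical way to test whether two segments are 'at right angles' (orthogonal) is to check that the dot product of the corresponding vectors is zero. The sketch below is an illustrative Python example, not part of the original page; the tolerance value is an assumption to cope with floating-point rounding.

```python
def dot(u, v):
    """Dot product of two 2D vectors."""
    return u[0] * v[0] + u[1] * v[1]

def is_right_angle(a, b, c, tol=1e-9):
    """True if the angle ABC (at vertex B) measures 90 degrees, i.e. the
    segments BA and BC are orthogonal (their dot product is zero)."""
    ba = (a[0] - b[0], a[1] - b[1])
    bc = (c[0] - b[0], c[1] - b[1])
    return abs(dot(ba, bc)) < tol

print(is_right_angle((0, 1), (0, 0), (1, 0)))  # True: the two axes meet at 90 degrees
print(is_right_angle((1, 1), (0, 0), (1, 0)))  # False: this is a 45 degree angle
```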
http://www.mathopenref.com/angleright.html
13
73
Visualization of the quicksort algorithm. The horizontal lines are pivot values.
Worst case performance: O(n²)
Best case performance: O(n log n)
Average case performance: O(n log n)
Worst case space complexity: O(n) auxiliary (naive); O(log n) auxiliary (Sedgewick 1978)
Quicksort, or partition-exchange sort, is a sorting algorithm developed by Tony Hoare that, on average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n²) comparisons, though this behavior is rare. Quicksort is often faster in practice than other O(n log n) algorithms. Additionally, quicksort's sequential and localized memory references work well with a cache. Quicksort is a comparison sort and, in efficient implementations, is not a stable sort. Quicksort can be implemented with an in-place partitioning algorithm, so the entire sort can be done with only O(log n) additional space used by the stack during the recursion. The quicksort algorithm was developed in 1960 by Tony Hoare while in the Soviet Union, as a visiting student at Moscow State University. At that time, Hoare worked on a project on machine translation for the National Physical Laboratory. He developed the algorithm in order to sort the words to be translated, to make them more easily matched to an already-sorted Russian-to-English dictionary that was stored on magnetic tape. Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists. The steps are:
- Pick an element, called a pivot, from the list.
- Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
- Recursively apply the above steps to the sub-list of elements with smaller values and separately the sub-list of elements with greater values.
The base case of the recursion is lists of size zero or one, which never need to be sorted.
Simple version
In simple pseudocode, the algorithm might be expressed as this:
function quicksort('array')
    if length('array') ≤ 1
        return 'array'  // an array of zero or one elements is already sorted
    select and remove a pivot value 'pivot' from 'array'
    create empty lists 'less' and 'greater'
    for each 'x' in 'array'
        if 'x' ≤ 'pivot' then append 'x' to 'less'
        else append 'x' to 'greater'
    return concatenate(quicksort('less'), 'pivot', quicksort('greater'))  // two recursive calls
Notice that we only examine elements by comparing them to other elements. This makes quicksort a comparison sort. This version is also a stable sort (assuming that the "for each" method retrieves elements in original order, and the pivot selected is the last among those of equal value). The correctness of the partition algorithm is based on the following two arguments:
- At each iteration, all the elements processed so far are in the desired position: before the pivot if less than the pivot's value, after the pivot if greater than the pivot's value (loop invariant).
- Each iteration leaves one fewer element to be processed (loop variant). 
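The simple pseudocode above translates almost line for line into a runnable implementation. The following Python sketch is illustrative and not part of the original article; it makes one assumption the pseudocode leaves open, namely that the last element is chosen as the pivot.

```python
def quicksort(array):
    """Simple (out-of-place) quicksort mirroring the pseudocode above."""
    if len(array) <= 1:
        return array                      # zero or one element: already sorted
    pivot = array[-1]                     # pivot choice is arbitrary in the pseudocode
    rest = array[:-1]
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)   # two recursive calls

print(quicksort([3, 6, 1, 6, 0, 2]))      # [0, 1, 2, 3, 6, 6]
```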
The correctness of the overall algorithm can be proven via induction: for zero or one element, the algorithm leaves the data unchanged; for a larger data set it produces the concatenation of two parts, elements less than the pivot and elements greater than it, themselves sorted by the recursive hypothesis.
In-place version
The disadvantage of the simple version above is that it requires O(n) extra storage space, which is as bad as merge sort. The additional memory allocations required can also drastically impact speed and cache performance in practical implementations. There is a more complex version which uses an in-place partition algorithm and can achieve the complete sort using O(log n) space (not counting the input) on average (for the call stack). We start with a partition function:
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right-left+1
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]  // Move pivot to end
    storeIndex := left
    for i from left to right - 1  // left ≤ i < right
        if array[i] <= pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]  // Move pivot to its final place
    return storeIndex
This is the in-place partition algorithm. It partitions the portion of the array between indexes left and right, inclusively, by moving all elements less than array[pivotIndex] before the pivot, and the equal or greater elements after it. In the process it also finds the final position for the pivot element, which it returns. It temporarily moves the pivot element to the end of the subarray, so that it doesn't get in the way. Because it only uses exchanges, the final list has the same elements as the original list. Notice that an element may be exchanged multiple times before reaching its final place. Also, in case of pivot duplicates in the input array, they can be spread across the right subarray, in any order. This doesn't represent a partitioning failure, as further sorting will reposition and finally "glue" them together. This form of the partition algorithm is not the original form; multiple variations can be found in various textbooks, such as versions not having the storeIndex. However, this form is probably the easiest to understand. Once we have this, writing quicksort itself is easy:
function quicksort(array, left, right)
    // If the list has 2 or more items
    if left < right
        // See "Choice of pivot" section below for possible choices
        choose any pivotIndex such that left ≤ pivotIndex ≤ right
        // Get lists of bigger and smaller items and final position of pivot
        pivotNewIndex := partition(array, left, right, pivotIndex)
        // Recursively sort elements smaller than the pivot
        quicksort(array, left, pivotNewIndex - 1)
        // Recursively sort elements at least as big as the pivot
        quicksort(array, pivotNewIndex + 1, right)
Each recursive call to this quicksort function reduces the size of the array being sorted by at least one element, since in each invocation the element at pivotNewIndex is placed in its final position. Therefore, this algorithm is guaranteed to terminate after at most n recursive calls. However, since partition reorders elements within a partition, this version of quicksort is not a stable sort. 
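A runnable version of the in-place pseudocode can look as follows. This Python sketch is illustrative rather than the article's own code; the choice of the middle index as the pivot is an assumption (the pseudocode allows any index between left and right).

```python
def partition(array, left, right, pivot_index):
    """In-place partition matching the pseudocode above; returns the pivot's final index."""
    pivot_value = array[pivot_index]
    array[pivot_index], array[right] = array[right], array[pivot_index]   # move pivot to end
    store_index = left
    for i in range(left, right):                                          # left <= i < right
        if array[i] <= pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    array[store_index], array[right] = array[right], array[store_index]   # pivot to final place
    return store_index

def quicksort_inplace(array, left=0, right=None):
    if right is None:
        right = len(array) - 1
    if left < right:                              # only lists of 2 or more items need work
        pivot_index = (left + right) // 2         # one possible pivot choice
        new_index = partition(array, left, right, pivot_index)
        quicksort_inplace(array, left, new_index - 1)
        quicksort_inplace(array, new_index + 1, right)
    return array

print(quicksort_inplace([9, 4, 7, 1, 4, 0]))      # [0, 1, 4, 4, 7, 9]
```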
Implementation issues
Choice of pivot
In very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which is a rather common use-case. The problem was easily solved by choosing either a random index for the pivot, choosing the middle index of the partition or (especially for longer partitions) choosing the median of the first, middle and last element of the partition for the pivot (as recommended by R. Sedgewick). Selecting a pivot element is also complicated by the existence of integer overflow. If the boundary indices of the subarray being sorted are sufficiently large, the naïve expression for the middle index, (left + right)/2, will cause overflow and provide an invalid pivot index. This can be overcome by using, for example, left + (right-left)/2 to index the middle element, at the cost of more complex arithmetic. Similar issues arise in some other methods of selecting the pivot element.
Optimizations
- To make sure at most O(log N) space is used, recurse first into the smaller half of the array, and use a tail call to recurse into the other.
- Use insertion sort, which has a smaller constant factor and is thus faster on small arrays, for invocations on such small arrays (i.e. where the length is less than a threshold t determined experimentally). This can be implemented by leaving such arrays unsorted and running a single insertion sort pass at the end, because insertion sort handles nearly sorted arrays efficiently. A separate insertion sort of each small segment as they are identified adds the overhead of starting and stopping many small sorts, but avoids wasting effort comparing keys across the many segment boundaries, which keys will be in order due to the workings of the quicksort process. It also improves the cache use.
Parallelization
Like merge sort, quicksort can also be parallelized due to its divide-and-conquer nature. Individual in-place partition operations are difficult to parallelize, but once divided, different sections of the list can be sorted in parallel. The following is a straightforward approach: If we have p processors, we can divide a list of n elements into p sublists in O(n) average time, then sort each of these in O((n/p) log(n/p)) average time. Ignoring the O(n) preprocessing and merge times, this is linear speedup. If the split is blind, ignoring the values, the merge naïvely costs O(n). If the split partitions based on a succession of pivots, it is tricky to parallelize and naïvely costs O(n). Given O(log n) or more processors, only O(n) time is required overall, whereas an approach with linear speedup would achieve O(log n) time only when p = n. One advantage of this simple parallel quicksort over other parallel sort algorithms is that no synchronization is required, but the disadvantage is that sorting is still O(n) and only a sublinear speedup of O(log n) is achieved. A new thread is started as soon as a sublist is available for it to work on and it does not communicate with other threads. When all threads complete, the sort is done. Other more sophisticated parallel sorting algorithms can achieve even better time bounds. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW PRAM with n processors by performing partitioning implicitly.
Formal analysis
Average-case analysis using discrete probability
Quicksort takes O(n log n) time on average, when the input is a random permutation. Why? 
Formal analysis

Average-case analysis using discrete probability

Quicksort takes O(n log n) time on average, when the input is a random permutation. Why?

For a start, it is not hard to see that the partition operation takes O(n) time. In the most unbalanced case, each time we perform a partition we divide the list into two sublists of size 0 and n − 1 (for example, if all elements of the array are equal). This means each recursive call processes a list of size one less than the previous list. Consequently, we can make n − 1 nested calls before we reach a list of size 1. This means that the call tree is a linear chain of n − 1 nested calls. The i-th call does O(n − i) work to do the partition, and the sum (n − 1) + (n − 2) + ... + 1 is O(n²), so in that case quicksort takes O(n²) time. That is the worst case: given knowledge of which comparisons are performed by the sort, there are adaptive algorithms that are effective at generating worst-case input for quicksort on-the-fly, regardless of the pivot selection strategy.

In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only log₂ n nested calls before we reach a list of size 1. This means that the depth of the call tree is log₂ n. But no two calls at the same level of the call tree process the same part of the original list; thus, each level of calls needs only O(n) time all together (each call has some constant overhead, but since there are only O(n) calls at each level, this is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.

In fact, it's not necessary to be perfectly balanced; even if each pivot splits the elements with 75% on one side and 25% on the other side (or any other fixed fraction), the call depth is still limited to log_{4/3} n, which is O(log n), so the total running time is still O(n log n).

So what happens on average? If the pivot has rank somewhere in the middle 50 percent, that is, between the 25th percentile and the 75th percentile, then it splits the elements with at least 25% and at most 75% on each side. If we could consistently choose a pivot from the two middle 50 percent, we would only have to split the list at most log_{4/3} n times before reaching lists of size 1, yielding an O(n log n) algorithm.

When the input is a random permutation, the pivot has a random rank, and so it is not guaranteed to be in the middle 50 percent. However, when we start from a random permutation, in each recursive call the pivot has a random rank in its list, and so it is in the middle 50 percent about half the time. That is good enough. Imagine that you flip a coin: heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't. Imagine that you are flipping a coin over and over until you get k heads. Although this could take a long time, on average only 2k flips are required, and the chance that you won't get k heads after, say, 100k flips is highly improbable (this can be made rigorous using Chernoff bounds). By the same argument, Quicksort's recursion will terminate on average at a call depth of only about 2 · log_{4/3} n, which is O(log n). But if its average call depth is O(log n), and each level of the call tree processes at most n elements, the total amount of work done on average is the product, O(n log n). Note that the algorithm does not have to verify that the pivot is in the middle half—if we hit it any constant fraction of the times, that is enough for the desired complexity.
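The coin-flipping argument predicts an average recursion depth of O(log n) when pivots are chosen at random, and this is easy to check empirically. The Python sketch below is an added illustration, not part of the article; it uses a simple out-of-place partition (sufficient for measuring depth), assumes distinct elements (true for the shuffled ranges used here), and the printed numbers are only indicative.

import math
import random

def quicksort_depth(a, depth=0):
    # maximum recursion depth reached by a randomized quicksort on list a
    if len(a) <= 1:
        return depth
    pivot = a[random.randrange(len(a))]
    less = [x for x in a if x < pivot]
    greater = [x for x in a if x > pivot]
    return max(quicksort_depth(less, depth + 1),
               quicksort_depth(greater, depth + 1))

for n in (1000, 10000, 100000):
    data = list(range(n))
    random.shuffle(data)
    print(n, quicksort_depth(data), round(math.log2(n), 1))
    # typical result: the measured depth stays within a small constant
    # multiple (roughly 2-4x) of log2 n, never anywhere near n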
Average-case analysis using recurrences

An alternative approach is to set up a recurrence relation for T(n), the time needed to sort a list of size n. In the most unbalanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size 0 and n − 1, so the recurrence relation is

T(n) = O(n) + T(0) + T(n − 1) = O(n) + T(n − 1),

which solves to T(n) = O(n²). In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size n/2, so the recurrence relation is

T(n) = O(n) + 2T(n/2).

The master theorem tells us that T(n) = O(n log n).

The outline of a formal proof of the O(n log n) expected time complexity follows. Assume that there are no duplicates, as duplicates could be handled with linear time pre- and post-processing, or considered cases easier than the analyzed. When the input is a random permutation, the rank of the pivot is uniform random from 0 to n − 1. Then the resulting parts of the partition have sizes i and n − i − 1, and i is uniform random from 0 to n − 1. So, averaging over all possible splits and noting that the number of comparisons for the partition is n − 1, the average number of comparisons over all permutations of the input sequence can be estimated accurately by solving the recurrence relation

C(n) = n − 1 + (1/n) Σ_{i=0}^{n−1} [C(i) + C(n − i − 1)].

Solving the recurrence gives C(n) ≈ 2n ln n ≈ 1.39 n log₂ n.

This means that, on average, quicksort performs only about 39% worse than in its best case. In this sense it is closer to the best case than the worst case. Also note that a comparison sort cannot use less than log₂(n!) comparisons on average to sort n items (as explained in the article Comparison sort), and in case of large n, Stirling's approximation yields log₂(n!) ≈ n(log₂ n − log₂ e), so quicksort is not much worse than an ideal comparison sort. This fast average runtime is another reason for quicksort's practical dominance over other sorting algorithms.

Analysis of Randomized quicksort

Using the same analysis, one can show that Randomized quicksort has the desirable property that, for any input, it requires only O(n log n) expected time (averaged over all choices of pivots). However, there exists a combinatorial proof, more elegant than both the analysis using discrete probability and the analysis using recurrences.

To each execution of Quicksort corresponds the following binary search tree (BST): the initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right half is the root of the right subtree, and so on. The number of comparisons of the execution of Quicksort equals the number of comparisons during the construction of the BST by a sequence of insertions. So, the average number of comparisons for randomized Quicksort equals the average cost of constructing a BST when the values inserted form a random permutation.

Consider a BST created by insertion of a sequence x_1, x_2, ..., x_n of values forming a random permutation. Let C denote the cost of creation of the BST. We have

C = Σ_i Σ_{j<i} c_ij,

where c_ij is an indicator that equals 1 if, during the insertion of x_i, there was a comparison to x_j, and 0 otherwise. By linearity of expectation, the expected value E(C) of C is

E(C) = Σ_i Σ_{j<i} Pr(during the insertion of x_i there was a comparison to x_j).

Fix i and j < i. The values x_1, ..., x_j, once sorted, define j + 1 intervals. The core structural observation is that x_i is compared to x_j in the algorithm if and only if x_i falls inside one of the two intervals adjacent to x_j. Observe that since x_1, x_2, ..., x_n is a random permutation, x_1, ..., x_j, x_i is also a random permutation, so the probability that x_i is adjacent to x_j is exactly 2/(j + 1). We end with a short calculation:

E(C) = Σ_i Σ_{j<i} 2/(j + 1) = O(Σ_i log i) = O(n log n).
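The closed form C(n) ≈ 2n ln n can also be checked experimentally. The sketch below is an added illustration, not part of the article: it counts comparisons made by a randomized (out-of-place) quicksort, counting one comparison per element per partition call, and prints the ratio to 2n ln n. The exact ratios shown in the comment are only indicative.

import math
import random

def quicksort_count(a):
    # number of element-vs-pivot comparisons used to sort list a (not in place)
    if len(a) <= 1:
        return 0
    pivot = a[random.randrange(len(a))]
    less, equal, greater = [], [], []
    for x in a:                       # roughly one comparison per element
        if x < pivot:
            less.append(x)
        elif x > pivot:
            greater.append(x)
        else:
            equal.append(x)
    return len(a) + quicksort_count(less) + quicksort_count(greater)

random.seed(1)
for n in (10**3, 10**4, 10**5):
    data = list(range(n))
    random.shuffle(data)
    c = quicksort_count(data)
    print(n, c, round(c / (2 * n * math.log(n)), 3))
    # the ratio climbs slowly toward 1 from below (roughly 0.85-0.95 at these sizes),
    # because the exact average is 2n ln n minus a linear term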
Space complexity

The space used by quicksort depends on the version used. The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies:
- in-place partitioning is used. This unstable partition requires O(1) space.
- After partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most O(log n) space. Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This idea, as discussed above, was described by R. Sedgewick, and keeps the stack depth bounded by O(log n).

Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space.

From a bit complexity viewpoint, variables such as left and right do not use constant space; it takes O(log n) bits to index into a list of n items. Because there are such variables in every stack frame, quicksort using Sedgewick's trick requires O((log n)²) bits of space. This space requirement isn't too terrible, though, since if the list contained n distinct elements, it would need at least O(n log n) bits of space.

Another, less common, not-in-place, version of quicksort uses O(n) space for working storage and can implement a stable sort. The working storage allows the input array to be easily partitioned in a stable manner and then copied back to the input array for successive recursive calls. Sedgewick's optimization is still appropriate.

Selection-based pivoting

A selection algorithm chooses the kth smallest of a list of numbers; this is an easier problem in general than sorting. One simple but effective selection algorithm works nearly in the same manner as quicksort, except instead of making recursive calls on both sublists, it only makes a single tail-recursive call on the sublist which contains the desired element. This small change lowers the average complexity to linear or O(n) time, and makes it an in-place algorithm. A variation on this algorithm brings the worst-case time down to O(n) (see selection algorithm for more information). Conversely, once we know a worst-case O(n) selection algorithm is available, we can use it to find the ideal pivot (the median) at every step of quicksort, producing a variant with worst-case O(n log n) running time. In practical implementations, however, this variant is considerably slower on average.
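The selection algorithm sketched in the paragraph above is commonly called quickselect. The following Python version is an added illustration, not the article's algorithm: it reuses the partition() function from the earlier sketch, chooses pivots at random (so the O(n) bound holds only in expectation, not in the worst case), and takes k as a 0-based rank.

import random

def quickselect(a, k):
    # returns the k-th smallest element of list a (k = 0 gives the minimum);
    # the list is partially reordered in place
    left, right = 0, len(a) - 1
    while True:
        if left == right:
            return a[left]
        pivot_index = random.randint(left, right)
        p = partition(a, left, right, pivot_index)   # partition() from the earlier sketch
        if k == p:
            return a[p]
        elif k < p:
            right = p - 1        # desired element lies in the left part
        else:
            left = p + 1         # desired element lies in the right part

data = [7, 1, 9, 3, 3, 8, 2]
print(quickselect(data, len(data) // 2))   # median of the list: 3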
There are four well known variants of quicksort:
- Balanced quicksort: choose a pivot likely to represent the middle of the values to be sorted, and then follow the regular quicksort algorithm.
- External quicksort: the same as regular quicksort except the pivot is replaced by a buffer. First, read the M/2 first and last elements into the buffer and sort them. Read the next element from the beginning or end to balance writing. If the next element is less than the least of the buffer, write it to available space at the beginning. If greater than the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and put the next element in the buffer. Keep the maximum lower and minimum upper keys written to avoid resorting middle elements that are in order. When done, write the buffer. Recursively sort the smaller partition, and loop to sort the remaining partition. This is a kind of three-way quicksort in which the middle partition (buffer) represents a sorted subarray of elements that are approximately equal to the pivot.
- Three-way radix quicksort (developed by Sedgewick and also known as multikey quicksort): a combination of radix sort and quicksort. Pick an element from the array (the pivot) and consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key). Given we sort using bytes or words of length W bits, the best case is O(KN) and the worst case O(2^K N), or at least O(N²) as for standard quicksort, given that for unique keys N < 2^K, and K is a hidden constant in all standard comparison sort algorithms including quicksort. This is a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot.
- Quick radix sort (also developed by Powers as an o(K) parallel PRAM algorithm): again a combination of radix sort and quicksort, but the quicksort left/right partition decision is made on successive bits of the key, and is thus O(KN) for N K-bit keys. Note that all comparison sort algorithms effectively assume an ideal K of O(log N): if k is smaller we can sort in O(N) using a hash table or integer sorting, and if K >> log N but elements are unique within O(log N) bits, the remaining bits will not be looked at by either quicksort or quick radix sort; otherwise all comparison sorting algorithms will also have the same overhead of looking through O(K) relatively useless bits, but quick radix sort will avoid the worst case O(N²) behaviours of standard quicksort and quick radix sort, and will be faster even in the best case of those comparison algorithms under these conditions of uniqueprefix(K) >> log N. See Powers for further discussion of the hidden overheads in comparison, radix and parallel sorting.

Comparison with other sorting algorithms

Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order. An often desirable property of a sorting algorithm is stability - that is, the order of elements that compare equal is not changed, allowing controlling the order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain for in situ (or in place) quicksort (that uses only constant additional space for pointers and buffers, and O(log N) additional space for the management of explicit or implicit recursion). For variant quicksorts involving extra memory due to representations using pointers (e.g. lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex, or disk-bound, data structures tend to increase time cost, in general making increasing use of virtual memory or disk.

The most direct competitor of quicksort is heapsort. Heapsort's worst-case running time is always O(n log n). But, heapsort is assumed to be on average somewhat slower than standard in-place quicksort. This is still debated and in research, with some publications indicating the opposite. Introsort is a variant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's worst-case running time.
If it is known in advance that heapsort is going to be necessary, using it directly will be faster than waiting for introsort to switch to it. Quicksort also competes with mergesort, another recursive sort algorithm but with the benefit of worst-case O(n log n) running time. Mergesort is a stable sort, unlike standard in-place quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network attached storage. Like mergesort, quicksort can be implemented as an in-place stable sort, but this is seldom done. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with in-place partitioning and tail recursion uses only O(log n) space. (Note that when operating on linked lists, mergesort only requires a small, constant amount of auxiliary storage.) Bucket sort with two buckets is very similar to quicksort; the pivot in this case is effectively the value in the middle of the value range, which does well on average for uniformly distributed inputs. See also - Steven S. Skiena (27 April 2011). The Algorithm Design Manual. Springer. p. 129. ISBN 978-1-84800-069-8. Retrieved 27 November 2012. - "Data structures and algorithm: Quicksort". Auckland University. - Shustek, L. (2009). "Interview: An interview with C.A.R. Hoare". Comm. ACM 52 (3): 38–41. doi:10.1145/1467247.1467261. More than one of - Sedgewick, Robert (1 September 1998). Algorithms In C: Fundamentals, Data Structures, Sorting, Searching, Parts 1-4 (3 ed.). Pearson Education. ISBN 978-81-317-1291-7. Retrieved 27 November 2012. - Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM 21 (10): 847–857. doi:10.1145/359619.359631. - qsort.c in GNU libc: , - Miller, Russ; Boxer, Laurence (2000). Algorithms sequential & parallel: a unified approach. Prentice Hall. ISBN 978-0-13-086373-7. Retrieved 27 November 2012. - David M. W. Powers, Parallelized Quicksort and Radixsort with Optimal Speedup, Proceedings of International Conference on Parallel Computing Technologies. Novosibirsk. 1991. - McIlroy, M. D. (1999). "A killer adversary for quicksort". Software: Practice and Experience 29 (4): 341–237. doi:10.1002/(SICI)1097-024X(19990410)29:4<341::AID-SPE237>3.3.CO;2-I. - David M. W. Powers, Parallel Unification: Practical Complexity, Australasian Computer Architecture Workshop, Flinders University, January 1995 - Hsieh, Paul (2004). "Sorting revisited.". www.azillionmonkeys.com. Retrieved 26 April 2010. - MacKay, David (1 December 2005). "Heapsort, Quicksort, and Entropy". users.aims.ac.za/~mackay. Retrieved 26 April 2010. - A Java implementation of in-place stable quicksort - Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM 21 (10): 847–857. doi:10.1145/359619.359631. - Dean, B. C. (2006). "A simple expected running time analysis for randomized "divide and conquer" algorithms". Discrete Applied Mathematics 154: 1–5. doi:10.1016/j.dam.2005.07.005. - Hoare, C. A. R. (1961). "Algorithm 63: Partition". Comm. ACM 4 (7): 321. doi:10.1145/366622.366642. - Hoare, C. A. R. (1961). "Algorithm 64: Quicksort". Comm. ACM 4 (7): 321. doi:10.1145/366622.366644. - Hoare, C. A. R. (1961). "Algorithm 65: Find". Comm. ACM 4 (7): 321–322. doi:10.1145/366622.366647. - Hoare, C. A. R. (1962). "Quicksort". Comput. J. 
5 (1): 10–16. doi:10.1093/comjnl/5.1.10. (Reprinted in Hoare and Jones: Essays in computing science, 1989.) - Musser, D. R. (1997). "Introspective Sorting and Selection Algorithms". Software: Practice and Experience 27 (8): 983–993. doi:10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#. - Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Pages 113–122 of section 5.2.2: Sorting by Exchanging. - Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 7: Quicksort, pp. 145–164. - A. LaMarca and R. E. Ladner. "The Influence of Caches on the Performance of Sorting." Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997. pp. 370–379. - Faron Moller. Analysis of Quicksort. CS 332: Designing Algorithms. Department of Computer Science, Swansea University. - Martínez, C.; Roura, S. (2001). "Optimal Sampling Strategies in Quicksort and Quickselect". SIAM J. Comput. 31 (3): 683–705. doi:10.1137/S0097539700382108. - Bentley, J. L.; McIlroy, M. D. (1993). "Engineering a sort function". Software: Practice and Experience 23 (11): 1249–1265. doi:10.1002/spe.4380231105. |The Wikibook Algorithm implementation has a page on the topic of: Quicksort| - Animated Sorting Algorithms: Quick Sort – graphical demonstration and discussion of quick sort - Animated Sorting Algorithms: 3-Way Partition Quick Sort – graphical demonstration and discussion of 3-way partition quick sort - Interactive Tutorial for Quicksort - Quicksort applet with "level-order" recursive calls to help improve algorithm analysis - Open Data Structures - Section 11.1.2 - Quicksort - Multidimensional quicksort in Java - Literate implementations of Quicksort in various languages on LiteratePrograms - A colored graphical Java applet which allows experimentation with initial state and shows statistics
http://en.wikipedia.org/wiki/Quicksort
13
19
|Technology is essential in teaching and learning mathematics; it influences the mathematics that is taught and enhances students' learning.| Electronic technologiescalculators and computersare essential tools for teaching, learning, and doing mathematics. They furnish visual images of mathematical ideas, they facilitate organizing and analyzing data, and they compute efficiently and accurately. They can support investigation by students in every area of mathematics, including geometry, statistics, algebra, measurement, and number. When technological tools are available, students can focus on decision making, reflection, reasoning, and problem Students can learn more mathematics more deeply with the appropriate use of technology (Dunham and Dick 1994; Sheets 1993; Boers-van Oosterum 1990; Rojano 1996; Groves 1994). Technology should not be used as a replacement for basic understandings and intuitions; rather, it can and should be used to foster those understandings and intuitions. In mathematics-instruction programs, technology should be used widely and responsibly, with the goal of enriching students' learning of mathematics. The existence, versatility, and power of technology make it possible and necessary to reexamine what mathematics students should learn as well as how they can best learn it. In the mathematics classrooms envisioned in Principles and Standards, every student has access to technology to facilitate his or her mathematics learning under the guidance of a skillful teacher. Technology can help students learn mathematics. For example, with calculators and computers students can examine more examples or representational forms than are feasible by hand, so they can make and explore conjectures easily. The graphic power of technological tools affords access to visual models that are powerful but that many students are unable or unwilling to generate independently. The computational capacity of technological tools extends the range of problems accessible to students and also enables them to execute routine procedures quickly and accurately, thus allowing more time for conceptualizing and modeling. Students' engagement with, and ownership of, abstract mathematical ideas can be fostered through technology. Technology enriches the range and quality of investigations by providing a means of viewing mathematical ideas from multiple perspectives. Students' learning is assisted by feedback, which technology can supply: drag a node in a Dynamic Geometry® environment, and the shape on the screen changes; change the defining rules for a spreadsheet, and watch as dependent values are modified. Technology also provides a focus as students discuss with one another and with their teacher the objects on the screen and the effects of the various dynamic transformations that technology allows. Technology offers teachers options for adapting instruction to special student needs. Students who are easily distracted may focus more intently on computer tasks, and those who have organizational difficulties may benefit from the constraints imposed by a computer environment. Students who have trouble with basic procedures can develop and demonstrate other mathematical understandings, which in turn can eventually help them learn the procedures. The possibilities for engaging students with physical challenges in mathematics are dramatically increased with special technologies. The effective use of technology in the mathematics classroom depends on the teacher. Technology is not a panacea. 
As with any teaching tool, it can be used well or poorly. Teachers should use technology to » enhance their students' learning opportunities by selecting or creating mathematical tasks that take advantage of what technology can do efficiently and wellgraphing, visualizing, and computing. For example, teachers can use simulations to give students experience with problem situations that are difficult to create without technology, or they can use data and resources from the Internet and the World Wide Web to design student tasks. Spreadsheets, dynamic geometry software, and computer microworlds are also useful tools for posing worthwhile problems. Technology does not replace the mathematics teacher. When students are using technological tools, they often spend time working in ways that appear somewhat independent of the teacher, but this impression is misleading. The teacher plays several important roles in a technology-rich classroom, making decisions that affect students' learning in important ways. Initially, the teacher must decide if, when, and how technology will be used. As students use calculators or computers in the classroom, the teacher has an opportunity to observe the students and to focus on their thinking. As students work with technology, they may show ways of thinking about mathematics that are otherwise often difficult to observe. Thus, technology aids in assessment, allowing teachers to examine the processes used by students in their mathematical investigations as well as the results, thus enriching the information available for teachers to use in making Technology not only influences how mathematics is taught and learned but also affects what is taught and when a topic appears in the curriculum. With technology at hand, young children can explore and solve problems involving large numbers, or they can investigate characteristics of shapes using dynamic geometry software. Elementary school students can organize and analyze large sets of data. Middle-grades students can study linear relationships and the ideas of slope and uniform change with computer representations and by performing physical experiments with calculator-based-laboratory systems. High school students can use simulations to study sample distributions, and they can work with computer algebra systems that efficiently perform most of the symbolic manipulation that was the focus of traditional high school mathematics programs. The study of algebra need not be limited to simple situations in which symbolic manipulation is relatively straightforward. Using technological tools, students can reason about more-general issues, such as parameter changes, and they can model and solve complex problems that were heretofore inaccessible to them. Technology also blurs some of the artificial separations among topics in algebra, geometry, and data analysis by allowing students to use ideas from one area of mathematics to better understand another area of mathematics. Technology can help teachers connect the development of skills and procedures to the more general development of mathematical understanding. As some skills that were once considered essential are rendered less necessary by technological tools, students can be asked to work at higher levels of generalization or abstraction. Work with virtual manipulatives (computer simulations of physical manipulatives) or with Logo can allow young children to extend physical experience and » to develop an initial understanding of sophisticated ideas like the use of algorithms. 
Dynamic geometry software can allow experimentation with families of geometric objects, with an explicit focus on geometric transformations. Similarly, graphing utilities facilitate the exploration of characteristics of classes of functions. Because of technology, many topics in discrete mathematics take on new importance in the contemporary mathematics classroom; the boundaries of the mathematical landscape are being transformed.
http://www.fayar.net/east/teacher.web/Math/Standards/document/chapter2/techn.htm
13
17
- slide 1 of 3

What are Calculus limit problems?

Solving or evaluating functions in math can be done using direct and synthetic substitution. Students should have experience in evaluating functions which are:

1. constant functions such as: f(x) = c where c is any real number, e.g. f(x) = 7 or y = 7

2. polynomial functions such as: f(x) = 3x⁵ – x² + 6x – 2

Sometimes evaluating a function may lead to an undefined form such as 1/0 or ∞/∞. The concept of limits is to evaluate a function as x approaches a value but never takes on that value. To solve a limit, see the 4 examples of limit problems involving direct substitution. When graphing a solution of an equation in calculus, such as example 1, the graph will pass through the y-value 4/3 when x is the value 1. The line will be a straight line and the graph is said to be continuous at x = 1.

- slide 2 of 3

Limits That Need Simplification

Some limits will need simplification before they can be solved: if direct substitution yields 0/0, undefined, you have to factor and reduce.

lim_{x→2} (x² − 4)/(x − 2)

The limit exists even though the function is undefined at x = 2. The limit for this example is 4. To solve an undefined limit, see examples 5 and 6 of limits that need simplification. If direct substitution yields ∞/∞, undefined, then divide by the highest power.

lim_{x→∞} (3x + 1)/(5x + 6)

The limit exists even though the function is undefined in that form. The limit for this example is 3/5. To solve an undefined limit, see examples 7 and 8 of limits that need simplification.

- slide 3 of 3

Limits That Do Not Exist

Some limits are of a form called a limit that does not exist. This means that the function f(x) does not approach a single value, a, as x → a, and we say that the limit of f(x) as x → a does not exist. There will be a break or a discontinuity in the graph at the point where x = a. The function is discontinuous at x = a. The following limit does not exist as x approaches 2.

lim_{x→2} (x² + 4)/(x − 2)

This kind of problem gives a nonzero number divided by 0 under direct substitution (here 8/0), so the function values grow without bound near x = a and the limit does not exist. In the related 0/0 cases, there is usually a factor in the numerator and denominator using the value a that x is approaching. To solve a limit that has the form 0/0, see example 9 of limits that do not exist. There are also special cases of limits to solve involving the difference of radicals in the numerator and denominator. Factoring polynomials such as the difference of squares or difference of cubes helps to simplify these functions into solvable limits. I know you don't want to hear this, but practice makes perfect! If you're still having a hard time getting it, solve several different examples and practice identifying all the forms.

How to Solve Calculus Limit Problems

This series shows how to solve several types of Calculus limit problems. Special cases of limits are solved and the related graphs are described. Solving calculus limit and derivative problems is made understandable in this guide. See examples of how to find the derivative using derivative rules.
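As a concrete illustration of the two simplification techniques described above (this worked example is added here and is not part of the original article), the first undefined limit can be resolved by factoring and cancelling, written in LaTeX notation as

\lim_{x \to 2} \frac{x^2 - 4}{x - 2}
  = \lim_{x \to 2} \frac{(x - 2)(x + 2)}{x - 2}
  = \lim_{x \to 2} (x + 2) = 4,

and the second by dividing numerator and denominator by the highest power of x:

\lim_{x \to \infty} \frac{3x + 1}{5x + 6}
  = \lim_{x \to \infty} \frac{3 + 1/x}{5 + 6/x}
  = \frac{3 + 0}{5 + 0} = \frac{3}{5}.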
http://www.brighthubeducation.com/homework-math-help/64015-how-to-solve-calculus-limit-problems/
13
10
An international team of astronomers has used nearly three years of high precision data from NASA's Kepler spacecraft to make the first observations of a planet outside our solar system that's smaller than Mercury, the smallest planet orbiting our sun. The planet is about the size of the Earth's moon. It is one of three planets orbiting a star designated Kepler-37 in the Cygnus-Lyra region of the Milky Way. The findings were published online on Feb. 20 by the journal Nature. The lead authors are Thomas Barclay of the NASA Ames Research Center in California and the Bay Area Environmental Research Institute and Jason Rowe of NASA Ames and the SETI Institute in California.

Steve Kawaler, an Iowa State University professor of physics and astronomy, was part of a team of researchers who studied the oscillations of Kepler-37 to determine its size. "That's basically listening to the star by measuring sound waves," Kawaler said. "The bigger the star, the lower the frequency, or 'pitch' of its song." The team determined Kepler-37's mass is about 80 percent the mass of our sun. That's the lowest mass star astronomers have been able to measure using oscillation data for an ordinary star. Those measurements also allowed the main research team to more accurately measure the three planets orbiting Kepler-37, including the tiny Kepler-37b.

"Owing to its extremely small size, similar to that of the Earth's moon, and highly irradiated surface, Kepler-37b is very likely a rocky planet with no atmosphere or water, similar to Mercury," the astronomers wrote in a summary of their findings. "The detection of such a small planet shows for the first time that stellar systems host planets much smaller as well as much larger than anything we see in our own Solar System." Kawaler said the discovery is exciting because of what it says about the Kepler Mission's capabilities to discover new planetary systems around other stars.
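For readers curious how "listening" to a star yields its size, a brief technical aside (added here, not part of the news release): asteroseismology relies on approximate scaling relations between a star's oscillation frequencies and its bulk properties, commonly written as

\Delta\nu \propto \sqrt{\frac{M}{R^3}}, \qquad
\nu_{\max} \propto \frac{M}{R^2 \sqrt{T_{\mathrm{eff}}}},

where Δν is the large separation between overtone frequencies, ν_max the frequency of maximum oscillation power, M and R the stellar mass and radius, and T_eff the effective temperature. Measuring Δν and ν_max from the light curve, together with T_eff from spectroscopy, therefore constrains M and R, which in turn sets the scale for the radii of any transiting planets.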
http://thedragonstales.blogspot.com/2013/02/kelper-finds-luna-sized-exoplanet.html
13
10
The human skeletal system is comprised of individual bones and cartilage that receive a supply of blood and are held together by fibrous connective tissue, ligaments, and tendons. The three main functions of the skeletal system are protection, motion, and support. The system protects the body by enclosing the vital organs, it permits locomotion by responding at certain joints to the contractile activities of skeletal muscles, and it supports the body by serving as a framework to which tendons and fascia are attached, enabling skeletal muscles, viscera, and skin to obtain a holdfast. It also serves as a depot for calcium, which is vital to proper functioning of cell membranes, and for phosphorus, which is needed in intermediary metabolism. In addition, the skeletal system is important because bone produces blood cells. At birth the human body has about 275 bones, but as the body develops many of these bones fuse together. In the adult human the skeleton consists of 206 name-bearing bones and a variable number of largely unnamed sesamoid bones. Sesamoid bones develop in the capsules of certain joints or in tendons, which hold muscle to bone, where they provide special support or reduce friction. The best-known sesamoids are the patella (kneecap) and pisiform (wrist bone). Classification Of Bones Bones may be classified as long, short, flat, irregular, or sesamoid. The long bones are those of the limbs, except for the wrist, ankle, and kneecap. They consist of a central shaft (diaphysis) between two ends (epiphyses) that form joints with one or more other bones. The short bones are in the wrists and ankles. They consist of a spongy core within a shell of compact bone. The flat bones include the ribs and many skull bones. They consist of two plates of compact bone with a spongy layer, called a diploe, between. All the remaining named bones are irregular bones, except the patella, pisiform, and some bones in the feet. Divisions Of The Skeletal System The skeleton is comprised of an axial and an appendicular division. The axial division consists of the skull, spinal column, and thoracic cage, whereas the appendicular division includes an upper extremity (shoulder girdle and its paired appendages) and a lower extremity (pelvic girdle and its paired appendages). The skull consists of the cranium, which houses the brain, and the face. The cranium is made up of eight cranial bones, including one frontal, two parietal, two temporal, one ethmoid, one occipital, and one sphenoid. The frontal bone is the forehead. Its supraorbital margins, which can be felt through the skin of the eyebrows, provide a certain degree of protection to the eyes. Present within this bone, directly over the eyes, is the frontal sinus, which opens directly into the nasal cavity. The paired parietal bones comprise the walls of the cranium, whereas the paired temporals form the sides and part of the base. Each temporal bone encloses an ear and forms a movable joint with the lower jaw. In addition to enclosing the cochlea for hearing, the temporal bone encloses the labyrinth, which contains special sensory receptor cells (proprioceptors) for body balance. The ethmoid bone lies mainly within the skull cavity, between the cranial and nasal cavities. It contains a sievelike plate (a cribriform plate) through which nerve fibers from olfactory receptors pass en route to the brain. A front view of the ethmoid bone reveals a perpendicular septum (partition), which separates the nostrils from one another. 
Present on each side of the ethmoid bone are ethmoid sinuses that drain into the nasal cavity. The occipital bone forms the floor and lower rear wall of the cranium. Its outstanding feature is the foramen magnum, a large hole through which the spinal cord passes into the brain. Adjacent to the foramen magnum are two protuberances, called occipital condyles, which articulate with the vertebral column and allow the head to move up and down, as in nodding. The sphenoid (sphinxlike) bone lies immediately in front of (anterior to) the temporal bones. A septum down the midline divides the sphenoid sinuses that drain into the nasal cavity. A large depression, called a sella turcica, is within what might be viewed as the ceiling of the sphenoid bone. This provides protection to the pituitary gland, which is suspended immediately above it. Three paired bones, called the auditory ossicles, are considered part of the cranium of the skull. Present within each middle ear within the temporal bone are a malleus, incus, and stapes. These ear bones transmit sound waves into the inner ear. The Hyoid Bone Another bone that may be grouped with the skull bones is the hyoid (U-shaped) bone. Although this bone is located in the neck, it is suspended by ligaments from the styloid (spear-shaped) processes that extend downward from the temporal bones. It functions as a point of attachment for most of the anterior muscles that lie in the neck. The face contains 14 facial bones. Seven are superficial and can be felt with the fingers, whereas the other seven are located deeply. The deep facial bones include two lacrimals, two palatines, one vomer, and two inferior conchae. The lacrimal bones form part of the median wall of each orbit (eyeball cavity). The palatine bones form part of the lateral walls and floor of the nasal cavity, part of the roof of the mouth, the floor of the orbits, and the rear (posterior) portion of the hard palate. The vomer, which is flat, forms the rear (posterior) and basal portion of the nasal septum. The superficial facial bones include two nasal, two zygomatic, two maxillary (upper jaw), and one mandibular (lower jaw). The nasal bones lie side by side, uppermost along the median line of the nose. They support the flexible cartilaginous inferior region of the nose. The zygomatic bones form the prominences of the cheek and part of the lateral and inferior walls of the orbits. The maxillary bones fuse with one another medially to form the upper jaw, which contains alveoli (sockets) for embedding the teeth. The maxillary bones constitute the bulk of the hard palate in the roof of the mouth. In addition, they form part of the walls of the orbital and nasal cavities. In the course of development, should they fail to fuse medially within the mouth, a deformity known as a cleft (split) palate appears. Like the frontal, ethmoid, and sphenoid bones, the maxillary bones contain sinuses that drain into the nasal cavity. The Spinal Column The spinal column of most adults consists of 26 bony segments (vertebrae): 7 cervical (neck), 12 thoracic (chest), 5 lumbar (lower back), 1 sacral, and 1 coccygeal (tail) bone with multiple segments. Each vertebra, except the first and last, has a so-called body. These bodies are aligned with one another and are separated from each other by an intervertebral disk. The vertebral bodies function to bear weight, as in standing or sitting. 
The disks, made of a fibrous elastic cartilage, cushion the vertebrae, lubricate the joints between vertebrae, and act as shock absorbers. A pair of stalks, called pedicels, arises laterally from each vertebral body. The pedicels of one vertebra lie adjacent to those of another vertebra. They are arranged to form an opening through which the spinal nerves emerge. In addition to a vertebral body and pedicels, all vertebrae except the coccyx possess a vertebral foramen (opening) through which the spinal cord passes, and various processes that, together with their attached ligament and muscle, limit the kinds of movements the spine can make. The Thoracic Cage The thoracic cage is composed of 12 pairs of ribs, 12 thoracic vertebrae, and the sternum (breastbone). The first 7 pairs of ribs are called true ribs because they form a direct union with the sternum. Ribs 8 through 12 are called false ribs because they are not attached to the sternum. Instead they are united with each other and with rib 7. Ribs 11 and 12 are called floating ribs because they have no anterior attachment. The Shoulder Girdle The shoulder girdle consists of two scapulae (shoulder blades) and two clavicles (collarbones). The entire apparatus, together with its musculature, permits raising the hands skyward and other functions that most animals except primates are incapable of performing. The Upper Paired Appendages Each of the upper paired appendages consists of an arm, forearm, and hand. The arm contains a single bone, called a humerus. The head of the humerus forms a ball-and-socket joint with the scapula by way of a large depression in the scapula known as the glenoid cavity, or fossa. The forearm contains two bones, a radius and an ulna. The hand contains 8 carpals, 5 metacarpals, and 14 phalanges. The carpals constitute the wrist. The metacarpals, whose distal ends are known as knuckles, form the palm. Distal to the metacarpals are the phalanges, two for the thumb and three for each finger. The Pelvic Girdle The pelvic girdle supports the trunk on the thighs while standing, permits sitting, and provides protection to the urinary bladder, ovaries, oviducts and uterus, and rectum. It is formed from two hipbones, a sacrum, which lies between the hipbones, and a coccyx, or tailbone. Each hipbone, or innominate bone, consists of an ilium, ischium, and pubis, which are ossified in adults. The pubis forms the front (anterior) region of the hipbone, where the two hipbones unite by way of a cartilaginous bridge known as a pubic symphysis. Where the three components of each hipbone unite, a socket is formed. Known as an acetabulum, this socket receives the head of the femur (thigh). The Lower Paired Appendages Each lower paired appendage is composed of a thigh, leg, and foot. The thigh is built upon the femur, a large heavy bone that connects the hipbone to the tibia of the leg. The joint between the femur and tibia is provided with some protection by the patella (kneecap), which is embedded in the tendon of the large thigh muscles (quadriceps). In addition to a tibia, the leg contains a fibula. Whereas the tibia (shinbone) serves to bear weight, the fibula facilitates certain movements at the ankle, which is the joint between the tibia, the fibula, and one of the tarsal bones, the talus. The foot contains 7 tarsal bones, 5 metatarsals, and 14 phalanges. Together with surrounding tissues, these bones form a longitudinal arch from heel to toes and a metatarsal arch from side to side. 
Leukocytes, erythrocytes, and blood platelets are all ultimately derived from unspecialized cells in the bone marrow. Lymphocytes produced in the bone marrow seed the thymus, spleen, and lymph nodes, producing self-replacing lymphocyte colonies in these organs. It could be specifically said that the bone marrow manufactures our body's immune system, or at least the cells that are required for it.

The epiphyseal plate, or growth plate, is a band of cartilage located near the distal end of a long bone. Cartilage cells within the epiphyseal plate divide and form new cartilage cells. These cells are replaced by bone at the bottom of the epiphyseal plate, allowing the plate to continue to grow at the distal end. Growth continues until the bone reaches its full length, at which time bone has replaced all the cartilage, including the cartilage in the epiphyseal plate. After this, no further longitudinal growth is possible in the bone.

Ossification is the process by which bones develop. There are two types of ossification - the conversion of cartilage into bone and the formation of membrane layers. Cartilage is a tough, flexible connective tissue. During the first month of human fetal development the entire skeleton is made up of cartilage. Cartilage is composed of elastin, collagen and organic material. During the second month of fetal development osteocytes begin to develop in this cartilage. The osteocytes release minerals that lodge in empty spaces between the cartilage cells. Eventually most of the cartilage is replaced by bone. However, some cartilage remains between bones, at the end of the nose, in the external ears, and on the inside of the trachea. It is the cartilage that makes these areas flexible. A few bones, such as those in the clavicle and in some parts of the skull, develop directly into hard, nonmembranous bone from membranous connective tissue without a distinctive cartilaginous phase. The membranes of the tissue ossify and become flat plates of bone. For example, the membranes in the head are replaced by the long plates that form the skull. The plates have suture lines between them. Bones are composed of hydroxyapatite, collagen and inorganic material. Bones continue to develop after birth. The removal of certain substances (stated in the paragraph) from a bone may cause it to become flexible, as seen in the movie during class. If these materials were theoretically added back to the bone, it would become hard again.

The bones are all smoothly jointed and firmly held together by flexible ligaments that keep the bones aligned during movement. The ends of the bones in each typical joint are padded with cartilage, covered with a thin sheath called the synovial membrane, and oiled with a lubricating, or synovial, fluid so that they can be used constantly and yet be protected against wear and tear. The degree of movement possible in a joint varies. Joints, therefore, are classed as immovable, yielding, or having free motion. For example, the joints of the cranium are immovable; the vertebrae are yielding; and the shoulder joint has free motion. The muscles in general are attached to the bones across the joints so that movements are brought about by the shortening, or contraction, of opposing pairs of muscles. Although arthritis is defined as "joint inflammation," the term is used loosely to include the approximately 100 inflammatory and noninflammatory diseases that affect the joints, connective tissue, and other supporting tissue.
These diseases are also called rheumatic diseases or connective tissue diseases. The medical specialty concerned with these diseases is rheumatology. Causes of these disorders include infections, apparent abnormalities of the immune system, effects of injuries and aging, and in some cases hereditary predisposition. About one of seven Americans--40 million people--have some form of arthritis. In 40 percent of the cases, the disease is severe enough to require medical care. The personal and economic burden is enormous, with annual cost in lost wages and medical expenses of more than $15 billion. Of those requiring medical care, half of arthritis patients have osteoarthritis, one-quarter have rheumatoid arthritis, and the remainder have other related diseases. Collagen disease is a term applied to a group of diseases that involve abnormalities of the immune system and inflammation of connective tissue and blood vessels. Collagen fibers are a major component of connective tissue. The name rheumatic disease is often used because the most common of all such diseases, rheumatoid arthritis, shows all of the characteristics of this group of diseases. The blood plasma of many patients with collagen disease shows significant levels of autoantibodies (antibodies against the body's own proteins or cells); the resulting antigen-antibody reaction leads to inflammation in many body tissues. Collagen diseases include rheumatoid ARTHRITIS, RHEUMATIC FEVER, LUPUS ERYTHEMATOSUS, dermatomyositis, polyarteritis nodosa, and scleroderma. Both rheumatoid arthritis and rheumatic fever are characterized by widespread joint pain; rheumatic fever may also result in permanent heart damage. Lupus erythematosus is an autoimmune disease that affects the brain, joints, kidneys, skin, and membranes lining body cavities. Dermatomyositis is most commonly seen as a rash accompanied by muscular pain. In polyarteritis nodosa the walls of arteries are damaged, and in scleroderma thick layers of collagen fibers are deposited; both diseases result in impaired organ function. A dislocation is any displacement of a body structure from its normal position, although it is usually a misalignment of bones at a joint. Bones are normally held together in proper alignment by tough fibrous bands called LIGAMENTS, which are attached to each bone, and by a fibrous sac called the articular capsule, also connected to the bone. These connecting structures are relatively inelastic, but they do allow joint movement within limits. A dislocation is usually caused by a violent movement at the joint that exceeds normal limits, tearing the ligaments and the articular capsule and throwing the bone or bones out of place. Dislocations most commonly occur at the shoulders, fingers, jaw, elbows, knees, and hips as a result of trauma, although they sometimes occur as a result of diseases that affect the joints. In a bone dislocation the joint is immobile, and the affected limb may be locked in an abnormal position; FRACTURES may also be present. FIRST AID treatment consists of immobilizing the joint with a splint, sling, or bandage. Rheumatism is a nonspecific term for several diseases that cause inflammation or degeneration of joints, muscles, ligaments, tendons, and bursae. The term includes rheumatoid ARTHRITIS and other degenerative diseases of the joints; BURSITIS; fibrositis; gout; lumbago; myositis; rheumatic fever; sciatica; and spondylitis. 
Palindromic rheumatism is a disease that causes frequent and irregular attacks of joint pain, especially in the fingers, but leaves no permanent damage to the joints. Psychogenic rheumatism is common in women between the ages of 40 and 70, although men also contract this disease. Symptoms include complaints of pain in various parts of the musculoskeletal system that cannot be substantiated medically. This condition can be alleviated by psychotherapy. One of the common forms of rheumatism is rheumatoid arthritis, a disease of unknown cause that affects 1 to 3 percent of the population. This disease causes joint deformities and impaired mobility as a consequence of chronic inflammation and thickening of the synovial membranes, which surround joints. As the disease progresses it produces ulceration of cartilage in the joints. Rheumatoid arthritis usually occurs between ages 35 and 40 but can occur at any age. It characteristically follows a course of spontaneous remissions and exacerbations, and in about 10 to 20 percent of patients the remission is permanent. Osteoporosis is a condition of bone characterized by excessive porosity, or bone tissue reduction. Absorption of old bone exceeds deposition of new bone; the result is an enlargement of spaces normally present and a thinning of the bone from the inside. No change occurs in the outside dimensions, except in compression of weight-bearing bones. Senile and postmenopausal, or primary, osteoporosis, the most common type, is found only in elderly persons and in women who have passed through menopause. It is characterized by compression of the vertebrae with resultant back pain and loss of height, and by susceptibility to fractures. Disuse, or secondary, osteoporosis involves bones that have been immobilized by paralytic disease or traumatic fractures or have been subjected to prolonged weightlessness during spaceflight. Other osteoporoses are associated with endocrine diseases and with nutritional disorders such as anorexia nervosa. Exercise programs and calcium supplements are used in prevention and treatment, and slow-release fluorine tablets have been developed for spinal osteoporosis. Older women may receive estrogen therapy but with increased risk of uterine cancer.
http://www.angelfire.com/ut/biocheat/SkeletalSystem.html
13
11
Source: Produced for Teachers' Domain Astronomer Edwin Hubble determined two things that shook the foundations of astronomy: billions of galaxies exist outside of our own, each of which contains billions of stars, and the universe is actually expanding. This adapted video segment uses footage from NOVA and NASA to show how Hubble's findings laid the foundation for the Big Bang theory. Before 1919, most scientists held that the universe was only as large as the Milky Way and that it was a constant size. Then, in 1919, the American astronomer Edwin Hubble — aided by a technologically advanced 100-inch telescope — was able to discern individual stars within what he believed to be a nebula, a fuzzy cloud of light composed of cosmic gases. After calculating that the distance to these stars from Earth was much further than the known reaches of the Milky Way, he concluded that the stars were part of a galaxy separate from our own. The idea that our galaxy was just one of many galaxies changed forever the way we view our place in the universe. To measure distance, Hubble used Cepheid variable stars as reference points. Cepheids are young, bright, massive stars that pulsate, regularly changing their luminosity. The time it takes for one complete pulsation is called its period. Several years earlier, Henrietta Leavitt, another American astronomer, had established that the brighter a Cepheid appeared as seen from Earth, the longer its period was, and that by measuring the period, one could determine its luminosity, or absolute brightness. Drawing on Leavitt's work, Hubble compared the apparent brightness of a Cepheid with its luminosity to determine its distance from Earth. Hubble's method worked because all Cepheids with the same periods have about the same luminosity, and if a star's luminosity is known, its distance can be determined by its apparent brightness. The spectra of light emitted by celestial bodies shift depending upon whether the bodies are moving toward Earth or away from it. A change in the wavelength and frequency of light is perceived as a change in color. For galaxies moving toward Earth, the shift is toward the blue end of the spectrum; for galaxies moving away, the light they emit appears redder. By observing redshifts in the light wavelengths emitted by the galaxies, Hubble saw that galaxies were moving away from each other at a rate constant to the distance between them. He determined that the greater the distance between a galaxy and Earth, the faster that galaxy was moving away from us — a phenomenon now known as Hubble's law. These findings signaled that the universe is expanding and laid the foundations for the Big Bang theory, which states that the universe exploded into existence from a single point or a very small region in time and space and has been expanding ever since. To honor Hubble for his contributions to the field of astronomy, in 1989 NASA named the Hubble Space Telescope after him. Images seen from this telescope are incredibly clear because the telescope views space from outside Earth's atmosphere. There, it is free of the distortion and filtering of light that happens when the light passes through the atmosphere, which it must when space is viewed from Earth. These images provide scientists with information critical to determining with greater accuracy not only the age of the universe, but also its size and expansion rate. 
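The two relationships at the heart of this story can be written compactly; the notation below is the standard one and is added here for reference rather than quoted from the text. The fractional redshift of a galaxy and Hubble's law relating recession velocity to distance are

z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}},
\qquad v = H_0 \, d,

where v is the recession velocity, d the distance to the galaxy (obtained, for nearby galaxies, from Cepheid variables), and H_0 the Hubble constant. For small redshifts v ≈ cz, so each galaxy with a measured redshift and a Cepheid-based distance supplies one point on the straight line whose slope is H_0.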
Academic standards correlations on Teachers' Domain use the Achievement Standards Network (ASN) database of state and national standards, provided to NSDL projects courtesy of JES & Co. We assign reference terms to each statement within a standards document and to each media resource, and correlations are based upon matches of these terms for a given grade band. If a particular standards document of interest to you is not displayed yet, it most likely has not yet been processed by ASN or by Teachers' Domain. We will be adding social studies and arts correlations over the coming year, and also will be increasing the specificity of alignment.
http://www.teachersdomain.org/resource/phy03.sci.phys.fund.hubble2/
13
15
Simulation Used: Gas Properties from the PhET at the University of Colorado.

Goal of the Lab Experiment: To verify Boyle's and Charles's laws for ideal gases.

In a gas where the molecules interact with each other only through collisions, the temperature T, the pressure P, and the volume V satisfy the Ideal Gas Law:

PV = nRT

Here T is in kelvin, P in pascal, V in cubic meters, and the universal gas constant R equals 8.314 J/(mol·K). The number of moles n is defined as

n = N / N_A = V / Vmol,

where N is the number of molecules, N_A ≈ 6.022 × 10²³ mol⁻¹ is Avogadro's number, and Vmol is the volume of 1 mol of substance, also called the molar volume. At Standard Temperature and Pressure (STP) conditions, that is T = 273 K and P = 1 Atm, the molar volume can be directly calculated through the Ideal Gas Law as

Vmol = RT / P ≈ 0.0224 m³ = 22.4 L.

When the temperature is kept constant, the process is called isothermal and the product of the pressure and the volume remains constant (Boyle's Law):

PV = constant.

If V is kept constant, Charles's Law, in the constant-volume form used in this lab, gives:

P / T = constant.

Click on the "Measurement Tools" button and select "Ruler" and "Species Information". Pump in about 200 heavy molecules. If you pump too many, you can open the container at the top and let some molecules out, but the number does not need to be exactly 200.

Set up STP conditions: Select "Volume" as a constant parameter from the menu on the right. Add or remove heat until the Temperature indicator shows 273 K. Select "Temperature" as a constant parameter from the menu on the right. Drag the left wall of the container until the Pressure indicator shows approximately 1 Atm. (The pressure will fluctuate, but should stay as close to 1 Atm as possible.) Make sure the equilibrium state is reached, that is, T and P remain 273 K and 1 Atm, respectively. Now you have the STP conditions.

Calculate the volume of the container. At STP the volume of one mole of particles is given by Vmol = RT / P. On the other hand, the volume of the gas can be calculated using the number of moles as follows:

V = n · Vmol = (N / N_A) · Vmol.

Determine the dimensions of the container. Click on the "Layer tool" from the menu on the left. Drag it to the utmost top of the container and measure the height of the container. Determine the width of the container. Unclick the "Layer tool".

Activity 1: Verify Boyle's Law

Perform the experiment: Keeping STP conditions, drag the left wall of the container so that the length is L = 9 nm. Wait until the temperature has reached its equilibrium value T = 273 K and measure the pressure, P. Record the value. Repeat for lengths: 9 nm, 8 nm, 7 nm, 6 nm, 5 nm, 4 nm, and 3 nm. In your lab notebook, write down the data in the following table. Use the dimensions of the container to calculate the volume of the gas in m³. Find the slope of the P vs. (1/V) graph. You can use your calculator, spreadsheet, or you can go to this website. If you choose the latter, clear the data and type in your own data. The slope of the line is given by "m" in the box below the graph. What is the physical meaning of the slope of the graph P vs. (1/V)?

Activity 2: Verify Charles's Law

Keep volume constant. Record the current number of particles in the container. Record the current length of the container and calculate the volume of the container. Record your initial temperature and pressure in the table below. By adding or removing heat, change the state of the gas. Record the new equilibrium temperature and pressure in the table below. Repeat the procedure 10 times. Record your data in the following table. Plot the P vs. T diagram. What is the physical meaning of the slope?
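The numbers used above are easy to reproduce. The short Python sketch below is added for illustration only (the constants are standard values, not taken from the lab handout, and the 200-molecule sample size is the one suggested in the procedure): it computes the molar volume at STP from PV = nRT and the slope expected for the Boyle's-law plot of P against 1/V.

R = 8.314            # J/(mol K), universal gas constant
N_A = 6.022e23       # 1/mol, Avogadro's number
T_STP = 273.0        # K
P_STP = 1.013e5      # Pa, approximately 1 Atm

# Molar volume at STP from the ideal gas law: V_mol = R*T/P
V_mol = R * T_STP / P_STP
print("V_mol at STP:", round(V_mol, 4), "m^3  (about 22.4 L)")

# For the ~200 heavy molecules pumped into the simulated container:
N = 200
n = N / N_A                       # number of moles
print("moles of gas:", n)

# Boyle's law at constant T: P = (n*R*T) * (1/V),
# so the slope of the P vs. 1/V graph should equal n*R*T (in joules, i.e. Pa*m^3)
slope = n * R * T_STP
print("expected slope n*R*T:", slope, "J")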
http://www.nvcc.edu/alexandria/science/Physics/lab231hybrid/gas_laws/gas_laws.htm
13
13
the Physics Education Technology Project This lesson plan was developed specifically for use with the PhET simulation "The Moving Man". It is intended to help beginning students differentiate velocity vs. time graphs from position vs. time graphs, and also to promote understanding of multiple frames of reference in analyzing an object's motion. It was created by a high school teacher under the sponsorship of the PhET project. SEE RELATED ITEMS BELOW for a link to "The Moving Man" simulation, which must be running to complete the activity. Please note that this resource requires at least version 1.4, Java WebStart of 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both. 9. The Mathematical World 9B. Symbolic Relationships 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in steps, or do something different from any of these. 9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another. 11. Common Themes 6-8: 11B/M4. Simulations are often useful in modeling events and processes. Common Core State Standards for Mathematics Alignments Use functions to model relationships between quantities. (8) 8.F.5 Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally. High School — Functions (9-12) Interpreting Functions (9-12) F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship.? F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.? Reeves, S. (2008, April 30). PhET Teacher Activities: Moving Man - Velocity vs. Time Graphs. Retrieved June 19, 2013, from Physics Education Technology Project: http://phet.colorado.edu/en/contributions/view/2833 Reeves, Steve. PhET Teacher Activities: Moving Man - Velocity vs. Time Graphs. Boulder: Physics Education Technology Project, April 30, 2008. http://phet.colorado.edu/en/contributions/view/2833 (accessed 19 June 2013). %0 Electronic Source %A Reeves, Steve %D April 30, 2008 %T PhET Teacher Activities: Moving Man - Velocity vs. Time Graphs %I Physics Education Technology Project %V 2013 %N 19 June 2013 %8 April 30, 2008 %9 text/html %U http://phet.colorado.edu/en/contributions/view/2833 Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
http://www.compadre.org/portal/items/detail.cfm?ID=8298
13
14
In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. "A statistically significant difference" simply means there is statistical evidence that there is a difference; it does not mean the difference is necessarily large, important, or significant in the common meaning of the word. The significance level of a test is a traditional frequentist statistical hypothesis testing concept. In simple cases, it is defined as the probability of making a decision to reject the null hypothesis when the null hypothesis is actually true (a decision known as a Type I error, or "false positive determination"). The decision is often made using the p-value: if the p-value is less than the significance level, then the null hypothesis is rejected. The smaller the p-value, the more significant the result is said to be. In more complicated, but practically important cases, the significance level of a test is a probability such that the probability of making a decision to reject the null hypothesis when the null hypothesis is actually true is no more than the stated probability. This allows for those applications where the probability of deciding to reject may be much smaller than the significance level for some sets of assumptions encompassed within the null hypothesis.
Use in practice: The significance level is usually represented by the Greek symbol α (alpha). Popular levels of significance are 5%, 1% and 0.1%. If a test of significance gives a p-value lower than the α-level, the null hypothesis is rejected. Such results are informally referred to as 'statistically significant'. For example, if someone argues that "there's only one chance in a thousand this could have happened by coincidence," a 0.1% level of statistical significance is being implied. The lower the significance level, the stronger the evidence. In some situations it is convenient to express the statistical significance as 1 − α. In general, when interpreting a stated significance, one must be careful to note what, precisely, is being tested statistically. Different α-levels have different advantages and disadvantages. Smaller α-levels give greater confidence in the determination of significance, but run greater risks of failing to reject a false null hypothesis (a Type II error, or "false negative determination"), and so have less statistical power. The selection of an α-level inevitably involves a compromise between significance and power, and consequently between the Type I error and the Type II error. In some fields, for example nuclear and particle physics, it is common to express statistical significance in units of "σ" (sigma), the standard deviation of a Gaussian distribution. A statistical significance of "nσ" can be converted into a value of α via the error function: α = 1 − erf(n/√2). The use of σ is motivated by the ubiquitous emergence of the Gaussian distribution in measurement uncertainties. For example, if a theory predicts a parameter to have a value of, say, 100, and one measures the parameter to be 109 ± 3, then one might report the measurement as a "3σ deviation" from the theoretical prediction. In terms of α, this statement is equivalent to saying that "assuming the theory is true, the likelihood of obtaining the experimental result by coincidence is 0.27%" (since 1 − erf(3/√2) = 0.0027).
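A minimal Python sketch of this σ-to-α conversion, using only the standard library's math.erf and the convention quoted above:

```python
import math

def sigma_to_alpha(n_sigma: float) -> float:
    """Convert an 'n-sigma' significance into an alpha level,
    alpha = 1 - erf(n / sqrt(2)), as in the text above."""
    return 1.0 - math.erf(n_sigma / math.sqrt(2.0))

for n in (1, 2, 3, 5):
    print(f"{n} sigma -> alpha = {sigma_to_alpha(n):.7f}")
# 3 sigma gives ~0.0027, i.e. the 0.27% quoted in the example above.
```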
Fixed significance levels such as those mentioned above may be regarded as useful in exploratory data analyses. However, modern statistical advice is that, where the outcome of a test is essentially the final outcome of an experiment or other study, the p-value should be quoted explicitly. And, importantly, it should be quoted whether or not the p-value is judged to be significant. This is to allow maximum information to be transferred from a summary of the study into meta-analyses. A common misconception is that a statistically significant result is always of practical significance, or demonstrates a large effect in the population. Unfortunately, this problem is commonly encountered in scientific writing. Given a sufficiently large sample, extremely small and non-notable differences can be found to be statistically significant, and statistical significance says nothing about the practical significance of a difference. One of the more common problems in significance testing is the tendency for multiple comparisons to yield spurious significant differences even where the null hypothesis is true. For instance, in a study of twenty comparisons, using an α-level of 5%, one comparison will likely yield a significant result despite the null hypothesis being true. In these cases p-values are adjusted in order to control either the false discovery rate or the familywise error rate. Yet another common pitfall often happens when a researcher writes the ambiguous statement "we found no statistically significant difference," which is then misquoted by others as "they found that there was no difference." Actually, statistics cannot be used to prove that there is exactly zero difference between two populations. Failing to find evidence that there is a difference does not constitute evidence that there is no difference. This principle is sometimes described by the maxim "Absence of evidence is not evidence of absence." According to J. Scott Armstrong, attempts to educate researchers on how to avoid pitfalls of using statistical significance have had little success. In the papers "Significance Tests Harm Progress in Forecasting," and "Statistical Significance Tests are Unnecessary Even When Properly Done," Armstrong makes the case that even when done properly, statistical significance tests are of no value. A number of attempts failed to find empirical evidence supporting the use of significance tests. Tests of statistical significance are harmful to the development of scientific knowledge because they distract researchers from the use of proper methods. Armstrong suggests authors should avoid tests of statistical significance; instead, they should report on effect sizes, confidence intervals, replications/extensions, and meta-analyses. Use of the statistical significance test has been called seriously flawed and unscientific by authors Deirdre McCloskey and Stephen Ziliak. They point out that "insignificance" does not mean unimportant, and propose that the scientific community should abandon usage of the test altogether, as it can cause false hypotheses to be accepted and true hypotheses to be rejected. Signal–noise ratio conceptualisation of significance Edit Statistical significance can be considered to be the confidence one has in a given result. In a comparison study, it is dependent on the relative difference between the groups compared, the amount of measurement and the noise associated with the measurement. In other words, the confidence one has in a given result being non-random (i.e. 
it is not a consequence of chance) depends on the signal-to-noise ratio (SNR) and the sample size. Expressed mathematically, the confidence that a result is not by random chance is given by the following formula by Sackett: confidence = (signal / noise) × √(sample size). For clarity, the formula is presented in tabular form below.
Dependence of confidence on noise, signal and sample size:
|Parameter|Parameter increases|Parameter decreases|
|Noise|Confidence decreases|Confidence increases|
|Signal|Confidence increases|Confidence decreases|
|Sample size|Confidence increases|Confidence decreases|
In words, confidence is high if the noise is low and/or the sample size is large and/or the effect size (signal) is large. The confidence of a result (and its associated confidence interval) is not dependent on effect size alone. If the sample size is large and the noise is low, a small effect size can be measured with great confidence. Whether a small effect size is considered important is dependent on the context of the events compared. In medicine, small effect sizes (reflected by small increases of risk) are often considered clinically relevant and are frequently used to guide treatment decisions (if there is great confidence in them). Whether a given treatment is considered a worthy endeavour is dependent on the risks, benefits and costs.
Tests of statistical significance
- A/B testing
- ABX test
- Confidence limits (statistics)
- Effect size (statistical)
- File drawer problem
- Fisher's method for combining independent tests of significance
- Goodness of fit
- Hypothesis testing
- Statistical correlation
- Statistical measurement
- Statistical power
- Statistical tests
- ↑ 1.0 1.1 Ziliak, Stephen T. and Deirdre N. McCloskey. "Size Matters: The Standard Error of Regressions in the American Economic Review" (August 2004).
- ↑ Goodman S (1999). Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med 130 (12): 995–1004.
- ↑ Goodman S (1999). Toward evidence-based medical statistics. 2: The Bayes factor. Ann Intern Med 130 (12): 1005–13.
- ↑ Armstrong, J. Scott (2007). Significance tests harm progress in forecasting. International Journal of Forecasting 23: 321–327.
- ↑ Armstrong, J. Scott (2007). Statistical Significance Tests are Unnecessary Even When Properly Done. International Journal of Forecasting 23: 335–336.
- ↑ McCloskey, Deirdre N.; Stephen T. Ziliak (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives (Economics, Cognition, and Society). The University of Michigan Press.
- ↑ Sackett DL (October 2001). Why randomized controlled trials fail but needn't: 2. Failure to employ physiological statistics, or the only formula a clinician-trialist is ever likely to need (or understand!). CMAJ 165 (9): 1226–37.
- Raymond Hubbard, M.J. Bayarri, P Values are not Error Probabilities. A working paper that explains the difference between Fisher's evidential p-value and the Neyman-Pearson Type I error rate.
- The Concept of Statistical Significance Testing - Article by Bruce Thompson of the ERIC Clearinghouse on Assessment and Evaluation, Washington, D.C.
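The short Python sketch below illustrates the Sackett-style confidence expression quoted above; the signal, noise, and sample-size numbers are made up purely for illustration.

```python
import math

def sackett_confidence(signal: float, noise: float, n: int) -> float:
    """Confidence index ~ (signal / noise) * sqrt(sample size),
    mirroring the formula quoted in the text. Larger is better."""
    return (signal / noise) * math.sqrt(n)

# Illustrative (made-up) numbers: a mean difference of 2 units,
# a noise level of 10 units, and three different sample sizes.
for n in (10, 100, 1000):
    c = sackett_confidence(signal=2.0, noise=10.0, n=n)
    print(f"n = {n:4d}  ->  confidence index = {c:.2f}")
# The same small effect becomes "significant" once n is large enough,
# which is exactly the practical-significance caveat discussed above.
```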
http://psychology.wikia.com/wiki/Statistically_significant
13
21
Algebra (elementary mathematics)
Elementary algebra is a branch of mathematics whose object is the study of the laws that govern numerical quantities. The qualifier "elementary" appeared at the same time as modern algebra, in order to differentiate the two. Today, it is the first approach to algebra in the school curriculum. Algebra differs from arithmetic through the introduction of letters (a, b, c, …, x, y, z, …) which stand indifferently for any number, and to which the same rules of calculation are applied as if they were numbers. It is thus possible to establish laws that depend only on the nature of the operations, independently of the numbers involved. The solving of equations and inequations and the study of polynomials are applications of algebra.
An algebraic expression consists of numbers, letters and operation signs:
- the sign + is used to mark addition.
- the sign − is used to mark subtraction.
- the signs × or · are used to mark multiplication. When the multiplication involves two letters, it is possible to write ab instead of a × b.
- the sign ÷ is used to mark division, which can also be written as a fraction, a/b.
For example:
- the product of a number x increased by 3 with the number itself is written (x + 3) × x.
- the difference of the squares of two numbers a and b is written a² − b².
To evaluate an algebraic expression consists in assigning a value to each unknown and then carrying out the resulting arithmetic calculation.
Properties of the addition
Properties of the multiplication
Factorization and development
To factorize an algebraic expression A consists in transforming its written form into a product of two or several expressions (B, C, …): A = B × C × …
To develop (expand) an algebraic expression A consists in transforming its written form into a sum (or difference) of two or several expressions (B, C, …): A = B + C + …
Algebraic example of the resolution of a problem
A simple problem: 2 banknotes are worth 100 euros. What is one banknote worth? We can disregard the nature of the banknote (euro, rouble, dollar, …): the resolution of the problem will always be the same. Denoting by x the value of one banknote, the problem leads to the equation 2x = 100. Dividing each member by 2, one obtains x = 50.
In the elementary mathematics category:
- Square (algebra)
- Divide check
- Equation (elementary mathematics)
- Simple equation
- Quadratic equation
- Remarkable identity (elementary mathematics)
- Inequality (mathematics)
- Inequation of the first degree
- Inequation of the second degree
- Inequation solved by a table of signs
- Power (elementary mathematics)
- System of equations (elementary mathematics)
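As a small illustration of the banknote problem above, here is a sketch in Python that sets up and solves the equation symbolically; it assumes the sympy library is available.

```python
# Solving the banknote problem symbolically: 2*x = 100.
from sympy import symbols, Eq, solve

x = symbols('x')               # x = value of one banknote
equation = Eq(2 * x, 100)      # "2 banknotes are worth 100 euros"
solution = solve(equation, x)
print(solution)                # [50] -> one banknote is worth 50 euros

# The same rules work independently of the numbers, e.g. n*x = total:
n, total = symbols('n total', positive=True)
print(solve(Eq(n * x, total), x))   # [total/n]
```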
http://www.speedylook.com/Algebra_(elementary_mathematics).html
13
34
|Home||Third Grade News||Curriculum|| Mimosa-Growing with Mathematics Tie-ins This Page is currently a work in progress First Grade Focus - Multiple Topics Covered These math resource links can be included in lesson plans, teacher web pages, etc. to assist students as they explore and learn math concepts. Additional resources will be added as they are reviewed. AAAmath-1st Grade topics Topics support Mimosa topics. Scholastic Online Activities Math Maven Mysteries - students use problem solving and critical thinking skills to solve these mysteries. A nice variety to choose from. Could be used either individually or adapted to group work. Grade 1 Web Activities MHSchool.com offers a student activity sheet with an Internet link to answer questions. First Grade Activities Online activities that support 1st grade activities. First Grade Skills Math activities across topics covered in 1st grade. First Grade Backpack - Math Activities 1. Exploring Mathematics/Reviewing Numbers up to Ten One False Move *Sequencing Numbers* - Funbrain.Com offers this game on sequencing numbers from lowest to highest or highest to lowest. There are 3 levels of play. 2. Using Addition and Subtraction Math Fact Practice Learning Planet offers this timed game to practice facts. Has sound and requires shockwave. Choices are addition, subtraction, multiplication & division Offered by HBSchool, addition and subtraction activities. Little Animals Activity Centre - Math Offered by the BBC - practice addition and subtraction online. 3. Addition Sentences 4. Addition Fact Strategies - Counting on 5. Investigating Numbers 11 and 12 6. Measurement -Length, Volume, and Weight Gamequarium has online activities and games to support students developing measurement skills. Max's Measuring Mania Scholastic provides this online activity. Students measure items in their class, record results and return results to Scholastic. 7. The Numbers 13, 14, and 15 8. Investigating 2D and 3D Shapes Birds Eye View Learn how pilots use polygons and 2 & 3 dimensional views to fly a helicopter. Create a Tangram Find out where Tangrams originate from, and create your own online. 10. The Numbers 16 and 17 11. The Numbers 18 and 19 Addition Facts to 20 Practice your facts up to number 20. 12. Using Money Adding Nickels & Pennies In this activity, students add pennies and nickels. Addiing Dimes, Nickels & Pennies FunBrain offers this game to practice making change - offers easy, medium, hard levels. U.S.Treasury Page for Kids Site provides information on money, online tour and activities. 13. Exploring Equal Groups 14. Visual Representation All About Geometry Explains the different shapes. 15. Exploring Addition Five in a Row Practice your facts and work toward getting 5 in a row. Multiplication - Numbers up to 12 Kidport offers this skills based game. 16. Tens and Ones Adding Two-digit numbers to 100 Kidport offers this addition game - skills based. Mini-lesson on Tens and Ones Provides visual representation. 17. Adding Two-digit numbers Online & Printable Activities Practice facts up to 20, and add two-digit numbers. 18.Investigating Units of Measurement All About Comparing Choose Units of Measurement, or other categories. 19. Subtraction Sentences 20. Relating Addition and Subtraction Subtraction Activities & Worksheets Practice facts up to 20, and subtract 2-digit numbers. 21. Making Equal Groups 22. Subtraction Strategies Subtracting 2 Digit Numbers to 100 Kidport offers this subtraction game - skills based. 23. 
Problem Solving with Money Money Experience for Kids Three activity choices: Making Change, Spending Money, Piggy Bank Breakin (counting). Ice Cream Shop Scholastic offers this online activity and an activity worksheet. Could be used as a group activity. All About Money Count, Add & Subtract money. U.S. Mint for Kids-Money Various games and activities geared toward elementary level students. 24. Area and Fractions The purpose of this site is to help reduce fraction anxiety (*it has a name -Fractionitis) by providing a visual to understand fractions. Multiple topics regarding fractions are available. A Tale about Zeke and Zack A poem and activity page on fractions - offered by Scholastic. Time Experience for Kids Time for a Crime Scholastic offers this mystery for students to solve. Graph Activity from Marilyn Burns Listed on Creative Classroom Online, this lesson links collecting data and graphing. Skittles Challenge -2002 **This is a collaborative project with participating schools starting January 15, 2002 -March 23, 2002. Each student brings in their own bag of Skittles (red package, original fruit, 2.17 oz size) to divide by color, count and tally on an individual basis. Class compiles results and forwards them to [email protected]. Tally forms are available online along with additional activity suggestions. Create a Graph Choose a graph type, label & enter data, then click create a Graph. Graphs can be printed or copied into another product. This resource designed and maintained by: Susan Herook, Instructional Technology Specialist (Last Updated April 2004)
http://teachers.yourhomework.com/SMHerook/mimosa1.html
13
10
Severe Weather 101: Detecting Winter Weather
Satellite imagery is a very useful tool for determining cloud patterns and the movement of winter storms. By looping a series of satellite pictures together, forecasters can watch a storm's development and movement. Radar is critical for tracking the motion of precipitation and for determining what kind of precipitation is falling. The NWS's dual-polarized radars send electromagnetic wave fields at a 45 degree angle, rather than just horizontally. As these angled fields bounce off an object and are received back at the radar, a computer program separates the fields into horizontal and vertical information. This 2-D snapshot gives forecasters a measure of the size and shape of the object. With this information, forecasters can clearly identify rain, hail, snow, ice pellets and even bugs. If they know what type of precipitation is falling, they can make more accurate estimates of how much to expect. Doppler radar can show the wind direction too, which is helpful when forecasting near mountains and large bodies of water. If the radar shows wind blowing up the mountain (upslope), forecasters automatically know that one of the ingredients for the development of precipitation, lift, is in place. If the radar shows wind blowing over a large section of a body of water (fetch), then they know that another ingredient is present for the formation of precipitation: moisture. Radar velocities can help identify the location of cold fronts, because there is usually a sharp change in wind direction at a front, and this change shows up clearly on Doppler radar. What we do: NSSL was a leader and major contributor to the scientific and engineering development of dual-polarized weather radar, which is now installed on the NOAA NWS weather radars. Dual-polarization radar can clearly identify rain, hail, snow, or ice pellets inside the clouds. NSSL scientists are developing algorithms that will produce estimates of whether the precipitation is falling in liquid or frozen form, and of whether the precipitation is reaching the ground. NSSL's Hydrometeor Classification Algorithm (HCA) uses dual-polarization technology to automatically sort between ten types of radar echoes, including big raindrops and hail. This helps the forecaster quickly assess the precipitation event and better forecast how much will fall. NSSL's Severe Hail Verification Experiment (SHAVE) collects data on winter precipitation by making phone calls to the public. The data is used to refine radar algorithms that detect hail and other frozen precipitation. NSSL's Precipitation Identification Near the Ground (PING) project also collects data on types of precipitation. Volunteers are invited to submit reports of what is actually falling to the ground at their location. This data is used to refine radar algorithms that detect hail and other frozen precipitation. IPEX, the Intermountain Precipitation Experiment, studied winter weather across northern Utah to develop a better understanding of the structure and evolution of winter storms. During January and February 2000, scientists made detailed observations of several large storms, including one that produced three feet of snow. They also made unprecedented measurements of electrification and lightning in winter storms and the first dual-Doppler radar analysis of a cold front interacting with the Great Salt Lake and surrounding mountains.
Researchers used data gathered to validate precipitation estimates from Doppler weather radars located at high elevations, to improve computer-based forecast models used in mountainous regions, and to study terrain-induced precipitation events and interactions that produce lake-effect snow bands.
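To make the dual-polarization idea concrete, here is a minimal Python sketch. It uses the differential reflectivity, ZDR = 10 * log10(Zh/Zv), which compares the horizontal and vertical returns; the threshold values below are illustrative placeholders only, not the thresholds used in the NWS or NSSL hydrometeor classification algorithm.

```python
import math

def differential_reflectivity(z_h: float, z_v: float) -> float:
    """ZDR in dB: compares horizontally vs. vertically polarized returns.
    Near 0 dB -> roughly spherical targets (hail, dry snow);
    clearly positive -> oblate targets such as large raindrops."""
    return 10.0 * math.log10(z_h / z_v)

def rough_echo_guess(zdr_db: float) -> str:
    # Purely illustrative cut-offs -- real classifiers (e.g. NSSL's HCA)
    # combine several dual-pol variables, not just ZDR.
    if zdr_db > 1.5:
        return "rain (oblate drops)"
    if zdr_db > 0.3:
        return "mixed / melting precipitation"
    return "hail, dry snow, or other near-spherical targets"

for zh, zv in [(400.0, 200.0), (320.0, 280.0), (250.0, 249.0)]:
    zdr = differential_reflectivity(zh, zv)
    print(f"ZDR = {zdr:5.2f} dB -> {rough_echo_guess(zdr)}")
```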
http://www.nssl.noaa.gov/education/svrwx101/winter/detection/
13
17
||This article needs additional citations for verification. (August 2010)| Deposition is the geological process by which sediments, soil, and rocks are added to a landform or land mass. Fluids such as wind and water, as well as sediment flowing via gravity, transport previously eroded sediment, which, at the loss of enough kinetic energy in the fluid, is deposited, building up layers of sediment. Deposition occurs when the forces responsible for sediment transportation are no longer sufficient to overcome the forces of particle weight and friction, creating a resistance to motion, this is known as the null-point hypothesis. Deposition can also refer to the buildup of sediment from organically derived matter or chemical processes. For example, chalk is made up partly of the microscopic calcium carbonate skeletons of marine plankton, the deposition of which has induced chemical processes (diagenesis) to deposit further calcium carbonate. Similarly, the formation of coal begins with deposition of organic material, mainly from plants, in anaerobic conditions. Null-point hypothesis The null-point hypothesis explains how sediment is deposited throughout a shore profile according to its grain size. This is due to the influence of hydraulic energy, resulting in a seaward-fining of sediment particle size, or where fluid forcing equals gravity for each grain size. The concept can also be explained as "sediment of a particular size may move across the profile to a position where it is in equilibrium with the wave and flows acting on that sediment grain". This sorting mechanism combines the influence of the down-slope gravitational force of the profile and forces due to flow asymmetry, the position where there is zero net transport is known as the null point and was first proposed by Cornaglia in 1889. Figure 1 illustrates this relationship between sediment grain size and the depth of the marine environment. The first principle underlying the null point theory is due to the gravitational force; finer sediments remain in the water column for longer durations allowing transportation outside the surf zone to deposit under calmer conditions. The gravitational effect, or settling velocity determines the location of deposition for finer sediments, whereas a grain's internal angle of friction determines the deposition of larger grains on a shore profile. The secondary principle to the creation of seaward sediment fining is known as the hypothesis of asymmetrical thresholds under waves; this describes the interaction between the oscillatory flow of waves and tides flowing over the wave ripple bedforms in an asymmetric pattern. "The relatively strong onshore stroke of the wave forms an eddy or vortex on the lee side of the ripple, provided the onshore flow persists, this eddy remains trapped in the lee of the ripple. When the flow reverses, the eddy is thrown upwards off the bottom and a small cloud of suspended sediment generated by the eddy is ejected into the water column above the ripple, the sediment cloud is then moved seaward by the offshore stroke of the wave." Where there is symmetry in ripple shape the vortex is neutralised, the eddy and its associated sediment cloud develops on both sides of the ripple. This creates a cloudy water column which travels under tidal influence as the wave orbital motion is in equilibrium. 
The Null-point hypothesis has been quantitatively proven in Akaroa Harbour, New Zealand, The Wash, U.K., Bohai Bay and West Huang Sera, Mainland China, and in numerous other studies; Ippen and Eagleson (1955), Eagleson and Dean (1959, 1961) and Miller and Zeigler (1958, 1964). Deposition of non-cohesive sediments Large grain sediments transported by either bed load or suspended load will come to rest when there is insufficient bed shear stress and fluid turbulence to keep the sediment moving, with the suspended load this can be some distance as the particles need to fall through the water column. This is determined by the grains downward acting weight force being matched by a combined buoyancy and fluid drag force and can be expressed by: Downward acting weight force = Upward-acting buoyancy force + Upward-acting fluid drag force - π is the ratio of a circle's circumference to its diameter. - R is the radius of the spherical object (in m), - ρ is the mass density of the fluid (kg/m3), - g is the gravitational acceleration (m/s2), - Cd is the drag coefficient, and - ws is the particle's settling velocity (in m/s). In order to calculate the drag coefficient, the grain's Reynolds number needs to be discovered, which is based on the type of fluid through which the sediment particle is flowing; laminar flow, turbulent flow or a hybrid of both. When the fluid becomes more viscous due to smaller grain sizes or larger settling velocities, prediction is less straight forward and it is applicable to incorporate Stokes Law(also known as the frictional force, or drag force) of settling. Deposition of cohesive sediments Cohesion of sediment occurs with the small grain sizes associated with silts and clays, or particles smaller than 4ϕ on the phi scale. If these fine particles remain dispersed in the water column, Stokes law applies to the settling velocity of the individual grains, although due to sea water being a strong electrolyte bonding agent, flocculation occurs where individual particles create an electrical bond adhering each other together to form flocs. "The face of a clay platelet has a slight negative charge where the edge has a slight positive charge, when two platelets come into close proximity with each other the face of one particle and the edge of the other are electrostatically attracted." Flocs then have a higher combined mass which leads to quicker deposition through a higher fall velocity, and deposition in a more shoreward direction than they would have as the individual fine grains of clay or silt. The occurrence of null point theory Akaroa Harbour is located on Banks Peninsula, Canterbury, New Zealand, . The formation of this harbour has occurred due to active erosional processes on an extinct shield volcano, whereby the sea has flooded the caldera creating an inlet 16 km in length, with an average width of 2 km and a depth of -13 m relative to mean sea level at the 9 km point down the transect of the central axis. The predominant storm wave energy has unlimited fetch for the outer harbour from a southerly direction, with a calmer environment within the inner harbour, though localised harbour breezes create surface currents and chop influencing the marine sedimentation processess. Deposits of loess from subsequent glacial periods have in filled volcanic fissures over millennia, resulting in volcanic basalt and loess as the main sediment types available for deposition in Akaroa Harbour Hart et al. 
(2009) discovered through bathymetric survey, sieve and pipette analysis of subtidal sediments, that sediment textures were related to three main factors: depth; distance from shoreline; and distance along the central axis of the harbour. Resulting in the fining of sediment textures with increasing depth and towards the central axis of the harbour, or if classified into grain class sizes, “the plotted transect for the central axis goes from silty sands in the intertidal zone, to sandy silts in the inner nearshore, to silts in the outer reaches of the bays to mud at depths of 6 m or more”. See figure 2 for detail. Other studies have shown this process of the winnowing of sediment grain size from the effect of hydrodynamic forcing; Wang, Collins and Zhu (1988) qualitatively correlated increasing intensity of fluid forcing with increasing grain size. "This correlation was demonstrated at the low energy clayey tidal flats of Bohai Bay (China), the moderate environment of the Jiangsu coast (China) where the bottom material is silty, and the sandy flats of the high energy coast of The Wash (U.K.)." This research shows conclusive evidence for the null point theory existing on tidal flats with differing hydrodynamic energy levels and also on flats that are both erosional and accretional. Kirby R. (2002) takes this concept further explaining that the fines are suspended and reworked aerially offshore leaving behind lag deposits of mainly bivalve and gastropod shells separated out from the finer substrate beneath, waves and currents then heap these deposits to form chenier ridges throughout the tidal zone which tend to be forced up the foreshore profile but also along the foreshore. Cheniers can be found at any level on the foreshore and predominantly characterise an erosion-dominated regime. Applications for coastal planning and management The null point theory has been controversial in its acceptance into mainstream coastal science as the theory operates in dynamic equilibrium or unstable equilibrium, and many field and laboratory observations have failed to replicate the state of a null point at each grain size throughout the profile. The interaction of variables and processes over time within the environmental context causes issues; "the large number of variables, the complexity of the processes, and the difficulty in observation, all place serious obstacles in the way of systematisation, therefore in certain narrow fields the basic physical theory may be sound and reliable but the gaps are large" Geomorphologists, engineers, governments and planners should be aware of the processes and outcomes involved with the null point hypothesis when performing tasks such as beach nourishment, issuing building consents or building coastal defence structures. This is because sediment grain size analysis throughout a profile allows inference into the erosion or accretion rates possible if shore dynamics are modified. Planners and managers should also be aware that the coastal environment is dynamic and contextual science should be evaluated before implementation of any shore profile modification. Thus theoretical studies, laboratory experiments, numerical and hydraulic modelling seek to answer questions pertaining to littoral drift and sediment deposition, the results should not be viewed in isolation and a substantial body of purely qualitative observational data should supplement any planning or management decision. - Oldale, Robert N. "Coastal Erosion on Cape Cod: Some Questions and Answers". U.S. 
Geological Survey. Retrieved 2009-09-11.
- Jolliffe, I. P. (1978). "Littoral and offshore sediment transport". Progress in Physical Geography 2 (2): 264–308. doi:10.1177/030913337800200204.
- Horn, D. P. (1992). "A review and experimental assessment of equilibrium grain size and the ideal wave-graded profile". Marine Geology 108 (2): 161–174. doi:10.1016/0025-3227(92)90170-m.
- Hart, D., Todd, D., Nation, T., McWilliams, Z. (2009). Upper Akaroa Harbour seabed bathymetry and soft sediments: A baseline mapping study. Coastal Research Report.
- Heuff, D. N., Spigel, R. H., and Ross, A. H. (2005). "Evidence of a significant wind-driven circulation in Akaroa Harbour. Part 1: Data obtained during the September–November, 1998 field survey". New Zealand Journal of Marine and Freshwater Research 39 (5): 1097–1109. doi:10.1080/00288330.2005.9517378.
- Raeside, J. D. (1964). "Loess deposits of the South Island, New Zealand, and soils formed on them". New Zealand Journal of Geology and Geophysics 7: 811–838.
- Wang, Y., Collins, M. B., and Zhu, D. (1988). "A comparative study of open coast tidal flats: The Wash (U.K.), Bohai Bay and West Huang Sera (Mainland China)". Proceedings of ISCZC, China Ocean Press. pp. 120–134.
- Kirby, R. (2002). "Distinguishing accretion from erosion-dominated muddy coasts". In T. Healy, Y. Wang and J. Healy (Eds.), Muddy coasts of the world: Processes, deposits and function. Elsevier. pp. 61–81.
- Russell, R. (1960). Coastal erosion and protection: nine questions and answers. Hydraulics Research Paper 3.
See also
- Sedimentary rock
- Sedimentary structures
- Longshore drift
- Shields parameter
- Stokes' law
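Relating to the settling-velocity discussion earlier in this article, here is a minimal Python sketch of Stokes' law for fine grains, ws = (2/9)(rho_s - rho)g R^2 / mu. The grain density, fluid density, and viscosity below are typical assumed values for quartz grains in seawater, not figures taken from the article.

```python
def stokes_settling_velocity(radius_m, rho_particle=2650.0, rho_fluid=1025.0,
                             mu=1.07e-3, g=9.81):
    """Stokes' law settling velocity (m/s) for a small sphere in a viscous
    fluid: w_s = (2/9) * (rho_p - rho_f) * g * R^2 / mu.
    Default densities/viscosity are typical quartz-grain and seawater
    values assumed for illustration; the formula is only valid for fine,
    slowly settling grains (low particle Reynolds number)."""
    return (2.0 / 9.0) * (rho_particle - rho_fluid) * g * radius_m ** 2 / mu

for name, radius in [("clay (~2 um diameter)", 1e-6),
                     ("silt (~30 um diameter)", 15e-6),
                     ("fine sand (~125 um diameter)", 62.5e-6)]:
    ws = stokes_settling_velocity(radius)
    print(f"{name:28s} w_s ~ {ws:.2e} m/s")
# Finer grains settle far more slowly, so they are carried further
# seaward before deposition -- the basis of the null-point idea above.
```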
http://en.wikipedia.org/wiki/Deposition_(geology)
13
11
Stage 0: Foundation for Literacy Stage 1: Beginning Literacy Stage 2: Consolidation / Fluency Oral Language & ELL Select an object that will be used for turn-taking For example: A stuffed animal Invite students to join you in a circle Review the group expectations For example: Mutual Respect, Attentive Listening, No Put Downs and The Right to Pass Introduce a topic or a sentence starter For example: “I am grateful that...” Model a complete sentence Pass the object used for turn-taking around the circle and have each student share information related to the topic English Language Learners/ESL: - Prior to the community circle have the student practice his/her response - Allow other students to share ideas first as examples - Prompt the student if necessary LD/Reading & Writing Difficulties: - Repeat the expectations and topic throughout the activity - Keep instructions short Cultural Appropriateness & Diversity: - Select topics that are culturally inclusive - Allow students to share their own experiences and make personal connections to the topic - Hold a Community Circle regularly to promote an inclusive learning environment - Record and display the sentence starter as a visual reference Blomberg, G. (2011). The power of informal talk. Reading Teacher, 46, 460. The goal of Community Circle: Fostering Oral Language Development is to use a method developed by Jeanne Gibbs to promote oral language development by modeling and encouraging spoken language that is purposeful and descriptive. What You Need - Select an object for turn-taking - Teacher shares topic - Students share information related to topic Teacher: - Turn-taking object What You Do Facilitator: - during sharing times Whole class: - when teacher shares topic and students share information - Use a checklist to track student participation and speaking and listening skills - Take anecdotal notes following the Community Circle and track students ideas as well as oral communication skills - Provide oral feedback to the whole class and make general comments and next steps related to students speaking and listening skills - Hold a Community Circle on a regular basis and select a variety of topics a. For example: "One good thing that happened to me on the weekend...I wish...I hope that..." - Have students share a goal that they would like to accomplish at school - Use the Community Circle to introduce or review a subject or topic of study - Have students select the topic by submitting a piece of paper with their topic of choice the day before - The Community Circle is part of the Tribes Learning Community and fosters a safe and caring classroom where students feel included and appreciated by others. Making the Community Circle an important and regular part of your classroom will give all students an opportunity to share their thinking. - Use different methods for sharing and taking turns, including a "talking stick" or a koosh ball. - Strategically seat students who might need prompts or cues to refocus next to you
http://www.oise.utoronto.ca/balancedliteracydiet/Recipe/00135/
13
18
There are three basic ways in which heat is transferred. In fluids, heat is often transferred by convection, in which the motion of the fluid itself carries heat from one place to another. Another way to transfer heat is by conduction, which does not involve any motion of a substance, but rather is a transfer of energy within a substance (or between substances in contact). The third way to transfer energy is by radiation, which involves absorbing or giving off electromagnetic waves.
Heat transfer in fluids generally takes place via convection. Convection currents are set up in the fluid because the hotter part of the fluid is not as dense as the cooler part, so there is an upward buoyant force on the hotter fluid, making it rise while the cooler, denser, fluid sinks. Birds and gliders make use of upward convection currents to rise, and we also rely on convection to remove ground-level pollution. Forced convection, where the fluid does not flow of its own accord but is pushed, is often used for heating (e.g., forced-air furnaces) or cooling (e.g., fans, automobile cooling systems).
When heat is transferred via conduction, the substance itself does not flow; rather, heat is transferred internally, by vibrations of atoms and molecules. Electrons can also carry heat, which is the reason metals are generally very good conductors of heat. Metals have many free electrons, which move around randomly; these can transfer heat from one part of the metal to another. The equation governing heat conduction along something of length (or thickness) L and cross-sectional area A, in a time t, is:
Q = k A ΔT t / L
k is the thermal conductivity, a constant depending only on the material, and having units of J / (s m °C). Copper, a good thermal conductor, which is why some pots and pans have copper bases, has a thermal conductivity of 390 J / (s m °C). Styrofoam, on the other hand, a good insulator, has a thermal conductivity of 0.01 J / (s m °C).
Consider what happens when a layer of ice builds up in a freezer. When this happens, the freezer is much less efficient at keeping food frozen. Under normal operation, a freezer keeps food frozen by transferring heat through the aluminum walls of the freezer. The inside of the freezer is kept at -10 °C; this temperature is maintained by having the other side of the aluminum at a temperature of -25 °C. The aluminum is 1.5 mm thick, and the thermal conductivity of aluminum is 240 J / (s m °C). With a temperature difference of 15°, the amount of heat conducted through the aluminum per second per square meter can be calculated from the conductivity equation:
Q / (t A) = k ΔT / L = (240 x 15) / 0.0015 = 2,400,000 J / (s m^2)
This is quite a large heat-transfer rate. What happens if 5 mm of ice builds up inside the freezer, however? Now the heat must be transferred from the freezer, at -10 °C, through 5 mm of ice, then through 1.5 mm of aluminum, to the outside of the aluminum at -25 °C. The rate of heat transfer must be the same through the ice and the aluminum; this allows the temperature at the ice-aluminum interface to be calculated. Setting the heat-transfer rates equal gives:
k_ice (-10 - T) / 0.005 = k_aluminum (T + 25) / 0.0015
The thermal conductivity of ice is 2.2 J / (s m °C). Solving for T gives:
T ≈ -24.96 °C
Now, instead of heat being transferred through the aluminum with a temperature difference of 15°, the difference is only 0.041°. This gives a heat transfer rate of:
Q / (t A) = (240 x 0.041) / 0.0015 ≈ 6,600 J / (s m^2)
With a layer of ice covering the walls, the rate of heat transfer is reduced by a factor of more than 300! It's no wonder the freezer has to work much harder to keep the food cold.
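A short Python sketch reproducing the freezer numbers above (same conductivities, thicknesses, and temperatures as in the text), which also makes it easy to try other ice thicknesses:

```python
def conduction_rate(k, dT, L):
    """Heat flow per second per square metre, Q/(t*A) = k * dT / L."""
    return k * dT / L

k_al, k_ice = 240.0, 2.2        # thermal conductivities, J/(s m C)
L_al, L_ice = 0.0015, 0.005     # thicknesses, m
T_in, T_out = -10.0, -25.0      # freezer interior and outer wall temps, C

# Bare aluminum wall:
bare = conduction_rate(k_al, T_in - T_out, L_al)

# With ice: equal heat flow through ice and aluminum fixes the interface
# temperature T:  k_ice*(T_in - T)/L_ice = k_al*(T - T_out)/L_al
a, b = k_ice / L_ice, k_al / L_al
T = (a * T_in + b * T_out) / (a + b)
iced = conduction_rate(k_al, T - T_out, L_al)

print(f"Bare wall : {bare:,.0f} J/(s m^2)")        # ~2,400,000
print(f"Interface : {T:.3f} C")                    # ~ -24.96
print(f"With ice  : {iced:,.0f} J/(s m^2)")        # ~6,600
print(f"Reduction : factor of ~{bare / iced:.0f}")
```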
The third way to transfer heat, in addition to convection and conduction, is by radiation, in which energy is transferred in the form of electromagnetic waves. We'll talk about electromagnetic waves in a lot more detail in PY106; an electromagnetic wave is basically an oscillating electric and magnetic field traveling through space at the speed of light. Don't worry if that definition goes over your head, because you're already familiar with many kinds of electromagnetic waves, such as radio waves, microwaves, the light we see, X-rays, and ultraviolet rays. The only difference between the different kinds is the frequency and wavelength of the wave. Note that the radiation we're talking about here, in regard to heat transfer, is not the same thing as the dangerous radiation associated with nuclear bombs, etc. That radiation comes in the form of very high energy electromagnetic waves, as well as nuclear particles. The radiation associated with heat transfer is entirely electromagnetic waves, with a relatively low (and therefore relatively safe) energy.
Everything around us takes in energy from radiation, and gives it off in the form of radiation. When everything is at the same temperature, the amount of energy received is equal to the amount given off. Because there is no net change in energy, no temperature changes occur. When things are at different temperatures, however, the hotter objects give off more energy in the form of radiation than they take in; the reverse is true for the colder objects. The amount of energy an object radiates depends strongly on temperature. For an object with a temperature T (in kelvin) and a surface area A, the energy radiated in a time t is given by the Stefan-Boltzmann law of radiation:
Q = e σ A T^4 t
where σ = 5.67 x 10^-8 J / (s m^2 K^4) is the Stefan-Boltzmann constant. The constant e is known as the emissivity, and it's a measure of the fraction of incident radiation energy that is absorbed and radiated by the object. This depends to a large extent on how shiny it is. If an object reflects a lot of energy, it will absorb (and radiate) very little; if it reflects very little energy, it will absorb and radiate quite efficiently. Black objects, for example, generally absorb radiation very well, and would have emissivities close to 1. This is the largest possible value for the emissivity, and an object with e = 1 is called a perfect blackbody. Note that the emissivity of an object depends on the wavelength of radiation. A shiny object may reflect a great deal of visible light, but it may be a good absorber (and therefore emitter) of radiation of a different wavelength, such as ultraviolet or infrared light. Note that the emissivity of an object is a measure of not just how well it absorbs radiation, but also of how well it radiates the energy. This means a black object that absorbs most of the radiation it is exposed to will also radiate energy away at a higher rate than a shiny object with a low emissivity.
The Stefan-Boltzmann law tells you how much energy is radiated from an object at temperature T. It can also be used to calculate how much energy is absorbed by an object in an environment where everything around it is at a particular temperature Tc:
Q = e σ A Tc^4 t
The net energy change is simply the difference between the radiated energy and the absorbed energy. This can be expressed as a power by dividing the energy by the time. The net power output of an object of temperature T is thus:
P = e σ A (T^4 - Tc^4)
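A small Python sketch of the net-radiation formula just given; the emissivity, area, and temperatures below are arbitrary example values rather than numbers from the text.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, J/(s m^2 K^4)

def net_radiated_power(emissivity, area, T_object, T_surroundings):
    """Net power output, P = e * sigma * A * (T^4 - Tc^4), in watts.
    Positive means the object loses energy to its surroundings."""
    return emissivity * SIGMA * area * (T_object**4 - T_surroundings**4)

# Example values (not from the text): a person-sized surface, e ~ 0.9,
# skin at ~306 K in a 293 K room.
P = net_radiated_power(emissivity=0.9, area=1.8,
                       T_object=306.0, T_surroundings=293.0)
print(f"Net radiated power: {P:.0f} W")   # on the order of 100 W
```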
We've looked at the three types of heat transfer. Conduction and convection rely on temperature differences; radiation does, too, but with radiation the absolute temperature is important. In some cases one method of heat transfer may dominate over the other two, but often heat transfer occurs via two, or even all three, processes simultaneously. A stove and oven are perfect examples of the different kinds of heat transfer. If you boil water in a pot on the stove, heat is conducted from the hot burner through the base of the pot to the water. Heat can also be conducted along the handle of the pot, which is why you need to be careful picking the pot up, and why most pots don't have metal handles. In the water in the pot, convection currents are set up, helping to heat the water uniformly. If you cook something in the oven, on the other hand, heat is transferred from the glowing elements in the oven to the food via radiation.
Thermodynamics is the study of systems involving energy in the form of heat and work. A good example of a thermodynamic system is gas confined by a piston in a cylinder. If the gas is heated, it will expand, doing work on the piston; this is one example of how a thermodynamic system can do work.
Thermal equilibrium is an important concept in thermodynamics. When two systems are in thermal equilibrium, there is no net heat transfer between them. This occurs when the systems are at the same temperature. In other words, systems at the same temperature will be in thermal equilibrium with each other.
The first law of thermodynamics relates changes in internal energy to heat added to a system and the work done by a system. The first law is simply a conservation of energy equation:
ΔU = Q - W
The internal energy has the symbol U. Q is positive if heat is added to the system, and negative if heat is removed; W is positive if work is done by the system, and negative if work is done on the system.
We've talked about how heat can be transferred, so you probably have a good idea about what Q means in the first law. What does it mean for the system to do work? Work is simply a force multiplied by the distance moved in the direction of the force. A good example of a thermodynamic system that can do work is the gas confined by a piston in a cylinder, as shown in the diagram. If the gas is heated, it will expand and push the piston up, thereby doing work on the piston. If the piston is pushed down, on the other hand, the piston does work on the gas and the gas does negative work on the piston. This is an example of how work is done by a thermodynamic system. An example with numbers might make this clearer.
Consider a gas in a cylinder at room temperature (T = 293 K), with a volume of 0.065 m3. The gas is confined by a piston with a weight of 100 N and an area of 0.65 m2. The pressure above the piston is atmospheric pressure.
(a) What is the pressure of the gas? This can be determined from a free-body diagram of the piston. The weight of the piston acts down, and the atmosphere exerts a downward force as well, coming from force = pressure x area. These two forces are balanced by the upward force coming from the gas pressure. The piston is in equilibrium, so the forces balance. Therefore:
P A = P_atm A + W_piston
Solving for the pressure of the gas gives:
P = P_atm + W_piston / A = 1.01 x 10^5 Pa + (100 N / 0.65 m2) ≈ 1.012 x 10^5 Pa
The pressure in the gas isn't much bigger than atmospheric pressure, just enough to support the weight of the piston.
(b) The gas is heated, expanding it and moving the piston up. If the volume occupied by the gas doubles, how much work has the gas done? An assumption to make here is that the pressure is constant.
Once the gas has expanded, the pressure will certainly be the same as before because the same free-body diagram applies. As long as the expansion takes place slowly, it is reasonable to assume that the pressure is constant. If the volume has doubled, then, and the pressure has remained the same, the ideal gas law tells us that the temperature must have doubled too. The work done by the gas can be determined by working out the force applied by the gas and calculating the distance. However, the force applied by the gas is the pressure times the area, so:
W = F s = P A s
and the area multiplied by the distance is a volume, specifically the change in volume of the gas. So, at constant pressure, work is just the pressure multiplied by the change in volume:
W = P ΔV = (1.012 x 10^5 Pa)(0.065 m3) ≈ 6.6 x 10^3 J
This is positive because the force and the distance moved are in the same direction, so this is work done by the gas.
As has been discussed, a gas enclosed by a piston in a cylinder can do work on the piston, the work being the pressure multiplied by the change in volume. If the volume doesn't change, no work is done. If the pressure stays constant while the volume changes, the work done is easy to calculate. On the other hand, if pressure and volume are both changing it's somewhat harder to calculate the work done. As an aid in calculating the work done, it's a good idea to draw a pressure-volume graph (with pressure on the y axis and volume on the x-axis). If a system moves from one point on the graph to another and a line is drawn to connect the points, the work done is the area underneath this line. We'll go through some different thermodynamic processes and see how this works.
There are a number of different thermodynamic processes that can change the pressure and/or the volume and/or the temperature of a system. To simplify matters, consider what happens when something is kept constant. The different processes are then categorized as follows: isothermal (the temperature is kept constant), isobaric (the pressure is kept constant), isochoric (the volume is kept constant), and adiabatic (no heat is added to or removed from the system). In an isothermal process, if the volume increases while the temperature is constant, the pressure must decrease, and if the volume decreases the pressure must increase. The isothermal and adiabatic processes should be examined in a little more detail.
In an isothermal process, the temperature stays constant, so the pressure and volume are inversely proportional to one another; the P-V graph for an isothermal process is a downward-sloping hyperbola. The work done by the system is still the area under the P-V curve, but because this is not a straight line the calculation is a little tricky, and really can only properly be done using calculus. The internal energy of an ideal gas is proportional to the temperature, so if the temperature is kept fixed the internal energy does not change. The first law, which deals with changes in the internal energy, thus becomes 0 = Q - W, so Q = W. If the system does work, the energy comes from heat flowing into the system from the reservoir; if work is done on the system, heat flows out of the system to the reservoir.
In an adiabatic process, no heat is added or removed from a system. The first law of thermodynamics is thus reduced to saying that the change in the internal energy of a system undergoing an adiabatic change is equal to -W. Since the internal energy is directly proportional to temperature, the work becomes W = -ΔU, which for the monatomic ideal gas considered in these notes is W = -(3/2) n R ΔT. An example of an adiabatic process is a gas expanding so quickly that no heat can be transferred. The expansion does work, and the temperature drops. This is exactly what happens with a carbon dioxide fire extinguisher, with the gas coming out at high pressure and cooling as it expands at atmospheric pressure.
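Here is a brief Python check of the piston example worked above, using the same numbers (a 100 N piston over 0.65 m^2, and the volume doubling from 0.065 m^3 at constant pressure):

```python
P_ATM = 1.01e5        # atmospheric pressure, Pa

# (a) Gas pressure that holds up the piston: P = P_atm + weight / area
weight, area = 100.0, 0.65          # N, m^2
P_gas = P_ATM + weight / area
print(f"Gas pressure: {P_gas:.4e} Pa (barely above atmospheric)")

# (b) Work done by the gas at constant pressure when the volume doubles:
V_initial = 0.065                   # m^3
dV = V_initial                      # doubling means dV = V_initial
W = P_gas * dV                      # W = P * dV
print(f"Work done by the gas: {W:.0f} J (~6.6e3 J)")

# Sanity check from the ideal gas law: at constant P, doubling V doubles T.
T_initial = 293.0
print(f"Final temperature: {T_initial * 2:.0f} K")
```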
With liquids and solids that are changing temperature, the heat associated with a temperature change is given by the equation:
Q = m c ΔT
A similar equation holds for an ideal gas, only instead of writing the equation in terms of the mass of the gas it is written in terms of the number of moles of gas, using a capital C for the heat capacity, with units of J / (mol K):
Q = n C ΔT
For an ideal gas, the heat capacity depends on what kind of thermodynamic process the gas is experiencing. Generally, two different heat capacities are stated for a gas, the heat capacity at constant pressure (Cp) and the heat capacity at constant volume (Cv). The value at constant pressure is larger than the value at constant volume because at constant pressure not all of the heat goes into changing the temperature; some goes into doing work. On the other hand, at constant volume no work is done, so all the heat goes into changing the temperature. In other words, it takes less heat to produce a given temperature change at constant volume than it does at constant pressure, so Cv < Cp.
That's a qualitative statement about the two different heat capacities, but it's very easy to examine them quantitatively. The first law says:
ΔU = Q - W
We also know that PV = nRT, and at constant pressure the work done is:
W = P ΔV = n R ΔT
For a monatomic ideal gas the internal energy is U = (3/2) n R T, so at constant volume Q = ΔU = (3/2) n R ΔT, giving Cv = (3/2) R, while at constant pressure Q = ΔU + W = (5/2) n R ΔT, giving Cp = (5/2) R. Note that this applies for a monatomic ideal gas. For all gases, though, the following is true:
Cp = Cv + R
Another important number is the ratio of the two specific heats, represented by the Greek letter gamma (γ). For a monatomic ideal gas this ratio is:
γ = Cp / Cv = (5/2) / (3/2) = 5/3 ≈ 1.67
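A final Python sketch tabulating the monatomic ideal-gas heat capacities and γ derived above, and comparing the heat needed at constant volume and constant pressure for an arbitrary example temperature change (the n and ΔT values are not from the text):

```python
R = 8.314   # universal gas constant, J/(mol K)

# Monatomic ideal gas (as in the notes above):
Cv = 1.5 * R          # heat capacity at constant volume
Cp = Cv + R           # Cp = Cv + R holds for any ideal gas
gamma = Cp / Cv
print(f"Cv = {Cv:.2f} J/(mol K), Cp = {Cp:.2f} J/(mol K), gamma = {gamma:.3f}")

# Heat needed to warm n moles by dT, Q = n * C * dT, at constant V vs P:
n, dT = 2.0, 50.0     # example values (not from the text)
print(f"Q at constant volume  : {n * Cv * dT:.0f} J")
print(f"Q at constant pressure: {n * Cp * dT:.0f} J  (larger: some heat does work)")
```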
http://physics.bu.edu/~duffy/py105/notes/Heattransfer.html
13
34
When we look into the night sky, the impression we first get may be of an unchanging scene. Certainly, the Moon and planets change position against the backdrop of stars from night to night, and over a period of hours the satellites of Jupiter can be seen to shift around their orbits - but everything appears to happen slowly. Apart, that is, from the brief streak of light from a meteor. Capable of appearing in any part of the sky without warning, these objects can catch the observer completely off-guard. Many people have never seen one, but in fact meteors are visible every cloudless night - if only one has the patience to watch out for them. But these are more than just a means to a free firework display; for two centuries, meteors (or more correctly, the objects which caused them) provided science with its only source of material from beyond the Earth.
Meteors are caused by pieces of debris floating through the solar system, entering the Earth's atmosphere at very high speed and exciting the atoms in the air. The piece of material itself is usually destroyed by the immense heat caused by friction with the air. Although the vast majority of such objects are destroyed long before they can reach the surface of the Earth, some of the larger pieces do survive the journey and fall to the ground. These pieces of rock from beyond the reaches of our planet are known as meteorites.
Meteoroids, Meteors and Meteorites
The nature of meteors remained a mystery for a considerable time, and there was much debate about whether "shooting stars" were phenomena of the Earth, or space. However, the fact that meteorites really were visitors from beyond our atmosphere was proved by Ernest Chladni in 1794, and by Jean-Baptiste Biot in 1803, when a meteorite was observed to fall near a town in France. There are three distinct stages to the phenomenon, and in the vast majority of cases, an object will only survive to the second. These stages are:
Meteoroid: When a piece of debris, later to enter the Earth's atmosphere, is travelling through space towards our planet, the object is called a meteoroid.
Meteor: The brief flash of light we see in the night sky. It is caused not by the material "burning" with friction from the atmosphere, but rather by the atoms which have been excited in the air by the object's passage.
Meteorite: The name given to the relatively small number of objects which are not completely destroyed in the upper atmosphere, but survive to the ground.
Meteors and Meteor showers
The pieces of material which cause meteors enter the upper atmosphere of Earth at very high velocity - typically around 260,000 km/h. The flash of light which we see occurs when the object is approximately 100 km above the surface of the Earth. Perhaps the most surprising feature of these objects is their size. To create a flash which is visible to the naked eye, the particle must be at least the size of a grain of sand. A particle the size of a grape would be a spectacular object, and may even cast shadows on the ground. Such bright meteors are given the name of fireballs. The image here (left, courtesy of the Dutch Meteor Society) shows such a fireball.
Due to the Earth's motion, the best time to see meteors is after midnight, since before this time, only those objects which are travelling faster than the Earth can catch up and fall through the atmosphere. After midnight, the night-time sky faces the direction of the Earth's motion, and in effect "scoops up" more meteors.
Meteor showers and their origins
At certain times of the year, many more meteors than usual can be seen at night.
The reason for this is linked to the origins of the vast majority of meteors: comets. As a comet orbits near the sun, it loses material, the majority of which is ejected in the tail. The dust and rock particles ejected from the comet are spread behind it in a trail which follows the orbit, as shown on the diagram (left, courtesy of Cambridge University Press). At certain times of the year, the Earth crosses these dirt-laden paths, and the particles of material which they contain are swept up by our planet's atmosphere to appear in the sky as a shower of meteors. The dates of these showers are accurately known, and astronomers can prepare for them in advance. Often, the comet responsible for particular showers is known: for instance, both the Eta Aquarid shower (which is at its height around May 4th) and the Orionid shower (October 20th) are caused by the trail of debris left by Halley's comet. The name of each shower comes from the position of its radiant. Radiants are an effect of perspective - the effect which makes railway tracks seem to meet far in the distance. If the meteors from a particular shower are plotted on a map, and a line drawn through the tail and continuing on behind the meteor, all the lines appear to converge at a particular point on the sky. This point is called the radiant, and is shown in the figure below. Each shower is named after the constellation in which the radiant lies - such as Gemini for the Geminid shower (December 13th). In the case of the Eta Aquarids, the radiant is found to lie close to the star Eta Aquarii. The movie presented here (courtesy of NASA) shows a fireball of the 2003 Perseid meteor shower. The majority of meteors seen in our skies, including probably all those in the showers, are from these cometary trails, and most - even the very bright ones - are caused by particles very small in size (from dust grains to pieces a few centimetres across). These particles are, generally, too small to survive the journey to the surface of the Earth, since they are burned away too rapidly. However, much larger chunks of material do land, and it is believed that these meteorites are from a different source. Rather than a by-product of the passage of a comet, meteorites probably originate from the asteroids. Asteroids may collide with each other, and with other pieces of space debris. This may break pieces off the asteroid, and in some very severe cases, the asteroid may shatter completely. The result is a cloud of "rubble" - large pieces of rock and metals from the original asteroid which may wander into the path of the Earth and fall through the atmosphere. Far more rare than a "normal" comet-related sighting, the resulting meteor is remarkably bright - again, capable of casting strong shadows. Although a great deal of the object will be burned away on its passage, some may still survive, and the resulting meteorite provides scientists with a mine of information about the composition of asteroids, and also that of the early solar system when these objects formed. Although such objects travel at great speed, recordings of injuries from meteorite falls are extremely rare. One such object fell to Earth on August 14th, 1992 in Mbale, Uganda. The meteorite was made of stone and broke up on its fall, scattering debris over an area of around 3 x 7 km. At least 48 separate impacts were found, and it is thought that the original object must have had a mass of around 1000 kg, and so far, 150 kg of fragments have been found. 
The image here (left, courtesy of the Dutch Meteor Society) shows the largest fragment recovered. Although most meteorites almost certainly came from the asteroids, there are some exceptions. Some are thought to have come from comets, and others have a composition which is very close to the rocks which are found on the Moon, which suggests that some of the impacts from objects which caused the craters on the Moon's surface threw pieces of the lunar surface off, and some of these pieces ended up falling into the Earth's atmosphere. At least seven other meteorites in collections are of a type which match closely the materials found on the planet Mars, also probably as a result of impacts on the surface of that planet. A martian meteorite is shown in the image on the left (courtesy of LANL). The composition of meteorites Due to their asteroidal origins, meteorites provide us with information about the composition of their distant parents. The objects which fall to Earth are very diverse in their composition and appearance, reflecting the different materials in the original bodies. Meteorites are generally divided into three classes according to their composition: Irons: This is the most commonly encountered meteorite. The majority of the material (90%) is iron, with a smaller amount of nickel mixed in. The meteorites which fell at Barringer, Arizona, and Wolf Creek, Australia, were both irons. You can click here to see an image of a slice through an iron meteorite. Stones: These are the most common type to fall; however, because they are very similar in composition to native Earth-rocks, they are difficult to spot, and so this class is not the most common type to find. Some stony meteorites contain small glassy spheres called chondrules, and objects with these spheres are known as chondrites, a sub-class of the stony meteorites. (Stony meteorites without the spheres are called achondrites). This image shows a fragment of the meteorite which fell in Peekskill, New York, USA. This is the same meteorite which was captured on video and was presented above. (Photo courtesy of AstroMall). Stony Irons are the final major class of meteorite, and contain small pieces of stone embedded in a body of iron. The image here shows a slice of a meteorite of this class. The meteorite fell at Esquel, Chubut, Argentina, and is pictured courtesy of AstroMall. Impacts with the Earth Today we are fortunate that the vast majority of debris from space which enters the Earth's atmosphere is small - so small that it rarely survives the fall to the ground. A very small fraction of the pieces do make it all the way, and are recovered as meteorites, usually causing no more damage than a hole in a house roof (as in the image on the right, courtesy of the Dutch Meteor Society), or a small pit in the ground. But things were not always this way; throughout the history of the Earth our planet has been hit by very much larger objects which have had a profound effect on its evolution, and in many cases have left visible scars on the surface. For example, some scientists believe that the disappearance of the Dinosaurs about 65 million years ago was caused by the impact of a large asteroid in an area now covered by the Indian Ocean - although this is by no means the only theory for their demise. 
Theories for the creation of the moon also include one in which the early Earth was hit by a huge body - possibly planet-like in size - which caused a large piece to fracture off our planet and form the satellite which we see today (again, this is only one of many theories put forward to explain the existence of the Moon). Although photographs of the Earth show a surface remarkably free from the craters which scar the surface of our satellite, this appearance is deceptive. Whilst the Earth's atmosphere does shield it from the smaller bodies, it is inadequate to prevent large pieces of material from hitting the surface; the reason for an apparent lack of craters on our planet is erosion. Whilst the moon is a relatively quiet and inactive body, the Earth has always been far more restless. Craters from long ago can be "wiped off the face of the Earth" by the action of the oceans, weather, volcanoes, earthquakes and other large scale events. However, there are several impact sites still identifiable. These are shown on the chart below. World crater sites - courtesy of the Canadian Geological Survey Why are the major meteorite falls of today so small in comparison with these apocalyptic events? Early on in the history of the solar system, the pieces of debris which caused major impacts were far more common. The surface of the moon, pock-marked with huge craters, is evidence of the fact that the planets went through a period of very intense bombardment millions of years ago. Through time, the space around the planets has been "swept clear" of the majority of these objects, and so large events are more rare. However, the scars from some of these events are still visible, as shown in the images below, also from the Canadian Geological Survey. Wolfe Creek, Western Australia. Less than 0.3 million years old, this crater has a diameter of 0.9 km. You can also see an image of a fragment from the Wolf Creek meteorite. Manicouagan, Quebec, Canada. Around 214 million years old, this crater (pictured here from the Space Shuttle) has a diameter of about 100 km. Barringer, Arizona, USA. Possibly the most famous terrestrial impact crater, Barringer is 1.2 km across, and is about 49,000 years old. It has also been suggested that there are regular periods of bombardment. By looking at the rocks from different depths in the Earth's crust, scientists find layers of debris which indicate massive planet-wide events. The depth at which these debris layers are found is an indication of the date of the event, and some scientists believe that these impacts occur at regular intervals. They have suggested that this may be caused by a companion star, called Nemesis, orbiting the Sun every 26 million years. This star periodically disturbs the Oort cloud, and sends many comets hurtling into the centre of the solar system, so that impacts between planets and large meteorites become far more frequent. Perhaps the most intriguing possibility is the one put forward by some scientists who look into the origins of life: some believe that the seeds of the life found on Earth today could have been carried here on some unknown meteorite, millions of years ago. An artist's impression of the impact of a large meteorite on the Earth. (Courtesy of Meteor Crater Enterprises)
http://www2.le.ac.uk/departments/physics/research/xroa/astronomical-facilities-1/educational-guide/meteorites
13
18
The Birth of World Religion from the Divine Proportion Sumerian Shamash (sun), Sin (moon) and Ishtar (Venus) hover over the mountain. All three lineages are described and illustrated together over a mountain described in Sumerian tablets and the Hindu Vedas at least as early as 3000 BCE. This mountain is named Mount Sumeru, Masshu or Meru and is described as a supernal bridge or ladder extending from deep under the Earth's oceans up into space, with the summit representing a heaven filled with gods. 5-tier Mount Meru from the Jain Agamas. It was within this structure that the pantheon of all the other gods lived and danced. Along its central axis, the Axis Mundi, lived serpent deities called "asuras" which Venus or "Asherah" ruled over. As the celestial archetype for the Rod of Asclepius, Staff of Moses, Staff of Hermes and medical Caduceus, the serpents spiraling around the axis of Meru symbolized the natural Fibonacci spiral converging around a pyramid or Egyptian triangle to the golden ratio. This is described by Pingala perhaps as far back as 450 BC, as explained in this scholarly paper. Fibonacci series f(n) = f(n-1) + f(n-2) = 1 1 2 3 5 8 13 21 34 55 89 144 ... Here is a diagram showing Pingala's Maatraameru pyramid converging as the Fibonacci series to the golden ratio, golden mean or "divine proportion", commonly represented by the Greek letter Phi or Φ: Meru model based on Fibonacci series. The summit of this theoretical mountain became the gold-paved heaven prototype for all sacred mountains, such as Mount Olympus, Mount Sinai and Mount Moriah / Zion. It was also the universal template for temple building used around the world. Common to the Meru temple model is the number five. Babylonian ziggurats were built in five levels after depictions of this five-level mountain on a Uruk tablet, c. 3000 BC. Hindu funeral pyres are built similarly in five levels, as is the King's Chamber in the Great Pyramid of Egypt. Depictions of Mount Meru in the Jain Agamas parallel diagrams in Babylonia, showing five levels. The importance of five can be explained by the golden ratio Phi being derived from five in the Fibonacci series, or more compactly as Φ = (sqrt(5) + 1) / 2. Thus, we find the golden ratio to lie at the heart of ancient religion. But this is not simply due to a sequencing property of numbers. The planet Venus actually approximates the golden ratio over an 8-year period by retrograding and facing the Earth exactly five times to form a pentacle in the night sky. This is due to the fact that Venus orbits the sun thirteen times for every eight orbits of the Earth, creating the Fibonacci ratio 13:8 = 1.625, which is very close to the golden ratio Φ. This 8-year Venusian cycle was symbolized in ancient Sumer as an 8-point star. This same idea was portrayed in Egyptian hieroglyphs as a 5-point star or pentacle, where it meant "rising upwards toward the point of origin" and formed part of words such as "to bring up," "to educate," and "the teacher." (Cirlot, p. 310). Thus, the Venus pentacle was a symbol of enlightenment and knowledge in ancient times, translated into Latin as "Lucifer", the light bringer. As a symbol of the sacred feminine and the golden ratio - found in every intersection of a pentagram - this was the prime knowledge of nature at the heart of all pre-Christian religions. Unfortunately, it was considered pagan and thus suppressed by the Church and forbidden in the Christian Bible (e.g., the Tree of Knowledge and Apple of Knowledge). 8-year Venus cycle creating a pentacle with Earth. 
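The convergence claimed above can be checked with a few lines of arithmetic. The short sketch below (an illustration added here, not part of the original text) prints successive Fibonacci ratios next to Φ = (sqrt(5) + 1) / 2, showing how quickly the series settles toward the golden ratio and where the 13:8 = 1.625 step sits in that progression.

```python
# Sketch: successive Fibonacci ratios converging on the golden ratio (phi).
import math

phi = (math.sqrt(5) + 1) / 2      # the "divine proportion", ~1.618033988...

fibs = [1, 1]
while len(fibs) < 12:             # 1 1 2 3 5 8 13 21 34 55 89 144
    fibs.append(fibs[-1] + fibs[-2])

for a, b in zip(fibs, fibs[1:]):
    ratio = b / a
    print(f"{b:>3} / {a:<3} = {ratio:.6f}   (phi - ratio = {phi - ratio:+.6f})")

# The 13/8 step gives exactly 1.625, the ratio the text links to the
# 8-year Venus cycle.
```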
To suppress knowledge of a divine constant at work in nature, the Church had to also suppress the feminine aspect of God as represented by Venus and the Moon. In its place, the Jesus deity became a kind of super Green Man - a new kind of solar deity powerful enough to inherit both male and female attributes. While the lineage of male deities all follow the celestial pattern of the Sun's death - beneath The Crux constellation on the Winter solstice, then resurrected three days later on December 25th under the constellation of The Three Kings - this birth-rebirth concept actually seems to have been first introduced through the Sumerian Venus deity Inanna: “Fascinating is the account of Inanna's descent into ‘the land without return’, kur-nu-gi-a, a dry, dusty place, situated below the sweet waters of the earth. She decided to visit this dark realm, which belonged to her enemy and sister goddess, Ereshkigal, ‘the mistress of death’, and assert her own authority there. Having adorned herself with all her finery and left behind Ninshubur, her Vizier, with orders to rescue her should she not return, Inanna descended to kur-nu-gi-a. At each of its seven portals she was obliged to take off a garment or ornament, until at last she appeared naked before Ereshkigal and the seven judges of the dead. ‘At their cruel command, the defenceless goddess was turned into a corpse, which was hung on a stake.’ After three days and nights had passed ... They obtained access to Inanna's corpse and resurrected it with the ‘food of life’ and the ‘water of life’.” -- Oxford Dictionary of World Mythology. For those who might still doubt the ancients could have discovered the "divine proportion" in the orbital pattern of Venus and founded their religions on it, consider that Omen texts from the First Babylonian Dynasty (c. 1900-1660 BC) confirm that the old Mesopotamian sky watchers understood that Venus as the morning star and the evening star were the same thing. By the Seleucid period (c. 301-164 BC), we have a number of late goal-year texts in which the 8-year period was used to predict the appearances of Venus. These texts are clay tablets that list astronomical data for a given year and also for years specified by adding an appropriate number to the starting year. For Venus, the number to be added is eight. Accordingly, the pattern in the table for Venus will work for every eighth year from the year for which the table is prepared. One text from the Neo-Babylonian period (626-539 BC), referring to Venus as Dilbat, records "Dilbat 8 years behind thee come back ... 4 days thou shall subtract." Here, the Mesopotamian planet watcher is instructed to subtract four days to get the right date for Venus. Another much older text called the Tablets of Ammizaduga, inscribed around 1700-1600 BC, provide 21 years of Venus data, including dates of first and last appearances as a morning star and as an evening star along with durations of invisibility. It says: "If on the 25th of Tammuz Venus disappeared in the west, for 7 days remaining absent in the sky, and on the 2nd of Ab Venus was seen in the east, there will be rains in the land; desolation will be wrought. (year8)" - E. C. Krupp, Echoes of the Ancient Skies: The Astronomy of Lost Civilizations. Despite scribal errors, the texts clearly exhibit the 8-year Venus cycle and indicate Mesopotamians in the middle of the second millennium BC were aware of it. 
And, having tracked the path of Venus so meticulously, they would have certainly observed the five retrograde pauses and connected the dots, so to speak, to form the pentacle and discover the divine proportion in its intersections. Comparing this to the sacred Mayan Tzolkin calendar, based on the 260-day Venus cycle, we find the same 13:8=1.625 proportion discovered by the Babylonians: 260-day Tzolkin cycle * 5 Venus retrogrades = 1300 1300 days / 8-year Venus cycle = 162.5 days Whether discovered separately or shared through ancient trans-Atlantic voyages, the pentagonal orbit of Venus and the golden mean hidden within it was the founding principle in all of the world religions. Reflected in the geometries of flowers, seed patterns in fruit and the human anatomy, the pentacle was proof of a divine order in Nature. No advanced mathematics is needed - just careful observation and a little simple arithmetic was enough to reveal God to our ancestors. While few today are in the least bit aware of it, the celestial harmony between the Sun, Venus and Earth-Moon system is the first and true religion - everything else is a mythical veneer.
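For readers who want to check the 8-year figure, the sketch below uses standard modern values for Venus's orbital and synodic periods (these numbers come from astronomy references, not from the text above) to show that eight Earth years line up closely with thirteen Venus orbits and five Venus-Earth synodic periods, which is what produces the five retrograde passes and the pentacle pattern described above.

```python
# Sketch: the 8-year alignment between Earth, Venus and the Venus-Earth
# synodic cycle. Period values are standard modern figures, not taken
# from the text above.
EARTH_YEAR = 365.256       # days, Earth sidereal year
VENUS_YEAR = 224.701       # days, Venus sidereal year
VENUS_SYNODIC = 583.92     # days between successive inferior conjunctions

eight_years = 8 * EARTH_YEAR
print(f"8 Earth years      = {eight_years:.1f} days")
print(f"13 Venus orbits    = {13 * VENUS_YEAR:.1f} days")
print(f"5 synodic periods  = {5 * VENUS_SYNODIC:.1f} days")
print(f"Earth/Venus period = {EARTH_YEAR / VENUS_YEAR:.4f}  (compare 13/8 = 1.625)")
```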
http://www.interferencetheory.com/Blog/files/29f6b58ceedbb14d551866b109a6faa6-110.html
13
132
A cache is a small amount of memory which operates more quickly than main memory. Data is moved from the main memory to the cache, so that it can be accessed faster. Modern chip designers put several caches on the same die as the processor; designers often allocate more die area to caches than to the CPU itself. Increasing chip performance is typically achieved by increasing the speed and efficiency of the chip's cache. Cache memory performance is the most significant factor in achieving high processor performance. Cache works by storing a small subset of the external memory contents, typically out of its original order. Data and instructions that are being used frequently, such as a data array or a small instruction loop, are stored in the cache and can be read quickly without having to access the main memory. Cache runs at the same speed as the rest of the processor, which is typically much faster than the external RAM. This means that if data is in the cache, accessing it is faster than accessing memory. Cache helps to speed up processors because it works on the principle of locality. In this chapter, we will discuss several possible cache arrangements, in increasing order of complexity: - No cache, single-CPU, physical addressing - Single cache, single-CPU, physical addressing - Cache hierarchy: L1, L2, L3, etc. - cache replacement policies: associativity, random replacement, LRU, etc. - Split cache: I-cache and D-cache, on top of a unified cache hierarchy - caching with multiple CPUs - cache hardware that supports virtual memory addressing - the TLB as a kind of cache - how single-address-space virtual memory addressing interacts with cache hardware - how per-process virtual memory addressing interacts with cache hardware No cache Most processors today, such as the processors inside standard keyboards and mice, don't have any cache. Many historically important computers, such as Cray supercomputers, don't have any cache. The vast majority of software neither knows nor cares about the specific details of the cache, or if there is even a cache at all. Processors without a cache are usually limited in performance by the main memory access time. Without a cache, the processor fetches each instruction, one at a time, from main memory, and every LOAD or STORE goes to main memory before executing the next instruction. One way to improve performance is to substitute faster main memory. Alas, that usually has a financial limit: hardly anyone is willing to pay a penny a bit for a gigabyte of really fast main memory. Even if money is no object, eventually one reaches physical limits to main memory access time. Even with the fastest possible memory money can buy, the memory access time for a unified 1 gigabyte main memory is limited by the time it takes a signal to get from the CPU to the most distant part of the memory and back. Single cache Using exactly the same technology, it takes less time for a signal to traverse a small block of memory than a large block of memory. The performance of a processor with a cache is no longer limited by the main memory access time. Instead, it is usually limited by the (much faster) cache memory access time: if the cache access time of a processor could be decreased, the processor would have higher performance. However, cache memory is generally much easier to speed up than main memory: really fast memory is much more affordable when we only buy small amounts of it. 
If it will improve the performance of a system significantly, lots of people are willing to pay a penny a bit for a kilobyte of really fast cache memory. Principle of Locality There are two types of locality, spatial and temporal. Modern computer programs are typically loop-based, and therefore we have two rules about locality: - Spatial Locality - When a data item is accessed, it is likely that data items in sequential memory locations will also be accessed. Consider the traversal of an array, or the act of storing local variables on a stack. In these cases, when one data item is accessed, it is a good idea to load the surrounding memory area into the cache at the same time. - Temporal Locality - When a data item is accessed, it is likely that the same data item will be accessed again. For instance, variables are typically read and written to in rapid succession. It is a good idea to keep recently used items in the cache, and not over-write data that has been recently used. Hit or Miss A hit, when talking about cache, is when the processor finds the data it is looking for in the cache. A miss is when the processor looks for data in the cache, but the data is not available. In the event of a miss, the cache controller unit must gather the data from the main memory, which can cost more time for the processor. Measurements of "the hit ratio" are typically performed on benchmark applications. The actual hit ratio varies widely from one application to another. In particular, video and audio streaming applications often have a hit ratio close to zero, because each bit of data in the stream is read once for the first time (a compulsory miss), used, and then never read or written again. Even worse, many cache algorithms (in particular, LRU) allow this streaming data to fill the cache, pushing out of the cache information that will be used again soon (cache pollution). Cache performance A processor with a cache first looks in the cache for data (or instructions). On a miss, the processor then fetches the data (or instructions) from main memory. On a miss, this process takes *longer* than an equivalent processor without a cache. There are three ways a cache gives better net performance than a processor without a cache: - A hit (read from the cache) is faster than the time it takes a processor without a cache to fetch from main memory. The trick is to design the cache so we get hits often enough that their increase in performance more than makes up for the loss in performance on the occasional miss. (This requires a cache that is faster than main memory). - Multiprocessor computers with a shared main memory often have a bottleneck accessing main memory. When a local cache succeeds in satisfying memory operations without going all the way to main memory, main memory bandwidth is freed up for the other processors, and the local processor doesn't need to wait for the other processors to finish their memory operations. - Many systems are designed so the processor can often read multiple items from cache simultaneously -- either 3 separate caches for instruction, data, and TLB; or a multiported cache; or both -- which takes less time than reading the same items from main memory one at a time. The last two ways improve overall performance even if the cache is no faster than main memory. 
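The spatial-locality rule described above is easy to observe from ordinary code. The sketch below is illustrative only and assumes NumPy is available (which the original text does not mention): it sums the same large matrix twice, once along rows, which walks through memory sequentially, and once along columns, which strides through memory and defeats the cache. On typical hardware the column-order pass is noticeably slower, and the gap is far larger again in compiled languages.

```python
# Sketch: spatial locality in practice (assumes NumPy is installed).
# Row-by-row summation touches memory sequentially and is cache friendly;
# column-by-column summation strides through memory and misses far more often.
import time
import numpy as np

a = np.zeros((4000, 4000))        # ~128 MB of float64, far bigger than any cache

def sum_by_rows(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()    # contiguous, row-major access
    return total

def sum_by_columns(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()    # strided access, a different cache line each step
    return total

for name, fn in (("row-major", sum_by_rows), ("column-major", sum_by_columns)):
    start = time.perf_counter()
    fn(a)
    print(f"{name:>12}: {time.perf_counter() - start:.3f} s")
```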
A processor without a cache has a constant memory reference time T of T = Tm + E. A processor with a cache has an average memory access time of T = m × Tm + Th + E, where: - m is the miss ratio - Tm is the time to make a main memory reference - Th is the time to make a cache reference on a hit - E accounts for various secondary factors (memory refresh time, multiprocessor contention, etc.) Flushing the Cache When the processor needs data, it looks in the cache. If the data is not in the cache, it will then go to memory to find the data. Data from memory is moved to the cache and then used by the processor. Sometimes the entire cache contains useless or old data, and it needs to be flushed. Flushing occurs when the cache controller determines that the cache contains more potential misses than hits. Flushing the cache takes several processor cycles, so much research has gone into developing algorithms to keep the cache up to date. Cache Hierarchy Cache is typically divided between multiple levels. The most common levels are L1, L2, and L3. L1 is the smallest but the fastest. L3 is the largest but the slowest. Many chips do not have L3 cache. Some chips that do have an L3 cache actually have an external L3 module that exists on the motherboard between the microprocessor and the RAM. Inclusive, exclusive, and other cache hierarchy When there are several levels of cache, and a copy of the data in some location in main memory has been cached in the L1 cache, is there another copy of that data in the L2 cache? - No. Some systems are designed to have strictly exclusive cache levels: any particular location in main memory is cached in at most one cache level. - Yes. Other systems are designed to have strictly inclusive cache levels: whenever some location in main memory is cached in any one level, the same location is also cached in all higher levels. All the data in the L2 cache can also be found in L3 (and also in main memory). All the data in an L1 cache can also be found in L2 and L3 (and also in main memory). - Maybe. In some systems, such as the Intel Pentium 4, some data in the L1 cache is also in the L2 cache, while other data in the L1 cache is not in the L2 cache. This kind of cache policy does not yet have a popular name. Size of Cache There are a number of factors that affect the size of cache on a chip: - Moore's law provides an increasing number of transistors per chip. After around 1989, more transistors are available per chip than a designer can use to make a CPU. These extra transistors are easily converted to large caches. - Processor components become smaller as transistors become smaller. This means there is more area on the die for additional cache. - More cache means fewer delays in accessing data, and therefore better performance. Because of these factors, chip caches tend to get larger and larger with each generation of chip. Cache Tagging Cache can contain non-sequential data items in no particular order. A block of memory in the cache might be empty and contain no data at all. In order for hardware to check the validity of entries in the cache, every cache entry needs to maintain the following pieces of information: - A status bit to determine if the block is empty or full - The memory address of the data in the block - The data from the specified memory address (a "block in the cache", also called a "line in the cache") When the processor looks for data in the cache, it sends a memory address to the cache controller. The cache controller checks the address against all the address fields in the cache. 
If there is a hit, the cache controller returns the data. If there is a miss, the cache controller must pass the request to the next level of cache or to the main memory unit. The cache controller splits an effective memory address (MSB to LSB) into the tag, the index, and the block offset. Some authors refer to the block offset as simply the "offset" or the "displacement". The memory address of the data in the cache is known as the tag. Memory Stall Cycles If the cache misses, the processor will need to stall the current instruction until the cache can fetch the correct data from a higher level. The amount of time lost by the stall is dependent on a number of factors. The number of memory accesses in a particular program is denoted as Am; some of those accesses will hit the cache, and the rest will miss the cache. The rate of misses, equal to the probability that any particular access will miss, is denoted rm. The average amount of time lost for each miss is known as the miss penalty, and is denoted as Pm. We can calculate the amount of time wasted because of cache miss stalls as: Stall Time = Am × rm × Pm. Likewise, if we have the total number of instructions in a program, N, and the average number of misses per instruction, MPI, we can calculate the lost time as: Stall Time = N × MPI × Pm. If instead of lost time we measure the miss penalty in the amount of lost cycles, the calculation will instead produce the number of cycles lost to memory stalls, instead of the amount of time lost to memory stalls. Read Stall Times To calculate the amount of time lost to cache read misses, we can perform the same basic calculations as above: Read Stall Time = Ar × rr × Pr, where Ar is the average number of read accesses, rr is the miss rate on reads, and Pr is the time or cycle penalty associated with a read miss. Write Stall Times Determining the amount of time lost to write stalls is similar, but an additional additive term that represents stalls in the write buffer needs to be included: Write Stall Time = Aw × rw × Pw + Twb, where Aw, rw and Pw are the corresponding write-access quantities and Twb is the amount of time lost because of stalls in the write buffer. The write buffer can stall when the cache attempts to synchronize with main memory. Hierarchy Stall Times In a hierarchical cache system, miss time penalties can be compounded when data is missed in multiple levels of cache. If data is missed in the L1 cache, it will be looked for in the L2 cache. However, if it also misses in the L2 cache, there will be a double-penalty. The L2 needs to load the data from the main memory (or the L3 cache, if the system has one), and then the data needs to be loaded into the L1. Notice that missing in two cache levels and then having to access main memory takes longer than if we had just accessed memory directly. Design Considerations L1 cache is typically designed with the intent of minimizing the time it takes to make a hit. If hit times are sufficiently fast, a sizable miss rate can be accepted. Misses in the L1 will be redirected to the L2, and that is still significantly faster than accesses to main memory. L1 cache tends to have smaller block sizes, but benefits from having more available blocks for the same amount of space. In order to make L1 hit times minimal, L1 caches are typically direct-mapped or only narrowly (2-way) set associative. L2 cache, on the other hand, needs to have a lower miss rate to help avoid accesses to main memory. Accesses to L2 cache are much faster than accesses to memory, so we should do everything possible to ensure that we maximize our hit rate. For this reason, L2 cache tends to be fully associative with large block sizes. This is because memory is typically read and written in sequential memory cells, so large block sizes can take advantage of that sequentiality. L3 cache further continues this trend, with larger block sizes and a minimized miss rate. 
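To make these expressions concrete, here is a small sketch that plugs invented example figures (they are not from the text above or from any real CPU) into the average-access-time and stall-cycle formulas.

```python
# Sketch: plugging example numbers into the formulas above.
# All figures are invented for illustration, not measurements of any real CPU.

def average_access_time(hit_time, miss_ratio, memory_time, secondary=0.0):
    """Average memory access time: T = m * Tm + Th + E."""
    return miss_ratio * memory_time + hit_time + secondary

def stall_cycles(accesses, miss_rate, miss_penalty):
    """Cycles lost to cache-miss stalls: Am * rm * Pm."""
    return accesses * miss_rate * miss_penalty

# Hypothetical single-level cache: 1-cycle hit, 100-cycle main memory penalty.
hit, penalty = 1, 100
for miss in (0.01, 0.05, 0.10):
    print(f"miss ratio {miss:.0%}: "
          f"average access = {average_access_time(hit, miss, penalty):.1f} cycles")

# Hypothetical program: one million memory accesses at a 3% miss rate.
print(f"stall cycles = {stall_cycles(1_000_000, 0.03, penalty):,.0f}")
```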
Block size A very small cache block size increases the miss ratio, since a miss will fetch less data at a time. A very large cache block size also increases the miss ratio, since it causes the system to fetch a bunch of extra information that is used less than the data it displaces in the cache. In order to increase the read speed in a cache, many cache designers implement some level of associativity. An associative cache creates a relationship between the original memory location and the location in the cache where that data is stored. The relationship between the address in main memory and the location where the data is stored is known as the mapping of the cache. In this way, if the data exists in the cache at all, the cache controller knows that it can only be in certain locations that satisfy the mapping. A direct-mapped system uses a hashing algorithm to assign an identifier to a memory address. A common hashing algorithm for this purpose is the modulo operation. The modulo operation divides the address by a certain number, p, and takes the remainder r as the result. If a is the main memory address, and n is an arbitrary non-negative integer, then the hashing algorithm must satisfy the following equation: a mod p = (a + n × p) mod p, so any two addresses that differ by a multiple of p map to the same cache location. If p is chosen properly by the designer, data will be evenly distributed throughout the cache. In a direct-mapped system, each memory address corresponds to only a single cache location, but a single cache location can correspond to many memory locations. The image above shows a simple cache diagram with 8 blocks. All memory addresses therefore are calculated as n mod 8, where n is the memory address to read into the cache. Memory addresses 0, 8, and 16 will all map to block 0 in the cache. Cache performance is worst when multiple data items with the same hash value are read, and performance is best when data items are close together in memory (such as a sequential block of program instructions, or a sequential array). Most external caches (located on the motherboard, but external to the CPU) are direct-mapped or occasionally 2-way set associative, because it's complicated to build higher-associativity caches out of standard components. If there is such a cache, typically there is only one external cache on the motherboard, shared between all CPUs. The replacement policy for a direct-mapped cache is the simplest possible replacement policy: the new data must go in the one and only one place in the cache it corresponds to. (The old data at the location in the cache, if its dirty bit is set, must be written to main memory first). 2-Way Set Associative In a 2-way set associative cache system, the data value is hashed, but each hash value corresponds to a set of cache blocks. Each block contains multiple data cells, and a data value that is assigned to that block can be inserted anywhere in the block. The read speeds are quick because the cache controller can immediately narrow down its search area to the block that matches the address hash value. The LRU replacement policy for a 2-way set associative cache is one of the simplest replacement policies: The new data must go in one of a set of 2 possible locations. 
Those 2 locations share an LRU bit that is updated whenever either one is read or written, indicating which one of the two entries in the set was the most-recently used. The new data goes in the *other* location (the least-recently used location). (The old data at that LRU location in the cache, if its dirty bit is set, must be written to main memory first). 2 way skewed associative The 2-way skewed associative cache is "the best tradeoff for .... caches whose sizes are in the range 4K-8K bytes" -- André Seznec. "A Case for Two-Way Skewed-Associative Caches". http://citeseer.ist.psu.edu/seznec93case.html. Retrieved 2007-12-13. Fully Associative In a fully-associative cache, hash algorithms are not employed and data can be inserted anywhere in the cache that is available. A typical algorithm will write a new data value over the oldest unused data value in the cache. This scheme, however, requires the time an item is loaded or accessed to be stored, which can require lots of additional storage. Cache Misses There are three basic types of misses in a cache: - Conflict Misses - Compulsory Misses - Capacity Misses Conflict Misses A conflict miss occurs in a direct-mapped and 2-way set associative cache when two data items are mapped to the same cache locations. In a conflict miss, a recently used data item is overwritten with a new data item. Compulsory Misses The image above shows the difference between a conflict miss and a compulsory miss. A compulsory miss is an instance where the cache must miss because it does not contain any data. For instance, when a processor is first powered-on, there is no valid data in the cache and the first few reads will always miss. The compulsory miss demonstrates the need for a cache to differentiate between a space that is empty and one that is full. Consider what happens when we turn the processor on and reset all the address values to zero: an attempt to read a memory location with a hash value of zero would appear to hit. We do not want the cache to hit if the blocks are empty. Capacity Misses Capacity misses occur when the cache is not large enough to hold all the data a program is actively using, so previously cached items must be evicted and later re-fetched. Cache Write Policy Data writes require the same time delay as a data read. For this reason, caching systems typically will write data to the cache as well. However, when writing to the cache, it is important to ensure that the data is also written to the main memory, so it is not overwritten by the next cache read. If data in the cache is overwritten without being stored in main memory, the data will be lost. It is imperative that caches write data to the main memory, but exactly when that data is written to the main memory is called the write policy. There are two write policies: write through and write back. Write operations take as long to perform as read operations in main memory. Many cached processors therefore will perform write operations on the cache as well as read operations. Write Through When data is written to memory, a write request is sent simultaneously to the main memory and to the cache. This way, the result data is available in the cache before it can be written (and then read again) from the main memory. When writing to the cache, it's important to make sure the main memory and the cache are synchronized and they contain the same data. In a write through system, data that is written to the cache is immediately written to the main memory as well. If many writes need to occur in sequential instructions, the write buffer may get backed up and cause a stall. 
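The mapping and replacement behaviour described above is easiest to see in a toy model. The sketch below is purely illustrative (the sizes and address stream are invented, and it is not the controller design of any particular CPU): it simulates a small 2-way set-associative cache with a per-set LRU bit, assumes a write-through policy so evicted lines never need to be written back, and counts hits and misses.

```python
# Sketch: a toy 2-way set-associative cache with a per-set LRU bit and a
# write-through policy. Sizes and the address stream are invented examples.

BLOCK_SIZE = 16   # bytes per cache line
NUM_SETS = 8      # set index = (address // BLOCK_SIZE) % NUM_SETS

class TwoWaySetAssociativeCache:
    def __init__(self):
        # Each set has two ways; each way stores the tag of the block it
        # holds, or None if the way is empty (the "status bit").
        self.tags = [[None, None] for _ in range(NUM_SETS)]
        # One LRU bit per set: which of the two ways was least recently used.
        self.lru = [0] * NUM_SETS
        self.hits = 0
        self.misses = 0

    def access(self, address):
        block = address // BLOCK_SIZE
        index = block % NUM_SETS
        tag = block // NUM_SETS
        for way in (0, 1):
            if self.tags[index][way] == tag:      # hit
                self.hits += 1
                self.lru[index] = 1 - way         # the other way is now LRU
                return True
        # Miss: refill the least-recently-used way. With write-through there
        # is never dirty data, so the old line can simply be overwritten.
        victim = self.lru[index]
        self.tags[index][victim] = tag
        self.lru[index] = 1 - victim
        self.misses += 1
        return False

cache = TwoWaySetAssociativeCache()
for addr in (0, 4, 16, 128, 0, 256, 384, 0, 16):   # arbitrary byte addresses
    cache.access(addr)
print(f"hits={cache.hits}, misses={cache.misses}")
```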
Write Back In a write back system, the cache controller keeps track of which data items have been synchronized to main memory. The data items which have not been synchronized are called "dirty", and the cache controller prevents dirty data from being overwritten. The cache controller will synchronize data during processor cycles where no other data is being written to the cache. Write bypass Some processors send writes directly to main memory, bypassing the cache. If that location is *not* already cached, then nothing more needs to be done. If that location *is* already cached, then the old data in the cache(s) needs to be marked "invalid" ("stale") so if the CPU ever reads that location, the CPU will read the latest value from main memory rather than some earlier value(s) in the cache(s). Stale Data It is possible for the data in main memory to be changed by a component besides the microcontroller. For instance, many computer systems have memory-mapped I/O, or a DMA controller that can alter the data. Some computer systems have several CPUs connected to a common main memory. It is important that the cache controller check that data in the cache is correct. Data in the cache that is old and may be incorrect is called "stale". The three most popular approaches to dealing with stale data ("cache coherency protocols") are: - Use simple cache hardware that ignores what the other CPUs are doing. - Set all caches to write-through all STOREs (write-through policy). Use additional cache hardware to listen in ("snoop") whenever some other device writes to main memory, and invalidate local cache line whenever some other device writes to the corresponding cached location in main memory. - Design caches to use the MESI protocol. With simple cache hardware that ignores what the other CPUs are doing, cache coherency is maintained by the OS software. The OS sets up each page in memory as either (a) exclusive to one particular CPU (which is allowed to read, write, and cache it); all other CPUs are not allowed to read or write or cache that page; (b) shared read/write between CPUs, and set to "non-cacheable", in the same way that memory-mapped I/O devices are set to non-cacheable; or (c) shared read-only; all CPUs are allowed to cache but not write that page. Split cache High-performance processors invariably have 2 separate L1 caches, the instruction cache and the data cache (I-cache and D-cache). This "split cache" has several advantages over a unified cache: - Wiring simplicity: the decoder and scheduler are only hooked to the I-cache; the registers and ALU and FPU are only hooked to the D-cache. - Speed: the CPU can be reading data from the D-cache, while simultaneously loading the next instruction(s) from the I-cache. Multi-CPU systems typically have a separate L1 I-cache and L1 D-cache for each CPU, each one direct-mapped for speed. Open question: To speed up running Java applications in a JVM (and similar interpreters and CPU emulators), would it help to have 3 separate caches -- a machine instruction cache indexed by the program counter PC, a byte code cache indexed by the VM's instruction pointer IP, and a data cache ? On the other hand, in a high-performance processor, other levels of cache, if any -- L2, L3, etc. -- as well as main memory -- are typically unified, although there are several exceptions (such as the Itanium 2 Montecito). 
The advantages of a unified cache (and a unified main memory) are: - Some programs spend most of their time in a small part of the program processing lots of data. Other programs run lots of different subroutines against a small amount of data. A unified cache automatically balances the proportion of the cache used for instructions and the proportion used for data -- to get the same performance on a split cache would require a larger cache. - when instructions are written to memory -- by an OS loading an executable file from storage, or from a just-in-time compiler translating bytecode to executable code -- a split cache requires the CPU to flush and reload the instruction cache; a unified cache doesn't require that. error detection Each cache row entry typically has error detection bits. Since the cache only holds a copy of information in the main memory (except for the write-back queue), when an error is detected, the desired data can be re-fetched from the main memory -- treated as a kind of miss-on-invalid -- and the system can continue as if no error occurred. A few computer systems use Hamming error correction to correct single-bit errors in the "data" field of the cache without going all the way back to main memory. Specialized cache features Many CPUs use exactly the same hardware for the instruction cache and the data cache. (And, of course, the same hardware is used for instructions as for data in a unified cache. The revolutionary idea of a Von Neumann architecture is to use the same hardware for instructions and for data in the main memory itself). For example, the Fairchild CLIPPER used 2 identical CAMMU chips, one for the instruction cache and one for the data cache. Because the various caches are used slightly differently, some CPU designers customize each cache in different ways. - Some CPU designers put the "branch history bits" used for branch prediction in the instruction cache. There's no point to adding such information to a data-only cache. - Many instruction caches are designed in such a way that the only way to deal with stale instructions is to invalidate the entire cache and reload. Data caches are typically designed with more fine-grained response, with extra hardware that can invalidate and reload only the particular cache lines that have gone stale. - The virtual-to-physical address translation process often has a lot of specialized hardware associated with it to make it go faster -- the TLB cache, hardware page-walkers, etc. We will discuss this in more detail in the next chapter, Virtual Memory. - Alan Jay Smith. "Design of CPU Cache Memories". Proc. IEEE TENCON, 1987. - Paul V. Bolotoff. "Functional Principles of Cache Memory". 2007. - John L. Hennessy, David A. Patterson. "Computer Architecture: A Quantitative Approach". 2011. ISBN 012383872X, ISBN 9780123838728. page B-9. - David A. Patterson, John L. Hennessy. "Computer organization and design: the hardware/software interface". 2009. ISBN 0123744938, ISBN 9780123744937 "Chapter 5: Large and Fast: Exploiting the Memory Hierarchy". p. 484. - Gene Cooperman. "Cache Basics". 2003. - Ben Dugan. "Concerning Caches". 2002. - Harvey G. Cragon. "Memory systems and pipelined processors". 1996. ISBN 0867204745, ISBN 9780867204742. "Chapter 4.1: Cache Addressing, Virtual or Real" p. 209 - Paul V. Bolotoff. "Functional Principles of Cache Memory". 2007. - Micro-Architecture "Skewed-associative caches have ... major advantages over conventional set-associative caches." - Paul V. Bolotoff. 
Functional Principles of Cache Memory. 2007. Further reading - Parallel Computing and Computer Clusters/Memory - simulators available for download at University of Maryland: Memory-Systems Research: "Computational Artifacts" can be used to measure cache performance and power dissipation for a microprocessor design without having to actually build it. This makes it much quicker and cheaper to explore various tradeoffs involved in cache design. ("Given a fixed size chip, if I sacrifice some L2 cache in order to make the L1 cache larger, will that make the overall performance better or worse?" "Is it better to use an extremely fast cycle time cache with low associativity, or a somewhat slower cycle time cache with high associativity giving a better hit rate?")
http://en.wikibooks.org/wiki/Microprocessor_Design/Cache
13
10
Illustration courtesy Mark A. Garlick, University of Warwick Published June 16, 2011 Earlier this year astronomers spied a burst of high-energy gamma rays emanating from the center of a dwarf galaxy 3.8 billion light-years away. The odd flash, dubbed Sw 1644+57, is one of the brightest and longest gamma ray bursts (GRBs) yet seen. In visible light and infrared wavelengths, the burst is as bright as a hundred billion suns. (Related: "Ultrabright Gamma-ray Burst 'Blinded' NASA Telescope.") "We believe this explosive event was caused by a supermassive black hole ten million times the mass of the sun shredding a star that got too close to its gravitational pull," said study leader Joshua Bloom, an astronomer at the University of California, Berkeley. "The mass of the star fell into the black hole, but along the way it heated up and produced a burst of energy in the form of a powerful jet of radiation, [which] we were able to detect through space-based observatories." While supermassive black holes are thought to be lurking at the hearts of most large galaxies, events such as a star getting eaten may happen only once every hundred million years in any given galaxy. "What makes this event even more rare is that we didn't just get a burst of x-ray emissions from the infalling stellar gas, but some of it actually got spit out by the black hole in the form of a gamma ray jet, and we just happen to be looking down the barrel of that jet," Bloom said. "So I would say it's a combination of actually catching a monster black hole in the process of feeding on an unfortunate star that got too close to it, and because we are in a fairly special geometry." NASA's Swift satellite first detected the burst on March 28, 2011, and both the Hubble Space Telescope and the Chandra X-ray Observatory followed the burst's progress. The explosive event was initially thought to be an ordinary gamma ray burst. Originating billions of light-years away, these events are seen every few days across the universe, and they're thought to occur when very massive stars blow up or when two giant stars collide. "Most of these [common gamma ray bursts] are detected and quickly fade away within the course of a day," Bloom said. "But now after two and a half months, this new GRB is still going strong. Because it stands out so much observationally, this decidedly makes it something different from any other GRB we have ever seen before." In addition, common gamma ray bursts are normally spied off-center in the main bodies of galaxies. But Sw 1644+57 was found in an unusual location—at the core of a galaxy. "That's the prime reason we started suspecting early on that a supermassive black hole was involved, because we know [galactic cores are] where these beasts reside." Scientists already knew that actively feeding galactic black holes emit huge amounts of radiation, because material falling in gets superheated as it nears the black hole's maw. Sw 1644+57 is surprising, though, because of its spontaneous nature. "What's amazing," Bloom said, "is that we have here an otherwise quiescent, starving black hole that has decided to go on a sudden feeding frenzy for a short period of time." Our own Milky Way also has a quiet supermassive black hole at its center. The new discovery shows it's possible for our cosmic monster to spew powerful radiation jets should a star fall in, Bloom added. 
Still, because such events are so rare—and the resulting jets are so narrowly focused—it's unlikely we'd detect anything like Sw 1644+57 shooting from our galaxy for millions of years. The black hole eating a star is described in this week's issue of the journal Science.
http://news.nationalgeographic.com/news/2011/06/110615-black-holes-eat-star-galaxy-nasa-swift-gamma-rays-space-science/
13
35
1 These tables present experimental statistics showing the Gross Value of Irrigated Agricultural Production (GVIAP). Annual data are presented for the reference periods from 2000–01 to 2008–09 for Australia, States and Territories, for the Murray-Darling Basin for selected years (2000–01, 2005–06, 2006–07, 2007–08 and 2008–09) and for Natural Resource Management (NRM) regions from 2005–06 to 2008–09, for key agricultural commodity groups. 2 The tables also present the total gross value of agricultural commodities (GVAP) and the Volume of Water Applied (in megalitres) to irrigated crops and pastures. WHAT IS GVIAP? 3 GVIAP refers to the gross value of agricultural commodities that are produced with the assistance of irrigation. The gross value of commodities produced is the value placed on recorded production at the wholesale prices realised in the marketplace. Note that this definition of GVIAP does not refer to the value that irrigation adds to production, or the "net effect" that irrigation has on production (i.e. the value of a particular commodity that has been irrigated "minus" the value of that commodity had it not been irrigated) - rather, it simply describes the gross value of agricultural commodities produced with the assistance of irrigation. 4 ABS estimates of GVIAP attribute all of the gross value of production from irrigated land to irrigated agricultural production. For this reason, extreme care must be taken when attempting to use GVIAP figures to compare different commodities - that is, the gross value of irrigated production should not be used as a proxy for determining the highest value water uses. Rather, it is a more effective tool for measuring changes over time or comparing regional differences in irrigated agricultural production. 5 Estimating the value that irrigation adds to agricultural production is difficult. This is because water used to grow crops and irrigate pastures comes from a variety of sources. In particular, rainwater is usually a component of the water used in irrigated agriculture, and the timing and location of rainfall affects the amount of irrigation water required. Other factors such as evaporation and soil moisture also affect irrigation water requirements. These factors contribute to regional and temporal variations in the use of water for irrigation. In addition, water is not the only input to agricultural production from irrigated land - fertiliser, land, labour, machinery and other inputs are also used. To separate the contribution that these factors make to total production is not currently possible. Gross value of agricultural production 6 These estimates are based on data from Value of Agricultural Commodities Produced (cat. no. 7503.0), which are derived from ABS agricultural censuses and surveys. During the processing phase of the collections, data checking was undertaken to ensure key priority outputs were produced to high quality standards. As a result, some estimates will have been checked more comprehensively than others. 7 It is not feasible to check every item reported by every business, and therefore some anomalies may arise, particularly for small area estimates (e.g. NRM regions). To present these items geographically, agricultural businesses are allocated to a custom region based on where the business reports the location of their 'main agricultural property'. Anomalies can occur if location details for agricultural businesses are not reported precisely enough to accurately code their geographic location. 
In addition, some businesses operate more than one property, and some large farms may operate across custom region and NRM boundaries, but are coded to a single location. As a result, in some cases, a particular activity may not necessarily occur in the area specified and the Area of Holding and other estimates of agricultural activity may exceed or not account for all activities within that area. For these reasons, the quality of estimates may be lower for some NRMs and other small area geographies. 8 Gross value of agricultural production (GVAP) is the value placed on recorded production of agricultural commodities at the wholesale prices realised in the market place. It is also referred to as the Value of Agricultural Commodities Produced (VACP). 9 In 2005–06, the ABS moved to a business register sourced from the Australian Taxation Office's Australian Business Register (ABR). Previously the ABS had maintained its own register of agricultural establishments. 10 The ABR-based register consists of all businesses on the ABR classified to an 'agricultural' industry, as well as businesses which have indicated they undertake agricultural activities. All businesses with a turnover of $50,000 or more are required to register on the ABR. Many agricultural businesses with a turnover of less than $50,000 have also chosen to register on the ABR. 11 Moving to the ABR-based register required changes to many of the methods used for compiling agriculture commodity and water statistics. These changes included: using new methods for determining whether agricultural businesses were 'in-scope' of the collection; compiling the data in different ways; and improving estimation and imputation techniques. 12 The ABR-based frame was used for the first time to conduct the 2005–06 Agricultural Census. This means that Value of Agricultural Commodities Produced (VACP) data are not directly comparable with historical time series for most commodities. For detailed information about these estimates please refer to the Explanatory Notes in Value of Agricultural Commodities Produced (cat. no. 7503.0). 13 Statistics on area and production of crops relate in the main to crops sown during the reference year ended 30 June. Statistics of perennial crops and livestock relate to the position as at 30 June and the production during the year ended on that date, or of fruit set by that date. Statistics for vegetables, apples, pears and for grapes, which in some states are harvested after 30 June, are collected by supplementary collections. For 2005–06 to 2007–08, the statistics for vegetables, apples, pears and for grapes included in this product are those collected in the 2005–06 Agriculture Census at 30 June 2006, the 2006–07 Agricultural Survey at 30 June 2007 and the Agricultural Resource Management Survey 2007–08 at 30 June 2008, not those collected by the supplementary collections. For this reason the GVAP (VACP) estimates may differ from the published estimates in the products Agricultural Commodities: Small Area Data, Australia, 2005–06 (cat. no. 7125.0) and Value of Agricultural Commodities Produced, Australia (cat. no. 7503.0). 14 Further, the GVAP (Gross Value of Agricultural Production, also referred to as VACP) and GVIAP estimates for 2005–06 and 2006–07 shown in this product have been revised where necessary, for example, when a new price has become available for a commodity after previous publications. 
15 The VACP Market Prices survey collected separate prices for undercover and outdoor production for the first time in 2005–06. This enabled the ABS to better reflect the value of undercover and outdoor production for nurseries and cut flowers. The value of the commodity group “nurseries, cut flowers and cultivated turf” was significantly greater from 2005–06, reflecting an increase in production and an improved valuation of undercover production for nurseries and cut flowers. Volume of water applied 16 'Volume of water applied' refers to the volume of water applied to crops and pastures through irrigation. 17 This information is sourced from the ABS Agriculture Census for 2000–01 and 2005–06 and from the ABS Agricultural Survey for all other years, except for 2002–03 when ABS conducted the Water Survey, Agriculture. As explained above in paragraphs 9–12, there was a change to the register of businesses used for these collections, which may have some impact on the estimates. For further information refer to the Explanatory Notes for Water Use on Australian Farms (cat. no. 4618.0). 18 Volume of water applied is expressed in megalitres. A megalitre is one million litres, or one thousand kilolitres. AGRICULTURAL COMMODITY GROUPS 19 GVIAP is calculated for each irrigated 'commodity group' produced by agricultural businesses. That is, GVIAP is generally not calculated for individual commodities, rather for groups of "like" commodities according to irrigated commodity grouping on the ABS Agricultural Census/Survey form. The irrigated commodity groups vary slightly on the survey form from year-to-year. The commodity groups presented in this publication are: - cereals for grain and seed - total hay production - cereals for hay - pastures cut for hay or silage (including lucerne for hay) - pastures for seed production - sugar cane - other broadacre crops (see Appendix 1 for detail) - fruit trees, nut trees, plantation or berry fruits (excluding grapes) - vegetables for human consumption and seed - nurseries, cut flowers and cultivated turf - dairy production - production from meat cattle - production from sheep and other livestock (excluding cattle) 20 Note that the ABS Agricultural Census/Survey collects area and production data for a wide range of individual commodities within the irrigated commodity groups displayed in the list above. Appendix 1 provides more detail of which commodities comprise these groupings. 21 There were differences in data items (for production, area grown and area irrigated) collected on the Agricultural Census/Surveys in different years. This affects the availability of some commodities for some years. Appendix 2 outlines some of the specific differences and how they have been treated in compiling the estimates for this publication, thereby enabling the production of GVIAP estimates for each of the commodity groups displayed in the list above for every year from 2000–01 to 2008–09. 22 Note that in all GVAP tables, “Total GVAP” includes production from pigs, poultry, eggs, honey (2001 only) and beeswax (2001 only), for completeness. These commodities are not included in GVIAP estimates at all because irrigation is not applicable to them. METHOD USED TO CALCULATE GVIAP 23 The statistics presented here calculate GVIAP at the unit (farm) level, using three simple rules: a. If the area of the commodity group irrigated = the total area of the commodity group grown/sown, then GVIAP = GVAP for that commodity group; b. 
If the area of the commodity group irrigated is greater than zero but less than the total area of the commodity group grown/sown, then a “yield formula” is applied, with a “yield difference factor”, to calculate GVIAP for the irrigated area of the commodity group; c. If the area of the commodity group irrigated = 0, then GVIAP = 0 for that commodity group. 24 These three rules apply to most commodities; however there are some exceptions as outlined below in paragraph 26. It is important to note that the majority of cases follow rules 1 and 3; that is, the commodity group on a particular farm is either 100% irrigated or not irrigated at all. For example, in 2004–05, 90% of total GVAP came from commodity groups that were totally irrigated or not irrigated at all. Therefore, only 10% of GVAP had to be "split" into either "irrigated" or "non-irrigated" using the “yield formula” (described below). The yield formula is explained in full in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). 25 Outlined here is the yield formula referred to in paragraph 20: Ai = area of the commodity under irrigation (ha) Yi = estimated irrigated production for the commodity (t or kg) P = unit price of production for the commodity ($ per t or kg) Q = total quantity of the commodity produced (t or kg) Ad = area of the commodity that is not irrigated (ha) Ydiff = yield difference factor, i.e. estimated ratio of irrigated to non-irrigated yield for the commodity produced Yield difference factors 26 Yield difference factors are the estimated ratio of irrigated to non-irrigated yield for a given commodity group. They are calculated for a particular commodity group by taking the yield (production per hectare sown/grown) of all farms that fully irrigated the commodity group and dividing this "irrigated" yield by the yield of all farms that did not irrigate the commodity group. The yield difference factors used here were determined by analysing data from 2000–01 to 2004–05 and are reported for each commodity group in Appendix 1 of the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). It is anticipated that the yield difference factors will be reviewed following release of data from the 2010-11 Agriculture Census. 27 In this report "yield" is defined as the production of the commodity (in tonnes, kilograms or as a dollar value) per area grown/sown (in hectares). Commodity groups for which the yield formula is used 28 The GVIAP for the following commodities have been calculated using the yield formula, with varying yield differences: Cereals for grain/seed - yield formula with yield difference of 2 Cereals for hay - yield formula with yield difference of 1.5 Pastures for hay - yield formula with yield difference of 2 Pastures for seed - yield formula with yield difference of 2 Sugar cane - yield formula with yield difference of 1.3 (except for 2008–09 - see paragraphs 29 and 31 below) Other broadacre crops - yield formula with yield difference of 2 Fruit and nuts - yield formula with yield difference of 2 Grapes - yield formula with yield difference of 1.2 (except for 2008–09 - see paragraphs 29 and 31 below) Vegetables for human consumption and seed - yield formula with yield difference of 1 Nurseries, cut flowers and cultivated turf - yield formula with yield difference of 1 Note: a yield difference of 1 implies no difference in yield between irrigated and non-irrigated production. 
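The yield formula itself appears to have been lost in extraction; only the variable definitions in paragraph 25 survive. Purely as an illustration, and assuming the apportionment implied by those definitions (each irrigated hectare treated as yielding Ydiff times as much as a non-irrigated hectare), a minimal sketch of the calculation might look like the following; the function name and the example figures are hypothetical, not ABS values.

def gviap_yield_formula(price, total_quantity, area_irrigated, area_dryland, yield_diff):
    """Presumed form of the yield formula in paragraph 25 (an assumption, not the ABS source).

    Total production is apportioned between irrigated and non-irrigated area,
    with each irrigated hectare weighted by the yield difference factor.
    """
    irrigated_share = (area_irrigated * yield_diff) / (area_irrigated * yield_diff + area_dryland)
    irrigated_production = total_quantity * irrigated_share      # Yi, estimated irrigated production
    return price * irrigated_production                          # GVIAP for the commodity group

# Hypothetical example: 100 t of cereal for grain from 40 ha irrigated and 60 ha dryland,
# yield difference factor 2 (as listed above), unit price $300 per tonne.
print(gviap_yield_formula(300, 100, 40, 60, 2))   # ~$17,143 of the $30,000 GVAP attributed to irrigation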
29 However not all agricultural commodity groups can be satisfactorily calculated using this formula, so the GVIAP for a number of commodity groups has been calculated using other methods: Rice - assume all rice production is irrigated. Cotton - production formula (see paragraph 31). Grapes - production formula (2008–09 only - see paragraph 31). Sugar - production formula (2008–09 only - see paragraph 31). Dairy production - assume that if there is any irrigation of grazing land on a farm that is involved in any dairy production, then all dairy production on that farm is classified as irrigated. Meat cattle, sheep and other livestock - take the average of two other methods: 1. calculate the ratio of the area of irrigated grazing land to the total area of grazing land and multiply this ratio by the total production for the commodity group (this is referred to as the “area formula”); 2. if the farm has any irrigation of grazing land then assume that all livestock production on the farm is irrigated. 30 For more information on the “area formula” for calculating GVIAP please refer to the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). 31 In 2008–09, cotton, grapes and sugar were the only commodities for which the production formula was used to estimate GVIAP. This formula is based on the ratio of irrigated production (kg or tonnes) to total production (kg or tonnes) and is outlined in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). The production formula is used for these three commodities because in 2008–09 they were the only commodities for which actual irrigated production was collected on the ABS agricultural censuses and surveys. Note that prior to 2008–09, cotton was the only commodity for which irrigated production data was collected, except in 2007–08, when there were no commodities for which this data was collected. Qi = irrigated production of cotton (kg) Qd = non-irrigated production of cotton (kg) P = unit price of production for cotton ($ per kg) Qt = total quantity of cotton produced (kg) = Qi + Qd 32 Most of the irrigated commodity groups included in these tables are irrigated simply by the application of water directly on to the commodity itself, or the soil in which it is grown. The exception relates to livestock, which obviously includes dairy. For example, the GVIAP of "dairy" simply refers to all dairy production from dairy cattle that grazed on irrigated pastures or crops. Estimates of GVIAP for dairy must be used with caution, because in this case the irrigation is not simply applied directly to the commodity, rather it is applied to a pasture or crop which is then eaten by the animal from which the commodity is derived (milk). Therefore, for dairy production, the true net contribution of irrigation (i.e. the value added by irrigation, or the difference between irrigated and non-irrigated production) will be much lower than the total irrigation-assisted production (the GVIAP estimate). 33 The difference between (a) the net contribution of irrigation to production and (b) the GVIAP estimate, is probably greater for livestock grazing on irrigated crops/pastures than for commodity groups where irrigation is applied directly to the crops or pastures. 
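As with the yield formula, the production formula referred to in paragraph 31 did not survive extraction; only the variable definitions for cotton remain. On the assumption that it simply values the directly reported irrigated quantity at the commodity's unit price (equivalently, GVAP multiplied by the ratio Qi/Qt), a sketch with hypothetical numbers follows.

def gviap_production_formula(price, irrigated_quantity):
    # Presumed form: GVIAP = GVAP * (Qi / Qt) = (P * Qt) * (Qi / Qt) = P * Qi.
    # This is an assumption based on the variable definitions in paragraph 31.
    return price * irrigated_quantity

# Hypothetical example: 90,000 kg of a 100,000 kg cotton crop was irrigated, at $2.00 per kg.
print(gviap_production_formula(2.00, 90_000))   # $180,000 of the $200,000 GVAP attributed to irrigation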
34 Similarly, estimates of GVIAP for all other livestock (meat cattle, sheep and other livestock) must be treated with caution, because, as for dairy production, the issues around irrigation not being directly applied to the commodity also apply to these commodity groups. 35 The estimates presented in this product are underpinned by estimates of the Value of Agricultural Commodities Produced (VACP), published annually in the ABS publication Value of Agricultural Commodities Produced (cat. no. 7503.0). VACP estimates (referred to as GVAP in this product) are calculated by multiplying the wholesale price by the quantity of agricultural commodities produced. The price used in this calculation is the average unit value of a given commodity realised in the marketplace. Price information for livestock slaughterings and wool is obtained from ABS collections. Price information for other commodities is obtained from non-ABS sources, including marketing authorities and industry sources. It is important to note that prices are state-based average unit values. 36 Sources of price data and the costs of marketing these commodities vary considerably between states and commodities. Where a statutory authority handles marketing of the whole or a portion of a product, data are usually obtained from this source. Information is also obtained from marketing reports, wholesalers, brokers and auctioneers. For all commodities, values are in respect of production during the year (or season) irrespective of when payments were made. For that portion of production not marketed (e.g. hay grown on farm for own use, milk used in farm household, etc.), estimates are made from the best available information and, in general, are valued on a local value basis. 37 It should be noted that the estimates for GVIAP are presented in current prices; that is, estimates are valued at the commodity prices of the period to which the observation relates. Therefore changes between the years shown in these tables partly reflect the effects of price change. MURRAY-DARLING BASIN (MDB) 38 The gross value of irrigated agricultural production for the MDB is presented for 2000–01 and 2005–06 through to 2008–09. The 2000–01 and 2005–06 data are available because they are sourced from the Agricultural Census which supports finer regional estimates, while the 2006–07, 2007–08 and 2008–09 data are able to be produced because of the improved register of agricultural businesses (described in paragraphs 9–12). 39 The data for the Murray-Darling Basin (MDB) presented in this publication for 2000–01 were derived from a concordance of Statistical Local Area (SLA) regions falling mostly within the MDB. The data for the MDB for 2006–07, 2007–08 and 2008–09 were derived from a concordance of Natural Resource Management (NRM) regions falling mostly within the MDB. The MDB data for 2005–06 were derived from geo-coded data. As a result, there will be small differences in MDB data across years and this should be taken into consideration when comparisons are made between years. COMPARABILITY WITH PREVIOUSLY PUBLISHED ESTIMATES 40 Because of this new methodology, the experimental estimates presented here are not directly comparable with other estimates of GVIAP released by ABS in Water Account, Australia, 2000–01 (cat. no. 4610.0), Characteristics of Australia’s Irrigated Farms, 2000–01 to 2003–04 (cat. no. 4623.0), Water Account, Australia, 2004–05 (cat. no. 4610.0) and Water and the Murray-Darling Basin, A Statistical Profile 2000–01 to 2005–06 (cat. no. 4610.0.55.007). 
However, the GVIAP estimates published in the Water Account Australia 2008–09 are the same as those published in this publication. 41 As described above, 'Volume of water applied' refers to the volume of water applied to crops and pastures through irrigation. The estimates of 'Volume of water applied' presented in this publication are sourced directly from ABS Agricultural Censuses and Surveys and are the same as those presented in Water Use On Australian Farms (cat.no. 4618.0). Note that these volumes are different to the estimates of agricultural water consumption published in the 2008–09 Water Account Australia (cat. no. 4610.0) as the Water Account Australia estimates focus on total agricultural consumption (i.e. irrigation plus other agricultural water uses) and are compiled using multiple data sources (not just ABS Agricultural Censuses and Surveys). 42 The differences between the methods used to calculate the GVIAP estimates previously released and the method used to produce the estimates presented in this product, are explained in detail in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production, 2008 (cat. no. 4610.0.55.006). 43 In particular some commodity groups will show significant differences with what was previously published. These commodity groups include dairy production, meat production and sheep and other livestock production. 44 The main reason for these differences is that previous methods of calculating GVIAP estimates for these commodity groups were based on businesses being classified to a particular industry class (according to the industry classification ANZSIC), however the new method is based on activity. For example, for dairy production, previous methods of calculating GVIAP only considered dairy production from dairy farms which were categorised as such according to ANZSIC. The new method defines dairy production, in terms of GVIAP, as “all dairy production on farms on which any grazing land (pastures or crops used for grazing) has been irrigated”. Therefore, if there is any irrigation of grazing land on a farm that is involved in any dairy production (regardless of the ANZSIC classification of that farm), then all dairy production on that particular farm is classified as irrigated. 45 Where figures for individual states or territories have been suppressed for reasons of confidentiality, they have been included in relevant totals. RELIABILITY OF THE ESTIMATES 46 The experimental estimates in this product are derived from estimates collected in surveys and censuses, and are subject to sampling and non-sampling error. 47 The estimates for gross value of irrigated agricultural production are based on information obtained from respondents to the ABS Agricultural Censuses and Surveys. These estimates are therefore subject to sampling variability (even in the case of the censuses, because the response rate is less than 100%); that is, they may differ from the figures that would have been produced if all agricultural businesses had been included in the Agricultural Survey or responded in the Agricultural Census. 48 One measure of the likely difference is given by the standard error (SE) which indicates the extent to which an estimate might have varied by chance because only a sample was taken or received. 
There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all establishments had been reported for, and about nineteen chances in twenty that the difference will be less than two SEs. 49 In this publication, sampling variability of the estimates is measured by the relative standard error (RSE) which is obtained by expressing the SE as a percentage of the estimate to which it refers. Most national estimates have RSEs less than 10%. For some States and Territories, and for many Natural Resource Management regions with limited production of certain commodities, RSEs are greater than 10%. Estimates that have an estimated relative standard error higher than 10% are flagged with a comment in the publication tables. If a data cell has an RSE of between 10% and 25%, these estimates should be used with caution as they are subject to sampling variability too high for some purposes. For data cells with an RSE between 25% and 50% the estimate should be used with caution as it is subject to sampling variability too high for most practical purposes. Those data cells with an RSE greater than 50% indicate that the sampling variability causes the estimates to be considered too unreliable for general use. 50 Errors other than those due to sampling may occur because of deficiencies in the list of units from which the sample was selected, non-response, and errors in reporting by providers. Inaccuracies of this kind are referred to as non-sampling error, which may occur in any collection, whether it be a census or a sample. Every effort has been made to reduce non-sampling error to a minimum in the collections by careful design and testing of questionnaires, operating procedures and systems used to compile the statistics. 51 Where figures have been rounded, discrepancies may occur between sums of the component items and totals. 52 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated: without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905. FUTURE DATA RELEASES 53 It is anticipated that ABS will release these estimates on an annual basis.
Agricultural Commodities, Australia (cat. no. 7121.0)
Agricultural Commodities: Small Area Data, Australia (cat. no. 7125.0)
Characteristics of Australia’s Irrigated Farms, 2000–01 to 2003–04 (cat. no. 4623.0)
Methods of estimating the Gross Value of Irrigated Agricultural Production (Information Paper) (cat. no. 4610.0.55.006)
Value of Agricultural Commodities Produced, Australia (cat. no. 7503.0)
Water Account Australia (cat. no. 4610.0)
Water and the Murray-Darling Basin, A Statistical Profile, 2000–01 to 2005–06 (cat. no. 4610.0.55.007)
Water Use on Australian Farms, Australia (cat. no. 4618.0)
http://www.abs.gov.au/AUSSTATS/[email protected]/Lookup/4610.0.55.008Explanatory%20Notes12000%E2%80%9301%20-%202008%E2%80%9309?OpenDocument
13
18
Pictured above is a flat map generated by the Mars Orbiter Laser Altimeter (MOLA), an instrument aboard NASA's Mars Global Surveyor; the high-resolution map represents 27 million elevation measurements gathered in 1998 and 1999.
An impact basin deep enough to swallow Mount Everest and surprising slopes in Valles Marineris highlight a global map of Mars that will influence scientific understanding of the red planet for years. Generated by the Mars Orbiter Laser Altimeter (MOLA), an instrument aboard NASA's Mars Global Surveyor, the high-resolution map represents 27 million elevation measurements gathered in 1998 and 1999. The data were assembled into a global grid with each point spaced 37 miles (60 kilometers) apart at the equator, and less elsewhere. Each elevation point is known with an accuracy of 42 feet (13 meters) in general, with large areas of the flat northern hemisphere known to better than six feet (two meters). "This incredible database means that we now know the topography of Mars better than many continental regions on Earth," said Dr. Carl Pilcher, Science Director for Solar System Exploration at NASA Headquarters, Washington, DC. "The data will serve as a basic reference book for Mars scientists for many years, and should inspire a variety of new insights about the planet's geologic history and the ways that water has flowed across its surface during the past four billion years." "The full range of topography on Mars is about 19 miles (30 kilometers), one and a half times the range of elevations found on Earth," noted Dr. David Smith of NASA's Goddard Space Flight Center, Greenbelt, MD, the principal investigator for MOLA and lead author of a study to be published in the May 28, 1999, issue of Science. "The most curious aspect of the topographic map is the striking difference between the planet's low, smooth Northern Hemisphere and the heavily cratered Southern Hemisphere," which sits, on average, about three miles (five kilometers) higher than the north, Smith added. The MOLA data show that the Northern Hemisphere depression is distinctly not circular, and suggest that it was shaped by internal geologic processes during the earliest stages of martian evolution. The massive Hellas impact basin in the Southern Hemisphere is another striking feature of the map. Nearly six miles (nine kilometers) deep and 1,300 miles (2,100 kilometers) across, the basin is surrounded by a ring of material that rises 1.25 miles (about two kilometers) above the surroundings and stretches out to 2,500 miles (4,000 kilometers) from the basin center. This ring of material, likely thrown out of the basin during the impact of an asteroid, has a volume equivalent to a two-mile (3.5-kilometer) thick layer spread over the continental United States, and it contributes significantly to the high topography in the Southern Hemisphere. The difference in elevation between the hemispheres results in a slope from the South Pole to North Pole that was the major influence on the global-scale flow of water early in martian history. Scientific models of watersheds using the new elevation map show that the Northern Hemisphere lowlands would have drained three-quarters of the martian surface. On a more regional scale, the new data show that the eastern part of the vast Valles Marineris canyon slopes away from nearby outflow channels, with part of it lying a half-mile (about one kilometer) below the level of the outflow channels. 
"While water flowed south to north in general, the data clearly reveal the localized areas where water may have once formed ponds, " explained Dr. Maria Zuber of the Massachusetts Institute of Technology, Cambridge, MA, and Goddard. The amount of water on Mars can be estimated using the new data about the south polar cap and information about the North Pole released last year. While the poles appear very different from each other visually, they show a striking similarity in elevation profiles. Based on recent understanding of the North Pole, this suggests that the South Pole has a significant water ice component, in addition to carbon dioxide ice. The upper limit on the present amount of water on the martian surface is 800,000 to 1.2 million cubic miles (3.2 to 4.7 million cubic kilometers), or about 1.5 times the amount of ice covering Greenland. If both caps are composed completely of water, the combined volumes are equivalent to a global layer 66 to 100 feet (22 to 33 meters) deep, about one-third the minimum volume of a proposed ancient ocean on Mars. During the ongoing Mars Global Surveyor mission, the MOLA instrument is collecting about 900,000 measurements of elevation every day. These data will further improve the global model, help engineers assess the area where NASA's Mars Polar Lander mission will set down on Dec. 3, and aid the selection of future landing sites. MOLA was designed and built by the Laser Remote Sensing Branch of the Laboratory for Terrestrial Physics at Goddard. The Mars Global Surveyor mission is managed for NASA's Office of Space Science, Washington, DC, by the Jet Propulsion Laboratory, Pasadena, CA, a division of the California Institute of Technology. MOLA topographic images may be viewed at the following web address: More details about the MOLA instrument and science investigation can be found at: Hi-Resolution Images can be found at: NASA Press Release at: Goddard Space Flight Center
http://mars.jpl.nasa.gov/mgs/sci/mola/mola-may99.html
13
11
Overview Maths GCSE Interpreting and Representing Data
Someone once said “there are lies, damned lies and statistics”. Understanding how to interpret and represent data gives you a chance of identifying when companies, politicians, employers (in fact anyone in authority) are using statistics in a less than honest way. To be more positive, the interpretation and representation of data enables you to identify and solve real-life problems. There are a number of techniques. None of these techniques is particularly difficult; you just have to practice to learn them. As usual the best way for me to make sure I remember each technique is to:- a) Devise a question b) Explain the approach required to answer the question. c) Answer the question in detail. I find that if I have to understand a technique well enough to set a question and explain how to answer it then I have a good chance of remembering that technique. In addition, if my children ask me a question about it in the future, I will not only be able to give them the answer but also show them how to answer the question for themselves. I had heard of stem-and-leaf diagrams but I soon realised that I didn’t know or remember how they worked. They are a simple but impressively effective way to quickly organise large amounts of data. Tom believed that his parents did not give him enough pocket money. He decided to ask his classmates how much pocket money per week they received. This table shows the data he collected:- a) Use this data to prepare a stem-and-leaf diagram b) How many of Tom’s classmates receive more than £5 pocket money per week? There is no point trying to describe a stem-and-leaf diagram, just take a peek at the answer below! Some key points to bear in mind.
- Keep it as simple as possible. The left hand column (the stem) typically has the largest single unit of measure (in this example £‘s).
- Don’t try to write the figures in the right-hand columns (the leaves) in ascending order as this may take too long and could easily result in mistakes. The idea of a stem-and-leaf diagram is to quickly sort a mass of data into a manageable table.
- Check the total number of entries in your answer is the same as the total number of data points from the original list.
- Make sure you include a key to show how the diagram should be read (see example in answer below).
a) Stem-and-leaf diagram of Tom’s data:-
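Tom's original data table and the finished diagram did not survive the page extraction, so the worked answer cannot be reproduced here. Purely as an illustration of the technique described above, the sketch below builds a stem-and-leaf diagram from made-up pocket-money amounts (whole pounds as the stem, tens of pence as the leaf); the values are hypothetical, not Tom's data.

from collections import defaultdict

# Made-up weekly pocket-money amounts in pounds (illustrative only).
pocket_money = [2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 2.0, 6.0, 3.5, 7.5, 5.0]

stems = defaultdict(list)
for amount in pocket_money:
    stem = int(amount)                        # whole pounds form the stem
    leaf = int(round((amount - stem) * 10))   # tens of pence form the leaf
    stems[stem].append(leaf)

print("Key: 3 | 5 means £3.50")
for stem in sorted(stems):                    # leaves sorted here only for readability
    print(f"{stem} | {' '.join(str(leaf) for leaf in sorted(stems[stem]))}")

# Checking the total number of entries against the original list, as the article advises:
assert sum(len(leaves) for leaves in stems.values()) == len(pocket_money)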
http://www.mymathsblog.co.uk/maths-gcse-interpreting-and-representing-data/
13
10
The basic laws of physics have been unchanged for the most part since Newton postulated gravity. These include the two principal laws that we use in solving many power quality problems, namely Kirchhoff’s Law and Ohm’s Law. Though one of the unspoken rules about writing interesting articles is to limit use of equations, these two are too valuable not to have in one’s tool kit. If the data you collect shows that you are about to disprove the validity of these rules, you might want to recheck it before trying for the Nobel Prize. Ohm’s Law equates the voltage to the product of current times impedance (V = I * Z). Likewise, the current drawn is equal to the voltage divided by the impedance (I = V / Z). Impedance is the combination of the resistance, inductance, and capacitance. Kirchhoff’s Law states that the sum of the voltages at each load and source (generator) around a closed loop must equal zero. Wiring is considered a load, since it has resistance (as well as some inductance and capacitance). If you think of generators as adding in voltages, and loads as subtracting away voltage, the net result around a closed circuit is that all of the voltages sum to zero. In the simple, single-phase circuit example in Figure 1, there are two loads and one source, hence VS - VZ - VL = 0. The impedances of the wiring, transformers, capacitors, and breakers between the generator and the load are all lumped together into one equivalent value called the source impedance. All of the loads on that branch circuit are lumped together and called the load impedance. Therefore, the generator is considered an ideal generator with no impedance, only an output voltage. When loads are energized normally, there is an increase in current (I load) based on the load’s impedance (Z load) and line voltage (V source). An increase in current caused by a load change will result in an increased voltage drop across the source impedance (Vz = I load * Zsource). If the source voltage remains constant (which is a reasonable assumption if the source is considered as the electric utility generator), then the voltage across the load will decrease (a sag) by the amount of the voltage drop across the source impedance. Conversely, if a load is suddenly turned off, there will be a decrease in current and a subsequent decrease in the voltage drop across the source impedance, resulting in a swell or increase in voltage at the load. This same basic methodology is used when analyzing sags and swells or harmonics, voltage fluctuation, and transients. Kirchhoff’s and Ohm’s Laws still apply to these other phenomena, though the mathematics of determining the impedances and effects are more complex. Figure 2 shows an example of a voltage sag beginning in the second cycle of the displayed waveforms, caused by the periodic cycling of the heating element in a laser printer. The top waveform is the Line-to-Neutral voltage, the middle is the Line current, and the lower is the Neutral-to-Ground (N-G) voltage. Observe how the N-G voltage and current waveforms are very similar. If the source impedance is split between both legs feeding the load, then it can be easily seen how an increase in line current would develop a voltage drop in the neutral leg, which would result in the neutral-to-ground swell seen here. With electric motors, the load impedance changes over time when energized and as the load changes, resulting in the current swell and voltage sag in Figure 3. 
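To make the source-impedance arithmetic above concrete, here is a minimal sketch with made-up values (the 120 V supply, 0.5-ohm source impedance and the two load impedances are hypothetical, not taken from the figures in the article). It shows how switching in a heavier load increases the current and therefore the drop across the source impedance, sagging the voltage at the load.

# Hypothetical values for illustration only.
V_SOURCE = 120.0   # ideal generator voltage, volts (assumed constant)
Z_SOURCE = 0.5     # lumped source impedance, ohms
Z_LIGHT = 60.0     # load impedance before the extra load switches on, ohms
Z_HEAVY = 12.0     # load impedance while the heating element is cycling on, ohms

def load_voltage(v_source, z_source, z_load):
    # Ohm's Law for the series loop: I = V / (Zsource + Zload);
    # Kirchhoff's Law then gives Vload = Vsource - I * Zsource.
    current = v_source / (z_source + z_load)
    return v_source - current * z_source

print(load_voltage(V_SOURCE, Z_SOURCE, Z_LIGHT))   # about 119.0 V
print(load_voltage(V_SOURCE, Z_SOURCE, Z_HEAVY))   # about 115.2 V, a sag while the heavy load draws current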
When the load current returns to a smaller, steady state value, the voltage at the load recovers somewhat, since there is less of a voltage drop across the source impedance. Occasionally, I receive a file with data collected from a power quality monitor where the story of the person monitoring the data would result in one or both of these laws being invalid if the situation was really as the person described it. Figure 4 shows an example of such. The monitoring took place on a three-pole, 150-Amp breaker 480V delta feeding pumps and heating, ventilating, and air conditioning (HVAC) units at a chemical plant in western Pennsylvania. The voltage waveform is shown here, while the current waveform recorded during the same time showed undistorted sine waves. The recorded transients are large, negative ones that even cross the zero axis at times. In addition, the rise and fall times of the transients are very fast, suggesting that the monitoring was in close proximity to the cause, such as very near the point where a lightning strike is coupled into the wiring. To have such a voltage waveform with the current waveform showing no effects raised a flag. To have that much energy taken out of the voltage without any change in the current was saying that the golden rules were getting tarnished. When encountering such conflicting data, look to see that the monitoring points are truly where you intended them to be. A quick look at the phasor diagram, which most power quality monitors can display, may yield the source of the problem. The voltage and current connections should be on the same pair of wires to get correlating data. Monitoring the line-to-line voltage and line current on a delta circuit with even slightly unbalanced phase loads can produce interesting results, since you can’t actually monitor the current going through the load. The sum of the currents that will split and go through two of the phase loads, for which you are monitoring the voltage across each, is being recorded. Hence, there may not be a one-to-one correspondence between voltage and current. Having the CTs connected off by one phase compared to the voltage on a wye circuit can be a problem (Va and Ib, Vb and Ic, etc.). While single-phase circuits seem pretty simple, some power quality monitors require that you connect the line, neutral, and ground voltage connections in order to measure just the L-N voltage correctly. The bottom line is that, if you think you are ready to book a flight to Sweden and share your results for the Nobel Prize on a revolutionary new concept in physics, check and check again. In the aforementioned example, the problems turned out to be that the voltage monitoring lead was not properly connected, and would open-circuit the monitoring input when the motors below caused a vibration during start-up. Hence, those voltage transients weren’t real, the undistorted current was correct, and there was no need to put all of the transient voltage surge suppressor (TVSS) devices in the facility. BINGHAM, manager of products and technology for Dranetz-BMI in Edison, N.J., can be reached at (732) 287-3680.
http://www.ecmag.com/section/your-business/golden-rules
13
18
In addition, great circles represent the shortest distance between two points anywhere on the Earth's surface. Because of this, great circles have been important in navigation for hundreds of years, though their properties were already known to ancient mathematicians.
Global Locations of Great Circles
Great circles are easily identified on a globe based on the lines of latitude and longitude. Each line of longitude, or meridian, is the same length and represents half of a great circle. This is because each meridian has a corresponding line on the opposite side of the Earth. When combined, they cut the globe into equal halves, representing a great circle. For example, the Prime Meridian at 0° is half of a great circle. On the opposite side of the globe is the International Date Line at 180°. It too represents half of a great circle. When the two are combined, they create a full great circle which cuts the Earth into equal halves. The only line of latitude, or parallel, characterized as a great circle is the equator, because its plane passes through the exact center of the Earth and divides it in half. Lines of latitude north and south of the equator are not great circles because their length decreases as they move toward the poles and their planes do not pass through Earth's center. As such, these parallels are considered small circles.
Navigation with Great Circles
The most famous use of great circles in geography is for navigation because they represent the shortest distance between two points on a sphere. Because a great circle generally crosses successive meridians at different angles, sailors and pilots using great circle routes must constantly adjust their heading over long distances. The only places on Earth where the heading does not change are along the equator or when traveling due north or south along a meridian. Because of these adjustments, great circle routes are broken up into shorter segments called rhumb lines, which show the constant compass direction needed for the stretch of route being traveled. Rhumb lines cross all meridians at the same angle, making them useful for breaking up great circles in navigation.
Appearance on Maps
To determine great circle routes for navigation or other purposes, the gnomonic map projection is often used. This is the projection of choice because on these maps the arc of a great circle is depicted as a straight line. These straight lines are then often plotted on a map with the Mercator projection for use in navigation, because it depicts constant compass directions as straight lines and is therefore useful in such a setting. It is important to note, though, that when long distance routes following great circles are drawn on Mercator maps, they look curved and longer than straight lines along the same routes. In reality, the longer-looking, curved line is actually shorter because it is on the great circle route.
Common Uses of Great Circles Today
Today, great circle routes are still used in long distance travel because they are the most efficient way to move across the globe. They are most commonly used by ships and aircraft when winds and currents are not a significant factor, because currents like the jet stream can make another route more efficient for long distance travel than following the great circle. For example, in the northern hemisphere, planes traveling west normally follow a great circle route that moves into the Arctic to avoid having to travel in the jet stream when going in the opposite direction to its flow. When traveling east, however, it is more efficient for these planes to use the jet stream as opposed to the great circle route. 
Whatever their use though, great circle routes have been an important part of navigation and geography for hundreds of years and knowledge of them is essential for long distance travel across the globe.
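For readers who want to put numbers on the idea, the haversine formula is a common way to compute a great circle distance from two latitude/longitude pairs. The sketch below is illustrative; the mean Earth radius and the two example cities are assumptions, not figures from the article.

import math

EARTH_RADIUS_KM = 6371.0   # assumed mean radius of the Earth

def great_circle_distance_km(lat1, lon1, lat2, lon2):
    """Haversine formula: the great circle (shortest) distance between two
    points given as latitude and longitude in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Example: New York (40.71 N, 74.01 W) to London (51.51 N, 0.13 W) is roughly 5,600 km,
# shorter than the curved-looking track the same route traces on a Mercator map.
print(great_circle_distance_km(40.71, -74.01, 51.51, -0.13))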
http://geography.about.com/od/understandmaps/a/greatcircle.htm
13
15
This is a review chapter. Important concepts treated include the nuclear model of the atom, types of nuclear reactions, wave properties, the electromagnetic spectrum of light, Doppler shifts, blackbody radiation, and galaxy classifications. Read through this material to obtain background for later chapters. Some topics are of particular importance to understanding cosmology, including:
- Blackbody radiation is a very specific type of spectrum that corresponds to photons in equilibrium. This radiation is completely characterized by one parameter, the temperature of the emitter. The cosmic background radiation is an example of blackbody radiation, where the emitter is the universe itself. The temperature of the radiation has dropped throughout the history of the cosmos; currently it is a chilly 2.73 kelvins, i.e. 2.73 degrees above absolute zero.
- Nuclear Fusion is the phrase for nuclear reactions that combine light elements into heavier ones. Stars are powered by fusion. This process also explains how the elements in the universe are built up from the original protons and neutrons in the process of nucleosynthesis. Nuclear reactions that took place in the early universe created helium, whereas nearly all other elements and isotopes are manufactured in stars at various stages of their lifetimes, with the heaviest elements created during supernovae.
- Redshifts and Blueshifts are the result of several processes that produce shifts in an observed wave spectrum. One of the most important is the Doppler effect, the shifting of the frequency (and thus the wavelength) due to relative motion between emitter and receiver. The Doppler effect is essentially classical, but must be modified slightly in order to take special relativity (see Chapter 7) into account. Doppler shifts are very familiar when they affect sound waves; this is the reason that a siren sounds higher in pitch while the vehicle approaches the observer and drops in pitch as the vehicle recedes. Another important effect is the gravitational shift due to light moving in a gravitational field; light climbing from a point of stronger to a point of weaker gravity is redshifted, whereas light falling in a gravitational field is blueshifted. Finally, the cosmological redshift due to the expansion of space is one of the most important overall shift effects in cosmology. The cosmological redshift causes the temperature of the microwave background to drop as time passes, and affects the light from distant sources.
- Luminosity distance is a distance to a source as determined by observing the attenuation of the source's light intensity. Finding the distances to astronomical objects is one of the great challenges of cosmology; the luminosity distance is an important part of this task. The luminosity distance exploits the simple fact that light intensity is attenuated as the light travels through space, because the light wave front spreads out over an ever-increasing area. Thus a comparison of the observed amount of light received from a given source with some estimate of the intrinsic brightness of the source should enable us to compute the distance to the object. In practice, the determination of luminosity distances is fraught with many potential sources of error, including absorption by intervening matter of unknown type and density, but the principle is very simple. Despite the difficulties, luminosity distance is the best means of measuring the distances to very distant objects. 
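The inverse-square reasoning behind the luminosity distance can be written out in a few lines. The sketch below is a simplified illustration (it ignores absorption, redshift corrections and the other complications mentioned above); the solar values used as a sanity check are standard figures, not taken from the chapter.

import math

def luminosity_distance(luminosity_watts, observed_flux_w_per_m2):
    # Inverse-square law: F = L / (4 * pi * d**2), so d = sqrt(L / (4 * pi * F)).
    return math.sqrt(luminosity_watts / (4.0 * math.pi * observed_flux_w_per_m2))

# Sanity check with the Sun: L ~ 3.83e26 W and a flux at Earth of ~1361 W/m^2
# give a distance of about 1.5e11 m, i.e. one astronomical unit.
print(luminosity_distance(3.83e26, 1361.0))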
The chapter concludes with a brief description of the types of galaxies seen in the universe. Galaxies are assigned to one of three categories according to their overall shapes and other properties. Elliptical galaxies are roughly spheroidal or ellipsoidal, contain mostly old stars, have little dust or gas, and do not contain well-defined nuclei. Spiral galaxies are flattened disks threaded by pronounced bright spirals, contain much gas and dust, are usually sites of young stars and active star formation, and feature reasonably well defined nuclear bulges. The Milky Way Galaxy is a spiral. The third category, irregular galaxies, is a catchall for galaxies that do not fit one of the other two groups. Many irregular galaxies are interacting with other galaxies in one way or another, either as satellites or in collisions or near encounters with other galaxies.
The giant elliptical galaxy M87 is a prime example of its type. This huge galaxy lies at the core of the great cluster of galaxies in Virgo.
M100, a spiral galaxy.
The Large Magellanic Cloud, an irregular galaxy, is a satellite of the Milky Way.
http://www.astro.virginia.edu/~jh8h/Foundations/Foundations_1/chapter4.html
13
13
To determine the input impedance of a device, both the voltage across the device and the current flowing into the device must be known. The impedance is simply the voltage across the device, E, divided by the current flowing into it, I. This is given by the equation Z = E / I. It should be understood that since the voltage, E, and the current, I, are complex quantities, the impedance, Z, is also complex. That is to say impedance has a magnitude and an angle associated with it. When measuring loudspeaker input impedance it is common today for many measurements to be made at relatively low drive levels. This is necessitated because of the method employed in the schematic of Figure 1. In this setup a relatively high value resistor, say 1 kilohm, is used for Rs. As seen from the input of the DUT, it is being driven by a high impedance constant current source. Had it been connected directly to the amplifier/measurement system output it would, in all likelihood, be driven by a low impedance constant voltage source.
Figure 1: Schematic of a common method of measuring loudspeaker impedance.
In both of these cases constant refers to there being no change in the driving quantity (either voltage or current) as a function of frequency or load. When Rs is much larger than the impedance of the DUT, the current in the circuit is determined only by Rs. If the voltage at the output of the amplifier, Va, is known, this current is easily calculated with the equation I = Va / Rs and is constant. Now that we know the current flowing in the circuit, all we need to do is measure the voltage across the DUT and we can calculate its input impedance. There is nothing wrong with this method. It is limited, as previously mentioned, however, in that the drive level exciting the DUT will not be very large due to the large value of Rs. For some applications this may be problematic. Loudspeakers are seldom used at the low drive levels to which we are limited using the above method. It may be advantageous to be able to measure the input impedance at drive levels closer to those used in actual operation. If the current in the circuit can be measured rather than having to be assumed constant, this limitation can be avoided. Using a measurement system with at least two inputs, as shown in Figure 2, can do just that.
Figure 2: Schematic of an alternate method of measuring loudspeaker impedance.
In this case Rs is made relatively small, say 1 ohm or less. This is called a current sensing resistor. It may also be referred to as a current shunt. Technically this is incorrect for this application, because a current shunt is always in parallel with a component from which current is diverted. The voltage drop across Rs is measured by input #2 of the measurement system. The current in the circuit is then calculated using the equation I = V2 / Rs, where V2 is the voltage measured across Rs. The voltage across the DUT, V1, is measured by input #1 of the measurement system. We now know both the voltage across and the current flowing into the DUT, so its input impedance can be calculated. I used EASERA for the measurements. It has facilities for performing all of these calculations, as should most dual channel FFT measurement systems. Referencing Figure 2, channel #1 across the DUT should be set as the measurement channel while channel #2 should be set as the reference channel. 
Dual channel FFT systems divide the measurement channel by the reference channel, so we have V1 / V2 = (I * Z) / (I * Rs) = Z / Rs. All we have to do is multiply our dual channel FFT measurement by the value of Rs used and we get the correct value for impedance. If Rs is chosen to be 1.0 ohm this becomes really easy.
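A minimal sketch of that arithmetic is shown below, assuming the two recorded voltages are available as sample arrays; the function and variable names are mine, not EASERA's, and any dual channel FFT package that exposes the raw channels could be used the same way.

import numpy as np

def impedance_from_two_channels(v1_time, v2_time, sample_rate_hz, rs_ohms=1.0):
    """Estimate the complex input impedance of the DUT versus frequency.

    v1_time: sampled voltage across the DUT (measurement channel, input #1)
    v2_time: sampled voltage across the sensing resistor Rs (reference channel, input #2)
    """
    V1 = np.fft.rfft(v1_time)                 # complex spectrum of the DUT voltage
    V2 = np.fft.rfft(v2_time)                 # complex spectrum of the Rs voltage (proportional to current)
    Z = rs_ohms * V1 / V2                     # Z = V1 / I, with I = V2 / Rs
    freqs = np.fft.rfftfreq(len(v1_time), d=1.0 / sample_rate_hz)
    return freqs, np.abs(Z), np.angle(Z, deg=True)   # magnitude in ohms, phase in degrees

In practice the two channels would be averaged over several FFT blocks to reduce noise, but the division by the reference channel followed by scaling by Rs is exactly the step described above.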
http://www.prosoundweb.com/article/getting_the_most_out_of_impedance_measurement_testing/
13
50
Frequently Asked Questions about Radio Astronomy
You can read this screen because your eyes detect light. Light consists of electromagnetic waves. The different colors of light are electromagnetic waves of different lengths. For more info go to: http://imagers.gsfc.nasa.gov/ems/waves3.html Visible light, however, covers only a small part of the range of wavelengths in which electromagnetic waves can be produced. Radio waves are electromagnetic waves of much greater wavelength than those of light. For centuries, astronomers learned about the sky by studying the light coming from astronomical objects, first by simply looking at the objects, and later by making photographs. Many astronomical objects emit radio waves, but that fact wasn't discovered until 1932. Since then, astronomers have developed sophisticated systems that allow them to make pictures from the radio waves emitted by astronomical objects. Astronomical bodies emit radio waves by one of several processes. Click here for more information on how radio waves are produced. Solar flares and sunspots are strong sources of radio emission. Their study has led to increased understanding of the complex phenomena near the surface of the Sun (image at left), and provides advance warning of dangerous solar flares that can interrupt radio communications on the Earth and endanger sensitive equipment in satellites and even the health of astronauts. Radio telescopes are used to measure the surface temperatures of all the planets in our solar system as well as some of the moons of Jupiter and Saturn. Radio observations have revealed the existence of intense Van Allen Belts surrounding Jupiter (image at right), powerful radio storms in the Jovian atmosphere and an internal heating source deep within the interiors of Jupiter, Saturn, Uranus, and Neptune. Broadband continuum emission throughout the radio-frequency spectrum is observed from a variety of stars (especially binary, X-ray, and other active stars), from supernova remnants, and from magnetic fields and relativistic electrons in the interstellar medium. Radio waves penetrate much of the gas and dust in space as well as the clouds of planetary atmospheres and pass through the terrestrial atmosphere with little distortion. Radio astronomers can therefore obtain a much clearer picture of stars and galaxies than is possible by means of optical observation. Utilizing radio telescopes equipped with sensitive spectrometers, radio astronomers have discovered more than 100 separate molecules, including familiar chemical compounds like water vapor, formaldehyde, ammonia, methanol, ethanol, and carbon dioxide. The important spectral line of atomic hydrogen at 1420.405 MHz (21 centimeters) is used to determine the motions of hydrogen clouds in the Milky Way Galaxy and other galaxies. This is done by measuring the change in the wavelength of the observed lines arising from Doppler shift. It has been established from such measurements that the rotational velocities of the hydrogen clouds vary with distance from the galactic center. The mass of a spiral galaxy can, in turn, be estimated using this velocity data (Click on the picture of the spiral galaxy M33 at right for more details). In this way radio telescopes gave some of the first hints of the presence of so-called "dark matter" in galaxies where the amount of starlight is insufficient to account for the large mass inferred from the rapid rotation curves. 
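As a rough illustration of how those cloud velocities are obtained from the 21 centimeter line, the sketch below applies the non-relativistic Doppler formula; the observed frequency in the example is hypothetical.

C_KM_S = 2.998e5            # speed of light in km/s
REST_FREQ_MHZ = 1420.405    # rest frequency of the 21 cm hydrogen line

def radial_velocity_km_s(observed_freq_mhz):
    # Non-relativistic Doppler shift: v ~ c * (f_rest - f_obs) / f_rest.
    # Positive values mean the hydrogen cloud is receding (line shifted to lower frequency).
    return C_KM_S * (REST_FREQ_MHZ - observed_freq_mhz) / REST_FREQ_MHZ

# A hypothetical cloud observed at 1419.5 MHz is receding at roughly 190 km/s.
print(radial_velocity_km_s(1419.5))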
A number of celestial objects emit more strongly at radio wavelengths than at those of light, so radio astronomy has produced many surprises in the last half-century. By studying the sky with both radio and optical telescopes, astronomers can gain much more complete understanding of the processes at work in the universe. The first radio astronomy observations were made in 1932 by the Bell Labs physicist Karl Jansky, who detected cosmic radio noise from the center of the Milky Way Galaxy while investigating radio disturbances interfering with transoceanic telephone service. A few years later, the young radio engineer and amateur radio operator Grote Reber (W8GFZ) built the first radio telescope (image at left) at his home in Wheaton, Illinois, and found that the radio radiation came from all along the plane of the Milky Way and from the Sun. During the 1940s and 1950s, Australian and British radio scientists were able to locate a number of discrete sources of celestial radio emission. They associated these sources with old supernovae and active galaxies, which later came to be known as radio galaxies. The construction of ever larger antenna systems and radio interferometers (see radio telescopes), improved radio receivers and data-processing methods have allowed radio astronomers to study fainter radio sources with increased resolution and image quality. Radio galaxies are surrounded by huge clouds of relativistic electrons that move in weak magnetic fields to produce synchrotron radiation, which can be observed throughout the radio spectrum. The electrons are thought to be accelerated by material falling into a massive black hole at the center of the galaxy and are then propelled out along a thin jet to form the radio emitting clouds that are found up to millions of light-years from the parent galaxy. The study of radio galaxies led astronomer Maarten Schmidt to discover quasars in 1963. Quasars are found in the central regions of galaxies and may shine with the luminosity of a hundred ordinary galaxies. Like radio galaxies, they are thought to be powered by a super-massive black hole up to a thousand-million times more massive than the Sun, but contained within a volume less than the size of the solar system. Although radio galaxies and quasars are powerful sources of radio emission, they are located at great distances from the Earth, and so the signals that reach the Earth are very weak. Measurements made in 1965 by Arno Penzias and Robert W. Wilson using an experimental communications antenna at 7 centimeter wavelength located at Bell Telephone Laboratories detected the existence of a microwave cosmic background radiation at a temperature of 3 K. This radiation, which comes from all parts of the sky, is thought to be the remaining radiation from the hot big bang, the primeval explosion from which the universe presumably originated some 15 billion years ago. Satellite and ground-based radio telescopes are used to measure the very small deviations from isotropy of the cosmic microwave background. This work has led to a refined determination of the size and geometry of the Universe. Radio observations of quasars led to the discovery of pulsars by Jocelyn Bell and Tony Hewish in Cambridge, England in 1967. Pulsars are neutron stars, in which protons and electrons have been crushed together to form neutrons, and which have shrunk to a diameter of a few kilometers following the explosion of the parent star in a supernova. 
Because they have retained the angular momentum of the much larger original star, neutron stars spin very rapidly, up to 641 times per second, and contain magnetic fields as strong as a thousand-billion Gauss or more. (The Earth's magnetic field is on the order of half a Gauss.) The radio emission from pulsars is concentrated along a thin cone, which produces a series of pulses corresponding to the rotation of the neutron star, much like the beacon from a rotating lighthouse lamp. The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation, operated by Associated Universities, Inc., a nonprofit research organization. The NRAO provides state-of-the-art radio telescope facilities for use by the scientific community. We conceive, design, build, operate and maintain radio telescopes used by scientists from around the world. Scientists use our facilities to study virtually all types of astronomical objects known, from planets and comets in our own Solar System to galaxies and quasars at the edge of the observable universe. The headquarters of NRAO is in Charlottesville, Virginia, and the Observatory operates major radio telescope facilities in Socorro, New Mexico and Green Bank, West Virginia. Actually, nothing! While everyday experience and Hollywood movies make people think of sounds when they see the words "radio telescope," radio astronomers do not actually listen to noises. First, sound and radio waves are different phenomena. Sound consists of pressure variations in matter, such as air or water. Sound will not travel through a vacuum. Radio waves, like visible light, infrared, ultraviolet, X-rays and gamma rays, are electromagnetic waves that do travel through a vacuum. When you turn on a radio you hear sounds because the transmitter at the radio station has converted the sound waves into electromagnetic waves, which are then encoded onto an electromagnetic wave in the radio frequency range (generally in the range of 500-1600 kHz for AM stations, or 86-107 MHz for FM stations). Radio electromagnetic waves are used because they can travel very large distances through the atmosphere without being greatly attenuated due to scattering or absorption. Your radio receives the radio waves, decodes this information, and uses a speaker to change it back into a sound wave. An animated gif of this process can be found here. Radio telescopes often produce images of celestial bodies. Just as photographic film records the different amount of light coming from different parts of the scene viewed by a camera's lens, our radio telescope systems record the different amounts of radio emission coming from the area of the sky we observe. After computer processing of this information, astronomers can make a picture. No scientific knowledge would be gained by converting the radio waves received by our radio telescopes into audible sound. If one were to do this, the sound would be "white noise," random hiss such as that you hear when you tune your FM radio between stations. A lot! For some of the highlights, look at NRAO's press releases about recent research results. Every year, hundreds of scientists use NRAO's radio telescopes, and they report their results in numerous papers in scientific journals. Almost any introductory astronomy textbook will contain images and tell of research results from NRAO's various radio telescopes. The GBT stands for Green Bank Telescope. Its full name is the Robert C. Byrd Green Bank Telescope, which honors Senator Robert C. 
Radio telescopes often produce images of celestial bodies. Just as photographic film records the different amounts of light coming from different parts of the scene viewed by a camera's lens, our radio telescope systems record the different amounts of radio emission coming from the area of the sky we observe. After computer processing of this information, astronomers can make a picture. No scientific knowledge would be gained by converting the radio waves received by our radio telescopes into audible sound. If one were to do this, the sound would be "white noise," random hiss such as that you hear when you tune your FM radio between stations.

What has been discovered with radio telescopes? A lot! For some of the highlights, look at NRAO's press releases about recent research results. Every year, hundreds of scientists use NRAO's radio telescopes, and they report their results in numerous papers in scientific journals. Almost any introductory astronomy textbook will contain images and research results from NRAO's various radio telescopes.

The GBT stands for Green Bank Telescope. Its full name is the Robert C. Byrd Green Bank Telescope, which honors Senator Robert C. Byrd for enabling NRAO to build the telescope. The VLA got its name because it is an array of radio telescopes and it is very large. In its very early conceptual and planning stages, "Very Large Array" was a working title, probably not intended to be the final name for the facility. However, after a few years, the name stuck.

At the VLA, all of its 27 dish antennas work together as a single instrument. The signals from all antennas are brought together in real time through a microwave communication system that uses buried waveguide. As radio astronomers sought to increase their resolving power, or ability to see fine detail, by separating their antennas by ever greater distances, it became impractical to bring the signals together in real time. Instead, tape recorders and precise atomic clocks were installed at each antenna, and the signals are combined after the observation is completed. This technique is called Very Long Baseline Interferometry (VLBI). When astronomers wanted to build a continent-wide radio telescope system to implement this technique, the name Very Long Baseline Array (VLBA) was the natural working title. Again, it stuck.
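The "record now, combine later" step boils down to cross-correlating the time-stamped data streams from pairs of antennas and locating the correlation peak, which gives the relative delay of the signal's arrival at the two sites. Below is a minimal sketch of that idea in Python with NumPy; the noise-like source, the 37-sample delay, and the noise levels are made-up illustrative values, and a real VLBI correlator is of course far more elaborate than this.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup: the same noise-like cosmic signal reaches two antennas,
# but antenna B records the wavefront `true_delay` samples later, and each
# antenna adds its own receiver noise.
n = 100_000
true_delay = 37                               # assumed delay, in samples
source = rng.normal(size=n)
antenna_a = source + 0.5 * rng.normal(size=n)
antenna_b = np.roll(source, true_delay) + 0.5 * rng.normal(size=n)

# Correlator: circular cross-correlation via FFTs. The lag of the peak is the
# delay of antenna B's recording relative to antenna A's.
xcorr = np.fft.ifft(np.conj(np.fft.fft(antenna_a)) * np.fft.fft(antenna_b)).real
estimated_delay = int(np.argmax(xcorr))
print(estimated_delay)                        # should print 37 for this synthetic case
```

Measured across many antenna pairs and frequency channels, and anchored by the atomic-clock timestamps, delays like this are what allow a continent-sized array to be combined into a single sharp image.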
Radio observations as part of the Search for Extraterrestrial Intelligence (SETI) have been made by different groups of researchers for a number of years, sometimes on NRAO telescopes. In the 1990s the SETI Institute, a privately funded organization, acquired observing time on the 140-foot radio telescope in Green Bank. In fact, SETI research had its beginning at NRAO in Green Bank.

Many electronic experimenters have built their own radio telescopes. Indeed, the world's second radio telescope was built by an amateur radio operator, Grote Reber, in 1937. Amateurs use a variety of equipment, sometimes modified satellite receivers and dishes, to build their radio telescopes. For more detailed information about amateur radio telescopes, contact the Society of Amateur Radio Astronomers. Amateur radio operators also pursue a number of activities that are somewhat related to radio astronomy, including communicating by bouncing radio signals off the Moon and off the ionized trails of meteors in the Earth's atmosphere. There also are a number of amateur radio satellites in orbit. Both the Space Shuttle and the Mir space station carry amateur radio equipment that frequently is used by the astronauts to communicate with "ham" operators and classrooms around the world. For general information about amateur radio, including how to obtain your own license, contact the American Radio Relay League.

The typical training for a research astronomer includes earning a Bachelor's degree in Physics, Astronomy or Mathematics, followed by graduate school and a Ph.D. in Astronomy or Astrophysics. The American Astronomical Society, the professional association for astronomers in North America, offers a free brochure on careers in astronomy. High school students interested in astronomy should take as many courses in science and mathematics as possible in preparation for college.

There are some images on the NRAO Web site that you can browse. In addition, images and data from two surveys of the sky made with the VLA are available on the Internet: check out the NRAO VLA Sky Survey and the FIRST Survey Web pages. You also may download software for viewing these image files.

Many scenes in Contact were filmed at the VLA in September of 1996. About 200 filmmakers, including stars Jodie Foster, Tom Skerritt and James Woods, came to the VLA for the filming. The beautiful canyon seen near the VLA in the movie, however, actually is Canyon de Chelly in Arizona, "moved" to New Mexico by the magic of Hollywood special effects!

The World Wide Web is a gold mine of information on astronomy and space science, with everything from on-line astronomy courses to archives of thousands of astronomical images. An excellent starting point is the AstroWeb site. Other sources of astronomical information include your public or school library and monthly magazines such as Sky & Telescope and Astronomy. There may be an amateur astronomy club in your community; if there is, it is a good place to meet others who are interested in astronomy and to join activities such as observing with telescopes and hearing lectures on astronomical topics. Your community also may have a planetarium, public observatory or science museum that can provide information. The Web sites of the two magazines listed above have listings of clubs, observatories, planetaria and astronomical events for communities throughout the United States.