| Column | Type | Min | Max |
|---|---|---|---|
| score | int64 | 10 | 1.34k |
| text | string (length) | 296 | 618k |
| url | string (length) | 16 | 1.13k |
| year | int64 | 13 | 18 |
33 | "Lost" Dark Matter Discovered in Space, Scientists Say
for National Geographic News
|September 28, 2005|
Astronomers say they have found a type of matter that cannot be seen, but which is thought to dominate the cosmos, in a place where it was thought not to exist.
The finding gives weight to a common theory of how the universe is pieced together.
The exact nature of the matter, called dark matter, is unknown, but scientists believe it accounts for more than 90 percent of the mass in the universe.
Though invisible, scientists think dark matter exists because stars appear to be accelerated by the gravity of a mass greater than all of the visible stars and dust in space.
But this theory was challenged in 2003 when a team of astronomers reported that certain stars in one type of galaxy actually move slowly, suggesting an absence of dark matter.
The new study, based on computer simulations, explains this seemingly odd behavior.
The explanation fits with the theory that these galaxies, like all galaxies, are embedded in haloes of dark matter, said Avishai Dekel, a physics professor at the Hebrew University of Jerusalem in Israel.
"It seems that we have identified the main solution to the problem," Dekel wrote via e-mail.
Dekel is the lead author of the study appearing in tomorrow's issue of the journal Nature.
The theory that galaxies are embedded in dark matter stems from the observation of stars in spiral galaxies. These galaxies, including the Milky Way, are flattened spiral-shaped disks of stars and gas.
Since most of the visible matter in a spiral galaxy is concentrated at the center, stars there would be expected to move more quickly than stars on the outskirts of the galaxy, according to the laws of gravity.
But observations show that the stars in the outskirts of spiral galaxies move just as quickly as those closer to the center. Scientists explain this phenomenon by the gravitational influence of dark matter in and around the galaxy.
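To illustrate the reasoning in the two paragraphs above, the rough sketch below compares the orbital speed expected from visible matter alone, which falls off with distance just as planetary speeds do around the Sun, with the roughly flat speeds actually measured in spiral galaxies. The mass, radii and "observed" speed are invented, order-of-magnitude values chosen only to show the shapes of the two curves, not figures from the article.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1e41   # assumed visible mass concentrated near the centre, kg (illustrative)
kpc = 3.086e19     # one kiloparsec in metres

# If only the central visible mass mattered, orbital speed would fall as 1/sqrt(r).
def keplerian_speed(r_m):
    return math.sqrt(G * M_visible / r_m)

for r_kpc in (2, 5, 10, 20, 40):
    r = r_kpc * kpc
    v_expected = keplerian_speed(r) / 1000   # km/s expected from visible matter alone
    v_observed = 220                          # km/s: roughly flat, as measured in spirals (illustrative)
    print(f"r = {r_kpc:>2} kpc: expected {v_expected:6.1f} km/s, observed ~{v_observed} km/s")
```

The growing gap between the two columns is the discrepancy that dark matter is invoked to explain.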
But in elliptical galaxies, which are round, smooth collections of stars, it is difficult to study the motion of stars on the outskirts.
In the 2003 study, scientists found that stars in these galaxies move more slowly on the outskirts. The researchers suggested this was evidence for a lack of dark matter in elliptical galaxies.
Since at least some elliptical galaxies are believed to form from the mergers of spiral galaxies, the lack of dark matter in elliptical galaxies confounded the scientific community.
"Where did the dark matter disappear to during the merger?" Dekel said.
To find out, the team behind the new study ran computer simulations of galaxy mergers and analyzed the simulations.
The researchers, based in the U.S. and France, found that during galactic mergers, some stars are thrown into elongated orbits that extend far out from the galaxy's center.
These stars are the easiest for researchers to study in elliptical galaxies. To explain why, Dekel said to imagine viewing from across the room a ball of needles pointing out from a central point. The needle tips that are easiest to see are those that are perpendicular to your line of sight, sticking straight up, down, left, or right.
But scientists can only measure the speed of objects that are moving to or away from them, not those moving perpendicular to their line of sight.
So Dekel and his colleagues concluded that the slow-moving stars tracked in the 2003 study were actually moving in these elongated orbits, perpendicular to the astronomers' line of sight.
Those stars may have actually been moving at a high velocity without much motion toward or away from the viewer, thus taking on a slow-moving appearance, Dekel explained.
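A small numerical sketch of that projection effect (the speed and angles are invented for illustration): Doppler measurements capture only the component of a star's velocity along the line of sight, so a fast star moving nearly perpendicular to that line registers almost no measurable speed.

```python
import math

v_true = 300.0  # km/s, assumed true orbital speed of a star (illustrative)

# Angle between the star's motion and the observer's line of sight.
for angle_deg in (0, 30, 60, 80, 90):
    v_los = v_true * math.cos(math.radians(angle_deg))  # measurable line-of-sight component
    print(f"motion at {angle_deg:>2} deg to the line of sight -> measured {abs(v_los):6.1f} km/s")
```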
Michael Merrifield, an astrophysicist at the University of Nottingham in England, co-authored the 2003 study suggesting that elliptical galaxies lack dark matter.
"The jury is still out" on the presence of dark matter in elliptical galaxies, he said.
Computer modeling done by his team did not generate the scenario envisioned by Dekel and colleagues, he said.
However, the 2003 study authors did find some dark matter in elliptical galaxies, according to Aaron Romanowsky, lead author of the paper, who now works at the University of Concepción in Chile.
"We still think [the dark matter] is on the weak side," Romanowsky wrote in an e-mail interview. "One difference between our models is that theirs already has a lot of dark matter in the galaxy centers, which seems incorrect based on what we know about galaxies so far."
Both teams agree that astronomers are making progress toward resolving the issue, which is essential to understanding the formation and structure of galaxies.
| http://news.nationalgeographic.com/news/pf/16727935.html | 13 |
26 | Addition activities are a great way to help kids learn, practice, and master addition facts. Conceptually, kids are beginning to understand that adding is what takes place when the objects in two or more smaller groups come together and make one big group. They are also wrestling with notions of part and whole, and the fact that you can manipulate a group of objects in many ways to make different kinds of addition problems.
For this activity, you will need counters that are 2 different colors, one color on each side. You can buy these, or you can make your own by spray painting beans a different color on each side. Start by giving each child a quantity of beans at the level you want to practice. If you want to practice addition facts up to 5, give children 5 beans each.
Have them shake up the beans (like dice) and let them fall on the table. Put the beans of one color in one pile, and those of the other color in a second pile. Count each pile and add them together. You may also ask kids to write down the sums they make that add up to 5. Make several different sums. How many ways could they make 5?
Especially for tactile learners, writing can be an important way to help learn and remember addition facts. You can dictate facts for them to answer, have kids draw pictures to solve a problem, ask them to think of their own addition facts to write, etc. Writing facts over and over on paper is boring, though, and it's really unnecessary. Try some of these fun addition activities for drawing or writing addition facts:
Make a giant number line outdoors using sidewalk chalk, starting at 0. You will also need 2 giant dice, which can be made or purchased.
Pick a child to walk the number line and another 2 children to roll the two dice.
Have the walker stand on 0, then take steps on the number line to match the amount on one of the dice. Ask: "What number do you think you will be standing on when you add the other number?" Ask the other kids if they think her guess is correct. Then have the walker continue along the number line to add the second number.
Spend a few dollars and buy a large beach ball for this game. With a permanent marker, write numbers from 1 to 10 all over the beach ball. It's ok if numbers are repeated; just mix up where you put them.
To play the game, stand in a circle (if there are many kids) or face to face (if there are just two of you). Throw the ball back and forth. Whenever you catch it, look at where your thumbs landed and add those two numbers together before throwing the ball to the next person.
Math boards are basically an interesting background for addition practice with small objects. They are versatile enough to be used for all sorts of addition activities, they are super easy to make, and kids love them. To use them, give each child a math board and some of the manipulatives or counters suggested below. You can then have kids draw number cards or roll dice to see what numbers to add together, or either you or a child could suggest a math fact to solve. Here are some examples:
Work in plenty of addition activities that give practice in adding with physical objects or counters. Colored frogs or teddy bears are great for this, as a different color can be used for each set. Put a certain number of frogs of one color in one group, and frogs of another color in another group, then make a big deal of puuusshing them together to add.
Doing addition activities with dominoes makes math feel like a game. To play, spread out all the dominoes face down, so you can't see the dots. Have a child pick a domino, count the dots on each side, and add the numbers. For example, if a domino shows 5 and 4, show how to add the two together to make 9. Then write this as a problem: 5+4=9. Now turn the domino around and have kids repeat the process to find and solve the new problem: 4+5=9.
To get points, children must write down both problems and the answer correctly. They will get points that equal the sum; for the 4+5 domino, a child will get 9 points.
NOTE: In first grade, children may not intuitively know that after turning the domino around they will get the same solution. If it looks different, they may have to count the dots all over again to find the answer. Addition activities like this one and Linking Cubes Addition (below) are important for helping kids master this concept.
Linking cubes are a great tool for showing addition problems two ways. Have kids pick some cubes of one color, and a few more of another color, and link them together. One cube and three cubes make 4; write the problem 1+3=4. Then turn the cubes around so it shows 3+1=4, and write this problem as well. Kids can either use the cubes to make problems that are written down, or they can make their own sums with the cubes and then do the writing afterwards. The important thing is to give them plenty of practice seeing the inverse relationship of adding: that 2+4 is the same as 4+2, etc.
This is one of the easiest addition activities to teach, since most kids will already know how to play Concentration, or Memory. Get two sets of index cards, each in a different color. Write the addition facts you want your child to practice on one color, and the answers on the other. Mix them up, turn them face down, and have the child pick a card of each color. If the problem and the answer match, she keeps the cards and goes again. The player with the most cards at the end wins. (This is a good adult-child game, as the memory aspect makes either player just as likely to win.)
These addition activities will get kids started on addition basics. To learn more about teaching addition facts, visit the addition facts page. Want some practice worksheets? Don't miss these free, printable first grade math activities developed by a 20-year math educator. Once children have worked enough with addition activities to feel a bit more confident with their skills, try out the slightly more challenging addition games for even more practice. | http://www.smartfirstgraders.com/addition-activities.html | 13 |
20 | Pulsars have been something of a mystery since they were first discovered. A rapidly spinning star that is heavier than our Sun but smaller than a city? The concept was already somewhat alien seeming compared to the physics that are witnessed in daily life, on the surface anyways. But now, new research has found them to be even stranger than previously thought.
Researchers at the University of Vermont recently discovered a pulsar (PSR B0943+10) that very rapidly shifts between very different types of emission: one moment emitting radio waves, and the next emitting almost entirely X-rays and no radio waves. As the researchers note, the discovery “challenges all proposed pulsar emission theories,” and completely reopens the debate on the subject. There have long been dissenting opinions on how these stars actually “work”.
To the eyes of astronomers, the stars essentially look like they are the universe’s lighthouses. “Pulsars shine beams of radio waves and other radiation for trillions of miles. As these highly magnetized neutron stars rapidly rotate, a pair of beams sweeps by, appearing as flashes or pulses in telescopes on Earth.”
“Using a satellite X-ray telescope, coordinated with two radio telescopes on the ground, the researchers observed a pulsar that was previously known to flip on and off every few hours between strong (or ‘bright’) radio emissions and weak (or ‘quiet’) radio emissions.”
By monitoring the pulsar simultaneously in X-rays and radio waves, the researchers observed that it shows the same behavior when it’s observed at X-ray and radio wavelengths, simply in reverse.
“This is the first time that a switching X-ray emission has been detected from a pulsar. Flipping between these two extreme states — one dominated by X-ray pulses, the other by a highly organized pattern of radio pulses — was very surprising,” says Joanna Rankin, an astrophysicist at the University of Vermont.
“As well as brightening in the X-rays we discovered that the X-ray emission also shows pulses, something not seen when the radio emission is bright,” said Rankin. “This was completely unexpected.”
None of the currently proposed models of pulsars can account for this behavior. “All theories to date suggest that X-ray emissions would follow radio emissions. Instead, the new observations show the opposite.”
“The basic physics of a pulsar have never been solved,” Rankin says.
“There is a general agreement about the origin of the radio emission from pulsars: it is caused by highly energetic electrons, positrons and ions moving along the field lines of the pulsar’s magnetic field,” explains Wim Hermsen.
“How exactly the particles are stripped off the neutron star’s surface and accelerated to such high energy, however, is still largely unclear,” he adds.
“By studying the emission from the pulsar at different wavelengths, the team’s study had been designed to discover which of various possible physical processes take place in the vicinity of the magnetic poles of pulsars.”
“Instead of narrowing down the possible mechanisms suggested by theory, however, the results of the team’s observing campaign challenge all existing models for pulsar emission. Few astronomical objects are as baffling as pulsars, and despite nearly fifty years of study, they continue to defy theorists’ best efforts.”
Erratic behavior amongst pulsars isn't that uncommon. Roughly 2,000 pulsars have been discovered so far, and a number of them have shown strange behavior, showcasing “emissions that can become weak or disappear in a matter of seconds but then suddenly return minutes or hours later.”
B0943+10 is one of the erratic ones, “this star has two very different personalities,” says Rankin. “But we’re still in the dark about what causes this, and other pulsars, to switch modes. We just don’t know.”
“But the fact that the pulsar keeps memory of its previous state and goes back to it,” says Hermsen, “suggests that it must be something fundamental.”
More background on what exactly a pulsar is:
“A pulsar (portmanteau of pulsating star) is a highly magnetized, rotating neutron star that emits a beam of electromagnetic radiation. This radiation can only be observed when the beam of emission is pointing toward the Earth, much the way a lighthouse can only be seen when the light is pointed in the direction of an observer, and is responsible for the pulsed appearance of emission. Neutron stars are very dense, and have short, regular rotational periods. This produces a very precise interval between pulses that range from roughly milliseconds to seconds for an individual pulsar.”
“The precise periods of pulsars makes them useful tools. Observations of a pulsar in a binary neutron star system were used to indirectly confirm the existence of gravitational radiation. The first extrasolar planets were discovered around a pulsar, PSR B1257+12. Certain types of pulsars rival atomic clocks in their accuracy in keeping time.”
“The events leading to the formation of a pulsar begin when the core of a massive star is compressed during a supernova, which collapses into a neutron star. The neutron star retains most of its angular momentum, and since it has only a tiny fraction of its progenitor’s radius (and therefore its moment of inertia is sharply reduced), it is formed with very high rotation speed. A beam of radiation is emitted along the magnetic axis of the pulsar, which spins along with the rotation of the neutron star. The magnetic axis of the pulsar determines the direction of the electromagnetic beam, with the magnetic axis not necessarily being the same as its rotational axis. This misalignment causes the beam to be seen once for every rotation of the neutron star, which leads to the “pulsed” nature of its appearance. The beam originates from the rotational energy of the neutron star, which generates an electrical field from the movement of the very strong magnetic field, resulting in the acceleration of protons and electrons on the star surface and the creation of an electromagnetic beam emanating from the poles of the magnetic field. This rotation slows down over time as electromagnetic power is emitted. When a pulsar’s spin period slows down sufficiently, the radio pulsar mechanism is believed to turn off (the so-called “death line”). This turn-off seems to take place after about 10–100 million years, which means of all the neutron stars in the 13.6 billion year age of the universe, around 99% no longer pulsate. The longest known pulsar period is 9.437 seconds.”
“Though this very general picture of pulsars is mostly accepted, Werner Becker of the Max Planck Institute for Extraterrestrial Physics said in 2006, ‘The theory of how pulsars emit their radiation is still in its infancy, even after nearly forty years of work.’”
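To make the spin-up described in the excerpt above concrete, here is a rough back-of-the-envelope calculation (not from the article): if the collapsing core and the resulting neutron star are treated as uniform spheres of the same mass, conservation of angular momentum gives P_after = P_before × (R_after / R_before)². The core radius and initial rotation period below are illustrative guesses only.

```python
# Angular momentum L = I * omega, with I proportional to M * R^2 for a uniform sphere.
# If mass is conserved, the rotation period scales with the square of the radius.

R_core_km = 6000.0   # assumed radius of the pre-collapse stellar core, km (illustrative)
R_ns_km = 12.0       # typical neutron-star radius, km
P_core_s = 2000.0    # assumed rotation period of the core before collapse, s (illustrative)

P_ns_s = P_core_s * (R_ns_km / R_core_km) ** 2
print(f"Post-collapse rotation period: about {P_ns_s * 1000:.1f} milliseconds")
```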
The new research was just published in the January 25th edition of the journal Science.
| http://planetsave.com/2013/01/25/chameleon-pulsar-surprises-researchers-reopens-old-debate/ | 13 |
14 | Astronomers are watching galaxies pop out bundles of newborn stars 12 billion light years away, thanks to a huge, sensitive new telescope in the Chilean Andes.
The Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope, built by an international collaboration that included Canada's National Research Council, was officially inaugurated just this week.
But it has already allowed scientists to observe in unprecedented detail galaxies in the very distant universe — 12 billion years back in time — when our universe, now estimated to be 13.7 billion years old, was less than 2 billion years old. (The light has taken 12 billion years to reach Earth, and therefore shows us what was happening at the source 12 billion years ago). The data collected by ALMA, an array of dozens of gigantic satellite dishes, also made it possible to calculate just how far away those galaxies are.
The observations, some of which were published this week in Nature and others that will be described in two upcoming articles in the Astrophysical Journal, show that just one billion years after the Big Bang, the universe was already home to many distant, star-forming galaxies called starburst galaxies.
"It just tells us earlier in the history of the universe, there might have been very large scale galaxy formation and star formation that might have happened earlier than we thought," said Yashar Hezaveh, an astrophysicist at McGill University in Montreal, who led one of the new studies and co-authored the others.
"So perhaps it's going to help us understand some of the processes that could cause the formation of these galaxies."
And at least one of those galaxies could be seen to contain lots of water, suggesting that it may have already harboured a massive black hole at that ancient time.
Hezaveh said one of the big questions that astrophysicists would like to answer is why those galaxies were forming stars so rapidly — at a rate of 4,000 per year. In comparison, our own Milky Way galaxy produces just one star a year.
Because the ancient starburst galaxies are so far away, they appear extremely faint.
But some of them are still detectable because of an effect called gravitational lensing. The effect occurs when the gravity of a galaxy between the Earth and the distant galaxy bends the light of the distant galaxy, making it appear bigger and brighter – up to 20 or 30 times bigger and brighter.
"Basically, it helps us see things in the background galaxy that we wouldn't be able to see otherwise," said Hezaveh in an interview.
Unfortunately, gravitational lensing often splits the image into multiple versions, as a prism does. And it distorts the picture, the way funhouse mirrors at an amusement park can warp your reflection, Hezaveh said.
His role in the study was to correct for that distortion.
"If you know something about how the mirror is, how it has stretched you and squeezed you, then you can correct for the image and really say what that person really looks like," he said.
Even with gravitational lensing, the galaxies observed by ALMA are very hard to detect. That's not only because they're very far away, but also because they emit no visible light or heat — the only "light" they give off is in the form of radio waves.
"We detect what's actually going on in those dark areas," said Alison Peck, the telescope's deputy project scientist. "These are regions that it's never been possible to observe before."
René Plume, an astrophysicist at the University of Calgary, said previous images of galaxies forming in the early universe "have just been fuzzy blobs in space."
With ALMA, he added, "we're going to really be able to start to see some of the detail of this formation process that's going on."
Additional detail is also what Hezaveh is looking forward to. The data from the study released this week was gathered a year ago, when only 16 of ALMA's massive dish antennas had been installed. The last seven of the telescope's full array of 66 antennas are currently being tested, according to the European Southern Observatory, one of the partners in the $1.4 billion project.
Hezaveh said that with 16 antennas, he and his collaborators were only able to see how the distant galaxies look on average, but they will be able to do much more detailed science in the future when all 66 antennas are up and running. | http://ca.news.yahoo.com/distant-star-baby-boom-captured-huge-telescope-014701795.html | 13 |
22 | Paper Airplane Experiment II
AUTHOR: Steve McCombs, Ft. Greely School, Delta Junction, AK
GRADE LEVEL/SUBJECT: (2-6)
OVERVIEW: Most elementary students do not have a good grasp of the scientific method or how to set up an experiment, collect data, test a hypothesis, or organize the information after an experiment. Children can do real science by asking simple 'what if' questions that can be tested. For example, my son, David, wondered what type of popcorn popped best. We eat a lot of popcorn and friends often give us special types of seeds to try. Using one hundred seeds of six types of popcorn and a hot air popper he tested the seeds and graphed the results. The amount popped varied from 65% to 97% for the six types tested. The experiment was written up and used in his school science fair. The best part of the experiment was its simple uniqueness. It tested an idea and was not a copy from a book of experiments already tried.
PURPOSE: The purpose of this activity is for students to gather some baseline information, change one variable and test the results, change another variable and test the results, choose a paper plane design that they believe will fly the farthest and test the results, and graph the longest and average distances flown for each of the paper plane trials.
1. Make and fly a paper airplane.
2. Work cooperatively with a partner in collecting data.
3. Be introduced to the terms hypothesis, variable, and average.
4. Follow directions in making a complex paper plane design.
5. Organize and graph data collected.
Older students may write up experimental procedures, results, and conclusions.
RESOURCES/MATERIALS: Teacher materials include a simple proven airplane design and plans or kits for experimental airplanes. The White Wing Kits are on the market from Eddie Bauer and there are several books on paper airplanes. The school library/media center should be able to provide references. Two ten-meter tapes, paper clips, graph paper, and scissors should also be on hand.
Type of plane ____________________________ (e.g., paper clip)
Trial 1 __________
Trial 2 __________
Trial 3 __________
Trial 4 __________
Trial 5 __________
Shortest trial ______________
Longest trial __________________
Average (median) __________________
ACTIVITIES AND PROCEDURES:
To test your paper airplanes a space at least 60 feet long is needed. The school cafeteria, gym, multipurpose room, or a long hallway will work. The planes are made to be flown, but only during the measured trials. A couple of rules are needed: students who fly their planes outside the measured trials will lose their opportunity to fly that day, and students must have a partner to collect the data.
1. Students all make a paper airplane of the same design. The design should be simple to make and fly well.
2. Each student will be given five trials to fly his/her plane. The flight distance will be measured in decimeters and called out by the teacher. The student’s partner will record the distance for each trial. After the five trials the student will organize the data from shortest flight to longest flight. The flight in the middle will be the average (median) distance; a worked example of this bookkeeping appears after step 6. Data sheets will be kept in a folder for each student until the experiment is completed.
3. Using the same airplane design, students will repeat the procedure using a paper clip on the end of the plane. This will be the one variable tested.
4. Using the same airplane design, students will make flaps at the back of the plane. Flaps are made by cutting four slits on the rear edge of the wings and folding the slotted portion up. The plane is tested as before.
5. Using a set of photocopied designs or paper airplane kits, students will pick and construct the design which they think will go the farthest. Data will be collected using the same methods as previous trials.
6. Each type of plane will be assigned a color for graphing. Using two sheets of 100-square paper, students will horizontally graph the results of the longest flight and average flight for each type of plane: plain paper, paper clip, flaps, and experimental. (Other variations may be tried, such as 14-inch paper versus 11-inch, or various weights of paper from onion skin to construction paper.)
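The record keeping in steps 2 and 6 can be sketched in a few lines of code. This is a hypothetical example with invented flight distances (in decimeters), which a teacher might use to prepare the class or to double-check students' data sheets:

```python
import statistics

# Invented trial data: five flight distances (decimeters) per plane type.
flights = {
    "plain paper":  [42, 55, 61, 48, 70],
    "paper clip":   [38, 44, 52, 47, 49],
    "flaps":        [30, 36, 41, 33, 39],
    "experimental": [58, 66, 73, 60, 80],
}

for plane, trials in flights.items():
    ordered = sorted(trials)                # shortest to longest, as on the data sheet
    shortest, longest = ordered[0], ordered[-1]
    average = statistics.median(ordered)    # the "middle" flight used as the average
    print(f"{plane:12s} shortest {shortest:3d}  longest {longest:3d}  average {average:3.0f}")
```

The "longest" and "average" values printed here are the two numbers each student transfers to the horizontal graph in step 6.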
TYING IT ALL TOGETHER:
After the graphs are finished, they should be displayed with the data sheets. Review the scientific process. Review what was tested and what was changed for each series of flights. See how the predictions of the longest flying experimental design turned out. Using the graphs check if one design was always the farthest flying. Check the graphs to see if one variable made a difference in distance for the majority of planes and discuss why it did not work for all planes. Students should be able to outline other things which could be tested.
This can be a fun activity. Making planes once a week can make the project last four to five weeks. The activity provides several opportunities for cross-curricular activities in the areas of social studies (transportation, history of flight, impact of flight on our society) and language arts (report writing and creative writing). It is simple to do and enjoyable if one can endure a little chaos during the flight trials. | http://origami-blog.origami-kids.com/paper-airplane-experiment-ii.htm | 13 |
46 | Research and Experiments
On each mission, when not subjecting themselves to medical and psychological testing, space station crews perform hundreds of scientific experiments. All experiments are selected by a panel of NASA scientists from thousands of suggestions. Each is then carefully planned and all needed hardware is assembled. Prior to liftoff, each crew rehearses the steps required for each experiment to minimize failures. Much is at stake: Multimillion-dollar projects can be rendered useless if an experiment is botched.
Scientists representing nearly all major branches of knowledge have jockeyed to gain permission to conduct experiments on Salyut, Skylab, Mir, and the ISS. Everyone has recognized that their unique environments, far beyond Earth's atmosphere and floating in weightlessness, hold extraordinary potential for new discoveries in many fields. Those fields given the highest priority have been astronomy, earth environmental study, material development, botany, combustion and fluid physics, and military reconnaissance.
The point of research in space, in the view of most scientists, is principally to improve human life on Earth. From this research they believe will come knowledge and discoveries that will change and improve everyone's lives on Earth, from the foods that people eat, the cars they drive, the computers they use, and even medical procedures used by physicians.
A Giant Leap for Astronomy
First with Salyut and Skylab, then with Mir, and today with the ISS, one of the key focuses of scientific exploration has been furthering human understanding of the cosmos. All space stations have carried instrumentation of various types on their missions miles above Earth to provide astronomers with clearer images of planets, stars, and galaxies than even the largest telescopes on Earth can offer.
The principal reason astronomers are interested in mounting their instruments on space stations is that they operate far above Earth's atmosphere, which obscures astronomers' views due to dust particles, changing temperatures, and moisture in the form of clouds, rain, and fog. In addition, light from large cities scatters in the atmosphere and washes out faint objects for telescopes on the ground.
One of the earliest attempts at placing a telescope on a space station occurred on Salyut for the purpose of investigating the Sun. Skylab followed with the Apollo telescope mount (ATM), a canister attached to the space station and containing a conventional telescope with lenses that could zoom in on a solar event such as sunspots. It also carried ultraviolet cameras that, thanks to sophisticated mounts, could be aimed steadily and precisely at any point on the Sun regardless of disturbances, such as those caused by crew movement. The instruments provided astronomers with thousands of remarkably detailed photographs of the Sun's surface and of solar flares.
With the launch of Mir, which carried state-of-the-art instrumentation, photographs deeper into space became possible. Soviet cosmonauts conducted a photographic survey of galaxies and star groups using the Glazar telescope. Because the telescope was pointed hundreds of millions of miles into deep space, far beyond the solar system, the amount of light being captured was so small that exposure times up to eight minutes were required to capture enough light for a single photograph. Under such circumstances, even the slightest vibrations from astronaut movements could shake the space station and ruin the photograph. As a result, all astronauts were required to sit, strapped into chairs, during these long exposures.
Of greatest excitement to astronomers today is a new generation of telescope, already built, tested, and secured on the ISS. This telescope, called the Submillimetron, is unique in three significant ways. First, as its name suggests, it detects and photographs very short wavelengths of light, much shorter than sunlight. These short microlight waves were emitted billions of years ago, when the universe was first formed. Astronomers believe that these images, then, are of cosmic bodies formed close to the beginning of the universe. Second, such a unique and precise instrument is designed to operate at supercold temperatures using liquid helium to chill sky-scanning equipment, thereby increasing the sensitivity of the Submillimetron's telescopic gear by slowing the motion of the molecules. A third unique feature allows for normal crew activity at all times, despite the extreme sensitivity of the equipment and extreme distances it photographs. The Submillimetron undocks from the ISS before it is used and then redocks for necessary maintenance. Astrophysicists interested in both the origin and ultimate fate of the universe are particularly interested in the Submillimetron's capabilities.
Investigating Environmental Hot Spots
Environmentalists and biologists recognize the value of space stations as a unique means to gain the broadest possible view of Earth as well as detailed views of particular environmental hot spots. When Earth is viewed from space through a variety of infrared and high-resolution cameras, natural resources can be identified, crops can be surveyed, and changes in the atmosphere and climate can be measured. Events on the surface, such as floods, oil spills, landslides, earthquakes, droughts, storms, forest fires, volcanic eruptions, and avalanches can be accurately located, measured, and monitored.
One of the earliest and most successful environmental projects carried out aboard a space station was the use of a scatterometer on Skylab. A scatterometer is a remote-sensing instrument capable of measuring wind speed and direction on Earth under all weather conditions. When it was activated on Skylab, the scatterometer captured wind speed and direction data once a second and transmitted the data back to Earth. Engineers analyzed the data and used it to forecast weather, warn ships at sea of approaching heavy storms, assist in oil spill cleanup efforts by accurately predicting the direction and speed of the oil slick, and notify manufacturers of hazardous chemicals of the safest times to ship their products.
Mir also proved its value to environmental science. One of Mir's modules, called "Priroda," a Russian word meaning "nature," was launched in April 1996. Priroda carried equipment to study the atmosphere and oceans, with an emphasis on pollution and other forms of human impact on Earth. It also was capable of conducting surveys to locate mineral resources and underground water reserves as well as studies of the effects of erosion on crops and forests.
To accomplish these ambitious objectives, environmental engineers loaded Priroda with active, passive, and infrared sensors for detecting and measuring natural resources. It carried several types of spectrometers used for measuring ozone and fluorocarbon (the chemical found in many aerosols) concentrations in the atmosphere. At the same time, equipment monitored the spread of industrial pollutants, mapped variations in water temperatures across oceans, and measured the height of ocean waves, vertical structure of clouds, and wind direction and speed.
When the ISS went into space in 1998, environmental studies were high on the list of projects for the astronauts to work on. From the ISS orbit, 85 percent of Earth's surface can be observed. Continuously monitoring and investigating Earth from space with an impressive array of high-tech instrumentation, the ISS has facilitated the identification of many environmental problems. In 2001 the commander of the ISS, Frank Culbertson, shared with the British Broadcasting Corporation the many observations he and other astronauts had made after studying Earth's environment for four months.
The ISS Window
Designers of the ISS wished to add a special portal on one of the modules through which astronauts could gaze at and photograph Earth and neighboring planets. Gazing out into space was not new, but previous windows were made of glass that easily scratched, clouded, and discolored. In an effort to correct these defects, optical engineers created the Nadir window, named after the astronomical term describing the lowest point in the heavens directly below an observer.
Mounted in the U.S. laboratory module element of the space station, the twenty-inch diameter Nadir window provides a view of more than 75 percent of Earth's surface, containing 95 percent of the world's population. Designed by Dr. Karen Scott of the Aerospace Corporation, the high-tech five-inch-thick window is actually a composite of four laminated panes consisting of a thin exterior "debris" pane that protects it from micrometeorites, primary and secondary internal pressure panes, and an interior "scratch" pane to absorb accidental interior impacts. Each has different optical characteristics.
Scott headed a team of thirty optical engineers that used a five-hundred-thousand-dollar optical instrument to make fine calibration measurements on the window to ensure precise clarity free of distortion before installing it in the lab module. Tests conducted on the multiple layers of the window ensured that they would not distort under the varying pressure and temperatures common on the space station. After five days of extreme testing, the unique window was determined to have the characteristics that would allow it to support a wide variety of research applications, including such things as coral reef monitoring, the development of new remote-sensing instruments, and monitoring of Earth's upper atmosphere.
High above Earth, Culbertson made some startling observations:
We see storms, we see droughts, we saw a dust storm a couple of days ago, in Turkey I think it was, and we have seen hurricanes. It is a cause for concern. Since my first flight in 1990 and this flight, I have seen changes in what comes out of some of the rivers, in land usage. We see areas of the world that are being burned to clear land, so we are losing lots of trees. There is smoke and dust in wider spread areas than we have seen before, particularly as areas like Africa dry up in certain regions. 26
Cutting-Edge Cell Research
Since 2000, NASA has been conducting cellular research on board the ISS to take advantage of the weightless environment to study cell growth and the intricate and mysterious subcellular functions within cells. Traditionally, biologists study cells by slicing living tissue into sections of single-cell thickness. The drawback to this process, for as long as it has been practiced, is that the prepared specimens begin to die within a few hours as the cells begin to lose their ability to function normally. At best, researchers on Earth have only one day to scrutinize under microscopes the workings of minute structures within cells. The problem that occurs when single cells are removed from a living organ for examination is that microscopic structures crucial to the life of the cell collapse, causing the cell to cease functioning.
This research has primarily focused on the functioning of cells in the human liver, the organ that regulates most chemical levels in the blood and breaks down the nutrients into forms that are easier for the rest of the body to use. In a weightless environment slices of liver one-cell thick remain healthy and active for up to seven days, a significant advantage for researchers in space over those working on Earth. According to Dr. Fisk Johnson, a specialist in liver disease under contract with NASA, "Space is the gold-standard environment for this cutting-edge cell research. Only in space, a true microgravity environment, will we be able to isolate and study each of the individual factors impacting cell function." 27
Once this advantage was discovered, the question then arose of how medical researchers on Earth could gain the same advantage. That question was answered by medical laboratories working with NASA that developed a device called a rotating bioreactor, which is capable of simulating a weightless environment on Earth. The rotating bioreactor works by gently spinning a fluid medium filled with cells. The spinning motion neutralizes most of gravity's effects, creating a near-weightless environment that allows single cells to function normally rather than collapse as they would otherwise do.
Utilizing the rotating bioreactor on Earth in the year 2002 scientists successfully accomplished long-term culturing of liver cells, which allows the cells to maintain normal functions for six days. One of the advantages of studying healthy cells for a long time is the ability to identify and match cellular characteristics to drugs that might cure particular diseases. According to Dr. Paul Silber, a liver specialist, "Our recent discoveries could lead to better, earlier drug-candidate screening, which would speed up drug development by pharmaceutical companies, and importantly, to a longer life for the 25,000 people every year waiting for a life-saving liver transplant." 28
Creating Materials in a Weightless Environment
The weightless environment on space stations was of as much interest to materials scientists as to any others. Scientists are interested in a variety of physical properties of materials, such as melting points, molding characteristics, and the combining or separating of raw materials into useful products. Before the first space stations, materials scientists performed simple experiments of very short duration aboard plummeting airplanes and from tall drop towers. Through these studies, scientists discovered that gravity plays a role in introducing defects in crystals, in the combination of materials, and in other processing activities requiring the application of heat. Until the advent of space stations, however, they were incapable of sustaining a weightless environment long enough to thoroughly study these phenomena.
The advent of space stations allowed the study of new alloys, protein crystals for drug research, and silicon crystals for use in electronics and semiconductors. Materials scientists theorized that improvements in processing in weightlessness could lead to the development of valuable drugs; high-strength, temperature-resistant ceramics and alloys; and faster computer chips.
One of the Mir components, the Kristall module, was partially dedicated to experiments in materials processing. One objective was to use a sophisticated electrical furnace in a weightless environment for producing perfect crystals of gallium arsenide and zinc oxide to create absolutely pure computer chips capable of faster speeds and fewer errors. Although they failed to create absolutely pure chips, they were purer than those they could create within Earth's gravitational field.
More recently, fiber-optic cables are also being improved in weightlessness. Fiber-optic cables, vital for high-speed data transmission, microsurgery, certain lasers, optical power transmission, and fiber-optic gyroscopes, are made of a complex blend of zirconium, barium, lanthanum, aluminum, and sodium. When this blend is performed in a weightless environment, materials scientists are finding them to be more than one hundred times more efficient than fibers created on Earth.
In 2002 the ISS began the most complex studies of impurities in materials and ways to eliminate them in a microgravity environment. One of the more interesting causes of impurities, for example, is bubbles. On Earth, when metals are melted and blended, bubbles form. According to materials scientist Dr. Richard Grugel, "When bubbles are trapped in solid samples, they show up as internal cracks that diminish a material's strength and usefulness." 29 In a weightless situation, however, although bubbles still form, they move very slightly, and this reduces internal cracks. Secondarily, their slow movement allows researchers to study the effect of bubbles on alloys more easily and precisely.
According to Dr. Donald Gillies, NASA's leader for materials science, the studies of bubbles and other mysteries of materials production hold promise for new materials:
We can thank advances in materials science for everything from cell phones to airplanes to computers to the next space ship in the making. To improve materials needed in our high-tech economy and help industry create the hot new products of the future, NASA scientists are using low gravity to examine and understand the role processing plays in creating materials. 30
For centuries, physicists and chemists have been experimenting on a variety of elements and metals to discover new compounds and to improve existing alloys. They have also been aware that their experimental results are often affected by the containers they use and by the instruments that measure those results. Such contamination often invalidates experiments. Even worse, containers can sometimes dampen vibrations in a material or cool the sample too rapidly, throwing the validity of the experiment into doubt. In some cases, a metal is reactive enough to destroy its container, meaning that some materials simply cannot be studied on Earth.
When the first space stations went into orbit, physicists and chemists seized on the opportunity to conduct experiments within a weightless environment. If materials could be suspended in space during experiments, without the need for containers and eliminating the variables that the containers themselves imposed, far more accurate results would be allowable. Initial results of such experiments answered many questions that could not have been resolved on Earth. Of particular interest was the property of metals in a liquid state that causes them to resist solidifying, even at temperatures where they would be expected to do so. This phenomenon is called nucleation. According to Dr. Kenneth Kelton, a physics professor at Washington University in St. Louis, "Nucleation is the major way physical systems change from one phase to another. The better we understand it, the better we can tailor the properties of materials to meet specific needs." 31
Encouraged by the results of experiments carried out in space, engineers developed an apparatus on Earth that could duplicate a weightless environment for further research. NASA, joined by several private research companies, developed the electrostatic levitator (ESL), which is capable of suspending liquid metals without the sample touching the container and without the technicians handling equipment in ways that might alter results. Two practical applications using the ESL are the production of exceedingly smooth surfaces for computer and optical instrumentation and exceedingly pure metal for wires, making them capable of transmitting large volumes of data.
Greenhouses in Space
While materials scientists look to space station experiments in hopes of improving industrial processes on Earth, others are focused on investigating processes that might someday happen on a large scale in space. For example, botanists are studying the feasibility of crop cultivation on space stations in the belief that grains and vegetables may someday be needed in quantities large enough to supply deep space expeditions or even space colonies. To these ends, many experiments have been performed testing different gases, soils, nutrients, and seeds. One of them, called seed-to-seed cycling in a weightless environment, produced remarkably optimistic results. According to biologist Mary E. Musgrave:
By giving space biologists a look at developmental events beyond the seedling stage, this experiment was an important contribution not only to gravitational biology, but also to the study of space life support systems. Data from this experiment on gas exchange, dry matter production and seed production provided essential information on providing a plant-based food supply for humans on long-duration space flights. 32
Many of the botanical experiments in orbit have focused on the effects of weightlessness on plant growth and seed germination. Botanists had known for many years that seedlings on Earth display geotropism—that is, they respond to gravity by sending their roots down into the soil and stalks up above the ground. In addition, gravity affects the diffusion of gases given off by the plant, the drainage of water through soil, and the movement of water, nutrients, and other substances within the plant.
Early experiments aboard Skylab were not encouraging for those who hoped to grow plants in space. For example, researchers' speculations were confirmed that without gravity, the roots and stalks of plants could not correctly orient themselves. Some seedlings sent their roots above the soil and their stalks deep into the soil, with the result that they withered and died. And even those that did properly orient their roots and stalks often failed to produce seeds, a critical failure unanticipated by researchers.
In the mid-1980s, botanists performed an experiment to understand how seeds might survive weightlessness. Scientists sent 12.5 million tomato seeds into space and kept them there aboard Mir for four years. In 1990 the seeds were planted by botanists; many were also given to schoolchildren so they could make science projects of germinating them. Botanists discovered that a slightly higher percentage of seeds from space germinated than did seeds that had been kept on Earth and that almost all produced normal plants. These results were achieved even though the seeds had been exposed to radiation while in space.
A second significant experiment on the ISS sought to determine whether second-generation space plants would be as healthy as second-generation plants on Earth. Scientists analyzing the data concluded that the quality of second-generation seeds produced in orbit was lower than that of seeds produced on Earth, resulting in a smaller second-generation plant size. This diminished seed quality is believed to be caused by the different ripening mechanics inside the seed pod in weightlessness.
With so much evidence pointing to weightlessness as a hostile environment for plant production, botanists are a bit uncertain of the future of agriculture in space. One potential solution being investigated on the ISS is to grow plants without soil, a process known as hydroponics. In this process, the plants grow without soil, in a nutrient-rich solution.
In addition to their promise for scientists, space stations from the very beginning were seen as having military value. During the Cold War, when the United States and the Soviet Union jockeyed for political and military advantage on Earth, each country also looked to space stations to give them battlefield superiority. Although neither nation actually placed offensive weapons on board their space stations, both sought to exploit space stations' potential for reconnaissance.
All space stations have carried equipment capable of photographing objects 250 miles below. Photographs are detailed enough, for example, to allow analysts to determine the types and numbers of aircraft on aircraft carriers and to track troop movements on land.
Yet, military officials admit that so far, at least, space outposts can do little more than support more conventional military operations. At a meeting of the American Institute of Aeronautics held in Albuquerque, New Mexico, in August 2001, Colonel Steve Davis, an officer at Kirtland Air Force Base, offered his assessment of the military's use of space.
When NASA and the Russian Space Agency negotiated the initial agreement for the construction, deployment, and utilization of the ISS, no one gave consideration to using it as a tourist destination. From the inception of the project, all countries involved considered the ISS to be an orbiting laboratory dedicated to the study of a variety of scientific experiments and observations.
This somewhat parochial view was shaken in 2001 when the multimillionaire American businessman Dennis Tito expressed an interest in paying for a short vacation on the ISS to satisfy his own personal fascination with space. When NASA was notified of his interest and willingness to pay for a short visit to the spacecraft, his request was rejected on the grounds that the multibillion-dollar craft was for scientific purposes only. Recognizing that the Russians were short of money needed to continue their construction and launch costs, Tito approached them with an offer of $20 million.
Brushing aside NASA's objections, the Russians required Tito first to complete the standard training program before being blasted on what most called the most expensive vacation ever. In May 2001, when Tito docked at the ISS, several important milestones were achieved. These included the fact that a middle-aged civilian astronaut could easily survive space travel, that a space-tourism market did indeed exist, and that there was no longer a valid reason to discount the notion of space tourism.
Despite NASA's long-running opposition to his flight, which included barring him from training with his Russian crewmates at the Johnson Space Center, a move that triggered a minor international incident, Tito said he enjoyed his eight days in space and hoped that NASA would be more supportive in the future.
"We're [the Air Force] still looking for that definitive mission in space; force enhancement is primarily what we're doing today," Davis said. He added that there is increasing reliance on using space for military needs: "Space control is becoming more important as we have very high value assets in orbit. We depend on these assets and are interested in protecting them." Davis also noted that one of the Soviet Union's early piloted orbital stations had a rapid-fire cannon installed. The military outpost was armed, Davis said, "so they could defend themselves from any hostile intercepts." 33
Even the ISS is seen by some participating nations as having military value. An intergovernmental agreement on the ISS was first put in place in 1988, resulting in an exchange of letters between participating countries involved in the megaproject. Those letters state that each partner in the project determines what a "peaceful purpose" is for its own element. According to Marcia Smith, a space policy expert at the Congressional Research Service, a research arm of the U.S. Congress, "The 1988 U.S. letter clearly states that the United States has the right to use its elements . . . for national security purposes, as we define them." 34
One of the more perceptive observations made when the first space stations flew into orbit was the potential that these floating laboratories might provide for investigating and solving a multitude of scientific questions. To a great degree, those making these observations were correct. Nearly every branch of science jumped on the space station bandwagon with proposals to investigate a host of questions. As the twenty-first century pushes forward, many problems of living in space have been solved while others remain elusive. The question being asked more frequently than ever is whether the costs of the many space stations and their experiments have returned enough benefits to taxpayers to continue the space station program. | http://www.scienceclarified.com/scitech/Space-Stations/Research-and-Experiments.html | 13 |
In a short report, the results are essentially presented in tables, graphs, histograms or gels. These graphics are accompanied by legends which describe the experimental conditions under which the results were obtained. Any written general summary of the results should be used to begin the discussion section, as it will form the basis for drawing conclusions.
The main problems students have with the results section in a short report are:
- how to group the data appropriately;
- how to present the data visually to show the results.
It should be possible for the reader to look at the graph and/or table and instantly get a 'feel' for the results. In other words, the reader should not have to do any 'mental arithmetic' to appreciate the significance of the trends.
Content and Structure
Your results section provides information to answer the following question:
What did you find? (your actual results).
It is common practice to display your results in the form of a table or figure. Tables are a means of presenting information accurately and concisely, while figures (graphs) can efficiently illustrate trends and comparisons. However, you also have to use language to give your table a title. Please note that we usually use the term 'figure' rather than 'graph'.
Your results section usually has two main stages:

For a table:
- Stage 1: State the title for a table (e.g. "Table 1. Demographic characteristics of study participants.").
- Stage 2: Present the table.

For a figure:
- Stage 1: Present a figure.
- Stage 2: State the title for a figure (e.g. "Figures 1 and 2. Effect of high- (n=3) and low- (n=4)-cholesterol diets on blood cholesterol concentration.").
Tables consist of data organised into columns and rows. Tables should be

- clearly presented
  - Tables should be placed on the page so that there is a clear boundary between text and graphic.
  - Tables should be presented in close proximity to their accompanying title and legend.
- clearly numbered
  - Tables are numbered consecutively as they appear in the report.
  - Tables and figures should be numbered separately, tables following one sequence and figures another.
- clearly titled
  - Titles should be simple, but informative.
  - Table numbers and titles usually appear above the table.
- easily interpreted
  - Tables should have clearly identified row and column headings.
  - Like material is usually placed in columns (i.e. vertically) rather than in rows (i.e. horizontally).
  - The dependent variable(s) are usually listed in the row headings and the independent variable(s) in the column headings.
  - Generally, there are more rows than columns.
  - The units should appear under the column heading(s) and not in the body of the table.
Tables are very useful for presenting precise quantities in a highly organised and economical way. The reader will scrutinise your tables for the accurate, detailed information on which you have based your discussion and conclusion.
However, be careful not to be over-precise: it is usually not necessary to give three significant figures when presenting quantities; two or even one may be sufficient, depending on the experiment. When you are averaging results, you will need to quote errors in your table.
Tables (especially those that contain many cells) are not very useful for showing trends and comparisons. For these purposes, figures are more appropriate.
Table 1. Demographic characteristics of study participants.
| | Mean ± SEM | Range |
|---|---|---|
| Age (years) | 53.0 ± 1.1 | 49–58 |
| Body Mass Index (kg/m²) | 26.0 ± 1.1 | 21–32 |
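When a table reports values as mean ± SEM, the standard error of the mean is simply the sample standard deviation divided by the square root of the number of observations. A minimal Python sketch (the ages below are invented for illustration and are not the data behind Table 1):

import math
import statistics

ages = [49, 51, 52, 53, 54, 56, 58]    # hypothetical raw data
mean = statistics.mean(ages)
sem = statistics.stdev(ages) / math.sqrt(len(ages))
print(f"{mean:.1f} ± {sem:.1f}")       # prints "53.3 ± 1.1" for these values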
Figures may be diagrams, graphs, photographs, etc.
Figures should be
- clearly presented
  - Figures should be placed on the page so that there is a clear boundary between text and graphic, with sufficient margins for labelling of axes.
  - Figures should be presented in close proximity to their accompanying title and legend.
  - The size and shape of the graph frame, the scales and the scale markings need to be chosen carefully so as not to distort the data.
  - The graph should fill at least 70% of the graph frame.
- clearly numbered
  - Figures are numbered consecutively as they appear in the report.
  - Tables and figures should be numbered separately, tables following one sequence and figures another.
- clearly titled
  - Titles should be simple, but informative.
  - Figure numbers and titles usually appear below the figure.
- easily interpreted
  - Axes must be labelled appropriately, and units of time, concentration, optical density, etc. should be clearly specified.
  - Data points on the curve(s) must be clear.
  - Scales for the x-axis and y-axis should be chosen to give a good line length and slope.
  - Scale markings should be in 'round numbers' only and evenly spaced. Usually, you do not show experimental values (raw data).
Figure 1. Cholesterol Concentration of subjects on a High Cholesterol Diet.
You have now reached the end of the Results section.
Port forwarding or port mapping is a name given to the combined technique of
- translating the address and/or port number of a packet to a new destination
- possibly accepting such packet(s) in a packet filter (firewall)
- forwarding the packet according to the routing table.
The destination may be a predetermined network port (assuming protocols like TCP and UDP, though the process is not limited to these) on a host within a NAT-masqueraded, typically private network, based on the port number on which it was received at the gateway from the originating host.
In a typical residential network, nodes obtain Internet access through a DSL or cable modem connected to a router or network address translator (NAT/NAPT). Hosts on the private network are connected to an Ethernet switch or communicate via a wireless LAN. The NAT device's external interface is configured with a public IP address. The computers behind the router, on the other hand, are invisible to hosts on the Internet as they each communicate only with a private IP address.
When configuring port forwarding, the network administrator sets aside one port number on the gateway for the exclusive use of communicating with a service in the private network, located on a specific host. External hosts must know this port number and the address of the gateway to communicate with the network-internal service. Often, the port numbers of well-known Internet services, such as port number 80 for web services (HTTP), are used in port forwarding, so that common Internet services may be implemented on hosts within private networks.
Typical applications include the following:
- Running a public HTTP server within a private LAN
- Permitting Secure Shell access to a host on the private LAN from the Internet
- Permitting FTP access to a host on a private LAN from the Internet
Administrators configure port forwarding in the gateway's operating system. In Linux kernels, this is achieved by packet filter rules in the iptables or netfilter kernel components. BSD and Mac OS X operating systems implement it in the Ipfirewall (ipfw) module.
When used on gateway devices, a port forward may be implemented with a single rule to translate the destination address and port (on Linux kernels, this is a DNAT rule). The source address and port are, in this case, left unchanged. When used on machines that are not the default gateway of the network, the source address must be changed to be the address of the translating machine, or packets will bypass the translator and the connection will fail.
When a port forward is implemented by a proxy process (such as on application layer firewalls, SOCKS based firewalls, or via TCP circuit proxies), then no packets are actually translated, only data is proxied. This usually results in the source address (and port number) being changed to that of the proxy machine.
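As a rough illustration of the proxy approach just described, the short Python sketch below accepts TCP connections on a local port and relays the bytes to a fixed destination; the addresses and port numbers are hypothetical, and error handling is kept to a minimum:

import socket
import threading

LISTEN_PORT = 8080                          # hypothetical listening port
DEST_HOST, DEST_PORT = "192.0.2.10", 80     # hypothetical destination server

def pipe(src, dst):
    # Copy bytes in one direction until the connection closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", LISTEN_PORT))
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((DEST_HOST, DEST_PORT))
        # One thread per direction so data can flow both ways at once.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    main()

Because the proxy originates the upstream connection itself, the destination sees the proxy's address as the source, exactly as described above.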
Usually only one of the private hosts can use a specific forwarded port at one time, but configuration is sometimes possible to differentiate access by the originating host's source address.
On Unix-like operating systems, ports with numbers smaller than 1024 can only be bound by software running as the root user. Running with superuser privileges (in order to bind the port) may be a security risk to the host; therefore, port forwarding is used to redirect a low-numbered port to another, high-numbered port, so that application software may execute as a common operating system user with reduced privileges.
The Universal Plug and Play protocol (UPnP) provides a feature to automatically install instances of port forwarding in residential Internet gateways. UPnP defines the Internet Gateway Device Protocol (IGD) which is a network service by which an Internet gateway advertises its presence on a private network via the Simple Service Discovery Protocol (SSDP). An application that provides an Internet-based service may discover such gateways and use the UPnP IGD protocol to reserve a port number on the gateway and cause the gateway to forward packets to its listening socket.
Types of port forwarding
Port forwarding can be divided into the following types:
- Local port forwarding
- Remote port forwarding
- Dynamic port forwarding
Local port forwarding
Local port forwarding is the most common type of port forwarding. It is used to forward data securely from another client application running on the same computer as the Secure Shell client. Local port forwarding lets a user connect from the local computer to another server. It can also be used to bypass firewalls that block certain web pages.
To use local port forwarding, you need to know the destination server and two port numbers. Connections from the SSH client are forwarded via the SSH server to a destination server. As stated above, local port forwarding forwards data from another client application running on the same computer as the Secure Shell client. The Secure Shell client is configured to redirect data from a specified local port through the secure tunnel to a specified destination host and port. This port is on the same computer as the Secure Shell client. Any other client running on the same computer can then be configured to connect to the forwarded port (rather than directly to the destination host and port). After this connection is established, the Secure Shell client listens on the specified port and redirects all data sent to that port through the secure tunnel to the Secure Shell server. The server decrypts the data and then directs it to the destination host and port.
On the command line, "-L" specifies local port forwarding. The destination server and two port numbers need to be included. Port numbers less than 1024 or greater than 49150 are reserved for the system. Some programs will only work with specific source ports, but for the most part any source port number can be used.
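For example, an OpenSSH client can be told to listen on local port 8080 and tunnel each connection to port 80 of a server reachable from the SSH server (the host names here are hypothetical):

ssh -L 8080:intranet.example.com:80 user@gateway.example.com

A browser on the local machine pointed at http://localhost:8080/ then reaches intranet.example.com through the encrypted tunnel.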
Some uses of local port forwarding:
- Using local port forwarding to Receive Mail
- Connect from a laptop to a website using an SSH tunnel.
Remote port forwarding
Remote port forwarding is a form of port forwarding used by applications that connect to a Secure Shell server in order to reach a service that resides on, or is reachable from, the Secure Shell client's side. In other words, remote port forwarding lets a user connect from a remote Secure Shell server to another server.
To use remote port forwarding, the address of the destination server and two port numbers must be known. The port numbers chosen depend on which application is to be used.
Remote Port Forwarding allows other computers access to applications hosted on remote servers. Two examples:
- An employee of a company hosts an FTP server at his own home and wants to give access to the FTP service to employees using computers in the workplace. In order to do this, he can set up remote port forwarding through SSH on the company computers by including his FTP server’s address and using the correct port numbers for FTP (FTP port tcp/21)
- Opening remote desktop sessions is a common use of Remote Port Forwarding. Through SSH, this can be accomplished by opening the Virtual Network Computing port (5900) and including the destination computer’s address
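For the first example, a hypothetical OpenSSH invocation run from the employee's home machine might look like this; it asks the company SSH server to listen on port 2121 and relay those connections back through the tunnel to the FTP server on the home machine:

ssh -R 2121:localhost:21 user@work.example.com

(With OpenSSH, whether hosts other than the SSH server itself may connect to the forwarded port is controlled by the server's GatewayPorts setting.)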
Dynamic port forwarding
Dynamic Port Forwarding (DPF) is an on-demand method of traversing a firewall or NAT through the use of firewall pinholes. The goal is to enable clients to connect securely to a trusted server that acts as an intermediary for the purpose of sending and receiving data to one or many destination servers.
DPF can be implemented by setting up a local application (such as SSH) as a SOCKS proxy server, which can be used to process data transmissions through the network or over the Internet. Programs (such as web browsers) must be configured individually to direct traffic through the proxy, which acts as a secure tunnel to another server. Once the proxy is no longer needed, the programs must be reconfigured to their original settings. Because of the manual requirements of DPF, it is not often used.
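With OpenSSH, for instance, dynamic forwarding is requested with the -D option (the host name is hypothetical):

ssh -D 1080 user@gateway.example.com

The SSH client then exposes a SOCKS proxy on localhost port 1080, and a browser configured to use that proxy sends its traffic through the encrypted tunnel.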
Once the connection is established, DPF can be used to provide additional security for a user connected to an untrusted network. Since data must pass through the secure tunnel to another server before being forwarded to its original destination, the user is protected from packet sniffing that may occur on the LAN.
DPF is a powerful tool with many uses.
- A user connected to the Internet through a coffee shop, hotel, or otherwise minimally secure network may wish to use DPF as a way of protecting data.
- DPF can be used to bypass firewalls that restrict access to outside websites, such as in a corporate network.
- DPF can be used as a precaution against hacking.
- "Definition of: port forwarding". PC Magazine. Retrieved 2008-10-11.
- Rory Krause. "Using ssh Port Forwarding to Print at Remote Locations". Linux Journal. Retrieved 2008-10-11.
- Jeff "Crash" Goldin. "How to set up a home web server". Red Hat. Retrieved 2008-10-11.
- OpenSSH Port forwarding
- Alan Stafford. "Warp Speed Web Access: Sharing the Bandwidth". PC World. Retrieved 2008-10-11.
- Using UPnP for Programmatic Port Forwardings and NAT Traversal — Free software which uses UPnP and the Internet Gateway Device Protocol (IGD) to automate port forwarding
- TCP forwarding source code in C# — Source code in C# explaining/PoC TCP forwarding. | http://en.wikipedia.org/wiki/Port_forwarding | 13 |
Solving Right Triangles
Right Triangle Review
A right triangle is a triangle with one right angle. The side opposite the right angle is called the hypotenuse, and the other two sides are called the legs. The angles opposite the legs are complementary: they sum to 90 degrees. Suppose that the legs have lengths a and b, and the hypotenuse has length c. The Pythagorean Theorem states that in all right triangles, a² + b² = c². For a more thorough discussion of right triangles, see Right Triangles.
In this text, we will label the vertices of every right triangle A , B , and C . The angles will be labeled according to the vertex at which they are located. The side opposite angle A will be labeled side a , the side opposite angle B will be labeled side b , and the side opposite angle C will be labeled side c . Angle C we will designate as the right angle, and thus, side c will always be the hypotenuse. Angle A will always have its vertex at the origin, and angle B will always have its vertex at the point (b, a) . Any right triangle can be situated on the coordinate axes to be in this position:
In Trigonometric Functions, we defined the trigonometric functions using the coordinates of a point on the terminal side of an angle in standard position. With right triangles, we have a new way to define the trigonometric functions. Instead of using coordinates, we can use the lengths of certain sides of the triangle. These sides are the hypotenuse, the opposite side, and the adjacent side. Using the figure above, for angle A the hypotenuse is side c, the opposite side is side a, and the adjacent side is side b. Here are the sides of a general right triangle labeled in the coordinate plane.
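These relationships are easy to check numerically. The short Python sketch below (the function name and argument order are ours, not part of the original text) solves a right triangle from its two legs using the Pythagorean Theorem and the inverse tangent:

import math

def solve_right_triangle(a, b):
    # Legs a and b; the right angle is at C, so c is the hypotenuse.
    c = math.hypot(a, b)                 # c = sqrt(a**2 + b**2)
    A = math.degrees(math.atan2(a, b))   # tan A = opposite/adjacent = a/b
    B = 90.0 - A                         # the acute angles are complementary
    return c, A, B

print(solve_right_triangle(3, 4))  # (5.0, 36.87..., 53.13...)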
Ada Programming/Type System
Ada's type system allows the programmer to construct powerful abstractions that represent the real world, and to provide valuable information to the compiler, so that the compiler can find many logic or design errors before they become bugs. It is at the heart of the language, and good Ada programmers learn to use it to great advantage. Four principles govern the type system:
- Strong typing: types are incompatible with one another, so it is not possible to mix apples and oranges. There are, however, ways to convert between types.
- Static typing: types are checked while compiling; this allows type errors to be found earlier.
- Abstraction: types represent the real world or the problem at hand; not how the computer represents the data internally. There are ways to specify exactly how a type must be represented at the bit level, but we will defer that discussion to another chapter.
- Name equivalence, as opposed to structural equivalence used in most other languages. Two types are compatible if and only if they have the same name; not if they just happen to have the same size or bit representation. You can thus declare two integer types with the same ranges that are totally incompatible, or two record types with exactly the same components, but which are incompatible.
Types are incompatible with one another. However, each type can have any number of subtypes, which are compatible with one another, and with their base type.
Predefined types
There are several predefined types, but most programmers prefer to define their own, application-specific types. Nevertheless, these predefined types are very useful as interfaces between libraries developed independently. The predefined library, obviously, uses these types too.
These types are predefined in the Standard package (the address- and storage-related types listed at the end live in System and System.Storage_Elements):

- Integer: This type covers at least the range -2**15+1 .. +2**15-1 (RM 3.5.4 (21) (Annotated)). The Standard also defines the Natural and Positive subtypes of this type.
- Float: There is only a very weak implementation requirement on this type (RM 3.5.7 (14) (Annotated)); most of the time you would define your own floating-point types, and specify your precision and range requirements.
- Duration: A fixed-point type used for timing. It represents a period of time in seconds (RM A.1 (43) (Annotated)).
- Character, Wide_Character, Wide_Wide_Character: A special form of Enumerations. There are three predefined kinds of character types: 8-bit characters (called Character), 16-bit characters (called Wide_Character), and 32-bit characters (Wide_Wide_Character). Character has been present since the first version of the language (Ada 83), Wide_Character was added in Ada 95, while the type Wide_Wide_Character is available with Ada 2005.
- String, Wide_String, Wide_Wide_String: Three indefinite array types, of Character, Wide_Character, and Wide_Wide_Character respectively. The standard library contains packages for handling strings in three variants: fixed length (Ada.Strings.Fixed), with varying length below a certain upper bound (Ada.Strings.Bounded), and unbounded length (Ada.Strings.Unbounded). Each of these packages has a Wide_ and a Wide_Wide_ variant.
- Boolean: A Boolean in Ada is an Enumeration of False and True with special semantics.
- Address: An address in memory.
- Storage_Offset: An offset, which can be added to an address to obtain a new address. You can also subtract one address from another to get the offset between them. Together, Address, Storage_Offset and their associated subprograms provide for address arithmetic.
- Storage_Count: A subtype of Storage_Offset which cannot be negative, and represents the memory size of a data structure (similar to C's size_t).
- Storage_Element: In most computers, this is a byte. Formally, it is the smallest unit of memory that has an address.
- Storage_Array: An array of Storage_Elements without any meaning, useful when doing raw memory access.
The Type Hierarchy
Types are organized hierarchically. A type inherits properties from types above it in the hierarchy. For example, all scalar types (integer, enumeration, modular, fixed-point and floating-point types) have operators "<", ">" and arithmetic operators defined for them, and all discrete types can serve as array indexes.
Here is a broad overview of each category of types; please follow the links for detailed explanations. In parentheses are rough equivalences in C and Pascal for readers familiar with those languages.
- Signed Integers (int, INTEGER)
- Signed Integers are defined via the range of values needed.
- Unsigned Integers (unsigned, CARDINAL)
- Unsigned Integers are called Modular Types. Apart from being unsigned they also have wrap-around functionality.
- Enumerations (enum, char, bool, BOOLEAN)
- Ada Enumeration types are a separate type family.
- Floating point (float, double, REAL)
- Floating point types are defined by the digits needed, the relative error bound.
- Ordinary and Decimal Fixed Point (DECIMAL)
- Fixed point types are defined by their delta, the absolute error bound.
- Arrays ( [ ], ARRAY [ ] OF, STRING )
- Arrays with both compile-time and run-time determined size are supported.
- Record (struct, class, RECORD OF)
- A record is a composite type that groups one or more fields.
- Access (*, ^, POINTER TO)
- Ada's Access types may be more than just a simple memory address.
- Task & Protected (no equivalence in C or Pascal)
- Task and Protected types allow the control of concurrency
- Interfaces (no equivalence in C or Pascal)
- New in Ada 2005, these types are similar to the Java interfaces.
Classification of Types
The types of this hierarchy can be classified as follows.
Specific vs. Class-wide
Operations of specific types are non-dispatching, those on class-wide types are dispatching.
New types can be declared by deriving from specific types; primitive operations are inherited by derivation. You cannot derive from class-wide types.
Constrained vs. Unconstrained
type AU is array (I range <>) of ...          -- unconstrained
type R (X: Discriminant [:= Default]) is ...  -- unconstrained
By giving a constraint to an unconstrained subtype, a subtype or object becomes constrained:
subtype RC is R (Value);  -- constrained subtype of R
OC: R (Value);            -- constrained object of anonymous constrained subtype of R
OU: R;                    -- unconstrained object
Declaring an unconstrained object is only possible if a default value is given in the type declaration above. The language does not specify how such objects are allocated. GNAT allocates the maximum size, so that size changes that might occur with discriminant changes present no problem. Another possibility is implicit dynamic allocation on the heap and deallocation followed by a re-allocation when the size changes.
Definite vs. Indefinite
type T (<>) is ...                    -- indefinite
type AU is array (I range <>) of ...  -- indefinite
type RI (X: Discriminant) is ...      -- indefinite
Definite subtypes allow the declaration of objects without initial value, since objects of definite subtypes have constraints that are known at creation-time. Object declarations of indefinite subtypes need an initial value to supply a constraint; they are then constrained by the constraint delivered by the initial value.
OT: T := Expr;                        -- some initial expression (object, function call, etc.)
OA: AU := (3 => 10, 5 => 2, 4 => 4);  -- index range is now 3 .. 5
ORI: RI := Expr;                      -- again some initial expression as above
Unconstrained vs. Indefinite
Note that unconstrained subtypes are not necessarily indefinite, as can be seen above with R, whose discriminant has a default: it is a definite unconstrained subtype.
Concurrency Types
The Ada language uses types for one more purpose in addition to classifying data and operations. The type system integrates concurrency (threading, parallelism). Programmers use types to express the concurrent threads of control of their programs.
The core pieces of this part of the type system, the task types and the protected types are explained in greater depth in a section on tasking.
Limited Types
Limiting a type means disallowing assignment. The “concurrency types” described above are always limited. Programmers can define their own types to be limited, too, like this:
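A minimal example in this spirit (the type name here is only illustrative):

type Job_Ticket is limited record
   Serial : Natural;
end record;
-- Objects of Job_Ticket cannot be copied with ":="; they must be built in place.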
You can learn more in the limited types chapter.
Defining new types and subtypes
You can define a new type with the following syntax:
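In outline, the declaration is simply (T stands for whatever name you choose):

type T is...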
followed by the description of the type, as explained in detail in each category of type.
Formally, the above declaration creates a type and its first subtype named
T. The type itself, correctly called the "type of T", is anonymous; the RM refers to it as
T (in italics), but often speaks sloppily about the type T. But this is an academic consideration; for most purposes, it is sufficient to think of
T as a type. For scalar types, there is also a base type called
T'Base, which encompasses all values of T.
For signed integer types, the type of T comprises the (complete) set of mathematical integers. The base type is a certain hardware type, symmetric around zero (except for possibly one extra negative value), encompassing all values of T.
As explained above, all types are incompatible; thus:
type Integer_1 is range 1 .. 10; type Integer_2 is range 1 .. 10; A : Integer_1 := 8; B : Integer_2 := A; -- illegal!
is illegal, because Integer_1 and Integer_2 are different and incompatible types. It is this feature which allows the compiler to detect logic errors at compile time, such as adding a file descriptor to a number of bytes, or a length to a weight. The fact that the two types have the same range does not make them compatible: this is name equivalence in action, as opposed to structural equivalence. (Below, we will see how you can convert between incompatible types; there are strict rules for this.)
Creating subtypes
You can also create new subtypes of a given type, which will be compatible with each other, like this:
type Integer_1 is range 1 .. 10;
subtype Integer_2 is Integer_1 range 7 .. 11;       -- bad
subtype Integer_3 is Integer_1'Base range 7 .. 11;  -- OK
A : Integer_1 := 8;
B : Integer_3 := A;  -- OK
The declaration of Integer_2 is bad because the constraint 7 .. 11 is not compatible with Integer_1; it raises Constraint_Error at subtype elaboration time. Integer_1 and Integer_3, on the other hand, are compatible because they are both subtypes of the same type, namely Integer_1'Base.

It is not necessary that the subtype ranges overlap, or be included in one another. The compiler inserts a run-time range check when you assign A to B; if the value of A, at that point, happens to be outside the range of Integer_3, the program raises Constraint_Error.
There are a few predefined subtypes which are very useful:
subtype Natural is Integer range 0 .. Integer'Last; subtype Positive is Integer range 1 .. Integer'Last;
Derived types
A derived type is a new, full-blown type created from an existing one. Like any other type, it is incompatible with its parent; however, it inherits the primitive operations defined for the parent type.
type Integer_1 is range 1 .. 10; type Integer_2 is new Integer_1 range 2 .. 8; A : Integer_1 := 8; B : Integer_2 := A; -- illegal!
Here both types are discrete; it is mandatory that the range of the derived type be included in the range of its parent. Contrast this with subtypes. The reason is that the derived type inherits the primitive operations defined for its parent, and these operations assume the range of the parent type. Here is an illustration of this feature:
procedure Derived_Types is
   package Pak is
      type Integer_1 is range 1 .. 10;
      procedure P (I: in Integer_1);                  -- primitive operation, assumes 1 .. 10
      type Integer_2 is new Integer_1 range 8 .. 10;  -- must not break P's assumption
      -- procedure P (I: in Integer_2);  inherited P implicitly defined here
   end Pak;
   package body Pak is
      -- omitted
   end Pak;
   use Pak;
   A: Integer_1 := 4;
   B: Integer_2 := 9;
begin
   P (B);  -- OK, call the inherited operation
end Derived_Types;
When we call
P (B), the parameter B is converted to
Integer_1; this conversion of course passes since the set of acceptable values for the derived type (here, 8 .. 10) must be included in that of the parent type (1 .. 10). Then P is called with the converted parameter.
Consider however a variant of the example above:
procedure Derived_Types is package Pak is type Integer_1 is range 1 .. 10; procedure P (I: in Integer_1; J: out Integer_1); type Integer_2 is new Integer_1 range 8 .. 10; end Pak; package body Pak is procedure P (I: in Integer_1; J: out Integer_1) is begin J := I - 1; end P; end Pak; use Pak; A: Integer_1 := 4; X: Integer_1; B: Integer_2 := 8; Y: Integer_2; begin P (A, X); P (B, Y); end Derived_Types;
When P (B, Y) is called, both parameters are converted to Integer_1. Thus the range check on J (which receives the value 7) in the body of P will pass. However, on return, parameter Y is converted back to Integer_2, and the range check on Y will of course fail.

With the above in mind, you will see why, in the following program, Constraint_Error will be raised at run time.
procedure Derived_Types is package Pak is type Integer_1 is range 1 .. 10; procedure P (I: in Integer_1; J: out Integer_1); type Integer_2 is new Integer_1'Base range 8 .. 12; end Pak; package body Pak is procedure P (I: in Integer_1; J: out Integer_1) is begin J := I - 1; end P; end Pak; use Pak; B: Integer_2 := 11; Y: Integer_2; begin P (B, Y); end Derived_Types;
Subtype categories
Ada supports various categories of subtypes which have different abilities. Here is an overview in alphabetical order.
Anonymous subtype
A subtype which does not have a name assigned to it. Such a subtype is created with a variable declaration:
X : String (1 .. 10) := (others => ' ');
Here, (1 .. 10) is the constraint. This variable declaration is equivalent to:
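The equivalent two-step declaration would look like this (the subtype name Anonymous_String_Type is purely illustrative; the implicit subtype has no usable name of its own):

subtype Anonymous_String_Type is String (1 .. 10);
X : Anonymous_String_Type := (others => ' ');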
Base type
In Ada, all types are anonymous and only subtypes may be named. For scalar types, there is a special subtype of the anonymous type, called the base type, which is nameable with the 'Base attribute. The base type comprises all values of the first subtype. Some examples:
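For instance, assume declarations like these (the names are chosen for illustration and reconstructed from the discussion below):

type Int is range 0 .. 100;
type Enum is (A, B, C, D);
subtype Short is Enum range A .. C;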
The base type Int'Base is a hardware type selected by the compiler that comprises the values of Int. Thus it may have the range -2**7 .. 2**7-1 or -2**15 .. 2**15-1 or any other such range. Enum'Base is the same as Enum, while Short'Base also holds the literal D, which lies outside the range of Short.
Constrained subtype
A subtype of an indefinite subtype that adds constraints. The following example defines a 10 character string sub-type.
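For example (the same subtype reappears later in this chapter):

subtype String_10 is String (1 .. 10);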
You cannot partially constrain an unconstrained subtype:
type My_Array is array (Integer range <>, Integer range <>) of Some_Type;
-- subtype Constr is My_Array (1 .. 10, Integer range <>);  -- illegal
subtype Constr is My_Array (1 .. 10, -100 .. 200);
Constraints for all indices must be given, the result is necessarily a definite subtype.
Definite subtype
Objects of definite subtypes may be declared without additional constraints.
Indefinite subtype
An indefinite subtype is a subtype whose size is not known at compile-time but is dynamically calculated at run-time. An indefinite subtype does not by itself provide enough information to create an object; an additional constraint or explicit initialization expression is necessary in order to calculate the actual size and therefore create the object.
X : String := "This is a string";
X is an object of the indefinite (sub)type String. Its constraint is derived implicitly from its initial value. X may change its value, but not its bounds.
It should be noted that it is not necessary to initialize the object from a literal. You can also use a function. For example:
X : String := Ada.Command_Line.Argument (1);
This statement reads the first command-line argument and assigns it to X.
Named subtype
A subtype which has a name assigned to it. “First subtypes” are created with the keyword type (remember that types are always anonymous, the name in a type declaration is the name of the first subtype), others with the keyword subtype. For example:
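For example (reconstructed from the discussion below):

type Count_To_Ten is range 1 .. 10;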
Count_to_Ten is the first subtype of a suitable integer base type. However, if you would like to use this as an index constraint on String, the following declaration is illegal:
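That is, a declaration of this form:

subtype Ten_Characters is String (Count_To_Ten);  -- illegal: Count_To_Ten is not a subtype of Integer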
This is because String has Positive as index, which is a subtype of Integer (these declarations are taken from package Standard):
subtype Positive is Integer range 1 .. Integer'Last;
type String is array (Positive range <>) of Character;
So you have to use the following declarations:
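For example:

subtype Count_To_Ten is Integer range 1 .. 10;
subtype Ten_Characters is String (Count_To_Ten);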
Now Ten_Characters is the name of that subtype of String which is constrained to Count_To_Ten. You see that posing constraints on types versus subtypes has very different effects.
Unconstrained subtype
A subtype of an indefinite subtype that does not add a constraint only introduces a new name for the original subtype.
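For example:

subtype My_String is String;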
My_String and String are interchangeable.
Qualified expressions
In most cases, the compiler is able to infer the type of an expression; for example:
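A declaration along the following lines (chosen for illustration) makes the point:

type Enum is (A, B, C);
E : Enum := A;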
Here the compiler knows that
A is a value of the type Enum. But consider:
procedure Bad is
   type Enum_1 is (A, B, C);
   procedure P (E : in Enum_1) is...  -- omitted
   type Enum_2 is (A, X, Y, Z);
   procedure P (E : in Enum_2) is...  -- omitted
begin
   P (A);  -- illegal: ambiguous
end Bad;
The compiler cannot choose between the two versions of P; both would be equally valid. To remove the ambiguity, you use a qualified expression:
P (Enum_1'(A)); -- OK
As seen in the following example, this syntax is often used when creating new objects. If you try to compile the example, it will fail with a compilation error since the compiler will determine that 256 is not in range of Byte.
with Ada.Text_IO; procedure Convert_Evaluate_As is type Byte is mod 2**8; type Byte_Ptr is access Byte; package T_IO renames Ada.Text_IO; package M_IO is new Ada.Text_IO.Modular_IO (Byte); A : constant Byte_Ptr := new Byte'(256); begin T_IO.Put ("A = "); M_IO.Put (Item => A.all, Width => 5, Base => 10); end Convert_Evaluate_As;
Type conversions
Data do not always come in the format you need them. You must, then, face the task of converting them. As a true multi-purpose language with a special emphasis on "mission critical", "system programming" and "safety", Ada has several conversion techniques. The most difficult part is choosing the right one, so the following list is sorted in order of utility. You should try the first one first; the last technique is a last resort, to be used if all others fail. There are also a few related techniques that you might choose instead of actually converting the data.
Since the most important aspect is not the result of a successful conversion, but how the system will react to an invalid conversion, all examples also demonstrate faulty conversions.
Explicit type conversion
An explicit type conversion looks much like a function call; it does not use the tick (apostrophe, ') like the qualified expression does.
The compiler first checks that the conversion is legal, and if it is, it inserts a run-time check at the point of the conversion; hence the name checked conversion. If the conversion fails, the program raises Constraint_Error. Most compilers are very smart and optimise away the constraint checks; so, you need not worry about any performance penalty. Some compilers can also warn that a constraint check will always fail (and optimise the check with an unconditional raise).
Explicit type conversions are legal:
- between any two numeric types
- between any two subtypes of the same type
- between any two types derived from the same type (note special rules for tagged types)
- between array types under certain conditions (see RM 4.6(24.2/2..24.7/2))
- and nowhere else
(The rules become more complex with class-wide and anonymous access types.)
I: Integer := Integer (10);   -- Unnecessary explicit type conversion
J: Integer := 10;             -- Implicit conversion from universal integer
K: Integer := Integer'(10);   -- Use the value 10 of type Integer: qualified expression
                              -- (qualification not necessary here).
This example illustrates explicit type conversions:
with Ada.Text_IO;
procedure Convert_Checked is
   type Short is range -128 .. +127;
   type Byte is mod 256;
   package T_IO renames Ada.Text_IO;
   package I_IO is new Ada.Text_IO.Integer_IO (Short);
   package M_IO is new Ada.Text_IO.Modular_IO (Byte);
   A : Short := -1;
   B : Byte;
begin
   B := Byte (A);  -- range check will lead to Constraint_Error
   T_IO.Put ("A = ");
   I_IO.Put (Item => A, Width => 5, Base => 10);
   T_IO.Put (", B = ");
   M_IO.Put (Item => B, Width => 5, Base => 10);
end Convert_Checked;
Explicit conversions are possible between any two numeric types: integers, fixed-point and floating-point types. If one of the types involved is a fixed-point or floating-point type, the compiler not only checks for the range constraints (thus the code above will raise Constraint_Error), but also performs any loss of precision necessary.
Example 1: the loss of precision causes the procedure to only ever print "0" or "1", since
P / 100 is an integer and is always zero or one.
with Ada.Text_IO; procedure Naive_Explicit_Conversion is type Proportion is digits 4 range 0.0 .. 1.0; type Percentage is range 0 .. 100; function To_Proportion (P : in Percentage) return Proportion is begin return Proportion (P / 100); end To_Proportion; begin Ada.Text_IO.Put_Line (Proportion'Image (To_Proportion (27))); end Naive_Explicit_Conversion;
Example 2: we use an intermediate floating-point type to guarantee the precision.
with Ada.Text_IO; procedure Explicit_Conversion is type Proportion is digits 4 range 0.0 .. 1.0; type Percentage is range 0 .. 100; function To_Proportion (P : in Percentage) return Proportion is type Prop is digits 4 range 0.0 .. 100.0; begin return Proportion (Prop (P) / 100.0); end To_Proportion; begin Ada.Text_IO.Put_Line (Proportion'Image (To_Proportion (27))); end Explicit_Conversion;
You might ask why you should convert between two subtypes of the same type. An example will illustrate this.
subtype String_10 is String (1 .. 10); X: String := "A line long enough to make the example valid"; Slice: constant String := String_10 (X (11 .. 20));
Slice has bounds 1 and 10, whereas
X (11 .. 20) has bounds 11 and 20.
Change of Representation
Type conversions can be used for packing and unpacking of records or arrays.
type Unpacked is record
   -- any components
end record;

type Packed is new Unpacked;
for Packed use record
   -- component clauses for some or for all components
end record;
P: Packed;
U: Unpacked;
P := Packed (U);    -- packs U
U := Unpacked (P);  -- unpacks P
Checked conversion for non-numeric types
The examples above all revolved around conversions between numeric types; it is possible to convert between any two numeric types in this way. But what happens between non-numeric types, e.g. between array types or record types? The answer is two-fold:
- you can convert explicitly between a type and types derived from it, or between types derived from the same type,
- and that's all. No other conversions are possible.
Why would you want to derive a record type from another record type? Because of representation clauses. Here we enter the realm of low-level systems programming, which is not for the faint of heart, nor is it useful for desktop applications. So hold on tight, and let's dive in.
Suppose you have a record type which uses the default, efficient representation. Now you want to write this record to a device, which uses a special record format. This special representation is more compact (uses fewer bits), but is grossly inefficient. You want to have a layered programming interface: the upper layer, intended for applications, uses the efficient representation. The lower layer is a device driver that accesses the hardware directly and uses the inefficient representation.
package Device_Driver is type Size_Type is range 0 .. 64; type Register is record A, B : Boolean; Size : Size_Type; end record; procedure Read (R : out Register); procedure Write (R : in Register); end Device_Driver;
The compiler chooses a default, efficient representation for
Register. For example, on a 32-bit machine, it would probably use three 32-bit words, one for A, one for B and one for Size. This efficient representation is good for applications, but at one point we want to convert the entire record to just 8 bits, because that's what our hardware requires.
package body Device_Driver is
   type Hardware_Register is new Register;  -- Derived type.
   for Hardware_Register use record
      A    at 0 range 0 .. 0;
      B    at 0 range 1 .. 1;
      Size at 0 range 2 .. 7;
   end record;

   function Get return Hardware_Register;     -- Body omitted
   procedure Put (H : in Hardware_Register);  -- Body omitted

   procedure Read (R : out Register) is
      H : Hardware_Register := Get;
   begin
      R := Register (H);  -- Explicit conversion.
   end Read;

   procedure Write (R : in Register) is
   begin
      Put (Hardware_Register (R));  -- Explicit conversion.
   end Write;
end Device_Driver;
In the above example, the package body declares a derived type with the inefficient, but compact representation, and converts to and from it.
This illustrates that type conversions can result in a change of representation.
View conversion, in object-oriented programming
Within object-oriented programming you have to distinguish between specific types and class-wide types.
With specific types, only conversions to ancestors are possible and, of course, are checked. During the conversion, you do not "drop" any components that are present in the derived type and not in the parent type; these components are still present, you just don't see them anymore. This is called a view conversion.
There are no conversions to derived types (where would you get the further components from?); extension aggregates have to be used instead.
type Parent_Type is tagged null record;
type Child_Type is new Parent_Type with null record;
Child_Instance : Child_Type;

-- View conversion from the child type to the parent type:
Parent_View : Parent_Type := Parent_Type (Child_Instance);
Since, in object-oriented programming, an object of child type is an object of the parent type, no run-time check is necessary.
With class-wide types, conversions to ancestor and child types are possible and are checked as well. These conversions are also view conversions, no data is created or lost.
procedure P (Parent_View : Parent_Type'Class) is
   -- View conversion to the child type:
   One : Child_Type := Child_Type (Parent_View);
   -- View conversion to the class-wide child type:
   Two : Child_Type'Class := Child_Type'Class (Parent_View);
This view conversion involves a run-time check to see if Parent_View is indeed a view of an object of type Child_Type. In the second case, the run-time check accepts objects of type Child_Type, but also of any type derived from Child_Type.
View renaming
A renaming declaration does not create any new object and performs no conversion; it only gives a new name to something that already exists. Performance is optimal since the renaming is completely done at compile time. We mention it here because it is a common idiom in object oriented programming to rename the result of a view conversion.
type Parent_Type is tagged record <components>; end record; type Child_Type is new Parent_Type with record <further components>; end record; Child_Instance : Child_Type; Parent_View : Parent_Type'Class renames Parent_Type'Class (Child_Instance);
Parent_View is not a new object, but another name for
Child_Instance viewed as the parent, i.e. only the parent components are visible, the further child components are hidden.
Address conversion
Ada's access type is not just a memory location (a thin pointer). Depending on implementation and the access type used, the access might keep additional information (a fat pointer). For example GNAT keeps two memory addresses for each access to an indefinite object — one for the data and one for the constraint informations (Size, First, Last).
If you want to convert an access to a simple memory location you can use the package System.Address_To_Access_Conversions. Note however that an address and a fat pointer cannot be converted reversibly into one another.
The address of an array object is the address of its first component. Thus the bounds get lost in such a conversion.
type My_Array is array (Positive range <>) of Something;
A: My_Array (50 .. 100);
-- A'Address = A (A'First)'Address
Unchecked conversion
One of the great criticisms of Pascal was "there is no escape". The reason was that sometimes you have to convert the incompatible. For this purpose, Ada has the generic function Unchecked_Conversion:
generic type Source (<>) is limited private; type Target (<>) is limited private; function Ada.Unchecked_Conversion (S : Source) return Target;
Unchecked_Conversion will bit-copy the source data and reinterpret them under the target type without any checks. It is your chore to make sure that the requirements on unchecked conversion as stated in RM 13.9 (Annotated) are fulfilled; if not, the result is implementation dependent and may even lead to abnormal data. Use the 'Valid attribute after the conversion to check the validity of the data in problematic cases.
A function call to (an instance of) Unchecked_Conversion will copy the source to the destination. The compiler may also do a conversion in place (every instance has the convention Intrinsic).
To use Unchecked_Conversion you need to instantiate the generic.
In the example below, you can see how this is done. When run, the example will output "A = -1, B = 255". No error will be reported, but is this the result you expect?
with Ada.Text_IO; with Ada.Unchecked_Conversion; procedure Convert_Unchecked is type Short is range -128 .. +127; type Byte is mod 256; package T_IO renames Ada.Text_IO; package I_IO is new Ada.Text_IO.Integer_IO (Short); package M_IO is new Ada.Text_IO.Modular_IO (Byte); function Convert is new Ada.Unchecked_Conversion (Source => Short, Target => Byte); A : constant Short := -1; B : Byte; begin B := Convert (A); T_IO.Put ("A = "); I_IO.Put (Item => A, Width => 5, Base => 10); T_IO.Put (", B = "); M_IO.Put (Item => B, Width => 5, Base => 10); end Convert_Unchecked;
There is of course a range check in the assignment
B := Convert (A);. Thus if B were defined as
B: Byte range 0 .. 10;, Constraint_Error would be raised.
If the copying of the result of Unchecked_Conversion is too much waste in terms of performance, then you can try overlays, i.e. address mappings. By using overlays, both objects share the same memory location. If you assign a value to one, the other changes as well. The syntax is:
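It is an address attribute definition clause of the form:

for Target'Address use expression;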
where expression defines the address of the source object.
While overlays might look more elegant than Unchecked_Conversion, you should be aware that they are even more dangerous and have even greater potential for doing something very wrong. For example if
Source'Size < Target'Size and you assign a value to Target, you might inadvertently write into memory allocated to a different object.
You have to take care also of implicit initializations of objects of the target type, since they would overwrite the actual value of the source object. The Import pragma with convention Ada can be used to prevent this, since it avoids the implicit initialization, RM B.1 (Annotated).
The example below does the same as the example from "Unchecked Conversion".
with Ada.Text_IO; procedure Convert_Address_Mapping is type Short is range -128 .. +127; type Byte is mod 256; package T_IO renames Ada.Text_IO; package I_IO is new Ada.Text_IO.Integer_IO (Short); package M_IO is new Ada.Text_IO.Modular_IO (Byte); A : aliased Short; B : aliased Byte; for B'Address use A'Address; pragma Import (Ada, B); begin A := -1; T_IO.Put ("A = "); I_IO.Put (Item => A, Width => 5, Base => 10); T_IO.Put (", B = "); M_IO.Put (Item => B, Width => 5, Base => 10); end Convert_Address_Mapping;
Export / Import
Just for the record: There is still another method using the Export and Import pragmas. However, since this method completely undermines Ada's visibility and type concepts even more than overlays, it has no place here in this language introduction and is left to experts.
Elaborated Discussion of Types for Signed Integer Types
As explained before, a type declaration (say, type T is range 1 .. 10;) declares an anonymous type and its first subtype, both named T; the Reference Manual distinguishes the anonymous type by writing it in italics. The anonymous type encompasses the complete set of mathematical integers. Static expressions and named numbers make use of this fact.
All numeric integer literals are of type
Universal_Integer. They are converted to the appropriate specific type where needed.
Universal_Integer itself has no operators.
Some examples with static named numbers:
S1: constant := Integer'Last + Integer'Last;      -- "+" of Integer
S2: constant := Long_Integer'Last + 1;            -- "+" of Long_Integer
S3: constant := S1 + S2;                          -- "+" of root_integer
S4: constant := Integer'Last + Long_Integer'Last; -- illegal
Static expressions are evaluated at compile time on the appropriate types with no overflow checks, i.e. mathematically exactly (limited only by computer storage). The result is then implicitly converted to Universal_Integer, the type of named numbers.

The literal 1 in S2 is of type Universal_Integer and is implicitly converted to Long_Integer. S3 implicitly converts the summands to root_integer, performs the calculation, and converts the result back to Universal_Integer. S4 is illegal because it mixes two different types. You can however write this as
S5: constant := Integer'Pos (Integer'Last) + Long_Integer'Pos (Long_Integer'Last); -- "+" of root_integer
where the Pos attributes convert the values to Universal_Integer, which are then further implicitly converted to root_integer, added, and the result converted back to Universal_Integer.

root_integer is the anonymous greatest integer type representable by the hardware. It has the range System.Min_Integer .. System.Max_Integer. All integer types are rooted at root_integer, i.e. derived from it. Universal_Integer can be viewed as root_integer'Class.
During run-time, computations of course are performed with range checks and overflow checks on the appropriate subtype. Intermediate results may however exceed the range limits. Thus with
I, J, K of the subtype
T above, the following code will return the correct result:
I := 10; J := 8; K := (I + J) - 12; -- I := I + J; -- range check would fail, leading to Constraint_Error
Real literals are of type
Universal_Real, and similar rules as the ones above apply accordingly.
Relations between types
Types can be made from other types. Array types, for example, are made from two types, one for the arrays' index and one for the arrays' components. An array, then, expresses an association, namely that between one value of the index type and a value of the component type.
type Color is (Red, Green, Blue); type Intensity is range 0 .. 255; type Colored_Point is array (Color) of Intensity;
The type Color is the index type and the type Intensity is the component type of the array type Colored_Point. See array.
See also
Ada Reference Manual
- 3.2.1 Type Declarations (Annotated)
- 3.3 Objects and Named Numbers (Annotated)
- 3.7 Discriminants (Annotated)
- 3.10 Access Types (Annotated)
- 4.9 Static Expressions and Static Subtypes (Annotated)
- 13.9 Unchecked Type Conversions (Annotated)
- 13.3 Operational and Representation Attributes (Annotated)
- Annex K (informative) Language-Defined Attributes (Annotated)
II. Introduction to Black Holes
Only in the last few decades, as astronomers started looking at the Universe in radio, infrared, ultraviolet, X-ray, and gamma-ray light, have we learned very much about black holes. However, the concept of a black hole has been around for over 200 years. English clergyman John Michell suggested in 1784 that some stars might be so massive that light could never escape from them. A few years later, French mathematician Pierre Simon de Laplace reached the same conclusion. Michell and Laplace both based their work on the ideas about gravity put forth by Isaac Newton in 1687. Newton had said that objects on Earth fall to the ground as a result of an attraction called gravity; the more massive (heavier) an object is, the greater its pull of gravity. Thus, an apple falls to Earth. His theory of gravity ruled unchallenged until 1915, when Einstein's general theory of relativity appeared. Instead of regarding gravity as a force, Einstein looked at it as a distortion of space itself.
Shortly after the announcement of Einstein's theory, German physicist Karl Schwarzschild discovered that the relativity equations led to the predicted existence of a dense object into which other objects could fall, but out of which no objects could ever come. (Today, thanks to American physicist John Wheeler, we call such an object a "black hole".) Schwarzschild predicted a "magic sphere" around such an object where gravity is so powerful that nothing can move outward. This distance has been named the Schwarzschild radius. It is also often referred to as the event horizon, because no information about events occurring inside this distance can ever reach us. The event horizon can be said to mark the surface of the black hole, although in truth the black hole is the singularity in the center of the event horizon sphere. Unable to withstand the pull of gravity, all material is crushed until it becomes a point of infinite density occupying virtually no space. This point is known as the singularity. Every black hole has a singularity at its center.
Ignoring the differences introduced by rotation, we can say that to be inevitably drawn into a black hole, one has to cross inside the Schwarzschild radius. At this radius, the escape speed is equal to the speed of light; therefore, once light passes through, even it cannot escape. Wonderfully, the Schwarzschild radius can be calculated using the Newtonian equation for escape speed
Vesc = (2GM/R)^(1/2).
For photons, or objects with no mass, we can substitute c (the speed of light) for Vesc and find the Schwarzschild radius, R, to be
R = 2GM/c^2.
This equation implies that any object with mass M could become a black hole if it were compressed to within the radius R!
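As a quick sanity check on the formula, the short Python snippet below (constants rounded; the Sun's mass is taken as about 1.989 × 10^30 kg) computes the Schwarzschild radius of the Sun, which comes out to roughly 3 kilometers:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

R_s = 2 * G * M_sun / c**2   # Schwarzschild radius in meters
print(round(R_s))            # about 2954 m, i.e. roughly 3 km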
Types of Scores in Assessment
There are many ways of reporting test performance, and a variety of scores can be used when interpreting students' test results.
The raw score is the number of items a student answers correctly without adjustment for guessing. For example, if there are 15 problems on an arithmetic test, and a student answers 11 correctly, then the raw score is 11. Raw scores, however, do not provide us with enough information to describe student performance.
A percentage score is the percent of test items answered correctly. These scores can be useful when describing a student's performance on a teacher-made test or on a criterion-referenced test. However, percentage scores have a major disadvantage: We have no way of comparing the percentage correct on one test with the percentage correct on another test. Suppose a child earned a score of 85 percent correct on one test and 55 percent correct on another test. The interpretation of the score is related to the difficulty level of the test items on each test. Because each test has a different or unique level of difficulty, we have no common way to interpret these scores; there is no frame of reference.
To interpret raw scores and percentage-correct scores, it is necessary to change the raw or percentage score to a different type of score in order to make comparisons. Evaluators rarely use raw scores and percentage-correct scores when interpreting performance because it is difficult to compare one student's scores on several tests or the performance of several students on several tests.
Derived scores are a family of scores that allow us to make comparisons between test scores. Raw scores are transformed to derived scores. Developmental scores and scores of relative standing are two types of derived scores. Scores of relative standing include percentiles, standard scores, and stanines.
Sometimes called age and grade equivalents, developmental scores are scores that have been transformed from raw scores and reflect the average performance at age and grade levels. Thus, the student's raw score (number of items correct) is the same as the average raw score for students of a specific age or grade. Age equivalents are written with a hyphen between years and months (e.g., 12–4 means that the age equivalent is 12 years, 4 months old). A decimal point is used between the grade and month in grade equivalents (e.g., 1.2 is the first grade, second month).
Developmental scores can be useful (McLean, Bailey, & Wolery, 1996; Sattler, 2001). Parents and professionals interpret them easily and can place the performance of students within a context. However, because these scores are easily misinterpreted, parents and professionals should approach them with extreme caution. There are a number of reasons for criticizing these scores.
For a student who is 6 years old and in the first grade, grade and age equivalents presume that for each month of first grade an equal amount of learning occurs. But, from our knowledge of child growth and development and theories about learning, we know that neither growth nor learning occurs in equal monthly intervals. Age and grade equivalents do not take into consideration the variation in individual growth and learning.
Teachers should not expect that students will gain a grade equivalent or age equivalent of one year for each year that they are in school. For example, suppose a child earned a grade equivalent of 1.5, first grade, fifth month, at the end of first grade. To assume that at the end of second grade the child should obtain a grade equivalent of 2.5, second grade, fifth month, is not good practice. This assumption is incorrect for two reasons: (1) The grade and age equivalent norms should not be confused with performance standards, and (2) a gain of 1.0 grade equivalent is representative only of students who are in the average range for their grade. Students who are above average will gain more than 1.0 grade equivalent a year, and students who are below average will progress less than 1.0 grade equivalent a year (Gronlund & Linn, 1990).
A second criticism of developmental scores is the underlying idea that because two students obtain the same score on a test they are comparable and will display the same thinking, behavior, and skill patterns. For example, a student who is in second grade earned a grade equivalent score of 4.6 on a test of reading achievement. This does not mean that the second grader understands the reading process as it is taught in the fourth grade. Rather, this student just performed at a superior level for a student who is in second grade. It is incorrect to compare the second grader to a child who is in fourth grade; the comparison should be made to other students who are in second grade (Sattler, 2001).
A third criticism of developmental scores is that age and grade equivalents encourage the use of false standards. A second-grade teacher should not expect all students in the class to perform at the second-grade level on a reading test. Differences between students within a grade mean that the range of achievement actually spans several grades. In addition, developmental scores are calculated so that half of the scores fall below the median and half fall above the median. Age and grade equivalents are not standards of performance.
A fourth criticism of age and grade equivalents is that they promote typological thinking. The use of age and grade equivalents causes us to think in terms of a typical kindergartener or a typical 10-year-old. In reality, students vary in their abilities and levels of performance. Developmental scores do not take these variations into account.
A fifth criticism is that most developmental scores are interpolated and extrapolated. A normed test includes students of specific ages and grades—not all ages and grades—in the norming sample. Interpolation is the process of estimating the scores of students within the ages and grades of the norming sample. Extrapolation is the process of estimating the performance of students outside the ages and grades of the normative sample.
A developmental quotient is an estimate of the rate of development. If we know a student's developmental age and chronological age, it is possible to calculate a developmental quotient. For example, suppose a student's developmental age is 12 years (144 months) and the chronological age is also 144 months. Using the formula, this student would have a developmental quotient of 100.
Developmental quotient = (Developmental age / Chronological age) × 100
= (144 months / 144 months) × 100
= 100
But, suppose another student's chronological age is also 144 months and that the developmental age is 108 months. Using the formula, this student would have a developmental quotient of 75.
Developmental quotient = (108 months / 144 months) × 100 = 75
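As a quick illustration of the arithmetic above, here is a minimal Python sketch (ours, not part of the original text) that computes a developmental quotient from ages given in months; the function name is our own.

```python
def developmental_quotient(developmental_age_months, chronological_age_months):
    """Developmental quotient = (developmental age / chronological age) x 100."""
    return developmental_age_months / chronological_age_months * 100

# The two examples from the text:
print(developmental_quotient(144, 144))  # 100.0 -> development keeping pace with age
print(developmental_quotient(108, 144))  # 75.0  -> development lagging behind age
```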
Developmental quotients have all of the drawbacks associated with age and grade equivalents. In addition, they may be misleading because developmental age may not keep pace with chronological age as the individual gets older. Consequently, the gap between developmental age and chronological age becomes larger as the student gets older.
Scores of Relative Standing
Percentile Ranks
A percentile rank is the point in a distribution at or below which the scores of a given percentage of students fall. Percentiles provide information about the relative standing of students when compared with the standardization sample. Consider, for example, a test score and its corresponding percentile rank.
Jana's score of 93 has a percentile rank of 81. This means that 81 percent of the students who took the test scored 93 or lower. Said another way, Jana scored as well as or better than 81 percent of the students who took the test.
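One way to see what a percentile rank means computationally is the short sketch below (our own illustration, with a made-up distribution of scores): it reports the percentage of scores in a distribution that fall at or below a given score.

```python
def percentile_rank(score, all_scores):
    """Percentage of scores in the distribution that fall at or below `score`."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100 * at_or_below / len(all_scores)

# Hypothetical distribution in which a score of 93 lands at about the 81st percentile.
scores = [70, 72, 75, 78, 80, 82, 85, 87, 88, 90, 91, 92, 93, 95, 97, 99]
print(round(percentile_rank(93, scores)))  # 81
```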
A percentile rank of 50 represents average performance. In a normal distribution, both the mean and the median fall at the 50th percentile. Half the students fall above the 50th percentile and half fall below. Percentiles can be divided into quartiles. A quartile contains 25 percentiles or 25 percent of the scores in a distribution. The 25th and the 75th percentiles are the first and the third quartiles. In addition, percentiles can be divided into groups of 10 known as deciles. A decile contains 10 percentiles. Beginning at the bottom of a group of students, the first 10 percent are known as the first decile, the second 10 percent are known as the second decile, and so on.
The position of percentiles in a normal curve is shown in Figure 4.5. Despite their ease of interpretation, percentiles have several problems. First, the intervals they represent are unequal, especially at the lower and upper ends of the distribution. A difference of a few percentile points at the extreme ends of the distribution is more serious than a difference of a few points in the middle of the distribution. Second, percentiles cannot be used in arithmetic operations such as averaging (Gronlund & Linn, 1990). Last, percentile scores are reported in one-hundredths, but because of errors associated with measurement, they are only accurate to the nearest 0.06 (six one-hundredths) (Rudner, Conoley, & Plake, 1989). These limitations require caution when interpreting percentile ranks. Confidence intervals, which are discussed later in this chapter, are useful when interpreting percentile scores.
Standard Scores
Another type of derived score is a standard score. Standard score is the name given to a group or category of scores, and each specific type of standard score within this group has a fixed mean and a fixed standard deviation. Because of this, standard scores are an excellent way of representing a child's performance: they allow us to compare a child's performance on several tests and to compare one child's performance to the performance of other students. Unlike percentile scores, standard scores can be used in mathematical operations; for instance, standard scores can be averaged. In the Snapshot, teachers Lincoln Bates and Sari Andrews discuss test scores. As is apparent, standard scores are equal-interval scores. The different types of standard scores, some of which we discuss in the following subsections, are:
- z-scores: have a mean of 0 and a standard deviation of 1.
- T-scores: have a mean of 50 and a standard deviation of 10.
- Deviation IQ scores: have a mean of 100 and a standard deviation of 15 or 16.
- Normal curve equivalents: have a mean of 50 and a standard deviation of 21.06.
- Stanines: standard score bands that divide a distribution of scores into nine parts.
- Percentile ranks: point in a distribution at or below which the scores of a given percentage of students fall.
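To make the relationships among these scales concrete, here is a small sketch (ours; the raw-score mean of 80 and standard deviation of 10 are assumed values for a hypothetical test) that converts a raw score to a z-score and then to the other scales listed above.

```python
def z_score(raw, mean, sd):
    return (raw - mean) / sd

def t_score(z):
    return 50 + 10 * z            # mean 50, SD 10

def deviation_iq(z, sd=15):
    return 100 + sd * z           # mean 100, SD 15 (or 16 for some tests)

def nce(z):
    return 50 + 21.06 * z         # mean 50, SD 21.06

# Assume a test with a raw-score mean of 80 and a standard deviation of 10.
z = z_score(93, 80, 10)
print(round(z, 2), round(t_score(z)), round(deviation_iq(z)), round(nce(z), 1))
```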
Deviation IQ Scores
Deviation IQ scores are frequently used to report the performance of students on norm-referenced standardized tests. The deviation IQ scores of the Wechsler Intelligence Scale for Children–III and the Wechsler Individual Achievement Test–II have a mean of 100 and a standard deviation of 15, while the Stanford-Binet Intelligence Scale–IV has a mean of 100 and a standard deviation of 16. Many test manuals provide tables that allow conversion of raw scores to deviation IQ scores.
Normal Curve Equivalents
Normal curve equivalents (NCEs) are a type of standard score with a mean of 50 and a standard deviation of 21.06. When the baseline of the normal curve is divided into 99 equal units, the percentile ranks of 1, 50, and 99 are the same as the NCE units (Lyman, 1986). One test that does report NCEs is the Developmental Inventory-2; however, NCEs are not reported for some tests.
Stanines
Stanines are bands of standard scores that have a mean of 5 and a standard deviation of 2. Stanines range from 1 to 9. Despite their relative ease of interpretation, stanines have several disadvantages. A change in just a few raw score points can move a student from one stanine to another. Also, because stanines are a general way of interpreting test performance, caution is necessary when making classification and placement decisions. As an aid in interpreting stanines, evaluators can assign descriptors to each of the 9 values:
3—considerably below average
Basal and Ceiling Levels
Many tests, because test authors construct them for students of differing abilities, contain more items than are necessary. To determine the starting and stopping points for administering a test, test authors designate basal and ceiling levels. (Although these are really not types of scores, basal and ceiling levels are sometimes called rules or scores.) The basal level is the point below which the examiner assumes that the student could obtain all correct responses and, therefore, it is the point at which the examiner begins testing.
The test manual will designate the point at which testing should begin. For example, a test manual states, "Students who are 13 years old should begin with item 12. Continue testing when three items in a row have been answered correctly. If three items in a row are not answered correctly, the examiner should drop back a level." This is the basal level.
Let's look at the example of the student who is 9 years old. Although the examiner begins testing at the 9-year-old level, the student fails to answer correctly three in a row. Thus, the examiner is unable to establish a basal level at the suggested beginning point. Many manuals instruct the examiner to continue testing backward, dropping back one item at a time, until the student correctly answers three items. Some test manuals instruct examiners to drop back an entire level, for instance, to age 8, and begin testing. When computing the student's raw score, the examiner includes items below the basal point as items answered correctly. Thus, the raw score includes all the items the student answered correctly plus the test items below the basal point. The ceiling level is the point above which the examiner assumes that the student would obtain all incorrect responses if the testing were to continue; it is, therefore, the point at which the examiner stops testing. "To determine a ceiling," a manual may read, "discontinue testing when three items in a row have been missed."
A false ceiling can be reached if the examiner does not carefully follow directions for determining the ceiling level. Some tests require students to complete a page of test items to establish the ceiling level.
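Because the basal and ceiling rules vary from test to test, the following is only a sketch (ours) of the common "three in a row" rules described above; the function name and the example responses are hypothetical.

```python
def raw_score(responses, basal_index):
    """Raw score under common basal/ceiling rules.

    `responses` is a list of 1 (correct) / 0 (incorrect) item results in the
    order administered, starting at the established basal item.  Items below
    the basal point are credited as correct.  Testing stops (the ceiling) once
    three items in a row are missed; items beyond the ceiling earn no credit.
    """
    score = basal_index            # credit for all items below the basal point
    misses_in_a_row = 0
    for r in responses:
        if misses_in_a_row >= 3:   # ceiling reached; stop counting
            break
        score += r
        misses_in_a_row = misses_in_a_row + 1 if r == 0 else 0
    return score

print(raw_score([1, 1, 1, 0, 1, 0, 0, 0, 1], basal_index=11))  # 15
```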
© ______ 2007, Merrill, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means including but not limited to email and blogs is strictly prohibited without the explicit permission of the publisher.
| http://www.education.com/reference/article/types-scores-assessment/?page=3 | 13
33 | Watching the Summer Night Sky, Grades 6-8
Science Lessons, Activities & Curriculum Resources
Now that summer is approaching, brush up your stargazing skills and get ready to enjoy humankind's first TV, the night sky. These lessons, activities, and resources will help you locate and identify celestial objects with equipment as basic as your own eyes.
Lesson Plans & Activities
Seeing the Invisible: Dust in the Universe
Students in grades K-12 explore the properties of dust and the astronomical research of dust in space with three grade-appropriate inquiry based activities.
Classroom Activity: Group the Galaxies
Students in grades 5-7 are introduced to information on Galaxy Trading Cards by creating categories based on recognized patterns.
Classroom Activity: Find the Right Circle
Students in grades 6-8 use Venn diagrams to compare galaxy properties.
Reflective Solar Cooker
Students in grades 6-8 build a reflective solar cooker that uses the Sun's energy to cook marshmallows. This activity requires adult supervision.
Additional Lesson Plans
- StarDate Online: Lessons Plans (K-12)
- McDonald Observatory: Classroom Activities & Resources (K-12)
- Amazing Space: Teaching Tools (K-12)
Live from McDonald Observatory - Videoconference Program
Videoconferences are 50-minute, fully interactive connections between one class (or student club) of up to 35 students and a McDonald Observatory facilitator. Versions for grades 3-5, 6-8 and 9-12 are available. Rates are $100 per interactive 50-minute videoconference, or $60 for "view-only" classroom.
http://setiathome.berkeley.edu/ (grades 6-12)
This scientific experiment uses Internet-connected computers in the Search for Extraterrestrial Intelligence (SETI). You can participate by running a free program that downloads and analyzes radio telescope data. SETI@Home is a U.C. Berkeley program funded with grants from the National Science Foundation and NASA.
Galaxy Zoo: Hubble (grades 6-12)
Help scientists classify galaxies according to their shapes.
- Classroom Activities: StarDate Online (K-12)
- Astronomy Crossword Puzzle (grades 3-6)
- Astronomy Hangman (grades 6-8)
- Black Holes – Interactive (grades 6-12)
Human Space Flight (HSF) - Realtime Data
See the ISS in your own backyard. This "applet" uses up-to-the-minute data from Mission Control to project the path the spacecraft (ISS) will make across the sky.
gSky Browser: Explore Astronomy, HubbleSite
Search the night sky using Google Earth, zooming in on astronomical objects at their precise location in the heavens. Requires installation of the latest version of Google Earth.
Finding Hubble in the WorldWide Telescope: Explore Astronomy, HubbleSite
WorldWide Telescope provides guided tours of celestial objects. Listen to Hubble astronomers explain the scientific stories behind the images. Requires installation of Microsoft Research's free WorldWide Telescope (WWT) software.
HubbleSite Reference Desk
Get the facts: answers to basic questions about astronomy and Hubble, figures and charts, and dictionary definitions for astronomical terms.
- Interactive Sky Chart
Interactive sky chart and photo gallery. Requires free registration.
- This Week's Sky at a Glance
Daily celestial events. Requires free registration.
SKYCAL: Sky Events Calendar, NASA (K-12)
SKYCAL (Sky Events Calendar) will help you keep track of the sky by calculating the local date and time of all these celestial happenings. It displays them on a convenient calendar that you can print and hang on the wall. You can generate a calendar for a single month or for an entire year. Just choose your Time Zone.
- NASA Online (K-12)
NASA Learning Modules, Simulations, and Activities
- Space Science Education Resource Directory, NASA (K-12)
Download activities and resources.
- Education Materials, NASA (K-12)
Search by grade, medium, or topic.
- International Space Station, NASA (K-12)
- Out of the Ordinary...Out of This World, HubbleSite (K-12)
- Stargazing Basics (grades 3-12)
Quizzes & Printables
Way Out!, HubbleSite (grades 3-12)
Three levels of a cosmic trivia quiz game starring a cow that jumps over the moon and beyond the Milky Way.
Astronomy Quiz For Kids (grades 6-8)
10 questions, some are difficult.
Astronomy Review Games (grades 6-8)
Choose soccer, basketball, or Deal/No Deal and one of three difficulty levels.
Make a Star Wheel!
Make and use this simple Planisphere or Star Wheel to navigate the night sky.
Create Your Own Planisphere
Two wheels: one with the basic constellations, another with coordinate grid.
Star Finder – Star Chart
Wheel and directions for making a light-shielding viewer from cereal box.
Audio & Video
Radio’s Guide to the Universe, StarDate Online
StarDate tells listeners what to look for in the night sky, and explains the science, history, and skylore behind these objects.
SkyWatch: It's a Big Sky, HubbleSite
Highlights news from the world of astronomy. Listen via computer or MP3 player to the latest discoveries. SkyWatch also includes HubbleWatch, a round-up of news from NASA's Hubble Space Telescope.
Hunting the Edge of Space, PBS NOVA
NOVA examines how a simple instrument, the telescope, has fundamentally changed our understanding of our place in the universe. Watch online in two parts (52:55 each).
Piercing the Sky
History of modern telescopes and astronomy.
Videos – Astronomy Magazine
The editors of Astronomy magazine bring you the latest science, new products, hobby tips, and more. | http://www.nea.org/tools/lessons/summer-night-sky-6-8.html | 13 |
15 | Mean and Variance of a Dataset
When we make a measurement we are trying to determine the true value of a particular characteristic of some material or process. However, the measurements that we make involve uncertainty (i.e., no measurement is perfect). Therefore, the value we acquire is not the true value but some estimate of the true value. If we measure the same sample 20 times, we typically do not expect to acquire the same measured value every time. We will get a distribution of measured values. For example, Figure 1 shows a plot of the results from 20 measurements of the same sample. We need to be able to characterize the true value of the sample (i.e., determine our best estimate of the true value) based on this distribution.
If we plot the frequency of each particular measured value, then we acquire the plot shown in Figure 2. This distribution suggests that the next measurement that we make is likely to be close to 8 or 9, but there is some chance that it will be higher or lower than that. However, it is very unlikely that it will be lower than 3 or higher than 16.
We characterize this dataset by calculating the experimental mean (x̄), given by

x̄ = (x1 + x2 + ... + xN) / N
where xn are the measured values for the n = 1, 2, ..., N data points. In this example N was equal to 20. This experimental mean tells us that "on average" we expect the true value for x to be close to x̄. But how close would we expect the true value to be to x̄? We can begin to estimate that by looking at the variance in the dataset.
We characterize the spread in this dataset by calculating the experimental variance (s²) of the dataset, given by

s² = [(x1 − x̄)² + (x2 − x̄)² + ... + (xN − x̄)²] / (N − 1)
The standard deviation (s) is the square root of the variance. A large standard deviation implies that if we make another measurement, then we will have a low confidence that it will be close to the mean. A small standard deviation means that if we make another measurement, then we have a high confidence that it will be close to the mean. One measure of the size of the standard deviation is given by the relative standard deviation (S), which is the ratio of the standard deviation to the mean, or S = s / x̄.
For the dataset shown in Figure 2, the experimental mean is 8.80 and the standard deviation is 2.78. The relative standard deviation is 0.317 or 31.7%. This suggests that there is a large variation in the measured data points. If we make one more measurement, we would have a low confidence that it would be close to the mean.
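Here is a small sketch (ours) of the calculations described above. The 20 measurements are made up for illustration, so while the mean happens to come out near 8.8, the standard deviation and relative standard deviation will not exactly match the values quoted for Figure 2; the variance uses the same N − 1 form given above.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# 20 hypothetical repeated measurements of the same sample
data = [8, 9, 7, 11, 8, 10, 6, 9, 12, 8, 5, 9, 10, 7, 13, 8, 9, 11, 6, 10]
m = mean(data)
s = math.sqrt(sample_variance(data))
print(round(m, 2), round(s, 2), round(s / m, 3))  # mean, standard deviation, relative standard deviation
```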
How spread out this distribution is will depend on the uncertainty in the measured values and specifically uncertainties in the measurement instrument used. Assume we have a sample whose true value for the mass of the sample is 20.00 g. We make 155 measurements of this sample using an instrument that has low uncertainties. The value for each of the 155 measurements is shown in Figure 3. A frequency plot showing the distribution of these measurement values is shown in Figure 4. The experimental mean of these values is 19.986 g and the standard deviation is 1.034 g. The relative standard deviation (S) in this dataset is 5.2%. Thus, if we make another measurement of this sample, we would have a high confidence that it would be close to the mean. Note that since we know the true value of the characteristics (20.00 g), we can confirm that the experimental mean differs from the true value by only a small amount (0.014 g).
Now assume that we make another 155 measurements of the same 20.00 g sample, but this time we use an instrument that has a higher uncertainty. The value for each of the 155 measurements is shown in Figure 5, and a frequency plot showing the distribution of these measured values is shown in Figure 6. The experimental mean of these values is 19.956 g and the standard deviation is 3.102 g. The relative standard deviation is 15.5%. Thus, if we perform an additional measurement, we have lower confidence than in the previous example that it would be close to the experimental mean.
The examples above show us how we could use repetitive measurements of the same sample using the same instrument to determine characteristics of the measurement system. By measuring the same sample over and over again, we can determine the expected uncertainties for the measurement instrument and the shape of the distribution of measured values.
| http://nsspi.tamu.edu/nsep/courses/introduction-to-statistics/mean-and-variance-of-a-dataset/mean-and-variance-of-a-dataset | 13
11 | Compute the value of a given polynomial A.N*X**N+...+A.1*X+A.0 of degree N at a given point V.
A simple algorithm which avoids recomputation of the powers of X is known as Horner's rule. A polynomial of degree N can be evaluated using only N multiplications and N additions. This program assumes the array representation for polynomials.
Unit: internal function
Global variables: vector A.0,A.1,...,A.N of coefficients from the set of real numbers
Parameters: a positive integer N - the degree of the polynomial, real number V - a given point
Returns: the value of the polynomial A. at the given point V
The value of a given polynomial 5*X**3+4*X-10 of degree N=3 at a given point V=8 is 2582. See the following fragment of program:
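The program fragment referred to above did not survive extraction; in its place, here is a minimal sketch of Horner's rule in Python (ours, not the original fragment or language), checked against the worked example.

```python
def horner(coeffs, v):
    """Evaluate A[N]*V**N + ... + A[1]*V + A[0].

    `coeffs` holds A[0], A[1], ..., A[N] (lowest power first), mirroring the
    array representation A.0, A.1, ..., A.N used above.
    """
    result = 0
    for a in reversed(coeffs):   # start from A[N] and work down
        result = result * v + a
    return result

# 5*X**3 + 4*X - 10 at V = 8  ->  2582
print(horner([-10, 4, 0, 5], 8))
```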
Sparse polynomial addition
Evaluation of polynomial and its derivatives
Division of polynomials
Sedgewick R. Algorithms
Addison-Wesley, Reading, Massachusetts, 1984 | http://dhost.info/zabrodskyvlada/aat/a_horn.html | 13 |
29 | Comparing Fractions Worksheets
Comparing fraction worksheets include the following:
Visual Fractions : Pie model, fraction strips and other picture worksheets to help in comparing fractions.
Largest Fraction : Pick out the largest like and unlike fraction from the list.
Smallest Fraction : Identify which fraction is smallest.
More Comparison : Compare both positive and negative fractions. Few more practice sheets on like and unlike fractions included.
Solve and Compare : Add, subtract, multiply or divide the fractions and compare which is greater.
Visual Fractions - Comparison
Comparing Picture Fractions
Two graphics are divided into parts and partially shaded. Identify the fraction representation of the shaded parts and fill in the box with the 'greater than', 'less than' or 'equal to' symbol.
Color the Pie and Compare
Two pies and the fractions are given. Shade the pie according to the fraction. Compare the shaded portion to identify which fraction is greater or lesser or equal.
Two fraction strips are partially shaded, and the fraction of the shaded part is given. Compare the shaded parts to identify which fraction is greater, less, or equal.
Color and Compare the Fraction Strip
Comparing worksheets include the fraction strips that are not shaded. According to the fraction provided for each strip, shade the fraction strip and compare them.
Identify the Largest Fraction
Comparing fraction worksheet has a pair of fractions or set of fractions. Circle the fraction that is greatest in value.
You should pick out all the fractions that are larger than the given fraction.
Identify the Smallest Fraction
Compare the fractions and pick out the smallest one.
Compare the fractions in the group and pick out all of the fractions that are smaller than the given one.
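For teachers or parents who want to check worksheet answers quickly, here is a small sketch (ours) using Python's built-in fractions module; comparing unlike fractions this way is equivalent to rewriting them with a common denominator. The particular fractions shown are examples, not taken from any specific worksheet.

```python
from fractions import Fraction

a, b = Fraction(3, 4), Fraction(5, 8)
print(a > b)                     # True: 3/4 = 6/8 is greater than 5/8

candidates = [Fraction(1, 2), Fraction(2, 3), Fraction(5, 6), Fraction(3, 8)]
given = Fraction(7, 12)
print([str(f) for f in candidates if f < given])   # fractions smaller than 7/12
```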
Comparing Fractions - More Worksheets
More simple worksheets added for extra practice.
Compare the negative fractions to identify which is largest or smallest.
Positive - Negative Fractions
This section contains worksheets that include both positive and negative fractions.
Comparing Like Fraction Worksheets
Comparing Unlike Fraction Worksheets
Solve and Compare the Fractions
Solve Addition and Compare
Separate worksheets provided for comparing like and unlike fractions.
Solve Subtraction and Compare
Solve the fractions on both sides and then fill in the box with >, < or = symbol.
Solve Multiplication and Compare
We have two sides, right and left. Both sides may have a fraction in simplest form or fractions to solve. Do the necessary work and compare the fractions.
Solve Division and Compare
Divide or simplify the fractions on both sides and then compare. | http://www.mathworksheets4kids.com/comparing-fractions.html | 13 |
21 | Mathematics — Grade 4
All questions and comments about this curriculum should be directed to the North Carolina Department of Public Instruction.
Number and Operations - The learner will read, write, model, and compute with non-negative rational numbers.
Develop number sense for rational numbers 0.01 through 99,999.
- Connect model, number word, and number using a variety of representations.
- Build understanding of place value (hundredths through ten thousands).
- Compare and order rational numbers.
- Make estimates of rational numbers in appropriate situations.
Develop fluency with multiplication and division:
- Two-digit by two-digit multiplication (larger numbers with calculator).
- Up to three-digit by two-digit division (larger numbers with calculator).
- Strategies for multiplying and dividing numbers.
- Estimation of products and quotients in appropriate situations.
- Relationships between operations.
Solve problems using models, diagrams, and reasoning about fractions and relationships among fractions involving halves, fourths, eighths, thirds, sixths, twelfths, fifths, tenths, hundredths, and mixed numbers.
Develop fluency with addition and subtraction of non-negative rational numbers with like denominators, including decimal fractions through hundredths.
- Develop and analyze strategies for adding and subtracting numbers.
- Estimate sums and differences.
- Judge the reasonableness of solutions.
Develop flexibility in solving problems by selecting strategies and using mental computation, estimation, calculators or computers, and paper and pencil.
Measurement - The learner will understand and use perimeter and area.
Develop strategies to determine the area of rectangles and the perimeter of plane figures.
Solve problems involving perimeter of plane figures and areas of rectangles.
Geometry - The learner will recognize and use geometric properties and relationships.
Use the coordinate system to describe the location and relative position of points and draw figures in the first quadrant.
Describe the relative position of lines using concepts of parallelism and perpendicularity.
Identify, predict, and describe the results of transformations of plane figures.
Data Analysis and Probability - The learner will understand and use graphs, probability, and data analysis.
Collect, organize, analyze, and display data (including line graphs and bar graphs) to solve problems.
Describe the distribution of data using median, range and mode.
Solve problems by comparing two sets of related data.
Design experiments and list all possible outcomes and probabilities for an event.
Algebra - The learner will demonstrate an understanding of mathematical relationships.
Identify, describe, and generalize relationships in which:
- Quantities change proportionally.
- Change in one quantity relates to change in a second quantity.
Translate among symbolic, numeric, verbal, and pictorial representations of number relationships.
Verify mathematical relationships using:
- Models, words, and numbers.
- Order of operations and the identity, commutative, associative, and distributive properties. | http://www.learnnc.org/scos/print.php?scos=2004-MAT-0004 | 13 |
40 | In the next few articles, I'd like to concentrate on securing data as it travels over a network. If you remember the IP packets series (see Capturing TCP Packets), most network traffic is transmitted in clear text and can be decoded by a packet sniffing utility. This can be bad for transmissions containing usernames, passwords, or other sensitive data. Fortunately, other utilities known as cryptosystems can protect your network traffic from prying eyes.
To configure a cryptosystem properly, you need a good understanding of the various terms and algorithms it uses. This article is a crash course in Cryptographic Terminology 101. Following articles will demonstrate configuring some of the cryptosytems that are available to FreeBSD.
What is a cryptosystem and why would you want to use one? A cryptosystem is a utility that uses a combination of algorithms to provide the following three components: privacy, integrity, and authenticity. Different cryptosystems use different algorithms, but all cryptosystems provide those three components. Each is important, so let's take a look at each individually.
Privacy ensures that only the intended recipient understands the network transmission. Even if a packet sniffer captures the data, it won't be able to decode the contents of the message. The cryptosystem uses an encryption algorithm, or cipher, to encrypt the original clear text into cipher text before it is transmitted. The intended recipient uses a key to decrypt the cipher text back into the original clear text. This key is shared between the sender and the recipient, and it is used to both encrypt and decrypt the data. Obviously, to ensure the privacy of the data, it is crucial that only the intended recipient has the key, for anyone with the key can decrypt the data.
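As a concrete, if simplified, illustration of a shared key encrypting and decrypting data, here is a sketch that assumes the third-party Python `cryptography` package is installed; Fernet is one ready-made symmetric recipe and is not the same as the specific ciphers discussed below.

```python
# pip install cryptography  (assumed available)
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared, symmetric session key
f = Fernet(key)

token = f.encrypt(b"username=alice password=secret")   # cipher text on the wire
print(token)                         # unreadable to a packet sniffer
print(f.decrypt(token))              # only a holder of `key` can recover the clear text
```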
It is possible for someone without the key to decrypt the data by cracking or guessing the key that was used to encrypt the data. The strength of the encryption algorithm gives an indication of how difficult it is to crack the key. Normally, strengths are expressed in terms of bitsize. For example, it would take less time to crack a key created by an algorithm with a 56-bit size than it would for a key created by an algorithm with a 256-bit size.
Does this mean you should always choose the algorithm with the largest bit size? Not necessarily. Typically, as bit size increases, so does the time it takes to encrypt and decrypt the data. In practical terms, this translates into more work for the CPU and slower network transmissions. Choose a bit size that is suited to the sensitivity of the data you are transmitting and the hardware you have. The increase in CPU power over the years has been a double-edged sword: it has allowed the use of stronger encryption algorithms, but it has also reduced the time it takes to crack the keys those algorithms create. Because of this, you should change the key periodically, before it is cracked. Many cryptosystems automate this process for you.
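To get a rough feel for why bit size matters, remember that the number of possible keys doubles with every added bit. The sketch below is ours, and the attack rate of a trillion guesses per second is an arbitrary assumption chosen only to make the comparison concrete.

```python
GUESSES_PER_SECOND = 1e12            # assumed attacker speed; purely illustrative
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 56, 128, 256):
    keys = 2 ** bits
    years = keys / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: {keys:.3e} keys, ~{years:.3e} years to try them all")
```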
There are some other considerations when choosing an encryption algorithm. Some encryption algorithms are patented and require licenses or restrict their usage. Some encryption algorithms have been successfully exploited or are easily cracked. Some algorithms are faster or slower than their bit size would indicate. For example, DES and 3DES are considered to be slow; Blowfish is considered to be very fast, despite its large bit size.
Legal considerations also vary from country to country. Some countries impose export restrictions. This means that it is okay to use the full strength of an encryption algorithm within the borders of the country, but there are restrictions for encrypting data that has a recipient outside of the country. The United States used to restrict the strength of any algorithm leaving the U.S. border to 40 bits, which is why some algorithms support the very short bit size of 40 bits.
There are still countries where it is illegal to even use encryption. If you are unsure if your particular country has any legal or export restrictions, do a bit of research before you configure your FreeBSD system to use encryption.
The following table compares the encryption algorithms you are most likely to come across.
| Algorithm | Bit size | Patented? | Speed |
|-----------|----------|-----------|-------|
| DES | 56 | | slow, easily cracked |
| Blowfish | 32 - 448 | no | extremely fast |
| CAST | 40 - 128 | yes | |
| AES (Rijndael) | 128, 192, 256 | no | fast |
How much of the original packet is encrypted depends upon the encryption mode. If a cryptosystem uses transport mode, only the data portion of the packet is encrypted, leaving the original headers in clear text. This means that a packet sniffer won't be able to read the actual data but will be able to determine the IP addresses of the sender and recipient and which port number (or application) sent the data.
If a cryptosystem uses tunnel mode, the entire packet, data and headers, is encrypted. Since the packet still needs to be routed to its final destination, a new Layer 3 header is created. This is known as encapsulation, and it is quite possible that the new header contains totally different IP addresses than the original IP header. We will see why in a later article when we configure your FreeBSD system for IPSEC.
Integrity is the second component found in cryptosystems. This component ensures that the data received is indeed the data that was sent and that the data wasn't tampered with during transit. It requires a different class of algorithms, known as cryptographic checksums or cryptographic hashes. You may already be familiar with checksums as they are used to ensure that all of the bits in a frame or a header arrived in the order they were sent. However, frame and header checksums use a very simple algorithm, meaning that it is mathematically possible to change the bits and still use the same checksum. Cryptographic checksums need to be more tamper-resistant.
Like encryption algorithms, cryptographic checksums vary in their effectiveness. The longer the checksum, the harder it is to change the data and recreate the same checksum. Also, some checksums have known flaws. The following table summarizes the cryptographic checksums:
| Checksum | Checksum length | Known flaws |
|----------|-----------------|-------------|
| MD4 | 128 bits | yes |
| MD5 | 128 bits | yes |
| SHA-1 | 160 bits | no |
The order in the above chart is intentional. When it comes to cryptographic checksums, MD4 is the least secure, and SHA-1 is the most secure. Always choose the most secure checksum available in your cryptosystem.
Another term to look for in a cryptographic checksum is HMAC, or Hash-based Message Authentication Code. This indicates that the checksum algorithm uses a key as part of the checksum. This is good, as it's impossible to alter the checksum without access to the key. If a cryptographic checksum uses HMAC, you'll see that term before the name of the checksum. For example, HMAC-MD4 is more secure than MD4, and HMAC-SHA is more secure than SHA. If we were to order the checksum algorithms from least secure to most secure, the plain checksums would run MD4, then MD5, then SHA-1, with each HMAC variant ranking above its plain counterpart.
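Here is a sketch (ours) using Python's standard library to contrast a plain cryptographic checksum with a keyed (HMAC) one; SHA-1 is used only because it is the checksum discussed above. Without the key, the HMAC digest cannot be recomputed by someone tampering with the message.

```python
import hashlib, hmac

message = b"transfer $100 to account 42"
print(hashlib.sha1(message).hexdigest())      # plain checksum: anyone can recompute it

key = b"shared-secret-key"
print(hmac.new(key, message, hashlib.sha1).hexdigest())   # keyed checksum (HMAC-SHA-1)
```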
So far, we've ensured that the data has been encrypted and that the data hasn't been altered during transit. However, all of that work would be for naught if the data, and more importantly, the key, were mistakenly sent to the wrong recipient. This is where the third component, or authenticity, comes into play.
Before any encryption can occur, a key has to be created and exchanged. Since the same key is used to encrypt and to decrypt the data during the session, it is known as a symmetric or session key. How do we safely exchange that key in the first place? How can we be sure that we just exchanged that key with the intended recipient and no one else?
This requires yet another class of algorithms known as asymmetric or public key algorithms. These algorithms are called asymmetric as the sender and recipient do not share the same key. Instead, both the sender and the recipient separately generate a key pair which consists of two mathematically related keys. One key, known as the public key, is exchanged. This means that the recipient has a copy of the sender's public key and vice versa. The other key, known as the private key, must be kept private. The security depends upon the fact that no one else has a copy of a user's private key. If a user suspects that his private key has been compromised, he should immediately revoke that key pair and generate a new key pair.
When a key pair is generated, it is associated with a unique string of short nonsense words known as a fingerprint. The fingerprint is used to ensure that you are viewing the correct public key. (Remember, you never get to see anyone else's private key.) In order to verify a recipient, they first need to send you a copy of their public key. You then need to double-check the fingerprint with the other person to ensure you did indeed get their public key. This will make more sense in the next article when we generate a key pair and you see a fingerprint for yourself.
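To make key pairs and fingerprints less abstract, here is a sketch that assumes the third-party Python `cryptography` package; real tools such as ssh-keygen present the same idea, though their exact fingerprint formats differ, and the short digest shown here is only a simplified stand-in for a real fingerprint.

```python
# pip install cryptography  (assumed available)
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # safe to hand out; the private key is not

public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)
# A simple fingerprint: a short digest of the public key that two people can read
# to each other to confirm they are looking at the same key.
print(hashlib.sha256(public_bytes).hexdigest()[:32])
```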
The most common key generation algorithm is RSA. You'll often see the term RSA associated with digital certificates or certificate authorities, also known as CAs. A digital certificate is a signed file that contains a recipient's public key, some information about the recipient, and an expiration date. The X.509 or PKCS #9 standard dictates the information found in a digital certificate. You can read the standard for yourself at http://www.rsasecurity.com/rsalabs/pkcs or http://ftp.isi.edu/in-notes/rfc2985.txt.
Digital certificates are usually stored on a computer known as a Certificate Authority. This means that you don't have to exchange public keys with a recipient manually. Instead, your system will query the CA when it needs a copy of a recipient's public key. This provides for a scalable authentication system. A CA can store the digital certificates of many recipients, and those recipients can be either users or computers.
It is also possible to generate digital certificates using an algorithm known as DSA. However, this algorithm is patented and is slower than RSA. Here is a FAQ on the difference between RSA and DSA. (The entire RSA Laboratories' FAQ is very good reading if you would like a more in depth understanding of cryptography.)
There is one last point to make on the subject of digital certificates and CAs. A digital certificate contains an expiration date, and the certificate cannot be deleted from the CA before that date. What if a private key becomes compromised before that date? You'll obviously want to generate a new certificate containing the new public key. However, you can't delete the old certificate until it expires. To ensure that certificate won't inadvertently be used to authenticate a recipient, you can place it in the CRL or Certificate Revocation List. Whenever a certificate is requested, the CRL is read to ensure that the certificate is still valid.
Authenticating the recipient is one half of the authenticity component. The other half involves generating and exchanging the information that will be used to create the session key which in turn will be used to encrypt and decrypt the data. This again requires an asymmetric algorithm, but this time it is usually the Diffie Hellman, or DH, algorithm.
It is important to realize that Diffie Hellman doesn't make the actual session key itself, but the keying information used to generate that key. This involves a fair bit of fancy math which isn't for the faint of heart. The best explanation I've come across, in understandable language with diagrams, is Diffie-Hellman Key Exchange - A Non-Mathematician's Explanation by Keith Palmgren.
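To see the flavor of Diffie Hellman without the heavy math, here is a toy sketch (ours) with numbers far too small to be secure; real implementations use primes of the group sizes shown in the table below. Each side keeps its private value secret, exchanges only the public value, and still derives the same keying information.

```python
import secrets

p = 0xFFFFFFFB   # a small public prime (toy-sized; real groups use 768+ bit primes)
g = 5            # public generator

a = secrets.randbelow(p - 2) + 2      # Alice's private value
b = secrets.randbelow(p - 2) + 2      # Bob's private value

A = pow(g, a, p)                      # Alice sends A to Bob; Bob sends B to Alice
B = pow(g, b, p)

shared_alice = pow(B, a, p)           # both sides derive the same keying information
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)     # True
```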
It is important that the keying information is kept as secure as possible, so the larger the bit size, the better. The possible Diffie Hellman bit sizes have been divided into groups. The following chart summarizes the possible Diffie Hellman Groups:
| Group Name | Bit Size |
|------------|----------|
| Group 1 | 768 bits |
| Group 2 | 1024 bits |
| Group 5 | 1536 bits |
When configuring a cryptosystem, you should use the largest Diffie Hellman group size that it supports.
The other term you'll see associated with the keying information is PFS, or Perfect Forward Secrecy, which Diffie Hellman supports. PFS ensures that the new keying information is not mathematically related to the old keying information. This means that if someone sniffs an old session key, they won't be able to use that key to guess the new session key. PFS is always a good thing, and you should use it if the cryptosystem supports it.
Let's do a quick recap of how a cryptosystem protects data transmitted onto a network: it encrypts the data with a shared session key so that only the intended recipient can read it (privacy), it attaches a keyed cryptographic checksum so that tampering can be detected (integrity), and it verifies the recipient and exchanges the keying information using public key algorithms such as RSA and Diffie Hellman (authenticity).
In next week's article, you'll have the opportunity to see many of these cryptographic terms in action as we'll be configuring a cryptosystem that comes built in to your FreeBSD system: ssh.
Dru Lavigne is a network and systems administrator, IT instructor, author and international speaker. She has over a decade of experience administering and teaching Netware, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux, and BSD systems. A prolific author, she pens the popular FreeBSD Basics column for O'Reilly and is author of BSD Hacks and The Best of FreeBSD Basics.
Copyright © 2009 O'Reilly Media, Inc. | http://www.linuxdevcenter.com/lpt/a/2866 | 13 |
35 | Topics covered: Multielectron atoms and electron configurations
Instructor: Catherine Drennan, Elizabeth Vogel Taylor
Lecture Notes (PDF)
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK. As you're settling into your seats, why don't we take 10 more seconds on the clicker question here. All right, so this is a question that you saw on your problem-set, so this is how many electrons would we expect to see in a single atom in the 2 p state. So, let's see what you said here. Six. And the correct answer is, in fact, six. And most of you got that, about 75% of you got that right. So, let's consider some people got it wrong, however, and let's see where that wrong answer might have come from, or actually, more importantly, let's see how we can all get to the correct answer. So if we say that we have a 2 p orbital here, that means that we can have how many different complete orbitals have a 2 for an n, and a p as its l value? three.
So, we can have the 2 p x, 2 p y, and 2 p z orbitals. Each of these orbitals can have two electrons in them, so we get two electrons here, here, and here. So, we end up with a total of six electrons that are possible that have that 2 p orbital value.
So this is a question that, hopefully, if we see another one like this we'll get a 100% on, because you've already seen this in your problem-set as much as you're going to see it, and you're seen it in class as much as you're going to see it. So if you're still having trouble with this, this is something you want to bring up in your recitation. And the idea behind this, of course, is that we know that every electron has to have its own distinct set of four quantum numbers. So that means that if we have three orbitals, we can only have six electrons in those complete three orbitals.
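A tiny sketch (ours, not part of the lecture) of the counting the professor just walked through: each subshell with quantum number l has 2l + 1 orbitals, and each orbital holds two electrons.

```python
def max_electrons(l):
    """Maximum electrons in a subshell with angular momentum quantum number l."""
    orbitals = 2 * l + 1      # m_l = -l, ..., 0, ..., +l
    return 2 * orbitals       # two spin states per orbital

for name, l in (("s", 0), ("p", 1), ("d", 2), ("f", 3)):
    print(name, max_electrons(l))   # s 2, p 6, d 10, f 14
```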
All right, so today we're going to fully have our discussion focused on multi-electron atoms. We started talking about these on Wednesday, and what we're going to start with is considering specifically the wave functions for multi-electron atoms. So, the wave functions for multi-electron atoms. Then we'll move on to talking about the binding energies, and we'll specifically talk about how that differs from the binding energies we saw of hydrogen atoms. We talked about that quite in depth, but there are some differences now that we have more than one electron in the atom.
Then something that you probably have a lot of experience with is talking about electron configuration and writing out the electron configuration. But we'll go over that, particularly some exceptions, when we're filling in electron configurations, and how we would go about doing that for positive ions, which follow a little bit of a different procedure.
And if we have time today, we'll start in on the photo-electron spectroscopy, if not, that's where we'll start when we come back on Wednesday.
So, what we saw just on Wednesday, in particular, but also as we have been discussing the Schrodinger equation for the hydrogen atom, is that this equation can be used to correctly predict the atomic structure of hydrogen, and also all of the energy levels of the different orbitals in hydrogen, which matched up with what we observed, for example, when we looked at the hydrogen atom emission spectra. And what we can do is we can also use the Schrodinger equation to make these accurate predictions for any other atom that we want to talk about in the periodic table.
The one problem that we run into is as we go to more and more atoms on the table, as we add on electrons, the Schrodinger equation is going to get more complicated. So here I've written for the hydrogen atom that deceptively simple form of the Schrodinger equation, where we don't actually write out the Hamiltonian operator, but you remember that's a series of second derivatives, so we have a differential equation that were actually dealing with.
If you think about what happens when we go from hydrogen to helium, now instead of one electron, so three position variables, we have to describe two electrons, so now we have six position variables that we need to plug into our Schrodinger equation. So similarly, as we now move up only one more atom in the table, so to an atomic number of three or lithium, now we're going from six variables all the way to nine variables.
So you can see that we're starting to have a very complicated equation, and it turns out that it's mathematically impossible to even solve the exact Schrodinger equation as we move up to higher numbers of electrons.
So, what we say here is we need to take a step back here and come up with an approximation that's going to allow us to think about using the Schrodinger equation when we're not just talking about hydrogen or one electron, but when we have these multi-electron atoms.
The most straightforward way to do this is to make what's called a one electron orbital approximation, and when you do you get out what are called Hartree orbitals, and what this means is that instead of considering the wave function as a function, for example, for helium as six different variables, what we do is we break it up and treat each electron has a separate wave function and say that our assumption is that the total wave function is equal to the product of the two individual wave functions.
So, for example, for helium, we can break it up into wave function for it the r, theta, and phi value for electron one, and multiply that by the wave function for the r, theta, and phi value for electron number two. So essentially what we're saying is we have a wave function for electron one, and a wave function for electron two.
We know how to write that in terms of the state numbers, so it would be 1, 0, 0, because we're talking about the ground state. We're always talking about the ground state unless we specify that we're talking about an excited state. And we have the spin quantum number as plus 1/2 for electron one, and minus 1/2 for the electron two. It's arbitrary which one I assigned to which, but we know that we have to have each of those two magnetic spin quantum numbers in order to have the distinct four letter description of an electron. We know that it's not enough just to describe the orbital by three quantum numbers, we need that fourth number to fully describe an electron.
And when we describe this in terms of talking about chemistry terminology, we would call the first one the 1 s, and 1 is in parentheses because we're talking about the first electron there, and we would multiply it by the wave function for the second one, which is also 1 s, but now we are talking about that second electron.
We can do the exact same thing when we talk about lithium, but now instead of breaking it up into two wave functions, we're breaking it up into three wave functions because we have three electrons.
So, the first again is the 1 s 1 electron. We then have the 1 s 2 electron, and what is our third electron going to be? Yeah. So it's going to be the 2 s 1 electron. So we can do this essentially for any atom we want, we just have more and more wave functions that we're breaking it up to as we get to more and more electrons.
And we can also write this in an even simpler form, which is what's called electron configuration, and this is just a shorthand notation for these electron wave functions. So, for example, again we see hydrogen is 1 s 1, helium we say is 1 s 2, or 1 s squared, so instead of writing out the 1 s 1 and the 1 s 2, we just combine it as 1 s squared, lithium is 1 s 2, 2 s 1.
So writing out electron configurations I realize is something that a lot of you had experience with in high school, you're probably -- many of you are very comfortable doing it, especially for the more straightforward atoms. But what's neat to kind of think about is if you think about what a question might have been in high school, which is please write the electron configuration for lithium, now we can also answer what sounds like a much more impressive, and a much more complicated question, which would be write the shorthand notation for the one electron orbital approximation to solve the Schrodinger equation for lithium.
So essentially, that is the exact same thing. The electronic configuration, all it is is the shorthand notation for that one electron approximation for the Schrodinger equation for lithium. So, if you're at hanging your exam from high school on the fridge and you want to make it look more impressive, you could just rewrite the question as that, and essentially you're answering the same thing. But now, hopefully, we understand where that comes from, why it is that we use the shorthand notation.
So, let's write this one electron orbital approximation for berylium, that sounds like a pretty complicated question, but hopefully we know that it's not at all, it's just 1 s 2, and then 2 s 2. And we can go on and on down the table. So, for example, for boron, now we're dealing with 1 s 2, then 2 s 2, and now we have to move into the p orbital so we go to 2 p 1.
So that's a little bit of an introduction into electron configuration. We'll get into some spots where it gets a little trickier, a little bit more complicated later in class. But that's an idea of what it actually means to talk about electron configuration.
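A rough sketch (ours) of filling subshells in the usual order for the light elements discussed here; it follows the simple filling order only, stops at 5p, and ignores the exceptions the professor mentions.

```python
# Subshells in the conventional filling order, with their capacities.
FILL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def electron_configuration(n_electrons):
    parts = []
    for subshell in FILL_ORDER:
        if n_electrons <= 0:
            break
        filled = min(n_electrons, CAPACITY[subshell[-1]])
        parts.append(f"{subshell}{filled}")
        n_electrons -= filled
    return " ".join(parts)

print(electron_configuration(3))   # lithium:  1s2 2s1
print(electron_configuration(5))   # boron:    1s2 2s2 2p1
print(electron_configuration(18))  # argon:    1s2 2s2 2p6 3s2 3p6
```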
So now that we can do this, we can compare and think about, we know how to consider wave functions for individual electrons in multi-electron atoms using those Hartree orbitals or the one electron wave approximations. So let's compare what some of the similarities and differences are between hydrogen atom orbitals, which we spent a lot of time studying, and now these one electron orbital approximations for these multi-electron atoms.
So, as an example, let's take argon, I've written up the electron configuration here, and let's think about what some of the similarities might be between wave functions in argon and wave functions for hydrogen. So the first is that the orbitals are similar in shape. So for example, if you know how to draw an s orbital for a hydrogen atom, then you already know how to draw the shape of an s orbital or a p orbital for argon.
Similarly, if we were to look at the radial probability distributions, what we would find is that there's an identical nodal structure. So, for example, if we look at the 2 s orbital of argon, it's going to have the same amount of nodes and the same type of nodes that the 2 s orbital for hydrogen has. So how many nodes does the 2 s orbital for hydrogen have? It has one node, right, because if we're talking about nodes it's just n minus 1 is total nodes, so you would just say 2 minus 1 equals 1 node for the 2 s orbital. And how many of those nodes are angular nodes? zero. L equals 0, so we have zero angular nodes, that means that they're all radial nodes. So what we end up with is one radial node for the 2 s orbital of hydrogen, and we can apply that for argon or any other multi-electron atom here, we also have one radial node for the 2 s orbital of argon.
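A one-liner version of the node bookkeeping used here (ours, not from the lecture): total nodes n − 1, angular nodes l, radial nodes n − l − 1.

```python
def nodes(n, l):
    """Return (total, angular, radial) node counts for an orbital with quantum numbers n, l."""
    return n - 1, l, n - 1 - l

print(nodes(2, 0))   # 2s: (1, 0, 1) -> one radial node
print(nodes(2, 1))   # 2p: (1, 1, 0) -> one angular node, no radial nodes
```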
But there's also some differences that we need to keep in mind, and that will be the focus of a lot of the lecture today. One of the main difference is is that when you're talking about multi-electron orbitals, they're actually smaller than the corresponding orbital for the hydrogen atom.
We can think about why that would be. Let's consider again an s orbital for argon, so let's say we're looking at the 1 s orbital for argon. What is the pull from the nucleus from argon going to be equal to? What is the charge of the nucleus? Does anyone know, it's a quick addition problem here. Yeah, so it's 18. So z equals 18, so the nucleus is going to be pulling at the electron with a Coulombic attraction that has a charge of plus 18, if we're talking about the 1 s electron or the 1 s orbital in argon. It turns out, and we're going to get the idea of shielding, so it's not going to actually feel that full plus 18, but it'll feel a whole lot more than it will just feel in terms of a hydrogen atom where we only have a nuclear charge of one.
So because we're feeling a stronger attractive force from the nucleus, we're actually pulling that electron in closer, which means that the probability squared of where the electron is going to be is actually a smaller radius. So when we talk about the size of multi-electron orbitals, they're actually going to be smaller because they're being pulled in closer to the nucleus because of that stronger attraction because of the higher charge of the nucleus in a multi-electron atom compared to a hydrogen atom.
The other main difference that we're really going to get to today is that in multi-electron atoms, orbital energies depend not just on the shell, which is what we saw before, not just on the value of n, but also on the angular momentum quantum number. So they also depend on the sub-shell or l. And we'll really get to see a picture of that, and I'll be repeating that again and again today, because this is something I really want everyone to get firmly into their heads.
So, let's now take a look at the energies. We looked at the wave functions, we know the other part of solving the Schrodinger equation is to solve for the binding energy of electrons to the nucleus, so let's take a look at those. And there again is another difference between multi-electron atom and the hydrogen atoms. So when we talk about orbitals in multi-electron atoms, they're actually lower in energy than the corresponding h atom orbitals. And when we say lower in energy, of course, what we mean is more negative. Right, because when we think of an energy diagram, that lowest spot there is going to have the lowest value of the binding energy or the most negative value of binding.
So, let's take a look here at an example of an energy diagram for the hydrogen atom, and we can also look at a energy diagram for a multi-electron atom, and this is just a generic one here, so I haven't actually listed energy numbers, but I want you to see the trend. So for example, if you look at the 1 s orbital here, you can see that actually it is lower in the case of the multi-electron atom than it is for the hydrogen atom.
You see the same thing regardless of which orbital you're looking at. For example, for the 2 s, again what you see is that the multi-electron atom, its 2 s orbital is lower in energy than it is for the hydrogen. The same thing we see for the 2 p. Again the 2 p orbitals for the multi-electron atom, lower in energy than for the hydrogen atom.
But there's something you'll note here also when I point out the case of the 2 s versus the 2 p, which is what I mentioned that I would be saying again and again, which is when we look at the hydrogen atom, the energy of all of the n equals 2 orbitals are exactly the same. That's what we call degenerate orbitals, they're the same energy. But when we get to the multi-electron atoms, we see that actually the p orbitals are higher in energy than the s orbitals. So we'll see specifically why it is that the s orbitals are lower in energy. We'll get to discussing that, but what I want to point out here again is the fact that instead of just being dependent on n, the energy level is dependent on both n and l. And is no longer that sole determining factor for energy, energy also depends both on n and on l.
And we can look at precisely why that is by looking at the equations for the energy levels for a hydrogen atom versus the multi-electron atom. So, for a hydrogen atom, and actually for any one electron atom at all, this is our energy or our binding energy. This is what came out of solving the Schrodinger equation, we've seen this several times before that the energy is equal to negative z squared times the Rydberg constant over n squared. Remember the z squared, that's just the atomic number or the charge on the nucleus, and we can figure that out for any one electron atom at all. And an important thing to note is in terms of what that physically means, so physically the binding energy is just the negative of the ionization energy. So if we can figure out the binding energy, we can also figure out how much energy we have to put into our atom in order to a eject or ionize an electron.
We can also look at the energy equation now for a multi-electron atom. And the big difference is right here in this term. So instead of being equal to negative z squared, now we're equal to negative z effective squared times r h all over n squared.
So when we say z effective, what we're talking about is instead of z, the charge on the nucleus, we're talking about the effective charge on a nucleus. So for an example, even if a nucleus has a charge of 7, but the electron we're interested in only feels the charge as if it were a 5, then what we would say is that the z effective for the nucleus is 5 for that electron. And we'll talk about this more, so if this is not completely intuitive, we'll see why in a second.
So the main idea here is z effective is not z, so don't try to plug one in for the other, they're absolutely different quantities in any case when we're not talking about a 1 electron atom. And the point that I also want to make is the way that they differ, z effective actually differs from the total charge in the nucleus due to an idea called shielding. So, shielding happens when you have more than one electron in an atom, and the reason that it's happening is because you're actually canceling out some of that positive charge from the nucleus or that attractive force with a repulsive force between two electrons. So if you have some charge in the nucleus, but you also have repulsion with another electron, the net attractive charge that a given electron going to feel is actually less than that total charge in the nucleus.
And shielding is a little bit of a misnomer because it's not actually that's the electron's blocking the charge from another electron, it's more like you're canceling out a positive attractive force with a negative repulsive force. But shielding is a good way to think about it, and actually, that's what we'll use in this class to sort of visualize what's happening when we have many electrons in an atom and they're shielding each other. Shielding is the term that's used, it brings up a certain image in our mind, and even though that's not precisely what's going on, it's a very good way to visualize what we're trying to think about here.
So let's take two cases of shielding if we're talking about, for example, the helium, a helium nucleus or a helium atom. So what is the charge on a helium nucleus? What is z? Yup, so it's plus 2. So the charge is actually just equal to z, we can write plus 2, or you can write plus 2 e, e just means the absolute value of the charge on an electron. When we plug it into equations we just use the number, the e is assumed there.
So, let's think of what we could have if we have two electrons in a helium atom that are shielded in two extreme ways. So, in the first extreme way, let's consider that our first electron is at some distance very far away from the nucleus, we'll call this electron one, and our second electron is, in fact, much, much closer to the nucleus, and let's think of the idea of shielding in more of the classical sense where we're actually blocking some of that positive charge. So if we have total and complete shielding where that can actually negate a full positive charge, because remember our nucleus is plus 2, one of the electrons is minus 1, so if it totally blocks it, all we would have left from the nucleus is an effective charge of plus 1.
So in our first case, our first extreme case, would be that the z effective that is felt by electron number 1, is going to be plus 1.
So, what we can do is figure out what we would expect the binding energy of that electron to be in the case of this total shielding. And remember again, the binding energy physically is the negative of the ionization energy, and that's actually how you can experimentally check to see if this is actually correct. And that's going to be equal to negative z effective squared times r h over n squared.
So, let's plug in these values and see what we would expect to see for the energy. So it would be negative 1 squared times r h all over 1 squared, since our z effective we're saying is 1, and n is also equal to 1, because we're in the ground state here so we're talking about a 1 s orbital.
So if we have a look at what the answer would be, this looks very familiar. We would expect our binding energy to be a negative 2 . 1 8 times 10 to the negative 18 joules. This is actually what the binding energy is for hydrogen atom, and in fact, that makes sense because in our extreme case where we have total shielding by the second electron of the electron of interest, it's essentially seeing the same nuclear force that an electron in a hydrogen atom would see.
All right. Let's consider now the second extreme case, or extreme case b, for our helium atom. Again we have the charge of the nucleus on plus 2, but let's say this time the electron now is going to be very, very close to the nucleus. And let's say our second electron now is really far away, such that it's actually not going to shield any of the nuclear charge at all from that first electron. So what we end up saying is that the z effective or the effective charge that that first electron feels is now going to be plus 2.
Again, we can just plug this into our equation, so if we write in our numbers now saying that z effective is equal to 2, we find that we get negative 2 squared r h, all divided again by 1 squared -- we're still talking about a 1 s orbital here. And if we do that calculation, what we find out is that the binding energy, in this case where we have no shielding, is negative 8 . 7 2 times 10 to the negative 18 joules.
So, let's compare what we've just seen as our two extremes. So in extreme case a, we saw that z effective was 1. This is what we call total shielding. The electron completely canceled out it's equivalent of charge from the nucleus, such that we only saw in a z effective of 1. In an extreme case b, we had a z effective of 2, so essentially what we had was no shielding at all. We said that that second electron was so far out of the picture, that it had absolutely no affect on what the charge was felt by that first electron.
So, we can actually think about now, we know the extreme cases, but what is the reality, and the reality is if we think about the ionization energy, and we measure it experimentally, we find that it's 3 . 9 4 times 10 to the negative 18 joules, and what you can see is that falls right in the middle between the two ionization energies that we would expect for the extreme cases. And this is absolutely confirming that what is happening is what we would expect to happen, because we would expect the case of reality is that, in fact, some shielding is going on, but it's not going to be total shielding, but at the same time it's not going to be no shielding at all.
And if we experimentally know what the ionization energy is, we actually have a way to find out what the z effective will be equal to. And we can use this equation here, this is just the equation for the ionization energy, which is the same thing as saying the negative of the binding energy that's equal to z effective squared r h over n squared.
So, what we can do instead of talking about the ionization energy, because that's one of our known quantities, is we can instead solve so that we can find z effective. So, if we just rearrange this equation, what we find is that z effective is equal to n squared times the ionization energy, all over the Rydberg constant and the square root of this. So the square root of n squared r e over r h.
So what's our value for n here? one. Yup, that's right. And then what's our value for ionization energy? Yup. So it's just that ionization energy that we have experimentally measured, 3 . 9 4 times 10 to the negative 18 joules. We put all of this over the Rydberg constant, which is 2 . 1 8 times 10 to the negative 18 joules, and we want to raise this all to the 1/2. So what we end up seeing is that the z effective is equal to positive 1 . 3 4.
So, this is what we find the actual z effective is for an electron in the helium atom. Does this seem like a reasonable number? Yeah? Who says yes, raise your hand if this seems reasonable. Does anyone think this seems not reasonable? OK. How can we check, for example, if it does or if it doesn't seem reasonable. Well, the reason, the way that we can check it is just to see if it's in between our two extreme cases. We know that it has to be more than 1, because even if we had total shielding, we would at least feel is the effective of 1. We know that it has to be equal to less than 2, because even if we had absolutely no shielding at all, the highest z effective we could have is 2, so it makes perfect sense that we have a z effective that falls somewhere in the middle of those two.
So, let's look at another example of thinking about whether we get an answer out that's reasonable. So we should be able to calculate a z effective for any atom that we want to talk about, as long as we know what that ionization energy is. And I'm not expecting you to do that calculation here, because it involves the calculator, among maybe a piece of paper as well. But what you should be able to do is take a look at a list of answers for what we're saying z effective might be, and determining which ones are possible versus which ones are not possible.
So, why don't you take a look at this and tell me which are possible for a 2 s electron in a lithium atom where z is going to be equal to three? Let's do 10 more seconds on that.
OK, great. So, the majority of you got it right. There are some people that are a little bit confused still on where this make sense, so, let's just think about this a little bit more. So now we're saying that z is equal to 3, so if, for example, we had total shielding by the other two electrons, if they totally canceled out one unit of positive charge each in the nucleus, what we would end up with is we started with 3 and then we would subtract a charge of 2, so we would end up with a plus 1 z effective from the nucleus. So our minimum that we're going to see is that the smallest we can have for a z effective is going to be equal to 1. So any of the answers that said a z effective of . 3 9 or . 8 7 are possible, they actually aren't possible because even if we saw a total shielding, the minimum z effective we would see is 1. And then I think it looks like most people understood that four was not a possibility. Of course, if we saw no shielding at all what we would end up with is a z effective of 3.
So again, when we check these, what we want to see is that our z effective falls in between the two extreme cases that we could envision for shielding. And again, just go back and look at this and think about this, this should make sense if you kind of look at those two extreme examples, so even if it doesn't make entire sense in the 10 seconds you have to answer a clicker question right now, make sure this weekend you can go over it and be able to predict if you saw a list of answers or if you calculate your own answer on the p-set, whether or not it's right or it's wrong, you should be able to qualitatively confirm whether you have a reasonable or a not reasonable answer after you do the calculation part.
All right. So now that we have a general idea of what we're talking about with shielding, we can now go back and think about why it is that the orbitals are ordered in the order that they are. We know that the orbitals for multi-electron atoms depend both on n and on l. But we haven't yet addressed why, for example, a 2 s orbital is lower in energy than the 2 p orbital, or why, for example, a 3 s orbital is lower in energy than a 3 p, which in turn is lower than a 3 d orbital.
So let's think about shielding in trying to answer why, in fact, it's those s orbitals that are the lowest in energy. And when we make these comparisons, one thing I want to point out is that we need to keep the constant principle quantum number constant, so we're talking about a certain state, so we could talk about the n equals 2 state, or the n equals 3 state. And when we're talking about orbitals in the same state, what we find is that orbitals that have lower values of l can actually penetrate closer to the nucleus.
This is an idea we introduced on Wednesday when we were looking at the radial probability distributions of p orbitals versus s orbitals versus d orbitals. But now it's going to make more sense because in that case we were just talking about single electron atoms, and now we're talking about a case where we actually can see shielding. So what is actually going to matter is how closely that electron can penetrate to the nucleus, and what I mean by penetrate to the nucleus is is there probability density a decent amount that's very close to the nucleus.
So, if we superimpose, for example, the 2 s radial probability distribution over the 2 p, what we see is there's this little bit of probability density in the 2 s, but it is significant, and that's closer to the nucleus than it is for the 2 p. And remember, this is in complete opposition to what we call the size of the orbitals, because we know that the 2 p is actually a smaller orbital. For example, when we're talking about radial probability distributions, the most probable radius is closer into the nucleus than it is for the s orbital.
But what's important is not where that most probable radius is when we're talking about the z effective it feels, what's more important is how close the electron actually can get the nucleus. And for the s electron, since it can get closer, what we're going to see is that s electrons are actually less shielded than the corresponding p electrons. They're less shielded because they're closer to the nucleus, they feel a greater z effective.
We can see the same thing when we compare p electrons to d electrons, or p and d orbitals. I've drawn the 3 p and the 3 d orbital here, and again, what you can see is that the p electron are going to be able to penetrate closer to the nucleus because of the fact that there's this bit of probability density that's in significantly closer to the nucleus than it is for the 3 d orbital.
And if we go ahead and superimpose the 3 s on top of the 3 p, you can see that the 3 s actually has some bit of probability density that gets even closer to the nucleus than the 3 p did. So that's where that trend comes from where the s orbital is lower than the d orbital, which is lower than the d orbital.
So now that we have this idea of shielding and we can talk about the differences in the radial probability distributions, we can consider more completely why, for example, if we're talking about lithium, we write the electron configuration as 1 s 2, 2 s 1, and we don't instead jump from the 1 s 2 all the way to a p orbital. So the most basic answer that doesn't explain why is just to say well, the s orbital is lower in energy than the p orbital, but we now have a more complete answer, so we can actually describe why that is. And what we're actually talking about again is the z effective. So that z effective felt by the 2 p is going to be less than the z effective felt by the 2 s.
And another way to say this, I think it's easiest to look at just the fact that there's some probability density very close the nucleus, but what we can actually do is average the z effective over this entire radial probability distribution, and when we find that, we find that it does turn out that the average of the z effective over the 2 p is going to be less than that of the 2 s.
So we know that we can relate to z effective to the actual energy level of each of those orbitals, and we can do that using this equation here where it's negative z effective squared r h over n squared, we're going to see that again and again. And it turns out that if we have a, for example, for s, a very large z effective or larger z effective than for 2 p, and we plug in a large value here in the numerator, that means we're going to end up with a very large negative number. So in other words a very low energy is what we're going to have when we talk about the orbitals -- the energy of the 2 s orbital is going to be less than the energy of the 2 p orbital.
Another way to say that it's going to be less, so you don't get confused with that the fact this is in the numerator here, there is that negative sign so it's less energy but it's a bigger negative number that gives us that less energy there.
All right, so let's go back to electrons configurations now that we have an idea of why the orbitals are listed in the energy that they are listed under, why, for example, the 2 s is lower than the 2 p. So now we can go back and think about filling in these electron configurations for any atom.
I think most and you are familiar with the Aufbau or the building up principle, you probably have seen it quite a bit in high school, and this is the idea that we're filling up our energy states, again, which depend on both n and l, one electron at a time starting with that lowest energy and then working our way up into higher and higher orbitals.
And when we follow the Aufbau principle, we have to follow two other rules. One is the Pauli exclusion principal, we discussed this on Wednesday. So this is just the idea that the most electrons that you can have in a single orbital is two electrons. That makes sense because we know that every single electron has to have its own distinct set of four quantum numbers, the only way that we can do that is to have a maximum of two spins in any single orbital or two electrons per orbital.
We also need to follow Hund's rule, this is that a single electron enters each state before it enters a second state. And by state we just mean orbital, so if we're looking at the p orbitals here, that means that a single electron goes in x, and then it will go in the z orbital before a second one goes in the x orbital. This intuitively should make a lot of sense, because we know we're trying to minimize electron repulsions to keep things in as low an energy state as possible, so it makes sense that we would put one electron in each orbital first before we double up in any orbital.
And the third fact that we need to keep in mind is that spins remain parallel prior to adding a second electron in any of the orbitals. So by parallel we mean they're either both spin up or they're both spin down -- remember that's our spin quantum number, that fourth quantum number. And the reason for this comes out of solving the relativistic version of the Schrodinger equation, so unfortunately it's not as intuitive as knowing that we want to fill separate before we double up a degenerate orbital, but you just need to keep this in mind and you need to just memorize the fact that you need to be parallel before you double up in the orbital.
So, we'll see how this works in a second. So let's do this considering, for example, what it would look like if we were to write out the electron configuration for oxygen where z is going to be equal to 8. So what we're doing is filling in those eight electrons following the Aufbau principle, so our first electron is going to go in the 1 s, and then we have no other options for other orbitals that are at that same energy, so we put the second electron in the 1 s as well. Then we go up to the 2 s, and we have two electrons that we can fill in the 2 s. And now we get the p orbitals, remember we want to fill up 1 orbital at a time before we double up, so we'll put one in the 2 p x, then one in the 2 p z, and then one in the 2 p y.
At this point, we have no other choice but to double up before going to the next energy level, so we'll put a second one in the 2 p x. And I arbitrarily chose to put it in the 2 p x, we also could have put it in the 2 p y or the 2 p z, it doesn't matter where you double up, they're all the same energy.
So if we think about what we would do to actually write out this configuration, we just write the energy levels that we see here or the orbital approximations. So if we're talking about oxygen, we would say that it's 1 s 2, then we have 2 s 2, and then we have 2 p, and our total number of electrons in the p orbitals are four.
So it's OK to not specify. I want to point out, whether you're in the p x, the p y, or the p z, unless a question specifically asks you to specify the m sub l, which occasionally will happen, but if it doesn't happen you just write it like this. But if, in fact, you are asked to specify the m sub l's, then we would have to write it out more completely, which would be the 1 s 2, the 2 s 2, and then we would say 2 p x 2, 2 p z 1, and 2 p y 1.
So again, in general, just go ahead and write it out like this, but if we do ask you to specify you should be able to know that the p orbital separates into these three -- the p sub-shell separates into these three orbitals.
So let's do a clicker question on assigning electron configurations using the Aufbau principle. So why don't you go ahead and identify the correct electron configuration for carbon, and I'll tell you that z is equal to 6 here. And in terms of doing this for your homework, I actually want to mention that in the back page of your notes I attached a periodic table that does not have electron configurations on them. It's better to practice doing electron configurations when you cannot actually see the electron configurations. And this is the same periodic table that you're going to get in your exams, so it's good to practice doing your problem-sets with that periodic table so you're not relying on having the double check right there of seeing what the electron configuration is. So, let's do 10 seconds on this problem here.
OK, great. So this might be our best clicker question yet. Most people were able to identify the correct electron configuration here. Some people, the next most popular answer with 5%, which is a nice low number, wanted to put two in the 2 p x before they moved on. Remember we have to put one in each degenerate orbital before we double up on any orbital, so just keep that rule in mind that we would fill one in each p orbital before we a to the second one. But it looks like you guys are all experts here on doing these electron configurations.
So, let's move on to some more complicated electron configurations. So, for example, we can move to the next periods in the periodic table. When we talk about a period, we're just talking about that principle quantum number, so period 2 means that we're talking about starting with the 2 s orbitals, period 3 starts with, what we're now filling into the 3 s orbitals here. So if we're talking about the third period, that starts with sodium and it goes all the way up to argon.
So if we write the electron configuration for sodium, which you can try later -- hopefully you would all get it correctly -- you see that this is the electron configuration here, 1 s 2, 2 s 2, 2 p 6, and now we're going into that third shell, 3 s 1.
And I want to point out the difference between core electrons and valence electrons here. If we look at this configuration, what we say is all of the electrons in these inner shells are what we call core electrons. The core electrons tend not to be involved in much chemistry in bonding or in reactions. They're very deep and held very tightly to the nucleus, so we can often lump them together, and instead of writing them all out separately, we can just write the equivalent noble gas that has that configuration. So, for example, for sodium, we can instead write neon and then 3 s 1.
So the 3 s 1, or any of the other electrons that are in the outer-most shell, those are what we call our valence electrons, and those are where all the excitement happens. That's what we see are involved in bonding. It makes sense, right, because they're the furthest away from the nucleus, they're the ones that are most willing to be involved in some chemistry or in some bonding, or those are the orbitals that are most likely to accept an electron from another atom, for example. So the valence electrons, those are the exciting ones. We want to make sure we have a full picture of what's going on there.
So, no matter whether or not you write out the full form here, or the noble gas configuration where you write ne first or whatever the corresponding noble gas is to the core electrons, we always write out the valence electrons here. So for sodium, again, we can write n e and then 3 s 1. We can go all the way down, magnesium, aluminum, all the way to this noble gas, argon, which would be n e and then 3 s 2, 3 p 6.
Now we can think about the fourth period, and the fourth period is where we start to run into some exceptions, so this is where things get a teeny bit more complicated, but you just need to remember the exceptions and then you should be OK, no matter what you're asked to write. So for the fourth period, now we're into the 4 s 1 for potassium here. And what we notice when we get to the third element in and the fourth period is that we go 4 s 2 and then we're back to the 3 d's.
So if you look at the energy diagram, what we see is that the 4 s orbitals are actually just a teeny bit lower in energy -- they're just ever so slightly lower in energy than the 3 d orbitals. You can see that as you fill up your periodic table, it's very clear. But also we'll tell you a pneumonic device to keep that in mind, so you always remember and get the orbital energy straight. But it just turns out that the 4 s is so low in energy that it actually surpasses the 3 d, because we know the 3 d is going to be pretty high in terms of the three shell, and the 4 s is going to be the lowest in terms of the 4 shell, and it turns out that we need to fill up the 4 s before we fill in the 3 d.
And we can do that just going along, 3 d 1, 2 3, and the problem comes when we get to chromium here, which is instead of what we would expect, we might expect to see 4 s 2, 3 d 4. What we see is that instead it's 4 s 1, and 3 d 5. So this is the first exception that you need to the Aufbau principle. The reason this an exception is because it turns out that half filled d orbitals are more stable than we could even predict. You wouldn't be expected to be able to guess that this would happen, because using any kind of simple theory, we would, in fact, predict that this would not be the case, but what we find experimentally is that it's more stable to have half filled d orbital than to have a 4 s 2, and a 3 d 4.
So you're going to need to remember, so this is an exception, you have to memorize. Another exception in the fourth period is in copper here, we see that again, we have 4 s 1 instead of 4 s 2. This is 4 s 1, 3 d 10, we might expect 4 s 2, 3 d 9, but again, this exception comes out of experimental observation, which is the fact that full d orbitals also are lower in energy then we could theoretically predict using simple calculations.
So again, you need to memorize these two exceptions, and the exception in general is that filled d 10, or half-filled d 5 orbitals are lower in energy than would be expected, so we got this flip-flip where if we can get to that half filled orbital by only removing one s electron, then we're going to do it, and the same with the filled d orbital.
And actually, when we get to the fifth period of the periodic table, that again takes place, so when you get to a half filled, or a filled d orbital, again you want to do it, so those exceptions would be with molybdenum and silver would be the corresponding elements in the fifth period where you're going to see the same case here where it's lower in energy to have the half filled or the filled d orbitals.
So here's the pneumonic I mentioned for writing the electron configuration and getting those orbital energies in the right order. All you do is just write out all the orbitals, the 1 s, then the 2 s 2 p 3, 3 s 3 p d, just write them in a straight line like this, and then if you draw diagonals down them, what you'll get is the correct order in terms of orbital energies. So if we go down the diagonal, we start with 1 s, then we get 2 s, then 2 p and 3 s, then 3 p, and 4 s, and then that's how we see here that 4 s is actually lower in energy than 3 d, then 4 p, 5 s and so on.
So if you want to on an exam, you can just write this down quickly at the beginning and refer to it as you're filling up your electron configurations, but also if you look at the periodic table it's very clear as you try to fill it up that way that the same order comes out of that. So, whichever works best for you can do in terms of figuring out electron configurations.
So the last thing I want to mention today is how we can think about electron configurations for ions. It turns out that it's going to be a little bit different when we're talking about positive ions here. We need to change our rules just slightly. So what we know is that these 3 d orbitals are higher in energy than 4 s orbitals, so I've written the energy of the orbital here for potassium and for calcium. But what happens is that once a d orbital is filled, I said the two are very close in energy, and once a d orbital is filled, it actually drops to become lower in energy than the 4 s orbital. So once we move past, we fill the 4 s first, but once we fill in the d orbital, now that's going to be lower in energy.
So that doesn't make a difference for us when we're talking about neutral atoms, because we would fill up the 4 s first, because that's lower in energy until we fill it, and then we just keep going with the d orbitals. So, for example, if we needed to figure out the electron configuration for titanium, it would just be argon then 4 s 2, and then we would fill in the 3 d 2.
So, actually we don't have to worry about this fact any time we're dealing with neutrals. The problem comes when instead we're dealing with ions. So what I want to point out is what we said now is that the 3 d 2 is actually lower in energy, so if we were to rewrite this in terms of what the actual energy order is, we should instead write it 3 d 2, 4 s 2.
So you might ask in terms of when you're writing electron configurations, which way should you write it. And we'll absolutely accept both answers for a neutral atom. They're both correct. In one case you decided to order in terms of energy and in one case you decided to order in terms of how it fills up. I don't care how you do it on exams or on problem sets, but you do need to be aware that the 3 d once filled is lower in energy than the 4 s, and the reason you need to be aware of that is if you're asked for the electron configuration now of the titanium ion. So, let's say we're asked for the plus two ion. So a plus two ion means that we're removing two electrons from the atom and the electrons that we're going to remove are always going to be the highest energy electrons. So it's good to write it like this because this illustrates the fact that in fact the 4 s electrons are the ones that are higher in energy. So the correct answer for titanium plus two is going to be argon 3 d 2, whereas if we did not rearrange our order here we might have been tempted to write as 4 s 2 so keep that in mind when you're doing the positive ions of corresponding atoms. Alright, so we'll pick up with photoelectron spectroscopy on Wednesday. Have a great weekend. | http://ocw.mit.edu/courses/chemistry/5-111-principles-of-chemical-science-fall-2008/video-lectures/lecture-8/ | 13 |
11 | Learn All Year Long
ReadWriteThink has a variety of resources for out-of-school use. Visit our Parent & Afterschool Resources section to learn more.
What’s in This? Investigating Nutrition Information
|Grades||6 – 8|
|Activity Time||1 to 2 hours (plus additional time to make advertisements)|
- Access to foods that have nutrition labels, either at home or at the grocery store
- Grocery Store Scavenger Hunt Guide
- Advertisement Analysis
- Access to food advertisements on TV, in magazines, on the Internet, and so forth
- Art supplies such as paper or poster board, markers, colored pencils, scissors, and glue, or a computer with design software such as Publisher or PowerPoint for making the advertisement
- Access to the Nutrition Data website
- Using the Grocery Store Scavenger Hunt Guide, ask children and teens to choose two foods they eat regularly and two foods they think are healthful. Have them record their answers on their guides.
- Then, either at the grocery store or at home, children and teens can find these foods and read their Nutrition Facts labels. Have them record their findings on their guides.
- Once they have examined the foods on their lists, invite them to do the same thing with the junk foods that are listed on their guides. If they are unable to find certain foods, they can try looking them up on the Nutrition Data website.
- Discuss which foods are more healthy and how they can tell using the food labels. Encourage children and teens to go to the Nutrition Data website to look up information about the foods being discussed and find out the effects of different foods on their bodies. You may wish to explore questions such as
- How do foods high in saturated fat affect the body?
- What about foods high in protein and iron?
- What do different vitamins do for the body?
- Which foods have those vitamins?
- Have them finish their guides by examining the packaging of the junk foods they found, paying attention to the colors, images, slogans, and so forth, in order to better understand how the sellers of the products makes them stand out as attractive foods to purchase.
- Discuss their findings. What about the packaging catches their attention? How are characters or celebrities used? How might that affect shoppers who see this product? Who does it seem like the product is for? How do they know?
- Now that they have practiced analyzing the food packaging and have discussed the effects of nutrition on their bodies, children and teens can find commercials or advertisements for different foods. They might search on the Internet or watch a few minutes of TV to find them.
- After they watch or look at the advertisements, they are ready to complete the Advertisement Analysis. The goal of this activity is for children and teens to think critically about the ways foods are advertised. In order to encourage this type of analysis, it is a good idea to talk about the parts of the advertisements and how they are put together to create a feeling or to make the products seem good for specific types of people.
- Have teens or children discuss these questions:
What are the parts of the advertisements? These might consist of music, color, and placement of the products being sold.
- What types of people are in the ads? How old are they? How do they dress and behave? What does this make you think about them? Are they cool? Are they sophisticated?
- What are the slogans associated with the products?
- What level of movement and excitement do the ads convey? Do the ads seem full of energy or slow and calm?
- What are you told about the products through narration?
- How do the parts of the ads work together? Based on these parts, who do you think the advertisements are speaking to? Do the products seem appropriate for their audiences?
- Now children and teens can make advertisements or commercials of their own: Children and teens can choose healthful foods that they would like their friends and family to try eating more often. They can create advertisements or commercials using some of the most effective techniques they noticed in professional food advertisements. Encourage them to write scripts, create props and backdrops, and include friends as the actors if they are making commercials. If they are making print advertisements, encourage them to focus on using effective images and words. They might start by brainstorming lists of possible slogans for their healthful foods and building the advertisements around these.
- Nutrition Data Scavenger Hunt: Use the Nutrition Data website to discover more about nutrition and healthful eating. Children and teens can find out their own Body Mass Index (BMI), learn about cholesterol and saturated fat using the Nutrition Glossary, or understand why protein is an important part of everyone's diet. To make it a challenge, invite children and teens to compete to find the most interesting, strange, or surprising information about food and health on the site. They might even want to create quizzes for each other and test their friends' food knowledge.
- Keep a log of all the foods eaten in a week. Explore the Nutrition Data website to see how healthful these foods are. Make a plan to replace any unhealthful foods with fun-to-eat, healthful foods.
- Measure Your Options: Once children and teens have found several unhealthful foods they want to replace with healthful foods in their diets, have some fun with math by showing them how to measure how much more they can eat of a healthful food like carrots than an unhealthful food like potato chips. Use the serving measurements on the packages to determine how much they would get of each food. Then talk about the difference in nutrients. Go to the Nutrition Data website and compare the vitamins, protein, and types of fat in each kind of food.
- Letter writing campaign: Did children and teens find something shocking, disturbing, or encouraging about one of the foods they have researched? What about the commercial that advertises the food? Does it lead viewers to believe the food will be different from how it really is? Consider writing a letter to the food company pointing out an issue about their food or praising them for producing a great food. Is this an issue that all consumers should know about? Write a letter to the editor to your local newspaper informing consumers about this issue.
- Healthful cooking class: Using some of the healthful ingredients that children and teens have found, they can find or create a recipe for a healthful meal and cook that meal with friends and family. They might also want to turn this into an opportunity to teach others about nutrition. Encourage them to create a menu that gives a delicious description of the meal and why it is healthful.
The person or group of people that the message of a piece of writing is meant for. Most pieces of writing have more than one audience.
Discussion is a natural way for children and teens to express or explain what they already know or what they are learning. When possible, let children and teens lead the direction of a discussion. Ask questions that lead to an extended response (“What do you think about…?” or “Why do you think…?”) rather than questions that might result in a yes or no or a simple answer.
To think both logically and creatively about a topic using different kinds of information. When people think critically, they not only attend to new words and ideas, but they also connect these words and ideas with the things they already know. | http://www.readwritethink.org/parent-afterschool-resources/activities-projects/what-this-investigating-nutrition-30294.html?main-tab=2 | 13 |
12 | Science Fair Information
Final Projects are due Jan. 11th. Projects will count as a test grade.
Science Fair Information
1. All students are required to complete a science fair project.
2. All students must have a written report following the steps of the scientific method.
3. All students will perform an oral presentation.
4. Students must have project approved before starting.
5. Projects will count as a test grade.
6. Students must adhere to deadlines.
7. Gifted students & students that did an exceptional project are expected to compete in the Wilson Middle School Science Fair.
8. Students participating in the science fair will receive additional information & necessary forms.
1. Must be typed or written neatly in blue or black ink.
2. Each step must be clearly written then skip a line & put the information.
3. Space in between each step.
1. Oral presentation can include any visual aides, power point presentations, demonstrations, models, etc.
2. If planning on competing, make sure any pictures just show the experiment & not yourself.
1. Title-the title of your project should be catchy, an “interest grabber” but it should also describe the project well enough that people reading your information can quickly figure out what you were studying.
2. Problem-written as a question. Make sure this is clearly written that explains what you are trying to do. Make sure your question is testable. No pronouns.
4. Materials-a list of all the supplies used to build or perform experiment. Include any measurements. Numbered steps.
5. Procedures-detailed step by step instructions on how you performed your experiment. Someone could follow your steps & receive the same information. Include measurements & exact information.
8. Conclusion-restate your hypothesis and indicate whether the hypothesis was correct or not. List evidence (data) that supports your hypothesis. In the conclusion you get to tell your audience what you found out from the experiment. Focus on what you learned about your original question & hypothesis. How did you reach that conclusion? Does that make sense?
9. Bibliography-list 3 sited sources in correct bibliography form.
****A picture is worth a thousand words. Plan to take pictures of the materials you used & of the experiment as it is being carried out.
****Start early so you have time to re-do any part of the experiment. Plan for the unexpected to happen; don’t wait till the week before the project is due to begin.
If you run into trouble seek advice from me. If you have any questions, ask them now. | http://www.augusta.k12.va.us/Page/12562 | 13 |
21 | Ma1, Using and applying mathematics
Sorting and ordering numbers
- Assembles a new classroom resource, which has a 10 by 10 arrangement of clear pockets to hold cards numbered from 1 to 100.
- Chooses to find numbers 1 to 10 and position them in turn across the top row.
- Adapts his approach to include positioning numbers in columns.
- Uses the units digit to decide the column and the tens digit to decide the row.
- Orders all of the numbers from 1 to 100.
- Talk about what would happen if:
- the numbers started with zero
- there were only five columns in the grid.
- Pose his own questions, predict where numbers will be placed and test the prediction.
- What would we write on the pot? What labels would we write? Daniel?
- Teeth. Fantastic!
- As well as…
- What other things do…
- Bones. Brilliant!
- Metal. Leah?
- Glass. We might find glass.
- So we have our pots to sort and group our finds, but how are we going to remember how many plastic artefacts we find, how many metal artefacts we find, how many bone artefacts we find? Daniel?
- We could record it on the board.
- Could you write it on the board for us, Daniel, to show us how you’d organise that information?
- Or a tally chart.
- A tally chart. Is that what you think Daniel is doing, Evangeline?
- Who would like to count how many tally marks we have in wood? Daniel, I’m going to choose you.
- One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen, sixteen,seventeen, eighteen, nineteen, twenty, twenty-one, twentytwo, twenty-three, twenty-four, twenty-five.
- Twenty-five. Who had the closest guess?
- I did because I went er… thirty-one.
- Roshan said…
- Roshan said twenty-six. Now how many did you count, Daniel?
- Twenty five. So is thirty-one closer to twenty-five or is twenty-six?
- Twenty-six. Well done.
- Daniel, how would you count… how would you count the tallies?
- There’s twenty-five of er… wood because I counted in fives and it got to twenty-five.
- Can you show us how you counted in fives, Daniel?
- Five, ten, fifteen, twenty, twenty-five.
- Good counting, Daniel. Well done.
- And next we’ve got… ten… five, ten, fifteen, twenty, twenty-five… twenty-six.
- How many artefacts would be in the not bone case? Work it out with your partner. Working out.
- Inaudible… forty-five, fifty, fifty-one… fifty-two, fifty-three, fifty-four, fifty-five, fifty… -six.
In the classroom
[End of clip]
- Suggests labels for pots to sort and group finds from a sample of garden soil: teeth, bones, metal…
- Divides the whiteboard and labels regions for children to keep a tally as they place finds into labelled pots.
- Estimates that there are thirty-one tally marks for wood.
- Counts the twenty-five tally marks for wood, ignoring the mark that is partially erased, and recognises his estimate was close but not the closest.
- Uses the teacher's grouping of tallies on the whiteboard, counts in fives and confirms the number of wooden artefacts found.
- Counts in fives and one more to find the total of metal artefacts.
- Uses the tally chart and counts in fives to find how many artefacts would be in the 'not bone' case.
- Suggest a criterion for sorting the finds into two roughly equal groups to display in the two available museum cases, e.g. metal/not metal.
- Use a block graph or pictogram where one symbol represents two to record the information about finds from garden soil.
- Compare the graphs of finds from the garden soil and finds from the beach sand.
- Pose and answer questions, e.g. 'What types of find were found only in the sand from the beach?', 'Were some types found only in the garden soil?'.
- Begin to group tallies in fives to make counting more efficient as he collects other data.
- Chooses to list addition facts he knows for different numbers.
- Works in an organised way finding pairs of numbers that add to 1, 2, 3…
- Records addition sentences using symbols '+' and '='.
- Understands addition can be done in any order.
- Checks he has all addition pairs for each number.
- Looks for pattern in results.
- Notices that the number 1 has two facts, number 2 has three facts…
- Predicts that for the number 9 there will be ten facts and for 10 eleven facts.
- Reorganise his calculations as a way of justifying his prediction, e.g.
0 + 3 = 3
1 + 2 = 3
2 + 1 = 3
3 + 0 = 3
- Test the prediction by listing the addition facts for 9 and 10.
- Find different pairs of numbers to make these subtraction sentences correct:
☐ - ☐ = 1, ☐ - ☐ = 2
- Talk about why there are an infinite number of ways to complete the subtraction sentences.
- Shows how to pay for various items using as few coins as possible.
- Works in an organised way selecting the largest coin values first.
- Places coins next to items.
- Records by tracing around the coins, accidentally moving a 1p coin into the wrong group.
- Records coins that total 72 pence but not the fewest possible.
- Review and check his work.
- Investigate a related situation:
- find all the ways to pay exactly 10 pence and talk about how you know you have them all
- find ways to pay each amount from 1p to 50p using as few coins as possible and talk about amounts that can be made with just one coin, with just two coins.
- Identifies the repeat in patterns made by the teacher.
- Continues those patterns, mostly accurately.
- Explains how he knows what comes next.
- Check patterns to identify a missing shape in one of them.
- Continue a sequence of shapes made with, e.g. linking cubes.
- Predict how many cubes there will be in the next shape, the fifth shape…and give his reasons.
- Make the shapes and check predictions.
- Create a similar sequence for a partner to continue.
What the teacher knows about Daniel's attainment in Ma1, Using and applying mathematics
Daniel identifies his own starting points for solving a range of problems that relate to numbers and data particularly. He uses apparatus such as coins and cubes to represent the situations he investigates and problems he solves. He is beginning to approach some problems systematically, for example starting with the largest coin denominations when finding ways to pay different amounts using as few coins as possible. He applies the handling-data skills developed in one survey when undertaking another. For example, having used a tally to record votes for favourite crisp flavours he suggested a tally for recording different types of archaeological find.
Daniel discusses his work using everyday and mathematical language. For example, he describes how to use known doubles facts to find half of an even number. He talks about sorting, counting, tallying, listing and drawing a table or graph when handling data. Daniel is beginning to represent his work using symbols and diagrams. He uses symbols to record addition and subtraction. He uses diagrams, tables and graphs to record and retrieve information.
Daniel tests simple statements about numbers to see if they are true or false. When testing ‘There are five odd numbers between 10 and 20' he listed 11, 13, 15, 17 and 19 on his whiteboard. In this instance he explained, 'There are five. I worked them out. Ten is even. Then there's an odd number, then even, then odd… The odd numbers are all numbers that aren't multiples of two.' Daniel predicts what comes next in a simple number or shape pattern. For example, he continued an ascending sequence of multiples of five and then created a descending sequence by reversing the order.
Summarising Daniel's attainment in Ma1, Using and applying mathematics
In each assessment focus Daniel's teacher decides that his attainment is best described as level 2. Reading the complete level descriptions for levels 1 and 2 confirms that the level 2 description is the best fit overall.
Although Daniel meets all of the assessment criteria for level 2, much of his attainment relates to using and applying number and handling data. He has yet to demonstrate his attainment as fully in the contexts of shape and space or measures. His teacher refines her judgement to secure level 2.
To make further progress within level 2 Daniel should solve problems in a wider range of contexts, particularly contexts relating to shape and space. For example, using construction materials, he might identify which collection of linking shapes could be used to create a given 3-D shape. Investigating 2-D shapes, he might find different ways to fold a square into quarters and investigate the different shapes that can be made by fitting all four quarters of one square together edge to edge. He should also solve measurement problems such as comparing lengths that cannot be placed together for direct comparison. | http://webarchive.nationalarchives.gov.uk/20110809091832/http:/www.teachingandlearningresources.org.uk/collection/30920/node/22378 | 13 |
13 | Scientists have ruled out the possibility that methane is delivered to Mars by meteorites, raising fresh hopes that the gas might be generated by life on the red planet, in research published tomorrow in Earth and Planetary Science Letters.
Methane has a short lifetime of just a few hundred years on Mars because it is constantly being depleted by a chemical reaction in the planet's atmosphere, caused by sunlight. Scientists analysing data from telescopic observations and unmanned space missions have discovered that methane on Mars is being constantly replenished by an unknown source and they are keen to uncover how the levels of methane are being topped up.
Researchers had thought that meteorites might be responsible for Martian methane levels because when the rocks enter the planet's atmosphere they are subjected to intense heat, causing a chemical reaction that releases methane and other gases into the atmosphere.
However, the new study, by researchers from Imperial College London, shows that the volumes of methane that could be released by the meteorites entering Mars's atmosphere are too low to maintain the current atmospheric levels of methane. Previous studies have also ruled out the possibility that the methane is delivered through volcanic activity.
This leaves only two plausible theories to explain the gas's presence, according to the researchers behind today's findings. Either there are microorganisms living in the Martian soil that are producing methane gas as a by-product of their metabolic processes, or methane is being produced as a by-product of reactions between volcanic rock and water.
Co-author of the study, Dr Richard Court, Department of Earth Science and Engineering at Imperial College London, says:
"Our experiments are helping to solve the mystery of methane on Mars. Meteorites vaporising in the atmosphere are a proposed methane source but when we recreate their fiery entry in the laboratory we get only small amounts of the gas. For Mars, meteorites fail the methane test."
The team say their study will help NASA and ESA scientists who are planning a joint mission to the red planet in 2018 to search for the source of methane. The researchers say now that they have discovered that meteorites are not a source of Methane on Mars, ESA and NASA scientists can focus their attention on the two last remaining options.
Co-author, Professor Mark Sephton, Department of Earth Science and Engineering at Imperial College London, adds:
"This work is a big step forward. As Sherlock Holmes said, eliminate all other factors and the one that remains must be the truth. The list of possible sources of methane gas is getting smaller and excitingly, extraterrestrial life still remains an option. Ultimately the final test may have to be on Mars."
The team used a technique called Quantitive Pyrolysis-Fourier Transform Infrared Spectroscopy to reproduce the same searing conditions experienced by meteorites as they enter the Martian atmosphere. The team heated the meteorite fragments to 1000 degrees Celsius and measured the gases that were released using an infrared beam.
When quantities of gas released by the laboratory experiments were combined with published calculations of meteorite in-fall rates on Mars, the scientists calculated that only 10 kilograms of meteorite methane was produced each year, far below the 100 to 300 tonnes required to replenish methane levels in the Martian atmosphere.
More information: "Investigating the contribution of methane produced by ablating micrometeorites to the atmosphere of Mars," Earth and Planetary Science Letters journal, Richard W. Court, Mark A. Sephton
Source: Imperial College London (news : web)
Explore further: First Gagarin film turns Soviet idol into new Russian hero | http://phys.org/news179499648.html | 13 |
42 | Quantum Mechanics/Symmetry and Quantum Mechanics
The idea of symmetry plays a huge role in physics. We have already used symmetry arguments in the theory of relativity -- applying the principle of relativity to obtain the dispersion relation for relativistic matter waves is just such an argument. In this section we begin to explore how symmetry can be used to increase our understanding of quantum mechanics.
The wave function for a free particle of definite momentum P and energy E is given by

ψ(x, t) = exp[i(kx - ωt)] = exp[i(Px - Et)/ħ],

where the wavenumber and angular frequency are related to the momentum and energy by P = ħk and E = ħω.
For this wave function |ψ|²=1 everywhere, so the probability of finding the particle anywhere in space and time is uniform - just like the particle energy.
This contrasts with the probability distribution which arises if we assume a free particle to have the purely real wave function

ψ(x, t) = cos[(Px - Et)/ħ].
In this case the probability varies with position and time, which is inconsistent with a uniform probability distribution. This solution is less symmetric than the problem.
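To see the contrast concretely, here is a minimal numerical sketch (an illustration added to this text, not part of the original argument); the natural units ħ = P = E = 1, the fixed time t = 0.3 and the sampling grid are arbitrary choices made only for the demonstration.

```python
import numpy as np

# Arbitrary illustrative values in natural units: hbar = P = E = 1
hbar, P, E = 1.0, 1.0, 1.0
x = np.linspace(0.0, 10.0, 201)          # positions, sampled at one fixed time
t = 0.3
phase = (P * x - E * t) / hbar

psi_exp = np.exp(1j * phase)             # complex exponential wave function
psi_cos = np.cos(phase)                  # real cosine wave function

print(np.allclose(np.abs(psi_exp)**2, 1.0))   # True: uniform probability density

dens_cos = np.abs(psi_cos)**2
print(dens_cos.min(), dens_cos.max())         # ~0 and ~1: density varies with x
```

The complex exponential spreads the probability evenly over all positions, while the real cosine piles it up at its antinodes, which is the less symmetric outcome described above.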
Symmetry and Definiteness
Quantum mechanics is a probabilistic theory, in the sense that the predictions it makes tell us, for instance, the probability of finding a particle somewhere in space. If we know nothing about a particle's previous history, and if there are no physical constraints that would make it more likely for a particle to be at one point along the x axis than any other, then the probability distribution must be P(x)=constant.
This is an example of a symmetry argument. Expressed more formally, it states that if the above conditions apply, then the probability distribution ought to be subject to the condition P(x+d)=P(x) for any constant value of d. This can only be true if P(x) is a constant.
In the language of physics, if there is nothing that gives the particle a higher probability of being at one point rather than another, then the probability is independent of position and the system is invariant under displacement in the x direction.
The above argument doesn't suffice for quantum mechanics, since as we have learned, the fundamental quantity describing a particle is not the probability distribution, but the wave function ψ. Thus, the wave function rather than the probability distribution ought to be the quantity which is invariant under displacement.
This condition turns out to be too restrictive, because it implies that ψ is constant, but we know that a one-dimensional plane wave, which describes a particle with a uniform probability of being found anywhere along the x axis, has the form

ψ(x) = exp(ikx).
(For simplicity we temporarily ignore the time dependence.) If we make the substitution x→x+d in a plane wave, we get
The wave function is thus technically not invariant under displacement, in that the displaced wave function is multiplied by the factor exp(ikd). However, the probability distribution of the displaced wave function still equals one everywhere, so there is no change in what we observe.
Thus, in determining invariance under displacement, we are allowed to ignore changes in the wave function which consist only of multiplying it by a complex constant with an absolute value of one, i.e., one of the form exp(iα) with α real. Such a multiplicative constant is called a phase factor, and α the phase.
It is easy to convince oneself by trial and error or by more sophisticated means that the only form of wave function which satisfies this condition is ψ(x) = A exp(ikx),
where A is a (possibly complex) constant. This is just in the form of a complex exponential plane wave with wavenumber k. Thus, not only is the complex exponential wave function invariant under displacements in the manner defined above, it is the only wave function which is invariant to displacements. Furthermore, the phase factor which appears for a displacement d of such a plane wave takes the form exp(ikd), where k is the wavenumber of the plane wave.
As an example, let us see if a wave packet is invariant under displacement. Let's define a wave packet consisting of two plane waves: ψ(x) = exp(ik₁x) + exp(ik₂x), with k₁ ≠ k₂.
Making the substitution x→x+d in this case results in ψ(x + d) = exp(ik₁d) exp(ik₁x) + exp(ik₂d) exp(ik₂x), which cannot be written as the original wave packet multiplied by a single phase factor exp(iα)
no matter what value α has.
The impossibility of writing the displaced wavefunction as the original multiplied by a phase factor lends plausibility to the assertion that a single complex exponential is the only possible form of the wave function that is invariant under displacement.
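To make the displacement argument concrete, here is a small numerical sketch in Python with NumPy (the values of k, k1, k2 and d are arbitrary choices, not taken from the text). It checks that displacing a single plane wave only multiplies it by the phase factor exp(ikd), leaving |ψ|² unchanged, while the displaced two-wave packet has a visibly different probability distribution, so no single phase factor can relate the two.

    import numpy as np

    x = np.linspace(-10.0, 10.0, 2001)   # sample points along the axis
    d = 1.7                              # displacement (arbitrary)
    k = 2.0                              # wavenumber of the single plane wave (arbitrary)

    # Single plane wave: shifting x by d only introduces the phase factor exp(i k d)
    psi = np.exp(1j * k * x)
    psi_shifted = np.exp(1j * k * (x + d))
    print(np.allclose(psi_shifted / psi, np.exp(1j * k * d)))        # True
    print(np.allclose(np.abs(psi_shifted)**2, np.abs(psi)**2))       # True

    # Two-wave packet: the displaced wave function is not the original times a
    # single phase factor -- even its probability distribution changes shape.
    k1, k2 = 2.0, 3.0
    packet = np.exp(1j * k1 * x) + np.exp(1j * k2 * x)
    packet_shifted = np.exp(1j * k1 * (x + d)) + np.exp(1j * k2 * (x + d))
    print(np.allclose(np.abs(packet_shifted)**2, np.abs(packet)**2))  # False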
Notice that the wave packet does not have definite wavenumber, and hence, momentum. In particular, non-zero amplitudes exist for the associated particle to have either momentum ℏk₁ or ℏk₂. This makes sense from the point of view of the uncertainty principle -- for a single plane wave the uncertainty in position is complete and the uncertainty in momentum is zero. For a wave packet the uncertainty in position is reduced and the uncertainty in the momentum is non-zero.
However, we see that this idea can be carried further: A definite value of momentum must be associated with a completely indefinite probability distribution in position. This corresponds to a wave function which has the form of a complex exponential plane wave.
However, such a plane wave is invariant under a displacement d, except for the multiplicative phase factor exp(ikd), which has no physical consequences since it disappears when the probability distribution is obtained. Thus, we see that invariance under displacement of the wave function and a definite value of the momentum are linked, in that each implies the other:
- Invariance under displacement ⇔ Definite momentum
We can compare this with classical mechanics, where we saw that if the energy is invariant under displacement then momentum is conserved, and similarly for generalised coordinates.
The above equivalence can also be extended to arbitrary coordinates.
In particular, since the time dependence of a complex exponential plane wave is given by the factor exp(-iωt), where ω = E/ℏ,
we have by analogy with the above argument that
- Invariance under time shift ⇔ Definite energy
Thus, invariance of the wave function under a displacement in time implies a definite value of the energy of the associated particle.
Another useful instance of this is
- Invariance under rotation ⇔ Definite angular momentum
since the same connection between rotation and angular momentum holds as in classical mechanics.
In the previous chapter we assumed that the frequency (and hence the energy) was definite and constant for a particle passing through a region of variable potential energy. We now see that this assumption is justified only if the potential energy doesn't change with time. This is because a time-varying potential energy eliminates the possibility of invariance under time shift.
We already know that definite values of certain pairs of variables cannot be obtained simultaneously in quantum mechanics. For instance, the indefiniteness of position and momentum are related by the uncertainty principle -- a definite value of position implies an indefinite value of the momentum and vice versa. If definite values of two variables can be simultaneously obtained, then we call these variables compatible. If not, the variables are incompatible.
If the wave function of a particle is invariant under the transformations associated with both variables, then the variables are compatible. For instance, the complex exponential plane wave associated with a free particle is invariant under displacements in both space and time. Since momentum is associated with space displacements and energy with time displacements, the momentum and energy are compatible variables for a free particle.
Compatibility and Conservation
Variables which are compatible with the energy have a special status. The wave function which corresponds to a definite value of such a variable is invariant to displacements in time. Thus, if the wave function is also invariant to some other transformation at a particular time, it is invariant to that transformation for all time. The variable associated with that transformation thus retains its definite value for all time -- i. e., it is conserved.
For example, the complex exponential plane wave implies a definite value of energy, and is thus invariant under time displacements. At time t = 0, it is also invariant under displacements, which corresponds to the fact that it represents a particle with a known value of momentum. However, since momentum and energy are compatible for a free particle, the wave function will represent the same value of momentum at all other times.
In other words, if the momentum is definite at t=0, it will be definite at all later times, and furthermore will have the same value. This is how the conservation of momentum (and by extension, the conservation of any other variable compatible with energy) is expressed in quantum mechanics.
We will see later how to describe this in terms of operators. | http://en.m.wikibooks.org/wiki/Quantum_Mechanics/Symmetry_and_Quantum_Mechanics | 13 |
16 | From Wikipedia, the free encyclopedia
In genetics, an allele (pronounced al-eel or al-e-ul) is any one of a number of viable DNA codings occupying a given locus (position) on a chromosome. Usually alleles are DNA (deoxyribonucleic acid) sequences that code for a gene, but sometimes the term is used to refer to a non-gene sequence. An individual's genotype for that gene is the set of alleles it happens to possess. In a diploid organism, one that has two copies of each chromosome, two alleles make up the individual's genotype. The word came from Greek αλληλος = "each other".
An example is the gene for blossom color in many species of flower—a single gene controls the color of the petals, but there may be several different versions (or alleles) of the gene. One version might result in red petals, while another might result in white petals. The resulting color of an individual flower will depend on which two alleles it possesses for the gene and how the two interact.
Diploid organisms, for example, humans, have paired homologous chromosomes in their somatic cells, and these contain two copies of each gene. An organism in which the two copies of the gene are identical — that is, have the same allele — is called homozygous for that gene. An organism which has two different alleles of the gene is called heterozygous. Phenotypes (the expressed characteristics) associated with a certain allele can sometimes be dominant or recessive, but often they are neither. A dominant phenotype will be expressed when at least one allele of its associated type is present, whereas a recessive phenotype will only be expressed when both alleles are of its associated type.
However, there are exceptions to the way heterozygotes express themselves in the phenotype. One exception is incomplete dominance (sometimes called blending inheritance) when alleles blend their traits in the phenotype. An example of this would be seen if, when crossing Antirrhinums — flowers with incompletely dominant "red" and "white" alleles for petal color — the resulting offspring had pink petals. Another exception is co-dominance, where both alleles are active and both traits are expressed at the same time; for example, both red and white petals in the same bloom or red and white flowers on the same plant. Codominance is also apparent in human blood types. A person with one "A" blood type allele and one "B" blood type allele would have a blood type of "AB".
(Note that with the advent of neutral genetic markers, the term 'allele' is now often used to refer to DNA sequence variants in non-functional, or junk DNA. For example, allele frequency tables are often presented for genetic markers, such as the DYS markers.) Also there are many different types of alleles.
There are two equations for the frequency of two alleles of a given gene (see Hardy-Weinberg principle).
Equation 1: p + q = 1,
Equation 2: p² + 2pq + q² = 1
where p is the frequency of one allele and q is the frequency of the other allele. Under appropriate conditions, subject to numerous limitations regarding the applicability of the Hardy-Weinberg principle, p² is the population fraction that is homozygous for the p allele, 2pq is the frequency of heterozygotes and q² is the population fraction that is homozygous for the q allele.
Natural selection can act on p and q in Equation 1, and obviously affect the frequency of alleles seen in Equation 2.
Equation 2 is a consequence of Equation 1, obtained by squaring both sides and applying the binomial theorem to the left-hand side. Conversely, p² + 2pq + q² = 1 implies p + q = 1, since p and q are positive numbers.
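As a quick illustration of how these equations are used, here is a short Python sketch (the allele frequency p = 0.7 is an arbitrary illustrative value, not a figure from the text):

    p = 0.7                   # frequency of one allele (assumed for illustration)
    q = 1.0 - p               # Equation 1: p + q = 1

    homozygous_p = p ** 2     # expected fraction homozygous for the p allele
    heterozygous = 2 * p * q  # expected fraction of heterozygotes
    homozygous_q = q ** 2     # expected fraction homozygous for the q allele

    print(homozygous_p, heterozygous, homozygous_q)     # approximately 0.49, 0.42, 0.09
    print(homozygous_p + heterozygous + homozygous_q)   # sums to 1 (Equation 2)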
The following equation (commonly termed the Lee equation) can be used to calculate the number of possible genotypes in a diploid organism for a specific gene with a given number of alleles.
G = 0.5a^2 + 0.5a
where 'a' is the number of different alleles for the gene being dealt with and 'G' is the number of possible genotypes. For example, the human ABO blood group gene has three alleles; A (for blood group A), B (for blood group B) and i (for blood group O). As such, (using the equation) the number of possible genotypes a human may have with respect to the ABO gene is 6 (AA, Ai, AB, BB, Bi, ii). Take care when using the equation, though, as it does not in any way calculate the number of possible phenotypes. Such an equation would be quite impossible as the number of possible phenotypes varies amongst different genes and their alleles. For example, in a diploid heterozygote some genotypes may show complete dominance, incomplete dominance etc., depending on the gene involved.
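A minimal Python sketch of the genotype-count formula above, checked against the ABO example (the function name is ours, introduced only for illustration):

    def genotype_count(a):
        """Number of possible diploid genotypes for a gene with 'a' alleles."""
        return a * (a + 1) // 2        # equal to 0.5*a^2 + 0.5*a for whole numbers

    print(genotype_count(3))           # ABO gene (alleles A, B, i) -> 6
    print(genotype_count(2))           # two alleles -> 3 genotypes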
See also
- Mendelian error
- Mendelian inheritance
- Genealogical DNA test
- Punnett square | http://www.bazpedia.com/en/a/l/l/Allele.html | 13 |
14 | Loosely speaking, a black hole is a region of space that has so much mass concentrated in it that there is no way for a nearby object to escape its gravitational pull. Since our best theory of gravity at the moment is Einstein's general theory of relativity, we have to delve into some results of this theory to understand black holes in detail.
Imagine an object with such an enormous concentration of mass in such a small radius that its escape velocity was greater than the velocity of light. Then, since nothing can go faster than light, nothing can escape the object's gravitational field. Even a beam of light would be pulled back by gravity and would be unable to escape.
In general relativity, gravity is a manifestation of the curvature of spacetime. Massive objects distort space and time, so that the usual rules of geometry don't apply anymore. Near a black hole, this distortion of space is extremely severe and causes black holes to have some very strange properties. In particular, a black hole has something called an 'event horizon.' This is a spherical surface that marks the boundary of the black hole. You can pass in through the horizon, but you can't get back out. In fact, once you've crossed the horizon, you're doomed to move inexorably closer and closer to the 'singularity' at the center of the black hole.
You can think of the horizon as the place where the escape velocity equals the velocity of light. Outside of the horizon, the escape velocity is less than the speed of light, so if you fire your rockets hard enough, you can give yourself enough energy to get away. But if you find yourself inside the horizon, then no matter how powerful your rockets are, you can't escape.
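As a rough numerical sketch of that statement: setting the Newtonian escape velocity equal to the speed of light gives a radius r = 2GM/c², which coincides with the Schwarzschild radius of general relativity. The short Python calculation below uses standard constants; the masses chosen are only illustrative.

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_sun = 1.989e30     # mass of the Sun, kg

    def horizon_radius(mass_kg):
        """Radius at which the escape velocity reaches the speed of light."""
        return 2.0 * G * mass_kg / c ** 2

    print(horizon_radius(M_sun))          # about 3 km for one solar mass
    print(horizon_radius(1.0e6 * M_sun))  # about 3 million km for a million solar masses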
The horizon has some very strange geometrical properties. To an observer who is sitting still somewhere far away from the black hole, the horizon seems to be a nice, static, unmoving spherical surface. But once you get close to the horizon, you realize that it has a very large velocity. In fact, it is moving outward at the speed of light! That explains why it is easy to cross the horizon in the inward direction, but impossible to get back out. Since the horizon is moving out at the speed of light, in order to escape back across it, you would have to travel faster than light. You can't go faster than light, and so you can't escape from the black hole.
You can't see a black hole directly, of course, since light can't get past the horizon. That means that we have to rely on indirect evidence that black holes exist.
Suppose you have found a region of space where you think there might be a black hole. How can you check whether there is one or not? The first thing you'd like to do is measure how much mass there is in that region. If you've found a large mass concentrated in a small volume, and if the mass is dark, then it's a good guess that there's a black hole there.
According to a recent review by Kormendy and Richstone (to appear in the 1995 edition of "Annual Reviews of Astronomy and Astrophysics"), eight galaxies have been observed to contain such massive dark objects in their centers. The masses of the cores of these galaxies range from one million to several billion times the mass of the Sun. The mass is measured by observing the speed with which stars and gas orbit around the center of the galaxy: the faster the orbital speeds, the stronger the gravitational force required to hold the stars and gas in their orbits.
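The mass estimate works, to a first approximation, like the following Python sketch, which applies the Newtonian relation M ≈ v²r/G for material in a roughly circular orbit (the speed and radius used here are illustrative values, not the measurements reported in any particular study):

    G = 6.674e-11            # m^3 kg^-1 s^-2
    M_sun = 1.989e30         # kg
    light_year = 9.461e15    # metres

    v = 1.0e6                # assumed orbital speed: 1000 km/s
    r = 0.5 * light_year     # assumed orbital radius: half a light-year

    enclosed_mass = v ** 2 * r / G
    print(enclosed_mass / M_sun)   # a few times 10^7 solar masses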
Two very recent discoveries have been made that strongly support the hypothesis that these systems do indeed contain black holes. First, a nearby active galaxy was found to have a "water maser" system (a very powerful source of microwave radiation) near its nucleus. Using the technique of very-long-baseline interferometry, a group of researchers was able to map the velocity distribution of the gas with very fine resolution. In fact, they were able to measure the velocity within less than half a light-year of the center of the galaxy. From this measurement they can conclude that the massive object at the center of this galaxy is less than half a light-year in radius. It is hard to imagine anything other than a black hole that could have so much mass concentrated in such a small volume. (This result was reported by Miyoshi et al. in the 12 January 1995 issue of Nature, vol. 373, p. 127.)
If you have any questions please email me at [email protected]
17 | Website Detail Page
published by the Physics Education Technology Project
Available Languages: English, Spanish
This is an interactive simulation created to help beginners differentiate velocity and acceleration vectors. The user can move a ball with the mouse or let the simulation move the ball in four modes of motion (two types of linear, simple harmonic, and circular). Two vectors are displayed -- one green and one blue. As the motion of the ball changes, the vectors also change. Which color represents velocity and which acceleration?
This item is part of a larger and growing collection of resources developed by the Physics Education Technology project (PhET), each designed to implement principles of physics education research.
Please note that this resource requires Java Applet Plug-in.
Editor's Note: This simulation was designed with improvements based on research of student interaction with the PhET resource "Ladybug Revolution". The authors added two new features for the beginning learner: linear acceleration and harmonic motion. To supplement the simulation, we recommend the Physics Classroom tutorial "Vectors and Direction" and the teacher-created lesson, "Vectors Phet Lab" -- see links in Related Materials.
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
11. Common Themes
Common Core State Standards for Mathematics Alignments
High School — Number and Quantity (9-12)
Vector and Matrix Quantities (9-12)
The Physics Classroom: Vectors and Direction (Author: Tom Henderson)
As instructors, we may forget that certain representations (like vector arrows) seem like a foreign language to beginning students. This thoughtfully-crafted tutorial introduces vector diagrams in kid-friendly language and extends the learning to interactive practice problems with answers provided.
This resource is part of a Physics Front Topical Unit.
Topic: Kinematics: The Physics of Motion
Unit Title: Vectors
This very simple simulation can help beginners understand what vector arrows represent. It was designed by the PhET team to target specific areas of difficulty in student understanding of vectors. Learners can move a ball with the mouse or let the simulation control the ball in four modes of motion (two types of linear, simple harmonic, and circular). Two vectors are displayed -- one green and one blue. Which color represents velocity and which acceleration?
PhET Simulation: Motion in 2D:
Is Supplemented By The Physics Classroom: Vectors - Fundamentals and Operations
A self-directed tutorial for deeper exploration of vector direction, with problem sets and related animation. (Relation by Caroline Hall.)
Is Required By PhET Teacher Activities: Vectors Simulations Lab
An editor-recommended virtual lab, authored by a high school teacher specifically for use with the Motion in 2D simulation. (Relation by Caroline Hall.)
| http://www.compadre.org/precollege/items/detail.cfm?ID=6095 | 13
The Pythagorean theorem states that the square of the hypotenuse is the sum of the squares of the other two sides, that is, c² = a² + b², where c is the length of the hypotenuse and a and b are the lengths of the two legs.
This theorem is useful to determine one of the three sides of a right triangle if you know the other two. For instance, if two legs are a = 5, and b = 12, then you can determine the hypotenuse c by squaring the lengths of the two legs (25 and 144), adding the two squares together (169), then taking the square root to get the value of c, namely, 13.
Likewise, if you know the hypotenuse and one leg, then you can determine the other. For instance, if the hypotenuse is c = 41, and one leg is a = 9, then you can determine the other leg b as follows. Square the hypotenuse and the first leg (1681 and 81), subtract the square of the first leg from the square of the hypotenuse (1600), then take the square root to get the value of the other leg b, namely 40.
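The two worked examples above can be reproduced with a few lines of Python:

    import math

    def hypotenuse(a, b):
        return math.sqrt(a ** 2 + b ** 2)

    def missing_leg(c, a):
        return math.sqrt(c ** 2 - a ** 2)

    print(hypotenuse(5, 12))    # 13.0
    print(missing_leg(41, 9))   # 40.0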
Proof: Start with the right triangle ABC with right angle at C. Draw a square on the hypotenuse AB, and translate the original triangle ABC along this square to get a congruent triangle A'B'C' so that its hypotenuse A'B' is the other side of the square (but the triangle A'B'C' lies inside the square). Draw perpendiculars A'E and B'F from the points A' and B' down to the line BC. Draw a line AG to complete the square ACEG.
Note that ACEG is a square on the leg AC of the original triangle. Also, the square EFB'C' has side B'C' which is equal to BC, so it equals a square on the leg BC. Thus, what we need to show is that the square ABB'A' is equal to the sum of the squares ACEG and EFB'C'.
But that's pretty easy by cutting and pasting. Start with the big square ABB'A'. Translate the triangle A'B'C' back across the square to triangle ABC, and translate the triangle AA'G across the square to the congruent triangle BB'F. Paste the pieces back together, and you see you've filled up the squares ACEG and EFB'C'. Therefore, ABB'A' = ACEG + EFB'C', as required.
In fact, as Euclid showed, each of these two conditions implies the other. That is to say, if corresponding angles are equal, then the three ratios are equal (Prop. VI.4), but if the three ratios are equal, then corresponding angles are equal (Prop. VI.5). Thus, it is enough to know either that their corresponding angles are equal or that their sides are proportional in order to conclude that they are similar triangles.
Typically, the smaller of the two similar triangles is part of the larger. For example, in the diagram to the left, triangle AEF is part of the triangle ABC, and they share the angle A. When this happens, the opposite sides, namely BC and EF, are parallel lines.
This situation frequently occurs in trigonometry applications, and for many of those, one of the three angles A, B, or C is a right angle.
On to Angle measurement
David E. Joyce
Department of Mathematics and Computer Science
Worcester, MA 01610
Dave's Short Trig Course is located at http://www.clarku.edu/~djoyce/trig | http://www.clarku.edu/~djoyce/trig/geometry.html | 13 |
20 | Apollo 17 Mission
Science Experiments - Lunar Sounder
Right: Diagram of the Lunar Sounder Experiment. The VHF and HF antennas were used by the experiment. CSAR is the experiment's electronics box in the Service Module and the Optical Recorder was used to record the experiment's results. The high-gain antenna was used to communicate with Earth and was not a part of the Lunar Sounder Experiment.
The Apollo Lunar Sounder Experiment was performed on Apollo 17. This experiment used radar to study the Moon's surface and interior. Radar waves with wavelengths between 2 and 60 meters were transmitted through a series of antennas near the back of the Service Module. After the waves were reflected by the Moon, they were received using the same antennas and the data was recorded on film for analysis on Earth. The primary purpose of this experiment was to "see" into the upper 2 kilometers of the Moon's crust in a manner somewhat analogous to using seismic waves to study the internal structure of the Moon. This was possible because very long radar wavelengths were used and because the Moon is very dry, which allowed the radar waves to penetrate much deeper into the Moon than would have been possible if water were present in lunar rocks. (A radar experiment on the space shuttle has been similarly used to map ancient river valleys beneath the Sahara Desert.) This experiment also provided very precise information about the Moon's topography. In addition to studying the Moon, the experiment also measured radio emissions from the Milky Way Galaxy.
This experiment revealed structures beneath the surface in both Mare Crisium and Mare Serenitatis. These layers were observed in several different parts of these basins and are therefore believed to be widespread features. Based on the properties of the reflected radar waves, the structures are believed to be layering within the basalt that fills both of these mare basins. In Mare Serenitatis, layers were detected at depths of 0.9 and 1.6 kilometers below the surface. In Mare Crisium, a layer was detected at a depth of 1.4 kilometers below the surface. The bottom of the mare basalts were apparently not detected by this experiment. However, in Mare Crisium the Lunar Sounder Experiment results were combined with other observations to estimate a total basalt thickness of between 2.4 and 3.4 kilometers.
The Lunar Sounder Experiment also contributed to our understanding of wrinkle ridges on the Moon. These long, low ridges are found in many of the lunar maria. Most lunar geologists believe that these ridges formed when the Moon's surface was deformed by motion along faults ("moonquakes") in the Moon's crust more than 3 billion years ago. The weight of several kilometers of mare basalt in these areas caused the Moon's surface to sag somewhat, and this motion caused the surface to buckle in some places, forming the wrinkle ridges. However, other scientists suggested that these ridges are volcanic features, formed by the flow of magma either on the Moon's surface or within the crust. The Lunar Sounder Experiment studied several wrinkle ridges in southern Mare Serenitatis in detail, providing information about both the topography of these ridges and about structures in the crust below these ridges. These results support the idea that wrinkle ridges formed primarily by motions along faults. | http://www.lpi.usra.edu/lunar/missions/apollo/apollo_17/experiments/lse/ | 13 |
34 | A quantity having two components; a magnitude component and a direction component. In n-dimensional Euclidean space, a vector is representable by an ordered tuple (a1, a2, a3, ... an) whose elements are called the components of the vector. In this case the magnitude of the vector is given by the square root of the sum of the squares of the components of the vector.
The vector product (also called cross product) of two vectors u and v, denoted u × v and called “u cross v,” is a vector w whose magnitude (length) is the product of the magnitudes of u and v and the sine of the angle between them, and which points in a direction perpendicular to the plane containing u and v so as to form a right-handed system, as in the figure.
Note that the directedness of the vector product implies that it is not commutative.
Cf. scalar product.
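A quick numerical check of these definitions, using NumPy (the particular vectors are arbitrary): the cross product of two vectors is perpendicular to both, and reversing the order flips its sign, so the operation is not commutative.

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 0.0, -1.0])

    print(np.sqrt(np.sum(u ** 2)))            # magnitude of u: sqrt(1 + 4 + 9)

    w = np.cross(u, v)                        # u x v
    print(w)                                  # [-2. 13. -8.]
    print(np.allclose(np.cross(v, u), -w))    # True: v x u = -(u x v)
    print(np.dot(w, u), np.dot(w, v))         # both 0: w is perpendicular to u and v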
A structure consisting of two kinds of elements called scalars and vectors, with operations of addition of pairs of scalars or pairs of vectors, and multiplication of pairs of scalars or a scalar and a vector. The vectors form an Abelian group under addition, and the scalars form a field under their operations, and the vector space is said to be over that field.
If the scalar field is the real numbers or the complex numbers and the vectors are in n-dimensional real or complex space, then the space is called an n-dimensional real or complex vector space accordingly. Multiplication of vectors by scalars is associative with scalar multiplication, and distributive over both scalar addition and vector addition. Symbolically, for scalars a, b, and vectors u, v, we have a(bu) = (ab)u, (a + b)u = au + bu, and a(u + v) = au + av.
Vector spaces are usually denoted by V, and it is conventional to write the scalar on the left of a scalar multiplication. When there is any possibility of confusion, the vectors of a vector space are usually specially marked, either by drawing a (right pointing) arrow over them or by writing them in bold face.
Geometry: In a plane figure, a point which is a common end-point for two or more lines or curves.
Graph Theory: One of two kinds of entities in a graph.
The set of vertices of some graph. For a graph G, the vertex set of G is denoted by V(G), or, if there is no ambiguity as to the graph in question, simply by V.
Von Neumann Hierarchy
A construction in set theory giving rise to the class of ordinals. One begins with the empty set, and successor sets are obtained by forming the union of the current set and the set containing the current set; at limit stages, take unions. See the article for a detailed exposition.
weakly connected digraph
A directed graph every two vertices of which are connected by a semipath. Also called weak for short. A directed graph is weak if and only if it has a spanning semiwalk.
Cf. unilaterally connected digraph, strongly connected digraph, connected graph.
In set theory, a collection is well-founded if every subcollection has a least member under the membership relation. For example, the set of natural numbers is well-founded. In ZFC, the foundation axiom asserts this property of all sets. A set which is not well-founded is sometimes called a hyperset.
A set S with a linear order is called well-ordered if every non-empty subset T of S has a least element under the ordering relation.
Cf. well-ordering principle.
The assertion that every set can be well-ordered. Equivalent to the Axiom of Choice.
An element of the set consisting of the number 0 together with the counting numbers, 1, 2, 3, etc.; i.e., the set N of natural numbers with 0 included. Sometimes the term “whole” is meant to refer to negative integers also; the intended meaning should be clear from the context.
Zeno’s Paradox of the Arrow
One of the famous paradoxes of Zeno of Elea. Consider an arrow in flight. At each moment of time the arrow may be considered to occupy a specific region in space. This poses the following problem: if the arrow is in a particular point of space in any given moment, then how is it moving? And if it is not moving, how does it get from point to point in the passage of time?
Zermelo Fraenkel set theory
The standard set theory in which most mathematics is formalized. Its axioms include the pairing axiom, the power set axiom, the axiom of infinity, the axiom of extensionality, the axiom of replacement, the separation axiom, the union axiom, and the foundation axiom. Abbreviated ZF. When the axiom of choice is assumed, this theory is abbreviated ZFC.
The ring with only one element, its additive identity.
See Zermelo Fraenkel set theory.
Zermelo Fraenkel set theory with the axiom of choice. | http://www.mathacademy.com/pr/prime/browse.asp?LT=L&ANCHOR=vector000000000000000000000000&TBM=Y&TAL=&TAN=&TBI=&TCA=&TCS=&TDI=&TEC=&TFO=Y&TGE=&TGR=&THI=&TNT=&TPH=&TST=&TTO=&TTR=&TAD=&LEV=B | 13 |
51 | Exoplanets: How the Milky Way is Surprising Scientists
Three prominent researchers discuss how recent findings from the Kepler mission are deepening our knowledge of planets beyond our solar system, as well as redefining the boundaries where life could exist.
Launched by NASA in 2009, the Kepler spacecraft uses a photometer to look beyond our own solar system at a selected slice of the Milky Way. Kepler’s mission:
• Determine the abundance of terrestrial and larger planets in or near the habitable zone of a wide variety of stars;
• Determine the distribution of sizes and shapes of the orbits of these planets;
• Estimate how many planets there are in multiple-star systems;
• Determine the variety of orbit sizes and planet reflectivities, sizes, masses and densities of short-period giant planets;
• Identify additional members of each discovered planetary system using other techniques;
• Determine the properties of those stars that harbor planetary systems.
EARLIER THIS YEAR, astronomers announced that, beyond our solar system, there are hundreds of possible planets in a small region of the Milky Way Galaxy. These potential planets range from gaseous planets much larger than Jupiter to suspected rocky planets a few times more massive than Earth. As of September 13, researchers had confirmed 20 of these 1,235 candidates are actual planets.
This trove of information comes from Kepler, a spacecraft launched by NASA in 2009. Kepler is NASA’s first mission capable of finding Earth-size planets orbiting other stars, and it’s giving astronomers their first broad overview of the structure and diversity of planetary systems in the Milky Way. Most recently and dramatically, this includes discovering a planet that actually circles two stars.
Earlier this summer, The Kavli Foundation spoke with three prominent researchers in the field about how Kepler’s discoveries are already helping to change their thinking about planets in the Milky Way. The participants:
- JACK J. LISSAUER, a space scientist in the Space Science and Astrobiology Division at NASA’s Ames Research Center in Northern California, and a co-investigator on the Kepler space telescope mission;
- GEOFFREY W. MARCY, a professor of astronomy at the University of California, Director of U.C. Berkeley's Center for Integrative Planetary Science, and a co-investigator on the Kepler space telescope mission; and
- SARA SEAGER, a professor of Physics and the Ellen Swallow Richards Professor of Planetary Science at the Massachusetts Institute of Technology. Seager also is a faculty member at the MIT Kavli Institute for Astrophysics and Space Research (MKI).
The following is an edited transcript of the teleconference on the symposium.
THE KAVLI FOUNDATION (TKF): Earlier this year, NASA announced that Kepler had discovered hundreds more planet candidates in the small region of the Milky Way it is observing. That brings the total number of exoplanet candidates that Kepler has identified to date to 1,235. Ongoing studies, meanwhile, are revealing a wide variety of planetary arrangements. How is Kepler changing the way you think about the diversity of extrasolar systems?
JACK J. LISSAUER: In a good fraction of extrasolar systems, we are seeing planets clustered close in. This is something that Geoff was finding with giant planets, Jupiter-mass planets, beginning a decade ago. But we’re now seeing this for smaller planets as well. We’re also seeing that a good fraction of planetary systems tend to be fairly flat like our own solar system. And, there are some systems, like Kepler 11, where there are pretty big planets spaced quite close to each other.
GEOFFREY W. MARCY: The diversity of planetary systems is extraordinary. We had observed unexpected types of planets prior to Kepler, most surprisingly Jupiter-sized planets orbiting very close to their host stars and numerous planets that were in non-circular orbits (eccentric, elongated orbits). In some systems the large planets orbit near the star and the small planets orbit farther out, unlike our solar system that has this neat architecture of small planets in circular orbits closer in and the large planets farther out. So the backdrop for Kepler consisted of the unexpected from previous surveys.
The most dramatic result from Kepler so far comes in two forms: one is the discovery that there are more small planets than large planets in our Milky Way Galaxy. There are almost certainly way more of the smaller planets, almost down to the size of the Earth, than planets the size of Neptune, Saturn or Jupiter. That’s an extraordinary result. It doesn’t speak to diversity, but it certainly carries the profound implication that our Milky Way galaxy has more of the smaller planets that are mini-Neptunes and nearly Earth-size.
The other remarkable discovery is that approximately 120 Kepler stars have been found with two or more planets that transit the star, which indicates that in many cases planetary systems have planets that reside in the same flat plane. This flat structure is exactly what we see in our own solar system. That’s a sort of anti-diversity, if you will, in the sense that flattened planetary systems were expected both from our Solar system and from the flat protoplanetary disks within which planets form. We have learned that planets must have all formed in some kind of flattened disc of gas and dust around young stars.
"There are almost certainly way more of the smaller planets [in the Milky Way Galaxy], almost down to the size of the Earth, than planets the size of Neptune, Saturn or Jupiter. That’s an extraordinary result." — Geoffrey W. Marcy
SARA SEAGER: In my mind, the most significant insight from Kepler so far is that smaller planets are more common than their larger counterparts. One of my favorites is Kepler 10b, which is really quite an extreme planet because it is so very close to its host star, with a less than one-day period orbit. We additionally call Kepler 10b a super-Earth, a planet that is more massive than Earth but is still expected to be predominantly rocky. We think the planet is heated to well over 2,500 degrees Kelvin, and this means that the planet should have liquid lava – not created by volcanoes but just created from the star heating that surface to very hot temperatures, hot enough to melt rock.
TKF: When we think of the habitable zone in a planetary system – the orbital distance from a star where liquid water on a planet could exist – we only have one example to go on, our own solar system. How are the results from Kepler, as well as other exoplanet studies, broadening our perspective?
SEAGER: The diversity of exoplanets has really forced us to reconsider what the habitable zone really is. For example, some of these super-Earths are massive enough that they could retain a different atmosphere than we have on Earth. These super-Earths may hold on to the light gases, hydrogen or hydrogen and helium. In this case, if they have a massive atmosphere they could have a massive greenhouse effect. This could actually increase the range of the habitable zone in a planetary system.
MARCY: We really can’t say anything about the frequency of Earth-sized planets, of rocky planets, that reside in the habitable zones of solar-type stars. We need to collect more data, and we’re about a year away from really saying anything clear and useful on that front.
But I strongly agree with what Sara said. We have very Earth-centric views of what conditions are necessary for life, and here we happen to reside on a planet in a zone that we’ve deemed, we humans have deemed, the habitable zone. But we now know there are many other types of planets, maybe even moons around planets, where there could be the conditions necessary for life. So we’re beginning to broaden our perspective about what types of planets and environments might be suitable for life. The traditional habitable zone might be too narrow.
LISSAUER: I agree with what Sara and Geoff said, but let me just add a few things. People who introduced the concept of the habitable zone defined it as the region at a certain distance from a star where a planet that is like Earth, with a similar greenhouse effect, would have liquid water on its surface. That’s the defining criteria for habitability for life on Earth. However, Geoff has discovered many planets that are in this habitable zone that are not viewed as potentially habitable worlds because they’re way too big.
Meanwhile, there can also be planets that are not in this definition of the habitable zone which could be potentially habitable worlds – for the reason that Sara said. They could have a much bigger greenhouse atmosphere.
TKF: In our own solar system, Jupiter’s moon Europa is believed to have liquid water beneath its icy surface. Scientists suspect this partly because tidal forces exerted on Europa as it orbits Jupiter may generate enough heat beneath the ice to support life. Europa, of course, is far beyond the defined habitable zone in our own solar system. Should we therefore be looking for biosignatures on exoplanets or their moons that lie far from their stars?
SEAGER: Let me jump in here. We have to be careful about what is accessible to us remotely. And that is what is in the atmosphere, and what is on the surface of a planet. We do tend to discard (moons), not because they’re not important but because they’re not exoplanets and we don’t really have a way to study them. I just want to throw something else out here. People do like to talk about life in liquids other than water, and people like to talk about (Saturn’s moon) Titan. Titan has lakes of liquid, not water but liquid ethane and methane. If there are planets that have those characteristics but are as far away from their star as Titan is from the sun, detecting those chemical signatures will be very difficult because the exoplanet’s reflected or infrared brightness will be low. So we also discount exoplanet versions of Titan, not because we don’t believe life exists there but because, for the foreseeable future, we don’t have remote sensing access to them. We don’t have access to the exoplanet versions of Titan.
LISSAUER: And even if we found something like this, we’re not going to get much information about it. So, the more that a planet seems like Earth, in terms of size and composition (mostly rocky), in terms of the amount of radiation it gets from its star, in terms of the star being not that dissimilar to the sun – those are suggestive that it is more likely to be habitable. But again, this is all an extension of one example.
SEAGER: I do really like the theme of where this conversation is going. The point is that we’re rather limited in the planets that we can study in the future, to look for biosignatures, or signs of life, in their atmospheres and on their surfaces. It’s true that Kepler will find lots and lots of objects, but the Kepler exoplanets are not accessible for follow-up observations because they are too far away and we don’t get enough photons (reflecting off them) to analyze. But we are trying to expand the idea of diversity, so that we have a bigger chance of identifying a habitable world in the future.
MARCY: There’s some fairly simple analysis that one has to do. You can’t just take the planets that Kepler is finding at face value and count them up, and reason is of course that smaller planets are more difficult to find than the bigger planets. Kepler has an easier time finding the bigger planets than the smaller planets. So you take that into account.
And then there’s another factor which very few people know about, but if you think about it it’s obvious. Kepler, because it finds planets by the dimming of the star as the planet crosses in front, is limited to finding only those planets that orbit in a flattened plane that is seen edge on as viewed from the Earth. Kepler misses all of the planets that reside in planes that are tilted – in orbital planes tilted 20 degrees, 40 degrees, 80 degrees – to our line of sight. And so we have to correct for that.
After applying those corrections for those known effects, we can determine what the actual occurrence rate is of planets of different sizes. The bottom line is that small planets predominate in our galaxy, compared with the bigger planets.
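As a rough illustration of the geometric correction being described (this is a textbook estimate, not the Kepler team's actual analysis): for a randomly oriented circular orbit, the chance that the planet happens to cross in front of its star is approximately the stellar radius divided by the orbital radius.

    R_sun = 6.957e8      # solar radius, metres
    AU = 1.496e11        # astronomical unit, metres

    def transit_probability(r_star, a):
        """Approximate chance that a randomly oriented circular orbit transits."""
        return r_star / a

    p = transit_probability(R_sun, 1.0 * AU)   # Earth-like orbit around a Sun-like star
    print(p)          # about 0.005
    print(1.0 / p)    # only about 1 in 200 such systems would be seen to transit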
TKF: What significance can be attached to that?
MARCY: I’ll give the cup half-full and the cup half-empty answer. If you want to be simple minded, the cup half-full answer would say, ‘Well, these early Kepler results show that the planets nearly the size of the Earth – let me be specific, the size is about twice that the diameter of the Earth – are very, very common.’ And so you might conclude that if planets just a little bit larger than the Earth are common, it therefore must be true that planets having the same size as the Earth are also common.
"This is the beginning of: 'What’s out there? What are the planets known?' And in the future, we hope that our descendants will find signs of life." —Sara Seager
But that’s where the cup might also be half empty, in the sense that we have not yet really been able to measure the occurrence rate of truly Earth-sized planets – never mind Earth-sized habitable planets.
It remains possible that planets that are rocky and the size of Earth are still rare. There are theoretical reasons to wonder whether or not planets devoid of the most common elements of the universe, hydrogen and helium – the Earth being one such planet – are really common or not. How do you form planets the size of the Earth that are missing 99 percent of the most common elements, namely hydrogen and helium? So, it remains possible that planets that have just a little, thin veneer of water and are otherwise rocky may still be rare. And I think that’s the glory of Kepler. We’re still going to answer that question. It’s an outstanding question, and we’ve yet to address it.
TKF: Kepler is looking at a tiny region of the Milky Way, and a very small fraction of stars in the galaxy. How can we make conclusions about the population of planets that could be out there everywhere in the galaxy?
LISSAUER: The region that we’re looking at primarily has stars that are of comparable distance to the center of the galaxy as our own sun. And there are a fair number of stars in that category. Now, we are missing, to a large extent, the very smallest stars, M-dwarf stars, because they tend to be so faint. But we’re getting a statistically good set of data on stars that are comparable to the sun. We’re a statistical mission. The longer we go, the better statistics we’ll have, and looking at 156,000 stars gives us a pretty good sample.
MARCY: Basically the stars we are sampling with Kepler are very similar to the predominant types of stars in the Milky Way Galaxy. So in that sense, while we’re observing stars within a narrow pencil beam in the sky, it’s probably a representative fraction.
TKF: As you zero in on individual planets and analyze their atmospheres for chemical signatures of life, how will your knowledge of Earth’s atmosphere help inform your search?
SEAGER: In general, astronomers and planetary scientists have a good handle on what should be in an exoplanet atmosphere that is in chemical equilibrium. What we see on Earth, our only example of a planet with life, is that our atmosphere is heavily modified by life. As a result, Earth’s atmosphere is out of chemical equilibrium. In our atmosphere, oxygen makes up 20 percent of our atmosphere by volume. If it weren’t for life – plants and photosynthetic bacteria – oxygen shouldn’t be in our atmosphere in any significant amount. We should only have negligible amounts. Ozone, meanwhile, is a photochemical byproduct from oxygen, and therefore it’s also a potential chemical signature of life.
So, in general, we are looking for gases that do not belong in an exoplanet atmosphere. More specifically, people would say we’re looking for an atmosphere that is out of chemical equilibrium.
Now, in a very terracentric way, we can look for oxygen or ozone or nitrous oxide, because those are Earth’s prime biosignature gases. But we also don’t know what nature will give us. Whenever we observe in the future, we should design our instruments to be able to detect a broad range of gases, so we will be flexible to see whatever is out there.
SEAGER: Kepler is taking us in a whole new direction with exoplanets by giving us statistics. Kepler is going to give us lots of numbers to find out which types of exoplanets and exoplanetary systems are more common.
It’s my goal that eventually we’ll return from statistical studies to individual exoplanets orbiting the very nearest stars. At MIT and together with Draper Labs, we are building a prototype nanosatellite that will be launched in 2013. It’s going to be very complementary to Kepler. Instead of looking at 156,000 distant stars, we hope to survey the very nearest sun-like stars for transiting Earth-size planets. So the return to studying individual exoplanets will be for those orbiting stars that are close enough for detailed follow-up.
This artist’s concept illustrates the two Saturn-sized planets discovered by NASA’s Kepler mission. The star system is oriented edge-on, as seen by Kepler, such that both planets cross in front, or transit, their star, named Kepler-9. This is the first star system found to have multiple transiting planets. (Credit: NASA/Ames/JPL-Caltech)
LISSAUER: Kepler still has a long way to go. The spacecraft’s ability to see multi-planet systems farther from their stars, and smaller planets within those systems, will increase greatly as it returns more data over the next six to ten years. Kepler is performing extremely well, and I am interested in analyzing those data for a long time to come.
MARCY: There are stars in the night sky that are three or four light years away – a sort of cosmic stone’s throw right out our back porch. We should be examining those stars for Earth-like planets. Kepler can’t do it. We don’t have any equipment right now that can do it.
What we need is a new space-borne telescope that can detect the Earth-sized planets that may well be orbiting the nearest stars out our back porch, and take spectra of those planets so that we can analyze them for biosignatures of life.
If indeed small planets are common, as the Kepler results so far suggest, let’s go find them right in our cosmic, or galactic, backyard. And so there’s an appeal that I’m essentially making here that more funding be given to NASA and ESA - and frankly I hope they’re doing this in Japan, China and Canada – for a space-borne telescope that can hunt for Earth-like planets around stars that are so close that someday we can send spacecraft there ourselves to get pictures of those Earth-like planets, up close and personal.
"The spacecraft’s ability to see multi-planet systems farther from their stars, and smaller planets within those systems, will increase greatly as it returns more data over the next six to ten years. Kepler is performing extremely well, and I am interested in analyzing those data for a long time to come" — Jack J. Lissauer
LISSAUER: The goal of major missions such as Kepler is to determine the abundance of Earth-like planets near Earth, and whether they’re more common around solar-type stars than, say, smaller stars. This will determine the best type of star to focus our search on, and how far away we'll need to look to find a few such stars with Earth-like planets. Three or four years from now, this information, combined with the technology of the day, will determine whether we are ready to build an advanced space observatory to go out and look for them, and the best way to accomplish that goal.
TKF: What role do you think the James Webb Space Telescope, as well as the next-generation ground-based telescopes such as the Thirty-Meter and Giant Magellan telescopes, will have in future exoplanet research?
SEAGER: What we hope to do with the James Webb Space Telescope is to repeat what we’ve already done for transiting giant planet atmospheres. We want to be able to look at some select, favorable, super-Earths orbiting in the habitable zones of low-mass M stars. We want to look at the composition of the super-Earth atmospheres and to see perhaps if there are any biosignature gases. That’s the ultimate goal with the James Webb. But the planets have to be transiting, and they have to be transiting small stars.
MARCY: There’s a lot of momentum now toward building 30-meter diameter ground-based telescopes. My own personal opinion is that the capability of those telescopes toward our understanding of Earth-like planets is limited. Beneath the Earth’s atmosphere, the various techniques we know of for detecting Earth-like planets and analyzing their atmospheres spectroscopically is limited. The Earth’s atmosphere is deadly in all the different ways that you might imagine.
So the future of exoplanet research lies with space-borne telescopes. And I just want to re-emphasize what Sara Seager said about the nanosats. She’s really pointing to the future. We need inexpensive ways of getting great equipment above the atmosphere.
TKF: We are in an incredible era of discovery, aren’t we?
SEAGER: If you want to look back to what we remember from hundreds of years ago, inevitably it is the great explorers. Christopher Columbus didn’t know what he was going to find, and he came across North America. Many of us working in the field of exoplanets believe that thousands of years from now, when people look back at our generation in the early 21st century, they will remember the discovery of other Earths as one of our most significant accomplishments.
This is the beginning of: “What’s out there?” “What are the planets known?” And in the future, we hope that our descendants will find signs of life. We are optimistic that our descendants are going to know that there are lots and lots of Earths with lots and lots of signs of life, and we hope they’re going to find a way to travel to the very nearest planets around other stars. And they’ll look back and they’ll see that we were the ones who started it all.
- Summer 2011 | http://www.kavlifoundation.org/science-spotlights/astrophysics-exoplanets-milky-way | 13 |
19 | |Module 15 - Particle Motion and Parametric Models|
|Introduction | Lesson 1 | Lesson 2 | Lesson 3 | Self Test|
|Lesson 15.1: Motion Along a Line|
In this lesson you will model the motion of a particle that moves along the x-axis using parametric equations. The motion of the particle will be illustrated using the animation feature of the TI-83. By developing different parametric equations to model the same movement, you will see that parametric equations that model a particle's movement are not unique.
Suppose a particle moves along the x-axis so that its position is given by the equation below, where t represents time in seconds.
You can use your TI-83 to illustrate the motion of the particle by defining its movement with parametric equations, which were explored in Module 4. Displaying the parametric equations in animated graph mode will help determine when the particle is at rest, when it is moving right, and when it is moving left.
To graph parametric equations on the TI-83 you need to change the graphing mode.
The particle's horizontal position along the x-axis is given by x(t) = 2t³ - 9t² + 12t + 1 and its vertical position is constantly zero because the particle is moving along the x-axis.
Setting the Graph Style to Animate
When an equation is graphed in Animate style, a small circular icon moves along the path defined by the equation. You can view an animation of the particle's motion by changing the Graph style of the parametric equations.
Setting the Viewing Window
With the values shown, t will initially be 0 and then increase by steps of one tenth until it is 4. For each value of t the position of the particle, as determined by the corresponding x and y-values, will be plotted.
You should see a circular icon move along the x-axis. This circle illustrates the path of the particle over the time interval from t = 0 to t = 4 as defined by x(t) = 2t³ - 9t² + 12t + 1.
The screen above shows an intermediate view of the animation when the graph is displayed.
You can see the animation again by pressing [DRAW] and selecting 1:ClrDraw.
Animation Using the Trace Feature
The Trace feature can produce a similar animated effect but the speed of the animation is under your control.
Try pressing the left or right arrow and holding it down.
Notice that the right arrow key moves the particle forward in time, which may not always coincide with motion to the right, and the left arrow key moves the cursor back in time, which may not always coincide with motion to the left. The value of t and the x- and y-coordinates are shown at the bottom of the screen as you trace the particle's motion.
15.1.1 Use the cursor movement keys to estimate the time interval(s) when the particle is moving right. Click here for the answer.
15.1.2 Estimate when the particle is moving left. Click here for the answer.
15.1.3 When does the particle appear to change direction? Click here for the answer.
Illustrating the Path Over Time
The animation on the x-axis is a realistic model for the motion of the particle. However, it's often helpful to have a static graph that can be used to help visualize and study motion. Make the following additions in the Y= editor to better illustrate the motion of the particle over the four-second interval.
Set X2T = X1T by following the procedure below.
Setting the Path Style
The graphs defined in X1T, Y1T and in X2T, Y2T can be drawn at the same time by setting the Graph mode to Simultaneous.
Displaying the Graphs
Notice the advantage that this visualization has over the original "Path" style. Any trail left on a 1-dimensional motion hides changes in direction and doesn't give any information about when the particle is in what position.
15.1.4 Identify the features of the graph that indicate when the particle appears to change direction, and then approximate the times when the particle changes direction. Click here for the answer.
A Different Model of the Particle's Motion
Different parametric equations can be used to make another model of the same particle motion. The functions below represent the same motion shown earlier, but time is now shown on the horizontal axis and the particle's position is represented vertically.
In the graph above, the y-values represent the particle's position and the x-values represent the corresponding times. This model has the advantage of having time as the x-variable and position as the y-variable, which is comparable to the representation normally used in the FUNCTION Graph mode.
Instantaneous Velocity of the Particle
The particle's instantaneous velocity function can be displayed along with its position function. This can be done by defining new parametric equations in which the x-values represent time and the y-values represent the particle's instantaneous velocity (the derivative of the position function).
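As a cross-check on the graphical approach, the derivative can also be computed by hand: x(t) = 2t^3 - 9t^2 + 12t + 1 gives v(t) = 6t^2 - 18t + 12 = 6(t - 1)(t - 2), which is zero at t = 1 and t = 2. A minimal Python sketch (an illustration only, not part of the calculator procedure) evaluates the sign of v(t) across the interval to confirm when the particle is at rest, moving right, and moving left.

    # Position and velocity of the particle; v(t) is the derivative of x(t).
    def x(t):
        return 2 * t**3 - 9 * t**2 + 12 * t + 1

    def v(t):
        return 6 * t**2 - 18 * t + 12      # = 6(t - 1)(t - 2), zero at t = 1 and t = 2

    for t in [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]:
        state = "moving right" if v(t) > 0 else ("moving left" if v(t) < 0 else "at rest")
        print(f"t = {t:3.1f}   v = {v(t):6.1f}   {state}")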
15.1.5 Describe how these graphs show when the particle is moving left, moving right, and at rest. Click here for the answer.
|< Back | Next >|
| http://education.ti.com/html/t3_free_courses/calculus84_online/mod15/mod15_lesson1.html | 13
24 | Let's start with the logic of the experimental method:
If two groups of participants are equal in all respects save one and are not similar in respect of a behaviour that is being measured, then the difference between them must be attributable to the one way in which they were different (J. S. Mill).
Hopefully by exploring this logic in relation to the word search experiment you'll see that it’s actually relatively straightforward.
When we ran the experiment we had two groups of participants (some in PC lab 1 and some in PC lab 2). Now, for the time being, let's accept that these two groups of participants were equal in all respects save one. This of course raises the question: what was the one way in which the two groups of participants were not equal?
The answer is level of teacher presence.
When the students were doing the puzzle in PC lab 1 (my lab), the teaching team made their presence felt as much as possible, walking up and down each bank of computers, looking over the students' shoulders, and generally being as conspicuous as possible.
In PC lab 2 (a colleague's lab) the teaching team did the opposite: they remained as inconspicuous as possible and showed no interest in what the students were doing at all.
Now, if it transpired that students in PC lab 1 and PC lab 2 were not similar in respect of the behavior being measured (in our case, the time taken to complete a word search puzzle), then according to the logic of the experimental method this difference must be attributable to the level of teacher presence. In other words:
Level of teacher presence (the cause) made a difference in the time taken to complete the puzzle (the effect).
So the first thing to note about the experimental method is that it is fundamentally concerned with establishing cause and effect. Indeed, it's the only research methodology that allows you to talk about cause and effect in relation to what is being investigated.
In addition to understanding the logic of the experimental method you’ll also need to understand the language that accompanies it. So let's take each concept listed on the quick reference guide in turn and where possible, link it to the word search puzzle experiment.
The first thing on the quick reference guide is the experimental hypothesis, which you may see denoted as the H1.
As the definition states it’s the prediction of the outcome of the experiment.
Now the hypothesis is the starting point in the process of experimentation, so when we were thinking about a simple experiment, we needed an idea that we could test and this is what we came up with.
High levels of teacher presence in PC lab 1 will cause students to perform a word search puzzle more quickly than students in PC lab 2.
Now because we’ve predicted the way in which behavior will change, i.e. it will get faster, we’ve actually formulated a one-tailed experimental hypothesis.
If we’d simply stated that high levels of teacher presence will cause students in PC lab 1 to perform a word search puzzle differently than students in PC lab 2. We would have formulated a two-tailed hypothesis because all we’re saving here is that behavior of interest will change.
The decision to choose a one-tailed or a two-tailed hypothesis depends on how confident you feel in predicting the way that behavior will change.
We were confident enough to go for a one-tailed experimental hypothesis because a large body of research evidence suggests that the presence of others will improve performance on simple tasks.
Another type of hypothesis you need to be aware of is the null hypothesis, sometimes denoted as the H0. This simply states that any observed differences between groups are down to chance. The idea behind it is that, depending on the result of statistical testing at the end of the experiment, you will either reject or retain the null hypothesis. Put simply, the bigger the difference between groups, the less likely it is that the difference is down to chance and the more likely you are to reject the null hypothesis.
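To make the idea that "the bigger the difference, the less likely it is down to chance" concrete, a simple randomization (permutation) test can be simulated. The Python sketch below uses made-up completion times purely for illustration; it shuffles the group labels many times and asks how often chance alone produces a difference in mean completion time at least as large as the one observed.

    import random

    # Hypothetical completion times in seconds (illustrative data only).
    lab1 = [210, 195, 220, 205, 199, 215]    # high teacher presence
    lab2 = [240, 232, 251, 228, 246, 238]    # low teacher presence

    observed = abs(sum(lab1) / len(lab1) - sum(lab2) / len(lab2))

    pooled = lab1 + lab2
    count = 0
    trials = 10000
    for _ in range(trials):
        random.shuffle(pooled)                       # re-deal the scores to two groups
        g1, g2 = pooled[:len(lab1)], pooled[len(lab1):]
        if abs(sum(g1) / len(g1) - sum(g2) / len(g2)) >= observed:
            count += 1

    print(f"observed difference in means: {observed:.1f} s")
    print(f"proportion of random shuffles at least that large: {count / trials:.4f}")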
Continuing with the quick reference guide, let's have a look at the independent variable (the IV). This is the one factor that is different between conditions. The IV is the thing that you, as the experimenter, manipulate in order to see if it causes a change in behavior; as already mentioned, in our experiment it was the level of teacher presence.
So what about the dependent variable (the DV)? The aspect of behavior measured? Again as previously mentioned it was the time taken to complete the word search puzzle.
Hopefully you’re beginning to see that although the experimental method uses unfamiliar terms like the H1, H0, IV and DV the things they allude to are not actually that complicated.
Let’s go back to the logic of the experimental method and highlight the independent and dependent variables.
If two groups of participants are equal in all respects save one (The independent variable) and are not similar in respect of a behavior that is being measured (The dependent variable) then the difference between them must be attributable to the one way in which they were different (The independent variable).
Now, remember I said let's accept for the time being that the two groups of participants doing the word search puzzle are equal in all respects save one.
Well, clearly this is impossible to maintain when dealing with human subjects, for the simple reason that we're not clones: we're all different, and we bring our individual differences with us when we take part in an experiment.
For instance, some of the students will have had more sleep than others and this may have affected their performance i.e. they didn’t finish the word search puzzle as quickly as they might have done if they'd had a proper night’s sleep.
Now this shouldn’t be a problem providing we are only talking about one or two people, and this kind of thing is referred to as a random error because things like number of hours sleep vary randomly in a population of people and it’s something that cannot be eliminated.
It’s when random errors become constant errors that you’ll find yourself saying Houston we have a problem.
Remember the cornerstone of the scientific method is to establish cause and effect and we do this by manipulating the independent variable to see what effect it has on the dependent variable.
Now imagine it transpired that all the students in PC lab 2 had been to the same party the night before and had only had, on average, two hours' sleep, whereas all the students in PC lab 1 had fallen out with the person throwing the party, didn't go, and so ended up having a normal night's sleep.
You now have no way of knowing whether students in PC lab 1 performed the word search puzzle more quickly because of the level of teacher presence (the IV), or whether they performed the puzzle more quickly because they'd had more sleep and as such were more alert.
In this example sleep has become a confounding variable. It’s confounded your results and the experiment is ruined. This somewhat frivolous example was used to introduce you to another integral aspect of the experimental method i.e. control.
In the experimental method we accept that random error exists. We also accept that random error produces extraneous variables.
We must, therefore, always do our best to ensure that extraneous variables do not become confounding variables.
The simplest way to do this is to randomly allocate people to different conditions. So in our sleep example, for instance, random allocation would ensure that all the tired party-goers don't end up in the same group.
This leads us nicely into the 3 main types of experimental design because as you’ll see each has strengths and weaknesses in relation to control issues. Experimental design simply refers to the way in which participants are deployed during the experiment.
Before we look at the 3 designs I just want to quickly outline the difference between the experimental group and the control group. The experimental group is where you expect the predicted behavior change to occur (in our case PC Lab 1 i.e. high level of teacher presence) and the control group acts as the baseline so that this change can be assessed (PC Lab 2 i.e. low level of teacher presence).
Independent Subjects Design
Depending on which book you read, this design may also be referred to as independent groups, independent samples or between groups. The independent subjects design is the one we employed during the word search experiment. In this design participants are divided into entirely separate groups on the basis of random allocation, meaning that each participant has an equal chance of being allocated to either the experimental group or the control group.
Let’s say we have 60 participants, we could put the numbers 1 to 60 in a hat and get participants to pick a number. Participants who pick an even numbers would go into the experimental group and participants who pick an odd numbers would go into the control group.
Each participant will provide us with a single score. It’s then a case of looking at all the scores in the experimental group and comparing them with all the scores in the control group.
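The "numbers in a hat" procedure is easy to mimic in software. The Python sketch below is illustrative only (the scores are randomly generated stand-ins, not real data): it randomly allocates 60 hypothetical participants to the two groups and then compares the mean score of each group.

    import random

    participants = list(range(1, 61))       # 60 participants, numbered 1 to 60
    random.shuffle(participants)            # the electronic equivalent of the hat

    experimental = participants[:30]        # e.g. high teacher presence
    control = participants[30:]             # e.g. low teacher presence

    # Each participant provides a single score (here: fake completion times in seconds).
    scores = {p: random.gauss(220, 15) for p in participants}

    mean_exp = sum(scores[p] for p in experimental) / len(experimental)
    mean_ctl = sum(scores[p] for p in control) / len(control)
    print(f"experimental mean: {mean_exp:.1f} s   control mean: {mean_ctl:.1f} s")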
Repeated Measures Design
This is also referred to as the related samples design or the within groups design. In this design the participant performs in both the experimental condition and the control condition.
Each participant, therefore, provides us with two scores or paired data. It’s then a case of comparing all the paired data scores to see if there’s a difference between the conditions.
Now the main strength of this design over the independent subjects design is that you don’t have to worry as much about individual differences confounding the results of your experiment because if you think about it, each participant acts as their own control.
In other words, they take their random variability (number of hours of sleep, personality, etc.) with them across the conditions, so that these differences cancel out.
The danger of this type of design is that order effects will confound your results. Because each participant is being asked to do something twice it’s possible that they will perform better in the second condition because of practice or that they perform worse in the second condition because of fatigue or boredom.
Therefore, we must ensure that the order of the two conditions is counterbalanced.
Counterbalancing is a technique employed to ensure that half the subjects perform in the experimental condition first and the control condition second, and half the subjects perform in the control condition first and the experimental condition second.
Counterbalancing does not get rid of order effects, but it does make sure that any possible confounding effects cancel each other out.
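Counterbalancing can likewise be scripted. In the sketch below (again purely illustrative, with hypothetical participant labels), half of the participants complete the experimental condition first and the other half complete the control condition first, so any practice or fatigue effects are spread evenly across the two orders.

    import random

    participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants
    random.shuffle(participants)

    half = len(participants) // 2
    orders = {}
    for p in participants[:half]:
        orders[p] = ["experimental", "control"]          # order A then B
    for p in participants[half:]:
        orders[p] = ["control", "experimental"]          # order B then A

    for p in sorted(orders):
        print(p, "->", " then ".join(orders[p]))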
Independent subjects and repeated measures are the two most common types of design in experimentation and the thing to note about them is that their strength lies in the weakness of the other.
With the independent subjects design you don't have to worry about order effects because you aren't asking people to perform twice; in the repeated measures design you are, and this can be a real problem. Conversely, in the independent subjects design you do have to worry about individual differences between conditions, but in the repeated measures design you don't, because people take their differences with them across conditions, thereby nullifying any possible effect.
Matched Pairs Design
In this design participants are arranged into pairs on the basis of a pre-test. For instance you might pair people up who have a similar IQ. Members of the pairs are then allocated randomly to either the experimental condition or the control condition. It’s then a case of comparing the paired data scores to see if there’s a difference between the conditions.
This tutorial has covered a lot of material and it would be worth going through it at least a couple more times. If some of the concepts still seem a little abstract, just keep trying to relate them to the word search experiment.
Please bear in mind that the tutorial only covers simple experiments, where you have just two conditions (the experimental group and the control group), one independent variable and one dependent variable. Experiments can be more complex than this: you can have more than two conditions and multiple independent and dependent variables, but the logic remains the same.
Finally I'd like to conclude with a definition of an experiment because having completed the tutorial it should make more sense.
An experiment is a study of cause and effect. It differs from simple observation in that it involves deliberate manipulation of one variable (the independent variable), while controlling other variables (extraneous variables) so that they do not affect the outcome, in order to discover the effect on another variable (the dependent variable) (S. Heyes 1986).
Research Methods and Statistics in Psychology by Hugh Coolican
The latest edition of this market-leading textbook has been updated and revised to embrace current developments in this area of psychology. It remains a comprehensive survey of research methods in psychology today, with clear and detailed explanations of statistical concepts and data analysis.
It also covers the full range of experimental and non-experimental methods and explores the ongoing quantitative-qualitative debate among researchers. It re-examines issues surrounding validity, including confounds and quasi-, field and non-experiments, and includes a new section on ethnographic methods.
| http://www.all-about-forensic-psychology.com/experimental-design-tutorial.html | 13
27 |
Most atoms on Earth came from the interstellar dust and gas from which the Sun and Solar System formed. However, in the space science community, "extraterrestrial materials" generally refers to objects now on Earth that were solidified prior to arriving on Earth. In October 2011, scientists reported that one form of extraterrestrial material, cosmic dust, contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars.
To date these fall into a small number of broad categories, namely:
- Meteorites too large to vaporize on atmospheric entry but small enough to leave fragments lying on the ground
- moon rocks brought back by the Apollo missions
- unmelted micrometeorites typically less than 100 micrometres in diameter collected on Earth and in the Earth's stratosphere
- specimens returned from space-borne collection missions like NASA's Long Duration Exposure Facility (LDEF) and the Stardust sample return mission.
A minor but notable subset of the materials that make up the foregoing collections were also solidified outside of our solar system. These are often referred to as interstellar materials, some of which are also presolar i.e. they predate the formation of our solar system.
Each of these types of material is treated elsewhere. This entry, therefore, is here primarily to consider the relationship between types of extraterrestrial materials on Earth, as well as the types of extraterrestrial material we'd like to have a closer look at.
Special Characteristics
Oxidation State
Thanks to the "romance" between carbon and oxygen atoms, which form gases when they combine, the solid material in a given planetary system probably depends on whether carbon abundances were greater, or less, than those of oxygen. What's left over goes into solids. There was more oxygen than carbon in our Solar System, so solid planetary surfaces (as well as primitive meteorites) are largely made up of oxides like SiO2 (silicates, resulting in silicate planets). If the converse was true, carbides might be the dominant form of "rock" (resulting in carbon planets).
Larger planetary bodies in our early Solar System were hot enough to experience melting and differentiation, with heavier elements (like nickel) finding their way into the core. The lighter silicate rocks presumably float to the surface. Thus this might explain why iron on Earth's surface is depleted in nickel, while iron meteorites (from the center of a differentiated planetoid) are rich in nickel. Since it's not a very active rock-forming element, most of our nickel is in the Earth's core. A key feature of extraterrestrial materials, in comparison to naturally occurring terrestrial materials, is therefore Ni/Fe ratios above several percent.
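That nickel criterion lends itself to a very small screening calculation. The sketch below is only a toy illustration of the rule of thumb stated above; the 2% cutoff and the sample values are assumptions for the sake of the example, not a formal classification standard.

    # Toy screen: flag iron-rich samples whose Ni/Fe mass ratio exceeds an assumed ~2% cutoff.
    def looks_meteoritic(ni_wt_percent, fe_wt_percent, threshold=0.02):
        return (ni_wt_percent / fe_wt_percent) > threshold

    samples = {
        "terrestrial iron oxide (hypothetical)": (0.1, 70.0),     # (Ni wt%, Fe wt%)
        "iron meteorite fragment (hypothetical)": (7.5, 90.0),
    }
    for name, (ni, fe) in samples.items():
        verdict = "possibly extraterrestrial" if looks_meteoritic(ni, fe) else "likely terrestrial"
        print(f"{name}: {verdict}")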
Another consequence of the leftover oxygen in our early Solar System is the fact that presolar carbides from star systems with higher ratios of C/O are easier to recognize in primitive meteorites, than are presolar silicates. Hence presolar carbon and carbide grains were discovered first.
Elemental Abundances
Present day elemental abundances are superimposed on an (evolving) galactic-average set of elemental abundances that was inherited by our Solar System, along with some atoms from local nucleosynthesis sources, at the time of the Sun's formation. Knowledge of these average planetary-system elemental abundances serves as a powerful tool for tracking the chemical and physical processes involved in the formation of planets and the evolution of their surfaces. For more detail, see cosmic abundance.
Impact and irradiation effects
The atmosphere of planet Earth, and the Earth's magnetic field, protect us from impact and irradiation by a wide range of objects that fly around in space. Researchers who understand these effects can often learn a great deal from them about the history of extraterrestrial materials.
Microcraters etc.
Microcraters, "pancakes", and "splashes" were first seen on the surface of lunar rocks and soil grains. They tell us about the direct exposure of an object's surface to space in the absence of a vacuum. These structures are quite unlike features found on the surface of terrestrial rocks and soil. They have also been identified on grains found in meteorites, and on man-made objects exposed to the micrometeorite flux in space.
Nuclear particle tracks
Nuclear track damage trails from the passage of heavy ions in non-conducting solids were first reported by E. C. H. Silk and R. S. Barnes in 1959. Their etchability along with many subsequent applications were established by Fleischer, Price and Walker, starting with their work at General Electric Laboratories in Schenectady, New York. See also solid state nuclear track detector.
Nuclear particle tracks subsequently found many uses in extraterrestrial materials, thanks both to the exposure of those materials to radiation in space, and to their sometimes ancient origins. These applications included (i) determining the exposure of mineral surfaces to solar flare particles from the sun, (ii) determining the spectrum of solar flare particle energies from the early Sun, and (iii) the discovery of fission tracks from the extinct isotope plutonium-244 in primitive meteorites.
Nuclear spallation effects
Particles subject to bombardment by sufficiently energetic particles, like those found in cosmic rays, also experience the transmutation of atoms of one kind into another. These spallation effects can alter the trace element isotopic composition of specimens in ways which allow researchers in the laboratory to fingerprint the nature of their exposure in space.
These techniques have been used, for example, to look for (and determine the date of) events in the pre-Earth history of a meteorite's parent body (like a major collision) that drastically altered the space exposure of the material in that meteorite. For example, the Murchison meteorite landed in Australia in 1969, but its parent body apparently underwent a collision event about 800,000 years ago which broke it into meter-sized pieces.
Isotopic abundances
The isotopic homogeneity of our planet and our Solar System provides a blank slate on which to detect the effect of nuclear processes on Earth, in our Solar System, and in objects coming to us from outside the Solar System. For more detail, see natural abundance.
Noble gases
Noble gases are particularly interesting from an isotopic perspective, first because they avoid chemical interactions, secondly because many of them have more than one isotope on which to carry the signature of nuclear processes (xenon has many), and finally because they are relatively easy to extract from solid materials by simple heating. As a result, they play a pivotal role in the unfolding drama of extraterrestrial materials study.
Radiometric dating in general
Isotopic abundances provide important clues to the date of events that allowed a material to begin accumulating gaseous decay byproducts, or that allowed incident radiation to begin transmuting elements. Because of the harsh radiations, unusual starting compositions, and long delays between events sometimes experienced, isotopic dating techniques are of special importance in the study of extraterrestrial materials.
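The underlying arithmetic of such dating is the exponential decay law: for a closed system, the age follows from t = ln(1 + D/P) / λ, where D/P is the measured daughter-to-parent ratio and λ is the decay constant. The sketch below is a generic illustration; the isotope system, half-life and measured ratio are placeholders chosen only to show the calculation, not data from any particular specimen.

    import math

    # Generic closed-system age from a measured daughter/parent ratio.
    def age_years(daughter_over_parent, half_life_years):
        lam = math.log(2) / half_life_years            # decay constant, per year
        return math.log(1 + daughter_over_parent) / lam

    # Placeholder example: a rubidium-87 style system with a ~48.8-billion-year half-life
    # and an assumed measured D/P ratio of 0.067 gives an age of roughly 4.6 billion years.
    print(f"{age_years(0.067, 48.8e9):.2e} years")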
Further strategies not mentioned here are described under radiometric dating.
Other isotopic studies
The categories of study mentioned before this are classic isotopic applications. However, extraterrestrial materials also carry information on a wide range of other nuclear processes. These include for example: (i) the decay of now-extinct radionuclides, like those supernova byproducts introduced into Solar System materials shortly before the collapse of our solar nebula, and (ii) the products of stellar and explosive nucleosynthesis found in almost undiluted form in presolar grains. The latter are providing astronomers with up-close information on the state of the whole periodic table in exotic environments scattered all across the early Milky Way.
See also
- Cosmic dust
- Interplanetary dust cloud
- Glossary of meteoritics
- List of Martian meteorites
- List of meteorites on Mars
- Moon rock
- Presolar grains
- Chow, Denise (26 October 2011). "Discovery: Cosmic Dust Contains Organic Matter from Stars". Space.com. Retrieved 2011-10-26.
- ScienceDaily Staff (26 October 2011). "Astronomers Discover Complex Organic Matter Exists Throughout the Universe". ScienceDaily. Retrieved 2011-10-27.
- Kwok, Sun; Zhang, Yong (26 October 2011). "Mixed aromatic–aliphatic organic nanoparticles as carriers of unidentified infrared emission features". Nature. Bibcode:2011Natur.479...80K. doi:10.1038/nature10542.
- Suess, H. E.; Urey, H. C. (1956). "Abundances of the elements". Rev Mod Phys 28: 53–74. Bibcode:1956RvMP...28...53S. doi:10.1103/RevModPhys.28.53.
- Cameron, A. G. W. (1973). "Abundances of the elements in the solar system". Space Sci Rev 15: 121–146. Bibcode:1973SSRv...15..121C. doi:10.1007/BF00172440.
- Anders, E.; Ebihara, M. (1982). "Solar-system abundances of the elements". Geochim. Cosmochim. Acta 46 (11): 2363–2380. Bibcode:1982GeCoA..46.2363A. doi:10.1016/0016-7037(82)90208-3.
- Silk, E. C. H.; Barnes, R. S. (1959). "Examination of fission fragment tracks with an electron microscope". Phil. Mag. 4 (44): 970–971. Bibcode:1959PMag....4..970S. doi:10.1080/14786435908238273.
- R. L. Fleischer, P. Buford Price, and Robert M. Walker (1975) Nuclear Tracks in Solids (U. California Press, Berkeley).
- M. W. Caffee, J. N. Goswami, C. M. Hohenberg, K. Marti and R. C. Reedy (1988) in Meteorites and the early solar system (ed. J. F. Kerridge and M. S. Matthews, U Ariz. Press, Tucson AZ) 205-245.
- Clayton, Robert N. (1978). "Isotopic anomalies in the early solar system". Annual Review of Nuclear and Particle Science 28: 501–522. Bibcode:1978ARNPS..28..501C. doi:10.1146/annurev.ns.28.120178.002441.
- Minoru Ozima and Frank A. Podosek (2002) Noble gas geochemistry (Cambridge U. Press, NY, second edition) ISBN 0-521-80366-7
- Hohenberg, C (2006). "Noble gas mass spectrometry in the 21st century". Geochimica et Cosmochimica Acta 70 (18): A258. Bibcode:2006GeCAS..70Q.258H. doi:10.1016/j.gca.2006.06.518.
- Zinner, Ernst (2003). "An isotopic view of the early solar system". Science 300 (5617): 265–267. doi:10.1126/science.1080300. PMID 12690180.
- Zinner, Ernst (1998). "Stellar nucleosynthesis and the isotopic composition of presolar grains from primitive meteorites". Annual Review of Earth and Planetary Sciences 26: 147–188. Bibcode:1998AREPS..26..147Z. doi:10.1146/annurev.earth.26.1.147.
- Planetary Science Research Discoveries Educational journal with articles about extraterrestrial materials. | http://en.wikipedia.org/wiki/Extraterrestrial_materials | 13 |
14 |
We gratefully acknowledge the owners of www.knowth.com for their permission to use information from their website.
I) Ancient Astronomy
A) In the beginning, there were 3 basic types of ancient observatories
1) Simple markers
(a) Stone, wood, holes, or lines
3) Temples or tombs
(a) Passageways, shafts, windows, or other openings that would face the rising/setting moon, sun or important stars
II) Famous Places
1) Located on the Salisbury Plain in Southern England
2) One of about 900 megalithic circles throughout Britain
3) Consists of three concentric circles built during three different time periods
4) Built in stages from about 2800 BCE to about 1075 BCE
5) Used to observe the sun and the moon
6) Gave regularity to the builders calendar; helped in planting and used for religious ceremonies
7) The largest stones weigh 50 tons, and were transported from many miles away.
8) Viewed from the center, the sun rises over the heel stone at the summer solstice
[Figure captions: Stonehenge construction periods 1-2 and period 3; alignments of Stonehenge; the Heel Stone]
Around 3000 BCE, the first Stonehenge consisted of a ditch and bank enclosing a ring of 56 pits. These were later named Aubrey Holes after the 17th century antiquarian John Aubrey who discovered them. Around 2500 BCE, the 4 ton bluestone megaliths were brought from the Preseli mountains in Wales. Around 2300 BCE, 30 sarsens (sandstone uprights), each weighing over 25 tons, were positioned in a circle and capped with morticed stone lintels. Seven centuries later two mysterious rings of pits were dug around the Stones. Over time, the landscape around Stonehenge underwent substantial change and development. In the Neolithic period long barrows and huge earthworks such as the Cursus and Durrington Walls were created. During the Bronze Age hundreds of round barrows were built for the burial of chieftains or leaders, often with supplies to support them on their journey into the next world. The Avenue, a ceremonial approach to the Stones aligned on the midsummer sunrise, was also built around this period.
What really happens at Stonehenge!
B) Newgrange (Ireland) - check www.knowth.com for a fascinating detailed description of this ancient site.
1) 3000 BCE
2) Underground chamber (tomb?) with passageways which point to the rising sun at winter solstice
3) Window allows light to enter at sunrise on the first day of winter
4) Built by a sophisticated society with an interest in tracking and recording the movements of the sun
5) Originally used as calendar
(a) Used to mark major events in the sun’s yearly and four-yearly cycle
6) A change in climatic conditions and a gradual change in the obliquity of the ecliptic led to its disuse
7) Ornamentations found in Newgrange have a functional purpose
(a) As the sun shines directly onto the symbols engraved on the backstone they act not just as primitive representations of the sun, but as devices precisely positioned to measure solar movement
Pictures & Text are © www.knowth.com
The Megalithic Passage Tomb at Newgrange was built about 3200 BCE. The outer kidney shaped mound covers an area of over one acre and is surrounded by 97 Kerbstones, many of which are decorated. The 19 meter long inner passage leads to a cruciform chamber. It is estimated that the construction of the Passage Tomb at Newgrange would have taken a work force of 300 at least 20 years to build. The passage and chamber of Newgrange are illuminated by the Winter Solstice sun. A shaft of sunlight shines through the roof box over the entrance and penetrates the passage to light up the chamber. The dramatic event lasts for 17 minutes at dawn from the 19th to the 23rd of December. The Passage Tomb at Newgrange was re-discovered in 1699 by the removal of material for road building. The pictures below show the entrance and a small window directly above. Around December 21st, the sun shines through this window illuminating the complete passage along with an ornamentation carved on a back stone that precisely measures solar movement for 15 minutes.
C) Big Horn Medicine Wheel
1) Arrangement of rocks resembling a 28-spoke wheel in the Big Horn Mountains of Wyoming; 90 feet in diameter
2) Used as calendar by the Plains Indians from about 1500 – 1700 CE
3) Used as indicator of summer solstice sunrise and sunset, with other alignments for the rising of certain stars (Aldebaran, Rigel, and Sirius)
4) About 50 similar circles exist
(a) The oldest is in Canada
(b) About 2500 BCE - the age of the Egyptian pyramids
5) The alignments are controversial as they could be due to chance
6) No evidence they are astronomical in design
(a) Why were they interested in those three particular stars?
Locations of Medicine Wheels throughout the Northwest and Canada
The Big Horn Medicine Wheel is a prehistoric Native American rock structure laid down about 2,500 years ago by an Aztec-Tanoan culture that occupied the infamous Bighorn Canyon and adjacent areas between 1500 BCE and 500 CE.
The Wheel lies at an elevation of 9,642 feet, 11 miles south of the Montana Boundary. Medicine Wheel was first discovered by Crow Indian hunters nearly 300 years ago. They became deathly afraid of its "bad medicine" and no Indians of any tribe dared go near it after the news had spread across the Plains. Because of the structure's resemblance to a giant wagon wheel with hub, spokes, and a rim, white trappers called it a wheel. Until 1992 its true interpretation was unknown, although many theories were advanced without real study.
As an archeolinguistic artifact, the Medicine Wheel tells how the first people on earth, (Uto-Aztecan) emerged as spirits out of the Underworld via a conduit topped by the large central rock cairn, to be driven by a spirit vectored force into an inwardly opening rim-touching cairn where they became born as human beings. Where an offset cairn lies 12 feet outside the Wheel's rim, the greatly feared evil ghosts of dead Uto-Aztecans were uplifted into the Afterworld in the Milky Way directly above the structure's central exit cairn.
The rim of the structure symbolized the cosmological horizon of the Milky Way. Four external rim-touching cairns symbolize the four preceding eras in the world's history closed completely in time; they could not be entered by either the spirits rising from the Underworld nor by the ghosts en route to the Afterworld. On the inside a rim-touching, larger cairn opens toward the central cairn via a spoked channel to symbolize the Fifth Era of Current Existence in which all humanity plays its varied parts. In this era the Underworld spirits become human beings living a normal life span.
The Wheel's rim measures 245 feet. There are 20 aboriginal "spokes" that connect the rim only to the central cairn in five groups of four spokes each. These symbolize the number of days in the Uto-Aztecan month and their counting system to the base 20 (fingers and toes). Another seven spokes connect the central cairn to the other rim-touching elements. The total of 27 aboriginal spokes symbolized the number of nights between the New Moon crescent and the Old Moon crescent when the moon is actually visible each month. This lunar visibility symbolized the forces of darkness, evil. On the central cairn lies a bison skull that symbolizes the sun (light), beneficence or goodness. These two symbolizations represent the eternal warfare between light and dark, i.e., evil vs. good.
All across Wyoming, with radii up to 200 miles, giant stone arrows direct the way to the Medicine Wheel to show the greatly feared evil ghosts of Uto-Aztecan dead the way to the Afterworld of Darkness. Thus the Medicine Wheel was also a mythological cemetery for ghosts, a place of great fear to be avoided forever and a place totally lacking in religious sentiments.
The Aztec-Tanoan culture originated around 4,500 years ago in Southern Alberta, Canada, where the Majorville Cairn is the oldest wheel-like cosmological rock structure belonging to the Medicine Wheel Complex. A second and follow-on structure based on five is the Moose Mountain Cairn in Southern Saskatchewan and dates to around 3,000 years ago. Third in the order is the Big Horn Medicine Wheel at 2,500 years of age. The fourth and final rock symbolization appears to be the famed Aztec Calendar Stone located in Mexico City which has been dated to 1481 CE. All of these four structured linguistic artifacts (none are archaeological remnants of prehistory) symbolize a mythological story of the creation of Uto-Aztecan humanity -- the "Origin Myth" of which every tribe on earth has its own version -- and the disposition of the ghosts of Uto-Aztecan dead in an Afterworld of Darkness located in the cosmological horizon of the Milky Way.
The cosmology patterned into these four rock structures is common only to Native American tribal groups speaking branches of the Uto-Aztecan (Numic) language. No other Western Hemisphere tribe or society entertained this particular view of man's place in the cosmos of the earth exactly half way between the Underworld of his mythological creation and the Afterworld high above.
The Big Horn Medicine Wheel (as also its Canadian predecessors and its Aztec successor) is a linguistic artifact that reflects the "thought world" of a very ancient primitive Siberian people immersed also in the eternal warfare between the forces of darkness (evil) and the forces of light (good), i.e., originating as between the bitter darkness of arctic winters vs. the perpetual daylight of warm summers. This never-ending warfare is clearly portrayed in the Medicine Wheel Complex rock patterns and exists today in the legends of many of the Numic-speaking tribes between Southern Canada and Central America.
All of these prehistoric rock structures are based in the mystical number five, which meant power. All other Western Hemisphere Native Americans revered the number four of the cardinal directions North, South, East, and West. None of the Medicine Wheel Complex structures bear the slightest relationship to the number four, and for this reason their entire interpretation must be in terms of the Uto-Aztecan "thought world" stock language. No religion whatsoever connects to the Medicine Wheel.
Since the Bighorn Medicine Wheel was first discovered by white men, many fanciful myths and stories have grown up around the mysterious arrangements of its limestone rocks, however, none have any scientific validity.
Contemporary Native American claims of "religious rights" to the site because of "traditional ceremonial usages" did not surface until 1985 with the American Indian Movement. All such "claims" are patently false and unrelated in any way to the Wheel's paleoethnological time span of 2,500 years. There is no recorded evidence of any twentieth century Native American of any tribe visiting the Big Horn Medicine Wheel prior to 1985.
D) Mayan Ruins - Caracol Temple
1) Yucatan Peninsula, Mexico
2) 1000 CE
3) Solstice and equinox alignments
4) Star alignments
5) Alignments with Venus
The Maya were quite accomplished astronomers. Their primary interest, in contrast to "western" astronomers, was zenial passages, when the Sun crossed over the Maya latitudes. On an annual basis the sun travels to its summer solstice point, at a latitude of about 23.4 degrees north.
Most of the Maya cities were located south of this latitude, meaning that they could observe the sun directly overhead during the time that the sun was passing over their latitude. This happened twice a year, evenly spaced around the day of solstice.
The Maya could easily determine these dates, because at local noon, they cast no shadow. Zenial passage observations are possible only in the Tropics and were quite unknown to the Spanish conquistadors who descended upon the Yucatan peninsula in the 16th century. The Maya had a god to represent this position of the Sun called the Diving God.
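The zenial-passage dates themselves can be estimated with a standard rule of thumb: the sun passes directly overhead when its declination equals the observer's latitude. The Python sketch below uses a common approximation for the solar declination and an assumed site latitude of 14.7 degrees north purely for illustration; it is a rough estimate, not a precise ephemeris.

    import math

    def solar_declination_deg(day_of_year):
        # A widely used approximation (about +/- 23.44 degree annual swing).
        return 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

    latitude = 14.7    # assumed site latitude, degrees north (illustrative)
    passages = [d for d in range(1, 366)
                if abs(solar_declination_deg(d) - latitude) < 0.25]
    print("approximate zenith-passage days of the year:", passages)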
The Maya believed the Earth was flat with four corners. Each corner represented a cardinal direction and each direction had a color: east-red; north-white; west-black; south-yellow. Green was the center. At each corner, there was a jaguar of a different color that supported the sky. The jaguars were called bacabs. These bacabs, held up the sky.
The Milky Way itself was much venerated by the Maya. They called it the World Tree, which was represented by a tall and majestic flowering tree, the Ceiba. The Milky Way was also called the Wakah Chan. Wak means "Six" or "Erect". Chan or K'an means "Four", "Serpent" or "Sky". The World Tree was erect when the constellation Sagittarius was well over the horizon. At this time the Milky Way rose up from the horizon and climbed overhead into the North. The star clouds that form the Milky Way were seen as the tree of life where all life came from. Mayans further believed the universe was divided into thirteen layers, each with its own god.
The Maya portrayed the Ecliptic in their artwork as a Double-Headed Serpent. As you know, the ecliptic is the path of the sun in the sky which is marked by the constellations of fixed stars. Here the moon and the planets can be found because they are bound, like the Earth, to the sun.
The constellations on the ecliptic are also called the zodiac. We don't know exactly how the fixed constellations on the ecliptic were seen by the Maya, but we have some idea of the order in some parts of the sky. We know there is a scorpion, which is the same as the constellation Scorpius. It has also been determined that Gemini appeared to the Maya as a pig or peccary (a nocturnal animal in the pig family). Some other constellations on the ecliptic are identified as a jaguar, at least one serpent, a bat, a turtle, and a xoc monster (shark or sea monster). The Pleiades were seen as the tail of the rattlesnake and are called "Tz'ab."
The Maya built observatories at many of their cities and aligned important structures with the movements of celestial bodies. Some of these are temple groupings, such as a group of three at Uaxactún, which marks the Sun's rising position at summer solstice, the two equinoxes and winter solstice. Architecture such as the Caracol Temple was also aligned with the appearance of celestial bodies such as the Pleiades and Venus.
1) West of the Nile River in southern Egypt
2) Radiocarbon dating dates Nabta could not be younger than 4,800 years old
(a) Megaliths at Nabta predate most other similar sites, such as Stonehenge
In Nabta, there are six megalithic alignments extending across the sediments containing a total of 24 megaliths or megalithic scatters. Like the spokes on a wheel, each alignment radiates outward from the complex structure. These lines coincide with the rising positions of three prominent stars from the period 4800-3700 BCE: Sirius (the brightest star in the night sky), Dubhe (the brightest star in Ursa Major), and stars in the belt of Orion.
1) Astronomical Site in Sub-Saharan Africa
2) 300 BCE
3) 25 stone alignments with seven positions in the sky
This site was built around 300 BCE by an unknown African people, who constructed a stone "observatory" using a detailed understanding of the motions of the stars and the Moon. With their astronomical knowledge, these people created a very accurate lunar calendar.
G) Aztec Ruins
1) Templo Mayor in Tenochtitlan, eastern Mexico
2) Festivals occurred at the equinoxes
The Aztecs were observers of nature in all its cycles: the stars, the passing of the seasons and the birth and death of plant and animal life. These observations informed many aspects of life: from the creation of the calendar, to the integration of time cycles with the stories of the gods and creation, to the structuring of rituals in their proper time and place. More unpredictable astronomical events were linked to omens and portents. For example, the comet seen by Montezuma prior to the Spaniards' arrival was seen as a forewarning of an impending crisis.
There were two calendar systems used by the Aztecs. The first was the solar year, or the 365-day cycle. This was divided up into 18 months of 20 days each. There were 5 leftover days (called "nemontemi") which were deemed to be bad luck. The second calendar was a 260-day cycle made up of 20 day signs (named mostly after aspects of nature) and 13 numbers. These two cycles worked together like two intermeshing gears. After the gears of the 260 and 365 day cycles have gone completely around, 52 years (the equivalent of our century) will have passed.
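The meshing of the 260-day and 365-day counts can be checked with a line of arithmetic: the two cycles realign after their least common multiple, which is 18,980 days, i.e. 52 solar years or 73 ritual cycles. A minimal sketch of the calculation:

    import math

    ritual, solar = 260, 365
    days = ritual * solar // math.gcd(ritual, solar)      # least common multiple
    print(days, "days until the two calendars realign")   # 18,980 days
    print(days // solar, "solar years of 365 days")       # 52 years
    print(days // ritual, "ritual cycles of 260 days")    # 73 cycles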
Astronomically, this 52 year cycle was begun when the Pleiades crossed the fifth cardinal point or the zenith of heaven at midnight. The ritual that marked this new cycle was called the "New Fire Ceremony", which was probably a once in a lifetime event for most Aztec people. On this evening, priests and a warrior chosen by the King began their 20 kilometer procession to the Hill of the Star. At the proper moment of alignment in the heavens, wood bundles representing the past 52 year cycle were lit, the heart sacrifice of the warrior was enacted and the "new fire" built on his chest. Watching from afar, the citizens also cut and bled themselves and celebrated as the fire was brought back to the Great Temple at Tenochtitlan. Priests and emissaries from outlying towns came to Tenochtitlan to fetch the fire to bring back to their people, so that all could share in the marking of the time as well as symbolically renewing ties with the capital.
The universal time scale, is one of the most famous of Aztec relics: the Calendar Stone (or Sunstone). This huge object (4 feet thick, 12 feet in diameter, and weighing over 24 tons) was found in Mexico City's main square, or Zocalo, in 1790. It is believed that the stone was meant to portray the five ages or cycles of creation. At the time of the arrival of Cortes, the Aztecs were in the fifth and final age, which had started 535 years earlier. In the center of the stone is the face of the Sun God, Tonatiuh. Surrounding him are square areas representing the four previous ages which had been destroyed by hurricanes, jaguars, fires, and rains. Doomsday is marked by the pointing triangular area at the top of the stone.
For the Aztecs, observations of nature provided a framework for marking temporal events in calendars. It also served to help order the time and place of the rituals that were so important to their society.
H) Incan Ruins
at Machu Picchu, Peru
The Incan Empire played host to a broad spectrum of divine spirits and cultural heroes that were based upon their observations of the night sky. From Viracocha, the creator god, to K'uychi, the rainbow god, the Incans viewed themselves as subjects to a menagerie of powerful deities. By controlling weather, health, fertility, and time itself, these gods held the fate of the empire in their hands. If appeased, they could be helpful allies, but if angered, they were capable of unleashing profound terror upon the masses, desiccating the fields with drought, ravaging the villages with disease, or destroying entire cities with natural disaster.
Inti was considered the Sun god and the ancestor of the Incas. Inca people were living in South America in ancient Peru. In the remains of the city of Machu Picchu, it is possible to see a shadow clock which describes the course of the Sun personified by Inti.
Inti and his wife Pachamama, the Earth goddess, were regarded as benevolent deities. According to an ancient Inca myth, Inti taught his son Manco Capac and his daughter Mama Ocollo the arts of civilization and sent them to Earth to instruct mankind about what they had learned.
Inti ordered his children to build the Inca capital where the tupayauri fell to the ground. The tupayauri was a divine golden wedge. Manco probed the ground with the wedge, and at one point threw it into the ground. The tupayauri sank into the ground, and so the search for a site was over. Incas believed this happened in the city of Cuzco, which had been founded by the Ayar. A festival, held in honour of Inti is celebrated even today in Peru during the Festival of Inti Raimi in Cuzco, where an Inca drama related to the Sun god is re-enacted.
Finally, the Inca Universe had three worlds: the Cosmos (Hanaqpacha); the surface of the earth (Kaypacha) and also the earth's interior (Okupacha.)
1) Temple at Karnak
(a) Certain alignments correspond to summer solstice sunset and winter solstice sunrise
2) Pyramid of Khufu at Giza
(a) Shafts from the King's chamber indicate
(1) location of Polaris 5000 years ago
(2) Former position of Orion's belt
(b) The pyramid is also aligned perfectly N-S and E-W
Temple at Karnak - Egypt
Pyramid of Khufu at Giza, Egypt
The Pyramids of Giza are the most famous monuments of ancient Egypt. These massive stone structures were built around 4500 years ago on a rocky desert plateau close to the River Nile. They form part of the burial complexes of the Egyptian pharaohs Khufu, Khafre and Menkaure. The intriguing Egyptian pyramids were more than just graves. The mysteries surrounding their symbolism, design and purpose have inspired passionate debate. It is likely that many of these mysteries will never be solved.
The Great Pyramid of Khufu is the largest of the pyramids of ancient Egypt, and was considered by the ancient Greeks to be one of the Seven Wonders of the World. Khufu (Cheops to the Greeks) came to power when the Old Kingdom of ancient Egypt was nearing a peak of prosperity and culture. His pyramid is remarkable for both its size and its mathematical precision.
The four sides of the Great Pyramid of Khufu are accurately oriented to the cardinal points of the compass. The base has sides 230 meters long, apparently with a difference between them of only a few centimeters. The pyramid was originally 146 meters high until it was robbed of its outer casing and capstone. The structure contains approximately 2.3 million stone blocks. The Great Pyramid of Khufu is regarded as the most massive building ever erected in the world - a remarkable statistic for a construction feat achieved 4500 years ago.
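The figures quoted above are enough for a small consistency check on the pyramid's geometry: with a 230-meter base and an original height of 146 meters, the face slope comes out near 52 degrees. The sketch below is only this back-of-the-envelope calculation, using the dimensions given in the text.

    import math

    base = 230.0      # side length of the square base, in meters (from the text)
    height = 146.0    # original height, in meters (from the text)

    # Slope of a face: rise over the horizontal run from the base edge to the center.
    slope_deg = math.degrees(math.atan(height / (base / 2.0)))
    print(f"face slope of about {slope_deg:.1f} degrees")   # roughly 51.8 degrees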
There is some debate on whether or not the pyramid was built with an "eye on the stars." The following pictures will show you that there is a perfect alignment with the pyramids and the three stars of Orion's belt. In addition, the pyramid is perfectly aligned with North/South and East/West.
A fascinating archaeological story argues that the great pyramids of Egypt's Fourth Dynasty (c. 2600-2400 BCE.) were vast astronomically sophisticated temples, rather than the pharaonic tombs depicted by conventional Egyptology. A tiny remote-controlled robot created by Rudolf Gantenbrink, a German robotics engineer, traveled up airshafts within the Great Pyramid of Giza and relayed to scientists video pictures of a hitherto unknown sealed door within the pyramid. Robert Bauval, a British engineer and writer who has been investigating the pyramids for more than ten years, and Adrian Gilbert, a British publishing consultant, use Gantenbrink's tantalizing discovery as the basis for an extended analysis of the purpose of the mysterious airshafts, which lead from the Great Pyramid's chambers to its exterior, and of the placement of other Fourth Dynasty pyramids. They were sited, the authors argue, to coincide with the key stars of Orion, a constellation that had religious significance for the Egyptians. Bauval and Gilbert claim that the shafts were pointed directly at important stars in Orion (when Orion was in position during ancient times.) Using astronomical data about stellar movement, they argue that the Orion stars coincide exactly with the pyramids' positions in approximately 10,400 BCE. This is during a period the Egyptians called the First Time, when they believed the god Osiris ruled the Earth. The authors also speculate that the mysterious space within the Great Pyramid discovered by Gantenbrink contains the mythical Benben stone, which the Egyptians linked to the creation of the world. Along with this theory, the authors also suggest that other astronomical sites through out the world all line up with constellations in the sky during the year 10,400 BCE.
J) Chinese Alignments
1) A tower built in 1270 CE
2) Measures the sun's shadow
(a) Shadow shortest at noon
(b) Very short at the summer solstice (a worked shadow-length example follows below)
3) Markers on ground locate shadow positions
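A shadow-measuring tower of this kind can be modelled with one line of trigonometry: at local noon the sun's altitude is roughly 90 degrees minus the latitude plus the solar declination, and the shadow length is the tower height divided by the tangent of that altitude. The sketch below uses an assumed tower height and an assumed latitude of 35 degrees north purely for illustration; it simply shows why the noon shadow is shortest at the summer solstice.

    import math

    def noon_shadow_length(height, latitude_deg, declination_deg):
        # Approximate noon solar altitude for a northern temperate site.
        altitude = 90.0 - latitude_deg + declination_deg
        return height / math.tan(math.radians(altitude))

    height = 12.0      # assumed gnomon height in meters (illustrative)
    latitude = 35.0    # assumed site latitude in degrees north (illustrative)

    for label, decl in [("summer solstice", 23.44), ("equinox", 0.0), ("winter solstice", -23.44)]:
        print(f"{label:16s} noon shadow of about {noon_shadow_length(height, latitude, decl):5.1f} m")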
Mankind's first record of an eclipse of the Sun was made in China in 2136 BCE. Zhengtong, a Ming Dynasty ruler of China from 1436-1449, had the Ancient Beijing Observatory built at the southeast corner of the city wall. A 46-ft.-high platform held eight Qing Dynasty bronze astronomical instruments. Two were built in 1439 and six in 1673.
During the Dark Ages in Europe, the flourishing civilization in China achieved major advances in science, technology, medicine, mathematics and astronomy. A "guest star" was seen by the Chinese in 1054. The supernova explosion was witnessed in the area of Earth's sky where today we see an expanding gas cloud that we now call the Crab Nebula. When the star exploded, it was so bright it was visible during daylight and lasted more than a year. When contact was made with European astronomers, the Chinese realized how much they did not yet know about astronomy.
| http://www.sir-ray.com/Ancient%20Astronomy.htm | 13
55 | Bijection, injection and surjection
In mathematics, injections, surjections and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other.
- A function is injective (one-to-one) if f(a) = f(b) ⇒ a = b or, equivalently, if a ≠ b ⇒ f(a) ≠ f(b). One could also say that every element of the codomain (sometimes called range) is mapped to by at most one element (argument) of the domain; not every element of the codomain, however, need have an argument mapped to it. An injective function is an injection.
- A function is surjective (onto) if every element of the codomain is mapped to by some element (argument) of the domain; some images may be mapped to by more than one argument. (Equivalently, a function where the range is equal to the codomain.) A surjective function is a surjection.
- A function is bijective (one-to-one and onto) if and only if (iff) it is both injective and surjective. (Equivalently, every element of the codomain is mapped to by exactly one element of the domain.) A bijective function is a bijection (one-to-one correspondence).
(Note: a one-to-one function is injective, but may fail to be surjective, while a one-to-one correspondence is both injective and surjective.)
An injective function need not be surjective (not all elements of the codomain may be associated with arguments), and a surjective function need not be injective (some images may be associated with more than one argument). The four possible combinations of injective and surjective features are illustrated in the following diagrams.
A function is injective (one-to-one) if every possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection. The formal definition is the following.
- The function f : A → B is injective iff for all a, b ∈ A, we have f(a) = f(b) ⇒ a = b.
- A function f : A → B is injective if and only if A is empty or f is left-invertible, that is, there is a function g: B → A such that g o f = identity function on A.
- Since every function is surjective when its codomain is restricted to its range, every injection induces a bijection onto its range. More precisely, every injection f : A → B can be factored as a bijection followed by an inclusion as follows. Let fR : A → f(A) be f with codomain restricted to its image, and let i : f(A) → B be the inclusion map from f(A) into B. Then f = i o fR. A dual factorisation is given for surjections below.
- The composition of two injections is again an injection, but if g o f is injective, then it can only be concluded that f is injective. See the figure at right.
- Every embedding is injective.
A function is surjective (onto) if every possible image is mapped to by at least one argument. In other words, every element in the codomain has non-empty preimage. Equivalently, a function is surjective if its range is equal to its codomain. A surjective function is a surjection. The formal definition is the following.
- The function f : A → B is surjective iff for all b ∈ B, there is a ∈ A such that f(a) = b.
- A function f : A → B is surjective if and only if it is right-invertible, that is, if and only if there is a function g: B → A such that f o g = identity function on B. (This statement is equivalent to the axiom of choice.)
- By collapsing all arguments mapping to a given fixed image, every surjection induces a bijection defined on a quotient of its domain. More precisely, every surjection f : A → B can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class [x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP o P(~). A dual factorisation is given for injections above.
- The composition of two surjections is again a surjection, but if g o f is surjective, then it can only be concluded that g is surjective. See the figure at right.
A function is bijective if it is both injective and surjective. A bijective function is a bijection (one-to-one correspondence). A function is bijective if and only if every possible image is mapped to by exactly one argument. This equivalent condition is formally expressed as follows.
- The function f : A → B is bijective iff for all b ∈ B, there is a unique a ∈ A such that f(a) = b.
- A function f : A → B is bijective if and only if it is invertible, that is, there is a function g: B → A such that g o f = identity function on A and f o g = identity function on B. This function maps each image to its unique preimage.
- The composition of two bijections is again a bijection, but if g o f is a bijection, then it can only be concluded that f is injective and g is surjective. (See the figure at right and the remarks above regarding injections and surjections.)
- The bijections from a set to itself form a group under composition, called the symmetric group.
Suppose you want to define what it means for two sets to "have the same number of elements". One way to do this is to say that two sets "have the same number of elements" if and only if all the elements of one set can be paired with the elements of the other, in such a way that each element is paired with exactly one element. Accordingly, we can define two sets to "have the same number of elements" if there is a bijection between them. We say that the two sets have the same cardinality.
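For finite sets these definitions can be checked mechanically, which is a useful way to test the concepts. The following Python sketch (an illustration, not part of the formal development) classifies a function, given as a dictionary from a finite domain together with a stated codomain, as injective, surjective, bijective, or neither.

    def classify(f, codomain):
        """f maps each domain element to its image; codomain is given as a set."""
        images = list(f.values())
        injective = len(images) == len(set(images))    # no two arguments share an image
        surjective = set(images) == set(codomain)      # every codomain element is hit
        if injective and surjective:
            return "bijective"
        if injective:
            return "injective but not surjective"
        if surjective:
            return "surjective but not injective"
        return "neither injective nor surjective"

    # x -> x + 1 on {1, 2, 3} with codomain {2, 3, 4}: a bijection.
    print(classify({1: 2, 2: 3, 3: 4}, {2, 3, 4}))
    # x -> x^2 on {-1, 0, 1} with codomain {0, 1, 4}: neither injective nor surjective.
    print(classify({-1: 1, 0: 0, 1: 1}, {0, 1, 4}))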
It is important to specify the domain and codomain of each function, since by changing these, functions which we think of as the same may differ in whether they are injective, surjective, or bijective.
Injective and surjective (bijective)
- For every set A, the identity function idA, and thus specifically the real function f : R → R : x ↦ x.
- The real function f : R → R : x ↦ x³, and thus also its inverse, the cube root function x ↦ x^(1/3).
- The exponential function exp : R → R+ : x ↦ e^x, and thus also its inverse, the natural logarithm ln : R+ → R.
Injective and non-surjective
- The exponential function exp : R → R : x ↦ e^x (injective, but not surjective, since its image consists only of the positive real numbers).
Non-injective and surjective
- The sine function sin : R → [−1, 1] : x ↦ sin x
Non-injective and non-surjective
- The sine function sin : R → R : x ↦ sin x
- For every function f, subset A of the domain and subset B of the codomain, we have A ⊂ f⁻¹(f(A)) and f(f⁻¹(B)) ⊂ B. If f is injective we have A = f⁻¹(f(A)), and if f is surjective we have f(f⁻¹(B)) = B.
- For every function h : A → C we can define a surjection H : A → h(A) : a → h(a) and an injection I : h(A) → C : a → a. It follows that h = I o H. This decomposition is unique up to isomorphism.
This terminology was originally coined by the Bourbaki group.
| http://www.exampleproblems.com/wiki/index.php/Bijection,_injection_and_surjection | 13
53 | The Minkowski diagram, also known as a spacetime diagram, was developed in 1908 by Hermann Minkowski and provides an illustration of the properties of space and time in the special theory of relativity. It allows a quantitative understanding of the corresponding phenomena like time dilation and length contraction without mathematical equations.
The term Minkowski diagram is used in both a generic and particular sense. In general, a Minkowski diagram is a graphic depiction of a portion of Minkowski space, often where space has been curtailed to a single dimension. These two-dimensional diagrams portray worldlines as curves in a plane that correspond to motion along the spatial axis. The vertical axis is usually temporal, and the units of measurement are taken such that the light cone at an event consists of the lines of slope plus or minus one through that event.
A particular Minkowski diagram illustrates the result of a Lorentz transformation. The origin corresponds to an event where a change of velocity takes place. The new worldline forms an angle α with the vertical, with α < π/4. The Lorentz transformation that moves the vertical to α also moves the horizontal by α. The horizontal corresponds to the usual notion of simultaneous events, for a stationary observer at the origin. After the Lorentz transformation the new simultaneous events lie on the α-inclined line. Whatever the magnitude of α, the line t = x forms the universal bisector.
In Minkowski’s 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlines. The first diagram used a branch of the unit hyperbola to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of Fitzgerald contraction. In 1914 Ludwik Silberstein included a diagram of "Minkowski’s representation of the Lorentz transformation". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the Principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams.
For simplification in Minkowski diagrams, usually only events in a one dimensional world are considered. Unlike common distance-time diagrams, the distance will be displayed on the x-axis (abscissa) and the time on the y-axis (ordinate). In this manner the events happening on a horizontal path in reality can be transferred easily to a horizontal line in the diagram. Objects plotted on the diagram can be thought of as moving from bottom to top as time passes. In this way each object, like an observer or a vehicle, follows in the diagram a certain curve which is called its world line.
Each point in the diagram represents a certain position in space and time. Such a position is called an event whether or not anything happens at that position.
For convenience, the (vertical) time axis represents, not t, but the corresponding quantity ct, where c =299,792,458 m/s is the speed of light. In this way, one second on the ordinate corresponds to a distance of 299,792,458 m on the abscissa. Due to x=ct for a photon passing through the origin to the right, its world line is a straight line with a slope of 45°, if the scales on both axes are chosen to be identical.
Path-time diagram in Newtonian physics
The black axes labelled x and ct on the adjoining diagram are the coordinate system of an observer which we will refer to as 'at rest', and who is positioned at x=0. His world line is identical with the time axis. Each parallel line to this axis would correspond also to an object at rest but at another position. The blue line, however, describes an object moving with constant speed v to the right, such as a moving observer.
This blue line labelled ct' may be interpreted as the time axis for the second observer. Together with the path axis (labeled x, which is identical for both observers) it represents his coordinate system. Both observers agree on the location of the origin of their coordinate systems. The axes for the moving observer are not perpendicular to each other and the scale on his time axis is stretched. To determine the coordinates of a certain event, two lines parallel to the two axes must be constructed passing through the event, and their intersections with the axes read off.
Determining position and time of the event A as an example in the diagram leads to the same time for both observers, as expected. Only for the position different values result, because the moving observer has approached the position of the event A since t=0. Generally stated, all events on a line parallel to the path axis (x axis) happen simultaneously for both observers. There is only one universal time t=t' which corresponds with the existence of only one common path axis. On the other hand due to two different time axes the observers usually measure different path coordinates for the same event. This graphical translation from x and t to x' and t' and vice versa is described mathematically by the so called Galilean transformation.
Minkowski diagram in special relativity
Albert Einstein discovered that the description above is not correct. Space and time have properties which lead to different rules for the translation of coordinates in case of moving observers. In particular, events which are estimated to happen simultaneously from the viewpoint of one observer, happen at different times for the other.
In the Minkowski diagram this relativity of simultaneity corresponds with the introduction of a separate path axis for the moving observer. Following the rule described above each observer interprets all events on a line parallel to his path axis as simultaneous. The sequence of events from the viewpoint of an observer can be illustrated graphically by shifting this line in the diagram from bottom to top.
If ct instead of t is assigned on the time axes, the angle α between both path axes will be identical with that between both time axes. This follows from the second postulate of special relativity, saying that the speed of light is the same for all observers, regardless of their relative motion (see below). α is given by
tan α = v/c
where v is the relative velocity of the two observers.
The corresponding translation from x and t to x' and t' and vice versa is described mathematically by the so called Lorentz transformation. Whatever space and time axes arise through such transformation, in a Minkowski diagram they correspond to conjugate diameters of a pair of hyperbolas.
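As a numerical illustration (the velocity value and the units with c = 1 are assumptions made for the example), the following Python sketch applies the Lorentz transformation to two events that are simultaneous for the resting observer and computes the axis tilt angle α:

```python
# Illustrative numerical sketch (not from the original article): applying the
# Lorentz transformation to events and computing the axis tilt angle alpha.
import math

c = 1.0                       # work in units where c = 1
v = 0.6 * c                   # assumed relative velocity of the primed observer
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def lorentz(ct, x):
    """Map event coordinates (ct, x) to the moving observer's (ct', x')."""
    ct_p = gamma * (ct - (v / c) * x)
    x_p = gamma * (x - (v / c) * ct)
    return ct_p, x_p

alpha = math.degrees(math.atan(v / c))      # tilt of the ct' (and x') axis: tan a = v/c
print(f"axis tilt alpha = {alpha:.2f} degrees")

# Two events simultaneous for the resting observer (same ct) ...
e1, e2 = (1.0, 0.0), (1.0, 2.0)
# ... are not simultaneous for the moving observer:
print(lorentz(*e1)[0], lorentz(*e2)[0])     # different ct' values
```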
For the graphical translation it has been taken into account that the scales on the inclined axes are different from the Newtonian case described above. To avoid this problem it is recommended that the whole diagram be deformed in such a way that the scales become identical for all axes, eliminating any need to stretch or compress either axis. This can be done by a compression in the direction of 45° or an expansion in the direction of 135° until the angle between the time axes becomes equal to the angle between the path axes. The angle β between the two time axes (which then also equals the angle between the two path axes) is given by
sin β = v/c.
In this symmetrical representation (also referred to as Loedel diagram, named after the physicist Enrique Loedel Palumbo who first introduced this symmetrised Minkowski representation), the coordinate systems of both observers are equivalent, since both observers are traveling at the same speed in opposite directions, relative to some third point of view.
However, Loedel diagrams become more complicated than Minkowski diagrams for more than three observers and therefore lose their pedagogical appeal.
Relativistic time dilation means that a clock moving relative to an observer is observed to run slower. In fact, time itself in the frame of the moving clock is observed to run slower. This can be read immediately from the adjoining Minkowski diagram. The observer whose reference frame is given by the black axes is assumed to move from the origin O towards A. The moving clock has the reference frame given by the blue axes and moves from O to B. For the black observer all events happening simultaneously with the event at A are located on a straight line parallel to its space axis. This line passes through A and B, so A and B are simultaneous from the reference frame of the observer with black axes. However, the clock that is moving relative to the black observer marks off time along the blue time axis. This is represented by the distance from O to B. Therefore, the observer at A with the black axes notices his or her clock as reading the distance from O to A, while he or she observes the clock moving relative to him or her to read the distance from O to B. Because the distance from O to B is smaller than the distance from O to A, he or she concludes that the time passed on the clock moving relative to him or her is smaller than that passed on his or her own clock.
A second observer, having moved together with the clock from O to B, will argue that the other clock has only reached C by this moment and therefore that clock runs slower. The reason for these apparently paradoxical statements is that the two observers determine differently which events at different locations happen simultaneously. Due to the principle of relativity the question of "who is right" has no answer and does not make sense.
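A hedged numerical sketch of the effect described above, with an assumed relative speed of 0.8c:

```python
# Illustrative check (assumed example values): the proper time read off along the
# moving clock's worldline OB is shorter than the coordinate time OA by 1/gamma.
import math

c = 1.0
v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

t_OA = 10.0                  # time from O to A on the resting observer's clock
t_OB = t_OA / gamma          # proper time elapsed on the moving clock
print(f"resting clock: {t_OA:.2f} s, moving clock: {t_OB:.2f} s")   # 10.00 s vs 6.00 s
```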
Relativistic length contraction means that the length of an object moving relative to an observer is measured to be decreased; indeed, space itself is contracted in that observer's frame. The observer is assumed again to move along the ct-axis. The world lines of the endpoints of an object moving relative to him are assumed to move along the ct'-axis and the parallel line passing through A and B. For this observer the endpoints of the object at t=0 are O and A. For a second observer moving together with the object, so that for him the object is at rest, it has the length OB at t'=0. Since OA < OB, the object is contracted for the first observer.
The second observer will argue that the first observer has evaluated the endpoints of the object at O and A respectively, and therefore at different times, leading to a wrong result due to his motion in the meantime. If the second observer investigates the length of another object with endpoints moving along the ct-axis and a parallel line passing through C and D, he concludes in the same way that this object is contracted from OD to OC. Each observer estimates objects moving with the other observer to be contracted. This apparently paradoxical situation is again a consequence of the relativity of simultaneity, as demonstrated by the analysis via the Minkowski diagram.
For all these considerations it was assumed that both observers take into account the speed of light and their distance to all events they see in order to determine the times at which these events actually happen from their point of view.
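A corresponding numerical sketch for length contraction, again with assumed example values:

```python
# Illustrative check (assumed example values): an object of rest length L0 is
# measured contracted to L0/gamma by an observer it moves past, matching OA < OB.
import math

c = 1.0
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

L0 = 5.0                     # rest length (OB in the object's own frame)
L = L0 / gamma               # length measured by the first observer (OA)
print(f"rest length {L0:.2f}, contracted length {L:.2f}")   # 5.00 vs 4.00
```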
Constancy of the speed of light
Another postulate of special relativity is the constancy of the speed of light. It says that any observer in an inertial reference frame measuring the vacuum speed of light relative to himself obtains the same value regardless of his own motion and that of the light source. This statement seems to be paradoxical, but it follows immediately from the differential equation describing the propagation of light (the wave equation), and the Minkowski diagram agrees. It also explains the result of the Michelson–Morley experiment, which was considered to be a mystery before the theory of relativity was discovered, when light was thought to be a wave travelling through an undetectable medium.
For world lines of photons passing the origin in different directions, x = ct and x = −ct hold. That means any position on such a world line corresponds with steps on the x- and ct-axis of equal absolute value. From the rule for reading off coordinates in a coordinate system with tilted axes it follows that the two world lines are the angle bisectors of the x- and ct-axis. The Minkowski diagram shows that they are the angle bisectors of the x'- and ct'-axis as well. That means both observers measure the same speed c for both photons.
Further coordinate systems corresponding to observers with arbitrary velocities can be added to this Minkowski diagram. For all these systems both photon world lines represent the angle bisectors of the axes. The more the relative speed approaches the speed of light, the more the axes approach the corresponding angle bisector. The path axis is always flatter and the time axis steeper than the photon world lines. The scales on both axes are always identical, but usually different from those of the other coordinate systems.
Speed of light and causality
Straight lines passing through the origin which are steeper than both photon world lines correspond to objects moving more slowly than the speed of light. If this applies to an object, then it applies from the viewpoint of all observers, because the world lines of these photons are the angle bisectors for any inertial reference frame. Therefore any point above the origin and between the world lines of both photons can be reached with a speed smaller than that of light and can have a cause-and-effect relationship with the origin. This area is the absolute future, because any event there happens later compared to the event represented by the origin regardless of the observer, which is obvious graphically from the Minkowski diagram.
Following the same argument the range below the origin and between the photon world lines is the absolute past relative to the origin. Any event there belongs definitely to the past and can be the cause of an effect at the origin.
The relationship between such pairs of events is called timelike, because they have a time distance greater than zero for all observers. A straight line connecting these two events is always the time axis of a possible observer for whom they happen at the same place. Two events which can be connected only at exactly the speed of light are called lightlike.
In principle a further dimension of space can be added to the Minkowski diagram leading to a three-dimensional representation. In this case the ranges of future and past become cones with apexes touching each other at the origin. They are called light cones.
The speed of light as a limit
Following the same argument, all straight lines passing through the origin which are more nearly horizontal than the photon world lines would correspond to objects or signals moving faster than light, regardless of the speed of the observer. Therefore no event outside the light cones can be reached from the origin, not even by a light signal, nor by any object or signal moving at less than the speed of light. Such pairs of events are called spacelike because they have a finite spatial distance different from zero for all observers. On the other hand, a straight line connecting such events is always the space coordinate axis of a possible observer for whom they happen at the same time. By a slight variation of the velocity of this coordinate system in both directions it is always possible to find two inertial reference frames whose observers estimate the chronological order of these events to be different.
Therefore an object moving faster than light, say from O to A in the adjoining diagram, would imply that, for any observer watching the object moving from O to A, there can be found another observer (moving at less than the speed of light with respect to the first) for whom the object moves from A to O. The question of which observer is right has no unique answer, and therefore makes no physical sense. Any such moving object or signal would violate the principle of causality.
Also, any general technical means of sending signals faster than light would permit information to be sent into the originator's own past. In the diagram, an observer at O in the x-ct-system sends a message moving faster than light to A. At A it is received by another observer, moving so as to be in the x'-ct'-system, who sends it back, again faster than light by the same technology, arriving at B. But B is in the past relative to O. The absurdity of this process becomes obvious when both observers subsequently confirm that they received no message at all but all messages were directed towards the other observer as can be seen graphically in the Minkowski diagram. Indeed, if it was possible to accelerate an observer to the speed of light, the space and time axes would coincide with their angle bisector. The coordinate system would collapse.
These considerations show that the speed of light as a limit is a consequence of the properties of spacetime, and not of the properties of objects such as technologically imperfect space ships. The prohibition of faster-than-light motion actually has nothing in particular to do with electromagnetic waves or light, but depends on the structure of spacetime.
When Taylor and Wheeler composed Spacetime Physics (1966), they did not use the term "Minkowski diagram" for their spacetime geometry. Instead they included an acknowledgement of Minkowski’s contribution to philosophy by the totality of his innovation of 1908.
As an eponym, the term Minkowski diagram is subject to Stigler’s law of eponymy, namely that Minkowski is wrongly designated as originator. The earlier works of Alexander Macfarlane contain algebra and diagrams that correspond well with the Minkowski diagram. See for instance the plate of figures in Proceedings of the Royal Society in Edinburgh for 1900. Macfarlane was building on what one sees in William Kingdon Clifford’s Elements of Dynamic (1878), page 90.
When abstracted to a line drawing, then any figure showing conjugate hyperbolas, with a selection of conjugate diameters, falls into this category. Students making drawings to accompany the exercises in George Salmon’s A Treatise on Conic Sections (1900) at pages 165–71 (on conjugate diameters) will be making Minkowski diagrams.
- Mermin (1968) Chapter 17
- See Vladimir Karapetoff
- Silberstein (1914) The Theory of Relativity, page 131
- Taylor/Wheeler (1966) page 37: "Minkowski's insight is central to the understanding of the physical world. It focuses attention on those quantities, such as interval, which are the same in all frames of reference. It brings out the relative character of quantities, such as velocity, energy, time, distance, which depend on the frame of reference."
- Herman Minkowski (1908) "Raum und Zeit", (German Wikisource).
- Various English translations on Wikisource: Space and Time.
- Anthony French (1968) Special Relativity, pages 82 & 83, New York: W W Norton & Company.
- E.N. Glass (1975) "Lorentz boosts and Minkowski diagrams" American Journal of Physics 43:1013,4.
- N. David Mermin (1968) Space and Time in Special Relativity, Chapter 17 Minkowski diagrams: The Geometry of Spacetime, pages 155–99 McGraw-Hill.
- Rindler, Wolfgang (2001). Relativity: Special, General and Cosmological. Oxford University Press. ISBN 0-19-850836-0.
- W.G.V. Rosser (1964) An Introduction to the Theory of Relativity, page 256, Figure 6.4, London: Butterworths.
- Edwin F. Taylor and John Archibald Wheeler (1963) Spacetime Physics, pages 27 to 38, New York: W. H. Freeman and Company, Second edition (1992).
- Walter, Scott (1999), "The non-Euclidean style of Minkowskian relativity", in J. Gray, The Symbolic Universe: Geometry and Physics, Oxford University Press, pp. 91–127 (see page 10 of e-link)
| http://en.wikipedia.org/wiki/Minkowski_diagram | 13
36 | Scale of temperature
Formal description
According to the zeroth law of thermodynamics, being in thermal equilibrium is an equivalence relation. Thus all thermal systems may be divided into a quotient set by this equivalence relation, denoted below as M. Assuming the set M has at most the cardinality c of the continuum, one can construct an injective function ƒ: M → R, by which every thermal system has a number associated with it such that two thermal systems have the same value if and only if they are in thermal equilibrium. This is clearly the property of temperature, and the specific way of assigning numerical values as temperature is called a scale of temperature. In practical terms, a temperature scale is usually based on a single physical property of a simple thermodynamic system, called a thermometer, that defines a scaling function mapping the temperature to the measurable thermometric parameter. Such temperature scales that are purely based on measurement are called empirical temperature scales.
The second law of thermodynamics provides a fundamental, natural definition of thermodynamic temperature starting with a null point of absolute zero. A scale for thermodynamic temperature is established similarly to the empirical temperature scales, however, needing only one additional fixing point.
Empirical scales
Empirical scales are based on the measurement of physical parameters that express the property of interest to be measured through some formal, most commonly a simple linear, functional relationship. For the measurement of temperature, the formal definition of thermal equilibrium in terms of the thermodynamic coordinate spaces of thermodynamic systems, expressed in the zeroth law of thermodynamics, provides the framework to measure temperature.
All temperature scales, including the modern thermodynamic temperature scale used in the International System of Units, are calibrated according to thermal properties of a particular substance or device. Typically, this is established by fixing two well-defined temperature points and defining temperature increments via a linear function of the response of the thermometric device. For example, both the old Celsius scale and Fahrenheit scale were originally based on the linear expansion of a narrow mercury column within a limited range of temperature, each using different reference points and scale increments.
Different empirical scales may not be compatible with each other, except for small regions of temperature overlap. If an alcohol thermometer and a mercury thermometer have the same two fixed points, namely the freezing and boiling point of water, their readings will not agree with each other except at the fixed points, as a linear 1:1 relationship of expansion between any two thermometric substances is not guaranteed.
Empirical temperature scales are not reflective of the fundamental, microscopic laws of matter. Temperature is a universal attribute of matter, yet empirical scales map a narrow range onto a scale that is known to have a useful functional form for a particular application. Thus, their range is limited. The working material only exists in a form under certain circumstances, beyond which it no longer can serve as a scale. For example, mercury freezes below 234.32 K, so temperature lower than that cannot be measured in a scale based on mercury. Even ITS-90, which interpolates among different ranges of temperature, has only a range of 0.65 K to approximately 1358 K (−272.5 °C to 1085 °C).
Ideal gas scale
When pressure approaches zero, all real gases behave like an ideal gas, that is, pV of a mole of gas depends only on temperature. Therefore we can design a scale with pV as its argument. Of course any bijective function would do, but for convenience's sake a linear function is best. Therefore we define it as
T = 273.16 K · lim(p→0) (pV) / (pV)TP
where (pV)TP denotes the limiting value for the same amount of gas at the triple point of water.
The ideal gas scale is in some sense a "mixed" scale. It relies on the universal properties of gases, a big advance over relying on a particular substance. But it is still empirical, since it puts gases in a special position and thus has limited applicability—at sufficiently low temperatures no gas can exist. One distinguishing characteristic of the ideal gas scale, however, is that it precisely equals the thermodynamic scale wherever both are defined (see below).
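As an illustration of how such a scale could be realized in practice (the gas readings below are invented), a Python sketch that extrapolates pV to zero pressure and compares with a reading taken at the triple point of water:

```python
# Illustrative sketch (made-up data): estimating a temperature on the ideal gas
# scale as T = 273.16 K * lim(p->0) (pV)/(pV)_TP.
T_TRIPLE = 273.16            # kelvin, by definition

# (pressure, pV) pairs for the same gas sample at the unknown temperature and at
# the triple point; the small linear drift mimics non-ideal behaviour.
pv_unknown = [(100.0, 310.8), (50.0, 310.4), (10.0, 310.08)]
pv_triple  = [(100.0, 227.9), (50.0, 227.6), (10.0, 227.36)]

def extrapolate_to_zero(points):
    """Linear extrapolation of pV to p = 0 from the two lowest-pressure points."""
    (p1, y1), (p2, y2) = sorted(points)[:2]
    slope = (y2 - y1) / (p2 - p1)
    return y1 - slope * p1

T = T_TRIPLE * extrapolate_to_zero(pv_unknown) / extrapolate_to_zero(pv_triple)
print(f"ideal gas temperature: {T:.2f} K")
```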
International temperature scale of 1990
ITS-90 is designed to represent the thermodynamic temperature scale (referencing absolute zero) as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs, PRTs or Platinum RTDs) and monochromatic radiation thermometers.
Although the Kelvin and Celsius scales are defined using absolute zero (0 K) and the triple point of water (273.16 K and 0.01 °C), it is impractical to use this definition at temperatures that are very different from the triple point of water. Accordingly, ITS–90 uses numerous defined points, all of which are based on various thermodynamic equilibrium states of fourteen pure chemical elements and one compound (water). Most of the defined points are based on a phase transition; specifically the melting/freezing point of a pure chemical element. However, the deepest cryogenic points are based exclusively on the vapor pressure/temperature relationship of helium and its isotopes whereas the remainder of its cold points (those less than room temperature) are based on triple points. Examples of other defining points are the triple point of hydrogen (−259.3467 °C) and the freezing point of aluminum (660.323 °C).
Thermometers calibrated per ITS–90 use complex mathematical formulas to interpolate between its defined points. ITS–90 specifies rigorous control over variables to ensure reproducibility from lab to lab. For instance, the small effect that atmospheric pressure has upon the various melting points is compensated for (an effect that typically amounts to no more than half a millikelvin across the different altitudes and barometric pressures likely to be encountered). The standard even compensates for the pressure effect due to how deeply the temperature probe is immersed into the sample. ITS–90 also draws a distinction between “freezing” and “melting” points. The distinction depends on whether heat is going into (melting) or out of (freezing) the sample when the measurement is made. Only gallium is measured while melting, all the other metals are measured while the samples are freezing.
There are often small differences between measurements calibrated per ITS–90 and thermodynamic temperature. For instance, precise measurements show that the boiling point of VSMOW water under one standard atmosphere of pressure is actually 373.1339 K (99.9839 °C) when adhering strictly to the two-point definition of thermodynamic temperature. When calibrated to ITS–90, where one must interpolate between the defining points of gallium and indium, the boiling point of VSMOW water is about 10 mK less, about 99.974 °C. The virtue of ITS–90 is that another lab in another part of the world will measure the very same temperature with ease due to the advantages of a comprehensive international calibration standard featuring many conveniently spaced, reproducible, defining points spanning a wide range of temperatures.
Celsius scale
Celsius (known until 1948 as centigrade) is a temperature scale that is named after the Swedish astronomer Anders Celsius (1701–1744), who developed a similar temperature scale two years before his death. The degree Celsius (°C) can refer to a specific temperature on the Celsius scale as well as a unit to indicate a temperature interval (a difference between two temperatures or an uncertainty).
From 1744 until 1954, 0 °C was defined as the freezing point of water and 100 °C was defined as the boiling point of water, both at a pressure of one standard atmosphere. Although these defining correlations are commonly taught in schools today, by international agreement the unit "degree Celsius" and the Celsius scale are currently defined by two different points: absolute zero, and the triple point of VSMOW (specially prepared water). This definition also precisely relates the Celsius scale to the Kelvin scale, which defines the SI base unit of thermodynamic temperature (symbol: K). Absolute zero, the hypothetical but unattainable temperature at which matter exhibits zero entropy, is defined as being precisely 0 K and −273.15 °C. The temperature value of the triple point of water is defined as being precisely 273.16 K and 0.01 °C.
This definition fixes the magnitude of both the degree Celsius and the kelvin as precisely 1 part in 273.16 parts, the difference between absolute zero and the triple point of water. Thus, it sets the magnitude of one degree Celsius and that of one kelvin as exactly the same. Additionally, it establishes the difference between the two scales' null points as being precisely 273.15 degrees Celsius (−273.15 °C = 0 K and 0 °C = 273.15 K).
Thermodynamic scale
The thermodynamic scale differs from empirical scales in that it is absolute. It is based on the fundamental laws of thermodynamics or statistical mechanics instead of some arbitrarily chosen working material. Besides, it covers the full range of temperatures and has a simple relation to microscopic quantities like the average kinetic energy of particles (see equipartition theorem). In experiments, ITS-90 is used to approximate the thermodynamic scale because it is simpler to realize.
Lord Kelvin devised the thermodynamic scale based on the efficiency of heat engines as shown below:
The efficiency of an engine is the work divided by the heat introduced to the system, or
η = wcy/qH = (qH − qC)/qH = 1 − qC/qH
where wcy is the work done per cycle, qH is the heat taken from the hot reservoir and qC is the heat rejected to the cold reservoir. Thus, the efficiency depends only on qC/qH.
Because of Carnot's theorem, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency, meaning that the efficiency is a function of the temperatures only:
qC/qH = f(T1, T2)
In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. This can only be the case if
f(T1, T3) = f(T1, T2) f(T2, T3)
Specializing to the case that T1 is a fixed reference temperature: the temperature of the triple point of water. Then for any T2 and T3,
f(T2, T3) = f(T1, T3)/f(T1, T2) = 273.16 · f(T1, T3) / (273.16 · f(T1, T2))
Therefore, if thermodynamic temperature is defined by
T = 273.16 · f(T1, T)
then the function f, viewed as a function of thermodynamic temperature, is
f(T2, T3) = T3/T2
and the reference temperature T1 has the value 273.16. (Of course any reference temperature and any positive numerical value could be used—the choice here corresponds to the Kelvin scale.)
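A small numerical check of the derivation above, using arbitrary example temperatures:

```python
# Illustrative numerical check (made-up temperatures): with f(T1, T2) = T2/T1 the
# composition property f(T1,T3) = f(T1,T2) * f(T2,T3) holds, and the Carnot
# efficiency between a hot and a cold reservoir is 1 - T_cold/T_hot.
T1, T2, T3 = 500.0, 400.0, 300.0      # kelvin

f = lambda Ta, Tb: Tb / Ta
assert abs(f(T1, T3) - f(T1, T2) * f(T2, T3)) < 1e-12

eta = 1.0 - f(T1, T3)                 # reversible engine between T1 and T3
print(f"Carnot efficiency between {T1:.0f} K and {T3:.0f} K: {eta:.2f}")   # 0.40
```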
Equality to ideal gas scale
It follows immediately that
qC/qH = f(T1, T2) = T2/T1
so the ratio of exchanged heats equals the ratio of the corresponding thermodynamic temperatures.
Substituting this back into the efficiency formula above gives a relationship for the efficiency in terms of temperature:
η = 1 − qC/qH = 1 − T2/T1
This is identical to the efficiency formula for the Carnot cycle, which effectively employs the ideal gas scale. This means that the two scales agree numerically at every point.
Conversion table between the different temperature units
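As a minimal stand-in for such a table, a Python sketch of the standard conversions between kelvin, degrees Celsius and degrees Fahrenheit:

```python
# Minimal sketch of the standard temperature-unit conversions.
def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

def kelvin_to_fahrenheit(k):
    return celsius_to_fahrenheit(kelvin_to_celsius(k))

print(kelvin_to_celsius(273.16))        # ~0.01 (triple point of water)
print(celsius_to_fahrenheit(100.0))     # 212.0 (boiling point at 1 atm)
print(kelvin_to_fahrenheit(0.0))        # -459.67 (absolute zero)
```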
- H A Buchdahl. "2.Zeroth law". The concepts of classical thermodynamics. Cambridge U.P.1966. ISBN 978-0-521-04359-5.
- Giuseppe Morandi; F Napoli, E Ercolessi. Statistical mechanics : an intermediate course. Singapore ; River Edge, N.J. : World Scientific, ©2001. pp. 6~7. ISBN 978-981-02-4477-4.
- Walter Greiner; Ludwig Neise,Horst Stöcker. Thermodynamics and statistical mechanics. New York [u.a.] : Springer, 2004. pp. 6~7.
- Carl S. Helrich (2009). Modern Thermodynamics with Statistical Mechanics. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-540-85417-3.
- "Thermometers and the Ideal Gas Temperature Scale".
- "SI brochure, section 22.214.171.124". International Bureau of Weights and Measures. Retrieved 9 May 2008.
- "Essentials of the SI: Base & derived units". Retrieved 9 May 2008. | http://en.wikipedia.org/wiki/Scale_of_temperature | 13 |
20 | Control Systems/Feedback Loops
A feedback loop is a common and powerful tool when designing a control system. Feedback loops take the system output into consideration, which enables the system to adjust its performance to meet a desired output response.
When talking about control systems it is important to keep in mind that engineers typically are given existing systems such as actuators, sensors, motors, and other devices with set parameters, and are asked to adjust the performance of those systems. In many cases, it may not be possible to open the system (the "plant") and adjust it from the inside: modifications need to be made external to the system to force the system response to act as desired. This is performed by adding controllers, compensators, and feedback structures to the system.
Basic Feedback Structure
This is a basic feedback structure. Here, we are using the output value of the system to help us prepare the next output value. In this way, we can create systems that correct errors. Here we see a feedback loop with a value of one. We call this a unity feedback.
Here is a list of some relevant vocabulary, that will be used in the following sections:
- The term "Plant" is a carry-over term from chemical engineering to refer to the main system process. The plant is the preexisting system that does not (without the aid of a controller or a compensator) meet the given specifications. Plants are usually given "as is", and are not changeable. In the picture above, the plant is denoted with a P.
- A controller, or a "compensator" is an additional system that is added to the plant to control the operation of the plant. The system can have multiple compensators, and they can appear anywhere in the system: Before the pick-off node, after the summer, before or after the plant, and in the feedback loop. In the picture above, our compensator is denoted with a C.
- A summer is a symbol on a system diagram, (denoted above with parenthesis) that conceptually adds two or more input signals, and produces a single sum output signal.
- Pick-off node
- A pickoff node is simply a fancy term for a split in a wire.
- Forward Path
- The forward path in the feedback loop is the path after the summer, that travels through the plant and towards the system output.
- Reverse Path
- The reverse path is the path after the pick-off node, that loops back to the beginning of the system. This is also known as the "feedback path".
- Unity feedback
- When the multiplicative value of the feedback path is 1.
Negative vs Positive Feedback
It turns out that negative feedback is almost always the most useful type of feedback. When we subtract the value of the output from the value of the input (our desired value), we get a value called the error signal. The error signal shows us how far off our output is from our desired input.
Positive feedback has the property that signals tend to reinforce themselves, and grow larger. In a positive feedback system, noise from the system is added back to the input, and that in turn produces more noise. As an example of a positive feedback system, consider an audio amplification system with a speaker and a microphone. Placing the microphone near the speaker creates a positive feedback loop, and the result is a sound that grows louder and louder. Because the majority of noise in an electrical system is high-frequency, the sound output of the system becomes high-pitched.
Example: State-Space Equation
In the previous chapter, we showed you this picture:
Now, we will derive the I/O relationship into the state-space equations. If we examine the inner-most feedback loop, we can see that the forward path has an integrator system, 1/s, and the feedback loop has the matrix value A. If we take the transfer function only of this loop, we get:
Tinner(s) = (sI − A)^−1
Pre-multiplying by the factor B, and post-multiplying by C, we get the transfer function of the entire lower-half of the loop:
Tlower(s) = C(sI − A)^−1 B
We can see that the upper path (D) and the lower path Tlower are added together to produce the final result:
T(s) = C(sI − A)^−1 B + D
Now, for an alternate method, we can assume that x' is the value of the inner-feedback loop, right before the integrator. This makes sense, since the integral of x' should be x (which we see from the diagram that it is). Solving for x', with an input of u, we get:
x' = Ax + Bu
This is because the value coming from the feedback branch is equal to the value x times the feedback loop matrix A, and the value coming from the left of the summer is the input u times the matrix B.
If we keep things in terms of x and u, we can see that the system output is the sum of u times the feed-forward value D, and the value of x times the value C:
y = Cx + Du
These last two equations are precisely the state-space equations of our system.
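As an illustration (the matrices are arbitrary example values), a Python sketch that evaluates the transfer function T(s) = C(sI − A)^−1 B + D derived above:

```python
# Illustrative sketch (made-up matrices): evaluating T(s) = C (sI - A)^{-1} B + D
# for a simple second-order plant.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def transfer_function(s):
    """Value of T(s) at a complex frequency s."""
    return C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

print(transfer_function(1j))      # frequency response at omega = 1 rad/s
```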
Feedback Loop Transfer Function
We can solve for the output of the system by using a series of equations:
E(s) = X(s) − Y(s)
Y(s) = G(s)E(s)
and when we solve for Y(s) we get:
[Feedback Transfer Function]
Y(s) = G(s)/(1 + G(s)) · X(s)
The reader is encouraged to use the above equations to derive the result by themselves.
The function E(s) is known as the error signal. The error signal is the difference between the system output (Y(s)), and the system input (X(s)). Notice that the error signal is now the direct input to the system G(s). X(s) is now called the reference input. The purpose of the negative feedback loop is to make the system output equal to the system input, by identifying large differences between X(s) and Y(s) and correcting for them.
Here is a simple example of reference inputs and feedback systems:
There is an elevator in a certain building with 5 floors. Pressing button "1" will take you to the first floor, and pressing button "5" will take you to the fifth floor, etc. For reasons of simplicity, only one button can be pressed at a time.
Pressing a particular button is the reference input of the system. Pressing "1" gives the system a reference input of 1, pressing "2" gives the system a reference input of 2, etc. The elevator system then, tries to make the output (the physical floor location of the elevator) match the reference input (the button pressed in the elevator). The error signal, e(t), represents the difference between the reference input x(t), and the physical location of the elevator at time t, y(t).
Let's say that the elevator is on the first floor, and the button "5" is pressed at time t0. The reference input then becomes a step function:
x(t) = 1 + 4u(t − t0)
where u(t) denotes the unit step function.
Where we are measuring in units of "floors". At time t0, the error signal is:
e(t0) = 5 − 1 = 4
Which means that the elevator needs to travel upwards 4 more floors. At time t1, when the elevator is at the second floor, the error signal is:
e(t1) = 5 − 2 = 3
Which means the elevator has 3 more floors to go. Finally, at time t4, when the elevator reaches the top, the error signal is:
e(t4) = 5 − 5 = 0
And when the error signal is zero, the elevator stops moving. In essence, we can define three cases:
- e(t) is positive: In this case, the elevator goes up one floor, and checks again.
- e(t) is zero: The elevator stops.
- e(t) is negative: The elevator goes down one floor, and checks again.
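A hedged Python sketch of this feedback rule (the function name and the one-floor-per-step behaviour are assumptions made for the illustration):

```python
# Illustrative simulation of the elevator feedback rule above: move one floor per
# time step in the direction of the error e = x - y, and stop when e = 0.
def simulate_elevator(start_floor, button, max_steps=20):
    y = start_floor                      # current floor (system output)
    x = button                           # reference input
    history = [y]
    for _ in range(max_steps):
        e = x - y                        # error signal
        if e == 0:
            break                        # elevator stops
        y += 1 if e > 0 else -1          # go up or down one floor and check again
        history.append(y)
    return history

print(simulate_elevator(start_floor=1, button=5))   # [1, 2, 3, 4, 5]
```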
State-Space Feedback Loops
In the state-space representation, the plant is typically defined by the state-space equations:
x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
The plant is considered to be pre-existing, and the matrices A, B, C, and D are considered to be internal to the plant (and therefore unchangeable). Also, in a typical system, the state variables are either fictional (in the sense of dummy-variables), or are not measurable. For these reasons, we need to add external components, such as a gain element, or a feedback element to the plant to enhance performance.
Consider the addition of a gain matrix K installed at the input of the plant, and a negative feedback element F that is multiplied by the system output y, and is added to the input signal of the plant. There are two cases:
- The feedback element F is subtracted from the input before multiplication of the K gain matrix.
- The feedback element F is subtracted from the input after multiplication of the K gain matrix.
In case 1, the feedback element F is subtracted from the input before the multiplicative gain K is applied. If v is the input to the entire system, then we can define u as:
u = K(v − Fy)
In case 2, the feedback element F is subtracted from the input after the multiplicative gain K is applied. If v is the input to the entire system, then we can define u as:
u = Kv − Fy
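As an illustration of the two cases (the matrices and gains below are arbitrary, and D is assumed to be zero), a Python sketch computing the resulting closed-loop state matrices:

```python
# Illustrative sketch (made-up matrices, D assumed zero): closed-loop state matrix
# for the two arrangements described above,
#   case 1: u = K (v - F y)  ->  x' = (A - B K F C) x + B K v
#   case 2: u = K v - F y    ->  x' = (A - B F C) x + B K v
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0]])
F = np.array([[0.5]])

A_case1 = A - B @ K @ F @ C
A_case2 = A - B @ F @ C
print(np.linalg.eigvals(A_case1))   # closed-loop poles, case 1
print(np.linalg.eigvals(A_case2))   # closed-loop poles, case 2
```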
Open Loop vs Closed Loop
Let's say that we have the generalized system shown above. The top part, Gp(s) represents all the systems and all the controllers on the forward path. The bottom part, Gb(s) represents all the feedback processing elements of the system. The letter "K" in the beginning of the system is called the Gain. We will talk about the gain more in later chapters. We can define the Closed-Loop Transfer Function as follows:
[Closed-Loop Transfer Function]
Hcl(s) = KGp(s) / (1 + KGp(s)Gb(s))
If we "open" the loop, and break the feedback node, we can define the Open-Loop Transfer Function, as:
[Open-Loop Transfer Function]
Hol(s) = KGp(s)Gb(s)
We can redefine the closed-loop transfer function in terms of this open-loop transfer function:
Hcl(s) = Hol(s) / (Gb(s)(1 + Hol(s)))
These results are important, and they will be used without further explanation or derivation throughout the rest of the book.
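A small numerical check of these relations, using arbitrary first-order blocks for Gp(s) and Gb(s):

```python
# Illustrative numerical check (made-up blocks): the closed-loop gain
# K*Gp/(1 + K*Gp*Gb) equals Hol/(Gb*(1 + Hol)) with Hol = K*Gp*Gb.
K = 4.0
Gp = lambda s: 1.0 / (s + 1.0)          # plant
Gb = lambda s: 1.0 / (s + 10.0)         # feedback block

def H_closed(s):
    return K * Gp(s) / (1.0 + K * Gp(s) * Gb(s))

def H_open(s):
    return K * Gp(s) * Gb(s)

s = 2.0 + 3.0j
assert abs(H_closed(s) - H_open(s) / (Gb(s) * (1.0 + H_open(s)))) < 1e-12
print(H_closed(s))
```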
Placement of a Controller
There are a number of different places where we could place an additional controller.
- In front of the system, before the feedback loop.
- Inside the feedback loop, in the forward path, before the plant.
- In the forward path, after the plant.
- In the feedback loop, in the reverse path.
- After the feedback loop.
Each location has certain benefits and problems, and hopefully we will get a chance to talk about all of them.
The damping ratio is denoted by the symbol zeta (ζ). The damping ratio gives us an idea about the nature of the transient response, detailing the amount of overshoot and oscillation that the system will undergo, completely independently of time scaling. If zeta is:
- zero, the system is undamped;
- zeta < 1, the system is underdamped;
- zeta = 1, the system is critically damped;
- zeta > 1, the system is overdamped.
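A minimal Python helper (not part of the original text) implementing this classification:

```python
# Illustrative helper implementing the damping-ratio classification listed above.
def classify_damping(zeta):
    if zeta < 0:
        raise ValueError("damping ratio is assumed non-negative here")
    if zeta == 0:
        return "undamped"
    if zeta < 1:
        return "underdamped"
    if zeta == 1:
        return "critically damped"
    return "overdamped"

for z in (0.0, 0.5, 1.0, 2.0):
    print(z, classify_damping(z))
```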
Zeta is used in conjunction with the natural frequency to determine system properties. To find the zeta value you must first find the natural response! | http://en.m.wikibooks.org/wiki/Control_Systems/Feedback_Loops | 13 |
100 | In mathematics, a ratio is a relationship between two numbers of the same kind (e.g., objects, persons, students, spoonfuls, units of whatever identical dimension), usually expressed as "a to b" or a:b, sometimes expressed arithmetically as a dimensionless quotient of the two that explicitly indicates how many times the first number contains the second (not necessarily an integer).
In layman's terms a ratio represents, for every amount of one thing, how much there is of another thing. For example, supposing one has 8 oranges and 6 lemons in a bowl of fruit, the ratio of oranges to lemons would be 4:3 (which is equivalent to 8:6) while the ratio of lemons to oranges would be 3:4. Additionally, the ratio of oranges to the total amount of fruit is 4:7 (equivalent to 8:14). The 4:7 ratio can be further converted to a fraction of 4/7 to represent how much of the fruit is oranges.
Notation and terminology
The ratio of numbers A and B can be expressed as:
- the ratio of A to B
- A is to B
- A fraction (rational number) that is the quotient of A divided by B
The proportion expressing the equality of the ratios A:B and C:D is written A:B=C:D or A:B::C:D. This latter form, when spoken or written in the English language, is often expressed as
- A is to B as C is to D.
Again, A, B, C, D are called the terms of the proportion. A and D are called the extremes, and B and C are called the means. The equality of three or more proportions is called a continued proportion.
Ratios are sometimes used with three or more terms. The dimensions of a two by four that is ten inches long are 2:4:10.
History and etymology
It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word "ratio" to the Ancient Greek λόγος (logos). Early translators rendered this into Latin as ratio ("reason"; as in the word "rational"). (A rational number may be expressed as the quotient of two integers.) A more modern interpretation of Euclid's meaning is more akin to computation or reckoning. Medieval writers used the word proportio ("proportion") to indicate ratio and proportionalitas ("proportionality") for the equality of ratios.
Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The Pythagoreans' conception of number included only what would today be called rational numbers, casting doubt on the validity of the theory in geometry where, as the Pythagoreans also discovered, incommensurable ratios (corresponding to irrational numbers) exist. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables.
The existence of multiple theories seems unnecessarily complex to modern sensibility since ratios are, to a large extent, identified with quotients. This is a comparatively recent development however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients. The reasons for this are twofold. First, there was the previously mentioned reluctance to accept irrational numbers as true numbers. Second, the lack of a widely used symbolism to replace the already established terminology of ratios delayed the full acceptance of fractions as alternative until the 16th century.
Euclid's definitions
Book V of Euclid's Elements has 18 definitions, all of which relate to ratios. In addition, Euclid uses ideas that were in such common usage that he did not include definitions for them. The first two definitions say that a part of a quantity is another quantity that "measures" it and conversely, a multiple of a quantity is another quantity that it measures. In modern terminology, this means that a multiple of a quantity is that quantity multiplied by an integer greater than one—and a part of a quantity (meaning aliquot part) is a part that, when multiplied by an integer greater than one, gives the quantity.
Euclid does not define the term "measure" as used here. However, one may infer that if a quantity is taken as a unit of measurement, and a second quantity is given as an integral number of these units, then the first quantity measures the second. Note that these definitions are repeated, nearly word for word, as definitions 3 and 5 in book VII.
Definition 3 describes what a ratio is in a general way. It is not rigorous in a mathematical sense and some have ascribed it to Euclid's editors rather than Euclid himself. Euclid defines a ratio as between two quantities of the same type, so by this definition the ratios of two lengths or of two areas are defined, but not the ratio of a length and an area. Definition 4 makes this more rigorous. It states that a ratio of two quantities exists when there is a multiple of each that exceeds the other. In modern notation, a ratio exists between quantities p and q if there exist integers m and n so that mp>q and nq>m. This condition is known as the Archimedean property.
Definition 5 is the most complex and difficult. It defines what it means for two ratios to be equal. Today, this can be done by simply stating that ratios are equal when the quotients of the terms are equal, but Euclid did not accept the existence of the quotients of incommensurables, so such a definition would have been meaningless to him. Thus, a more subtle definition is needed where quantities involved are not measured directly to one another. Though it may not be possible to assign a rational value to a ratio, it is possible to compare a ratio with a rational number. Specifically, given two quantities, p and q, and a rational number m/n we can say that the ratio of p to q is less than, equal to, or greater than m/n when np is less than, equal to, or greater than mq respectively. Euclid's definition of equality can be stated as that two ratios are equal when they behave identically with respect to being less than, equal to, or greater than any rational number. In modern notation this says that given quantities p, q, r and s, then p:q::r:s if for any positive integers m and n, np<mq, np=mq, np>mq according as nr<ms, nr=ms, nr>ms respectively. There is a remarkable similarity between this definition and the theory of Dedekind cuts used in the modern definition of irrational numbers.
Definition 6 says that quantities that have the same ratio are proportional or in proportion. Euclid uses the Greek ἀναλόγον (analogon), this has the same root as λόγος and is related to the English word "analog".
Definition 7 defines what it means for one ratio to be less than or greater than another and is based on the ideas present in definition 5. In modern notation it says that given quantities p, q, r and s, then p:q>r:s if there are positive integers m and n so that np>mq and nr≤ms.
As with definition 3, definition 8 is regarded by some as being a later insertion by Euclid's editors. It defines three terms p, q and r to be in proportion when p:q::q:r. This is extended to 4 terms p, q, r and s as p:q::q:r::r:s, and so on. Sequences that have the property that the ratios of consecutive terms are equal are called Geometric progressions. Definitions 9 and 10 apply this, saying that if p, q and r are in proportion then p:r is the duplicate ratio of p:q and if p, q, r and s are in proportion then p:s is the triplicate ratio of p:q. If p, q and r are in proportion then q is called a mean proportional to (or the geometric mean of) p and r. Similarly, if p, q, r and s are in proportion then q and r are called two mean proportionals to p and s.
The quantities being compared in a ratio might be physical quantities such as speed or length, or numbers of objects, or amounts of particular substances. A common example of the last case is the weight ratio of water to cement used in concrete, which is commonly stated as 1:4. This means that the weight of cement used is four times the weight of water used. It does not say anything about the total amounts of cement and water used, nor the amount of concrete being made. Equivalently it could be said that the ratio of cement to water is 4:1, that there is 4 times as much cement as water, or that there is a quarter (1/4) as much water as cement.
If there are 2 oranges and 3 apples, the ratio of oranges to apples is 2:3, and the ratio of oranges to the total number of pieces of fruit is 2:5. These ratios can also be expressed in fraction form: there are 2/3 as many oranges as apples, and 2/5 of the pieces of fruit are oranges. If orange juice concentrate is to be diluted with water in the ratio 1:4, then one part of concentrate is mixed with four parts of water, giving five parts total; the amount of orange juice concentrate is 1/4 the amount of water, while the amount of orange juice concentrate is 1/5 of the total liquid. In both ratios and fractions, it is important to be clear what is being compared to what, and beginners often make mistakes for this reason.
Number of terms
In general, when comparing the quantities of a two-quantity ratio, this can be expressed as a fraction derived from the ratio. For example, in a ratio of 2:3, the amount/size/volume/number of the first quantity is 2/3 that of the second quantity. This pattern also works with ratios with more than two terms. However, a ratio with more than two terms cannot be completely converted into a single fraction; a single fraction represents only one part of the ratio since a fraction can only compare two numbers. If the ratio deals with objects or amounts of objects, this is often expressed as "for every two parts of the first quantity there are three parts of the second quantity".
Percentage ratio
If we multiply all quantities involved in a ratio by the same number, the ratio remains valid. For example, a ratio of 3:2 is the same as 12:8. It is usual either to reduce terms to the lowest common denominator, or to express them in parts per hundred (percent).
If a mixture contains substances A, B, C & D in the ratio 5:9:4:2 then there are 5 parts of A for every 9 parts of B, 4 parts of C and 2 parts of D. As 5+9+4+2=20, the total mixture contains 5/20 of A (5 parts out of 20), 9/20 of B, 4/20 of C, and 2/20 of D. If we divide all numbers by the total and multiply by 100, this is converted to percentages: 25% A, 45% B, 20% C, and 10% D (equivalent to writing the ratio as 25:45:20:10).
If the two or more ratio quantities encompass all of the quantities in a particular situation, for example two apples and three oranges in a fruit basket containing no other types of fruit, it could be said that "the whole" contains five parts, made up of two parts apples and three parts oranges. In this case, 2/5, or 40%, of the whole are apples and 3/5, or 60%, of the whole are oranges. This comparison of a specific quantity to "the whole" is sometimes called a proportion. Proportions are sometimes expressed as percentages as demonstrated above.
Note that ratios can be reduced (as fractions are) by dividing each quantity by the common factors of all the quantities. This is often called "cancelling." As for fractions, the simplest form is considered that in which the numbers in the ratio are the smallest possible integers.
Thus, the ratio 40:60 may be considered equivalent in meaning to the ratio 2:3 within contexts concerned only with relative quantities.
Mathematically, we write: "40:60" = "2:3" (dividing both quantities by 20).
- Grammatically, we would say, "40 to 60 equals 2 to 3."
An alternative representation is: "40:60::2:3"
- Grammatically, we would say, "40 is to 60 as 2 is to 3."
A ratio that has integers for both quantities and that cannot be reduced any further (using integers) is said to be in simplest form or lowest terms.
Sometimes it is useful to write a ratio in the form 1:n or n:1 to enable comparisons of different ratios.
For example, the ratio 4:5 can be written as 1:1.25 (dividing both sides by 4)
Alternatively, 4:5 can be written as 0.8:1 (dividing both sides by 5)
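As an illustration of these manipulations, a short Python sketch (the function names are invented) that reduces a ratio to lowest terms, converts it to percentages, and rewrites it in 1:n and n:1 form:

```python
# Illustrative sketch of the reductions described above: lowest terms via the
# greatest common divisor, percentages, and the 1:n / n:1 forms.
from math import gcd
from functools import reduce

def simplify(*terms):
    """Reduce a ratio of integers, e.g. 40:60 -> 2:3."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

def as_percentages(*terms):
    total = sum(terms)
    return tuple(100.0 * t / total for t in terms)

print(simplify(40, 60))            # (2, 3)
print(as_percentages(5, 9, 4, 2))  # (25.0, 45.0, 20.0, 10.0)
print(f"4:5 = 1:{5 / 4}")          # 1:1.25
print(f"4:5 = {4 / 5}:1")          # 0.8:1
```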
Dilution ratio
Ratios are often used for simple dilutions applied in chemistry and biology. A simple dilution is one in which a unit volume of a liquid material of interest is combined with an appropriate volume of a solvent liquid to achieve the desired concentration. The dilution factor is the total number of unit volumes in which your material is dissolved. The diluted material must then be thoroughly mixed to achieve the true dilution. For example, a 1:5 dilution (verbalize as "1 to 5" dilution) entails combining 1 unit volume of solute (the material to be diluted) + 4 unit volumes (approximately) of the solvent to give 5 units of the total volume. (Some solutions and mixtures take up slightly less volume than their components.)
The dilution factor is frequently expressed using exponents: 1:5 would be 5e−1 (5^−1, i.e. one-fifth : one); 1:100 would be 10e−2 (10^−2, i.e. one-hundredth : one), and so on.
There is often confusion between dilution ratio (1:n meaning 1 part solute to n parts solvent) and dilution factor (1:n+1) where the second number (n+1) represents the total volume of solute + solvent. In scientific and serial dilutions, the given ratio (or factor) often means the ratio to the final volume, not to just the solvent. The factors then can easily be multiplied to give an overall dilution factor.
In other areas of science such as pharmacy, and in non-scientific usage, a dilution is normally given as a plain ratio of solvent to solute.
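To make the two readings concrete, here is a minimal Python sketch (an illustrative addition, not from the source; the function names are invented for the example) that computes the volumes implied by a dilution ratio versus a dilution factor:

```python
def dilution_by_ratio(solute_volume, n):
    """Dilution ratio 1:n -- 1 part solute mixed with n parts solvent."""
    solvent = n * solute_volume
    return solvent, solute_volume + solvent   # (solvent, total volume)

def dilution_by_factor(final_volume, factor):
    """Dilution factor 1:factor -- solute is 1 part of the final (total) volume."""
    solute = final_volume / factor
    return solute, final_volume - solute      # (solute, solvent)

# "1:5" read as a dilution factor: 1 unit of solute in 5 units total (as in the example above).
print(dilution_by_factor(5.0, 5))   # (1.0, 4.0) -> 1 unit solute + 4 units solvent
# "1:5" read as a dilution ratio: 1 unit solute + 5 units solvent = 6 units total.
print(dilution_by_ratio(1.0, 5))    # (5.0, 6.0)
```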
Odds (as in gambling) are expressed as a ratio. For example, odds of "7 to 3 against" (7:3) mean that there are seven chances that the event will not happen for every three chances that it will happen. In other words, the probability of success is 3/(7+3) = 30%: on average, in every ten trials there are three wins and seven losses.
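Converting such odds to a probability is a one-line calculation; the sketch below is an illustrative addition rather than part of the original article:

```python
def odds_against_to_probability(against, in_favour):
    """Convert odds quoted 'against:for' (e.g. 7:3 against) into the probability of success."""
    return in_favour / (against + in_favour)

print(odds_against_to_probability(7, 3))  # 0.3, i.e. a 30% chance of success
```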
Different units
For example, the ratio 1 minute : 40 seconds can be reduced by changing the first value to 60 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3:2.
In chemistry, mass concentration "ratios" are usually expressed as w/v percentages, and are really proportions.
For example, a concentration of 3% w/v usually means 3g of substance in every 100mL of solution. This cannot easily be converted to a pure ratio because of density considerations, and the second figure is the total amount, not the volume of solvent.
See also
- Aspect ratio
- Fraction (mathematics)
- Golden ratio
- Interval (music)
- Parts-per notation
- Price/performance ratio
- Proportionality (mathematics)
- Ratio estimator
- Rule of three (mathematics)
Further reading
- "Ratio" The Penny Cyclopædia vol. 19, The Society for the Diffusion of Useful Knowledge (1841) Charles Knight and Co., London pp. 307ff
- "Proportion" New International Encyclopedia, Vol. 19 2nd ed. (1916) Dodd Mead & Co. pp270-271
- "Ratio and Proportion" Fundamentals of practical mathematics, George Wentworth, David Eugene Smith, Herbert Druery Harper (1922) Ginn and Co. pp. 55ff
- The thirteen books of Euclid's Elements, vol 2. trans. Sir Thomas Little Heath (1908). Cambridge Univ. Press. pp. 112ff.
- D.E. Smith, History of Mathematics, vol 2 Dover (1958) pp. 477ff | http://en.wikipedia.org/wiki/Ratio | 13 |
10 | Electrical Impedance Matching and Termination
When computer systems were first introduced decades ago, they
were large, slow-working devices that were incompatible with each
other. Today, national and international networking standards
have established electronic control protocols that enable different
systems to "talk" to each other. The Electronics Industries
Associations (EIA) and the Institute of Electrical and Electronics
Engineers (IEEE) developed standards that established common terminology
and interface requirements, such as EIA RS-232 and IEEE 802.3.
If a system designer builds equipment to comply with these standards,
the equipment will interface with other systems. But what about
analog signals that are used in ultrasonics?
Data Signals: Input versus Output
Consider the signal going to and from ultrasonic transducers.
When you transmit data through a cable, the requirement usually
simplifies into comparing what goes in one end with what comes
out the other. High frequency pulses degrade or deteriorate when
they are passed through any cable. Both the height of the pulse
(magnitude) and the shape of the pulse (wave form) change dramatically,
and the amount of change depends on the data rate, transmission
distance and the cable's electrical characteristics. Sometimes a marginal
electrical cable may perform adequately if used in only short
lengths, but the same cable with the same data in long lengths
will fail. This is why system designers and industry standards
specify precise cable criteria.
Observe the manufacturer's recommended practices for cable impedance, cable
length, impedance matching, and any requirements for termination
in characteristic impedance. Whenever
possible, use the same cables and cable dressing for all inspections.
Cable Electrical Characteristics
The most important characteristics in an electronic cable are
impedance, attenuation, shielding, and capacitance.
In this page, we can only review these characteristics very
generally; however, we will discuss capacitance in more detail.
Impedance (Ohms) represents the total resistance that
the cable presents to the electrical current passing through it.
At low frequencies the impedance is largely a function of the
conductor size, but at high frequencies conductor size, insulation
material, and insulation thickness all affect the cable's impedance.
Matching impedance is very important. If the system is designed
to be 100 Ohms, then the cable should match that impedance, otherwise
error-producing reflections are created.
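The penalty for a mismatch can be quantified with the standard transmission-line reflection coefficient. The short Python sketch below is added for illustration and is not taken from the original page:

```python
def reflection_coefficient(z_load, z_line):
    """Fraction of the incident voltage wave reflected where a line of impedance z_line meets z_load."""
    return (z_load - z_line) / (z_load + z_line)

# A 100 Ohm system terminated in 100 Ohms reflects nothing; terminate it in
# 50 Ohms and about a third of the incident wave is reflected back (inverted).
print(reflection_coefficient(100, 100))  # 0.0
print(reflection_coefficient(50, 100))   # -0.333...
```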
Attenuation is measured in decibels per unit length (dB/m),
and provides an indication of the signal loss as it travels through
the cable. Attenuation is very dependent on signal frequency.
A cable that works very well with low frequency data may do very
poorly at higher data rates. Cables with lower attenuation are better.
Shielding is normally specified as a cable construction
detail. For example, the cable may be unshielded, contain shielded
pairs, have an overall aluminum/mylar tape and drain wire, or
have a double shield. Cable shields usually have two functions:
to act as a barrier to keep external signals from getting in and
internal signals from getting out, and to be a part of the electrical
circuit. Shielding effectiveness is very complex to measure and
depends on the data frequency within the cable and the precise
shield design. A shield may be very effective in one frequency
range, but a different frequency may require a completely different
design. System designers often test complete cable assemblies
or connected systems for shielding effectiveness.
Capacitance in a cable is usually measured as picofarads
per foot (pF/ft). It indicates how much charge the cable can store
within itself. If a voltage signal is being transmitted by a twisted
pair, the insulation of the individual wires becomes charged by
the voltage within the circuit. Since it takes a certain amount
of time for the cable to reach its charged level, this slows down
and interferes with the signal being transmitted. Digital data
pulses are a string of voltage variations that are represented
by square waves. A cable with a high capacitance slows down these
signals so that they come out of the cable looking more like "saw-teeth,"
rather than square waves. The lower the capacitance of the cable,
the better it performs with high speed data. | http://www.ndt-ed.org/EducationResources/CommunityCollege/Ultrasonics/EquipmentTrans/impedancematching.htm | 13 |
14 | The following vocabulary terms will
appear throughout the lessons in the section on Transformational Geometry.
Image: An image is the resulting
point or set of points under a transformation. For example, if the
reflection of point P in line l is P', then P'
is called the image of point P under the reflection. Such a
transformation is denoted r_l(P) = P'.
Isometry: An isometry is a transformation of the plane that
preserves length. For example, if the sides of an original
pre-image triangle measure 3, 4, and 5, and the sides of its image after
a transformation measure 3, 4, and 5, the transformation preserved length.
A direct isometry preserves
orientation or order - the letters on the diagram go in the same
clockwise or counterclockwise direction on the figure and its image.
A non-direct or opposite isometry
changes the order (such as clockwise changes to counterclockwise).
Invariant: A figure or property that remains unchanged under a transformation of
the plane is referred to as invariant. No variations have occurred.
Opposite Transformation: An opposite
transformation is a transformation that changes the orientation of a
figure. Reflections and glide reflections are opposite transformations.
For example, the original
image, triangle ABC, has a clockwise orientation - the
letters A, B and C are read in a clockwise direction.
After the reflection in the x-axis, the image
triangle A'B'C' has a counterclockwise orientation - the
letters A', B', and C' are read in a counterclockwise direction.
A reflection is an opposite transformation.
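As a purely illustrative aid (not part of the original vocabulary list), the Python sketch below checks numerically that a reflection in the x-axis preserves side lengths (it is an isometry) while reversing orientation:

```python
from math import dist

def reflect_x(p):
    """Reflection in the x-axis: (x, y) -> (x, -y)."""
    x, y = p
    return (x, -y)

def signed_area(a, b, c):
    """Positive for counterclockwise vertex order, negative for clockwise."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

A, B, C = (0, 0), (3, 0), (3, 4)                 # a 3-4-5 right triangle
A2, B2, C2 = (reflect_x(p) for p in (A, B, C))   # its image under the reflection

print(dist(A, B), dist(B, C), dist(C, A))        # 3.0 4.0 5.0
print(dist(A2, B2), dist(B2, C2), dist(C2, A2))  # 3.0 4.0 5.0  -> lengths preserved
print(signed_area(A, B, C), signed_area(A2, B2, C2))  # 6.0 -6.0 -> orientation reversed
```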
Orientation: Orientation refers
to the arrangement of points, relative to one another, after a
transformation has occurred. For example, reference may be made to
the direction traversed (clockwise or counterclockwise) when traveling
around a geometric figure.
(Also see the diagram shown under "Opposite Transformations".)
Position vector: A position vector is a
coordinate vector whose initial point is the origin. Any vector
can be expressed as an equivalent position vector by translating the
vector so that it originates at the origin.
Transformation: A transformation
of the plane is a one-to-one mapping of points in the plane to points in the plane.
Transformational Geometry: Transformational
Geometry is a method for studying geometry that illustrates congruence
and similarity by the use of transformations.
Transformational Proof: A
transformational proof is a proof that employs the use of transformations.
Vector: A vector is a quantity that has both magnitude and direction; represented
geometrically by a directed line segment. | http://regentsprep.org/Regents/math/geometry/GT0/transvocab.htm | 13 |
45 | Simultaneous Linear Equations
● To remember the process of framing simultaneous linear equations from mathematical problems
● To remember how to solve simultaneous equations by the method of comparison and method of elimination
● To acquire the ability to solve simultaneous equations by the method of substitution and method of cross-multiplication
● To know the condition for a pair of linear equations to become simultaneous equations
● To acquire the ability to solve mathematical problems framing simultaneous equations
We know that if a pair of definite values of two unknown quantities satisfies simultaneously two distinct linear equations in two variables, then those two equations are called simultaneous equations in two variables. We also know the method of framing simultaneous equations and two methods of solving these simultaneous equations.
We have already learnt that linear equation in two variable x and y is in the form ax + by + c = 0.
Where a, b, c are constant (real number) and at least one of a and b is non-zero.
The graph of linear equation ax + by + c = 0 is always a straight line.
Every linear equation in two variables has an infinite number of solutions. Here, we will learn about two linear equations in 2 variables. (Both equations have the same variables, i.e., x and y.)
Simultaneous linear equations:
Two linear equations in two variables taken together are called simultaneous linear equations.
The solution of system of simultaneous linear equation is the ordered pair (x, y) which satisfies both the linear equations.
Necessary steps for forming and solving simultaneous linear equations
Let us take a mathematical problem to indicate the necessary steps for forming simultaneous equations:
In a stationery shop, the cost of 3 pencil cutters exceeds the price of 2 pens by $2. Also, the total price of 7 pencil cutters and 3 pens is $43.
Follow the steps of instruction along with the method of solution.
Step I: Indentify the unknown variables; assume one of them as x and the other as y
Here two unknown quantities (variables) are:
Price of each pencil cutter = $x
Price of each pen = $y
Step II: Identify the relation between the unknown quantities.
Price of 3 pencil cutter =$3x
Price of 2 pens = $2y
Therefore, first condition gives: 3x – 2y = 2
Step III: Express the conditions of the problem in terms of x and y
Again price of 7 pencil cutters = $7x
Price of 3 pens = $3y
Therefore, second condition gives: 7x + 3y = 43
Simultaneous equations formed from the problems:
3x – 2y = 2 ----------- (i)
7x + 3y = 43 ----------- (ii)
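For readers who want to check the framed system by machine, the Python sketch below (an illustrative addition; the function name is invented here) solves 3x - 2y = 2 and 7x + 3y = 43 by eliminating one variable at a time:

```python
from fractions import Fraction

def solve_two_by_two(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination."""
    det = a1 * b2 - a2 * b1                 # zero only if the pair has no unique solution
    x = Fraction(c1 * b2 - c2 * b1, det)    # eliminate y
    y = Fraction(a1 * c2 - a2 * c1, det)    # eliminate x
    return x, y

# 3x - 2y = 2 and 7x + 3y = 43 (price of a pencil cutter and of a pen).
x, y = solve_two_by_two(3, -2, 2, 7, 3, 43)
print(x, y)   # 4 5 -> each pencil cutter costs $4 and each pen costs $5
```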
(i) x + y = 12 and x – y = 2 are two linear equations (simultaneous equations). If we take x = 7 and y = 5, then the two equations are satisfied, so we say (7, 5) is the solution of the given simultaneous linear equations.
(ii) Show that x = 2 and y = 1 is the solution of the system of linear equations x + y = 3 and 2x + 3y = 7
Put x = 2 and y = 1 in the equation x + y = 3
L.H.S. = x + y = 2 + 1 = 3, which is equal to R.H.S.
In 2nd equation, 2x + 3y = 7, put x = 2 and y = 1 in L.H.S.
L.H.S. = 2x + 3y = 2 × 2 + 3 × 1 = 4 + 3 = 7, which is equal to R.H.S.
Thus, x = 2 and y = 1 is the solution of the given system of equations.
Worked-out problems on solving simultaneous linear equations:
1. x + y = 7 ………… (i)
3x - 2y = 11 ………… (ii)
The given equations are:
x + y = 7 ………… (i)
3x – 2y = 11 ………… (ii)
From (i) we get y = 7 – x
Now, substituting the value of y in equation (ii), we get;
3x – 2 (7 – x) = 11
or, 3x – 14 + 2x = 11
or, 3x + 2x – 14 = 11
or, 5x – 14 = 11
or, 5x -14 + 14 = 11 + 14 [add 14 in both the sides]
or, 5x = 11 + 14
or, 5x = 25
or, 5x/5 = 25/5 [divide by 5 in both the sides]
or, x = 5
Substituting the value of x in equation (i), we get;
x + y = 7
Put the value of x = 5
or, 5 + y = 7
or, 5 – 5 + y = 7 – 5
or, y = 7 – 5
or, y = 2
Therefore, (5, 2) is the solution of the system of equation x + y = 7 and 3x – 2y = 11
2. Solve the system of equation 2x – 3y = 1 and 3x – 4y = 1.
The given equations are:
2x – 3y = 1 ………… (i)
3x – 4y = 1 ………… (ii)
From equation (i), we get;
2x = 1 + 3y
or, x = 1/2(1 + 3y)
Substituting the value of x in equation (ii), we get;
or, 3 × 1/2(1 + 3y) – 4y = 1
or, 3/2 + 9/2y - 4y = 1
or, (9y – 8y)/2 = 1 – 3/2
or, 1/2y = (2 – 3)/2
or, 1/2y = -1/2
or, y = -1/2 × 2/1
or, y = -1
Substituting the value of y in equation (i)
2x – 3 × (-1) = 1
or, 2x + 3 = 1
or, 2x = 1 - 3
or, 2x = -2
or, x = -2/2
or, x = -1
Therefore, x = -1 and y = -1 is the solution of the system of equation
2x – 3y = 1 and 3x – 4y = 1.
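The two worked answers can be confirmed by substituting them back into the original equations; the short Python check below is an illustrative addition, not part of the lesson:

```python
def satisfies(x, y, a, b, c):
    """Return True if a*x + b*y equals c."""
    return a * x + b * y == c

# Problem 1: x + y = 7 and 3x - 2y = 11, claimed solution (5, 2).
print(satisfies(5, 2, 1, 1, 7), satisfies(5, 2, 3, -2, 11))      # True True

# Problem 2: 2x - 3y = 1 and 3x - 4y = 1, claimed solution (-1, -1).
print(satisfies(-1, -1, 2, -3, 1), satisfies(-1, -1, 3, -4, 1))  # True True
```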
| http://www.math-only-math.com/Simultaneous-linear-equations.html | 13 |
28 | CONTENT OBJECTIVE: 8K1.00 To understand the various components of the Universe
INSTRUCTIONAL OBJECTIVE: The learner will:
1.01 define star, solar system, galaxy and universe.
1.02 state that the sun is a star.
1.03 list the objects that make up the solar system.
OUTLINE OF CONTENT:
B. Solar System
II. Composition of our solar system
III. The order of the nine planets
TN COMPONENT OF SCIENCE: Process of Science
To enable students to demonstrate the processes of science by posing questions and investigating phenomena through language, methods and instruments of science.
COMMUNICATING - An essential aspect of science is the act of accurately and effectively conveying oral, written, graphic or electronic information from the preparer to the user.
TN STANDARD(S): The learner will understand that:
1.6a The sharing and disseminating of results should be done in a clear and concise manner.
BENCHMARK: Human beings learn complicated concepts from others through various methods of communication.
This classroom connector addresses instructional objective 1.01.
One instructional period
Bulletin board, white paper, filmstrip about galaxies and universe
(Have a bulletin board set up in your room. Take white paper and cover the board. Write on the top of the board "Milky Way". Have each student come to the board and make a star on it. Make sure the stars are different sizes and colors. Start out by defining a galaxy as a group of billions of stars. Name our galaxy as the Milky Way.) Today, we are going to learn about the stars, galaxies, universe, and solar system.
Define these terms STAR, GALAXIES, UNIVERSE, and SOLAR SYSTEM. Let's compare the differences between galaxies and universe. The universe consists of more than 100 billion distinct groups of stars, called galaxies. Galaxies are irregular, elliptical, and spiral. Stars differ in size, color and magnitude. The color of a star is related to its temperature.
MONITOR AND ADJUST:
(Show a filmstrip about galaxies and the universe. After the Filmstrip, have the students answer questions on the filmstrip. Explain to the students that stars are what actually make up the galaxies and universe.)
Whisper to your partner what a universe is and what a galaxy is. (Have the students respond by answering T and F statements about the galaxies and the universe.)
INDEPENDENT PRACTICE AND/OR ENRICHMENT:
(1. There are two theories about how the universe came into existence: Creation and the Big Bang Theory.) Ask the students to do research on these two theories.
2. Write these words on the board in random order. Have the students arrange the words from smallest in size to the largest. (Earth, moon, our sun, solar system, galaxy, universe.)
3. Have students address an envelope from outer space using all the above terms except sun or moon. (Student's name, street, city, county, state, country, continent, planet, solar system).
This classroom connector addresses instructional objective 1.02.
Pencil, tape, cardboard, watch, compass, marking pen, materials for making a drawing of the sun
What do you think of when you think of the sun? Do you think of it as a star? Today, we are going to talk about the sun.
The sun is a star. It is made entirely of hot gases. As with all stars, the sun gets its energy from fusion. Fusion happens when protons crash into each other and stick together. The sun has many parts: the CORONA, the crown of light around the sun; SUN SPOTS, dark areas on the suns surface; SOLAR FLARES, powerful eruptions of hot gases from the sun.
(Make a sundial with pencil, tape, 20cm of cardboard, watch, compass, marking pen. Draw a circle (15 cm) across the cardboard. Make one place on the circle "N" for north. Make a hole in the center of the circle big enough for the pencil. Tape the pencil to keep it upright. Bring the sundial outdoors on a sunny day. Find "N" on the compass. Turn the cardboard so that the "N" faces north. Mark the place where the shadow of the pencil points. Write the time there. Do several times during the day.)
What direction is the sun when the shadow of the pencil is the shortest? (response) the longest? (response) Can you tell the time by the shadow the next day? (response)
Today we have learned that the sun is a star. On a sheet of paper, write the parts of the sun.
INDEPENDENT PRACTICE AND/OR ENRICHMENT:
(Have students go to the library and do reports on lunar and solar eclipses.)
This classroom connector addresses instructional objective 1.03.
Poster board, modeling clay, wires or pipe cleaners, pictures of the solar system
Today, we are going to learn the planets in order from the sun. Push all chairs away from the middle of the floor. (Have the children start moving in a circle. Let them move in different directions. Have them stop; the person that is in the middle is the sun; the person closest to the sun is Mercury; continue on in this way until you have named the nine planets. The children that are left over are the other things that are found in the universe.)
(Now that you have your children in order of the planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto), place a card with the name of each planet on them. Let the children move around the sun. Explain that all planets do not move at the same speed. Ask the children questions about who is the closest and who is the farthest away from the sun.)
(Have children make a list of things that are in space such as planets, meteors, comets, asteroids, and satellites. Verbalize in class, then write on the board things that were talked about in class. Give facts and show pictures of each planet. Also write a mnemonic device for the nine planets. Example: My Very Everloving Mother Just Sent Us Nine Pies.)
MONITOR AND ADJUST, ACTIVE PARTICIPATION, SUPERVISED PRACTICE:
(Make a list of the terms used during instruction and put them on the board. Take the terms listed on the board and ask the children to give their ideas about what they are. Then define the terms in class and explain. Divide the class into groups. Let each group take two objects found in the solar system and do a short report. This work is to be done during class period.)
(Divide the class into two groups; half of the class will draw models of the solar system and list facts about each planet; the other half of the class will make models of the solar system out of clay and wires. They will also label and make lists of facts.)
Today, we have learned about the components that makeup our universe. Rename the nine planets and relate a fact about each one on your paper. (pause) Now list at least one other object found in space. (pause, then summarize)
(Have the children make up their own mnemonic device for the 9 planets and bring that back to class.)
| http://www.utm.edu/departments/cece/old_site/eighth/8K1.shtml | 13 |
45 | Where to Launch and Land the Space Shuttle? (1971-1972)
NASA’s ambition in 1971 was to build a fully reusable Space Shuttle which it could operate much as an airline operates its airplanes. The typical fully reusable Shuttle design in play in 1971 included a large Booster and a smaller Orbiter (image at top of post), each of which would carry a crew.
The Booster’s rocket motors would ignite on the launch pad, drawing liquid hydrogen/liquid oxygen propellants from integral internal tanks. At the edge of space, its propellants depleted, the Booster would release the Orbiter. It then would turn around, reenter the dense part of Earth’s atmosphere, deploy air-breathing jet engines, and fly under power to a runway at its launch site. Because it would return to its launch site, NASA dubbed it the “Flyback Booster.” It would then taxi or be towed to a hanger for minimal refurbishment and preparation for its next launch.
The Space Shuttle Orbiter, meanwhile, would arc up and away from the Booster. After achieving a safe separation distance, it would ignite its rocket motors to place itself into Earth orbit. After accomplishing its mission, it would fire its motors to slow down and reenter Earth’s atmosphere, where it would deploy jet engines and fly under power to a runway landing. As in the case of the Booster, the Orbiter would need minimal refurbishment before it was launched again.
Unlike an expendable launcher – for example, the Saturn V moon rocket - a fully reusable Space Shuttle would not discard spent parts downrange of its launch site as it climbed to Earth orbit. This meant that, in theory, any place that could host an airport might become a Space Shuttle launch and landing site.
NASA managers felt no need for a new launch and landing site; they already had two at their disposal. They planned to launch and land the Space Shuttle at Kennedy Space Center (KSC) on Florida’s east coast and Vandenberg Air Force Base (VAFB), California. Nevertheless, for a time in 1971-1972, a NASA board reviewed some 150 candidate Shuttle launch and landing sites in 40 of the 50 U.S. states. A few were NASA-selected candidates, but most were put forward by members of Congress, state and local politicians, and even private individuals.
The Space Shuttle Launch and Recovery Site Review Board, as it was known, was chaired by Floyd Thompson, a former director of NASA’s Langley Research Center in Hampton, Virginia. The Board got its start on 26 April 1971, when Dale Myers, NASA Associate Administrator for Manned Space Flight, charged it with determining whether any of the candidate sites could host a single new Shuttle launch and landing site as versatile as KSC and VAFB were together. The consolidation scheme aimed to trim Shuttle cost by eliminating redundancy.
The proposed Space Shuttle launch and landing sites were a motley mix. Many were Defense Department air bases of various types (for example, Patuxent Naval Air Station, Maryland), while a few were city airports (for example, the Lincoln, Nebraska Municipal Airport). Texas proposed two sites at the Big Bend of the Rio Grande River and Wyoming offered 11 of its 23 counties. KSC and VAFB were on the list, as were NASA’s Marshall Space Flight Center in Huntsville, Alabama, and Ellington Air Force Base in Houston, Texas, which had as its chief function to serve NASA’s Manned Spacecraft Center.
Texas had the most candidate sites (22) of any state, while Nebraska and Wyoming tied for second place with 12 sites each. Furthest north and east were Presque Isle Air Force Base, Dow Air Force Base, and Loring Air Force Base in Maine. Furthest south were sites around Brownsville, Texas. VAFB was the westernmost site considered.
The 10 states that contained no candidate Space Shuttle launch and landing sites lacked obvious disqualifying features (or, at least, appeared no more or less qualified than most of the states that included candidate sites). Alaska and Hawaii were disqualified because they were located too far from established U.S. space industry. The Midwestern states of Iowa, Illinois, Indiana, and Minnesota contained no sites, though candidates existed in neighboring states Missouri, Kansas, Nebraska, North Dakota, South Dakota, Wisconsin, Ohio, and Michigan. West Virginia alone among states east of the Mississippi River and south of the Ohio River lacked a candidate site. The east coast states of Rhode Island, Connecticut, and New Jersey rounded out the list of no-shows.
In its efforts to cull unsuitable sites, the Thompson Board focused most of its attention on the effects of sonic booms, the sudden waves of air pressure produced when an aircraft or spacecraft exceeds the speed of sound (that is, “breaks the sound barrier”). Sonic booms, which the Board wrote had “the startling audibility and dynamic characteristics of an explosion,” were a bone of contention in the U.S. in the early 1970s; concern at the time over possible injury to people on the ground and damage to structures helped to kill U.S. plans to develop a supersonic passenger aircraft akin to the Anglo-French Concorde.
The Thompson Board determined that the Space Shuttle would generate its most powerful sonic boom during ascent, while the Booster and Orbiter formed a single large vehicle. The Booster’s rocket plume would, for purposes of calculating sonic boom effects, make the ascending, accelerating spacecraft appear even bigger. The Shuttle’s flight path characteristics – for example, the pitch-over maneuver that it would perform as it steered toward orbit – would create a roughly 10-square-mile “focal zone” for sonic boom effects about 33 nautical miles downrange of the launch site.
“Overpressure” in the focal zone would almost certainly exceed six pounds per square foot (psf) and might reach 30 psf, which would be powerful enough to damage structures (plaster and windows could suffer damage at an overpressure as low as three psf, the Board noted). Winds could unpredictably shift the focal area by several miles. The Board urged that “the severe overpressures associated with the focal zone. . .be prevented from occurring in any inhabited area.”
Based on this and other criteria, the Thompson Board trimmed the list of candidate single Space Shuttle launch and landing sites to just seven. These were: KSC; VAFB; Edwards Air Force Base, California; Las Vegas, Nevada; Matagorda Island, Texas; Michael Army Air Field/Dugway Proving Ground, Utah; and Mountain Home Air Force Base, Idaho.
As the Thompson Board continued its deliberations, the Space Shuttle design was undergoing rapid and profound changes. At its 22 June 1971 meeting, the Board discussed NASA Administrator James Fletcher’s 16 June announcement that the space agency would spread out Shuttle costs by adopting “series development” of the Booster and Orbiter. The Orbiter would be developed first. Until the Booster could be developed, the Orbiter would be coupled with an “interim expendable booster” – possibly a modified Saturn V S-IC stage - that would separate after depleting its propellants and fall back to Earth downrange of the launch site.
In addition, Fletcher had told reporters, Shuttle contractors would abandon work on the Orbiter’s reusable internal liquid propellant tanks in favor of expendable external tanks that would supply liquid oxygen/liquid hydrogen propellants to the Orbiter’s main engines. The expendable external tanks would be less technologically challenging than their reusable internal counterparts and thus would have a lower development cost. The tanks would break up high in the atmosphere after separating from the Orbiter.
The Thompson Board received a whirlwind series of briefings on the Shuttle design changes at KSC, the Manned Spacecraft Center, and Marshall Space Flight Center in late September 1971, after which Floyd Thompson called a two-month recess to give the Shuttle design time to firm up. Then, on 5 January 1972, Fletcher announced that President Richard Nixon would seek new-start funding for the Space Shuttle Program in the Fiscal Year (FY) 1973 NASA budget.
On 15 March 1972, as NASA and Nixon’s Office of Management and Budget jousted over the Shuttle’s development cost, Fletcher announced that the reusable Booster would be abandoned entirely in favor of a stack comprising a single expendable External Tank (ET) and a pair of reusable Solid Rocket Boosters (SRBs). After expending their propellant, the SRBs would separate from the Orbiter/ET combination and descend on parachutes. NASA’s Office of Manned Space Flight subsequently determined that the SRBs could not safely touch down in “a controlled manner” on land; they would instead need to splash down and be recovered at sea.
The Thompson Board met just twice more. At its 27 March 1972 meeting, it discussed the implications of the 15 March booster decision, and officially eliminated all non-coastal candidate Shuttle launch and landing sites from consideration. At its final meeting on 6 April 1972, the Board compared the cost of building and operating a single new Space Shuttle launch and landing facility at Matagorda Island, 65 miles south of Houston, Texas, with the cost of modifying and operating both KSC and VAFB.
The Board’s members assumed that NASA would build five Orbiters, begin Space Shuttle flights in FY 1978, and ramp up to 60 Shuttle missions per year beginning in FY 1985. To launch that many missions – more than one per week - from Matagorda Island, the Shuttle fleet would need one Orbiter Thermal Protection System (TPS) maintenance and checkout bay, three vehicle assembly highbays for mating Orbiters with their ET/twin SRB booster stacks, three Mobile Launcher Platforms for transporting the Shuttle/ET/twin SRB combinations to their launch pads, three launch pads, three firing rooms, and one Orbiter landing strip.
If NASA opted for the dual-site approach, three Orbiters based at KSC would conduct 40 missions per year using one Orbiter TPS bay, two vehicle assembly highbays, two Mobile Launcher Platforms, two pads, two firing rooms, and one landing strip. The two Orbiters based at VAFB would conduct 20 missions per year using one Orbiter TPS bay, one vehicle assembly highbay, two Mobile Launcher Platforms, one pad, one firing room, and one landing strip. The KSC/VAFB plan would thus need one more TPS bay, Mobile Launcher, and landing strip than the Matagorda Island plan.
The single-site plan would, however, incur greater construction costs than the dual-site plan, for the simple reason that Matagorda Island had no spaceflight infrastructure already in place. The Board estimated that Matagorda Island construction and operations would cost $5.365 billion through FY 1990, while KSC and VAFB would together cost $5.137 billion. The single-site plan would, as predicted, lead to reduced Shuttle operations costs, but these savings would amount to only $87.6 million. Constructing the Matagorda Island site would, on the other hand, cost $315 million more than would modifying KSC and VAFB to support Shuttle launches. This meant that the single-site option would cost $228 million more than the dual-site option.
In addition to its greater monetary cost, the single-site option would introduce substantial programmatic risk and societal costs. The Texas coastal site was partly privately owned, so construction could not begin there until NASA had negotiated purchase of the private land. Infrastructure such as roads, railways, an electric grid, a harbor, an airport, waste treatment plants, and a water system would need to be built new or expanded. Thousands of workers would need to relocate to the area in less than five years, placing enormous strain on local housing, schools, and what few amenities existed in the immediate area. At the same time, the communities around KSC, already under pressure as the Apollo Program drew to an end, would suffer catastrophic job losses.
The Thompson Board briefed James Fletcher on its results on 10 April 1972. Just four days later, Fletcher told a press conference at NASA Headquarters that Space Shuttles would launch from KSC starting in 1978 and that launches from VAFB would be phased in early in the 1980s.
Space Shuttle Launch and Recovery Site Review Board, NASA, 10 April 1972.
Space Shuttle: The History of the National Space Transportation System – The First 100 Missions, Dennis R. Jenkins, 3rd edition, January 2001.
Chronology: MSFC Space Shuttle Program – Development, Assembly, and Testing Major Events (1969-April 1981), MHR-15, NASA George C. Marshall Space Flight Center, December 1988.
| http://www.wired.com/wiredscience/2013/02/where-to-launch-and-land-the-space-shuttle-1972/ | 13 |
17 | A superconductor is a phase of matter that occurs when an attractive force arises between electrons. While the force between electrons is always repulsive in vacuum, the interaction with condensed matter can bring about correlations in the motion of electrons with each other that result in a net attractive force. When this occurs at low enough temperatures, bound electron partners can fall into a macroscopic quantum state where it becomes difficult to jostle the electrons out of their path. Consequently, the electrons do not lose energy to resistance and electric current can flow without hindrance.
Superconductivity was discovered in the pre-information age. However, these superconductors had electron pairs that were only weakly bound, and thus required extremely low temperatures to maintain the superconducting state. It was not until the 5th century that forms of matter were engineered to maintain superconductivity at room temperature and above. The original room temperature superconductors were time consuming and expensive to produce. In addition, the materials were not initially well suited to real world applications as they tended to be brittle and difficult to work with, typically exhibiting problems with magnetic flux pinning and migration that made them not quite as resistance-free as advertised. Their original applications were restricted to individual micro-scale rigid components, such as magnetic field detectors. In time the problems of manufacturing flexible and robust room temperature superconductors were solved, but the process was sufficiently laborious that the cost remained high until well after the nanoswarm era. This restricted their application to components for small, high value devices - typically sensors or small scale energy storage where a high specific energy or specific power was critical. Eventually methods were developed to rapidly print wide swaths of patterned superconductive films, an advancement which enabled room temperature superconductors to enter the mainstream in the form of cheap consumer goods, bulk energy storage devices, frictionless bearings, high powered microwave devices, and lossless power transmission lines.
High temperatures, magnetic fields, and currents can all cause superconductivity to break down. The earliest room temperature superconductors could tolerate temperatures of up to 350 kelvin and magnetic fields of 500 tesla (although not both at the same time - at room temperature of 300 K, the critical magnetic field at which superconductivity is quenched occurred near 200 T). Modern superconductors can handle temperatures near 500 K and fields over 700 T at room temperature.
Power transmission and electric devices.
Supercurrents flow through superconductors without any resistive losses. This leads to the obvious application as wires or cable waveguides. These allow electric power in the form of supercurrents from DC to far infrared frequencies to flow without losing energy to resistance or heating the wires. When superconductors form the backbone of a power transmission grid, electricity can be efficiently transferred from the generating stations to the consumer.
In time, metal and carbon wires were largely replaced by superconductive ribbons for nearly all electrical and electronic applications. This led to inductors, transformers, rotary and linear electric motors and electric generators with both higher efficiency and lower mass for a given power.
If a superconductive path forms a complete loop, a supercurrent can flow around the loop forever. In theory, the migration of lines of magnetic flux across the superconductor can cause some resistance, but for superconductors in which these magnetic flux lines are either sufficiently strongly pinned or are sufficiently difficult to form, the supercurrent can flow for tens of thousands of years without any noticeable decrease (calculations for the more popular superconductive materials show that the supercurrent will not change appreciably over the entire lifespan of the universe, but such claims are clearly impossible to measure).
A flowing current will create a magnetic field, and magnetic fields contain energy. Thus, a device that produces a strong magnetic field will store large amounts of energy. The usual method of doing so is to wind a superconductive ribbon tightly around a tube to create an electromagnet. If the ribbon is connected to itself end to end, a persistent supercurrent can be set up that sustains a powerful field. If the current is interrupted, as by flipping a switch to redirect the current through a load, the inductive backlash creates a surge of voltage that will ram the current through the load and power any device it is attached to. The electromotive force that pushes the current is so strong that, for all practical purposes, a superconductive solenoid can discharge all of its energy instantly if needed.
There are several engineering issues with this approach. If the magnetic field spills beyond the electromagnet, it can pose a danger when loose ferromagnetic objects are violently accelerated toward the electromagnet. Further, eddy currents in conductive (not necessarily ferromagnetic) nearby objects can exert an additional drag if the electromagnet and the conductor are moving with respect to each other. The usual solution is to wrap the superconductive ribbon around a torus (a donut or bagel shape), which keeps the magnetic field entirely within the torus. A variety of other shapes are possible, so long as the tube shape around which the electromagnet is wrapped curves back around so the ends connect up with each other.
Another issue that must be addressed is that a magnetic field exerts forces on currents, including the very current that created it. These forces put tension on any superconductor carrying a supercurrent; if it carries enough current, the tension exceeds the tensile strength of the material and the superconductor breaks. If this occurs when it is storing a lot of energy, the break quickly turns into an explosion as the electromagnet violently tears itself apart. Superconductors do not themselves have a high tensile strength - however, they can be backed up by carbon nanotube fabrics or graphene sleeves. This allows the high strength carbon materials to take up the tension so the device can store much higher energies than if using a superconductive ribbon alone. Since supercurrent flows only over the surface of a superconductor, the thin superconductive ribbon can have negligible mass compared to the thick carbon sheath. It is the tensile strength of the sheath that determines the specific energy (energy per unit mass) of the superconductive solenoid: Specific Energy (in joules/kg) = Tensile strength of sheath (in pascals) / mass density of device (in kilograms per cubic meter).
For nanoconstructed carbon materials, with a tensile strength on the order of 100 GPa and a density of around 2000 kg/m^3, this leads to specific energies of near 50 MJ/kg. This is at the theoretical limit for what matter held together by chemical bonds can withstand, no other energy storage device that relies on chemical bonds for energy or support can exceed the specific energy of a carbon-backed superconductive toroidal solenoid.
The energy density (energy per unit volume) of a superconductive solenoid is determined by the magnetic field strength it can sustain. Energy density (in joules/cubic meter) = 400,000 x (magnetic field (in tesla))^2. Since modern superconductors can withstand fields as high as 700 tesla, this leads to energy densities of 200 GJ/m^3. This is the energy stored inside the "empty" tube of the torus, and does not include the volume occupied by the supporting sheath. Comparing the energy density of the tube to the mass of the sheath, we see that one third of the volume of a well engineered superconductive solenoid is the interior of the tube, while two thirds is the surrounding carbon sheath. This leads to a net maximum energy density (energy stored divided by the total volume of the device) of around 70 GJ/m^3.
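The two formulas quoted above can be combined in a few lines. The Python sketch below is an illustrative addition that simply re-evaluates the article's own numbers (100 GPa sheath strength, 2000 kg/m^3 density, 700 T field):

```python
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability in T*m/A

def specific_energy(tensile_strength_pa, density_kg_m3):
    """Maximum stored energy per unit mass allowed by the tensile sheath (J/kg)."""
    return tensile_strength_pa / density_kg_m3

def magnetic_energy_density(b_tesla):
    """Energy per unit volume of a magnetic field, B^2 / (2*mu0), in J/m^3."""
    return b_tesla ** 2 / (2 * MU0)

print(specific_energy(100e9, 2000) / 1e6)   # ~50 MJ/kg for nanoconstructed carbon
print(magnetic_energy_density(700) / 1e9)   # ~195 GJ/m^3 inside the bore at 700 T
# With roughly one third of the device volume being field-filled bore, the net
# figure falls to about 195/3, i.e. ~65-70 GJ/m^3, matching the estimate above.
```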
For safety reasons, superconductive batteries are not typically energized up to the maximum that their superconductors or sheath can sustain. This risks small jostles or temperature fluctuations pushing the materials past their limits, causing the solenoid to explode. With typical safety margins, a superconductive solenoid will store 15 to 25 MJ/kg.
Once low cost superconductors come on the scene, solenoids become competitive with other forms of consumer energy storage which have as their ultimate limit the strength and energy of chemical bonds, such as torsion batteries and flywheels. Solenoids have the highest specific electrical power of any energy storage device, and are preferred in applications where high power pulses of electricity are required.
Superconductive solenoidal tubes of the sort described for energy storage cells, consisting of an inner lining of superconducitve sheeting and an outer sheath of high tensile strength carbon backing, will become rigid when fully energized as if the magnetic field is exerting a pressure on the superconductor. This allows structures to be engineered such that they are entirely under tension. The superconductor will "inflate" solenoidal tubes until they become rigid, while cables or carbon sheets can constrain the flexibility and extent of the structure. Keeping a structure entirely under tension has certain benefits - for example, under compression you need thick supports to resist buckling while tension can be supported by a slender cable. Further, the tensile strength of graphene-like carbon (including carbon nanotubes) is higher than the compressional strength of the strongest known (normal matter) materials.
Using solenoidal tubes for structural support leads to a number of unique attributes in those structures that use them. One of the most dramatic is that the structure can be de-energized and it will deflate, becoming easy to fold up and pack away. When re-energized, the structure will inflate and become turgid, ready for use. Further, the energy stored in the solenoidal tubes can be used to power the device, combining the function of energy storage and structure. However, if too much energy is drawn from the structure during operation without re-energizing, the structure will droop or wilt as it loses magnetic pressure.
Since the materials used for tension-only structures are flexible, they tend to be resistant to impacts. The structure will compress and bend from the force of the blow, then bounce back. This can be used to cushion occupants or delicate machinery or instruments required for operation. However, the energized tubes present an additional hazard - if stressed beyond their capacity they can rupture, and the resulting explosion can tear the structure apart and injure occupants or nearby bystanders. Consequently, much like energy storage solenoids, structural support tubes are typically energized to only a fraction of their maximum capacity - where high rigidty is needed, they may take up to 1/3 to 1/2 of the maximum energy but for applications where softer structures are allowable or desirable much lower fractions of the maximum capacity may be used.
Because vehicles commonly use an internal store of energy, because they often carry relatively fragile operators or passengers, and because any other form of compact, high specific energy storage has a similar explosion hazard, vehicles very commonly make use of solenoidal tubes and tension cable/sheet construction for much of their frame and body. This leads to personal passenger transport that can be deflated and easily stored in a compact volume, negating the need for spacious parking lots and structures.
If a superconductive solenoid is broken while carrying a supercurrent near the limit of what it can withstand, it tears itself apart in a violent explosion. A modern superconductive explosive energized up to near its limit of 50 MJ/kg will have over ten times the explosive energy as an equivalent weight of chemical high explosives. Because fully energized superconductive explosives are dangerous to transport, they are typically stored partially energized along with an energized storage cell. To prime the explosive, the energy storage cell is discharged into the explosive. In propelled munitions the energy storage cell must often necessarily be sacrificed as a fully energized and primed explosive would not withstand the rigors of launch, although there are engineering workarounds involving soft launch or in-flight charging. However, for construction, demolition, and blasting for engineering purposes the explosive is typically remotely primed and the energy storage cell can be reused.
Superconductors will exclude all magnetic fields from their interior by setting up surface supercurrents that exactly cancel the external fields. This is called the Meissner effect. One consequence of this is that a superconductor will be repelled from sources of magnetic field, and the field sources will likewise be repelled from the superconductor. This effect is strong enough to levitate magnets over a superconductive surface (or, conversely, levitate a superconductor over a magnet).
Magnetic levitation is commonly used for transportation along fixed paths. When superconductors are inexpensive, a track covered with superconductive film is constructed and vehicles with superconductive electromagnets hover above it. They may pull themselves along using a linear electric motor with the coils or the armature embedded in the track, or they may use turbojets, turbofans, or propellers. When superconductors are more expensive than magnets, the track is made of magnets and the vehicle has a superconductive bottom.
Another use for magnetic levitation is frictionless bearings. These find uses in any application that requires rotating parts, from wheeled vehicles to space habitats with stationary docking sections coupled to spin gravity sections. Although the magnet-superconductor interface is frictionless, there are other inevitable losses that cause drag on rotating systems, so while such bearings allow rotation with little intrinsic resistance, such systems will always need external torque to keep them at their desired rotation for long periods. The spin imparted to a flywheel in vacuum suspended on frictionless bearings does last for weeks or even months, however, before slowing appreciably.
Superconductors with defects can pin magnetic flux rather than excluding it. A magnet with its flux lines pinned to a superconductor will tend to remain in the same relative position with respect to the superconductor - if originally levitating and the superconductor is flipped over, the magnet will end up suspended underneath the superconductor by nothing more than its magnetic field. Magnets which are levitated in this fashion do not slide frictionlessly over the superconductor, they tend to stay pinned in place and tend to return to their original position if displaced.
Microwave, Radar, and Optical Applications
Because superconductors exclude both electric and magnetic fields from their interiors, they reflect electromagnetic waves. As long as the frequency of the EM waves is not too high, this reflection is nearly perfect. The cutoff frequency is roughly where the energy per photon exceeds the binding energy of the electron pairs, or about 10 THz (30 micron wavelength) for superconductors with a 500 K transition temperature. At higher frequencies (or, equivalently, shorter wavelengths) the superconductor rapidly loses reflectivity.
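The quoted ~10 THz cutoff corresponds to a photon energy of roughly k_B times the transition temperature. The Python sketch below reproduces it; the exact pair-binding energy is left as a tunable parameter because the article does not specify a pairing model, so the factor of 1.0 is an assumption made here:

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
C   = 2.99792458e8    # speed of light, m/s

def reflection_cutoff(tc_kelvin, gap_factor=1.0):
    """Frequency (Hz) and wavelength (m) at which photon energy ~ gap_factor * k_B * Tc."""
    f = gap_factor * K_B * tc_kelvin / H
    return f, C / f

f, wavelength = reflection_cutoff(500)      # 500 K transition temperature from the article
print(f / 1e12, wavelength * 1e6)           # ~10.4 THz and ~29 microns
```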
For those frequencies where superconductors are perfect reflectors, they are commonly used for resonant cavities. This allows powerful sources of radio waves, microwaves, and far infrared to be produced. These have applications ranging from radar beams to drivers for particle beams and free electron lasers.
A resonant cavity for electromagnetic waves has many of the same limitations as a solenoid for energy storage. If the energy density gets too high, the magnetic field will exceed the critical field for the superconductor and the material will lose its superconductivity. Further, the radiation trapped in the cavity exerts forces on the walls of the cavity, which tend to force the walls apart and will burst the cavity if the force overcomes the cavity's tensile strength. The result is that these resonant cavities can store the same amount of energy as a superconductive solenoid of the same size and weight, and can explode just as disastrously if damaged or overloaded. However, they do tend to leak, and will lose their stored energy over a period of seconds or minutes if not continuously driven with an external source of EM waves. Their application is in the generation and control of microwave and far infrared beams, not power storage.
If two superconducting regions are placed closely adjacent to each other, a current flows between them that depends sensitively on the magnetic field going through the space between the superconductive regions. This effect can be exploited to produce exquisitely sensitive magnetic field detectors called Superconducting QUantum Interference Devices (SQUIDs). SQUIDs enable a wide range of sensor and detector technologies, such as ultra-sensitive medical and industrial MRI scanners. The name is occasionally a source of confusion, and it should be noted that SQUIDs have little to do with squids, which are slimy, rubbery predatory mollusks overly endowed with tentacles and suckers (or provolved squids, for that matter, which are fierce, noble and intelligent sophonts).
The same coupled superconductors that can be used for magnetic sensing can also be used for computation and computer memory. Their high switching rate and low energy per switch make them attractive for certain applications. They are bulkier than computers built out of nanoscale carbon components, and the extra distance the signals have to cross to reach other components means that for many applications these nanoscale carbon computers are faster. Superconductive computers have two advantages, however. First, they can be reversible, meaning that energy dissipation is kept to a minimum. Second, they can be made into quantum computers, allowing them to solve some classes of problems (such as factoring large numbers) much faster than traditional computers. Consequently, most computers built by S0 engineers make use of superconductors to compliment nanoscale carbon. Superconductors do not seem to be used much in the phonon-based computers of transapients, such as the ultimate chip.
The strong magnetic fields that a superconductive wire can generate and indefinitely sustain with no energy draw make them useful for systems that require magnetic confinement or direction of plasmas. Examples of such systems include magnetic confinement fusion, where a diffuse plasma of exotic isotopes of hydrogen or helium is trapped and heated until fusion occurs between the nuclei; plasma thrusters, where plasma from an onboard source is vented and channeled away from a spacecraft to provide thrust; and pulse drives, where an exterior explosion of plasma (such as might be created by inertial confinement fusion processes) is directed away from a spacecraft to provide thrust. Certain engineering considerations are required for such systems. In particular, the temperatures of the plasma will be far above the superconductive transition temperature, so the superconductive magnet loops must be heavily insulated and often actively cooled. This can be a particular concern for high performance fusion pulse "torch" drives, which can liberate on the order of a terawatt of power for several million newtons of thrust - the radiated energy of such drives brings the superconductive exhaust bells to near their limit, and the cooling available to the drive components is critical to keep the drive functional.
Failures can be spectacular - as previously discussed, magnetic fields contain energy and produce stresses on the supercurrents that create them. When magnetic confinement units lose superconductivity even in small regions, the resulting current being rammed through a suddenly resistive medium explosively flashes that section of the magnetic coil to plasma. The resulting violent expansion of the vaporized debris is sufficient to compromise structural integrity of the carbon fiber support, allowing the magnetic stresses to rip the coils apart and fling the fragments apart with brutal energy that can cause severe damage to nearby equipment and injury or death to personnel in the area.
Fossil Fuel - Text by M. Alan Kazlev Naturally-occurring, energy-rich carbon-based substance, such as shale, petroleum, coal, or natural gas, in a Gaian Type world's crust that was formed from ancient organic material. During the Industrial, Atomic, and early Information ages on Old Earth fossil fuels were burned in a criminally negligent manner, resulting in drastic climate change and ecological crisis that was only repaired during the late Interplanetary Age.
Fusion Reactor - Text by M. Alan Kazlev Power generation through the release of heat through a controlled nuclear fusion reaction. The hot plasma is confined in a magnetic bottle. Dedicated expert systems and subturing computers are required to ensure that the magnetic bottle remains at exactly the right charge to safely hold the plasma. Fusion generation is a widely relied upon power source throughout much of the galaxy, both to power large vehicles and settlements. Although not as efficient as amat, it is considerably safer, since there is no need to store amat and a magnetic failure means the hot plasma disperses causing only minor local damage.
Superconductivity - Text by M. Alan Kazlev; modified from KurzweilAI The physical phenomenon whereby some materials exhibit zero electrical resistance at low temperatures. Superconductivity allows great computational power with little or no heat dissipation (the major limiting factor in all processing operations). The synthesizing of special materials enabling cheap and reliable room temperature superconductivity during the early Interplanetary Age represented a tremendous leap forward in many fields of technology. | http://www.orionsarm.com/eg-article/4a3271fc1b0e4 | 13 |
10 | One of the great enduring ideas of the near-future in space is that of enormous, orbiting arrays of solar cells that collect sunlight, convert it to energy, and beam that energy to Earth for use. The idea can be traced back to Peter Glaser of the Arthur D. Little Company, who originally suggested the concept in its modern form in 1968. NASA and the US Department of Energy did an extensive conceptual study of solar power satellites (SPS) in the 1970s, and the idea has popped up again and again in both science fiction and space applications studies.
These satellites are usually envisioned as large planar affairs composed of many square kilometers of high-efficiency solar cells. The satellites could be placed in geosynchronous orbits where they would remain in sunlight almost continuously, ensuring a nearly non-stop flow of energy. The sunlight on the cells is converted into electricity, which is gathered and beamed back to Earth via microwave emitters.
Because they are in geosynchronous orbits, their microwave emitters can always be trained on one specific spot on the ground. Here, arrays of receiving antennas, also called rectennas, intercept the microwaves and convert the energy into electricity usable by average consumers. This grouping of receiving antennas is sometimes called a rectenna farm.
The original NASA/DOE study called for a rectangular satellite with a collecting array that measured 10 kilometers by 14 kilometers. It would have used a transmitting antenna roughly a kilometer across (the larger the better, to limit beam-spreading), which would beam the power to Earth at a frequency of 2.45 GHz. This is the same frequency used by microwave ovens, and it has the advantage of allowing the beam to pass almost unimpeded through clouds and rain. The rectenna farm would cover an oval area roughly 13 kilometers long and 10 kilometers wide.
The peak intensity of the microwave beam would be 23 milliwatts per square centimeter; the maximum allowable leakage from a consumer microwave oven is 5 milliwatts per square centimeter. While this would not be healthy in terms of long term exposure, it would certainly be possible to walk through the entire multi-kilometer width of the naked beam without experiencing any ill effects. Since the receiving area is expected to be covered over with large, raised rectennas, anyone on the ground underneath them would receive only negligible microwave exposure. Still, rectenna farms would likely be located in remote areas such as deserts in order to allay concerns from residents about possible ill effects of the microwave exposure.
At the distance of Earth’s orbit, sunlight delivers about 1400 watts worth of power per square meter. Using the types of solar cell technology available at the time of the NASA/DOE study, this would result in a net power gain on the ground of about 5 billion watts, or about ten times the output of a typical ground-based power plant.
These estimates, however, were made with the assumption of solar cell efficiency (how much of the 1400 watts per square meter of sunlight they can convert into usable energy) of around 5%, typical for 1970s technology. Today’s space-based solar cell arrays, such as those used on the International Space Station, have an energy-conversion efficiency of about 14%. The most modern systems have efficiencies ranging between 42% and 56%. The amount of power that can be delivered to the ground would be increased proportionally as well.
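To see how the delivered power scales with cell efficiency, the short sketch below redoes the arithmetic for the 10 km × 14 km reference array. The solar constant and the end-to-end transmission figure are assumptions chosen to roughly reproduce the ~5 GW quoted for the 1970s design, not official study values.

```python
# Back-of-envelope power estimate for the reference solar power satellite.
# Assumed values: 1400 W/m^2 solar constant at Earth's orbit and a ~70%
# collection-to-ground transmission chain; both are illustrative.

SOLAR_CONSTANT = 1400.0          # W per square metre at 1 AU
AREA = 10_000 * 14_000           # collecting array, 10 km x 14 km, in m^2
TRANSMISSION = 0.7               # microwave link + rectenna efficiency (assumed)

def ground_power(cell_efficiency: float) -> float:
    """Net electrical power delivered to the ground, in watts."""
    return SOLAR_CONSTANT * AREA * cell_efficiency * TRANSMISSION

for eff in (0.05, 0.14, 0.42):   # 1970s cells, ISS-era cells, modern cells
    print(f"efficiency {eff:4.0%}: {ground_power(eff)/1e9:6.1f} GW")
# 5% efficiency gives roughly 7 GW before further losses -- the same order of
# magnitude as the ~5 GW quoted for the original NASA/DOE study.
```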
The mass of the SPS in the NASA/DOE study was estimated to be between 30,000 and 50,000 metric tons. With modern composite materials and far more lightweight solar cell designs, this mass could probably be cut to between one third and one half of that. But even so, this represents a tremendous amount of material that would have to be boosted into space. At a current cost of at least several thousand dollars per pound to put an object into orbit, SPSs, despite their other advantages, would remain economically unfeasible in the near future.
A recent study conducted by the Space Studies Institute (SSI) showed that 98% of the material needed to construct a SPS could be mined from materials on the Moon. This would greatly reduce the cost of construction, but it would also mean that at least the seed of a lunar manufacturing infrastructure would have to exist first before the SPS scheme became feasible.
Though the microwave beam from an SPS cannot do much harm to any individual person, it is feasible that the beam could be used as a weapon of environmental damage, especially if the beam intensity were increased. If trained on an area for an appreciable length of time, its heating could kill cropland, forests, and swamps, and it could perhaps even be used to oppress the residents of a large modern city under siege. This effect need not always be harmful, however. In the novel Fallen Angels, by Larry Niven, Jerry Pournelle, and Michael Flynn, the heat from SPS transmitters was used to keep the last Canadian city, Ottawa, ice-free and livable after the rest of the country was buried under the glaciers of a new Ice Age.
At least one nation, the perpetually power-starved Japan, has committed itself to constructing a working solar power satellite by 2040. A smaller, cheaper, but less efficient alternative design by Japanese engineers suggests an SPS with the solar cells arranged in an equilateral triangle 300 meters to a side. The satellite would sweep along the equator at an altitude of 1100 kilometers and beam its power to a long array of rectenna stations below its flight path.
SPS technology also has a secondary application, that of providing beam power to launch craft and space-borne vessels, such as Myrabo’s Lightcraft and various incarnations of solar and magnetic sails. For the latter applications, however, the energy from the satellite might be converted to laser light or frequencies other than microwaves, depending on the type of spaceship used.
http://www.space.com/businesstechnology/technology/solar_power_sats_011017-1.html
http://www.thespacereview.com/article/214/1
http://members.fcac.org/~sol/station/sps.htm
http://www.spacefuture.com/archive/conceptual_study_of_a_solar_power_satellite_sps_2000.shtml
| http://orbitalvector.com/Space%20Structures/Solar%20Power%20Satellites/SOLAR%20POWER%20SATELLITES.htm | 13
33 | An Introduction to Dark Matter
There are many theories and predictions among scientists to describe and explain the universe and its contents. Attempting to resolve whether dark matter exists and, if so, what it consists of is of great importance to the scientific world. By definition dark matter emits no electromagnetic radiation but must have a large cumulative mass, since its presence is inferred solely through its gravitational effects on visible (luminous) matter. It is now widely believed that dark matter does, indeed, exist, and the standard model of cosmology anticipates that dark matter accounts for > 90% of all matter in the Universe. The overall matter-energy density of the Universe has been found to be dominated by dark matter (23%) and dark energy (72%) (WMAP), which is consistent with the standard cosmological model.
There are several pieces of observational evidence for dark matter in our Universe. Gravitational lensing (see figure below: Hubble Deep Field) and the unexpected rotational curves of spiral galaxies (see figure below: Rotational velocity curve) are among these observations that point to there being so-called "missing mass" throughout the Universe. Recent results from WMAP give us our most accurate value for the total mass in the Universe and how it is divided between different types of matter and energy.
The Hubble Deep Field.
Rotational velocity curve of the M33 galaxy.
These pieces of observational evidence lead to the idea of a dark matter halo, approximated as a sphere surrounding the visible galaxy with a density distribution that falls off roughly as the inverse square of the radius, and contributing circa 90% of the galactic mass. The favoured dark matter candidate at present is the Weakly Interacting Massive Particle (WIMP). The WIMP halo is what all dark matter experiments are attempting to observe.
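The link between that halo density profile and the flat rotation curves can be made concrete with a few lines of arithmetic: for a density falling as 1/r², the enclosed mass grows linearly with radius, so the circular velocity v = sqrt(GM(r)/r) stays constant. The sketch below uses an arbitrary normalisation purely to show the shape; it is not a fit to any real galaxy.

```python
# Circular velocity for a simple isothermal halo, rho(r) = rho0 * (r0/r)^2.
# Enclosed mass: M(r) = 4*pi*rho0*r0^2 * r  (grows linearly with r),
# so v(r) = sqrt(G*M(r)/r) is independent of radius -> a flat rotation curve.
# rho0 and r0 below are arbitrary illustrative numbers, not fitted values.

import math

G    = 6.674e-11        # m^3 kg^-1 s^-2
rho0 = 1.0e-21          # kg/m^3 at the scale radius (assumed)
r0   = 3.0e19           # scale radius, roughly 1 kpc (assumed)

def v_circ(r):
    m_enclosed = 4.0 * math.pi * rho0 * r0**2 * r
    return math.sqrt(G * m_enclosed / r)

for r_kpc in (2, 5, 10, 20, 40):
    r = r_kpc * 3.086e19              # kpc -> metres
    print(f"r = {r_kpc:3d} kpc   v = {v_circ(r)/1e3:6.1f} km/s")
# Every radius gives the same velocity, unlike the Keplerian fall-off that
# would be expected if only the visible matter were present.
```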
WIMPs are expected to form a halo in and around the visible constituents of our Galaxy, the Milky Way. Since the Sun and, therefore, the Solar System is in motion around the centre of the Galaxy, it is expected that the Earth should experience a so-called 'WIMP wind' that is detectable as fluctuations in the magnitude and direction of a WIMP signal. All dark matter detectors with sufficient target mass, directional or otherwise, are capable of noticing an annual modulation due to the Earth's orbit around the Sun as the Sun orbits the galactic centre (see figure below: Annual modulation). Only directional dark matter detectors such as the DRIFT detector, however, have the ability to observe fluctuations in the direction of incoming WIMPs throughout a 24 hour period (see figure below: Daily fluctuation).
Illustration of annual modulation in a WIMP signal.
The solar system is ~ 8.5 kpc from the galactic centre and is travelling around it at ~ 220 km / s. The Earth's orbit around the Sun is inclined at 60 degrees to the galactic plane and the Earth's spin axis is 23.5 degrees from its orbital plane. This leads to a `WIMP wind' that means the WIMP flux observed by a static detector on Earth will not be isotropic, but will be peaked in the direction in which the Earth is travelling. This subsequently causes the nuclear recoil flux, due to WIMP-nucleon scattering, to be anisotropic also, although less so than the WIMP flux since scattering does not always happen with a full head-on contact between particles. A detector that can accurately reconstruct track information and produce directional data can observe the changes in signal flux over particular periods of time. For example, a laboratory located at a similar latitude to that at Boulby mine should observe a direction change in the signal from downward to Southward and back again during a sidereal day. The signal coordinates can then be transformed from the laboratory frame to galactic coordinates, which removes the effects of the Earth's rotation, and hence determine whether the signal is of galactic origin, which is expected of a WIMP signal, or not.
Illustration of daily fluctuations in a WIMP signal.
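Building on the figures quoted above (a ~220 km/s solar motion and a ~60° tilt of the Earth's orbit relative to the galactic plane), the annual modulation can be sketched with a simple one-angle geometry. The phase convention and the early-June maximum used below are approximations, not ephemeris values.

```python
# Simplified annual modulation of a lab's speed through the WIMP halo.
# Assumes the Sun moves at ~220 km/s and the Earth's ~30 km/s orbital velocity
# contributes a component cos(60 deg) = 0.5 along that direction -- a common
# approximation, not an exact ephemeris calculation.

import math

V_SUN     = 220.0                  # km/s, Sun's speed around the galactic centre
V_EARTH   = 29.8                   # km/s, Earth's orbital speed
INCLIN    = math.radians(60.0)     # angle between orbital and galactic planes
T_MAX_DAY = 152                    # ~June 2nd, when the velocities roughly align (assumed)

def lab_speed(day_of_year: int) -> float:
    """Approximate speed of an Earth-bound lab through the halo, in km/s."""
    phase = 2.0 * math.pi * (day_of_year - T_MAX_DAY) / 365.25
    return V_SUN + V_EARTH * math.cos(INCLIN) * math.cos(phase)

for day, label in ((152, "early June"), (335, "early December")):
    print(f"{label:>15}: {lab_speed(day):6.1f} km/s")
# The ~15 km/s (roughly 7%) swing between June and December is what a
# sufficiently massive detector would see as an annual modulation in rate.
```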
The nuclear recoils that must be observed for dark matter detection have energies of less than 100 keV. Dark matter experiments therefore need detectors with a good (low) energy threshold to be able to detect these recoils accurately. Directional dark matter detectors then also need to be able to demonstrate a correlation of these recoil signals and their anisotropy with what is expected due to the Earth's rotation. It is this recognition of patterns within the recoil flux and direction that allows directional detectors to identify any signal as galactic in origin, as it must be if it is due to WIMP interactions. Directional detectors then have the capability to probe further into the structure and dynamics of the WIMP halo and distinguish between different halo models proposed from theory. This is a very powerful opportunity for directional dark matter detection to produce a definite and indisputable result.
Setting WIMP limits
Dark matter experiments attempt to constrain the properties that can be assigned to WIMPs by detecting no events over as long a period of time as possible. These constraints can be set because it is known that if WIMPs existed with particular characteristics they would be observed by the detector. Since no events are observed, it can be stated to a good level of confidence that WIMPs with those characteristics do not exist. It is also possible to set limits on these parameters even when events are observed, as long as they are consistent with what is expected from backgrounds that cannot be removed. The parameter space that experiments are most interested in for comparison of results is the WIMP-nucleon cross-section as a function of WIMP mass, and this is what is plotted when illustrating the sensitivity reached by a dark matter experiment.
The important things to consider when trying to achieve the best possible sensitivity from a WIMP search experiment are the energy threshold, the target mass, the spin-dependence and the discrimination capabilities. The energy threshold must be as low as possible, so that WIMP interactions, which are expected to produce nuclear recoils of only a few keV, can be detected. The target mass should be high: larger target nuclei have a higher probability of interaction, and a higher target mass means more target nuclei, which raises the interaction probability further. With a predicted WIMP interaction rate of < 1 per day per kg of target mass, it is important that the target mass be as large as possible and that the detector run for as long as possible to achieve the best results. Nuclei with an overall non-zero spin, i.e. a nucleus with an odd number of neutrons and/or protons, have a scattering cross-section with WIMPs that is proportional to the spin value J. The cross-section for nuclei with zero spin is simply proportional to the mass number, A.
The ability to discriminate WIMP-induced nuclear recoils from any other events in a detector is vital. This is done using knowledge of the differences in the ionisation, scintillation and/or thermal energy deposition produced in a detector by background events compared to WIMP-like events. There are essentially two types of discrimination attainable by detectors: statistical discrimination, where not all events are distinguishable, and absolute, or event-by-event, discrimination, where all events and interactions can be distinguished. To aid this it is also imperative that sources of background, such as uranium and thorium contamination in materials, are kept to a minimum through the use of radiopure materials in constructing the detector and shielding, e.g. hydrogen-rich materials to thermalise neutrons from U and Th, and an overburden of rock to suppress cosmic ray-induced backgrounds. Once all these items have been taken into consideration and the best possible sensitivity has been achieved, data from the experiment are analysed and the resulting energy spectrum of nuclear recoils is examined for any feature that is not simply what is expected from background events. If the observed rate is consistent with that expected from background sources, an upper limit on the WIMP-nucleon cross-section can be set.
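The "no events seen" logic described above can be made quantitative with Poisson statistics: if zero events are observed and no background is expected, any WIMP model predicting more than about 2.3 events over the exposure is excluded at 90% confidence. The sketch below turns that into an upper limit on the interaction rate; the exposure numbers are placeholders, not those of any particular experiment.

```python
# Poisson upper limit on a WIMP interaction rate from a null result.
# With zero observed events and no expected background, the 90% CL upper
# limit on the expected number of signal events is -ln(0.1) = 2.30.
# Exposure figures below are illustrative placeholders.

import math

def rate_upper_limit(exposure_kg_days: float,
                     n_observed: int = 0,
                     cl: float = 0.90) -> float:
    """Upper limit on events per kg per day, valid for zero observed events."""
    if n_observed != 0:
        raise NotImplementedError("this simple formula only covers zero observed events")
    mu_limit = -math.log(1.0 - cl)        # 2.303 for CL = 0.90
    return mu_limit / exposure_kg_days

for exposure in (10.0, 100.0, 1000.0):    # kg * days of target exposure
    limit = rate_upper_limit(exposure)
    print(f"exposure {exposure:7.0f} kg*days -> < {limit:.4f} events/kg/day (90% CL)")
# Longer runs and heavier targets push the limit down; the rate limit is then
# translated into an excluded region of WIMP-nucleon cross-section vs. WIMP mass.
```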
There are many dark matter searches being executed worldwide. The current best dark matter limits have been set by CDMS-II and NaIAD (UKDMC) for spin-independent and spin-dependent cross-sections, respectively. | http://www.hep.shef.ac.uk/research/dm/intro.php | 13 |
18 | The Geometry of the Universe
The most profound insight of General Relativity was the conclusion that the effect of gravitation could be reduced to a statement about the geometry of spacetime. In particular, Einstein showed that in General Relativity mass causes space to curve, and objects travelling in that curved space have their paths deflected, exactly as if a force had acted on them.
Curvature of Space in Two Dimensions
The idea of a curved surface is not an unfamiliar one, since we live on the surface of a sphere. More generally, mathematicians distinguish three qualitatively different classes of curvature, as illustrated in the following image.
These are examples of surfaces that have two dimensions. For example, the left surface can be described by a coordinate system having two variables (such as x and y); likewise, the other two surfaces are each described by two independent coordinates. The flat surface at the left is said to have zero curvature, the spherical surface is said to have positive curvature, and the saddle-shaped surface is said to have negative curvature.
Curvature of 4-Dimensional Spacetime
The preceding is not too difficult to visualize, but General Relativity asserts that space itself (not just an object in space) can be curved, and furthermore, the space of General Relativity has 3 space-like dimensions and one time dimension, not just two as in our example above. This IS difficult to visualize! Nevertheless, it can be described mathematically by the same methods that mathematicians use to describe the 2-dimensional surfaces that we can visualize easily.
The Large-Scale Geometry of the Universe
Since space itself is curved, there are three general possibilities for the geometry of the Universe. Each of these possibilities is tied intimately to the amount of mass (and thus to the total strength of gravitation) in the Universe, and each implies a different past and future for the Universe.
Which of these scenarios is correct is still unknown, because we have been unable to determine exactly how much mass is in the Universe.
If space has negative curvature, there is insufficient mass to cause the expansion of the Universe to stop. The Universe in that case has no bounds, and will expand forever. This is termed an open universe.
If space has no curvature (it is flat), there is exactly enough mass to cause the expansion to stop, but only after an infinite amount of time. Thus, the Universe has no bounds in that case and will also expand forever, but with the rate of expansion gradually approaching zero after an infinite amount of time. This is termed a flat universe or a Euclidian universe (because the usual geometry of non-curved surfaces that we learn in high school is called Euclidian geometry).
If space has positive curvature, there is more than enough mass to stop the present expansion of the Universe. The Universe in this case is not infinite, but it has no end (just as the area on the surface of a sphere is not infinite but there is no point on the sphere that could be called the "end"). The expansion will eventually stop and turn into a contraction. Thus, at some point in the future the galaxies will stop receding from each other and begin approaching each other as the Universe collapses on itself. This is called a closed universe.
Is the Universe Open, Flat, or Closed?
The Density Parameter of the Universe
|Method||Density parameter|
|BB nucleosynthesis||(0.013 +/- 0.005) h-2|
|Stars in Galaxies||
|Dynamics (r < 10 h-1 Mpc)||~0.05 - 0.2|
|Dynamics (r > 30 h-1 Mpc)||~0.05 - 1|
Source: P. J. E. Peebles, Principles of Physical Cosmology
The geometry of the Universe is often expressed in terms of the density parameter, which is defined as the ratio of the actual density of the Universe to the critical density that would just be required to cause the expansion to stop. Thus, if the Universe is flat (contains just the amount of mass to close it) the density parameter is exactly 1; if the Universe is open with negative curvature the density parameter lies between 0 and 1; and if the Universe is closed with positive curvature the density parameter is greater than 1.
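For reference, the critical density itself follows directly from the Hubble parameter: rho_crit = 3H^2 / (8*pi*G). The sketch below evaluates it for an assumed H0 of 70 km/s/Mpc (a representative modern value; the table above parameterises the Hubble uncertainty through h instead) and shows how the density parameter is formed from it.

```python
# Critical density and density parameter of the Universe.
#   rho_crit = 3 * H0^2 / (8 * pi * G),   Omega = rho / rho_crit
# H0 = 70 km/s/Mpc is an assumed, representative value.

import math

G   = 6.674e-11                    # m^3 kg^-1 s^-2
MPC = 3.086e22                     # metres in one megaparsec

H0_kms_per_mpc = 70.0
H0 = H0_kms_per_mpc * 1.0e3 / MPC  # convert to s^-1

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)     # kg per cubic metre
print(f"Critical density: {rho_crit:.2e} kg/m^3")
print(f"  (about {rho_crit / 1.67e-27:.1f} hydrogen atoms per cubic metre)")

def omega(density_kg_m3: float) -> float:
    """Density parameter for a given average mass density."""
    return density_kg_m3 / rho_crit

print(f"Omega for an average density of 1e-27 kg/m^3: {omega(1.0e-27):.2f}")  # illustrative input
```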
The density parameter determined from various methods is summarized in the table above. In this table, BB nucleosynthesis refers to constraints coming from the synthesis of the light elements in the big bang, +/- denotes an experimental uncertainty in a quantity, and the parameter h lies in the range 0.5 to 0.85 and measures the uncertainty in the value of the Hubble parameter.
Although most of these methods (which we will not discuss in detail) yield values of the density parameter far below the critical value of 1, we must remember that they have likely not detected all matter in the Universe yet.
The current theoretical expectation (because it is predicted by the theory of cosmic inflation) is that the Universe is flat, with exactly the amount of mass required to stop the expansion (the corresponding average critical density that would just stop the expansion is called the closure density), but this is not yet confirmed. Therefore, the value of the density parameter, and thus the ultimate fate of the Universe, remains one of the major unsolved problems in modern cosmology. | http://csep10.phys.utk.edu/astr162/lect/cosmology/geometry.html | 13
14 | Obtaining Astronomical Spectra - Spectrographs
A spectrograph is an instrument used to obtain and record an astronomical spectrum. The spectrograph splits or disperses the light from an object into its component wavelengths so that it can be recorded then analysed. These steps are discussed in more detail below.
Light entering a spectrograph can be split or dispersed into a spectrum by one of two means: using a prism or a diffraction grating. When Newton split light into a spectrum in the 1660s he used a glass prism. School students often use perspex prisms from ray box kits to disperse or "split" white light from an incandescent bulb into the component colours of the spectrum. This effect arises because the refractive index of the prism material varies with wavelength. As light passes into a prism it undergoes refraction, a change in velocity due to the change in medium. If the light falls incident on the prism at an angle other than 90° it will also change direction. Red light has a longer wavelength than blue light and experiences a slightly lower refractive index, so it is bent less, both at entry to and exit from the prism. The light emerging from the prism is dispersed as shown schematically in the diagram below.
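A minimal numerical illustration of that wavelength-dependent bending is given below, using Snell's law at a single air-to-glass surface. The refractive indices for red and blue light are typical textbook values for crown glass and are assumptions, not properties of any particular prism.

```python
# Wavelength-dependent refraction at an air-to-glass surface (Snell's law):
#   n_air * sin(theta_incident) = n_glass * sin(theta_refracted)
# Refractive indices below are typical crown-glass values (assumed).

import math

N_AIR = 1.000
INDICES = {"red (~650 nm)": 1.514, "blue (~450 nm)": 1.528}

theta_i = math.radians(45.0)          # angle of incidence, measured from the normal

for colour, n_glass in INDICES.items():
    theta_r = math.asin(N_AIR * math.sin(theta_i) / n_glass)
    print(f"{colour:>15}: refracted at {math.degrees(theta_r):5.2f} deg "
          f"(deviated by {45.0 - math.degrees(theta_r):5.2f} deg)")
# Blue light, with the higher refractive index, is bent slightly more than red.
# Repeated at the exit face of a prism, this spreads white light into a spectrum.
```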
Most astronomical spectrographs use diffraction gratings rather than prisms. Diffraction gratings are more efficient than prisms, which can absorb some of the light passing through them. As every photon is precious when trying to take a spectrum of a faint source, astronomers do not like wasting them. A diffraction grating has thousands of narrow lines ruled onto a glass surface. It reflects rather than refracts light, so no photons are "lost". The response of a grating is also close to linear in wavelength, whereas a prism disperses blue light much more strongly than red. Gratings can also reflect light in the UV wavebands, unlike a glass prism, which is opaque to UV. A common example of a diffraction grating is a CD, where the pits encoding the digital information act as a grating and disperse light into a colourful spectrum.
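The dispersion of a grating can be sketched just as simply from the grating equation d·sin(θ) = m·λ. The ruling density used below is an assumed, though common, figure for astronomical gratings.

```python
# First-order diffraction angles from the grating equation d*sin(theta) = m*lambda.
# A ruling density of 600 lines/mm is an assumed, typical value.

import math

LINES_PER_MM = 600
d = 1.0e-3 / LINES_PER_MM            # groove spacing in metres
m = 1                                # diffraction order

for name, wavelength_nm in (("blue", 450), ("green", 550), ("red", 650)):
    lam = wavelength_nm * 1e-9
    theta = math.degrees(math.asin(m * lam / d))
    print(f"{name:>5} ({wavelength_nm} nm): diffracted at {theta:5.2f} deg")
# The angular spread grows almost linearly with wavelength, which is the
# near-linear response referred to in the text.
```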
The schematic diagram below shows the key components of a modern slit spectrograph.
The slit on the spectrograph limits the light entering the spectrograph so that it acts as a point source of light from a larger image. This allows an astronomer to take a number of spectra from different regions of an extended source such as a galaxy, or of a specific star in the telescope's field of view. Light is then collimated (made parallel) before hitting a diffraction grating. This disperses the light into component wavelengths which can then be focused by a camera mirror onto a detector such as a charge-coupled device (CCD). By rotating the grating, different parts of the dispersed spectrum can be focused on the camera. The comparison lamp is vital in that it provides spectral lines of known wavelength (eg sodium or neon) at rest with respect to the spectrograph, allowing the spectrum of the distant source to be calibrated and any shift of spectral lines to be measured.
Newton recorded the spectrum of sunlight by drawing it. The rise of spectroscopy for astronomical use was in part due to its linkage with another emerging technology - photography. Astronomical spectra could be recorded by photographing them on glass plates. This was a far superior approach to viewing them through an eyepiece and trying to draw the image. Photographic records of spectra could be stored for later analysis, copied for distribution or publication, and the spectral lines could be measured relative to spectral lines from a stationary lamp producing spectral lines of known wavelength. It was only by observing and photographing the spectra of thousands of stars that astronomers were able to classify them into spectral classes and thus start to understand the characteristics of stars. Photographic spectra were generally recorded on glass plates rather than photographic film as plates would not stretch. The image of the spectrum was normally presented as a negative so that the absorption lines show up as white lines on a dark background. The example below shows the photographic spectrum of a standard reference star, α Lyrae, from the 1943 An Atlas of Stellar Spectra.
Photoelectric spectroscopy allows spectral information to be recorded electronically and digitally rather than on photographic plates. Modern astronomical charge-coupled devices or CCDs can reach a quantum efficiency of about 90%, compared with about 1% for photographic emulsions. This means a CCD can convert almost 9 out of 10 incident photons into useful information, compared with about 1 in 100 for film. Using a CCD an astronomer can therefore obtain a useful spectrum much more quickly than with a photographic plate and can also obtain spectra from much fainter sources. CCDs have a more linear response over time than photographic emulsions, which lose sensitivity with increased exposure. A spectrum recorded on a CCD can be read directly to a computer disk for storage and analysis. The digital nature of the information allows for rapid processing and correction for atmospheric contributions to the spectrum. Modern spectra are therefore normally displayed as plots of relative intensity versus wavelength, as is shown below for a stellar spectrum.
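The practical impact of quantum efficiency on exposure time can be illustrated with a one-line photon-counting estimate: to record a fixed number of detected photons, the required exposure scales inversely with the efficiency. The photon rate below is a made-up round number purely for illustration.

```python
# Relative exposure times needed to detect a fixed number of photons
# with detectors of different quantum efficiency (QE).
# The incident photon rate is an arbitrary illustrative number.

PHOTON_RATE = 100.0          # photons arriving per second (assumed)
TARGET_DETECTIONS = 10_000   # detections wanted for a usable spectrum (assumed)

def exposure_seconds(quantum_efficiency: float) -> float:
    return TARGET_DETECTIONS / (PHOTON_RATE * quantum_efficiency)

for detector, qe in (("photographic plate", 0.01), ("modern CCD", 0.90)):
    t = exposure_seconds(qe)
    print(f"{detector:>18} (QE {qe:4.0%}): {t/60:8.1f} minutes")
# The ~90x ratio in exposure time is why CCDs opened up spectroscopy of
# much fainter sources than photography ever could.
```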
The last decade has seen the growth in multifibre spectroscopy. This involves the use of optical fibres to take light from the focal plane of the telescope to a spectrograph. A key advantage of this technique is that more than one spectrum can be obtained simultaneously, dramatically improving the efficiency of observing time on a telescope. Many of the techniques for multifibre spectroscopy were developed at the Anglo-Australian Observatory for use on the AAT and the UK Schmidt telescopes.
The 2dF project revolutionised the emerging field of multifibre spectroscopy by using a computerised robot to precisely position 400 minute prisms onto a metal plate so that each prism could gather light from an object such as a galaxy or quasar. Attached to each prism was an optical fibre that feeds into a spectrograph. The 2dF instrument sits at the top of the AAT and can take spectra from 400 objects simultaneously over a 2 degree field of view. Whilst observing one field, the robot sets up a second set of prisms on another plate which can then be flipped over in a few minutes to begin observing a new field. This incredibly efficient system allows spectra from thousands of objects to obtained in a single night's observing run.
Two key projects, the 2dF Galaxy Redshift Survey and the 2dF QSO Redshift Survey provided the scientific impetus for building this multifibre instrument. These surveys produced accurate data on over 250,000 galaxies and 25,000 quasars that have proved an immense boon for cosmologists studying the formation and large-scale structure of the Universe.
Australian astronomers and engineers continue to design, develop and build new multifibre devices for the latest generation 8-10 m class telescopes overseas. An Australian consortium from the AAO, ANU and UNSW has just built OzPoz, a multifibre spectrograph for ESO's VLT in Chile. It develops the techniques used in 2dF and currently allows 132 spectra to be gathered simultaneously. Future instruments such as Echidna and AAOmega are under development at present.
Spectroscopy is not just the tool of optical astronomers. It can be carried out at all wavebands, each of which provides new insights into the structure and characteristics of celestial objects.
Infrared spectroscopy allows astronomers to study regions of star birth obscured to optical astronomy by cold clouds of dust and gas. Australia is actively involved in infrared astronomy and has built infrared spectrographs such as IRIS 2 for the AAT and the ANU's 2.3 m telescope at Siding Spring. The Research School of Astronomy and Astrophysics at Mt Stromlo in Canberra was building the Near IR Integral Field Spectrograph (NIFS) for the 8.1 m Gemini North telescope in Hawaii when fire destroyed most of the facilities on the mountain in early 2003. A replacement NIFS has now been made and will soon be in use on Gemini.
High-energy spectroscopy in the X-ray and γ-ray regions is more difficult, as the instruments have to withstand the rigours of a rocket launch and the harsh environment of space. As high energy photons have much shorter wavelengths, traditional optical designs for spectrographs are not suitable or able to be adapted. The resolution of high energy spectrographs cannot match optical ones at present, but they allow us to gain greater understanding of violent, energetic objects and events in the Universe.
Radio astronomers also gain spectral information from their observations. Receivers used on radio telescopes can pick up thousands of channels in a given region of the radio band, just as you could by moving a radio dial through several stations and measuring the intensity of the received signal. This information effectively provides details about the various transitions emitted by matter. Radio spectral data can give details about frequency and velocity. It can also provide information about the polarisation of the signal, information not normally available in visible spectra. Improvements in receivers and detectors now allow astronomers to routinely observe at mm-wavelengths where there is a wealth of spectral lines from molecules in space. Molecules such as acetic acid and formaldehyde have been discovered in interstellar clouds and the search continues for the signature of amino acids such as glycine. Information on these will prove vital for astrobiologists and astrochemists. | http://outreach.atnf.csiro.au/education/senior/astrophysics/spectrographs.html | 13
17 | Neptune has thirteen known moons. The largest by far is Triton, discovered by William Lassell just seventeen days after the discovery of Neptune itself. It took about one hundred years to discover the second natural satellite, Nereid.
Triton is massive enough to have achieved hydrostatic equilibrium, and would be considered a dwarf planet if it were in direct orbit about the Sun. Triton has a very unusual orbit that is circular but retrograde and inclined. Inward of Triton are six regular satellites, which all have prograde orbits that are not greatly inclined with respect to Neptune's equatorial plane. Some of these orbit among Neptune's rings.
Neptune also has six outer irregular satellites, including Nereid, whose orbits are much farther from Neptune, have high inclinations, and are mixed between prograde and retrograde. Two natural satellites discovered in 2002 and 2003, Psamathe and Neso, have the largest orbits of any natural satellites discovered in the Solar system to date. They take 25 years to orbit Neptune at an average of 125 times the distance between Earth and the Moon. Neptune has the largest Hill sphere in the solar system, owing primarily to its large distance from the Sun; this allows it to retain control of such distant moons.
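The Hill-sphere claim can be checked with the standard approximation r_H ≈ a·(m/3M)^(1/3). The masses and semi-major axis below are rounded textbook values, and the formula itself is an approximation, so the result should be read as indicative only.

```python
# Approximate Hill-sphere radius of Neptune: r_H ~ a * (m / (3*M_sun))**(1/3).
# Input values are rounded textbook figures.

M_SUN     = 1.989e30      # kg
M_NEPTUNE = 1.024e26      # kg
A_NEPTUNE = 4.50e12       # semi-major axis in metres (~30.1 AU)

r_hill = A_NEPTUNE * (M_NEPTUNE / (3.0 * M_SUN)) ** (1.0 / 3.0)

print(f"Hill radius: {r_hill/1e9:.0f} million km "
      f"({r_hill/3.844e8:.0f} Earth-Moon distances)")
# This comes out near the ~116 Gm figure quoted below for Neptune, and it
# comfortably encloses Neso's orbit at roughly 48 million km.
```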
Triton was discovered by William Lassell in 1846, seventeen days after Neptune was discovered. Nereid was discovered by Gerard P. Kuiper in 1949. In 1981 Larissa was first observed by Harold J. Reitsema, William B. Hubbard, Larry A. Lebofsky and David J. Thole.
No further moons were found until Voyager 2 flew by Neptune in 1989. Voyager 2 recovered Larissa and discovered five new inner moons, bringing the total of known moons of Neptune to eight.
Note that Triton did not have an official name until the twentieth century. The name "Triton" was suggested by Camille Flammarion in his 1880 book Astronomie Populaire, but it did not come into common use until at least the 1930s. Until this time it was usually simply known as "the satellite of Neptune" (the second satellite, Nereid, was not discovered until 1949).
- Main article: Triton (moon)
The diagram illustrates the orbits of Neptune’s irregular moons discovered so far. The eccentricity of the orbits is represented by the yellow segments (extending from the pericentre to the apocentre) with the inclination represented on Y axis. The satellites above the X axis are prograde, the satellites beneath are retrograde. The X axis is labelled in Gm (million km) and the fraction of the Hill sphere's (gravitational influence) radius (~116 Gm for Neptune).
Triton, the biggest moon, which follows a retrograde but quasi-circular orbit and is also conjectured to be a captured satellite, is not shown. Nereid, which has a prograde but very eccentric orbit, is believed to have been scattered during Triton's capture.
The mass distribution of the Neptunian moons is the most lopsided of any planet. One moon, Triton, makes up nearly all of the mass of the system, with all other moons together comprising only one third of one percent (see diagram). This may be because the capture of Triton destroyed much of the original Neptunian system.
It is likely that Neptune's inner satellites are not the original bodies that formed with Neptune but accreted rubble from the havoc that was wreaked after Triton's capture. Triton's orbit upon capture would have been highly eccentric, and would have caused chaotic perturbations in the orbits of the original inner Neptunian satellites, causing them to collide and reduce to a disc of rubble. Only after Triton's orbit became circularised did some of the rubble disc re-accrete into the present-day satellites.
The mechanism of Triton's capture has been the subject of several theories over the years. The most recent, as of 2006, postulates that Triton was captured in a three-body encounter. In this scenario, Triton is the surviving member of a binary object[note 1] disrupted by its encounter with Neptune.
Numerical simulations show that a moon discovered in 2002, Halimede, has had a high probability of colliding with Nereid during the lifespan of the system. As both moons appear to have similar (grey) colours, the satellite could be a fragment of Nereid.
|‡ Major moon||♠ Retrograde moons|
The Neptunian moons are listed here by orbital period, from shortest to longest. Triton, which is not only massive enough for its surface to have collapsed into a spheroid, but is comparable in size to our own moon, is highlighted in purple. Irregular (captured) moons are shown in grey; prograde in light grey and retrograde in dark grey. (Triton is also thought to be captured.)
|Order[note 2]||Label[note 3]||Name||Pronunciation||Diameter (km)[note 4]||Mass (×10^16 kg)[note 5]||Semi-major axis (km)||Orbital period (days)||Inclination[note 6]||Eccentricity||Discovery year|
|1||Neptune III||Naiad||ˈneɪəd||66 (96×60×52)||~19||48 227||0.294||4.691°||0.0003||1989|
|2||Neptune IV||Thalassa||θəˈlæsə||82 (108×100×52)||~35||50 075||0.311||0.135°||0.0002||1989|
|3||Neptune V||Despina||dɨsˈpiːnə||150 (180×148×128)||~210||52 526||0.335||0.068°||0.0002||1989|
|4||Neptune VI||Galatea||ˌɡæləˈtiːə||176 (204×184×144)||212||61 953||0.429||0.034°||0.0001||1989|
|5||Neptune VII||Larissa||ləˈrɪsə||194 (216×204×168)||~420||73 548||0.555||0.205°||0.0014||1981|
|6||Neptune VIII||Proteus||ˈproʊtiəs||420 (436 × 416 × 402)||~5 000||117 647||1.122||0.075°||0.0005||1989|
|7||Neptune I||‡♠Triton||ˈtraɪtən||2707||2 140 000||354 800||−5.877||156.865°||0.0||1846|
|8||Neptune II||Nereid||ˈniːriː.ɪd||340||~3 100||5 513 400||360.14||7.090°||0.7507||1949|
|9||Neptune IX||♠Halimede||ˌhælɨˈmiːdiː||62||~16||15 728 000||1 879.71||112.712°||0.2646||2002|
|10||Neptune XI||Sao||ˈseɪ.oʊ||44||~5.8||22 422 000||2 914.07||53.483°||0.1365||2002|
|11||Neptune XII||Laomedeia||ˌleɪ.ɵmɨˈdiːə||42||~5.0||23 571 000||3 167.85||37.874°||0.3969||2002|
|12||Neptune X||♠Psamathe||ˈsæməθiː||38||~3.7||46 695 000||9 115.91||126.312°||0.3809||2003|
|13||Neptune XIII||♠Neso||ˈniːsoʊ||60||~15|| 48 387 000|
- ↑ Binary objects, objects with moons such as the Pluto–Charon system, are quite common among the larger Trans-Neptunian objects.
- ↑ Order refers to the position among other moons with respect to their average distance from Neptune.
- ↑ Label refers to the Roman numeral attributed to each moon in order of their discovery.
- ↑ Diameters with multiple entries such as "60×40×34" reflect that the body is not spherical and that each of its dimensions has been measured well enough.
- ↑ Mass of the small irregular moons (Halimede through Neso) was calculated assuming a density of 1.3 g/cm³. Unless otherwise noted, the uncertainty in the reported masses is not available.
- ↑ Each moon's inclination is given relative to its local Laplace plane. Inclinations greater than 90° indicate retrograde orbits (in the direction opposite to the planet's rotation).
- ↑ Flammarion, Camille (1880). "Astronomie populaire, p. 591". Retrieved on 2007-04-10.
- ↑ "Camile Flammarion". Hellenica. Retrieved on 2008-01-18.
- ↑ Scott S. Sheppard, David C. Jewitt, Jan Kleyna, A Survey for "Normal" Irregular Satellites Around Neptune: Limits to Completeness (preprint)
- ↑ Goldreich, P.; Murray, N.; Longaretti, P. Y.; Banfield, D. Neptune's story, Science, 245, (1989), p. 500-504.
- ↑ D. Banfield and N. Murray (1992). "A dynamical history of the inner Neptunian satellites". Icarus 99: 390. doi:10.1016/0019-1035(92)90155-Z, http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1992Icar...99..390B&db_key=AST&data_type=HTML&format=&high=444b66a47d03051.
- ↑ C.B. Agnor & D.P. Hamilton Neptune's capture of its moon Triton in a binary-planet gravitational encounter, Nature, 441 (2006), pp. 192. (pdf)
- ↑ M.Holman, JJ Kavelaars, B.Gladman, T.Grav, W.Fraser, D.Milisavljevic, P.Nicholson, J.Burns, V.Carruba, J-M.Petit, P.Rousselot, O.Mousis, B.Marsden, R.Jacobson Discovery of five irregular moons of Neptune, Nature, 430 (2004), pp. 865-867. Final preprint(pdf)
- ↑ T.Grav, M.Holman and W.Fraser, Photometry of Irregular Satellites of Uranus and Neptune, The Astrophysical Journal, 613 (2004), pp.L77–L80 (preprint)
- ↑ 9.0 9.1 Jacobson, R.A. (2008) NEP078 - JPL satellite ephemeris
- ↑ 10.0 10.1 "Planet and Satellite Names and Discoverers". Gazetteer of Planetary Nomenclature. USGS Astrogeology (July 21 2006). Retrieved on 2006-08-05.
| http://gravity.wikia.com/wiki/Moons_of_Neptune | 13
18 | The conventional model for galaxy evolution predicts that small galaxies in the early Universe evolved into the massive galaxies of today by coalescing. Nine Lego-like “building block” galaxies initially detected by Hubble likely contributed to the construction of the Universe as we know it. “These are among the lowest mass galaxies ever directly observed in the early Universe” says Nor Pirzkal of the European Space Agency/STScI.
Pirzkal was surprised to find that the galaxies’ estimated masses were so small. Hubble’s cousin observatory, NASA’s Spitzer Space Telescope was called upon to make precise determinations of their masses. The Spitzer observations confirmed that these galaxies are some of the smallest building blocks of the Universe.
These young galaxies offer important new insights into the Universe’s formative years, just one billion years after the Big Bang. Hubble detected sapphire blue stars residing within the nine pristine galaxies. The youthful stars are just a few million years old and are in the process of turning Big Bang elements (hydrogen and helium) into heavier elements. The stars have probably not yet begun to pollute the surrounding space with elemental products forged within their cores.
“While blue light seen by Hubble shows the presence of young stars, it is the absence of infrared light in the sensitive Spitzer images that was conclusive in showing that these are truly young galaxies without an earlier generation of stars,” says Sangeeta Malhotra of Arizona State University in Tempe, USA, one of the investigators.
The galaxies were first identified by James Rhoads of Arizona State University, USA, and Chun Xu of the Shanghai Institute of Technical Physics in Shanghai, China. Three of the galaxies appear to be slightly disrupted – rather than being shaped like rounded blobs, they appear stretched into tadpole-like shapes. This is a sign that they may be interacting and merging with neighbouring galaxies to form larger, cohesive structures.
The galaxies were observed in the Hubble Ultra Deep Field (HUDF) with Hubble’s Advanced Camera for Surveys and the Near Infrared Camera and Multi-Object Spectrometer as well as Spitzer’s Infrared Array Camera and the European Southern Observatory’s Infrared Spectrometer and Array Camera. Seeing and analysing such small galaxies at such a great distance is at the very limit of the capabilities of the most powerful telescopes. Images taken through different colour filters with the ACS were supplemented with exposures taken through a so-called grism which spreads the different colours emitted by the galaxies into short “trails”. The analysis of these trails allows the detection of emission from glowing hydrogen gas, giving both the distance and an estimate of the rate of star formation. These “grism spectra” - taken with Hubble and analysed with software developed at the Space Telescope-European Coordinating Facility in Munich, Germany - can be obtained for objects that are significantly fainter than can be studied spectroscopically with any other current telescope.
| http://phys.org/news108302681.html | 13
14 | What is the ozone hole?
The "ozone hole" is a loss of stratospheric ozone in springtime over Antarctica, peaking in September. The ozone hole area is defined as the size of the region with total ozone below 220 Dobson units (DU). Dobson Units are a unit of measurement that refer to the thickness of the ozone layer in a vertical column from the surface to the top of the atmosphere, a quantity called the "total column ozone amount." Prior to 1979, total column ozone values over Antarctica never fell below 220 DU. The hole has been proven to be a result of human activities--the release of huge quantities of chlorofluorocarbons (CFCs) and other ozone depleting substances into the atmosphere.
Is the ozone hole related to global warming?
Global warming and the ozone hole are not directly linked, and the relationship between the two is complex. Global warming is primarily due to CO2, and ozone depletion is due to CFCs. Even though there is some greenhouse gas effect on stratospheric ozone, the main cause of the ozone hole is the harmful compounds (CFCs) that are released into the atmosphere.
The enhanced greenhouse effect that we're seeing due to a man-made increase in greenhouse gases is acting to warm the troposphere and cool the stratosphere. Colder than normal temperatures in this layer act to deplete ozone. So the cooling in the stratosphere due to global warming will enhance the ozone holes in the Arctic and Antarctic. At the same time, as ozone decreases in the stratosphere, the temperature in that layer cools down even more, which leads to more ozone depletion. This is what's called a "positive feedback."
How big was the 2010 ozone hole, and is it getting bigger?
Every four years, a team of many of the top scientists researching ozone depletion put together a comprehensive summary of the scientific knowledge on the subject, under the auspices of the World Meteorological Organization (WMO). According to their most recent assessment, (WMO, 2006), monthly total column ozone amounts in September and October have continued to be 40 to 50% below pre-ozone-hole values, with up to 70% decreases for periods of a week or so. During the last decade, the average ozone hole area in the spring has increased in size, but not as rapidly as during the 1980s. It is not yet possible to say whether the area of the ozone hole has maximized. However, chlorine in the stratosphere peaked in 2000 and had declined by 3.8% from these peak levels by 2008, so the ozone hole may have seen its maximum size. Annual variations in temperature will probably be the dominant factor in determining differences in size of the ozone hole in the near future, due to the importance of cold-weather Polar Stratospheric Clouds (PSCs) that act as reactive surfaces to accelerate ozone destruction.
The 2010 hole was the tenth smallest since 1979, according to NASA. On September 25, 2010, the hole reached its maximum size of 22 million square kilometers. The 2010 hole was slightly smaller than North America, which is 25 million square kilometers. Record ozone holes were recorded in both 2000 and 2006, when the size of the hole reached 29 million square kilometers. The graph below, taken from NOAA's Climate Prediction Center, compares the 2010 ozone hole size with previous years. The smaller size of the 2010 hole compared to most years in the 2000s is due to the fact that the jet stream was more unstable than usual over Antarctica this September, which allowed very cold air in the so-called "polar vortex" over Antarctica to mix northwards, expelling ozone-deficient air and mixing in ozone-rich air. This also warmed the air over Antarctica, resulting in the formation of fewer Polar Stratospheric Clouds (PSCs) and thus fewer surfaces on which the ozone-destroying chemical reactions can occur.
Has there been ozone loss in places besides Antarctica?
Yes, ozone loss has been reported in the mid and high latitudes in both hemispheres during all seasons (WMO, 2006). Relative to the pre-ozone-hole abundances of 1980, the 2002-2005 losses in total column ozone were:
Other studies have shown the following ozone losses:
In 2011, the Arctic saw record ozone loss according to the World Meteorological Organization (WMO). Weather balloons launched in the Arctic measured the ozone loss at 40%—the previous record loss was 30%. Although there has been international agreement to reduce the consumption of ozone-destroying chemicals, effects from peak usage will continue because these compounds stay in the atmosphere long after they're released. The WMO estimates it will take several decades before we see these harmful compounds reach pre-1980s levels.
Ozone loss in the Arctic is highly dependent on the meteorology, due to the importance of cold-weather Polar Stratospheric Clouds (PSCs) that act as reactive surfaces to accelerate ozone destruction. Some Arctic winters see no ozone loss, and some see extreme loss like that in 2011.
A future Arctic ozone hole similar to that of the Antarctic appears unlikely, due to differences in the meteorology of the polar regions of the northern and southern hemispheres (WMO, 2002). However, a recent model study (Rex et. al., 2004) indicates that future Arctic ozone depletion could be much worse than expected, and that each degree Centigrade of cooling of the Arctic may result in a 4% decrease in ozone. This heightened ozone loss is expected due to an increase in PSCs. The Arctic stratosphere has cooled 3°C in the past 20 years due to the combined effects of ozone loss, greenhouse gas accumulation, and natural variability, and may cool further in the coming decades due to the greenhouse effect (WMO, 2002). An additional major loss of Arctic (and global) ozone could occur as the result of a major volcanic eruption (Tabazadeh, 2002).
Has ozone destruction increased levels of UV-B light at the surface?
Yes, ozone destruction has increased surface levels of UV-B light (the type of UV light that causes skin damage). For each 1% drop in ozone levels, about 1% more UV-B reaches the Earth's surface (WMO, 2002). Increases in UV-B of 6-14% have been measured at many mid and high-latitude sites over the past 20 years (WMO, 2002, McKenzie, 1999). At some sites about half of this increase can be attributed to ozone loss. Changes in cloudiness, surface air pollution, and albedo also strongly influence surface UV-B levels. Increases in UV-B radiation have not been seen in many U.S. cities in the past few decades due to the presence of air pollution aerosol particles, which commonly cause 20% decreases in UV-B radiation in the summer (Wenny et al, 2001).
Source: World Meteorological Organization, Scientific Assessment of Ozone Depletion: 1998, WMO Global Ozone Research and Monitoring Project - Report No. 44, Geneva, 1998.
What are the human health effects of increased UV-B light?
From the outset it should be pointed out that human behavior is of primary importance when considering the health risks of sun exposure. Taking proper precautions, such as covering up exposed skin, using sunscreen, and staying out of the sun during peak sun hours is of far greater significance to health than the increased UV-B due to ozone loss is likely to be.
A reduction in ozone of 1% leads to increases of up to 3% in some forms of non-melanoma skin cancer (UNEP, 1998). It is more difficult to quantify a link between ozone loss and malignant melanoma, which accounts for about 4% of skin cancer cases but causes about 79% of skin cancer deaths. Current research has shown that melanoma can increase with both increased UV-B and UV-A light, but the relationship is not well understood (UNEP, 2002). In the U.S. in 2003, approximately 54,200 persons will have new diagnoses of melanoma and 7,600 will die from the disease, while more than 1 million new cases of the other two skin cancers, basal cell carcinoma and squamous cell carcinoma, will be diagnosed (American Cancer Society, 2002). Worldwide, approximately 66,000 people will die in 2003 from malignant melanoma, according to the World Health Organization. However, the significant rises in skin cancer worldwide can primarily be attributed to human behavioral changes rather than ozone depletion (Urbach, 1999; Staehelin, 1990).
On the positive side, UV light helps produce vitamin D in the skin, which may help against contraction of certain diseases. Multiple sclerosis has been shown to decrease in the white Caucasian population with increasing UV light levels. On the negative side, excessive UV-B exposure depresses the immune system, potentially allowing increased susceptibility to a wide variety of diseases. And in recent years, it has become apparent that UV-B damage to the eye and vision is far more insidious and detrimental than had previously been suspected (UNEP, 2002). Thus, we can expect ozone loss to substantially increase the incidence of cataracts and blindness. A study done for Environment Canada presented to a UN meeting in 1997, estimated that because of the phase-out of CFCs and other ozone depleting substances mandated by the 1987 Montreal Protocol, there will be 19.1 million fewer cases of non-melanoma skin cancer, 1.5 million fewer cases of melanoma, 129 million fewer cases of cataracts, and 330,000 fewer skin cancer deaths worldwide.
Has ozone loss contributed to an observed increase in sunburns and skin cancer in humans?
Yes, Punta Arenas, Chile, the southernmost city in the world (53°S), with a population of 154,000, has regularly seen high levels of UV-B radiation each spring for the past 20 years, when the Antarctic ozone hole has moved over the city (Abarca, 2002). Ozone levels have dropped up to 56%, allowing UV-B radiation more typical of summertime mid-latitude intensities to affect a population unused to such levels of skin-damaging sunshine. Significant increases in sunburns have been observed during many of these low-ozone days. During the spring of 1999, a highly unusual increase in referrals for sunburn occurred in Punta Arenas during specific times when the ozone hole passed over the city. And while most of the worldwide increase in skin cancer rates the past few decades has been attributed to people spending more time outdoors, and the use of tanning businesses (Urbach, 1999), skin cancer cases increased 66% from 1994-2000 compared to 1987-1993 in Punta Arenas, strongly suggesting that ozone depletion was a significant factor.
What is the effect of increased UV-B light on plants?
UV-B light is generally harmful to plants, but sensitivity varies widely and is not well understood. Many species of plants are not UV-B sensitive; others show marked growth reduction and DNA damage under increased UV-B light levels. It is thought that ozone depletion may not have a significant detrimental effect on agricultural crops, as UV-B tolerant varieties of grains could fairly easily be substituted for existing varieties. Natural ecosystems, however, would face a more difficult time adapting. Direct damage to plants from ozone loss has been documented in several studies. For example, data from a spring, 1997 study in Tierra del Fuego, at the southern tip of Argentina, found DNA damage to plants on days the ozone hole was overhead to be 65% higher than on non-ozone-hole days (Rousseaux et. al., 1999).
What is the effect of increased UV-B light on marine life?
UV-B light is generally harmful to marine life, but again the effect is highly variable and not well understood. UV-B radiation can cause damage to the early developmental stages of fish, shrimp, crab, amphibians and other animals (UNEP, 2002). Even at current levels, solar UV-B radiation is a limiting factor in reproductive capacity and larval development, and small increases in UV-B radiation could cause significant population reductions in the animals that eat these smaller creatures. One study done in the waters off Antarctica, where increased UV-B radiation has been measured due to the ozone hole, found a 6-12% decrease in phytoplankton, the organisms that form the base of the food chain in the oceans (Smith et. al., 1990). Since the ozone hole lasts for about 10-12 weeks, this corresponds to an overall phytoplankton decrease of 2-4% for the year.
Is the worldwide decline in amphibians due to ozone depletion?
No. The worldwide decline in amphibians is just that--worldwide. Ozone depletion has not yet affected the tropics (-25° to 25° latitude), and that is where much of the decline in amphibians has been observed. It is possible that ozone depletion in mid and high latitudes has contributed to the decline of amphibians in those areas, but there are no scientific studies that have made a direct link.
Are sheep going blind in Chile?
Yes, but not from ozone depletion! In 1992, The New York Times reported ozone depletion over southern Chile had caused "an increase in Twilight Zone-type reports of sheep and rabbits with cataracts" (Nash, 1992). The story was repeated in many places, including the July 1, 1993 showing of ABC's Prime Time Live. Al Gore's book, Earth in the Balance, stated that "in Patagonia, hunters now report finding blind rabbits; fishermen catch blind salmon" (Gore, 1992). A group at Johns Hopkins has investigated the evidence and attributed the cases of sheep blindness to a local infection ("pink eye") (Pearce, 1993).
What do the skeptics say about the ozone hole?
Ever since the link between CFCs and ozone depletion was proposed in 1974, skeptics have attacked the science behind the link and the policies of controlling CFCs and other ozone depleting substances. We have compiled a detailed analysis of the arguments of the skeptics. It is interesting to note how the skeptics are using the same bag of tricks to cast doubt upon the science behind the global warming debate, and the need to control greenhouse-effect gases.
What are the costs and savings of the CFC phaseout?
The costs have been large, but not as large as initially feared. As the United Nations Environment Programme (UNEP) Economic Options Committee (an expert advisory body) stated in 1994: "Ozone-depleting substance replacement has been more rapid, less expensive, and more innovative than had been anticipated at the beginning of the substitution process. The alternative technologies already adopted have been effective and inexpensive enough that consumers have not yet felt any noticeable impacts (except for an increase in automobile air conditioning service costs)" (UNEP, 1994). A group of over two dozen industry experts estimated the total CFC phaseout cost in industrialized counties at $37 billion to business and industry, and $3 billion to consumers (Vogelsberg, 1997). A study done for Environment Canada presented to a UN meeting in 1997, estimated a total CFC phaseout cost of $235 billion through the year 2060, but economic benefits totaling $459 billion, not including the savings due to decreased health care costs. These savings came from decreased UV exposure to aquatic ecosystems, plants, forests, crops, plastics, paints and other outdoor building materials.
What steps have been taken to save the ozone layer? Are they working?
In 1987, the nations of the world banded together to draft the Montreal Protocol to phase out the production and use of CFCs. The 43 nations that signed the protocol agreed to freeze consumption and production of CFCs at 1986 levels by 1990, reduce them 20% by 1994, and reduce them another 30% by 1999. The alarming loss of ozone in Antarctica and worldwide continued into the 1990's, and additional amendments to further accelerate the CFC phase-out were adopted. With the exception of a very small number of internationally agreed essential uses, CFCs, halons, carbon tetrachloride, and methyl chloroform were all phased out by 1995 in developed countries (developing countries have until 2010 to do so). The pesticide methyl bromide, another significant ozone-depleting substance, was scheduled to be phased out in 2004 in developed countries, but a U.S.-led delaying effort led to a one-year extension until the end of 2005. At least 183 countries are now signatories to the Montreal Protocol.
The Montreal Protocol is working, and ozone depletion due to human effects is expected to start decreasing in the next 10 years. Observations show that levels of ozone-depleting gases are at a maximum now and are beginning to decline (Newchurch et al., 2003). NASA estimates that levels of ozone-depleting substances peaked in 2000 and had fallen by 3.8% by 2008. Provided the Montreal Protocol is followed, the Antarctic ozone hole is expected to disappear by 2050. The U.N. Environment Programme (UNEP) said in August 2006 that the ozone layer would likely return to pre-1980 levels by 2049 over much of Europe, North America, Asia, Australasia, Latin America and Africa. In Antarctica, ozone layer recovery would likely be delayed until 2065.
What replacement chemicals for CFCs have been found? Are they safe?
Hydrofluorocarbons (HFCs), hydrochlorofluorocarbons (HCFCs) and "Greenfreeze" chemicals (hydrocarbons such as cyclopentane and isobutane) have been the primary substitutes. The primary HFC used in automobile air conditioning, HFC-134a, costs about 3-5 times as much as the CFC-12 gas it replaced. A substantial black market in CFCs has resulted.
HCFCs are considered a "transitional" CFC substitute, since they also contribute to ozone depletion (but to a much lesser degree than CFCs). HCFCs are scheduled to be phased out by 2030 in developed nations and 2040 in developing nations, according to the Montreal Protocol. HCFCs (and HFCs) are broken down in the atmosphere into several toxic chemicals, trifluoroacetic acid (TFA) and chlorodifluoroacetic acid (CDFA). Risks to human health and the environment from these chemicals are thought to be minimal (UNEP/WMO, 2002).
HFCs do not cause ozone depletion, but do contribute significantly to global warming. For example, HFC-134a, the new refrigerant of choice in automobile air conditioning systems, is 1300 times more effective over a 100-year period as a greenhouse gas than carbon dioxide. At current rates of HFC manufacture and emission, up to 4% of greenhouse effect warming by the year 2010 may result from HFCs.
"Greenfreeze" hydrocarbon chemicals appear to be the best substitute, as they do not contribute to greenhouse warming, or ozone depletion. The hydrocarbons used are flammable, but the amount used (equivalent to two butane lighters of fluid) and safety engineering considerations have made quieted these concerns. Greenfreeze technology has captured nearly 100% of the home refrigeration market in many countries in Europe, but has not been introduced in North America yet due to product liability concerns and industry resistance.
When was the ozone hole discovered?
Ozone depletion by human-produced CFCs was first hypothesized in 1974 (Molina and Rowland, 1974). The first evidence of ozone depletion was detected by ground-based instruments operated by the British Antarctic Survey at Halley Bay on the Antarctic coast in 1982. The results seemed so improbable that researchers collected data for three more years before finally publishing the first paper documenting the emergence of an ozone hole over Antarctica (Farman, 1985). Subsequent analysis of the data revealed that the hole began to appear in 1977. After the 1985 publication of Farman's paper, the question arose as to why satellite measurements of Antarctic ozone from the Nimbus-7 spacecraft had not found the hole. The satellite data were re-examined, and it was discovered that the computers analyzing the data were programmed to throw out any ozone values below 180 Dobson Units as impossible. Once this problem was corrected, the satellite data clearly confirmed the existence of the hole.
How do CFCs destroy ozone?
CFCs are extremely stable in the lower atmosphere; only a negligible amount is removed by the oceans and soils. However, once CFCs reach the stratosphere, UV light intensities are high enough to break apart the CFC molecule, freeing the chlorine atoms in it. These free chlorine atoms then react with ozone to form oxygen and chlorine monoxide, thereby destroying the ozone molecule. The chlorine atom in the chlorine monoxide molecule can then react with an oxygen atom to free up the chlorine atom again, which can go on to destroy more ozone in what is referred to as a "catalytic reaction":
Cl + O3 -> ClO + O2
ClO + O -> Cl + O2
Thanks to this catalytic cycle, each CFC molecule can destroy up to 100,000 ozone molecules. Bromine atoms can also catalytically destroy ozone, and are about 45 times more effective than chlorine in doing so.
For more details on ozone depletion chemistry, see the Usenet Ozone FAQ.
Are volcanos a major source of chlorine to the stratosphere?
No, volcanos contribute at most just a few percent of the chlorine found in the stratosphere. Direct measurements of the stratospheric chlorine produced by El Chichon, the most important eruption of the 1980's (Mankin and Coffey, 1983), and Pinatubo, the largest volcanic eruption since 1912 (Mankin et al., 1992), found negligible amounts of chlorine injected into the stratosphere.
What is ozone pollution?
Ozone forms in both the upper and the lower atmosphere. Ozone is helpful in the stratosphere, because it absorbs most of the harmful ultraviolet light coming from the sun. Ozone found in the lower atmosphere (troposphere) is harmful: it is the prime ingredient of photochemical smog, and it can irritate the eyes and throat and damage crops. Visit the Weather Underground's ozone pollution page, or our ozone action page for more information.
Where can I go to learn more about the ozone hole?
We found the following sources most helpful when constructing the ozone hole FAQ:
Abarca, J.F, and C.C. Casiccia, "Skin cancer and ultraviolet-B radiation under the Antarctic ozone hole: southern Chile, 1987-2000," Photodermatology, Photoimmunology & Photomedicine, 18, 294, 2002.
American Cancer Society. Cancer facts & figures 2002. Atlanta: American Cancer Society, 2002.
Farman, J.C., B.G. Gardiner, and J.D. Shanklin, "Large losses of total ozone in Antarctica reveal seasonal ClOx/NOx interaction", Nature, 315, 207-210, 1985.
Gore, A., "Earth in the Balance: Ecology and the Human Spirit", Houghton Mifflin, Boston, 1992.
Manins, P., R. Allan, T. Beer, P. Fraser, P. Holper, R. Suppiah, R. and K. Walsh. "Atmosphere, Australia State of the Environment Report 2001 (Theme Report)," CSIRO Publishing and Heritage, Canberra, 2001.
Mankin, W., and M. Coffey, "Increased stratospheric hydrogen chloride in the El Chichon cloud", Science, 226, 170, 1983.
Mankin, W., M. Coffey, and A. Goldman, "Airborne observations of SO2, HCl, and O3 in the stratospheric plume of the Pinatubo volcano in July 1991", Geophys. Res. Lett., 19, 179, 1992.
McKenzie, R., B. Connor, G. Bodeker, "Increased summertime UV radiation in New Zealand in response to ozone loss", Science, 285, 1709-1711, 1999.
Molina, M.J., and F.S. Rowland, Stratospheric Sink for Chlorofluoromethanes: Chlorine atom-catalyzed destruction of ozone, Nature, 249, 810-812, 1974.
Nash, N.C., "Ozone Depletion Threatening Latin Sun Worshipers", New York Times, 27 March 1992, p. A7.
Newchurch, et al., "Evidence for slowdown in stratospheric ozone loss: First stage of ozone recovery", Journal of Geophysical Research, 108, doi: 10.1029/2003JD003471, 2003.
Pearce, F., "Ozone hole 'innocent' of Chile's ills", New Scientist, 1887, 7, 21 Aug. 1993.
Rex, M., et al., "Arctic ozone loss and climate change", Geophys. Res. Lett., 31, L04116, 2004.
Rousseaux, M.C., C.L. Ballare, C.V. Giordano, A.L. Scopel, A.M. Zima, M. Szwarcberg-Bracchitta, P.S. Searles, M.M. Caldwell, S.B. Diaz, "Ozone depletion and UVB radiation: impact on plant DNA damage in southern South America", Proc Natl Acad Sci, 96(26):15310-5, 1999.
Smith, D.A., K. Vodden, L. Rucker, and R. Cunningham, "Global Benefits and Costs of the Montreal Protocol on Substances that Deplete the Ozone Layer", Applied Research Consultants report for Environment Canada, Ottawa, 1997.
Smith, R., B. Prezelin, K. Baker, R. Bidigare, N. Boucher, T. Coley, D. Karentz, S. MacIntyre, H. Matlick, D. Menzies, M. Ondrusek, Z. Wan, and K. Waters, "Ozone depletion: Ultraviolet radiation and phytoplankton biology in Antarctic waters", Science, 255, 952, 1992.
Staehelin, J., M. Blumthaler, W. Ambach, and J. Torhorst, "Skin cancer and the ozone shield", Lancet 336, 502, 1990.
Tabazadeh, A., K. Drdla, M.R. Schoeberl, P. Hamill, and O. B. Toon, "Arctic "ozone hole" in a cold volcanic stratosphere", Proc Natl Acad Sci, 99(5), 2609-12, Mar 5 2002.
United Nations Environmental Programme (UNEP), "1994 Report of the Economics Options Committee for the 1995 Assessment of the Montreal Protocol on Substances that Deplete the Ozone Layer", UNEP, Nairobi, Kenya, 1994.
United Nations Environmental Programme (UNEP), "Environmental Effects of Ozone Depletion: 1998 Assessment", UNEP, Nairobi, Kenya, 1998.
United Nations Environmental Programme (UNEP), "Environmental Effects of Ozone Depletion and its interactions with climate change: 2002 Assessment", UNEP, Nairobi, Kenya, 2002.
Urbach, F. "The cumulative effects of ultraviolet radiation on the skin: Photocarcinogenesis," In: Hawk J, , ed. Photodermatology. Arnold Publishers, 89-102, 1999.
Vogelsberg, F.A., "An industry perspective - lessons learned and the cost of CFC phaseout", HPAC Heating/Piping/AirConditioning, January 1997, 121-128.
Wenny, B.N., V.K. Saxena, and J.E. Frederick, "Aerosol optical depth measurements and their impact on surface levels of ultraviolet-B radiation", J. Geophys. Res., 106, 17311-17319, 2001.
World Meteorological Organization (WMO), "Scientific Assessment of Ozone Depletion: 2002, Global Ozone Research and Monitoring Project - Report #47", WMO, Geneva, Switzerland, 2002.
Young, A.R., L.O. Bjorn, J. Mohan, and W. Nultsch, "Environmental UV Photobiology", Plenum, N.Y. 1993. | http://rss.wunderground.com/resources/climate/holefaq.asp | 13 |
10 | The Galactic Coordinate System
The galactic coordinate system is the key to understanding where objects are located within the Galaxy. It was established in 1958 by the International Astronomical Union and is useful for specifying an object's location relative to the Sun and the galactic core of the Milky Way.
The galactic coordinate system is a 2-D spherical coordinate system with us (or the Sun) at its center. It has latitude and longitude lines, similar to Earth's. In fact, a good analogy is to imagine yourself standing at the center of a hollow Earth looking at the latitude and longitude lines on the Earth's surface. The galactic coordinate system is similar except we are looking out at the celestial sphere.
There is a one-to-one mapping between the galactic coordinate system and the more familiar equatorial coordinate system. Relatively simple equations can be used to convert from one to the other.
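As a rough illustration, here is a minimal Python sketch of the equatorial-to-galactic conversion using the standard J2000 pole and node constants; the function name and the test point are my own choices, not part of the original article.

import math

# J2000 position of the north galactic pole and the galactic longitude of the
# north celestial pole, in degrees (standard IAU-derived values).
RA_NGP, DEC_NGP, L_NCP = 192.85948, 27.12825, 122.93192

def equatorial_to_galactic(ra_deg, dec_deg):
    """Convert J2000 right ascension/declination to galactic (l, b) in degrees."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    ra_ngp, dec_ngp = math.radians(RA_NGP), math.radians(DEC_NGP)

    # Galactic latitude from the spherical law of cosines.
    sin_b = (math.sin(dec) * math.sin(dec_ngp)
             + math.cos(dec) * math.cos(dec_ngp) * math.cos(ra - ra_ngp))
    b = math.asin(sin_b)

    # Galactic longitude measured from the direction of the galactic center.
    y = math.cos(dec) * math.sin(ra - ra_ngp)
    x = (math.sin(dec) * math.cos(dec_ngp)
         - math.cos(dec) * math.sin(dec_ngp) * math.cos(ra - ra_ngp))
    l = (L_NCP - math.degrees(math.atan2(y, x))) % 360.0
    return l, math.degrees(b)

# The galactic center (RA ~266.4°, Dec ~-28.94°) should map to approximately
# l = 0° (equivalently ~360°) and b = 0°.
print(equatorial_to_galactic(266.405, -28.936))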
The galactic equator (i.e., 0º galactic latitude) is coincident with the plane of the Milky Way Galaxy and is shown as the red circle in the image above. Galactic latitude is the angle above or below this plane (e.g. the yellow angle above). Thus, objects with a galactic latitude near 0º will be located within the Milky Way's spiral arms. Objects with a positive galactic latitude will be above the arms in the northern galactic hemisphere.
Galactic longitude is measured from 0º to 360º, counter clockwise as seen from the north galactic pole. 0º galactic longitude is arbitrarily defined as the direction pointing to our galactic center. Within the plane of our galaxy (0º galactic latitude), the main points of longitude and the Milky Way constellations which lie in their directions are as follows:
- 0º is in the direction of Sagittarius
- 90º is in the direction of Cygnus
- 180º is in the direction of the galactic anti-center in Auriga
- 270º is in the direction of Vela | http://www.thinkastronomy.com/M13/Manual/common/galactic_coords.html | 13 |
20 | Temperature and Heat Study Guide (page 2)
In this lesson, we will shift our attention from forces to energy, and we will learn about heat as a form of energy, characterized by temperature. We will also discuss the various types of heat transfer between objects, and the effect of heat transfer on solids, that is, thermal expansion.
Temperature and Temperature Scales
Compared to simpler quantities such as distance and mass, temperature is more difficult to explain because it is more abstract. It is not as visible or tangible a measurement as mass: all else being equal, the larger the object, the greater its mass and volume. Temperature, on the other hand, characterizes a state of the matter, and a thermometer placed in contact with an object at different times can show different values even though the object itself remains the same. Temperature measures the motion of the particles in the matter, and we will learn about this dependence later in the book. An increase in the speed of motion produces an increase in the temperature.
Historically, only a few temperature scales have been defined and adopted, the most common ones being the Celsius, or centigrade scale, and the Fahrenheit scale. Both of these scales depend on some reference points. The Celsius scale defines 0° C as the temperature where water freezes at normal atmospheric pressure, and defines 100°C as the boiling point of water. One one-hundredth of this interval is called a Celsius degree. As you can see, the scale was defined taking into consideration a specific substance—water and its properties at two standard points (0 and 100).
The Fahrenheit scale considers the same specific substance, but the standard points were assigned the values of 32 (freezing) and 212 (boiling). So, not only does the Fahrenheit scale have an offset with respect to Celsius, it is not a centigrade (or 100-grade) scale (212 – 32 = 180). Therefore, the conversion between the two is not a simple scale factor but a linear relation.
If we call the Celsius temperature t(°C) and the Fahrenheit temperature t(°F), then, as we said, there is an offset between the two and a difference in the size of the degrees. The Celsius degree is larger than the Fahrenheit degree: a change of 100 Celsius degrees equals a change of 180 Fahrenheit degrees. Putting the offset and the scale factor together gives t(°F) = (9/5)·t(°C) + 32.
In order to eliminate the dependence on the specific substance and standard points, a more universal scale is needed. The result of that search is the absolute Kelvin scale, and its definition is based on empirical observation of the temperature dependence of the pressure of a gas at a constant volume. By lowering the temperature of a gas, the pressure is shown to decrease linearly. The pressure cannot be measured for very low temperatures where the gas liquefies, but if we extrapolate the data, we can obtain the point where the pressure becomes zero.
This point is called the zero absolute temperature, and the scale defined is Kelvin temperature scale. We can think about the Kelvin scale as an offset of the Celsius scale by –273.15 degrees; hence, the conversion between the two scales is stated simply as:
T(K) = t(°C) + 273.15
When we talk about the state of an object, we say the temperature is so many degrees Celsius (°C) or Fahrenheit (°F). When we talk about a process in which the temperature increases or decreases, we say the temperature changed by so many Celsius or Fahrenheit degrees.
Find the conversion from Fahrenheit to Kelvin scale. Find the value of 32° F in Kelvin degrees and check your work.
We have already determined that the conversion between Fahrenheit and Celsius is:
t(°F) = (9/5)·t(°C) + 32
We can use this equation to solve for t(°C) as a function of t(°F) and then use the result in T(K):
t(°C) = (5/9)·(t(°F) – 32)
With this expression, we go back to T(K) as a function of t(°C):
T(K) = (5/9)·(t(°F) – 32) + 273.15
t(°F) = 32°F
T(K) = 273.15K
t(°C) = 0°C
The result agrees with our initial discussion about the Kelvin scale and its direct connection with the Celsius scale.
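A quick way to check this kind of conversion is to code it up; the snippet below is a minimal Python sketch (the function names are my own, not part of the study guide) that reproduces the 32°F = 273.15 K result.

def fahrenheit_to_celsius(t_f):
    # t(°C) = (5/9)·(t(°F) - 32)
    return (5.0 / 9.0) * (t_f - 32.0)

def celsius_to_kelvin(t_c):
    # T(K) = t(°C) + 273.15
    return t_c + 273.15

def fahrenheit_to_kelvin(t_f):
    return celsius_to_kelvin(fahrenheit_to_celsius(t_f))

print(fahrenheit_to_kelvin(32.0))   # 273.15
print(fahrenheit_to_kelvin(212.0))  # 373.15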
For a system to change its temperature, the object exchanges energy with its surroundings. We call this energy heat, and we measure heat in joules (from James Joule, 1818–1889).
We have seen previously that temperature is specific to a certain state. Therefore, we call temperature a quantity of state. In contrast, heat is an energy flow established when there is a thermal contact between two different temperature states. We call heat a quantity of process.
Heat is related to a measure of the change in the motion and interaction of the particles, which is called internal energy. Internal energy is the sum of the kinetic energy and the potential energy of the particles. Kinetic energy measures the translational, rotational, and vibrational motion of the atoms and molecules in the object, whereas potential energy measures the interaction between the particles. When the temperature increases, the kinetic energy increases. One way to accomplish this process is by exchanging heat with the object.
Heat will flow freely from a high-temperature object to a low-temperature object because of the difference in temperature.
We have seen why heat flows; let's learn now about the means of this flow. On an atomic scale, materials in different states are built differently: solids have an internal structure, and the atoms and molecules are bound through strong bonds. Disturbing the bonds in one place will create a disturbance in the lattice, and so we end up with a propagation of the initial effect. Liquids and gases are different in that they lack this rigid, ordered structure (in gases, ordered structure is essentially absent). Hence, a disturbance on one side of a container of fluid will spread differently than it does in a solid.
There are three types of heat transfer: conduction, convection, and radiation.
In the case of conduction, the heat is transferred through the material itself, and a difference in temperature between different sides of the object is required. On an atomic scale, the particles on the side of the object where the temperature is higher have greater kinetic energy. As they move, they collide with slower particles and, in the process, lose some of their kinetic energy to the slower-moving particles, which then speed up.
Metals are good thermal conductors because of the free electrons that move through the lattice. Other materials such as glass, plastic, and paper are poor conductors due to the weak interaction between their constituent particles. Still other materials, such as gases, are insulators due to the large distance between particles.
The situation is different in convection, where the transfer of heat happens due to movement of the substance through space. Consider forcing warm air into a room through floor-level inlets. What happens in time? The warm air rises, and the cold air sinks. Warm air has atoms and molecules that move faster, and they are farther apart; therefore, the density is less than the density of cold air. And, as we have seen in the last lesson, a lower density material will be buoyed up.
A different process that does not require contact is called radiation. With this transfer, heating is accomplished by electromagnetic radiation. Every object with a temperature above absolute zero radiates infrared radiation, which in turn is absorbed by other objects and increases their temperature.
Heat and Temperature Change
An object that is warmed up will experience an increase in temperature. The variation of temperature is different depending on the nature of the object. To characterize this dependence, we define two coefficients: the heat capacity and the specific heat. The coefficients of heat capacity and specific heat for different materials are tabulated.
If a quantity of heat Q is transferred to a substance, thereby increasing its temperature by ΔT = T2 – T1, the heat capacity is defined to be:
C = Q/ΔT
This coefficient refers to the object as a whole (it is not normalized by mass) and is measured in J/K or J/°C.
If a quantity of heat Q is transferred to a substance of mass m, increasing its temperature by ΔT = T2 – T1, the specific heat is defined to be:
c = Q/(m·ΔT)
We can also rewrite the equation as:
Q = m · c · (T2 – T1)
Using this equation, we define a new unit for heat called the calorie. One calorie (1 cal) is the heat necessary to be transferred to 1 g of water to increase its temperature by one Celsius degree (from 14.5 to 15.5°C). The conversion from calories to joules is:
1 cal = 4.186 J
The nutritional calorie that you see on food labels is actually 1,000 calories and is symbolized by C.
1 C = 1,000 cal
Another usual unit is one British thermal unit (BTU), which is the heat necessary to be transferred to 1 pound of water to raise its temperature from 63 to 64° F.
Some containers are built to be good insulators. The heat transferred to a fluid inside is completely exchanged with the fluid, and none is lost to the surroundings. Such containers are called calorimeters, and the study of heat exchange in these systems is called calorimetry. In this case, the heat coming from the hot reservoir is completely transferred to the cold reservoir:
Qhot = –Qcold
The minus sign indicates that the system that cools down loses energy (heat is coming out), and therefore its heat transfer is negative. This is called the calorimetric equation.
Some of the most usual materials encountered and their specific heats are shown in Table 11.1.
An aluminum piece of 400 g is placed in a container that holds 100 g water at 80° C. The water cools down to 20° C. In the process, the aluminum piece gets warmer and reaches a temperature of 45° C. What was the initial temperature of the aluminum piece? Consider the only heat exchange to be between the aluminum and the water.
First, convert all quantities to SI units. Next, set up the equations and solve for the initial temperature.
- mal = 400 g = 400 g ·1 kg/1,000 g = 0.4 kg
- mwater = 100 g = 100 g · 1 kg/1,000 g = 0.1 kg
- twater initial = 80° C
- twater final = 20° C
- tAl final = 45° C
- tal initial = ?
Because the system is thermally isolated, the heat released by the water is absorbed by the aluminum.
- Qwater = –Qaluminum
- Working with magnitudes, the heat released by the water is:
- Qwater = mwater · cwater · (twater initial – twater final)
- Qwater = 0.1 kg · 4,186 J/(kg·°C) · (80°C – 20°C)
- Qwater = 0.1 · 4,186 · 60 J ≈ 2.5 × 10⁴ J
- This heat is absorbed by the aluminum:
- Qaluminum = mAl · cAl · (tAl final – tAl initial)
- 2.5 × 10⁴ J = 0.4 kg · 900 J/(kg·°C) · (45°C – tAl initial)
- 2.5 × 10⁴ J / (0.4 · 900 J/°C) ≈ 70°C = 45°C – tAl initial
- tAl initial ≈ 45°C – 70°C = –25°C
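As a cross-check, here is a minimal Python sketch of the same calorimetry balance (the variable names and the use of Python are my own, not part of the study guide); it solves the heat balance for the unknown initial temperature.

# Known quantities (SI units)
m_water, c_water = 0.1, 4186.0     # kg, J/(kg·°C)
m_al, c_al = 0.4, 900.0            # kg, J/(kg·°C)
t_water_i, t_water_f = 80.0, 20.0  # °C
t_al_f = 45.0                      # °C

# Heat released by the water (magnitude)
q_released = m_water * c_water * (t_water_i - t_water_f)

# q_released = m_al * c_al * (t_al_f - t_al_i)  ->  solve for t_al_i
t_al_i = t_al_f - q_released / (m_al * c_al)

print(round(q_released), "J released by the water")
print(round(t_al_i, 1), "°C initial aluminum temperature")  # about -24.8 °C, i.e. roughly -25 °C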
Heat and Phase Change
Is the heat absorbed or released by an object always changing its temperature? The answer is no. Think, for example, about boiling water: once the water boils, another phenomenon takes place, called vaporization. So, in this example, the heat absorbed is used first to increase the water temperature and then to change the phase from liquid to gas. This transformation is called a phase change, and in our example, the pressure is considered constant because the transformation takes place in the open air. The heat exchanged while the liquid is undergoing a phase change is called latent heat. Measurements show that during a phase change, the temperature stays constant until the transformation to the other phase is complete throughout the bulk of the substance.
There are several processes involving phase changes: melting (solid to liquid) and freezing (liquid to solid), together referred to as fusion; vaporization (liquid to gas) and condensation (gas to liquid); and sublimation and deposition for the changes from solid to gas and from gas to solid.
The heat exchanged in a phase change has the same value regardless of the direction of change: The coefficient of latent heat of melting is equal to the coefficient of latent heat of freezing.
In many books, these processes are summarized in a graph (see Figure 11.2) of the temperature dependence of the absorbed energy (heat).
If you interpret the graph, you will see that the slopes of the segments for ice and for steam are almost the same. The slope is set by the ratio of temperature change to heat added, which is determined by the specific heat. If you check the table, you will see that the two specific heats are indeed close in value. How about the liquid water segment? The slope is shallower in this case, because water's specific heat is larger, as the numbers in the table also show.
In a calorimetry measurement, the heat for the hot reservoir and/or the cold reservoir might contain terms of the form m · c · ΔT, as we have already seen. However, there may also be new terms involving a process with no temperature change, a process called a phase transition. Let's try to define a measure of the transition from the point of view of heat transfer.
Coefficient of Latent Heat
If a quantity of heat Q is transferred to a substance of mass m and the substance undergoes a phase change, then the coefficient of latent heat L is given by:
L = Q/m, so that Q = m · L.
This coefficient depends on the nature of the substance and on the type of phase change occurring.
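To see how latent heat combines with specific heat, here is a small illustrative Python calculation (a sketch of my own, not from the study guide) for melting a block of ice at 0°C and then warming the resulting water to 20°C, assuming a latent heat of fusion of about 3.34 × 10⁵ J/kg for ice.

m = 0.5                 # kg of ice at 0 °C
L_fusion = 3.34e5       # J/kg, assumed latent heat of fusion of ice
c_water = 4186.0        # J/(kg·°C)

q_melt = m * L_fusion            # phase change at a constant 0 °C
q_warm = m * c_water * (20 - 0)  # sensible heating of the melt water

print(q_melt, "J to melt the ice")    # 167000 J
print(q_warm, "J to warm the water")  # 41860 J
print(q_melt + q_warm, "J total")     # 208860 J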
| http://www.education.com/study-help/article/temperature-heat/?page=2 | 13
29 | © 1996 by Karl Hahn
1) In the first expression we have the sum of two functions, each of which can be expressed as xⁿ, one where n = 4 and one where n = 3. We already know how to find the derivative of each of the summands. The sum rule says we can consider the summands separately, then add the derivatives of each to get the derivative of the sum. So the answer for the first expression is: f'(x) = 4x³ + 3x²
In the second expression we have the sum of an xⁿ and an mx + b. In the first summand, n = 2. In the second summand, m = -7 and b = 12. Again the sum rule says we can consider the two summands separately, then add their derivatives to get the derivative of the sum. So the answer for the second expression is: f'(x) = 2x - 7
2) This one should have been easy. The text of the problem observed that a constant function is just a straight line function with a slope of zero (i.e., m = 0). We already know that the derivative of any straight line function is exactly its slope, m. When the slope is zero, so must be the derivative.
3) A very quick argument for the first part is that:
g(x) = n f(x) = f(x) + f(x) + ... [n times] ... + f(x)
Hence, by the sum rule,
g'(x) = f'(x) + f'(x) + ... [n times] ... + f'(x) = n f'(x)
A purist, however, would do it by induction. To get onto the first rung of the ladder, we observe that for n = 1:
g(x) = (1) f(x) = f(x)
hence
g'(x) = f'(x) = (1) f'(x)
So it works for n = 1. Now we demonstrate that if it's true for the nth rung, then it must be true for the n + 1st rung. So if:
gₙ'(x) = n f'(x)
whenever
gₙ(x) = n f(x)
then
gₙ₊₁(x) = (n + 1) f(x) = (n f(x)) + f(x)
That the derivative of n f(x) is n f'(x) is given by assumption. And we know by the sum rule that:
gₙ₊₁'(x) = (n f'(x)) + f'(x) = (n + 1) f'(x)
And you have it proved by induction. But you only need to do it that way if you are performing for a stickler on formality.
As for the second part, you can observe that:
n u(x) = f(x)
We know from the first part of the problem that:
n u'(x) = f'(x)
Simply divide both sides by n, and your proof is complete.
Now, using the results from both of the parts of the problem, can you show that if:
u(x) = (n/m) f(x)
and both n and m are counting numbers then:
u'(x) = (n/m) f'(x)
And using an argument similar to the one we used for the second part of this problem, can you prove that the derivative of the difference is always equal to the difference of the derivatives? Hint: if u(x) = f(x) - g(x), then u(x) + g(x) = f(x).
Exercise 4: In each of the functions given, you must try to break the expression up into products and sums, find the derivatives of the simpler functions, then combine them according to the sum and product rules and their derivatives.
4a) Here you are asked to find the derivative of the product of two straight line functions (both of which are in the standard mx + b form). So let f(x) = m₁x + b₁, and let g(x) = m₂x + b₂. Both f(x) and g(x) are straight line functions, and we know that the derivative of a straight line function is always the slope of the line. Hence we know that f'(x) = m₁ and g'(x) = m₂. Now simply apply the product rule (equation 4.2-20b), substituting into the rule the expressions you have for f(x), g(x), f'(x), and g'(x). You should get:
u'(x) = (m₁ (m₂x + b₂)) + (m₂ (m₁x + b₁))
Ordinarily, I wouldn't bother to multiply this out, but I'd like to take this opportunity to demonstrate a method of cross-check. Multiplying the above expression out yields:
u'(x) = m₁m₂x + m₁b₂ + m₁m₂x + m₂b₁ = 2m₁m₂x + m₁b₂ + m₂b₁
If we multiply out the original u(x), we get:
u(x) = m₁m₂x² + m₁b₂x + m₂b₁x + b₁b₂
Try using the sum rule, the rule about xⁿ, and the rule about multiplying by a constant on this version of u(x) to demonstrate that the derivative obtained this way is identical to the one we obtained the other way.
4b) Here you are given the product of two quadratics. We do this the same way as we did 4a. Let f(x) = x² + 2x + 1 and let g(x) = x² - 3x + 2. We can find the derivative of each by applying the xⁿ rule and the sum rule. Hence f'(x) = 2x + 2 and g'(x) = 2x - 3. Now simply apply the product rule (equation 4.2-20b), substituting in these expressions for f(x), g(x), f'(x), and g'(x). Doing that yields:
u'(x) = ((2x + 2) (x² - 3x + 2)) + ((2x - 3) (x² + 2x + 1))
which is the answer. If you want to multiply out u'(x) and u(x) to do the cross check, by all means do so. It's good exercise.
As further exercise, observe that the f(x) and g(x) we used in this problem are both, themselves, products, if you factor them:
f(x) = x² + 2x + 1 = (x + 1) (x + 1)
g(x) = x² - 3x + 2 = (x - 1) (x - 2)
Try applying the product rule to both of those and make sure you come up with the same expressions for f'(x) and g'(x) as we did in the first part of this problem (i.e., 4b).
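As a quick sanity check on 4a and 4b, you can let a computer algebra system verify that the product rule gives the same answer as expanding first and differentiating term by term; the snippet below uses Python's SymPy library (my own choice of tool, not something the lesson assumes).

import sympy as sp

x = sp.symbols('x')
f = x**2 + 2*x + 1
g = x**2 - 3*x + 2

product_rule = sp.diff(f, x)*g + f*sp.diff(g, x)   # (2x + 2)(x² - 3x + 2) + (2x - 3)(x² + 2x + 1)
direct       = sp.diff(sp.expand(f*g), x)          # expand u(x) first, then differentiate

print(sp.simplify(product_rule - direct))  # prints 0, so the two answers agree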
4c) This is still the same problem as the two previous ones (i.e., 4a and 4b). Again, we find that u(x) is a product, and we set f(x) and g(x) to the two factors respectively. So f(x) = x - 1 and g(x) = 4x⁴ - 7x³ + 2x² - 5x + 8. We find that f'(x) = 1, which ought to be pretty easy for you by now. To take the derivative of g(x) you have to see it as the sum of a bunch of xⁿ terms, each multiplied by a constant. So apply the xⁿ rule, the rule for multiplying by a constant, and the sum rule, and you get g'(x) = 16x³ - 21x² + 4x - 5. When you apply the product rule (equation 4.2-20b) to the f(x), g(x), f'(x), and g'(x) we get here, you find that the answer is:
u'(x) = ((1) (4x⁴ - 7x³ + 2x² - 5x + 8)) + ((x - 1) (16x³ - 21x² + 4x - 5))
4d) In this one you are given the product of two functions, one, x², explicitly, the other, f(x), as simply a symbol for any function. So let g(x) = x². Using material we have already covered, we can determine that g'(x) = 2x. So now we apply the product rule (equation 4.2-20b), substituting our expressions for g(x) and g'(x). We can't substitute f(x) or f'(x) because the problem doesn't give anything to substitute. We get as an answer:
u'(x) = (2x f(x)) + (x² f'(x))
4e) This problem is the difference of two products. You have to apply the product rule (equation 4.2-20b) to each product individually to find the derivative of each product. We have already seen that the derivative of the difference is the same as the difference of the derivatives. So to get the answer, simply take the difference of the two derivatives that you got using the product rule. I won't work the details for you here. You should be getting better at this by now. The answer is:
u'(x) = f(x) + x f'(x) - 3x² g(x) - x³ g'(x)
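Answers like 4d and 4e can also be checked symbolically with abstract functions; the sketch below (again using Python's SymPy, my own choice of tool) differentiates u(x) = x·f(x) - x³·g(x), the expression whose derivative matches the answer given above.

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
g = sp.Function('g')

u = x * f(x) - x**3 * g(x)
print(sp.diff(u, x))
# The result contains the terms f(x) + x*f'(x) - 3*x**2*g(x) - x**3*g'(x) (term order may vary).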
| http://karlscalculus.org/l4_2.html | 13
24 | Use Coordinates of Points to Find Values of Trigonometry Functions
One way to find the values of the trig functions for angles is to use the coordinates of points on a circle that has its center at the origin. Letting the positive x-axis be the initial side of an angle, you can use the coordinates of the point where the terminal side intersects with the circle to determine the trigonometry functions.
The preceding figure shows a circle with a radius of r that has an angle drawn in standard position.
The equation of the circle is x² + y² = r². Based on this equation and the coordinates (x,y) of the point where the terminal side of the angle intersects the circle, the six trig functions for angle θ are defined as follows: sin θ = y/r, cos θ = x/r, tan θ = y/x, csc θ = r/y, sec θ = r/x, and cot θ = x/y.
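As a rough illustration of these definitions (a Python sketch of my own, not from the article), the snippet below computes the six function values directly from a point's coordinates; it assumes x and y are both nonzero so that every ratio is defined.

import math

def trig_from_point(x, y):
    # r is the distance from the origin to (x, y), via the Pythagorean theorem
    r = math.hypot(x, y)
    return {
        "sin": y / r, "cos": x / r, "tan": y / x,
        "csc": r / y, "sec": r / x, "cot": x / y,
    }

# The point (3, 4) lies on a circle of radius 5, so sin θ = 0.8 and cos θ = 0.6.
print(trig_from_point(3, 4))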
You can see where these definitions come from if you picture a right triangle formed by dropping a perpendicular segment from the point (x,y) to the x-axis.
The preceding figure shows such a right triangle. Remember that the x-value is to the right (or left) of the origin, and the y-value is above (or below) the x-axis — and use those values as lengths of the triangle’s sides. Therefore, the side opposite angle θ is y, the value of the y-coordinate. The adjacent side is x, the value of the x-coordinate. You find r, the radius of the circle, using the Pythagorean theorem. | http://www.dummies.com/how-to/content/use-coordinates-of-points-to-find-values-of-trigon.navId-420744.html | 13 |
13 | - Concepts as mental representations, where concepts are entities that exist in the brain.
- Concepts as abilities, where concepts are abilities peculiar to cognitive agents.
- Concepts as abstract objects, where objects are the constituents of propositions that mediate between thought, language, and referents.
In a physicalist theory of mind, a concept is a mental representation, which the brain uses to denote a class of things in the world. This is to say that it is, literally, a symbol or group of symbols made from the physical material of the brain. Concepts are mental representations that allow us to draw appropriate inferences about the type of entities we encounter in our everyday lives. Concepts do not encompass all mental representations, but are merely a subset of them. The use of concepts is necessary to cognitive processes such as categorization, memory, decision making, learning, and inference.
There is debate as to the relationship between concepts and natural language. However, it is necessary at least to begin by understanding that the concept "dog" is philosophically distinct from the things in the world grouped by this concept - or the reference class or extension. Concepts that can be equated to a single word are called "lexical concepts".
Notable theories on the structure of concepts
The classical theory of concepts, also referred to as the empiricist theory of concepts, is the oldest theory about the structure of concepts (it can be traced back to Aristotle), and was prominently held until the 1970s. The classical theory of concepts says that concepts have a definitional structure. Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition. Features entailed by the definition of a concept must be both necessary and sufficient for membership in the class of things covered by a particular concept. A feature is considered necessary if every member of the denoted class has that feature. A feature is considered sufficient if something has all the parts required by the definition. For example, the classic example bachelor is said to be defined by unmarried and man. An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition. Another key part of this theory is that it obeys the law of the excluded middle, which means that there are no partial members of a class, you are either in or out.
The classical theory persisted for so long unquestioned because it seemed intuitively correct and has great explanatory power. It can explain how concepts would be acquired, how we use them to categorize and how we use the structure of a concept to determine its referent class. In fact, for many years it was one of the major activities in philosophy - concept analysis. Concept analysis is the act of trying to articulate the necessary and sufficient conditions for the membership in the referent class of a concept.
Arguments against the classical theory
Given that most later theories of concepts were born out of the rejection of some or all of the classical theory, it seems appropriate to give an account of what might be wrong with this theory. In the 20th century, philosophers and psychologists such as Wittgenstein and Rosch argued against the classical theory. There are six primary arguments, summarized as follows:
- It seems that there simply are no definitions - especially those based in sensory primitive concepts.
- It seems as though there can be cases where our ignorance or error about a class means that we either don't know the definition of a concept, or have incorrect notions about what a definition of a particular concept might entail.
- Quine's argument against analyticity in Two Dogmas of Empiricism also holds as an argument against definitions.
- Some concepts have fuzzy membership. There are items for which it is vague whether or not they fall into (or out of) a particular referent class. This is not possible in the classical theory as everything has equal and full membership.
- Rosch found typicality effects which cannot be explained by the classical theory of concepts; these sparked the prototype theory. See below.
- Psychological experiments show no evidence for our using concepts as strict definitions.
Prototype theory came out of problems with the classical view of conceptual structure. Prototype theory says that concepts specify properties that members of a class tend to possess, rather than must possess. Wittgenstein, Rosch, Mervis, Berlin, Anglin, and Posner are a few of the key proponents and creators of this theory. Wittgenstein describes the relationship between members of a class as family resemblances. There are not necessarily any necessary conditions for membership; a dog can still be a dog with only three legs. This view is particularly supported by psychological experimental evidence for prototypicality effects. Participants willingly and consistently rate objects in categories like 'vegetable' or 'furniture' as more or less typical of that class. It seems that our categories are fuzzy psychologically, and so this structure has explanatory power. We can judge an item's membership in the referent class of a concept by comparing it to the typical member--the most central member of the concept. If it is similar enough in the relevant ways, it will be cognitively admitted as a member of the relevant class of entities. Rosch suggests that every category is represented by a central exemplar which embodies all or the maximum possible number of features of a given category.
Theory-theory is a reaction to the previous two theories and develops them further. This theory postulates that categorization by concepts is something like scientific theorizing. Concepts are not learned in isolation, but rather are learned as a part of our experiences with the world around us. In this sense, concepts' structure relies on their relationships to other concepts as mandated by a particular mental theory about the state of the world. How this is supposed to work is a little less clear than in the previous two theories, but the theory remains prominent and notable. It is supposed to explain some of the issues of ignorance and error that come up in the prototype and classical theories, since concepts that are structured around each other seem to account for errors such as classifying a whale as a fish (a misconception that came from an incorrect theory about what a whale is like, combined with our theory of what a fish is). When we learn that a whale is not a fish, we are recognizing that whales don't in fact fit the theory we had about what makes something a fish. In this sense, the Theory-Theory of concepts is responding to some of the issues of prototype theory and classical theory.
Issues in concept theory
A priori concepts
Kant declared that human minds possess pure or a priori concepts. Instead of being abstracted from individual perceptions, like empirical concepts, they originate in the mind itself. He called these concepts categories, in the sense of the word that means predicate, attribute, characteristic, or quality. But these pure categories are predicates of things in general, not of a particular thing. According to Kant, there are 12 categories that constitute the understanding of phenomenal objects. Each category is that one predicate which is common to multiple empirical concepts. In order to explain how an a priori concept can relate to individual phenomena, in a manner analogous to an a posteriori concept, Kant employed the technical concept of the schema. Immanuel Kant held that the account of the concept as an abstraction of experience is only partly correct. He called those concepts that result from abstraction "a posteriori concepts" (meaning concepts that arise out of experience). An empirical or an a posteriori concept is a general representation (Vorstellung) or non-specific thought of that which is common to several specific perceived objects (Logic, I, 1., §1, Note 1)
A concept is a common feature or characteristic. Kant investigated the way that empirical a posteriori concepts are created.
The logical acts of the understanding by which concepts are generated as to their form are:
- comparison, i.e., the likening of mental images to one another in relation to the unity of consciousness;
- reflection, i.e., the going back over different mental images, how they can be comprehended in one consciousness; and finally
- abstraction or the segregation of everything else by which the mental images differ ...
In order to make our mental images into concepts, one must thus be able to compare, reflect, and abstract, for these three logical operations of the understanding are essential and general conditions of generating any concept whatever. For example, I see a fir, a willow, and a linden. In firstly comparing these objects, I notice that they are different from one another in respect of trunk, branches, leaves, and the like; further, however, I reflect only on what they have in common, the trunk, the branches, the leaves themselves, and abstract from their size, shape, and so forth; thus I gain a concept of a tree.
— Logic, §6
In cognitive linguistics, abstract concepts are transformations of concrete concepts derived from embodied experience. The mechanism of transformation is structural mapping, in which properties of two or more source domains are selectively mapped onto a blended space (Fauconnier & Turner, 1995; see conceptual blending). A common class of blends are metaphors. This theory contrasts with the rationalist view that concepts are perceptions (or recollections, in Plato's term) of an independently existing world of ideas, in that it denies the existence of any such realm. It also contrasts with the empiricist view that concepts are abstract generalizations of individual experiences, because the contingent and bodily experience is preserved in a concept, and not abstracted away. While the perspective is compatible with Jamesian pragmatism, the notion of the transformation of embodied concepts through structural mapping makes a distinct contribution to the problem of concept formation.
Plato was the starkest proponent of the realist thesis of universal concepts. By his view, concepts (and ideas in general) are innate ideas that were instantiations of a transcendental world of pure forms that lay behind the veil of the physical world. In this way, universals were explained as transcendent objects. Needless to say this form of realism was tied deeply with Plato's ontological projects. This remark on Plato is not of merely historical interest. For example, the view that numbers are Platonic objects was revived by Kurt Gödel as a result of certain puzzles that he took to arise from the phenomenological accounts.
Gottlob Frege, founder of the analytic tradition in philosophy, famously argued for the analysis of language in terms of sense and reference. For him, the sense of an expression in language describes a certain state of affairs in the world, namely, the way that some object is presented. Since many commentators view the notion of sense as identical to the notion of concept, and Frege regards senses as the linguistic representations of states of affairs in the world, it seems to follow that we may understand concepts as the manner in which we grasp the world. Accordingly, concepts (as senses) have an ontological status (Margolis: 7)
According to Carl Benjamin Boyer, in the introduction to his The History of the Calculus and its Conceptual Development, concepts in calculus do not refer to perceptions. As long as the concepts are useful and mutually compatible, they are accepted on their own. For example, the concepts of the derivative and the integral are not considered to refer to spatial or temporal perceptions of the external world of experience. Neither are they related in any way to mysterious limits in which quantities are on the verge of nascence or evanescence, that is, coming into or going out of existence. The abstract concepts are now considered to be totally autonomous, even though they originated from the process of abstracting or taking away qualities from perceptions until only the common, essential attributes remained.
The term "concept" is traced back to 1554–60 (Latin conceptum - "something conceived"), but what is today termed "the classical theory of concepts" is the theory of Aristotle on the definition of terms. The meaning of "concept" is explored in mainstream information science,cognitive science, metaphysics, and philosophy of mind. In computer and information science contexts, especially, the term 'concept' is often used in unclear or inconsistent ways.
- Class (philosophy)
- Concept and object
- Concept learning
- Concept map
- Conceptual art
- Conceptual blending
- Conceptual clustering
- Conceptual framework
- Conceptual history
- Conceptual model
- Conversation Theory
- Conveyed concept
- Formal concept analysis
- Fuzzy concept
- Hypostatic abstraction
- Notion (philosophy)
- Object (philosophy)
- Schema (Kant)
- Social construction
- Symbol grounding problem
- Eric Margolis; Stephen Lawrence. "Concepts". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab at Stanford University. Retrieved 6 November 2012.
- Carey, Susan (2009). The Origin of Concepts. Oxford University Press. ISBN 978-0-19-536763-8.
- Murphy, Gregory (2002). The Big Book of Concepts. Massachusetts Institute of Technology. ISBN 0-262-13409-8.
- Stephen Lawrence; Eric Margolis (1999). Concepts and Cognitive Science. in Concepts: Core Readings: Massachusetts Institute of Technology. pp. 3–83. ISBN 978-0-262-13353-1.
- Brown, Roger (1978). A New Paradigm of Reference. Academic Press Inc. pp. 159–166. ISBN 0-12-497750-2.
- 'Gödel's Rationalism', Stanford Encyclopedia of Philosophy
- The American Heritage Dictionary of the English Language: Fourth Edition, http://www.bartleby.com/61/49/C0544900.html
- Stock, W.G. (2010). Concepts and semantic relations in information science. Journal of the American Society for Information Science and Technology, 61(10), 1951-1969.
- Hjørland, B. (2009). Concept Theory. Journal of the American Society for Information Science and Technology, 60(8), 1519–1536
- Smith, B. (2004). Beyond Concepts, or: Ontology as Reality Representation, Formal Ontology and Information Systems. Proceedings of the Third International Conference (FOIS 2004), Amsterdam: IOS Press, 2004, 73–84.
- Armstrong, S. L., Gleitman, L. R., & Gleitman, H. (1999). what some concepts might not be. In E. Margolis, & S. Lawrence, Concepts (pp. 225–261). Massachusetts: MIT press.
- Carey, S. (1999). knowledge acquisition: enrichment or conceptual change? In E. Margolis, & S. Lawrence, concepts: core readings (pp. 459–489). Massachusetts: MIT press.
- Fodor, J. A., Garrett, M. F., Walker, E. C., & Parkes, C. H. (1999). against definitions. In E. Margolis, & S. Lawrence, concepts: core readings (pp. 491–513). Massachusetts: MIT press.
- Fodor, J., & LePore, E. (1996). the pet fish and the red Herring: why concept still can't be prototypes. cognition, 253-270.
- Hume, D. (1739). book one part one: of the understanding of ideas, their origin, composition, connexion, abstraction etc. In D. Hume, a treatise of human nature. England.
- Murphy, G. (2004). Chapter 2. In G. Murphy, a big book of concepts (pp. 11 – 41). Massachusetts: MIT press.
- Murphy, G., & Medin, D. (1999). the role of theories in conceptual coherence. In E. Margolis, & S. Lawrence, concepts: core readings (pp. 425–459). Massachusetts: MIT press.
- Prinz, J. J. (2002). Desiderata on a Theory of Concepts. In J. J. Prinz, Furnishing the Mind: Concepts and their Perceptual Basis (pp. 1–23). Massachusetts: MIT press.
- Putnam, H. (1999). is semantics possible? In E. Margolis, & S. Lawrence, concepts: core readings (pp. 177–189). Massachusetts: MIT press.
- Quine, W. (1999). two dogmas of empiricism. In E. Margolis, & S. Lawrence, concepts: core readings (pp. 153–171). Massachusetts: MIT press.
- Rey, G. (1999). Concepts and Stereotypes. In E. Margolis, & S. Laurence (Eds.), Concepts: Core Readings (pp. 279–301). Cambridge, Massachusetts: MIT Press.
- Rosch, E. (1977). Classification of real-world objects: Origins and representations in cognition. In P. Johnson-Laird, & P. Wason, Thinking: Readings in Cognitive Science (pp. 212–223). Cambridge: Cambridge University Press.
- Rosch, E. (1999). Principles of Categorization. In E. Margolis, & S. Laurence (Eds.), Concepts: Core Readings (pp. 189–206). Cambridge, Massachusetts: MIT Press.
- Wittgenstein, L. (1999). philosophical investigations: sections 65-78. In E. Margolis, & S. Lawrence, concepts: core readings (pp. 171–175). Massachusetts: MIT press.
- The History of Calculus and its Conceptual Development, Carl Benjamin Boyer, Dover Publications, ISBN 0-486-60509-4
- The Writings of William James, University of Chicago Press, ISBN 0-226-39188-4
- Logic, Immanuel Kant, Dover Publications, ISBN 0-486-25650-2
- A System of Logic, John Stuart Mill, University Press of the Pacific, ISBN 1-4102-0252-6
- Parerga and Paralipomena, Arthur Schopenhauer, Volume I, Oxford University Press, ISBN 0-19-824508-4
- What is Philosophy?, Gilles Deleuze and Félix Guattari
- Kant's Metaphysic of Experience, H. J. Paton, London: Allen & Unwin, 1936
- Conceptual Integration Networks. Gilles Fauconnier and Mark Turner, 1998. Cognitive Science. Volume 22, number 2 (April–June 1998), pages 133-187.
- The Portable Nietzsche, Penguin Books, 1982, ISBN 0-14-015062-5
- Stephen Laurence and Eric Margolis "Concepts and Cognitive Science". In Concepts: Core Readings, MIT Press pp. 3–81, 1999.
- Birger Hjørland. (2009). Concept Theory. Journal of the American Society for Information Science and Technology, 60(8), 1519–1536
- Georgij Yu. Somov (2010). Concepts and Senses in Visual Art: Through the example of analysis of some works by Bruegel the Elder. Semiotica 182 (1/4), 475–506.
- Daltrozzo J, Vion-Dury J, Schön D. (2010). Music and Concepts. Horizons in Neuroscience Research 4: 157-167.
- Concept at PhilPapers
- Concept entry in the Stanford Encyclopedia of Philosophy
- Concept at the Indiana Philosophy Ontology Project
- Concept entry in the Internet Encyclopedia of Philosophy
|Look up concept in Wiktionary, the free dictionary.|
|Wikisource has the text of the 1911 Encyclopædia Britannica article Concept.|
- E. Margolis and S. Lawrence (2006), Concepts entry in the Stanford Encyclopedia of Philosophy
- Blending and Conceptual Integration
- Conceptual Science and Mathematical Permutations
- Concept Mobiles Latest concepts
- v:Conceptualize: A Wikiversity Learning Project
- Concept simultaneously translated in several languages and meanings
| http://en.mobile.wikipedia.org/wiki/Concept | 13
11 | The monsoon trough is a portion of the Intertropical Convergence Zone as depicted by a line on a weather map showing the locations of minimum sea level pressure, and as such, is a convergence zone between the wind patterns of the southern and northern hemispheres. Westerly monsoon winds lie in its equatorward portion while easterly trade winds exist poleward of the trough. Right along its axis, heavy rains can be found which usher in the peak of a location's respective rainy season. As it passes poleward of a location, hot and dry conditions develop. The monsoon trough plays a role in creating many of the world's rainforests.
The term "monsoon trough" is most commonly used in monsoonal regions of the Western Pacific such as Asia and Australia. The migration of the ITCZ/monsoon trough into a landmass heralds the beginning of the annual rainy season during summer months. Depressions and tropical cyclones often form in the vicinity of the monsoon trough, with each capable of producing a year's worth of rainfall in a relatively short time frame.
Monsoon troughing in the western Pacific reaches its poleward zenith in latitude during the late summer, when the wintertime surface ridge in the opposite hemisphere is the strongest. It can reach as far as the 40th parallel in East Asia during August and the 20th parallel in Australia during February. Its poleward progression is accelerated by the onset of the summer monsoon, which is characterized by the development of lower air pressure over the warmest part of the various continents. In the southern hemisphere, the monsoon trough associated with the Australian monsoon reaches its most southerly latitude in February, oriented along a west-northwest/east-southeast axis.
Effect of wind surges
Increases in the relative vorticity, or spin, with the monsoon trough are normally a product of increased wind convergence within the convergence zone of the monsoon trough. Wind surges can lead to this increase in convergence. A strengthening or equatorward movement in the subtropical ridge can cause a strengthening of a monsoon trough as a wind surge moves towards the location of the monsoon trough. As fronts move through the subtropics and tropics of one hemisphere during their winter, normally as shear lines when their temperature gradient becomes minimal, wind surges can cross the equator in oceanic regions and enhance a monsoon trough in the other hemisphere's summer. A key way of detecting whether a wind surge has reached a monsoon trough is the formation of a burst of thunderstorms within the monsoon trough.
Embedded depressions
If a circulation forms within the monsoon trough, it is able to compete with the neighboring thermal low over the continent, and a wind surge will occur at its periphery. Such a broad circulation within a monsoon trough is known as a monsoon depression. In the northern hemisphere, monsoon depressions are generally asymmetric and tend to have their strongest winds on their eastern periphery. Light and variable winds cover a large area near their center, while bands of showers and thunderstorms develop within their area of circulation.
The presence of an upper level jet stream poleward and west of the system can enhance its development by leading to increased diverging air aloft over the monsoon depression, which leads to a corresponding drop in surface pressure. Even though these systems can develop over land, the outer portions of monsoon depressions are similar to tropical cyclones. In India, for example, 6 to 7 monsoon depressions move across the country yearly, and their numbers within the Bay of Bengal increase during July and August of El Niño events. Monsoon depressions are efficient rainfall producers, and can cause a year's worth of rainfall when they move through drier areas such as the outback of Australia.
Monsoon depressions often strengthen into tropical cyclones. For example, a monsoon trough that formed in the Philippine Sea sprouted three tropical cyclones: Typhoon Odessa, Typhoon Pat, and Tropical Storm Ruby. In June 1994, a monsoon trough over the South China Sea was associated with an embedded depression, which became Tropical Storm Russ.
Role in rainy season
Since the monsoon trough is an area of convergence in the wind pattern, and an elongated area of low pressure at the surface, the trough focuses low level moisture and is defined by one or more elongated bands of thunderstorms when viewing satellite imagery. Its abrupt movement to the north between May and June is coincident with the beginning of the monsoon regime and rainy seasons across South and East Asia. This convergence zone has been linked to prolonged heavy rain events in the Yangtze river as well as northern China. Its presence has also been linked to the peak of the rainy season in locations within Australia. As it progresses poleward of a particular location, clear, hot, and dry conditions develop as winds become westerly. Many of the world's rainforests are associated with these climatological low pressure systems.
Role in tropical cyclogenesis
A monsoon trough is a significant genesis region for tropical cyclones. Vorticity-rich low-level environments, with significant low-level spin, lead to a better than average chance of tropical cyclone formation due to their inherent rotation. This is because a pre-existing near-surface disturbance with sufficient spin and convergence is one of the six requirements for tropical cyclogenesis. There appears to be a 15-25 day cycle in thunderstorm activity associated with the monsoon trough, which is roughly half the wavelength of the Madden-Julian Oscillation, or MJO. This mirrors tropical cyclone genesis near these features, as genesis clusters into 2–3 weeks of activity followed by 2–3 weeks of inactivity. Tropical cyclones can form in outbreaks around these features under special circumstances, with each new cyclone tending to form poleward and west of the previous one. This is different from the Atlantic Ocean, where tropical cyclones mainly form from tropical waves that move offshore from Africa, although 2010's Tropical Storm Nicole may have been an exception to this rule. Eastern Pacific Ocean tropical cyclone formation shows a hybrid of these two mechanisms.
Whenever the monsoon trough on the eastern side of the summertime Asian monsoon is in its normal orientation (oriented east-southeast to west-northwest), tropical cyclones along its periphery will move with a westward motion. If it is reverse oriented, or oriented southwest to northeast, tropical cyclones will move more poleward. Tropical cyclone tracks with S shapes tend to be associated with reverse-oriented monsoon troughs. The South Pacific convergence zone and South American convergence zone are generally reverse oriented. The failure of the monsoon trough, or ITCZ, to move south of the equator in the eastern Pacific Ocean and Atlantic Ocean during the southern hemisphere summer is considered one of the reasons that tropical cyclones normally do not form in those regions. It has also been noted that when the monsoon trough lies near 20 degrees north latitude in the Pacific, the frequency of tropical cyclones is 2 to 3 times greater than when it lies closer to 10 degrees north.
- "Monsoon trough". Glossary of Meteorology. American Meteorological Society. Retrieved 2009-06-04.
- Bin Wang. The Asian Monsoon. Retrieved on 2008-05-03.
- World Meteorological Organization. Severe Weather Information Centre. Retrieved on 2008-05-03.
- National Centre for Medium Range Forecasting. Chapter-II Monsoon-2004: Onset, Advancement and Circulation Features. Retrieved on 2008-05-03.
- Australian Broadcasting Corporation. Monsoon. Retrieved on 2008-05-03.
- Dr. Alex DeCaria. Lesson 4 – Seasonal-mean Wind Fields. Retrieved on 2008-05-03.
- U. S. Navy. 1.2 Pacific Ocean Surface Streamline Pattern. Retrieved on 2006-11-26.
- Chih-Lyeu Chen. Effects of the Northeast Monsoon on the Equatorial Westerlies Over Indonesia. Retrieved on 2008-05-03.
- U. S. Navy. SECTION 3. DYNAMIC CONTRIBUTORS TO TROPICAL CYCLONE FORMATION. Retrieved on 2006-11-26.
- Chip Guard. Climate Variability on CNMI. Retrieved on 2008-05-03.
- Sixiong Zhao and Graham A. Mills. A Study of a Monsoon Depression Bringing Record Rainfall over Australia. Part II: Synoptic–Diagnostic Description. Retrieved on 2008-05-03.
- N.E. Davidson and G.J. Holland. A Diagnostic Analysis of Two Intense Monsoon Depressions over Australia. Retrieved on 2008-05-03.
- O. P. Singh, Tariq Masood Ali Khan, and Md. Sazedur Rahman. Impact of Southern Oscillation on the Frequency of Monsoon Depressions in the Bay of Bengal. Retrieved on 2008-05-03.
- Bureau of Meteorology. TWP-ICE Synoptic Overview, 01 February 2006. Retrieved on 2008-05-03.
- Bureau of Meteorology. Climate of Giles. Retrieved on 2008-05-03.
- School of Ocean and Earth Science and Technology at the University of Hawai'i. Pacific ENSO Update: 4th Quarter 2001 - Vol. 7 No. 4. Retrieved on 2008-05-03.
- Hobgood (2008). Global Pattern of Surface Pressure and Wind. Ohio State University. Retrieved on 2009-03-08.
- Christopher Landsea. Climate Variability of Tropical Cyclones: Past, Present and Future. Retrieved on 2006-11-26.
- Patrick A. Harr. Tropical Cyclone Formation/Structure/Motion Studies. Retrieved on 2006-11-26.
- Joint Typhoon Warning Center. Typhoon Polly. Retrieved on 2006-11-26.
- Mark A. Lander. Specific Tropical Cyclone Track Types and Unusual Tropical Cyclone Motions Associated with a Reverse-Oriented Monsoon Trough in the Western North Pacific. Retrieved on 2006-11-26. | http://en.wikipedia.org/wiki/Monsoon_trough | 13 |
10 | Section 6 continues looking at data collection methods. This page looks at focus groups.
Read about the toolkit.
Focus groups can be a useful evaluation method for collecting qualitative data from group discussions. This webpage on focus groups addresses:
- What’s a focus group?
- When to use a focus group
- What happens in a focus group
What’s a focus group?
Focus groups are usually composed of individuals who are similar to one another on one or more factors of importance (e.g., parents of children with learning disabilities; high school special education teachers).
Ideally, participants should be “typical” or representative or key informants of the broader target audience you want to learn from. However, focus group findings cannot really be generalized to the larger target population.
When to use a focus group
Focus groups are most effectively used when:
- planning and designing program activities,
- conducting needs assessments, and
- obtaining information about experiences that individuals have with your program activities and dissemination efforts.
What happens in a focus group?
In a focus group, a moderator follows a predetermined guide to direct a discussion among 5-10 people with the purpose of collecting in-depth qualitative information about the group members’:
- opinions and suggestions,
- experiences, and
- resource needs on a defined topic or issue.
Focus groups obtain data with open-ended questions, in which participants influence and are influenced by the discussion within the group.
Focus groups are structured with an interview protocol or questioning route, in which questions are arranged in a natural and logical sequence. Often:
The beginning section of the protocol is intentionally broad and less structured, with a goal of learning about participants’ general perspectives.
The middle section of the protocol is usually more structured, with the goal of addressing the topics more systematically.
The final section tends to be narrower and is usually the most structured.
It is usually advisable to conduct more than one focus group to learn about a topic; two or more groups may be needed to ensure that a full spectrum of views and opinions is obtained.
Advantages of using a focus group
- Can provide insights about what participants think, as well as why they think it.
- Can reveal consensus and diversity about participants' needs, preferences, assumptions, and experiences.
- Allows for group interaction such that participants are able to build on each other's ideas and comments, which can provide an in-depth view not attainable from questioning individuals one at a time.
- Unexpected comments and perspectives can often be explored easily.
- Can often be planned and organized more quickly, and produce information faster, than some other evaluation techniques – especially telephone and mailed questionnaires.
Disadvantages of using a focus group
- Samples of participants are typically small and thus may not be very representative of the larger target audience.
- The logistics of gathering participants together in one place at the same time may be challenging and somewhat costly.
- More outspoken individuals can dominate the discussion, making viewpoints and contributions from less assertive participants difficult to assess.
- The quality of the discussion and the usefulness of information depend much on the skills of the moderator. The moderator's job is to both encourage discussion and maintain focus. Too much moderator control may result in not obtaining participants' input and perspectives, while too little control may result in the discussion veering off topic.
- Can generate a large amount of qualitative data which can be difficult, complicated and resource demanding to analyze.
- Analysis of the data collected may be more open to subjective, biased interpretation than is the case with quantitative data.
Select participants who represent the target population, and who are comparable or somewhat similar on important demographic characteristics. For example, you might design a focus group to include only elementary school special education teachers to provide input and perspectives about their professional development needs. Or, you might want to conduct a focus group of parents of children with autism to identify effective ways to disseminate resources to them. Homogeneous groups can really help to create a sense of comfort, trust and compatibility among participants – and thus provide more useful input.
Limit the group size to between 5-10 individuals. This size generally facilitates opportunities for all to participate, while also providing diversity of input. Consider a smaller group when you need to obtain more depth and detail, or if participants are very involved with a topic and will likely have a lot to contribute.
Multiple focus groups? Sometimes project staff may want to conduct multiple focus groups in order to obtain adequate input and perspectives to answer key evaluation questions. Conducting a single focus group is often not sufficient; at minimum, a second group should be conducted to check on the consistency of findings obtained from the first group.
The focus group protocol should comprise a mix of general and more specific questions. If questions are too general, participants’ responses may not be adequately detailed, clear or useful. On the other hand, if questions are too specific, responses may not provide enough information about the main topics of interest. Follow-up probes for most focus group questions are often needed to clarify a question and have participants elaborate on their responses.
Select a moderator who has good group processing and interpersonal skills, and who reflects a non-judgmental approach. Usually, a moderator should represent the demographics of the participants; for example, a focus group of parents of children with severe cognitive disabilities should probably be moderated by another parent with a child with similar disabilities.
Digital or tape recording (either audio or video) is often used as the primary source of focus group data collection. However, recording can add significant costs (e.g., transcription service) and personnel time (e.g., detailed review and analysis of lengthy transcripts). As an alternative to digital or tape recording, an individual could serve as an observer and recorder, making sure to capture with detailed notes the comments and input provided during the discussion.
This webpage is an excerpt from the evaluation toolkit produced by NICHCY. The suggested citation is:
To the entire toolkit
Sawyer, R. (2012). Toolkit for the OSEP TA & D network on how to evaluate dissemination: A component of the dissemination initiative. Washington, DC: National Dissemination Center for Children with Disabilities.
To this section/webpage on focus groups
Sawyer, R. (2012). Focus groups. In Toolkit for the OSEP TA & D network on how to evaluate dissemination: A component of the dissemination initiative (pp. 13-14). Washington, DC: National Dissemination Center for Children with Disabilities.
What section of the toolkit would you like to read now?
Focus Groups (you’re already here!) | http://nichcy.org/dissemination/evaltoolkit/focusgroups | 13 |
40 | Basics of Math and Physics
These subjects are usually studied by undergraduate students of sciences and engineering. However, a basic understanding of them is required if you want a deep understanding of any computational physical simulation.
The text below is not intended to be a reference on these subjects. Its main purpose is just to provide the user of Blender some fundamental insight into what is actually happening inside the physical simulations of Blender. If you wish to learn more about any of the topics discussed, there are some useful links at the bottom of this page.
The formal description of what vectors are and how to use them (mathematically) is the core of Linear Algebra. However, the definition of vectors in Linear Algebra is abstract and somewhat obscure, and it is not intended for direct geometrical, practical use.
In this document we will use the analytical geometry approach, which defines vectors as "arrows" in 3d space. Examples of physical vectors are velocity, force, and acceleration. Physical quantities that are not vectorial are said to be Scalar. Examples of scalar quantities are energy, mass, and time.
Some properties that summarize vectors:
- One vector may be broken down into its components on the main axes.
- Two vectors are equal if, and only if, all their components are equal.
- The sum/difference of two vectors is equal to the sum/difference of their components.
- Multiplication/division of vectors is not the same as with "common" numbers.
There are some types of operation called products, but none of them is simply the product of the corresponding components, as you might think. These will not be discussed here.
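A minimal Python sketch (not part of the original text; the numbers are arbitrary) of breaking vectors into components and adding them component-wise:

```python
import math

# A 3D vector stored as its components on the main axes (x, y, z).
v1 = (3.0, 4.0, 0.0)
v2 = (1.0, -2.0, 5.0)

# Two vectors are equal if, and only if, all their components are equal.
print(v1 == v2)                               # False

# The sum of two vectors is the sum of their components.
v_sum = tuple(a + b for a, b in zip(v1, v2))  # (4.0, 2.0, 5.0)

# The length (magnitude) of a vector, computed from its components.
length = math.sqrt(sum(c * c for c in v1))    # 5.0
print(v_sum, length)
```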
Particles are the most basic type of body you can create in Physics. They have no size, and therefore cannot be rotated. The law that governs the movement of particles is Newton's Second Law: f=ma (force equals mass times acceleration). This is not the complete form of Newton's law, but the simplified form is sufficient for calculating particle motion. The vector sum of all the forces acting on the particle equals the mass times the acceleration, and so it has the same direction and orientation as the acceleration. The branch of Physics that is dedicated to this subject is called Classical Mechanics.
Using vector notation, it is simple to solve particle motion equations on a computer even if there are multiple forces of different types. Examples of these forces include Blender "force fields" and damping, a force that is proportional to speed (the higher the speed, the higher the force). As particle motion is so simple to solve, Blender is able to calculate the position of thousands of particles almost in real time when animating.
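As a rough illustration of how a computer can advance a particle using f = ma, here is a generic explicit-Euler step in Python with gravity plus a damping force proportional to speed; this is a sketch of the general technique, not Blender's actual particle code:

```python
# Explicit Euler integration of f = m*a for a single particle (illustrative only).
mass = 0.5                      # kg
pos = [0.0, 0.0, 10.0]          # position in metres
vel = [2.0, 0.0, 0.0]           # velocity in m/s
gravity = [0.0, 0.0, -9.81]     # gravitational acceleration in m/s^2
damping = 0.1                   # damping coefficient: force proportional to speed
dt = 0.01                       # time step in seconds

for step in range(1000):
    # Vector sum of all forces: gravity plus damping (which opposes the velocity).
    force = [mass * g - damping * v for g, v in zip(gravity, vel)]
    accel = [f / mass for f in force]                 # a = f / m
    vel = [v + a * dt for v, a in zip(vel, accel)]    # update velocity
    pos = [p + v * dt for p, v in zip(pos, vel)]      # update position

print(pos, vel)
```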
Rigid Body simulation
Rigid body means the body is not deformable, i.e. it cannot stretch, shrink, etc. The main difference from particle simulation is that now our objects are allowed to rotate, and they have a size and a volume.
The equation that governs rotational motion is Τ = I α. Torque equals moment of inertia times angular acceleration. Now we need a definition of each of these words:
- A force may cause a torque. The torque it causes is the vectorial cross product of the component of the force perpendicular to the axis you are evaluating by the distance from the point of the application of the force to that axis. Also, torques are vectors. A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
The "moment arm" is the perpendicular distance from the axis of rotation to the line of action of the force F applied at position r. The problem with this definition is that it gives only the magnitude of the torque and not its direction, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and the torque will be a maximum for the given force. In that case the magnitude of the torque is simply the moment arm times the force, Τ = rF.
For instance, it is much easier to close a door by pushing on the handle than by pushing in the middle of the door, because pushing on the handle increases the distance from the point where you apply the force to the axis of rotation.
- Moment of Inertia is a measure of how difficult it is to rotate the body. It is proportional to the mass and depends on the geometry of the object. Given a fixed volume, a sphere possesses the smallest moment of inertia possible.
- Angular acceleration: a measure of how quickly the speed of rotation changes (a short numerical sketch of these quantities follows this list).
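A small numerical sketch (made-up numbers, and a scalar moment of inertia for simplicity; real engines use a full inertia tensor) of torque as the cross product of r and F, and of angular acceleration from Τ = I α:

```python
import numpy as np

r = np.array([0.8, 0.0, 0.0])   # m, from the rotation axis to the point where the force acts
F = np.array([0.0, 5.0, 0.0])   # N, the applied force

torque = np.cross(r, F)         # N*m, perpendicular to both r and F
I = 2.0                         # kg*m^2, moment of inertia about the axis
alpha = torque / I              # rad/s^2, angular acceleration from torque = I * alpha

print(torque, alpha)            # [0. 0. 4.] and [0. 0. 2.]
```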
The programs devoted to delivering fast and accurate simulation of rigid body dynamics are often called physics engines or game engines. Blender's own game engine uses the Bullet physics engine. All simulations in the Bullet engine, if well designed, run in real time, except for those with a very high number of objects present.
In the near future, it is expected that computers used for gaming will have a Physics Processing Unit (PPU) dedicated to these calculations as a separate card, much as graphics processing was moved from the CPU to video cards (Graphics Processing Units, or GPUs).
Physics of deformable bodies
The method most computers use to simulate deformable 2D or 3D bodies is to subdivide (automatically or manually) a body into cells and then, assuming the properties inside a single cell to be an interpolation of the properties at its corners, to solve the equations at the boundaries of the cells. If you use a large number of cells (in other words, a highly subdivided mesh), you can get very realistic results.
A common application of this method, together with some additional hypotheses, leads us into a wide field of engineering which nowadays is called Finite Element Analysis.
In Blender, the simulation of deformable bodies is somewhat similar to this, but the equations are simplified for speed. If we enable an object as a soft body, Blender assumes that all faces are cells and all vertices are masses, with the edges acting as springs. A 3x3 grid of vertices (a mesh plane in Blender) thus becomes a set of Soft Body cells.
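To make the vertices-as-masses, edges-as-springs idea concrete, here is a tiny generic mass-spring step in Python for a single edge; it illustrates the general technique only and is not Blender's actual soft-body solver:

```python
import math

# Two vertices (masses) joined by one edge (spring) -- the simplest possible soft body.
pos = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]    # current vertex positions
vel = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]    # current vertex velocities
mass = 1.0
rest_length = 1.0                            # the edge's undeformed length
stiffness = 50.0                             # spring constant
dt = 0.01

for step in range(100):
    # Spring force along the edge, proportional to how far it is stretched or compressed.
    d = [b - a for a, b in zip(pos[0], pos[1])]
    length = math.sqrt(sum(c * c for c in d))
    direction = [c / length for c in d]
    f = stiffness * (length - rest_length)       # positive when the edge is stretched
    forces = [[ f * c for c in direction],       # pulls vertex 0 toward vertex 1
              [-f * c for c in direction]]       # pulls vertex 1 toward vertex 0
    for i in range(2):
        vel[i] = [v + (fc / mass) * dt for v, fc in zip(vel[i], forces[i])]
        pos[i] = [p + v * dt for p, v in zip(pos[i], vel[i])]

print(pos)   # the two vertices oscillate about the rest length (no damping here)
```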
Fluid simulation
The behaviour of a fluid, turbulent or not, is completely described in most situations by a set of equations called the Navier-Stokes equations. However, these equations cannot, in general, be solved by hand: no exact solution is known for the equations in their complete form, only for a few very special cases, and none of those cases includes turbulence.
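For reference, the incompressible form of these equations is shown below (a standard textbook statement, not quoted from this page), with u the velocity field, p the pressure, ρ the density, ν the kinematic viscosity and f any external body force:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0$$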
Currently, the most direct way of solving these equations consists of iterative numerical solving, refining the answer until it is as close to reality as we want. This is called DNS (Direct Numerical Simulation). However, DNS requires enormous computational power, and even today (mid-2006) only supercomputers or very large clusters of computers can use DNS with some success. If correctly applied, however, DNS returns the best and most trusted results of all methods, from the smallest distances at which the results are meaningful to the simulation up to the largest, the scale of the objects being simulated.
So we need to approximate our model further in order to do less calculation. Instead of considering the fluid a continuum, we discretize it; by discretizing we mean dividing it into cells. Inside a cell, properties such as velocity, pressure and density are all considered to be the same, so we only have to solve equations on its borders. We also discretize time, i.e., only certain instants are calculated. However, we need another equation that deals with this discrete problem. This equation is called the Lattice-Boltzmann equation, and it is consistent with the Navier-Stokes equations.
There is also one more optimization done in Blender: the use of adaptive grids. In a region far from interfaces, instead of using a tiny cell, we use a larger cell. This greatly decreases calculation time (up to 4 times faster) without loss of quality. This optimization is responsible for finding places where a larger cell can be used without disturbing the results, and for deciding when to switch back to smaller cells in those places.
Boundaries: the boundaries (domain, obstacles) count as cells in the method.
- No slip: The fluid cells near the surface of the boundaries are not allowed to move at all, having zero velocity.
- Free slip: The fluid cells are allowed to move freely. If the calculation indicates that the fluid in a cell would move into the boundary cell, its velocity vector is inverted (see the sketch below).
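A minimal Python sketch (a generic illustration, not Blender code) of how the two boundary treatments differ for a fluid cell touching a wall whose normal points along +x; here "free slip" is read as inverting the velocity component that points into the wall:

```python
def apply_boundary(velocity, mode):
    """velocity: (vx, vy, vz) of a fluid cell next to a wall whose normal is +x."""
    vx, vy, vz = velocity
    if mode == "no_slip":
        return (0.0, 0.0, 0.0)      # fluid at the boundary is not allowed to move at all
    if mode == "free_slip":
        if vx < 0.0:                # the cell would move into the boundary cell
            vx = -vx                # invert that component so the flow slides along the wall
        return (vx, vy, vz)
    raise ValueError("unknown boundary mode: " + mode)

print(apply_boundary((-1.0, 2.0, 0.0), "no_slip"))    # (0.0, 0.0, 0.0)
print(apply_boundary((-1.0, 2.0, 0.0), "free_slip"))  # (1.0, 2.0, 0.0)
```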
- Classical Mechanics
Rigid Body simulation
Finite element analysis
- Finite Element Analysis
- Finite Element Analysis example | http://wiki.blender.org/index.php/Doc:2.4/Tutorials/Physics/BSoD/Math | 13 |
17 | Apr. 10, 2009 Two places on opposite sides of Earth may hold the secret to how the moon was born. NASA's twin Solar Terrestrial Relations Observatory (STEREO) spacecraft are about to enter these zones, known as the L4 and L5 Lagrangian points, each centered about 93 million miles away along Earth's orbit.
As rare as free parking in New York City, L4 and L5 are among the special points in our solar system around which spacecraft and other objects can loiter. They are where the gravitational pull of a nearby planet or the sun balances the forces from the object's orbital motion. Such points closer to Earth are sometimes used as spaceship "parking lots", like the L1 point a million miles away in the direction of the sun. They are officially called Libration points or Lagrangian points after Joseph-Louis Lagrange, an Italian-French mathematician who helped discover them.
L4 and L5 are where an object's motion can be balanced by the combined gravity of the sun and Earth. "These places may hold small asteroids, which could be leftovers from a Mars-sized planet that formed billions of years ago," said Michael Kaiser, Project Scientist for STEREO at NASA's Goddard Space Flight Center in Greenbelt, Md. "According to Edward Belbruno and Richard Gott at Princeton University, about 4.5 billion years ago when the planets were still growing, this hypothetical world, called Theia, may have been nudged out of L4 or L5 by the increasing gravity of the other developing planets like Venus and sent on a collision course with Earth. The resulting impact blasted the outer layers of Theia and Earth into orbit, which eventually coalesced under their own gravity to form the moon."
This theory is a modification of the "giant impact" theory of the moon's origin, which has become the dominant theory because it explains some puzzling properties of the moon, such as its relatively small iron core. According to giant impact, at the time of the collision, the two planets were large enough to be molten, so heavier elements, like iron, sank to their centers to form their cores.
The impact stripped away the outer layers of the two worlds, which contained mostly lighter elements, like silicon. Since the moon formed from this material, it is iron-poor.
STEREO will look for asteroids with a wide-field-of-view telescope that's part of the Sun Earth Connection Coronal and Heliospheric Investigation instrument. Any asteroid will probably appear as just a point of light. Like a picky person circling the mall for the perfect parking space, the asteroids orbit the L4 or L5 points. The team will be able to tell if a dot is an asteroid because it will shift its position against stars in the background as it moves in its orbit. The team is inviting the public to participate in the search by viewing the data and filing a report online.
Kaiser said, "If we discover the asteroids have the same composition as the Earth and moon, it will support Belbruno and Gott's version of the giant impact theory. The asteroids themselves could well be left-over from the formation of the solar system. Also, the L4/L5 regions might be the home of future Earth-impacting asteroids."
Analyses of lunar rocks brought to Earth by the Apollo missions reveal that they have the same isotopes (heavier versions of an element) as terrestrial rocks. Scientists believe that the sun and the worlds of our solar system formed out of a cloud of gas and dust that collapsed under its gravity. The composition of this primordial cloud changed with temperature. Since the temperature decreased with distance from the sun, whatever created the moon must have formed in the same orbital location as Earth in order for them to have the same isotope composition.
In a planetary version of "the rich get richer", Earth's gravity should have swept up most of the material in its orbit, leaving too little to create our large moon or another planet like Theia. "However, computer models by Belbruno and Gott indicate that Theia could have grown large enough to produce the moon if it formed in the L4 or L5 regions, where the balance of forces allowed enough material to accumulate," said Kaiser.
The STEREO spacecraft are designed to give 3D views of space weather by observing the sun from two points of view and combining the images in the same way your eyes work together to give a 3D view of the world. STEREO "A" is moving slightly ahead of Earth and will pass through L4, and STEREO "B" is moving slightly behind Earth and will pass through L5. "Taking the time to observe L4 and L5 is kind of cool because it's free. We're going through there anyway -- we're moving too fast to get stuck," said Kaiser. "In fact, after we pass through these regions, we will see them all the time because our instruments will be looking back through them to observe the sun – they will just happen to be in our field of view."
Although L4 and L5 are just points mathematically, their region of influence is huge – about 50 million miles along the direction of Earth's orbit, and 10 million miles along the direction of the sun. It will take several months for STEREO to pass through them, with STEREO A making its closest pass to L4 in September, and STEREO B making its closest pass to L5 in October.
"L4 or L5 are excellent places to observe space weather. With both the sun and Earth in view, we could track solar storms and watch them evolve as they move toward Earth. Also, since we could see sides of the sun not visible from Earth, we would have a few days warning before stormy regions on the solar surface rotate to become directed at Earth," said Kaiser.
| http://www.sciencedaily.com/releases/2009/04/090409153020.htm | 13
36 | All observations in astronomy are based on light emitted from stars and galaxies and, according to the general theory of relativity, the light will be affected by gravity. At the same time, all interpretations in astronomy are based on the correctness of the theory of relativity, but it has never before been possible to test Einstein's theory of gravity on scales larger than the solar system. Now astrophysicists at the Dark Cosmology Centre at the Niels Bohr Institute have managed to measure how the light is affected by gravity on its way out of galaxy clusters. The observations confirm the theoretical predictions. The results have been published in the journal Nature.
Observations of large distances in the universe are based on measurements of the redshift, which is a phenomenon where the wavelength of the light from distant galaxies is shifted more and more towards the red with greater distance. The redshift indicates how much the universe has expanded from when the light left until it was measured on Earth. Furthermore, according to Einstein's general theory of relativity, the light and thus the redshift is also affected by the gravity from large masses like galaxy clusters and causes a gravitational redshift of the light. But the gravitational influence of light has never before been measured on a cosmological scale.
"It is really wonderful. We live in an era with the technological ability to actually measure such phenomena as cosmological gravitational redshift", says astrophysicist Radek Wojtak, Dark Cosmology Centre under the Niels Bohr Institute at the University of Copenhagen.
Galaxy clusters in the searchlight
Radek Wojtak, together with colleagues Steen Hansen and Jens Hjorth, has analysed measurements of light from galaxies in approximately 8,000 galaxy clusters. Galaxy clusters are accumulations of thousands of galaxies, held together by their own gravity. This gravity affects the light being sent out into space from the galaxies.
The researchers have studied the galaxies lying in the middle of the galaxy clusters and those lying on the periphery and measured the wavelengths of the light.
"We could measure small differences in the redshift of the galaxies and see that the light from galaxies in the middle of a cluster had to 'crawl' out through the gravitational field, while it was easier for the light from the outlying galaxies to emerge", explains Radek Wojtak.
Then he measured the entire galaxy cluster's total mass and with that got the gravitational potential. By using the general theory of relativity he could now calculate the gravitational redshift for the different locations of the galaxies.
"It turned out that the theoretical calculations of the gravitational redshift based on the general theory of relativity was in complete agreement with the astronomical observations. Our analysis of observations of galaxy clusters show that the redshift of the light is proportionally offset in relation to the gravitational influence from the galaxy cluster's gravity. In that way our observations confirm the theory of relativity", explains Radek Wojtak.
New light on the dark universe
The discovery has significance for the phenomena in the universe that researchers are working to unravel: the mysterious dark universe of dark matter and dark energy.
In addition to the visible celestial bodies like stars, planets and galaxies, the universe contains a large amount of matter that researchers can work out must be there, but that cannot be observed, as it neither emits nor reflects light. It is invisible and is therefore called dark matter. No one knows what dark matter is, but researchers know what its mass, and thus its gravity, must be. The new results for gravitational redshift do not change the researchers' modelling for the presence of dark matter.
Another of the main components of the universe is dark energy, which according to the theoretical models acts like a kind of vacuum that causes the expansion of the universe to accelerate. According to the calculations, which are based on Einstein's theory of relativity, dark energy constitutes 72 percent of the structure of the universe. Many alternative theories try to explain the accelerating expansion without the presence of dark energy.
Theory tested on a large scale
"Now the general theory of relativity has been tested on a cosmological scale and this confirms that the general theory of relativity works and that means that there is a strong indication for the presence of dark energy", explains Radek Wojtak.
The new gravitation results thus contribute a new piece of insight to the understanding of the hidden, dark universe and provide a greater understanding of the nature of the visible universe.
More information: Citation: Nature: 2011-05-06822B | http://phys.org/news/2011-09-galaxy-clusters-theory-relativity.html | 13 |
14 | Chromosomes are tightly coiled microscopic rod-like structures of DNA and protein that are found in the nuclei of eukaryotic cells. Each chromosome contains a single molecule of DNA. Each strand of the DNA double helix is a linear arrangement of repeating similar units called nucleotides, which are each composed of one sugar, one phosphate, and a nitrogenous base. A DNA nucleotide contains one of four different nitrogenous bases: adenine (A), thymine (T), cytosine (C), and guanine (G). The order of bases along a strand of DNA is what determines the genome sequence. See a DNA Structure diagram.
How big are human chromosomes?
Human chromosomes range in length from 51 million to 245 million base pairs. With few exceptions (e.g., red blood cells), each of the trillions of cells in the human body contains a complete set of chromosomes--the genome. If all the bases in the human genome were spread out 1 millimeter apart, they would extend from Memphis to Los Angeles.
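That comparison holds up to rough arithmetic; taking roughly 3 billion base pairs for one complete set of chromosomes (an approximate figure, not one stated on this page):

$$3\times10^{9}\ \text{bases} \times 1\ \text{mm} = 3\times10^{6}\ \text{m} = 3000\ \text{km} \approx 1900\ \text{miles},$$

which is on the order of the driving distance from Memphis to Los Angeles.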
How many chromosomes are in the human genome?
The nucleus of most human cells contains two sets of chromosomes, one set given by each parent. Each set has 23 single chromosomes--22 autosomes and an X or Y sex chromosome. A normal female will have a pair of X chromosomes; a male will have an X and Y pair.
Why are the chromosomes shown in this Web site striped?
Chromosomes can be seen under a light microscope and, when stained with certain dyes, reveal several features. Such features include length, position of a constricted region (centromere), and the particular patterns of light and dark bands that result from treatment with stains. Differences in size and banding pattern allow the 24 chromosomes to be distinguished from each other, an analysis called a karyotype.
A few types of major chromosomal abnormalities, including missing or extra copies of a chromosome or gross breaks and rejoinings (translocations), can be detected by microscopic examination; Down syndrome, in which an individual's cells contain a third copy of chromosome 21, is diagnosed by karyotype analysis.
Most changes in DNA, however, are too subtle to be detected by this technique and require molecular analysis. These subtle DNA abnormalities (mutations) are responsible for many inherited diseases such as cystic fibrosis and sickle cell anemia or may predispose an individual to cancer, major psychiatric illnesses, and other complex diseases.
What are genes? What role do they play in disease research?
Genes are chromosome pieces whose particular bases (e.g., ATTCGGA) determine how, when, and where the body makes each of the many thousands of different proteins required for life. Humans have an estimated 30,000 genes, with an average length of about 3,000 bases. Genes make up less than 2 percent of human DNA; the remaining DNA has important but still unknown functions that may include regulating genes and maintaining the chromosome structure.
Researchers hunt for disease-associated genes by looking for base changes found only in the DNA of affected individuals. Numerous disorders and traits mapped to particular chromosomes are displayed in this Web site. Some disorders, such as cystic fibrosis (chromosome 7) and sickle cell anemia (chromosome 11) are caused by base sequence changes in a single gene. Many common diseases such as diabetes, hypertension, deafness, and cancers have more complex causes that may be a combination of sequence variations in several genes on different chromosomes, in addition to environmental factors.
How will knowing the human genome sequence affect medicine?
Knowing the DNA sequence is important because it affects such attributes as appearance, response to particular medicines, and resistance to infections and toxins. It may even influence behavior. Some sequence variations also can cause or contribute to such disorders as those found on the chromosomes presented in this Web site.
These new data and powerful DNA analysis tools will usher in a new era of medicine that could allow doctors to detect disease at earlier stages, make more accurate diagnoses, and customize drugs and other medical treatments to fit an individual's own DNA sequence. The eventual understanding of a gene's normal functions and how the gene may change to cause or contribute to disease will lead to more focused and effective treatments with fewer side effects.
How many chromosomes are found in chickens? tomatoes? armadillos?
Go to Chromosome Number of Different Species Web site and use their search tool to find out the number of chromosomes in several different organisms.
For more FAQs about the Human Genome, see the Human Genome Project Information FAQs.
Last modified: September 12, 2003
The online presentation of this poster is a special feature of the U.S. Department of Energy (DOE) Human Genome Project Information Web site. The DOE Biological and Environmental Research program of the Office of Science funds this site. | http://www.ornl.gov/sci/techresources/Human_Genome/posters/chromosome/faqs.shtml | 13
13 | Indirect Measurement and Trigonometry
Learn how to use the concept of similarity to measure distance indirectly, using methods involving similar triangles, shadows, and transits. Apply basic right-angle trigonometry to learn about the relationships among steepness, angle of elevation, and height-to-distance ratio. Use trigonometric ratios to solve problems involving right triangles.
Classroom Case Studies, 3–5
Watch this program in the 10th session for grade 3–5 teachers. Explore how the concepts developed in this course can be applied through case studies of grade 3–5 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for grade 3–5 students.
Exploring Borderland-Unit 2
Chicana writer Gloria Anzaldúa tells us that the border is "una herida abierta [an open wound] where the lifeblood of two worlds is merging to form a third country — a border culture." This program explores the literature of the Chicano borderlands and its beginnings in the literature of Spanish colonization. Learning activities that go with this lesson can be found at: http://www.learner.org/amerpass/unit02/index.html
Puritan and Quaker Utopian Visions, 1620-1750-Unit 3
When British colonists landed in the Americas they created communities that they hoped would serve as a "light onto the nations." But what role would the native inhabitants play in this new model community? This Unit compares the answers of three important groups, the Puritans, Quakers, and Native Americans, and exposes the lasting influence they had upon American identity.
Contested Territories Unit 7
The United States acquired vast territories between the time of the Revolution and the Civil War, paying a price economically, socially, and politically. This unit examines the forces that drove such rapid expansion, the settlers moving into these regions, and the impact on the Native Americans already there. (This unit includes a facilitator guide, video, and online text chapter.)
Wendell Brooks is a teacher at the diverse Berkeley High School in Berkeley, California. Mr. Brooks' ninth-grade history class focuses on a variety of political ideologies present during the period of World War I. His class includes lively discussion on capitalism, communism, totalitarianism, and Nazism, as portrayed by leaders such as Hitler and Mussolini. Mr. Brooks incorporates a Socratic discussion into his lesson, as well as group activities and presentations.
How People Learn: Introduction to Learning Theory -Session 1
This program introduces the main themes of the course. Teacher interviews and classroom footage illustrate why learning theory is at the core of good classroom instruction and demonstrate the broad spectrum of theoretical knowledge available for use in the classroom.
New Literacies of the Internet Workshop 5
This workshop focuses on the evolving use of networked technology in education. Literacy expert Donald Leu discusses strategies that help students effectively read, write, and communicate on the Internet. Classroom examples illustrate strategies for using Internet resources in the classroom.
Nancy—Grade 8 Nancy wants her eighth-grade students to develop more autonomy and critical thinking skills.
Volcanoes provide clues about what is going on inside Earth. Animations illustrate volcanic processes and how plate boundaries are related to volcanism. The program also surveys the various types of eruptions, craters, cones and vents, lava domes, magma, and volcanic rock. The 1980 eruption of Mount St. Helens serves as one example.
Rock and Roll - A Brief History
This is an historical snapshot of rock and roll music a new style of music that became popular in 1951. The video explains how money influenced music and tells of its cross-racial appeal. Explores briefly the career of Elvis Presley. Video uses still images. (2:39)
How To Teach Number Comparisons - Math Game
Learn how to play and teach this number comparison game for practicing comparing numbers, with teaching tips from expert Courtney Hester, in this math games video clip.
Energy Flow in Communities
In Session 1, we saw that one characteristic of life was the need for a constant supply of matter and energy. Why is this? What’s the difference between the two? The next two sessions explore these questions. Session 7 focuses on energy and life, while Session 8 focuses on matter and life
Solving Proportions Using Cross Multiplication
The best way to solve proportions is through cross multiplication. In this video a math teacher explains how to reduce a proportion before cross multiplying, with examples on a white board. Text of the narrator's speech is shown at the bottom of the screen.
Lesson on fractions. In this clip Larry shows examples of comparing fractions. More lessons at: http://www.MathWithLarry.com
Plants and Seasons
Join several Journey North classrooms as they become engaged in the study of tulip bulbs, and track their growth from fall to spring. In this large experiment students across the Northern Hemisphere track the growth of the same plant
Advanced Bank Math Game in Montessori
This is a clip on learning to play a bank game about the composition of numbers, geared for preschoolers. The game activity is presented.
Microbes and Human Diseases
How microbes come into contact with humans, and the many factors leading to disease outbreaks around the globe, are examined here. Students learn about current efforts to track infectious diseases and the considerations necessary to control disease worldwide.
Workshop 1: Behind the Design
With Philip Sadler, Ed.D. Young children are natural designers and builders, but if their interest is not fostered, it may wane as they move through the grades. This workshop focuses on the use of simple design prototypes that children are asked to improve upon in order to meet a particular challenge. You will see these design challenges in action in middle school.
Workshop 2: Mathematics: A Community Focus
With Dr. Marta Civil. As teachers, we often make assumptions about the knowledge children are exposed to at home. Sometimes it seems that we focus on only reading and writing; Dr. Civil contends that we need to look more carefully at the mathematical potential of the home and that it is essential that schools learn to be more flexible and knowledgeable | http://www.nottingham.ac.uk/xpert/scoreresults.php?keywords=Understanding%20glob&start=5560&end=5580 | 13
10 | Ray Optics: Refraction and Lenses
Refraction and Lenses: Audio Guided Solution
The diagram at the right shows a series of transparent materials that form layers on top of each other and are surrounded by water (n=1.33). Layer 1 has an index of refraction of 1.91; layer 2 has an index of refraction of 1.52; layer 3 has an index of refraction of 1.36. A light ray in water approaches the boundary with layer 1 at 62.8 degrees.
a. Determine the angles of refraction for the light as it enters into each layer
b. Determine the angle of incidence of the light after it passes through each layer and strikes the boundary with the next layer.
c. Determine the angle of refraction for the light as it refracts out of layer 3 into the water.
Audio Guided Solution
a. angle of refraction at water - Layer 1 boundary: 38.3°
angle of refraction at Layer 1- Layer 2 boundary: 51.1°
angle of refraction at Layer 2- Layer 3 boundary: 60.4°
b. angle of incidence at Layer 1 - Layer 2 boundary: 38.3°
angle of incidence at Layer 2 - Layer 3 boundary: 51.1°
angle of incidence at Layer 3 - Water boundary: 60.4°
c. angle of refraction when refracting back into water: 62.8°
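The answers above can be checked with a few lines of Python that apply Snell's law (n1·sin θ1 = n2·sin θ2) at each boundary in turn; this is a verification sketch, not part of the original audio solution:

```python
import math

indices = [1.33, 1.91, 1.52, 1.36, 1.33]   # water, layer 1, layer 2, layer 3, back to water
theta = 62.8                                # angle of incidence in the water, in degrees

for n1, n2 in zip(indices, indices[1:]):
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
    theta = math.degrees(math.asin(n1 * math.sin(math.radians(theta)) / n2))
    print(f"refracted angle entering n = {n2}: {theta:.1f} degrees")

# Prints 38.3, 51.1, 60.4 and finally 62.8 degrees, matching the answers listed above.
```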
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities and record them in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., do = 24.8 cm; di = 16.7 cm; f = ???.
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
Read About It!
Get more information on the topic of Refraction and Lenses at The Physics Classroom Tutorial.
| http://www.physicsclassroom.com/calcpad/refrn/prob10.cfm | 13
94 | MSP:MiddleSchoolPortal/Ratios For All Occasions
Ratios for All Occasions - Introduction
In Ratios for All Occasions, we feature resources on the concept of the ratio as encountered in middle school: as rates in real-world problems, percents in relation to fractions, scale factors in building models, and comparisons of lengths in geometry. Most of these digital resources are activities that can serve as supplementary or motivational material.
A central theme in the middle school mathematics curriculum, proportional reasoning is based on making sense of ratios in a variety of contexts. The resources chosen for this unit provide practice in solving problems, often informally, in the format of games, hands-on modeling, mapmaking, and questions selected for their interest for students. As students work through the activities, they will exercise reasoning about basic proportions as well as further develop their knowledge of the relationship between fractions and percents.
The section titled Background Resources for Teachers contains links to workshop sessions, developed for teachers, on the mathematical content of the unit. Ratios in Children's Books identifies three picture books that entertain while they explore scale and proportion. In the final section, we look at the coverage of proportionality at the middle level in the NCTM Principles and Standards for School Mathematics.
Background Resources for Teachers
Ratios, whether simple comparisons or rates or percents or scale factors, are old friends of the middle school teacher. Every year you deal with them in your classroom. However, you may like to explore a particular topic, such as the golden mean or indirect measurement. These online workshop sessions, created for teachers, make use of applets and video to enable deeper investigation of a topic. You may find yourself fascinated enough with a topic to import the workshop idea directly into your classroom.
Rational Numbers and Proportional Reasoning How do ratios relate to our usual idea of fractions? In this session, part of a free online course for K-8 teachers, you can look at ways to interpret, model and work with rational numbers and to explore the basics of proportional reasoning. You can investigate these ideas through interactive applets, problem sets, and a video of teachers solving one of the problems. This session is part of the online course Learning Math: Number and Operations.
Fractions, Percents, and Ratios In this set of lessons created for K-8 teachers, you can examine graphical and geometric representations of these topics, as well as some of their applications in the physical world. A review of percents in terms of ratio and proportion is followed by an investigation of Fibonacci numbers and the golden mean. Why do we study the golden rectangle? In a video segment, an architect explains the place of the golden rectangle as an architectural element throughout history. This set of lessons is from Learning Math: Number and Operations.
Similarity Explore scale drawing, similar triangles, and trigonometry in terms of ratios and proportion in this series of lessons developed for teachers. Besides explanations and real-world problems, the unit includes video segments that show teachers investigating problems of similarity. To understand the ratios that underlie trigonometry, you can use an interactive activity provided online. This session is part of the course Learning Math: Geometry.
Indirect Measurement and Trigonometry For practical experience in the use of trigonometry, look at these examples of measuring impossible distances and inaccessible heights. These lessons show proportional reasoning in action! This unit is part of the online course Learning Math: Measurement.
Ratios as Fractions and Rates
It is at the middle school level that students move from understanding fractions to working with ratios and setting up proportions. You will find here problem-solving activities that you can use to introduce the concept of ratio as a rate that can be expressed as a fraction—miles per hour, drops per minute, for example. And you will find real-world problems that can be set up as proportions. Each activity was selected with student appeal in mind.
All About Ratios Designed to introduce the concept of ratio at the most basic level, this activity could open the idea to younger middle school students. Each multiple-choice problem shows sets of colorful elements and asks students to choose the one that matches the given ratio. The activity is from the collection titled Mathematics Lessons that are Fun! Fun! Fun!
Which Tastes Juicier? Students are challenged to decide which of four cans of grape juice concentrate requiring different amounts of water would have the strongest grape juice taste. A hint suggests forming ratios that are fractions to compare quantities. Two solutions are given, each fully illustrated with tables. Students are then offered further mixture-related questions.
Tern Turn: Are We There Yet? If you know an arctic tern's rate of flight and hours per day in flight, can you calculate how many days would be required to fly the 18,000-mile roundtrip from the Arctic Circle to Antarctica? A hint suggests that students first calculate how many miles the tern flies in one day. Similar questions follow, offering more opportunities to practice distance-rate-time problems.
Drip Drops: How Much Water Do You Waste? A leaky faucet is dripping at the rate of one drop every two seconds. Students are asked to decide if the water lost in one week would fill a drinking glass, a sink, or a bathtub. The only hint is that a teaspoon holds about 20 drops. The full solution demonstrates how to convert the drops to gallons using an equation or a table. Students then consider, "How much water is lost in one year by a single leaky faucet? By two million leaky faucets?"
How Far Can You Go on a Tank of Gas? Which car will go the farthest on a single tank of gas? Students are given the mileage and gasoline tank capacity of three models of automobiles and are encouraged to begin the problem by calculating how far each car could go in the city and on the highway. In follow-up problems, students compare the fuel efficiency of different sports cars and calculate how often a commuter would need to refuel.
Capture Recapture: How Many Fish in a Pond? A real application of the ideas of proportion! To estimate the number of fish in a pond, scientists tag a number of them and return them to the pond. The next day, they catch fish from the pond and count the number of tagged fish recaptured. From this, they can set up a proportion to make their estimation. Hints on getting started are given, if needed, and the solution explains the setup of the proportion.
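For a sense of the arithmetic involved (the numbers below are invented for illustration and are not taken from the activity): if 50 fish are tagged and a later catch of 40 fish contains 8 tagged ones, the proportion

$$\frac{\text{tagged in pond}}{\text{total in pond}} = \frac{\text{tagged in catch}}{\text{size of catch}}, \qquad \frac{50}{N} = \frac{8}{40} \;\Rightarrow\; N = \frac{50\times 40}{8} = 250$$

gives an estimate of about 250 fish in the pond.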
Neighborhood Math This site contains four activities in a neighborhood setting: Math at the Mall, Math in the Park or City, Wheel Figure This Out, and Gearing Up. Students calculate the amount of floor space occupied by various stores, find the height of objects, and take a mathematical look at bicycles. The third and fourth activities involve both geometry and ratios. Answers and explanations of the four activities are included.
Understanding Rational Numbers and Proportions To work well with ratios, learners need a solid basis in the idea of rational number. This complete lesson includes three well-developed activities that investigate fractions, proportion, and unit rates—all through real-world problems students encounter at a bakery.
Ratios as Percentages
In teaching ratio, percentage is where the rubber meets the road! Students need to understand the concept of percent thoroughly, which is the objective of the first five resources here. Students also need practice in converting from fractions to decimals to percents, and in finding percentages. The last four resources offer practice in various scenarios, generally through a game format.
Grid and Percent It This lesson begins with a basic visual used in many textbooks: a 10 × 10 grid as a model for demonstrating percent as "parts per hundred." It goes on to extend the model to solve various percentage problems. Especially valuable are the illustration of each problem and the thorough explanation that accompanies it. This is an exceptional lesson plan!
Percentages In this interactive activity, students can enter any two of these three numbers: the whole, the part, and the percentage. The missing number is not only calculated but the relationship among the three is illustrated as a colored section of both a circle and a rectangle. The exercise is an excellent help to understanding the meaning of percentage.
Majority Vote: What Percentage Does It Take to Win a Vote? This problem challenges students' understanding of percentage. Two solutions are available, plus hints for getting started. Clicking on "Try these" leads to different but similar problems on percentage. Questions under "Did you know?" include "Can you have a percentage over 100?" and "When can you add, subtract, multiply, or divide percentages?" These questions can lead to interesting math conversations.
Fraction Model III Using this applet, students create a fraction for which the denominator is 100 and then make the numerator any value they choose. A visual of the fraction is shown—either as a circle, a rectangle, or a model with the decimal and percent equivalents of the fraction. An excellent aid in understanding the basics of percentages!
Tight Weave: Geometry This is a fractal that can be used to give a visual of percentages. At each stage in the creation of the fractal, the middle one-ninth of each purple square area is transformed to gold. This gives progressively smaller similar patterns of gold and purple. At any stage of iteration, the percentage of gold is given. Interesting questions that your class might consider: At what stage will more than 50% of the area be gold? Or you could pick a stage, show it visually, and ask the students to estimate the percentage of the original purple square that has turned to gold.
Dice Table This activity shows the student the possible results of rolling two dice. It can become a game between several students who select various combinations of results, which appear on an interactive table. The players then figure the probability of winning the roll, giving the probabilities as fractions, decimals, and percentages. Good practice in converting from fractions to percents.
Fraction Four A game for two players, this activity requires students to convert from fractions to percents, find percentages of a number, and more. Links go to game ideas and a brief discussion of the connection between fractions and percentages, presented as a talk between a student and a mentor.
Snap Saloon In this interactive online game, students practice matching fractions with decimals and percentages. Three levels of difficulty are available. This is one of 12 games from The Maths File Game Show.
Ratios in Building Scale Models
This is the hands-on area of ratios! These activities are for students who like to get in there and get dirty—in other words, all middle schoolers. Here they can make models, maps, floor plans, and pyramids, or consider the length of the Statue of Liberty's nose. All the problems deal with the idea of scale, the application of a scale factor, and the central question: What changes when an object is enlarged or shrunk to scale?
Floor Plan Your Classroom: Make an Architectural Plan in 3 Steps This resource guides the learner step-by-step in creating a floor plan of a classroom. The directions include drawings of student work. The three parts of the activity are: sketching a map of the classroom, making a scale drawing from the sketch, and drafting a CAD (computer-aided design) floor plan from the drawing.
Statue of Liberty: Is the Statue of Liberty's Nose Too Long? The full question is: "The arm of the Statue of Liberty is 42 feet. How long is her nose?" To answer the question, students first find the ratio of their own arm length to nose length and then apply their findings to the statue's proportions. The solution sets out different approaches to the problem, including the mathematics involved in determining proportion. Extension problems deal with shrinking a T-shirt and the length-to-width ratios of cereal boxes.
Scaling Away For this one-period lesson, students bring to class either a cylinder or a rectangular prism, and their knowledge of how to find surface area and volume. They apply a scale factor to these dimensions and investigate how the scaled-up model has changed from the original. Activity sheets and overheads are included, as well as a complete step-by-step procedure and questions for class discussion.
This activity provides instructions for making a scale model of the solar system, including an interactive tool to calculate the distances between the planets. The student selects a measurement to represent the diameter of the Sun, and the other scaled measurements are automatically calculated. Students can experiment with various numbers for the Sun's diameter and see how the interplanetary distances adjust to the scale size.
Mathematics of Cartography: Mathematics Topics This web page looks at scale in relation to making maps. It discusses coordinate systems as well as the distortions created when projecting three-dimensional space onto two-dimensional paper. One activity here has students use an online site to create a map of their neighborhoods (to scale, of course!).
Size and Scale This is a challenging and thorough activity on the physics of size and scale. Again, the product is a scale model of the Earth-moon system, but the main objective is understanding the relative sizes of bodies in our solar system and the problem of making a scale model of the entire solar system. The site contains a complete lesson plan, including motivating questions for discussion and extension problems.
Scaling the Pyramids Students working on this activity will compare the Great Pyramid to such modern structures as the Statue of Liberty and the Eiffel Tower. The site contains all the information needed, including a template, to construct a scale model of the Great Pyramid. Heights of other tall structures are given. A beautifully illustrated site!
Ratios in Geometry
Geometry offers a challenging arena in which to wrestle with ideas of ratio. Except for the first resource, the work below is more appropriate for the upper end of middle school than for the younger students. All of the resources include activities that will involve your students in working with visual, geometric figures that they can draw or manipulate online. You will notice the absence of a favorite and most significant ratio: π. You will find several interesting resources on the circumference-to-diameter ratio in Going in Circles!
Constant Dimensions In this carefully developed lesson, students measure the length and width of a rectangle using standard units of measure as well as nonstandard units such as pennies, beads, and paper clips. When students mark their results on a length-versus-width graph, they find that the ratio of length to width of a rectangle is constant, regardless of the units. For many middle school students, the discovery is not only surprising but also opens up the whole meaning of ratio.
Parallel Lines and Ratio Three parallel lines are intersected by two straight lines. The classic problem is: If we know the ratio of the segments created by one of the straight lines, what can we know about the ratio of the segments along the other line? An applet allows students to clearly see the geometric reasoning involved. The activity is part of the Manipula Math site.
Figure and Ratio of Area A page shows two side-by-side grids, each with a blue rectangle inside. Students can change the height and width of these blue rectangles and then see how their ratios compare, not only of height and width but also, most important, of area. The exercise becomes most impressive visually when a tulip is placed inside the rectangles. As the rectangles' dimensions are changed, the tulips grow tall and widen or shrink and flatten. An excellent visual! The activity is part of the Manipula Math site.
Cylinders and Scale Activity Using a film canister as a pattern, students create a paper cylinder. They measure its height, circumference, and surface area, then scale up by doubling and even tripling the linear dimensions. They can track the effect on these measurements, on the area, and finally on the amount of sand that fits into each module (volume). The lesson is carefully described and includes handouts.
The Fibonacci Numbers and the Golden Section Here students can explore the properties of the Fibonacci numbers, find out where they occur in nature, and learn about the golden ratio. Illustrations, diagrams, and graphs are included.
The Golden Ratio Another site that introduces the golden ratio, this resource offers seven activities that guide students in constructing a golden rectangle and spiral. Although designed for ninth and tenth graders, the explorations are appropriate for middle school students as well.
Ratios in Children's Books
Middle schoolers may be surprised and pleased to find ratios treated as the subject of these three picture books. You can find the books in school or public libraries. They are also available from online booksellers.
Cut Down to Size at High Noon by Scott Sundby and illustrated by Wayne Geehan
This parody of classic western movies teaches scale and proportion. The story takes place in Cowlick, a town filled with people with intricate western-themed hairstyles that the town's one and only barber creates with the help of scale drawings. Enter a second barber, and the town does not seem big enough for both of them! The story reaches its high point of suspense when the two barbers face off with scissors at high noon. The duel ends in a draw of equally magnificent haircuts, one in the shape of a grasshopper and the other in the shape of a train engine, and the reader learns that scale drawings can be used to scale up as well as down.
If You Hopped Like a Frog by David M. Schwartz and illustrated by James Warhola
Imagine, with the help of ratio and proportion, what you could accomplish if you could hop like a frog or eat like a shrew. You would certainly be a shoo-in for the Guinness World Records. The book first shows what a person could do if he or she could hop proportionately as far as a frog or were proportionately as powerful as an ant. At the back of the book, the author explains each example and poses questions at the end of the explanations.
If the World Were a Village: A Book about the World's People by David J. Smith and illustrated by Shelagh Armstrong
How can you comprehend statistics about a world brimming with more than 6.2 billion people (the population in January 2002)? One answer to understanding large numbers is to create a scale where 100 people represent the total world population and change the other numbers proportionally. In a world of 100 people, how many people (approximately) would come from China? (21) From India? (17) From the United States? (5) In the same way, the book presents statistics about the different languages spoken in the world, age distributions, religions, air and water quality, and much more.
SMARTR: Virtual Learning Experiences for Students
Visit our student site SMARTR to find related virtual learning experiences for your students! The SMARTR learning experiences were designed both for and by middle-school-aged students. Students from around the country participated in every stage of SMARTR’s development, and each of the learning experiences includes multimedia content such as videos, simulations, games, and virtual activities. Visit the virtual learning experience on Ratios.
The FunWorks Visit the FunWorks STEM career website to learn more about a variety of math-related careers (click on the Math link at the bottom of the home page).
Within the NCTM Principles and Standards, the concept of ratio falls under the Number and Operations Standard. The document states that one curricular focus at this level is "the proposed emphasis on proportionality as an integrative theme in the middle-grades mathematics program. Facility with proportionality develops through work in many areas of the curriculum, including ratio and proportion, percent, similarity, scaling," and more. Another focus identified for middle school is rational numbers, including conceptual understanding, computation, and learning to "think flexibly about relationships among fractions, decimals, and percents" (NCTM, 2000, p. 212).
Characteristically, the document emphasizes the deep understandings that underlie the coursework. For example, to work proficiently with fractions, decimals, and percents, a solid concept of rational number is needed. Many students hold serious misconceptions about what a fraction is and how it relates to a decimal or a percent. They can develop a clearer, more intuitive understanding through "experiences with a variety of models" that "offer students concrete representations of abstract ideas" (pp. 215-216).
The online resources in this unit offer several models for hands-on encounters with ratio under some of its many guises: a rate, a scale factor, a percent, a comparison of geometric dimensions. We hope that your students will enjoy their encounters with ratios and deepen their understanding of this useful concept.
Author and Copyright
Terese Herrera taught math several years at middle and high school levels, then earned a Ph.D. in mathematics education. She is a resource specialist for the Middle School Portal 2: Math & Science Pathways project.
Please email any comments to [email protected].
Connect with colleagues at our social network for middle school math and science teachers at http://msteacher2.org.
Copyright June 2006 - The Ohio State University. This material is based upon work supported by the National Science Foundation under Grant No. 0424671 and since September 1, 2009 Grant No. 0840824. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Special relativity, an introduction
- This article is intended as a generally accessible introduction to the subject.
Special relativity is a fundamental physics theory about space and time that was developed by Albert Einstein in 1905 as a modification of Newtonian physics. It was created to deal with some pressing theoretical and experimental issues in the physics of the time involving light and electrodynamics. The predictions of special relativity correspond closely to those of Newtonian physics at speeds which are low in comparison to that of light, but diverge rapidly for speeds which are a significant fraction of the speed of light. Special relativity has been experimentally tested on numerous occasions since its inception, and its predictions have been verified by those tests.
Einstein postulated that the speed of light is the same for all observers, irrespective of their motion relative to the light source. This was in total contradiction to classical mechanics, which had been accepted for centuries. Einstein's approach was based on thought experiments and calculations. In 1908, Hermann Minkowski reformulated the theory based on different postulates of a more geometrical nature. His approach depended on the existence of certain interrelations between space and time, which were considered completely separate in classical physics. This reformulation set the stage for further developments of physics.
Special relativity makes numerous predictions that are incompatible with Newtonian physics (and everyday intuition). The first such prediction described by Einstein is called the relativity of simultaneity, under which observers who are in motion with respect to each other may disagree on whether two events occurred at the same time or one occurred before the other. The other major predictions of special relativity are time dilation (under which a moving clock ticks more slowly than when it is at rest with respect to the observer), length contraction (under which a moving rod may be found to be shorter than when it is at rest with respect to the observer), and the equivalence of mass and energy (written as E = mc²). Special relativity predicts a non-linear velocity addition formula, which prevents speeds greater than that of light from being observed. Special relativity also explains why Maxwell's equations of electromagnetism are correct in any frame of reference, and how an electric field and a magnetic field are two aspects of the same thing.
Special relativity has received experimental support in many ways, and it has been proven far more accurate than Newtonian mechanics. The most famous experimental support is the Michelson-Morley experiment, whose results (showing that the speed of light is a constant) were one factor that motivated the formulation of the theory of special relativity. Other significant tests are the Fizeau experiment (which was first done decades before special relativity was proposed), the detection of the transverse Doppler effect, and the Hafele-Keating experiment. Today, scientists are so comfortable with the idea that the speed of light is always the same that the meter is now defined as the distance traveled by light in 1/299,792,458th of a second. This means that the speed of light is now defined as exactly 299,792,458 m/s.
Reference frames and Galilean relativity: A classical prelude
A reference frame is simply a selection of what constitutes stationary objects. Once the velocity of a certain object is arbitrarily defined to be zero, the velocity of everything else in the universe can be measured relative to it. When a train is moving at a constant velocity past a platform, one may either say that the platform is at rest and the train is moving or that the train is at rest and the platform is moving past it. These two descriptions correspond to two different reference frames. They are respectively called the rest frame of the platform and the rest frame of the train (sometimes simply the platform frame and the train frame).
The question naturally arises, can different reference frames be physically differentiated? In other words, can one conduct some experiments to claim that "we are now in an absolutely stationary reference frame?" Aristotle thought that all objects tend to slow down and come to rest if there were no forces acting on them. Galileo challenged this idea and argued that the concept of absolute motion was unreal. All motion was relative. An observer who couldn't refer to some isolated object (if, say, he was imprisoned inside a closed spaceship) could never distinguish whether according to some external observer he was at rest or moving with constant velocity. Any experiment he could conduct would give the same result in both cases. However, accelerated reference frames are experimentally distinguishable. For example, if an astronaut moving in free space saw that the tea in his tea-cup was slanted rather than horizontal, he would be able to infer that his spaceship was accelerated. Thus not all reference frames are equivalent, but there is a class of reference frames, all moving at uniform velocity with respect to each other, in all of which Newton's first law holds. These are called the inertial reference frames and are fundamental to both classical mechanics and SR. Galilean relativity thus states that the laws of physics can not depend on absolute velocity; they must stay the same in any inertial reference frame. Galilean relativity is thus a fundamental principle in classical physics.
Mathematically, it says that if one transforms all velocities to a different reference frame, the laws of physics must be unchanged. What is this transformation that must be applied to the velocities? Galileo gave the common-sense "formula" for adding velocities: If
- Particle P is moving at velocity v with respect to reference frame A and
- Reference frame A is moving at velocity u with respect to reference frame B, then
- The velocity of P with respect to B is given by v + u.
The formula for transforming coordinates between different reference frames is called the Galilean transformation. The principle of Galilean relativity then demands that laws of physics be unchanged if the Galilean transformation is applied to them. Laws of classical mechanics, like Newton's second law, obey this principle because they have the same form after applying the transformation. As Newton's law involves the derivative of velocity, any constant velocity added in a Galilean transformation to a different reference frame contributes nothing (the derivative of a constant is zero). Addition of a time-varying velocity (corresponding to an accelerated reference frame) will however change the formula (see pseudo force), since Galilean relativity only applies to non-accelerated inertial reference frames.
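As a concrete sketch of what this transformation looks like (the standard textbook form, not spelled out in the text above): if reference frame A moves at constant velocity u along the x-axis of frame B, then a point with coordinates (x, t) in A has coordinates x_B = x + ut and t_B = t in B. Differentiating the first relation with respect to the shared time t immediately recovers the common-sense addition rule v_B = v + u quoted above, and a second differentiation shows that accelerations, and hence Newton's second law, are left unchanged.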
Time is the same in all reference frames because it is absolute in classical mechanics. All observers measure exactly the same intervals of time and there is such a thing as an absolutely correct clock.
Invariance of length: The Euclidean picture
In special relativity, space and time are joined into a unified four-dimensional continuum called spacetime. To gain a sense of what spacetime is like, we must first look at the Euclidean space of Newtonian physics.
This approach to the theory of special relativity begins with the concept of "length." In everyday experience, it seems that the length of objects remains the same no matter how they are rotated or moved from place to place; as a result the simple length of an object doesn't appear to change or is "invariant." However, as is shown in the illustrations below, what is actually being suggested is that length seems to be invariant in a three-dimensional coordinate system.
The length of a line in a two-dimensional Cartesian coordinate system is given by Pythagoras' theorem: s² = x² + y².
One of the basic theorems of vector algebra is that the length of a vector does not change when it is rotated. However, a closer inspection tells us that this is only true if we consider rotations confined to the plane. If we introduce rotation in the third dimension, then we can tilt the line out of the plane. In this case the projection of the line on the plane will get shorter. Does this mean length is not invariant? Obviously not. The world is three-dimensional and in a 3D Cartesian coordinate system the length is given by the three-dimensional version of Pythagoras's theorem: s² = x² + y² + z².
This is invariant under all rotations. The apparent violation of invariance of length only happened because we were "missing" a dimension. It seems that, provided all the directions in which an object can be tilted or arranged are represented within a coordinate system, the length of an object does not change under rotations. A 3-dimensional coordinate system is enough in classical mechanics because time is assumed absolute and independent of space in that context. It can be considered separately.
Note that invariance of length is not ordinarily considered a dynamical principle, or even a theorem. It is simply a statement about the fundamental nature of space itself. Space as we ordinarily conceive it is called a three-dimensional Euclidean space, because its geometrical structure is described by the principles of Euclidean geometry. The formula for distance between two points is a fundamental property of a Euclidean space; it is called the Euclidean metric tensor (or simply the Euclidean metric). In general, distance formulas are called metric tensors.
Note that rotations are fundamentally related to the concept of length. In fact, one may define length or distance to be that which stays the same (is invariant) under rotations, or define rotations to be that which keep the length invariant. Given any one, it is possible to find the other. If we know the distance formula, we can find out the formula for transforming coordinates in a rotation. If, on the other hand, we have the formula for rotations then we can find out the distance formula.
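As a quick numerical illustration of this relationship between rotations and length (a sketch for the reader, not part of the original article), one can rotate a point in the plane and confirm that x² + y² does not change:

import math

def rotate(x, y, angle):
    # Standard 2D rotation by 'angle' radians about the origin.
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

x, y = 3.0, 4.0
xr, yr = rotate(x, y, 0.7)           # any angle works
print(x**2 + y**2, xr**2 + yr**2)    # both are 25.0, up to rounding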
The postulates of Special Relativity
Einstein developed Special Relativity on the basis of two postulates:
- First postulate—Special principle of relativity—The laws of physics are the same in all inertial frames of reference. In other words, there are no privileged inertial frames of reference.
- Second postulate—Invariance of c—The speed of light in a vacuum is independent of the motion of the light source.
Special Relativity can be derived from these postulates, as was done by Einstein in 1905. Einstein's postulates are still applicable in the modern theory but the origin of the postulates is more explicit. It is shown below how the existence of a universally constant velocity (the speed of light) is a consequence of modeling the universe as a particular four-dimensional space having certain specific properties. The principle of relativity is a result of Minkowski structure being preserved under Lorentz transformations, which are postulated to be the physical transformations of inertial reference frames.
The Minkowski formulation: Introduction of spacetime
After Einstein derived special relativity formally from the counterintuitive proposition that the speed of light is the same to all observers, the need was felt for a more satisfactory formulation. Minkowski, building on mathematical approaches used in non-Euclidean geometry and the mathematical work of Lorentz and Poincaré, realized that a geometric approach was the key. Minkowski showed in 1908 that Einstein's new theory could be explained in a natural way if the concept of separate space and time is replaced with one four-dimensional continuum called spacetime. This was a groundbreaking concept, and Roger Penrose has said that relativity was not truly complete until Minkowski reformulated Einstein's work.
The concept of a four-dimensional space is hard to visualize. It may help at the beginning to think simply in terms of coordinates. In three-dimensional space, one needs three real numbers to refer to a point. In the Minkowski space, one needs four real numbers (three space coordinates and one time coordinate) to refer to a point at a particular instant of time. This point at a particular instant of time, specified by the four coordinates, is called an event. The distance between two different events is called the spacetime interval.
A path through the four-dimensional spacetime, usually called Minkowski space, is called a world line. Since it specifies both position and time, a particle having a known world line has a completely determined trajectory and velocity. This is just like graphing the displacement of a particle moving in a straight line against the time elapsed. The curve contains the complete motional information of the particle.
In the same way as the measurement of distance in 3D space needed all three coordinates we must include time as well as the three space coordinates when calculating the distance in Minkowski space (henceforth called M). In a sense, the spacetime interval provides a combined estimate of how far two events occur in space as well as the time that elapses between their occurrence.
But there is a problem. Time is related to the space coordinates, but they are not equivalent. Pythagoras's theorem treats all coordinates on an equal footing (see Euclidean space for more details). We can exchange two space coordinates without changing the length, but we can not simply exchange a space coordinate with time; they are fundamentally different. It is an entirely different thing for two events to be separated in space and to be separated in time. Minkowski proposed that the formula for distance needed a change. He found that the correct formula was actually quite simple, differing only by a sign from Pythagoras's theorem: s² = x² + y² + z² − (ct)²,
where c is a constant and t is the time coordinate. Multiplication by c, which has the dimensions of a speed (m·s⁻¹), converts the time to units of length, and this constant has the same value as the speed of light. So the spacetime interval between two distinct events is given by s² = Δx² + Δy² + Δz² − (cΔt)², where the Δ's denote the differences between the two events' coordinates.
There are two major points to be noted. Firstly, time is being measured in the same units as length by multiplying it by a constant conversion factor. Secondly, and more importantly, the time-coordinate has a different sign than the space coordinates. This means that in the four-dimensional spacetime, one coordinate is different from the others and influences the distance differently. This new 'distance' may be zero or even negative. This new distance formula, called the metric tensor of M (or simply the metric of spacetime), is at the heart of relativity. This minus sign means that a lot of our intuition about distances can not be directly carried over into spacetime intervals. For example, the spacetime interval between two events separated both in time and space may be zero (see below). From now on, the terms distance formula and metric tensor will be used interchangeably, as will be the terms Minkowski metric and spacetime interval.
In Minkowski spacetime the spacetime interval is the invariant length, the ordinary 3D length is not required to be invariant. The spacetime interval must stay the same under rotations, but ordinary lengths can change. Just like before, we were missing a dimension. Note that everything this far are merely definitions. We define a four-dimensional mathematical construct which has a special formula for distance, where distance means that which stays the same under rotations (alternatively, one may define a rotation to be that which keeps the distance unchanged).
Now comes the physical part. Rotations in Minkowski space have a different interpretation than ordinary rotations. These rotations correspond to transformations of reference frames. Passing from one reference frame to another corresponds to rotating the Minkowski space. An intuitive justification for this is given below, but mathematically this is a dynamical postulate just like assuming that physical laws must stay the same under Galilean transformations (which seems so intuitive that we don't usually recognize it to be a postulate).
Since by definition rotations must keep the distance the same, passing to a different reference frame must keep the spacetime interval between two events unchanged. This requirement can be used to derive an explicit mathematical form for the transformation that must be applied to the laws of physics (compare with the application of Galilean transformations to classical laws) when shifting reference frames. These transformations are called the Lorentz transformations. Just as the Galilean transformations are the mathematical statement of the principle of Galilean relativity in classical mechanics, the Lorentz transformations are the mathematical form of Einstein's principle of relativity. Laws of physics must stay the same under Lorentz transformations. Maxwell's equations and Dirac's equation satisfy this property, and hence, they are relativistically correct laws (but classically incorrect, since they don't transform correctly under Galilean transformations).
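A small numerical sanity check may make this concrete (a sketch using the standard one-dimensional Lorentz boost, not code from the article): boosting an event (t, x) into a frame moving at speed v leaves the quantity x² − (ct)² unchanged.

import math

c = 299_792_458.0  # speed of light in m/s

def boost(t, x, v):
    # One-dimensional Lorentz boost to a frame moving at velocity v along x.
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

t, x = 2.0, 1.0e8                 # an arbitrary event (seconds, metres)
tp, xp = boost(t, x, 0.6 * c)
print(x**2 - (c * t)**2)          # spacetime interval in the original frame
print(xp**2 - (c * tp)**2)        # the same value, up to rounding, in the boosted frame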
With the statement of the Minkowski metric, the common name for the distance formula given above, the theoretical foundation of special relativity is complete. The entire basis for special relativity can be summed up by the geometric statement "changes of reference frame correspond to rotations in the 4D Minkowski spacetime, which is defined to have the distance formula given above." The unique dynamical predictions of SR stem from this geometrical property of spacetime. Special relativity may be said to be the physics of Minkowski spacetime. In the case of four-dimensional spacetime, there are six independent rotations to be considered. Three of them are the familiar rotations in a plane spanned by two space directions. The other three are rotations in a plane of one space direction and time: these rotations correspond to a change of velocity, and are described by the traditional Lorentz transformations.
As has been mentioned before, one can replace distance formulas with rotation formulas. Instead of starting with the invariance of the Minkowski metric as the fundamental property of spacetime, one may state (as was done in classical physics with Galilean relativity) the mathematical form of the Lorentz transformations and require that physical laws be invariant under these transformations. This makes no reference to the geometry of spacetime, but will produce the same result. This was in fact the traditional approach to SR, used originally by Einstein himself. However, this approach is often considered to offer less insight and be more cumbersome than the more natural Minkowski formalism.
Reference frames and Lorentz transformations: Relativity revisited
We have already discussed that in classical mechanics coordinate frame changes correspond to Galilean transfomations of the coordinates. Is this adequate in the relativistic Minkowski picture?
Suppose there are two people, Bill and John, on separate planets that are moving away from each other. Bill and John are on separate planets so they both think that they are stationary. John draws a graph of Bill's motion through space and time and this is shown in the illustration below:
John sees that Bill is moving through space as well as time but Bill thinks he is moving through time alone. Bill would draw the same conclusion about John's motion. In fact, these two views, which would be classically considered a difference in reference frames, are related simply by a coordinate transformation in M. Bill's view of his own world line and John's view of Bill's world line are related to each other simply by a rotation of coordinates. One can be transformed into the other by a rotation of the time axis. Minkowski geometry handles transformations of reference frames in a very natural way.
Changes in reference frame, represented by velocity transformations in classical mechanics, are represented by rotations in Minkowski space. These rotations are called Lorentz transformations. They are different from the Galilean transformations because of the unique form of the Minkowski metric. The Lorentz transformations are the relativistic equivalent of Galilean transformations. Laws of physics, in order to be relativistically correct, must stay the same under Lorentz transformations. The physical statement that they must be same in all inertial reference frames remains unchanged, but the mathematical transformation between different reference frames changes. Newton's laws of motion are invariant under Galilean rather than Lorentz transformations, so they are immediately recognizable as non-relativistic laws and must be discarded in relativistic physics. Schrödinger's equation is also non-relativistic.
Maxwell's equations are trickier. They are written using vectors and at first glance appear to transform correctly under Galilean transformations. But on closer inspection, several questions are apparent that can not be satisfactorily resolved within classical mechanics (see History of special relativity). They are indeed invariant under Lorentz transformations and are relativistic, even though they were formulated before the discovery of special relativity. Classical electrodynamics can be said to be the first relativistic theory in physics. To make the relativistic character of equations apparent, they are written using 4-component vector like quantities called 4-vectors. 4-Vectors transform correctly under Lorentz transformations. Equations written using 4-vectors are automatically relativistic. This is called the manifestly covariant form of equations. 4-Vectors form a very important part of the formalism of special relativity.
Einstein's postulate: The constancy of the speed of light
Einstein's postulate that the speed of light is a constant comes out as a natural consequence of the Minkowski formulation, as the following two propositions show:
- When an object is traveling at c in a certain reference frame, the spacetime interval is zero.
- The spacetime interval between the origin-event (0, 0, 0, 0) and an event (x, y, z, t) is s² = x² + y² + z² − (ct)².
- The distance travelled by an object moving at velocity v for t seconds is: √(x² + y² + z²) = vt.
- Since the velocity v equals c we have x² + y² + z² = (ct)².
- Hence the spacetime interval between the events of departure and arrival is given by s² = (ct)² − (ct)² = 0.
- An object traveling at c in one reference frame is traveling at c in all reference frames.
- Let the object move with velocity v when observed from a different reference frame. A change in reference frame corresponds to a rotation in M. Since the spacetime interval must be conserved under rotation, the spacetime interval must be the same in all reference frames. In proposition 1 we showed it to be zero in one reference frame, hence it must be zero in all other reference frames. We get that (vt)² − (ct)² = 0,
- which implies v² = c², that is, the object is observed to travel at the speed of light in this frame as well.
The paths of light rays have a zero spacetime interval, and hence all observers will obtain the same value for the speed of light. Therefore, when assuming that the universe has four dimensions that are related by Minkowski's formula, the speed of light appears as a constant, and does not need to be assumed (postulated) to be constant as in Einstein's original approach to special relativity.
Clock delays and rod contractions: More on Lorentz transformations
Another consequence of the invariance of the spacetime interval is that clocks will appear to go slower on objects that are moving relative to you. This is very similar to how the 2D projection of a line rotated into the third dimension appears to get shorter. The projected length is not conserved simply because we are ignoring one of the dimensions. Let us return to the example of John and Bill.
John observes the length of Bill's spacetime interval as: s² = (vt)² − (ct)²,
whereas Bill doesn't think he has traveled in space, so writes: s² = −(cT)², where T is the time recorded on his own clock (his proper time).
The spacetime interval, s², is invariant. It has the same value for all observers, no matter who measures it or how they are moving in a straight line. This means that Bill's spacetime interval equals John's observation of Bill's spacetime interval, so: −(cT)² = (vt)² − (ct)², which rearranges to T = t√(1 − v²/c²).
So, if John sees a clock that is at rest in Bill's frame record one second, John will find that his own clock measures between these same ticks an interval t, called coordinate time, which is greater than one second. It is said that clocks in motion slow down, relative to those of observers at rest. This is known as "relativistic time dilation of a moving clock." The time that is measured in the rest frame of the clock (in Bill's frame) is called the proper time of the clock.
In special relativity, therefore, changes in reference frame affect time also. Time is no longer absolute. There is no universally correct clock, time runs at different rates for different observers.
Similarly it can be shown that John will also observe measuring rods at rest on Bill's planet to be shorter in the direction of motion than his own measuring rods. This is a prediction known as "relativistic length contraction of a moving rod." If the length of a rod at rest on Bill's planet is X, then we call this quantity the proper length of the rod. The length x of that same rod as measured on John's planet is called coordinate length, and given by x = X√(1 − v²/c²).
These two equations can be combined to obtain the general form of the Lorentz transformation in one spatial dimension: x′ = γ(x − vt) and t′ = γ(t − vx/c²),
where the Lorentz factor is given by γ = 1/√(1 − v²/c²).
The above formulas for clock delays and length contractions are special cases of the general transformation.
Alternatively, these equations for time dilation and length contraction (here obtained from the invariance of the spacetime interval), can be obtained directly from the Lorentz transformation by setting X = 0 for time dilation, meaning that the clock is at rest in Bill's frame, or by setting t = 0 for length contraction, meaning that John must measure the distances to the end points of the moving rod at the same time.
A consequence of the Lorentz transformations is the modified velocity-addition formula: for two velocities u and v along the same line, the combined velocity is (u + v)/(1 + uv/c²), which never exceeds c.
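A brief numerical sketch of these three effects (illustrative numbers only, not taken from the article):

import math

def gamma(v, c=1.0):
    # Lorentz factor; working in units where c = 1 keeps the numbers simple.
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.8                       # speed as a fraction of c
print(gamma(v))               # 1.666..., so 1 s of proper time appears as ~1.67 s
print(1.0 / gamma(v))         # 0.6, so a 1 m rod appears 0.6 m long
print((v + v) / (1 + v * v))  # 0.9756..., adding 0.8c to 0.8c stays below c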
Simultaneity and clock desynchronization
Rather counter-intuitively, special relativity suggests that when 'at rest' we are actually moving through time at the speed of light. As we speed up in space we slow down in time. At the speed of light in space, time slows down to zero. This is a rotation of the time axis into the space axis. We observe an object speeding by at relativistic speed as having its time axis tilted away from a right angle.
The consequence of this in Minkowski's spacetime is that clocks will appear to be out of phase with each other along the length of a moving object. This means that if one observer sets up a line of clocks that are all synchronized so they all read the same time, then another observer who is moving along the line at high speed will see the clocks all reading different times. This means that observers who are moving relative to each other see different events as simultaneous. This effect is known as "Relativistic Phase" or the "Relativity of Simultaneity." Relativistic phase is often overlooked by students of special relativity, but if it is understood, then phenomena such as the twin paradox are easier to understand.
Observers have a set of simultaneous events around them that they regard as composing the present instant. The relativity of simultaneity results in observers who are moving relative to each other having different sets of events in their present instant.
The net effect of the four-dimensional universe is that observers who are in motion relative to you seem to have time coordinates that lean over in the direction of motion, and consider things to be simultaneous that are not simultaneous for you. Spatial lengths in the direction of travel are shortened, because they tip upwards and downwards, relative to the time axis in the direction of travel, akin to a rotation out of three-dimensional space.
Great care is needed when interpreting spacetime diagrams. Diagrams present data in two dimensions, and cannot show faithfully how, for instance, a zero length spacetime interval appears.
Mass-velocity relationship
E = mc², where m stands for rest mass (invariant mass), applies most simply to single particles with no net momentum. But it also applies to ordinary objects composed of many particles so long as the particles are moving in different directions so the total momentum is zero. The mass of the object includes contributions from heat and sound, chemical binding energies and trapped radiation. Familiar examples are a tank of gas, or a hot bowl of soup. The kinetic energy of their particles, the heat motion and radiation, contribute to their weight on a scale according to E = mc².
The formula is the special case of the relativistic energy-momentum relationship: E² = (pc)² + (mc²)², where p is the momentum and m the rest mass.
This equation gives the rest mass of a system which has an arbitrary amount of momentum and energy. The interpretation of this equation is that the rest mass is the relativistic length of the energy-momentum four-vector.
If the equation E = mc² is used with the rest mass of the object, the E given by the equation will be the rest energy of the object; it will change according to the object's internal energy (heat, sound, and chemical binding energies), but will not change with the object's overall motion.
If the equation E = mc² is used with the relativistic mass of the object, the energy will be the total energy of the object, which is conserved in collisions with other fast-moving objects.
In developing special relativity, Einstein found that the total energy of a moving body is E = m₀c²/√(1 − v²/c²),
with v the velocity.
For small velocities, this reduces to E ≈ m₀c² + ½m₀v²,
which includes the Newtonian kinetic energy, as expected, but also an enormous constant term, m₀c², which does not vanish when the object is not moving.
The total momentum is: p = m₀v/√(1 − v²/c²).
The ratio of the momentum to the velocity is the relativistic mass, and this ratio is equal to the total energy divided by c². The energy and relativistic mass are therefore always related by the famous formula E = m_rel c².
While this is suggestive, it does not immediately imply that the energy and mass are equivalent because the energy can always be redefined by adding or subtracting a constant. So it is possible to subtract the m₀c² from the expression for E and this is also a valid conserved quantity, although an ugly one. Einstein needed to know whether the rest-mass of the object is really an energy, or whether the constant term was just a mathematical convenience with no physical meaning.
In order to see if the m₀c² term is physically significant, Einstein considered processes of emission and absorption. He needed to establish that an object loses mass when it emits energy. He did this by analyzing two-photon emission in two different frames.
After Einstein first made his proposal, it became clear that the word mass can have two different meanings. The rest mass is what Einstein called m, but others defined the relativistic mass as: m_rel = m₀/√(1 − v²/c²).
This mass is the ratio of momentum to velocity, and it is also the relativistic energy divided by c². So the equation E = m_rel c² holds for moving objects. When the velocity is small, the relativistic mass and the rest mass are almost exactly the same.
E = mc² either means E = m₀c² for an object at rest, or E = m_rel c² when the object is moving.
Einstein's original papers treated m as what would now be called the rest mass and some claim that he did not like the idea of "relativistic mass." When modern physicists say "mass," they are usually talking about rest mass, since if they meant "relativistic mass," they would just say "energy."
We can rewrite the expression for the energy as a Taylor series: E = m₀c² [1 + (1/2)(v²/c²) + (3/8)(v⁴/c⁴) + ...].
For speeds much smaller than the speed of light, the higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds we can ignore all but the first two terms: E ≈ m₀c² + ½m₀v².
The classical energy equation ignores both the m₀c² part, and the high-speed corrections. This is appropriate, because all the high-order corrections are small. Since only changes in energy affect the behavior of objects, whether we include the m₀c² part makes no difference, since it is constant. For the same reason, it is possible to subtract the rest energy from the total energy in relativity. In order to see if the rest energy has any physical meaning, it is essential to consider emission and absorption of energy in different frames.
The higher-order terms are extra correction to Newtonian mechanics which become important at higher speeds. The Newtonian equation is only a low speed approximation, but an extraordinarily good one. All of the calculations used in putting astronauts on the moon, for example, could have been done using Newton's equations without any of the higher order corrections.
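As a rough illustration of just how small the corrections are (the numbers here are chosen for the example, not taken from the text): for a spacecraft travelling at 11 km/s, v/c is about 3.7 × 10⁻⁵, so the leading relativistic correction to the Newtonian kinetic energy, of order (3/4)(v/c)², amounts to roughly one part in a billion.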
Mass-energy equivalence: Sunlight and atom bombs
Einstein showed that mass is simply another form of energy. The energy equivalent of rest mass m is E = mc². This equivalence implies that mass should be interconvertible with other forms of energy. This is the basic principle behind atom bombs and production of energy in nuclear reactors and stars (like the Sun).
The standard model of the structure of matter has it that most of the 'mass' of the atom is in the atomic nucleus, and that most of this nuclear mass is in the intense field of light-like gluons swathing the quarks. Most of what is called the mass of an object is thus already in the form of energy, the energy of the quantum color field that confines the quarks.
The sun, for instance, fuels its prodigious output of energy by converting each second about 600 billion kilograms of hydrogen-1 (single protons) into roughly 595.8 billion kilograms of helium-4 (2 protons combined with 2 neutrons); the 4.2 billion kilogram difference is the energy which the sun radiates into space each second. The sun, it is estimated, will continue to turn 4.2 billion kilos of mass into energy for the next 5 billion years or so before leaving the main sequence.
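As a rough consistency check (using the approximate value c ≈ 3 × 10⁸ m/s): E = Δm·c² ≈ 4.2 × 10⁹ kg × (3 × 10⁸ m/s)² ≈ 3.8 × 10²⁶ joules released each second, that is, a power output of roughly 3.8 × 10²⁶ watts, which indeed matches the measured luminosity of the sun.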
The atomic bombs that ended the Second World War, in comparison, converted about a thirtieth of an ounce of mass into energy.
The energy involved in chemical reactions is so small, however, that the conservation of mass is an excellent approximation.
General relativity: A peek forward
Unlike Newton's laws of motion, relativity is not based upon dynamical postulates. It does not assume anything about motion or forces. Rather, it deals with the fundamental nature of spacetime. It is concerned with describing the geometry of the backdrop on which all dynamical phenomena take place. In a sense therefore, it is a meta-theory, a theory that lays out a structure that all other theories must follow. In truth, Special relativity is only a special case. It assumes that spacetime is flat. That is, it assumes that the structure of Minkowski space and the Minkowski metric tensor are constant throughout. In General relativity, Einstein showed that this is not true. The structure of spacetime is modified by the presence of matter. Specifically, the distance formula given above is no longer generally valid except in space free from mass. However, just as a curved surface can be considered flat in the infinitesimal limit of calculus, a curved spacetime can be considered flat at a small scale. This means that the Minkowski metric written in the differential form is generally valid.
One says that the Minkowski metric is valid locally, but it fails to give a measure of distance over extended distances. It is not valid globally. In fact, in general relativity the global metric itself becomes dependent on the mass distribution and varies through space. The central problem of general relativity is to solve the famous Einstein field equations for a given mass distribution and find the distance formula that applies in that particular case. Minkowski's spacetime formulation was the conceptual stepping stone to general relativity. His fundamentally new outlook allowed not only the development of general relativity, but also to some extent quantum field theories.
- ↑ Einstein, Albert, On the Electrodynamics of Moving Bodies, Annalen der Physik 17: 891-921. Retrieved December 18, 2007.
- ↑ Hermann Minkowski, Raum und Zeit, 80. Versammlung Deutscher Naturforscher, Physikalische Zeitschrift 10: 104-111.
- ↑ UCR, What is the experimental basis of Special Relativity? Retrieved December 22, 2007.
- ↑ Core Power, What is the experimental basis of the Special Relativity Theory? Retrieved December 22, 2007.
- ↑ S. Walter and J. Gray (eds.), "The non-Euclidean style of Minkowskian relativity." The Symbolic Universe (Oxford, UK: Oxford University Press, 1999, ISBN 0198500882).
- ↑ 6.0 6.1 Albert Einstein, R.W. Lawson (trans.), Relativity. The Special and General Theory (London, UK: Routledge classics, 2003).
- ↑ Richard Feynman, Six Not-So-Easy Pieces (Reading, MA: Addison-Wesley Pub, ISBN 0201150255).
- ↑ Hermann Weyl, Space, Time, Matter (New York, NY: Dover Books, 1952).
- ↑ Kip Thorne, and Roger Blandford, Caltec physics notes, Caltech. Retrieved December 18, 2007.
- ↑ FourmiLab, Special Relativity. Retrieved December 19, 2007.
- ↑ UCR, usenet physics FAQ. Retrieved December 19, 2007.
- Bais, Sander. 2007. Very Special Relativity: An Illustrated Guide. Cambridge, MA: Harvard University Press. ISBN 067402611X.
- Robinson, F.N.H. 1996. An Introduction to Special Relativity and Its Applications. River Edge, NJ: World Scientific Publishing Company. ISBN 9810224990.
- Stephani, Hans. 2004. Relativity: An Introduction to Special and General Relativity. Cambridge, UK: Cambridge University Press. ISBN 0521010691.
Special relativity for a general audience (no math knowledge required)
- Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics. Retrieved December 18, 2007.
- Einstein Online Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics. Retrieved December 18, 2007.
Special relativity explained (using simple or more advanced math)
- Albert Einstein. Relativity: The Special and General Theory. New York: Henry Holt 1920. BARTLEBY.COM, 2000. Retrieved December 18, 2007.
- Usenet Physics FAQ. Retrieved December 18, 2007.
- Sean Carroll's online Lecture Notes on General Relativity. Retrieved December 18, 2007.
- Hyperphysics Time Dilation. Retrieved December 18, 2007.
- Hyperphysics Length Contraction. Retrieved December 18, 2007.
- Greg Egan's Foundations. Retrieved December 18, 2007.
- Special Relativity Simulation. Retrieved December 18, 2007.
- Caltech Relativity Tutorial A basic introduction to concepts of Special and General Relativity, requiring only a knowledge of basic geometry. Retrieved December 18, 2007.
- Relativity Calculator - Learn Special Relativity Mathematics Mathematics of special relativity presented in as simple and comprehensive manner possible within philosophical and historical contexts. Retrieved December 18, 2007.
- Special relativity made stupid. Retrieved December 18, 2007.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here.
Note: Some restrictions may apply to use of individual images which are separately licensed.
The Cassini-Huygens spacecraft is one of the largest, heaviest and most complex interplanetary spacecraft ever built. Of all interplanetary spacecraft, only the two Phobos spacecraft sent to Mars by the former Soviet Union were heavier.
Loaded with an array of powerful instruments and cameras, the spacecraft is capable of taking accurate measurements and detailed images in a variety of atmospheric conditions and light spectra.
The spacecraft comprises two elements: the Cassini orbiter and the Huygens probe. After arrival at Saturn, the spacecraft will orbit the Saturnian system for four years, sending data back to Earth that will help us understand this region.
Cassini-Huygens is equipped for 27 diverse science investigations. The Cassini orbiter has 12 instruments and the Huygens probe has six. The instruments often have multiple functions, equipped to thoroughly investigate all the important elements of the Saturnian system.
Cassini was the first planetary spacecraft to use solid-state recorders without moving parts instead of the older tape recorder.
The Cassini-Huygens spacecraft communicates with Earth through its antenna subsystem, consisting of one high-gain antenna and two low-gain antennas.
The primary function of the high-gain antenna is to support communication with Earth, but it is also used for scientific experiments. During the early portion of the long journey to Saturn, the high-gain antenna was positioned toward the Sun, functioning like an umbrella to shield the spacecraft’s instruments from the harmful rays of the Sun.
The spacecraft would communicate through one of its low-gain antennas only in the event of a power failure or other such emergency situation.
The Cassini spacecraft stands more than 6.7 metres high and is more than 4 metres wide. The magnetometer instrument is mounted on an 11-metre boom that extends outward from the spacecraft.
The orbiter alone weighs 2125 kilograms. Total mass of the Huygens probe is 349 kilograms, including payload (49 kilograms) and probe support equipment on the orbiter (30 kilograms).
The launch mass of Cassini-Huygens was 5.82 tonnes, of which 3.1 tonnes were propellant.
Three Radioisotope Thermoelectric Generators (RTGs) provide power for the spacecraft, including the instruments, computers, radio transmitters, attitude thrusters and reaction wheels.
In astronomy, galactic latitude and longitude are coordinates useful for describing the relative positions and motions of components of the Milky Way Galaxy. Galactic latitude is measured in degrees north or south of the plane of the Milky Way. Galactic longitude is measured in degrees east of the galactic centre in the constellation Sagittarius.
Investigation of geometric objects using coordinate systems. Because René Descartes was the first to apply algebra to geometry, it is also known as Cartesian geometry. It springs from the idea that any point in two-dimensional space can be represented by two numbers and any point in three-dimensional space by three. Because lines, circles, spheres, and other figures can be thought of as collections of points in space that satisfy certain equations, they can be explored via equations and formulas rather than graphs. Most of analytic geometry deals with the conic sections. Because these are defined using the notion of fixed distance, each section can be represented by a general equation derived from the distance formula.
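As a brief illustration of that last point (standard formulas, not taken from this summary): the distance between two points (x₁, y₁) and (x₂, y₂) is d = √((x₂ − x₁)² + (y₂ − y₁)²), so a circle of radius r centred at the origin is exactly the set of points satisfying x² + y² = r²; the other conic sections have analogous defining equations.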
The typical CMM is composed of three axes: X, Y and Z. These axes are orthogonal to each other in a typical three-dimensional coordinate system. Each axis has a very accurate scale system that indicates the location of that axis. All three axes are displayed on a digital readout. The probe is used to touch different spots on the part being measured. The machine then uses the X, Y, Z coordinates of each of these points to determine size and position. There are newer models that have probes that drag along the surface of the part taking points at specified intervals. This method of CMM inspection is more accurate than the conventional touch-probe method and often faster as well. The next generation of scanning, known as laser scanning, is advancing very quickly. This method uses laser beams that are projected against the surface of the part. Many thousands of points can then be taken and used to not only check size and position, but to create a 3D image of the part as well. This "point-cloud data" can then be transferred to CAD software to create a working 3D model of the part. The laser scanner is often used to facilitate the "reverse engineering" process. This is the process of taking an existing part, measuring it to determine its size, and creating engineering drawings from these measurements. This is most often necessary in cases where engineering drawings may no longer exist or are unavailable for the particular part that needs replacement.
A coordinate measuring machine (CMM) is a device used in manufacturing and assembly processes to test a part or assembly against the design intent. By precisely recording the X, Y, and Z coordinates of the target, points are generated which can then be analyzed via regression algorithms for the construction of features. These points are collected by using a probe that is positioned manually by an operator or automatically via Direct Computer Control (DCC).
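As a rough illustration of the regression idea (a hypothetical sketch with made-up probe data and variable names, not code from any particular CMM package), a circle feature can be fit to probed XY points by linear least squares:

% Fit a circle (a common CMM "feature") to probed XY points
% using the linear form x^2 + y^2 + D*x + E*y + F = 0.
xy = [10.02  0.01;
       7.10  7.05;
       0.00  9.98;
      -7.07  7.08;
      -9.99  0.02];                 % probed points, in mm (illustrative)

x = xy(:,1);
y = xy(:,2);

A = [x, y, ones(size(x))];          % design matrix
b = -(x.^2 + y.^2);
p = A \ b;                          % least-squares solution for [D; E; F]

center = [-p(1)/2, -p(2)/2];
radius = sqrt(sum(center.^2) - p(3));

With these five points the fitted center comes out near the origin and the radius near 10 mm; real CMM software applies the same kind of fit (often with more robust algorithms) to report feature size and position.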
The machines are available in a wide range of sizes and designs with a variety of different probe technologies. They can be controlled and operated manually, or by CNC or PC controls. They are offered in various configurations such as benchtop, free-standing, handheld and portable.
As well as the traditional three axis machines, CMMs are now also available in a variety of other forms. These include CMM arms that use angular measurements taken at the joints of the arm to calculate the position of the stylus tip. Such arm CMMs are often used where their portability is an advantage over traditional fixed bed CMMs. Because CMM arms imitate the flexibility of a human arm they are also often able to reach the insides of complex parts that could not be probed using a standard three axis machine.
A further development was the addition of motors for driving each axis. Operators no longer had to physically touch the machine but could drive each axis using a handbox with joysticks in much the same way as with modern remote controlled cars. Measurement accuracy and precision improved dramatically with the invention of the electronic touch trigger probe. The pioneer of this new probe device was David McMurtry, who subsequently formed what is now Renishaw Plc, even today the driving force behind many developments in the CMM field. Although still a contact device, the probe had a spring loaded steel ball (later ruby ball) stylus. As the probe touched the surface of the component the stylus deflected and simultaneously sent the X, Y, Z coordinate information to the computer. Measurement errors caused by individual operators became fewer and the stage was set for the introduction of CNC operations and the coming of age of CMMs.
Optical probes are lens-CCD-systems, which are moved like the mechanical ones, and are aimed at the point of interest, instead of touching the material. The captured image of the surface will be enclosed in the borders of a measuring window, until the residue is adequate to contrast between black and white zones. The dividing curve can be calculated to a point, which is the wanted measuring point in space. The horizontal information on the CCD is 2D (XY) and the vertical position is the position of the complete probing system on the stand Z-drive (or other device component). This allows entire 3D-probing.
Optical probes and/or laser probes can be used (if possible in combination), which change CMMs into measuring microscopes or multi-sensor measuring machines. Fringe projection systems, theodolite triangulation systems or laser distance and triangulation systems are not called measuring machines, but the measuring result is the same: a space point. Laser probes are used to detect the distance between the surface and the reference point on the end of the kinematic chain (i.e. the end of the Z-drive component). This can use an interferometric, a light deflection or a half beam shadowing principle.
"Coordinate Input Apparatus, Control Method Thereof and Coordinate Input System" in Patent Application Approval Process
In what way are exponential and log functions related?
Our teacher said that they are inverses of each other but did not explain why.
Start with y = a^x. Taking log (base a) of both sides we get log_a y = x, so the inverse function is y = log_a x. A base other than a could also be taken. So log was needed to find the inverse. (Remember taking log creates some restrictions, as log is a function.) In general, a^p = X is the same statement as log_a X = p (a is the base).
Hope this helps.
Suppose you have a function y = f(x). The inverse of this function is x = f(y).
Notice that we switched the x to y and the y to x. Why? You should know that a function has a domain and range. The inverse function also has a domain and range, but what are they? The domain of the inverse function is the range of the original function and the range of the inverse function is the domain of the original function. Now that we have covered what inverse functions are, let's consider the case y = a^x.
Finding the inverse is easy, right? It's just x = a^y.
But now this is an implicit equation. We need to find y explicitly, and that's where we use logarithms: y = log_a(x).
The logarithm base e is commonly called the natural logarithm, and its notation is ln.
Thus, y = a^x and y = log_a(x) are inverses of each other.
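One quick way to check the inverse relationship is to compose the two functions in both orders (assuming a > 0 and a ≠ 1); in LaTeX notation:

\[
  \log_a\!\left(a^{x}\right) = x \quad \text{for all real } x,
  \qquad
  a^{\log_a(x)} = x \quad \text{for all } x > 0 .
\]

Each composition returns its input, which is exactly what it means for two functions to be inverses of each other.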
Chapter 12 Vectors as vectors
12.1 What’s a vector?
The word “vector” means different things to different people. In MATLAB, a vector is a matrix that has either one row or one column. So far we have used MATLAB vectors to represent
In this chapter we will see another use of MATLAB vectors: representing spatial vectors. A spatial vector is a value that represents a multidimensional physical quantity like position, velocity, acceleration or force.
These quantities cannot be described with a single number because they contain multiple components. For example, in a 3-dimensional Cartesian coordinate space, it takes three numbers to specify a position in space; they are usually called x, y and z coordinates. As another example, in 2-dimensional polar coordinates, you can specify a velocity with two numbers, a magnitude and an angle, often called r and θ.
It is convenient to represent spatial vectors using MATLAB vectors because MATLAB knows how to perform most of the vector operations you need for physical modeling. For example, suppose that you are given the velocity of a baseball in the form of a MATLAB vector with two elements, vx and vy, which are the components of velocity in the x and y directions.
>> V = [30, 40] % velocity in m/s
And suppose you are asked to compute the total acceleration of the ball due to drag and gravity. In math notation, the force due to drag is

Fd = -(1/2) ρ v² Cd A V̂

where V is a spatial vector representing velocity, v is the magnitude of the velocity (sometimes called “speed”), and V̂ is a unit vector in the direction of the velocity vector. The other terms, ρ, A and Cd, are scalars.
The magnitude of a vector is the square root of the sum of the squares of the elements. You could compute it with hypotenuse from Section 5.5, or you could use the MATLAB function norm (norm is another name for the magnitude of a vector):
>> v = norm(V)
v = 50
V̂ is a unit vector, which means it should have norm 1, and it should point in the same direction as V. The simplest way to compute it is to divide V by its own norm.
>> Vhat = V / v
Vhat = 0.6 0.8
Then we can confirm that the norm of V̂ is 1:
>> norm(Vhat)
ans = 1
To compute Fd we just multiply the scalar terms by V̂.
Fd = - 1/2 * C * rho * A * v^2 * Vhat
Similarly, we can compute acceleration by dividing the vector Fd by the scalar m.
Ad = Fd / m
To represent the acceleration of gravity, we create a vector with two components:
Ag = [0; -9.8]
The x component of gravity is 0; the y component is −9.8 m/s2.
Finally we compute total acceleration by adding vector quantities:
A = Ag + Ad;
One nice thing about this computation is that we didn’t have to think much about the components of the vectors. By treating spatial vectors as basic quantities, we can express complex computations concisely.
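As a rough consolidation of the steps above (only a sketch; the function name total_acceleration and the argument names are made up, and V is taken to be a column vector such as [30; 40] so that it can be added to Ag):

function A = total_acceleration(V, C, rho, area, m)
    % Total acceleration of the ball: drag plus gravity.
    % C, rho, area and m are the drag coefficient, air density,
    % cross-sectional area and mass.
    v    = norm(V);                              % speed
    Vhat = V / v;                                % unit vector along V
    Fd   = -1/2 * C * rho * area * v^2 * Vhat;   % drag force
    Ad   = Fd / m;                               % acceleration due to drag
    Ag   = [0; -9.8];                            % acceleration of gravity
    A    = Ag + Ad;
end

For example, total_acceleration([30; 40], 0.3, 1.3, 0.0042, 0.145) uses rough baseball values for the drag coefficient, air density, cross-sectional area and mass; the numbers are illustrative only.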
12.2 Dot and cross products
Multiplying a vector by a scalar is a straightforward operation; so is adding two vectors. But multiplying two vectors is more subtle. It turns out that there are two vector operations that resemble multiplication: dot product and cross product.
The dot product of vectors A and B is a scalar:

A · B = a b cos θ
where a is the magnitude of A, b is the magnitude of B, and θ is the angle between the vectors. We already know how to compute magnitudes, and you could probably figure out how to compute θ, but you don’t have to. MATLAB provides a function, dot, that computes dot products.
d = dot(A, B)
dot works in any number of dimensions, as long as A and B have the same number of elements.
If one of the operands is a unit vector, you can use the dot product to compute the component of a vector A that is in the direction of a unit vector, î:
s = dot(A, ihat)
In this example, s is the scalar projection of A onto î. The vector projection is the vector that has magnitude s in the direction of î:
V = dot(A, ihat) * ihat
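As a small made-up check, projecting A = [3, 4] onto the x direction gives a scalar projection of 3 and a vector projection along î:

>> A = [3, 4];
>> ihat = [1, 0];
>> s = dot(A, ihat)
s = 3
>> V = s * ihat
V = 3 0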
The cross product of vectors A and B is a vector whose direction is perpendicular to A and B and whose magnitude is

|A × B| = a b sin θ
where (again) a is the magnitude of A, b is the magnitude of B, and θ is the angle between the vectors. MATLAB provides a function, cross, that computes cross products.
C = cross(A, B)
cross only works for 3-dimensional vectors; the result is a 3-dimensional vector.
A common use of cross is to compute torques. If you represent a moment arm R and a force F as 3-dimensional vectors, then the torque is just
Tau = cross(R, F)
If the components of R are in meters and the components of F are in Newtons, then the torques in Tau are in Newton-meters.
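For a made-up numerical example, a 2 m moment arm along the x axis with a 10 N force along the y axis produces a torque about the z axis:

>> R = [2, 0, 0];
>> F = [0, 10, 0];
>> Tau = cross(R, F)
Tau = 0 0 20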
12.3 Celestial mechanics
Modeling celestial mechanics is a good opportunity to compute with spatial vectors. Imagine a star with mass m1 at a point in space described by the vector P1, and a planet with mass m2 at point P2. The magnitude of the gravitational force between them is

F = G m1 m2 / r²
where r is the distance between them and G is the universal gravitational constant, which is about 6.67 × 10−11 N m2 / kg2. Remember that this is the appropriate value of G only if the masses are in kilograms, distances in meters, and forces in Newtons.
The direction of the force on the star at P1 is in the direction toward P2. We can compute relative direction by subtracting vectors; if we compute R = P2 - P1, then the direction of R is from P1 to P2.
The distance between the planet and star is the length of R:
r = norm(R)
The direction of the force on the star is the unit vector R̂:
rhat = R / r
Exercise 1 Write a sequence of MATLAB statements that computes F12, a vector that represents the force on the star due to the planet, and F21, the force on the planet due to the star.
Exercise 2 Encapsulate these statements in a function named gravity_force_func that takes P1, m1, P2, and m2 as input variables and returns F12.
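One possible sketch for Exercises 1 and 2 (not necessarily the intended solution; the structure and names are just one choice) is:

function F12 = gravity_force_func(P1, m1, P2, m2)
    % Force on the body at P1 due to the body at P2.
    G    = 6.67e-11;          % N m^2 / kg^2
    R    = P2 - P1;           % vector from P1 toward P2
    r    = norm(R);           % distance between the bodies
    rhat = R / r;             % unit vector from P1 toward P2
    F12  = G * m1 * m2 / r^2 * rhat;
end

By Newton's third law, the force on the planet due to the star is simply F21 = -F12.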
Exercise 3 Write a simulation of the orbit of Jupiter around the Sun. The mass of the Sun is about 2.0 × 1030 kg. You can get the mass of Jupiter, its distance from the Sun and orbital velocity from http://en.wikipedia.org/wiki/Jupiter. Confirm that it takes about 4332 days for Jupiter to orbit the Sun.
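One possible shape for the rate function (again only a sketch; it assumes the state vector is laid out as W = [P1; P2; V1; V2] in SI units, that gravity_force_func is defined as above, and an approximate Jupiter mass):

function res = rate_func(t, W)
    P1 = W(1:2);   P2 = W(3:4);     % positions of Sun and Jupiter (m)
    V1 = W(5:6);   V2 = W(7:8);     % velocities (m/s)

    m1 = 2.0e30;                    % mass of the Sun (kg)
    m2 = 1.9e27;                    % mass of Jupiter (kg), approximate

    F12 = gravity_force_func(P1, m1, P2, m2);   % force on the Sun
    F21 = -F12;                                 % force on Jupiter

    res = [V1; V2; F12/m1; F21/m2]; % derivatives of positions and velocities
end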
12.4 Animation
Animation is a useful tool for checking the results of a physical model. If something is wrong, animation can make it obvious. There are two ways to do animation in MATLAB. One is to use getframe to capture a series of images and movie to play them back. The more informal way is to draw a series of plots. Here is an example I wrote for Exercise 12.3:
function animate_func(T,M)
    % animate the positions of the planets, assuming that the
    % columns of M are x1, y1, x2, y2.
    X1 = M(:,1);
    Y1 = M(:,2);
    X2 = M(:,3);
    Y2 = M(:,4);

    minmax = [min([X1;X2]), max([X1;X2]), min([Y1;Y2]), max([Y1;Y2])];

    for i=1:length(T)
        clf;
        axis(minmax);
        hold on;
        draw_func(X1(i), Y1(i), X2(i), Y2(i));
        drawnow;
    end
end
The input variables are the output from ode45, a vector T and a matrix M. The columns of M are the positions and velocities of the Sun and Jupiter, so X1 and Y1 get the coordinates of the Sun; X2 and Y2 get the coordinates of Jupiter.
minmax is a vector of four elements which is used inside the loop to set the axes of the figure. This is necessary because otherwise MATLAB scales the figure each time through the loop, so the axes keep changing, which makes the animation hard to watch.
Each time through the loop, animate_func uses clf to clear the figure and axis to reset the axes. hold on makes it possible to put more than one plot onto the same axes (otherwise MATLAB clears the figure each time you call plot).
Each time through the loop, we have to call drawnow so that MATLAB actually displays each plot. Otherwise it waits until you finish drawing all the figures and then updates the display.
draw_func is the function that actually makes the plot:
function draw_func(x1, y1, x2, y2)
    plot(x1, y1, 'r.', 'MarkerSize', 50);
    plot(x2, y2, 'b.', 'MarkerSize', 20);
end
The input variables are the position of the Sun and Jupiter. draw_func uses plot to draw the Sun as a large red marker and Jupiter as a smaller blue one.
Exercise 4 To make sure you understand how animate_func works, try commenting out some of the lines to see what happens.
One limitation of this kind of animation is that the speed of the animation depends on how fast your computer can generate the plots. Since the results from ode45 are usually not equally spaced in time, your animation might slow down where ode45 takes small time steps and speed up where the time step is larger.
There are two ways to fix this problem:
Exercise 5 Use animate_func and draw_func to visualize your simulation of Jupiter. Modify it so it shows one day of simulated time in 0.001 seconds of real time—one revolution should take about 4.3 seconds.
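One way (of several) to get this kind of pacing is to pause between frames in proportion to the simulated time step; the sketch below assumes T is in seconds and would replace the loop inside animate_func:

for i = 1:length(T)-1
    clf;
    axis(minmax);
    hold on;
    draw_func(X1(i), Y1(i), X2(i), Y2(i));
    drawnow;
    dt_days = (T(i+1) - T(i)) / 86400;   % simulated time step, in days
    pause(dt_days * 0.001);              % 0.001 s of real time per simulated day
end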
12.5 Conservation of Energy
A useful way to check the accuracy of an ODE solver is to see whether it conserves energy. For planetary motion, it turns out that ode45 does not.
The kinetic energy of a moving body is m v² / 2; the kinetic energy of a solar system is the total kinetic energy of the planets and sun. The potential energy of a sun with mass m1 and a planet with mass m2 and a distance r between them is

U = -G m1 m2 / r
Exercise 6 Write a function called energy_func that takes the output of your Jupiter simulation, T and M, and computes the total energy (kinetic and potential) of the system for each estimated position and velocity. Plot the result as a function of time and confirm that it decreases over the course of the simulation. Your function should also compute the relative change in energy, the difference between the energy at the beginning and end, as a percentage of the starting energy.
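Here is one way energy_func might start out (a sketch that assumes the same state layout as above, with columns x1, y1, x2, y2, vx1, vy1, vx2, vy2, and approximate masses):

function E = energy_func(T, M)
    % Total (kinetic plus potential) energy at each time step.
    G  = 6.67e-11;                        % N m^2 / kg^2
    m1 = 2.0e30;                          % Sun (kg)
    m2 = 1.9e27;                          % Jupiter (kg), approximate

    P1 = M(:, 1:2);   P2 = M(:, 3:4);     % positions
    V1 = M(:, 5:6);   V2 = M(:, 7:8);     % velocities

    KE = m1 * sum(V1.^2, 2) / 2 + m2 * sum(V2.^2, 2) / 2;
    r  = sqrt(sum((P2 - P1).^2, 2));      % separation at each step
    PE = -G * m1 * m2 ./ r;

    E = KE + PE;
    plot(T, E)
    xlabel('time (s)')
    ylabel('total energy (J)')

    relative_change = (E(end) - E(1)) / E(1)
end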
You can reduce the rate of energy loss by decreasing ode45’s tolerance option using odeset (see Section 11.1):
options = odeset('RelTol', 1e-5);
[T, M] = ode45(@rate_func, [0:step:end_time], W, options);
The name of the option is RelTol for “relative tolerance.” The default value is 1e-3 or 0.001. Smaller values make ode45 less “tolerant,” so it does more work to make the errors smaller.
Exercise 7 Run ode45 with a range of values for RelTol and confirm that as the tolerance gets smaller, the rate of energy loss decreases.
Exercise 8 Run your simulation with one of the other ODE solvers MATLAB provides and see if any of them conserve energy.
12.6 What is a model for?
In Section 7.2 I defined a “model” as a simplified description of a physical system, and said that a good model lends itself to analysis and simulation, and makes predictions that are good enough for the intended purpose.
Since then, we have seen a number of examples; now we can say more about what models are for. The goals of a model tend to fall into three categories.
The exercises at the end of this chapter include one model of each type.
Exercise 9 If you put two identical bowls of water into a freezer, one at room temperature and one boiling, which one freezes first?
Hint: you might want to do some research on the Mpemba effect.
Exercise 10 You have been asked to design a new skateboard ramp; unlike a typical skateboard ramp, this one is free to pivot about a support point. Skateboarders approach the ramp on a flat surface and then coast up the ramp; they are not allowed to put their feet down while on the ramp. If they go fast enough, the ramp will rotate and they will gracefully ride down the rotating ramp. Technical and artistic display will be assessed by the usual panel of talented judges.
Your job is to design a ramp that will allow a rider to accomplish this feat, and to create a physical model of the system, a simulation that computes the behavior of a rider on the ramp, and an animation of the result.
A binary star system contains two stars orbiting each other and sometimes planets that orbit one or both stars. In a binary system, some orbits are “stable” in the sense that a planet can stay in orbit without crashing into one of the stars or flying off into space.
Simulation is a useful tool for investigating the nature of these orbits, as in Holman, M.J. and P.A. Wiegert, 1999, “Long-Term Stability of Planets in Binary Systems,” Astronomical Journal 117, available from http://citeseer.ist.psu.edu/358720.html.
Read this paper and then modify your planetary simulation to replicate or extend the results.
The Scientific Method
Biology is a science that studies living things.
The scientific method is an ordered sequence of investigative steps used by most scientists to explain the phenomena observed in nature.
The Scientific Method involves the following steps:
1. Observation: must be accurate and unbiased.
2. Hypothesis: a testable possible explanation. It is an if…then statement. It is a prediction of what the scientist (student) thinks may happen as a result of a factor (variable) being used in the experiment.
3. Experimentation: Activities done to test the hypothesis made. During this process data is collected and interpreted. This data relates to the hypothesis: it will either support the hypothesis, refute it, or lead the experimenter to change it.
4. Conclusion: hypothesis supported or not supported (rejected).
5. Further Testing: The supported hypothesis is tested many times to be sure of its validity, or a new hypothesis is formed and tested if the original one was rejected.
And so it continues until a hypothesis, subjected to rigorous testing, is found ‘not false’. After many tests a theory is formed. Then, after many more tests, the theory becomes a scientific law.
Controlled Experiments – a double test
The Control (sometimes called the constant) is the procedure without the factor (variable) under investigation. This is the part of the experiment that is done without the variable being used. The control and the experimental procedures must be identical in all other aspects. This will show the experimenter how what he/she is testing (variable) has affected the outcome. Results of control and experiment are compared. If there are identical control and experiment results then the variable is of no relevance. If there are different results then the variable under investigation has a significant role.
Important Aspects of Good Experimental Procedure
1. Well planned and designed: the hypothesis will be tested properly.
2. Safe working: a full and accurate risk assessment of each step is vital.
3. A suitable control must be designed.
4. Repetition: to verify the results.
5. Independent Verification: other unconnected scientists must repeat the work exactly and obtain the same results.
6. No Bias: the appeal of the hypothesis must not influence the procedures or interpretation of the results.
1. Large sample size: better chance of gaining the true representative average.
2. Random selection: more likely to produce a ‘regular type’.
3. Double-blind testing: the investigating scientist and the test subjects do not know the composition of the control or experimental group.
Here is a simple example of the scientific method:
Find the ratio of the area of the region of points closer to the centroid than to the sides of an equilateral triangle to the area of the triangle.
Consider each side as a directrix and the centroid as a focal point. The set of points equidistant from the centroid and the respective side would be a parabola. So the region whose area is needed to compare to the area of the triangle is the intersection of three parabolas.
The region is shown here:
The segments from the centroid to the vertices divide the equilateral triangle into three congruent areas. Therefore, the desired ratio of the region to the equilateral triangle is the same as the ratio of the part of triangle AGC that is above the arc of the parabola to the area of triangle AGC.
The problem is replaced by one of determining the ratio of the area of the region cut off by a parabola determined by a directrix along one side and having its focus as the vertex of the triangle.
The region cut off by the parabola can be seen as the sum of two areas, one a similar triangle and the other a section of the parabola cut off by a chord.
Archimedes, in his Quadrature of the Parabola, provided us with a formula for the area of the parabolic section as

Area of parabolic section = (4/3) a(ABC)
where a(ABC) is the area of triangle ABC. A and C are the endpoints of the chord of the parabola and B is its vertex. Erbas provides an explanation and derivation of the Archimedes quadrature.
Washington, May 12 (ANI): An international team of astronomers has discovered what appears to be a supermassive black hole leaving its home galaxy at high speed.
For her final-year project, undergraduate student Marianne Heida of the University of Utrecht, worked at the SRON Netherlands Institute for Space Research, using the Chandra Source Catalog (made using the orbiting Chandra X-ray Observatory) to compare hundreds of thousands of sources of X-rays with the positions of millions of galaxies.
Normally each galaxy contains a supermassive black hole at its center. The material that falls into black holes heats up dramatically on its final journey and often means that black holes are strong X-ray sources.
X-rays are also able to penetrate the dust and gas that obscures the center of a galaxy, giving astronomers a clear view of the region around the black hole, with the bright source appearing as a starlike point.
Looking at one galaxy in the catalog, Marianne noticed that the point of light was offset from the center and yet was so bright that it could well be associated with a supermassive black hole.
The black hole appears to be in the process of being expelled from its galaxy at high speed. Given that these objects can have masses equivalent to 1 billion Suns, it takes a special set of conditions to cause this to happen.
Marianne's newly-discovered object is probably the result of the merger of two smaller black holes. Supercomputer models suggest that the larger black hole that results is shot away at high speed, depending on the direction and speed in which the two black holes rotate before their collision.
In any case, it provides a fascinating insight into the way in which supermassive black holes develop in the centers of galaxies.
Marianne's research — which was carried out under the supervision of SRON researcher Peter Jonker — suggests this discovery may be only the tip of the iceberg, with others subject to future confirmation using the Chandra Observatory.
“We have found many more objects in this strange class of X-ray sources. With Chandra we should be able to make the accurate measurements we need to pinpoint them more precisely and identify their nature,” she said.
Finding more recoiling black holes will provide a better understanding of the characteristics of black holes before they merge.
In future, it might even be possible to observe this process with the planned LISA satellite, an instrument capable of measuring the gravitational waves that the two merging black holes emit.
Ultimately this information will let scientists know if supermassive black holes in the cores of galaxies really are the result of many lighter black holes merging together.
This discovery appears in a paper in the journal Monthly Notices of the Royal Astronomical Society. (ANI)
Mean, Median, and Mode.
Title – Mean, Median, and Mode.
By – Kristin Reeves
Primary Subject – Math
Grade Level – 7
Multimedia Graphing Unit Contents:
- Lesson 1: YouTube Scatter Plot Instructions
- Lesson 2: Using Scatter Plots
- Lesson 3: Types of Graphs PowerPoint
- Lesson 4: Create Excel Chart and Kidspiration Presentation
- Lesson 5: Mean, Median, Mode and M&Ms (below)
- Mean, Median, and Mode
- D.AN.07.04 Find and interpret the median, quartiles, and interquartile range of a given set of data.
Learning Resources and Materials:
- Packets of M&Ms
Development of Lesson:
- We will discuss the definitions of the mean, median, and mode.
- Students will count and graph the number of pieces of candy in their packs by color. Each student will graph his/her results.
- Use the above data to figure the mean, the average number of pieces of candy of each color. For example, if the data indicates 1 red candy, 2 green candies, 3 orange candies, 7 yellow candies, and 7 purple candies, the mean, or average number of pieces of each color is 4. [ (1 + 2 + 3 + 7 + 7) / 5 = 4 ]
- Use the data to calculate the median, or middle number. (For the example above, the ordered counts are 1, 2, 3, 7, 7, so the median is 3.)
- Use the data to calculate the mode, or the number that occurs most frequently. (In the example above, the mode is 7.)
- The students will record their data on a worksheet and graph their findings.
- We will compile all data and find a class-wide mean, median, and mode.
- I will assist any students with special needs in the counting of their candy.
- They will turn in their worksheets and graphs for grading.
- Eat M&Ms!
One of the largest asteroids known to have made a close approach to Earth flew past about 300,000 miles away on March 8, but nobody noticed it until four days later. When the object, which has been named 2002 EM7, passed closest to the Earth, it was too close to the Sun to be visible. A telescope operated by the Lincoln Laboratory at M.I.T. first recorded the new asteroid on March 12, as it moved away from the Earth and more of its bright side came into view.
Asteroids approaching from a blind spot cannot be seen by astronomers. If an object passed through this zone on a collision course with Earth, it would not be identified until it was too late for any intervention.
Astronomers have made numerous pleas in recent years for more funds to catalogue near-Earth objects and their orbits. This would reduce the number of unknown objects that could possibly take us by surprise, and give us early warning about potential future collisions.
Further observations by Timothy Spahr of the Harvard-Smithsonian Center for Astrophysics revealed that 2002 EM7 has a 323-day orbit. It is invisible to the naked eye, and is too small to be classed as a "potentially hazardous asteroid." But it's probably between 55 and 100 yards across, meaning it's bigger than the object that exploded in 1908 over the Tunguska region of Siberia, flattening trees for 1,200 square miles.
Brian Marsden of Harvard-Smithsonian says preliminary calculations indicate that 2002 EM7 will have several more chances to hit the Earth during the next century, with odds of one in six million to one in a billion of an actual impact.
NASA announced this week a new Web-based asteroid monitoring system called Sentry that can monitor and assess the threat of space rocks that could possibly strike the Earth. It will help scientists communicate with each other about their discoveries of new, potentially threatening asteroids.
While no large asteroid is currently known to be on a collision course with our planet, experts say an eventual impact is inevitable and the consequences could be serious, up to and including global devastation that might destroy civilization as we know it. The odds of such an impact in any given decade are extremely low, and most experts agree that there would likely be at least 10 years of warning if such an object were ever spotted.
However, smaller asteroids like 2002 EM7 are more likely to hit Earth in any given year and could cause significant local or regional damage. The odds of a locally or regionally destructive asteroid hitting an inhabited area in a given 50-year period are about 1-in-160, according to experts.
In recent years, asteroid experts worldwide have struggled to develop a system to catalogue and track newly spotted Near Earth Asteroids and communicate any possible threats to the public.
However, asteroids move so slowly against the background of stars that when one is first discovered, astronomers cannot pin down its exact path. Recent false alarms, when scientists said there was a threat that a particular asteroid would hit Earth in a certain year, have made headlines and frightened the public.
The new Sentry system, developed over the past two years, is operated out of NASA's Jet Propulsion Laboratory. The system's online "Risks Page" listed 37 asteroids last week.
"Objects normally appear on the Risks Page because their orbits can bring them close to the Earth's orbit and the limited number of available observations do not yet allow their trajectories to be well-enough defined," says JPL's Donald Yeomans, manager of NASA's Near-Earth Object Program Office, which oversees Sentry. "By far the most likely outcome is that the object will eventually be removed as new observations become available, the object's orbit is improved, and its future motion is more tightly constrained." Several new asteroids will be added to the list each month, only to be removed to the "no-risk" page soon afterwards.
A color-coded measurement called the Torino Scale, developed in 1999, gives each asteroid a number. Zero or one represents a remote risk, and a 10 means there will be an impact. All but one of the asteroids currently on the Sentry list are zeros on the Torino Scale. At the top of the list, however, is space rock 2002 CU11, discovered February 7. It presently has a 1-in-100,000 chance of hitting the Earth on August 31, 2049. But as its orbit is refined, it's possible this asteroid, like many others, will be re-categorized as harmless.
Asteroid detections have increased in recent months because Congress has asked NASA to find 90 percent of all Near Earth Objects larger than 0.6 miles across by 2008. About 500 of these asteroids have been found, and an estimated 500 remain undiscovered.
Direct democracy, classically termed pure democracy, comprises a form of democracy and theory of civics wherein sovereignty is lodged in the assembly of all citizens who choose to participate. Depending on the particular system, this assembly might pass executive motions, make laws, elect and dismiss officials and conduct trials. Where the assembly elects officials, these are executive agents or direct representatives, bound to the will of the people.
Direct democracy stands in contrast to representative democracy, where sovereignty is exercised by a subset of the people, usually on the basis of election. However, it is possible to combine the two into representative direct democracy.
Modern direct democracy is characterized by three pillars: the initiative, the referendum and the recall.
Referendums can include the ability to hold a binding referendum on whether a given law should be scrapped. This effectively grants the populace a veto on government legislation. The recall gives the people the right to remove elected officials from office before the end of their term.
The first recorded democracy, which was also direct, was the Athenian democracy in the 5th century BC. The main bodies in the Athenian democracy were the assembly (composed of male citizens), the boule (composed of 500 citizens chosen annually by lot), and the law courts (composed of a massive number of jurors chosen by lot, with no judges). Out of the male population of 30,000, several thousand citizens were politically active every year, and many of them quite regularly for years on end. The Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people, through the assembly, boule and law courts, controlled the entire political process, and a large proportion of citizens were involved constantly in the public business. Modern democracies do not use institutions that resemble the Athenian system of rule, due to the problems that arise when implementing such a system on the scale of modern societies.
Also relevant is the history of the Roman Republic, beginning circa 449 BC (Cary, 1967). The ancient Roman Republic's "citizen lawmaking"—citizen formulation and passage of law, as well as citizen veto of legislature-made law—began about 449 BC and lasted the approximately four hundred years to the death of Julius Caesar in 44 BC. Many historians mark the end of the Republic on the passage of a law named the Lex Titia, 27 November 43 BC (Cary, 1967).
Modern-era citizen lawmaking began in the towns of Switzerland in the 13th century. In 1847, the Swiss added the "statute referendum" to their national constitution. They soon discovered that merely having the power to veto Parliament's laws was not enough. In 1891, they added the "constitutional amendment initiative". The Swiss political battles since 1891 have given the world a valuable experience base with the national-level constitutional amendment initiative (Kobach, 1993). Today, Switzerland is still an example of modern direct democracy, as it exhibits the first two pillars at both the local and federal levels. In the past 120 years more than 240 initiatives have been put to referendum. The populace has been conservative, approving only about 10% of the initiatives put before them; in addition, they have often opted for a version of the initiative rewritten by government. (See Direct democracy in Switzerland below.) Another example is the United States, where, despite being a federal republic where no direct democracy exists at the federal level, over half the states (and many localities) provide for citizen-sponsored ballot initiatives (also called "ballot measures" or "ballot questions") and the vast majority of the states have either initiatives and/or referendums. (See Direct democracy in the United States below.)
Some of the issues surrounding the related notion of a direct democracy using the Internet and other communications technologies are dealt with in e-democracy. More concisely, the concept of open source governance applies principles of the free software movement to the governance of people, allowing the entire populace to participate in government directly, as much or as little as they please. This development strains the traditional concept of democracy, because it does not give equal representation to each person. Some implementations may even be considered democratically-inspired meritocracies, where contributors to the code of laws are given preference based on their ranking by other contributors.
Many political movements within representative democracies seek to restore some measure of direct democracy or a more deliberative democracy, including consensus decision-making rather than simply majority rule. Such movements advocate more frequent public votes and referendums on issues, and less of the so-called "rule by politician". Collectively, these movements are referred to as advocating grassroots democracy or consensus democracy, to differentiate them from a simple direct democracy model. Another related movement is community politics, which seeks to engage representatives with communities directly.
Anarchists (usually Social anarchists) have advocated forms of direct democracy as an alternative to the centralized state and capitalism, however, some anarchists such as individualist anarchists have criticized direct democracy and democracy in general for ignoring the rights of the minority and instead have advocated a form of consensus decision-making. Libertarian Marxists, however, fully support direct democracy in the form of the proletarian republic and see majority rule and citizen participation as virtues. Within Marxist circles, "proletarian democracy" is synonymous with direct democracy, just as "bourgeois democracy" is synonymous with representative democracy.
Arguments for direct democracy
Arguments in favor of direct democracy tend to focus on perceived flaws in the alternative, representative democracy, which is sometimes seen as a form of oligarchy (Hans Köchler, 1995), and on its properties, such as nepotism and a lack of transparency and accountability to the people:
- Non-representation. Individuals elected to office in a representative democracy tend not to be demographically representative of their constituency. They tend to be wealthier and more educated, and are also more predominantly male as well as members of the majority race, ethnic group, and religion than a random sample would produce. They also tend to be concentrated in certain professions, such as lawyers. Elections by district may reduce, but not eliminate, those tendencies, in a segregated society. Direct democracy would be inherently representative, assuming universal suffrage (where everyone can vote). Critics counter that direct democracy can be unrepresentative, if not all eligible voters participate in every vote, and that this lack of voter turnout is not equally distributed among various groups. Greater levels of education, especially regarding law, seem to have many advantages and disadvantages in lawmaking.
- Conflict of interest. The interests of elected representatives do not necessarily correspond with those of their constituents. An example is that representatives often get to vote to determine their own salaries. It is in their interest that the salaries be high, while it is in the interest of the electorate that they be as low as possible, since they are funded with tax revenue. The typical results of representative democracy are that their salaries are much higher than this average, however. Critics counter that salaries for representatives are necessary, otherwise only the wealthy could afford to participate.
- Corruption. The concentration of power intrinsic to representative government is seen by some as tending to create corruption. In direct democracy, the possibility for corruption is reduced.
- Political parties. The formation of political parties is considered by some to be a "necessary evil" of representative democracy, where combined resources are often needed to get candidates elected. However, such parties mean that individual representatives must compromise their own values and those of the electorate, in order to fall in line with the party platform. At times, only a minor compromise is needed. At other times such a large compromise is demanded that a representative will resign or switch parties. In structural terms, the party system may be seen as a form of oligarchy. (Hans Köchler, 1995) Meanwhile, in direct democracy, political parties have virtually no effect, as people do not need to conform with popular opinions. In addition to party cohesion, representatives may also compromise in order to achieve other objectives, by passing combined legislation, where for example minimum wage measures are combined with tax relief. In order to satisfy one desire of the electorate, the representative may have to abandon a second principle. In direct democracy, each issue would be decided on its own merits, and so "special interests" would not be able to include unpopular measures in this way.
- Government transition. The change from one ruling party to another, or to a lesser extent from one representative to another, may cause a substantial governmental disruption and change of laws. For example, US Secretary of State (then National Security Advisor) Condoleezza Rice cited the transition from the previous Clinton Administration as a principal reason why the United States was unable to prevent the September 11, 2001 attacks. The Bush Administration had taken office just under 8 months prior to the attacks.
- Cost of elections. Many resources are spent on elections which could be applied elsewhere. Furthermore, the need to raise campaign contributions is felt to seriously damage the neutrality of representatives, who are beholden to major contributors, and reward them, at the very least, by granting access to government officials. However, direct democracy would require many more votings, which would be costly, and also probably campaigns by those who may lose or gain from the results.
- Patronage and nepotism. Elected individuals frequently appoint people to high positions based on their mutual loyalty, as opposed to their competence. For example, Michael D. Brown was appointed to head the US Federal Emergency Management Agency, despite a lack of experience. His subsequent poor performance following Hurricane Katrina may have greatly increased the number of deaths. In a direct democracy where everybody voted for agency heads, it wouldn't be likely for them to be elected solely based on their relationship with the voters. On the other hand, most people may have no knowledge of the candidates and get tired of voting for every agency head. As a result, mostly friends and relatives may vote.
- Lack of transparency. Supporters argue that direct democracy, where people vote directly for issues concerning them, would result in greater political transparency than representative democracy. Critics argue that representative democracy can be equally transparent. In both systems people cannot vote on everything, leaving many decisions to some forms of managers, requiring strong Freedom of Information legislation for transparency.
- Insufficient sample size. It is often noted that prediction markets most of the time produce remarkably efficient predictions regarding the future. Many, maybe even most, individuals make bad predictions, but the resulting average prediction is often surprisingly good. If the same applies to making political decisions, then direct democracy may produce very efficient decisions.
- Lack of accountability. Once elected, representatives are free to act as they please. Promises made before the election are often broken, and they frequently act contrary to the wishes of their electorate. Although theoretically it is possible to have a representative democracy in which the representatives can be recalled at any time; in practice this is usually not the case. An instant recall process would, in fact, be a form of direct democracy.
- Voter apathy. If voters have more influence on decisions, it is argued that they will take more interest in and participate more in deciding those issues.
Arguments against direct democracy
- Scale. Direct democracy works on a small system. For example, the Athenian Democracy governed a city of, at its height, about 30,000 eligible voters (free adult male citizens). Town meetings, a form of local government once common in New England, have also worked well, often emphasizing consensus over majority rule. The use of direct democracy on a larger scale has historically been more difficult, however. Nevertheless, developments in technology such as the internet, user-friendly and secure software, and inexpensive, powerful personal computers have all inspired new hope in the practicality of large scale applications of direct democracy. Furthermore ideas such as council democracy and the Marxist concept of the dictatorship of the proletariat are if nothing else proposals to enact direct democracy in nation-states and beyond.
- Practicality and efficiency. Another objection to direct democracy is that of practicality and efficiency. Deciding all or most matters of public importance by direct referendum is slow and expensive (especially in a large community), and can result in public apathy and voter fatigue, especially when repeatedly faced with the same questions or with questions which are unimportant to the voter. Modern advocates of direct democracy often suggest e-democracy (sometimes including wikis, television and Internet forums) to address these problems.
- Demagoguery. A fundamental objection to direct democracy is that the public generally gives only superficial attention to political issues and is thus susceptible to charismatic argument or demagoguery. The counter argument is that representative democracy causes voters not to pay attention, since each voter's opinion doesn't matter much and their legislative power is limited. However, if the electorate is large, direct democracy also brings the effect of diminished vote significance, lacking a majority vote policy.
- One possible solution is demanding that a proposal requires the support of at least 50% of all citizens in order to pass, effectively meaning that absent voters count as "No" votes. This would prevent minorities from gaining power. However, this still means that the majority could be swayed by demagoguery. Also, this solution could be used by representative democracy.
- Complexity. A further objection is that policy matters are often so complicated that not all voters understand them. The average voter may have little knowledge regarding the issues that should be decided. The arduous electoral process in representative democracies may mean that the elected leaders have above average ability and knowledge. Advocates of direct democracy argue, however, that laws need not be so complex and that having a permanent ruling class (especially when populated in large proportion by lawyers) leads to overly complex tax laws, etc. Critics doubt that laws can be extremely simplified and argue that many issues require expert knowledge. Supporters argue that such expert knowledge could be made available to the voting public. Supporters further argue that policy matters are often so complicated that politicians in traditional representative democracy do not all understand them. In both cases, the solution for politicians and demos of the public is to have experts explain the complexities.
- Voter apathy. The average voter may not be interested in politics and therefore may not participate. This immediately reveals the lack of interest either in the issues themselves or in the options; sometimes people need to redefine the issues before they can vote either in favor or in opposition. A small amount of voter apathy is always to be expected, and this is not seen as a problem so long the levels remain constant among (do not target) specific groups of people. That is, if 10% of the population voted with representative samples from all groups in the population, then in theory, the outcome would be correct. Nevertheless, the high level of voter apathy would reveal a substantial escalation in voter fatigue and political disconnect. The risk is, however, that voter apathy would not apply to special interest groups. For example, most farmers may vote for a proposal to increase agricultural subsidies to themselves while the general population ignore this issue. If many special interest groups do the same thing, then the resources of the state may be exhausted. One possible solution is compulsory voting, although this has problems of its own such as restriction of freedom, costs of enforcement, and random voting.
- Self-interest. It is very difficult under a system of direct democracy to make a law which benefits a smaller group if it hurts a larger group, even if the benefit to the small group outweighs that of the larger group. This point is also an argument in favour of Direct Democracy, as current representative party systems often make decisions that are not in line with or in favour of the mass of the population, but of a small group. It should be noted that this is a criticism of democracy in general. "Fiscal responsibility", for instance, is difficult under true direct democracy, as people generally do not wish to pay taxes, despite the fact that governments need a source of revenue. One possible solution to the issue regarding minority rights and public welfare is to have a constitution that requires that minority interests and public welfare (such as healthcare, etc) be protected and ensures equality, as is the case with representative democracy. The demos would be able to work out the "how" of providing services, but some of the "what" that is to be provided could be enshrined in a constitution.
- Suboptimality. Results may be quite different depending on whether people vote on single issues separately in referendums, or on a number of options bundled together by political parties. As explained in the article on majority rule, the results from voting separately on the issues may be suboptimal, which is a strong argument against the indiscriminate use of referendums. With direct democracy, however, the one-vote one-human concept and individualism with respect to voting would tend to discourage the formation of parties. Further optimality might be achieved, argue proponents, by having recallable delegates to specialized councils and higher levels of governance, so that the primary focus of the everyday citizen would be on their local community.
- Manipulation by timing and framing. If voters are to decide on an issue in a referendum, a day (or other period of time) must be set for the vote and the question must be framed, but since the date on which the question is set and different formulations of the same question evoke different responses, whoever sets the date of the vote and frames the question has the possibility of influencing the result of the vote. Manipulation is also present in pure democracy with a growing population. Original members of the society are able to instigate measures and systems that enable them to manipulate the thoughts of new members to the society. Proponents counter that a portion of time could be dedicated and mandatory as opposed to a per-issue referendum. In other words, each member of civil society could be required to participate in governing their society each week, day, or other period of time.
Direct democracy in Switzerland
In Switzerland, single majorities are sufficient at the town, city, and state (canton and half-canton) level, but at the national level, "double majorities" are required on constitutional matters. The intent of the double majorities is simply to ensure any citizen-made law's legitimacy.
Double majorities are, first, the approval by a majority of those voting, and, second, a majority of states in which a majority of those voting approve the ballot measure. A citizen-proposed law (i.e. initiative) cannot be passed in Switzerland at the national level if a majority of the people approve, but a majority of the states disapprove (Kobach, 1993). For referendums or proposition in general terms (like the principle of a general revision of the Constitution), the majority of those voting is enough (Swiss constitution, 2005).
In 1890, when the provisions for Swiss national citizen lawmaking were being debated by civil society and government, the Swiss copied the idea of double majorities from the United States Congress, in which House votes were to represent the people and Senate votes were to represent the states (Kobach, 1993). According to its supporters, this "legitimacy-rich" approach to national citizen lawmaking has been very successful. Kobach claims that Switzerland has had tandem successes both socially and economically which are matched by only a few other nations, and that the United States is not one of them. Kobach states at the end of his book, "Too often, observers deem Switzerland an oddity among political systems. It is more appropriate to regard it as a pioneer." Finally, the Swiss political system, including its direct democratic devices in a multi-level governance context, becomes increasingly interesting for scholars of EU integration (see Trechsel, 2005).
Direct democracy in the United States
Direct democracy was very much opposed by the framers of the United States Constitution and some signers of the Declaration of Independence. They saw a danger in majorities forcing their will on minorities. As a result, they advocated a representative democracy in the form of a constitutional republic over a direct democracy. For example, James Madison, in Federalist No. 10, advocates a constitutional republic over direct democracy precisely to protect the individual from the will of the majority. He says, "A pure democracy can admit no cure for the mischiefs of faction. A common passion or interest will be felt by a majority, and there is nothing to check the inducements to sacrifice the weaker party. Hence it is, that democracies have ever been found incompatible with personal security or the rights of property; and have, in general, been as short in their lives as they have been violent in their deaths." John Witherspoon, one of the signers of the Declaration of Independence, said "Pure democracy cannot subsist long nor be carried far into the departments of state — it is very subject to caprice and the madness of popular rage." Alexander Hamilton said, "That a pure democracy if it were practicable would be the most perfect government. Experience has proved that no position is more false than this. The ancient democracies in which the people themselves deliberated never possessed one good feature of government. Their very character was tyranny; their figure deformity...".
Despite the framers' intentions in the beginning of the republic, ballot measures and their corresponding referendums have been widely used at the state and sub-state level. There is much state and federal case law, from the early 1900s to the 1990s, that protects the people's right to each of these direct democracy governance components (Magleby, 1984, and Zimmerman, 1999). The first United States Supreme Court ruling in favor of the citizen lawmaking was in Pacific States Telephone and Telegraph Company v. Oregon, 223 U.S. 118—in 1912 (Zimmerman, December 1999). President Theodore Roosevelt, in his "Charter of Democracy" speech to the 1912 Ohio constitutional convention, stated "I believe in the Initiative and Referendum, which should be used not to destroy representative government, but to correct it whenever it becomes misrepresentative."
In various states, referendums through which the people rule include:
- Referrals by the legislature to the people of "proposed constitutional amendments" (constitutionally used in 49 states, excepting only Delaware — Initiative & Referendum Institute, 2004).
- Referrals by the legislature to the people of "proposed statute laws" (constitutionally used in all 50 states — Initiative & Referendum Institute, 2004).
- Constitutional amendment initiative is the most powerful citizen-initiated, direct democracy governance component. It is a constitutionally-defined petition process of "proposed constitutional law," which, if successful, results in its provisions being written directly into the state's constitution. Since constitutional law cannot be altered by state legislatures, this direct democracy component gives the people an automatic superiority and sovereignty, over representative government (Magelby, 1984). It is utilized at the state level in eighteen states: Arizona, Arkansas, California, Colorado, Florida, Illinois, Massachusetts, Michigan, Mississippi, Missouri, Montana, Nebraska, Nevada, North Dakota, Ohio, Oklahoma, Oregon and South Dakota (Cronin, 1989). Among the eighteen states, there are three main types of the constitutional amendment initiative, with different degrees of involvement of the state legislature distinguishing between the types (Zimmerman, December 1999).
- Statute law initiative is a constitutionally-defined, citizen-initiated, petition process of "proposed statute law," which, if successful, results in law being written directly into the state's statutes. The statute initiative is used at the state level in twenty-one states: Alaska, Arizona, Arkansas, California, Colorado, Idaho, Maine, Massachusetts, Michigan, Missouri, Montana, Nebraska, Nevada, North Dakota, Ohio, Oklahoma, Oregon, South Dakota, Utah, Washington and Wyoming (Cronin, 1989). Note that, in Utah, there is no constitutional provision for citizen lawmaking. All of Utah's I&R law is in the state statutes (Zimmerman, December 1999). In most states, there is no special protection for citizen-made statutes; the legislature can begin to amend them immediately.
- Statute law referendum is a constitutionally-defined, citizen-initiated, petition process of the "proposed veto of all or part of a legislature-made law," which, if successful, repeals the standing law. It is used at the state level in twenty-four states: Alaska, Arizona, Arkansas, California, Colorado, Idaho, Kentucky, Maine, Maryland, Massachusetts, Michigan, Missouri, Montana, Nebraska, Nevada, New Mexico, North Dakota, Ohio, Oklahoma, Oregon, South Dakota, Utah, Washington and Wyoming (Cronin, 1989).
- The recall is a constitutionally-defined, citizen-initiated, petition process, which, if successful, removes an elected official from office by "recalling" the official's election. In most state and sub-state jurisdictions having this governance component, voting for the ballot that determines the recall includes voting for one of a slate of candidates to be the next office holder, if the recall is successful. It is utilized at the state level in eighteen states: Alaska, Arizona, California, Colorado, Georgia, Idaho, Kansas, Louisiana, Michigan, Minnesota, Montana, Nevada, New Jersey, North Dakota, Oregon, Rhode Island, Washington and Wisconsin (National Conference of State Legislatures, 2004, Recall Of State Officials).
There are now a total of 24 U.S. states with constitutionally-defined, citizen-initiated, direct democracy governance components (Zimmerman, December 1999). In the United States, for the most part only one-time majorities are required (simple majority of those voting) to approve any of these components.
In addition, many localities around the U.S. provide for some or all of these direct democracy governance components, and for specific classes of initiatives (such as those for raising taxes) there is a supermajority voting threshold requirement. Even in states where direct democracy components are scant or nonexistent at the state level, there often exist local options for deciding specific issues, such as whether a county should be "wet" or "dry" in terms of whether alcohol sales are allowed.
In the U.S. region of New England, nearly all towns practice a very limited form of home rule, and decide local affairs through the direct democratic process of the town meeting.
Venezuela has been experimenting with direct democracy since its new constitution was approved in 1999. However, this arrangement has been in place for less than ten years, and the results are controversial. Still, its constitution does enshrine the right of popular initiative, sets minimal requirements for referenda (a few of which have already been held), and includes the institution of recall for any elected authority (which has been used, unsuccessfully, against its incumbent president).
Contemporary movements for direct democracy via direct democratic praxis
Some contemporary movements working for direct democracy via direct democratic praxis include:
- Arnon, Harel (January 2008). "A Theory of Direct Legislation" (LFB Scholarly)
- Cary, M. (1967) A History Of Rome: Down To The Reign Of Constantine. St. Martin's Press, 2nd edition.
- Cronin, Thomas E. (1989). Direct Democracy: The Politics Of Initiative, Referendum, And Recall. Harvard University Press.
- Finley, M.I. (1973). Democracy Ancient And Modern. Rutgers University Press.
- Fotopoulos, Takis, Towards an Inclusive Democracy: The Crisis of the Growth Economy and the Need for a New Liberatory Project (London & NY: Cassell, 1997).
- Fotopoulos, Takis, The Multidimensional Crisis and Inclusive Democracy. (Athens: Gordios, 2005). (English translation of the book with the same title published in Greek).
- Fotopoulos, Takis, "Liberal and Socialist “Democracies” versus Inclusive Democracy", The International Journal of INCLUSIVE DEMOCRACY, vol.2, no.2, (January 2006).
- Gerber, Elisabeth R. (1999). The Populist Paradox: Interest Group Influence And The Promise Of Direct Legislation. Princeton University Press.
- Hansen, Mogens Herman (1999). The Athenian Democracy in the Age of Demosthenes: Structure, Principles and Ideology. University of Oklahoma, Norman (orig. 1991).
- Kobach, Kris W. (1993). The Referendum: Direct Democracy In Switzerland. Dartmouth Publishing Company.
- Köchler, Hans (1995). A Theoretical Examination of the Dichotomy between Democratic Constitutions and Political Reality. University Center Luxemburg.
- Magleby, David B. (1984). Direct Legislation: Voting On Ballot Propositions In The United States. Johns Hopkins University Press.
- National Conference of State Legislatures, (2004). Recall Of State Officials
- Polybius (c. 150 BC). The Histories. Oxford University, The Great Histories Series, ed. Hugh R. Trevor-Roper and E. Badian. Translated by Mortimer Chambers. Washington Square Press, Inc. (1966).
- Reich, Johannes (2008). An Interactional Model of Direct Democracy - Lessons from the Swiss Experience. SSRN Working Paper.
- Raaflaub K. A., Ober J., Wallace R. W., Origins of Democracy in Ancient Greece, University of California Press, 2007
- Zimmerman, Joseph F. (March 1999). The New England Town Meeting: Democracy In Action. Praeger Publishers.
- Zimmerman, Joseph F. (December 1999). The Initiative: Citizen Law-Making. Praeger Publishers.
- ðÐ ─ German and international dd-portal.
- Kol1 ─ Movement for Direct Democracy In Israel.
- MyVote (in Hebrew) ─ Movement for Direct Democracy In Israel.
- democraticidiretti.org - Association of Direct Democrats
- listapartecipata.it ─ Roman chapter of the Association of Direct Democrats' campaign to present a list of candidates for the Rome Province election, to be controlled through an ad-hoc temporary organization of citizens.

Source: http://www.reference.com/browse/misrepresentative
The Algebra 1 Teacher's guide to the Common Core State Standards for Mathematics.
Linear Equations and Inequalities
Common Core Says...
Algebraic manipulations are governed by the properties of operations and exponents, and the conventions of algebraic notation. At times, an expression is the result of applying operations to simpler expressions. For example, p + 0.05p is the sum of the simpler expressions p and 0.05p. Viewing an expression as the result of operations on simpler expressions can sometimes clarify its underlying structure.
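As a brief worked illustration of this structural point (reading p as a price and 0.05p as a 5% surcharge is an assumption made here for concreteness, not something stated above), grouping the terms exposes the structure:

$$p + 0.05p = (1 + 0.05)\,p = 1.05\,p$$

Seen this way, the expression is a single quantity p scaled by the factor 1.05.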
Big Picture Lesson Planning for the Common Core
Week #14 Solve and Justify Literal Equations
Week #15 Solve and Graph Inequalities
All of these resources and more can be found in the livebinder below.
A.CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.*
A.CED.3 Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or non-viable options in a modeling context. For example, represent inequalities describing nutritional and cost constraints on combinations of different foods.*
A.CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. For example, rearrange Ohm's law V = IR to highlight resistance R.
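The standard's own Ohm's-law example can be worked in one line (the restriction I ≠ 0 is added here for completeness):

$$V = IR \;\Longrightarrow\; R = \frac{V}{I} \qquad (I \neq 0)$$

The same reasoning, dividing both sides by the coefficient of the quantity of interest, also gives I = V/R.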
A.REI.1 Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution. Construct a viable argument to justify a solution method.
A.REI.3 Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
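A minimal example of the kind of justified, step-by-step solution that A.REI.1 and A.REI.3 describe; the literal equation ax + b = c is chosen here only for illustration and is not taken from the guide:

$$\begin{aligned} ax + b &= c && \text{given, with } a \neq 0\\ ax &= c - b && \text{subtract } b \text{ from both sides}\\ x &= \frac{c - b}{a} && \text{divide both sides by } a \end{aligned}$$

Each step produces an equation with the same solution set as the one before it, which is what justifies the method.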
A.REI.10 Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).
A.REI.11 Explain why the x-coordinates of the points where the graphs of the equations y = f(x) and y = g(x) intersect are the solutions of the equation f(x) = g(x); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive approximations. Include cases where f(x) and/or g(x) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions.* (Emphasize linear, absolute value, and exponential functions.) Video Explanation
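A minimal sketch of the "successive approximations" idea in A.REI.11, written in Python; the particular functions f and g below are illustrative choices, not examples from the guide:

```python
def f(x):
    return 2 * x + 1      # a linear function

def g(x):
    return 2 ** x         # an exponential function

def solve_f_equals_g(lo, hi, tol=1e-6):
    """Bisection on h(x) = f(x) - g(x); assumes h changes sign on [lo, hi]."""
    def h(x):
        return f(x) - g(x)
    if h(lo) * h(hi) > 0:
        raise ValueError("f - g must change sign on the interval")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid          # the sign change (and so the solution) lies in [lo, mid]
        else:
            lo = mid          # otherwise it lies in [mid, hi]
    return (lo + hi) / 2

print(solve_f_equals_g(1.5, 3.0))   # roughly 2.66, where 2x + 1 = 2**x
```

The x returned is (approximately) the x-coordinate of an intersection point of the two graphs, which is exactly the connection the standard asks students to explain.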
A.REI.12 Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes.
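To make the half-plane idea in A.REI.12 concrete, here is a small Python sketch that tests whether a point satisfies a hypothetical system of two linear inequalities; the solution set is exactly the set of points for which every test passes, i.e. the intersection of the corresponding half-planes:

```python
def satisfies_system(x, y):
    """True exactly when (x, y) lies in the intersection of the two half-planes."""
    return (
        y <= 2 * x + 1 and   # closed half-plane on or below the line y = 2x + 1
        y > -x + 4           # open half-plane strictly above the line y = -x + 4
    )

print(satisfies_system(3, 2))   # True:  2 <= 7 and 2 > 1
print(satisfies_system(0, 0))   # False: fails the strict inequality y > -x + 4
```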
F.BF.4.a Solve an equation of the form f(x) = c for a simple function f that has an inverse and write an expression for the inverse. For example, f(x) = 2x³ or f(x) = (x + 1)/(x − 1) for x ≠ 1.
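Working the standard's first example: solving y = f(x) for x gives the inverse function.

$$y = 2x^3 \;\Longrightarrow\; x^3 = \frac{y}{2} \;\Longrightarrow\; x = \sqrt[3]{\frac{y}{2}}, \qquad \text{so } f^{-1}(x) = \sqrt[3]{\frac{x}{2}}.$$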
Algebra 1 Units

Source: http://algebra1teachers.com/Unit%205.html
As physicians use scanning devices to view the hidden structures and activities of the brain, astronomers can now use distant supernovae and high-resolution cosmic background radiation maps to scan the structures and properties of “branes.” This new capability allows them to examine the most remarkably fine-tuned feature of the universe: space energy density, or the self-stretching property of space.1
Branes are space surfaces having any number of dimensions. The most familiar brane is the 3-brane (three-dimensional surface) on which the galaxies, stars, and solar system reside. (This 3-brane is the surface of the four-dimensional, space-plus-time universe.) Since the 1998 discovery that the cosmic expansion transitioned, roughly 7 billion years ago, from slowing down to speeding up,2 theoreticians have attempted to explain the turnaround and acceleration by some means other than the currently accepted space energy density feature, or cosmological constant.
Some researchers have speculated that gravity operates on a higher brane than does the light from stars and galaxies. Among the brane models closest to matching established data for the universe are those in which the familiar 3-brane is embedded in a 5-brane, a five-dimensional surface—length, width, height, and time, plus one extra space dimension.3 According to this scenario, bodies having mass (galaxies, stars, planets, protons, and electrons) reside within the length, width, and height dimensions while gravity operates in these three plus two. With gravity thus “spread out” over five dimensions, its force would be sufficiently weakened by the expansion process to account for the turnaround and for the accelerating expansion rate without invoking a cosmological constant.
A recent Astrophysical Journal article, however, argues against this possibility. Astronomers from Portugal and England used new astronomical instruments to test the validity of the 5-brane model.4 Combining several improved measurements of the geometry of the universe, the acceleration rate of the universe’s expansion, and the date at which the deceleration turned to acceleration, these astronomers demonstrated that the mass density of the universe is at least 75% greater than that which the 5-brane model allows.5 In their words, 5-brane models are “strongly disfavored by existing cosmological data sets.” They go on to say that “currently available cosmological observations are already powerful enough to impose tight constraints on a wide range of possible models. . . . The era of precision cosmology has indeed started.”6
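A rough arithmetic check of the 75% figure, using the densities quoted in note 5 (taking 35% as a representative value within the quoted 30–38% range, so this is an illustration rather than the authors' exact calculation):

$$\frac{\Omega_m^{\text{observed}}}{\Omega_m^{\text{5-brane max}}} - 1 \;\approx\; \frac{0.35}{0.20} - 1 \;=\; 0.75 \;=\; 75\%.$$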
This arrival of “precision cosmology” means that ongoing research holds great hope for refining human understanding of the origin and development of the cosmos. Both Christians and skeptics will have more data with which to test biblical claims that God “stretches out the heavens like a canopy, and spreads them out like a tent to live in” (Isaiah 40:22).7 Scientific confirmation becomes all the more significant when one recognizes that for 3,400 years Bible authors stood alone in describing this characteristic of the universe.
- Lawrence M. Krauss, “The End of the Age Problem and the Case for a Cosmological Constant Revisited,” Astrophysical Journal 501 (1998): 461; Hugh Ross, “Flat-Out Confirmed,” Facts for Faith 2 (Q2 2000), 26-31.
- P. P. Avelino, J. P. M. DeCarvalho, and C. J. A. P. Martins, “Supernova Constraints on Spatial Variations of the Vacuum Energy Density,” Physical Review D 64 (2001): 063505; A. G. Reiss et al., astro-ph/0104455 preprint (2001); M. S. Turner and A. G. Reiss, astro-ph/0012011 preprint (2001).
- G. Dvali, G. Gabadadze, and M. Porrati, “4D Gravity on a Brane in 5D Minkowski Space,” Physical Letters B 502 (2001): 199-208; C. Deffayet, G. Dvali, and G. Gabadadze, “Accelerated Universe from Gravity Linking to Extra Dimensions,” Physical Review D 65 (2002), 044023.
- P. P. Avelino and C. J. A. P. Martins, “A Supernova Brane Scan,” Astrophysical Journal 565 (2002): 661-67.
- Mass density comprises 30% to 38% of the total density of the universe; space energy density comprises the rest. See Avelino and Martins, p. 665, and Mikel Susperregi, “Overconstrained Dynamics in the Galaxy Redshift Surveys,” Astrophysical Journal 563 (2001): 473-82. The 5-brane models discussed here can tolerate a maximum cosmic mass density no greater than 20% of the total.
- Avelino and Martins, 666.
- See also Genesis 1:1 and 2:3-4; Job 9:8; Psalm 104:2; Psalm 148:5; Isaiah 42:5, 44:24; 45:12 and 18, 48:13, and 51:13; Jeremiah 10:12 and 51:15; Zechariah 12:1; John 1:3; Colossians 1:15-17; and Hebrews 11:3, The Holy Bible.

Source: http://www.reasons.org/articles/cosmic-brane-scans
Science Fair Project Encyclopedia
In physics, a gravitational wave consists of energy transmitted in the form of a wave through the gravitational field of space-time. Gravitational radiation is the overall result of gravitational waves in bulk. The theoretical treatment of gravitational waves is governed by general relativity. The Einstein field equations imply that any accelerated mass emits gravitational radiation, similar to how the Maxwell equations describe the electromagnetic radiation emitted by an accelerated electric charge.
According to general relativity, gravity can cause oscillations (or waves) in spacetime which can transmit energy. Roughly speaking, the strength of gravity will vary as a gravitational wave passes, much as the depth of a body of water will vary as a water wave passes. More precisely, it is the strength and direction of tidal forces (measured by the Weyl tensor) that oscillate, which should cause objects in the path of the wave to change shape (but not size) in a pulsating fashion. Similarly, gravitational waves will be emitted by physical objects with a pulsating shape, specifically objects with a changing quadrupole moment.
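For reference (this formula is not given in the article itself), the standard weak-field "quadrupole formula" quantifies how a changing quadrupole moment radiates; the emitted power is, schematically,

$$P \;=\; \frac{G}{5c^5}\,\Bigl\langle \dddot{Q}_{ij}\,\dddot{Q}^{ij} \Bigr\rangle,$$

where Q_ij is the traceless mass quadrupole moment, the dots denote time derivatives, and the angle brackets denote an average over several wave periods. Conventions vary slightly between textbooks, so this should be read as a sketch of the standard result rather than a quotation from the source.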
In 2005 it was announced that observations of the binary pulsar PSR J0737-3039 appeared to confirm predictions of general relativity with respect to energy emitted by gravitational waves, with the system's orbit observed to shrink 7 mm per day.
Sources of gravitational waves
One reason for the lack of direct detection so far is that the gravitational waves that we expect to be produced in nature are very weak, so that the signals for gravitational waves, if they exist, are buried under noise generated from other sources. Reportedly, ordinary terrestrial sources would be undetectable, despite their closeness, because of the great relative weakness of the gravitational force.
Scientists are eager to find a way to detect these gravitational waves, since they could help reveal information about the very structure of the universe. In contrast to electromagnetic radiation, it is not fully understood what difference the presence of gravitational radiation would make for the workings of the universe. A sufficiently strong sea of primordial gravitational radiation, with an energy density exceeding that of the big bang electromagnetic radiation by a few orders of magnitude, would shorten the life of the universe, violating existing data that show it is at least 13 billion years old. More promising is the hope of detecting waves emitted by sources on astronomical scales, such as:
- supernovas or gamma ray bursts;
- "chirps" from inspiraling coalescing binary stars;
- periodic signals from spherically asymmetric neutron stars or quark stars;
- stochastic gravitational wave background sources.
What is more, a detection of gravitational waves from these objects might also give information about the objects themselves. Although the idea is far-fetched, some astronomers already dream of "gravitational telescopes" to see in gravity, as opposed to light.
Gravitational wave detectors
Gravitational radiation has not been directly observed, although a number of experiments, such as LIGO, aim to do so. Scientists are eager to carry out these experiments not so much because of the expected observations, but because unexpected and surprising results are thought likely to be found. A number of teams are working on making more sensitive and selective gravitational wave detectors and analysing their results. A commonly used technique for reducing the effects of noise is coincidence detection, which filters out events that do not register on both detectors (a short sketch of this idea follows the list below). There are two common types of detectors used in these experiments:
- laser interferometers, which use long light paths, such as GEO, LIGO, TAMA, VIRGO, ACIGA and the space-based LISA;
- resonant mass gravitational wave detectors, which use large masses at very low temperatures, such as EXPLORER and NAUTILUS.
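A minimal sketch (in Python, with hypothetical event times and window) of the coincidence-detection idea mentioned above: only candidate events seen by both detectors within a short time window are kept, which suppresses noise that is local to a single detector.

```python
def coincident_events(times_a, times_b, window=0.010):
    """Return (t_a, t_b) pairs whose times differ by at most `window` seconds."""
    pairs = []
    for ta in times_a:
        for tb in times_b:
            if abs(ta - tb) <= window:
                pairs.append((ta, tb))
    return pairs

detector_a = [12.003, 47.110, 89.532]   # hypothetical candidate event times (s)
detector_b = [12.004, 55.200, 90.100]

print(coincident_events(detector_a, detector_b))   # [(12.003, 12.004)]
```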
In November 2002, a team of Italian researchers at the Istituto Nazionale di Fisica Nucleare and the University of Rome produced an analysis of their experimental results that may provide further indirect evidence of the existence of gravitational waves. Their paper, entitled "Study of the coincidences between the gravitational wave detectors EXPLORER and NAUTILUS in 2001", is based on a statistical analysis of the results from their detectors, which shows that the number of coincident detections is greatest when both detectors are pointing into the center of our Milky Way galaxy.
Energy, momentum and angular momentum
Do gravitational waves carry energy, momentum and angular momentum? In a vacuum their stress-energy tensor vanishes, but it is possible to define a noncovariant pseudo stress-energy tensor that is "conserved", and in terms of which gravitational waves do carry energy, momentum and angular momentum.
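One common way to make this precise (not spelled out in the article) is the Isaacson pseudo-tensor, obtained by averaging over several wavelengths; in the transverse-traceless gauge it is usually written, up to convention-dependent factors, as

$$t_{\mu\nu} \;=\; \frac{c^4}{32\pi G}\,\Bigl\langle \partial_\mu h^{TT}_{ij}\;\partial_\nu h_{TT}^{ij} \Bigr\rangle.$$

This should be read as a sketch of the standard expression, not as a formula taken from the source.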
Bruce Allen of the LIGO Scientific Collaboration (LSC) group at UWM is leading the development of the Einstein@Home project, developed to search data from LIGO in the US and from the GEO 600 gravitational wave observatory in Germany for signals coming from selected, extremely dense, rapidly rotating stars. Such sources are believed to be either quark stars or neutron stars; a subclass of these are already observed by conventional means and are known as pulsars, celestial bodies that emit electromagnetic waves. If some of these stars are not quite perfectly spherical, they should emit gravitational waves, which LIGO and GEO 600 may begin to detect.
Einstein@Home is a small part of the LSC scientific program. It has been set up and released as a distributed computing project, like SETI@home: it relies on computer time donated by private computer users to process data generated by LIGO's and GEO 600's search for gravitational waves.
- Discussion of gravitational radiation on the USENET physics FAQ
- Table of gravitational wave detectors
- Laser Interferometer Gravitational Wave Observatory. LIGO Laboratory, California Institute of Technology.
- Info page for "Einstein@Home," a distributed computing project processing raw data from the LIGO Laboratory at Caltech, searching for gravitational waves
- Home page for Einstein@Home project
- The Italian researchers' paper analysing data from EXPLORER and NAUTILUS
- Center for Gravitational Wave Physics. National Science Foundation [PHY 01- 14375].
- Australian International Gravitational Research Center. University of Western Australia.
- TAMA project. Developing advanced techniques for km-sized interferometer.
- Could superconductors transmute electromagnetic radiation into gravitational waves? -- Scientific American article
- Sica, R. J., A Short Primer on Gravity Waves. Department of Physics and Astronomy, The University of Western Ontario. 1999.
- Gravity waves. Physics central.
- B. Allen, et al., Observational Limit on Gravitational Waves from Binary Neutron Stars in the Galaxy. The American Physical Society, March 31, 1999.
- Gravitational Radiation. Davis Associates, Inc.
- Amos, Jonathan, Gravity wave detector all set. BBC, February 28, 2003.
- Rickyjames, Doing the (Gravity) Wave. SciScoop, December 8, 2003.
- Will, Clifford M., The Confrontation between General Relativity and Experiment. McDonnell Center for the Space Sciences, Department of Physics, Washington University, St. Louis MO.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.

Source: http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Gravitational_wave
In psychology, philosophy, and their many subsets, emotion is the generic term for subjective, conscious experience that is characterized primarily by psychophysiological expressions, biological reactions, and mental states. Emotion is often associated and considered reciprocally influential with mood, temperament, personality, disposition, and motivation, as well as influenced by hormones and neurotransmitters such as dopamine, noradrenaline, serotonin, oxytocin, cortisol and GABA. Emotion is often the driving force behind motivation, positive or negative. An alternative definition of emotion is a "positive or negative experience that is associated with a particular pattern of physiological activity."
The physiology of emotion is closely linked to arousal of the nervous system with various states and strengths of arousal relating, apparently, to particular emotions. Although those acting primarily on emotion may seem as if they are not thinking, cognition is an important aspect of emotion, particularly the interpretation of events. For example, the experience of fear usually occurs in response to a threat. The cognition of danger and subsequent arousal of the nervous system (e.g. rapid heartbeat and breathing, sweating, muscle tension) is an integral component to the subsequent interpretation and labeling of that arousal as an emotional state. Emotion is also linked to behavioral tendency.
Research on emotion has increased significantly over the past two decades, with many fields contributing, including psychology, neuroscience, medicine, history, sociology, and even computer science. The numerous theories that attempt to explain the origin, neurobiology, experience, and function of emotions have only fostered more intense research on this topic. Current research on emotion includes the development of materials that stimulate and elicit emotion; in addition, PET and fMRI scans help to study the affective processes in the brain.
Etymology, definitions, and differentiation
The word "emotion" dates back to 1579, when it was adapted from the French word émouvoir, which means "to stir up". However, the earliest precursors of the word likely dates back to the very origins of language.
Emotions have been described as discrete and consistent responses to internal or external events which have a particular significance for the organism. Emotions are brief in duration and consist of a coordinated set of responses, which may include verbal, physiological, behavioural, and neural mechanisms. Emotions have also been described as biologically given and a result of evolution because they provided good solutions to ancient and recurring problems that faced our ancestors.
- Feelings are best understood as a subjective representation of emotions, private to the individual experiencing them.
- Moods are diffuse affective states that generally last for much longer durations than emotions and are also usually less intense than emotions.
- Affect is an encompassing term, used to describe the topics of emotion, feelings, and moods together, even though it is commonly used interchangeably with emotion.
Components of emotion
In Scherer's components processing model of emotion, five crucial elements of emotion are said to exist. From the component processing perspective, emotion experience is said to require that all of these processes become coordinated and synchronized for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of the elements is slightly controversial, since some theorists make the assumption that emotion and cognition are separate but interacting systems, the component processing model provides a sequence of events that effectively describes the coordination involved during an emotional episode.
- Cognitive appraisal: provides an evaluation of events and objects
- Bodily symptoms: the physiological component of emotional experience
- Action tendencies: a motivational component for the preparation and direction of motor responses.
- Expression: facial and vocal expression almost always accompanies an emotional state to communicate reaction and intention of actions
- Feelings: the subjective experience of emotional state once it has occurred
A distinction can be made between emotional episodes and emotional dispositions. Emotional dispositions are also comparable to character traits, where someone may be said to be generally disposed to experience certain emotions. For example, an irritable person is generally disposed to feel irritation more easily or quickly than others do. Finally, some theorists place emotions within a more general category of "affective states" where affective states can also include emotion-related phenomena such as pleasure and pain, motivational states (for example, hunger or curiosity), moods, dispositions and traits.
The classification of emotions has mainly been researched from two fundamental viewpoints. The first viewpoint is that emotions are discrete and fundamentally different constructs while the second viewpoint asserts that emotions can be characterized on a dimensional basis in groupings.
Basic Emotions
For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman's most famous work revolved around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate and could not have learned associations for facial expressions through media. Another well-known study found that when participants contorted their facial muscles into distinct facial expressions (e.g. disgust), they reported subjective and physiological experiences that matched the distinct facial expressions. His research findings led him to classify six emotions as basic: anger, disgust, fear, happiness, sadness and surprise.
Robert Plutchik agreed with Ekman's biologically driven perspective but developed the "wheel of emotions", suggesting eight primary emotions grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus disgust; and surprise versus anticipation. Some basic emotions can be modified to form complex emotions. The complex emotions could arise from cultural conditioning or association combined with the basic emotions. Alternatively, similar to the way primary colors combine, primary emotions could blend to form the full spectrum of human emotional experience. For example, interpersonal anger and disgust could blend to form contempt. Relationships exist between basic emotions, resulting in positive or negative influences.
Emotions are controlled by a constellation of interacting brain systems, but the amygdala appears to play a particularly crucial role. According to LeDoux (1996), sensory inputs that can trigger fear (such as seeing a snake while walking) arrive in the thalamus and then are routed along a fast pathway directly to the amygdala and along a slow pathway that allows the cortex time to think about the situation.
Multidimensional analysis of emotions
Through the use of multidimensional scaling, psychologists can map out similar emotional experiences, which allows a visual depiction of the "emotional distance" between experiences. A further step can be taken by looking at the dimensions of the resulting map. The emotional experiences are divided into two dimensions known as valence (how negative or positive the experience was) and arousal (the extent of the reaction to stimuli). These two dimensions can be depicted on a 2D coordinate map.
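A toy sketch of such a 2D map in Python; the coordinates below are invented placeholders (this sketch places emotions directly on the valence and arousal axes rather than running multidimensional scaling itself), used only to show how experiences can be positioned and how an "emotional distance" can then be computed:

```python
# Hypothetical valence-arousal coordinates on [-1, 1] axes (illustrative only).
emotion_map = {
    #              (valence, arousal)
    "excited":     ( 0.8,  0.9),
    "content":     ( 0.7, -0.4),
    "bored":       (-0.5, -0.7),
    "afraid":      (-0.7,  0.8),
}

def emotional_distance(a, b):
    """Euclidean distance between two emotions on the 2D map."""
    (va, aa), (vb, ab) = emotion_map[a], emotion_map[b]
    return ((va - vb) ** 2 + (aa - ab) ** 2) ** 0.5

print(round(emotional_distance("excited", "afraid"), 2))  # 1.5
```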
Theories on the experience of emotions
Ancient Greece and Middle Ages
Theories about emotions stretch back at least as far as the Stoics of Ancient Greece and to Ancient China. In the latter it was believed that excess emotion caused damage to qi, which in turn damages the vital organs. The four humours theory made popular by Hippocrates contributed to the study of emotion in the same way that it did for medicine.
Western philosophy regarded emotion in varying ways. In stoic theories it was seen as a hindrance to reason and therefore a hindrance to virtue. Aristotle believed that emotions were an essential component to virtue. In the Aristotelian view all emotions (called passions) corresponded to an appetite or capacity. During the Middle Ages, the Aristotelian view was adopted and further developed by scholasticism and Thomas Aquinas in particular. There are also theories in the works of philosophers such as René Descartes, Niccolo Machiavelli, Baruch Spinoza and David Hume. In the 19th century emotions were considered adaptive and were studied more frequently from an empiricist psychiatric perspective.
Evolutionary theories
- 19th Century
Perspectives on emotions from evolutionary theory were initiated in the late 19th century with Charles Darwin's book The Expression of the Emotions in Man and Animals. Darwin argued that emotions actually served a purpose for humans, in communication and also in aiding their survival. Darwin therefore argued that emotions evolved via natural selection and therefore have universal cross-cultural counterparts. Darwin also detailed the virtues of experiencing emotions and the parallel experiences that occur in animals (see emotion in animals). This led the way for animal research on emotions and the eventual determination of the neural underpinnings of emotion.
More contemporary views along the evolutionary psychology spectrum posit that both basic emotions and social emotions evolved to motivate (social) behaviors that were adaptive in the ancestral environment. Current research suggests that emotion is an essential part of any human decision-making and planning, and the famous distinction made between reason and emotion is not as clear as it seems. Paul D. MacLean claims that emotion competes with even more instinctive responses, on one hand, and the more abstract reasoning, on the other hand. The increased potential in neuroimaging has also allowed investigation into evolutionarily ancient parts of the brain. Important neurological advances were derived from these perspectives in the 1990s by Joseph E. LeDoux and António Damásio.
Research on social emotion also focuses on the physical displays of emotion including body language of animals and humans (see affect display). For example, spite seems to work against the individual but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status.
Somatic theories
Somatic theories of emotion claim that bodily responses, rather than cognitive interpretations, are essential to emotions. The first modern version of such theories came from William James in the 1880s. The theory lost favor in the 20th century, but has regained popularity more recently due largely to theorists such as John Cacioppo, António Damásio, Joseph E. LeDoux and Robert Zajonc who are able to appeal to neurological evidence.
James–Lange theory
In his 1884 article William James argued that feelings and emotions were secondary to physiological phenomena. In his theory, James proposed that the perception of what he called an "exciting fact" led directly to a physiological response, known as "emotion." To account for different types of emotional experiences, James proposed that stimuli trigger activity in the autonomic nervous system, which in turn produces an emotional experience in the brain. The Danish psychologist Carl Lange also proposed a similar theory at around the same time, and therefore this theory became known as the James–Lange theory. As James wrote, "the perception of bodily changes, as they occur, is the emotion." James further claims that "we feel sad because we cry, angry because we strike, afraid because we tremble, and neither we cry, strike, nor tremble because we are sorry, angry, or fearful, as the case may be."
An example of this theory in action would be as follows: An emotion-evoking stimulus (snake) triggers a pattern of physiological response (increased heart rate, faster breathing, etc.), which is interpreted as a particular emotion (fear). This theory is supported by experiments in which manipulating the bodily state induces a desired emotional state. Some people may believe that emotions give rise to emotion-specific actions: e.g. "I'm crying because I'm sad," or "I ran away because I was scared." The issue with the James–Lange theory is that of causation (bodily states causing emotions and being a priori), not that of the bodily influences on emotional experience (which can be argued is still quite prevalent today in biofeedback studies and embodiment theory).
Although the theory has been mostly abandoned in its original form, Tim Dalgleish argues that most contemporary neuroscientists have embraced the components of the James–Lange theory of emotions.
"The James–Lange theory has remained influential. Its main contribution is the emphasis it places on the embodiment of emotions, especially the argument that changes in the bodily concomitants of emotions can alter their experienced intensity. Most contemporary neuroscientists would endorse a modified James–Lange view in which bodily feedback modulates the experience of emotion" (p. 583).
Cannon–Bard theory
Walter Bradford Cannon agreed that physiological responses played a crucial role in emotions, but did not believe that physiological responses alone could explain subjective emotional experiences. He argued that physiological responses were too slow and often imperceptible and this could not account for the relatively rapid and intense subjective awareness of emotion. He also believed that the richness, variety, and temporal course of emotional experiences could not stem from physiological reactions, that reflected fairly undifferentiated fight or flight responses. An example of this theory in action is as follows: An emotion-evoking event (snake) triggers simultaneously both a physiological response and a conscious experience of an emotion.
Philip Bard contributed to the theory with his work on animals. Bard found that sensory, motor, and physiological information all had to pass through the diencephalon (particularly the thalamus) before being subjected to any further processing. Therefore, Cannon also argued that it was not anatomically possible for sensory events to trigger a physiological response prior to triggering conscious awareness, and that emotional stimuli had to trigger both physiological and experiential aspects of emotion simultaneously.
Two-factor theory
Stanley Schachter formulated his theory on the earlier work of a Spanish physician, Gregorio Marañón, who injected patients with epinephrine and subsequently asked them how they felt. Interestingly, Marañón found that most of these patients felt something, but in the absence of an actual emotion-evoking stimulus, the patients were unable to interpret their physiological arousal as an experienced emotion. Schachter did agree that physiological reactions played a big role in emotions. He suggested that physiological reactions contributed to emotional experience by facilitating a focused cognitive appraisal of a given physiologically arousing event, and that this appraisal was what defined the subjective emotional experience. Emotions were thus a result of a two-stage process: general physiological arousal, and experience of emotion. For example, the sight of a bear in the kitchen (an evoking stimulus) produces physiological arousal such as a pounding heart. The brain then quickly scans the area to explain the pounding, and notices the bear. Consequently, the brain interprets the pounding heart as being the result of fearing the bear. With his student, Jerome Singer, Schachter demonstrated that subjects can have different emotional reactions despite being placed into the same physiological state with an injection of epinephrine. Subjects were observed to express either anger or amusement depending on whether another person in the situation (a confederate) displayed that emotion. Hence, the combination of the appraisal of the situation (cognitive) and the participants' reception of adrenaline or a placebo together determined the response. This experiment has been criticized in Jesse Prinz's (2004) Gut Reactions.
Cognitive theories
With the two-factor theory now incorporating cognition, several theories began to argue that cognitive activity in the form of judgments, evaluations, or thoughts was entirely necessary for an emotion to occur. One of the main proponents of this view was Richard Lazarus who argued that emotions must have some cognitive intentionality. The cognitive activity involved in the interpretation of an emotional context may be conscious or unconscious and may or may not take the form of conceptual processing.
Lazarus' theory is very influential; emotion is a disturbance that occurs in the following order:
- Cognitive appraisal—The individual assesses the event cognitively, which cues the emotion.
- Physiological changes—The cognitive reaction starts biological changes such as increased heart rate or pituitary adrenal response.
- Action—The individual feels the emotion and chooses how to react.
For example: Jenny sees a snake.
- Jenny cognitively assesses the snake in her presence. Cognition allows her to understand it as a danger.
- Her brain activates the adrenal glands, which pump adrenaline into her bloodstream, resulting in an increased heartbeat.
- Jenny screams and runs away.
Lazarus stressed that the quality and intensity of emotions are controlled through cognitive processes. These processes underline coping strategies that form the emotional reaction by altering the relationship between the person and the environment.
George Mandler provided an extensive theoretical and empirical discussion of emotion as influenced by cognition, consciousness, and the autonomic nervous system in two books (Mind and Emotion, 1975, and Mind and Body: Psychology of Emotion and Stress, 1984)
There are some theories on emotions arguing that cognitive activity in the form of judgements, evaluations, or thoughts is necessary in order for an emotion to occur. A prominent philosophical exponent is Robert C. Solomon (for example, The Passions, Emotions and the Meaning of Life, 1993). Solomon claims that emotions are judgements. He has put forward a more nuanced view which responds to what he has called the ‘standard objection’ to cognitivism, the idea that a judgement that something is fearsome can occur with or without emotion, so judgement cannot be identified with emotion. The theory proposed by Nico Frijda where appraisal leads to action tendencies is another example.
It has also been suggested that emotions (affect heuristics, feelings and gut-feeling reactions) are often used as shortcuts to process information and influence behavior. The affect infusion model (AIM) is a theoretical model developed by Joseph Forgas in the early 1990s that attempts to explain how emotion and mood interact with one's ability to process information.
- Perceptual theory
Theories dealing with perception use one or multiple perceptions in order to find an emotion (Goldie, 2007). A recent hybrid of the somatic and cognitive theories of emotion is the perceptual theory. This theory is neo-Jamesian in arguing that bodily responses are central to emotions, yet it emphasizes the meaningfulness of emotions or the idea that emotions are about something, as is recognized by cognitive theories. The novel claim of this theory is that conceptually-based cognition is unnecessary for such meaning. Rather, the bodily changes themselves perceive the meaningful content of the emotion because of being causally triggered by certain situations. In this respect, emotions are held to be analogous to faculties such as vision or touch, which provide information about the relation between the subject and the world in various ways. A sophisticated defense of this view is found in philosopher Jesse Prinz's book Gut Reactions and psychologist James Laird's book Feelings.
- Affective events theory
This is a communication-based theory developed by Howard M. Weiss and Russell Cropanzano (1996), that looks at the causes, structures, and consequences of emotional experience (especially in work contexts). This theory suggests that emotions are influenced and caused by events which in turn influence attitudes and behaviors. This theoretical frame also emphasizes time in that human beings experience what they call emotion episodes— a "series of emotional states extended over time and organized around an underlying theme." This theory has been utilized by numerous researchers to better understand emotion from a communicative lens, and was reviewed further by Howard M. Weiss and Daniel J. Beal in their article, "Reflections on Affective Events Theory" published in Research on Emotion in Organizations in 2005.
Situated perspective on emotion
A situated perspective on emotion, developed by Paul E. Griffiths and Andrea Scarantino, emphasizes the importance of external factors in the development and communication of emotion, drawing upon the situationism approach in psychology. This theory is markedly different from both cognitivist and neo-Jamesian theories of emotion, both of which see emotion as a purely internal process, with the environment only acting as a stimulus to the emotion. In contrast, a situationist perspective on emotion views emotion as the product of an organism investigating its environment, and observing the responses of other organisms. Emotion stimulates the evolution of social relationships, acting as a signal to mediate the behavior of other organisms. In some contexts, the expression of emotion (both voluntary and involuntary) could be seen as strategic moves in the transactions between different organisms. The situated perspective on emotion states that conceptual thought is not an inherent part of emotion, since emotion is an action-oriented form of skillful engagement with the world. Griffiths and Scarantino suggested that this perspective on emotion could be helpful in understanding phobias, as well as the emotions of infants and animals.
The neurocircuitry of emotion
Based on discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain. If distinguished from reactive responses of reptiles, emotions would then be mammalian elaborations of general vertebrate arousal patterns, in which neurochemicals (for example, dopamine, noradrenaline, and serotonin) step-up or step-down the brain's activity level, as visible in body movements, gestures, and postures.
For example, the emotion of love is proposed to be the expression of paleocircuits of the mammalian brain (specifically, modules of the cingulate gyrus) which facilitate the care, feeding, and grooming of offspring. Paleocircuits are neural platforms for bodily expression configured before the advent of cortical circuits for speech. They consist of pre-configured pathways or networks of nerve cells in the forebrain, brain stem and spinal cord.
The motor centers of reptiles react to sensory cues of vision, sound, touch, chemical, gravity, and motion with pre-set body movements and programmed postures. With the arrival of night-active mammals, smell replaced vision as the dominant sense, and a different way of responding arose from the olfactory sense, which is proposed to have developed into mammalian emotion and emotional memory. The mammalian brain invested heavily in olfaction to succeed at night as reptiles slept—one explanation for why olfactory lobes in mammalian brains are proportionally larger than in the reptiles. These odor pathways gradually formed the neural blueprint for what was later to become our limbic brain.
Emotions are thought to be related to certain activities in brain areas that direct our attention, motivate our behavior, and determine the significance of what is going on around us. Pioneering work by Broca (1878), Papez (1937), and MacLean (1952) suggested that emotion is related to a group of structures in the center of the brain called the limbic system, which includes the hypothalamus, cingulate cortex, hippocampi, and other structures. More recent research has shown that some of these limbic structures are not as directly related to emotion as others are, while some non-limbic structures have been found to be of greater emotional relevance.
In 2011, Lövheim proposed a direct relation between specific combinations of the levels of the signal substances dopamine, noradrenaline and serotonin and eight basic emotions. A model was presented in which the signal substances form the axes of a coordinate system, and the eight basic emotions according to Silvan Tomkins are placed in the eight corners. Anger, for example, is produced according to the model by the combination of low serotonin, high dopamine and high noradrenaline.
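A minimal sketch of this cube structure in Python: the three signal-substance levels are treated as binary axes, so each corner is a (low/high, low/high, low/high) triple. Only the anger corner is taken from the text above; the remaining corners are deliberately left as placeholders rather than guessed.

```python
LOW, HIGH = "low", "high"

lovheim_corners = {
    # (serotonin, dopamine, noradrenaline): emotion
    (LOW, HIGH, HIGH): "anger",   # the combination stated in the text
    # ... the other seven corners map to the remaining basic emotions
}

def emotion_at(serotonin, dopamine, noradrenaline):
    return lovheim_corners.get((serotonin, dopamine, noradrenaline),
                               "unmapped in this sketch")

print(emotion_at(LOW, HIGH, HIGH))   # anger
```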
Prefrontal cortex
There is ample evidence that the left prefrontal cortex is activated by stimuli that cause positive approach. If attractive stimuli can selectively activate a region of the brain, then logically the converse should hold, that selective activation of that region of the brain should cause a stimulus to be judged more positively. This was demonstrated for moderately attractive visual stimuli and replicated and extended to include negative stimuli.
Two neurobiological models of emotion in the prefrontal cortex made opposing predictions. The Valence Model predicted that anger, a negative emotion, would activate the right prefrontal cortex. The Direction Model predicted that anger, an approach emotion, would activate the left prefrontal cortex. The second model was supported.
This still left open the question of whether the opposite of approach in the prefrontal cortex is better described as moving away (Direction Model), as unmoving but with strength and resistance (Movement Model), or as unmoving with passive yielding (Action Tendency Model). Support for the Action Tendency Model (passivity related to right prefrontal activity) comes from research on shyness and research on behavioral inhibition. Research that tested the competing hypotheses generated by all four models also supported the Action Tendency Model.
Homeostatic/primordial emotion
Another neurological approach distinguishes two classes of emotion: "classical" emotions such as love, anger and fear that are evoked by environmental stimuli via distance receptors in the eyes, nose and ears; and "homeostatic" (or "primordial") emotions – imperious (attention-demanding) feelings such as pain, hunger and fatigue, evoked by internal body states communicated to the central nervous system by interoceptors, that motivate behavior aimed at maintaining the body's internal milieu at its ideal state.
Derek Denton defines the latter as "the subjective element of the instincts, which are the genetically programmed behaviour patterns which contrive homeostasis. They include thirst, hunger for air, hunger for food, pain and hunger for specific minerals etc. There are two constituents of a primordial emotion--the specific sensation which when severe may be imperious, and the compelling intention for gratification by a consummatory act."
Disciplinary approaches
Many different disciplines have produced work on the emotions. Human sciences study the role of emotions in mental processes, disorders, and neural mechanisms. In psychiatry, emotions are examined as part of the discipline's study and treatment of mental disorders in humans. Nursing studies emotions as part of its approach to the provision of holistic health care to humans. Psychology examines emotions from a scientific perspective by treating them as mental processes and behavior, and explores the underlying physiological and neurological processes. In neuroscience sub-fields such as social neuroscience and affective neuroscience, scientists study the neural mechanisms of emotion by combining neuroscience with the psychological study of personality, emotion, and mood. In linguistics, the expression of emotion may change the meaning of sounds. In education, the role of emotions in relation to learning is examined.
Social sciences often examine emotion for the role that it plays in human culture and social interactions. In sociology, emotions are examined for the role they play in human society, social patterns and interactions, and culture. In anthropology, the study of humanity, scholars use ethnography to undertake contextual analyses and cross-cultural comparisons of a range of human activities. Some anthropology studies examine the role of emotions in human activities. In the field of communication sciences, critical organizational scholars have examined the role of emotions in organizations, from the perspectives of managers, employees, and even customers. A focus on emotions in organizations can be credited to Arlie Russell Hochschild's concept of emotional labor. The University of Queensland hosts EmoNet, an e-mail distribution list representing a network of academics that facilitates scholarly discussion of all matters relating to the study of emotion in organizational settings. The list was established in January 1997 and has over 700 members from across the globe.
In economics, the social science that studies the production, distribution, and consumption of goods and services, emotions are analyzed in some sub-fields of microeconomics, in order to assess the role of emotions on purchase decision-making and risk perception. In criminology, a social science approach to the study of crime, scholars often draw on behavioral sciences, sociology, and psychology; emotions are examined in criminology issues such as anomie theory and studies of "toughness," aggressive behavior, and hooliganism. In law, which underpins civil obedience, politics, economics and society, evidence about people's emotions is often raised in tort law claims for compensation and in criminal law prosecutions against alleged lawbreakers (as evidence of the defendant's state of mind during trials, sentencing, and parole hearings). In political science, emotions are examined in a number of sub-fields, such as the analysis of voter decision-making.
In philosophy, emotions are studied in sub-fields such as ethics, the philosophy of art (for example, sensory–emotional values, and matters of taste and sentimentality), and the philosophy of music (see also Music and emotion). In history, scholars examine documents and other sources to interpret and analyze past activities; speculation on the emotional state of the authors of historical documents is one of the tools of interpretation. In literature and film-making, the expression of emotion is the cornerstone of genres such as drama, melodrama, and romance. In communication studies, scholars study the role that emotion plays in the dissemination of ideas and messages. Emotion is also studied in non-human animals in ethology, a branch of zoology which focuses on the scientific study of animal behavior. Ethology is a combination of laboratory and field science, with strong ties to ecology and evolution. Ethologists often study one type of behavior (for example, aggression) in a number of unrelated animals.
The history of emotions has become an increasingly popular topic recently, with some scholars arguing that it is an essential category of analysis, not unlike class, race, or gender. Historians, like other social scientists, assume that emotions, feelings and their expressions are regulated in different ways by both different cultures and different historical times, and the constructivist school of history even claims that some sentiments and meta-emotions, for example Schadenfreude, are learnt and not only regulated by culture. Historians of emotion trace and analyse the changing norms and rules of feeling, while examining emotional regimes, codes, and lexicons from social, cultural or political history perspectives. Others focus on the history of medicine, science or psychology. What somebody can and may feel (and show) in a given situation, towards certain people or things, depends on social norms and rules. It is thus historically variable and open to change. Several research centers have sprung up in different countries in the past few years in Germany, England, Spain, Sweden and Australia.
Further, research in historical trauma suggests that some traumatic emotions can be passed on from parents to offspring to the second and even third generation, presented as examples of transgenerational trauma.
We try to regulate our emotions to fit in with the norms of the situation, based on many (sometimes conflicting) demands and expectations which originate from various entities. The emotion of anger is in many cultures discouraged in girls and women, while fear is discouraged in boys and men. Expectations attached to social roles, such as "acting as man" and not as a woman, and the accompanying "feeling rules" contribute to the differences in expression of certain emotions. Some cultures encourage or discourage happiness, sadness, or jealousy, and the free expression of the emotion of disgust is considered socially unacceptable in most cultures. Some social institutions are seen as based on certain emotion, such as love in the case of contemporary institution of marriage. In advertising, such as health campaigns and political messages, emotional appeals are commonly found. Recent examples include no-smoking health campaigns and political campaign advertising emphasizing the fear of terrorism.
Psychotherapy and regulation of emotion
Emotion regulation refers to the cognitive and behavioral strategies people use to influence their own emotional experience. For example, one behavioral strategy is avoiding a situation in order to avoid unwanted emotions (trying not to think about the situation, doing distracting activities, etc.). Depending on the particular school's general emphasis on either cognitive components of emotion, physical energy discharging, or on symbolic movement and facial expression components of emotion, different schools of psychotherapy approach the regulation of emotion differently. Cognitively oriented schools approach them via their cognitive components, such as rational emotive behavior therapy. Yet others approach emotions via symbolic movement and facial expression components (like in contemporary Gestalt therapy).
Computer science
In the 2000s, research in computer science, engineering, psychology and neuroscience has been aimed at developing devices that recognize human affect display and model emotions. In computer science, affective computing is a branch of the study and development of artificial intelligence that deals with the design of systems and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. Detecting emotional information begins with passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions. Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. The detection and processing of facial expression or body gestures is achieved through detectors and sensors.
Notable theorists
In the late 19th century, the most influential theorists were William James (1842–1910) and Carl Lange (1834–1900). James was an American psychologist and philosopher who wrote about educational psychology, psychology of religious experience/mysticism, and the philosophy of pragmatism. Lange was a Danish physician and psychologist. Working independently, they developed the James–Lange theory, a hypothesis on the origin and nature of emotions. The theory states that within human beings, as a response to experiences in the world, the autonomic nervous system creates physiological events such as muscular tension, a rise in heart rate, perspiration, and dryness of the mouth. Emotions, then, are feelings which come about as a result of these physiological changes, rather than being their cause.
Silvan Tomkins (1911–1991) developed affect theory and script theory. Affect theory introduced the concept of basic emotions and was based on the idea that emotion, which he called the affect system, was the dominant motivating force in human life.
Some of the most influential theorists on emotion from the 20th century have died in the last decade. They include Magda B. Arnold (1903–2002), an American psychologist who developed the appraisal theory of emotions; Richard Lazarus (1922–2002), an American psychologist who specialized in emotion and stress, especially in relation to cognition; Herbert A. Simon (1916–2001), who incorporated emotions into decision making and artificial intelligence; Robert Plutchik (1928–2006), an American psychologist who developed a psychoevolutionary theory of emotion; Robert Zajonc (1923–2008), a Polish–American social psychologist who specialized in social and cognitive processes such as social facilitation; Robert C. Solomon (1942–2007), an American philosopher who contributed to the philosophy of emotions with books such as What Is An Emotion?: Classic and Contemporary Readings (Oxford, 2003); and Peter Goldie (1946–2011), a British philosopher who specialized in ethics, aesthetics, emotion, mood, and character.
Influential theorists who are still active include the following psychologists, neurologists, and philosophers:
- Lisa Feldman Barrett – Social philosopher and psychologist specializing in affective science and human emotion.
- John Cacioppo – from the University of Chicago, founding father with Gary Berntson of social neuroscience.
- António Damásio (born 1944) – Portuguese behavioral neurologist and neuroscientist who works in the US
- Richard Davidson (born 1951) – American psychologist and neuroscientist; pioneer in affective neuroscience.
- Paul Ekman (born 1934) – Psychologist specializing in the study of emotions and their relation to facial expressions
- Barbara Fredrickson – Social psychologist who specializes in emotions and positive psychology.
- Nico Frijda (born 1927) – Dutch psychologist who specializes in human emotions, especially facial expressions
- Arlie Russell Hochschild (born 1940) – American sociologist whose central contribution was in forging a link between the subcutaneous flow of emotion in social life and the larger trends set loose by modern capitalism within organizations.
- Joseph E. LeDoux (born 1949) – American neuroscientist who studies the biological underpinnings of memory and emotion, especially the mechanisms of fear
- George Mandler (born 1924) - American psychologist who wrote influential books on cognition and emotion
- Jaak Panksepp (born 1943) – Estonian-born American psychologist, psychobiologist and neuroscientist; pioneer in affective neuroscience.
- Jesse Prinz – American philosopher who specializes in emotion, moral psychology, aesthetics and consciousness
- Klaus Scherer (born 1943) – Swiss psychologist and director of the Swiss Center for Affective Sciences in Geneva; he specializes in the psychology of emotion
- Ronald de Sousa (born 1940) – English–Canadian philosopher who specializes in the philosophy of emotions, philosophy of mind and philosophy of biology.
Morality and emotions
The complexity of emotions and their role in mental life is reflected in the unsettled place they have held in the history of ethics. Often they have been regarded as a dangerous threat to morality and rationality; in the romantic tradition, on the contrary, passions have been placed at the center both of human individuality and of the moral life. This ambivalence is reflected in the close connections between the vocabulary of emotions and that of vices and virtues: envy, spite, jealousy, wrath, and pride are some names of emotions that also refer to common vices. Not coincidentally, some key virtues—love, compassion, benevolence, and sympathy—are also names of emotions. (On the other hand, prudence, fortitude and temperance consist largely in the capacity to resist the motivational power of emotions.)
The view that emotions are irrational was eloquently defended by the Epicureans and Stoics. For this reason, these Hellenistic schools pose a particularly interesting challenge for the rest of the Western tradition. The Stoics adapted and made their own the Socratic hypothesis that virtue is nothing else than knowledge, adding the idea that emotions are essentially irrational beliefs. All vice and all suffering is then irrational, and the good life requires the rooting out of all desires and attachments. (As for the third of the major Hellenistic schools, the Skeptics, their view was that it is beliefs as such that were responsible for pain. Hence they recommend the repudiation of opinions of any sort.) All three schools stressed the overarching value of “ataraxia”, the absence of disturbance in the soul. Philosophy can then be viewed as therapy, the function of which is to purge emotions from the soul (Nussbaum 1994). In support of this, the Stoics advanced the plausible claim that it is psychologically impossible to keep only nice emotions and give up the nasty ones. For all attachment and all desire, however worthy their objects might seem, entail the capacity for wrenching and destructive negative emotions. Erotic love can bring with it the murderous jealousy of a Medea, and even a commitment to the idea of justice may foster a capacity for destructive anger which is nothing but “furor brevis”— temporary insanity, in Seneca's arresting phrase. Moreover, the usual objects of our attachment are clearly unworthy of a free human being, since they diminish rather than enhance the autonomy of those that endure them.
The Hellenistic philosophers' observations about nasty emotions are not wholly compelling. Surely it is possible to see at least some emotions as having a positive contribution to make to our moral lives, and indeed we have seen that the verdict of cognitive science is that a capacity for normal emotion appears to be a sine qua non for the rational and moral conduct of life. Outside of this intimate but still somewhat mysterious link between the neurological capacity for emotion and rationality, the exact significance of emotions to the moral life will again depend on one's theory of the emotions. Inasmuch as emotions are partly constituted by desires, as some cognitivist theorists maintain, they will, as David Hume contended, help to motivate decent behavior and cement social life. If emotions are perceptions, and can be more or less epistemically adequate to their objects, then emotions may have a further contribution to make to the moral life, depending on what sort of adequacy and what sort of objects are involved. Max Scheler (1954) was the first to suggest that emotions are in effect perceptions of “tertiary qualities” that supervene in the (human) world on facts about social relations, pleasure and pain, and natural psychological facts, a suggestion recently elaborated by Tappolet (2000).
An important amendment to that view, voiced by D'Arms and Jacobson (2000a) is that emotions may have intrinsic criteria of appropriateness that diverge from, and indeed may conflict with, ethical norms. Appropriate emotions are not necessarily moral. Despite that, some emotions, specifically guilt, resentment, shame and anger, may have a special role in the establishment of a range of “response-dependent” values and norms that lie at the heart of the moral life. (Gibbard 1990, D'Arms and Jacobson 1993). Mulligan (1998) advances a related view: though not direct perceptions of value, emotions can be said to justify axiological judgments. Emotions themselves are justified by perceptions and beliefs, and are said to be appropriate if and only if the axiological judgments they support are correct. If any of those variant views is right, then emotions have a crucial role to play in ethics in revealing to us something like moral facts. A consequence of this view is that art and literature, in educating our emotions, will have a substantial role in our moral development (Nussbaum 2001). On the other hand, there remains something “natural” about the emotions concerned, so that moral emotions are sometimes precisely those that resist the principles inculcated by so-called moral education. Hence the view that emotions apprehend real moral properties can explain our approval of those, like Huckleberry Finn when he ignored his “duty” to turn in Jim the slave, whose emotions drive them to act against their own “rational” conscience (Bennett 1974; McIntyre 1990).
See also
- Affect measures
- Affective Computing
- Affective forecasting
- Affective neuroscience
- Affective science
- Emotion classification
- Emotion in animals
- Emotions and culture
- Emotion and memory
- Emotional expression
- Emotions in virtual communication
- Fuzzy-trace theory
- Group emotion
- List of emotions
- Sociology of emotions
- Social emotion
- Social neuroscience
- Social sharing of emotions
- Somatic markers hypothesis
- Affective science#Measuring Emotions
- Gaulin, Steven J. C. and Donald H. McBurney. Evolutionary Psychology. Prentice Hall. 2003. ISBN 978-0-13-111529-3, Chapter 6, p 121-142.
- Schacter, Daniel L. (2011). Psychology (2nd ed.). New York, NY: Worth Publishers. p. 310. ISBN 978-1-4292-3719-2.
- Cacioppo, J.T & Gardner, W.L (1999). Emotion. "Annual Review of Psychology", 191.
- Merriam-Webster (2004). The Merriam-Webster dictionary (11th ed.). Springfield, MA: Author.
- Fox 2008, pp. 16–17.
- Ekman, Paul (1992). "An argument for basic emotions". Cognition & Emotion 6: 169–200.
- Scherer, K. R. (2005). "What are emotions? And how can they be measured?". Social Science Information 44: 693–727.
- Schwarz, N. H. (1990). Feelings as information: Informational and motivational functions of affective states. Handbook of motivation and cognition: Foundations of social behavior, 2, 527-561.
- Handel, Steven. "Classification of Emotions". Retrieved 30 April 2012.
- Plutchik, R. (2002). Nature of emotions. American Scientist, 89, 349.
- Weiten, W. & McCann, D. (2007). Psychology: Themes and Variations (2nd Canadian ed.). Nelson Education Ltd / Thomson Wadsworth. ISBN 978-0-17-647273-3
- Schacter, Daniel L. (2011). Psychology (2nd ed.). New York, NY: Worth Publishers. ISBN 1-4292-3719-8.
- Suchy, Yana (2011). Clinical neuropsychology of emotion. New York, NY: Guilford.
- Aristotle. Nicomachean Ethics. Book 2. Chapter 6.
- Aquinas, Thomas. Summa Theologica. Q.59, Art.2.
- See for instance Antonio Damasio (2005) Looking for Spinoza.
- Darwin, Charles (1872). The Expression of Emotions in Man and Animals. Note: This book was originally published in 1872, but has been reprinted many times thereafter by different publishers
- Wright, Robert. Moral animal.
- Cacioppo, J. T. (1998). Somatic responses to psychological stress: The reactivity hypothesis.Advances in psychological science, Vol. 2, pp. 87-114. East Sussex, United Kingdom: Psychology Press
- Aziz-Zadeh L, Damasio A. (2008) Embodied semantics for actions: findings from functional brain imaging. J Physiol Paris. Jan-May;102(1-3):35-9
- LeDoux J.E. (1996) The Emotional Brain. New York: Simon & Schuster.
- McIntosh, D. N., Zajonc, R. B., Vig, P. S., & Emerick, S. W. (1997). Facial movement, breathing, temperature, and affect: Implications of the vascular theory of emotional efference. Cognition & Emotion, 11(2), 171-195.
- James, William. 1884. "What Is an Emotion?" Mind. 9, no. 34: 188-205.
- Laird, James, Feelings: the Perception of Self, Oxford University Press
- Reisenzein, R. (1995). James and the physical basis of emotion: A comment on ellsworth. Psychological review, 102(4), 757-761. doi: 0033-295X
- Dalgleish, T. (2004). The emotional brain. Nature: Perspectives, 5, 582–89.
- Cannon, Walter B. (1929). "Organization for Physiological Homeostasis". Physiological Review 9 (3): 399–421.
- Cannon, Walter B. (1927). "The James-Lange theory of emotion: A critical examination and an alternative theory.". The American Journal of Psychology 39: 106–124.
- Daniel L. Schacter, Daniel T. Gilbert, Daniel M. Wegner (2011). Psychology. Worth Publishers.
- see the Heuristic–Systematic Model, or HSM, (Chaiken, Liberman, & Eagly, 1989) under attitude change. Also see the index entry for "Emotion" in "Beyond Rationality: The Search for Wisdom in a Troubled Time" by Kenneth R. Hammond and in "Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets" by Nassim Nicholas Taleb.
- Griffiths, Paul Edmund and Scarantino, Andrea (2005) Emotions in the wild: The situated perspective on emotion.
- Lövheim H. A new three-dimensional model for emotions and monoamine neurotransmitters. Med Hypotheses (2011), Epub ahead of print. doi:10.1016/j.mehy.2011.11.016 PMID 22153577
- Kringelbach, M.L.; O'Doherty, J.O.; Rolls, E.T.; & Andrews, C. (2003). Activation of the human orbitofrontal cortex to a liquid food stimulus is correlated with its subjective pleasantness. Cerebral Cortex, 13, 1064–1071.
- Drake, R.A. (1987). Effects of gaze manipulation on aesthetic judgments: Hemisphere priming of affect. Acta Psychologica, 65, 91–99.
- Merckelbach, H.; & van Oppen, P. (1989). Effects of gaze manipulation on subjective evaluation of neutral and phobia-relevant stimuli: A comment on Drake's (1987) 'Effects of gaze manipulation on aesthetic judgments: Hemisphere priming of affect.' Acta Psychologica, 70, 147–151.
- Harmon-Jones, E.; Vaughn-Scott, K.; Mohr, S.; Sigelman, J.; & Harmon-Jones, C. (2004). The effect of manipulated sympathy and anger on left and right frontal cortical activity. Emotion, 4, 95–101.
- Schmidt, L.A. (1999). Frontal brain electrical activity in shyness and sociability. Psychological Science, 10, 316–320.
- Garavan, H.; Ross, T.J.; & Stein, E.A. (1999). Right hemispheric dominance of inhibitory control: An event-related functional MRI study. Proceedings of the National Academy of Sciences, 96, 8301–8306.
- Drake, R.A.; & Myers, L.R. (2006). Visual attention, emotion, and action tendency: Feeling active or passive. Cognition and Emotion, 20, 608–622.
- Wacker, J.; Chavanon, M.-L.; Leue, A.; & Stemmler, G. (2008). Is running away right? The behavioral activation–behavioral inhibition model of anterior asymmetry. Emotion, 8, 232–249.
- Denton 2006, p. 10.
- Craig, A.D. (Bud) (2003). "Interoception: The sense of the physiological condition of the body". Current Opinion in Neurobiology 13 (4): 500–505. doi:10.1016/S0959-4388(03)00090-4. PMID 12965300.
- Derek A. Denton (8 June 2006). The primordial emotions: the dawning of consciousness. Oxford University Press. p. 7. ISBN 978-0-19-920314-7.
- Craig, A.D. (Bud) (2008). "Interoception and emotion: A neuroanatomical perspective". In Lewis, M.; Haviland-Jones, J.M.; Feldman Barrett, L. Handbook of Emotion (3 ed.). New York: The Guildford Press. pp. 272–288. ISBN 978-1-59385-650-2. Retrieved 6 September 2009.
- Denton DA, McKinley MJ, Farrell M, Egan GF (June 2009). "The role of primordial emotions in the evolutionary origin of consciousness". Conscious Cogn 18 (2): 500–14. doi:10.1016/j.concog.2008.06.009. PMID 18701321.
- Schacter, Daniel. "Psychology". Worth Publishers. 2011. p.316
- Schacter, Daniel. "Psychology". Worth Publishers. 2011. p.340
- Freitas-Magalhães, A., & Castro, E. (2009). Facial Expression: The effect of the smile in the Treatment of Depression. Empirical Study with Portuguese Subjects. In A. Freitas-Magalhães (Ed.), Emotional Expression: The Brain and The Face (pp. 127–140). Porto: University Fernando Pessoa Press. ISBN 978-989-643-034-4
- On Emotion – an article from Manchester Gestalt Centre website
- Fellous, Armony & LeDoux, 2002
- Tao, Jianhua; Tieniu Tan (2005). "LNCS". Affective Computing and Intelligent Interaction 3784. Springer. pp. 981–995. doi:10.1007/11573548.
- "Affective Computing" MIT Technical Report #321 (Abstract), 1995
- Kleine-Cosack, Christian (October 2006). "Recognition and Simulation of Emotions" (PDF). Archived from the original on May 28, 2008. Retrieved May 13, 2008. "The introduction of emotion to computer science was done by Pickard (sic) who created the field of affective computing."
- Diamond, David (December 2003). "The Love Machine; Building computers that care.". Wired. Retrieved May 13, 2008. "Rosalind Picard, a genial MIT professor, is the field's godmother; her 1997 book, Affective Computing, triggered an explosion of interest in the emotional side of computers and their users."
- Cherry, Kendra. "v". Retrieved 30 April 2012.
- The Tomkins Institute. "Applied Studies in Motivation, Emotion, and Cognition". Retrieved 30 April 2012.
- Reisenzein, R. (2006). Cognition & Emotion, 20(7), 920–951. Psychology Press, part of the Taylor & Francis Group. doi:10.1080/02699930600616445
- Plutchik, R. (1982). A psychoevolutionary theory of emotions.Social Science Information, 21, 529. doi:10.1177/053901882021004003
Further reading
- Dana Sugu & Amita Chaterjee "Flashback: Reshuffling Emotions", International Journal on Humanistic Ideology, Vol. 3 No. 1, Spring–Summer 2010.
- Cornelius, R. (1996). The science of emotion. New Jersey: Prentice Hall.
- Freitas-Magalhães, A. (Ed.). (2009). Emotional Expression: The Brain and The Face. Porto: University Fernando Pessoa Press. ISBN 978-989-643-034-4.
- Freitas-Magalhães, A. (2007). The Psychology of Emotions: The Allure of Human Face. Oporto: University Fernando Pessoa Press.
- González, Ana Marta (2012). The Emotions and Cultural Analysis. Burlington, VT : Ashgate. ISBN 978-1-4094-5317-8
- Ekman, P. (1999). "Basic Emotions". In: T. Dalgleish and M. Power (Eds.). Handbook of Cognition and Emotion. John Wiley & Sons Ltd, Sussex, UK:.
- Frijda, N.H. (1986). The Emotions. Maison des Sciences de l'Homme and Cambridge University Press.
- Hochschild, A.R. (1983). The managed heart: Commercialization of human feelings. Berkeley: University of California Press.
- Hogan, Patrick Colm. (2011). What Literature Teaches Us about Emotion Cambridge: Cambridge University Press.
- Hordern, Joshua. (2013). Political Affections: Civic Participation and Moral Theology. Oxford: Oxford University Press. ISBN 0199646813
- LeDoux, J.E. (1986). The neurobiology of emotion. Chap. 15 in J.E. LeDoux & W. Hirst (Eds.) Mind and Brain: dialogues in cognitive neuroscience. New York: Cambridge.
- Mandler, G. (1984). Mind and Body: Psychology of emotion and stress. New York: Norton.
- Nussbaum, Martha C. (2001) Upheavals of Thought: The Intelligence of Emotions. Cambridge: Cambridge University Press.
- Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience: Vol. 1. Theories of emotion (pp. 3–33). New York: Academic.
- Ridley-Duff, R.J. (2010). Emotion, Seduction and Intimacy: Alternative Perspectives on Human Behaviour (Third Edition), Seattle: Libertary Editions. http://www.libertary.com/book/emotion-seduction-intimacy
- Roberts, Robert. (2003). Emotions: An Essay in Aid of Moral Psychology. Cambridge: Cambridge University Press.
- Scherer, K. (2005). What are emotions and how can they be measured? Social Science Information Vol. 44, No. 4: 695–729.
- Solomon, R. (1993). The Passions: Emotions and the Meaning of Life. Indianapolis: Hackett Publishing.
- Zeki, S. & Romaya, J.P. (2008), "Neural correlates of hate", PloS one, vol. 3, no. 10, pp. 3556.
- Wikibook Cognitive psychology and cognitive neuroscience
- Dror Green (2011). "Emotional Training, the art of creating a sense of a safe place in a changing world". Bulgaria: Books, Publishers and the Institute of Emotional Training.
- Goldie, Peter. (2007). "Emotion". Philosophy Compass, vol. 1, issue 6
- Online Demo: Emotion recognition from speech, University of Patras, Wire Communication Lab
- Facial Emotion Expression Lab
- CNX.ORG: The Psychology of Emotions, Feelings and Thoughts (free online book)
- Queen Mary Centre for the History of the Emotions
- Humaine Emotion-Research.net: The Humaine Portal: Research on Emotions and Human-Machine Interaction
- PhilosophyofMind.net: Philosophy of Emotions portal
- Swiss Center for Affective Sciences
- The Internet Encyclopedia of Philosophy: Theories of Emotion
- The Stanford Encyclopedia of Philosophy: Emotion
- University of Arizona: Salk Institute
- Center for the History of Emotions, Max Planck Institute for Human Development, Berlin
- Emotional Culture and Identity | http://en.wikipedia.org/wiki/Emotion | 13 |
26 | All of the stars, galaxies, and nebulae seen by telescopes make up only four percent of the contents of the universe. Scientists are unsure about the nature of the rest.
A large part of our universe is made up of so-called "dark matter," which emits no detectable energy, such as visible light, X-rays, or radio waves. However, it reveals itself by its gravity, just like a magnet underneath a table betrays its presence by attracting paperclips and pins.
The mystery of dark matter is more than 70 years old. In 1933, Fritz Zwicky studied the motions of galaxies in the Coma cluster and found that they were moving much too fast: the cluster should fly apart unless it is much more massive than it appears. One year earlier, Jan Oort had studied the motions of stars in the Milky Way and used similar arguments to conclude that our galaxy contains more mass than meets the eye.
In the late 1970s, Vera Rubin and Kent Ford announced the results of their pioneering studies of distant spiral galaxies. The outer regions of every galaxy they observed rotated so fast that there was one inescapable conclusion: Galaxies are embedded in extended "halos" of dark matter.
Why does fast rotation imply the presence of dark matter? Think about the solar system. Since most of the mass is in the Sun, which sits in the center of the system, Mercury's orbital speed is much higher than Pluto's.
Likewise, if most of a galaxy's mass is concentrated in its core (where most of the light comes from), you would expect stars and gas clouds to orbit slower with increasing distance from the core. This happens out to a certain distance. But past that point, orbital speeds of stars and gas clouds remain almost constant at increasing distances from the core. This can be explained only if there's a lot of invisible mass outside their orbits. Rubin and Ford concluded that the universe must contain about 10 times more dark matter than ordinary "luminous" matter.
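The expected speeds follow from Newtonian gravity. As an illustration added here (not part of the original article), a star on a circular orbit of radius r around an enclosed mass M(<r) obeys

$$
\frac{v^{2}}{r} = \frac{G\,M(<r)}{r^{2}}
\qquad\Longrightarrow\qquad
v(r) = \sqrt{\frac{G\,M(<r)}{r}} .
$$

If nearly all the mass sat in the luminous core, M(<r) would be roughly constant far from the center and v would fall off as $1/\sqrt{r}$, just as it does for the planets around the Sun. A flat rotation curve, with v roughly constant, instead requires M(<r) to keep growing in proportion to r, which is the quantitative signature of an extended dark-matter halo.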
Some of the dark matter consists of ordinary, or "baryonic" matter (matter consisting of protons and neutrons) that simply does not give off energy. Candidates include tenuous gas clouds, remnants of dead stars, or primordial black holes. But this is just the tip of the dark-matter iceberg: the amount of strange dark matter -- new types of elementary particles -- may be almost 10 times as large.
There's a simple reason why astronomers are so interested in dark matter. The mass of the universe determines its fate. The universe began expanding at the Big Bang, and is still expanding today. If the visible mass were all the mass in the universe, the universe would expand forever. However, the gravity of large amounts of dark matter might stop the expansion and cause the universe to contract, causing it to end in a "Big Crunch."
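The dividing line is usually expressed through the critical density, a standard cosmology relation rather than a figure taken from this article:

$$
\rho_{\mathrm{crit}} = \frac{3H_{0}^{2}}{8\pi G},
$$

where $H_0$ is the Hubble constant and G is Newton's gravitational constant. Ignoring dark energy, a universe with average density above this value would eventually recollapse, while one below it would expand forever; measurements suggest the total matter density, dark matter included, falls well below the critical value.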
Recent observations of distant stellar explosions seem to indicate that the Big Crunch is not in our future. In fact, the expansion of the universe is speeding up, goaded by a mysterious "dark energy."
MACHOs and WIMPs
Most dark-matter searches have focused on our galaxy's halo -- a sphere around the flat, main disk, like the halo seen in blue around the edge-on spiral galaxy NGC 4631. Suppose it harbors a large amount of dark matter that consists of normal atoms (called "baryonic" dark matter). It may be locked up in small, cold bodies, like dead stars, cool brown dwarfs, rogue planets, or maybe even small black holes. These hypothetical bodies are called MACHOs, for MAssive Compact Halo Objects.
The nature of the "non-baryonic" dark matter -- dark matter not made of normal atoms -- is more mysterious. It may consist of particles that rarely or never interact with normal matter, except through gravity. Astronomers call this non-baryonic dark matter WIMPs -- Weakly Interacting Massive Particles. | http://stardate.org/astro-guide/btss/cosmology/dark_matter | 13 |
23 | Jupiter vacuumed up the pieces of the disrupted comet Shoemaker–Levy 9 in 1994, but the impacts were a reminder of the danger faced by Earth.
Comet Shoemaker-Levy 9 experienced one of the most spectacular ends that humans ever witnessed. Several months after its discovery, pieces of the comet smashed into the planet Jupiter. The collision produced scars that were visible from Earth in small telescopes.
"This is the first collision of two solar system bodies ever to be observed, and the effects of the comet impacts on Jupiter's atmosphere have been simply spectacular and beyond expectations," NASA wrote on a website describing the comet.
The comet, which struck Jupiter in 1994, brought the dangers of asteroid and comet collisions with Earth to the public fore. In the late 1990s, Hollywood unleashed two blockbuster films – "Armageddon" and "Deep Impact" – on the theme of large objects threatening Earth.
After the release of these films, Congress authorized NASA to seek more near-Earth objects (NEOs) to better monitor those that come cruising close to our planet.
The first known Jupiter-orbiting comet
The comet was first spotted in March 1993 by three veteran comet discoverers: Eugene and Carolyn Shoemaker, and David Levy. The group had collaborated several times before and discovered several other comets, which is why this comet was called Shoemaker-Levy 9.
A March circular from the International Astronomical Union Central Bureau for Astronomical Telegrams contained a casual reference to the comet's position: "The comet is located some 4 degrees from Jupiter, and the motion suggests that it may be near Jupiter's distance."
As the months progressed, it was clear that the comet was actually orbiting Jupiter and not the sun. Astronomer Steve Fentress suggested the comet broke up on July 7, 1992, when it whipped by Jupiter roughly 74,600 miles (120,000 km) above its atmosphere. (Accounts vary, with some sources saying the comet passed as close as 15,534 miles, or 25,000 km.)
But the comet was probably orbiting Jupiter for decades before that, perhaps as early as 1966 when it got captured by the massive planet's gravity.
Further orbital calculations showed the comet would actually crash into Jupiter in July 1994. The spacecraft Galileo, scheduled to orbit Jupiter, was still en route to the planet at the time and would not be able to get a close-up view.
Observatories around the world, however, prepared to turn their attention to the planet, expecting a spectacular show. The orbiting Hubble Space Telescope also was tapped to observe the encounter.
"For comet experts and planetary specialists around the world, this may be the most important event of their careers, because of the discoveries they may make about the nature of comets and the makeup of Jupiter's atmosphere and magnetosphere," NASA wrote prior to the event.
"This knowledge may help them explain similar high-energy events on Earth."
Watching the fireworks
The collisions ended up being a multi-day extravaganza. From July 16 to 22, 1994, 21 separate fragments of the comet smashed into Jupiter's atmosphere, leaving blotches behind.
Although all of the collisions took place on the side of Jupiter facing away from Earth, they generally occurred fairly close to the morning "terminator", or the location on Jupiter that was shortly moving within sight of Earth. This meant that telescopes saw some impact sites just minutes after the event.
Jupiter's bright surface was now dotted with smudges from where the comet smashed through the atmosphere. Astronomers using Hubble were surprised to see "sulfur-bearing compounds" such as hydrogen sulfide, as well as ammonia, as a result of the collision.
A month after the collision, the sites were noticeably faded, and Hubble scientists declared that Jupiter's atmosphere would have no permanent change from the impacts.
"Hubble's ultraviolet observations show the motion of very fine impact debris particles now suspended high in Jupiter's atmosphere," NASA added in a release.
"The debris eventually will diffuse down to lower altitudes. This provides the first information ever obtained about Jupiter's high altitude wind patterns."
The impact scars disappeared many years ago, but at least one group of scientists has recently detected a change in Jupiter's environment because of Shoemaker-Levy 9.
When Galileo arrived at Jupiter, the spacecraft imaged ripples in Jupiter's main ring in 1996 and 2000. Also, the entire ring tilted in 1994 by about 1.24 miles (two kilometers) following the impact.
In 2011 – nearly two decades after the impact – the Pluto-bound New Horizons spacecraft still was detecting disturbances in the ring, according to a paper in the journal Science.
"Impacts by comets or their dust streams are regular occurrences in planetary rings, altering them in ways that remain detectable decades later," the researchers wrote in their abstract.
Political effects also came to pass in the decades after Shoemaker-Levy 9, as politicians sought to figure out how many large extraterrestrial objects lurk near Earth. In 1998, Congress mandated that NASA seek out at least 90 percent of the asteroids near the planet that are 0.62 miles (1 kilometer) in diameter.
Seven years later, representatives refined the search to order that by 2020, NASA find 90 percent of NEOs that are 459 feet (140 meters) wide or larger – a threshold considered to pose a large threat to Earth.
As of 2011, NASA had found more than 90 percent of the biggest asteroids lurking near Earth, the agency announced. A survey using the Wide-field Infrared Survey Explorer suggested that there were actually fewer asteroids lurking near our planet than previously feared.
"Astronomers now estimate there are roughly 19,500 – not 35,000 – mid-size near-Earth asteroids. Scientists say this improved understanding of the population may indicate the hazard to Earth could be somewhat less than previously thought," NASA wrote.
"However, the majority of these mid-size asteroids remain to be discovered."
| http://www.weather.com/news/science/space/comet-hits-jupiter-20130220 | 13
34 | Part I: The Circumcenter of a triangle
1. Start up Geometry Explorer. Using the segment tool construct a triangle ABC in the Canvas. Then, select all three sides and click on the midpoint tool to construct the midpoints to each of the sides.
2. Next, select, in turn, each side and its midpoint and click on the perpendicular tool to construct the three perpendicular bisectors of the sides.
3. Note that the three perpendicular bisectors appear to intersect at a common point. Drag vertices A, B, and C around. Does this common intersection property persist? This common point is called the circumcenter of the triangle. Select two of the perpendicular bisectors and click on the intersection tool to find the circumcenter G. Then, construct a circle with center at the circumcenter and radius out to one of the vertices of the triangle. What do you notice about the circle in relation to the vertices of the triangle? This circle is called the circumscribed circle about the triangle.
Part II: The Orthocenter of a triangle.
1. Construct a new triangle ABC in the Canvas. Then, construct the perpendicular to each side through the opposite vertex (the three altitudes). It appears that the three altitudes intersect at a common point, point O in the figure. This point is called the orthocenter of the triangle. Construct a circle at the intersection point with radius out to one of the vertices. What do you notice that is different about this situation compared to the example above?
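Before moving on, readers who prefer to check these constructions numerically can compute the circumcenter directly from vertex coordinates. The short Python sketch below is not part of the original Geometry Explorer exercise, and the sample coordinates are arbitrary; it solves the two perpendicular-bisector conditions algebraically.

```python
# Sketch: circumcenter of triangle ABC as the point equidistant from all three vertices.
# Derived from |P-A|^2 = |P-B|^2 and |P-A|^2 = |P-C|^2, which are linear in P.
def circumcenter(A, B, C):
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("Collinear points: the triangle is degenerate.")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
G = circumcenter(A, B, C)
# The circumscribed circle passes through all three vertices, so these radii agree:
print(G, [round(((G[0] - px)**2 + (G[1] - py)**2) ** 0.5, 6) for px, py in (A, B, C)])
```

Dragging the vertices in Geometry Explorer corresponds to changing A, B, and C here; the three printed radii stay equal, which is exactly the property that defines the circumscribed circle.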
Part III: The Incenter of a triangle.
1. Construct a new triangle ABC in the Canvas. Then, construct the angle bisectors of each angle of the triangle. For example, to construct the bisector on angle CAB we would select (in order) points C, A, and B and click on the bisector tool. Note that angles are oriented in Geometry Explorer and you must select the vertices of an angle in the proper order. It appears that yet again the three bisectors intersect at a common point called the incenter of the triangle.
2. There is a special circle associated to the incenter. To construct this circle, drop a perpendicular line from the incenter to one of the sides of the triangle, and find the intersection of this perpendicular with the side. Call this point J. Construct a circle with center at the incenter and radius at J. This circle is the inscribed circle to the triangle.
Drag points A, B, and C around. What special properties does the incenter have? | http://homepages.gac.edu/~hvidsten/gex/gex-examples/projects/triangleFun.html | 13 |
15 | Europa Impact Crater
A newly discovered, city-sized impact crater viewed by NASA's Galileo spacecraft may shed new light on the nature of the enigmatic icy surface of Jupiter's moon Europa.
This false-color image reveals the scar of a past major impact of a comet or small asteroid on Europa's surface. The bright, circular feature at center right has a diameter of about 80 kilometers (50 miles), making it comparable in size to the largest cities on Earth. The area within the outer boundary of the continuous bright ring is about 5,000 square kilometers (nearly 2,000 square miles). The diameter of the darker area within the bright ring is about 29 kilometers (18 miles), which is large enough to contain both the city of San Francisco and New York's Manhattan Island, side by side.
The brightest reds in this image correspond to surfaces with high proportions of relatively pure water ice, while the blue colors indicate that non-ice materials are also present. The composition of the darker materials is controversial; they may consist of minerals formed by evaporation of salty brines, or they may be rich in sulfuric acid. The bright ring is a blanket of ejecta that consists of icy subsurface material that was blasted out of the crater by the impact, while the darker area in the center may retain some of the materials from the impacting body. Further study may yield new insights about both the nature of the impactor and the surface chemistry of Europa.
Europa's surface is a question of great interest at present, since an ocean of liquid water may exist beneath the icy crust, possibly providing an environment suitable for life. Geologic investigations of Europa's surface are underway, and a new spacecraft mission, the Europa Orbiter, is planned.
Impact craters with diameters of 20 kilometers (12 miles) and larger are extremely rare on Europa; as of 1999 only 7 such features were known. The rarity of larger impact craters on Europa lends greater significance to the discovery of this one. Impact crater counts are often employed to estimate the ages of the exposed surfaces of planets and satellites, and the small number of craters found on Europa implies that the surface may be quite young in geological terms. Thus the discovery of this feature may provide additional insights into questions about the age and level of geological activity of Europa's surface.
Impact craters are expected to form with greater frequency on the "leading" sides of satellites that always turn the same face to their primary planet, in this case, Jupiter. The process is much like the effect of running through a rainstorm. The "apex" of Europa's leading side is located on the equator at 90 degrees West longitude, only about 10 degrees removed from the feature shown. Europa's leading side does not receive a continuous bombardment by ionized particles carried along by Jupiter's rapidly rotating magnetosphere (as is the case for the trailing side), which may allow greater preservation of the chemical signatures of the impacting object.
To the east of the bright ring-like feature are two, or perhaps three, similar but less well-defined quasi-circular features, raising the possibility that this crater is one member of a catena, or chain of craters. This would lend still greater interest to this area as a potential target for focused investigations by later missions such as the Europa Orbiter.
The near-infrared mapping spectrometer on board Galileo obtained this image on May 31,1998, during that spacecraft's 15th orbital encounter with Europa. The image data was returned to Earth in several segments during both the 15th and the 16th orbital periods. Merging and processing of the full data set was accomplished in 1999. Analysis and interpretation are ongoing.
Galileo has been orbiting Jupiter and its moons since December 1995. Its primary mission ended in December 1997, and after that Galileo successfully completed a two-year extended mission. The spacecraft is in the midst of yet another extended journey called the Galileo Millennium Mission. More information about the Galileo mission is available at http://solarsystem.nasa.gov/galileo/. JPL manages Galileo for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology in Pasadena.
For more information about the eclipse on March 20, 2012, please visit http://www.nature.nps.gov/eclipse/.
Transit of Venus
On June 5, 2012, U.S. national parks were able to see a rare event, the transit of Venus. In the hours before sunset, every park in the contiguous United States, Hawaii, and the Virgin Islands was able to view most of the transit as Venus moved across the face of the sun.
The transit of Venus is a special alignment of the Sun, the planet Venus, and the Earth. Due to this Solar System geometry, Venus transits usually occur in pairs, eight years apart, about once a century. In the Pacific, on June 5, 2012, for over six hours, Venus' tiny silhouette moved across the Sun's disk. On the scale of a human life, it is a rare astronomical event. It is so rare that the 2012 transit will be only the 54th occurrence since 2000 B.C.E. The 55th transit will be in 2117. Since Mercury and Venus are the only two planets that orbit between the Sun and the Earth, these are the only planets in our solar system in which this amazing phenomenon can be observed.
World Explorers Breaking the Frontiers of Modern Science
For centuries, the second planet in our solar system, Venus, held the key to estimating the size of the solar system. Nearly 300 years ago, astronomers theorized that observations of the silhouette of Venus crossing the face of the Sun, combined with the mathematics of geometry, could be used to calculate the scale of the solar system. Thus began a scientific endeavor to observe and time this rare event. It would take centuries before explorers and astronomers captured enough observations to realize how big our corner of the Universe is.
We do not know for certain whether ancient cultures observed the transits of the inner planets. Before the invention of the telescope, Venus appeared to many observers as a bright point of light, and before solar filters, looking directly at the Sun risked blindness, so astronomers relied on various projection methods to observe the Sun's disk. In 1610 Galileo Galilei was the first to observe Venus through a telescope and see it as a disk, not a point of light. In 1627 Johannes Kepler (using the detailed data of Tycho Brahe) applied his three laws of planetary motion and made the first Venus transit predictions. It was this work that inspired Jeremiah Horrocks to calculate more accurately the eight-year interval between the paired Venus transits and to successfully record the details of the transit on December 4, 1639. The 1639 transit of Venus observations inspired the mathematician and inventor of the reflecting telescope, James Gregory, to theorize how to calculate the size of the solar system by measuring the transit event. It was during the next transit of Venus, on June 5, 1761, that a Russian astronomer, Mikhail Lomonosov, eager to test these new theories, witnessed a "halo" around Venus as it reached the dark edge of the Sun, indicating that Venus has an atmosphere.
Eight years later, an exploration race developed among scientists to measure the next transit of Venus. In 1769 many European expeditions to the South Pacific were launched to observe the transit and calculate the size of the solar system. The most famous of these expedition ships was the Endeavour, which Captain James Cook sailed to the island of Tahiti to observe the Venus transit. It was after the transit that the Endeavour happened upon New Zealand and Australia's Great Barrier Reef. With observations such as these, humanity came to realize that the solar system is immense: the Sun is over 93 million miles from the Earth, meaning the Sun is very large, and even at the speed of light it takes over 8 minutes for sunlight to reach the Earth.
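The underlying geometry is a parallax measurement. As a simplified sketch (the historical calculation was considerably more involved), two observers separated by a known baseline b on Earth see Venus projected onto slightly different tracks across the Sun; the small angular offset, combined with the relative distances of Venus and Earth known from Kepler's third law, yields the Earth–Sun distance roughly as

$$
d \;\approx\; \frac{b}{\theta},
$$

where $\theta$ is the parallax angle, in radians, inferred from the shift in Venus' apparent path. Because $\theta$ is tiny, the timing of the transit from widely separated sites had to be measured with great care, which is what motivated the far-flung expeditions.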
Just as early astronomers once used the Venus transits to calculate the size of our solar system, modern astronomers make new discoveries today by observing transits of extrasolar planets orbiting distant stars. When a planet transits in front of its parent star, it blocks a tiny fraction of that star's light. The tiny change in brightness is measured by the NASA Kepler mission's space telescope as it scans one area of our neighborhood in the Milky Way. Modern Earth-based telescopes confirm Kepler's exoplanet candidates, and a new solar system is discovered.
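For a sense of scale (a standard textbook relation, not something stated in the original article), the fractional dip in starlight during a transit is roughly the ratio of the planet's disk area to the star's disk area:

$$
\frac{\Delta F}{F} \;\approx\; \left(\frac{R_{p}}{R_{\star}}\right)^{2},
$$

so an Earth-sized planet crossing a Sun-like star dims it by only about one part in ten thousand, which is why photometry of Kepler's precision is needed to find small planets.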
- NASA Transit of Venus
- Transit of Venus article in Sky & Telescope
- Observing tips in Sky & Telescope
- U.S. Naval Observatory Transit Computer (Note: Times are listed in UT or Universal Time. UT or GMT time conversion may be necessary.)
- NASA Night Sky Network Calendar of Events
- Night Sky Parks Calendar of Events
- NASA Kepler Mission for finding habitable planets
Last Updated: June 11, 2012 | http://www.nature.nps.gov/features/eclipse/transitofvenus.cfm | 13 |
13 | Soil scientists divide soil into layers they call horizons. The A horizon is the uppermost several inches and consists mostly of what we know as topsoil. It is often dark in color and rich in organic matter, and it usually provides a favorable environment for plant growth. The next two layers, the B and C horizons, are lighter in color, lower in organic matter and relatively infertile. We call the B and C horizons subsoil. Plant roots generally extend through the A horizon and well into the B horizon. However, the C horizon, which may be well below the surface, is comparatively inhospitable for root growth.
In landscape situations, this natural layering often is absent due to soil movement during construction. All too often, this means that no topsoil layer is present, forcing the landscape installer to modify the existing subsoil to make it more favorable to plant growth.
Aside from horizons, which describe the position of the soil layer, soil scientists also refer to soil fractions. Fractions refer to organic or inorganic (mineral) substances. Thus, most soils are composed partly of a mineral fraction and partly of an organic fraction. A few soils are almost completely organic, and others are mostly mineral.
• The mineral fraction of soil, consisting of particles that ultimately originated from rock, comprises the largest percentage of most soils. The type of rock from which the mineral particles originated has some bearing on the chemistry of a soil. However, the mix of particle sizes has a greater impact on soil quality and how you must manage it. The age of the soil and how much weathering it has undergone determine particle size: Older, more weathered soils consist of smaller particles.
The smallest particles are clay. Larger (but still quite small) particles are silt, and the largest particles (that still qualify as soil) are sand (see table, “Sizes of soil particles,” above right). Soils rarely, if ever, consist of solely one size of particle. Thus, soils are classified according to the proportion of each particle size they contain—we commonly refer to this as soil texture (see Figure 1, at right).
Texture, in the broadest sense, is stated as coarse (sandy soils), medium (silts or loamy soils) or fine (clayey soils). Loamy soils are intermediate in nature and not totally dominated by the characteristics of any particular particle size, though they proportionately contain more silt than sandy or clayey soils. Thus, there is no such thing as a loam particle, only loam soils. Loamy soils generally have the best overall characteristics for plant growth.
To be even more specific, we combine these terms. For example, sandy clay has significant amounts of sand but is dominated by clay particles and clay characteristics. A sandy loam is a mix of particle sizes not totally dominated by characteristics of any particle size, but—due to a somewhat higher relative sand content—its qualities tend toward those of sand. Other terms you’ll often encounter to describe texture include light and heavy, referring to sandy and clayey soils, respectively. As we’ll see, texture, more than any other single aspect, determines the manageability of soils (see sidebar, “Testing for texture”).
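As a rough illustration of how texture terms map onto particle-size proportions, here is a hypothetical Python sketch. The cutoffs below are simplified stand-ins chosen for illustration, not the official USDA texture-triangle boundaries, which are more detailed.

```python
# Simplified, illustrative texture call from sand/silt/clay percentages.
# Real classification uses the full USDA texture triangle; these thresholds are placeholders.
def rough_texture(sand, silt, clay):
    if abs(sand + silt + clay - 100) > 0.5:
        raise ValueError("Particle-size fractions should sum to about 100 percent.")
    if clay >= 40:
        return "fine (clayey)"
    if sand >= 70:
        return "coarse (sandy)"
    return "medium (loamy)"

for sample in [(85, 10, 5), (30, 40, 30), (20, 30, 50)]:
    print(sample, "->", rough_texture(*sample))
```

The point of the sketch is simply that a soil's name follows from its mix of particle sizes, which is why a laboratory particle-size analysis (or the hand-texturing test mentioned in the sidebar) tells you so much about how the soil will behave.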
• Organic matter (OM), the other soil fraction, is present in most soils, but content varies widely. Soils low in organic matter may have less than 1 percent OM content, whereas highly organic soils range far higher. Most soils contain less than 10 percent, and many—especially in arid climates—hold only 1 or 2 percent OM.
OM results from decaying plant material. This decay is brought about mainly by bacteria and fungi that consume plant matter as food. The resulting residues are a rich mix of organic materials that usually have a positive effect on soil quality. As complex organic molecules break down into simpler forms, the organic matter eventually arrives at a semi-stable form we call humus—the dark-colored substance we commonly associate with “rich” soil.
Humus contains a variety of carbohydrates, proteins, lignin, cellulose and other materials, but its main benefit does not lie in its nutritional content (most of which is unavailable to plants). Humus improves the physical structure and chemistry of soils so that they have better water- and nutrient-holding capacities and greater permeability. Notably, humic acid causes clay particles to aggregate into larger particles that act more like sand than clay. This improves drainage and aeration and so is especially valuable in clay soils. Before plant material undergoes extensive decomposition—that is, before it becomes humus—it still is beneficial to soil because it improves physical structure.
As stated above, most of the nutrients in humus are unavailable to plants. Eventually, however, even humus can break down into inorganic compounds by the process of mineralization. At this point, nutrients become available to plants again, and the cycle is completed. The reverse of this process is immobilization, wherein microorganisms assimilate inorganic substances into organic compounds. Both of these processes are ongoing in soil, but the overall trend—not counting plant uptake of nutrients—is always toward mineralization.
• Water is present in all soils. Texture has the greatest effect on how much water soil can hold: Finely textured soils hold more water than coarse soils. This is because of how soil particles hold onto water molecules. Water molecules “stick” to soil-particle surfaces by a force called adhesion because they possess positive electrical charges that are attracted to negative electrical charges on the soil particles. Thus, a layer of water surrounds soil particles. Even soils that may seem dry have very small layers of water around each particle (though this water may be unavailable to plants). Sandy soils hold the least amount of water due to low soil-particle surface area. A given volume of clay soil, because of the greater number of particles present, contains a far greater surface area onto which water molecules can cling and so has excellent water retention.
• Air is present in the pore spaces between soil particles. Because water is the other substance that can occupy significant amounts of pore space, air content is determined to a large extent by how wet soil is. The presence of air—particularly oxygen—in pore spaces is as important to most plants as water. Thus, good aeration is an important physical property of soil. Soils that hold a great deal of water are low or lacking in oxygen. That is why plants languish in saturated soils—their roots starve for oxygen.
• Living organisms are prevalent in nearly all soils. Bacteria, fungi, protozoans, nematodes and larger creatures such as earthworms inhabit soils, where they live on decaying plant matter and each other. From a soil-management standpoint, the main benefit of soil organisms is their role in decomposing organic matter, which we discussed above. Warm, moist conditions favor the activity of these organisms, so these types of climates favor rapid decomposition of organic matter. However, warm moist climates also favor rapid plant growth, which adds more raw material for the decay process. Thus, the cycling occurs more rapidly and on a larger scale.
HOW SOIL TYPE AFFECTS MANAGEMENT
• Water movement. Because soil particles are solid, water obviously cannot move through them. Instead, it must move around them. Water’s movement in and through soil depends on the arrangement and size of the soil’s pore spaces—the spaces between soil particles. Due to the random way soil particles pack together, pore spaces vary in size. Some are large and some are small. A “typical” soil may be about 50 percent pore space—25 percent small pore space and 25 percent large pore space. The proportion of soil occupied by pore space is its porosity and varies a great deal among soil types.
When water drains through a soil, most of its movement is through large pores. Coarse-textured soils have more large pore spaces than finely textured soils, and air normally fills these large pores. Larger pores are much better at conducting both air and water through the soil, and that’s why sandy soils have excellent drainage and aeration. The rate at which water can flow through a soil is called hydraulic conductivity. Coarser, sandy soils, with their larger pore sizes, have higher hydraulic conductivity than fine, clay soils, which tend to be lower in oxygen and retain more water.
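Hydraulic conductivity is usually quantified with Darcy's law; the one-dimensional form below is standard soil physics rather than something given in this article:

$$
q = -K\,\frac{dh}{dz},
$$

where q is the water flux (volume per unit area per unit time), K is the hydraulic conductivity, and dh/dz is the hydraulic gradient driving the flow. A sand with a large K transmits far more water for the same gradient than a clay with a small K, which is the quantitative version of the drainage contrast described above.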
Small pores more often contain water, rather than air. Because clay soils have more small pores and greater water retention, they also are more prone to saturation due to heavy rainfall or poorly drained conditions. When water completely occupies all the pore space in soil, the soil is saturated. Saturated soils, as we mentioned, lack oxygen and therefore make poor environments for root growth.
However, clay soils have their benefits as well. For example, they hold more available water and nutrients, so plants can last longer between irrigations and fertilizer applications. Sandy soils hold much less water and nutrients, so plants growing in them are more prone to drought and nutrient deficiency. One reason loamy soils are valuable is that they hold more water than sand, but they do not have the drainage problems of clays.
Infiltration rate describes how fast water enters the soil surface. Infiltration is similar to hydraulic conductivity and largely dependent on it. Whenever you apply water to the soil surface at a higher rate than the infiltration rate, you will have puddling or runoff.
Because clay soils often require short, light doses of water to avoid runoff or puddling (and possess low hydraulic conductivity), they are more susceptible to salt buildup. This happens because each time you apply water, you also apply a small amount of dissolved salt along with it. In well-drained soils, you easily can apply enough water so that some of it drains, or leaches, completely through the root zone. This water takes some of the dissolved salt with it, thus reducing the amount to which plant roots are exposed. However, when infiltration rates limit you to small doses of water, you cannot apply enough to leach any out of the root zone. Thus, while water leaves the soil by evapotranspiration, the salt stays behind and slowly accumulates to toxic levels as additional irrigation water brings more. This also illustrates why water quality is an important issue.
• Compaction and density. An aspect we have not touched on yet is soil-particle shape. Particles smaller than sand tend to be flattened and plate-like. This tendency is very strong with clay particles, and this has important implications. Clay particles, being flat, can stack tightly together, virtually eliminating any pore spaces between particles. In other words, porosity decreases. This is true with silts as well and is what happens when soils compact and why compacted soils conduct little water or air. Further, root growth is reduced because pore spaces through which small roots grow do not exist in compacted soil. Moist soil is more prone to compaction because when ample water is present in the soil, the particles can slip and slide past one another, making repositioning into a more compact state easier.
Clay particles also can seal, for the same reason. The flattened particles all can be oriented in the same positions— flat—and form a barrier through which water and air cannot penetrate. That’s why it’s important to score glazed surfaces—such as those created by tree-spades in planting holes—to disrupt this barrier and allow water and air penetration.
Sand tends not to compact because, unlike clays, sand particles are not flat. They cannot “stack” in a way that reduces pore space. That is why sand is the preferred medium for high-traffic turf such as golf greens and athletic fields—turf growing on sand is not as prone to the damage that compaction causes.
At this point, we should mention pans. Pans are impermeable layers present below the surface of some soils at varying depth. Hard pans are rock-like while clay pans are softer. Most pans occur naturally, but some cultural practices can create them. For example, repeated core aeration at the same depth can create a pan layer of highly compacted soil just below the depth of tine penetration.
Pans all cause serious drainage problems in landscapes. They prevent water from draining and so create perched water tables. This not only saturates soil, it also causes salt buildup because salt cannot leach out of the soil. Even if you’re able to manage irrigation well enough to prevent these problems, pans still effectively create a “bottom” to the soil, which may be quite shallow. This can restrict the rooting depth of trees and shrubs.
Soil chemistry is the interaction of various chemical constituents that takes place among soil particles and in the soil solution—the water retained by soil. The chemical interactions that occur in soil are highly complex, but understanding certain basic concepts will better help you manage turf and ornamentals.
• Nutrition. Having discussed water relations, it now is a bit simpler to discuss nutrient-holding capacity. Soils hold onto nutritional elements in a way similar to how they retain water: Positively charged nutrient ions, called cations, are attracted to the negative charges on the soil particles. This is called adsorption. The sites where cations attach to particles are cation-exchange sites (see Figure 2, left). Thus, clay retains more nutrients than coarser soils, just as it holds more water, because of the greater surface area (greater number of cation-exchange sites) to which nutrients can adsorb. The ability to hold cation nutrients is called the cation-exchange capacity (CEC) and is an important characteristic because it reflects a soil’s ability to retain nutrients and resist nutrient leaching. Coarse soils have low CECs, while clays and highly organic soils have high CECs. A sand may have a CEC of under 10—a very low figure. Any CEC above 50 is high, and such soils should be able to hold ample nutrients.
• Salinity. Some soils, particularly in arid regions, hold high levels of salt. We discussed earlier how clay soils are more prone to salt buildup, and the same principle applies to arid-region soils. Low rainfall prevents leaching of salts, so they build up in soils. Pan layers, common in arid regions, further inhibit drainage and leaching. Some fertilizers and amendments also can increase salinity.
• Soil pH. This is perhaps the single most important aspect of soil chemistry. Strictly speaking, soil pH, or reaction, is a measure of the concentration of hydrogen ions (H+) in the soil solution. In more common terms, it is a measure of alkalinity and acidity. The pH scale runs from 0 to 14. Seven is neutral, 0 is the most highly acidic value possible, and 14 is the most alkaline, or basic, value. Most plants grow best in the range of 6.5 to 7.0, which is only slightly acidic. The so-called acid-loving plants prefer lower pH, in the range of 4.0 to 6.0. Under 4.0, few plants are able to survive. Slightly alkaline soil is not harmful to most plants (except acid lovers). In strongly alkaline soils, however, nutrient-availability problems related to pH result.
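Because the pH scale is logarithmic, small numerical differences represent large differences in acidity. The short sketch below (plain Python, not taken from the article; the function name is just illustrative) shows the relationship between pH and hydrogen-ion concentration.

```python
# pH is the negative base-10 logarithm of the hydrogen-ion
# concentration, so each one-unit drop in pH means ten times more H+.
def hydrogen_ion_concentration(ph):
    return 10.0 ** (-ph)

# A soil solution at pH 5.5 holds ten times the H+ of one at pH 6.5.
ratio = hydrogen_ion_concentration(5.5) / hydrogen_ion_concentration(6.5)
print(round(ratio, 2))  # 10.0
```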
The parent material of soils initially influences soil pH. For example, granite-based soils are acidic and limestone-based soils are alkaline. However, soil pH can change over time. Soils become acidic through natural processes as well as human activities. Rainfall and irrigation control the pH of most soils. In humid climates, such as the Northeastern United States, heavy rainfall percolates through the soil. When it does, it leaches basic ions such as calcium and magnesium and replaces them with acidic ions such as hydrogen and aluminum. In arid regions of the country (less than 20 inches of rain per year), soils tend to become alkaline. Rainfall is not heavy enough to leach basic ions from soils in these areas.
Other natural processes that increase soil acidity include root growth and decay of organic matter by soil microorganisms. Whereas the decay of organic matter gradually will increase acidity, adding sources of organic matter with high pH values (such as some manures and composts) can raise soil pH.
Human activities that increase soil acidity include fertilization with ammonium-containing fertilizers and production of industrial by-products such as sulfur dioxide and nitric acid, which ultimately enter the soil via rainfall. Irrigating with water high in bicarbonates gradually increases soil pH and can lead to alkaline conditions.
In most cases, changes in soil pH—whether natural processes or human activities cause them—occur slowly. This is due to the tremendous buffering capacity (resistance to change in pH) of most mineral soils. An exception to this is high-sand-content soils, where buffering tends to be low, as we’ll discuss below.
Nutrient availability varies markedly according to pH. This, in fact, is the main reason why pH is so critical. The best pH for overall nutrient availability is around 6.5, which is one reason why this is an optimal pH for most plants.
Calcium, magnesium and potassium are cation nutrients, meaning they are available to plants in a form with a positive charge. As we discussed earlier, these nutrients adsorb to soil particles, especially clay particles. Soils high in clay or organic matter have high CECs. Thus, these soils act as reservoirs for these nutrients and plants growing in them seldom are deficient in the cation nutrients.
Cations do not adsorb permanently to particles. Other compounds that are more strongly attracted to the cation-exchange sites can replace them. This is one way that pH affects nutrient availability. Low-pH soils, by definition, have many of their cation-exchange sites occupied by H+ ions. By default, exchange sites holding H+ ions cannot hold other cations. Therefore, low-pH soils are more likely to be deficient in nutrients such as magnesium, calcium or potassium. If cations are not held by particles, they can leach out of the soil.
Soil-solution pH also affects the solubility of other nutrients in the soil. In fact, pH affects the availability of all nutrients one way or another (see Figure 3, above). Therefore, maintaining pH close to the ideal level—6.0 to 7.0 for most plants—is important.
Buffering capacity is the ability of soil to resist changes in pH. Soils with a high buffering capacity require a great deal of amendment to alter pH. This is good if the soil already has a desirable pH, but it can be a problem if the soil needs pH modification. Normally, soils high in clay or organic matter (those that have high CECs) have high buffering capacities. Calcareous soils often have high buffering capacities because lime effectively neutralizes acid—a great deal of acidification may be necessary to eliminate the lime before you can realize a significant drop in pH. Conversely, in lime-free soils, acid treatment can drop pH significantly. Soils also can resist upward changes in pH, depending on their composition. Because buffering capacity determines how much amendment it will take to change pH, this is an important characteristic. Soil labs determine buffering capacity and adjust their recommendations according to the buffer pH.
Landscape managers commonly manage soils to improve their physical structure. Doing so entails cultivation and, often, the addition of some organic or inorganic amendment.
One of the main reasons we amend and cultivate soil is to alleviate compaction (see “Testing for compaction—bulk density,” below). Thus, it’s appropriate that this discussion should address preventing compaction as the first step in improving soil structure.
Trees commonly suffer from construction activity, which compacts soil to an extent that often can kill the plant. On construction sites, create a zone around trees in which equipment is prohibited. In areas with high foot traffic, take steps to route people along paths that will not affect the root zones of existing ornamentals. The same thing applies to vehicular traffic. Other practices also help reduce or prevent compaction:
Do not cultivate when the soil is wet. This can be a very frustrating situation during wet periods because it seemingly takes forever for soil—especially clay—to dry. However, cultivating soil when it’s wet will only destroy soil structure and cause the formation of blocky, hard clods impossible to break up.
Keep traffic, including foot traffic, off of wet soil—soil compacts more easily when it’s wet.
Improve drainage to speed soil drying and reduce saturation during wet periods.
Apply mulch around trees, as far as the drip line if possible. This will lessen compaction effects on the root zone and improve the soil environment for root growth.
• Physical cultivation. Cultivation can take place in a variety of situations and by several means. The easiest and best time to perform cultivation is before the installation of the landscape or turf.
If pan layers exist in your soil, now is the time to break them up, because it is nearly impossible to do so after the landscape is established. This may require some heavy-duty equipment but is well worth the trouble because pans can cause you no end of problems. Breaking a pan layer may require the use of a deep-ripper implement. If you cannot do this over the entire landscape, at least use augers or some other method of punching through the pan layer in your tree- and shrub-planting holes. Otherwise, the plants will sit in a “bathtub.” If you must dig the planting hole deeper than you normally would to accomplish this, do so. Just be sure to compact the backfill below the root ball to prevent too much settling.
In established landscapes, cultivating soil is a more complex matter. To treat compaction problems around trees, several options exist. Air injection and vertical mulching are techniques finding some use, but they have their drawbacks. A treatment gaining in popularity that provides excellent results for trees growing in compacted soil is soil replacement with radial trenching. This involves digging a trench starting near the trunk and extending it outward to near the drip line. A recent study of this method used trenches that started 10 feet from the trunk of white oaks and radiated outward. The trenches were 10 feet long, 2 feet deep, 14 inches wide and held about 1 cubic yard. The trenches were refilled with amended soil rich in organic matter. These trenches reversed the decline of trees suffering from highly compacted urban soils by providing a favorable soil environment for the tree roots. Such trenches are easy to dig with a variety of equipment (or even by hand) and so represent a viable method of alleviating compaction around existing trees. Any digging around trees should avoid damaging major roots.
Surface mulching around trees also is an effective method of improving soil conditions if the mulch covers a large enough area. Mulch should extend to the drip line if possible. This produces results more slowly but is perhaps the best long-term strategy for alleviating compaction around trees.
Turf-soil amendment is a different matter. The most common method of cultivating turf soil is through core aeration. This method uses hollow tines that pull soil cores from the turf and deposit them on the surface. The resulting holes, though they soon fill in with material, increase air and water penetration to the root zone. In many instances (low- to medium-traffic sites), doing this once or twice a year provides adequate relief from compaction. In high-traffic situations, such as golf courses and athletic fields, turf managers may core-aerate several times a year.
Repeated coring at the same depth gradually can create a compacted soil layer. Deep-tine aeration, using much longer tines, reduces this problem. Drills or water jets also are aeration options that avoid the problem of compacted layers. Many golf-course superintendents use a combination of these aeration techniques.
• Amending soil. Cultivation techniques such as aeration help alleviate compaction created by traffic. Often, however, soil has innate properties that make it difficult to manage. You can improve these soils with amendments that impart more desirable qualities to the soil.
►Organic amendments benefit soils in several ways. They increase nutrient- and water-holding capacities and improve drainage and aeration. In different ways, organic amendments benefit both coarse and fine soils. Because organic matter (OM) increases nutrient- and water-holding capacity, it helps counter the drawbacks of sand-based soils. In clay soils, water- and nutrient-holding capacities are not usually a concern. However, tilth (the quality that allows you easily to work a soil into a loose state), infiltration and drainage often are poor in clay soils. These, too, benefit from organic matter, as already discussed.
Organic amendments are available in many forms (see table, “Organic amendments,” above right), often as processed wood products. These amendments take some time to decompose to the point where they create actual humus, but they still provide infiltration, drainage and tilth in the meantime. Other common amendments include manure and peat.
Wood-based amendments are infamous for their ability to tie up soil nitrogen. Obviously, this can be a problem and may require the addition of supplemental nitrogen to offset this loss. Manure can contain high salt levels, another problem that may be of concern in your situation. See Chapter 10 for more information on the effects of amendments on soil fertility.
You will do no harm by adding large amounts of organic amendments to soil. Thus, there is little danger of overdoing it. A more common problem is adding too little. Often, amounts greater than 50 percent by volume are necessary to achieve significant modification. If you feel you need a more precise idea of how much to add to achieve the desired changes, have a laboratory test your soil.
►Inorganic amendments can be quite useful for improving soil quality. The main reason to amend soil with inorganic amendments is to improve porosity and thus increase water and air permeability of the soil. Therefore, this discussion pertains mainly to clay soils. The best way to improve porosity with inorganic amendments is with coarse amendments. These consist of particles that range in size from sand to fine gravel. Smaller particles do not increase porosity enough to be useful as amendments. Coarse amendments should be of uniform particle size: amendments with a wide mix of particle sizes tend to pack tightly and reduce porosity rather than increase it.
For amendments to be effective, the amendment particles must bridge. That is, they must touch each other so that they create large pore spaces in between. This can require between 50 and 80 percent amendment by volume. Small amounts of amendment are not very effective because they are too sparse to bridge with one another.
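As a rough planning aid, that bridging guideline can be turned into an estimate of how much coarse amendment to order. The sketch below is a simplification that treats the amendment as a share of the final tilled volume and ignores settling; the function name, bed size and 50-percent target are illustrative assumptions, not figures from a soil-lab recommendation.

```python
# Rough volume estimate for incorporating a coarse inorganic amendment.
CUBIC_FEET_PER_CUBIC_YARD = 27.0

def amendment_volume_cubic_yards(area_sq_ft, tilling_depth_in, amendment_fraction):
    """Cubic yards of amendment for a bed of area_sq_ft square feet,
    tilled tilling_depth_in inches deep, where amendment_fraction is the
    desired share of the final mix by volume (0.5-0.8 for bridging)."""
    mix_volume_cu_ft = area_sq_ft * (tilling_depth_in / 12.0)
    return (mix_volume_cu_ft * amendment_fraction) / CUBIC_FEET_PER_CUBIC_YARD

# Example: a 1,000-square-foot bed tilled 6 inches deep and amended to
# 50 percent by volume calls for roughly 9 cubic yards of material.
print(round(amendment_volume_cubic_yards(1000, 6, 0.5), 1))  # 9.3
```

Treat the result as a ballpark figure for ordering material, not a substitute for a lab-specified blend.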
Sand is the most commonly used inorganic amendment due to its low cost and effectiveness. Calcined clay, perhaps most recognized as cat litter, is another effective coarse amendment that also increases CEC. Other amendments that grounds-care professionals occasionally use include diatomite, zeolite, expanded shale, pumice, blast-furnace slag and sintered fly ash. The latter two materials are by-products that are available on a regional basis. Perlite and vermiculite are materials used primarily in greenhouse and container culture but have disadvantages in landscape use due to their inability to remain intact under traffic.
Gypsum (calcium sulfate) is an amendment professionals often use to increase infiltration in sodic (high-sodium) soils. Sodium in these soils destroys good soil structure by causing clay particles to disperse. This dispersion effectively seals soil to water infiltration and percolation. The calcium supplied by gypsum (and lime) displaces sodium, causing clay particles to aggregate (clump together) and create large pore spaces through which water can flow. The displaced sodium is then free to leach through the root zone (with enough water).
Incorporating amendments—organic or inorganic—is simply a matter of tilling the material into the soil after you’ve spread it on the surface. Don’t confuse the term amendment with mulch. Mulch refers to material that remains on the soil surface. Mulches can improve soil by reducing compaction, conserving moisture and decomposing to increase OM in the surface layer of soil. However, by definition, they are not amendments.
You can amend soil in existing turf by core aeration followed by topdressing that you drag into coring holes. This type of soil replacement is not difficult but requires some time—perhaps a year or two depending on frequency of aeration—to achieve significant replacement of soil.
►Topsoil. Many times, it is simply more efficient to bring in high-quality soil than to modify the poor soils already present on a site. Though this use of topsoil does not, strictly speaking, make it an amendment, the idea is the same: Provide a good soil environment for plant growth. Topsoil for sale often is actually loam. It may be of excellent quality, but it is a misnomer to call it topsoil. Of course, it is wise to inspect topsoil before purchasing it to ensure it’s of the quality you’re looking for. Ideally, the soil should be reasonably weed-free and should not contain too many large clods.
If the difference between the topsoil and the site soil is great—as it usually is—till a shallow layer of the topsoil into the top few inches of site soil. This will create a transition zone that will aid water movement and root growth between the two soils.
After improving porosity, changing pH is the most common reason for altering soils. Raising and lowering pH both are necessary at times, depending, of course, on the pH with which you’re starting.
• Reducing acidity. Liming is the practice of applying an agent to reduce soil acidity (raise pH) and make soils more favorable for plant growth. The amount of lime you must add depends on the degree of soil acidity, the buffering capacity of the soil, the desired pH, and the quality and type of lime you use.
►Liming materials. The most widely used liming materials for turfgrass areas consist of carbonates of calcium or magnesium. These include ground, pelletized and flowable limestone. Of these three, ground limestone is the type used most widely. Crystalline calcium carbonate (CaCO3), one type of ground limestone, is termed calcitic limestone. Dolomitic limestone, another ground-limestone product, comes from ground rock containing calcium-magnesium carbonate (CaMg[CO3]2) and has a higher neutralizing value than calcitic limestone. Dolomitic limestone not only raises pH but also can supply magnesium in soils that are deficient. Although ground limestone is the most inexpensive source, it is dusty and not as easy to spread as the pelletized form.
Pelletized limestone is ground limestone (either calcitic or dolomitic) that has been aggregated into larger particles to facilitate spreading and reduce dust. The pellets quickly disintegrate when wet.
Flowable limestone is available for use on turf when you need to use a liquid application. Although liquid applications are dust-free and uniform, you only can apply relatively small amounts at one time, and lime-spray suspensions may be abrasive to sprayer parts.
Hydrated (slaked) lime [calcium hydroxide, Ca(OH)2] and burned lime (quicklime—calcium oxide, CaO) provide a rapid pH change but can be phytotoxic. These products are corrosive and difficult to handle.
As you might expect, sources of limestone vary in quality and effectiveness. Even two pelletized limestones made by different companies may vary in their ability to neutralize soil. To get the best bargain when purchasing lime products, look for quality, not just the lowest price. Two main factors govern the quality of a liming material: purity and fineness.
► Purity. Most lime recommendations assume you will use liming materials that have the same neutralizing potential as pure calcium carbonate. In other words, if your soil-test report recommends that you apply 50 pounds of limestone per 1,000 square feet, it assumes you will use a lime source that will raise soil pH to the same extent as 50 pounds of pure calcium carbonate at the same rate. A liming material with the same neutralizing potential as pure calcium carbonate has a calcium carbonate equivalent (CCE) of 100 percent.
You should adjust the recommended rate of any liming material with a CCE of less than or greater than 100 percent (see “CCE of liming materials,” above) so that you apply the right amount of material to raise your soil pH to the target level (see “Calcium carbonate equivalent [CCE] and liming rates,” page 14). Generally, because of impurities such as clay, the neutralizing value of most agricultural limestones is 90 to 98 percent. Most states require that agricultural liming materials state their CCE on the label.
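In practice, the adjustment works out to scaling the soil-test recommendation by 100 divided by the product's CCE. Here is a minimal sketch, assuming a hypothetical 50-pound recommendation and a 90 percent CCE product; the numbers are examples, not values from any particular label or lab report.

```python
def adjusted_lime_rate(recommended_lbs_per_1000_sq_ft, cce_percent):
    """Scale a lime recommendation (which assumes 100 percent CCE)
    to the calcium carbonate equivalent of the product actually used."""
    return recommended_lbs_per_1000_sq_ft * (100.0 / cce_percent)

# Example: a report calls for 50 lb of limestone per 1,000 sq ft.
# With a ground limestone labeled at 90 percent CCE, apply about 55.6 lb
# per 1,000 sq ft to get the same neutralizing effect.
print(round(adjusted_lime_rate(50, 90), 1))  # 55.6
```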
►Fineness. Any effective liming material should be finely ground. This is important because the rate at which limestone raises pH increases with the fineness of the particles. Plus, limestone affects only the small volume of soil surrounding each limestone particle. A given volume of limestone contains more particles if it is finely ground and thus affects more soil than coarser limestone. Many states govern the sizes of limestone particles in pelletized lime and agricultural ground limestone. Manufacturers usually print the actual range of particle sizes on the label. However, you will generally find little advantage in using material much finer than these minimum standards.
►How and when to apply limestone. Lime will neutralize soil acidity and benefit turf growth faster if you incorporate it directly into the soil. You can incorporate lime by spreading a layer on the soil surface following a rough grading, then mixing the lime 4 to 6 inches into the soil with rotary tilling equipment. Not only does this practice distribute the lime throughout the entire root zone, you can apply much more in a single application than with a surface application. Often, you can supply the entire lime requirement in a single application during establishment, whereas several surface applications may be necessary on established turf or landscape beds.
A means of incorporating lime in established turf is through core aeration. If your soil-test report indicates that an area about to undergo renovation requires liming, apply the recommended amount of lime (along with any needed phosphorus and potassium) after herbicide treatment and thatch removal, and just before or just after aeration. As you aerate and drag the area, some of the lime/soil mix will fall into the aeration holes and some will remain on the soil surface. The more vigorous the aeration treatment, the better the lime will mix with the soil.
Established turfgrass areas should not receive more than 25 to 50 pounds of limestone per 1,000 square feet in any single surface application. If you use hydrated or burned lime, apply no more than 25 pounds per 1,000 square feet in a single application. The main reason for this is to ensure that a layer of excess residue does not remain on or near the surface after watering or, in the case of hydrated or burned lime, that plant injury does not occur. If a soil requires more limestone than you can apply at one time, use semiannual applications until you meet the requirement.
Ground limestone sometimes is difficult to spread with conventional spreaders. However, pelletized limestone spreads easily with conventional drop or broadcast spreaders. For large areas, commercial spreader trucks are available for custom spreading. You can apply ground limestone anytime during the year, but it is most effective in the fall or winter because rain, snow and frost heaving help work limestone into the soil.
• Lowering soil pH—acidification. Soils often need acidification in semiarid and arid regions or when you’ve applied excess lime. Plus, golf-course superintendents sometimes apply acidifying materials to their greens as a means of managing certain diseases. They accomplish this by applying ammonium-containing fertilizers such as ammonium sulfate, by applying elemental sulfur, or by injecting sulfuric acid into their irrigation systems.
Ammonium-containing fertilizers are effective for lowering soil pH when you need only slight acidification over an extended period. In the Northeastern United States, some golf-course superintendents use ammonium sulfate to lower the pH of putting greens affected by take-all patch and summer patch diseases. While this practice is effective in some cases, take care to avoid foliar burn and over-stimulation of turf with nitrogen. To avoid burning, make the applications during cool weather (spring and fall) at low rates. When using this approach for disease management, you should monitor soil-pH levels frequently to avoid nutrition and thatch problems caused by low pH.
If you require greater and more rapid acidification, you can use high-sulfur-content products. When you apply sulfur to soil, soil-borne bacteria convert it to sulfuric acid, thereby lowering soil pH. Powdered elemental sulfur typically is yellow and fairly pure (greater than 90 percent sulfur). As with lime, sulfur is more effective in a finely ground state. Several sulfur products are available in powder form but, as such, are dusty and not easy to apply with spreaders. You also can obtain sulfur in pelletized form (90 percent powdered sulfur and 10 percent bentonite clay). This is easy to spread with conventional fertilizer spreaders and quickly breaks down into the powdered form when moist. If you want to apply sulfur as a liquid, flowable forms also are available.
The best time to apply sulfur is before establishment. Applying sulfur directly to the soil surface and then tilling it in puts the sulfur in direct contact with soil microbes and distributes it throughout the entire root zone. Incorporating sulfur before planting also allows you to use greater amounts than are possible with surface applications on established turf.
Generally, sandy soils require smaller amounts of sulfur to lower pH than fine-textured soils. For example, lowering the pH of a 6-inch-deep layer of sandy soil from 8.0 to 6.5 requires 27.5 pounds of sulfur per 1,000 square feet. However, a clay soil needs 45.9 pounds of sulfur per 1,000 square feet for the same adjustment.
Established turf generally requires frequent applications of sulfur at relatively low rates to lower pH. On putting greens, applications normally are around 0.5 pound sulfur per 1,000 square feet and should not exceed about 2.3 pounds per 1,000 square feet per year. You can double these rates on high-cut turf if you apply the product in cool weather. Remember, excessive sulfur can injure turf, especially in hot and humid weather.
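To see why these ceilings matter for planning, the sketch below estimates how long it would take to deliver a full sulfur requirement on established turf without exceeding either the per-application or the annual limit. It uses the figures quoted above for a sandy soil and a putting green; the function and the simple ceiling-based schedule are my own simplification, not a procedure from the article or a soil lab.

```python
import math

def sulfur_schedule(total_lbs_per_1000_sq_ft, lbs_per_application, max_lbs_per_year):
    """Number of applications and years needed to apply a total sulfur
    requirement without exceeding the per-application or annual caps."""
    applications = math.ceil(total_lbs_per_1000_sq_ft / lbs_per_application)
    years = math.ceil(total_lbs_per_1000_sq_ft / max_lbs_per_year)
    return applications, years

# Example: 27.5 lb of sulfur per 1,000 sq ft (sandy soil, pH 8.0 to 6.5)
# at 0.5 lb per application and 2.3 lb per year works out to 55
# applications spread over 12 years -- one more reason to make large
# pH corrections before establishment.
print(sulfur_schedule(27.5, 0.5, 2.3))  # (55, 12)
```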
To determine if your sulfur applications are having the desired effect on pH, monitor your soil with laboratory tests. Make sure that you test the surface soil (upper 0.5 to 1 inch) separately because most of the sulfur you apply to established turf will remain and react near the soil surface. This possibly can create highly acidic conditions in the top 0.5 to 1 inch of the soil.
In recent years, some golf courses in the Southwestern United States have used sulfuric-acid irrigation-system injections to acidify soil. At least one system uses pH electrodes and a computer to maintain water pH at a constant 6.5. If the pH falls outside the operating range, the system automatically shuts down. With innovations such as these, acidification of soils with acid injection undoubtedly will become more common in the near future.
You can determine soil pH with one of several types of soil tests. However, not all soil tests provide accurate information about how much lime or acidifier you should apply. Test kits using dyes, pH pens or pH paper determine pH rapidly in the field. The least accurate means of determining soil pH is with pH paper, but it can be useful in obtaining an approximate value. While each of these tests can provide a fair indication of soil pH and tell you if you need lime, they do not provide accurate information on how much lime you should apply. The table at left gives amounts of material needed to raise and lower pH. These figures are only approximate—consult a soil lab before undertaking pH modifications.
Commercial and university testing labs accurately determine pH values for soils over a range of pH values. They also provide meaningful lime recommendations for acid soils. They base their lime recommendation on a lime-requirement test that tells you how much lime is necessary to bring the soil to an optimum pH. The lime-requirement test takes the buffering capacity of a soil into account to provide buffer pH. Regarding pH amendments, buffer pH is more important than active pH.
Each lab bases its lime recommendations on what it considers to be the optimal pH for the turf or ornamentals you’re growing. Before submitting your soil samples, realize that labs differ in what they consider the optimum pH ranges for turfgrasses and ornamentals. This is why lime recommendations vary from one lab to another. The best way to deal with this is to choose a lab whose recommendations make sense to you and then stick with that lab for future testing to maintain consistency.
TESTING, SAMPLING AND SOIL LABORATORIES
Soil laboratories are necessary to provide accurate analysis and meaningful recommendations. Many kits and test methods—some of which we mentioned earlier—are available that allow you to conduct crude analyses for various nutrients, as well as pH, texture, density and other factors. However, you should consider these only rough indicators of soil quality. A laboratory analysis is necessary for you to get a good grasp of your soil’s condition.
For small landscapes, the cost of testing may not be justified unless serious problems are occurring. However, for larger landscapes and golf courses, the cost of testing is trivial compared to the benefits. The information labs provide allows you to take the appropriate management steps to maximize plant growth. Otherwise, you’re just guessing at how much and what type of material to apply if you wish to amend soils.
The results from any kit or lab are only as good as the sample taken. Therefore, ensure that you follow instructions on the soil-test form. Pay particular attention to the suggested number of subsamples per unit area, sampling pattern, sampling depth, mixing procedure and whether to include thatch as part of the sample. Take care not to contaminate the sample with fertilizer, lime or any other substance that may influence results.
© 2013 Penton Media Inc. | http://grounds-mag.com/mag/chapter_2_soils/ | 13 |
11 | A lumpy bubble of hot gas rises from a cauldron of glowing matter in a distant galaxy, as seen by NASA's Hubble Space Telescope.

The new images, taken by Hubble's Wide Field and Planetary Camera 2, are online at http://oposite.stsci.edu/pubinfo/pr/2001/28 and http://www.jpl.nasa.gov/images/wfpc. The camera was designed and built by NASA's Jet Propulsion Laboratory, Pasadena, Calif.

Galaxy NGC 3079, located 50 million light-years from Earth in the constellation Ursa Major, has a huge bubble in the center of its disc, as seen in the image on the left. The smaller photo at right shows a close-up of the bubble. The two white dots are stars.

Astronomers suspect the bubble is being blown by "winds," or high-speed streams of particles, released during a burst of star formation. The bubble's lumpy surface has four columns of gaseous filaments towering above the galaxy's disc. The filaments whirl around in a vortex and are expelled into space. Eventually, this gas will rain down on the disc and may collide with gas clouds, compress them and form a new generation of stars.

Theoretical models indicate the bubble formed when winds from hot stars mixed with small bubbles of hot gas from supernova explosions. Radio telescope observations indicate those processes are still active. Eventually, the hot stars will die, and the bubble's energy source will fade away.

The images, taken in 1998, show glowing gas as red and starlight as blue/green. Results appear in the July 1, 2001 issue of the Astrophysical Journal. More information about the Hubble Space Telescope is at http://www.stsci.edu. More information about the Wide Field and Planetary Camera 2 is at http://wfpc2.jpl.nasa.gov.

The Space Telescope Science Institute, Baltimore, Md., manages space operations for Hubble for NASA's Office of Space Science, Washington, D.C. The institute is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract with the Goddard Space Flight Center, Greenbelt, Md. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. JPL is a division of the California Institute of Technology in | http://www.jpl.nasa.gov/news/releases/2001/release_2001_170.html | 13
11 | - Thinking & Reasoning
- Social & Civic Responsibility
1 class period that runs 90 minutes.
Students will participate in a simulation in which they do not have the same civil rights as others. They will also view and discuss video of the classroom of Jane Elliot, the Iowa teacher who first pioneered this simulation.
- Students will experience what discrimination "feels like" and the need for the protections and privileges guaranteed by the Bill of Rights.
- What would it feel like to be part of a group that is denied their rights?
- What rights and liberties are offered to individuals and groups by the Bill of Rights?
- How does the Bill of Rights promote civil rights and protect diversity?
Main Curriculum Tie:
Social Studies - U.S. Government & Citizenship
Standard 2 Objective 1
Assess the freedoms and rights guaranteed in the United States Constitution.
- Construction paper (blue and brown) collars or large (so they can be seen across the room) tags for each student.
- Pins or construction paper to attach the collars or tags to student clothing.
- Bill of Rights Worksheet.
- Selected video of Jane Elliot's simulation (see bibliography for videos to consider).
- Venn diagram graphic organizer
Background For Teachers:
The following website gives an excellent overview of the work of Jane Elliot and the blue-eyed/brown-eyed experiment.
Student Prior Knowledge:
Basic knowledge of U.S. History and the protections offered by the Bill of Rights.
Intended Learning Outcomes:
Students will understand:
- What it feels like to be part of a group that is denied their rights.
- The rights and liberties offered to individuals and groups by the Bill of Rights.
- How the Bill of Rights promotes civil rights and protects diversity.
To conduct this lesson follow these steps:
- Prepare blue and brown construction paper collars or large tags for your class. Depending on the racial/ethnic makeup of your community, you may need more of one color than another- be prepared with extra collars.
- Arrange the chairs or tables in your room so that there are two clearly defined sides of the room.
- As your students come into the room, meet them at the door and look at their eye color. If their eyes are blue or green, give them a blue collar or tag and direct them to one side of the room. If they are brown or hazel, give them a brown collar and direct them to the other.
- Tell your students that through your research and personal experience you have come to some interesting conclusions lately. You have found that student success can actually be predicted by eye color. You have found that students with blue or brown eyes (choose your eye color) are more prone to success. Therefore, they should be treated differently. Make up some rules for how you want to conduct this simulation. Be sure to treat the two groups differently. Be very supportive of the favored group. Call on them first. Give them lots of praise for everything they say or do. Ignore the other group. If you do call on them, be critical of everything they say. Give the favored group extra privileges, such as unlimited access to the bathroom or drinking fountain. You might consider giving treats to the two groups- if you do this give more to the favored group. Tell the favored group not to waste their time talking to the other group because, honestly, anything they might have to say wouldn’t be very intelligent. Practice, so that you can give these instructions with a straight face.
- Have students “vote” on whether they would like to work together or independently on the next part of the lesson. Only let the favored group vote, because the other group probably doesn’t understand what they are voting for, anyway. Chances are that your preferred students will vote to work together.
- Pass out the Bill of Rights worksheet or a similar assignment. Either do not let the disadvantaged students work together, or let them, but then almost immediately take the privilege away because they obviously can’t handle it. Sometime during the work time, accuse one of the students in the disadvantaged group of some infringement of class or school rules. Punish them for the crime without giving them a chance to defend themselves.
- While students work on the assignment, give lots of praise and attention to the favored group. Ignore or belittle the other group.
- After about twenty minutes (no longer), end the simulation. Pull the class back together and ask them to take off their collars and go back to their regular seats.
- De-brief with your class about the experience they just had. First of all, tell them that this has just been an exercise, and that everything you told them about the connection between eye color and intelligence or achievement was untrue. Then discuss their feelings. How did the people in the dominant group feel? How did the people in the disadvantaged group feel?
- Show the class a segment from one of the videos about Jane Elliot’s experience with this simulation. Have them watch especially for how participant’s attitudes changed as the simulation proceeded. How did Elliot’s subjects experiences compare with their experiences in your class?
- Next, guide the class in a discussion of how the experience they just had relates to U.S. History or current events. What groups in U.S. History have faced discrimination similar to that we exhibited in class?
- Review with the class the rights protected by the U.S. Bill of Rights. Have them complete a Venn diagram comparing their experiences in class with that of disadvantaged Americans before the Bill of Rights. Discuss their conclusions.
This lesson is best used as part of a unit on the protections and privileges of individuals and groups in the United States (U.S. Government 6210-02). Other activities in this unit (included on UEN) might include the Shadow of Hate, Five Senses activities, and the Bill of Rights in the News/Big Questions.
- Classroom discussion regarding activity and how it relates to the need for the U.S. Bill of Rights.
- Venn diagram comparing their experience with that of disadvantaged groups in the U.S. before the Bill of Rights.
Bibliography:
"Prejudice: Answering Children's Questions", ABC News. Peter Jennings conducts a discussion with a group of elementary through high school aged students on prejudice and racism. Jane Elliot conducts a version of her famous 1968 experiment on the audience as part of the discussion (use selected segments).
"Eye of the Storm" - This video explores the nature of prejudice in a dramatic third-grade experiment in the small, nearly all-white, all-Christian farming community of Riceville, Iowa.
"A Class Divided" - This FRONTLINE reunites Jane Elliot and her class after 15 years to relate the enduring effects of their lesson.
"Blue Eyed" (1995). A diverse group of 40 public employees from the Midwest—blacks, Hispanics, whites, women and men—are subjected to Elliott's withering regime. Jane Elliott treats them according to negative traits that are commonly assigned to people of color, women, people with disabilities, and other non-dominant members of society. She later describes, with great emotion, how her family has been harassed and ostracized as a result of her efforts to educate white people about racism. (A study guide is available upon request)
Created Date:
Aug 05 2002 13:44 PM | http://www.uen.org/Lessonplan/preview.cgi?LPid=536 | 13 |
15 | The South has long been a region apart, even though it is not isolated by any formidable natural barriers and is itself subdivided into many distinctive areas: the coastal plains along the Atlantic Ocean and the Gulf of Mexico; the Piedmont; the ridges, valleys, and high mountains bordering the Piedmont, especially the Great Smoky Mts. in North Carolina and Tennessee; areas of bluegrass, black-soil prairies, and clay hills west of the mountains; bluffs, floodplains, bayous, and delta lands along the Mississippi River; and W of the Mississippi, the interior plains and the Ozark Plateau.
The humid subtropical climate, however, is one unifying factor. Winters are neither long nor very cold, and no month averages below freezing. The long, hot growing season (nine months at its peak along the Gulf) and the fertile soil (much of it overworked or ruined by erosion) have traditionally made the South an agricultural region where such staples as tobacco, rice, and sugarcane have long flourished; citrus fruits, livestock, soybeans, and timber have gained in importance. Cotton, once the region's dominant crop, is now mostly grown in Texas, the Southwest, and California.
Since World War II, the South has become increasingly industrialized. High-technology (such as aerospace and petrochemical) industries have boomed, and there has been impressive growth in the service, trade, and finance sectors. The chief cities of the South are Atlanta, New Orleans, Charlotte, Miami, Memphis, and Jacksonville.
From William Byrd (1674-1744) to William Faulkner and Toni Morrison, the South has always had a strong regional literature. Its principal subject has been the Civil War, reflected in song and poetry from Paul Hamilton Hayne to Allen Tate and in novels from Thomas Nelson Page to Margaret Mitchell.
The basic agricultural economy of the Old South, which was abetted by the climate and the soil, led to the introduction (1617) of Africans as a source of cheap labor under the twin institutions of the plantation and slavery. Slavery might well have expired had not the invention of the cotton gin (1793) given it a firmer hold, but even so there would have remained the problem of racial tension. Issues of race have been central to the history of the South. Slavery was known as the "peculiar institution" of the South and was protected by the Constitution of the United States.
The Missouri Compromise (1820-21) marked the rise of Southern sectionalism, rooted in the political doctrine of states' rights, with John C. Calhoun as its greatest advocate. When differences with the North, especially over the issue of the extension of slavery into the federal territories, ultimately appeared insoluble, the South turned (1860-61) the doctrine of states' rights into secession (or independence), which in turn led inevitably to the Civil War. Most of the major battles and campaigns of the war were fought in the South, and by the end of the war, with slavery abolished and most of the area in ruins, the Old South had died.

Reconstruction to World War II
The period of Reconstruction following the war set the South's political and social attitude for years to come. During this difficult time radical Republicans, African Americans, and so-called carpetbaggers and scalawags ruled the South with the support of federal troops. White Southerners, objecting to this rule, resorted to terrorism and violence and, with the aid of such organizations as the Ku Klux Klan, drove the Reconstruction governments from power. The breakdown of the plantation system during the Civil War gave rise to sharecropping, the tenant-farming system of agriculture that still exists in areas of the South. The last half of the 19th cent. saw the beginning of industrialization in the South, with the introduction of textile mills and various industries.
The troubled economic and political life of the region in the years between 1880 and World War II was marked by the rise of the Farmers' Alliance, Populism, and Jim Crow laws and by the careers of such Southerners as Tom Watson, Theodore Bilbo, Benjamin Tillman, and Huey Long. During the 1930s and 40s, thousands of blacks migrated from the South to Northern industrial cities.

The Contemporary South
Since World War II the South has experienced profound political, economic, and social change. Southern reaction to the policies of the New Deal, the Fair Deal, the New Frontier, and the Great Society caused the emergence of a genuine two-party system in the South. Many conservative Southern Democrats (such as Strom Thurmond) became Republicans because of disagreements over civil rights, the Vietnam War, and other issues. During the 1990s, Republican strength in the South increased substantially. After the 1994 elections, Republicans held a majority of the U.S. Senate and House seats from Southern states; Newt Gingrich, a Georgia Republican, became Speaker of the House.
During the 1950s and 60s the civil-rights movement, several key Supreme Court decisions, and federal legislation ended the legal segregation of public schools, universities, transportation, businesses, and other establishments in the South, and helped blacks achieve more adequate political representation. The process of integration was often met with bitter protest and violence. Patterns of residential segregation still exist in much of the South, as they do throughout the United States. The influx of new industries into the region after World War II made the economic life of the South more diversified and more similar to that of other regions of the United States.
The portions of the South included in the Sun Belt have experienced dramatic growth since the 1970s. Florida's population almost doubled between 1970 and 1990 and Georgia, North Carolina, and South Carolina have also grown considerably. Economically, the leading metropolitan areas of the South have become popular destinations for corporations seeking favorable tax rates, and the region's relatively low union membership has attracted both foreign and U.S. manufacturing companies. In the rural South, however, poverty, illiteracy, and poor health conditions often still predominate.
See works by C. Eaton, H. W. Odum, and U. B. Phillips; W. H. Stephenson and E. M. Coulter, ed., A History of the South (10 vol., 1947-73); F. B. Simkins and C. P. Roland, A History of the South (4th ed. 1972); C. V. Woodward, Origins of the New South (1971) and The Strange Career of Jim Crow (3d rev. ed. 1974); D. R. Goldfield, Cotton Fields and Skyscrapers (1982); E. and M. Black, Politics and Society in the South (1987); C. R. Wilson and W. Ferris, ed., The Encyclopedia of Southern Culture (1989); D. R. Goldfield, The South for New Southerners (1991).
The group burst on the international rock music scene in 1961. Their initial appeal derived as much from their wit, Edwardian clothes, and moplike haircuts as from their music. By 1963 they were the objects of wild adoration and were constantly followed by crowds of shrieking adolescent girls. By the late 1960s, "Beatlemania" had abated somewhat, and The Beatles were highly regarded by a broad spectrum of music lovers.
From 1963 to 1970 the group released 18 record albums that clearly document its musical development. The early recordings, such as Meet The Beatles (1964), are remarkable for their solid rhythms and excitingly rich, tight harmony. The middle albums, like Rubber Soul (1965) and Revolver (1966), evolved toward social commentary in their lyrics ("Eleanor Rigby," "Taxman") and introduced such instruments as the cello, trumpet, and sitar. In 1967, Sgt. Pepper's Lonely Hearts Club Band marked the beginning of The Beatles' final period, which is characterized by electronic techniques and allusive, drug-inspired lyrics. The group acted and sang in four films: A Hard Day's Night (1964), Help! (1965), Magical Mystery Tour (1968), and Let It Be (1970); all of these are outstanding for their exuberance, slapstick, and satire. They also were animated characters in the full-length cartoon, Yellow Submarine (1968). After they disbanded, all The Beatles continued to compose and record songs. In 1980, Lennon was shot to death by a fan, Mark Chapman. McCartney was knighted in 1997.
See John Lennon, In His Own Write (1964, repr. 2000); H. Davies, The Beatles (1968, repr. 1996); W. Mellers, Twilight of the Gods (1974); P. Norman, Shout! (1981); R. DiLello, The Longest Cocktail Party (1972, repr. 1983); T. Riley, Tell Me Why (1988); M. Lewisohn, The Beatles Recording Sessions (1988), The Beatles Day by Day (1990), and The Complete Beatles Chronicles (1992); I. MacDonald, Revolution in the Head (1994); M. Hertsgaard, A Day in the Life (1995); The Beatles Anthology (video, 1995; book, 2000); J. S. Wenner, ed., Lennon Remembers: The Rolling Stone Interviews (2000); B. Spitz, The Beatles: The Biography (2005).
In the center of the Arctic is a large basin occupied by the Arctic Ocean. The basin is nearly surrounded by the ancient continental shields of North America, Europe, and Asia, with the geologically more recent lowland plains, low plateaus, and mountain chains between them. Surface features vary from low coastal plains (swampy in summer, especially at the mouths of such rivers as the Mackenzie, Lena, Yenisei, and Ob) to high ice plateaus and glaciated mountains. Tundras, extensive flat and poorly drained lowlands, dominate the regions. The most notable highlands are the Brooks Range of Alaska, the Innuitians of the Canadian Arctic Archipelago, the Urals, and mountains of E Russia. Greenland, the world's largest island, is a high plateau covered by a vast ice sheet except in the coastal regions; smaller ice caps are found on other Arctic islands.
The climate of the Arctic, classified as polar, is characterized by long, cold winters and short, cool summers. Polar climate may be further subdivided into tundra climate (the warmest month of which has an average temperature below 50°F/10°C but above 32°F/0°C) and ice cap climate (all months average below 32°F/0°C, and there is a permanent snow cover). Precipitation, almost entirely in the form of snow, is very low, with the annual average precipitation for the regions less than 20 in. (51 cm). Persistent winds whip up fallen snow to create the illusion of constant snowfall. The climate is moderated by oceanic influences, with regions abutting the Atlantic and Pacific oceans having generally warmer temperatures and heavier snowfalls than the colder and drier interior areas. However, except along its fringe, the Arctic Ocean remains frozen throughout the year, although the extent of the summer ice has shrunk significantly since the early 1980s.
Great seasonal changes in the length of days and nights are experienced N of the Arctic Circle, with variations that range from 24 hours of constant daylight ("midnight sun") or darkness at the Arctic Circle to six months of daylight or darkness at the North Pole. However, because of the low angle of the sun above the horizon, insolation is minimal throughout the regions, even during the prolonged daylight period. A famous occurrence in the arctic night sky is the aurora borealis, or northern lights.
Vegetation in the Arctic, limited to regions having a tundra climate, flourishes during the short spring and summer seasons. The tundra's restrictive environment for plant life increases northward, with dwarf trees giving way to grasses (mainly mosses, lichen, sedges, and some flowering plants), the ground coverage of which becomes widely scattered toward the permanent snow line. There are about 20 species of land animals in the Arctic, including the squirrel, wolf, fox, moose, caribou, reindeer, polar bear, musk ox, and about six species of aquatic mammals such as the walrus, seal, and whale. Most of the species are year-round inhabitants of the Arctic, migrating to the southern margins as winter approaches. Although generally of large numbers, some of the species, especially the fur-bearing ones, are in danger of extinction. A variety of fish is found in arctic seas, rivers, and lakes. The Arctic's bird population increases tremendously each spring with the arrival of migratory birds (see migration of animals). During the short warm season, a large number of insects breed in the marshlands of the tundra.
In parts of the Arctic are found a variety of natural resources, but many known reserves are not exploited because of their inaccessibility. The arctic region of Russia, the most developed of all the arctic regions, is a vast storehouse of mineral wealth, including deposits of nickel, copper, coal, gold, uranium, tungsten, and diamonds. The North American Arctic yields uranium, copper, nickel, iron, natural gas, and oil. The arctic region of Europe (including W Russia) benefits from good overland links with southern areas and ship routes that are open throughout the year. The arctic regions of Asian Russia and North America depend on isolated overland routes, summertime ship routes, and air transportation. Transportation of oil by pipeline from arctic Alaska was highly controversial in the early 1970s, with strong opposition from environmentalists. Because of the extreme conditions of the Arctic, the delicate balance of nature, and the slowness of natural repairs, the protection and preservation of the Arctic have been major goals of conservationists, who fear irreparable damage to the natural environment from local temperature increases, the widespread use of machinery, the interference with wildlife migration, and oil spills. Global warming and the increasing reduction in the permanent ice cover on the Arctic Ocean have increased interest in the region's ocean resources.
The Arctic is one of the world's most sparsely populated areas. Its inhabitants, basically of Mongolic stock, are thought to be descendants of a people who migrated northward from central Asia after the ice age and subsequently spread W into Europe and E into North America. The chief groups are now the Lapps of Europe; the Samoyedes (Nentsy) of W Russia; the Yakuts, Tungus, Yukaghirs, and Chukchis of E Russia; and the Eskimo of North America. There is a sizable Caucasian population in Siberia, and the people of Iceland are nearly all Caucasian. In Greenland, the Greenlanders, a mixture of Eskimos and northern Europeans, predominate.
Because of their common background and the general lack of contact with other peoples, arctic peoples have strikingly similar physical characteristics and cultures, especially in such things as clothing, tools, techniques, and social organization. The arctic peoples, once totally nomadic, are now largely sedentary or seminomadic. Hunting, fishing, reindeer herding, and indigenous arts and crafts are the chief activities. The arctic peoples are slowly being incorporated into the society of the country in which they are located. With the Arctic's increased economic and political role in world affairs, the regions have experienced an influx of personnel charged with building and maintaining such things as roads, mineral extraction sites, weather stations, and military installations.
Many parts of the Arctic were already settled by the Eskimos and other peoples of Mongolic stock when the first European explorers, the Norsemen or Vikings, appeared in the region. Much later the search for the Northwest Passage and the Northeast Passage to reach Asia from Europe spurred exploration to the north. This activity began in the 16th cent. and continued in the 17th, but the hardships suffered and the negative results obtained by early explorers—among them Martin Frobisher, John Davis, Henry Hudson, William Baffin, and William Barentz—caused interest to wane. The fur traders in Canada did not begin serious explorations across the tundras until the latter part of the 18th cent. Alexander Mackenzie undertook extensive exploration after the beginnings made by Samuel Hearne, Philip Turnor, and others. Already in the region of NE Asia and W Alaska, the Russian explorations under Vitus Bering and others and the activities of the promyshlennyki [fur traders] had begun to make the arctic coasts known.
After 1815, British naval officers—including John Franklin, F. W. Beechey, John Ross, James Ross, W. E. Parry, P. W. Dease, Thomas Simpson, George Back, and John Rae—inspired by the efforts of John Barrow, took up the challenge of the Arctic. The disappearance of Franklin on his expedition between 1845 and 1848 gave rise to more than 40 searching parties. Although Franklin was not found, a great deal of knowledge was gained about the Arctic as a result, including the general outline of Canada's arctic coast.
Otto Sverdrup, D. B. MacMillan, and Vilhjalmur Stefansson added significant knowledge of the regions. Meanwhile, in the Eurasian Arctic, Franz Josef Land was discovered and Novaya Zemlya explored. The Northeast Passage was finally navigated in 1879 by Nils A. E. Nordenskjöld. Roald Amundsen, who went through the Northwest Passage (1903-6), also went through the Northeast Passage (1918-20). Greenland was also explored. Robert E. Peary reportedly won the race to be the first at the North Pole in 1909, but this claim is disputed. Although Fridtjof Nansen, drifting with his vessel Fram in the ice (1893-96), failed to reach the North Pole, he added enormously to the knowledge of the Arctic Ocean.
Air exploration of the regions began with the tragic balloon attempt of S. A. Andrée in 1897. In 1926, Richard E. Byrd and Floyd Bennett flew over the North Pole, and Amundsen and Lincoln Ellsworth flew from Svalbard (Spitsbergen) to Alaska across the North Pole and unexplored regions N of Alaska. In 1928, George Hubert Wilkins flew from Alaska to Spitsbergen. The use of the "great circle" route for world air travel increased the importance of the Arctic, while new ideas of the agricultural and other possibilities of arctic and subarctic regions led to many projects for development, especially in the USSR.
In 1937 and 1938 many field expeditions were sent out by British, Danish, Norwegian, Soviet, Canadian, and American groups to learn more about the Arctic. The Soviet group under Ivan Papanin wintered on an ice floe near the North Pole and drifted with the current for 274 days. Valuable hydrological, meteorological, and magnetic observations were made; by the time they were taken off the floe, the group had drifted 19° of latitude and 58° of longitude. Arctic drift was further explored (1937-40) by the Soviet icebreaker Sedov. Before World War II the USSR had established many meteorological and radio stations in the Arctic. Soviet activity in practical exploitation of resources also pointed the way to the development of arctic regions. Between 1940 and 1942 the Canadian vessel St. Roch made the first west-east journey through the Northwest Passage. In World War II, interest in transporting supplies gave rise to considerable study of arctic conditions.
After the war interest in the Arctic was keen. The Canadian army in 1946 undertook a project that had as one of its objects the testing of new machines (notably the snowmobile) for use in developing the Arctic. There was also a strong impulse to develop Alaska and N Canada, but no consolidated effort, like that of the Soviets, to take the natives into partnership for a full-scale development of the regions. Since 1954 the United States and Russia have established a number of drifting observation stations on ice floes for the purpose of intensified scientific observations. In 1955, as part of joint U.S.-Canadian defense, construction was begun on a c.3,000-mi (4,830-km) radar network (the Distant Early Warning line, commonly called the DEW line) stretching from Alaska to Greenland. As older radar stations were replaced and new ones built, a more sophisticated surveillance system developed. In 1993 the system, now stretching from NW Alaska to the coast of Newfoundland, was renamed the North Warning System.
With the continuing development of northern regions (e.g., Alaska, N Canada, and Russia), the Arctic has assumed greater importance in the world. During the International Geophysical Year (1957-58) more than 300 arctic stations were established by the northern countries interested in the arctic regions. Atomic-powered submarines have been used for penetrating the Arctic. In 1958 the Nautilus, a U.S. navy atomic-powered submarine, became the first ship to cross the North Pole undersea. Two years later the Skate set out on a similar voyage and became the first to surface at the Pole. In 1977 the Soviet nuclear icebreaker Arktika reached the North Pole, the first surface ship to do so.
In the 1960s the Arctic became the scene of an intense search for mineral and power resources. The discovery of oil on the Alaska North Slope (1968) and on Canada's Ellesmere Island (1972) led to a great effort to find new oil fields along the edges of the continents. In the summer of 1969 the SS Manhattan, a specially designed oil tanker with icebreaker and oceanographic research vessel features, successfully sailed from Philadelphia to Alaska by way of the Northwest Passage in the first attempt to bring commercial shipping into the region.
In 1971 the Arctic Ice Dynamics Joint Experiment (AIDJEX) began an international effort to study over a period of years arctic pack ice and its effect on world climate. In 1986 a seasonal "hole" in the ozone layer above the Arctic was discovered, showing some similarities to a larger depletion of ozone over the southern polar region; depletion of the ozone layer results in harmful levels of ultraviolet radiation reaching the earth from the sun. In the 21st cent. increased interest in the resources of the Arctic Ocean, prompted by a decrease in permanent ice cover due to global warming, has led to disputes among the Arctic nations over territorial claims. Practically all parts of the Arctic have now been photographed and scanned (by remote sensing devices) from aircraft and satellites. From these sources accurate maps of the Arctic have been compiled.
Classic narratives of arctic exploration include F. Nansen, In Northern Mists (tr. 1911); R. E. Amundsen, The North West Passage (tr., 2 vol., 1908); R. E. Peary, The North Pole (1910, repr. 1969); V. Stefansson, My Life with the Eskimo (1913) and The Friendly Arctic (1921).
For history and geography, see L. P. Kirwan, A History of Polar Exploration (1960); R. Thorén, Picture Atlas of the Arctic (1969); L. H. Neatby, Conquest of the Last Frontier (1966) and Discovery in Russian and Siberian Waters (1973); L. Rey et al., ed., Unveiling the Arctic (1984); F. Bruemmer and W. E. Taylor, The Arctic World (1987); R. McCormick, Voyages of Discovery in the Antarctic and Arctic Seas (1990); F. Fleming, Barrow's Boys (1998) and Ninety Degrees North: The Quest for the North Pole (2002); C. Officer and J. Page, A Fabulous Kingdom: The Exploration of the Arctic (2001).
The original Celtic inhabitants, converted to Christianity by St. Columba (6th cent.), were conquered by the Norwegians (starting in the 8th cent.). They held the Southern Islands, as they called them, until 1266. From that time the islands were formally held by the Scottish crown but were in fact ruled by various Scottish chieftains, with the Macdonalds asserting absolute rule after 1346 as lords of the isles. In the mid-18th cent. the Hebrides were incorporated into Scotland. The tales of Sir Walter Scott did much to make the islands famous. Emigration from the overpopulated islands occurred in the 20th cent., especially to Canada.
Although it has some industries (the manufacture of clothing, metal goods, printed materials, and food products), The Hague's economy revolves around government administration, which is centered there rather than in Amsterdam, the constitutional capital of the Netherlands. The Hague is the seat of the Dutch legislature, the Dutch supreme court, the International Court of Justice, and foreign embassies. The city is the headquarters of numerous companies, including the Royal Dutch Shell petroleum company. Also of economic importance are banking, insurance, and trade.
Among the numerous landmarks of The Hague is the Binnenhof, which grew out of the 13th-century palace and houses both chambers of the legislature; the Binnenhof contains the 13th-century Hall of Knights (Dutch Ridderzaal), where many historic meetings have been held. Nearby is the Gevangenenpoort, the 14th-century prison where Jan de Witt and Cornelius de Witt were murdered in 1672. The Mauritshuis, a 17th-century structure built as a private residence for John Maurice of Nassau, is an art museum and contains several of the greatest works of Rembrandt and Vermeer.
The Peace Palace (Dutch Vredespaleis), which was financed by Andrew Carnegie and opened in 1913, houses the Permanent Court of Arbitration and, since 1945, the International Court of Justice. Among the other notable buildings are the former royal palace; the Groote Kerk, a Gothic church (15th-16th cent.); the Nieuwe Kerk, containing Spinoza's tomb; the 16th-century town hall; and the Netherlands Conference Center (1969). Educational institutions in The Hague include schools of music and international law. Northwest of the city is Scheveningen, a popular North Sea resort and a fishing port.
The Hague was (13th cent.) the site of a hunting lodge of the counts of Holland ('s Gravenhage means "the count's hedge"). William, count of Holland, began (c.1250) the construction of a palace, around which a town grew in the 14th and 15th cent. In 1586 the States-General of the United Provs. of the Netherlands convened in The Hague, which later (17th cent.) became the residence of the stadtholders and the capital of the Dutch republic. In the 17th cent., The Hague rose to be one of the chief diplomatic and intellectual centers of Europe. William III (William of Orange), stadtholder of Holland and other Dutch provinces as well as king of England (1689-1702), was born in The Hague.
In the early 19th cent., after Amsterdam had become the constitutional capital of the Netherlands, The Hague received its own charter from Louis Bonaparte. It was (1815-30) the alternative meeting place, with Brussels, of the legislature of the United Netherlands. The Dutch royal residence from 1815 to 1948, the city was greatly expanded and beautified in the mid-19th cent. by King William II. In 1899 the First Hague Conference met there on the initiative of Nicholas II of Russia; ever since, The Hague has been a center for the promotion of international justice and arbitration.
The smallest country on the continent of Africa, The Gambia comprises Saint Mary's Island (site of Banjul) and, on the adjacent mainland, a narrow strip never more than 30 mi (48 km) wide; this finger of land borders both banks of the Gambia River for c.200 mi (320 km) above its mouth. The river, which rises in Guinea and flows c.600 mi (970 km) to the Atlantic, is navigable throughout The Gambia and is the main transport artery. Along The Gambia's coast are fine sand beaches; inland is the swampy river valley, whose fertile alluvial soils support rice cultivation. Peanuts, the country's chief cash crop, and some grains are raised on higher land. The climate is tropical and fairly dry.
The Gambia's population consists primarily of Muslim ethnic groups; the Malinke (Mandinka) is the largest, followed by the Fulani (Fula), Wolof, Diola (Jola), and Soninke (Serahuli). Almost a tenth of the population is Christian. English is the official language, but a number of African dialects are widely spoken. During the sowing and reaping seasons migrants from Senegal and Guinea also come to work in the country.
Despite attempts at diversification, The Gambia's economy remains overwhelmingly dependent on the export of peanuts and their byproducts and the re-exporting of imported foreign goods to other African nations. About three quarters of the population is employed in agriculture. Rice, millet, sorghum, corn, and cassava are grown for subsistence, and cattle, sheep, and goats are raised. There is also a fishing industry. The main industrial activities center around the processing of agricultural products and some light manufacturing. Tourism, which suffered following the 1994 military takeover, rebounded in the late 1990s. Besides peanut products, dried and smoked fish, cotton lint, palm kernels, and hides and skins are exported; foodstuffs, manufactures, fuel, machinery, and transportation equipment are imported. India, Great Britain, China, and Senegal are the country's leading trading partners. The Gambia is one of the world's poorest nations and relies heavily on foreign aid.
The Gambia is governed under the constitution of 1997. The president, who is both head of state and head of government, is popularly elected for a five-year term; there are no term limits. The unicameral legislature consists of a 53-seat National Assembly whose members also serve five-year terms; 48 members are elected and 5 are appointed by the president. Administratively, The Gambia is made up of five divisions and the capital city.
Portuguese explorers reaching the Gambia region in the mid-15th cent. reported a group of small Malinke and Wolof states that were tributary to the empire of Mali. The English won trading rights from the Portuguese in 1588, but their hold was weak until the early 17th cent., when British merchant companies obtained trading charters and founded settlements along the Gambia River. In 1816 the British purchased Saint Mary's Island from a local chief and established Banjul (called Bathurst until 1973) as a base against the slave trade. The city remained a colonial backwater under the administration of Sierra Leone until 1843, when it became a separate crown colony. Between 1866 and 1888 it was again governed from Sierra Leone. As the French extended their rule over Senegal's interior, they sought control over Britain's Gambia River settlements but failed during negotiations to offer Britain acceptable territory in compensation. In 1889, The Gambia's boundaries were defined, and in 1894 the interior was declared a British protectorate. The whole of the country came under British rule in 1902 and that same year a system of government was initiated in which chiefs supervised by British colonial commissioners ruled a variety of localities. In 1906 slavery in the colony was ended.
The Gambia continued the system of local rule under British supervision until after World War II, when Britain began to encourage a greater measure of self-government and to train some Gambians for administrative positions. By the mid-1950s a legislative council had been formed, with members elected by the Gambian people, and a system had been initiated wherein appointed Gambian ministers worked along with British officials. The Gambia achieved full self-government in 1963 and independence in 1965 under Dauda Kairaba Jawara and the People's Progressive party (PPP), made up of the predominant Malinke ethnic group. Following a referendum in 1970, The Gambia became a republic in the Commonwealth of Nations. In contrast to many other new African states, The Gambia preserved democracy and remarkable political stability in its early years of independence.
Since the mid-1970s large numbers of Gambians have migrated from rural to urban areas, resulting in high urban unemployment and overburdened services. The PPP demonstrated an interest in expanding the agricultural sector, but droughts in the late 1970s and early 1980s prompted a serious decline in agricultural production and a rise in inflation. In 1978, The Gambia entered into an agreement with Senegal to develop the Gambia River and its basin. Improvements in infrastructure and a heightened popular interest by outsiders in the country (largely because of the popularity of Alex Haley's novel Roots, set partially in The Gambia) helped spur a threefold increase in tourism between 1978 and 1988.
The Gambia was shaken in 1981 by a coup attempt by junior-ranking soldiers; it was put down with the intervention of Senegalese troops. In 1982, The Gambia and Senegal formed a confederation, while maintaining individual sovereignty; by 1989, however, popular opposition and minor diplomatic problems led to the withdrawal of Senegalese troops and the dissolution of Senegambia. In July, 1994, Jawara was overthrown in a bloodless coup and Yahya Jammeh assumed power as chairman of the armed forces and head of state.
Jammeh survived an attempted countercoup in Nov., 1994, and won the presidential elections of Sept., 1996, from which the major opposition leaders effectively had been banned. Only in 2001, in advance of new presidential elections, was the ban on political activities by the opposition parties lifted, and in Oct., 2001, Jammeh was reelected. The 2002 parliamentary elections, in which Jammeh's party won nearly all the seats, were boycotted by the main opposition party.
There was a dispute with Senegal in Aug.-Oct., 2005, over increased ferry charges across the Gambia River; the dispute led to a Senegalese ferry boycott and a blockade of overland transport through The Gambia, which hurt the part of Senegal S of The Gambia but also affected Gambian merchants. The Gambia subsequently reduced the charges. A coup plot led by the chief of defense staff was foiled in Mar., 2006. Jammeh was again reelected in Sept., 2006, but the opposition denounced and rejected the election as marred by intimidation. In the subsequent parliamentary elections (Jan., 2007), Jammeh's party again won all but a handful of the seats. Jammeh's rule has been marked by the often brutal treatment of real and perceived opponents.
See B. Rice, Enter Gambia (1968); H. B. Bachmann et al., Gambia: Basic Needs in The Gambia (1981); H. A. Gailey, Historical Dictionary of The Gambia (1987); D. P. Gamble, The Gambia (1988); F. Wilkins, Gambia (1988); M. F. McPherson and S. C. Radelet, ed., Economic Recovery in The Gambia (1996); D. R. Wright, The World and a Very Small Place in Africa (1997).
|Subfamily|Group|Subgroup|Languages and Principal Dialects|
|Anatolian| | |Hieroglyphic Hittite*, Hittite (Kanesian)*, Luwian*, Lycian*, Lydian*, Palaic*|
|Baltic| | |Latvian (Lettish), Lithuanian, Old Prussian*|
|Celtic|Brythonic| |Breton, Cornish, Welsh|
|Celtic|Goidelic or Gaelic| |Irish (Irish Gaelic), Manx*, Scottish Gaelic|
|Germanic|East Germanic| |Burgundian*, Gothic*, Vandalic*|
|Germanic|North Germanic| |Old Norse* (see Norse), Danish, Faeroese, Icelandic, Norwegian, Swedish|
|Germanic|West Germanic (see Grimm's law)|High German|German, Yiddish|
|Germanic|West Germanic (see Grimm's law)|Low German|Afrikaans, Dutch, English, Flemish, Frisian, Plattdeutsch (see German language)|
|Greek| | |Aeolic*, Arcadian*, Attic*, Byzantine Greek*, Cyprian*, Doric*, Ionic*, Koiné*, Modern Greek|
|Indo-Iranian|Dardic or Pisacha| |Kafiri, Kashmiri, Khowar, Kohistani, Romany (Gypsy), Shina|
|Indo-Iranian|Indic or Indo-Aryan| |Pali*, Prakrit*, Sanskrit*, Vedic*|
|Indo-Iranian|Indic or Indo-Aryan|Central Indic|Hindi, Hindustani, Urdu|
|Indo-Iranian|Indic or Indo-Aryan|East Indic|Assamese, Bengali (Bangla), Bihari, Oriya|
|Indo-Iranian|Indic or Indo-Aryan|Northwest Indic|Punjabi, Sindhi|
|Indo-Iranian|Indic or Indo-Aryan|Pahari|Central Pahari, Eastern Pahari (Nepali), Western Pahari|
|Indo-Iranian|Indic or Indo-Aryan|South Indic|Marathi (including major dialect Konkani), Sinhalese (Singhalese)|
|Indo-Iranian|Indic or Indo-Aryan|West Indic|Bhili, Gujarati, Rajasthani (many dialects)|
|Indo-Iranian|Iranian| |Avestan*, Old Persian*|
|Indo-Iranian|Iranian|East Iranian|Baluchi, Khwarazmian*, Ossetic, Pamir dialects, Pashto (Afghan), Saka (Khotanese)*, Sogdian*, Yaghnobi|
|Indo-Iranian|Iranian|West Iranian|Kurdish, Pahlavi (Middle Persian)*, Parthian*, Persian (Farsi), Tajiki|
|Italic|(Non-Romance)| |Faliscan*, Latin, Oscan*, Umbrian*|
|Italic|Romance or Romanic|Eastern Romance|Italian, Rhaeto-Romanic, Romanian, Sardinian|
|Italic|Romance or Romanic|Western Romance|Catalan, French, Ladino, Portuguese, Provençal, Spanish|
|Slavic or Slavonic|East Slavic| |Belarusian (White Russian), Russian, Ukrainian|
|Slavic or Slavonic|South Slavic| |Bulgarian, Church Slavonic*, Macedonian, Serbo-Croatian, Slovenian|
|Slavic or Slavonic|West Slavic| |Czech, Kashubian, Lusatian (Sorbian or Wendish), Polabian*, Polish, Slovak|
|Thraco-Illyrian| | |Albanian, Illyrian*, Thracian*|
|Thraco-Phrygian| | |Armenian, Grabar (Classical Armenian)*, Phrygian*|
|Tokharian (W China)| | |Tokharian A (Agnean)*, Tokharian B (Kuchean)*|
The Comoros comprises three main islands—Njazidja (or Ngazidja; also Grande Comore or Grand Comoros), on which Moroni is located; Nzwani (or Ndzouani; also Anjouan); and Mwali (also Mohéli)—as well as numerous coral reefs and islets. They are volcanic in origin, with interiors that vary from high peaks to low hills and coastlines that feature many sandy beaches. Njazidja is the site of an active volcano, Karthala, which, at 7,746 ft (2,361 m), is the islands' highest peak. The Comoros have a tropical climate with the year almost evenly divided between dry and rainy seasons; cyclones (hurricanes) are quite frequent. The islands once supported extensive rain forests, but most have been severely depleted.
The inhabitants are a mix mostly of African, Arab, Indian, and Malay ethnic strains. Sunni Muslims make up 98% of the population; there is a small Roman Catholic minority. Arabic and French are the official languages, and Comorian (or Shikomoro, a blend of Swahili and Arabic) is also spoken.
With few natural resources, poor soil, and overpopulation, the islands are one of the world's poorest nations. Some 80% of the people are involved in agriculture. Vanilla, ylang-ylang (used in perfumes), cloves, and copra are the major exports; coconuts, bananas, and cassava are also grown. Fishing, tourism, and perfume distillation are the main industries, and remittances from Comorans working abroad are an important source of revenue. Rice and other foodstuffs, consumer goods, petroleum products, and transportation equipment are imported. The country is heavily dependent on France for trade and foreign aid.
The Comoros is governed under the constitution of 2001. The president, who is head of state, is chosen from among the elected heads of the three main islands; the presidency rotates every five years. The government is headed by the prime minister, who is appointed by the president. The unicameral legislature consists of the 33-seat Assembly of the Union. Fifteen members are selected by the individual islands' local assemblies, and 18 are popularly elected. All serve five-year terms. Administratively, the country is divided into the three main islands and four municipalities.
The islands were populated by successive waves of immigrants from Africa, Indonesia, Madagascar, and Arabia. They were long under Arab influence, especially that of the Shirazi Arabs from Persia, who first arrived in A.D. 933. Portugal, France, and England staked claims in the Comoros in the 16th cent., but the islands remained under Arab domination. All of the islands were ceded to the French between 1841 and 1909. Occupied by the British during World War II, the islands were granted administrative autonomy within the French Union in 1946 and internal self-government in 1968. In 1975 three of the islands voted to become independent, while Mayotte chose to remain a French dependency.
Ahmed Abdallah Abderrahman was Comoros's first president. He was ousted in a 1976 coup, returned to power in a second coup in 1978, survived a coup attempt in 1983, and was assassinated in 1989. The nation's first democratic elections were held in 1990, and Saïd Mohamed Djohar was elected president. In 1991, Djohar was impeached and replaced by an interim president, but he returned to power with French backing. Multiparty elections in 1992 resulted in a legislative majority for the president and the creation of the office of prime minister.
Comoros joined the Arab League in 1993. A coup attempt in 1995 was suppressed by French troops. In 1996, Mohamed Taki Abdulkarim was elected president. In 1997, following years of economic decline, rebels took control of the islands of Nzwani and Mwali, declaring their secession and desire to return to French rule. The islands were granted greater autonomy in 1999, but voters on Nzwani endorsed independence in Jan., 2000, and rebels continued to control the island. Taki died in 1998 and was succeeded by Tadjiddine Ben Said Massounde. As violence spread to the main island, the Comoran military staged a coup in Apr., 1999, and Col. Azali Assoumani became president of the Comoros. An attempted coup in Mar., 2000, was foiled by the army.
Forces favoring reuniting with the Comoros seized power in Nzwani in 2001, and in December Comoran voters approved giving the three islands additional autonomy (and their own presidents) within a Comoran federation. Under the new constitution, the presidency of the Comoros Union rotates among the islands. In Jan., 2002, Azali resigned, and Prime Minister Hamada Madi also became interim president in the transitional government preparing for new elections. After two disputed elections (March and April), a commission declared Azali national president in May, 2002.
An accord in Dec., 2003, concerning the division of powers between the federal and island governments paved the way for legislative elections in 2004, in which parties favoring autonomy for the individual islands won a majority of the seats. The 2006 presidential election was won by Ahmed Abdallah Mohamed Sambi, a Sunni cleric regarded as a moderate Islamist.
In Apr., 2007, the president of Nzwani, Mohamed Bacar, refused to resign as required by the constitutional courts and used his police forces to retain power, holding an illegal election in June, after which he was declared the winner. The moves were denounced by the central government and the African Union, but the central government lacked the forces to dislodge Bacar. In Nov., 2007, the African Union began a naval blockade of Nzwani and imposed a travel ban on its government's officials. With support from African Union forces, Comoran troops landed on Nzwani in Mar., 2008, and reestablished federal control over the island. Bacar fled to neighboring Mayotte, then was taken to Réunion; in July he was flown to Benin. A referendum in May, 2009, approved a constitutional amendment to extend the president's term to five years and replace the islands' presidents with governors.
See World Bank, Comoros (1983); M. and H. Ottenheimer, Historical Dictionary of the Comoro Islands (1994).
The public information stored in the multitude of computer networks connected to the Internet forms a huge electronic library, but the enormous quantity of data and number of linked computer networks also make it difficult to find where the desired information resides and then to retrieve it. A number of progressively easier-to-use interfaces and tools have been developed to facilitate searching. Among these are search engines, such as Archie, Gopher, and WAIS (Wide Area Information Server), and a number of commercial, Web-based indexes, such as Google or Yahoo, which are programs that use a proprietary algorithm or other means to search a large collection of documents for keywords and return a list of documents containing one or more of the keywords. Telnet is a program that allows users of one computer to connect with another, distant computer in a different network. The File Transfer Protocol (FTP) is used to transfer information between computers in different networks. The greatest impetus to the popularization of the Internet came with the introduction of the World Wide Web (WWW), a hypertext system that makes browsing the Internet both fast and intuitive. Most e-commerce occurs over the Web, and most of the information on the Internet now is formatted for the Web, which has led Web-based indexes to eclipse the other Internet-wide search engines.
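The keyword lookup such indexes perform can be illustrated with a minimal sketch. The document collection, tokenizer, and matching rule below are simplified assumptions for illustration only; no real engine's proprietary algorithm is implied.

```python
# Minimal keyword-index sketch (illustrative only): builds an inverted
# index over a tiny, made-up document collection and returns documents
# containing one or more of the query's keywords.
from collections import defaultdict

documents = {
    "doc1": "The Arctic Ocean is covered by drifting pack ice.",
    "doc2": "The Gambia River is navigable throughout The Gambia.",
    "doc3": "Pack ice in the Arctic has thinned in recent decades.",
}

def tokenize(text):
    """Lowercase the text and split it into words, stripping punctuation."""
    return [word.strip(".,;:()").lower() for word in text.split()]

# Inverted index: keyword -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in tokenize(text):
        index[word].add(doc_id)

def search(query):
    """Return ids of documents that contain at least one query keyword."""
    hits = set()
    for word in tokenize(query):
        hits |= index.get(word, set())
    return sorted(hits)

print(search("arctic ice"))  # ['doc1', 'doc3']
print(search("Gambia"))      # ['doc2']
```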
Each computer that is directly connected to the Internet is uniquely identified by a 32-bit binary number called its IP address. This address is usually seen as a four-part decimal number, each part equating to 8 bits of the 32-bit address in the decimal range 0-255. Because an address of the form 126.96.36.199 could be difficult to remember, a system of Internet addresses, or domain names, was developed in the 1980s. An Internet address is translated into an IP address by a domain-name server, a program running on an Internet-connected computer.
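A short sketch of the structure just described, assuming Python and its standard library resolver: the four decimal parts each carry 8 bits of the 32-bit address, and the system's domain-name lookup translates a name into an address. The host name www.example.com is only an illustrative placeholder, and the final line requires a network connection.

```python
# Sketch of the 32-bit IPv4 address structure and of name resolution.
# Each dotted-decimal part carries 8 bits of the 32-bit address; the
# OS resolver (which consults a domain-name server) maps a name to an
# address. "www.example.com" is only an illustrative host name.
import socket

def dotted_to_int(address):
    """Pack a dotted-decimal IPv4 address into its 32-bit integer form."""
    parts = [int(p) for p in address.split(".")]
    assert len(parts) == 4 and all(0 <= p <= 255 for p in parts)
    return (parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]

def int_to_dotted(value):
    """Unpack a 32-bit integer back into dotted-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

n = dotted_to_int("192.0.2.1")
print(n, int_to_dotted(n))  # 3221225985 192.0.2.1

# Name-to-address translation via the system resolver (requires network).
print(socket.gethostbyname("www.example.com"))
```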
Reading from left to right, the parts of a domain name go from specific to general. For example, www.cms.hhs.gov is a World Wide Web site for the Centers for Medicare and Medicaid Services, which is part of the U.S. Health and Human Services Dept., which is a government agency. The rightmost part, or top-level domain (or suffix or zone), can be a two-letter abbreviation of the country in which the computer is in operation; more than 250 abbreviations, such as "ca" for Canada and "uk" for United Kingdom, have been assigned. Although such an abbreviation exists for the United States (us), it is more common for a site in the United States to use a generic top-level domain such as edu (educational institution), gov (government), or mil (military) or one of the four domains originally designated for open registration worldwide, com (commercial), int (international), net (network), or org (organization). In 2000 seven additional top-level domains (aero, biz, coop, info, museum, name, and pro) were approved for worldwide use, and other domains, including the regional domains asia and eu, have since been added. In 2008 new rules were adopted that would allow a top-level domain to be any group of letters, and the following year further rules changes permitted the use of other writing systems in addition to the Latin alphabet in domain names beginning in 2010.
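The right-to-left hierarchy can be read off mechanically. The sketch below reuses the www.cms.hhs.gov example from the text and assumes a small, deliberately incomplete list of generic top-level domains; it is a rough illustration, not a complete classifier (multi-part suffixes such as co.uk are not handled specially).

```python
# Reading a domain name from general (right) to specific (left): the
# rightmost label is the top-level domain, and each label to its left
# narrows the name. The gTLD set below is a small illustrative subset.
GENERIC_TLDS = {"com", "edu", "gov", "mil", "int", "net", "org",
                "aero", "biz", "coop", "info", "museum", "name", "pro"}

def describe(domain):
    """Classify the top-level domain and show the right-to-left hierarchy."""
    labels = domain.lower().split(".")
    tld = labels[-1]
    if tld in GENERIC_TLDS:
        kind = "generic top-level domain"
    elif len(tld) == 2:
        kind = "country-code top-level domain"
    else:
        kind = "other top-level domain"
    hierarchy = " -> ".join(reversed(labels))  # general to specific
    return f"{domain}: TLD '{tld}' ({kind}); hierarchy: {hierarchy}"

print(describe("www.cms.hhs.gov"))    # generic TLD 'gov'
print(describe("www.example.co.uk"))  # country-code TLD 'uk'
```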
The Internet evolved from a secret feasibility study conceived by the U.S. Dept. of Defense in 1969 to test methods of enabling computer networks to survive military attacks, by means of the dynamic rerouting of messages. As the ARPAnet (Advanced Research Projects Agency network), it began by connecting three networks in California with one in Utah—these communicated with one another by a set of rules called the Internet Protocol (IP). By 1972, when the ARPAnet was revealed to the public, it had grown to include about 50 universities and research organizations with defense contracts, and a year later the first international connections were established with networks in England and Norway.
A decade later, the Internet Protocol was enhanced with a set of communication protocols, the Transmission Control Protocol/Internet Protocol (TCP/IP), which supported both local and wide-area networks. Shortly thereafter, the National Science Foundation (NSF) created the NSFnet to link five supercomputer centers, and this, coupled with TCP/IP, soon supplanted the ARPAnet as the backbone of the Internet. In 1995 the NSF decommissioned the NSFnet, and responsibility for the Internet was assumed by the private sector. Progress toward the privatization of the Internet continued when the Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit U.S. corporation, assumed oversight responsibility for the domain name system in 1998 under an agreement with the U.S. Dept. of Commerce.
Fueled by the increasing popularity of personal computers, e-mail, and the World Wide Web (which was introduced in 1991 and saw explosive growth beginning in 1993), the Internet became a significant factor in the stock market and commerce during the second half of the 1990s. By 2000 it was estimated that the number of adults using the Internet exceeded 100 million in the United States alone. The increasing globalization of the Internet has led a number of nations to call for oversight and governance of the Internet to pass from the U.S. government and ICANN to an international body, but a 2005 international technology summit agreed to preserve the status quo while establishing an international forum for the discussion of Internet policy issues.
See B. P. Kehoe, Zen and the Art of the Internet: A Beginner's Guide (4th ed. 1995); B. Pomeroy, ed., Beginnernet: A Beginner's Guide to the Internet and the World Wide Web (1997); L. E. Hughes, Internet E-Mail: Protocols, Standards, and Implementation (1998); J. S. Gonzalez, The 21st Century Internet (1998); D. P. Dern, Internet Business Handbook: The Insider's Internet Guide (1999).
The Philippines extend 1,152 mi (1,855 km) from north to south, between Taiwan and Borneo, and 688 mi (1,108 km) from east to west, and are bounded by the Philippine Sea on the east, the Celebes Sea on the south, and the South China Sea on the west. They comprise three natural divisions—the northern, which includes Luzon and attendant islands; the central, occupied by the Visayan Islands and Palawan and Mindoro; and the southern, encompassing Mindanao and the Sulu Archipelago. In addition to Manila, other important centers are Quezon City, also on Luzon; Cebu, on Cebu Island; Iloilo, on Panay; Davao and Zamboanga, on Mindanao; and Jolo, on Jolo Island in the Sulu Archipelago.
The Philippines are chiefly of volcanic origin. Most of the larger islands are traversed by mountain ranges, with Mt. Apo (9,690 ft/2,954 m), on Mindanao, the highest peak. Narrow coastal plains, wide valleys, volcanoes, dense forests, and mineral and hot springs further characterize the larger islands. Earthquakes are common. Of the navigable rivers, Cagayan, on Luzon, is the largest; there are also large lakes on Luzon and Mindanao.
The Philippines are entirely within the tropical zone. Manila, with a mean daily temperature of 79.5°F (26.4°C), is typical of the climate of the lowland areas—hot and humid. The highlands, however, have a bracing climate; e.g., Baguio, the summer capital, on Luzon, has a mean annual temperature of 64°F (17.8°C). The islands are subject to typhoons, whose torrential rains can cause devastating floods; 5,000 people died on Leyte in 1991 from such flooding, and several storms in 2004 and 2006 caused deadly flooding and great destruction.
The great majority of the people of the Philippines belong to the Malay group and are known as Filipinos. Other groups include the Negritos (negroid pygmies) and the Dumagats (similar to the Papuans of New Guinea), and there is a small Chinese minority. The Filipinos live mostly in the lowlands and constitute one of the largest Christian groups in Asia. Roman Catholicism is professed by over 80% of the population; 5% are Muslims (concentrated on Mindanao and the Sulu Archipelago; see Moros); about 2% are Aglipayans, members of the Philippine Independent Church, a nationalistic offshoot of Catholicism (see Aglipay, Gregorio); and there are Protestant and Evangelical groups. The official languages are Pilipino, based on Tagalog, and English; however, some 70 native languages are also spoken.
With their tropical climate, heavy rainfall, and naturally fertile volcanic soil, the Philippines have a strong agricultural sector, which employs over a third of the population. Sugarcane, coconuts, rice, corn, bananas, cassava, pineapples, and mangoes are the most important crops, and tobacco and coffee are also grown. Carabao (water buffalo), pigs, chickens, goats, and ducks are widely raised, and there is dairy farming. Fishing is a common occupation; the Sulu Archipelago is noted for its pearls and mother-of-pearl.
The islands have one of the world's greatest stands of commercial timber. There are also mineral resources such as petroleum, nickel, cobalt, silver, gold, copper, zinc, chromite, and iron ore. Nonmetallic minerals include rock asphalt, gypsum, asbestos, sulfur, and coal. Limestone, adobe, and marble are quarried.
Manufacturing is concentrated in metropolitan Manila, near the nation's prime port, but there has been considerable industrial growth on Cebu, Negros, and Mindanao. Garments, footwear, pharmaceuticals, chemicals, and wood products are manufactured, and the assembly of electronics and automobiles is important. Other industries include food processing and petroleum refining. The former U.S. military base at Subic Bay was redeveloped in the 1990s as a free-trade zone.
The economy has nonetheless remained weak, and many Filipinos have sought employment overseas; remittances from an estimated 8 million Filipinos abroad are economically important. Chief exports are semiconductors, electronics, transportation equipment, clothing, copper, petroleum products, coconut oil, fruits, lumber and plywood, machinery, and sugar. The main imports are electronic products, mineral fuels, machinery, transportation equipment, iron and steel, textiles, grains, chemicals, and plastic. The chief trading partners are the United States, Japan, China, Singapore, Hong Kong, and Taiwan.
The Philippines is governed under the constitution of 1987. The president, who is both head of state and head of the government, is elected by popular vote for a single six-year term. There is a bicameral legislature, the Congress. Members of the 24-seat Senate are popularly elected for six-year terms. The House of Representatives consists of not more than 250 members, who are popularly elected for three-year terms. There is an independent judiciary headed by a supreme court. Administratively, the republic is divided into 79 provinces and 117 chartered cities.
The Negritos are believed to have migrated to the Philippines some 30,000 years ago from Borneo, Sumatra, and Malaya. The Malayans followed in successive waves. These people belonged to a primitive epoch of Malayan culture, which has apparently survived to this day among certain groups such as the Igorots. The Malayan tribes that came later had more highly developed material cultures.
In the 14th cent. Arab traders from Malay and Borneo introduced Islam into the southern islands and extended their influence as far north as Luzon. The first Europeans to visit (1521) the Philippines were those in the Spanish expedition around the world led by the Portuguese explorer Ferdinand Magellan. Other Spanish expeditions followed, including one from New Spain (Mexico) under López de Villalobos, who in 1542 named the islands for the infante Philip, later Philip II.

Spanish Control
The conquest of the Filipinos by Spain did not begin in earnest until 1564, when another expedition from New Spain, commanded by Miguel López de Legaspi, arrived. Spanish leadership was soon established over many small independent communities that previously had known no central rule. By 1571, when López de Legaspi established the Spanish city of Manila on the site of a Moro town he had conquered the year before, the Spanish foothold in the Philippines was secure, despite the opposition of the Portuguese, who were eager to maintain their monopoly on the trade of East Asia.
Manila repulsed the attack of the Chinese pirate Limahong in 1574. For centuries before the Spanish arrived the Chinese had traded with the Filipinos, but evidently none had settled permanently in the islands until after the conquest. Chinese trade and labor were of great importance in the early development of the Spanish colony, but the Chinese came to be feared and hated because of their increasing numbers, and in 1603 the Spanish murdered thousands of them (later, there were lesser massacres of the Chinese).
The Spanish governor, made a viceroy in 1589, ruled with the advice of the powerful royal audiencia. There were frequent uprisings by the Filipinos, who resented the encomienda system. By the end of the 16th cent. Manila had become a leading commercial center of East Asia, carrying on a flourishing trade with China, India, and the East Indies. The Philippines supplied some wealth (including gold) to Spain, and the richly laden galleons plying between the islands and New Spain were often attacked by English freebooters. There was also trouble from other quarters, and the period from 1600 to 1663 was marked by continual wars with the Dutch, who were laying the foundations of their rich empire in the East Indies, and with Moro pirates. One of the most difficult problems the Spanish faced was the subjugation of the Moros. Intermittent campaigns were conducted against them but without conclusive results until the middle of the 19th cent. As the power of the Spanish Empire waned, the Jesuit orders became more influential in the Philippines and acquired great amounts of property.

Revolution, War, and U.S. Control
It was the opposition to the power of the clergy that in large measure brought about the rising sentiment for independence. Spanish injustices, bigotry, and economic oppressions fed the movement, which was greatly inspired by the brilliant writings of José Rizal. In 1896 revolution began in the province of Cavite, and after the execution of Rizal that December, it spread throughout the major islands. The Filipino leader, Emilio Aguinaldo, achieved considerable success before a peace was patched up with Spain. The peace was short-lived, however, for neither side honored its agreements, and a new revolution was brewing when the Spanish-American War broke out in 1898.
After the U.S. naval victory in Manila Bay on May 1, 1898, Commodore George Dewey supplied Aguinaldo with arms and urged him to rally the Filipinos against the Spanish. By the time U.S. land forces had arrived, the Filipinos had taken the entire island of Luzon, except for the old walled city of Manila, which they were besieging. The Filipinos had also declared their independence and established a republic under the first democratic constitution ever known in Asia. Their dreams of independence were crushed when the Philippines were transferred from Spain to the United States in the Treaty of Paris (1898), which closed the Spanish-American War.
In Feb., 1899, Aguinaldo led a new revolt, this time against U.S. rule. Defeated on the battlefield, the Filipinos turned to guerrilla warfare, and their subjugation became a mammoth project for the United States—one that cost far more money and took far more lives than the Spanish-American War. The insurrection was effectively ended with the capture (1901) of Aguinaldo by Gen. Frederick Funston, but the question of Philippine independence remained a burning issue in the politics of both the United States and the islands. The matter was complicated by the growing economic ties between the two countries. Although comparatively little American capital was invested in island industries, U.S. trade bulked larger and larger until the Philippines became almost entirely dependent upon the American market. Free trade, established by an act of 1909, was expanded in 1913.
When the Democrats came into power in 1913, measures were taken to effect a smooth transition to self-rule. The Philippine assembly already had a popularly elected lower house, and the Jones Act, passed by the U.S. Congress in 1916, provided for a popularly elected upper house as well, with power to approve all appointments made by the governor-general. It also gave the islands their first definite pledge of independence, although no specific date was set.
When the Republicans regained power in 1921, the trend toward bringing Filipinos into the government was reversed. Gen. Leonard Wood, who was appointed governor-general, largely supplanted Filipino activities with a semimilitary rule. However, the advent of the Great Depression in the United States in the 1930s and the first aggressive moves by Japan in Asia (1931) shifted U.S. sentiment sharply toward the granting of immediate independence to the Philippines.

The Commonwealth
The Hare-Hawes-Cutting Act, passed by Congress in 1932, provided for complete independence of the islands in 1945 after 10 years of self-government under U.S. supervision. The bill had been drawn up with the aid of a commission from the Philippines, but Manuel L. Quezon, the leader of the dominant Nationalist party, opposed it, partially because of its threat of American tariffs against Philippine products but principally because of the provisions leaving naval bases in U.S. hands. Under his influence, the Philippine legislature rejected the bill. The Tydings-McDuffie Independence Act (1934) closely resembled the Hare-Hawes-Cutting Act, but struck the provisions for American bases and carried a promise of further study to correct "imperfections or inequalities."
The Philippine legislature ratified the bill; a constitution, approved by President Roosevelt (Mar., 1935) was accepted by the Philippine people in a plebiscite (May); and Quezon was elected the first president (Sept.). When Quezon was inaugurated on Nov. 15, 1935, the Commonwealth of the Philippines was formally established. Quezon was reelected in Nov., 1941. To develop defensive forces against possible aggression, Gen. Douglas MacArthur was brought to the islands as military adviser in 1935, and the following year he became field marshal of the Commonwealth army.

World War II
War came suddenly to the Philippines on Dec. 8 (Dec. 7, U.S. time), 1941, when Japan attacked without warning. Japanese troops invaded the islands in many places and launched a pincer drive on Manila. MacArthur's scattered defending forces (about 80,000 troops, four fifths of them Filipinos) were forced to withdraw to Bataan Peninsula and Corregidor Island, where they entrenched and tried to hold until the arrival of reinforcements, meanwhile guarding the entrance to Manila Bay and denying that important harbor to the Japanese. But no reinforcements were forthcoming. The Japanese occupied Manila on Jan. 2, 1942. MacArthur was ordered out by President Roosevelt and left for Australia on Mar. 11; Lt. Gen. Jonathan Wainwright assumed command.
The besieged U.S.-Filipino army on Bataan finally crumbled on Apr. 9, 1942. Wainwright fought on from Corregidor with a garrison of about 11,000 men; he was overwhelmed on May 6, 1942. After his capitulation, the Japanese forced the surrender of all remaining defending units in the islands by threatening to use the captured Bataan and Corregidor troops as hostages. Many individual soldiers refused to surrender, however, and guerrilla resistance, organized and coordinated by U.S. and Philippine army officers, continued throughout the Japanese occupation.
Japan's efforts to win Filipino loyalty found expression in the establishment (Oct. 14, 1943) of a "Philippine Republic," with José P. Laurel, former supreme court justice, as president. But the people suffered greatly from Japanese brutality, and the puppet government gained little support. Meanwhile, President Quezon, who had escaped with other high officials before the country fell, set up a government-in-exile in Washington. When he died (Aug., 1944), Vice President Sergio Osmeña became president. Osmeña returned to the Philippines with the first liberation forces, which surprised the Japanese by landing (Oct. 20, 1944) at Leyte, in the heart of the islands, after months of U.S. air strikes against Mindanao. The Philippine government was established at Tacloban, Leyte, on Oct. 23.
The landing was followed (Oct. 23-26) by the greatest naval engagement in history, called variously the battle of Leyte Gulf and the second battle of the Philippine Sea. A great U.S. victory, it effectively destroyed the Japanese fleet and opened the way for the recovery of all the islands. Luzon was invaded (Jan., 1945), and Manila was taken in February. On July 5, 1945, MacArthur announced "All the Philippines are now liberated." The Japanese had suffered over 425,000 dead in the Philippines.
The Philippine congress met on June 9, 1945, for the first time since its election in 1941. It faced enormous problems. The land was devastated by war, the economy destroyed, the country torn by political warfare and guerrilla violence. Osmeña's leadership was challenged (Jan., 1946) when one wing (now the Liberal party) of the Nationalist party nominated for president Manuel Roxas, who defeated Osmeña in April.

The Republic of the Philippines
Manuel Roxas became the first president of the Republic of the Philippines when independence was granted, as scheduled, on July 4, 1946. In Mar., 1947, the Philippines and the United States signed a military assistance pact (since renewed) and the Philippines gave the United States a 99-year lease on designated military, naval, and air bases (a later agreement reduced the period to 25 years beginning 1967). The sudden death of President Roxas in Apr., 1948, elevated the vice president, Elpidio Quirino, to the presidency, and in a bitterly contested election in Nov., 1949, Quirino defeated José Laurel to win a four-year term of his own.
The enormous task of reconstructing the war-torn country was complicated by the activities in central Luzon of the Communist-dominated Hukbalahap guerrillas (Huks), who resorted to terror and violence in their efforts to achieve land reform and gain political power. They were largely brought under control (1954) after a vigorous attack launched by the minister of national defense, Ramón Magsaysay. The Huks continued to function, however, until 1970, and other Communist guerrilla groups have persisted in their opposition to the Philippine government. Magsaysay defeated Quirino in Nov., 1953, to win the presidency. He had promised sweeping economic changes, and he did make progress in land reform, opening new settlements outside crowded Luzon island. His death in an airplane crash in Mar., 1957, was a serious blow to national morale. Vice President Carlos P. García succeeded him and won a full term as president in the elections of Nov., 1957.
In foreign affairs, the Philippines maintained a firm anti-Communist policy and joined the Southeast Asia Treaty Organization in 1954. There were difficulties with the United States over American military installations in the islands, and, despite formal recognition (1956) of full Philippine sovereignty over these bases, tensions increased until some of the bases were dismantled (1959) and the 99-year lease period was reduced. The United States rejected Philippine financial claims and proposed trade revisions.
Philippine opposition to García on issues of government corruption and anti-Americanism led, in June, 1959, to the union of the Liberal and Progressive parties, led by Vice President Diosdado Macapagal, the Liberal party leader, who succeeded García as president in the 1961 elections. Macapagal's administration was marked by efforts to combat the mounting inflation that had plagued the republic since its birth; by attempted alliances with neighboring countries; and by a territorial dispute with Britain over North Borneo (later Sabah), which Macapagal claimed had been leased and not sold to the British North Borneo Company in 1878.

Marcos and After
Ferdinand E. Marcos, who succeeded to the presidency after defeating Macapagal in the 1965 elections, inherited the territorial dispute over Sabah; in 1968 he approved a congressional bill annexing Sabah to the Philippines. Malaysia suspended diplomatic relations (Sabah had joined the Federation of Malaysia in 1963), and the matter was referred to the United Nations. (The Philippines dropped its claim to Sabah in 1978.) The Philippines became one of the founding countries of the Association of Southeast Asian Nations (ASEAN) in 1967. The continuing need for land reform fostered a new Huk uprising in central Luzon, accompanied by mounting assassinations and acts of terror, and in 1969, Marcos began a major military campaign to subdue them. Civil war also threatened on Mindanao, where groups of Moros opposed Christian settlement. In Nov., 1969, Marcos won an unprecedented reelection, easily defeating Sergio Osmeña, Jr., but the election was accompanied by violence and charges of fraud, and Marcos's second term began with increasing civil disorder.
In Jan., 1970, some 2,000 demonstrators tried to storm Malacañang Palace, the presidential residence; riots erupted against the U.S. embassy. When Pope Paul VI visited Manila in Nov., 1970, an attempt was made on his life. In 1971, at a Liberal party rally, hand grenades were thrown at the speakers' platform, and several people were killed. President Marcos declared martial law in Sept., 1972, charging that a Communist rebellion threatened, and opposition to Marcos's government did swell the ranks of Communist guerrilla groups, which continued to grow into the mid-1980s and continued on a smaller scale into the 21st cent. The 1935 constitution was replaced (1973) by a new one that provided the president with direct powers. A plebiscite (July, 1973) gave Marcos the right to remain in office beyond the expiration (Dec., 1973) of his term. Meanwhile the fighting on Mindanao had spread to the Sulu Archipelago. By 1973 some 3,000 people had been killed and hundreds of villages burned. Throughout the 1970s poverty and governmental corruption increased, and Imelda Marcos, Ferdinand's wife, became more influential.
Martial law remained in force until 1981, when Marcos was reelected, amid accusations of electoral fraud. On Aug. 21, 1983, opposition leader Benigno Aquino was assassinated at Manila airport, which incited a new, more powerful wave of anti-Marcos dissent. After the Feb., 1986, presidential election, both Marcos and his opponent, Corazon Aquino (the widow of Benigno), declared themselves the winner, and charges of massive fraud and violence were leveled against the Marcos faction. Marcos's domestic and international support eroded, and he fled the country on Feb. 25, 1986, eventually obtaining asylum in the United States.
Aquino's government faced mounting problems, including coup attempts, significant economic difficulties, and pressure to rid the Philippines of the U.S. military presence (the last U.S. bases were evacuated in 1992). In 1990, in response to the demands of the Moros, a partially autonomous Muslim region was created in the far south. In 1992, Aquino declined to run for reelection and was succeeded by her former army chief of staff Fidel Ramos. He immediately launched an economic revitalization plan premised on three policies: government deregulation, increased private investment, and political solutions to the continuing insurgencies within the country. His political program was somewhat successful, opening dialogues with the Communist and Muslim guerrillas. Although Muslim unrest and violence continued into the 21st cent., the government signed a peace accord with the Moro National Liberation Front (MNLF) in 1996, which led to an expansion of the autonomous region in 2001.
Several natural disasters, including the 1991 eruption of Mt. Pinatubo on Luzon and a succession of severe typhoons, slowed the country's economic progress in the 1990s. The Philippines, however, escaped much of the economic turmoil seen in other East Asian nations in 1997 and 1998, in part by following a slower pace of development imposed by the International Monetary Fund. Joseph Marcelo Estrada, a former movie actor, was elected president in 1998, pledging to help the poor and develop the country's agricultural sector. In 1999 he announced plans to amend the constitution in order to remove protectionist provisions and attract more foreign investment.
Late in 2000, Estrada's presidency was buffeted by charges that he accepted millions of dollars in payoffs from illegal gambling operations. Although his support among the poor Filipino majority remained strong, many political, business, and church leaders called for him to resign. In Nov., 2000, Estrada was impeached by the house of representatives on charges of graft, but the senate, controlled by Estrada's allies, provoked a crisis (Jan., 2001) when it rejected examining the president's bank records. As demonstrations against Estrada mounted and members of his cabinet resigned, the supreme court stripped him of the presidency, and Vice President Gloria Macapagal-Arroyo was sworn in as Estrada's successor. Estrada was indicted on charges of corruption in April, and his supporters attempted to storm the presidential palace in May. In Sept., 2007, he was convicted of corruption and sentenced to life imprisonment, but Estrada, who had been under house arrest since 2001, was pardoned the following month by President Macapagal-Arroyo.
A second Muslim rebel group, the Moro Islamic Liberation Front (MILF), agreed to a cease-fire in June, 2001, but fighting with fundamentalist Islamic guerrillas continued, and there was a MNLF uprising on Jolo in November. Following the Sept. 11, 2001, terrorist attacks on the United States, the U.S. government provided (2002) training and assistance to Philippine troops fighting the guerrillas. In 2003 fighting with the MILF again escalated, despite pledges by both sides that they would negotiate and exercise restraint; however, a truce was declared in July. In the same month several hundred soldiers were involved in a mutiny in Manila that the government claimed was part of a coup attempt.
Macapagal-Arroyo was elected president in her own right in May, 2004, but the balloting was marred by violence and irregularities as well as a tedious vote-counting process that was completed six weeks after the election. A series of four devastating storms during November and December killed as many as 1,000 in the country's north and east, particularly on Luzon. In early 2005 heavy fighting broke out on Mindanao between government forces and a splinter group of MILF rebels, and there was also fighting with a MNLF splinter group in Jolo.
In June, 2005, the president was beset by a vote-rigging charge based on a tape of a conversation she had with an election official. She denied the allegation while acknowledging that she had been recorded and apologizing for what she called a lapse in judgment, but the controversy combined with other scandals (including allegations that her husband and other family members had engaged in influence peddling and received bribes) to create a national crisis. Promising government reform, she asked (July) her cabinet to resign, and several cabinet members subsequently called on Macapagal-Arroyo to resign (as did Corazon Aquino). At the same time the supreme court suspended sales tax increases that had been enacted in May as part of a tax reform package designed to reduce the government's debt. In August and September the president survived an opposition move to impeach her when her opponents failed to muster the votes needed to force a trial in the senate.
In Feb., 2006, the government engaged in talks, regarded as a prelude to formal peace negotiations, with the MILF, and discussions between the two sides continued in subsequent months. Late in the month, President Macapagal-Arroyo declared a weeklong state of emergency when a coup plot against her was discovered. Intended to coincide with the 20th anniversary celebrations of the 1986 demonstrations that brought down Ferdinand Marcos, the coup was said to have involved several army generals and left-wing legislators. The state of emergency was challenged in court and upheld after the fact, but the supreme court declared aspects of the emergency's enforcement unconstitutional.
In October the supreme court declared a move to revise the constitution through a "people's initiative," replacing the presidential system of government with a parliamentary one, unconstitutional, but the government abandoned its attempt to revise the constitution only in December, after an attempt by the house of representatives to call a constituent assembly was attacked by the Roman Catholic church and by the opposition-dominated senate. In 2006 there was fierce fighting on Jolo between government forces and Islamic militants; it continued into 2007, and there were also clashes in Basilan and Mindanao.
In Jan., 2007, a government commission blamed many of the more than 800 deaths of activists during Macapagal-Arroyo's presidency on the military. The president promised action in response to the report, but the chief of the armed forces denounced the report as unfair and strained. Congressional elections in May, 2007, were marred by fraud allegations and by violence during the campaign; the voting left the opposition in control of the senate and Macapagal-Arroyo's allies in control of the house. In November there was a brief occupation of a Manila hotel by soldiers, many of whom had been involved in the 2003 mutiny. In Oct., 2007, the president's husband was implicated in a kickback scandal involving a Chinese company; the investigation continued into 2008, and prompted demonstrations by her opponents and calls for her to resign.
A peace agreement that would have expanded the area of Mindanao that was part of the Muslim autonomous region was reached in principle with the MILF in Nov., 2007. Attempts to finalize the agreement, however, collapsed in July, 2008, when Muslims accused the government of reopening settled issues; the agreement was also challenged in court by Filipinos opposed to it. In August significant fighting broke out between government forces and rebels that the MILF said were renegades; two months later the supreme court declared the agreement unconstitutional. Fighting in the region continued into 2009.
Luzon was battered by several typhoons in Sept.-Oct., 2009; the Manila area and the mountainous north were most severely affected, and more than 900 persons died. In Nov., 2009, the country was stunned by the murder of the wife of an opposition candidate for the governorship of Maguindanao prov. and a convoy of 56 people who joined her as she went to register his candidacy; the governor, Andal Ampatuan, and his son were charged with rebellion and murder respectively in relation to the slaughter and events after it.
See E. H. Blair and J. A. Robertson, ed., The Philippine Islands, 1493-1888 (55 vol., 1903-9; Vol. LIII, Bibliography); L. Morton, The Fall of the Philippines (1953); T. Friend, Between Two Empires: The Ordeal of the Philippines, 1929-1946 (1965); E. G. Maring and J. M. Maring, Historical and Cultural Dictionary of the Philippines (1973); B. D. Romulo, Inside the Palace: The Rise and Fall of Ferdinand and Imelda Marcos (1987); S. Burton, Impossible Dream: The Marcoses, the Aquinos, and the Unfinished Revolution (1988); D. J. Steinberg, The Philippines (1988); D. Wurfel, Filipino Politics (1988); S. Karnow, In Our Image: America's Empire in the Philippines (1989); B. M. Linn, The U.S. Army and Counterinsurgency in the Philippine War, 1899-1902 (1989).
See A. G. Adair and M. H. Crockett, ed., Heroes of the Alamo (2d ed. 1957); Lon Tinkle, 13 Days to Glory (1958); W. Lord, A Time to Stand (1961); W. C. Davis, Three Roads to the Alamo (1998); R. Roberts and J. S. Olson, A Line in the Sand (2000).
The islands, composed mainly of limestone and coral, rise from a vast submarine plateau. Most are generally low and flat, riverless, with many mangrove swamps, brackish lakes (connected with the ocean by underground passages), and coral reefs and shoals. Fresh water is obtained from rainfall and from desalinization. Navigation is hazardous, and many of the outer islands are uninhabited and undeveloped, although steps have been taken to improve transportation facilities. Hurricanes occasionally cause severe damage, but the climate is generally excellent. In addition to New Providence, other main islands are Grand Bahama, Great and Little Abaco (see Abaco and Cays), the Biminis, Andros, Eleuthera, Cat Island, San Salvador, Great and Little Exuma (Exuma and Cays), Long Island, Crooked Island, Acklins Island, Mayaguana, and Great and Little Inagua (see Inagua).
The population is primarily of African and mixed African and European descent; some 12% is of European heritage, with small minorities of Asian and Hispanic descent. More than three quarters of the people belong to one of several Protestant denominations and nearly 15% are Roman Catholic. English is the official language. The Bahamas have a relatively low illiteracy rate. The government provides free education through the secondary level; the College of the Bahamas was established in 1974, although most Bahamians who seek a higher education study in Jamaica or elsewhere.
The islands' vivid subtropical atmosphere—brilliant sky and sea, lush vegetation, flocks of bright-feathered birds, and submarine gardens where multicolored fish swim among white, rose, yellow, and purple coral—as well as rich local color and folklore, has made the Bahamas one of the most popular resorts in the hemisphere. The islands' many casinos are an additional attraction, and tourism is by far the country's most important industry, providing 60% of the gross domestic product and employing about half of the workforce. Financial services are the nation's other economic mainstay, although many international businesses left after new government regulations on the financial sector were imposed in late 2000. Salt, rum, aragonite, and pharmaceuticals are produced, and these, along with animal products and chemicals, are the chief exports. The Bahamas also possess facilities for the transshipment of petroleum. The country's main trading partners are the United States and Spain. Since the 1960s, the transport of illegal narcotic drugs has been a problem, as has the flow of illegal refugees from other islands.
The Bahamas are governed under the constitution of 1973 and have a parliamentary form of government. There is a bicameral legislature consisting of a 16-seat Senate and a 40-seat House of Assembly. The prime minister is the head of government, and the monarch of Great Britain and Northern Ireland, represented by an appointed governor-general, is the titular head of state. The nation is divided into 21 administrative districts.
Before the arrival of Europeans, the Bahamas were inhabited by the Lucayos, a group of Arawaks. Christopher Columbus first set foot in the New World in the Bahamas (1492), presumably at San Salvador, and claimed the islands for Spain. Although the Lucayos were not hostile, they were soon exterminated by the Spanish, who did not in fact colonize the islands.
The first settlements were made in the mid-17th cent. by the English. In 1670 the islands were granted to the lords proprietors of Carolina, who did not relinquish their claim until 1787, although Woodes Rogers, the first royal governor, was appointed in 1717. Under Rogers the pirates and buccaneers, notably Blackbeard, who frequented the Bahama waters, were driven off. The Spanish attacked the islands several times, and an American force held Nassau for a short time in 1776. In 1781 the Spanish captured Nassau and took possession of the whole colony, but under the terms of the Treaty of Paris (1783) the islands were ceded to Great Britain.
After the American Revolution many Loyalists settled in the Bahamas, bringing with them black slaves to labor on cotton plantations. Plantation life gradually died out after the emancipation of slaves in 1834. Blockade-running into Southern ports in the U.S. Civil War enriched some of the islanders, and during the prohibition era in the United States the Bahamas became a base for rum-running.
The United States leased areas for bases in the Bahamas in World War II and in 1950 signed an agreement with Great Britain for the establishment of a proving ground and a tracking station for guided missiles. In 1955 a free trade area was established at the town of Freeport. It proved enormously successful in stimulating tourism and has attracted offshore banking.
In the 1950s black Bahamians, through the Progressive Liberal party (PLP), began to oppose successfully the ruling white-controlled United Bahamian party; but it was not until the 1967 elections that they were able to win control of the government. The Bahamas were granted limited self-government as a British crown colony in 1964, broadened (1969) through the efforts of Prime Minister Lynden O. Pindling. The PLP, campaigning on a platform of immediate independence, won an overwhelming victory in the 1972 elections and negotiations with Britain were begun.
On July 10, 1973, the Bahamas became a sovereign state within the Commonwealth of Nations. In 1992, after 25 years as prime minister and facing recurrent charges of corruption and ties to drug traffickers, Pindling was defeated by Hubert Ingraham of the Free National Movement (FNM). A feeble economy, mostly due to a decrease in tourism and the poor management of state-owned industries, was Ingraham's main policy concern. Ingraham was returned to office in 1997 with an ironclad majority, but lost power in 2002 when the PLP triumphed at the polls and PLP leader Perry Christie replaced Ingraham as prime minister. Concern over the government's readiness to accommodate the tourist industry contributed to the PLP's losses in the 2007 elections, and Ingraham and the FNM regained power.
See H. P. Mitchell, Caribbean Patterns (2d ed. 1970); J. E. Moore, Pelican Guide to the Bahamas (1988).
The legend echoed the epic poem Nibelungenlied in which the dragon-slaying hero Siegfried is stabbed in the back by Hagen von Tronje. Der Dolchstoß is cited as an important factor in Adolf Hitler's later rise to power, as the Nazi Party grew its original political base largely from embittered World War I veterans, and those who were sympathetic to the Dolchstoß interpretation of Germany's then-recent history.
Many were under the impression that the Triple Entente had ushered in the war, and as such saw the war as one in which Germany's cause was justified. Imperial Russia was seen to have expansionist ambitions and France's dissatisfaction due to the outcome of the Franco-Prussian War was widely known. Later, the Germans were shocked to learn that Great Britain had entered the war, and many felt their country was being "ganged up on"; the Germans felt that Britain was using the Belgian neutrality issue to enter the war and neutralize a Germany that was threatening its own commercial interests.
As the war dragged on, illusions of an easy victory were smashed, and Germans began to suffer tremendously from what would become a long and enormously costly war. With the initial euphoria gone, old divisions resurfaced. Nationalist loyalties came into question once again as initial enthusiasms subsided. Subsequently, suspicion of Roman Catholics, Social Democrats and Jews grew. There was a considerable amount of political tension prior to the war, especially due to the growing presence of Social Democrats in the Reichstag. This was a great concern for aristocrats in power and the military; this contingent was particularly successful in denying Erich Ludendorff the funds for the German Army that he claimed were necessary and lobbied for.
On November 1, 1916, the German Military High Command administered the Judenzählung (German for "Jewish Census"). It was designed to confirm allegations of the lack of patriotism among German Jews, but the results of the census disproved the accusations and were not made public. A number of German Jews viewed "the Great War" as an opportunity to prove their commitment to the German homeland.
Civil disorder grew as a result of an inability to make ends meet, with or without the alleged "shortage of patriotism." While it is true that production slumped during the crucial years of 1917 and 1918, the nation had maximized its war effort and could take no more. Raw production figures confirm that Germany could not have possibly won a war of attrition against Britain, France and the United States combined. Despite its overwhelming individual power, Germany's industrial might and population were matched and outclassed by the Entente as a whole. Russia's exit in late 1917 did little to change the overall picture, as the United States had already joined the war on April 6 of that same year. American industrial capacity alone outweighed that of Germany.
Although the Germans were frequently depicted as "primordial aggressors responsible for the war", German peace proposals were all but rejected. Ludendorff was convinced that the Entente wanted little other than a Carthaginian peace. This was not the message most Germans heard coming from the other side. Woodrow Wilson's Fourteen Points were particularly popular among the German people. Socialists and liberals, especially the Social Democrats that formed the majority of the parliamentary body, were already known "agitators" for social change prior to 1914. When peace and full restoration were promised by the Allies, patriotic enthusiasm especially began to wane. Likewise, Germany's allies began to question the cause for the war as the conflict dragged on, and found their questions answered in the Allied propaganda.
When the armistice finally came in 1918, Ludendorff's prophecy appeared accurate almost immediately; although the fighting had ended, the British maintained their blockade of the European continent for a full year, leading to starvation and severe malnutrition. The non-negotiable peace agreed to by Weimar politicians in the Treaty of Versailles was certainly not what the German peace-seeking populace had expected.
Conservatives, nationalists and ex-military leaders began to speak critically about the peace and Weimar politicians, socialists, communists, and Jews were viewed with suspicion due to presumed extra-national loyalties. It was claimed that they had not supported the war and had played a role in selling out Germany to its enemies. These November Criminals, or those who seemed to benefit from the newly formed Weimar Republic, were seen to have "stabbed them in the back" on the home front, by either criticizing German nationalism, instigating unrest and strikes in the critical military industries or profiteering. In essence the accusation was that the accused committed treason against the "benevolent and righteous" common cause.
These theories were given credence by the fact that when Germany surrendered in November 1918, its armies were still in French and Belgian territory, Berlin remained 450 miles from the nearest front, and the German armies retired from the field of battle in good order. The Allies had been amply resupplied by the United States, which also had fresh armies ready for combat, but Britain and France were too war-weary to contemplate an invasion of Germany with its unknown consequences. No Allied army had penetrated the western German frontier on the Western Front, and on the Eastern Front, Germany had already won the war against Russia, concluded with the Treaty of Brest-Litovsk. In the West, Germany had come close to winning the war with the Spring Offensive. Contributing to the Dolchstoßlegende, its failure was blamed on strikes in the arms industry at a critical moment of the offensive, leaving soldiers without an adequate supply of materiel. The strikes were seen to be instigated by treasonous elements, with the Jews taking most of the blame. This overlooked Germany's strategic position and ignored how the efforts of individuals were somewhat marginalized on the front, since the belligerents were engaged in a new kind of war. The industrialization of war had dehumanized the process, and made possible a new kind of defeat which the Germans suffered as a total war emerged.
The weakness of Germany's strategic position was exacerbated by the rapid collapse of its allies in late 1918, following Allied victories on the Eastern and Italian fronts. Bulgaria was the first to sign an armistice, on September 29, 1918, at Saloniki. On October 30 the Ottoman Empire capitulated at Mudros. On November 3 Austria-Hungary sent a flag of truce to ask for an Armistice. The terms, arranged by telegraph with the Allied Authorities in Paris, were communicated to the Austrian Commander and accepted. The Armistice with Austria-Hungary was signed in the Villa Giusti, near Padua, on November 3. Austria and Hungary signed separate armistices following the overthrow of the Habsburg monarchy.
Nevertheless, this social mythos of domestic betrayal resonated among its audience, and its claims would codify the basis for public support for the emerging Nazi Party, under a racialist-based form of nationalism. The anti-Semitism was intensified by the Bavarian Soviet Republic, a Communist government which ruled the city of Munich for two weeks before being crushed by the Freikorps militia. Many of the Bavarian Soviet Republic's leaders were Jewish, a fact that allowed anti-Semitic propagandists to make the connection with "Communist treason".
I have asked His Excellency to now bring those circles to power which we have to thank for coming so far. We will therefore now bring those gentlemen into the ministries. They can now make the peace which has to be made. They can eat the broth which they have prepared for us!
On November 11, 1918, the representatives of the newly formed Weimar Republic signed an armistice with the Allies which would end World War I. The subsequent Treaty of Versailles led to further territorial and financial losses. As the Kaiser had been forced to abdicate and the military relinquished executive power, it was the temporary "civilian government" that sued for peace; the signature on the document was that of the Catholic Centrist Matthias Erzberger, a civilian, who was later killed for his alleged treason. Even though they publicly despised the treaty, it was most convenient for the generals — there were no war-crime tribunals, they were celebrated as undefeated heroes, and they could covertly prepare for removing the republic that they had helped to create.
The official birth of the term itself possibly can be dated to mid-1919, when Ludendorff was having lunch with British general Sir Neil Malcolm. Malcolm asked Ludendorff why it was that he thought Germany lost the war. Ludendorff replied with his list of excuses: the home front failed us, etc. Then Sir Neil Malcolm said that "it sounds like you were stabbed in the back, then?" The phrase was to Ludendorff's liking, and he let it be known among the general staff that this was the 'official' version, then disseminated throughout German society. This was picked up by right-wing political factions and used as a form of attack against the SPD-led early Weimar government, which had come to power in the German Revolution of November 1918.
In November 1919, the newly elected Weimar National Assembly initiated an Untersuchungsausschuß für Schuldfragen to investigate the causes of the World War and Germany's defeat. On November 18th, von Hindenburg testified in front of this parliamentary commission, and cited a December 17, 1918 Neue Zürcher Zeitung article that summarized two earlier articles in the Daily Mail by British General Frederick Barton Maurice with the phrase that the German army had been 'dagger-stabbed from behind by the civilian populace' ("von der Zivilbevölkerung von hinten erdolcht."). (Maurice later disavowed having used the term himself.) It was this testimony of Hindenburg in particular that led to the wide dissemination of the Dolchstoßlegende in post-WWI Germany.
Richard Steigmann-Gall says that the stab-in-the-back legend traces back to a sermon preached on February 3, 1918, by Protestant Court Chaplain Bruno Doehring, six months before the war had even ended. German scholar Boris Barth, in contrast to Steigmann-Gall, implies that Doehring did not actually use the term, but spoke only of 'betrayal.' Barth traces the first documented use to a centrist political meeting in the Munich Löwenbräu-Keller on November 2, 1918, in which Ernst Müller-Meiningen, a member of the Progressive coalition in the Reichstag, used the term to exhort his listeners to keep fighting:
As long as the front holds, we damned well have the duty to hold out in the homeland. We would have to be ashamed of ourselves in front of our children and grandchildren if we attacked the battle front from the rear and gave it a dagger-stab. (wenn wir der Front in den Rücken fielen und ihr den Dolchstoss versetzten.)
Barth also shows that the term was primarily popularized by the patriotic German newspaper Deutsche Tageszeitung that repeatedly quoted the Neue Zürcher article after Hindenburg had referred to it in front of the parliamentary inquiry commission.
Charges of a Jewish conspiratorial element in Germany's defeat drew heavily upon figures like Kurt Eisner, a Berlin-born German Jew who lived in Munich. He had written about the illegal nature of the war from 1916 onward, and he also had a large hand in the Munich revolution until he was assassinated in February 1919. The Weimar Republic under Friedrich Ebert violently suppressed workers' uprisings with the help of Gustav Noske and Reichswehr General Groener, and tolerated the paramilitary Freikorps forming all across Germany. In spite of such tolerance, the Republic's legitimacy was constantly attacked with claims such as the stab-in-the-back. Many of its representatives such as Matthias Erzberger and Walther Rathenau were assassinated, and the leaders were branded as "criminals" and Jews by the right-wing press dominated by Alfred Hugenberg.
German historian Friedrich Meinecke already attempted to trace the roots of the term in a June 11, 1922, article in the Viennese newspaper Neue Freie Presse. In the 1924 national election, the Munich cultural journal Süddeutsche Monatshefte published a series of articles blaming the SPD and trade unions for Germany's defeat in World War I; the April 1924 issue of that journal came out during the trial of Adolf Hitler and Ludendorff for high treason following the Beer Hall Putsch in 1923. The editor of an SPD newspaper sued the journal for defamation, giving rise to what is known as the Munich Dolchstossprozess from October 19 to November 20, 1924. Many prominent figures testified in that trial, including members of the parliamentary committee investigating the reasons for the defeat, so some of its results were made public long before the publication of the committee report in 1928.
The Dolchstoß was a central image in propaganda produced by the many right-wing and traditionally conservative political parties that sprang up in the early days of the Weimar Republic, including Hitler's NSDAP. For Hitler himself, this explanatory model for World War I was of crucial personal importance. He had learned of Germany's defeat while being treated for temporary blindness following a gas attack on the front. In Mein Kampf, he described a vision at this time which drove him to enter politics. Throughout his career, he railed against the "November criminals" of 1918, who had stabbed the German Army in the back.
Even provisional President Friedrich Ebert contributed to the myth when he saluted returning veterans with the oration that "no enemy has vanquished you" (kein Feind hat euch überwunden!) and "they returned undefeated from the battlefield (sie sind vom Schlachtfeld unbesiegt zurückgekehrt)" on November 10th, 1918. The latter quote was shortened to im Felde unbesiegt as a semi-official slogan of the Reichswehr. Ebert had meant these sayings as a tribute to the German soldier, but it only contributed to the prevailing feeling.
Research Team Publishes New Methods for Synthetic Generation of Influenza Vaccines; Design Enables More Rapid Response to Potential Pandemics
May 15, 2013; LA JOLLA, Calif., and ROCKVILLE, Md. -- The following information was released by the J. Craig Venter Institute... | http://www.reference.com/browse/the | 13
10 | With the exception of tidal energy, our focus thus far has been on land-based energy sources. Meanwhile, the ocean absorbs a prodigious fraction of the Sun’s incident energy, creating thermal gradients, currents, and waves whipped up by winds. Let’s put some scales on the energetics of these sources and see if we may turn to them for help. We’ve got our three boxes ready: abundant, potent, and niche (puny). Time to do some sorting!
Wherever there is a thermal gradient, our eyes light up because we can create a heat flow across the gradient and capture some fraction of the energy flow to do useful work. This is called a heat engine, the efficiency of which is capped by the theoretical maximum (Th − Tc)/Th, where “h” and “c” subscripts refer to absolute temperatures of the hot and cold reservoirs, respectively. In the ocean, we are rather limited in how much gradient is available. The surface does not tend to exceed 30°C (303 K), while the depths cannot get much cooler than 0°C (273 K; pressure and salinity allow it to go a few degrees negative). The maximum thermodynamic efficiency therefore tops out at 10%, and in practice we might get half of this in a real application. The general scheme of producing energy from thermal gradients in the ocean is called ocean thermal energy conversion (OTEC).
How much energy is available? First of all, water is tremendously efficient at storing thermal energy, packing 4184 Joules per liter per degree (definition of the kilocalorie). Therefore, extracting the heat from a cubic meter of water at 30°C—leaving it at 0°C—represents 125 MJ of energy. Turned into electricity at 5% efficiency, we would need to process 160 cubic meters per second to generate a standard power plant’s output of 1 GW. Remember that we’re using the most extreme temperature difference for our figures. Given that the elevated surface temperatures will only be found in the top 100 m of water (above the thermocline), we must chew through 1.6 m² of ocean area per second to make our gigawatt. In a day, we convert a square patch 370 m on a side.
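The arithmetic above is easy to check with a short script. The sketch below uses only the assumptions already stated (4184 J per liter per degree, a 30°C drop, 5% end-to-end efficiency, a 100 m warm layer); it introduces no new data.

```python
# Rough check of the OTEC numbers above; all inputs are the article's stated assumptions.
C_WATER = 4.184e6      # J per cubic meter per degree C (4184 J per liter per degree)
DELTA_T = 30.0         # degC: 30 C surface water cooled to 0 C deep water
EFFICIENCY = 0.05      # assumed practical conversion efficiency (about half of Carnot)
TARGET_POWER = 1e9     # W, a standard 1 GW plant
LAYER_DEPTH = 100.0    # m, warm layer above the thermocline

thermal_per_m3 = C_WATER * DELTA_T               # ~1.3e8 J of heat per cubic meter
electric_per_m3 = EFFICIENCY * thermal_per_m3    # ~6.3e6 J of electricity per cubic meter
flow = TARGET_POWER / electric_per_m3            # ~160 m^3 of warm water per second
area_rate = flow / LAYER_DEPTH                   # ~1.6 m^2 of ocean surface per second
daily_side = (area_rate * 86400) ** 0.5          # ~370 m square patch processed per day

print(f"{thermal_per_m3 / 1e6:.0f} MJ/m^3, {flow:.0f} m^3/s, {daily_side:.0f} m per side per day")
```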
But this doesn’t get at how much can be sustainably recharged. The thermal energy derives, after all, from solar input. In the tropics, we might expect a patch of ocean to receive 250 W/m² of sunlight on average. It takes a square area 9 km on a side to annually recharge the 1 GW draw (at 5% extraction efficiency: the other 95% is dumped into the depths as waste heat at close to 0°C). This figure ignores thermal exchange with the air, which will tend to be in the range of 5–20 W/m² per °C difference between air and water. Also, radiative losses will reach 150 W/m² in clear skies. Approximating these effects to produce a net 100 W/m² retained as heat, we need our annual square to be about 14 km on a side.
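A similar sketch reproduces the recharge-area estimate, again using only the fluxes quoted above (250 W/m² of average tropical insolation, or roughly 100 W/m² net once air exchange and radiative losses are lumped in).

```python
# Solar recharge area needed to sustain a 1 GW OTEC draw.
TARGET_POWER = 1e9     # W of electricity drawn continuously
EFFICIENCY = 0.05      # fraction of the collected heat converted to electricity

for flux in (250.0, 100.0):                      # W/m^2: raw insolation, then net retained heat
    area = TARGET_POWER / (EFFICIENCY * flux)    # m^2 of tropical ocean needed
    print(f"{flux:.0f} W/m^2 -> square about {(area ** 0.5) / 1000:.0f} km on a side")
```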
The 200 km² patch we need to supply a 1 GW “plant” gets multiplied by 13,000 to hit our 13 TW global appetite. That’s an area comparable to the land area of the Indonesian islands: New Guinea, Borneo, Sumatra, etc. (wanted to pick something in warm water to stare at on map). Clearly we have the oceanic space. And as such, we throw OTEC into the “abundant” box. It’s basically a form of solar power at 5% efficiency available over a large fraction of the globe. So no real surprise that it should be abundant.
I did not factor in evaporative cooling, which can be rather significant. But it would have a hard time knocking the total resource out of the abundant box. In rough numbers, half of the total solar energy budget reaches the ground, and something like 70% of this is absorbed by oceans, for 35% of the total. Meanwhile, evaporation claims 23% of the solar budget, effectively taking a 2/3 bite out of the thermal energy deposited. So we need something like the area of Australia in the ocean. Like I say—still abundant.
Comparing the daily volume/area draw to the recharge area, we compute an interesting timescale: 4 years. In other words, if we isolated a patch of ocean 14 km on a side that could generate 1 GW of OTEC power, it would take 4 years to process the entire volume (above 100 m depth). This is reassuringly longer than the one year recharge time, allowing for seasonal variation and adequate mixing.
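The processing timescale follows from the same numbers; this sketch simply divides the warm-layer volume of the recharge patch by the flow rate computed earlier.

```python
# Time for a 1 GW OTEC plant to process its own recharge patch.
side = 14e3     # m, side of the recharge square estimated above
depth = 100.0   # m, warm-layer depth
flow = 160.0    # m^3/s of warm water processed to sustain 1 GW

years = (side ** 2 * depth / flow) / 3.15e7   # ~3.15e7 seconds in a year
print(f"about {years:.1f} years to work through the patch, vs. roughly 1 year to recharge it")
```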
A look at the map above shows the regions for which our 30°C assumption is valid. These regions tend not to be near the major demand. If we want to park an OTEC plant off the shore of a more temperate location, several things happen. The temperature difference and therefore the quantity of thermal storage obviously shrinks (by roughly a factor of two). The thermodynamic efficiency likewise takes a factor of two hit. And the warm layer is shallower at higher latitudes (say a factor of two). The net effect is a factor of 8 greater area of water processing per 1 GW OTEC plant. The area for solar collection likewise increases—by almost a factor of two for reduced insolation, and by an additional factor of two to account for reduced efficiency.
Since the energy produced is a quadratic function of ΔT, a temperate OTEC plant becomes seriously impaired in the winter. At 40° latitude off the U.S. coast, I calculate that the winter production is at 40% the summer production in both the Atlantic and Pacific oceans.
Operating and maintaining an offshore power plant in seawater, transmitting the power to land, dealing with storms and other mishaps are serious challenges. OTEC reduces to a low efficiency, operationally difficult method for harvesting solar thermal energy. It seems we would be better off getting 15% in solar thermal plants in sunny areas. I’m not sure why we’d waste our time on OTEC when there are better (cheaper) ways to collect the abundant energy of the Sun. OTEC has some advantage in not having to build the collector, and in the fluid delivery system, but this would seem to be a minor plus stacked against the operational disadvantages. OTEC deserves a spot in the abundant box, but practicalities limit its likely role.
Much as tuna was once marketed as the “chicken of the sea,” ocean currents are the “wind energy of the sea.” Recalling that the kinetic power in a fluid flow is ½ρAv³, where ρ is the density of the fluid, A is the area described by the collecting rotor, and v is the velocity of the fluid, we note that the density of water is about 800 times that of air. Big score! But the velocity tends to be smaller, and has a cubic power to knock it back. A strong mid-ocean current might reach 2 knots, or 1 m/s. Compared to a wind speed of 10 m/s (22 m.p.h.), we get 1000 times less power per rotor area. So we’re right back to where we were with wind.
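The comparison with wind drops straight out of the power-density formula; the densities and speeds below are the round numbers used in the paragraph above.

```python
# Kinetic power per square meter of rotor area: P/A = 0.5 * rho * v^3.
def power_density(rho, v):
    return 0.5 * rho * v ** 3   # W per m^2 of swept area

ocean_current = power_density(1000.0, 1.0)   # seawater ~1000 kg/m^3 at ~1 m/s (2 knots)
strong_wind = power_density(1.25, 10.0)      # air ~1.25 kg/m^3 at 10 m/s

print(f"current: {ocean_current:.0f} W/m^2, wind: {strong_wind:.0f} W/m^2")
# The ~800x density advantage is almost exactly cancelled by the ~1000x cubic speed penalty.
```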
We have a lot more ocean area than land area. And we are not as constrained in the ocean as we are on land to keep our turbines near the surface, so we could exploit more vertical space—to a point. Ocean currents tend to be confined to the upper 400 meters of water, so the depth gain is only a factor of 2–3 times what we access for wind. On the plus side, we note that ocean currents are far more steady/robust, so we would not be plagued by intermittency the way we are with solar and wind. On balance, wind fell into the “potent” box, so ocean currents surely deserve at least this rating—practicalities aside, of course.
We would naturally first want to exploit pinch-points (straits, narrow inlets, etc.), where currents may be up to 5 m/s, now delivering 100 times the power per area compared to a windmill at 10 m/s. But currents tend to be large in these pinch points due to tidal fluctuations, not steady flow, so we’re just tapping into the tidal energy budget—previously characterized as a niche source. To the extent that steady-current pinch points exist, they make a natural choice for locating underwater turbines, but such places are limited—especially in terms of area or they would not be “pinch” points—so the total power available is small.
Given a Choice?
We saw that a given rotor area will deliver comparable power whether placed in open ocean currents or on land in a moderate wind. Now I ask you: is it easier/cheaper to put a giant turbine on land, or upside-down at sea? Think about access for maintenance, salt water corrosion, transmission of electric power, all the bruised fish, etc.
Most of us have marveled at the awesome power of waves crashing into a beach, pier, or headland. It’s enough to knock us over, unlike solar or wind power. And all that coastline—surely it’s a winner!
Waves represent third-string solar energy: solar energy is absorbed by land and sea, making thermal gradients in the air that generate wind. Wind then pushes on the surface of water, building up waves. The wind-wave interaction is self-reinforcing: the higher a wave sticks up, the more energy the wind can dump into it. Many of the waves arriving on a coastline were generated in storms somewhere across the ocean.
The Earth absorbs about 120,000 TW of solar energy (that which isn’t directly reflected). About 1% of this ends up being dissipated in winds (1200 TW). Of this, about 5% (60 TW) goes into wave generation. Some of this fights itself (wind against wave), some wave energy dissipates on its own via viscosity and turbulence, and some gets eaten up in shallow waters (e.g., around archipelagos) without making landfall. All of these chip away at the amount of wave energy accessible near land. I might venture a guess that we receive something comparable to our 13 TW global demand on our shores.
I want to satisfy myself with a ground-up estimate to see if the numbers make sense. Let’s say a 2 m-high wave (trough to crest) arrives every ten seconds, traveling at a jogging speed of 3 m/s. The waves are therefore 30 m apart. Each meter of wave-front then has a volume of about 30 cubic meters (average height is 1 m above the trough for a sinusoidal shape). The gravitational potential energy, mgh, above the trough comes to 225 kJ, and the kinetic energy, ½mv², is 135 kJ. If we were able to capture all of this energy, we would collect 360 kJ every 10 seconds, or 36 kW of power for each meter of coastline. Compared to solar or wind power density—at approximately 200 and 30 W/m², respectively—this sounds like a huge number: 36,000 W/m. But the fact that the denominator is linear in length and not a square makes a gigantic difference. The Earth has far more square meters than linear meters of coastline.
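Here is the same estimate as a script. The 0.75 m center-of-mass height of the crest above the trough is my own rough assumption for a sinusoidal hump; with it the result lands near the 36 kW per meter quoted above.

```python
# Wave power per meter of coastline for the illustrative wave described above.
RHO = 1000.0           # kg/m^3, seawater (rounded)
G = 9.8                # m/s^2
PERIOD = 10.0          # s between wave crests
SPEED = 3.0            # m/s
VOLUME_PER_M = 30.0    # m^3 of water per meter of wave front
COM_HEIGHT = 0.75      # m, assumed height of the crest's center of mass above the trough

mass = RHO * VOLUME_PER_M                                  # 30,000 kg per meter of front
energy = mass * G * COM_HEIGHT + 0.5 * mass * SPEED ** 2   # potential + kinetic, joules
power_per_m = energy / PERIOD                              # W per meter of coastline

print(f"~{power_per_m / 1000:.0f} kW per meter of coastline")
print(f"a 2000 km coastline at 100% capture: ~{power_per_m * 2e6 / 1e9:.0f} GW")
```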
Happily, my figure compares well with values you can find by searching for “wave energy potential map” in Google. A 2000 km coastline might therefore net 70 GW if one could catch all the energy at 100% efficiency. The lower-48 in the U.S. might then collect something like 200 GW of wave power to the chagrin of surfers, representing something like 7% of the total domestic power demand. A very recent report puts the wave energy off California’s 1800 km coastline at 140 TWh per year, working out to an average of 16 GW—coming in a good deal less than my stupid calculation.
The full report for the U.S. cites a total available continental shelf wave energy for the lower 48 states as 910 TWh per year, amounting to 104 GW of continuous power. But only about half of this is deemed to be recoverable, totaling 54 GW. This puts it in the same ballpark as hydroelectric, if fully developed across our continental shelves.
Using my 35 kW/m number for a global calculation, I will make the crude estimate that there is enough coastline to circle the globe twice—considering that not all coastline faces the prevailing swell and is therefore penalized. Whatever. This makes for 80,000 km of coastline, delivering 2.8 TW—or about 20% of global demand, if fully developed.
A quick Google search shows estimates of global wave power of 2 TW, 3.5 TW, 1–10 TW: all roughly consistent with my crude estimate.
Waves in a Box
The numbers push me into putting wave energy in the “niche” box, since my criterion for “potent” is the ability to satisfy a quarter of our need if fully developed. It might possibly qualify as potent, but it’s borderline. Where did the 60 TW of total dissipation into wave energy go? A bit of digging suggests that half is lost in deep-water breaking (think of the “roaring 50′s” in the southern hemisphere). The rest is lost in wave-turbulence interaction and bottom friction. It would seem that there is greater inefficiency than I appreciated in delivering wave energy to land.
Critters in Common
Each of the three energy resources we’ve discussed here require placement of energy conversion equipment into the sea. I’ve seen what sunken ships look like even after a few decades. They’re beautiful in one sense—teeming with colorful life—but not so much if functionality is more important. I cringe to think of the maintenance costs of our energy infrastructure placed out to sea.
The ocean covers 72% of the Earth's area, absorbing vast amounts of solar energy as heat, moving around the globe in great conveyor belts, and capturing some fraction of wind energy in its waves. Ocean thermal earns a spot in the abundant box; ocean currents easily meet the potent criterion, being greater than wind potential; waves fall short and land in the niche box. All three forms of ocean energy just add to the pile of alternative methods for creating electricity, being useless for heat or directly as transportation fuel.
Furthermore, only wave energy is conveniently delivered to our shores. Practically speaking, scaling facilities to capture ocean thermal and ocean current energy crosses a line of practicality that we are unlikely to exceed as long as other large-scale energy options (solar photovoltaics, solar thermal, wind, nuclear) remain more convenient. And yes, for all its complication, I would still guess nuclear to be easier to accomplish at scale than the ocean technologies. Anything on land gets an immediate boost in my book.
In this sense, ocean energy—much like solar energy in space, or pools of methane on Titan—falls into more of a “so what” category for me. Sure, it’s there, and I’m pleased to say it is even abundant. But practicalities will likely preclude us from pursuing it in a big way—at least in the near term when we face our great transition from fossil fuels. Wave energy is slightly less impractical, but the widely varying technologies I have seen demonstrated strike me as no more than cute, given the puny amount of power available in total.
So as I cast about looking for reasons why I should not worry about our energy future, I find little solace when I look to the sea. We’ll see nuclear fusion next week. No, really. | http://physics.ucsd.edu/do-the-math/2012/01/the-motion-of-the-ocean/ | 13 |
10 | Keeping Track and Displaying Data
Last year, children represented and compared data using tally marks. Begin with this concept to reinforce the idea that each tally mark stands for one piece of data that is collected. The connection between tally marks and summarizing data can then be shown visually.
Prerequisite Skills and Concepts: Children should have basic counting skills and be able to skip-count by fives. Children should be able to make tally marks and interpret data.
Write the months of the year in large letters on the chalkboard.
- Say: Today, we will make a tally chart of the birthdays in our class. It will show the month of each person's birthday. I need a volunteer to record the tally marks.
Choose a volunteer to write one tally mark next to the month that is announced by each student.
- Ask: In which month is your birthday?
Ask each child the same question. As the volunteer records the tallies, children should realize that each student is receiving their own tally mark on the board. This is crucial to understanding that each tally mark represents an individual's birthday.
- Ask: How can we show tally marks so they will be easier to count later?
Have volunteers offer ways to show tallies for easier counting. Lead children to recommend that every fifth tally be a slash through the first four existing tallies. This offers an easy method for skip-counting by fives.
Continue questioning each student until all birthday months are recorded. Allow time for the class to study the tally chart.
- Ask: How many students are present today? Please count the total number of students in class today.
Children should respond with the correct number of students in the class.
- Ask: How many tally marks should be on the board?
Children should realize that the number of tally marks should be the same as the number of students in the class. If the numbers do not match, find and correct the problem.
- Ask: How did you count the tallies?
Children should realize they can skip-count by fives for the tallies with diagonals. Then they can count up by ones for the single tallies left on the board.
- Ask: What conclusions can you draw just by looking at the tally chart on the board?
Allow enough time for all children to draw at least one conclusion. Write each conclusion on the board. Conclusions can include the following:
- the most popular month for birthdays
- the least popular month for birthdays
- months with no birthdays
- a comparison of tallies for two or more months
Make a copy of the tally chart on the board. You may use it later to demonstrate how to create a bar graph or line plot from existing data.
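If you want to turn the finished tally chart into counts for that bar graph or line plot, a short script like the sketch below mirrors the skip-counting the children do by hand; the birthday list here is invented for illustration.

```python
# Hypothetical birthday data; each entry stands for one student's tally mark.
birthdays = ["Jan", "Mar", "Mar", "Jul", "Jul", "Jul", "Oct", "Dec", "Dec", "Mar"]

tally = {}
for month in birthdays:
    tally[month] = tally.get(month, 0) + 1   # add one tally mark per student

for month, count in sorted(tally.items(), key=lambda kv: -kv[1]):
    fives, singles = divmod(count, 5)        # skip-count by fives, then count on by ones
    print(f"{month}: {'|||| ' * fives}{'|' * singles} ({count})")
```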
| http://www.eduplace.com/math/mathsteps/2/b/2.data.ideas1.html | 13
10 | Why do Screws Work Loose?
Definition of Screw Loosening
Materials fastened using screws are held together by the force of tension generated by the elongation of the bolt shaft (the bolt axis force) and by the force of compression generated in the objects being tightened (the tightening force). These two forces remain in balance as long as no external forces are applied to the objects being fastened by the screws. The general term for the forces involved in pulling or fastening the two materials together is the pretension force.
In some situations, such as in the course of using machinery, the pretension force applied at the time that the materials forming the machinery were originally fastened may decrease for a variety of reasons. This spontaneous decrease in the pretension force is what is described in general terms as screw loosening.
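The bolt axis force comes from the elastic stretch of the shaft, so its rough size can be estimated by treating the shaft as a uniform elastic rod, F = E·A·δ/L. The sketch below is an illustration only; the diameter, grip length, elongation, and material values are made-up example numbers, not HARDLOCK data.

```python
# Rough elastic estimate of the bolt axis (pretension) force from a given shaft elongation.
import math

E = 205e9        # Pa, Young's modulus of steel (typical handbook value)
D = 0.010        # m, example shaft diameter (roughly an M10 bolt)
L = 0.050        # m, example clamped (grip) length
DELTA = 80e-6    # m, example elongation produced by tightening

area = math.pi * (D / 2) ** 2    # shaft cross-section; the smaller thread root area is ignored
force = E * area * DELTA / L     # F = E * A * delta / L
print(f"approximate axial force: {force / 1000:.0f} kN")   # ~26 kN for these example numbers
```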
Classification of Loosening
Loosening falls into two broad groups: loosening not due to return rotation and loosening due to return rotation.

Loosening not due to return rotation

- 1) Initial loosening
- Fine surface irregularities (surface roughness) on the bolt and the parts in contact settle under the external forces acting after initial tightening, causing the screw to loosen with the passing of time.
- 2) Collapse loosening
- When the surface pressure on the parts in contact is too high, plastic deformation (collapse) of the contact surfaces can occur, reducing the pretension force.
- 3) Loosening due to minute tremor-induced abrasion
- Abrasion can occur between the parts in contact, particularly at the joint surfaces of the tightened parts as they slide against each other under external forces. This abrasion can cause the bolt to loosen.
- 4) Loosening due to permanent deformation of sealing material
- When a sealing material of a different type, such as a gasket, is used, permanent deformation of this material can cause the bolt to loosen.
- 5) Loosening due to excessive external force
- Apart from surface collapse caused by the surface pressure on the parts in contact, excessive external force can lead to plastic elongation of the bolt, causing it to loosen.
- 6) Loosening due to thermal causes
- Changes in temperature alter the bolt's axial force. If the bolt elongates when exposed to high temperature, the bolt axis force is reduced, causing the bolt to loosen (a rough numerical sketch of this effect follows this classification).

Loosening due to return rotation

- 1) Loosening due to repeated external force applied in the axial rotation direction
- A moment about the bolt's axis acts on the materials being tightened, causing the parts in contact to slide against each other and against the seating surfaces of the nut or bolt head. In such a case the nut or bolt can undergo return rotation, causing it to loosen.
- 2) Loosening due to repeated external force applied perpendicular to the axis
- If an external force is repeatedly applied in a direction perpendicular to the bolt's axis, the parts in contact may slide against each other, forcing the nut or bolt to undergo return rotation and causing it to loosen.
- 3) Loosening due to repeated external force applied in the axial direction
- If an external force is repeatedly applied in the direction of the bolt's axis, whether the force is quasi-static or shock-like, it can cause the bolt to loosen.
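To put a rough number on thermal loosening (item 6 of the first group), one very crude model assumes the clamped parts are rigid and do not expand, so a bolt that warms by ΔT relative to the joint sheds roughly E·A·α·ΔT of tension. Real joints also expand and share compliance, so treat this only as an order-of-magnitude sketch with invented example values.

```python
# Crude estimate of pretension lost when only the bolt expands thermally (rigid joint assumed).
import math

E = 205e9       # Pa, Young's modulus of steel
ALPHA = 12e-6   # 1/degC, thermal expansion coefficient of steel
D = 0.010       # m, example shaft diameter
DT = 80.0       # degC, example temperature rise of the bolt relative to the clamped parts

area = math.pi * (D / 2) ** 2
loss = E * area * ALPHA * DT    # delta-F ~ E * A * alpha * delta-T
print(f"approximate loss of pretension: {loss / 1000:.0f} kN")   # ~15 kN for these numbers
```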
Screw Looseness Test

The main types of loosening test and their procedures are as follows.

- Axis perpendicular vibration
- Connect a fixed plate and a diaphragm by means of test bolt(s) and nut(s), then apply an external vibration force perpendicular to the bolt's axis to generate vibrational displacement of the diaphragm. The displacement should be purely translational (parallel), with no rotational component.
- Axial rotation vibration
- Apply torque to the diaphragm against the fixed plate to generate rotational displacement about the bolt's axis; the displacement should be purely rotational, with no translational component. Alternatively, set an arm against the diaphragm with a weight on its end, place the fixed plate onto the diaphragm, and apply vibration to generate the rotational displacement.
- Increased/decreased axial direction load
- Apply clamps to the bolt head and nut seat, respectively, then repeatedly apply a load in the direction of the bolt's axis by means of a tension tester.
- Impact (vibration type)
- Vertically place a screw-fastened body tightened by a test bolt and nut into a slotted hole and vibrate the hole up and down on a vibration table, so that impacts perpendicular to the bolt's axis are applied at the lower and upper ends of the slotted hole.
- Impact (dropping type)
- Drop a screw-fastened body consisting of two cylinders tightened by a test bolt and nut from a certain height, applying impacts that tend to separate the cylinders in the direction of the bolt's axis.
- Impact (hammering type)
- Connect a fixed body and an impact-receiving plate by means of a test bolt and nut, and apply impacts perpendicular to the bolt's axis by hammering on the impact-receiving plate.
Looseness Prevention Countermeasures

- Disk spring washers
- Reducing the surface pressure: Highly rigid flat washers
- Stopping mechanical rotation: Slotted nuts, split pin-attached bolts, tongued washers, claw washers
- Increasing the screw part's degree of adhesion: Sheet metal screws, coiled inserts
- Increasing the return torque: Non-metallic insert-attached detent nuts (e.g. nylon nuts, Nylock); all-metallic detent nuts (e.g. U nuts); flange nuts, skirt nuts; prevailing torque form nuts (e.g. tough lock nuts, space lock nuts)
- Forced locking by eliminating the fitting gap
- Solidification and adhesion within the fitting gap: Anaerobic glue (e.g. Lock Tight), glue-containing capsule-attached bolts (e.g. chemical anchor bolts)
- A. The return rotation prevention-use group consists of countermeasures that in almost all cases can prevent any reduction in the pretension force (axial force), which means that their effectiveness against loosening due to return rotation is considered high. However, in the case of double nuts, unless they are completely locked (pinioned), their performance is likely to decline markedly, as has been shown in a variety of tests. Also, anaerobic glue does not exhibit its effectiveness unless it is used according to its usage instructions.
Compared with these measures, at the present time HARDLOCK Nuts are considered to be the most effective connection method available.
- B. The return rotation resistance-use group includes methods that do not allow the pretension force to decline by more than a limited extent and others that merely slow down the rate at which the pretension force declines. With such methods the level of reliability varies, and some serve only as a means of preventing the screw from disjointing or dropping out.
- C. Initial loosening and collapse loosening countermeasures are not usually suitable for preventing loosening due to return rotation, because their ability to prevent slip and rotation at the seating surfaces is poor.
Test Data on HARDLOCK Nuts
HARDLOCK Nuts are classified as return rotation prevention-use countermeasures. They are evaluated as looseness-stopping nuts that exhibit a performance superior to that of ordinary double nuts. A variety of tests have yielded data on the performance of HARDLOCK Nuts, as follows.
- 1)Axis perpendicular vibration tests
- These tests, of which the Junker vibration test developed in Germany is a famous example, are a representative method of evaluating screw loosening. Junker test data on the HARDLOCK Nut’s performance gathered by the Shonan Institute of Technology is available.
- 2) Axial rotation vibration tests
- No test data is available at present.
- 3) Axial direction load increase/decrease tests
- These tests form a method of measuring the axial force reduction due to bolt fatigue with repeated loading. No HARDLOCK Nut-related axial direction load increase/decrease test data is available at present. However, test data is available on friction joint-use high-tensile bolts employed in structures such as pylons.
Application example: HARDLOCK Nuts are employed in pylons and other structures (NTT pylons, etc.) as a replacement for high-tensile bolts.
- 4) Impact tests
- a) The US NAS 3350 and 3354 standard vibration tests are well known internationally; HARDLOCK Industry has adopted these test standards as in-house standards as well.
b) Regarding dropping tests, in-house comparison test data from a certain ironworks is available.
c) Regarding hammer tests, no test data is available but there are extensive actual application records.
- 5)The HARDLOCK Nut’s major features apart from its loosening stopping function
- a) Sufficient fastening is possible regardless of the axial force (pre-set axial force control is possible).
b) Effectiveness is maintained even in the case of repeated use (repeated re-use is possible until nut metal fatigue occurs).
c) Compatible with a wide range of environmental conditions (flexible combinations of shape, material and surface processing).
*Reference: extract from Neji Teiketsu Guidebook (Screw Fastening Guidebook), section on "Fastening". | http://www.hardlock.co.jp/en/tech/reason.php | 13 |
Classical mechanics describes the behavior of objects in motion. Any moving mass has momentum and energy. When two objects collide, the momentum and energy contained in each object change. We will study the basics of newtonian physics.
Momentum is the product of an object’s mass and its velocity. The standard unit of mass is the kilogram (kg), and the standard unit of speed is the meter per second (m/s). Momentum magnitude is expressed in kilogram-meters per second (kg · m/s). If the mass of an object moving at a certain speed increases by a factor of 5, then the momentum increases by a factor of 5, assuming that the speed remains constant. If the speed increases by a factor of 5 but the mass remains constant, then again, the momentum increases by a factor of 5.
Momentum As A Vector
Suppose that the speed of an object (in meters per second) is v, and that the mass of the object (in kilograms) is m. Then the magnitude of the momentum p is their product:
p = mv
This is not the whole story. To fully describe momentum, the direction as well as the magnitude must be defined. This means that we must consider the velocity of the mass in terms of its speed and direction. (A 2-kg brick flying through your east window is not the same as a 2-kg brick flying through your north window.) If we let v represent the velocity vector and p represent the momentum vector, then we can say
p = mv
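A small numerical sketch, with invented speeds, makes the point about direction concrete: the two bricks below carry momentum vectors of equal magnitude but different direction.

```python
# Same mass and speed, different directions: different momentum vectors.
def momentum(mass, velocity):
    return tuple(mass * component for component in velocity)   # p = m * v, componentwise

brick = 2.0                       # kg
v_east = (12.0, 0.0)              # m/s, through the east window
v_north = (0.0, 12.0)             # m/s, through the north window

print(momentum(brick, v_east))    # (24.0, 0.0) kg*m/s
print(momentum(brick, v_north))   # (0.0, 24.0) kg*m/s: same magnitude, different direction
```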
The momentum of a moving object can change in any of three different ways:
- A change in the mass of the object
- A change in the speed of the object
- A change in the direction of the object’s motion
Let’s consider the second and third of these possibilities together; then this constitutes a change in the velocity.
Imagine a mass, such as a space ship, coasting along a straight-line path in interstellar space. Consider a point of view, or reference frame, such that the velocity of the ship can be expressed as a nonzero vector pointing in a certain direction. A force F may be applied to this vessel by firing a rocket engine. Imagine that there are several engines on this space ship, one intended for driving the vessel forward at increased speed and others capable of changing the vessel's direction. If any engine is fired for t seconds with a force vector F (measured in newtons), as shown by the three examples in Fig. 8-1, then the product F t is called the impulse. Impulse is a vector, symbolized by the uppercase boldface letter I, and is expressed in kilogram-meters per second (kg · m/s):
I = F t
Impulse produces a change in velocity. This is clear enough; this is the purpose of rocket engines in a space ship! Recall the formula concerning mass m, force F, and acceleration a:
F = ma
Substitute ma for F in the equation for impulse. Then we get this:
I = (ma)t
Now remember that acceleration is a change in velocity per unit time. Suppose that the velocity of the space ship is v1 before the rocket is fired and v2 afterwards. Then, assuming that the rocket engine produces a constant force while it is fired,
a = (v2 − v1)/t
We can substitute in the preceding equation to get
I = m[(v2 − v1)/t]t = mv2 − mv1
This means the impulse is equal to the change in the momentum.
We have just derived an important law of newtonian physics. Reduced to base units in SI, impulse is expressed in kilogram-meters per second (kg · m/s), just as is momentum. You might think of impulse as momentum in another form. When an object is subjected to an impulse, the object’s momentum vector p changes. The vector p can grow larger or smaller in magnitude, it can change direction, or both these things can happen.
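A quick numerical check, with made-up spaceship numbers and motion restricted to one dimension, confirms that the impulse F t equals the change in momentum.

```python
# Numerical check that impulse F*t equals the change in momentum m*v2 - m*v1 (1-D case).
m = 2.0e4     # kg, made-up spaceship mass
v1 = 100.0    # m/s before the burn
F = 5.0e3     # N, constant thrust along the direction of motion
t = 40.0      # s, burn time

v2 = v1 + (F / m) * t          # constant force, so a = F / m
impulse = F * t                # 200,000 kg*m/s
delta_p = m * v2 - m * v1      # also 200,000 kg*m/s
print(impulse, delta_p)
```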
| http://www.education.com/study-help/article/physics-help-momentum-work-energy/ | 13
10 | Fish Teacher Resources
Find teacher approved Fish educational resource ideas and activities
Students investigate the anatomy of fish by observing a painting and listening to a fish story. For this oceanography lesson, students examine the themes of the painting Still Life with Fish by William Merritt Chase, and the story The Rainbow Fish by Marcus Pfister. Students create a colorful fish collage with their classmates.
Help learners discover methods to estimate animal population. They will participate in a simulation of catching and tagging fish in order to estimate the fish population. They scoop and count goldfish crackers, record data, and use formulas to determine a whole population based on their sample.
Learning to read data tables is an important skill. Use this resource for your third, fourth, or fifth graders. Learners will will study tables of fish collection data to draw conclusions. The data is based on fish environments in the Hudson River estuary. Data tables and worksheets are included.
Information is provided on Gray's Reef, Florida Keys, and Flower Garden Banks marine sanctuaries. Young marine biologists then visit the FishBase and REEF databases to collect fish species information for each location. They then complete a data table comparing the different marine sanctuaries. This is a wonderful activity for giving your explorers experience with real databases.
Reading Joanna Cole's The Magic School Bus on the Ocean Floor? Jump into the adventure with Ms. Frizzle using this reading activity guide. Learners begin by examining the globe, comparing the area covered by land and ocean. They are guided through this process with mathematical and visual analysis. Next, students complete a lab, examining a real fish and recording their observations. They compare the fish's swimming with that of a human. More ocean-related activities follow!
Ever wonder how scientists track fish underwater? Your class can learn how with this informative instructional activity. First, they will read a paragraph about anadromous fish, tagging, and data analysis. Then, your scientists must answer five short-answer questions. What an interesting topic! | http://www.lessonplanet.com/lesson-plans/fish/3 | 13
16 | Chemists often gather data regarding physical and chemical properties of substances. Although these data can be organized in many ways, the most useful ways uncover trends or patterns among the values. These patterns often trigger attempts to explain regularities.
The development of the periodic table is a good example of this approach. Recall that you predicted a property of one element from values of that property for neighboring elements on the periodic table.
In a similar vein, we seek patterns among boiling point data for some hydrocarbons. During evaporation and boiling, individual molecules in the liquid state gain enough energy to overcome intermolecular forces and enter the gaseous state.
Answer the following questions about boiling point data given in table 1 above.
2. Assume we want to search for a trend or pattern among these boiling points.
3. Use your new data table to answer these questions:
4. What can you infer about intermolecular attractions in decane compared to those in butane?
5. Intermolecular forces also help explain other liquid properties such as viscosity and freezing points.
Alkane Boiling Points: Trends
Boiling is a physical change whereby a liquid is converted into a gas. Boiling occurs when the vapor pressure of a liquid is equal to the atmospheric pressure pushing on the liquid. But what other factors affect the boiling point of a liquid?
To explain relative boiling points we must take into account a number of properties for each substance. The properties include molar mass, structure, polarity and hydrogen bonding ability. All of these properties can affect the boiling point of a liquid. In this exploration we will investigate the boiling points for the first ten straight-chain alkane hydrocarbons. We are most interested in the effect of molecular size (mass) upon a substance's relative boiling point.
1. Listed in table 1 below is boiling point data for the first ten straight chain alkanes. Enter the data into your calculator for analysis by assigning:
3. Use your graph to determine the average change in boiling point (in degrees Celsius) when a carbon atom and two hydrogen atoms are added to a given alkane chain.
4. Describe the relationship between the number of carbon atoms and the boiling point.
5. The pattern of boiling points among the first ten alkanes allows you to predict the boiling points of other alkanes. Let's assume the relationship is linear and proportional. Use the calculator to calculate the regression equation for the data. Enter the equation into Y1 and plot the line of best fit. Record the regression equation and sketch the calculator screen in your lab report. (A programmatic version of this fit is sketched just after this list.)
6. From your graph, extrapolate the boiling points of undecane (C11H24), dodecane (C12H26) and tridecane (C13H28).
8. We have already noted that a substance's boiling point depends on its intermolecular forces, that is, on attractions among its molecules. In a summary paragraph, discuss how intermolecular attractions are related to the number of carbon atoms in each molecule for the alkanes you have studied.
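Because Table 1 is not reproduced above, the following Java sketch substitutes approximate literature boiling points for the first ten straight-chain alkanes and fits the least-squares line by hand, the way the calculator does in steps 5 and 6. The slope approximates the average boiling-point change per added carbon (step 3), and the extrapolated values for C11-C13 are rough, since the real trend flattens for larger alkanes.

public class AlkaneBoilingPoints {
    public static void main(String[] args) {
        // x = number of carbon atoms, y = approximate boiling point in °C
        // (literature values standing in for Table 1).
        double[] carbons = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        double[] bp = {-161.5, -88.6, -42.1, -0.5, 36.1, 68.7, 98.4, 125.7, 150.8, 174.1};

        int n = carbons.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += carbons[i];
            sumY += bp[i];
            sumXY += carbons[i] * bp[i];
            sumXX += carbons[i] * carbons[i];
        }

        // Least-squares slope and intercept for bp = slope * carbons + intercept.
        double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;

        System.out.printf("Regression: bp ≈ %.1f * C + %.1f%n", slope, intercept);

        // Extrapolate to undecane (C11), dodecane (C12) and tridecane (C13).
        for (int c = 11; c <= 13; c++) {
            System.out.printf("Predicted bp for C%d: %.1f °C%n", c, slope * c + intercept);
        }
    }
}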
Alkane Boiling Points: Isomers
You have already observed that the boiling points of straight-chain alkanes are related to the number of carbon atoms in their molecules. Increased intermolecular attractions are related to the greater molecule-molecule contact possible for larger alkanes.
For example, consider the boiling points of some isomers
1. Boiling points for two sets of isomers are listed below:
2. Within a given series, how does the boiling point change as the number of carbon side-chains increases?
3. Match each boiling point to the appropriate C7H16 isomer: 98.4 °C, 92.0 °C, 79.2 °C.
4. Write a summary paragraph explaining what you have learned in activities one, two and three with regard to the following terms: | http://dwb.unl.edu/calculators/activities/BP.html | 13
11 | A variable is a container that holds values that are used in a Java program. To be able to use a variable it needs to be declared. Declaring variables is normally the first thing that happens in any program.
How to Declare a Variable
Java is a strongly typed programming language. This means that every variable must have a data type associated with it. For example, a variable could be declared to use one of the eight primitive data types: byte, short, int, long, float, double, char or boolean.
A good analogy for a variable is to think of a bucket. We can fill it to a certain level, we can replace what's inside it, and sometimes we can add or take something away from it. When we declare a variable to use a data type it's like putting a label on the bucket that says what it can be filled with. Let's say the label for the bucket is "Sand". Once the label is attached, we can only ever add or remove sand from the bucket. Anytime we try and put anything else into it, we will get stopped by the bucket police. In Java, you can think of the compiler as the bucket police. It ensures that programmers declare and use variables properly.
To declare a variable in Java, all that is needed is the data type followed by the variable name:
int numberOfDays;
In the above example, a variable called "numberOfDays" has been declared with a data type of int. Notice how the line ends with a semi-colon. The semi-colon tells the Java compiler that the declaration is complete.
Now that it has been declared, numberOfDays can only ever hold values that match the definition of the data type (i.e., for an int data type the value can only be a whole number between -2,147,483,648 and 2,147,483,647).
Declaring variables for other data types is exactly the same:
byte nextInStream;
short hour;
long totalNumberOfStars;
float reactionTime;
double itemPrice;
Before a variable can be used it must be given an initial value. This is called initializing the variable. If we try to use a variable without first giving it a value:
int numberOfDays;
//try and add 10 to the value of numberOfDays
numberOfDays = numberOfDays + 10;
the compiler will throw an error:
variable numberOfDays might not have been initialized
To initialize a variable we use an assignment statement. An assignment statement follows the same pattern as an equation in mathematics (e.g., 2 + 2 = 4). There is a left side of the equation, a right side and an equals sign (i.e., "=") in the middle. To give a variable a value, the left side is the name of the variable and the right side is the value:
int numberOfDays;
numberOfDays = 7;
In the above example, numberOfDays has been declared with a data type of int and has been given an initial value of 7. We can now add ten to the value of numberOfDays because it has been initialized:
int numberOfDays;
numberOfDays = 7;
numberOfDays = numberOfDays + 10;
System.out.println(numberOfDays);
Typically, the initializing of a variable is done at the same time as its declaration:
//declare the variable and give it a value all in one statement
int numberOfDays = 7;
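Pulling the snippets together, the short class below is a runnable sketch showing declaration, initialization and an update in one program; the class name and the extra variables are illustrative and not part of the original article.

public class VariableDemo {
    public static void main(String[] args) {
        // Declare and initialize in one statement.
        int numberOfDays = 7;
        double bookPrice = 12.99;
        boolean isWeekend = false;
        char grade = 'A';

        // Variables can be updated after they have been initialized.
        numberOfDays = numberOfDays + 10;

        System.out.println("numberOfDays: " + numberOfDays); // prints 17
        System.out.println("bookPrice: " + bookPrice);
        System.out.println("isWeekend: " + isWeekend);
        System.out.println("grade: " + grade);
    }
}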
Choosing Variable Names
The name given to a variable is known as an identifier. As the term suggests, the way the compiler knows which variables it's dealing with is through the variable's name.
There are certain rules for identifiers:
- reserved words cannot be used.
- they cannot start with a digit but digits can be used after the first character (e.g., name1, n2ame are valid).
- they can start with a letter, an underscore (i.e., "_") or a dollar sign (i.e., "$").
- you cannot use other symbols or spaces (e.g., "%","^","&","#").
Always give your variables meaningful identifiers. If a variable holds the price of a book, then call it something like "bookPrice". If each variable has a name that makes it clear what it's being used for, it will make finding errors in your programs a lot easier.
Finally, there are naming conventions in Java that I would encourage you to use. You may have noticed that all the examples I have given follow a certain pattern. When more than one word is used in combination in a variable name, each word after the first starts with a capital letter (e.g., reactionTime, numberOfDays). This is known as mixed case and is the preferred choice for variable identifiers. | http://java.about.com/od/understandingdatatypes/a/declaringvars.htm | 13
A gender role is a set of social and behavioral norms that are generally considered appropriate for either a man or a woman in a social or interpersonal relationship. Gender roles vary widely between cultures and even in the same cultural tradition have differed over time and context. There are differences of opinion as to which observed differences in behavior and personality between genders are entirely due to the innate personality of the person and which are due to cultural or social factors, and are therefore the product of socialization, or to what extent gender differences are due to biological and physiological differences.
Views on gender-based differentiation in the workplace and in interpersonal relationships have often undergone profound changes as a result of feminist and/or economic influences, but there are still considerable differences in gender roles in almost all societies. It is also true that in times of necessity, such as during a war or other emergency, women are permitted to perform functions which in "normal" times would be considered a male role, or vice versa.
Gender has several definitions. It usually refers to a set of characteristics that are considered to distinguish between male and female, reflect one's biological sex, or reflect one's gender identity. Gender identity is the gender(s), or lack thereof, a person self-identifies as; it is not necessarily based on biological sex, either real or perceived, and it is distinct from sexual orientation. It is one's internal, personal sense of being a man or a woman (or a boy or girl). There are two main genders: masculine (male) and feminine (female), although some cultures acknowledge more genders. Androgyny, for example, has been proposed as a third gender. Some societies have more than five genders, and some non-Western societies have three genders – man, woman and third gender. Gender expression refers to the external manifestation of one's gender identity, through "masculine," "feminine," or gender-variant or gender neutral behavior, clothing, hairstyles, or body characteristics.
Gender role theory
Gender role theory posits that boys and girls learn the appropriate behavior and attitudes from the family and overall culture they grow up with, and so non-physical gender differences are a product of socialization. Social role theory proposes that the social structure is the underlying force for the gender differences. Social role theory proposes that the sex-differentiated behavior is driven by the division of labor between two sexes within a society. Division of labor creates gender roles, which in turn, lead to gendered social behavior.
The physical specialization of the sexes is considered to be the distal cause of gender roles. Men’s unique physical advantages in terms of body size and upper body strength provided them an edge over women in those social activities that demanded such physical attributes such as hunting, herding and warfare. On the other hand, women’s biological capacity for reproduction and child-bearing is proposed to explain their limited involvement in other social activities. Such divided activity arrangement for the purpose of achieving activity-efficiency led to the division of labor between sexes. Social role theorists have explicitly stressed that the labor division is not narrowly defined as that between paid employment and domestic activities, rather, is conceptualized to include all activities performed within a society that are necessary for its existence and sustainability. The characteristics of the activities performed by men and women became people's perceptions and beliefs of the dispositional attributes of men or women themselves. Through the process of correspondent inference (Gilbert, 1998), division of labor led to gender roles, or gender stereotype. Ultimately, people expect men and women who occupy certain position to behave according to these attributes.
These socially constructed gender roles are considered to be hierarchical and characterized as a male-advantaged gender hierarchy (Wood & Eagly, 2002). The activities men were involved in were often those that provided them with more access to or control of resources and decision making power, rendering men not only superior dispositional attributes via correspondence bias (Gilbert, 1998), but also higher status and authority as society progressed. The particular pattern of the labor division within a certain society is a dynamic process and determined by its specific economical and cultural characteristics. For instance, in an industrial economy, the emphasis on physical strength in social activities becomes less compared with that in a less advanced economy. In a low birth rate society, women will be less confined to reproductive activities and thus more likely to be involved in a wide range of social activities. The beliefs that people hold about the sexes are derived from observations of the role performances of men and women and thus reflect the sexual division of labor and gender hierarchy of the society (Eagly et al., 2000).
The consequences of gender roles and stereotypes are sex-typed social behavior (Eagly et al., 2004) because roles and stereotypes are both socially shared descriptive norms and prescriptive norms. Gender roles provide guides to normative behaviors that are typical, ought-to-be and thus "likely effective" for each sex within a certain social context. Gender roles also depict ideal, should-be, and thus desirable behaviors for men and women who are occupying a particular position or involved in certain social activities. Put another way, men and women, as social beings, strive to belong and seek approval by complying and conforming to the social and cultural norms within their society. The conformity to social norms not only shapes the pattern, but also maintains the very existence of sex-typed social behavior (Eagly et al., 2004).
In summary, social role theory “treats these differing distributions of women and men into roles as the primary origin of sex-differentiated social behavior, their impact on behavior is mediated by psychological and social processes” (Eagly, 1997), including “developmental and socialization processes, as well as by processes involved in social interaction (e.g., expectancy confirmation) and self-regulation” (Eagly et al., 2004).
The cognitive development theory of gender roles is mentioned in Human Sexuality by Janelle Carroll. This assumes that children go through a pattern of development that is universal to all. This theory follows Piaget's proposition that children can only process a certain amount of information at each stage of development. As children mature they become more aware that gender roles are situational. Therefore theorists predict that rigid gender role behavior may decrease around the ages of 7 or 8. Carroll also mentions a theory under the name of "Gender Schema Theory: Our Cultural Maps" which was first proposed by Sandra Bem. Bem believed that we all thought according to schemas, which is a cognitive way to organize our world. She further said that we all have a gender schema to organize the ways we view gender around us. Information is consistently being transferred to us about gender and what it is to be masculine and feminine. This is where Bem splits from cognitive theorists who believe gender is important "to children because of their physicalistic ways of thinking". Carroll also says that the gender schema can become so ingrained that we are not aware of its power.
Social construction of gender difference
This perspective proposes that gender difference is socially constructed (see Social construction of gender difference). Social constructionism of gender moves away from socialization as the origin of gender differences; people do not merely internalize gender roles as they grow up but they respond to changing norms in society. Children learn to categorize themselves by gender very early on in life. A part of this is learning how to display and perform gendered identities as masculine or feminine. Boys learn to manipulate their physical and social environment through physical strength or other skills, while girls learn to present themselves as objects to be viewed. Children monitor their own and others' gendered behavior. Gender-segregated children's activities create the appearance that gender differences in behavior reflect an essential nature of male and female behavior.
Judith Butler, in works such as Gender Trouble and Undoing Gender, contends that being female is not "natural" and that it appears natural only through repeated performances of gender; these performances in turn, reproduce and define the traditional categories of sex and/or gender. A social constructionist view looks beyond categories and examines the intersections of multiple identities, the blurring of the boundaries of essentialist categories. This is especially true with regards to categories of male and female that are typically viewed by others as binary and opposites of each other. By deconstructing categories of gender, the value placed on masculine traits and behaviors disappears. However, the elimination of categories makes it difficult to make any comparisons between the genders or to argue and fight against male domination.
Talcott Parsons' view
Working in the United States, Talcott Parsons developed a model of the nuclear family in 1955, which at that place and time was the prevalent family structure. It compared a strictly traditional view of gender roles (from an industrial-age American perspective) to a more liberal view.
The Parsons model was used to contrast and illustrate extreme positions on gender roles. Model A describes total separation of male and female roles, while Model B describes the complete dissolution of gender roles. (The examples are based on the context of the culture and infrastructure of the United States.)
|Model A – Total role segregation||Model B – Total integration of roles|
|Education||Gender-specific education; high professional qualification is important only for the man||Co-educative schools, same content of classes for girls and boys, same qualification for men and women.|
|Profession||The workplace is not the primary area of women; career and professional advancement is deemed unimportant for women||For women, career is just as important as for men; equal professional opportunities for men and women are necessary.|
|Housework||Housekeeping and child care are the primary functions of the woman; participation of the man in these functions is only partially wanted.||All housework is done by both parties to the marriage in equal shares.|
|Decision making||In case of conflict, man has the last say, for example in choosing the place to live, choice of school for children, buying decisions||Neither partner dominates; solutions do not always follow the principle of finding a concerted decision; status quo is maintained if disagreement occurs.|
|Child care and education||Woman takes care of the largest part of these functions; she educates children and cares for them in every way||Man and woman share these functions equally.|
However, these structured positions become less common in a liberal-individualist society; actual behavior of individuals is usually somewhere between these poles.
According to the interactionist approach, roles (including gender roles) are not fixed, but are constantly negotiated between individuals. In North America and southern South America, this is the most common approach among families whose business is agriculture.
Gender roles can influence all kinds of behaviors, such as choice of clothing, choice of work and personal relationships, e.g., parental status (See also Sociology of fatherhood).
The process through which the individual learns and accepts roles is called socialization. Socialization works by encouraging wanted and discouraging unwanted behavior. These sanctions by agents of socialization such as the family, schools, and the media make it clear to the child what is expected of the child by society. Mostly, accepted behavior is not produced by outright reforming coercion from an accepted social system. In some other cases, various forms of coercion have been used to acquire a desired response or function.
Homogenization vs. ethnoconvergence difference
It is claimed that even in monolingual, industrial societies like much of urban North America, some individuals do cling to a "modernized" primordial identity, apart from others and with this a more diverse gender role is recognized or developed. Some intellectuals, such as Michael Ignatieff, argue that convergence of a general culture does not directly entail a similar convergence in ethnic, social and self identities. This can become evident in social situations, where people divide into separate groups by gender roles and cultural alignments, despite being of an identical "super-ethnicity", such as nationality.
Within each smaller ethnicity, individuals may tend to see it perfectly justified to assimilate with other cultures, including sexuality, while others view assimilation as wrong and incorrect for their culture or institution. This common theme, representing dualist opinions of ethnoconvergence itself within a single ethnic or common-values group, is often manifested in issues of sexual partners and matrimony, employment preferences, etc. These varied opinions of ethnoconvergence represent themselves in a spectrum; assimilation, homogenization, acculturation, gender identities and cultural compromise are commonly used terms for ethnoconvergence which flavor the issues to a bias.
Often it is in a secular, multi-ethnic environment that cultural concerns are both minimized and exacerbated; ethnic pride is boasted, hierarchy is created ("center" culture versus "periphery"), but on the other hand, they will still share a common "culture", and common language and behaviors. Often the elderly, more conservative-in-association members of a clan, tend to reject cross-cultural associations, and participate in ethnically similar community-oriented activities.
Anthropology and evolution
The idea that differences in gender roles originate in differences in biology has found support in parts of the scientific community. 19th-century anthropology sometimes used descriptions of the imagined life of paleolithic hunter-gatherer societies for evolutionary explanations for gender differences. For example, those accounts maintain that the need to take care of offspring may have limited the females' freedom to hunt and assume positions of power.
Due to the influence of (among others) Simone de Beauvoir's feminist works and Michel Foucault's reflections on sexuality, the idea that gender was unrelated to sex gained ground during the 1980s, especially in sociology and cultural anthropology. This view claims that a person could therefore be born with male genitals but still be of feminine gender. In 1987, R.W. Connell did extensive research on whether there are any connections between biology and gender role and concluded that there were none. However, there continues to be debate on the subject. Simon Baron-Cohen, a Cambridge Univ. professor of psychology and psychiatry, claims that "the female brain is predominantly hard-wired for empathy, while the male brain is predominantly hard-wired for understanding and building systems."
Dr. Sandra Lipsitz Bem is a psychologist who developed the gender schema theory to explain how individuals come to use gender as an organizing category in all aspects of their life. It is based on the combination of aspects of the social learning theory and the cognitive-development theory of sex role acquisition. In 1971, she created the Bem Sex Role Inventory to measure how well an individual fits a traditional gender role by characterizing his or her personality as masculine, feminine, androgynous, or undifferentiated. She believed that through gender-schematic processing, a person spontaneously sorts attributes and behaviors into masculine and feminine categories. Therefore, an individual processes information and regulates his or her behavior based on whatever definitions of femininity and masculinity their culture provides.
The current trend in Western societies toward men and women sharing similar occupations, responsibilities and jobs suggests that the sex one is born with does not directly determine one's abilities. While there are differences in average capabilities of various kinds (E.g. physical strength) between the sexes, the capabilities of some members of one sex will fall within the range of capabilities needed for tasks conventionally assigned to the other sex.
In addition, research at the Yerkes National Primate Research Center has also shown that gender roles may be biological among primates. Yerkes researchers studied the interactions of 11 male and 23 female Rhesus monkeys with human toys, both wheeled and plush. The males played mostly with the wheeled toys while the females played with both types equally. Psychologist Kim Wallen has, however, warned against overinterpreting the results, as the color and size of the toys may also be factors in the monkeys' behavior.
Changing roles
A person's gender role is composed of several elements and can be expressed through clothing, behaviour, choice of work, personal relationships and other factors. These elements are not concrete and have evolved through time (for example women's trousers).
Traditionally only feminine and masculine gender roles existed, however, over time many different acceptable male or female gender roles have emerged. An individual can either identify themselves with a subculture or social group which results in them having diverse gender roles. Historically, for example, eunuchs had a different gender role because their biology was changed.
Androgyny, a term denoting the display of both male and female behaviour, also exists. Many terms have been developed to portray sets of behaviors arising in this context. The masculine gender role in the West has become more malleable since the 1950s. One example is the "sensitive new age guy", which could be described as a traditional male gender role with a more typically "female" empathy and associated emotional responses. Another is the metrosexual, a male who adopts or claims to be born with similarly "female" grooming habits. Some have argued that such new roles are merely rebelling against tradition more so than forming a distinct role. However, traditions regarding male and female appearance have never been concrete, and men in other eras have been equally interested with their appearance. The popular conceptualization of homosexual men, which has become more accepted in recent decades, has traditionally been more androgynous or effeminate, though in actuality homosexual men can also be masculine and even exhibit machismo characteristics. One could argue that since many homosexual men and women fall into one gender role or another or are androgynous, that gender roles are not strictly determined by a person's physical sex. Whether or not this phenomenon is due to social or biological reasons is debated. Many homosexual people find the traditional gender roles to be very restrictive, especially during childhood. Also, the phenomenon of intersex people, which has become more publicly accepted, has caused much debate on the subject of gender roles. Many intersexual people identify with the opposite sex, while others are more androgynous. Some see this as a threat to traditional gender roles, while others see it as a sign that these roles are a social construct, and that a change in gender roles will be liberating.
According to sociology research, traditional feminine gender roles have become less relevant in Western society since industrialization started. For example, the cliché that women do not follow a career is obsolete in many Western societies. On the other hand, the media sometimes portrays women who adopt an extremely classical role as a subculture. Women take on many roles that were traditionally reserved for men, as well as behaviors and fashions, which may cause pressure on many men to be more masculine and thus confined within an even smaller gender role, while other men react against this pressure. For example, men's fashions have become more restrictive than in other eras, while women's fashions have become more broad. One consequence of social unrest during the Vietnam War era was that men began to let their hair grow to a length that had previously (within recent history) been considered appropriate only for women. Somewhat earlier, women had begun to cut their hair to lengths previously considered appropriate only to men.
Some famous people known for their androgynous appearances in the 20th century include Brett Anderson, Gladys Bentley, David Bowie, Pete Burns, Boy George, Norman Iceberg, k.d. lang, Annie Lennox, Jaye Davidson, Marilyn Manson, Freddie Mercury, Marlene Dietrich, Mylène Farmer, Gackt, Mana (musician), Michael Jackson, Grace Jones, Marc Bolan, Brian Molko, Julia Sweeney (as Pat), Genesis P-Orridge, Prince and Kristen McMenamy.
Ideas of appropriate behavior according to gender vary among cultures and era, although some aspects receive more widespread attention than others. R.W. Connell in Men, Masculinities and Feminism claims:
- "There are cultures where it has been normal, not exceptional, for men to have homosexual relations. There have been periods in 'Western' history when the modern convention that men suppress displays of emotion did not apply at all, when men were demonstrative about their feeling for their friends. Mateship in the Australian outback last century is a case in point."
Other aspects, however, may differ markedly with time and place. In the Middle Ages, women were commonly associated with roles related to medicine and healing. Due to the rise of witch-hunts across Europe and the institutionalization of medicine, these roles eventually came to be monopolized by men. In the last few decades, however, these roles have become largely gender-neutral in Western society.
The elements of convention or tradition seem to play a dominant role in deciding which occupations fit in with which gender roles. In the United States, physicians have traditionally been men, and the few people who defied that expectation received a special job description: "woman doctor". Similarly, there were special terms like "male nurse", "woman lawyer", "lady barber", "male secretary," etc. But in the former Soviet Union countries, medical doctors are predominantly women. Also, throughout history, some jobs that have been typically male or female have switched genders. For example, clerical jobs used to be considered men's jobs, but when several women began filling men's job positions due to World War II, clerical jobs quickly became dominated by women. The field became more feminized, and women workers became known as "typewriters" or "secretaries". There are many other jobs that have switched gender roles. Many jobs are continually evolving as far as being dominated by women or men.
The majority of Western society is not often tolerant of one gender fulfilling another role. In fact, homosexual communities are more tolerant of and do not complain about such behavior. For instance, someone with a masculine voice, a five o'clock shadow (or a fuller beard), an Adam's apple, etc., wearing a woman's dress and high heels, carrying a purse, etc., would most likely draw ridicule or other unfriendly attention in ordinary social contexts (the stage and screen excepted). It is seen by some in that society that such a gender role for a man is not acceptable. The traditions of a particular culture often direct that certain career choices and lifestyles are appropriate to men, and other career choices and lifestyles are appropriate to women. In recent years, many people have strongly challenged the social forces that would prevent people from taking on non-traditional gender roles, such as women becoming fighter pilots or men becoming stay-at-home fathers. Men who defy or fail to fulfill their expected gender role are often called effeminate. In modern western societies, women who fail to fulfill their expected gender roles frequently receive only minor criticism for doing so.
I Corinthians 11:14 and 15 indicates that it is inappropriate for a man to wear his hair long, and good for a woman to wear her hair long.
Muhammad described the high status of mothers in both of the major hadith Collections (Bukhari and Muslim). One famous account is:
"A man asked the Prophet: 'Whom should I honor most?' The Prophet replied: 'Your mother'. 'And who comes next?' asked the man. The Prophet replied: 'Your mother'. 'And who comes next?' asked the man. The Prophet replied: 'Your mother!'. 'And who comes next?' asked the man. The Prophet replied: 'Your father'"
In Islam, the primary role played by women is to be mothers, and mothers are considered the most important part of the family. A well-known Hadith of the prophet says: "I asked the Prophet who has the greatest right over a man, and he said, 'His mother'". While a woman is considered the most important member of the family, she is not the head of the family. Therefore, it is possible to conclude that importance is not tied to being the head of the family.
Hindu deities are more ambiguously gendered than deities of other world religions, such as Christianity, Islam, and others. For example, Shiva, deity of creation, destruction, and fertility, can appear entirely or mostly male, predominately female, and ambiguously gendered. Despite this, females are more restricted than males in their access to sacred objects and spaces, such as the inner sanctums in Hindu temples. This can be explained in part because women's bodily functions, such as menstruation, are often seen by both men and women as polluting and/or debilitating. Males are therefore more symbolically associated with divinity and higher morals and ethics than females are. This informs female and male relations, and informs how the differences between males and females are understood.
However, in a religious cosmology like Hinduism, which prominently features female and androgynous deities, some gender transgression is allowed. For instance, in India, a group of people adorn themselves as women and are typically considered to be neither man nor woman, or man plus woman. This group is known as the hijras, and has a long tradition of performing in important rituals, such as the birth of sons and weddings. Despite this allowance for transgression, Hindu cultural traditions portray women in contradictory ways. On one hand, women’s fertility is given great value, and on the other, female sexuality is depicted as potentially dangerous and destructive.
In the USA, single men are greatly outnumbered by single women at a ratio of 100 single women to every 86 single men, though never-married men over age 15 outnumber women by a 5:4 ratio (33.9% to 27.3%) according to the 2006 US Census American Community Survey. This very much depends on age group, with 118 single men per 100 single women in their 20s, versus 33 single men to 100 single women over 65.
The numbers are different in other countries. For example, China has many more young men than young women, and this disparity is expected to increase. In regions with recent conflict such as Chechnya, women may greatly outnumber men.
In a cross-cultural study by David Buss, men and women were asked to rank certain traits in order of importance in a long-term partner. Both men and women ranked "kindness" and "intelligence" as the two most important factors. Men valued beauty and youth more highly than women, while women valued financial and social status more highly than men.
Masculine and feminine cultures and individuals generally differ in how they communicate with others. For example, feminine people tend to self-disclose more often than masculine people, and in more intimate details. Likewise, feminine people tend to communicate more affection, and with greater intimacy and confidence than masculine people. Generally speaking, feminine people communicate more and prioritize communication more than masculine people.
Traditionally, masculine people and feminine people communicate with people of their own gender in different ways. Masculine people form friendships with other masculine people based on common interests, while feminine people build friendships with other feminine people based on mutual support. However, both genders initiate opposite-gender friendships based on the same factors. These factors include proximity, acceptance, effort, communication, common interests, affection and novelty.
Context is very important when determining how we communicate with others. It is important to understand what script it is appropriate to use in each respective relationship. Specifically, understanding how affection is communicated in a given context is extremely important. For example, masculine people expect competition in their friendships. They avoid communicating weakness and vulnerability. They avoid communicating personal and emotional concerns. Masculine people tend to communicate affection by including their friends in activities and exchanging favors. Masculine people tend to communicate with each other shoulder-to-shoulder (e.g. watching sports on a television).
In contrast, feminine people do not mind communicating weakness and vulnerability. In fact, they seek out friendships more in these times. For this reason, feminine people often feel closer to their friends than masculine people do. Feminine people tend to value their friends for listening and communicating non-critically, communicating support, communicating feelings of enhanced self-esteem, communicating validation, offering comfort and contributing to personal growth. Feminine people tend to communicate with each other face-to-face (e.g. meeting together to talk over lunch).
Communicating with a friend of the opposite gender is often difficult because of the fundamentally different scripts that masculine people and feminine people use in their friendships. Another challenge in these relationships is that masculine people associate physical contact with communicating sexual desire more than feminine people. Masculine people also desire sex in their opposite-gender relationships more than feminine people. This presents serious challenges in cross-gender friendship communication. In order to overcome these challenges, the two parties must communicate openly about the boundaries of the relationship.
Communication and gender cultures
A communication culture is a group of people with an existing set of norms regarding how they communicate with each other. These cultures can be categorized as masculine or feminine. Other communication cultures include African Americans, older people, Indian Native Americans, gay men, lesbians, and people with disabilities. Gender cultures are primarily created and sustained by interaction with others. Through communication we learn about what qualities and activities our culture prescribes to our sex.
While it is commonly believed that our sex is the root source of differences in how we relate and communicate to others, it is actually gender that plays a larger role. Whole cultures can be broken down into masculine and feminine, each differing in how they get along with others through different styles of communication. Julia T. Wood's studies explain that "communication produces and reproduces cultural definitions of masculinity and femininity." Masculine and feminine cultures differ dramatically in when, how and why they use communication.
Communication styles
Research by Deborah Tannen reported the following general differences between men's and women's communication styles:
- Men tend to talk more than women in public situations, but women tend to talk more than men at home.
- Women are more inclined to face each other and make eye contact when talking, while men are more likely to look away from each other.
- Men tend to jump from topic to topic, but women tend to talk at length about one topic.
- When listening, women make more noises such as “mm-hmm” and “uh-huh”, while men are more likely to listen silently.
- Women are inclined to express agreement and support, while men are more inclined to debate.
The studies also reported that in general both genders communicated in similar ways. Critics, including Suzette Haden Elgin, have suggested that Tannen's findings may apply more to women of certain specific cultural and economic groups than to women in general. Although it is widely believed that women speak far more words than men, this is actually not the case.
Julia T. Wood describes how "differences between gender cultures infuse communication." These differences begin in childhood. Maltz and Borker's research showed that the games children play contribute to socializing children into masculine and feminine cultures. For example, girls playing house promotes personal relationships, and playing house does not necessarily have fixed rules or objectives. Boys, however, tended to play more competitive team sports with different goals and strategies. These childhood differences lead women to operate from assumptions about communication and use rules for communication that differ significantly from those endorsed by most men. Wood produced the following theories regarding gender communication:
- Misunderstandings stem from differing interaction styles
- Men and women have different ways of showing support, interest and caring
- Men and women often perceive the same message in different ways
- Women tend to see communication more as a way to connect and enhance the sense of closeness in the relationship
- Men see communication more as a way to accomplish objectives
- Women give more response cues and nonverbal cues to indicate interest and build a relationship
- Men use feedback to signal actual agreement and disagreement
- For women, "ums" "uh-huhs" and "yeses" simply mean they are showing interest and being responsive
- For men, these same responses indicate agreement or disagreement with what is being communicated
- For women, talking is the primary way to become closer to another person
- For men, shared goals and accomplishing tasks is the primary way to become close to another person
- Men are more likely to express caring by doing something concrete for or doing something together with another person
- Women can avoid being hurt by men by realizing how men communicate caring
- Men can avoid being hurt by women by realizing how women communicate caring
- Women who want to express caring to men can do so more effectively by doing something for them or doing something with them
- Men who want to express caring to women can do so more effectively by verbally communicating that they care
- Men emphasize independence and are therefore less likely to ask for help in accomplishing an objective
- Men are much less likely to ask for directions when they are lost than women
- Men desire to maintain autonomy and to not appear weak or incompetent
- Women develop identity within relationships more than men
- Women seek out and welcome relationships with others more than men
- Men tend to think that relationships jeopardize their independence
- For women, relationships are a constant source of interest, attention and communication
- For men, relationships are not as central
- The term "Talking about us" means very different things to men and women
- Men feel that there is no need to talk about a relationship that is going well
- Women feel that a relationship is going well as long as they are talking about it
- Women can avoid being hurt by realizing that men don't necessarily feel the need to talk about a relationship that is going well
- Men can help improve communication in a relationship by applying the rules of feminine communication
- Women can help improve communication in a relationship by applying the rules of masculine communication
- Just as Western communication rules wouldn't necessarily apply in an Asian culture, masculine rules wouldn't necessarily apply in a feminine culture, and vice versa.
Finally, Wood describes how different genders can communicate to one another and provides six suggestions to do so.
- Individuals should suspend judgment. When a person finds himself or herself confused in a cross-gender conversation, he or she should resist the tendency to judge and instead explore what is happening and how that person and their partner might better understand each other.
- Recognize the validity of different communication styles. Feminine tendency to emphasize relationships, feelings and responsiveness does not reflect inability to adhere to masculine rules for competing any more than masculine stress on instrumental outcomes is a failure to follow feminine rules for sensitivity to others. Wood says that it is inappropriate to apply a single criterion - either masculine or feminine - to both genders' communication. Instead, people must realize that different goals, priorities and standards pertain to each.
- Provide translation cues. Following the previous suggestions helps individuals realize that men and women tend to learn different rules for interaction and that it makes sense to think about helping the other gender translate your communication. This is especially important because there is no reason why one gender should automatically understand the rules that are not part of his or her gender culture.
- Seek translation cues. Interactions can also be improved by seeking translation cues from others. Taking constructive approaches to interactions can help improve the opposite gender culture's reaction.
- Enlarge your own communication style. By studying other culture's communication we learn not only about other cultures, but also about ourselves. Being open to learning and growing can enlarge one's own communication skills by incorporating aspects of communication emphasized in other cultures. According to Wood, individuals socialized into masculinity could learn a great deal from feminine culture about how to support friends. Likewise, feminine cultures could expand the ways they experience intimacy by appreciating "closeness in doing" that is a masculine specialty.
- Wood reiterates again, as her sixth suggestion, that individuals should suspend judgment. This concept is incredibly important because judgment is such a part of Western culture that it is difficult not to evaluate and critique others and defend our own positions. While gender cultures are busy judging other gender cultures and defending themselves, they are making no headway in communicating effectively. So, suspending judgment is the first and last principle for effective cross-gender communication.
Gender stereotypes
Stereotypes create expectations regarding emotional expression and emotional reaction. Many studies find that emotional stereotypes and the display of emotions "correspond to actual gender differences in experiencing emotion and expression."
Stereotypes generally dictate how and by whom and when it is socially acceptable to display an emotion. Reacting in a stereotype-consistent manner may result in social approval while reacting in a stereotype-inconsistent manner could result in disapproval. It should be noted that what is socially acceptable varies substantially over time and between local cultures and subcultures.
According to Niedenthal et al.:
Virginia Woolf, in the 1920s, made the point: "It is obvious that the values of women differ very often from the values which have been made by the other sex. Yet it is the masculine values that prevail" (A Room of One's Own, N.Y. 1929, p. 76). Sixty years later, psychologist Carol Gilligan was to take up the point, and use it to show that psychological tests of maturity have generally been based on masculine parameters, and so tended to show that women were less 'mature'. She countered this in her ground-breaking work, In a Different Voice, (Harvard University Press, 1982), holding that maturity in women is shown in terms of different, but equally important, human values.
Communication and sexual desire
Mets, et al. explain that sexual desire is linked to emotions and communicative expression. Communication is central in expressing sexual desire and "complicated emotional states," and is also the "mechanism for negotiating the relationship implications of sexual activity and emotional meanings." Gender differences appear to exist in communicating sexual desire.
For example, masculine people are generally perceived to be more interested in sex than feminine people, and research suggests that masculine people are more likely than feminine people to express their sexual interest. This can be attributed to masculine people being less inhibited by social norms for expressing their desire, being more aware of their sexual desire or succumbing to the expectation of their gender culture. When feminine people employ tactics to show their sexual desire, they are typically more indirect in nature.
Various studies show different communication strategies with a feminine person refusing a masculine person's sexual interest. Some research, like that of Murnen, show that when feminine people offer refusals, the refusals are verbal and typically direct. When masculine people do not comply with this refusal, feminine people offer stronger and more direct refusals. However, research from Perper and Weis showed that rejection includes acts of avoidance, creating distractions, making excuses, departure, hinting, arguments to delay, etc. These differences in refusal communication techniques are just one example of the importance of communicative competence for both masculine and feminine gender cultures.
As long as a person's perceived physiological sex is consistent with that person's gender identity, the gender role of a person is so much a matter of course in a stable society that people rarely even think of it. Only in cases where an individual has a gender role that is inconsistent with his or her sex will the matter draw attention. Some people mix gender roles to form a personally comfortable androgynous combination or violate the scheme of gender roles completely, regardless of their physiological sex. People who are transgender have a gender identity or expression that differs from the sex which they were assigned at birth. The Preamble of The Yogyakarta Principles cite the idea of the Convention on the Elimination of All Forms of Discrimination Against Women that "States must take measures to seek to eliminate prejudices and customs based on the idea of the inferiority or the superiority of one sex or on stereotyped roles for men and women." for the rights of transgender people.
For approximately the last 100 years women have been fighting for the same rights as men (especially around the turn from 19th to 20th century with the struggle for women's suffrage and in the 1960s with second-wave feminism and radical feminism) and were able to make changes to the traditionally accepted feminine gender role. However, most feminists today say there is still work to be done.
Numerous studies and statistics show that even though the situation for women has improved during the last century, discrimination is still widespread: women earn an average of 77 cents to every one dollar men earn ("The Shriver Report", 2009), occupy lower-ranking job positions than men, and do most of the housekeeping work. There are several reasons for the wage disparity. A recent (October 2009) report from the Center for American Progress, "The Shriver Report: A Woman's Nation Changes Everything" tells us that women now make up 48% of the US workforce and "mothers are breadwinners or co-breadwinners in a majority of families" (63.3%, see figure 2, page 19 of the Executive Summary of The Shriver Report).
A recent article in The New York Times indicated that gender roles are still prevalent in many upscale restaurants. A restaurant's decor and menu typically play into which gender frequents which restaurant. Whereas Cru, a restaurant in New York's Greenwich Village, "decorated in clubby brown tones and distinguished by a wine list that lets high rollers rack up breathtaking bills," attracts more men than women, places like Mario Batali's Otto serve more women than men, because the restaurant has been "designed to be more approachable, with less swagger." When serving both men and women at the same table, servers still often assume that the male is the go-to person for receiving the check and making the wine decisions, but this assumption appears to be applied with more caution, especially with groups of younger people. Restaurants that used to cater mostly to men or to women are now also trying to change their decor in the hopes of attracting a more gender-balanced clientele.
Note that many people consider some or all of the following terms to have negative connotations.
- A male adopting (or who is perceived as adopting) a female gender role might be described as effeminate, foppish, or sissy. Even more pejorative terms include mollycoddled, milksop, sop, mamma's boy, namby-pamby, pansy, fru-fru, girlie-boy, girlie-man, and nancy boy.
- A female adopting (or who is perceived as adopting) a male role might be described as butch, a dyke, a tomboy, or as an amazon (See amazon feminism). More pejorative terms include battleaxe.
Sexual orientation
The demographics of sexual orientation in any population is difficult to establish with reasonable accuracy. However, some surveys suggest that a greater proportion of men than women report that they are exclusively homosexual, whereas more women than men report being bisexual.
Studies have suggested that heterosexual men are only aroused by images of women, whereas some women who claim to be heterosexual are aroused by images of both men and women. However, different methods are required to measure arousal for the anatomy of a man versus that of a woman.
Traditional gender roles include male attraction to females, and vice versa. Homosexual and bisexual people, among others, usually don't conform to these expectations. An active conflict over the cultural acceptability of non-heterosexuality rages worldwide. The belief or assumption that heterosexual relationships and acts are "normal" is described – largely by the opponents of this viewpoint – as heterosexism or in queer theory, heteronormativity. Gender identity and sexual orientation are two separate aspects of individual identity, although they are often mistakenly conflated in the media.
Perhaps it is an attempt to reconcile this conflict that leads to a common assumption that one same-sex partner assumes a pseudo-male gender role and the other assumes a pseudo-female role. For a gay male relationship, this might lead to the assumption that the "wife" handled domestic chores, was the receptive sexual partner during sex, adopted effeminate mannerisms, and perhaps even dressed in women's clothing. This assumption is flawed, as many homosexual couples tend to have more equal roles, and the effeminate behavior of some gay men is usually not adopted consciously, and is often more subtle. Feminine or masculine behaviors in some homosexual people might be a product of the socialization process, adopted unconsciously due to stronger identification with the opposite sex during development. The role of both this process and the role of biology is debated.
Cohabitating couples with same-sex partners are typically egalitarian when they assign domestic chores. Though sometimes these couples assign traditional female responsibilities to one partner and traditional male responsibilities to the other, generally same-sex domestic partners challenge traditional gender roles in their division of household responsibilities, and gender roles within homosexual relationships are flexible. For instance, cleaning and cooking, traditionally both female responsibilities, might be assigned to different people. Carrington (1999) observed the daily home lives of 52 gay and lesbian couples and found that the length of the work week and level of earning power substantially affected the assignment of housework, regardless of gender or sexuality.
Cross-dressing is often restricted to festive occasions, though people of all sexual orientations routinely engage in various types of cross-dressing either as a fashion statement or for entertainment. Distinctive styles of dress, however, are commonly seen in gay and lesbian circles. These fashions sometimes emulate the traditional styles of the opposite gender (for example, lesbians who wear t-shirts and boots instead of skirts and dresses, or gay men who wear clothing with traditionally feminine elements, including displays of jewelry or coloration), but others do not. Fashion choices also do not necessarily align with other elements of gender identity. Some fashion and behavioral elements in gay and lesbian culture are novel and do not really correspond to any traditional gender roles, such as rainbow jewelry or the gay techno/dance music subculture. In addition to the stereotypically effeminate subculture, another significant gay male subculture is homomasculinity, which emphasizes certain traditionally masculine or hypermasculine traits.
The term dyke, commonly used to mean lesbian, sometimes carries associations of a butch or masculine identity, and the variant bulldyke certainly does. Other gender-role-charged lesbian terms include lipstick lesbian, chapstick lesbian, and stone femme. "Butch," "femme," and novel elements are also seen in various lesbian subcultures.
External social pressures may lead some people to adopt a persona which is perceived as more appropriate for a heterosexual (for instance, in an intolerant work environment) or homosexual (for instance, in a same-sex dating environment), while maintaining a somewhat different identity in other, more private circumstances. The acceptance of new gender roles in Western societies is rising. During childhood and adolescence, however, gender identities which differ from the norm are often the cause of ridicule and ostracism, which can result in psychological problems. Some are able to disguise their differences, but others are not. Even though much of society has become more tolerant, gender roles are still very prevalent in the emotionally charged world of children and teenagers, which makes life very difficult for those who differ from the established norms.
The role of ideology in enculturation
High levels of agreement across cultures on the characteristics attributed to males and females reflect a consensus in gender role ideology. The Netherlands, Germany, Finland, England and Italy are among the most egalitarian modern societies concerning gender roles, whereas the most traditional roles are found in Nigeria, Pakistan, India, Japan, and Malaysia. Across cultures, both men and women rate the ideal self as more masculine than their actual self. Women in nomadic, high-food-accumulator cultures are more likely to have their work honored and respected than those in sedentary, agricultural societies. In the United States, women conform to others more than men do. Men use more rational-appearing aggression than women, while women use more social manipulation than men do. This is related to, but not solely determined by, age and hormones, though some researchers suggest that women are not necessarily less aggressive than men but tend to show their aggression in more subtle and less overt ways (Bjorkqvist et al. 1994; Hines and Saudino 2003). Male aggression may be a "gender marking" issue, a breaking away from the instruction of the mother during adolescence. Native American gender roles depend on the cultural history of the tribe.
Criminal justice
A number of studies conducted since the mid-1990s have found a direct correlation between a female defendant's ability to conform to gender role stereotypes and the severity of her sentence, particularly in cases of murder committed in self-defense.
In prison
The following tendencies have been observed in U.S. prisons, not internationally. Gender roles in male prisons go further than the "don't drop the soap" joke. Some prisoners, either by choice or by force, take on strict 'female roles' according to prison-set guidelines. For instance, a 'female' in prison is seen as timid, submissive, passive, and a means of sexual pleasure. When entering the prison environment, some inmates "turn out" of their own free will, meaning they actively pursue the 'female role' to gain some form of social power or prestige. Other, less fortunate inmates are forced into 'female role' activities through coercion, most commonly physical abuse. Inmates who are forced to "turn out" are commonly referred to as "punks". Other terms used to describe 'female' inmates are "girls", "kids", and "gumps". Some of these labels describe one's ascribed status. For example, a "kid" is usually dominated by an owner, or "daddy"; the "daddy" is usually someone with high social status and prestige within the prison (e.g., a gang leader). The 'female' gender role is constructed as the mirror image of what the inmates perceive as male: inmates view men as having strength, power, prestige, and an unyielding personality. However, the inmates do not refer to the female guards, who hold power and prestige over them, as males; the female guards are commonly referred to as "dykes", "ditch lickers", and lesbians. These roles are also assumed in female prisons.
Women who enter prison society often voluntarily enter into lesbian relationships as a means of protection from gangs or stronger inmates. In doing so, they take on the submissive role to a dominant female in exchange for that dominant female keeping them safe. Those who do not enter voluntarily into such relationships might at one time or another be raped by a group as a means of introducing them into that circle, and are sometimes referred to as "sheep", meaning anyone can have them. It is to avoid that status that most female inmates choose a mate, or allow themselves to be chosen as one, which limits them to a small number of partners during their incarceration rather than a large one. So, in a sense, an inmate takes on a 'female role' in the prison system either by choice or by yielding to excessive coercion; it is that yielding that marks once 'male' inmates as "females" in men's prisons, and that identifies the stronger inmates in a women's prison as "males".
Content from Wikipedia was used in the development of this page. The Wikipedia version is Gender role. Special thank you to participants of Wikipedia's WikiProject LGBT studies!
References
- "What do we mean by "sex" and "gender"?". World Health Organization. Retrieved 2009-09-29.
- Gay and Lesbian Alliance Against Defamation. "GLAAD Media Reference Guide, 8th Edition: Transgender Glossary of Terms". GLAAD, USA, May 2010. Retrieved 2011-11-20.
- (Maccoby, E.E., Sex differences in intellectual functioning, 1966.; Bem S.L., 1975)
- Graham, Sharyn (2001), Sulawesi's fifth gender, Inside Indonesia, April–June 2001.
- Roscoe, Will (2000). Changing Ones: Third and Fourth Genders in Native North America. Palgrave Macmillan (June 17, 2000) ISBN 0-312-22479-6
See also: Trumbach, Randolph (1994). London’s Sapphists: From Three Sexes to Four Genders in the Making of Modern Culture. In Third Sex, Third Gender: Beyond Sexual Dimorphism in Culture and History, edited by Gilbert Herdt, 111-36. New York: Zone (MIT). ISBN 978-0-942299-82-3
- Eagly, A. H., Beall, A., & Sternberg, R. S. (Eds.). (2004). The psychology of gender (2nd ed.). New York: Guilford Press. ISBN 978-1593852443.
- Carroll, J. L. (2013). Sexuality now: embracing diversity: Gender role theory. Belmont, CA Wandsworth.p. 93
- Deutsch, F. M. (2007). Undoing gender. Gender and Society, 21, 106-127.
- Cahill, S. E. (1986). Childhood socialization as recruitment process: Some lessons from the study of gender development. In P. Adler and P. Adler (Eds.), Sociological studies of child development. Greenwich, CT: JAI Press.
- Fenstermaker, S. & West, C. (2002). Doing gender, doing difference: Inequality, power, and institutional change. New York, NY: Routledge. p. 8
- Butler, J. (1990). Gender trouble: Feminism and the subversion of identity. New York: Routledge.
- Franco-German TV Station ARTE, Karambolage, August 2004.
- Brockhaus: Enzyklopädie der Psychologie, 2001.
- Connell, Robert William: Gender and Power, Cambridge: University Press 1987.
- Bem,S.L.(1981). Gender schema theory:A cognitive account of sex typing. Psychological Review,88,354–364
- Gender Gap: The Biology of Male-Female Differences. David P. Barash, Judith Eve Lipton. Transaction Publishers, 2002.
- Barash, David P. Lipton, Judith Eve. "Gender Gap: The Biology of Male-Female Differences". Transaction Publishers, 2002.
- Yerkes Researchers Find Sex Differences in Monkey Toy Preferences Similar to Humans
- "Male monkeys prefer boys' toys". Newscientist.com. doi:10.1016/j.yhbeh.2008.03.008. Retrieved 2010-04-17.
- Ehrenreich, Barbara; Deirdre English (2010). Witches, Midwives and Nurses: A History of Women Healers (2nd ed.). The Feminist Press. pp. 44–87. ISBN 0-912670-13-4.
- Boulis, Ann K.; Jacobs, Jerry A. (2010). The changing face of medicine: women doctors and the evolution of health care in America. Ithaca, N.Y.: ILR. ISBN 0-8014-7662-3. "Encouraging one's daughter to pursue a career in medicine is no longer an unusual idea… Americans are now more likely to report that they feel comfortable recommending a career in medicine for a young woman than for a young man."
- Bullough, Vern L.; Bonnie Bullough (1993). Crossdressing, Sex, and Gender (1st ed.). University of Pennsylvania Press. p. 390. ISBN 978-0-8122-1431-4.
- Box Office Mojo, LLC (1998). "Cross Dressing / Gender Bending Movies". Box Office Mojo, LLC. Retrieved 2006-11-08.
- The Human Rights Campaign (2004). "Transgender Basics". The Human Rights Campaign. Archived from the original on November 9, 2006. Retrieved 2006-11-08.
- Peletz, Michael Gates. Gender, Sexuality, and Body Politics in Modern Asia. Ann Arbor, MI: Association for Asian Studies, 2011. Print.
- Men hold the edge on gender gap odds' Oakland Tribune October 21, 2003
- Facts for features: Valentine’s Day U.S. Census Bureau Report February 7, 2006
- '40m Bachelors And No Women' The Guardian March 9, 2004
- 'Polygamy Proposal for Chechen Men' BBC January 13, 2006
- Wood, J. T. (1998). Gender Communication, and Culture. In Samovar, L. A., & Porter, R. E., Intercultural communication: A reader. Stamford, CT: Wadsworth.
- Tannen, Deborah (1990) Sex, Lies and Conversation; Why Is It So Hard for Men and Women to Talk to Each Other? The Washington Post, June 24, 1990
- Maltz, D., & Borker, R. (1982). A cultural approach to male-female miscommunication. In J. Gumperz (Ed.), Language and social identity (pp. 196-216). Cambridge, UK: Cambridge University Press.
- Metts, S., Sprecher, S., & Regan, P. C. (1998). Communication and sexual desire. In P. A. Andersen & L. K. Guerrero (Eds.) Handbook of communication and emotion. (pp. 354-377). San Diego: Academic Press.
- Murnen, S. K., Perot, A., & Byrne, D. (1989). Coping with unwanted sexual activity: Normative responses, situational determinants, and individual differences. Journal of Sex Research, 26(1), 85–106.
- Perper, T., & Weis, D. L. (1987). Proceptive and rejective strategies of U.S. and Canadian college women. The Journal of Sex Research, 23, 455-480.
- Kiger, Kiger; Riley, Pamela J. (July 1, 1996). "Gender differences in perceptions of household labor". The Journal of Psychology. Retrieved 2009-10-23.
- Maria Shriver and the Center for American Progress (October 19, 2009). "The Shriver Report: A Woman's Nation Changes Everything". Center for American Progress. Retrieved 2009-10-23.
- The New York Times. "Old Gender Roles With Your Dinner?" Oct. 8, 2008.
- Statistics Canada, Canadian Community Health Survey, Cycle 2.1. off-site links: Main survey page.
- Pas de Deux of Sexuality Is Written in the Genes
- Dwyer, D. (2000). Interpersonal Relationships [e-book] (2nd ed.). Routledge. p. 104. ISBN 0-203-01971-7.
- Cherlin, Andrew (2010). Public and Private Families, an introduction. McGraw-Hill Companies, Inc. p. 234.
- Crook, Robert (2011). Our Sexuality. Wadsworth Cengage Learning. p. 271.
- Cherlin, Andrew (2010). Public and Private Families, an Introduction. McGraw-Hill Companies, Inc. p. 234.
- According to John Money, in the case of androgen-induced transsexual status, "The clitoris becomes hypertrophied so as to become a penile clitoris with incomplete fusion and a urogenital sinus, or, if fusion is complete, a penis with urethra and an empty scrotum" (Gay, Straight, and In-Between, p. 31). At ovarian puberty, "menstruation through the penis" begins (op. cit., p. 32). In the case of the adrenogenital syndrome, hormonal treatment could bring about "breast growth and menstruation through the penis" (op. cit., p. 34). In one case an individual was born with a fully formed penis and empty scrotum. At the age of puberty that person's own physician provided treatment with cortisol. "His breasts developed and heralded the approach of first menstruation, through the penis".
- Williams, J.E., & Best, D.L. (1990). Sex and psyche: Gender and self viewed cross-culturally. Newbury Park, CA: Sage
- Williams, J.E., & Best, D.L. (1990) Measuring sex stereotypes: A multination study. Newbury Park, CA: Sage
- van Leeuwen, M. (1978). A cross-cultural examination of psychological differentiation in males and females. International Journal Of Psychology, 13(2), 87.
- Björkqvist, K., Österman, K., & Lagerspetz, K. M. J. (1993). Sex differences in covert aggression among adults. Aggressive Behavior, 19.
- C. André Christie-Mizell. The Effects of Traditional Family and Gender Ideology on Earnings: Race and Gender Differences. Journal of Family and Economic Issues 2006. Volume 27, Number 1 / April, 2006.
- Chan, W. (2001). Women, Murder and Justice. Hampshire: Palgrave.
- Hart, L. (1994). Fatal Women: Lesbian Sexuality and the Mark of Aggression. Princeton: Princeton University Press.
- Ballinger, A. (1996.) The Guilt of the Innocent and the Innocence of the Guilty: The Cases of Marie Fahmy and Ruth Ellis. In Wight, S. & Myers, A. (Eds.) No Angels: Women Who Commit Violence. London: Pandora.
- Filetti, J. S. (2001). From Lizzie Borden to Lorena Bobbitt: Violent Women and Gendered Justice. Journal of American Culture, Vol.35, No. 3, pp.471–484.
- John M. Coggeshall: The Best of Anthropology Today: ‘Ladies’ Behind Bars: A Liminal Gender as Cultural Mirror
- International Foundation for Gender Education
- Gender PAC.
- Career advancement for professional women returners to the workplace
- Men and Masculinity Research Center (MMRC), seeks to give people (especially men) across the world a chance to contribute their perspective on topics relevant to men (e.g., masculinity, combat sports, fathering, health, and sexuality) by participating in Internet-based psychological research.
- The Society for the Psychological Study of Men and Masculinity (Division 51 of the American Psychological Association): SPSMM advances knowledge in the psychology of men through research, education, training, public policy, and improved clinical practice.
- Gender Stereotypes - Changes in People's Thoughts, A report based on a survey on roles of men and women. | http://www.wikiqueer.org/w/Gender_role | 13 |
18 | Kids Science Fair Projects
Tips for kids science fair projects for the concerned parent
Many kids science fair project organizers encourage the participation of parents in the entire process. Besides the fact that in many cases young children will need help handling materials such as glass and chemicals, or using flames and other tools for their project, it can actually be lots of fun for dad or mom (or maybe both!) to work with their children!
It's very important for parents to be enthusiastic, as that will rub off on the child. At the same time, remember, all you eager dads and moms - this is your child's experiment, not yours! You had your chance decades ago! Let your kid do the thinking and experimenting! You're there to facilitate and guide your child - to make sure that he or she goes through the right steps. But at the end of the day, it's going to be your child's project.
Choosing your kids science fair project category - it doesn't always have to be an experiment
You've probably heard that kids science fair projects are all about the scientific method. I'll tell you all about the scientific method very soon. But before that, you need to know that by actually doing an experiment, you will get to practice the scientific method. This means that you will practice collecting and analyzing data in a manner that is well accepted by real scientists around the world. Also, it's a well-known fact that hands-on experiences tremendously increase a person's speed of learning!
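If the project involves taking repeated measurements, even a very short computer script can help a child (with a parent's guidance) organize the numbers and compare averages. The Python sketch below is only an illustration and is not a requirement of any fair; the plant-height numbers and the fertilizer scenario are made up for the example.

    # A minimal sketch of recording and analyzing simple experiment data.
    # Hypothetical plant heights (in centimetres) after two weeks,
    # measured with and without a made-up fertilizer.
    with_fertilizer = [12.1, 13.4, 12.8, 14.0, 13.1]
    without_fertilizer = [10.2, 11.0, 10.7, 10.9, 11.3]

    def average(measurements):
        """Return the mean of a list of measurements."""
        return sum(measurements) / len(measurements)

    avg_with = average(with_fertilizer)
    avg_without = average(without_fertilizer)

    print(f"Average height with fertilizer:    {avg_with:.1f} cm")
    print(f"Average height without fertilizer: {avg_without:.1f} cm")
    print(f"Difference: {avg_with - avg_without:.1f} cm")

Of course, a notebook and a calculator work just as well - the point is simply to record your measurements carefully and compare the results.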
Science fairs range in scale from a single class of students sharing their projects to an international fair offering scholarship money as prizes. They generally share a common philosophy and require that participants use and demonstrate the scientific method. Although many of them require you to conduct an investigation by experimentation (i.e., inquiry-based learning) and encourage the development of critical thinking skills, the experiment is not the only type of project to choose from.
There are, broadly speaking, 3 types of activities to choose from:
1. The investigation
Here you're expected to ask a question, construct a theory or hypothesis, test that hypothesis by designing an experiment, and then draw a conclusion.
2. The laboratory experiment
This requires you to repeat an "experiment" found in textbooks, workbooks and other references. It does not seek to investigate new theories.
3. The report or poster
This is an activity based on extensive research done in books and other materials in order to write a paper on the chosen topic. Backboards (posters) are then used to illustrate key concepts from the research.
Remember - whichever model you choose, it will eventually help bring out the "detective" in you! And it will let you (or rather, your child) show off your detective skills. In particular, the first type will let you actually choose which mystery to solve. Then you get to show just how creative you are in spotting clues that will help you to solve the puzzle.
Physics
The study of matter and energy and of interactions between the two, grouped in traditional fields such as acoustics, optics, mechanics, thermodynamics, and electromagnetism, as well as in modern extensions including atomic and nuclear physics, cryogenics, solid-state, particle, and plasma physics.
Chemistry
The study of the composition, structure, properties, and reactions of matter, especially of atomic and molecular systems.
Chapters of our Handbook | http://www.all-science-fair-projects.com/kids-science-fair-projects.html | 13 |