This story was written for an online encyclopedia project during the dotcom boom. Computers are machines that automatically store and recall information and make calculations. In a sense they’ve been with us for a long time. For example, the ancient Sumerians stored data on clay tablets and the Chinese invented the abacus. However, these days the term ‘computer’ tends to apply to electronic machines that process digital information at very high speeds. Although computers are relatively new, they have had a profound effect on our lives. Most of our activity depends on computers. You can’t buy a loaf of bread or visit a bank without using a computer. Indeed, the computer business in its many guises is now the world’s largest industry. It wasn’t always that way. Computers evolved over many years. Charles Babbage, an English inventor, is widely regarded as the grandfather of modern computing. In the 1830s Babbage designed a mechanical Analytical Engine. It incorporated many ideas familiar to modern computer scientists. Although work started on building the Analytical Engine, the project was never finished. In fact, it is doubtful whether 19th-century engineering tolerances were stringent enough for the machine to work. Among Babbage’s collaborators was Lord Byron’s daughter, Augusta Ada King, Countess of Lovelace. Ada Lovelace is generally regarded as the first computer programmer because she wrote notes showing how the engine could be used for various complex mathematical calculations.
At the end of the 19th century, the American government turned to mechanical tabulating machines and punched card readers to cope with measuring its rapidly growing population. This was the start of data processing. In the mid-1920s, one of the companies making punched card equipment changed its name to International Business Machines, or IBM. IBM and its rivals developed mechanical accounting machines during the first half of the 20th century. During the Second World War IBM worked with the US government to develop electronic calculating machines to handle specific mathematical calculations. One early digital machine was the Harvard Mark 1.
The main breakthrough of digital computing was to use the binary number system – which can represent any number by using combinations of zero and one (or, in electronic circuits, off and on). For example, the decimal number 13 is written as 1101 in binary. Simple electronic circuits can process binary numbers at incredibly fast speeds. Moreover, all kinds of information – numbers, text, pictures and music – can be quickly converted to and from a binary format. In 1946, ENIAC – the Electronic Numerical Integrator And Computer – was born. ENIAC’s main claim to fame was that it was a general-purpose machine; it could be altered to perform different types of calculation. In the late 1940s a team at Manchester University in the North of England took this a stage further and built the first programmable computer – that is, one where the type of calculation is determined by a set of commands known as a program.
First generation computers used electronic valves; the second generation, in the 1950s, was built with transistors. Transistors were smaller and faster, and they didn’t require teams of engineers to constantly replace faulty valves. This made them much cheaper to operate, which meant companies could afford to use them. By the 1970s computers were being built with integrated circuits, or chips. Early chips had dozens of transistors on a single device.
From that point on, computer technology continued to get faster, smaller and cheaper as chipmakers crammed more transistors, and hence more computer functions, onto mass-produced chips. First and second generation computers were ‘mainframe’ machines; the 1970s saw the birth of minicomputers and, later, microcomputers. In 1981 IBM sold its first desktop microcomputer, which it called the ‘personal computer’ or PC. Once engineers figured out how to squeeze useable processing power into a desktop package, the portable computer wasn’t far behind. Portable computers evolved from early luggable machines through laptops and notebooks to modern palmtop handheld computers like the Palm Pilot. In some respects, modern mobile phones represent the logical extension of this trend.
Early computers were isolated outposts of processing power, but designers quickly realised that the binary signals used inside computers could be easily transmitted to other devices — hence the birth of computer networking. Early computer networks were mainly developed to speed the flow of data to and from a computer. For example, a remote warehouse might send inventory data to a central office. Over time the communications potential of computer networks became more important. One of the first new uses of computer networking was electronic mail and file swapping. Originally used by scientists to collaborate on projects, it quickly captured the world’s imagination. The scientists used a communications protocol called TCP/IP (Transmission Control Protocol/Internet Protocol); they called their network the Internet. Members of the public started accessing the Internet in the late 1980s; it was commercialised in the early 1990s, and by the mid-1990s almost every commercial computer network or data service was linked to the Internet. The World Wide Web, a simple, graphical way of displaying and linking information, drove much of this growth.
Until the late 1980s most people thought of computers as being electronic copies of humans. We talked of electronic brains and machine intelligence. Worriers lost sleep speculating that one day machines might want to replace us. The focus was on processing power. Then, almost overnight, computers went from being a rival species to a place we visited. The Internet explosion of the late 1980s and early 1990s sparked talk of cyberspace and web-surfing. More importantly, computing moved from its old application- and data-centric model to its new communication-centric model. This change has dramatically altered the way we use computers. Some of today’s computing devices don’t offer much in the way of processing – that’s all handled remotely on big computers – but they are very strong on communications. Tiny handheld computers using wireless networks are finding business applications as diverse as logistics and technical support – for example, people supporting Coca-Cola equipment use the technology to log customer-reported faults and reorder cans of drink. Email is still the most popular application. Consumer applications include online banking and checking weather, news and sports results. Just about all these functions can be handled at least as well by suitably equipped mobile phones. The Wireless Application Protocol (WAP) is a set of specifications that lets developers use the Wireless Markup Language (WML) to build networked applications designed for handheld wireless devices.
It has been specifically designed to work within the constraints of limited memory and CPU; small, monochrome screens; low bandwidth (i.e. slow data) and erratic connections. WAP is a de facto standard, with wide telecommunications industry support. At the time of writing the telecommunications industry is developing a new generation of wireless services which can deliver faster and more reliable services so that, for instance, it will be possible to conduct video conferencing using a mobile phone. Another development is the rise of Application Service Providers – a new breed of computer software utility that rents out remotely delivered applications.
All computers, from the humblest processors controlling a kitchen fridge to the large supercomputers used by rocket scientists, share the same four basic components:
Input and Output – the components that control the way information passes between a computer and the rest of the world. Among other possibilities, input might be a keyboard, mouse or touch screen. It can also be voice recognition, an optical scanner, a digital video camera or a data communications link. Output can include voice synthesis, a screen display and a printer.
Processor – sometimes called the central control unit, this is the smart part of a computer’s hardware that orchestrates everything else. The processor pulls instructions from the computer memory in sequence and acts on these instructions, either performing calculations or sending commands and data to other computer components (a simple sketch of this fetch-and-execute cycle appears at the end of this entry).
Memory – the place where data or program instructions are stored while they are not actually being used by the processor. Modern computers have a number of different types of memory. Cache memory stores small amounts of vital information that is currently in use. Cache is a very fast kind of semiconductor memory and can be located on the same chip as the processor. RAM (random access memory) is a fast semiconductor memory where information in use – but not in immediate use – is stored. It tends to be many times larger than cache memory. Disc memory is much slower, but tends to be much larger.
Software – computer software is the set of sequenced instructions controlling the hardware. The name comes from a joke used in the early days of computing to distinguish programs from hardware: it’s the bit that doesn’t hurt if it drops on your foot. Software comes in a variety of forms; the most important are system software and applications software — though the distinction between these two types is becoming increasingly blurred. System software controls the way a computer manages its internal processes. The best-known type of system software is the operating system. Historically an operating system was simply a suite of programs that managed input and output, and controlled the way application programs were dealt with. Increasingly, modern operating systems contain utility programs and components of application software. Applications software refers to the programs, such as word processors or email clients, that actually perform tasks for the computer user.
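To make the processor’s fetch-and-execute cycle mentioned above a little more concrete, here is a minimal, purely illustrative sketch in Python. The tiny instruction set (LOAD, ADD, PRINT, HALT) and the short program it runs are invented for this example; a real processor works on binary machine code, but the basic cycle of fetching an instruction from memory and acting on it is the same.

```python
# A toy processor: fetch each instruction from memory in sequence and act on it.
# The instruction set (LOAD, ADD, PRINT, HALT) is invented purely for illustration.

memory = [
    ("LOAD", 2),      # put the value 2 into the accumulator
    ("ADD", 3),       # add 3 to the accumulator
    ("PRINT", None),  # send the accumulator to an output device
    ("HALT", None),   # stop
]

accumulator = 0        # a single working register
program_counter = 0    # which memory location to fetch next

while True:
    op, arg = memory[program_counter]   # fetch
    program_counter += 1
    if op == "LOAD":                    # execute
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "PRINT":
        print(accumulator)              # prints 5
    elif op == "HALT":
        break
```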
http://billbennett.co.nz/computers-encyclopedia-entry/
Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups). In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing (testing) three or more means (groups or variables) for statistical significance.
Background and terminology
ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A statistical hypothesis test is a method of making decisions using data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result (when a probability (p-value) is less than a threshold (significance level)) justifies the rejection of the null hypothesis. In the typical application of ANOVA, the null hypothesis is that all groups are simply random samples of the same population. This implies that all treatments have the same effect (perhaps none). Rejecting the null hypothesis implies that different treatments result in altered effects.
By construction, hypothesis testing limits the rate of Type I errors (false positives leading to false scientific claims) to a significance level. Experimenters also wish to limit Type II errors (false negatives resulting in missed scientific discoveries). The Type II error rate is a function of several things, including sample size (positively correlated with experiment cost), significance level (when the standard of proof is high, the chances of overlooking a discovery are also high) and effect size (when the effect is obvious to the casual observer, Type II error rates are low).
The terminology of ANOVA is largely from the statistical design of experiments. The experimenter adjusts factors and measures responses in an attempt to determine an effect. Factors are assigned to experimental units by a combination of randomization and blocking to ensure the validity of the results. Blinding keeps the weighing impartial. Responses show a variability that is partially the result of the effect and is partially random error.
ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence, it is difficult to define concisely or precisely. "Classical ANOVA for balanced data does three things at once:
- As exploratory data analysis, an ANOVA is an organization of an additive data decomposition, and its sums of squares indicate the variance of each component of the decomposition (or, equivalently, each set of terms of a linear model).
- Comparisons of mean squares, along with F-tests ... allow testing of a nested sequence of models.
- Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors."
In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data. Additionally:
- It is computationally elegant and relatively robust against violations of its assumptions.
- ANOVA provides industrial strength (multiple sample comparison) statistical analysis.
- It has been adapted to the analysis of a variety of experimental designs.
As a result: ANOVA "has long enjoyed the status of being the most used (some would say abused) statistical technique in psychological research." ANOVA "is probably the most useful technique in the field of statistical inference." ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious. In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative text.
(Condensed from the NIST Engineering Statistics handbook: Section 5.7. A Glossary of DOE Terminology.)
- Balanced design - An experimental design where all cells (i.e. treatment combinations) have the same number of observations.
- Blocking - A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.
- Design - A set of experimental runs which allows the fit of a particular model and the estimate of effects.
- DOE - Design of experiments. An approach to problem solving involving collection of data that will support valid, defensible, and supportable conclusions.
- Effect - How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect.
- Error - Unexplained variation in a collection of observations. DOEs typically require understanding of both random error and lack of fit error.
- Experimental unit - The entity to which a specific treatment combination is applied.
- Factors - Process inputs an investigator manipulates to cause a change in the output.
- Lack-of-fit error - Error that occurs when the analysis omits one or more important terms or factors from the process model. Including replication in a DOE allows separation of experimental error into its components: lack of fit and random (pure) error.
- Model - Mathematical relationship which relates changes in a given response to changes in one or more factors.
- Random error - Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error.
- Randomization - A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs.[nb 1]
- Replication - Performing the same treatment combination more than once. Including replication allows an estimate of the random error independent of any lack of fit error.
- Responses - The output(s) of a process. Sometimes called dependent variable(s).
- Treatment - A treatment is a specific combination of factor levels whose effect is to be compared with other treatments.
Classes of models
There are three classes of models used in the analysis of variance, and these are outlined here.
The fixed-effects model of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see if the response variable values change.
This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random-effects models are used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.
A mixed-effects model contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.
Example: Teaching experiments could be performed by a university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with competing definitions arguably leading toward a linguistic quagmire.
Assumptions of ANOVA
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Even when the statistical model is nonlinear, it can be approximated by a linear model for which an analysis of variance may be appropriate.
Textbook analysis using a normal distribution
- Independence of observations – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity — the variance of data in groups should be the same.
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models; that is, that the errors ($\varepsilon$'s) are independent and $\varepsilon \sim N(0, \sigma^2)$.
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.
In its simplest form, the assumption of unit-treatment additivity[nb 2] states that the observed response $y_{i,j}$ from experimental unit $i$ when receiving treatment $j$ can be written as the sum of the unit's response $y_i$ and the treatment effect $t_j$, that is $y_{i,j} = y_i + t_j$. The assumption of unit-treatment additivity implies that, for every treatment $j$, the $j$th treatment has exactly the same effect $t_j$ on every experimental unit. The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments.
Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant. The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Derived linear model
Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
Statistical models for observational data
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.
Summary of assumptions
The normal-model based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest.
Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model.
According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
Characteristics of ANOVA
ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: adding a constant to all observations does not alter significance, and multiplying all observations by a constant does not alter significance. So ANOVA statistical significance results are independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding.
Logic of ANOVA
The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean."
Partitioning of the sum of squares
ANOVA uses traditional standardized terminology. The definitional equation of sample variance is $s^2 = \frac{1}{n-1}\sum_i (y_i - \bar{y})^2$, where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means.
If the null hypothesis is true, all three variance estimates are equal (within sampling error). The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, for a simplified ANOVA with one type of treatment at different levels,
$SS_{\text{Total}} = SS_{\text{Error}} + SS_{\text{Treatments}}.$
The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect. See also Lack-of-fit sum of squares.
The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor, ANOVA, statistical significance is tested for by comparing the F test statistic
$F = \frac{\text{variance between treatments}}{\text{variance within treatments}} = \frac{MS_{\text{Treatments}}}{MS_{\text{Error}}} = \frac{SS_{\text{Treatments}}/(I-1)}{SS_{\text{Error}}/(n_T - I)},$
where MS is mean square, $I$ = number of treatments and $n_T$ = total number of cases, to the F-distribution with $I-1$, $n_T-I$ degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution. The expected value of F is $1 + n\sigma^2_{\text{Treatment}}/\sigma^2_{\text{Error}}$ (where $n$ is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1 the evidence is increasingly inconsistent with the null hypothesis.
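To make the partitioning and the F ratio concrete, here is a small Python sketch that computes the sums of squares, mean squares and F by hand and checks the result against scipy.stats.f_oneway. The three groups of observations are invented purely for illustration.

```python
# A worked one-way ANOVA on three invented groups, following the
# partitioning SS_Total = SS_Error + SS_Treatments described above.
import numpy as np
from scipy import stats

groups = [
    np.array([6.0, 8.0, 4.0, 5.0, 3.0, 4.0]),    # treatment 1 (invented data)
    np.array([8.0, 12.0, 9.0, 11.0, 6.0, 8.0]),  # treatment 2
    np.array([13.0, 9.0, 11.0, 8.0, 7.0, 12.0]), # treatment 3
]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()
I = len(groups)        # number of treatments
n_T = all_obs.size     # total number of cases

ss_treat = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((all_obs - grand_mean) ** 2).sum()
assert np.isclose(ss_total, ss_treat + ss_error)   # the partition holds

ms_treat = ss_treat / (I - 1)        # treatment mean square
ms_error = ss_error / (n_T - I)      # error mean square
F = ms_treat / ms_error
p = stats.f.sf(F, I - 1, n_T - I)    # upper tail of the F-distribution

print(F, p)
print(stats.f_oneway(*groups))       # should agree with the manual result
```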
Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls. The textbook method of concluding the hypothesis test is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the numerator degrees of freedom, the denominator degrees of freedom and the significance level (α). If F ≥ F_critical (numerator DF, denominator DF, α) then reject the null hypothesis. The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α). The two methods produce the same result.
The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (maximizing power for a fixed significance level). To test the hypothesis that all treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: the approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.[nb 3] The ANOVA F-test (of the null hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[nb 4]
ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..."
ANOVA for a single factor
The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. A relatively complete discussion of the analysis (models, data summaries, ANOVA table) of the completely randomized experiment is available.
ANOVA for multiple factors
ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used. The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests.
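The following Python sketch shows what fitting and testing main effects and an interaction can look like in practice, using statsmodels' formula interface; the factors (x, z), the response (y) and the simulated data are assumptions made up for the example, not anything from the text.

```python
# Illustrative two-factor factorial ANOVA with an interaction term.
# The data are randomly generated for the example; only the structure matters.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for x in ["low", "high"]:
    for z in ["A", "B", "C"]:
        for _ in range(5):                      # 5 replicates per cell
            effect = (x == "high") * 1.0 + {"A": 0.0, "B": 0.5, "C": 1.0}[z]
            rows.append({"x": x, "z": z, "y": effect + rng.normal(scale=1.0)})
df = pd.DataFrame(rows)

# 'C(x) * C(z)' expands to the main effects C(x), C(z) and the interaction C(x):C(z).
model = smf.ols("y ~ C(x) * C(z)", data=df).fit()
print(anova_lm(model, typ=2))                   # one F-test per term
```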
The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results. Caution is advised when encountering interactions; test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.
A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.
Worked numeric examples
Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.
The number of experimental units
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards. Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
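As an illustration of such a power calculation, here is a short Python sketch using statsmodels; the assumed effect size (Cohen's f = 0.25), significance level, target power and number of groups are example values only.

```python
# Rough sample-size calculation for a one-way ANOVA with 3 groups,
# assuming a medium effect size (Cohen's f = 0.25), alpha = 0.05 and power = 0.8.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.8, k_groups=3)
print(round(n_total))   # total number of observations across all groups
```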
Several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Effect-size estimates facilitate the comparison of findings in studies and across disciplines. A non-standardized measure of effect size with meaningful units may be preferred for reporting purposes.
η² (eta-squared): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). On average it overestimates the variance explained in the population; as the sample size gets larger the amount of bias gets smaller. Cohen (1992) suggests effect sizes for various indexes, including ƒ (where 0.1 is a small effect, 0.25 is a medium effect and 0.4 is a large effect). He also offers a conversion table (see Cohen, 1988, p. 283) for eta squared (η²) where 0.0099 constitutes a small effect, 0.0588 a medium effect and 0.1379 a large effect.
It is always appropriate to carefully consider outliers. They have a disproportionate impact on statistical conclusions and are often the result of errors. It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything, including time and modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb: "If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results will still be approximately correct."
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data. Often one of the "treatments" is none, so the treatment group can act as a control. Dunnett's test (a modification of the t-test) tests whether each of the other treatment groups has the same mean as the control. Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors.
Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of group means where one set has two or more groups (e.g., compare average group means of groups A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Following ANOVA with pair-wise multiple-comparison tests has been criticized on several grounds. There are many such tests (10 in one table) and recommendations regarding their use are vague or conflicting.
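To make the idea of a post hoc comparison concrete, here is a brief illustrative Python sketch using statsmodels' pairwise Tukey HSD routine; the three groups and their measurements are invented for the example.

```python
# Illustrative post hoc Tukey HSD comparisons after a one-way ANOVA.
# The group labels and measurements below are invented for the example.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

values = np.array([6, 8, 4, 5, 3, 4,       # group "a"
                   8, 12, 9, 11, 6, 8,     # group "b"
                   13, 9, 11, 8, 7, 12],   # group "c"
                  dtype=float)
groups = ["a"] * 6 + ["b"] * 6 + ["c"] * 6

result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result)   # one row per pair of groups: mean difference, adjusted p-value, reject?
```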
Study designs and ANOVAs
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model. Some popular designs use the following types of ANOVA:
- One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t².
- Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments.
- Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study).
- Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.
Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single factor (one way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered." The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression.
ANOVA is (in part) a significance test. The American Psychological Association holds the view that simply reporting significance is insufficient and that reporting confidence bounds is preferred. ANOVA is considered to be a special case of linear regression, which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices of astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace soon knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827 Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides.
Before 1800 astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology, which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885.
Sir Ronald Fisher introduced the term "variance" and proposed a formal analysis of variance in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance was published in 1921. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers.
One of the attributes of ANOVA which ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts.
- Randomization is a term used in multiple ways in this material. "Randomization has three roles in applications: as a device for eliminating biases, for example from unobserved explanatory variables and selection effects: as a basis for estimating standard errors: and as a foundation for formally exact significance tests." Cox (2006, page 192) Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis.
- Unit-treatment additivity is simply termed additivity in most texts. Hinkelmann and Kempthorne add adjectives and distinguish between additivity in the strict and broad senses. This allows a detailed consideration of multiple error sources (treatment, state, selection, measurement and sampling) on page 161.
- Rosenbaum (2002, page 40) cites Section 5.7 (Permutation Tests), Theorem 2.3 (actually Theorem 3, page 184) of Lehmann's Testing Statistical Hypotheses (1959).
- The F-test for the comparison of variances has a mixed reputation. It is not recommended as a hypothesis test to determine whether two different samples have the same variance. It is recommended for ANOVA, where two estimates of the variance of the same sample are compared. While the F-test is not generally robust against departures from normality, it has been found to be robust in the special case of ANOVA. Citations from Moore & McCabe (2003): "Analysis of variance uses F statistics, but these are not the same as the F statistic for comparing two population standard deviations." (page 554) "The F test and other procedures for inference about variances are so lacking in robustness as to be of little use in practice." (page 556) "[The ANOVA F test] is relatively insensitive to moderate nonnormality and unequal variances, especially when the sample sizes are similar." (page 763) ANOVA assumes homoscedasticity, but it is robust. The statistical test for homoscedasticity (the F-test) is not robust. Moore & McCabe recommend a rule of thumb.
- Gelman (2005, p 2)
- Howell (2002, p 320)
- Montgomery (2001, p 63)
- Gelman (2005, p 1)
- Gelman (2005, p 5)
- "Section 5.7. A Glossary of DOE Terminology".
NIST Engineering Statistics handbook. NIST. Retrieved 5 April 2012. - "Section 4.3.1 A Glossary of DOE Terminology". NIST Engineering Statistics handbook. NIST. Retrieved 14 Aug 2012. - Montgomery (2001, Chapter 12: Experiments with random factors) - Gelman (2005, pp 20–21) - Snedecor, George W.; Cochran, William G. (1967). Statistical Methods (6th ed.). p. 321. - Cochran & Cox (1992, p 48) - Howell (2002, p 323) - Anderson, David R.; Sweeney, Dennis J.; Williams, Thomas A. (1996). Statistics for business and economics (6th ed.). Minneapolis/St. Paul: West Pub. Co. pp. 452–453. ISBN 0-314-06378-1. - Anscombe (1948) - Kempthorne (1979, p 30) - Cox (1958, Chapter 2: Some Key Assumptions) - Hinkelmann and Kempthorne (2008, Volume 1, Throughout. Introduced in Section 2.3.3: Principles of experimental design; The linear model; Outline of a model) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.3: Completely Randomized Design; Derived Linear Model) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.6: Completely randomized design; Approximating the randomization test) - Bailey (2008, Chapter 2.14 "A More General Model" in Bailey, pp. 38–40) - Hinkelmann and Kempthorne (2008, Volume 1, Chapter 7: Comparison of Treatments) - Kempthorne (1979, pp 125–126, "The experimenter must decide which of the various causes that he feels will produce variations in his results must be controlled experimentally. Those causes that he does not control experimentally, because he is not cognizant of them, he must control by the device of randomization." "[O]nly when the treatments in the experiment are applied by the experimenter using the full randomization procedure is the chain of inductive inference sound. It is only under these circumstances that the experimenter can attribute whatever effects he observes to the treatment and the treatment only. Under these circumstances his conclusions are reliable in the statistical sense.") - Freedman[full citation needed] - Montgomery (2001, Section 3.8: Discovering dispersion effects) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.10: Completely randomized design; Transformations) - Bailey (2008) - Montgomery (2001, Section 3-3: Experiments with a single factor: The analysis of variance; Analysis of the fixed effects model) - Cochran & Cox (1992, p 2 example) - Cochran & Cox (1992, p 49) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.7: Completely randomized design; CRD with unequal numbers of replications) - Moore and McCabe (2003, page 763) - Gelman (2008) - Montgomery (2001, Section 5-2: Introduction to factorial designs; The advantages of factorials) - Belle (2008, Section 8.4: High-order interactions occur rarely) - Montgomery (2001, Section 5-1: Introduction to factorial designs; Basic definitions and principles) - Cox (1958, Chapter 6: Basic ideas about factorial experiments) - Montgomery (2001, Section 5-3.7: Introduction to factorial designs; The two-factor factorial design; One observation per cell) - Wilkinson (1999, p 596) - Montgomery (2001, Section 3-7: Determining sample size) - Howell (2002, Chapter 8: Power) - Howell (2002, Section 11.12: Power (in ANOVA)) - Howell (2002, Section 13.7: Power analysis for factorial experiments) - Moore and McCabe (2003, pp 778–780) - Wilkinson (1999, p 599) - Montgomery (2001, Section 3-4: Model adequacy checking) - Moore and McCabe (2003, p 755, Qualifications to this rule appear in a footnote.) 
- Montgomery (2001, Section 3-5.8: Experiments with a single factor: The analysis of variance; Practical interpretation of results; Comparing means with a control) - Hinkelmann and Kempthorne (2008, Volume 1, Section 7.5: Comparison of Treatments; Multiple Comparison Procedures) - Howell (2002, Chapter 12: Multiple comparisons among treatment means) - Montgomery (2001, Section 3-5: Practical interpretation of results) - Cochran & Cox (1957, p 9, "[T]he general rule [is] that the way in which the experiment is conducted determines not only whether inferences can be made, but also the calculations required to make them.") - "The Probable Error of a Mean". Biometrika 6: 1–0. 1908. doi:10.1093/biomet/6.1.1. - Montgomery (2001, Section 3-3.4: Unbalanced data) - Montgomery (2001, Section 14-2: Unbalanced data in factorial design) - Wilkinson (1999, p 600) - Gelman (2005, p.1) (with qualification in the later text) - Montgomery (2001, Section 3.9: The Regression Approach to the Analysis of Variance) - Howell (2002, p 604) - Howell (2002, Chapter 18: Resampling and nonparametric approaches to data) - Montgomery (2001, Section 3-10: Nonparametric methods in the analysis of variance) - Stigler (1986) - Stigler (1986, p 134) - Stigler (1986, p 153) - Stigler (1986, pp 154–155) - Stigler (1986, pp 240–242) - Stigler (1986, Chapter 7 - Psychophysics as a Counterpoint) - Stigler (1986, p 253) - Stigler (1986, pp 314–315) - The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Ronald A. Fisher. Philosophical Transactions of the Royal Society of Edinburgh. 1918. (volume 52, pages 399–433) - On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. Ronald A. Fisher. Metron, 1: 3-32 (1921) - Scheffé (1959, p 291, "Randomization models were first formulated by Neyman (1923) for the completely randomized design, by Neyman (1935) for randomized blocks, by Welch (1937) and Pitman (1937) for the Latin square under a certain null hypothesis, and by Kempthorne (1952, 1955) and Wilk (1955) for many other designs.") - Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. Series A (General) 111 (3): 181–211. doi:10.2307/2984159. JSTOR 2984159. MR 30181. - Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line. - Belle, Gerald van (2008). Statistical rules of thumb (2nd ed.). Hoboken, N.J: Wiley. ISBN 978-0-470-14448-0. - Cochran, William G.; Cox, Gertrude M. (1992). Experimental designs (2nd ed.). New York: Wiley. ISBN 978-0-471-54567-5. - Cohen, Jacob (1988). Statistical power analysis for the behavior sciences (2nd ed.). Routledge ISBN 978-0-8058-0283-2 - Cohen, Jacob (1992). "Statistics a power primer". Psychology Bulletin 112 (1): 155–159. doi:10.1037/0033-2909.112.1.155. PMID 19565683. - Cox, David R. (1958). Planning of experiments. Reprinted as ISBN 978-0-471-57429-3 - Cox, D. R. (2006). Principles of statistical inference. Cambridge New York: Cambridge University Press. ISBN 978-0-521-68567-2. - Freedman, David A.(2005). Statistical Models: Theory and Practice, Cambridge University Press. ISBN 978-0-521-67105-7 - Gelman, Andrew (2005). "Analysis of variance? Why it is more important than ever". The Annals of Statistics 33: 1–53. doi:10.1214/009053604000001048. - Gelman, Andrew (2008). "Variance, analysis of". The new Palgrave dictionary of economics (2nd ed.). 
Basingstoke, Hampshire New York: Palgrave Macmillan. ISBN 978-0-333-78676-5. - Hinkelmann, Klaus & Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7. - Howell, David C. (2002). Statistical methods for psychology (5th ed.). Pacific Grove, CA: Duxbury/Thomson Learning. ISBN 0-534-37770-X. - Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0. - Lehmann, E.L. (1959) Testing Statistical Hypotheses. John Wiley & Sons. - Montgomery, Douglas C. (2001). Design and Analysis of Experiments (5th ed.). New York: Wiley. ISBN 978-0-471-31649-7. - Moore, David S. & McCabe, George P. (2003). Introduction to the Practice of Statistics (4e). W H Freeman & Co. ISBN 0-7167-9657-0 - Rosenbaum, Paul R. (2002). Observational Studies (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-98967-9 - Scheffé, Henry (1959). The Analysis of Variance. New York: Wiley. - Stigler, Stephen M. (1986). The history of statistics : the measurement of uncertainty before 1900. Cambridge, Mass: Belknap Press of Harvard University Press. ISBN 0-674-40340-1. - Wilkinson, Leland (1999). "Statistical Methods in Psychology Journals; Guidelines and Explanations". American Psychologist 54 (8): 594–604. doi:10.1037/0003-066X.54.8.594. - Box, G. E. P. (1953). "Non-Normality and Tests on Variances". Biometrika (Biometrika Trust) 40 (3/4): 318–335. JSTOR 2333350. - Box, G. E. P. (1954). "Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, I. Effect of Inequality of Variance in the One-Way Classification". The Annals of Mathematical Statistics 25 (2): 290. doi:10.1214/aoms/1177728786. - Box, G. E. P. (1954). "Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, II. Effects of Inequality of Variance and of Correlation Between Errors in the Two-Way Classification". The Annals of Mathematical Statistics 25 (3): 484. doi:10.1214/aoms/1177728717. - Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics 150. New York: Springer-Verlag. ISBN 0-387-98578-6. - Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2. - Cox, David R. & Reid, Nancy M. (2000). The theory of design of experiments. (Chapman & Hall/CRC). ISBN 978-1-58488-195-7 - Fisher, Ronald (1918). "Studies in Crop Variation. I. An examination of the yield of dressed grain from Broadbalk". Journal of Agricultural Science 11: 107–135. - Freedman, David A.; Pisani, Robert; Purves, Roger (2007) Statistics, 4th edition. W.W. Norton & Company ISBN 978-0-393-92972-0 - Hettmansperger, T. P.; McKean, J. W. (1998). Robust nonparametric statistical methods. Kendall's Library of Statistics 5 (First ed.). New York: Edward Arnold. pp. xiv+467 pp. ISBN 0-340-54937-8. MR 1604954. Unknown parameter - Lentner, Marvin; Thomas Bishop (1993). Experimental design and analysis (Second ed.). P.O. Box 884, Blacksburg, VA 24063: Valley Book Company. ISBN 0-9616255-2-X. - Tabachnick, Barbara G. & Fidell, Linda S. (2007). Using Multivariate Statistics (5th ed.). Boston: Pearson International Edition. ISBN 978-0-205-45938-4 - Wichura, Michael J. (2006). The coordinate-free approach to linear models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. xiv+199. 
ISBN 978-0-521-86842-6. MR 2283455.
- SOCR ANOVA Activity and interactive applet.
- Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R
- NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: "Are the means equal?"
http://en.wikipedia.org/wiki/Analysis_of_variance
Amino acids are organic compounds made of carbon, hydrogen, oxygen, nitrogen, and (in some cases) sulfur bonded in characteristic formations. Strings of amino acids make up proteins, of which there are countless varieties. Of the 20 amino acids required for manufacturing the proteins the human body needs, the body itself produces only 12, meaning that we have to meet our requirements for the other eight through nutrition. This is just one example of the importance of amino acids in the functioning of life. Another cautionary illustration of amino acids' power is the gamut of diseases (most notably, sickle cell anemia) that impair or claim the lives of those whose amino acids are out of sequence or malfunctioning. Once used in dating objects from the distant past, amino acids have existed on Earth for at least three billion years—long before the appearance of the first true organisms.
Amino acids are organic compounds, meaning that they contain carbon and hydrogen bonded to each other. In addition to those two elements, they include nitrogen, oxygen, and, in a few cases, sulfur. The basic structure of an amino-acid molecule consists of a carbon atom bonded to an amino group (-NH2), a carboxyl group (-COOH), a hydrogen atom, and a fourth group that differs from one amino acid to another and often is referred to as the R group or the side chain. The R group, which can vary widely, is responsible for the differences in chemical properties.
This explanation sounds a bit technical and requires a background in chemistry that is beyond the scope of this essay, but let us simplify it somewhat. Imagine that the amino-acid molecule is like the face of a compass, with a carbon atom at the center. Raying out from the center, in the four directions of the compass, are lines representing chemical bonds to other atoms or groups of atoms. These directions are based on models that typically are used to represent amino-acid molecules, though north, south, east, and west, as used in the following illustration, are simply terms to make the molecule easier to visualize. To the south of the carbon atom (C) is a hydrogen atom (H), which, like all the other atoms or groups, is joined to the carbon center by a chemical bond. To the north of the carbon center is what is known as an amino group (-NH2). The hyphen at the beginning indicates that such a group does not usually stand alone but normally is attached to some other atom or group. To the east is a carboxyl group, represented as -COOH. In the amino group, two hydrogen atoms are each bonded to the nitrogen, whereas the carboxyl group has two separate oxygen atoms strung between a carbon atom and a hydrogen atom. Hence, they are not represented as O2. Finally, off to the west is the R group, which can vary widely. It is as though the other portions of the amino acid together formed a standard suffix in the English language, such as -tion. To the front of that suffix can be attached all sorts of terms drawn from root words, such as educate.
The name amino acid, in fact, comes from the amino group and the acid group, which are the most chemically reactive parts of the molecule. Each of the common amino acids has, in addition to its chemical name, a more familiar name and a three-letter abbreviation that frequently is used to identify it. In the present context, we are not concerned with these abbreviations. Amino-acid molecules, which contain an amino group and a carboxyl group, do not behave like typical molecules.
Instead of melting, amino-acid molecules simply decompose at temperatures above about 392°F (200°C). They are quite soluble, or capable of being dissolved, in water but are insoluble in nonpolar solvents (oil and all oil-based products), such as benzene or ether.

All of the amino acids in the human body, except glycine, are either right-hand or left-hand versions of the same molecule, meaning that in some amino acids the positions of the carboxyl group and the R-group are switched. Interestingly, nearly all of the amino acids occurring in nature are the left-hand versions of the molecules, or the L-forms. (Therefore, the model we have described is actually the left-hand model, though the distinctions between "right" and "left"—which involve the direction in which light is polarized—are too complex to discuss here.) Right-hand versions (D-forms) are not found in the proteins of higher organisms, but they are present in some lower forms of life, such as in the cell walls of bacteria. They also are found in some antibiotics, among them streptomycin, actinomycin, bacitracin, and tetracycline. These antibiotics, several of which are well known to the public at large, can kill bacterial cells by interfering with the formation of proteins necessary for maintaining life and for reproducing.

A chemical reaction that is characteristic of amino acids involves the formation of a bond, called a peptide linkage, between the carboxyl group of one amino acid and the amino group of a second amino acid. Very long chains of amino acids can bond together in this way to form proteins, which are the basic building blocks of all living things. The specific properties of each kind of protein are largely dependent on the kind and sequence of the amino acids in it. Other aspects of the chemical behavior of protein molecules are due to interactions between the amino and the carboxyl groups or between the various R-groups along the long chains of amino acids in the molecule.

Amino acids function as monomers, or individual units, that join together to form large, chainlike molecules called polymers, which may contain as few as two or as many as 3,000 amino-acid units. Groups of only two amino acids are called dipeptides, whereas three amino acids bonded together are called tripeptides. If there are more than 10 in a chain, they are termed polypeptides, and if there are 50 or more, these are known as proteins. All the millions of different proteins in living things are formed by the bonding of only 20 amino acids to make up long polymer chains. Like the 26 letters of the alphabet that join together to form different words, depending on which letters are used and in which sequence, the 20 amino acids can join together in different combinations and series to form proteins. But whereas words usually have only about 10 or fewer letters, proteins typically are made from as few as 50 to as many as 3,000 amino acids. Because each amino acid can be used many times along the chain and because there are no restrictions on the length of the chain, the number of possible combinations for the formation of proteins is truly enormous. Even if each of the 20 amino acids present in humans were used only once, about 2.4 quintillion (20!) different sequences would be possible. Just as not all sequences of letters make sense, however, not all sequences of amino acids produce functioning proteins. Some other sequences can function and yet cause undesirable effects, as we shall see.
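To make the counting argument concrete, here is a short, hypothetical Python sketch (the function name and thresholds follow the definitions above; none of this code comes from the article). It classifies a chain by length — dipeptide, tripeptide, polypeptide, protein — and shows how quickly the number of possible sequences of the 20 amino acids grows when repetition is allowed.

```python
def classify_chain(length: int) -> str:
    """Classify an amino-acid chain by the length thresholds given in the text."""
    if length == 2:
        return "dipeptide"
    if length == 3:
        return "tripeptide"
    if length >= 50:
        return "protein"
    if length > 10:
        return "polypeptide"
    return "short peptide (lengths the text does not name explicitly)"

for n in (2, 3, 12, 50, 3000):
    print(n, classify_chain(n))

# With 20 amino acids available and repetition allowed, a chain of length n
# has 20**n possible sequences - astronomically many even for modest lengths.
for n in (10, 50):
    print(f"length {n}: about {float(20**n):.3e} possible sequences")
```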
DNA (deoxyribonucleic acid), a molecule in all cells that contains genetic codes for inheritance, creates encoded instructions for the synthesis of amino acids. In 1986, American medical scientist Thaddeus R. Dryja (1940-) used amino-acid sequences to identify and isolate the gene for a type of cancer known as retinoblastoma, a fact that illustrates the importance of amino acids in the body. Amino acids are also present in hormones, chemicals that are essential to life. Among these hormones is insulin, which regulates sugar levels in the blood and without which a person would die. Another is adrenaline, which controls blood pressure and gives animals a sudden jolt of energy needed in a high-stress situation—running from a predator in the grasslands or (to use a human example) facing a mugger in an alley or a bully on a playground. The amino-acid sequences of such hormones have been the subject of extensive biochemical study.

Just as proteins form when amino acids bond together in long chains, they can be broken down by a reaction called hydrolysis, the reverse of the formation of the peptide bond. That is exactly what happens in the process of digestion, when special digestive enzymes in the stomach enable the breaking down of the peptide linkage. (Enzymes are a type of protein—see Enzymes.) The amino acids, separated once again, are released into the small intestine, whence they pass into the bloodstream and are carried throughout the organism. Each individual cell of the organism then can use these amino acids to assemble the new and different proteins required for its specific functions. Life thus is an ongoing cycle in which proteins are broken into individual amino-acid units, and new proteins are built up from these amino acids.

Out of the many thousands of possible amino acids, humans require only 20 different kinds. Two others appear in the bodies of some animal species, and approximately 100 others can be found in plants. Considering the vast numbers of amino acids and possible combinations that exist in nature, the number of amino acids essential to life is extremely small. Yet of the 20 amino acids required by humans for making protein, only 12 can be produced within the body, whereas the other eight—isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine—must be obtained from the diet. (In addition, adults are capable of synthesizing arginine and histidine, but these amino acids are believed to be essential to growing children, meaning that children cannot produce them on their own.)

A complete protein is one that contains all of the essential amino acids in quantities sufficient for growth and repair of body tissue (see the sketch at the end of this entry). Most proteins from animal sources, gelatin being the only exception, contain all the essential amino acids and are therefore considered complete proteins. On the other hand, many plant proteins do not contain all of the essential amino acids. For example, lysine is absent from corn, rice, and wheat, whereas corn also lacks tryptophan and rice lacks threonine. Soybeans are lacking in methionine. Vegans, or vegetarians who consume no animal proteins in their diets (i.e., no eggs, dairy products, or the like), are at risk of malnutrition, because they may fail to assimilate one or more essential amino acids. Amino acids can be used as treatments for all sorts of medical conditions.
For example, tyrosine may be employed in the treatment of Alzheimer's disease, a condition characterized by the onset of dementia, or mental deterioration, as well as for alcohol-withdrawal symptoms. Taurine is administered to control epileptic seizures, treat high blood pressure and diabetes, and support the functioning of the liver. Numerous other amino acids are used in treating a wide array of other diseases. Sometimes the disease itself involves a problem with amino-acid production or functioning. In the essay Vitamins, there is a discussion of pellagra, a disease resulting from a deficiency of the B-group vitamin known as niacin. Pellagra results from a diet heavy in corn, which, as we have noted, lacks lysine and tryptophan. Its symptoms often are described as the "three Ds": diarrhea, dermatitis (or skin inflammation), and dementia. Thanks to a greater understanding of nutrition and health, pellagra has been largely eradicated, but there still exists a condition with almost identical symptoms: Hartnup disease, a genetic disorder named for a British family in the late 1950s who suffered from it. Hartnup disease is characterized by an inability to transport amino acids from the kidneys to the rest of the body. The symptoms at first seemed to suggest to physicians that the disease, which is present in one of about 26,000 live births, was pellagra. Tests showed that sufferers did not have inadequate tryptophan levels, however, as would have been the case with pellagra. On the other hand, some 14 amino acids have been found in excess within the urine of Hartnup disease sufferers, indicating that rather than properly transporting amino acids, their bodies are simply excreting them. This is a potentially very serious condition, but it can be treated with the B vitamin nicotinamide, also used to treat pellagra. Supplementation of tryptophan in the diet also has shown positive results with some patients. It is also possible for small mistakes to occur in the amino-acid sequence within the body. While these mistakes sometimes can be tolerated in nature without serious problems, at other times a single misplaced amino acid in the polymer chain can bring about an extremely serious condition of protein malfunctioning. An example of this is sickle cell anemia, a fatal disease ultimately caused by a single mistake in the amino acid sequence. In the bodies of sickle cell anemia sufferers, who are typically natives of sub-Saharan Africa or their descendants in the United States or elsewhere, glutamic acid is replaced by valine at the sixth position from the end of the protein chain in the hemoglobin molecule. (Hemoglobin is an iron-containing pigment in red blood cells that is responsible for transporting oxygen to the tissues and removing carbon dioxide from them.) This small difference makes sickle cell hemoglobin molecules extremely sensitive to oxygen deficiencies. As a result, when the red blood cells release their oxygen to the tissues, as all red blood cells do, they fail to re-oxygenate in a normal fashion and instead twist into the shape that gives sickle cell anemia its name. This causes obstruction of the blood vessels. Before the development of a treatment with the drug hydroxyurea in the mid-1990s, the average life expectancy of a person with sickle cell anemia was about 45 years. The Evolution essay discusses several types of dating, a term referring to scientific efforts directed toward finding the age of a particular item or phenomenon. 
Methods of dating are either relative (i.e., comparative and usually based on rock strata, or layers) or absolute. Whereas relative dating does not involve actual estimates of age in years, absolute dating does. One of the first types of absolute-dating techniques developed was amino-acid racemization, introduced in the 1960s. As noted earlier, there are "left-hand" L-forms and "right-hand" D-forms of all amino acids. Virtually all living organisms (except some microbes) incorporate only the L-forms, but once the organism dies, the L-amino acids gradually convert to the mirror-image D-amino acids. Numerous factors influence the rate of conversion, and though amino-acid racemization was popular as a form of dating in the 1970s, there are problems with it. For instance, the process occurs at different rates for different amino acids, and the rates are further affected by such factors as moisture and temperature. Because of the uncertainties with amino-acid racemization, it has been largely replaced by other absolute-dating methods, such as the use of radioactive isotopes.

Certainly, amino acids themselves have offered important keys to understanding the planet's distant past. The discovery, in 1967 and 1968, of sedimentary rocks bearing traces of amino acids as much as three billion years old had an enormous impact on the study of Earth's biological history. Here, for the first time, was concrete evidence of life—at least, in a very simple chemical form—existing billions of years before the first true organism. The discovery of these amino-acid samples greatly influenced scientists' thinking about evolution, particularly the very early stages in which the chemical foundations of life were established.

Key terms:
Amino acids: Organic compounds made of carbon, hydrogen, oxygen, nitrogen, and (in some cases) sulfur bonded in characteristic formations. Strings of amino acids make up proteins.
Amino group: The chemical formation -NH2, which is part of all amino acids.
Biochemistry: The area of the biological sciences concerned with the chemical substances and processes in organisms.
Carboxyl group: The formation -COOH, which is common to all amino acids.
Compound: A substance in which atoms of more than one element are bonded chemically to one another.
Dipeptide: A group of only two amino acids.
DNA: Deoxyribonucleic acid, a molecule in all cells and many viruses containing genetic codes for inheritance.
Enzyme: A protein material that speeds up chemical reactions in the bodies of plants and animals.
Essential amino acids: Amino acids that cannot be manufactured by the body and therefore must be obtained from the diet. Proteins that contain all the essential amino acids are known as complete proteins.
Gene: A unit of information about a particular heritable (capable of being inherited) trait that is passed from parent to offspring, stored in DNA molecules called chromosomes.
Hormones: Molecules produced by living cells, which send signals to spots remote from their point of origin and induce specific effects on the activities of other cells.
Molecule: A group of atoms, usually but not always representing more than one element, joined in a structure. Compounds typically are made up of molecules.
Organic: At one time chemists used the term organic only in reference to living things. Now the word is applied to compounds containing carbon and hydrogen.
Peptide linkage: A bond between the carboxyl group of one amino acid and the amino group of a second amino acid.
Polymers: Large, chainlike molecules composed of numerous subunits known as monomers.
Polypeptide: A group of between 10 and 50 amino acids.
Proteins: Large molecules built from long chains of 50 or more amino acids. Proteins serve the functions of promoting normal growth, repairing damaged tissue, contributing to the body's immune system, and making enzymes.
RNA: Ribonucleic acid, the molecule translated from DNA in the cell nucleus (the control center of the cell) that directs protein synthesis in the cytoplasm, the material inside the cell surrounding the nucleus.
Synthesize: To manufacture chemically, as in the body.
Tripeptide: A group of three amino acids.
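To close this entry, here is the hypothetical Python sketch of the "complete protein" check promised earlier. The food compositions are simplified from the examples quoted above (corn lacking lysine and tryptophan, rice lacking lysine and threonine, and so on); this is an illustration, not a nutritional database.

```python
# The eight amino acids the article lists as essential for adults.
ESSENTIAL = {
    "isoleucine", "leucine", "lysine", "methionine",
    "phenylalanine", "threonine", "tryptophan", "valine",
}

# Simplified picture of which essentials each food is missing, based only on
# the examples given in the text above.
MISSING = {
    "corn": {"lysine", "tryptophan"},
    "rice": {"lysine", "threonine"},
    "wheat": {"lysine"},
    "soybeans": {"methionine"},
    "egg": set(),   # animal proteins (other than gelatin) are described as complete
}

def supplied(foods):
    """Essential amino acids collectively supplied by a list of foods."""
    total = set()
    for food in foods:
        total |= ESSENTIAL - MISSING[food]
    return total

def is_complete(foods):
    return supplied(foods) == ESSENTIAL

print(is_complete(["corn"]))              # False: lysine and tryptophan are missing
print(is_complete(["corn", "soybeans"]))  # True: together they cover all eight
```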
http://www.scienceclarified.com/everyday/Real-Life-Chemistry-Vol-5/Amino-Acids.html
13
12
NASA's Spitzer Space Telescope (SST)
Keeping spacecraft operating in the cold reaches of space is a big challenge. Instruments freeze up, parts stop working, and spacecraft eventually die. The Mars Phoenix spacecraft only lasted a few months at the Red Planet's north pole before it succumbed to the extreme cold of the oncoming winter. So, why in the world would NASA deliberately launch a spacecraft whose instruments were chilled down to -456 degrees Fahrenheit using liquid helium? Was the space agency trying to fail? No. The Spitzer Space Telescope (SST) was cooled to just 3.67 degrees Fahrenheit above absolute zero because it had to measure the faint heat of infrared light emitted by the extremely cold objects it observes.

Launched on Aug. 25, 2003, the Spitzer Space Telescope was the last – but certainly not the least – of NASA's four Great Observatories. The telescope is named after the late Lyman Spitzer, a Princeton University astrophysics professor who proposed sending a large telescope into space in 1946. Spitzer was instrumental in getting the first Great Observatory, the Hubble Space Telescope, built and launched into space. The $800 million Spitzer Space Telescope orbits the sun more than 62 million miles behind Earth. SST is thousands of times more sensitive than Earth-based observatories, giving scientists unprecedented views of stars, galaxies, and other cosmic phenomena.

In 2004, the Spitzer Space Telescope was viewing a core of gas and dust when it spotted the faint red glow of what is likely the youngest star ever seen. The red area had previously appeared to be completely dark based on observations taken by ground-based telescopes and Spitzer's predecessor, the Infrared Space Observatory. The following year, SST became the first telescope to directly capture the light of exoplanets, specifically HD 209458b and TrES-1. Although it was unable to resolve the light into images, this marked the first time that a planet outside of our Solar System had been directly observed. Earlier worlds had been identified by the pull they exerted on the stars they orbited. In 2006, SST found an 80-light-year-long nebula near the center of the Milky Way Galaxy that astronomers named the Double Helix Nebula due to its double spiral appearance. Scientists believe the nebula is a result of massive magnetic fields being generated by a gas disc that is orbiting a supermassive black hole at the center of our galaxy.

Spitzer's on-board supply of super-cold liquid helium ran out on May 15, 2009. At that point, the telescope warmed up slightly, from -456 degrees Fahrenheit to -404 degrees Fahrenheit. The observatory then began its warm mission, which involved measuring wavelengths that are not affected by the loss of coolant. NASA is building the James Webb Space Telescope as a successor to the Spitzer and Hubble observatories. The Webb telescope is scheduled for launch in 2018.

Related links: Spitzer Space Telescope website; Lyman Spitzer biography
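The temperatures quoted above are easier to compare on the Kelvin scale. The short sketch below (my own illustration, not from the article) converts the Fahrenheit figures: -456°F works out to roughly 2 kelvins, consistent with the "3.67 degrees Fahrenheit above absolute zero" figure, while the post-coolant -404°F is roughly 31 kelvins.

```python
def fahrenheit_to_kelvin(f: float) -> float:
    """Convert a Fahrenheit temperature to kelvins."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

for label, temp_f in [("cryogenic mission", -456.0), ("warm mission", -404.0)]:
    k = fahrenheit_to_kelvin(temp_f)
    print(f"{label}: {temp_f} F is about {k:.1f} K above absolute zero")
```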
http://rocketcityspacepioneers.com/space/spitzer-space-telescope
13
10
Conditions for interference
1. The waves from the light sources must be coherent with each other. This means that they must be of the same frequency, with a constant phase difference between them.
2. The amplitudes (maximum displacements) of the interfering waves must have the same magnitude. Slight variations produce a lack of contrast in the interference pattern.

Young's Double Slit Experiment - Apparatus
It is important to realise that the diagram is not to scale. Typically the distance (D) between the double slits and the screen is ~0.2 m (20 cm). The distance (a) between the double slits is ~10⁻³ m (1 mm). The preferred monochromatic light source is a sodium lamp.

Young's Double Slit Experiment - Display
You will notice some dimming in the image from the centre travelling outwards. This is because the regular light-dark bands are superimposed on the light pattern from the single slit. The intensity pattern is in effect a combination of both the single-slit diffraction pattern and the double-slit interference pattern. The amplitude of the diffraction pattern modulates the interference pattern. In other words, the diffraction pattern acts like an envelope containing the interference pattern. The image above is taken from the central maximum area of a display.

Young's Double Slit Experiment - Theory
The separation (y) of bright/dark fringes can be calculated using simple trigonometry and algebra. Consider two bright fringes at C and D. For the fringe at C, the method is to find the path difference between the two rays S1C and S2C. This is then equated to an exact number of wavelengths n. A similar expression is found for the fringe at D, but for the number of wavelengths n+1. The two expressions are then combined to eliminate n.
With reference to triangle CAS2, using Pythagoras' Theorem and substituting for AC and S2A in terms of xC, a and D:
(S2C)² = D² + (xC + a/2)²    (i)
Also, with reference to triangle CBS1:
(S1C)² = D² + (xC - a/2)²    (ii)
Subtracting equation (ii) from equation (i):
(S2C)² - (S1C)² = (xC + a/2)² - (xC - a/2)²
Using 'the difference of two squares' to expand the LHS:
(S2C - S1C)(S2C + S1C) = 2 a xC
The path difference S2C - S1C is therefore given by:
S2C - S1C = 2 a xC / (S2C + S1C)
In reality, a ~ 10⁻³ m and D ~ 0.2 m. The length a is much smaller than D. The two rays S2C and S1C are roughly horizontal and each approximately equal to D, so S2C + S1C ≈ 2D. Cancelling the 2's:
S2C - S1C ≈ a xC / D
For a bright fringe at point C the path difference S2C - S1C must be a whole number (n) of wavelengths (λ):
a xC / D = n λ
Rearranging to make xC the subject:
xC = n λ D / a
Similarly for the next bright fringe at D, when the path difference is one wavelength longer (n+1):
xD = (n+1) λ D / a
Hence the fringe separation xD - xC is given by:
xD - xC = λ D / a
Assigning the fringe separation the letter y:
y = λ D / a
or, with wavelength λ the subject:
λ = a y / D
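As a quick numerical check of the result y = λD/a, here is a small Python sketch. The sodium wavelength of 589 nm and the distances are illustrative values consistent with the figures quoted above, not data taken from the page.

```python
def fringe_separation(wavelength_m: float, slit_to_screen_m: float, slit_spacing_m: float) -> float:
    """Fringe separation y = lambda * D / a for Young's double slits."""
    return wavelength_m * slit_to_screen_m / slit_spacing_m

wavelength = 589e-9   # sodium D-line, about 589 nm
D = 0.2               # slit-to-screen distance in metres
a = 1e-3              # slit separation in metres

y = fringe_separation(wavelength, D, a)
print(f"fringe separation y = {y * 1e3:.3f} mm")  # about 0.12 mm
```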
http://a-levelphysicstutor.com/wav-light-inter.php
13
16
Source: NASA/Goddard Space Flight Center While the Moon is Earth's only natural satellite, there are thousands of artificial satellites circling our planet for navigation, communications, entertainment, and science. These satellites are an integral part of our everyday life, and they provide a source for scientific data unavailable from Earth's surface. This video segment adapted from NASA's Goddard Space Flight Center describes some of the different kinds of satellites that orbit Earth. From their vantage points far above Earth's surface, satellites offer views of both outer space and our planet. Satellites located above the atmosphere can provide clearer and more detailed views of the universe than ground telescopes can; they are free from the distortion and absorption effects caused by Earth's atmosphere. The view of Earth itself from orbit also allows for observations that could not be taken from the ground. Earth-observing satellites can provide continuous views of the planet and gather data daily over the entire globe. Sensors and other instruments onboard satellites gather data about Earth by detecting electromagnetic radiation — most commonly in the visible and infrared ranges. Visible light images show the amount of sunlight that is reflected from objects and can be used to study clouds, aerosols, and Earth's surface. Infrared radiation is invisible, but with the help of computers, infrared light is very valuable in studying many processes on Earth. For example, infrared data gathered by satellites can identify temperature differences in ocean currents, as well as reveal cloud structures and movements. Infrared radiation can also be used to examine the thickness of ice in the polar regions and to help explain volcanic eruptions. In addition, infrared radiation is used to measure vegetation cover and determine the composition of soils, rocks, and gases. Microwaves — high-energy radio waves — can study the Earth system in two different ways. Passive microwaves are measured similarly to infrared radiation — objects emit and reflect microwaves and the data is used to study sea surface temperature, atmosphere, soil moisture, sea ice, ocean currents, and pollutants. Microwaves have an advantage over infrared wavelengths because they are less affected by water vapor, aerosols, and clouds, and therefore they can be measured in a wider range of conditions. Active microwave sensing uses the radar technique, in which the satellite releases pulses of microwaves towards Earth's surface and then detects the reflected radiation. This radar method allows for distance measurements such as wave height, topography, and glacier flow.
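Active microwave sensing, as described above, infers distance from the round-trip time of a radar pulse. A minimal sketch of that calculation follows; the delay value is an illustrative number of my own, not data from the segment.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def radar_range(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface from a radar pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a pulse that returns 4.7 milliseconds after transmission corresponds
# to a target roughly 700 km away - a typical satellite altitude.
delay = 4.7e-3
print(f"range: {radar_range(delay) / 1000:.0f} km")
```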
http://www.teachersdomain.org/resource/ess05.sci.ess.eiu.essatellites/
13
11
TenMarks teaches you how to find the shortest distance using perpendicular distance theorems.
Learn about Perpendicular Line Theorems
In this lesson let's learn about perpendicular line theorems and how to use them to calculate the shortest distance. Perpendicular line theorems are actually quite interesting, so we'll do a few examples. The first one gives us a figure, and it says: name the shortest distance from point W to line XZ. This is point W. We need to find the shortest distance between W and this line XZ, and we need to write and solve an inequality for x. The key thing to remember is that from any point, the shortest distance to a line is found by drawing a perpendicular line, which means this is 90 degrees. So the shortest distance from any point to a line is the segment WY, which is perpendicular to the original line XZ. So WY is the shortest distance, which means WY is shorter than WX or WZ; in particular, WY is shorter than WZ, right, because WZ has to be longer than WY, since WY is the shortest distance. If WY is less than WZ, and we know WZ is 19 and the length of WY is x + 8, then x + 8 < 19 is the inequality. How do we solve it? Subtract 8 from both sides, and we get x is less than 11. So if WY is the shortest distance and x + 8 is the length of WY, the inequality's solution is x is less than 11. That is how you get the shortest distance.
Now let's look at a couple of other examples. It says line CD, which is the straight line here, forms a linear pair of congruent angles on segments AB and EF. So these two are line segments AB and EF, and CD forms a linear pair of congruent angles, which means both these angles are 90 degrees. Right, we need to prove that AB is parallel to EF. Now the theorem states that if we have a line that intersects two lines, and it forms right angles with one line and right angles with the other line, then these two lines are indeed parallel. If this line intersects line AB at one set of angles, in this case 90 degrees, and intersects the second line, which is line EF, at the same measure, then these two lines are indeed parallel. So since this is 90, this is 90, and this is 90, CD intersects both these lines at 90 degrees. Since these two angles are equal, that means AB is parallel to EF. That is true, and the reason it's true is because the angle CD makes with AB is equal to the angle CD makes with EF. Since they're both equal to 90 degrees, these two lines have to be parallel for the two angles to have the same measure.
Let's try one more. Here we need to compute x and y. We can see these are two parallel lines being intersected by AF — C and D are two parallel lines intersected by AF. So what does that mean? If I can see this is 90 degrees, that means this is 90 degrees as well, because AE is a straight angle of 180 degrees. Since CB intersects AE at a perpendicular angle, that means angle CBA plus angle CBE — these two angles that I have marked in green — equals 180 degrees. Since we're given that CBA is 90 degrees, 90 plus angle CBE equals 180, which means angle CBE equals 90 degrees. Right, so this is 90. What do we know? 6y equals 90 degrees. 6 times y is the measure of this angle, but we know this angle is 90 degrees. So 6 times y equals 90; divide by 6 on both sides, and what do I get? y = 15. So I've already solved for y. Now that I know y equals 15, let's look at these two angles. I'm going to erase what I've marked here and start all over.
So what do I know? These two are parallel lines, and since they are parallel lines, if this is 90 degrees then this is also 90 degrees. What does that mean? That 6y equals 5x + 4y — these two angles are equal. If these angles are equal, 6 times 15 is 90, which equals 5x plus 4 times 15, which is 60. Right, so subtracting 60 degrees from both sides, 30 degrees equals 5 times x. Let's divide both sides by 5 to simplify, and we get x = 30/5, which is 6. So x is 6. So what have we solved? x = 6 and y = 15. And the reason we could solve this is because we know that if we have two parallel
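The two calculations in the transcript boil down to a perpendicular distance and a linear inequality. The hedged Python sketch below shows both; the coordinates are made up for illustration, and the function is my own, not TenMarks material.

```python
import math

def perpendicular_distance(px, py, x1, y1, x2, y2):
    """Shortest (perpendicular) distance from point P to the line through (x1, y1) and (x2, y2)."""
    # Twice the area of triangle P-(x1,y1)-(x2,y2), divided by the length of the base.
    numerator = abs((x2 - x1) * (y1 - py) - (x1 - px) * (y2 - y1))
    return numerator / math.hypot(x2 - x1, y2 - y1)

# Illustrative coordinates: a point W above a horizontal line XZ.
print(perpendicular_distance(2, 5, 0, 0, 10, 0))  # 5.0 - the vertical drop is the shortest path

# The inequality from the worked example: x + 8 < 19  =>  x < 11.
x_values = [10, 10.9, 11, 12]
print([x for x in x_values if x + 8 < 19])        # [10, 10.9]
```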
http://www.healthline.com/hlvideo-5min/learn-about-perpendicular-line-theorems-285014226
13
16
To place a straight line equal to a given straight line with one end at a given point.
Let A be the given point, and BC the given straight line. It is required to place a straight line equal to the given straight line BC with one end at the point A.
Join the straight line AB from the point A to the point B, and construct the equilateral triangle DAB on it. (Post. 1)
Produce the straight lines AE and BF in a straight line with DA and DB. Describe the circle CGH with center B and radius BC, and again, describe the circle GKL with center D and radius DG. (Post. 2)
Since the point B is the center of the circle CGH, therefore BC equals BG. Again, since the point D is the center of the circle GKL, therefore DL equals DG. (I.Def.15)
And in these DA equals DB, therefore the remainder AL equals the remainder BG. (C.N.3)
But BC was also proved equal to BG, therefore each of the straight lines AL and BC equals BG. And things which equal the same thing also equal one another, therefore AL also equals BC. (C.N.1)
Therefore the straight line AL equal to the given straight line BC has been placed with one end at the given point A.
Another, different, expectation is that one might use a compass to transfer the distance BC over to the point A. It is clear from Euclid's use of postulate 3 that the point to be used for the center and a point that will be on the circumference must be constructed before applying the postulate; postulate 3 is not used to transfer distance. Sometimes postulate 3 is likened to a collapsing compass; that is, when the compass is lifted off the drawing surface, it collapses. It could well be that in some earlier Greek geometric theory, abstracted compasses could transfer distances. If that speculation is correct, then this proposition would be a late addition to the theory. The construction of the proposition allows a weaker postulate (namely postulate 3) to be assumed.
When using a compass and a straightedge to perform this construction, there are more circles drawn than shown in the diagram that accompanies the proposition. These are the two circles needed to construct the equilateral triangle ABD. One side, AB, of that triangle isn't necessary for the construction. Altogether, four circles and two lines are required for this construction.
Note that this construction assumes that the point A and the line BC lie in a plane. It may also be used in space, however, since Proposition XI.2 implies that A and BC do lie in a plane.
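For readers who like to check constructions numerically, here is a coordinate sketch of the proposition, entirely my own illustration (Euclid's proof needs no coordinates). It builds the equilateral triangle on AB with apex D, extends DB to G with BG = BC, extends DA to L with DL = DG, and confirms that AL equals BC.

```python
import math

def construct_AL(A, B, C):
    """Follow Proposition I.2 numerically: return the point L with AL equal to BC."""
    ax, ay = A
    bx, by = B
    ab = math.dist(A, B)
    # Apex D of an equilateral triangle on AB (one of the two possible choices).
    mx, my = (ax + bx) / 2, (ay + by) / 2
    ux, uy = (by - ay) / ab, -(bx - ax) / ab          # unit normal to AB
    D = (mx + math.sqrt(3) / 2 * ab * ux, my + math.sqrt(3) / 2 * ab * uy)
    # G lies on DB produced beyond B, with BG = BC (circle CGH about B).
    BC = math.dist(B, C)
    db = math.dist(D, B)
    G = (bx + BC * (bx - D[0]) / db, by + BC * (by - D[1]) / db)
    # L lies on DA produced beyond A, with DL = DG (circle GKL about D).
    DG = math.dist(D, G)
    da = math.dist(D, A)
    return (D[0] + DG * (ax - D[0]) / da, D[1] + DG * (ay - D[1]) / da)

A, B, C = (0.0, 0.0), (4.0, 1.0), (6.0, 5.0)
L = construct_AL(A, B, C)
print(math.dist(A, L), math.dist(B, C))  # the two lengths agree
```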
http://aleph0.clarku.edu/~djoyce/java/elements/bookI/propI2.html
13
13
1st: Graph the function when given the equation
2nd: Match a given graph to its equation
3rd: Write the equation of a function given its graph
I feel that this order helps students complete the last task better. For this matching game, I printed the solution page of the worksheet from Kuta (http://www.kutasoftware.com/freeia2.html) and cut the equation and graph onto separate index cards for each group. I knew my students would find it easy, and it allowed the kiddos to work with one another on a task that was not that long. I planned on using this as the opener to my students' review day for their upcoming quiz, but then I started to make them. To make one set took me 10 minutes, and I was planning on making 10. I made 2. I plan on making the rest to use next year - I will be making these while watching a movie or getting a student to make them! While we reviewed, I handed out the index cards to students who were done and had them match with the sets I had completed. It worked well to keep those students working with something they find 'fun'! I also attached the notes I give my students when we first look at writing the equation of an absolute value function from its graph. The lesson went well--my students typically find it "easy" and a nice break. I see it as the calm before the storm. Piecewise functions are next.
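For anyone who wants a quick way to check answers to that third task, here is a small sketch of the arithmetic involved (my own illustration, not part of the Kuta worksheet): given the vertex (h, k) and one other point (x, y) on the graph, the stretch factor is a = (y - k) / |x - h|, which gives the equation y = a|x - h| + k.

```python
def absolute_value_equation(vertex, point):
    """Return (a, h, k) for y = a*|x - h| + k from the vertex and one other point on the graph."""
    h, k = vertex
    x, y = point
    a = (y - k) / abs(x - h)
    return a, h, k

# Example: vertex at (2, -3), passing through (4, 1)  ->  a = (1 - (-3)) / |4 - 2| = 2
a, h, k = absolute_value_equation((2, -3), (4, 1))
print(f"y = {a}*|x - {h}| + ({k})")
```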
http://walkinginmathland.weebly.com/1/post/2012/03/writing-equations-of-absolute-value-functions.html
13
21
Primary mathematics/Negative numbers - The intended audience for this lesson is educators and parents, whether in traditional or home-schooling environments. For the same lesson, but aimed directly at students, whether children or adults, please see this Wikibooks page.
Introducing Children to Negative Numbers
The concept of negativity can be difficult to explain to children. This is in part because, from a child's perspective, there are no obvious examples of negative numbers naturally found in nature. Temperatures below zero are an artificial construct, as is the idea of below sea-level, or floors in a building below the first floor. The concepts of owing money or borrowing candy bars are abstract ideas as well. Children generally do not understand "overnight" what negativity is. However, introducing children to all of these examples will, over time, help them to make the necessary connections that will lead to a strong basic understanding of the concept. Children should be encouraged to think of their own examples of negativity. Concepts such as backwards and forwards, up and down, give and take, hot and cold, should be explored.
Models and Manipulatives
There are a few mathematical models used to introduce negative numbers to children, but the most commonly used one is most certainly the number line. Because the number line is the one-dimensional component of the Cartesian coordinate system, it should eventually be modeled both horizontally and vertically, and because the x and y axes of the Cartesian system run left to right and down to up respectively, number lines should be presented this way as well. Connections can then be made between number lines, negative numbers and thermometers, elevators, and rulers (for measuring sea-level).
Another way to teach negative numbers is with a set of manipulatives that look like little plus and minus signs (sometimes known as integer tiles). These tiles can be grouped, combined, and swapped around in ways that model the four basic arithmetic operations. They can then be modeled at a higher level of abstraction by students on paper. These manipulatives and their accompanying visual models are normally introduced in the upper primary grades. Below is an example of addition using these tiles:
Note that in this model's answer set, plus signs and minus signs can be grouped to create null sets, whose value is 0. In this particular problem, 2 null sets have been found and circled in the answer. What remains is the sum of the two initial numbers - in this case a single negative tile. Problems should be set up using this method involving two negative numbers, as well as negative and positive numbers, with sums that are equal to both positive and negative answers.
Other activities and suggestions:
- There is also a "red chip/black chip" manipulative, where red chips are used to show negative numbers and black chips positive numbers. This manipulative can be confusing for students because both the red chips and the black chips are physically present.
- A lesser known manipulative used to teach negative numbers is The ZeroSum Ruler, a description of which can be found on the blog http://zerosumruler.wordpress.com/ The ZeroSum Ruler is a foldable ruler that allows students to add integers of opposite signs using absolute value. The ZeroSum Ruler also works for subtraction, as "3 - 8", for example, is the sum of 3 and -8.
- Give students pictorial representations using this model and have them identify the associated equation.
- Have students model the numbers -5, 3, and 0 in three different ways each using tiles.
- Give students equations with larger and more complicated numbers until they, in recognizing how cumbersome this model can get, begin to solve the problems by creating their own algorithms or "rules" to speed up the process. In this way, students will understand at a fundamental level why, for a problem such as 46.2 - 234.5, you can just subtract 46.2 from 234.5 and put a negative sign on the answer.
Subtraction Involving Negative Numbers
In the earlier primary grades, students use various manipulatives as they explore subtraction. Candy, money, and toys are all things that children relate to. But because negative numbers are not found in nature, the subtraction of negative numbers should be introduced only when students are comfortable with more abstract models such as number lines and plus and minus tiles. The number line has the advantage that students will already be familiar with its use for adding and subtracting positive numbers. In the upper primary grades, students should be exposed to the idea that addition and subtraction are inverse operations. Because students are usually taught to "walk backwards" to model subtraction on a number line, in order to model the subtraction of a negative integer, we can tell them to "do the opposite" of "walking backwards", thus modeling the idea that subtracting a negative is the same as adding a positive. But this can be very confusing for students, and other models should be introduced to give students a broader understanding.
The operation of subtraction can be best understood by most students in terms of the act of "taking away". Teaching a student to say (and eventually think) the words "take away" whenever they see a subtraction sign is a solid teaching strategy that just happens to translate well to working with negative numbers when using plus and minus tiles. The following illustration models the problem (-5) - (-2). The statement "5 negative tiles take away 2 negative tiles" describes the values and the operation. Note that it is important to stress the words "take away". Students need to become comfortable with this level of problem before they are exposed to problems where the first number in the equation does not permit the taking away of the tiles specified.
Consider the problem (-1) - (-4). It is not possible to "take away" 4 negative tiles when there is only 1 negative tile in the initial set. Students must be taught to model the number -1 in such a way that there are 4 negative tiles available to remove. Notice that in this illustration, the number -1 has been modeled using three null sets (circled in blue). With the inclusion of these null sets, the value of the number in the rectangle is still -1. Note that the student could put in 10 or 20 null sets and the answer will still be the same. Once students are exposed to this method, they might use more null sets than necessary. This is OK. It is in figuring out for themselves how many null sets are necessary that they truly come to an understanding of the operation. Students should learn to model a variety of problems of this type. After achieving a certain level of proficiency with this model, students soon come to their own personal understanding of what it means when we say that subtracting a negative is the same as adding a positive.
Multiplying negative numbers
- Multiply a positive number by a negative number normally. The result is negative.
- Multiply two negative numbers normally. The result, however, is positive. Think of it as a double negative making a positive.
Dividing negative numbers
- Divide a positive number by a negative number normally. The result is negative.
- Divide a negative number by a positive number normally. The result is negative.
- Divide a negative number by a negative number normally. The result, however, is positive. Think of it as a double negative making a positive.
- This same logic also applies to fractions, where the numerator (top number) and/or denominator (bottom number) is negative. That is, if exactly one of the two numbers is negative, then the fraction is negative. If both numbers are negative, the minus signs cancel, and the fraction is positive.
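A small Python sketch of the plus/minus tile model described above (entirely illustrative; the class and method names are my own). A number is stored as separate counts of positive and negative tiles; adding null sets corresponds to increasing both counts, and cancelling them recovers the value.

```python
class Tiles:
    """A number represented as piles of +1 and -1 tiles (the integer-tile manipulative)."""
    def __init__(self, value=0, extra_null_sets=0):
        self.plus = max(value, 0) + extra_null_sets
        self.minus = max(-value, 0) + extra_null_sets

    def value(self):
        return self.plus - self.minus          # cancelling null sets (+1, -1 pairs)

    def take_away(self, amount):
        """Subtract by physically removing tiles, adding null sets first if needed."""
        need_plus = max(amount, 0)
        need_minus = max(-amount, 0)
        shortfall = max(need_plus - self.plus, need_minus - self.minus, 0)
        result = Tiles(self.value(), extra_null_sets=shortfall)  # put in enough null sets
        result.plus -= need_plus
        result.minus -= need_minus
        return result

# (-5) - (-2): five negative tiles, take away two negative tiles -> -3
print(Tiles(-5).take_away(-2).value())
# (-1) - (-4): one negative tile, add null sets so four negatives can be removed -> 3
print(Tiles(-1).take_away(-4).value())
# The sign rules for multiplication and division follow ordinary arithmetic:
print((-3) * (-4), (-3) * 4, (-12) / (-3))
```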
http://en.wikiversity.org/wiki/Primary_mathematics:Negative_numbers
13
27
For tens of thousands of years, humans have created sculptures by carving pieces from a solid block. They have chipped away at stone, metal, wood and ceramics, creating art by subtracting material. Now, a group of scientists from Harvard University have figured out how to do the same thing with DNA. First, Yonggang Ke builds a solid block of DNA from individual Lego-like bricks. Each one is a single strand of the famous double helix that folds into a U-shape, designed to interlock with four neighbours. You can see what happens in the diagram below, which visualises the strands as two-hole Lego bricks. Together, hundreds of them can anneal into a solid block. And because each brick has a unique sequences, it only sticks to certain neighbours, and occupies a set position in the block. This means that Ke can create different shapes by leaving out specific bricks from the full set, like a sculptor removing bits of stone from a block. Starting with a thousand-brick block, he carved out 102 different shapes, with complex features like cavities, tunnels, and embossed symbols. Each one is just 25 nanometres wide in any direction, roughly the size of the smallest viruses. Here’s the 12th piece from my BBC column In 2001, the Human Genome Project gave us an almost complete draft of the 3 billion letters in our DNA. We joined an elite club of species with their genome sequences, one that is growing with every passing month. These genomes contain the information necessary for building their respective owners, but it’s information that we still struggle to parse. To date, no one can take the code from an organism’s genes and predict all the details of its shape, behaviour, development, physiology—the collection of traits known as its phenotype. And yet, the basis of those details are there, all captured in stretches of As, Cs, Gs and Ts. “Cells know pretty reliably how to do this,” says Leonid Kruglyak from Princeton University. “Every time you start with a chicken genome, you get a chicken, and every time you start with an elephant genome, you get an elephant.” As our technologies and understanding advance, will we eventually be able to look at a pile of raw DNA sequence and glean all the workings of the organism it belongs to? Just as physicists can use the laws of mechanics to predict the motion of an object, can biologists use fundamental ideas in genetics and molecular biology to predict the traits and flaws of a body based solely on its genes? Could we pop a genome into a black box, and print out the image of a human? Or a fly? Or a mouse? In North America’s Sonoran desert, there’s a fly that depends on a cactus. Thanks to a handful of changes in a gene called Neverland, Drosophila pachea can no longer make chemicals that it needs to grow and reproduce. These genetic changes represent the evolution of subservience – they inextricably bound the fly to the senita cactus, the only species with the substances the fly needs. The Neverland gene makes a protein of the same name, which converts cholesterol into 7-dehydrocholesterol. This chemical reaction is the first of many that leads to ecdysone – a hormone that all insects need to transform from a larva into an adult. Most species make their own ecdysone but D.pachea is ill-equipped. Because of its Neverland mutations, the manufacturing process fails at the very first step. Without intervention, the fly would be permanently stuck in larval mode. Hence the name, Neverland—fly genes are named after what happens to the insect when the gene is broken. 
Fortunately, in the wild, D.pachea can compensate for its genetic problem by feeding on the senita cactus. The cactus produces lathosterol—a chemical related to cholesterol. D.pachea’s version of Neverland can still process this substitute, and uses it to kickstart the production of ecdysone. The senita is the only plant in the Sonoran desert that makes lathosterol, the only one that lets the fly bypass the deficiency that would keep it forever young. It has become the fly’s dealer, pushing out chemicals that it cannot live without, and all because of changes to a single fly gene. The cheetah’s spots look like the work of a skilled artist, who has delicately dabbed dots of ink upon the animal’s coat. By contrast, the king cheetah – a rare breed from southern Africa – looks like the same artist had a bad day and knocked the whole ink pot over. With thick stripes running down its back, and disorderly blotches over the rest of its body, the king cheetah looks so unusual that it was originally considered a separate species. Its true nature as a mutant breed was finally confirmed in 1981 when two captive spotted females each gave birth to a king. Two teams of scientists, led by Greg Barsh from the HudsonAlpha Institute for Biotechnology and Stanford University, and Stephen O’Brien from the Frederick National Laboratory for Cancer Research have discovered the gene behind the king cheetah’s ink-stains. And it’s the same gene that turns a mackerel-striped tabby cat into a blotched “classic” one. Back in 2010, Eduardo Eizirik, one of O’Brien’s team, found a small region of DNA that seemed to control the different markings in mackerel and blotched tabbies. But, we only have a rough draft of the cat genome, they couldn’t identify any specific genes within the area. The study caught the attention of Barsh, who had long been interested in understanding how cats get their patterns, from tiger stripes to leopard rosettes. The two teams started working together. A different version of this story appears at The Scientist. Honeybee workers spend their whole lives toiling for their hives, never ascending to the royal status of queens. But they can change careers. At first, they’re nurses, which stay in the hive and tend to their larval sisters. Later on, they transform into foragers, which venture into the outside world in search of flowers and food. This isn’t just a case of flipping between tasks. Nurses and foragers are very distinct sub-castes that differ in their bodies, mental abilities, and behaviour – foragers, for example, are the ones that use the famous waggle dance. “[They’re] as different as being a scientist or journalist,” explains Gro Amdam, who studies bee behaviour. “It’s really amazing that they can sculpt themselves into those two roles that require very specialist skills.” The transformation between nurse and forager is significant, but it’s also reversible. If nurses go missing, foragers can revert back to their former selves to fill the employment gap. Amdam likens them to the classic optical illusion (shown on the right) which depicts both a young debutante and an old crone. “The bee genome is like this drawing,” she says. “It has both ladies in it. How is the genome able to make one of them stand out and then the other? The answer lies in ‘epigenetic’ changes that alter how some of the bees’ genes are used, without changing the underlying DNA. 
Amdam and her colleague Andrew Feinberg found that the shift from nurse to forager involves a set of chemical marks, added to the DNA of few dozen genes. These marks, known as methyl groups, are like Post-It notes that dictate how a piece of text should be read, without altering the actual words. And if the foragers change back into nurses, the methylation marks also revert. Together, they form a toolkit for flexibility, a way of seeing both the crone and the debutante in the same picture, a way of eking out two very different and reversible skill-sets from the same genome. Every whale and dolphin evolved from a deer-like animal with slender, hoofed legs, which lived between 53 and 56 million years ago. Over time, these ancestral creatures became more streamlined, and their tails widened into flukes. They lost their hind limbs, and their front ones became paddles. And they became smarter. Today, whales and dolphins – collectively known as cetaceans – are among the most intelligent of mammals, with smarts that rival our own primate relatives. Now, Shixia Xu from Nanjing Normal University has found that a gene called ASPM seems to have played an important role in the evolution of cetacean brains. The gene shows clear signatures of adaptive change at two points in history, when the brains of some cetaceans ballooned in size. But ASPM has also been linked to the evolution of bigger brains in another branch of the mammal family tree – ours. It went through similar bursts of accelerated evolution in the great apes, and especially in our own ancestors after they split away from chimpanzees. It seems that both primates and cetaceans—the intellectual heavyweights of the animal world—could owe our bulging brains to changes in the same gene. “It’s a significant result,” says Michael McGowen, who studies the genetic evolution of whales at Wayne State University. “The work on ASPM shows clear evidence of adaptive evolution, and adds to the growing evidence of convergence between primates and cetaceans from a molecular perspective.” Back in 2001, the Human Genome Project gave us a nigh-complete readout of our DNA. Somehow, those As, Gs, Cs, and Ts contained the full instructions for making one of us, but they were hardly a simple blueprint or recipe book. The genome was there, but we had little idea about how it was used, controlled or organised, much less how it led to a living, breathing human. That gap has just got a little smaller. A massive international project called ENCODE – the Encyclopedia Of DNA Elements – has moved us from “Here’s the genome” towards “Here’s what the genome does”. Over the last 10 years, an international team of 442 scientists have assailed 147 different types of cells with 24 types of experiments. Their goal: catalogue every letter (nucleotide) within the genome that does something. The results are published today in 30 papers across three different journals, and more. For years, we’ve known that only 1.5 percent of the genome actually contains instructions for making proteins, the molecular workhorses of our cells. But ENCODE has shown that the rest of the genome – the non-coding majority – is still rife with “functional elements”. That is, it’s doing something. It contains docking sites where proteins can stick and switch genes on or off. Or it is read and ‘transcribed’ into molecules of RNA. Or it controls whether nearby genes are transcribed (promoters; more than 70,000 of these). 
Or it influences the activity of other genes, sometimes across great distances (enhancers; more than 400,000 of these). Or it affects how DNA is folded and packaged. Something. According to ENCODE’s analysis, 80 percent of the genome has a “biochemical function”. More on exactly what this means later, but the key point is: It’s not “junk”. Scientists have long recognised that some non-coding DNA has a function, and more and more solid examples have come to light [edited for clarity - Ed]. But, many maintained that much of these sequences were, indeed, junk. ENCODE says otherwise. “Almost every nucleotide is associated with a function of some sort or another, and we now know where they are, what binds to them, what their associations are, and more,” says Tom Gingeras, one of the study’s many senior scientists. And what’s in the remaining 20 percent? Possibly not junk either, according to Ewan Birney, the project’s Lead Analysis Coordinator and self-described “cat-herder-in-chief”. He explains that ENCODE only (!) looked at 147 types of cells, and the human body has a few thousand. A given part of the genome might control a gene in one cell type, but not others. If every cell is included, functions may emerge for the phantom proportion. “It’s likely that 80 percent will go to 100 percent,” says Birney. “We don’t really have any large chunks of redundant DNA. This metaphor of junk isn’t that useful.” That the genome is complex will come as no surprise to scientists, but ENCODE does two fresh things: it catalogues the DNA elements for scientists to pore over; and it reveals just how many there are. “The genome is no longer an empty vastness – it is densely packed with peaks and wiggles of biochemical activity,” says Shyam Prabhakar from the Genome Institute of Singapore. “There are nuggets for everyone here. No matter which piece of the genome we happen to be studying in any particular project, we will benefit from looking up the corresponding ENCODE tracks.” There are many implications, from redefining what a “gene” is, to providing new clues about diseases, to piecing together how the genome works in three dimensions. “It has fundamentally changed my view of our genome. It’s like a jungle in there. It’s full of things doing stuff,” says Birney. “You look at it and go: “What is going on? Does one really need to make all these pieces of RNA? It feels verdant with activity but one struggles to find the logic for it. Think of the human genome as a city. The basic layout, tallest buildings and most famous sights are visible from a distance. That’s where we got to in 2001. Now, we’ve zoomed in. We can see the players that make the city tick: the cleaners and security guards who maintain the buildings, the sewers and power lines connecting distant parts, the police and politicians who oversee the rest. That’s where we are now: a comprehensive 3-D portrait of a dynamic, changing entity, rather than a static, 2-D map. And just as London is not New York, different types of cells rely on different DNA elements. For example, of the roughly 3 million locations where proteins stick to DNA, just 3,700 are commonly used in every cell examined. Liver cells, skin cells, neurons, embryonic stem cells… all of them use different suites of switches to control their lives. Again, we knew this would be so. Again, it’s the scale and the comprehensiveness that matter. “This is an important milestone,” says George Church, a geneticist at the Harvard Medical School. 
His only gripe is that ENCODE’s cells lines came from different people, so it’s hard to say if differences between cells are consistent differences, or simply reflect the genetics of their owners. Birney explains that in other studies, the differences between cells were greater than the differences between people, but Church still wants to see ENCODE’s analyses repeated with several types of cell from a small group of people, healthy and diseased. That should be possible since “the cost of some of these [tests] has dropped a million-fold,” he says. The next phase is to find out how these players interact with one another. What does the 80 percent do (if, genuinely, anything)? If it does something, does it do something important? Does it change something tangible, like a part of our body, or our risk of disease? If it changes, does evolution care? [Update 07/09 23:00 Indeed, to many scientists, these are the questions that matter, and ones that ENCODE has dodged through a liberal definition of “functional”. That, say the critics, critically weakens its claims of having found a genome rife with activity. Most of the ENCODE’s “functional elements” are little more than sequences being transcribed to RNA, with little heed to their physiological or evolutionary importance. These include repetitive remains of genetic parasites that have copied themselves ad infinitum, the corpses of dead and once-useful genes, and more. To include all such sequences within the bracket of “functional” sets a very low bar. Michael Eisen from the Howard Hughes Medical Institute said that ENCODE’s definition as a “meaningless measure of functional significance” and Leonid Kruglyak from Princeton University noted that it’s “barely more interesting” than saying that a sequence gets copied (which all of them are). To put it more simply: our genomic city’s got lots of new players in it, but they may largely be bums. This debate is unlikely to quieten any time soon, although some of the heaviest critics of ENCODE’s “junk” DNA conclusions have still praised its nature as a genomic parts list. For example, T. Ryan Gregory from Guelph University contrasts their discussions on junk DNA to a classic paper from 1972, and concludes that they are “far less sophisticated than what was found in the literature decades ago.” But he also says that ENCODE provides “the most detailed overview of genome elements we’ve ever seen and will surely lead to a flood of interesting research for many years to come.” And Michael White from the Washington University in St. Louis said that the project had achieved “an impressive level of consistency and quality for such a large consortium.” He added, “Whatever else you might want to say about the idea of ENCODE, you cannot say that ENCODE was poorly executed.” ] Where will it lead us? It’s easy to get carried away, and ENCODE’s scientists seem wary of the hype-and-backlash cycle that befell the Human Genome Project. Much was promised at its unveiling, by both the media and the scientists involved, including medical breakthroughs and a clearer understanding of our humanity. The ENCODE team is being more cautious. “This idea that it will lead to new treatments for cancer or provide answers that were previously unknown is at least partially true,” says Gingeras, “but the degree to which it will successfully address those issues is unknown. “We are the most complex things we know about. It’s not surprising that the manual is huge,” says Birney. 
“I think it’s going to take this century to fill in all the details. That full reconciliation is going to be this century’s science.”

So, that 80 percent figure… Let’s build up to it. We know that 1.5 percent of the genome codes for proteins. That much is clearly functional and we’ve known that for a while. ENCODE also looked for places in the genome where proteins stick to DNA – sites where, most likely, the proteins are switching a gene on or off. They found 4 million such switches, which together account for 8.5 percent of the genome.* (Birney: “You can’t move for switches.”) That’s already higher than anyone was expecting, and it sets a pretty conservative lower bound for the part of the genome that definitively does something.

In fact, because ENCODE hasn’t looked at every possible type of cell or every possible protein that sticks to DNA, this figure is almost certainly too low. Birney’s estimate is that it’s out by half. This means that the total proportion of the genome that either creates a protein or sticks to one is around 20 percent.

To get from 20 to 80 percent, we include all the other elements that ENCODE looked for – not just the sequences that have proteins latched onto them, but those that affect how DNA is packaged and those that are transcribed at all. Birney says, “[That figure] best conveys the difference between a genome made mostly of dead wood and one that is alive with activity.” [Update 5/9/12 23:00: For Birney's own, very measured, take on this, check out his post.]

That 80 percent covers many classes of sequence that were thought to be essentially functionless. These include introns – the parts of a gene that are cut out at the RNA stage, and don’t contribute to a protein’s manufacture. “The idea that introns are definitely deadweight isn’t true,” says Birney. The same could be said for our many repetitive sequences: small chunks of DNA that have the ability to copy themselves, and are found in large, recurring chains. These are typically viewed as parasites, which duplicate themselves at the expense of the rest of the genome. Or are they? The youngest of these sequences – those that have copied themselves only recently in our history – still pose a problem for ENCODE. But many of the older ones, the genomic veterans, fall within the “functional” category. Some contain sequences where proteins can bind, and influence the activity of nearby genes. Perhaps their spread across the genome represents not the invasion of a parasite, but a way of spreading control. “These parasites can be subverted sometimes,” says Birney.

He expects that many sceptics will argue about the 80 percent figure, and the definition of “functional”. But he says, “No matter how you cut it, we’ve got to get used to the fact that there’s a lot more going on with the genome than we knew.”

[Update 07/09 23:00: Birney was right about the scepticism. Gregory says, “80 percent is the figure only if your definition is so loose as to be all but meaningless.” Larry Moran from the University of Toronto adds, “‘Functional’ simply means a little bit of DNA that’s been identified in an assay of some sort or another. That’s a remarkably silly definition of function and if you’re using it to discount junk DNA it’s downright disingenuous.” This is the main criticism of ENCODE thus far, repeated across many blogs and touched on in the opening section of this post. There are other concerns.
For example, White notes that many DNA-binding proteins recognise short sequences that crop up all over the genome just by chance. The upshot is that you’d expect many of the elements that ENCODE identified if you just wrote out a random string of As, Gs, Cs, and Ts. “I've spent the summer testing a lot of random DNA,” he tweeted. “It’s not hard to make it do something biochemically interesting.”

Gregory asks: if ENCODE is right and our genome is full of functional elements, why does an onion have around five times as much non-coding DNA as we do? And why can pufferfish get by with just a tenth as much? Birney says the onion test is silly. While many genomes have a tight grip upon their repetitive jumping DNA, many plants seem to have relaxed that control. Consequently, their genomes have bloated in size (bolstered by the occasional mass doubling). “It’s almost as if the genome throws in the towel and goes: Oh sod it, just replicate everywhere.” Conversely, the pufferfish has maintained an incredibly tight rein upon its jumping sequences. “Its genome management is pretty much perfect,” says Birney. Hence: the smaller genome.

But Gregory thinks that these answers are a dodge. “I would still like Birney to answer the question. How is it that humans ‘need’ 100% of their non-coding DNA, but a pufferfish does fine with 1/10 as much [and] a salamander has at least 4 times as much?” [I think Birney is writing a post on this, so expect more updates as they happen, and this post to balloon to onion proportions.]]

[Update 07/09/12 11:00: The ENCODE reactions have come thick and fast, and Brendan Maher has written the best summary of them. I'm not going to duplicate his sterling efforts. Head over to Nature's blog for more.]

* (A cool aside: John Stamatoyannopoulos from the University of Washington mapped these protein-DNA contacts by looking for “footprints” where the presence of a protein shields the underlying DNA from a “DNase” enzyme that would otherwise slice through it. The resolution is incredible! Stamatoyannopoulos could “see” every nucleotide that’s touched by a protein – not just a footprint, but each of its toes too. Joe Ecker from the Salk Institute thinks we should eventually be able to “dynamically footprint a cellular response”. That is, expose a cell to something – maybe a hormone or a toxin – and check its footprints over time. You can cross-reference those sites to the ENCODE database, and reconstruct what’s going on in the cell just by “watching” the shadows of proteins as they descend and lift off.)

The simplistic view of a gene is that it’s a stretch of DNA that is transcribed to make a protein. But each gene can be transcribed in different ways, and the transcripts overlap with one another. They’re like choose-your-own-adventure books: you can read them in different orders, start and finish at different points, and leave out chunks altogether. Fair enough: we can say that the “gene” starts at the start of the first transcript, and ends at the end of the final transcript. But ENCODE’s data complicates this definition. There are a lot of transcripts, probably more than anyone had realised, and some connect two previously unconnected genes. The boundaries for those genes widen, and the gaps between them shrink or disappear. Gingeras says that this “intergenic” space has shrunk by a factor of four.
“A region that was once called Gene X is now melded to Gene Y.” Imagine discovering that every book in the library has a secret appendix that’s also the foreword of the book next to it.

These bleeding boundaries seem familiar. Bacteria have them: their genes are cramped together in a miracle of effective organisation, packing in as much information as possible into a tiny genome. Viruses epitomise such genetic economy even better. I suggested that comparison to Gingeras. “Exactly!” he said. “Nature never relinquished that strategy.” Bacteria and viruses can get away with smooshing their protein-encoding genes together. But not only do we have more proteins, we also need a vast array of sequences to control when, where and how they are deployed. Those elements need space too. Ignore them, and it looks like we have a flabby genome with sequence to spare. Understand them, and our own brand of economical packaging becomes clear. (However, Birney adds, “In bacteria and viruses, it’s all elegant and efficient. At the moment, our genome just seems really, really messy. There’s this much higher density of stuff, but for me, emotionally it doesn’t have that elegance that we see in a bacterial genome.”)

Given these blurred boundaries, Gingeras thinks that it no longer makes sense to think of a gene as a specific point in the genome, or as its basic unit. Instead, that honour falls to the transcript, made of RNA rather than DNA. “The atom of the genome is the transcript,” says Gingeras. “They are the basic unit that’s affected by mutation and selection.” A “gene” then becomes a collection of transcripts, united by some common factor.

There’s something poetic about this. Our view of the genome has long been focused on DNA. It’s the thing the genome project was deciphering; it’s the molecule that gets converted into RNA, which gives it a more fundamental flavour. But of those two molecules, RNA arrived on the planet first. It was copying itself and evolving long before DNA came on the scene. “These studies are pointing us back in that direction,” says Gingeras. They recognise RNA’s role, not simply as an intermediary between DNA and proteins, but as something more primary.

For the last decade, geneticists have run a seemingly endless stream of “genome-wide association studies” (GWAS), attempting to understand the genetic basis of disease. They have thrown up a long list of SNPs – variants at specific DNA letters – that correlate with the risk of different conditions. The ENCODE team have mapped all of these to their data. They found that just 12 percent of the SNPs lie within protein-coding areas. They also showed that compared to random SNPs, the disease-associated ones are 60 percent more likely to lie within functional, non-coding regions, especially in promoters and enhancers. This suggests that many of these variants are controlling the activity of different genes, and provides many fresh leads for understanding how they affect our risk of disease. “It was one of those too good to be true moments,” says Birney. “Literally, I was in the room [when they got the result] and I went: Yes!”

Imagine a massive table. Down the left side are all the diseases that people have done GWAS studies for. Across the top are all the possible cell types and transcription factors (proteins that control how genes are activated) in the ENCODE study. Are there hotspots – SNPs that correspond to both a particular disease and a particular cell type or transcription factor? Yes. Lots, and many of them are new. Take Crohn’s disease, a type of bowel disorder.
The team found five SNPs that increase the risk of Crohn’s, and that are recognised by a group of transcription factors called GATA2. “That wasn’t something that the Crohn’s disease biologists had on their radar,” says Birney. “Suddenly we’ve made an unbiased association between a disease and a piece of basic biology.” In other words, it’s a new lead to follow up on. “We’re now working with lots of different disease biologists looking at their data sets,” says Birney. “In some sense, ENCODE is working from the genome out, while GWAS studies are working from disease in.” Where they meet, there is interest. So far, the team have identified 400 such hotspots that are worth looking into. Of these, between 50 and 100 were predictable. Some of the rest make intuitive sense. Others are head-scratchers.

Writing the genome out as a string of letters invites a common fallacy: that it’s a flat, linear entity. It’s anything but. DNA is wrapped around proteins called histones like beads on a string. These are then twisted, folded and looped in an intricate three-dimensional way. The upshot is that parts of the genome that look distant when you write the sequences out can actually be physical neighbours. And this means that some switches can affect the activity of faraway genes.

Job Dekker from the University of Massachusetts Medical School has now used ENCODE data to map these long-range interactions across just 1 percent of the genome in three different types of cell. He discovered more than 1,000 of them, where switches in one part of the genome were physically reaching over and controlling the activity of a distant gene. “I like to say that nothing in the genome makes sense, except in 3D,” says Dekker. “It’s really a teaser for the future of genome science.” Gingeras agrees. He thinks that understanding these 3-D interactions will add another layer of complexity to modern genetics, and extending this work to the rest of the genome, and other cell types, is a “next clear logical step”.

ENCODE is vast. The results of this second phase have been published in 30 central papers in Nature, Genome Biology and Genome Research, along with a slew of secondary articles in Science, Cell and others. And all of it is freely available to the public.

The pages of printed journals are a poor repository for such a vast trove of data, so the ENCODE team have devised a new publishing model. On the ENCODE portal site, readers can pick one of 13 topics of interest, and follow them in special “threads” that link all the papers. Say you want to know about enhancer sequences. The enhancer thread pulls out all the relevant paragraphs from the 30 papers across the three journals. “Rather than people having to skim-read all 30 papers, and working out which ones they want to read, we pull out that thread for you,” says Birney. And yes, there’s an app for that.

Transparency is a big issue too. “With these really intensive science projects, there has to be a huge amount of trust that data analysts have done things correctly,” says Birney. But you don’t have to trust. At least half the ENCODE figures are interactive, and the data behind them can be downloaded. The team have also built a “Virtual Machine” – a downloadable package of the almost-raw data and all the code used in the ENCODE analyses. Think of it as the most complete Methods section ever.
With the virtual machine, “you can absolutely replay step by step what we did to get to the figure,” says Birney. “I think it should be the standard for the future.”

Icelandic horses can move in an odd way. All horses have three natural gaits: the standard walk; the two-beat trot, where diagonally opposite pairs of legs hit the ground together; and the four-beat gallop, where the four feet hit the ground in turn. To those, Icelandic horses add the tölt. It has four beats, like the gallop, but a tölting horse always has at least one foot on the ground, while a galloping one is essentially flying for part of its stride. This constant contact makes for a smoother ride. It also looks… weird, like watching a horse power-walk straight into the uncanny valley. The tölt is just one of several special ambling gaits that some horses can pull off, but others cannot. These abilities can be heritable, to about the same extent that height is in humans. Indeed, some horses, like the Tennessee Walking Horse, have been bred to specialise in certain gaits. Now, a team of Swedish, Icelandic and American scientists has shown that these special moves require a single change in a gene called DMRT3. It creates a protein used in the neurons of a horse’s spine – the ones that coordinate the movements of its limbs. It’s a gait-keeper.

On 13 June 2011, a woman was transferred to the National Institutes of Health Clinical Center with an infection of Klebsiella pneumoniae. This opportunistic bacterium likes to infect people whose immune systems have been previously weakened, and it does well in hospitals. In recent years, it has also evolved resistance to carbapenems – the frontline antibiotics that are usually used to treat it. These resistant strains kill more than half of the people they infect, and the new patient at the NIH hospital was carrying just such a strain. She was kept by herself, in her own room. Any doctors or visitors had to wear gowns and gloves. The only contacts she had with other patients were two brief stints in an intensive care unit. The woman eventually recovered and was released on 15 July. But by then, she had already spread her infection to at least three other patients, despite the hospital’s strict precautions. None of them knew it at the time, for K. pneumoniae can silently colonise the guts of its host without causing symptoms for long spans of time. The second patient was diagnosed with K. pneumoniae on 5 August, and every week after that, a new case popped up. The hospital took extreme measures. All the infected people were kept in a separate part of the hospital, and assigned a dedicated group of staff who didn’t work on any other patients. The outbreak was contained, but not before it had spread to 18 people in total, and killed 6 of them. How did the bacterium manage to spread so effectively, despite everything that the hospital did to stem its flow? K. pneumoniae’s stealthy nature makes it nigh impossible to work out the path of transmission through normal means. Instead, Evan Snitkin from the National Human Genome Research Institute sequenced the entire genomes of bacteria taken from all the infected patients. His study is the latest in a growing number of efforts to use the power of modern genetic technology to understand the spread and evolution of diseases.

Words like “individual” are hard to use when it comes to the black cottonwood tree.
Each tree can sprout a new one that’s a clone of the original, still connected by the same root system. This “offspring” is arguably the same tree – the same “individual” – as the “parent”. This semantic difficulty gets even worse when you consider their genes. Even though the parent and offspring are clones, it turns out that they have stark genetic differences between them. It gets worse: when Brett Olds sequenced tissues from different parts of the same black cottonwood, he found differences in thousands of genes between the topmost bud, the lowermost branch, and the roots. In fact, the variation within a single tree can be greater than that across different trees. As Olds told me, “This could change the classic paradigm that evolution only happens in a population rather than at an individual level.” There are uncanny parallels here to a story about cancer that I wrote last year, in which British scientists showed that a single tumour can contain a world of diversity, with different parts evolving independently of one another. I learned about Olds’ study at the Ecological Society of America Annual Meeting and wrote about it for Nature. Head over there for the details.
More than 16 years of USGS science will come to fruition next week in the Grand Canyon and its surroundings when the U.S. Department of the Interior releases Colorado River water from Lake Powell reservoir under its new science-based protocol for adaptive management of Glen Canyon Dam. The November 19 controlled release, called a high-flow experiment, simulates a natural small flood that might have occurred before the dam was completed in 1963. Scientists have shown that floods redistribute sand and mud, thereby creating sandbars that help maintain and restore camping beaches and create favorable conditions for nursery habitat for native fish, including the endangered humpback chub (Gila cypha) in Glen Canyon National Recreation Area, Grand Canyon National Park, and the Hualapai Indian Reservation. Newly created river deposits are also the substrate on which many components of the native ecosystem depend.

Large floods once passed through the Grand Canyon each spring, fed by the large volume of snowmelt from the distant Rocky Mountains. Small floods occurred in summer and fall, when rainstorms struck the deserts of northeastern Arizona, northwestern New Mexico, central and southern Utah, western Wyoming or western Colorado. Whenever they occurred, large and small floods carried large loads of sand and mud through the Grand Canyon, and a small proportion of this fine sediment created sand bars along the Colorado River. Today, approximately 95 percent of the fine sediment that once was transported through the Grand Canyon is deposited in Lake Powell. Only fine sediment delivered to the Colorado River by tributaries downstream from Glen Canyon Dam supplies sand and mud to the post-dam ecosystem.

The new science-based protocol allows releases of reservoir water to be timed to follow periods when the tributary streams have recently delivered fine sediment into the Grand Canyon. The controlled flood created by the release of reservoir water is intended to redistribute the fine sediment that has accumulated on the bed of the Colorado River in summer and fall 2012, and to move that sediment from the river bed to the channel margins where new flood deposits provide valuable ecological and recreational benefits. The high-flow experiments provide periodic renewal of this sediment movement process. Most of the sand and mud comes from the Paria River, which enters the Colorado River 15 miles downstream from the dam, at Lees Ferry, Ariz.

The USGS Grand Canyon Monitoring and Research Center, along with colleagues in the U.S. Fish and Wildlife Service, Arizona Game and Fish Department, Bureau of Reclamation, National Park Service, academia and other federal and state natural resource agencies, studied physical and ecological processes that occurred during three previous controlled floods in 1996, 2004 and 2008. Results from these studies, as well as other studies of the Colorado River conducted since the late 1980s, created a body of research that underpins the federal Glen Canyon Dam Adaptive Management Program. Next week’s controlled flood is the first under a new management protocol announced this year and effective through 2020.

At noon Monday, November 19, the dam’s river outlet tubes will be opened. Typically, reservoir releases are routed through power-plant turbines and thereby produce hydroelectricity. However, the outlet tubes allow some reservoir water to bypass the power plant, thereby allowing for larger volumes of water to directly enter the river.
Flow through these outlet tubes does not go through the turbines, and these waters do not produce hydroelectricity. The outlet tubes are only used in rare times of high inflow when additional water must be released from the reservoir, or when an environmental objective is served by creating a controlled flood.

In early and mid-November, reservoir releases will fluctuate between 7,000 and 9,000 cubic feet per second. On November 18, flows will be gradually raised to 27,300 cubic feet per second, the present capacity of the power-plant turbines. At noon November 19, the river outlet tubes will begin to be opened, and the total flow of the Colorado River will reach 42,300 cubic feet per second at 9 p.m. November 19. Peak flows will remain at this amount until 9 p.m. November 20, when the release will begin to be decreased. Early in the morning of Friday, November 23, the river outlet tubes will be closed. Power-plant flows will thereafter be slowly decreased, and flows will return to the range of 7,000 to 9,000 cubic feet per second at 10 p.m. November 23.

Prior to completion of Glen Canyon Dam, flows of at least 50,000 cubic feet per second occurred every year, and floods of at least 125,000 cubic feet per second occurred every 8 years on average. The largest flood at Lees Ferry measured by the USGS was 170,000 cubic feet per second in 1921; a flood in 1884 is estimated to have been 210,000 cubic feet per second. The flow regime of the modern Colorado River is very different. In most years, all flows are routed through the power-plant turbines, and the total annual volume of water released downstream fulfills obligations to downstream users in the United States and Mexico. Post-dam floods are about 60 percent less than before dam construction, and low flows are much higher than in the past.

Next week’s controlled flood was designed by the Bureau of Reclamation based on USGS data from gages that measure stream flow and sediment concentration from the Paria River, the Little Colorado River, and several smaller tributaries. At most stations, the USGS measures and records stream flow every 15 minutes. The Paria and Little Colorado stations are networked with satellite telemetry, making it possible to monitor stream flow in real time. These data are augmented by field crews’ periodic measurements of suspended sediment as well as samples collected by automated pump samplers.

“Throughout summer and fall 2012, the USGS research team developed, and continually revised, estimates of the total amount of sand and of mud delivered by the Paria River, as well as estimating the fate of that fine sediment as it was transported further downstream through the Grand Canyon,” said Jack Schmidt, chief of the USGS Grand Canyon Monitoring and Research Center. “These data are the scientific foundation on which the planned high-flow experiment is based. Without the estimates of the amount of sand and mud delivered from tributaries, it would not have been possible to implement the Protocol for these high flow experiments. The entire program of utilizing small controlled floods to rehabilitate the Grand Canyon ecosystem depends on state-of-the-science monitoring efforts by the USGS to measure sediment transport rates in real time and to provide those data to the Bureau of Reclamation and to other agencies.

“The USGS program of measuring and reporting sand and mud transport in real time and in such a challenging environment is unprecedented in the scientific management of rivers,” Schmidt said.
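For a rough sense of the volumes involved (an illustrative back-of-the-envelope calculation based only on the figures above, not a USGS or Bureau of Reclamation number), the portion of the peak release that bypasses the power plant can be estimated as follows:

```python
# Rough, illustrative estimate based only on the figures quoted above
# (not an official USGS or Bureau of Reclamation calculation).

peak_total_cfs = 42_300        # total release at peak, cubic feet per second
turbine_capacity_cfs = 27_300  # present capacity of the power-plant turbines
peak_hours = 24                # peak held from 9 p.m. Nov 19 to 9 p.m. Nov 20

bypass_cfs = peak_total_cfs - turbine_capacity_cfs    # water skipping the turbines
bypass_volume_ft3 = bypass_cfs * peak_hours * 3600    # cubic feet over the peak day
bypass_acre_feet = bypass_volume_ft3 / 43_560         # 1 acre-foot = 43,560 cubic feet

print(f"Bypass flow at peak: {bypass_cfs:,} cubic feet per second")
print(f"Water bypassing the turbines over the 24-hour peak: {bypass_acre_feet:,.0f} acre-feet")
```

That is the water which, as noted above, passes through the outlet tubes and generates no hydroelectricity during the experiment – roughly 30,000 acre-feet over the peak day under these assumptions.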
USGS data show that the Paria River delivered at least 593,000 tons of sand to the Colorado River between late July and the end of October 2012 – enough to fill a building the size of a 100-yard NFL football field about 24 stories high. Long-term measurements show that this amount is about 26 percent less than delivered by the Paria in an average year, but is still sufficient to trigger a small controlled flood intended to rehabilitate the downstream ecosystem.

Public notice of the controlled flood has been made by the Department of the Interior. USGS scientists have used numerical models and previous observations to predict the elevation to which flood waters will rise, in an effort to provide advice to recreational boaters and campers who are regulated by the National Park Service.

When Glen Canyon Dam was planned in the 1950s, little consideration was given to how dam operations might affect downstream resources. The dam was completed before enactment of the National Environmental Policy Act of 1969 and the Endangered Species Act of 1973, which mandated such considerations. During the 1960s and 1970s, recognition of the environmental consequences of the dam and its operation grew. The National Park Service, USGS scientists, and river recreationists noted the physical transformation of the river in the Grand Canyon, including the loss of large beaches used for camping, narrowing of rapids that hampered navigation, and changes in the distribution and composition of riparian vegetation.

It took decades of measurement and analysis, including critical observations made during the past high-flow experiments, for scientists and engineers to understand how the Colorado River transports fine sediment and to understand the consequences to the river landscape of the changes in flow regime and sediment supply. Research by USGS scientists David M. Rubin and David J. Topping identified the characteristics of the Colorado River’s sediment transporting flows and the technologies necessary to measure the inflow and fate of fine sediment entering from tributary floods. Numerical models developed by USGS hydrologist Scott Wright and others allowed prediction of how fine sediment moves through the Grand Canyon. The predictive tools developed by a large team of USGS scientists and academic collaborators now allow real-time computation of the amount and distribution of fine sediment that enters the Grand Canyon each year.

As the Adaptive Management Plan’s science protocol is implemented through 2020, USGS researchers will be working on several related environmental issues. One area of research in connection with the high-flow experiments concerns their effect on native and non-native fish. One goal of managing the river is to have a healthy non-native rainbow trout (Oncorhynchus mykiss) fishery upstream from Lees Ferry, in Glen Canyon National Recreation Area, but to limit interactions between rainbow trout and the endangered humpback chub that primarily lives in an area of the river about 75 miles downstream from the dam. The USGS, in cooperation with the Arizona Game and Fish Department, will determine the effects of autumn high-flow experiments on rainbow trout populations in Glen Canyon. Spring floods are known to have a large impact on first-year rainbow trout survival and movement. However, the effects of fall floods are less well understood – either directly, on the fish themselves, or indirectly, on the aquatic food base.
USGS scientists are marking trout before and after the high-flow experiment to monitor their response, while the food base will be monitored before, during, and after the event.

Much study is still needed to fully understand how controlled floods change sandbar patterns, especially in eddies along the channel’s edge. Other research is under way to improve understanding of the connection between sandbar size and deposition of windblown sand upslope at archaeological sites. Windblown sand helps protect these archaeological sites. The 2004 and 2008 experiments indicated that, in at least some cases, newly deposited sandbars exposed to the wind elevated wind-blown sand transport rates and rejuvenated wind-deposited sand dunes. Some of these newly formed dunes covered archaeological sites or filled small gullies that potentially erode archaeological resources.

Knowledge gained from this and future high-flow experiments will be used to refine the timing, duration, frequency, and conditions of future controlled floods called for under the science protocol through 2020, and will also be available to inform future management decisions for downstream resources.

“The new protocol is built upon the tremendous knowledge we’ve gained from over 16 years of scientific research and experimentation conducted under the Glen Canyon Dam Adaptive Management Program,” said U.S. Interior Secretary Ken Salazar. “We’ve taken that knowledge and translated it into a flexible framework that enables us to determine, based on science, when the conditions are right to conduct these releases to maximize the ecosystem benefits along the Colorado River corridor in Grand Canyon National Park.”
Fusion power refers to power generated by nuclear fusion reactions. In this kind of reaction, two light atomic nuclei fuse together to form a heavier nucleus and, in doing so, release energy. In a more general sense, the term can also refer to the production of net usable power from a fusion source, similar to the usage of the term "steam power". Most design studies for fusion power plants involve using the fusion reactions to create heat, which is then used to operate a steam turbine, similar to most coal-fired power stations as well as fission-driven nuclear power stations.

The largest current experiment is the Joint European Torus (JET). In 1997, JET produced a peak of 16.1 MW of fusion power (65% of input power), with fusion power of over 10 MW sustained for over 0.5 sec. In June 2005, the construction of the experimental reactor ITER, designed to produce several times more fusion power than the power put into the plasma over many minutes, was announced. The production of net electrical power from fusion is planned for DEMO, the next-generation experiment after ITER.

The basic concept behind any fusion reaction is to bring two or more atoms very close together, close enough that the strong nuclear force in their nuclei will pull them together into one larger atom. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses. The difference in mass is released as energy according to Einstein's mass–energy equivalence formula E = mc². If the input atoms are sufficiently massive, the resulting fusion product will be heavier than the reactants, in which case the reaction requires an external source of energy. The dividing line between "light" and "heavy" is iron. Above this atomic mass, energy will generally be released in nuclear fission reactions; below it, in fusion.
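As a concrete illustration of this mass–energy bookkeeping, here is a worked example for the deuterium–tritium (D-T) reaction discussed below; the atomic masses are standard textbook values rather than figures taken from this article:

```latex
% Worked example: energy released by D + T -> He-4 + n,
% using atomic masses in unified atomic mass units (u)
% and the conversion 1 u ~ 931.5 MeV/c^2.
\begin{align*}
\Delta m &= \bigl(m_\mathrm{D} + m_\mathrm{T}\bigr) - \bigl(m_{^{4}\mathrm{He}} + m_\mathrm{n}\bigr) \\
         &= (2.014102 + 3.016049)\,\mathrm{u} - (4.002602 + 1.008665)\,\mathrm{u}
          = 0.018884\,\mathrm{u} \\
E &= \Delta m\, c^{2} \approx 0.018884 \times 931.5\ \mathrm{MeV} \approx 17.6\ \mathrm{MeV}
\end{align*}
```

Roughly 0.4 percent of the fuel's rest mass is converted into energy, which is why, as the safety discussion further down notes, only a few grams of fuel need to be present in the reactor at any time.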
Fusion between the atoms is opposed by their shared electrical charge, specifically the net positive charge of the nuclei. In order to overcome this electrostatic force, or "Coulomb barrier", some external source of energy must be supplied. The easiest way to do this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving them as bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma.

The temperature required to provide the nuclei with enough energy to overcome their repulsion is a function of the total charge, so hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and therefore is energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen ("protium", deuterium, or tritium) to form isotopes of helium (³He or ⁴He). Perhaps the three most widely considered fuel cycles are based on the D-T, D-D, and p-¹¹B reactions. Other fuel cycles (D-³He and ³He-³He) would require a supply of ³He, either from other nuclear reactions or from extraterrestrial sources, such as the surface of the moon or the atmospheres of the gas giant planets.

The easiest (according to the Lawson criterion) and most immediately promising nuclear reaction to be used for fusion power is the fusion of deuterium and tritium:

D + T → ⁴He (3.5 MeV) + n (14.1 MeV)

Deuterium is a naturally occurring isotope of hydrogen and as such is universally available.
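A minimal numerical sketch of the Coulomb barrier mentioned above helps explain why such extreme temperatures are needed, and why reactions between the lowest-charge nuclei are easiest. The 3 fm "contact" distance used here is an assumed round figure for illustration, not a value from this article:

```python
# Rough estimate of the Coulomb barrier between two hydrogen-isotope nuclei.
# Illustrative only: the 3 fm "contact" distance is an assumed round number.

K_E2 = 1.44           # Coulomb constant times e^2, in MeV * fm
Z1, Z2 = 1, 1         # charge numbers of deuterium and tritium nuclei
r_fm = 3.0            # assumed separation at which the nuclear force takes over

barrier_kev = 1000 * K_E2 * Z1 * Z2 / r_fm     # ~480 keV
kelvin_per_kev = 1.0 / 8.617e-8                # Boltzmann constant: 8.617e-8 keV/K

print(f"Coulomb barrier: ~{barrier_kev:.0f} keV (~{barrier_kev * kelvin_per_kev:.1e} K)")
```

In practice, quantum tunnelling and the high-energy tail of the thermal velocity distribution let a D-T plasma burn usefully at temperatures of order 10 keV, far below this nominal barrier height.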
The large mass ratio of the hydrogen isotopes makes the separation rather easy compared to the difficult uranium enrichment process. Tritium is also an isotope of hydrogen, but it occurs naturally in only negligible amounts due to its radioactive half-life of 12.32 years. Consequently, the deuterium–tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:

⁶Li + n → T + ⁴He
⁷Li + n → T + ⁴He + n

The reactant neutron is supplied by the D-T fusion reaction shown above, the one which also produces the useful energy. The reaction with ⁶Li is exothermic, providing a small energy gain for the reactor. The reaction with ⁷Li is endothermic but does not consume the neutron. At least some ⁷Li reactions are required to replace the neutrons lost by reactions with other elements. Most reactor designs use the naturally occurring mix of lithium isotopes. The supply of lithium is more limited than that of deuterium, but still large enough to supply the world's energy demand for thousands of years.

Several drawbacks are commonly attributed to D-T fusion power: the neutron flux expected in a commercial D-T fusion reactor is about 100 times that of current fission power reactors, posing problems for material design. Design of suitable materials is underway but their actual use in a reactor is not proposed until the generation after ITER. After a single series of D-T tests at JET, the largest fusion reactor yet to use this fuel, the vacuum vessel was sufficiently radioactive that remote handling needed to be used for the year following the tests. On the other hand, the volumetric deposition of neutron power can also be seen as an advantage. If all the power of a fusion reactor had to be transported by conduction through the surface enclosing the plasma, it would be very difficult to find materials and a construction that would survive, and it would probably entail a relatively poor efficiency.

Though more difficult to facilitate than the deuterium–tritium reaction, fusion can also be achieved through the reaction of deuterium with itself. This reaction has two branches that occur with nearly equal probability:

D + D → T + p
D + D → ³He + n

The optimum temperature for this reaction is 15 keV, only slightly higher than the optimum for the D-T reaction.
The first branch does not produce neutrons, but it does produce tritium, so that a D-D reactor will not be completely tritium-free, even though it does not require an input of tritium or lithium. Most of the tritium produced will be burned before leaving the reactor, which reduces the tritium handling required, but also means that more neutrons are produced and that some of these are very energetic. The neutron from the second branch has an energy of only 2.45 MeV, whereas the neutron from the D-T reaction has an energy of 14.1 MeV, resulting in a wider range of isotope production and material damage. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons is only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from limitations of lithium resources and a somewhat softer neutron spectrum. The price to pay compared to D-T is that the energy confinement (at a given pressure) must be 30 times better and the power produced (at a given pressure and volume) is 68 times less.

If aneutronic fusion is the goal, then the most promising candidate may be the proton–boron reaction:

p + ¹¹B → 3 ⁴He

Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons. At 123 keV, the optimum temperature for this reaction is nearly ten times higher than that for the pure hydrogen reactions, the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T. Since the confinement properties of conventional approaches to fusion, such as the tokamak and laser pellet fusion, are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts.

The idea of using human-initiated fusion reactions was first made practical for military purposes, in nuclear weapons. In a hydrogen bomb, the energy released by a fission weapon is used to compress and heat fusion fuel, beginning a fusion reaction which can release a very large amount of energy. The first fusion-based weapons released some 500 times more energy than early fission weapons. Civilian applications, in which explosive energy production must be replaced by a controlled production, are still being developed. Although it took less than ten years to go from military applications to civilian fission energy production, it has been very different in the fusion energy field; more than fifty years have already passed without any commercial fusion energy production plant coming into operation.

Registration of the first patent related to a fusion reactor by the United Kingdom Atomic Energy Authority, the inventors being Sir George Paget Thomson and Moses Blackman, dates back to 1946.
Some basic principles used in the ITER experiment are described in this patent: toroidal vacuum chamber, magnetic confinement, and radio frequency plasma heating.

The U.S. fusion program began in 1951 when Lyman Spitzer began work on a stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory, where magnetically confined plasmas are still studied. The stellarator concept fell out of favor for several decades afterwards, plagued by poor confinement issues, but recent advances in computer technology have led to a significant resurgence in interest in these devices. A wide variety of other magnetic geometries were also experimented with, notably the magnetic mirror. These systems also suffered from similar problems when higher performance versions were constructed.

A new approach was outlined in the theoretical works of 1950–1951 by I.E. Tamm and A.D. Sakharov in the Soviet Union, which laid the foundations of the tokamak. Experimental research on these systems started in 1956 at the Kurchatov Institute in Moscow, by a group of Soviet scientists led by Lev Artsimovich. The group constructed the first tokamaks, the most successful of them being T-3 and its larger version T-4. T-4 was tested in 1968 in Novosibirsk, conducting the first quasistationary thermonuclear fusion reaction ever. The tokamak was dramatically more efficient than the other approaches of the same era, and most research after the 1970s concentrated on variations of this theme.
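One number worth keeping in mind for the ITER discussion that follows is the fusion energy gain factor Q, the ratio of fusion power produced to the heating power supplied to the plasma. Using the JET figures quoted earlier (a peak of 16.1 MW of fusion power at 65% of input power), a rough value is:

```latex
% Fusion energy gain factor, with JET's 1997 figures as a worked example.
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}
\qquad\Rightarrow\qquad
Q_{\text{JET},\,1997} \approx \frac{16.1\ \text{MW}}{16.1/0.65\ \text{MW}} = 0.65
```

Breakeven corresponds to Q = 1, and a self-heating "burning plasma" of the kind ITER is designed to demonstrate requires substantially more than that.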
The same is true today, where very large tokamaks like ITER are hoping to demonstrate several milestones on the way to commercial power production, including a burning plasma with long burn times, high power output and online fueling. There are no guarantees that the project will be successful, as previous generations of machines have faced formerly unseen problems on many occasions. But the entire field of high temperature plasmas is much better understood now due to the earlier research, and there is considerable optimism that ITER will meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar to the very earliest power-producing fission reactors built in the era before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.

Even with these goals met, there are a number of major engineering problems remaining, notably finding suitable "low activity" materials for reactor construction, demonstrating secondary systems including practical tritium extraction, and building reactor designs that allow their reactor core to be removed when it becomes embrittled due to the neutron flux. Practical generators based on the tokamak concept remain far in the future. The public at large has been somewhat disappointed, as the initial outlook for practical fusion power plants was much rosier than is now realized; a pamphlet from the 1970s printed by General Atomic stated that "Several commercial fusion reactors are expected to be online by the year 2000."

The Z-pinch phenomenon has been known since the end of the 18th century. Its use in the fusion field comes from research made on toroidal devices, initially at the Los Alamos National Laboratory from 1952 (Perhapsatron), and in the United Kingdom from 1954 (ZETA), but its physical principles remained for a long time poorly understood and controlled. Pinch devices were studied as potential development paths to practical fusion devices through the 1950s, but studies of the data generated by these devices suggested that instabilities in the collapse mechanism would doom any pinch-type device to power levels that were far too low to suggest continuing along these lines would be practical. Most work on pinch-type devices ended by the 1960s.
Recent work on the basic concept started as a result of the appearance of the "wires array" concept in the 1980s, which allowed a more efficient use of this technique. The Sandia National Laboratory runs a continuing wire-array research program with the Z machine. In addition, the University of Washington's ZaP Lab has shown quiescent periods of stability hundreds of times longer than expected for plasma in a Z-pinch configuration, giving promise to the confinement technique.

The technique of implosion of a microcapsule irradiated by laser beams, the basis of laser inertial confinement, was first suggested in 1962 by scientists at Lawrence Livermore National Laboratory, shortly after the invention of the laser itself in 1960. Lasers of the era were very low powered, but low-level research using them nevertheless started as early as 1965. More serious research started in the early 1970s when new types of lasers offered a path to dramatically higher power levels, levels that made inertial-confinement fusion devices appear practical for the first time. By the late 1970s great strides had been made in laser power, but with each increase new problems were found in the implosion technique that suggested even more power would be required. By the 1980s these increases were so large that using the concept for generating net energy seemed remote. Most research in this field turned to weapons research, always a second line of research, as the implosion concept is somewhat similar to hydrogen bomb operation. Work on very large versions continued as a result, with the very large National Ignition Facility in the US and Laser Mégajoule in France supporting these research programs.

More recent work has demonstrated that significant savings in the required laser energy are possible using a technique known as "fast ignition". The savings are so dramatic that the concept appears to be a useful technique for energy production again, so much so that it is a serious contender for pre-commercial development once again. There are proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER.
At the same time, advances in solid state lasers appear to improve the "driver" systems' efficiency by about ten times (to 10–20%), savings that make even the large "traditional" machines almost practical, and might make the fast ignition concept outpace the magnetic approaches in further development. The laser-based concept has other advantages as well. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and should mean that a laser-based device would be much easier to perform maintenance on, such as core replacement. Additionally, the lack of strong magnetic fields allows for a wider variety of low-activation materials, including carbon fiber, which would both reduce the frequency of such swaps and reduce the radioactivity of the discarded core. In other ways the program has many of the same problems as the tokamak; practical methods of energy removal and tritium recycling need to be demonstrated, and in addition there is always the possibility that a new, previously unseen collapse problem will arise.

Throughout the history of fusion power research there have been a number of devices that have produced fusion at a much smaller level, not being suitable for energy production, but nevertheless starting to fill other roles. Inventor of the cathode ray tube television, Philo T. Farnsworth patented his first Fusor design in 1968, a device which uses inertial electrostatic confinement. Towards the end of the 1960s, Robert Hirsch designed a variant of the Farnsworth Fusor known as the Hirsch–Meeks fusor. This variant is a considerable improvement over the Farnsworth design, and is able to generate neutron flux in the order of one billion neutrons per second. Although the efficiency was very low at first, there were hopes the device could be scaled up, but continued development demonstrated that this approach would be impractical for large machines. Nevertheless, fusion could be achieved using a "lab bench top" type set-up for the first time, at minimal cost. This type of fusor found its first application as a portable neutron generator in the late 1990s. An automated sealed reaction chamber version of this device, commercially named Fusionstar, was developed by EADS but abandoned in 2001.
Its successor is the NSD-Fusion neutron generator. Robert W. Bussard's Polywell concept is roughly similar to the fusor design, but replaces the problematic grid with a magnetically contained electron cloud which holds the ions in position and gives an accelerating potential. Bussard claimed that a scaled-up version would be capable of generating net power. In April 2005, a team from UCLA announced it had devised a novel way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. However, the process does not generate net power (see pyroelectric fusion). Such a device would be useful in the same sort of roles as the fusor.

The likelihood of a catastrophic accident in a fusion reactor in which injury or loss of life occurs is much smaller than that of a fission reactor. The primary reason is that the fission products in a fission reactor continue to generate heat through beta decay for several hours or even days after reactor shut-down, meaning that a meltdown is plausible even after the reactor has been stopped. In contrast, fusion requires precisely controlled conditions of temperature, pressure and magnetic field parameters in order to generate net energy. If the reactor were damaged, these parameters would be disrupted and the heat generation in the reactor would rapidly cease. There is also no risk of a runaway reaction in a fusion reactor, since the plasma is normally burnt at optimal conditions, and any significant change will render it unable to produce excess heat. Runaway reactions are also less of a concern in modern fission reactors, which are typically designed to shut down immediately under accident conditions, but in a fusion reactor such behaviour is almost unavoidable, so there is little need for careful design to achieve this extra safety feature. Although the plasma in a fusion power plant will have a volume of 1000 cubic meters or more, the density of the plasma is extremely low, and the total amount of fusion fuel in the vessel is very small, typically a few grams. If the fuel supply is closed, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for one or several years, and no additional fuel is necessary to keep the reaction going.
In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to any other industrial accident, and could be effectively stopped with a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure.

Most reactor designs rely on liquid lithium as both a coolant and a method for converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could burn and escape. In this case the tritium content of the lithium would be released into the atmosphere, posing a radiation risk. However, calculations suggest that the total amount of tritium and other radioactive gases in a typical power plant would be so small, about 1 kg, that they would have diluted to legally acceptable limits by the time they blew as far as the plant's perimeter fence.

The natural product of the fusion reaction is a small amount of helium, which is completely harmless to life and does not contribute to global warming. Of more concern is tritium, which, like other isotopes of hydrogen, is difficult to retain completely. During normal operation, some amount of tritium will be continually released. There would be no acute danger, but the cumulative effect on the world's population from a fusion economy could be a matter of concern. The 12-year half-life of tritium would at least prevent unlimited build-up and long-term contamination even without appropriate containment techniques. Current ITER designs are investigating total containment facilities for any tritium.

The large flux of high-energy neutrons in a reactor will make the structural materials radioactive. The radioactive inventory at shut-down may be comparable to that of a fission reactor, but there are important differences. The half-lives of the radioisotopes produced by fusion tend to be shorter than those from fission, so the inventory decreases more rapidly. Furthermore, there are fewer unique species, and they tend to be non-volatile and biologically less active. Unlike fission reactors, whose waste remains dangerous for thousands of years, most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, and low-level waste for another 100. By 300 years the material would have the same radioactivity as coal ash.
In current designs, however, some materials will yield waste products with long half-lives. Additionally, the materials used in a fusion reactor are more "flexible" than in a fission design, where many materials are required for their specific neutron cross-sections. This allows a fusion reactor to be designed using materials selected specifically to be "low activation", materials that do not easily become radioactive. Vanadium, for example, would become much less radioactive than stainless steel. Carbon fibre materials are also low-activation, as well as being strong and light, and are a promising area of study for laser-inertial reactors, where a magnetic field is not required. In general terms, fusion reactors would create far less radioactive material than a fission reactor, the material they would create is less damaging biologically, and the radioactivity "burns off" within a time period that is well within existing engineering capabilities.

Although fusion power uses nuclear technology, the overlap with nuclear weapons technology is small. Tritium is a component of the trigger of hydrogen bombs, but not a major problem in production. The copious neutrons from a fusion reactor could be used to breed plutonium for an atomic bomb, but not without extensive redesign of the reactor, so clandestine production would be easy to detect. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with the more scientifically developed magnetic confinement fusion.

Large-scale reactors using neutronic fuels (e.g. ITER) and thermal power production (turbine based) are most comparable to fission power from an engineering and economics viewpoint. Both fission and fusion power plants involve a relatively compact heat source powering a conventional steam-turbine-based power plant, while producing enough neutron radiation to make activation of the plant materials problematic. The main distinction is that fusion power produces no high-level radioactive waste (though activated plant materials still need to be disposed of).
There are some power plant ideas which may significantly lower the cost or size of such plants; however, research in these areas is nowhere near as advanced as in tokamaks.

Fusion power commonly proposes the use of deuterium, an isotope of hydrogen, as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the current global output and that this does not increase in the future, the known current lithium reserves would last 3000 years, lithium from sea water would last 60 million years, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.

Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. The first human-made, large-scale production of fusion reactions was the test of the hydrogen bomb, Ivy Mike, in 1952. It was once proposed to use hydrogen bombs as a source of power by detonating them in underground caverns and then generating electricity from the heat produced, but such a power plant is unlikely ever to be constructed, for a variety of reasons (see the PACER project for more details). Controlled thermonuclear fusion (CTF) refers to the alternative of continuous power production, or at least the use of explosions that are so small that they do not destroy a significant portion of the machine that produces them.

To produce self-sustaining fusion, the energy released by the reaction (or at least a fraction of it) must be used to heat new reactant nuclei and keep them hot long enough that they also undergo fusion reactions. Retaining the heat is called energy confinement and may be accomplished in a number of ways. The hydrogen bomb really has no confinement at all. The fuel is simply allowed to fly apart, but it takes a certain length of time to do this, and during this time fusion can occur. This approach is called inertial confinement. If more than milligram quantities of fuel are used (and efficiently fused), the explosion would destroy the machine, so theoretically, controlled thermonuclear fusion using inertial confinement would be done using tiny pellets of fuel which explode several times a second. To induce the explosion, the pellet must be compressed to about 30 times solid density with energetic beams. If the beams are focused directly on the pellet, it is called direct drive, which can in principle be very efficient, but in practice it is difficult to obtain the needed uniformity.
An alternative approach is indirect drive, in which the beams heat a shell, and the shell radiates X-rays, which then implode the pellet. The beams are commonly laser beams, but heavy and light ion beams and electron beams have all been investigated. Inertial confinement produces plasmas with impressively high densities and temperatures, and appears to be best suited to weapons research, X-ray generation, very small reactors, and perhaps in the distant future, spaceflight. These devices rely on fuel pellets with close to a "perfect" shape in order to generate a symmetrical inward shock wave that produces the high-density plasma, and in practice such pellets have proven difficult to produce. A recent development in the field of laser-induced ICF is the use of ultrashort-pulse multi-petawatt lasers to heat the plasma of an imploding pellet at exactly the moment of greatest density, after it is imploded conventionally using terawatt-scale lasers. This research will be carried out on the OMEGA EP petawatt laser (currently being built) and the OMEGA laser at the University of Rochester, and at the GEKKO XII laser at the Institute for Laser Engineering in Osaka, Japan, and, if fruitful, it may greatly reduce the cost of a laser-fusion-based power source.

At the temperatures required for fusion, the fuel is in the form of a plasma with very good electrical conductivity. This opens the possibility of confining the fuel and the energy with magnetic fields, an idea known as magnetic confinement. The Lorentz force works only perpendicular to the magnetic field, so the first problem is how to prevent the plasma from leaking out the ends of the field lines. There are basically two solutions. The first is to use the magnetic mirror effect: if particles following a field line encounter a region of higher field strength, then some of the particles will be stopped and reflected. Advantages of a magnetic mirror power plant would be simplified construction and maintenance due to a linear topology and the potential to apply direct conversion in a natural way, but the confinement achieved in the experiments was so poor that this approach has been essentially abandoned. The second possibility to prevent end losses is to bend the field lines back on themselves, either in circles or, more commonly, in nested toroidal surfaces.
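For reference, the force underlying the magnetic-confinement schemes just described is the standard Lorentz force on a charge q moving with velocity v in electric and magnetic fields E and B (stated here only for context; it is not discussed further in the article):

F = q (E + v × B)

The magnetic part of this force is perpendicular to both v and B, so charged particles stream freely along field lines while being bent into circles across them; that is why losses out of the ends of the field lines are the central problem that both mirrors and toroidal geometries try to solve.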
The most highly developed system of this type is the tokamak, with the stellarator being next most advanced, followed by the reversed-field pinch. Compact toroids, especially the Field-Reversed Configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Compact toroids still have some enthusiastic supporters but are not backed as readily by the majority of the fusion community.

Finally, there are also electrostatic confinement fusion systems, in which ions in the reaction chamber are confined and held at the center of the device by electrostatic forces, as in the Farnsworth-Hirsch Fusor, which is not believed to be capable of being developed into a power plant. The Polywell, an advanced variant of the fusor, has shown a degree of research interest of late; however, the technology is relatively immature, and major scientific and engineering questions remain, which researchers under the auspices of the U.S. Office of Naval Research hope to investigate further.

A more subtle technique is to use more unusual particles to catalyse fusion. The best known of these is muon-catalyzed fusion, which uses muons, particles that behave somewhat like electrons and replace the electrons around the atoms. These muons allow atoms to get much closer and thus reduce the kinetic energy required to initiate fusion. However, muons require more energy to produce than can be recovered from muon-catalysed fusion, making this approach impractical for the generation of power.

Some researchers have reported excess heat, neutrons, tritium, helium and other nuclear effects in so-called cold fusion systems.
In 2004, a peer review panel was commissioned by the US Department of Energy to study these claims: two thirds of its members found the evidence for nuclear reactions unconvincing, five found the evidence "somewhat convincing", and one was entirely convinced. In 2006, Mosier-Boss and Szpak, researchers in the U.S. Navy's Space and Naval Warfare Systems Center San Diego, reported evidence of nuclear reactions, which has been independently replicated. Research into sonoluminescence-induced fusion, sometimes known as "bubble fusion", also continues, although it is met with as much skepticism as cold fusion is by most of the scientific community.

In fusion research, the fusion energy gain factor Q is the ratio of the fusion power produced to the external power required to sustain the plasma; achieving Q = 1 is called breakeven and is considered a significant, although somewhat artificial, milestone. Ignition refers to an infinite Q, that is, a self-sustaining plasma where the losses are made up for by fusion power without any external input. In a practical fusion reactor, some external power will always be required for things like current drive, refueling, profile control, and burn control. A value on the order of Q = 20 will be required if the plant is to deliver much more energy than it uses internally.

There have been many design studies for fusion power plants. Despite many differences, there are several systems that are common to most. To begin with, a fusion power plant, like a fission power plant, is customarily divided into the nuclear island and the balance of plant. The balance of plant is the conventional part that converts high-temperature heat into electricity via steam turbines. It is much the same in a fusion power plant as in a fission or coal power plant. In a fusion power plant, the nuclear island has a plasma chamber with an associated vacuum system, surrounded by plasma-facing components (first wall and divertor) maintaining the vacuum boundary and absorbing the thermal radiation coming from the plasma, surrounded in turn by a blanket where the neutrons are absorbed to breed tritium and heat a working fluid that transfers the power to the balance of plant. If magnetic confinement is used, a magnet system, using primarily cryogenic superconducting magnets, is needed, along with systems for heating and refueling the plasma and for driving current. In inertial confinement, a driver (laser or accelerator) and a focusing system are needed, as well as a means for forming and positioning the pellets.
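As a rough illustration of why a gain on the order of Q = 20 is quoted above, the sketch below runs a simple plant power balance. All of the efficiencies, heating powers and auxiliary loads are illustrative assumptions, not design values for any real plant.

```python
# Illustrative fusion power-plant balance (all numbers are assumptions).
def net_electric_power(q, p_heating_mw=50.0,
                       eta_thermal=0.35,   # assumed heat-to-electricity efficiency
                       eta_heating=0.5,    # assumed wall-plug efficiency of the heating/driver system
                       aux_mw=50.0):       # assumed other in-house loads (cryogenics, pumps, ...)
    p_fusion = q * p_heating_mw                     # definition of the gain factor Q
    p_thermal = p_fusion + p_heating_mw             # the heating power also ends up as heat
    p_gross = eta_thermal * p_thermal               # gross electrical output
    p_recirc = p_heating_mw / eta_heating + aux_mw  # power the plant draws back for itself
    return p_gross - p_recirc

for q in (1, 5, 10, 20, 40):
    print(f"Q = {q:>2}: net electric ~ {net_electric_power(q):6.0f} MW")
```

With these assumed numbers, Q = 1 leaves the plant consuming far more electricity than it produces, while Q around 20 yields a comfortably positive net output, which is the point the text is making.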
Although the standard solution for electricity production in fusion power plant designs is conventional steam turbines using the heat deposited by neutrons, there are also designs for direct conversion of the energy of the charged particles into electricity. These are of little value with a D-T fuel cycle, where 80% of the power is in the neutrons, but are indispensable with aneutronic fusion, where less than 1% is. Direct conversion has been most commonly proposed for open-ended magnetic configurations like magnetic mirrors or Field-Reversed Configurations, where charged particles are lost along the magnetic field lines, which are then expanded to convert a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. Typically the claimed conversion efficiency is in the range of 80%, but the converter may approach the reactor itself in size and expense.

Developing materials for fusion reactors has long been recognized as a problem nearly as difficult and important as that of plasma confinement, but it has received only a fraction of the attention. The neutron flux in a fusion reactor is expected to be about 100 times that in existing pressurized water reactors (PWRs). Each atom in the blanket of a fusion reactor is expected to be hit by a neutron and displaced about a hundred times before the material is replaced. Furthermore, the high-energy neutrons will produce hydrogen and helium in various nuclear reactions; these tend to form bubbles at grain boundaries and result in swelling, blistering or embrittlement. One also wishes to choose materials whose primary components and impurities do not result in long-lived radioactive wastes. Finally, the mechanical forces and temperatures are large, and there may be frequent cycling of both. The problem is exacerbated because realistic material tests must expose samples to neutron fluxes of a similar level for a similar length of time as those expected in a fusion power plant. Such a neutron source is nearly as complicated and expensive as a fusion reactor itself would be. Proper materials testing will not be possible in ITER, and a proposed materials testing facility, IFMIF, was still at the design stage as of 2005.

The material of the plasma-facing components (PFC) is a special problem.
The PFC do not have to withstand large mechanical loads, so neutron damage is much less of an issue. They do have to withstand extremely large thermal loads, up to 10 MW/m², which is a difficult but solvable problem. Regardless of the material chosen, the heat flux can only be accommodated without melting if the distance from the front surface to the coolant is not more than a centimeter or two. The primary issue is the interaction with the plasma. One can choose either a low-Z material, typified by graphite, although for some purposes beryllium might be chosen, or a high-Z material, usually tungsten with molybdenum as a second choice. Use of liquid metals (lithium, gallium, tin) has also been proposed, e.g. by injection of 1-5 mm thick streams flowing at 10 m/s on solid substrates.

If graphite is used, the gross erosion rates due to physical and chemical sputtering would be many meters per year, so one must rely on redeposition of the sputtered material. The location of the redeposition will not exactly coincide with the location of the sputtering, so one is still left with erosion rates that may be prohibitive. An even larger problem is the tritium co-deposited with the redeposited graphite. The tritium inventory in graphite layers and dust in a reactor could quickly build up to many kilograms, representing a waste of resources and a serious radiological hazard in case of an accident. The consensus of the fusion community seems to be that graphite, although a very attractive material for fusion experiments, cannot be the primary PFC material in a commercial reactor.

The sputtering rate of tungsten can be orders of magnitude smaller than that of carbon, and tritium is not so easily incorporated into redeposited tungsten, making this a more attractive choice. On the other hand, tungsten impurities in a plasma are much more damaging than carbon impurities, and self-sputtering of tungsten can be high, so it will be necessary to ensure that the plasma in contact with the tungsten is not too hot (a few tens of eV rather than hundreds of eV). Tungsten also has disadvantages in terms of eddy currents and melting in off-normal events, as well as some radiological issues.
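A back-of-the-envelope check of the "centimeter or two" figure quoted above, treating the armour as a slab in steady one-dimensional conduction. The thermal conductivity and allowable temperature drop below are assumed, illustrative values, not design numbers.

```python
# Rough 1-D conduction estimate: q = k * dT / d  =>  d = k * dT / q
q_flux = 10e6        # heat flux in W/m^2 (the 10 MW/m^2 figure from the text)
k_tungsten = 170.0   # assumed thermal conductivity of tungsten, W/(m*K)
delta_t = 1500.0     # assumed allowable temperature drop from surface to coolant, K

d_max = k_tungsten * delta_t / q_flux
print(f"Maximum armour thickness ~ {d_max * 100:.1f} cm")   # ~2.5 cm, i.e. "a centimeter or two"
```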
It is far from clear whether nuclear fusion will be economically competitive with other forms of power. The many estimates that have been made of the cost of fusion power cover a wide range, and indirect costs of and subsidies for fusion power and its alternatives make any cost comparison difficult. The low estimates for fusion appear to be competitive with, but not drastically lower than, other alternatives. The high estimates are several times higher than alternatives.

While fusion power is still in the early stages of development, vast sums have been and continue to be invested in research. In the EU almost €10 billion was spent on fusion research up to the end of the 1990s, and the new ITER reactor alone is budgeted at €10 billion. It is estimated that up to the point of possible implementation of electricity generation by nuclear fusion, R&D will need further funding totalling around €60-80 billion over a period of 50 years or so (of which €20-30 billion within the EU). Nuclear fusion research receives €750 million (excluding ITER funding), compared with €810 million for all non-nuclear energy research combined, putting research into fusion power well ahead of that of any single rival technology.

Fusion power would provide much more energy for a given weight of fuel than any technology currently in use, and the fuel itself (primarily deuterium) exists abundantly in the Earth's oceans: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this may seem a low proportion (about 0.015%), because nuclear fusion reactions are so much more energetic than chemical combustion, and seawater is easier to access and more plentiful than fossil fuels, some experts estimate that fusion could supply the world's energy needs for centuries.

An important aspect of fusion energy in contrast to many other energy sources is that the cost of production is inelastic. The cost of wind energy, for example, goes up as the optimal locations are developed first, while further generators must be sited in less ideal conditions. With fusion energy, the production cost will not increase much even if large numbers of plants are built. It has been suggested that even 100 times the current energy consumption of the world is possible.

Some problems which are expected to be an issue in the next century, such as fresh water shortages, can actually be regarded merely as problems of energy supply. For example, in desalination plants, seawater can be purified through distillation or reverse osmosis. However, these processes are energy intensive. Even if the first fusion plants are not competitive with alternative sources, fusion could still become competitive if large-scale desalination requires more power than the alternatives are able to provide.
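To make the "1 in 6500" deuterium abundance quoted earlier more concrete, the sketch below estimates how much deuterium a cubic metre of seawater contains. Treating seawater as pure water with a density of 1000 kg/m³ is a simplifying assumption.

```python
# Rough estimate of deuterium content per cubic metre of seawater.
rho_seawater = 1000.0   # kg per m^3 (assumed; seawater is actually slightly denser)
m_h2o = 0.018           # kg per mol of water
m_d = 0.002             # kg per mol of deuterium atoms

mol_water = rho_seawater / m_h2o      # ~55,500 mol of H2O
mol_hydrogen = 2 * mol_water          # two hydrogen atoms per water molecule
mol_deuterium = mol_hydrogen / 6500   # the 1-in-6500 abundance from the text
print(f"Deuterium per m^3 of seawater ~ {mol_deuterium * m_d * 1000:.0f} g")   # ~34 g
```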
Despite being technically non-renewable, fusion power has many of the benefits of long-term renewable energy sources (such as being a sustainable energy supply compared to presently utilized sources and emitting no greenhouse gases), as well as some of the benefits of much more limited energy sources such as hydrocarbons and nuclear fission (without reprocessing). Like these currently dominant energy sources, fusion could provide very high power-generation density and uninterrupted power delivery, because it is not dependent on the weather, unlike wind and solar power.

Despite optimism dating back to the 1950s about the wide-scale harnessing of fusion power, there are still significant barriers standing between current scientific understanding and technological capabilities and the practical realization of fusion as an energy source. Research, while making steady progress, has also continually thrown up new difficulties. Therefore it remains unclear whether an economically viable fusion plant is even possible. An editorial in New Scientist magazine opined that "if commercial fusion is viable, it may well be a century away." Ironically, a pamphlet printed by General Atomics in the 1970s stated that "By the year 2000, several commercial fusion reactors are expected to be on-line."

Several fusion reactors have been built, but as yet none has produced more thermal energy than the electrical energy consumed. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to commercialize fusion power.
http://citizendia.org/Fusion_power
In a polygon, an exterior angle is formed by a side and an extension of an adjacent side. Exterior angles of a polygon have several unique properties. The sum of exterior angles in a polygon is always equal to 360 degrees. Therefore, for all equiangular polygons, the measure of one exterior angle is equal to 360 divided by the number of sides in the polygon.

An exterior angle is formed by a side and an extension of an adjacent side. So right here I've drawn an exterior angle. I could draw in two more by extending that side and forming another exterior angle, and I could extend this side forming a third exterior angle. But is there anything special about the sum of the exterior angles? To find out, let's look at a table. And I have it separated into three parts: the number of sides, the measure of one exterior angle, and the sum of all of the exterior angles.

So we're going to start with regular polygons, which means the sides are the same and the angles are the same. So over here I'm going to draw an equilateral triangle and I'm going to include my exterior angles. So we're going to assume that this is an equilateral triangle. If I look at the number of exterior angles, that's going to be 3. So if we go back here, the number of sides is three. We're going to ask ourselves what's the measure of just one of these. Well, if I look closely, this is a linear pair, so it has to sum to 180 degrees. We know in an equilateral triangle that each angle measures 60 degrees, meaning that each of these exterior angles is 120 degrees. So I'm going to write in the measure of one exterior angle as 120 degrees. So to find the sum, a shortcut for adding is multiplication. I'm going to multiply 3 times 120 and I'm going to get 360 degrees.

So let's see if it's different for a square. So I'm going to draw in a regular quadrilateral, also known as a square. So, again, we're going to assume that we have four congruent angles, four congruent sides. And we know that this has to be 90 degrees, which means its supplement would also be 90 degrees. So every single one of these exterior angles is going to be 90 degrees and we have four of them. So the sum, 4 times 90, is 360.

Looks like we're developing a pattern here. I'm going to guess that for 5 I'm going to multiply by something and I'm going to get 360 degrees. Let's check it out. If I have a pentagon, and I draw in my exterior angles here, again, this is a regular polygon. So all sides are congruent, all angles are congruent. We know that 108 degrees is the measure of one angle in a regular pentagon. So its supplement is 72 degrees. So the measure of one exterior angle is going to be 72 degrees, and sure enough 5 times 72 is 360 degrees.

So if we're going to generalize this for any polygon with N sides, the sum of the exterior angles will always be 360 degrees. Always. And I should include the dot, dot, dot here. If we want to find the measure of just one of these, if it's equiangular, we're going to take the total sum, which is always 360, and divide by the number of sides.

So a couple of key things here. First one: if you want to find the measure of one exterior angle in a regular polygon, it's 360 divided by N. If you want to find the sum of all of the exterior angles, it's 360 degrees no matter how many sides you have.
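A quick numerical check of the pattern worked out above, a minimal sketch covering the three regular polygons discussed:

```python
# Exterior angles of regular polygons: each one is 360/n, and together they always sum to 360.
for name, n in [("triangle", 3), ("square", 4), ("pentagon", 5)]:
    one_exterior = 360 / n
    interior = 180 - one_exterior   # each exterior angle forms a linear pair with an interior angle
    print(f"regular {name}: interior {interior:.0f} deg, one exterior {one_exterior:.0f} deg, "
          f"sum of exteriors {n * one_exterior:.0f} deg")
```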
http://brightstorm.com/math/geometry/polygons/exterior-angles-of-a-polygon/
Introduction to Vector Addition and Scalar Multiplication

In the strictly mathematical definition of a vector, the only operations that vectors are required to possess are those of addition and scalar multiplication. (Compare this with the operations allowed on ordinary real numbers, or scalars, in which we are given addition, subtraction, multiplication, and division). For instance, in a raw vector space there is no obvious way to multiply two vectors together to get a third vector--even though we will define a couple of ways of performing vector multiplication in Vector Multiplication. It makes sense, then, to begin studying vectors with an investigation of the operations of vector addition and scalar multiplication. This section will be entirely devoted to explaining addition and scalar multiplication of two- and three-dimensional vectors. This explanation will involve two different, yet equivalent, methods: the component method and the graphical method.
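As a sketch of the component method mentioned above (the function names and sample vectors are purely illustrative):

```python
# Component-wise vector addition and scalar multiplication for two- or three-dimensional vectors.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    return tuple(c * a for a in u)

u, v = (1.0, 2.0, 3.0), (4.0, -1.0, 0.5)
print(add(u, v))       # (5.0, 1.0, 3.5)
print(scale(2.0, u))   # (2.0, 4.0, 6.0)
```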
http://www.sparknotes.com/physics/vectors/vectoraddition/summary.html
Mercury, in astronomy, nearest planet to the sun, at a mean distance of 36 million mi (58 million km); its period of revolution is 88 days. Mercury passes through phases similar to those of the moon as it completes each revolution about the sun, although the visible disk varies in size with respect to its distance from the earth. Because its greatest elongation is 28°, it is seen only for a short time after sunset or before sunrise. Since observation of Mercury is particularly unfavorable when it is near the horizon, the planet has often been studied in full daylight, with the sun's light blocked off.

Mercury has the most elliptical orbit of the planets in the solar system. Its great eccentricity of orbit and its great orbital speed provided one of the important tests of Einstein's general theory of relativity. Mercury's perihelion (its closest point to the sun) is observed to advance by 43″ each century more than can be explained from planetary perturbations using Newton's theory of gravitation, yet in nearly exact agreement with the prediction of the general theory.

Mercury is the smallest planet in the solar system, having a diameter of 3,031 mi (4,878 km); both Jupiter's moon Ganymede and Saturn's moon Titan are larger. Its mean density is relatively high, a little less than that of the earth; its core is believed to occupy about 85% of its radius and to consist of a probably solid iron inner core, surrounded by a liquid iron layer, which is in turn surrounded by a solid iron-sulfide layer. Mercury's small mass and proximity to the sun prevent it from having an appreciable atmosphere, although a slight amount of carbon dioxide has been detected.

The surface of Mercury is much like that of the moon, as was shown by Mariner 10 in flybys in 1974–75 and by Messenger in flybys in 2008–9 and in orbit beginning in 2011. Most of its craters were formed during a period of heavy bombardment by small asteroids early in the solar system's history. Messenger, which became the first space probe to orbit Mercury, has also found solid evidence of volcanism. Messenger also corroborated that there is ice near the planet's north pole in craters where areas are in permanent shadow; measurements by earth-based radar in the 1990s had suggested that there was ice near the poles.

It was long thought that Mercury's period of rotation on its axis was identical to its period of revolution, so that the same side of the planet always faced the sun. However, radar studies in 1965 showed a period of rotation of 58.6 days. This results in periods of daylight and night of 88 earth days each, with daylight temperatures reaching as high as 800°F (450°C). Night temperatures are believed to drop as low as −300°F (−184°C).
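The 43 seconds of arc per century quoted above can be checked against the standard general-relativistic formula for perihelion advance, Δφ = 6πGM/(c²a(1−e²)) per orbit. The eccentricity and the physical constants below are standard values that do not appear in the article itself, so treat them as assumptions of this sketch:

```python
import math

# Relativistic perihelion advance of Mercury, per orbit and per century.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light, m/s
a = 5.79e10          # semi-major axis, m (the 58 million km mean distance in the text)
e = 0.206            # orbital eccentricity (assumed standard value)
period_days = 88.0   # orbital period from the text

dphi_per_orbit = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec = math.degrees(dphi_per_orbit * orbits_per_century) * 3600
print(f"Predicted perihelion advance ~ {arcsec:.0f} arcseconds per century")   # ~43
```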
http://www.infoplease.com/encyclopedia/science/mercury-astronomy.html
The radian is represented by the symbol "rad" or, more rarely, by the superscript c (for "circular measure"). For example, an angle of 1.2 radians would be written as "1.2 rad" or "1.2ᶜ" (the second symbol, a superscript c, can be mistaken for a degree symbol: "1.2°"). However, the radian is mathematically considered a "pure number" that needs no unit symbol, and in mathematical writing the symbol "rad" is almost always omitted. In the absence of any symbol, radians are assumed, and when degrees are meant the symbol ° is used.

More generally, the magnitude in radians of any angle subtended by two radii is equal to the ratio of the length of the enclosed arc to the radius of the circle; that is, θ = s/r, where θ is the subtended angle in radians, s is arc length, and r is radius. Conversely, the length of the enclosed arc is equal to the radius multiplied by the magnitude of the angle in radians; that is, s = rθ. It follows that the magnitude in radians of one complete revolution (360 degrees) is the length of the entire circumference divided by the radius, or 2πr/r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees.

The concept of radian measure, as opposed to the degree of an angle, should probably be credited to Roger Cotes in 1714. He had the radian in everything but name, and he recognized its naturalness as a unit of angular measure. The term radian first appeared in print on June 5, 1873, in examination questions set by James Thomson (brother of Lord Kelvin) at Queen's College, Belfast. He used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, vacillated between rad, radial and radian. In 1874, Muir adopted radian after a consultation with James Thomson.

Conversely, to convert from degrees to radians, multiply by π/180. For example, 60° = 60 · (π/180) rad = π/3 rad ≈ 1.047 rad. You can also convert radians to revolutions by dividing the number of radians by 2π. 2π radians are equal to one complete revolution, which is 400 grads (400ᵍ). So, to convert from radians to grads multiply by 200/π, and to convert from grads to radians multiply by π/200. For example, a right angle is π/2 rad, 90°, or 100 grads. The table below shows the conversion of some common angles.

Degrees:  0    30    45    60    90    180   270   360
Radians:  0   π/6   π/4   π/3   π/2    π    3π/2   2π
Grads:    0   33⅓   50   66⅔   100   200   300   400

In calculus and most other branches of mathematics beyond practical geometry, angles are universally measured in radians. This is because radians have a mathematical "naturalness" that leads to a more elegant formulation of a number of important results. Most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. For example, the use of radians leads to the simple limit formula

lim (x→0) sin x / x = 1,

which is the basis of many other identities in mathematics, including

d/dx (sin x) = cos x   and   d²/dx² (sin x) = −sin x.

Because of these and other properties, the trigonometric functions appear in solutions to mathematical problems that are not obviously related to the functions' geometrical meanings (for example, the solutions to the differential equation d²y/dx² = −y, the evaluation of the integral ∫ dx/(1 + x²), and so on). In all such cases it is found that the arguments to the functions are most naturally written in the form that corresponds, in geometrical contexts, to the radian measurement of angles.
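A short sketch of the conversions and the arc-length relation s = rθ described above (the sample angle and radius are arbitrary):

```python
import math

deg = 60.0
rad = math.radians(deg)           # same as deg * math.pi / 180
print(rad, math.degrees(rad))     # ~1.0472 (i.e. pi/3), and back to 60.0

# Arc length subtended on a circle of radius r by an angle theta in radians: s = r * theta
r = 2.0
print(r * rad)                    # ~2.094, the enclosed arc length

# Radians to revolutions and to grads
print(rad / (2 * math.pi))        # ~0.1667 of a revolution
print(rad * 200 / math.pi)        # ~66.67 grads
```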
The trigonometric functions also have simple and elegant series expansions when radians are used; for example, the following Taylor series for sin x:

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯

If x were expressed in degrees then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx/180, so

sin(x°) = sin y = πx/180 − (πx/180)³/3! + (πx/180)⁵/5! − ⋯

Mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) are, again, elegant when the functions' arguments are in radians and messy otherwise.

Although the radian is a unit of measure, it is a dimensionless quantity. This can be seen from the definition given earlier: the angle subtended at the centre of a circle, measured in radians, is the ratio of the length of the enclosed arc to the length of the circle's radius. Since the units of measurement cancel, this ratio is dimensionless. Another way to see the dimensionlessness of the radian is in the series representations of the trigonometric functions, such as the Taylor series for sin x mentioned earlier:

sin x = x − x³/3! + x⁵/5! − ⋯

If x had units, then the sum would be meaningless: the linear term x cannot be added to (or have subtracted from it) the cubic term x³/3! or the quintic term x⁵/5!, etc. Therefore, x must be dimensionless.

In physics, angles are likewise usually measured in radians: angular velocity is commonly given in radians per second (rad/s) and, similarly, angular acceleration is often measured in radians per second per second (rad/s²). The reasons are the same as in mathematics.

Metric prefixes have limited use with radians, and none in mathematics. The milliradian (0.001 rad, or 1 mrad) is used in gunnery and targeting, because it corresponds to an error of 1 m at a range of 1000 m (at such small angles, the curvature is negligible). The divergence of laser beams is also usually measured in milliradians. Smaller units like microradians (μrad) and nanoradians (nrad) are used in astronomy, and can also be used to measure the beam quality of lasers with ultra-low divergence. Similarly, the prefixes smaller than milli- are potentially useful in measuring extremely small angles. However, the larger prefixes have no apparent utility, mainly because to exceed 2π radians is to begin the same circle (or revolutionary cycle) again.
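A small numerical check of the points above: the truncated Taylor series approximates sin x well when x is given in radians, and the milliradian rule of thumb follows directly from s = rθ (the sample values are arbitrary):

```python
import math

def sin_series(x, terms=4):
    # x - x^3/3! + x^5/5! - x^7/7!  (x must be in radians for this to approximate sin x)
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

x = math.radians(30)                  # 30 degrees expressed in radians
print(sin_series(x), math.sin(x))     # both ~0.5

# Milliradian rule of thumb: s = r * theta
print(1000 * 0.001)                   # 1 mrad at a range of 1000 m subtends ~1 m
```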
http://www.reference.com/browse/circular-measure
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information from plain text into code or cipher. In non-technical usage, a "cipher" is the same thing as a "code"; however, the concepts are distinct in cryptography. In classical cryptography, ciphers were distinguished from codes. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates".

When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.

Most modern ciphers can be categorized in several ways:
- By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
- By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality.

Etymology of "Cipher"

The word "cipher" in former times meant "zero" and had the same origin: Middle French as cifre and Medieval Latin as cifra, from the Arabic صفر ṣifr = zero (see Zero—Etymology). "Cipher" was later used for any decimal digit, even any number. There are many theories about how the word "cipher" may have come to mean "encoding":
- Encoding often involved numbers.
- The Roman number system was very cumbersome because there was no concept of zero (or empty space). The concept of zero (which was also called "cipher"), which is now common knowledge, was alien to medieval Europe, so confusing and ambiguous to common Europeans that in arguments people would say "talk clearly and not so far fetched as a cipher". Cipher came to mean concealment of clear messages or encryption.
- The French formed the word "chiffre" and adopted the Italian word "zero".
- The English used "zero" for "0", and "cipher" from the word "ciphering" as a means of computing.
- The Germans used the words "Ziffer" (digit) and "Chiffre".
- The Dutch still use the word "cijfer" to refer to a numerical digit.
- The Italians and the Spanish also use the word "cifra" to refer to a number.
- The Serbians use the word "cifra", which refers to a digit, or in some cases, any number. Besides "cifra", they use the word "broj" for a number.
Ibrahim Al-Kadi concluded that the Arabic word sifr, for the digit zero, developed into the European technical term for encryption.

Ciphers versus codes

In non-technical usage, a "(secret) code" typically means a "cipher". Within technical discussions, however, the words "code" and "cipher" refer to two different concepts. Codes work at the level of meaning—that is, words or phrases are converted into something else, and this chunking generally shortens the message. Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are also used synonymously with substitution and transposition. Historically, cryptography was split into a dichotomy of codes and ciphers, and coding had its own terminology, analogous to that for ciphers: "encoding, codetext, decoding" and so on. However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.

Types of cipher

There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.

Historical ciphers

Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers and transposition ciphers. For example, "GOOD DOG" can be encrypted as "PLLX XLP", where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters of "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs. Simple ciphers were replaced by polyalphabetic substitution ciphers, which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF", where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad, but the usual disadvantages of one-time pads apply. During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.
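A minimal sketch of the simple substitution example above ("GOOD DOG" becoming "PLLX XLP"). The three-letter mapping is just the one given in the text; every other character is left unchanged.

```python
# Toy monoalphabetic substitution, reproducing the example mapping from the text.
SUBSTITUTION = {"G": "P", "O": "L", "D": "X"}   # everything else passes through unchanged

def encrypt(plaintext):
    return "".join(SUBSTITUTION.get(ch, ch) for ch in plaintext)

def decrypt(ciphertext):
    reverse = {v: k for k, v in SUBSTITUTION.items()}
    return "".join(reverse.get(ch, ch) for ch in ciphertext)

print(encrypt("GOOD DOG"))   # PLLX XLP
print(decrypt("PLLX XLP"))   # GOOD DOG
```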
Modern ciphers
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.

By type of key used, ciphers are divided into:
- symmetric key algorithms (private-key cryptography), where the same key is used for encryption and decryption, and
- asymmetric key algorithms (public-key cryptography), where two different keys are used for encryption and decryption.

In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The Feistel cipher uses a combination of substitution and transposition techniques; most block cipher algorithms are based on this structure. In an asymmetric key algorithm (e.g., RSA), there are two separate keys: a public key is published and enables any sender to perform encryption, while a private key is kept secret by the receiver and enables only the receiver to perform correct decryption.

Ciphers can be distinguished into two types by the type of input data:
- block ciphers, which encrypt blocks of data of fixed size, and
- stream ciphers, which encrypt continuous streams of data.

Key size and vulnerability
In a pure mathematical attack (i.e., lacking any other information to help break a cipher), three factors above all count:
- Mathematical advances that allow new attacks or weaknesses to be discovered and exploited.
- Computational power available, i.e., the computing power which can be brought to bear on the problem. The average performance and capacity of a single computer is not the only factor to consider; an adversary can use many computers at once, for instance, to increase the speed of an exhaustive search for a key (i.e., a "brute force" attack) substantially.
- Key size, i.e., the size of the key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search, to the point where it becomes impracticable to crack the encryption directly.

Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, then decide the key length accordingly. An example of this process can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic curve cipher with 512-bit keys all have similar difficulty at present.

References
- Ibrahim A. Al-Kadi, "Cryptography and Data Security: Cryptographic Properties of Arabic", Proceedings of the Third Saudi Engineering Conference, Riyadh, Saudi Arabia, Nov 24-27, 1991, Vol 2, pp. 910-921.
- Richard J. Aldrich, GCHQ: The Uncensored Story of Britain's Most Secret Intelligence Agency, HarperCollins, July 2010.
- Helen Fouché Gaines, Cryptanalysis, 1939, Dover. ISBN 0-486-20097-3
- Ibrahim A. Al-Kadi, "The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992), pp. 97–126.
- David Kahn, The Codebreakers: The Story of Secret Writing (ISBN 0-684-83130-9) (1967)
- David A. King, The Ciphers of the Monks: A Forgotten Number Notation of the Middle Ages, Stuttgart: Franz Steiner, 2001 (ISBN 3-515-07640-9)
- Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966. ISBN 0-88385-622-0
- William Stallings, Cryptography and Network Security: Principles and Practices, 4th Edition.

External links
- SecurityDocs resource for encryption whitepapers
- Accumulative archive of various cryptography mailing lists, including the Cryptography list at metzdowd and the SecurityFocus Crypto list.
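As a rough numerical illustration of the "Key size and vulnerability" discussion above, the sketch below compares the size of the search space for an exhaustive key search at a few common symmetric key lengths. The assumed rate of one billion trial decryptions per second is an arbitrary figure chosen for illustration, not a measured benchmark.

```python
# Brute-force search space for symmetric keys of various sizes.
# On average an attacker must try about half of all possible keys.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
trials_per_second = 1e9            # assumed rate, purely illustrative

for bits in (40, 56, 80, 128):
    keyspace = 2 ** bits
    avg_years = (keyspace / 2) / trials_per_second / SECONDS_PER_YEAR
    print(f"{bits:>3}-bit key: {keyspace:.3e} keys, "
          f"~{avg_years:.3e} years on average at 1e9 trials/s")
```

The jump from a 56-bit to a 128-bit key turns a search measured in years into one measured in ages of the universe, which is the sense in which longer keys buy computational difficulty.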
http://en.wikipedia.org/wiki/Ciphers
To understand meteors, one must also understand meteoroids and meteorites. A meteoroid is a particle in the solar system; it may be as small as a grain of sand or as large as a boulder. When a meteoroid enters the Earth's atmosphere and becomes visible as a shooting star, it is called a meteor. If the object makes it to the ground, it is called a meteorite.

Meteors, also called shooting stars, occur in the Earth's mesosphere at an altitude of about 40-60 miles. Millions of meteors enter the Earth's atmosphere every day, though the vast majority are observed at night. Their visibility in the night sky is due to air friction, which causes the meteor to glow and emit a trail of gases and melted particles that lasts for about a second. Meteor showers are relatively common events that occur when the Earth passes through a trail of debris left by a comet.

Sometimes meteoroids make it through the atmosphere and hit the ground, where they are referred to as meteorites. There are over 31,000 documented meteorites that have been found, although only five or six new ones are found every year. The largest meteorite ever found was in the African nation of Namibia; it weighs roughly 60 tons. Scientists believe the massive Barringer Crater in Arizona was formed when a 300,000 ton meteorite crashed to the ground over 49,000 years ago.

On November 30, 1954, the Hodges Meteorite (actually a fragment of a meteorite) crashed through the roof of the residence of Ann Hodges in the town of Sylacauga, Alabama. It bounced off a table before striking her in the leg. Although she was badly bruised, she was not seriously injured. It was the first recorded instance of a meteorite injuring a human. The meteorite itself was donated to the Alabama Museum of Natural History after various legal battles concerning ownership.

Some scientists believe the impact of a large meteorite from an asteroid or comet in Mexico's Yucatan Peninsula was responsible for the extinction of the dinosaurs some 65 million years ago. Such an impact would have had catastrophic global consequences, including immediate climate change, numerous earthquakes, volcanic eruptions, wildfires, and massive supertsunamis, along with the proliferation of massive amounts of dust and debris that would block solar energy and lead to a disruption in photosynthesis.

Most meteorites that reach the Earth are classified as chondrites or achondrites, while a small percentage are iron meteorites and stony-iron meteorites. Most meteorites are chondrites. Chondrites contain silicate materials that were melted in space, amino acids, and presolar grains, particles likely formed in stellar explosions. Diamond and graphite are among the materials found to be present in these grains. Chondrites are thought to be over 4.5 billion years of age and to have originated in the asteroid belt, where they never formed larger bodies. Achondrites are less common; these types of meteorites seem to be similar to igneous rock. Iron meteorites make up less than five percent of meteorite finds; they are thought to come from the cores of asteroids that were once molten. Finally, stony-iron meteorites constitute less than one percent of all meteorite falls. They are made of iron-nickel metal and different silicates.
http://mrnussbaum.com/space/meteors/
Exploring Learning Styles and Instruction

Learning is an interactive process, the product of student and teacher activity within a specific learning environment. These activities, which are the central elements of the learning process, show a wide variation in pattern, style and quality (Keefe, 1987). Learning problems frequently are not related to the difficulty of the subject matter but rather to the type and level of cognitive process required to learn the material (Keefe, 1988). Gregorc and Ward (1977) claim that if educators are to successfully address the needs of the individual they have to understand what "individual" means. They must relate teaching style to learning style. The famous case of Tinker versus Des Moines Community School District (1969), which concerns itself with student rights, will be extended to encompass the right of a student to learn in ways that complement his ability to achieve. Public Law 94-142, which requires the identification of learning style and individualization for all handicapped children, is one step away from mandating individualization for all students (Dunn and Dunn, 1978). Educators must learn to base programs on the differences that exist among students rather than on the assumption that everyone learns the same way (Keefe, 1987).

Learning has taken place when we observe a change of learner behavior resulting from what has been experienced. Similarly, we can recognize the learning style of an individual student only by observing his overt behavior. Learning style is a consistent way of functioning that reflects the underlying causes of learning behavior (Keefe, 1987). Keefe (1991) describes learning style as both a student characteristic and an instructional strategy. As a student characteristic, learning style is an indicator of how a student learns and likes to learn. As an instructional strategy, it informs the cognition, context and content of learning. Each learner has distinct and consistent preferred ways of perception, organization and retention. These learning styles are characteristic cognitive, affective, and physiological behaviors that serve as relatively stable indicators of how learners perceive, interact with and respond to the learning environment.

Students learn differently from each other (Price, 1977). Caplan (1981) has determined that brain structure influences language structure acquisition. It has been shown that different hemispheres of the brain contain different perception avenues (Schwartz, Davidson, & Maer, 1975). Stronck (1980) claims that several types of cells present in some brains are not present in others, and such differences occur throughout the brain's structure. Talmadge and Shearer (1969) have determined that learning styles do exist. Their study shows that the characteristics of the content of a learning experience are a critical factor affecting relationships that exist between learner characteristics and instructional methods. Reiff (1992) claims that styles influence how students learn, how teachers teach, and how they interact. Each person is born with certain preferences toward particular styles, but these preferences are influenced by culture, experience and development. Keefe (1987) asserts that perceptual style is a matter of learner choice, but that preference develops from infancy almost subconsciously. A teacher alert to these preferences can arrange for flexibility in the learning environment.
Learning style is the composite of characteristic cognitive, affective and physiological factors (Keefe, 1991). A useful approach for understanding and describing learning styles is the consideration of these factors.

Cognitive styles are the information processing habits of an individual. These represent a person's typical modes of perceiving, thinking, remembering, and problem solving (Keefe, 1991). External information is received through the network of perceptual modalities. This information is the raw data that the brain processes for learning to occur. If there is a deficit in a perceptual modality, the brain will receive incorrect or incomplete data and limited or inappropriate learning will occur (Keefe, 1988). Learning modalities are the sensory channels or pathways through which individuals give, receive, and store information. Most students learn with all of their modalities but have certain strengths and weaknesses in a specific modality (Reiff, 1992). These avenues of preferred perception include kinesthetic/tactual, auditory and visual (Eiszler, 1983).

Stronck (1980) describes kinesthetic/tactual learners as the ones who try things out, touch, feel, and manipulate. Kinesthetic/tactual learners express their feelings physically. They gesture when speaking, are poor listeners, and they lose interest in long speeches. These students learn best by doing. They need direct involvement in what they are learning. More than thirty percent of our students may have a kinesthetic/tactual preference for learning (Barbe, 1979). Auditory learners talk about what to do when they learn. They enjoy listening, but cannot wait to have a chance to talk themselves. These students respond well to lecture and discussion (Barbe, 1979). Visual learners learn by seeing. They think in pictures and have vivid imaginations. They have greater recall of concepts that are presented visually (Barbe, 1979).

Most of the students not doing well in school are kinesthetic/tactual learners. Instruction geared toward the other modalities can cause these learners to fall behind. As this happens, students begin to lose confidence in themselves and resent school because of repeated failure (Reiff, 1992). An effective means to reach all learners is modality-based instruction, which consists of organizing around the different modalities to accommodate the needs of the learner. Modality-based instruction consists of using a variety of motivating, introductory techniques and then providing alternative strategies when a student fails to grasp the skill or concept. If a learner does not initially understand the lesson, the teacher needs to intervene, personalize instruction and reteach using a different method (Reiff, 1992).

Perceptual modality preferences are not separate units of learning style. Instruments and assessment approaches that lead teachers and researchers to consider modality preferences in general terms may contribute to the misunderstanding of individual differences rather than help develop and use information on individual differences in teaching (Eiszler, 1983). Affective components of learning styles include personality and emotional characteristics related to the areas of persistence, responsibility, motivation and peer interaction (Reiff, 1992). The physiological components of learning styles are biologically based modes of response that are founded on sex-related differences, personal nutrition and health, and reactions to the physical environment (Keefe, 1991).
Student performances in different subject areas are related to how individuals do, in fact, learn. Systematic ways to identify individual preferences for learning, and suggestions for teaching students with varying learning styles, can be based on a diagnosis of the individual's learning style (Price, 1977). Comprehension of individual differences and learning styles can provide teachers with the theory and knowledge upon which to base decisions. Once a teacher has determined why a student responds in a certain way, the teacher can make more intelligent decisions about instruction methods (Reiff, 1992). Several research studies have demonstrated that students can identify their own learning styles; that when exposed to a teaching style that matches their learning style, students score higher on tests than those not taught in their learning style; and that it is advantageous to teach and test students in their preferred modalities (Dunn and Dunn, 1978).

The Learning Style Profile (LSP) provides educators with a well-validated and easy-to-use instrument for diagnosing the characteristics of an individual's learning style. The LSP provides an overview of the tendencies and preferences of the individual learner (Keefe, 1991). All students can benefit from a responsive learning environment and from the enhancement of their learning skills (Keefe, 1991). No educational program can be successful without attention to the personal learning needs of individual students. A single approach to instruction, whether traditional or innovative, simply does not do the job (Keefe, 1987). Using one teaching style or learning style exclusively is not conducive to a successful educational program (Dunn and Dunn, 1978). "Hard to reach and hard to teach" students are more successful when taught with different modality strategies (Reiff, 1992). Students vary widely in their cognitive styles, yet few teachers consider this variable when planning instruction (Fenstermacher, 1983).

If we wish students to have optimum learning in our schools, we must change the way we deliver instruction. If a student continues to fail to respond to changed instruction, then we must retrain his or her cognitive styles to make school success possible (Keefe, 1987). It is nothing less than revolutionary to base instructional planning on an analysis of each student's learning characteristics. To do so moves education away from the traditional assembly-line mass production model to a handcrafted one (Keefe, 1987). Planning appropriate and varied lessons will improve both instruction and classroom management. Realistically, a teacher cannot be expected to have a different lesson for every child in the classroom; however, lessons can reflect an understanding of individual differences by appropriately incorporating strategies for a variety of styles. When individual differences are considered, many researchers claim that students will have higher achievement, a more positive attitude, and an improved self-concept (Reiff, 1992).

Planning learning-style based instruction involves diagnosing individual learning style; profiling group preferences; determining group strengths and weaknesses; examining subject content for areas that may create problems with weak skills; analyzing students' prior achievement scores; remediating weak skills; assessing current instructional methods to determine whether they are adequate or require more flexibility; and modifying the learning environment and developing personalized learning experiences (Keefe, 1991).
A better understanding of learning style can help teachers reduce frustration for themselves and their students (Reiff, 1992). A knowledge of style can also show teachers how some of their own behaviors can hinder student progress. Eiszler (1983) claims that varying teaching strategies to address all channels promotes learning no matter what students' preferences of cognitive styles are. Dunn (1979) showed that slow learners tend to increase their achievement when varied multisensory methods are used as a form of instruction.

However, not everyone agrees with matching learning styles and teaching styles. Rector and Henderson (1970) have determined through their research that the effect of various teaching strategies depends on such factors as the nature of the concept to be taught, the students' characteristics, and the time available. In their study no significant difference was found between different teaching strategies in terms of student achievement.

Today low achievement is blamed directly on schools, their teachers, and the instructional programs or methods being used. Achievement scores reveal only where a child is academically. I.Q. tests suggest a child's potential, not why he or she has not progressed further or more quickly. Personality instruments serve to explain student behavior, but they provide little insight into how to help the student achieve. It is possible, however, to help each child learn more efficiently by diagnosing the individual's learning style (Dunn and Dunn, 1978). Just juggling the requirements of courses without attention to what needs to occur between teachers and students inside the classroom will not automatically produce better prepared students. Students not only need to feel confident that they can learn but also need to possess skills that they can use to facilitate their learning (Kilpatrick, 1985). Students who understand their learning styles and who exercise active control over their cognitive skills do better in school. They are better adjusted, have more positive attitudes toward learning, and achieve at higher levels than their less skillful peers.

As teachers continue to restructure the learning environment so as to accommodate various learning styles, evaluation must occur to determine the effectiveness of the teaching and learning process. Exploring and implementing alternative evaluation methods will provide the teacher with more complete and accurate information about the capabilities of their students. For example, student products, students working in cooperative groups, role-playing or simulated situations, and questions on audiotapes or computers are other avenues through which we can test students rather than the traditional paper-and-pencil method (Reiff, 1992).

If a student does not learn the way we teach him, we must teach him the way he learns (Dunn and Dunn, 1978). As educators we must strive to continue to learn not only from research, but also from our students and each other. This continued education will certainly benefit our students as we try new ideas and new teaching strategies. As we implement new ideas we will address more learning styles and further facilitate the education of our students. We should not seek to have students, who are products of our teaching style, be clones of ourselves, but rather we should strive to teach our students how to build upon their strengths and become better educated individuals.
By addressing students' learning styles and planning instruction accordingly we will meet more individuals' educational needs and will be more successful in our educational goals.

Barbe, W. B., & Swassing, R. H. (1979). Teaching through modality strengths. New York, NY: Zaner-Bloser, Inc.
Caplan, D. (1981). Prospects for neurolinguistic theory. Cognition, 10(1-3), 59-64.
Dunn, R. (1979). Learning: A matter of style. Educational Leadership.
Dunn, R., & Dunn, K. (1978, March). How to create hands-on materials. Instructor, pp. 134-141.
Dunn, R., & Dunn, K. (1978). Teaching students through their individual learning styles. Reston, VA: Reston Publishing Company, Inc.
Eiszler, C. F. (1983). Perceptual preferences as an aspect of adolescent learning styles. Education, 103(3), 231-242.
Fenstermacher, G. D. (1983). Individual differences and the common curriculum. Chicago, IL: National Society for the Study of Education.
Gregorc, A. F., & Ward, H. B. (1977). Implications for learning and teaching: A new definition for individual. NASSP Bulletin, 61, 20-26.
Keefe, J. W. (1991). Learning style: Cognitive and thinking skills. Reston, VA: National Association of Secondary School Principals.
Keefe, J. W. (1988). Profiling and utilizing learning style. Reston, VA: National Association of Secondary School Principals.
Keefe, J. W. (1987). Theory and practice. Reston, VA: National Association of Secondary School Principals.
Price, G., Dunn, R., & Dunn, K. (1977). A summary of research on learning style. New York, NY: American Educational Research Association.
Rector, R. E., & Henderson, K. B. (1970). The relative effectiveness of four strategies for teaching mathematical concepts. Journal for Research in Mathematics Education, 1, 69-75.
Reiff, J. C. (1992). Learning styles. Washington, DC: National Education Association.
Schwartz, G. E., Davidson, R. J., & Maer, F. (1975). Right hemisphere lateralization for emotion in the human brain: Interactions with cognition. Science, 190(4211), 286-288.
Stronck, D. R. (1980). The educational implications of human individuality. American Biology Teacher, 42, 146-151.
Talmadge, G. K., & Shearer, J. W. (1969). Relationship among learning styles, instructional methods and the nature of learning experiences. Journal of Educational Psychology, 57, 222-230.

One inch square Wheat Thins?
Materials: Box of Wheat Thins
Wheat Thins are advertised as being one inch square. However, the average Wheat Thin is not exactly one inch square. Divide the class into groups and have your students use a ruler to determine the size of one Wheat Thin. Record each group's measures on the board. Have students average the measures and determine how close the average is to 1 square inch. Be sure to instruct students on the reasons why the measure is a square measure. You may want to have a class discussion on truth in advertising and how it relates to this activity.

What is one square foot?
Materials: One box of Wheat Thins for each group
The floor tiles in most classrooms are one square foot. Divide students into groups. Have your students assume that each cracker is one square inch and determine the area of the floor tile by covering the tile with crackers and counting the number of crackers on each tile. Ask if there is a quicker way to determine this area. Students should be able to determine the area by multiplying the lengths of the sides of the tile. Extension: Estimate the perimeter of the football field in terms of Wheat Thins.

What does the biggest area mean?
Materials: 12 Wheat Thins for each student
Have students build a geometric figure that will encompass the most area using the crackers for the perimeter. Discuss their findings. This activity may also be done on a geoboard with a string 12 inches long. Students should find that the largest area occurs when they construct a square.

One perimeter, how many different areas?
Have students use a 12 inch piece of string to construct the following shapes and have them find the area of each shape: square, rectangle with one side 2 units long, equilateral triangle, right isosceles triangle, circle. Which shape has the greatest area? What makes area and perimeter different?

How do the area of a circle and the area of a square compare?
Materials: Grid paper, scissors, paper and pencil
Use grid paper to make a circle, a square and a rectangle of the same area. What is the smallest area possible? How do the areas of each compare?

Using average pace length to determine area
Materials: Yardstick, paper and pencil
Have students determine their average pace length by walking 100 feet 10 times and averaging the number of paces it took them each time they walked. Use this average to calculate the area of a portion of land by "stepping off" the perimeter of the land and recording the lengths of each side. One suggestion is to determine if the band practice field will "fit" on the student parking lot.

Animals and their Home Ranges
Have students research different animals and record the size of their home ranges. An animal's home range is the amount of space the animal needs to fulfill its requirements for food, breeding, and so forth. Have students make graphs comparing the size of the animal to the area of its home range. Students should then discuss what might happen to an animal if the size of its habitat is altered through a natural disaster such as fire or man's development of the land. Students should realize that the larger the animal, the larger the home range of the animal.

Presenting Surface Area
Materials: One inch grid paper, assorted rectangular boxes, tape, paper, pencil
Divide your class into groups and give each group several sheets of grid paper and a set of the other materials. Use the one inch square grid paper to cover the boxes. Tell your students that they are not to let the squares overlap and that they need to be certain to cover all exposed surfaces (like gift wrapping the box). Have students find the area of each side by counting the number of squares on each side. After recording this information, have them find the total surface area of each box by adding the areas of the sides together. You may wish to ask students about finding a shortcut for doing this, and they may derive the formula for the surface area of a box for you. Be sure that you are including an imaginary or real lid on your box.

Modeling the Room
Have student groups measure all of the objects in the room to determine their dimensions. They are to build a scale model of their object using the grid paper and a scale of the class's choosing. They will need to label their object so that others can identify it. Use each group's object and put them together to form a scale model of the room. This activity should reinforce the need for accurate measures and what a scale model represents. Discuss the relative size of the objects. Invariably, someone will have represented one of the objects incorrectly.

Materials: Grid paper, patterns for solids
Have student groups build rectangular solids from grid paper.
Let them choose which solid to model. Once they have built the models, they need to find the surface area of the model.

Formulating the area of a triangle
Use grid paper and pencil to draw a parallelogram. Have students cut the parallelogram so that they have 2 equal triangles. Find the area of the original parallelogram and the two triangles. Discuss the relationships between the areas. Students should derive that the area of each triangle is half that of the parallelogram. Extension or beforehand: Have students draw a parallelogram on a sheet of grid paper and cut it out. Students should cut the parallelogram so that the pieces form a rectangle of the same area.

Geometric Lake Day
Provide students with a pool, swim rings, measuring devices, beach towels, balls, umbrellas, etc. They can provide edible solids of their choice, such as brownies and sodas. Students should complete the activity sheet.

Provide students with circular objects and a measuring device. Have students complete the chart provided on exploring circumference and diameter. Give students different quadrilaterals inscribed in circles and have them complete the Discovering Ptolemy activity sheet. It is more interesting to the students to draw their own quadrilaterals and to post the measures on a chart. Students usually need to work together to formally state the theorem.
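The 12-unit perimeter activities above have a simple numerical check. The sketch below computes the area enclosed by a fixed perimeter of 12 units for the shapes named in the string activity; the side lengths are derived from the perimeter, and the output confirms that the square beats the rectangle and triangles while the circle encloses the most area of all.

```python
import math

# Area enclosed by a fixed perimeter of 12 units for the shapes
# used in the string activity.
P = 12.0

areas = {
    "rectangle (2 x 4)": 2 * 4,                                   # sides 2 and 4
    "right isosceles triangle": None,                             # filled in below
    "equilateral triangle (side 4)": (math.sqrt(3) / 4) * 4 ** 2,
    "square (side 3)": 3 ** 2,
    "circle (r = P/(2*pi))": math.pi * (P / (2 * math.pi)) ** 2,
}

# Right isosceles triangle with perimeter 12: legs a, a and hypotenuse a*sqrt(2),
# so a + a + a*sqrt(2) = 12.
a = P / (2 + math.sqrt(2))
areas["right isosceles triangle"] = 0.5 * a * a

for name, area in sorted(areas.items(), key=lambda kv: kv[1]):
    print(f"{name:32s} area = {area:5.2f} square units")
```

Running it lists the shapes from smallest to largest enclosed area, which matches what students should discover with the string.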
http://jwilson.coe.uga.edu/EMT705/EMT705.Hood.html
Strategies for Solving Systems of Equations on the ACT

A system of equations is a set of two or more equations that include two or more variables. To solve a system of equations on the ACT Math test, you need one equation for every variable in the system. This usually means two equations and two variables.

You can solve a system of linear equations in two ways:
- With substitution. With this technique, you solve one equation for a variable in terms of the other(s), and then you substitute this value into the second equation.
- By combining equations (elimination). To use this method, you add or subtract the two equations in such a way that one variable drops out of the resulting equation.

Both of these methods are similar in that they allow you to write a single equation in one variable, which you can then solve using your usual bag of algebra tricks. After you know the value of one variable, you can substitute this value back into one of the original two equations (usually the easier one) to get the value of the remaining variable.

Substitution is easier to use when a variable in one equation is already isolated or when it can be isolated easily.

If x + 9 = y and 7x – 2 = 2y, what is the value of xy?

This question gives you two equations in two variables. In the first equation, y is already isolated on one side of the equation, so substitution should work well. Substitute x + 9 for y in the second equation:

7x – 2 = 2(x + 9)

Simplify and solve:

7x – 2 = 2x + 18
5x = 20
x = 4

Now that you know the value of x, substitute this value back into the equation that looks easiest to work with (in this case, the first equation) and solve for y:

y = x + 9 = 4 + 9 = 13

Thus, x = 4 and y = 13, so xy = 52. The correct answer is Choice (E).

The technique of combining equations is easier to use when both equations contain essentially the same term. Check out the following example.

If 4s + 5t = 9 and 9s + 5t = –11, what is the value of s + t?

Answering this question using substitution would be difficult because neither variable is very easy to isolate on one side of the equations. However, both equations include the term 5t, so you can combine the two equations using subtraction. When you subtract one equation from the other, the t term drops out. The resulting equation is easy to solve:

(9s + 5t) – (4s + 5t) = –11 – 9
5s = –20
s = –4

As always, when you know the value of one variable, you can substitute this value back into either equation (whichever looks easiest) and solve for the other variable, like this:

4(–4) + 5t = 9
5t = 25
t = 5

So s = –4 and t = 5, meaning s + t = 1. As a result, the correct answer is Choice (B).
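A quick way to verify both worked examples is to carry out the same algebra in a few lines of code. This is only a numerical check of the answers above, mirroring substitution for the first system and elimination for the second.

```python
# Check the two ACT examples by solving the systems directly.

# Example 1: y = x + 9 and 7x - 2 = 2y  (substitution)
# 7x - 2 = 2(x + 9)  ->  5x = 20
x = 20 / 5
y = x + 9
print("Example 1: x =", x, " y =", y, " xy =", x * y)       # xy = 52

# Example 2: 4s + 5t = 9 and 9s + 5t = -11  (elimination)
# Subtracting the first equation from the second removes t: 5s = -20
s = -20 / 5
t = (9 - 4 * s) / 5
print("Example 2: s =", s, " t =", t, " s + t =", s + t)    # s + t = 1
```

Both printed results agree with Choices (E) and (B) respectively, which is a useful sanity check when practicing these strategies.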
http://www.dummies.com/how-to/content/strategies-for-solving-systems-of-equations-on-the.html
Electricity: Static Electricity
Static Electricity: Problem Set Overview

This set of 33 problems targets your ability to determine quantities such as the quantity of charge, the separation distance between charges, the electric force, the electric field strength, and resultant forces and field strengths from verbal descriptions and diagrams of physical situations involving static electricity. Problems range in difficulty from the very easy and straightforward to the very difficult and complex. The more difficult problems are color-coded as blue problems.

Relating the Quantity of Charge to Numbers of Protons and Electrons:
Atoms are the building blocks of all objects. These atoms possess protons, neutrons and electrons. While neutrons are electrically neutral, the protons and electrons possess electrical charge. The proton and electron have a predictable amount of charge, with the proton being assigned a positive type of charge and the electron a negative type. The charge on an electron has a well-accepted, experimentally determined value of -1.6 x 10^-19 C (where the negative simply indicates the type of charge). Protons have an equal amount of charge of the opposite type; thus, the charge of a proton is +1.6 x 10^-19 C. Objects consisting of atoms containing protons and electrons can have an overall charge on them if there is an imbalance of protons and electrons. An object with more protons than electrons will be charged positively, and an object with more electrons than protons will be charged negatively. The magnitude of the quantity of charge on an object is simply the difference between the number of protons and electrons multiplied by 1.6 x 10^-19 C.

Coulomb's Law of Electric Force:
A charged object can exert an attractive or repulsive force on other charged objects in its vicinity. The amount of force follows a rather predictable pattern which is dependent upon the amount of charge present on the two objects and the distance of separation. Coulomb's law of electric force expresses the relationship in the form of the following equation:

Felect = k • Q1 • Q2 / d^2

where Felect represents the magnitude of the electric force (in Newtons), Q1 and Q2 represent the quantity of charge (in Coulombs) on objects 1 and 2, and d represents the separation distance between the objects' centers (in meters). The symbol k represents a constant of proportionality known as Coulomb's constant and has the value of 9.0 x 10^9 N•m^2/C^2.

A charged object can exert an electric influence upon objects from which it is spatially separated. This action-at-a-distance phenomenon is sometimes explained by saying the charged object establishes an electric field in the space surrounding it. Other objects which enter the field interact with the field and experience its influence. The strength of the electric field can be tested by measuring the force exerted on a test charge. Of course, the more charge on the test charge, the more force it would experience. While the force experienced by the test charge is proportional to the amount of charge on the test charge, the ratio of force to charge is the same regardless of the amount of charge on the test charge. By definition, the electric field strength (E) at a given location about a source charge is simply the ratio of the force experienced (F) by a test charge to the quantity of charge on the test charge (qtest).

E = F / qtest

The electric field strength as created by a source charge (Q) varies with location.
In accord with Coulomb's law, the force on a test charge is greatest when closest to the source charge and less when further away. Substituting the expression for force into the above equation and simplifying algebraically yields a second equation for the electric field (E) which expresses its strength in terms of the variables which affect it. The equation is

E = k • Q / d^2

where k is Coulomb's constant of 9.0 x 10^9 N•m^2/C^2, Q is the quantity of charge on the source creating the field, and d is the distance from the center of the source.

Direction of Force and Field Vectors:
Many problems in this problem set will demand that you understand the directional nature of electric force and electric field. Electric forces between objects can be attractive or repulsive. Objects charged with an opposite type of charge will be attracted to each other, and objects charged with the same type of charge will be repelled by each other. These attractive and repulsive interactions describe the direction of the forces exerted upon any object. In some instances involving configurations of three or more charges, an object will experience two or more forces in the same or different directions. In such instances, the interest is usually in knowing what the net electric force is. Finding the net electric force involves determining the magnitude and direction of the individual forces and then adding them up to determine the net force. When adding electric forces, the direction must be considered. A 10 unit force to the left and a 25 unit force to the right add up to a 15 unit force to the right. Such reasoning about direction will be critical to analyzing situations where two or more forces are present.

Electric field is also a vector quantity that has a directional nature associated with it. By convention, the direction of the electric field vector at any location surrounding a source charge is the direction that a positive test charge would be pushed or pulled if placed at that location. Even if a negative charge is used to measure the strength of a source charge's field, the convention for direction is based upon the direction of force on a positive test charge.

Adding Vectors - SOH CAH TOA and Pythagorean Theorem:
Electric field and electric force are vector quantities which have a direction. In situations in which there are two or more force or field vectors present, it is often desired to know what the net electric force or field is. Finding the net value from knowledge of the individual values requires that vectors be added together in head-to-tail fashion. If the vectors being added are at right angles to each other, then the Pythagorean theorem can be used to determine the resultant or net value; a trigonometric function can be used to determine an angle and subsequently a direction. If the vectors being added are not at right angles to each other, then the usual procedure of adding them involves using a trigonometric function to resolve each vector into x- and y-components. The components are then added together to determine the sum of all x- and y-components. These sum values can then be added together in a right triangle to determine the net or resultant vector. And as usual, a trigonometric function can be used to determine an angle and subsequently a direction of the net or resultant vector. A diagram of this process would show how the components of vectors A and B are added together to determine their resultant.
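A short numerical sketch can tie Coulomb's law and the component method together. The charge values and positions below are hypothetical numbers invented for the example; only the equations and the head-to-tail reasoning come from the discussion above.

```python
import math

K = 9.0e9  # Coulomb's constant, N*m^2/C^2

def coulomb_force(q1, q2, d):
    """Magnitude of the electric force between two point charges (N)."""
    return K * abs(q1) * abs(q2) / d ** 2

# Hypothetical setup: a +2.0 uC test charge at the origin is repelled by a
# +3.0 uC charge 0.50 m away along +x and by a +4.0 uC charge 0.40 m away
# along +y.  Both forces on the test charge therefore point along -x and -y.
Fx = -coulomb_force(2.0e-6, 3.0e-6, 0.50)
Fy = -coulomb_force(2.0e-6, 4.0e-6, 0.40)

# Add the perpendicular components with the Pythagorean theorem,
# and find the direction with an inverse tangent (SOH CAH TOA).
F_net = math.hypot(Fx, Fy)
angle = math.degrees(math.atan2(Fy, Fx))

print(f"Fx = {Fx:.3f} N, Fy = {Fy:.3f} N")
print(f"Net force = {F_net:.3f} N at {angle:.1f} degrees from the +x axis")
```

The same pattern (compute each magnitude, assign directions, add components, then apply the Pythagorean theorem and an inverse tangent) carries over to the multi-charge problems in the set.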
Comparing Gravitational and Electrical Forces:
Gravitational forces and electrical forces are often compared to each other. Both force types are fundamental forces which act over a distance of separation. Gravitational forces are based on masses attracting and follow the law of universal gravitation equation:

Fgrav = G • m1 • m2 / d^2

where m1 and m2 are the masses of the attracting objects (in kg), d is the separation distance as measured from object center to object center (in meters), and G is a proportionality constant with a value of 6.67 x 10^-11 N•m^2/kg^2. Electrical forces are based on charged objects attracting or repelling and follow the Coulomb's law equation (as stated above). Some of the problems in this set will involve comparisons of the magnitude of the electric force to the magnitude of the gravitational force. The simultaneous use of both equations will be necessary in the solution of such problems.

Habits of an Effective Problem-Solver
An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver...
- ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it.
- ...identifies the known and unknown quantities and records them in an organized manner, oftentimes recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., Q1 = 2.4 μC; Q2 = 3.8 μC; d = 1.8 m; Felect = ???).
- ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles.
- ...identifies the appropriate formula(s) to use, oftentimes writing them down. Where needed, they perform the needed conversions of quantities into the proper units.
- ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity.

Additional Readings/Study Aids:
The following pages from The Physics Classroom Tutorial may serve to be useful in assisting you in the understanding of the concepts and mathematics associated with these problems.
- Charge and Charge Interactions
- Coulomb's Law
- Inverse Square Law
- Newton's Laws and the Electrical Force
- Electric Field Intensity
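Returning to the comparison of gravitational and electrical forces described above: a minimal sketch, assuming two protons as the interacting objects (a standard textbook choice, not one specified by this problem set), shows how the two force equations are used side by side.

```python
# Compare the electric and gravitational forces between two protons.
# The separation distance cancels out of the ratio, so any value works.
K = 9.0e9        # Coulomb's constant, N*m^2/C^2
G = 6.67e-11     # gravitational constant, N*m^2/kg^2
q = 1.6e-19      # proton charge, C
m = 1.67e-27     # proton mass, kg
d = 1.0e-10      # arbitrary separation, m

F_elect = K * q * q / d ** 2
F_grav = G * m * m / d ** 2

print(f"Electric force:      {F_elect:.3e} N")
print(f"Gravitational force: {F_grav:.3e} N")
print(f"Ratio F_elect / F_grav = {F_elect / F_grav:.3e}")   # roughly 1e36
```

The enormous ratio explains why gravity can usually be ignored in electrostatics problems involving elementary particles.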
http://www.physicsclassroom.com/calcpad/estatics/index.cfm
Assembly language, or simply assembly, is a human-readable notation for the machine language that a specific computer architecture uses. Machine language, a pattern of bits encoding machine operations, is made readable by replacing the raw values with symbols called mnemonics.

For example, a computer with the appropriate processor will understand this x86/IA-32 machine instruction:

10110000 01100001

For programmers, however, it is easier to remember the equivalent assembly language representation:

mov al, 61h

which means to move the hexadecimal value 61 (97 decimal) into the processor register with the name 'al'. The mnemonic "mov" is short for "move," and a comma-separated list of arguments or parameters follows it; this is a typical assembly language statement. Unlike in high-level languages, there is usually a 1-to-1 correspondence between simple assembly statements and machine language instructions. Transforming assembly into machine language is accomplished by an assembler, and the reverse by a disassembler.

Every computer architecture has its own machine language, and therefore its own assembly language. Computers differ by the number and type of operations that they support. They may also have different sizes and numbers of registers, and different representations of data types in storage. While all general-purpose computers are able to carry out essentially the same functionality, the way they do it differs, and the corresponding assembly language must reflect these differences. In addition, multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set. In these cases, the most popular one is usually that used by the manufacturer in their documentation.

Instructions in assembly language are generally very simple, unlike in a high-level language. Any instruction that references memory (for data or as a jump target) will also have an addressing mode to determine how to calculate the required memory address. More complex operations must be built up out of these simple operations. Operations available in most instruction sets include moving data between registers and memory, performing arithmetic and logic operations on values, and comparing values and branching to another point in the program based on the result. Specific instruction sets will often have single instructions, or a few instructions, for common operations which would otherwise take many instructions, for example copying a block of memory or transferring several registers to or from memory at once (as the 68000's MOVEM instruction does).

Assembly language directives
In addition to codes for machine instructions, assembly languages have extra directives for assembling blocks of data and assigning address locations for instructions or code. They usually have a simple symbolic capability for defining values as symbolic expressions which are evaluated at assembly time, making it possible to write code that is easier to read and understand. Like most computer languages, comments can be added to the source code, which are ignored by the assembler. They also usually have an embedded macro language to make it easier to generate complex pieces of code or data. In practice, the absence of comments and the replacement of symbols with actual numbers makes the human interpretation of disassembled code considerably more difficult than the original source would be.

Usage of assembly language
There is some debate over the usefulness of assembly language. It is often said that modern compilers can render higher-level languages into code that runs as fast as hand-written assembly, but counter-examples can be made, and there is no clear consensus on this topic. It is reasonably certain that, given the increase in complexity of modern processors, effective hand-optimization is increasingly difficult and requires a great deal of knowledge.
However, some discrete calculations can still be rendered into faster-running code with assembly, and some low-level programming is simply easier to do with assembly. Some system-dependent tasks performed by operating systems simply cannot be expressed in high-level languages. In particular, assembly is often used in writing the low-level interaction between the operating system and the hardware, for instance in device drivers. Many compilers also render high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes. It is also common, especially in relatively low-level languages such as C, to be able to embed assembly language in the source code with special syntax. Programs using such facilities, such as the Linux kernel, often construct abstractions where different assembly is used on each platform the program supports, but it is called by portable code through a uniform interface.

Many embedded systems are also programmed in assembly to obtain the absolute maximum functionality out of what are often very limited computational resources, though this is gradually changing in some areas as more powerful chips become available for the same minimal cost.

Another common area of assembly language use is in the system BIOS of a computer. This low-level code is used to initialize and test the system hardware prior to booting the OS and is stored in ROM. Once a certain level of hardware initialization has taken place, code written in higher-level languages can be used, but almost always the code running immediately after power is applied is written in assembly language. This is usually because system RAM may not yet be initialized at power-up, and assembly language can execute without explicit use of memory, especially in the form of a stack.

Assembly language is also valuable in reverse engineering, since many programs are distributed only in machine code form, and machine code is usually easy to translate into assembly language and carefully examine in this form, but very difficult to translate into a higher-level language. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose.
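To make the 1-to-1 correspondence between assembly statements and machine code concrete, here is a toy, table-driven "assembler" and "disassembler" in Python for the single instruction form used in the example above (opcode B0 followed by an immediate byte encodes MOV AL, imm8 on x86). It is a sketch for illustration only, not a real assembler.

```python
# Toy assembler/disassembler for one x86 instruction form: mov al, imm8.
# Opcode 0xB0 is "MOV AL, imm8"; the immediate byte follows the opcode.

def assemble(mnemonic: str) -> bytes:
    """Translate e.g. 'mov al, 61h' into its machine-code bytes."""
    op, operands = mnemonic.split(maxsplit=1)
    reg, imm = [s.strip() for s in operands.split(",")]
    if op.lower() == "mov" and reg.lower() == "al":
        value = int(imm.rstrip("hH"), 16) if imm.lower().endswith("h") else int(imm, 0)
        return bytes([0xB0, value])
    raise ValueError("unsupported instruction in this toy example")

def disassemble(code: bytes) -> str:
    """Translate the machine-code bytes back into the mnemonic form."""
    if len(code) == 2 and code[0] == 0xB0:
        return f"mov al, {code[1]:02X}h"
    raise ValueError("unknown encoding in this toy example")

machine_code = assemble("mov al, 61h")
print(machine_code.hex())            # -> 'b061'
print(disassemble(machine_code))     # -> 'mov al, 61h'
```

A real assembler does the same kind of table lookup for every instruction form of the architecture, plus symbol resolution, directives, and macro expansion.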
http://segaretro.org/Assembly_language?rdfrom=http%3A%2F%2Finfo.sonicretro.org%2Findex.php%3Ftitle%3DASM%26redirect%3Dno
Pythagorean tuning (Greek: Πυθαγόρεια κλίμακα) is a system of musical tuning in which the frequency ratios of all intervals are based on the ratio 3:2, "found in the harmonic series." This ratio, also known as the "pure" perfect fifth, is chosen because it is one of the most consonant intervals and easy to tune by ear. Attributed to Pythagoras (sixth century BC), and widely used up to the beginning of the 16th century, "the Pythagorean system would appear to be ideal because of the purity of the fifths, but other intervals, particularly the major third, are so badly out of tune that major chords [may be considered] a dissonance."

The Pythagorean scale is any scale which may be constructed from only pure perfect fifths (3:2) and octaves (2:1), or the gamut of twelve pitches constructed from only pure perfect fifths and octaves, from which specific scales may be drawn (see Generated collection). For example, a series of fifths gives the seven notes of a diatonic major scale on C in Pythagorean tuning. In Greek music it was used to tune tetrachords, and the twelve-tone Pythagorean system was developed by medieval music theorists using the same method of tuning in perfect fifths; however, there is no evidence that Pythagoras himself went beyond the tetrachord.

Pythagorean tuning is based on a stack of intervals called perfect fifths, each tuned in the ratio 3:2, the next simplest ratio after 2:1. Starting from D for example (D-based tuning), six other notes are produced by moving six times a ratio 3:2 up, and the remaining ones by moving the same ratio down, giving the chain of fifths E♭, B♭, F, C, G, D, A, E, B, F♯, C♯, G♯.

This succession of eleven 3:2 intervals spans a wide range of frequency (on a piano keyboard, it encompasses 77 keys). Since notes differing in frequency by a factor of 2 are given the same name, it is customary to divide or multiply the frequencies of some of these notes by 2 or by a power of 2. The purpose of this adjustment is to move the 12 notes within a smaller range of frequency, namely within the interval between the base note D and the D above it (a note with twice its frequency). This interval is typically called the basic octave (on a piano keyboard, an octave encompasses only 13 keys).

For instance, the A is tuned such that its frequency equals 3:2 times the frequency of D: if D is tuned to a frequency of 288 Hz, then A is tuned to 432 Hz. Similarly, the E above A is tuned such that its frequency equals 3:2 times the frequency of A, or 9:4 times the frequency of D; with A at 432 Hz, this puts E at 648 Hz. Since this E is outside the above-mentioned basic octave (i.e. its frequency is more than twice the frequency of the base note D), it is usual to halve its frequency to move it within the basic octave. Therefore, E is tuned to 324 Hz, a 9:8 (= one epogdoon) above D. The B at 3:2 above that E is tuned to the ratio 27:16, and so on. Starting from the same point working the other way, G is tuned as 3:2 below D, which means that it is assigned a frequency equal to 2:3 times the frequency of D; with D at 288 Hz, this puts G at 192 Hz. This frequency is then doubled (to 384 Hz) to bring it into the basic octave.

When extending this tuning, however, a problem arises: no stack of 3:2 intervals (perfect fifths) will fit exactly into any stack of 2:1 intervals (octaves). For instance, a stack of twelve fifths, obtained by adding one more note to the stack shown above, will be similar but not identical in size to a stack of 7 octaves.
More exactly, it will be about a quarter of a semitone larger (see Pythagorean comma). Thus, A♭ and G♯, when brought into the basic octave, will not coincide as expected. The table below illustrates this, showing for each note in the basic octave the conventional name of the interval from D (the base note), the formula to compute its frequency ratio, its size in cents, and the difference in cents (labeled ET-dif) between its size and the size of the corresponding interval in the equally tempered scale.

[Table: Note | Interval from D | Formula | Frequency ratio | Size (cents) | ET-dif (cents)]

In the formulas, the ratios 3:2 or 2:3 represent an ascending or descending perfect fifth (i.e. an increase or decrease in frequency by a perfect fifth), while 2:1 or 1:2 represent an ascending or descending octave. In equal temperament, pairs of enharmonic notes such as A♭ and G♯ are thought of as being exactly the same note; in Pythagorean tuning, however, they have different ratios with respect to D, which means they are at different frequencies. This discrepancy, of about 23.46 cents, or nearly one quarter of a semitone, is known as a Pythagorean comma.

To get around this problem, Pythagorean tuning ignores A♭, and uses only the 12 notes from E♭ to G♯. This, as shown above, implies that only eleven just fifths are used to build the entire chromatic scale. The remaining fifth (from G♯ to E♭) is left badly out of tune, meaning that any music which combines those two notes is unplayable in this tuning. A very out-of-tune interval such as this one is known as a wolf interval. In the case of Pythagorean tuning, all the fifths are 701.96 cents wide, in the exact ratio 3:2, except the wolf fifth, which is only 678.49 cents wide, nearly a quarter of a semitone flatter.

If the notes G♯ and E♭ need to be sounded together, the position of the wolf fifth can be changed. For example, a C-based Pythagorean tuning would produce a stack of fifths running from D♭ to F♯, making F♯-D♭ the wolf interval. However, there will always be one wolf fifth in Pythagorean tuning, making it impossible to play in all keys in tune.

Size of intervals
The table above shows only intervals from D. However, intervals can be formed by starting from each of the twelve notes listed above. Thus, twelve intervals can be defined for each interval type (twelve unisons, twelve semitones, twelve intervals composed of 2 semitones, twelve intervals composed of 3 semitones, etc.). As explained above, one of the twelve fifths (the wolf fifth) has a different size with respect to the other eleven. For a similar reason, each of the other interval types, except for the unisons and the octaves, has two different sizes in Pythagorean tuning. This is the price paid for seeking just intonation. Interval names are given below in their standard shortened form; for instance, P5 denotes a perfect fifth, such as the interval from D to A.

The reason why the interval sizes vary throughout the scale is that the pitches forming the scale are unevenly spaced. Namely, the frequencies defined by construction for the twelve notes determine two different semitones (i.e. intervals between adjacent notes):
- The minor second (m2), also called the diatonic semitone, with ratio 256:243 (about 90.22 cents), for example
between D and E♭.
- The augmented unison (A1), also called the chromatic semitone, with ratio 2187:2048 (about 113.69 cents), for example between E♭ and E.

Conversely, in an equally tempered chromatic scale, by definition the twelve pitches are equally spaced, all semitones having a size of exactly 100 cents. As a consequence, all intervals of any given type have the same size (e.g., all major thirds have the same size, all fifths have the same size, etc.). The price paid, in this case, is that none of them is justly tuned and perfectly consonant, except, of course, for the unison and the octave.

By definition, in Pythagorean tuning 11 perfect fifths (P5) have a size of approximately 701.955 cents (700+ε cents, where ε ≈ 1.955 cents). Since the average size of the 12 fifths must equal exactly 700 cents (as in equal temperament), the other one must have a size of 700−11ε cents, which is about 678.495 cents (the wolf fifth). Notice that the latter interval, although enharmonically equivalent to a fifth, is more properly called a diminished sixth (d6). Similarly,
- 9 minor thirds (m3) are ≈ 294.135 cents (300−3ε), 3 augmented seconds (A2) are ≈ 317.595 cents (300+9ε), and their average is 300 cents;
- 8 major thirds (M3) are ≈ 407.820 cents (400+4ε), 4 diminished fourths (d4) are ≈ 384.360 cents (400−8ε), and their average is 400 cents;
- 7 diatonic semitones (m2) are ≈ 90.225 cents (100−5ε), 5 chromatic semitones (A1) are ≈ 113.685 cents (100+7ε), and their average is 100 cents.

In short, similar differences in width are observed for all interval types, except for unisons and octaves, and they are all multiples of ε, the difference between the Pythagorean fifth and the average fifth. Notice that, as an obvious consequence, each augmented or diminished interval is exactly 12ε (≈ 23.460 cents) narrower or wider than its enharmonic equivalent. For instance, the d6 (or wolf fifth) is 12ε cents narrower than each P5, and each A2 is 12ε cents wider than each m3. This interval of size 12ε is known as a Pythagorean comma, exactly equal to the opposite of a diminished second (≈ −23.460 cents). This implies that ε can also be defined as one twelfth of a Pythagorean comma.

Pythagorean intervals
Four of the above-mentioned intervals take a specific name in Pythagorean tuning. In the following table, these specific names are provided, together with alternative names used generically for some other intervals. Notice that the Pythagorean comma does not coincide with the diminished second, as its size (524288:531441) is the reciprocal of the Pythagorean diminished second (531441:524288). Also, ditone and semiditone are specific to Pythagorean tuning, while tone and tritone are used generically for all tuning systems. Despite its name, a semiditone (3 semitones, or about 300 cents) can hardly be viewed as half of a ditone (4 semitones, or about 400 cents). All the intervals with prefix sesqui- are justly tuned, and their frequency ratio, shown in the table, is a superparticular number (or epimoric ratio). The same is true for the octave.
|Generic names||Specific names| |Quality and number||Other naming conventions||Pythagorean tuning||5-limit tuning||1/4-comma |1||augmented unison||A1||chromatic semitone, |2||diminished third||d3||tone, whole tone, whole step| |2||major second||M2||sesquioctavum (9:8)| |3||minor third||m3||semiditone (32:27)||sesquiquintum (6:5)| |4||major third||M3||ditone (81:64)||sesquiquartum (5:4)| |5||perfect fourth||P4||diatessaron||sesquitertium (4:3)| |7||perfect fifth||P5||diapente||sesquialterum (3:2)| |12||(perfect) octave||P8||diapason||duplex (2:1)| Because of the wolf interval, this tuning is rarely used nowadays, although it is thought to have been widespread. In music which does not change key very often, or which is not very harmonically adventurous, the wolf interval is unlikely to be a problem, as not all the possible fifths will be heard in such pieces. Because most fifths in Pythagorean tuning are in the simple ratio of 3:2, they sound very "smooth" and consonant. The thirds, by contrast, most of which are in the relatively complex ratios of 81:64 (for major thirds) and 32:27 (for minor thirds), sound less smooth. For this reason, Pythagorean tuning is particularly well suited to music which treats fifths as consonances, and thirds as dissonances. In western classical music, this usually means music written prior to the 15th century. From about 1510 onward, as thirds came to be treated as consonances, meantone temperament, and particularly quarter-comma meantone, which tunes thirds to the relatively simple ratio of 5:4, became the most popular system for tuning keyboards. At the same time, syntonic-diatonic just intonation was posited by Zarlino as the normal tuning for singers. However, meantone presented its own harmonic challenges. Its wolf intervals proved to be even worse than those of the Pythagorean tuning (so much so that it often required 19 keys to the octave as opposed to the 12 in Pythagorean tuning). As a consequence, meantone was not suitable for all music. From around the 18th century, as the desire grew for instruments to change key, and therefore to avoid a wolf interval, this led to the widespread use of well temperaments and eventually equal temperament. - Bragod is a duo giving historically informed performances of mediaeval Welsh music using the crwth and six-stringed lyre using Pythagorean tuning - Gothic Voices – Music for the Lion-Hearted King (Hyperion, CDA66336, 1989), directed by Christopher Page (Leech-Wilkinson) - Lou Harrison performed by John Schneider and the Cal Arts Percussion Ensemble conducted by John Bergamo - Guitar & Percussion (Etceter Records, KTC1071, 1990): Suite No. 1 for guitar and percussion and Plaint & Variations on "Song of Palestine" See also - Enharmonic scale - List of meantone intervals - List of musical intervals - Regular temperament - Musical temperament - Timaeus (dialogue), in which Plato discusses Pythagorean tuning - Whole-tone scale - Benward & Saker (2003). Music: In Theory and Practice, Vol. I, p. 56. Seventh Edition. ISBN 978-0-07-294262-0. - Gunther, Leon (2011). The Physics of Music and Color, p.362. ISBN 978-1-4614-0556-6. - Sethares, William A. (2005). Tuning, Timbre, Spectrum, Scale, p.163. ISBN 1-85233-797-4. - Peter A. Frazer The Development of Musical Tuning Systems - Asiatic Society of Japan (1879). Transactions of the Asiatic Society of Japan, Volume 7, p.82. Asiatic Society of Japan. - Wolf intervals are operationally defined herein as intervals composed of 3, 4, 5, 7, 8, or 9 semitones (i.e. 
major and minor thirds or sixths, perfect fourths or fifths, and their enharmonic equivalents) the size of which deviates by more than one syntonic comma (about 21.5 cents) from the corresponding justly intonated interval. Intervals made up of 1, 2, 6, 10, or 11 semitones (e.g. major and minor seconds or sevenths, tritones, and their enharmonic equivalents) are considered to be dissonant even when they are justly tuned, thus they are not marked as wolf intervals even when they deviate from just intonation by more than one syntonic comma. - Milne, A., Sethares, W.A. and Plamondon, J., "Isomorphic Controllers and Dynamic Tuning: Invariant Fingerings Across a Tuning Continuum", Computer Music Journal, Winter 2007, Vol. 31, No. 4, Pages 15–32. - However, 3/28 is described as "almost exactly a just major third." Sethares (2005), p.60. - Daniel Leech-Wilkinson (1997), "The good, the bad and the boring", Companion to Medieval & Renaissance Music. Oxford University Press. ISBN 0-19-816540-4. - "A Pythagorean tuning of the diatonic scale", with audio samples. - "Pythagorean Tuning and Medieval Polyphony", by Margo Schulter.
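As a numerical companion to the construction described in the article above, the short sketch below (an addition, not part of the source article) builds the D-based stack of fifths in code, prints each note's position in cents, and measures the leftover wolf fifth; the helper names are simply illustrative.

```python
# Sketch: build a D-based Pythagorean chromatic scale from a stack of just fifths
# and locate the wolf fifth. Follows the construction described in the article:
# six fifths downward (A-flat ... G) and six upward (A ... G-sharp) from D.
from math import log2

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * log2(ratio)

def fold_into_octave(ratio):
    """Multiply or divide by 2 until the ratio lies in [1, 2)."""
    while ratio < 1:
        ratio *= 2
    while ratio >= 2:
        ratio /= 2
    return ratio

# Notes ordered by position in the stack of fifths, from 6 fifths below D to 6 above.
names = ["Ab", "Eb", "Bb", "F", "C", "G", "D", "A", "E", "B", "F#", "C#", "G#"]
scale = {name: fold_into_octave((3 / 2) ** k) for k, name in enumerate(names, start=-6)}

for name in names:
    print(f"{name:>2}: {cents(scale[name]):8.3f} cents above D")

# The eleven fifths in the stack are pure 3:2 (~701.955 cents); the leftover
# G#-Eb fifth is the wolf, roughly a Pythagorean comma (~23.46 cents) narrower.
wolf = cents(scale["Eb"] * 2 / scale["G#"])   # Eb taken an octave up
print(f"wolf fifth G#-Eb: {wolf:.3f} cents")  # ~678.495
```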
http://en.wikipedia.org/wiki/Pythagorean_tuning
13
15
Advertising Campaign (Dan Snook)

1. Students will use matrix operations to solve a real-life application.
2. Students will be able to sort given information into matrix form.

This activity is presented as a practical application of matrix multiplication. It is envisioned to be presented after students have been introduced to the matrix operations of addition and multiplication. Remind students that they are overwhelmed with advertising every day in a multitude of places. It is important for them to know that advertising is a multi-billion dollar industry. Also, advertising often offers high-paying careers. This activity shows the inner workings of one aspect of this industry.

1. Begin the activity by reviewing examples of matrix multiplication. Include checking the sizes of the matrices and whether or not it is possible to multiply. Look also at the failure of the commutative property with matrix multiplication.
2. Distribute the worksheets and read the introduction to the problem. Discuss what the goals of an advertising campaign might be. Also, discuss market share.
3. The difficulty of the problem is in the setup of the matrix. Allow the students to try this on their own. Then go over the results as a class.
4. Multiply the matrices. Use graphing calculators if desired. Then interpret the results.

"Advertising Campaign" worksheets, dry erase board

Review of matrix multiplication (10 minutes), Introduction of the problem (5 minutes), Individual work to set up the matrix (5 minutes), Large group discussion of the matrix (10 minutes), Large group solution of the problem (10 minutes)

Discrete Mathematics Concepts: Matrix dimensions, elements, multiplication

Related Mathematics Concepts:

NCTM Standards Addressed: Problem Solving, Reasoning, Communication, Connections, Algebra, Discrete Math

Colorado and District Standards Addressed: Number Sense (1), Algebraic Methods (2), Data Collection and Analysis (3), Problem Solving Techniques (5), Linking Concepts and Procedures (6)

This activity fits well into the initial discussion of matrices in an Algebra I class. It provides a concrete application of matrix multiplication. Activities involving applications of advertising could be continued. For example, students could examine the advertising industry's manipulation of the truth of the major and/or minor premise in an argument, and how that affects the conclusions that you draw about their products.

This is a good activity to use while teaching matrix operations. Multiplying matrices is a difficult problem for students at first. Having mastered the procedure, this activity shows a use for the technique.

Colorado Model Mathematics Standards Task Force. (1995). Colorado model content standards for mathematics.
National Council of Teachers of Mathematics. (1989). Curriculum and evaluation standards for school mathematics. Reston, VA: Author.
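The worksheet itself is not reproduced in this summary, so the figures in the sketch below are invented purely for illustration; it shows the kind of computation the activity builds toward, multiplying a current market-share row vector by a matrix that describes how an advertising campaign shifts customers between brands.

```python
# Hypothetical example of the activity's core computation: the worksheet data is
# not given in this write-up, so these numbers are invented for illustration only.

# Current market share for three competing brands (row vector, fractions summing to 1).
share = [0.50, 0.30, 0.20]

# Transition matrix: entry [i][j] is the fraction of brand i's customers who
# switch to brand j after the advertising campaign.
campaign = [
    [0.80, 0.15, 0.05],
    [0.10, 0.85, 0.05],
    [0.20, 0.10, 0.70],
]

# Row vector times matrix: new_share[j] = sum over i of share[i] * campaign[i][j].
new_share = [
    sum(share[i] * campaign[i][j] for i in range(len(share)))
    for j in range(len(campaign[0]))
]

print(new_share)  # approximately [0.47, 0.35, 0.18]: brand 2 gains share, brands 1 and 3 lose
```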
http://www.colorado.edu/education/DMP/activities/matrices/dlsact04.html
13
12
In this lesson our instructor talks about arcs and chords. First she discusses arcs and chords theorem 1. Afterward, she lectures on inscribed polygons. Then she talks about arcs and chords theorems 2 and 3. Four complete extra example videos round out this lesson.

Arc of the chord: An arc that shares the same endpoints as the chord.

In a circle or in congruent circles, two minor arcs are congruent if and only if their corresponding chords are congruent.

Inscribed polygon: An inscribed polygon is a polygon inside the circle with all of its vertices on the circle.

In a circle, if a diameter is perpendicular to a chord, then it bisects the chord and its arc.

In a circle or in congruent circles, two chords are congruent if and only if they are equidistant from the center.

Arcs and Chords Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.
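As a quick numerical companion to the theorems listed above (not part of the original lesson description), the sketch below checks two of them on a concrete circle: a diameter perpendicular to a chord bisects it, and chords are congruent exactly when they are equidistant from the center. The radius and distances used are arbitrary example values.

```python
# Numerical check of two chord theorems on the circle x^2 + y^2 = r^2.
# The radius and chord distances below are arbitrary example values.
from math import sqrt, isclose

r = 5.0   # circle radius
d = 3.0   # distance from the center to a horizontal chord y = d

# The chord y = d meets the circle where x = +/- sqrt(r^2 - d^2).
half = sqrt(r**2 - d**2)
left, right = (-half, d), (half, d)

# A diameter perpendicular to the chord (here, the y-axis) hits it at its midpoint.
midpoint = ((left[0] + right[0]) / 2, d)
print(midpoint)                      # (0.0, 3.0) -> the chord is bisected

# Two chords are congruent iff they are equidistant from the center:
chord_length = lambda dist: 2 * sqrt(r**2 - dist**2)
print(isclose(chord_length(3.0), chord_length(3.0)))   # True: same distance, same length
print(chord_length(3.0), chord_length(4.0))            # 8.0 vs 6.0: the farther chord is shorter
```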
http://www.educator.com/mathematics/geometry/pyo/arcs-and-chords.php
13
30
2. The Remainder Theorem and the Factor Theorem

This section discusses the historical method of solving higher degree polynomial equations.

As we discussed in the previous section Polynomial Functions and Equations, a polynomial function is of the form:

f(x) = a_0x^n + a_1x^(n-1) + a_2x^(n-2) + ... + a_n

where a_0 ≠ 0 and n is a positive integer, called the degree of the polynomial.

For example, f(x) = 7x^5 + 4x^3 − 2x^2 − 8x + 1 is a polynomial function of degree 5.

First, let's consider what happens when we divide numbers. Say we try to divide `13` by `5`. We will get the answer `2` and have a remainder of `3`. We could write this as:

`13/5 = 2 + 3/5`

Another way of thinking about this example is:

`13 = 2 × 5 + 3`

Division of polynomials is something like our number example. If we divide a polynomial by (x − r), we obtain a result of the form:

f(x) = (x − r) q(x) + R

where q(x) is the quotient and R is the remainder.

Example 1: Divide f(x) = 3x^2 + 5x − 8 by (x − 2).

The Remainder Theorem

Consider f(x) = (x − r) q(x) + R

Note that if we let x = r, the expression becomes

f(r) = (r − r) q(r) + R

f(r) = R

This leads us to the Remainder Theorem, which states:

If a polynomial f(x) is divided by (x − r) and a remainder R is obtained, then f(r) = R.

Use the remainder theorem to find the remainder for Example 1 above, which was to divide f(x) = 3x^2 + 5x − 8 by (x − 2).

By using the remainder theorem, determine the remainder when 3x^3 − x^2 − 20x + 5 is divided by (x + 4).

The Factor Theorem

The Factor Theorem states:

If the remainder f(r) = R = 0, then (x − r) is a factor of f(x).

The Factor Theorem is powerful because it can be used to find roots of polynomial equations.

Is (x + 1) a factor of f(x) = x^3 + 2x^2 − 5x − 6?

1. Find the remainder R by long division and by the Remainder Theorem. (2x^4 − 10x^2 + 30x − 60) ÷ (x + 4)
2. Find the remainder using the Remainder Theorem: (x^4 − 5x^3 + x^2 − 2x + 6) ÷ (x + 4)
3. Use the Factor Theorem to decide if (x − 2) is a factor of f(x) = x^5 − 2x^4 + 3x^3 − 6x^2 − 4x + 8.
4. Determine whether `-3/2` is a zero (root) of the function f(x) = 2x^3 + 3x^2 − 8x − 12.
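The lesson's examples lend themselves to a short script. The sketch below (an addition, not taken from the original page) applies the Remainder Theorem by evaluating f(r) and uses synthetic division to confirm the quotient and remainder for the worked division of f(x) = 3x^2 + 5x − 8 by (x − 2).

```python
# Remainder Theorem and synthetic division, illustrated on the lesson's examples.

def poly_eval(coeffs, x):
    """Evaluate a polynomial given its coefficients in descending order (Horner's rule)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def synthetic_division(coeffs, r):
    """Divide the polynomial by (x - r); return (quotient coefficients, remainder)."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + row[-1] * r)
    return row[:-1], row[-1]

# Example 1: f(x) = 3x^2 + 5x - 8 divided by (x - 2).
print(synthetic_division([3, 5, -8], 2))      # ([3, 11], 14): quotient 3x + 11, remainder 14
print(poly_eval([3, 5, -8], 2))               # 14 -> agrees with the Remainder Theorem

# Factor Theorem: (x + 1) is a factor of x^3 + 2x^2 - 5x - 6 exactly when f(-1) = 0.
print(poly_eval([1, 2, -5, -6], -1))          # 0 -> yes, (x + 1) is a factor

# Exercise 4: is -3/2 a zero of 2x^3 + 3x^2 - 8x - 12?
print(poly_eval([2, 3, -8, -12], -1.5))       # 0.0 -> yes
```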
http://www.intmath.com/equations-of-higher-degree/2-factor-remainder-theorems.php
13
18
Quadrilateral Overview Basics of quadrilaterals including concave, convex ones. Parallelograms, rectangles, rhombi and squares ⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles. - What I wanna do in this video is give an overview of quadrilaterals. - And you can imagine from this prefix, or I guess you could say from the beginning of this word - quad - This involves four of something. - And quadrilaterals, as you can imagine, are, are shapes. - And we're gonna be talking about two-dimensional shapes that have four sides, and four vertices, and four angles. - So, for example, one, two, three, four. - That is a quadrilateral. - Although that last side didn't look too straight. - One, two, three, four. That is a quadrilateral. - One, two, three, four. These are all quadrilaterals. - They all have four sides, four vertices, and clearly four angles. - One angle, two angles, three angles, and four angles. - Here you can measure. Here actually let me draw this one a little bit bigger 'cause it's interesting. - So in this one right over here you have one angle, two angles, three angles - and then you have this really big angle right over there. - If you look at the, if you look at the interior angles of this quadrilateral. - Now quadrilaterals, as you can imagine, can be subdivided into other groups - based on the properties of the quadrilaterals. - And the main subdivision of quadrilaterals is between concave and convex quadrilaterals - So you have concave, and you have convex. - And the way I remember concave quadrilaterals, or really concave polygons of any number of shapes - is that it looks like something has caved in. - So, for example, this is a concave quadrilateral - It looks like this side has been caved in. - And one way to define concave quadrilaterals, - so let me draw it a little bit bigger, - so this right over here is a concave quadrilateral, - is that it has an interior angle, it has an interior angle that is larger than 180 degrees. - So, for example, this interior angle right over here is larger, is larger than 180 degrees. - It's an interesting proof, maybe I'll do a video, it's actually a pretty simple proof, - to show that if you have a concave quadrilateral - if at least one of the interior angles has a measure larger than 180 degrees - that none of the sides can be parallel to each other. - The other type of quadrilateral, you can imagine, - is when all of the interior angles are less than 180 degrees. - And you might say, "Well, what happens at 180 degrees?" - Well, if this angle was 180 degrees then these wouldn't be two different sides - it would just be one side and that would look like a triangle. - But if all of the interior angles are less than 180 degrees, - then you are dealing with a convex quadrilateral. - So this convex quadrilateral would involve that one and that one over there. - So this right over here is what a convex quadrilateral, - this is what a convex quadrilateral could look like. - Four points. Four sides. Four angles. - Now within convex quadrilaterals there are some other interesting categorizations. - So now we're just gonna focus on convex quadrilaterals - so that's gonna be all of this space over here. - So one type of convex quadrilateral is a trapezoid. - A trapezoid. 
And a trapezoid is a convex quadrilateral - and sometimes the definition here is a little bit, - different people will use different definitions, - so some people will say a trapezoid is a quadrilateral that has exactly two sides that are parallel to each other - So, for example, they would say that this right over here - this right over here is a trapezoid, where this side is parallel to that side. - If I give it some letters here, if I call this trapezoid A, B, C, D, - we could say that segment AB is parallel to segment DC - and because of that we know that this is, that this is a trapezoid - Now I said that the definition is a little fuzzy because some people say - you can have exactly one pair of parallel sides - but some people say at least one pair of parallel sides. - So if you say, if you use the original definition, - and that's the kind of thing that most people are referring to when they say a trapezoid, - exactly one pair of parallel sides, it might be something like this, - but if you use a broader definition of at least one pair of parallel sides, - then maybe this could also be considered a trapezoid. - So you have one pair of parallel sides. Like that. - And then you have another pair of parallel sides. Like that. - So this is a question mark where it comes to a trapezoid. - A trapezoid is definitely this thing here, where you have one pair of parallel sides. - Depending on people's definition, this may or may not be a trapezoid. - If you say it's exactly one pair of parallel sides, this is not a trapezoid because it has two pairs. - If you say at least one pair of parallel sides, then this is a trapezoid. - So I'll put that as a little question mark there. - But there is a name for this regardless of your definition of what a trapezoid is. - If you have a quadrilateral with two pairs of parallel sides, - you are then dealing with a parallelogram. - So the one thing that you definitely can call this is a parallelogram. - And I'll just draw it a little bit bigger. - So it's a quadrilateral. If I have a quadrilateral, and if I have two pairs of parallel sides - So two of the opposite sides are parallel. - So that side is parallel to that side and then this side is parallel to that side there - You're dealing with a parallelogram. - And then parallelograms can be subdivided even further. - They can be subdivided even further if the four angles in a parallelogram are all right angles, - you're dealing with a rectangle. So let me draw one like that. - So if the four sides, so from parallelograms, these are, this is all in the parallelogram universe. - What I'm drawing right over here, that is all the parallelogram universe. - This parallelogram tells me that opposite sides are parallel. - And if we know that all four angles are 90 degrees - and we've proven in previous videos how to figure out the sum of the interior angles of any polygon - and using that same method, you could say that the sum of the interior angles of a rectangle, - or of any, of any quadril, of any quadrilateral, is actually a hund- is actually 360 degrees, - and you see that in this special case as well, but maybe we'll prove it in a separate video. - But this right over here we would call a rectangle - a parallelogram, opposite sides parallel, - and we have four right angles. - Now if we have a parallelogram, where we don't necessarily have four right angles, - but we do have, where we do have the length of all the sides be equal, - then we're dealing with a rhombus. So let me draw it like that. 
- So it's a parallelogram. This is a parallelogram. - So that side is parallel to that side. This side is parallel to that side. - And we also know that all four sides have equal lengths. - So this side's length is equal to that side's length. - Which is equal to that side's length, which is equal to that side's length. - Then we are dealing with a rhombus. - So one way to view it, all rhombi are parallelograms - All rectangles are parallelograms - All parallelograms you cannot assume to be rectangles. - All parallelograms you cannot assume to be rhombi. - Now, something can be both a rectangle and a rhombus. - So let's say this is the universe of rectangles - So the universe of rectangles. Drawing a little of a venn diagram here. - Is that set of shapes, and the universe of rhombi is this set of shapes right over here. - So what would it look like? - Well, you would have four right angles, and they would all have the same length. - So, it would look like this. - So it would definitely be a parallelogram. - It would be a parallelogram. Four right angles. - Four right angles, and all the sides would have the same length. - And you probably. This is probably the first of the shapes that you learned, or one of the first shapes. - This is clearly a square. - So all squares are both rhombi, are are members of the, they can also be considered a rhombus - and they can also be considered a rectangle, - and they could also be considered a parallelogram. - But clearly, not all rectangles are squares - and not all rhombi are squares - and definitely not all parallelograms are squares. - This one, clearly, right over here is neither a rectangle, nor a rhombi - nor a square. - So that's an overview, just gives you a little bit of taxonomy of quadrilaterals. - And then in the next few videos, we can start to explore them and find their interesting properties - Or just do interesting problems involving them.
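To complement the taxonomy described in the video (this code is an addition, not part of the transcript), here is a small sketch that classifies a quadrilateral from its four vertices by testing the same properties discussed above: parallel opposite sides, right angles, and equal side lengths.

```python
# Classify a convex quadrilateral from its vertices (listed in order) using the
# taxonomy from the video: parallelogram, rectangle, rhombus, square, or other.
from math import isclose

def classify(p1, p2, p3, p4):
    pts = [p1, p2, p3, p4]
    sides = [(pts[(i + 1) % 4][0] - pts[i][0], pts[(i + 1) % 4][1] - pts[i][1]) for i in range(4)]

    def parallel(u, v):
        return isclose(u[0] * v[1] - u[1] * v[0], 0.0, abs_tol=1e-9)   # zero cross product

    def length2(u):
        return u[0] ** 2 + u[1] ** 2

    is_parallelogram = parallel(sides[0], sides[2]) and parallel(sides[1], sides[3])
    if not is_parallelogram:
        return "not a parallelogram (possibly a trapezoid or a general quadrilateral)"
    right_angles = isclose(sides[0][0] * sides[1][0] + sides[0][1] * sides[1][1], 0.0, abs_tol=1e-9)
    equal_sides = all(isclose(length2(s), length2(sides[0]), rel_tol=1e-9) for s in sides)
    if right_angles and equal_sides:
        return "square"
    if right_angles:
        return "rectangle"
    if equal_sides:
        return "rhombus"
    return "parallelogram"

print(classify((0, 0), (2, 0), (2, 2), (0, 2)))      # square
print(classify((0, 0), (3, 0), (3, 1), (0, 1)))      # rectangle
print(classify((0, 0), (2, 1), (4, 0), (2, -1)))     # rhombus
```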
http://www.khanacademy.org/math/geometry/quadrilaterals-and-polygons/v/quadrilateral-overview
13
18
Antarctica is covered by the world’s largest ice sheet, and it is losing mass. At present, ice is slipping into the sea from the continent’s icy edge more quickly than snowfall is accumulating in the high-altitude interior. The imbalance means that Antarctic ice loss is contributing to rising sea level. To keep track of Antarctic ice losses and gains, scientists need an accurate picture of the ice perimeter. This image includes a tiny portion of a new map of the Antarctic ice edge; it shows the area around Law Promontory, which juts out from East Antarctica’s coastline near Stefansson Bay. In places where ice extends beyond the edge of the continent, the map shows the grounding line—the point where the ice sheet separates from land and begins to float on the ocean. The colored lines show the ice edge/grounding line based on the latest analysis (red) as well as the previous best estimate (gray line), which was based on the Mosaic of Antarctica (MOA). The ice edge is overlaid on a natural-color image from the Landsat satellite captured on January 29, 2010. The new map is the result of an international effort called the Antarctic Surface Accumulation and Ice Discharge (ASAID) project. Led by Robert Bindschadler of NASA’s Goddard Space Flight Center, the collaborators used data and images from multiple sources—including images from Landsat 7 and the LIMA mosaic, precise elevation data from ICESat, and grounding line estimates from earlier studies—to create the most detailed map yet produced. The dramatic improvement provided by the new map is most apparent around the rocky outcrops off the coast; smatterings of ice-encased islands are a common occurrence around Antarctica. The earlier mapping effort had included the rocky outcrops and sometimes icebergs as part of the ice sheet perimeter, but elevation data and interpretation based on higher-resolution (more detailed) imagery enabled the researchers to identify the ice perimeter more accurately. To complete its map, the team connected 3.5 million geographic points around Antarctica. The team identified a perimeter for Antarctica’s ice of roughly 53,610 kilometers (33,312 miles). Determining the perimeter of Antarctica’s ice sheet is not simply an academic exercise. NASA satellites have observed significant ice loss in Antarctica, especially along the Antarctic Peninsula, but quantifying that loss has been difficult. Although scientists calculated the flow speed of 33 of the continent’s biggest glaciers, such outflow glaciers occupy just 5 percent of the coastline, and account for only half of the lost ice. Ice can migrate from land to sea by other means than the biggest glaciers. Ice shelves can collapse, icebergs can calve off thin ice tongues, and chunks of ice may slide over the edges of precipices. By providing a precise map of Antarctica’s ice perimeter, ASAID promises to improve estimates of ice loss. Bindschadler remarks, “This project has been a major achievement to come from the International Polar Year. This project included young scientists, it was an international effort, and it produced freely available data—all from satellites.” Ted Scambos of the National Snow and Ice Data Center observes, “ASAID doesn’t just show the location of the ice edge, it provides the elevation all along that line. That’s a key step in measuring mass balance because it tells you the ice thickness near the grounding line.” - Brunt, Kelly M., Fricker, Helen A., Padman, Laurie, Scambos, Ted A., O'Neel, Shad. (2010). 
Mapping the grounding zone of the Ross Ice Shelf, Antarctica, using ICESat laser altimetry. Annals of Glaciology, 51(55), 71–79. - International Polar Year Portal. (2006, December 29). ASAID: Antarctic Surface Accumulation and Ice Discharge. Accessed July 22, 2010. - Hansen, K. (2010, July 22). Antarctica Traced from Space. NASA. Accessed July 22, 2010.
http://visibleearth.nasa.gov/view.php?id=44740
13
31
Light or visible light is electromagnetic radiation that is visible to the human eye, and is responsible for the sense of sight. Visible light has wavelength in a range from about 380 nanometres to about 740 nm, with a frequency range of about 405 THz to 790 THz. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not. Primary properties of light are intensity, propagation direction, frequency or wavelength spectrum, and polarisation, while its speed in a vacuum, 299,792,458 meters per second (about 300,000 kilometre per second), is one of the fundamental constants of nature. Light, which is emitted and absorbed in tiny "packets" called photons, exhibits properties of both waves and particles. This property is referred to as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics. The speed of light in a vacuum is defined to be exactly 299,792,458 m/s (approximately 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the metre is now defined in terms of the speed of light. Certain other mechanisms can produce light: scintillation electroluminescence Sonoluminescence triboluminescence Cherenkov radiation There are many sources of light. The most common light sources are thermal: a body at a given temperature emits a characteristic spectrum of black-body radiation. The most common examples include Sun, artificial lights, incandescent light bulbs There are three main theories on light Particle theory Wave theory Quantum theory Newtons particle theory of light says that light is made up of little particles. They obey the same laws of physics as other particles. The elementary particle according to particle theory of light is photon evidence of light as particle Light can exhibit both a wave theory, and a particle theory at the same time. Much of the time, light behaves like a wave. Light waves are also called electromagnetic waves because they are made up of both electric (E) and magnetic (H) fields. Electromagnetic fields oscillate perpendicular to the direction of wave travel, and perpendicular to each other. Light waves are known as transverse waves as they oscillate in the direction traverse to the direction of wave travel. The Electromagnetic Wave Waves have two important characteristics - wavelength and frequency. Wave length .this is the distance between peaks of a wave. Wavelengths are measured in units of length - meters, When dealing with light, wavelengths are in the order of nanometres (1 x 10-9) Frequency: This is the number of peaks that will travel past a point in one second. Frequency is measured in cycles per second. The term given to this is Hertz (Hz) named after the 19th century discoverer of radio waves - Heinrich Hertz. 1 Hz = 1 cycle per second A third anomaly that arose in the late 19th century involved a contradiction between the wave theory of light and measurements of the electromagnetic spectrum emitted by thermal radiators, or so- called black bodies. Physicists struggled with this problem, which later became known as the ultraviolet catastrophe, unsuccessfully for many years. In 1900, Max Planck developed a new theory of black- body radiation that explained the observed spectrum . Plancks theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. 
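To put numbers on those packets of energy, the sketch below (a supplementary example, not part of the original slides) uses the Planck relation E = hf = hc/λ, which is stated just below, to evaluate the energy of a single photon across the visible range quoted at the start of this section.

```python
# Energy of a single photon, E = h*f = h*c/wavelength, evaluated for the
# visible range (~380-740 nm) mentioned at the start of this section.
h = 6.626e-34        # Planck's constant, J*s
c = 2.99792458e8     # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

for wavelength_nm in (380, 550, 740):
    wavelength = wavelength_nm * 1e-9
    frequency = c / wavelength
    energy = h * frequency
    print(f"{wavelength_nm} nm: f = {frequency:.2e} Hz, E = {energy:.2e} J ({energy / eV:.2f} eV)")
# Violet photons (~380 nm) carry roughly twice the energy of red photons (~740 nm).
```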
These packets were called quanta, and the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, given by E = hf = hc/λ, where h is Planck's constant, λ is the wavelength and c is the speed of light. Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength: p = E/c = h/λ. As it originally stood, this theory did not explain the simultaneous wave- and particle-like natures of light, though Planck would later work on theories that did. In 1918, Planck received the Nobel Prize in Physics for his part in the founding of quantum theory.

Reflection is the change in direction of a wave front at an interface between two different media so that the wave front returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection the angle at which the wave is incident on the surface equals the angle at which it is reflected. Mirrors exhibit specular reflection. In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.

The bouncing back of light when it falls on an object is called reflection of light. Reflection of light involves an incident ray, a reflected ray and the normal. The ray that falls on a surface is called the incident ray. The line perpendicular to the surface at the point of incidence is called the normal. The ray that moves away after falling on a surface is called the reflected ray.

The laws of reflection are as follows: The incident ray, the reflected ray and the normal to the reflecting surface at the point of incidence lie in the same plane. The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal. The reflected ray and the incident ray are on opposite sides of the normal.

When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fibre boundaries of an organic material) and by its surface, if it is rough. Thus, an image is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law. The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.

When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie over a circle. The centre of that circle is located at the imaginary intersection of the mirrors.
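The multiple-image behaviour just described can be quantified with a standard textbook rule that is not stated in the slides: two plane mirrors meeting at an angle θ produce roughly 360°/θ − 1 images. A brief sketch:

```python
# Number of images formed by two plane mirrors inclined at an angle theta.
# Uses the common textbook rule n = 360/theta - 1 when 360/theta is an integer;
# this rule is an addition here, not something stated in the slides above.
def image_count(theta_degrees):
    ratio = 360 / theta_degrees
    return int(ratio) - 1 if ratio.is_integer() else int(ratio)

for theta in (90, 60, 45, 30):
    print(f"{theta:>3} degrees -> {image_count(theta)} images")
# 90 -> 3, 60 -> 5, 45 -> 7, 30 -> 11; as theta shrinks toward 0 (parallel mirrors),
# the count grows without bound, matching the "infinite images" case described above.
```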
A square of four mirrors placed face to face give the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembling a pyramid, in which each pair of mirrors sits an angle to each other, lie over a sphere. If the base of the pyramid is rectangle shaped, the images spread over a section of a torus. Light bounces exactly back in the direction from which it came due to a nonlinear optical process. In this type of reflection, not only the direction of the light is reversed, but the actual wave fronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the abating optics a second time A mirror whose polished, reflecting surface is a part of a hollow sphere of glass or plastic is called a spherical mirror. Depending upon the nature of the reflecting surface of a mirror, the spherical mirror is classified as: Concave mirror Convex mirror A concave mirror, or converging mirror, has a reflecting surface that bulges inward (away from the incident light). Concave mirrors reflect light inward to one focal point. They are used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on the distance between the object and the mirror. These mirrors are called "converging" because they tend to collect light that falls on them, refocusing parallel incoming rays toward a focus. This is because the light is reflected at different angles, since the normal to the surface differs with each spot on the mirror. A convex mirror, fish eye mirror or diverging mirror, is a curved mirror in which the reflective surface bulges toward the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focus (F) and the centre of curvature (2F) are both imaginary points "inside" the mirror, which cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. A collimated (parallel) beam of light diverges (spreads out) after reflection from a convex mirror, since the normal to the surface differs with each spot on the mirror. A lens is an optical device with perfect or approximate axial symmetry which transmits and refracts light, converging or diverging the beam. A simple lens consists of a single optical element. A compound lens is an array of simple lenses (elements) with a common axis; the use of multiple elements allows more optical aberrations to be corrected than is possible with a single element. Lenses are typically made of glass or transparent plastic. Elements which refract electromagnetic radiation outside the visual spectrum are also called lenses: for instance, a microwave lens can be made from paraffin wax. The variant spelling lens is sometimes seen. While it is listed as an alternative spelling in some dictionaries, most mainstream dictionaries do not list it as acceptable Most lenses are spherical lenses: their two surfaces are parts of the surfaces of spheres, with the lens axis ideally perpendicular to both surfaces. Each surface can be convex (bulging outwards from the lens), concave (depressed into the lens), or planar (flat). The line joining the centres of the spheres making up the lens surfaces is called the axis of the lens. Typically the lens axis passes through the physical centre of the lens, because of the way they are manufactured. 
Lenses may be cut or ground after manufacturing to give them a different shape or size. The lens axis may then not pass through the physical centre of the lens. Tonic or sphere-cylindrical lenses have surfaces with two different radii of curvature in two orthogonal planes. They have a different focal power in different meridians. This is a form of deliberate astigmatism. More complex are aspheric lenses. These are lenses where one or both surfaces have a shape that is neither spherical nor cylindrical. Such lenses can produce images with much less aberration than standard simple lenses Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of curvature, the lens is convex. A lens with two concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is Plano-convex or plane-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus. It is this type of lens that is most commonly used in corrective lenses. If the lens is biconvex or plane-convex, a collimated beam of light travelling parallel to the lens axis and passing through the lens will be converged (or focused) to a spot on the axis, at a certain distance behind the lens (known as the focal length). In this case, the lens is called a positive or converging lens. If the lens is biconcave or plane-concave, a collimated beam of light passing through the lens is diverged (spread); the lens is thus called a negative or diverging lens. The beam after passing through the lens appears to be emanating from a particular point on the axis in front of the lens; the distance from this point to the lens is also known as the focal length, although it is negative with respect to the focal length of a converging lens. Convex-concave (meniscus) lenses can be either positive or negative, depending on the relative curvatures of the two surfaces. A negative meniscus lens has a steeper concave surface and will be thinner at the centre than at the periphery. Conversely, a positive meniscus lens has a steeper convex surface and will be thicker at the centre than at the periphery. An ideal thin lens with two surfaces of equal curvature would have zero optical power, meaning that it would neither converge nor diverge light. All real lenses have a nonzero thickness, however, which affects the optical power. To obtain exactly zero optical power, a meniscus lens must have slightly unequal curvatures to account for the effect of the lens thickness Lenses are used as prosthetics for the correction of visual impairments such as myopia, hyperopic, presbyopia, and astigmatism. (See corrective lens, contact lens, eyeglasses.) Most lenses used for other purposes have strict axial symmetry; eyeglass lenses are only approximately symmetric. They are usually shaped to fit in a roughly oval, not circular, frame; the optical centres are placed over the eyeballs; their curvature may not be axially symmetric to correct for astigmatism. Sunglasses lenses are designed to attenuate light; sunglass lenses that also correct visual impairments can be custom made. Other uses are in imaging systems such as monocular, binoculars, telescopes, microscopes, cameras and projectors. 
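The converging and diverging behaviour described above follows from the thin-lens lensmaker's equation, 1/f = (n − 1)(1/R1 − 1/R2), which the slides do not state explicitly; the sketch below is therefore supplementary, and it uses the convention that a radius is positive when its centre of curvature lies behind the surface.

```python
# Thin-lens lensmaker's equation: 1/f = (n - 1) * (1/R1 - 1/R2).
# Supplementary example (the slides describe the behaviour but not the formula).
# Convention used here: R > 0 if the centre of curvature lies behind the surface.
def focal_length(n, r1, r2):
    power = (n - 1) * (1 / r1 - 1 / r2)   # optical power in dioptres if radii are in metres
    return float("inf") if power == 0 else 1 / power

n_glass = 1.5
print(focal_length(n_glass, +0.10, -0.10))   # biconvex   -> +0.10 m (converging, positive lens)
print(focal_length(n_glass, -0.10, +0.10))   # biconcave  -> -0.10 m (diverging, negative lens)
print(focal_length(n_glass, +0.10, +0.10))   # meniscus with equal curvatures -> inf (zero power in the thin-lens limit)
```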
Some of these instruments produce a virtual image when applied to the human eye; others produce a real image which can be captured on photographic film or an optical sensor, or can be viewed on a screen. In these devices lenses are sometimes paired up with curved mirrors to make a catadioptric system where the lens's spherical aberration corrects the opposite aberration in the mirror (such as Schmidt and meniscus correctors).

Convex lenses produce an image of an object at infinity at their focus; if the sun is imaged, much of the visible and infrared light incident on the lens is concentrated into the small image. A large lens will create enough intensity to burn a flammable object at the focal point. Since ignition can be achieved even with a poorly made lens, lenses have been used as burning-glasses for at least 2400 years. A modern application is the use of relatively large lenses to concentrate solar energy on relatively small photovoltaic cells, harvesting more energy without the need to use larger, more expensive cells.

Radio astronomy and radar systems often use dielectric lenses, commonly called lens antennas, to refract electromagnetic radiation into a collector antenna. Lenses can become scratched and abraded. Abrasion-resistant coatings are available to help control this.

Refraction is the change in direction of a wave due to a change in its speed. This is most commonly observed when a wave passes from one medium to another at any angle other than 90° or 0°. Refraction of light is the most commonly observed phenomenon, but any type of wave can refract when it interacts with a medium, for example when sound waves pass from one medium into another or when water waves move into water of a different depth. Refraction is described by Snell's law, which states that the angle of incidence θ1 is related to the angle of refraction θ2 by sin θ1 / sin θ2 = v1 / v2 = n2 / n1, where v1 and v2 are the wave velocities in the respective media, and n1 and n2 the refractive indices. In general, the incident wave is partially refracted and partially reflected; the details of this behaviour are described by the Fresnel equations.

In optics, refraction occurs when waves travel from a medium with a given refractive index to a medium with another at an angle. At the boundary between the media, the wave's phase velocity is altered, usually causing a change in direction. Its wavelength increases or decreases but its frequency remains constant. For example, a light ray will refract as it enters and leaves glass, assuming there is a change in refractive index. A ray traveling along the normal (perpendicular to the boundary) will change speed, but not direction. Refraction still occurs in this case. Understanding of this concept led to the invention of lenses and the refracting telescope.

In optics the refractive index or index of refraction of a substance or medium is a measure of the speed of light in that medium. It is expressed as a ratio of the speed of light in vacuum relative to that in the considered medium. This can be written mathematically as: n = speed of light in a vacuum / speed of light in medium. For example, the refractive index of water is 1.33, meaning that light travels 1.33 times faster in vacuum than it does in water. As light moves from a medium, such as air, water, or glass, into another it may change its propagation direction in proportion to the change in refractive index. This refraction is governed by Snell's law.
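Building on the Snell's law relation just quoted, this supplementary sketch computes the refracted angle for light passing from air into water (n ≈ 1.33, as above) and shows the case in which no refracted ray exists when the light travels the other way.

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# Supplementary example using the water value n = 1.33 quoted above.
from math import asin, degrees, radians, sin

def refraction_angle(n1, n2, theta1_deg):
    s = n1 * sin(radians(theta1_deg)) / n2
    if s > 1:
        return None            # no refracted ray: total internal reflection
    return degrees(asin(s))

print(refraction_angle(1.00, 1.33, 45))   # air -> water: ~32.1 degrees, bent toward the normal
print(refraction_angle(1.33, 1.00, 60))   # water -> air at 60 degrees: None (total internal reflection)
print(refraction_angle(1.33, 1.00, 30))   # water -> air: ~41.7 degrees, bent away from the normal
```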
Refractive index of materials varies with the wavelength of light. This is called dispersion and results in a slightly different refractive index for each colour. The wavelength λ of light in a material is determined by the refractive index according to λ = λ0 / n, where λ0 is the wavelength of the light in vacuum. Brewster's angle, the critical angle for total internal reflection, and the reflectivity of a surface are also affected by the refractive index. These material parameters can be calculated using the Fresnel equations.

The concept of refractive index can be used with wave phenomena other than light, e.g. sound. In this case the speed of sound is used instead of that of light and a reference medium other than vacuum must be chosen.

The refractive index, n, of a medium is defined as the ratio of the speed, c, of a wave phenomenon such as light or sound in a reference medium to the phase speed, v_p, of the wave in the medium in question: n = c / v_p. It is most commonly used in the context of light with vacuum as a reference medium, although historically other reference media (e.g. air at a standardized pressure and temperature) have been common. It is usually given the symbol n. In the case of light, it equals n = √(ε_r μ_r), where ε_r is the material's relative permittivity, and μ_r is its relative permeability. For most naturally occurring materials, μ_r is very close to 1 at optical frequencies, therefore n is approximately √ε_r. Contrary to a widespread misconception, the real part of a complex n may be less than one, depending upon the material and wavelength (see dispersion (optics)). This has practical technical applications, such as effective mirrors for X-rays based on total external reflection.

The phase speed is defined as the rate at which the crests of the waveform propagate; that is, the rate at which the phase of the waveform is moving. The group speed is the rate at which the envelope of the waveform is propagating; that is, the rate of variation of the amplitude of the waveform. Provided the waveform is not distorted significantly during propagation, it is the group speed that represents the rate at which information (and energy) may be transmitted by the wave (for example, the speed at which a pulse of light travels down an optical fibre). For the analytic properties constraining the unequal phase and group speeds in dispersive media, refer to dispersion (optics).

Another common definition of the refractive index comes from the refraction of a light ray entering a medium. The refractive index is the ratio of the sines of the angles of incidence θ1 and refraction θ2 as light passes into the medium, or mathematically n = sin θ1 / sin θ2. The angles are measured to the normal of the surface. This definition is based on Snell's law and is equivalent to the definition above if the light enters from the reference medium (normally vacuum). A complex refractive index is often used to take absorption into account. This is further discussed in the Dispersion and absorption section below.

A closely related quantity is refractivity, which in atmospheric applications is denoted N and defined as N = 10^6 (n − 1). The 10^6 factor is used because for air, n deviates from unity by at most a few parts per thousand.

The refractive index of a material is the most important property of any optical system that uses refraction. It is used to calculate the focusing power of lenses, and the dispersive power of prisms.
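Two refractive-index-dependent quantities mentioned above, the critical angle for total internal reflection and Brewster's angle, can be computed directly from n. The sketch below is supplementary and uses the standard relations sin θc = n2/n1 and tan θB = n2/n1.

```python
# Critical angle and Brewster's angle from the refractive indices of two media.
# Supplementary example; the formulas sin(theta_c) = n2/n1 and tan(theta_B) = n2/n1
# are standard results, not taken from the slides.
from math import asin, atan, degrees

def critical_angle(n1, n2):
    """Incidence angle beyond which light going from n1 into n2 (n1 > n2) is totally reflected."""
    return degrees(asin(n2 / n1)) if n1 > n2 else None

def brewster_angle(n1, n2):
    """Incidence angle at which the reflected light is completely polarised."""
    return degrees(atan(n2 / n1))

print(critical_angle(1.33, 1.00))   # water -> air: ~48.8 degrees
print(critical_angle(1.50, 1.00))   # glass -> air: ~41.8 degrees
print(brewster_angle(1.00, 1.33))   # air -> water: ~53.1 degrees
```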
It can also be used as a useful tool to differentiate between different types of gemstone, due to the unique chatoyance each individual stone displays. Since refractive index is a fundamental physical property of a substance, it is often used to identify a particular substance, confirm its purity, or measure its concentration. Refractive index is used to measure solids (glasses and gemstones), liquids, and gases. Most commonly it is used to measure the concentration of a solute in an aqueous solution. A refract meter is the instrument used to measure refractive index. For a solution of sugar, the refractive index can be used to determine the sugar content (see Bricks). In GPS, the index of refraction is utilized in ray-tracing to account for the radio propagation delay due to the Earths electrically neutral atmosphere. It is also used in Satellite link design for the Computation of radio wave attenuation in the atmosphere
http://www.slideshare.net/anannda/light-11522698
13
19
Want to stay on top of all the space news? Follow @universetoday on Twitter Barely two weeks into the 8 month journey to the Red Planet, NASA’s Curiosity Mars Science Lab (MSL) rover was commanded to already begin collecting the first science of the mission by measuring the ever present radiation environment in space. Engineers powered up the MSL Radiation Assessment Detector (RAD) that monitors high-energy atomic and subatomic particles from the sun, distant supernovas and other sources. RAD is the only one of the car-sized Curiosity’s 10 science instrument that will operate both in space as well as on the Martian surface. It will provide key data that will enable a realistic assessment of the levels of lethal radiation that would confront any potential life forms on Mars as well as Astronauts voyaging between our solar systems planets. “RAD is the first instrument on Curiosity to be turned on. It will operate throughout the long journey to Mars,” said Don Hassler, RAD’s principal investigator from the Southwest Research Institute in Boulder, Colo. These initial radiation measurements are focused on illuminating possible health effects facing future human crews residing inside spaceships. Video Caption: The Radiation Assessment Detector is the first instrument on Curiosity to begin science operations. It was powered up and began collecting data on Dec. 6, 2011. Credit: NASA “We want to characterize the radiation environment inside the spacecraft because it’s different from the radiation environment measured in interplanetary space,” says Hassler. RAD is located on the rover which is currently encapsulated within the protective aeroshell. Therefore the instrument is positioned inside the spacecraft, simulating what it would be like for an astronaut with some shielding from the external radiation, measuring energetic particles. “The radiation hitting the spacecraft is modified by the spacecraft, it gets changed and produces secondary particles. Sometimes those secondary particles can be more damaging than the primary radiation itself.” “What’s new is that RAD will measure the radiation inside the spacecraft, which will be very similar to the environment that a future astronaut might see on a future mission to Mars.” Curiosity’s purpose is to search for the ingredients of life and assess whether the rovers landing site at Gale Crater could be or has been favorable for microbial life. The Martian surface is constantly bombarded by deadly radiation from space. Radiation can destroy the very organic molecules which Curiosity seeks. “After Curiosity lands, we’ll be taking radiation measurements on the surface of another planet for the first time,” notes Hassler. RAD was built by a collaboration of the Southwest Research Institute, together with Christian Albrechts University in Kiel, Germany with funding from NASA’s Human Exploration Directorate and Germany’s national aerospace research center, Deutsches Zentrum für Luft- und Raumfahrt. “What Curiosity might find could be a game-changer about the origin and evolution of life on Earth and elsewhere in the universe,” said Doug McCuistion, director of the Mars Exploration Program at NASA Headquarters in Washington. “One thing is certain: The rover’s discoveries will provide critical data that will impact human and robotic planning and research for decades.” Curiosity was launched from Florida on Nov. 26. 
After sailing on a 254 day and 352-million-mile (567-million-kilometer) interplanetary flight from the Earth to Mars, Curiosity will smash into the atmosphere at 13,000 MPH on August 6, 2012 and pioneer a nail biting and first-of-its-kind precision rocket powered descent system to touchdown inside layered terrain at Gale Crater astride a 3 mile (5 km) high mountain that may have preserved evidence of ancient or extant Martian life. Miraculously, NASA’s Opportunity Mars rover and onboard instruments and cameras have managed to survive nearly 8 years of brutally harsh Martian radiation and arctic winters. Complete Coverage of Curiosity – NASA’s Next Mars Rover launched 26 Nov. 2011 Read continuing features about Curiosity by Ken Kremer starting here: Flawlessly On Course Curiosity Cruising to Mars – No Burn Needed Now NASA Planetary Science Trio Honored as ‘Best of What’s New’ in 2011- Curiosity/Dawn/MESSENGER Curiosity Mars Rover Launch Gallery – Photos and Videos Curiosity Majestically Blasts off on ‘Mars Trek’ to ascertain ‘Are We Alone? Mars Trek – Curiosity Poised to Search for Signs of Life Curiosity Rover ‘Locked and Loaded’ for Quantum Leap in Pursuit of Martian Microbial Life Science Rich Gale Crater and NASA’s Curiosity Mars Rover in Glorious 3-D – Touchdown in a Habitable Zone Curiosity Powered Up for Martian Voyage on Nov. 26 – Exclusive Message from Chief Engineer Rob Manning NASA’s Curiosity Set to Search for Signs of Martian Life Curiosity Rover Bolted to Atlas Rocket – In Search of Martian Microbial Habitats Closing the Clamshell on a Martian Curiosity Curiosity Buttoned Up for Martian Voyage in Search of Life’s Ingredients Assembling Curiosity’s Rocket to Mars Encapsulating Curiosity for Martian Flight Test Dramatic New NASA Animation Depicts Next Mars Rover in Action
http://www.universetoday.com/91959/curiosity-starts-first-science-on-mars-sojurn-how-lethal-is-space-radiation-to-lifes-survival/
13
13
for a place of habitation" William Bradford In late December of 1620, one hundred and two men, women, and children began to establish the second English permanent settlement in the New World. They christened their plantation New Plymouth after their last port of call in England. Within the next few decades, New Plymouth gave rise to numerous townships and communities in the area that came to be known as New England. The goal of its communities as well as the composition of its population placed New England in a category apart from any of the other English settlements planted in the New World during the seventeenth century. What did New Plymouth look like? And what was the character of daily existence in the town? These are questions that can only be answered by research. There are numerous surviving seventeenth century accounts that may prove to be useful in constructing a model of Plymouth plantation. However, many aspects of daily life were not deemed noteworthy by past informants and are thus lost to the historical record. The colonists may not have written about their daily activities, such as cooking dinner or constructing barns, but they did leave behind the material remains of these endeavors. The archaeological record may therefore prove valuable in learning about the day to day goings on of the colonists and how they may have perceived them. However, archaeological excavations have yet to be carried out on Plymouth plantation because it has yet to be found. The past research places the town on a Plymouth landmark known as Burial Hill, but there are many discrepancies in this interpretation. Subsequently, the plantation could just as easily lie on nearby Watson Hill. For these reasons, the previous research must be built upon and reinterpreted in order to contribute to our knowledge of the past and further our understanding of the American experience. Background: The colonization of New England In order to understand the mindset of the emigrants known today as the Pilgrims as well as their motives for colonization, we must first examine their system of beliefs. One can hardly conduct research into the history of New England without first realizing the extent and importance of its Puritan roots. Puritans, and their radical Separatist counterparts, were a minority in Old England, however they comprised the religious majority in New England. The Church of England, created by Henry VIII in the sixteenth century, made divorce feasible, rejected Latin as the official language of the Church, abolished confession, and allowed the clergy to marry (Bradford 1981: ix). Although these changes were radical for the time, many in England did not feel that they were radical enough. The Puritans wanted to reorganize and simplify the Anglican Church, thus making corruption within the Church less likely. They recognized only the sacraments of Baptism and Communion and preferred a body of church elders to the conventional Church hierarchy (Bradford 1981: x). "They wanted, above all, to return to a simpler church and to regain that passionate intensity that comes with true conversion" (Bradford 1981: xi). English Puritanism borrowed heavily from the theological teachings of Martin Luther and John Calvin, whose philosophies had spread throughout Western Europe during the sixteenth century. Luther and Calvin preached against corruption in the established churches and emphasized a more personalized covenant with God. 
"Puritans followed Luther and Calvin in objecting to the doctrine of Works - the submission to ecclesiastical authority and ordained participation in church ritual - because they were no substitute for the individual encounter with God" (Bradford 1981: xi). Although most Puritans wanted to reform or "purify" the Church of England, a number of groups believed that the Church was irreparable. One such group of Separatists, as they were known, had its roots in the small village of Scrooby, in Nottinghamshire, England. It was in Scrooby, in the year 1607, that a group of people came together to form an illegal separate church after withdrawing from their Anglican parishes. As English citizens were required by law to become members of the Church of England, many of the Scrooby group suffered persecution, in the form of fines and imprisonments (Simmons 1976: 16). It was for this reason that the Scrooby congregation decided to relocate to Holland, which enforced religious toleration, in 1608. William Bradford, who was a member of the group, wrote about this exodus some forty years later in New England: The homeless congregation took a year-long respite in Amsterdam, after which they settled in Leyden, "a fair and beautiful city and of a sweet situation . . ." (Bradford 1981: 16-17).Yet seeing themselves thus molested, and that there was no hope of their continuance there, by a joint consent they decided to go into the Low Countries, where they heard was freedom of religion for all men; as also how sundry from London and other parts of the land had been exiled and persecuted for the same cause, and were gone thither, and lived at Amsterdam and other places of the land. (Bradford 1981: 10) The group remained in Leyden for eleven years, "enjoying much sweet and delightful society and spiritual comfort together in the ways of God . . ." (Bradford 1981: 18). Their flock grew over the years as they accepted other exiled Englishmen into their congregation, including the notable minister John Robinson. Although they no longer suffered religious persecution at the hands of a national government, the Leyden group remained alienated. They were English men and women, removed from their homes and exiled in a strange land. They did not speak the language and were not familiar with the customs of the Dutch. "Yea, some preferred and chose the prisons in England rather than this liberty in Holland . . ." (Bradford 1981: 24). Dissention grew to the point that the congregation eventually decided to relocate. Bradford states several reasons for their removal from Holland, among which are the cultural hardships the English group bore in a strange land, as well as the increasing age of many of its members (Bradford 1981: 24). However, it seems that there were two great sources of anxiety that persuaded the congregation to migrate once again. The first was that an armistice between Spain and Holland was about to come to an end. The twelve year's truce, which began on March 30, 1609 was due to end in 1621, thus renewing the threat of religious persecution by way of the Spanish Inquisition. The second, and probably the most influential, reason for migration was the fear of acculturation into Dutch society. Many of the children of the group either had not known or could not remember much of England, and it was postulated that the children would lose their language and culture and essentially cease to be English. 
Members of the Leyden congregation ultimately made the decision to leave Holland, but they were uncertain in choosing a new place for habitation. They could not return to England for the same reasons that had forced their exile into Holland twelve years before. It appears that English America was the closest possible substitute for Old England, and they would be free to worship as they saw fit. In their struggle for a new residence, many of the group expressed a new desire "of laying some good foundation . . . for the propagation and advancing the gospel of the kingdom of Christ in those remote parts of the world . . ." (Bradford 1981: 26). This notion was significant and remained a motivation for English emigration into New England for decades to come. The congregation sent Robert Cushman and John Carver as emissaries to the Virginia Company of London, which was sponsoring the English venture at Jamestown, to apply for land patents within the Company's vast holdings. The Virginia Company accommodated the group and even granted them a patent, but it was never exploited. The Leyden assembly favored the support of a group of private backers and adventurers, headed by a London merchant named Thomas Weston. Bradford writes in his history, Of Plymouth Plantation,

[T]hey had heard, both by Mr. Weston and others, that sundry Honourable Lords had obtained a large grant from the King for the more northerly parts of that country . . . to be called by another name, viz., New England. Unto which Mr. Weston and the chief of them began to incline it was best for them to go . . . for the hope of present profit to be made by the fishing that was found in that country. (Bradford 1981: 39-40)

Thus the Leyden group acquired a set of sponsors as well as a prospective location for their plantation. The Leyden assembly recruited mariners and miscellaneous workers and artisans in London and set sail, aboard the Mayflower, on September 6, 1620. Of this group of one hundred and two, known today as the Pilgrims, only thirty-five were actually members of the Leyden congregation (Simmons 1976: 16). They arrived at Cape Cod on November 19, 1620 but did not find a suitable place to plant their community until the nineteenth of December. They chose a site with a protected harbor and high grounds, suitable for defense, and christened their plantation New Plymouth, after their last port of call in England. (Heath ed. 1963: 41-42) The planting of New Plymouth was followed by an influx of immigration into New England during the next three decades. During the 1630's and 1640's numerous towns sprang up in Plymouth colony while the colony of Massachusetts Bay grew to the North around the port town of Boston. Historian R.C. Simmons writes, "The arrival of the Pilgrims in Plymouth . . . marked the beginning of a voluntary movement of religiously discontented persons to America, a movement that would swell to a flood in the late 1620's and the 1630's" (Simmons 1976: 17). Other Puritan communities eagerly followed the example of the Pilgrims, who had set a precedent of leaving the Old World for the New in order to "advance the gospel . . . in those remote parts of the world . . ." (Bradford 1981: 26). The Pilgrims left England primarily to escape religious persecution by the early Stuart rulers, however, other Puritan groups came to the New World with the hope of fulfilling a greater purpose.
"Their 'peculiar mission' was to establish the true Christian commonwealth that would thenceforth serve as a model for the rest of the Christian world" (Greene 1988: 21). In its religious orientation, New England differed greatly from the other English New World settlements of the time, however, religion was not the only distinguishing factor. In comparison with other English colonies, the age structure and sex ratio of the New England emigrant population closely resembled that of Old England (Anderson 1993: 102-103). The most apparent explanation for the region's demographics is that whole communities and families were often transplanted from England. "Fully 87.8 percent of the emigrants traveled with relatives of one sort or another. Nearly three-quarters came in nuclear family units, with or without children" (Anderson 1993: 104). New England also experienced lower mortality rates than either England or any of its colonies, due to relatively less serious epidemics (Greene 1988: 20). All of these factors contributed to the population growth and relative stability of New England during the seventeenth century. Problems with the Indians in the 1670's culminated in the conflict known as King Philip's War. A lack of trade and commercial farming, along with the toll of King Philip's War made Plymouth the poorest of the New England colonies. Plymouth colony had received charters from private adventurers as well as the Dominion of New England, however it had never received a formal charter from the English crown. The late 1680's were turbulent for Plymouth as it sat on the edge of bankruptcy. The story of the colony finally came to an end in 1691, when it was incorporated, by a formal charter, into the colony of Massachusetts Bay. (Simmons 1976: 108) Archaeology and the interpretation of the past Historical documents have long been the mainstay of researchers interested in exploring early life in Plymouth colony. The historical record of Plymouth is abundant with documents such as court records, probate inventories, deeds and wills, and eyewitness accounts. Despite the richness and diversity of the Plymouth records, they paint an incomplete picture of life in the colony. While the historical record is a valuable tool in the interpretation of the past, there is much to be learned that can not be found within the pages of a book or a court record. The simple occurrences of everyday life were things that none thought to write about and are thus lost to the historical record. Whereas informants may not have written about everyday activities, such as cooking dinner or replacing the worn clapboards of a dwelling house, they did leave behind the material remains of these endeavors. The archaeological record, with its complement of nails, shards of glass, and broken ceramics, may therefore prove helpful in learning about the everyday goings on of the colonists and how they may have perceived them. Anthropologist James Deetz writes, ". . . the combined use of archaeological and documentary materials should permit us to say something about the past that could not have been said using only one set of data" (Deetz 1996: 32). The problem that arises in conducting archaeological investigations of early Plymouth Plantation is essentially where to begin. Archaeologist Ivor Noel Hume seemingly states the obvious when he writes, "The first requirement for an archaeologist is that he have a site to dig . . ." (Noel Hume 1969: 23). 
However, finding a site is not always an easy task, especially if it is in an urban environment and has been continuously occupied since its settlement by European colonizers, such as New Plymouth. The remainder of this paper will therefore be spent in reviewing the past research and reinterpreting the historical sources in order to shed light on the possible whereabouts of the initial settlement and fortification of New Plymouth as well as what it may have looked like. There are many surviving primary sources from seventeenth century Plymouth, a great deal of which have been published. Many of the surviving accounts are essentially advertisements written by those in the colony aimed at encouraging investors and attracting prospective emigrants. Letters and personal papers also prove to be a wealth of descriptive information. And of course there are the usual state related records, such as wills, deeds, probate inventories, and court records. The following is a review of many of the primary sources that may be of interest to the researcher of early Plymouth colony. Histories and Relations William Bradford's Of Plymouth Plantation 1620-1647 is a valuable resource and a remarkable book. William Bradford, who was a member of the Leyden congregation as well as the colony's second governor, basically relates the story of the colony from its separatist roots in Leyden through the settlement and growth of Plymouth Plantation. A Relation or Journal of the English Plantation settled at Plymouth in New England . . . (1622), otherwise known as Mourt's Relation, was written by George Morton, William Bradford, Edward Winslow, and Robert Cushman to illustrate the progress of the colony and encourage English investors. This book recounts the construction of New Plymouth as well as several journeys into the surrounding areas and subsequent encounters with the Native populations. Good News from New England (1624), written by Edward Winslow and also known as Winslow's Relation, tells of the happenings of New Plymouth after the arrival of the Fortune in November of 1621. This is also a relation written with the intent of encouraging investors and highlighting the progress of the colony. A large part of this relation deals with Native and English interactions. A Description of New England, written by John Smith in 1616, describes the area of New England, witnessed by Smith on a voyage along the coast of New England. He speaks of the Native populations as well as New England's prospects for possible adventurers. Advertisements For the unexperienced Planters of New England, or anywhere . . . (1631), by John Smith is basically a description of the New England area. John Smith visited Plymouth and provided counts for people, houses, and livestock, as well as descriptions of the town of New Plymouth. This little account also contains John Smith's map of New England, drawn in 1614. New England's Trials (1620), also by John Smith, tells of the prosperous fishing industry of northern New England and Newfoundland and of the shipping traffic between Europe and New England. Smith wrote this book to encourage English investors and establish markets for New English fish in European countries. Three Visitors to Early Plymouth is an important compilation of letters about the settlement of New Plymouth before 1627. These letters are a wealth of descriptive information, including the only known reference to the orientation of Plymouth with compass bearings. 
The accounts were written by a Virginian, an Englishman, and a Dutch ambassador from the neighboring settlement of New Amsterdam. Records of the Colony of New Plymouth in New England, first published in the mid nineteenth century, is the major body of published records from Plymouth. This compilation includes items such as court records, deeds, and wills, and is invaluable to anyone doing research on Plymouth colony. There are but three known seventeenth century maps of Plymouth colony (Appendix A): Samuel de Champlain's map, drawn in 1605 and published in 1613, of Port St. Louis is most likely a map of Plymouth Harbor, complete with soundings. The map illustrates a large Indian village surrounding the harbor. This map may have been known by navigators and adventurers of the time. John Smith's map of New England, published in his Advertisements for the Planters of New England (1631), was drawn in 1614 on a voyage along the New England coast. This is a general map of southern New England and is a very detailed depiction of the coastline and inland waterways. William Bradford's sketch of "The meersteads & garden plotes of which came first layd out 1620" is the only known depiction of the original town layout. Found on the first page of Volume twelve of the Records of the Colony of New Plymouth, the sketch shows seven house lots laid out along the intersection of "the streete" and the "high way." The nineteenth century: an impetus for research Fueled by a strong nationalist mentality, nineteenth century America was a country eagerly searching for an identity. To this end, Americans began to look back to their colonial roots for a sense of pride and accomplishment. Episodes from America's colonial past became infused in the literature of the time. Henry Wadsworth Longfellow's "The Courtship of Miles Standish," for example, was first published in 1888. The legend states that John Alden, a cooper who came over on the Mayflower, went to speak to Prescilla Mullins on behalf of Miles Standish. When he began to plead his case for Miles Standish, she uttered the now famous line, "Prithee, why don't you speak for yourself John?" (Addison 1911: 128). As the stories circulated and became popular, the nation began to forge its own origins based in a mythical past complete with saints and heroes, of which American society was directly descended. It was at this time that the solemn image of the Pilgrims, dressed in black from head to toe and covered with buckles, was fabricated. The story of the Pilgrims and their flight from an oppressive English monarchy in order to establish religious freedom for all became prevalent. And why wouldn't a story such as this become popular? The Pilgrims were attractive in that they seemingly embodied much of what America stood for; religious freedom, perseverance, and an unwavering sense of individuality. They refused to adhere to an unjust law and compromise their beliefs and, in so doing, they established a nation of their own. The fact of the matter is that of the one hundred and two men, women, and children who sailed to Plymouth aboard the Mayflower, only thirty-five were actually fleeing any sort of religious persecution (Simmons 1976: 16). The sixty-seven other passengers that made up the majority of the colonists were comprised of soldiers, artisans, and laborers. However, the group is collectively known as the Pilgrims, sometimes referred to as the "Pilgrim fathers." 
As this latter term implies, the Pilgrims were seen as the forefathers of modern American culture, thus making nineteenth century Americans their progeny. The nineteenth century preoccupation with the story of the Pilgrims inspired many historians and researchers to look through the surviving seventeenth century documents and records. Many of the documents and court records, letters, and memoirs were transcribed and published. Consequently, many of the historical interpretations available today were composed during this time period. Many of these histories were no doubt influenced by the notion of a mythical past. For example, the author of the book The Romantic Story of the Mayflower Pilgrims, published in 1911, uses language in a way to venerate the Pilgrims as the progenitors of modern America. The intent of the author is expressed in the title of his book, and as such, the Pilgrims were much romanticized in history books as well as in the literature of the time.

Such is the story of the Mayflower Pilgrims, romantic, heroic idyllic, based also upon the principles which have molded and maintained a mighty free nation . . . For on this hallowed spot, with its historic environment and its striking reminders of a great and honoured past, was rocked the cradle of a nation whose civil and religious liberty it was the first rude home. (Addison 1911: 103, 111)

Much of the nineteenth century research was aimed at interpreting and describing the physical aspects of Plymouth Plantation, such as its appearance as well as its location and orientation. In this line of research, one of the major works of the time is William T. Davis's Ancient Landmarks of Plymouth. First published in 1883, Ancient Landmarks of Plymouth remains a valuable resource to present day researchers. Davis's interpretations and conclusions have generally become accepted as historically accurate, however it is very likely that Davis, like others of his time, was influenced by the numerous myths and stories that were circulating about Plymouth and the Pilgrims. When one reviews the past research, it becomes apparent that there are discrepancies between the historical sources and the accepted interpretation of Plymouth Plantation. However, before these discrepancies can be highlighted or any hypothesis formulated, the geography and specific conditions of the Plymouth Bay area must first be understood. Cape Cod's distinctive shape affords protection to the inland coastline and waterways. A long spit, known today as Plymouth Beach, juts into Plymouth Bay, a smaller subsidiary of Massachusetts Bay, and creates a protected harbor. The seashore quickly gives way to higher ground and numerous landforms, most of which are small hills and plateaus. Pleistocene glacial deposits lie above sedimentary strata as a testament to the landscape forming power of glaciation. The immediate landscape of the present site of the town of Plymouth contains several distinctive landforms worthy of noting. The first is a tidal creek that bisects the town. The creek, named Town Brook, connects a large inland lake, known as the Billington Sea, to Plymouth Harbor and feeds numerous small ponds along the way. Town Brook lies between the two highest landforms of the town: Burial Hill to the north and Watson Hill to the south. According to the United States Geological Survey (USGS), Burial Hill, so named for its cemetery, rises one hundred and twenty feet above sea level.
It is significant to note that the eastern face of the hill, closest to Plymouth Harbor, is a steep incline. Watson Hill, according to the USGS, rises some ninety feet above sea level at a much more gradual pitch than Burial Hill. (See Appendix C) Keeping the landforms and general geographical features of the Plymouth area in mind, we may now return to William T. Davis and his interpretation of the town of New Plymouth. Davis does not provide a detailed model of what the town may have looked like, but he does go into detail with place names and their change over time. He pays particular attention to Burial Hill and the street names and comes to the conclusion that the initial settlement of New Plymouth must have been built upon Burial Hill. However, Davis rarely cites his sources and relies heavily on oral tradition. He subsequently uses the phrase "it is well known that . . ." often to justify his statements. Some of his assumptions are therefore questionable and should be taken as such. Burial Hill is prominent in Davis's interpretation of Plymouth Plantation. It receives its name from the cemetery located on the hill's summit. Of the grave markers in the cemetery, there are but four surviving from the seventeenth century. These are the grave stones of Edward Gray, 1681; William Crow, 1684; Hannah Clark, 1687; and Thomas Clark, 1697 (Davis 1883: 133). Before its use as a cemetery, Davis cites Burial Hill as the location for the original fortification of the town, constructed in 1622:

It is well known that its commanding position induced the erection of fortifications upon [Burial Hill] at an early period . . . Traces may now be seen of the fort of the Pilgrims on the top of the hill, at what was, in the earliest days, the junction of Leyden and Spring Streets, where its guns could command both. In 1643 a watch house was built near the site of the fort, and a little beneath the surface fragments of the brick used in its construction may still be found. (Davis 1883: 130, 135)

At the time of the publishing of Ancient Landmarks of Plymouth, much was not known about site formation processes, as archaeology was still very much in its infancy. It is very likely that if one were inclined to look, one may find brick fragments "a little beneath the surface" almost anywhere in Plymouth. Davis states that the fort was enlarged several times and "rebuilt one hundred feet square" in 1676, at the time of King Philip's War. The following year "its material was sold to William Harlow and used by him in the construction of a house still standing." He further states, "In a recent repair of the house its oak posts and beams were laid bare, and disclosed the ancient mortises made in fitting the frame of the fort. An ancient iron hinge was also found . . . which was probably one of the hinges on which the gate of the fort was hung" (Davis 1883: 135). Nowhere in the records does it actually say that the fortification was dismantled the year following King Philip's War. The Harlow house story may be an example of the influence that local legend exerts on historical interpretation, as it can be neither confirmed nor denied. Davis devotes a significant number of pages to street names and their origins as well as their change over time. He states, "Before 1633 there were five streets and two lanes laid out within the limits of the town" (Davis 1883: 156). The first two streets, laid in 1620, created an intersection at the heart of the town.
Leyden Street led eastward from the fort to the harbor and Spring Street crossed Leyden Street to connect the community to Town Brook (Davis 1883: 156-157). Davis writes of the remaining streets,

The second of the five streets was South Street, now Market Street, and the three others, High, now Summer Street, Main and New, sometimes in old deeds called Queen Street, now North Street. The two lanes referred to are Spring Lane . . . now called Spring Street, and Woods Lane, or 'lane leading to the woods,' which is now Samoset Street. (Davis 1883: 157)

William T. Davis speaks of the streets in respect to the late nineteenth century, when his book was published. For example, Davis identifies the nineteenth century Leyden Street as the pathway that led from the seventeenth century fort to Plymouth Harbor. He basically uses nineteenth century landmarks to describe the seventeenth century landscape, hence the title of his work, Ancient Landmarks of Plymouth. In doing this, the validity of some of Davis's conclusions must be called into question and they will indeed be inspected in further detail later in this paper. Ancient Landmarks of Plymouth is not the only nineteenth century work dealing with Plymouth. There are many nineteenth century volumes of seventeenth century records, letters, and relations, of which the editors often comment in footnotes or articles. Among these works is Alexander Young's 1841 Chronicles of the Pilgrim fathers of the colony of Plymouth. On this passage from Mourt's Relation, "so many as could went to work on the hill where we purposed to build our platform for our ordnance . . . and from whence we may see far into the sea, and might be easier impaled, having two rows of houses and a fair street," Alexander Young comments, "I think something is omitted here. The house-lots were not laid out on the hill, but in front of it, on Leyden Street" (Young 1841: 170). Young goes so far as to correct a first person account, and his note illustrates a nineteenth century model of the town of New Plymouth, with a fortification on a hill and the town beside the seashore, in front of the hill. Nineteenth century maps are very useful in visualizing the nineteenth century model of the town. The map of Plymouth Village of 1846, the map of Plymouth of 1830, and a map of Plymouth from Young's 1841 work all show a village in between the hill line and the seashore. The map of Plymouth from the 1879 Plymouth County Atlas shows Burial Hill as the "Site of a watch-tower used by Pilgrims," thus naming Burial Hill as the site of the original fortification. The 1830 and 1846 maps show Watson's Hill as the site of "the first Indian treaty" and the "first interview of Massasoit with the Pilgrims" respectively. (See Appendix B)

The seventeenth century accounts

In order to gain a better understanding of what the town of Plymouth may have looked like as well as its location, the surviving seventeenth century sources must be utilized. William Bradford's sketch of the town, entitled "The meersteads & garden plotes of which came first layd out 1620," is the only known map of the original town layout (See Appendix A). The sketch shows seven house lots, facing "the streete" and bisected by a "high way." The house lots are located on what Bradford terms "The south side," while "The north side" is essentially bare.
Mourt's Relation states that the number of dwellings to be built was reduced to nineteen by placing single men with families and the town was to be built "having two rows of houses and a fair street" (Heath ed. 1963: 42). During the winter of 1620-1621 the Pilgrims were forced to live aboard the Mayflower and commute to the shore daily in order to build the town. Edward Winslow writes, in a letter to George Morton dated December 11, 1621 (the following winter), "we have built seven dwelling houses and four for the use of the plantation and have made preparations for diverse others" (Young 1841: 230). Although the Fortune left thirty-five new colonists upon its departure on December 13, 1621, many of the original settlers had died the previous winter. The seven original house lots were probably meant to provide shelter for the entire group until more could be built. William T. Davis states, "It is probable . . . that in the rapidly-reduced condition of the colony the seven houses laid down on the plan were all that could be built or were needed to furnish shelter from the winter's cold . . ." (Davis 1883: 53). Soon after the departure of the Fortune the residents of the newly constructed town faced a new peril. In January of 1622, the Narragansett Indians sent a message to the people of Plymouth in the form of a bundle of arrows wrapped within a snakeskin. This was interpreted by Squanto, whom the English had befriended, as a military threat and a challenge to the well being of the colonists. As an answer to the Narragansett challenge, the Governor filled the snakeskin with powder and shot and sent it back; the Narragansett would not receive it and returned it to the English. (Bradford 1981: 106) Fearing the threat of an Indian attack, the colonists thought it necessary to impale their town. Edward Winslow writes of this endeavor in his relation,

In the mean time, knowing our own weakness, notwithstanding our high words and lofty looks towards them; and still lying open to all casualty, having, as yet, under GOD, no other defence than our arms: we thought it most needful to impale our town . . . Taking in the top of the hill under which our town is seated: making four bulwarks or jetties without the ordinary circuit of the pale, from whence we could defend the whole town in three whereof, are gates; and the fourth, in time to be. (Bercovitch ed. 1986: 520)

On March 22, 1622, three hundred and forty seven members of Virginia's English population perished in a great Algonquin uprising (Morgan 1975: 99). Upon hearing this news, the Plymouth colonists decided to fortify their newly impaled town. The erection of the fort was accomplished in two months (May-June, 1622) and it was built on the hill enclosed by the palisade of the town. Bradford gives a good description of the fort in his history,

This summer they built a fort with good timber, both strong and comely, which was of good defense, made with a flat roof and battlements, on which their ordnance were mounted, and where they kept constant watch, especially in time of danger. It served them also for a meeting house and was fitted accordingly for that use. (Bradford 1981: 123)

There are a few valuable first person accounts of the town of New Plymouth made by visitors to the town during the 1620's. The first is in a letter from John Pory to the Earl of Southampton dated January 13, 1622/23.
John Pory had been a student and friend of Richard Hakluyt, the great promoter of colonialism, and was a cousin to George Yeardley, the governor of Virginia at the time. Pory stopped by New England on his way back to Old England on a mission to inspect the fishing industry, in the hopes of getting the Virginia Company a piece of the royal monopoly held by the Council for New England. He was also to assess the new settlement of Plymouth as well as its future prospects. The town had been fortified for less than a year when John Pory visited Plymouth and his letters do not bring forth much new information, however he does give an idea of the size of the town and the extent of the palisade.

And their industry as well appeareth by their building, as by a substantial palisado about their [town] of 2700 foot in compass, stronger than I have seen any in Virginia, and lastly by a blockhouse which they have erected in the highest place of the town to mount their ordnance upon, from whence they may command all the harbour. (James ed. 1963: 11)

Emmanuel Altham wrote several accounts of the colony and his most useful account of the town is his first. He provides a much more detailed description of the community than John Pory, in a letter to Sir Edward Altham dated September, 1623,

I mean the plantation at Patuxet (Plymouth's Indian name). It is well situated upon a high hill close unto the seaside . . . In this plantation is about twenty houses, four or five of which are very fair and pleasant, and the rest (as time will serve) shall be made better. And this town is in such manner that it makes a great street between the houses, and at the upper end of the town there is a strong fort, both by nature and art, with six pieces of reasonable good artillery mounted thereon; in which fort is continual watch . . . . This town is paled round about with pale of eight foot long, or thereabouts, and in the pale are three great gates. (James ed. 1963: 24)

This reference is significant because Altham provides a count of twenty or so houses, up from Edward Winslow's December, 1621 count of seven dwellings and four common houses. He also provides a more detailed description of the town's defenses. Captain John Smith visited Plymouth plantation in 1624 and placed a description of the town in his 1631 work entitled Advertisements For the unexperienced Planters of New England, or any where. Smith writes, "their Towne containes two and thirty houses, whereof seven were burnt . . ." and he observes a population of "about an hundred and fourescore persons" (Smith 1971: 18). If John Smith's house count is correct, then the colonists must have built twelve or so dwellings in one year, since the 1623 count of "about twenty houses" by Emmanuel Altham. There are a few other possibilities for the increased number of structures between the two accounts. John Smith was writing to encourage emigration and investors and may therefore have exaggerated in order to illustrate the progress of the colony. Another possibility is that Smith may have included common houses in his count or Altham may not have included burnt structures. John Smith goes on to describe the town, which he states, "was impailed about halfe a mile, within which within a high Mount, a Fort, with a Watch-tower, well built of stone, lome, and wood, their Ordnance well mounted . . ." (Smith 1971: 18). This description is similar to the others, with the notable addition of a watch tower to the fort, and is most likely accurate.
Perhaps the most descriptive of the accounts of the 1620's came from Isaack de Rasieres, a visiting ambassador from the neighboring Dutch colony of New Amsterdam. In a letter to Samuel Blommaert of circa 1628, de Rasieres writes,

New Plymouth lies on the slope of a hill stretching east towards the sea-coast, with a broad street about a cannon shot of 800 feet long, leading down the hill; with a [street] crossing in the middle, northwards to the rivulet and southwards to the land. The houses are constructed of clapboards, with gardens also enclosed behind and at the sides with clapboards, so that their houses and courtyards are arranged in very good order, with a stockade against sudden attack; and at the ends of the streets there are three wooden gates. In the center, on the cross street, stands the Governor's house, before which is a square stockade upon which four patereros (small cannon) are mounted, so as to enfilade the streets. Upon the hill they have a large square house, with a flat roof, built of thick sawn planks stayed with oak beams, upon the top of which they have six cannon, which shoot iron balls of four and five pounds, and command the surrounding country. The lower part they use for their church, where they preach on Sundays and the usual holidays. (James ed. 1963: 76)

Isaack de Rasieres's rather lengthy account is a wealth of descriptive information. He goes so far as to describe the construction materials of the structures as well as the weight of the cannon shot. This is the first reference that contains any description of the square stockade that commanded the intersection of the town, and it is reasonable to assume that this was added between 1623 and 1628. De Rasieres's description is also the only known account of the town with reference to compass bearings, which is extremely important, as will be shown later, in assessing the possible locations for the original town site. The previous first person accounts were used to create a working model of the town of New Plymouth by the living history museum known as Plimoth Plantation. The depiction is of a diamond shaped, palisaded town with houses lined up on either side of a main street. Garden plots are located behind the houses and continue to the extent of the palisade wall. At the center of the town is the intersection with the "square stockade upon which four patereros are mounted" identified by de Rasieres. Of the four points of the diamond shaped palisade wall, three are quadrangular bastions with entrance gates and the fourth is a square blockhouse and watch tower complete with ordnance. (See Appendix C) The model is a very useful tool and helps us to visualize the previously mentioned accounts of the town. However, there are a few possible flaws in the rendering that are worthy of noting. For example, the bastions in the model are quadrangular and most likely would have been circular, as in other English frontier fortifications of the time (Kelso 1996: 34-35). The shape of the palisade walls is unknown and could have existed in many forms, therefore the depiction of a diamond shaped town is as good a guess as can be made until archaeologists can prove otherwise. The final discrepancy in the Plimoth Plantation depiction is that it is too small. If the dwelling and population counts in the previous descriptions of the town are believed to be accurate, then the model must be greatly expanded in order to hold more house and garden plots.
However, it must be remembered that the Plimoth Plantation rendering is a model and is not meant to be a literal depiction of the town, therefore its greatest value is as a reference material. New Plymouth was a New World English colonial settlement, however there were numerous colonial campaigns in the Old World that had established a precedent for English colonial frontier architecture. At the onset of colonial endeavors in Virginia and New England, a campaign was already in progress to colonize Northern Ireland. The Ulster plantation was widely known and was used as a model for other English frontier settlements. As a result, the influence of the Ulster plantation may be seen in the architecture of various New World settlements, including Plymouth plantation. It is reasonable to assume that under similar conditions, English colonists in the New World would take advantage of the experiences of their countrymen in northern Ireland. The Ulster plantation provided the only available prototype to English adventurers of the New World. Anthony Garvan writes of the Ulster communities,

Within this narrow empire England had developed a traditional technique of control for frontier communities. At its base stood the bastide fortress, essentially a garrison town designed to give English minority groups protection from numerically superior but largely unorganized native populations . . . Not only did he wall the city against external attack, but he provided against internal rebellion of the Irish as well . . . In 1618 - 1619 [Londonderry] carried on his ideas and placed cannon in the market place, which commanded the main streets of the town . . . Every town . . . had three parts: the cottages of the tenants, the lot for the Established Church building, and the bawn. (Garvan 1951: 27-28, 35)

This reference should immediately bring to mind the "square stockade upon which four patereros are mounted, so as to enfilade the streets" that de Rasieres described in 1628 (James ed. 1963: 76). New Plymouth also fit the tripartite town plan as it consisted of house lots as well as a dual purpose bawn, or fort, that also served as a place of religious worship for the town residents. In many ways Londonderry was the prototypical Ulster plantation. Garvan writes,

Londonderry reflected continental, colonial, and military traditions adapted to the peculiar social conditions of the Ulster plantation . . . it was a walled city, but the rudimentary bastions of the early seventeenth century replaced medieval round towers . . . it drew a grid of streets parallel to the principle cross axes of the four gates . . . the houses were laid out with large gardens and back lots in part to reduce the population within the walls and in part to provide food in time of siege. (Garvan 1951: 35)

The plan of Londonderry is very similar to that of the Plimoth Plantation rendering of New Plymouth, including the axes of the streets and the back lots or "garden plotes." However, Londonderry was a large settlement, and New Plymouth was a small village. The smaller settlement of Macosquin may provide a better image of what New Plymouth may have looked like. The plan of Macosquin shows a main street that leads to a fortification on a high knoll at one end and a church at the other. Lined up on either side of the main street are houses with back lots. There are two smaller streets that intersect the main street at right angles and lead to a river. New Plymouth could very well have looked much like Macosquin, which would make sense as they were both frontier settlements in uncertain environments. Taken as a whole, New Plymouth consists of elements from both Londonderry and Macosquin, which strengthens the argument that the New England colonists could have drawn from the experiences of the Ulster Plantation.

Reinterpretation: The possible sites of Plymouth plantation

Taken as a collective, the first person descriptions of the early seventeenth century plantation of New Plymouth have several points in common. First and foremost, virtually every account specifically states that New Plymouth lies on the slope of a hill, in contrast with the nineteenth century model of a town with a fort on a hill and house lots in front of the hill. The present Burial Hill has generally been accepted as the landform that was utilized by the colonists for defensive purposes. William T. Davis, as aforementioned, stated that "It is well known that [Burial Hill's] commanding position induced the erection of fortifications upon it at an early period . . ." (Davis 1883: 129). However, the first person accounts state that it was not only a defensive structure that sat upon a hill, but the entire palisaded town. Emmanuel Altham stated "[New Plymouth] is well situated upon a high hill close unto the seaside . . ." (James ed. 1963: 24). Reviewing the geography of Burial Hill, Frank H. Perkins writes, "It is irregular in form and contains about eight acres" (Perkins 1947: 5). It must also be remembered that, according to the United States Geological Survey (see Appendix C), the eastern face of Burial Hill rises sharply. De Rasieres stated that "New Plymouth lies on the slope of a hill stretching east towards the sea-coast . . ." (James ed. 1963: 76). Supposing that the nineteenth century model of a fortification on Burial Hill is correct, it becomes questionable as to whether the entire town of New Plymouth could fit on eight acres of the hill, especially on its steep eastern face. Indeed, if the palisade of "2700 foot in compass" that Pory reported enclosed anything like a square or diamond-shaped area, it would have taken in on the order of ten acres, more ground than the whole of Burial Hill. In contrast to Burial Hill, nearby Watson Hill has a much more gradual slope and, more importantly, is much larger than Burial Hill. Nowhere in the records does it actually state that the town was constructed on Burial Hill, thus the Burial Hill scenario is an historical interpretation. It subsequently remains a possibility that the historical record may be utilized to create another distinct model. Mourt's Relation describes the landing of the Mayflower, under the date of December 19, 1620,

After our landing and viewing of the places, so well as we could we came to a conclusion, by most voices to set on the mainland, on the first place, on a high ground, where there is a great deal of land cleared, and hath been planted with corn three or four years ago, and there is a very sweet brook runs under the hill side, and many delicate springs of as good water as can be drunk, and where we may harbor our shallops and boats exceeding well . . . In one field is a great hill on which we point to make a platform and plant our ordnance, which will command all round about. From thence we may see into the bay, and far into the sea . . . . (Heath ed. 1963: 41)

There is no doubt that this is a reference to the choosing of the site of Plymouth for the construction of the town. The "great hill" referred to in the latter part of the account has been interpreted as Burial Hill because it is the highest in the area. According to the USGS (see Appendix C), Burial Hill rises one hundred and twenty feet above sea level, however Watson Hill rises nearly as high at ninety feet above sea level. The "very sweet brook" in the account is most likely Town Brook, which runs between Burial Hill and Watson Hill and therefore "runs under the hill side" of both. The account of the first meeting of Massasoit, the great Wampanoag sagamore (or chief), with the leaders of the plantation is recounted in Mourt's Relation under the date of March 22, 1621 as follows,

[A]fter an hour the king came to the top of a hill over against us, and had in his train sixty men . . . Squanto went again unto him, who brought word that we should send one to parley with him, which we did . . . In the end he left him in the custody of Quadequina his brother, and came over the brook, and some twenty men following . . . . Captain Standish and Master Williamson met the king at the brook, with half a dozen musketeers. They saluted him and he them, so one going over, the one on the one side, and the other on the other, conducted him to a house then in building, where we placed a green rug and three or four cushions . . . . (Heath ed. 1963: 56)

The 1830 and 1846 maps (Appendix B) show Watson Hill as the site of the "first interview of Massasoit with the Pilgrims," and the site where the "first Indian treaty" was created. It is most likely that the agreement between Massasoit and the colonists was made in the "house then in building, where we placed a green rug and three or four cushions . . ." If Watson Hill was indeed the site of the first "interview" of Massasoit and the colonists, then it is more than likely also the site of the house to which Massasoit was escorted. If this is true, then the original site of the town must be upon Watson Hill as well. The ambiguity of many of the surviving accounts has led historians and antiquarians to believe that Burial Hill holds the site of the original town, however there is a first person description of the town in relation to compass bearings. The circa 1628 Isaack de Rasieres letter describes the town in relation to the points of the compass, "New Plymouth lies on the slope of a hill stretching east towards the sea-coast, with a broad street . . . leading down the hill; with a [street] crossing in the middle, northwards to the rivulet and southwards to the land" (James ed. 1963: 76). According to William Bradford's sketch, the heart of the town was an intersection of two streets in which William T. Davis, among others, postulated that one street led to Town Brook and the other to the seashore. The only body of water that could be referred to as a rivulet in the immediate vicinity of Plymouth is Town Brook and if one of the streets led north to Town Brook, then the town must have been on the south side of the rivulet. This would place the town on Watson Hill, on the south side of Town Brook.
Although the de Rasieres letter challenges the notion that Burial Hill was the site of Plymouth plantation, the reference has generally been dismissed. It has been either ignored or argued that de Rasieres simply reversed his bearings and the settlement was therefore constructed on the north side of Town Brook, on Burial Hill. The notion that de Rasieres confused his bearings is improbable because he clearly stated that "New Plymouth lies on the slope of a hill stretching east towards the sea-coast . . ." He obviously knew which way was east, therefore there is little reason to doubt his other compass bearings. However, one historical reference should not be taken as irrefutable evidence that the original settlement of New Plymouth lay on the south side of Town Brook. According to New Englands Prospect, written in 1634, the Native term for the plantation of New Plymouth was Pawtuxet (Patuxet) (Bercovitch ed. 1986: 116). New Plymouth was built on the site of the previous village of Patuxet, which had been wiped out by a series of epidemics brought by European fishermen a few years before the coming of the Pilgrims. Mourt's Relation confirms Patuxet as the site of New Plymouth in a brief reference, "and Squanto, the only native of Patuxet, where we now inhabit . . ." (Heath 1963: 55). William T. Davis states in his work Ancient Landmarks of Plymouth that Patuxet was "The Indian name, perhaps, of that part of Plymouth south of Town Brook" (Davis 1883: 153). The Mourt's Relation passage describing the site for the construction of the plantation states that it was "a high ground, where there is a great deal of land cleared, and hath been planted with corn three or four years ago . . ." (Heath 1963: 41). Davis states that Watson Hill was "called by the Indians Cantaughcantiest, meaning 'planted fields' . . ." (Davis 1883: 156). It is very likely that Cantaughcantiest, or Watson Hill, was the hill that "hath been planted with corn three or four years ago" that was described as the site of the town in Mourt's Relation and a part of the Native village of Patuxet. The colonists befriended a number of Indians in the early years of the colony that served as interpreters and teachers to the English. Although the most well known was Squanto, there was another by the name of Hobomok that was very close to the English. Bradford writes in his history, "And there was another Indian called Hobomok come to live amongst them, a proper lusty (strong or stout) man, and a man of account for his valour and parts amongst the Indians, and continued very faithful and constant to the English till he died" (Bradford 1981: 97). Ironically, Hobomok died from a European disease contracted from his close English friends. It may be implied from Bradford's passage that Hobomok lived within the town of New Plymouth. Edward Winslow confirms this notion in his relation, "In the mean time, an Indian called Hobbamock, who still lived in the town, told us, that he feared the Massachusets . . ." (Bercovitch ed. 1986: 521). William T. Davis states that "Hobbamak's Ground" was "a parcel of land on Watson's Hill occupied by Hobbamak, by permission of the colony, before 1623" (Davis 1883: 152). It is probable that Hobamok lived on this parcel of land on Watson Hill and if he did indeed live in the town, as Winslow and Bradford suggest, than the town was most likely situated upon Watson Hill as well. Skipping ahead to the twentieth century, there was an archaeological effort to locate part of the original town. 
Archaeologist James Deetz, working with Plimoth Plantation, attempted to dig the homesite of Governor William Bradford in August of 1970. Using the following passage from William T. Davis, he initiated excavations on Main Street upon Burial Hill,

The remainder of the land between School Street and Main Street belonged to Governor Bradford, and the tradition that his house was located there has never been disputed by the most critical antiquarian. The letter of DeRasieres, giving an account of his visit to Plymouth in 1627, and a description of the town at that time, places the house beyond the possibility of a doubt on the corner of the [Town] square and Main Street. He says "in the centre on the cross street stands the governor's house." (Davis 1883: 193)

If Davis was correct in his placement of William Bradford's house, then the team of archaeologists should have been digging in Bradford's backyard (see 1879 map Appendix B). What they found were numerous refuse pits and lots of artifacts, however, they were not William Bradford's artifacts. None of the collection was from Bradford's time, but was instead of eighteenth century origins. According to Deetz, based on the collection of artifacts the site could be pushed back no farther than the 1720's at the earliest. (Personal communication with J. Deetz: 3-25-97) The pattern of trash disposal most common on seventeenth century Anglo sites is known by archaeologists as sheet or broadcast refuse, so named because it results in a widespread "sheet" of artifacts distributed over the site. Trash or refuse is simply thrown out of the nearest convenient opening, such as a door or a window, therefore it is distributed evenly across the site. Subsequently, if the Plimoth Plantation team was in fact digging in Bradford's backyard, they should have found seventeenth century artifacts widely distributed over the site. Trash pits were not widely used until the eighteenth century and eventually replaced sheet refuse as the disposal method of choice. The fact that refuse pits were found on the site further strengthens the eighteenth century date. As governor Bradford's house was supposedly located next to the intersection at the heart of the palisaded town, and there were no seventeenth century artifacts found at his alleged house site, it is a distinct possibility that the town site is not on Burial Hill. The question thus arises as to who actually lived on the eighteenth century site on Burial Hill. Of Main Street, Davis states in Ancient Landmarks of Plymouth that "its ancient name was Hanover Street" (Davis 1883: 157). The Hanoverian royal house was established when George I took the English throne from the Stuarts in 1714. If Hanover Street was named after the English royal lineage, then it was almost certainly an eighteenth century road. Although it does not provide an answer as to who lived on the site, it could explain the collection of eighteenth century artifacts as well as the refuse pits. There was not a significant amount of research into the history of Plymouth colony until the nineteenth century, when ideas of nationalism sparked an interest in the country's origins. In its eagerness for an identity, the nation forged its own origins based in a mythical past complete with saints and heroes, of which Americans were directly descended. The Pilgrims were seen as the progenitors of American culture and a great wave of research was sparked, as well as the collection and publishing of seventeenth century sources.
The nineteenth century model of Plymouth plantation subsequently became the model accepted by historians and antiquarians. However, the numerous legends and myths circulating at the time became a possible source of bias for nineteenth century researchers. It is therefore beneficial to return to the primary sources - the seventeenth century sources - in order to reinterpret the past research and come up with new models of the settlement. This paper is an attempt at doing just that and it will hopefully lead to future investigations into Plymouth plantation, archaeological as well as historical.

References

Addison, Albert Christopher. 1911: The Romantic Story of the Mayflower Pilgrims. L. C. Page & Company: Boston
Anderson, Virginia Dejohn. 1993: "Migrants and Motives: Religion and the Settlement of New England, 1630-1640" in Katz, ed. Colonial America. McGraw-Hill, Inc.: New York
Bercovitch, Sacvan. 1986: A Library of American Puritan Writings. Volume 9 - The Seventeenth Century. Ams Press, Inc.: New York
Bradford, William. 1981: Of Plymouth Plantation 1620-1647. The Modern Library: New York
Davis, William T. 1883: Ancient Landmarks of Plymouth. A. Williams and Company: Boston
Deetz, James. 1996: In Small Things Forgotten: An Archaeology of Early American Life. Anchor Books Doubleday: New York
Garvan, Anthony N. B. 1951: Architecture and Town Planning in Colonial Connecticut. Yale University Press: New Haven
Greene, Jack P. 1988: Pursuits of Happiness: The Social Development of Early Modern British Colonies and the Formation of American Culture. The University of North Carolina Press: Chapel Hill
Heath, Dwight B. 1963: Mourt's Relation: A Journal of the Pilgrims at Plymouth. Corinth Books: New York
James, Sydney V. 1963: Three Visitors to Early Plymouth: Letters about the Pilgrim Settlement in New England during its first seven years. Plimoth Plantation
Kelso, William M. 1996: Jamestown Rediscovery II. The Association for the Preservation of Virginia Antiquities
Morgan, Edmund S. 1975: American Slavery American Freedom: The Ordeal of Colonial Virginia. W. W. Norton & Company: New York
Noel Hume, Ivor. 1969: Historical Archaeology. Alfred A. Knopf: New York
Perkins, Frank H. 1947: Handbook of Old Burial Hill Plymouth Massachusetts. Rogers Print, Inc.: Plymouth, Mass.
Simmons, R. C. 1976: The American Colonies: From Settlement to Independence. W. W. Norton & Company: New York
Smith, John. 1971: Advertisements for the Planters of New England. Theatrum Orbis Terrarum Ltd.: Amsterdam
Young, Alexander. 1841: Chronicles of the Pilgrim Fathers of the Colony of Plymouth: from 1602 to 1625. C. C. Little and J. Brown: Boston

Seventeenth century maps (Appendix A)
Nineteenth century maps (Appendix B)
Weather measurement terms

Absolute humidity: The mass of water vapour in a unit volume of moist air at a given temperature and pressure (unit: g/m3).

Air density: The mass of air per unit volume. It is a function of temperature, humidity, and pressure: on a hot day, at high altitude or on a moist day, the air is less dense. A reduction in air density reduces the amount of oxygen available for combustion and therefore reduces the engine horsepower and torque possible. For tweaking the fuel/air mixture, the air density is the most important consideration.

Altitude: The distance above sea level. The Kestrel Weather Meters calculate altitude based on the measured station pressure and the input barometric pressure - or "reference pressure".

Anemometer: An instrument for measuring wind speed or air flow.

Barometer: An instrument for measuring atmospheric pressure.

Celsius: Temperature scale with the ice point of water as 0 and the boiling point as 100 at 1 standard atmosphere pressure. The degree Celsius is equal in magnitude to the Kelvin, and the Celsius scale is the same as the centigrade scale. The temperature in Celsius = the temperature in Kelvin - 273.15.

Delta T: The spread between the wet bulb temperature and the dry bulb temperature (in degrees C). Delta T measurement is used primarily by agricultural professionals involved in crop spraying, as it offers a quick guide to determining acceptable spraying conditions. For example, it is not recommended to apply pesticides when Delta T is above 10 - a range of 2 to 8 is ideal. A Kestrel 3500 Delta T pocket weather meter calculates the Delta T for you - quickly and accurately, with barometric pressure correction in its wet bulb temperature calculation.

Density altitude: One way to express the air density: the altitude at which the density of the Standard Atmosphere is the same as the density of the air being measured. To calculate density altitude, it is therefore necessary to calculate the actual density of the air and then find the altitude at which that same air density occurs in the Standard Atmosphere. The concept is commonly used to express its effects on aircraft performance, although the underlying property of interest is actually the air density, and it is often used by individuals who tune high performance internal combustion engines, such as race car engines. Air density is perhaps the single most important factor affecting airplane performance; a density altitude higher than the actual physical altitude means thinner air and reduced aircraft and engine performance. Race car drivers and pit crews also monitor the density altitude because the car's high performance engine will perform differently depending on it; the Kestrel 4250 racing weather tracker is an ideal instrument to monitor this and other performance-affecting weather conditions. Some long range shooters also consider the density altitude, as it can influence the bullet's performance.

Dew point: The temperature at which dew or condensation begins to form, i.e. the temperature at which moisture forms on a surface.

Fahrenheit: Temperature scale with the ice point as 32 and the steam point as 212. Thus the temperature in Fahrenheit = 32 + 1.8 x the temperature in Celsius.

Fujita scale: A scale, F0 to F6, that indicates the amount of damage a tornado causes.
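The Celsius, Fahrenheit and Delta T entries above are simple arithmetic, so they can be illustrated with a short sketch. The Python below is purely illustrative - the function names are invented for this example and are not part of any Kestrel product or API - and the spraying bands simply restate the 2-8 ideal / above-10 not-recommended guide given in the Delta T entry.

```python
def celsius_from_kelvin(t_kelvin: float) -> float:
    """Celsius = Kelvin - 273.15, as stated in the Celsius entry."""
    return t_kelvin - 273.15

def fahrenheit_from_celsius(t_celsius: float) -> float:
    """Fahrenheit = 32 + 1.8 x Celsius, as stated in the Fahrenheit entry."""
    return 32.0 + 1.8 * t_celsius

def delta_t(dry_bulb_c: float, wet_bulb_c: float) -> float:
    """Delta T: the spread between dry bulb and wet bulb temperature, in degrees C."""
    return dry_bulb_c - wet_bulb_c

def spraying_guidance(dt: float) -> str:
    """Crop-spraying guide from the Delta T entry; 'marginal' is this example's
    own label for values outside the bands stated in the glossary."""
    if dt > 10:
        return "not recommended"
    if 2 <= dt <= 8:
        return "ideal"
    return "marginal"

# Example: dry bulb 28 degC, wet bulb 21 degC gives Delta T = 7, inside the 2-8 band.
dt = delta_t(28.0, 21.0)
print(dt, spraying_guidance(dt), fahrenheit_from_celsius(28.0))
```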
Globe temperature (Black Globe): The Black Globe on the Kestrel 4400 Heat Stress Tracker is representative of the amount of heat absorption via the colour black. Typically, Globe Temperature is taken using a 6" diameter copper globe painted black with an internal thermometer. However, the Kestrel 4400 Heat Stress Tracker uses a 1" copper globe painted black for its calculations. Globe Temperature is representative of the temperature of the Black Globe itself without accounting for air temperature. Black Globe temperature will fluctuate between, but always remain near, air temperature and Mean Radiant Temperature. This variability is due to wind speed. The faster the air moves over the globe thermometer, the closer Globe Temperature approaches air temperature. Conversely, if there is zero movement of air, Globe Temperature equals Mean Radiant Temperature.

Hectopascal: Unit of pressure used in meteorology. One hectopascal equals 100 Pascals (1 hPa = 100 Pa).

Hygrometer: An instrument for measuring humidity.

Kelvin: Unit of thermodynamic temperature, 1/273.16 of the thermodynamic temperature of the triple point of water.

Mean Radiant Temperature: Much like Globe Temperature, the Kestrel 4400 Heat Stress Tracker defines Mean Radiant Temperature (MRT) as the effects of the environment on the Black Globe. However, Mean Radiant Temperature accounts for the dry air temperature and the temperature of the Black Globe, whereas Globe Temperature is concentrated on the temperature of the Black Globe itself. Mean Radiant Temperature is primarily used to define the comfort of an individual in a defined, closed space (four walls and a ceiling). It is regarded as the most important measurement governing indoor comfort.

Millibar: Old unit of pressure, now replaced by the hectopascal (1 mbar = 1 hPa).

Naturally Aspirated Wet Bulb Temperature: The Kestrel 4400 Heat Stress Tracker's Naturally Aspirated Wet Bulb Temperature function accounts for the effects of humidity on the human body. By combining relative humidity and wind speed, the temperature displayed is indicative of the evaporative cooling happening to the Kestrel 4400.

Pascal: SI unit of pressure, defined as the pressure which, applied on a plane area of one square metre, exerts perpendicularly to this surface a total force of one Newton.

Relative Air Density (RAD): The ratio, expressed as a percentage, of measured air density to standard air density. Standard air density uses standard (fixed) values for temperature, humidity and pressure. RAD increases with an increase in barometric pressure (e.g., going to a lower elevation) and/or a decrease in ambient temperature. Conversely, RAD decreases with a decrease in barometric pressure and/or an increase in temperature. At standard temperature and pressure the RAD will be 100 percent. If a measurement of Relative Air Density takes into account the humidity, it is known as the 'corrected RAD'. RAD is important to tuners setting up the correct jet size for bike and motor racing. By knowing the conditions and changing the jets accordingly, it is possible to achieve the maximum possible horsepower for the actual weather conditions experienced. The Kestrel 4250 racing weather tracker will provide you with an easy and accurate measurement of Relative Air Density.

Relative humidity: The ratio, expressed as a percentage, of the actual vapour pressure to the saturation vapour pressure.

Specific humidity: Mass of water vapour per unit mass of humid air. (unit g/kg)

Thermistor: A semiconductor device of which the electrical resistance changes with temperature.

Virtual temperature: The virtual temperature is the temperature of dry air that would have the same density and pressure as the moist air.
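The Relative Air Density definition above reduces, for dry air, to a simple ratio of pressure and temperature terms. The sketch below is only an illustration of that arithmetic (uncorrected RAD, ignoring humidity), assuming ISA reference values of 1013.25 hPa and 15 degrees C; racing conventions sometimes use different standards.

P_STD_HPA = 1013.25
T_STD_K = 288.15

def relative_air_density(pressure_hpa, temp_c):
    # For dry air, density is proportional to P/T, so the ratio of measured to
    # standard density collapses to a ratio of pressure/temperature terms.
    return 100.0 * (pressure_hpa / P_STD_HPA) * (T_STD_K / (temp_c + 273.15))

print(round(relative_air_density(1013.25, 15.0), 1))  # 100.0 at standard conditions
print(round(relative_air_density(1000.0, 30.0), 1))   # lower pressure + hotter day -> below 100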
Water grains: Water Grains is a measure of how much moisture (water) in grains is in a pound (lb) of air. Water grains is primarily used by those involved in motor racing. Since water grains is an absolute measurement of moisture, a racer can look at this value to determine if the engine will be more or less powerful in comparison to previous atmospheric conditions. For bracket and class racers, this helps determine the dial in or throttle stop timers. For professional drag racers (or any type of drag racing where the first one to the finish line wins), it helps in determining adjustments to the clutch, fuel, ignition, chassis, etc., that may need to be made to compensate for the loss or gain in power. More water in the air equals less power, while less water in the air equals more power. Racers find Water Grains more useful than Relative Humidity as its value does not change with temperature.

Wet Bulb Globe Temperature (WBGT): A composite measurement of Naturally Aspirated Wet Bulb, Globe Temperature, and Dry Bulb Temperature. This environmental data combines temperature, humidity, wind speed, and thermal radiation to assess heat stress. The WBGT index was developed by the United States Marine Corps at Parris Island in 1956 to reduce heat stress injuries in recruits. Prompted by this experience, the Department of the Navy commissioned studies on the effects of heat on exercise performance. These studies resulted in a heat index called the Wet Bulb Globe Temperature. In 1989, WBGT was suggested as an international standard (ISO 7243). Although the military were the ones to develop this index, their current use of WBGT is scattered. Most bases have one man-made device (Botsball), and conditions are reported using a flag system to report the WBGT index throughout the base.

Wet bulb temperature: The temperature of the wet thermometer bulb in a wet and dry bulb hygrometer. The wet bulb is surrounded by wet fibres, and evaporation of water from the fibres cools the wet bulb. The rate of evaporation depends on the relative humidity of the air.

Wind chill: The chilling effect of the wind can be represented by the lower temperature that would be required to produce the same chilling sensation for a person walking in calm conditions. This is known as the wind chill equivalent temperature and is an important indicator in assessing the comfort of personnel spending periods outdoors. It is not an indication that an unheated inanimate object will cool below the ambient air temperature. The Kestrel's calculation of Wind Chill utilises the (US) NWS Wind Chill Temperature (WCT) Index, revised 2001, with wind speed adjusted by a factor of 1.5 to yield equivalent results for wind speed measured at 10 m above ground. For more information about wind chill, see the wind chill table.
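To make the wind chill and WBGT definitions above concrete, here is a small Python sketch. The wind chill function uses the published NWS 2001 WCT formula (Fahrenheit and mph); the WBGT function uses the standard outdoor weighting of the three component temperatures. The 1.5 wind speed adjustment mentioned above is shown only as an assumed, optional step for a hand-held reading, not as the meter's exact algorithm.

def wind_chill_f(temp_f, wind_mph):
    """NWS (2001) Wind Chill Temperature index; valid for temp <= 50 F and wind >= 3 mph."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

def wbgt_outdoor(t_nat_wet_bulb, t_globe, t_dry_bulb):
    """Standard outdoor WBGT weighting of natural wet bulb, globe and dry bulb temperatures."""
    return 0.7 * t_nat_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

# 20 F air with a 15 mph wind (referenced to 10 m) feels like about 6 F.
print(round(wind_chill_f(20.0, 15.0), 1))

# A hand-held meter could first scale its wind reading up by the 1.5 factor
# mentioned above to approximate the 10 m value (an assumption, for illustration).
hand_wind = 10.0
print(round(wind_chill_f(20.0, 1.5 * hand_wind), 1))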
http://www.r-p-r.co.uk/kestrel/kestrel_knowledge.htm
In a flow chart, items are organized in a sequence. A flow chart may be a chain of cause and effect, explaining a process; it may organize past events in a time sequence, recounting what happened; it may show a series of steps, forming instructions; or it may be a sequence of reasons, forming an argument. Many flow charts are combinations of chains, forks, and loops.

Why use flow charts in the classroom? Use them to plan an explanation, a procedure (instructions), a recount (such as a news story), a narrative, or an argument. (More about visual planning can be found here.) Use them to summarize an explanation, a procedure, a recount, a narrative, or an argument. (More about visual summaries can be found here.) Examples of topics that suit flow charts include the water cycle, life cycles, how products are made, where a certain food comes from, preparation for a debate, how machines work, and so on. Flow charts are in fact one of the most useful and adaptable visual texts in the classroom.

Visual texts to compare with this one: the following two visual texts are often confused with flow charts. Tree diagrams use arrows (or lines) to organize facts in groups and subgroups (an example is a family tree). Web diagrams use arrows to link participants, showing how they are connected (an example is a food web). Trees and web diagrams are "fixed" and show relationships, whereas flow charts show something "moving" through a system.

© Black Cockatoo Publishing PL 2004
http://www.k-8visual.info/xFlowCh.html
About the Image

One way to help visualize the relative distances in the solar system is to imagine a model in which the solar system is reduced in size by a factor of a billion (10^9). The Earth is then about 1.3 cm in diameter (the size of a grape). The Moon orbits about a foot away. The Sun is 1.5 meters in diameter (about the height of a man) and 150 meters (about a city block) from the Earth. Jupiter is 15 cm in diameter (the size of a large grapefruit) and 5 blocks away from the Sun. Saturn (the size of an orange) is 10 blocks away; Uranus and Neptune (lemons) are 20 and 30 blocks away. A human on this scale is the size of an atom; the nearest star would be over 40,000 km away!

The Moon, the closest solar system body to us, is about 400,000 km away from the Earth, which means it takes about 2.5 seconds for a radio signal from Earth to reach the Moon and travel back. You could hear this delay in communications between the Apollo astronauts and the ground control.

The most distant planet from the Sun isn't Pluto anymore. Pluto was reclassified as a "dwarf planet"; a dwarf planet is not just a small planet - it belongs to a separate class of objects. Neptune is now the outer-most planet in our solar system. Its orbit places it at ~4,500,000,000 km or 30 AU from the Sun. Pluto is still an interesting member of the solar system, however - its orbit is actually very eccentric and takes Pluto 4,400,000,000 - 7,400,000,000 km (30 - 49 AU) from the Sun. Pluto's orbit is also inclined with respect to the planets and doesn't fall within the same plane. As a result of its eccentricity, Pluto occasionally comes closer to the Sun than the planet Neptune does!

The Outer Reaches of the Solar System

There are objects belonging to our Solar System that are even farther than the orbits of our planets. The Kuiper Belt is a disk-shaped region past the orbit of Neptune, roughly 4,400,000,000 to 14,900,000,000 km (30 to 100 AU) from the Sun, that consists mainly of small bodies which are the remnants from the Solar System's formation. It also contains at least one dwarf planet - Pluto. Pluto is indeed now considered to be a member of the Kuiper Belt - the largest object belonging to it, in fact! Like other members of the Belt, it is composed primarily of rock and ice and is relatively small. There is an excellent discussion on why Pluto was reclassified from "planet" to "dwarf planet" and Kuiper Belt Object (KBO) here. The Kuiper Belt is also believed to be the source of short-period comets (i.e., those that take less than 200 years to orbit).

Pluto is not the only dwarf planet in our solar system - Eris, 27% more massive than Pluto, was discovered in 2003. Eris and its moon Dysnomia have a current distance from the Sun of 97 AU, which is nearly 3 times as far from the Sun as Pluto is. Eris is part of a region of space beyond the Kuiper Belt known as the scattered disc. The scattered disc is sparsely populated with icy minor planets. These so-called Scattered Disc Objects or SDOs are among the most distant and thus the coldest objects in the solar system. The innermost portion of the scattered disc overlaps with the Kuiper Belt, but its outer limits extend much farther away from the Sun and farther above and below the ecliptic than the Belt. Although their origin is not completely understood, it is thought that Scattered Disc Objects were previously members of the Kuiper Belt, which got ejected into eccentric, scattered orbits through close encounters with Neptune.
From the surface of a Scattered Disc Object, the Sun would look like little more than an exceptionally bright star.

Moving still further away from the Sun, we reach the Oort Cloud. In 1950, astronomer Jan Oort proposed that long-period comets reside in a vast spherical cloud residing some 50,000 or more AU from the Sun, at the outer reaches of the Solar System. This major reservoir of comets has come to be known as the Oort Cloud. The Kuiper belt can be described as disc or doughnut-shaped, but the Oort cloud is more like a very thick "bubble" that surrounds the entire solar system, reaching about half-way from the Sun to the next nearest star. Statistics imply that it may contain as many as a trillion (10^12) comets. Unfortunately, since the individual comets are so small and at such large distances, we have no direct evidence for the Oort Cloud. The Oort Cloud is, however, the best theory to explain how long-period comets exist. 50,000 AU seems like a very large distance from the Sun - but the nearest star to us is over 271,000 AU away!

Image Credit: Oort Cloud image by Calvin J. Hamilton. Used with permission.

How Do We Calculate Distances of This Magnitude?

Johannes Kepler, born in 1571, was the first to explain the motions of the planets in the sky, by realizing that the planets revolved around the Sun - and that their orbits were actually ellipses, not perfect circles. He also knew that the movement of the planets around the Sun could be described by physics - and in mathematical terms. The closer the planet was to the Sun, the faster it moved. Conversely, farther planets orbited the Sun more slowly. Knowing this, he was able to connect the average distance of a planet from the Sun with the time it takes that planet to orbit the Sun once. Though he wasn't able to come up with distance measurements in kilometers, Kepler was able to order the planets by distance and to figure out their proportional distances. For example, he knew that Mars was about 1.5 times farther from the Sun than the Earth.

If you hold your finger in front of your face, close one eye and look with the other, then switch eyes, you'll see your finger seem to "shift" with respect to more distant objects behind it. This is because your eyes are separated from each other by a distance of a few inches - so each eye sees the finger in front of you from a slightly different angle. The amount your finger seems to shift is called its "parallax". Even with modern technology, measuring distances by parallax isn't trivial - and the errors can be big - as we can see from Cassini's measurement of the Earth-Mars distance.

One of the most accurate ways to measure the distances to the planets is by bouncing radar off them, or sending a spacecraft there, which can send a radio signal back to the Earth that can be timed. Radar is essentially microwave electromagnetic radiation (microwaves fall under the radio spectrum). Since electromagnetic radiation, in all of its forms, is light, we know that radar travels at the speed of light - 2.99 x 10^5 km/s. Simply, distance traveled is equal to the time multiplied by the velocity. If we bounce radar off a planet, and measure the time it takes the signal to go there and back, we can use this information to calculate the distance of the planet.

Distant Solar System Objects

There are other modern methods to calculate the distances to objects on the fringes of our Solar System, like Kuiper Belt or Scattered Disc Objects. However, these techniques are often based on those Kepler employed!
Several observations of the object's position in the sky are recorded, which are then used to determine the orbit of the object - then the position of the object along each point can be calculated. Nowadays, even home PCs are powerful enough that there are some advanced amateur astronomers who not only discover comets and asteroids but determine their orbits.

Why Are These Distances Important To Astronomers?

Knowing the distances to objects in our solar system tells us how big it is - and how far away our neighboring planets are. How far the planets are from the Sun is particularly meaningful - here's why. If you place a candle at arm's length in an otherwise dark room, you'll see a bright flame. If you stand twice as far from the candle, you will see that it is now a quarter as bright as before. When you increase the distance by a factor of 2 (or 3 or 4, ...), the same amount of light has spread to an area 4 times (or 9 or 16, ...) bigger. This means the amount of light per unit area is 1/4 (or 1/9, 1/16, ...); since our eyes have the same area no matter how far the candle is, the brightness we perceive is also decreased by the same factor.

Similarly, the distance from the Sun determines how much sunlight a planet receives. On Mars, which is 1.6 AU from the Sun (AU being the average Sun-Earth distance), the sunlight is about 2.5 times weaker than on Earth. That is the major reason why it is so cold on Mars, so cold that water does not exist as liquid on the Martian surface today. On Venus, which is 0.7 AU from the Sun, the sunlight is twice as intense as on Earth. This (combined with the greenhouse effect of its thick atmosphere) makes Venus a boiling hot place, unsuitable for human habitation. This leads to the idea of a 'habitable zone': for a star of a given brightness, you can determine the approximate range of distances a planet has to be for liquid water to exist. Life as we know it will not be able to evolve on a planet outside such a habitable zone.

For detailed, up-to-date information about our Solar System, see the wonderful "Nine Planets" page, written by Bill Arnett. See also NASA JPL's page on the planets and A Comprehensive Guide to Our Solar System. For more information about the Oort Cloud and the Kuiper Belt, see Phil Plait's Bad Astronomy page.
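The two pieces of arithmetic described above - radar ranging (distance from a timed round trip at the speed of light) and the inverse-square falloff of sunlight with distance - are simple enough to check in a few lines of Python. This is only an illustrative sketch; the times and distances used are round numbers, not mission data.

C_KM_S = 299_792.458  # speed of light in km/s

def distance_from_echo(round_trip_s):
    """Distance to a target from a radar round-trip time: d = c * t / 2."""
    return C_KM_S * round_trip_s / 2.0

def relative_sunlight(distance_au):
    """Sunlight intensity relative to Earth's, using the inverse-square law."""
    return 1.0 / distance_au ** 2

# A radar echo that returns after about 2.56 seconds has bounced off something
# roughly the Moon's distance away (~384,000 km).
print(round(distance_from_echo(2.56)))

# Mars at 1.6 AU receives roughly 1/2.56 of Earth's sunlight ("about 2.5 times
# weaker"); Venus at 0.7 AU receives about twice as much.
print(round(relative_sunlight(1.6), 2), round(relative_sunlight(0.7), 2))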
http://heasarc.gsfc.nasa.gov/docs/cosmic/solar_system_info.html
OOP is a better way of solving computer problems compared to a procedural programming language such as C. OOP uses classes which contain members (variables) and methods (functions). OOP uses a modular type of programming structure. OOP is a type of programming in which programmers define not only the data type of a data structure, but also the types of operations that can be applied to the data structure. In this way, the data structure becomes an object that includes both data and functions. In addition, programmers can create relationships between one object and another. For example, objects can inherit characteristics from other objects. One of the main advantages of object-oriented programming over procedural programming is that it enables programmers to create modules that do not need to be changed when a new type of object is added. A programmer can simply create a new object that inherits many of its features from existing objects. This makes object-oriented programs easier to modify. In order to use the OOP paradigm, a programmer can use one of several programming languages, such as C++, Java or Smalltalk. The C++ programming language provides a model of memory and computation that closely matches that of most computers. In addition, it provides powerful and flexible mechanisms for abstraction; that is, language constructs that allow the programmer to introduce and use new types of objects that match the concepts of an application. Thus, C++ supports styles of programming that rely on fairly direct manipulation of hardware resources to deliver a high degree of efficiency plus higher-level styles of programming that rely on user-defined types to provide a model of data and computation that is closer to a human's view of the task being performed by a computer. These higher-level styles of programming are often called data abstraction, object-oriented programming, and generic programming. In the next tutorial we will discuss these features.
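The key claim above - that an existing module does not need to change when a new type of object is added - is easiest to see in code. The tutorial itself targets C++, so the following minimal Python sketch is offered purely as an illustration of the same idea; the class names (Shape, Circle and so on) are invented for this example.

class Shape:
    """Base class: data (a name) plus an operation every shape must provide."""
    def __init__(self, name):
        self.name = name
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    # This "module" never needs to change: it relies only on the Shape interface.
    return sum(s.area() for s in shapes)

# A new type added later inherits the name handling from Shape and requires
# no edits to total_area().
class RightTriangle(Shape):
    def __init__(self, base, height):
        super().__init__("right triangle")
        self.base, self.height = base, height
    def area(self):
        return 0.5 * self.base * self.height

print(total_area([Circle(1.0), Square(2.0), RightTriangle(3.0, 4.0)]))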
http://www.exforsys.com/tutorials/c-plus-plus/object-oriented-programming-paradigm.html
TEACHING STANDARD A: Teachers of science plan an inquiry-based science program for their students. In doing this, teachers
- Develop a framework of yearlong and short-term goals for students.
- Select science content and adapt and design curricula to meet the interests, abilities, and experiences of students.
- Select teaching and assessment strategies that support the development of student understanding and nurture a community of science learners.

TEACHING STANDARD B: Teachers of science guide and facilitate learning. In doing this, teachers
- Focus and support inquiries while interacting with students.
- Orchestrate discourse among students about scientific ideas.
- Challenge students to accept and share responsibility for their own learning.
- Encourage and model the skills of scientific inquiry, as well as the curiosity, openness to new ideas and data, and skepticism that characterize science.

TEACHING STANDARD D: Teachers of science design and manage learning environments that provide students with the time, space, and resources needed for learning science. In doing this, teachers
- Structure the time available so that students are able to engage in extended investigations.
- Create a setting for student work that is flexible and supportive of science inquiry.
- Ensure a safe working environment.
- Make the available science tools, materials, media, and technological resources accessible to students.
- Identify and use resources outside the school.
- Engage students in designing the learning environment.

TEACHING STANDARD E: Teachers of science develop communities of science learners that reflect the intellectual rigor of scientific inquiry and the attitudes and social values conducive to science learning. In doing this, teachers
- Display and demand respect for the diverse ideas, skills, and experiences of all students.
- Enable students to have a significant voice in decisions about the content and context of their work and require students to take responsibility for the learning of all members of the community.
- Nurture collaboration among students.
- Structure and facilitate ongoing formal and informal discussion based on a shared understanding of rules of scientific discourse.
- Model and emphasize the skills, attitudes, and values of scientific inquiry.

CONTENT STANDARD A: As a result of activities in grades 9-12, all students should develop
- Abilities necessary to do scientific inquiry
- Understandings about scientific inquiry

CONTENT STANDARD C: As a result of their activities in grades 9-12, all students should develop understanding of
- Behavior of organisms

For students to develop the abilities that characterize science as inquiry, they must actively participate in scientific investigations, and they must actually use the cognitive and manipulative skills associated with the formulation of scientific explanations. This standard describes the fundamental abilities and understandings of inquiry, as well as a larger framework for conducting scientific investigations of natural phenomena.

An important component of successful scientific inquiry in grades 9-12 includes having students reflect on the concepts that guide the inquiry. Also important is the prior establishment of an adequate knowledge base to support the investigation and help develop scientific explanations. The concepts of the world that students bring to school will shape the way they engage in science investigations, and serve as filters for their explanations of scientific phenomena.
Left unexamined, the limited nature of students' beliefs will interfere with their ability to develop a deep understanding of science. Thus, in a full inquiry, instructional strategies such as small-group discussions, labeled drawings, writings, and concept mapping should be used by the teacher of science to gain information about students' current explanations. Those student explanations then become a baseline for instruction as teachers help students construct explanations aligned with scientific knowledge; teachers also help students evaluate their own explanations and those made by scientists.

Students also need to learn how to analyze evidence and data. The evidence they analyze may be from their investigations, other students' investigations, or databases. Data manipulation and analysis strategies need to be modeled by teachers of science and practiced by students. Determining the range of the data, the mean and mode values of the data, plotting the data, developing mathematical functions from the data, and looking for anomalous data are all examples of analyses students can perform. Teachers of science can ask questions, such as "What explanation did you expect to develop from the data?" "Were there any surprises in the data?" "How confident do you feel about the accuracy of the data?" Students should answer questions such as these during full and partial inquiries.
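The analyses listed above (range, mean, mode, and looking for anomalous data) can be modeled for students in a few lines of Python. The sketch below is simply an illustration of those analyses with invented class data; it is not part of the standards.

import statistics

measurements = [12.1, 11.8, 12.3, 12.0, 19.5, 11.9, 12.2]  # hypothetical class data

data_range = max(measurements) - min(measurements)
mean = statistics.mean(measurements)
mode = statistics.mode([round(x) for x in measurements])
stdev = statistics.stdev(measurements)

# Flag values more than two standard deviations from the mean as "anomalous" -
# a common rule of thumb that is itself worth discussing with students.
anomalies = [x for x in measurements if abs(x - mean) > 2 * stdev]

print(f"range={data_range:.1f} mean={mean:.2f} mode={mode} anomalies={anomalies}")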
http://www.woodrow.org/teachers/bi/1998/fly_grooming/Standards.htm
The first eleven maps show the location of 2367 galaxies as they lie on the sky. These are the galaxies that were known to have velocities less than 3,000 kilometers/second at the time the cartography of the maps began. The only aspect of these maps that requires significant explanation is the coordinate system. Galactic coordinates are used. The equator in this system is the plane of the Milky Way Galaxy. The zero point in longitude is the Galactic Center. In the equator of the Milky Way, there are numerous clouds of interstellar dust that obscure our view of objects, in addition to a great enhancement in the density of stars. Consequently, our clear windows onto the Universe beyond the Milky Way are away from the Galactic equatorial regions and toward the poles. The entire sky is shown on 10 plates. A hemisphere is displayed on a polar map that highlights the region poleward of 60 degrees and on four mid-latitude maps that highlight the latitude range 0 to 60 degrees in longitude intervals of 90 degrees. The zone of obscuration associated with the equator of the Milky Way Galaxy is indicated by the darkened background on these maps. Of course, there is not an abrupt transition from transparency to opacity. At the transitional contour that is plotted, roughly 40 percent of the blue light of a distant galaxy has been absorbed by the intervening dust. Some galaxies have been detected through considerable amounts of dust. Generally, there is not too much bias against the detection of galaxies that are farther than 30 degrees from the Galactic Equator, but extreme bias against those within 20 degrees of the Equator. Aside from the problem of Galactic obscuration, there is reasonably good all-sky coverage with the present sample of 2367 galaxies. Unpublished material obtained with the Parkes Radio Telescope in Australia is included, with the consequence that there is reasonable homogeneity between the northern and southern celestial hemispheres. A symbol is plotted at the Galactic latitude and longitude of each galaxy in the sample. The symbol denotes the morphological type of the galaxy. The size of the symbol is a measure of the angular size of the galaxy as we see it, so nearer galaxies tend to be bigger than more distant galaxies. The color of the symbol is coded to the systemic velocity of the galaxy. Nearby galaxies with small velocities, or redshift, are represented by cool, blue colors and more distant galaxies with larger redshift are represented by warm, red colors. The specifics of these details are described in the legends on each plate. There are enlargements of the two most crowded regions: the Fornax Cluster on Plate 8, and the Virgo Cluster on Plate 11. Further information on the 2367 galaxies that constitute our sample is provided in the companion publication, the Nearby Galaxies Catalog (ref. 1). Enough of words. The maps are more eloquent. * More extensive discussion is reserved for articles that will be submitted to technical journals in due course.
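For readers who want to relate the Galactic coordinates used on these plates to the more familiar equatorial system, the conversion is a one-liner if the astropy library is available (an assumption on my part; the atlas itself predates it). The example position below is simply M31's approximate location, chosen for illustration.

from astropy.coordinates import SkyCoord
import astropy.units as u

# M31 (the Andromeda galaxy) in equatorial coordinates, for illustration only.
m31 = SkyCoord(ra=10.684 * u.deg, dec=41.269 * u.deg, frame="icrs")

gal = m31.galactic
# Galactic longitude l is measured from the Galactic Center along the plane of
# the Milky Way; latitude b is the angle out of that plane.
print(f"l = {gal.l.deg:.2f} deg, b = {gal.b.deg:.2f} deg")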
http://ned.ipac.caltech.edu/level5/March04/Virgo_cluster/Tully3.html
Millimeter wave radar brings international recognition

The radar work of Engineering Experiment Station scientists significantly contributed to the national technology base. It also established Georgia Tech's international reputation in radar research and development.

Pioneering the Millimeter Wave Environment

Expanding upon radar work started at the end of World War II, EES scientists began investigating the millimeter portion of the electromagnetic spectrum. The advantages of millimeter waves include their ability to provide accurate, excellent image identification and resolution. They also provide remote measurements while operating through smoke, dust, fog or rain. However, millimeter waves are vulnerable to absorption by certain atmospheric and meteorological activity. EES scientists discovered which frequencies work best for a particular task. They identified and refined the low-attenuation frequency windows that mitigate atmospheric interference with the signals. The millimeter-wave research that began at EES continued under GTRI. It evolved into an ongoing process of discovering the appearance of objects from tanks to raindrops when viewed by high-frequency waves. Researchers determined the types of data they could derive from the interaction of those objects with the waves. In the process, they pioneered the fundamental science of the millimeter wave environment. They also invented the hardware such as antennas, receivers and transmitters to use that end of the spectrum.

First Military Application

The first military-designation millimeter wave radar was built at Georgia Tech in the late 1950s. It was followed by a succession of increasingly advanced models. Research to build a radar with a wavelength as near to one millimeter as possible was ongoing. By the 1980s, it culminated in the development of the world's highest frequency microwave radar, operating at 225 GHz. EES developed this radar to test high frequency radar techniques, and to obtain the first target and natural-background measurements at these frequencies. The device provides useful imaging with an antenna less than 30 centimeters in diameter. It is coherent, meaning it can detect Doppler returns from moving targets. Millimeter spectroscopy research paved the way for exploiting millimeter waves for measurements in:
- Radio astronomy,
- Satellite-based studies of the upper atmosphere,
- Climate, rainfall and vegetation patterns, and
- A host of other environmental concerns.

Millimeter wave radiometry developed at GTRI has improved the accuracy of weather forecasting and computer models of the Earth's climate. In the late 1970s, a millimeter wave radiometer began service aboard a NASA aircraft, where it monitored storm activity from an altitude of 60,000 feet. Scanning about 5,000 miles of atmosphere per hour, the device recorded the emitted and reflected energy of storms, including the almost infinitesimal amounts of energy emitted by moisture inside a storm. Subsequent applications of radiometric data have helped scientists study rainfall patterns, ocean winds, soil moisture levels and vegetation characteristics, to help develop long-range forecasts of profound climatic changes. Georgia Tech scientists also achieved a number of firsts in millimeter characterization of clutter and targets - essential data for reliable millimeter radar systems. Since the 1960s, more than a dozen projects have provided millimeter measurements of the ocean, rain, snow-covered ground, desert, foliage and foreign military vehicles.
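As a brief aside on the "coherent" 225 GHz radar mentioned above: coherence means the phase of the return is preserved, so a target's radial motion shows up as a Doppler shift. The standard monostatic relation f_d = 2·v·f/c is sketched below purely as an illustration; the numbers are round values, not measurements from the Georgia Tech systems.

C = 2.998e8  # speed of light, m/s

def doppler_shift_hz(radial_velocity_m_s, radar_freq_hz):
    """Two-way Doppler shift for a monostatic radar: f_d = 2 * v * f / c."""
    return 2.0 * radial_velocity_m_s * radar_freq_hz / C

# A target closing at 10 m/s seen by a 225 GHz radar produces a shift of ~15 kHz,
# versus only ~0.67 kHz at a 10 GHz microwave frequency - one reason millimeter
# waves offer fine velocity resolution.
print(round(doppler_shift_hz(10.0, 225e9)))
print(round(doppler_shift_hz(10.0, 10e9)))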
In the 1980s, researchers conducted a comprehensive study of the image-quality effects of atmospheric turbulence and precipitation on millimeter wave propagation. The versatility of millimeter wave technology is illustrated by the radar flashlight, a device that detects respiration at a distance. Originally developed to locate wounded soldiers on battlefields, it could prove useful in situations where access is difficult, such as a collapsed building. Radiometric measurement has also proven useful for interpreting data from interplanetary space probes. At Tech's Radio Astronomy and Propagation Laboratory, millimeter-wave measurements in a planetary atmosphere simulator helped scientists plan the Cassini mission to Saturn, and Galileo's rendezvous with Jupiter. Other activities in the lab included microwave and millimeter-wave receiver design and testing, remote-sensing system design and design of spacecraft radio occultation experiments. GTRI scientists continue to examine the potential of millimeter wave technology in automatic target recognition systems, as well as various electronic countermeasures and counter-countermeasures. These include decoy beacons, threat assessment, reconnaissance and signal disruption.
http://[email protected]/history/innovations/millimeter-wave-radar-brings-international-recognition
An important part of first grade math is learning addition and subtraction facts. Students need to develop an understanding of the relationship between addition and subtraction as well as develop strategies for quick recall of facts. This post includes some of the best resources for helping students learn addition and subtraction facts to 18.

Math Fables Too by Greg Tang, illustrated by Taia Morley
This book is math and science all in one. Students will learn interesting facts about the behavior of animals such as bats, archerfish, and seagulls; they will develop their vocabulary and learn addition facts. Students are presented with a number of animals grouped in different ways. For instance, 4 herons are all together, then 3 use a feather and 1 uses a twig to lure fish to the surface of the water. There are many excellent teaching opportunities in this book.

Also by Greg Tang, illustrated by Greg Paprocki
In this book math is combined with art history. Each page focuses on a famous work of art by artists such as Degas, Warhol, and Pollock. Each page has groupings of objects related to the painting. Students are asked to add together the groups to get a certain sum. Students also get the opportunity to see how three or more groups can be used to get the desired sum. Tang tells students how many ways it is possible to get the sum and provides all the illustrated answers in the back of the book.

Mission: Addition, written and illustrated by Loreen Leedy
This is a very well illustrated and engaging book with stories that students will love. Leedy includes all the basic concepts of addition including definitions, place value, horizontal and vertical computation, adding groups of the same things and groups of different things, as well as incorporating word/story problems throughout.

Also written and illustrated by Loreen Leedy
The same characters from Mission: Addition appear in Leedy's subtraction book. This is a great book for teaching the concept of subtraction. Leedy covers all the basics of subtraction including definitions and showing each step of a subtraction problem. She uses numbers, words, and pictures to tell subtraction stories.

Subtracting with Sebastian Pig and Friends On a Camping Trip by Jill Anderson, illustrated by Amy Huntington
In this book some of the things that Sebastian and his friends need on their camping trip are disappearing. Each page spread tells of a missing item. "Where Are the Worms? There were seven worms. Now there is just one! How many worms are missing?" Each problem in the story is represented in the text by a word problem and also in Sebastian's notebook, where he writes the number sentence, draws a picture representation of the problem and then lists the addition facts from that family. Students who pay attention will see that it's a group of mice who are taking the campers' things.

Websites for Students
Addition Word Problems is a great activity for students to get experience solving story problems. If a student enters an incorrect answer they can click on the button "Explanation" and they will see a written explanation of the problem, a strategy for solving, and the number sentence for the correct answer. When they are finished playing they will get a summary of their time and how many problems they solved correctly. Alien Addition is an arcade style game where students must use the arrow keys to move back and forth firing the laser "sum" at the spaceship with the corresponding number sentence. Students may select the range for facts as well as the speed of the game.
At the end of each stage students get a summary of hits and misses (incorrect answers). Misses show the student’s answer and the correct number sentence. Minus Mission is the subtraction version of Alien Addition. Sum Sense is a subtraction game that challenges students by giving them three number cards that they must arrange into the correct number sentence. They can choose the number of problems they want and a time from one to ten minutes to solve. If the student answers incorrectly they are told to try again. Hidden Pictures can be played with either addition or subtraction facts. Students solve problems to uncover a photograph. The photos are of animals in their natural habitats and a short description is given for each one. Addition Concentration is also a fun game on this website. Funbrain includes several games that are great for practicing addition and subtraction facts. Line Jumper gives students the visual of a number line for solving addition and subtraction problems between 1-20. Students can play Tic Tac Toe against the computer with addition or subtraction facts. In Math Baseball students solve addition or subtraction problems to move around the bases and score runs. A one or two player version is available. Additional Resources for Teachers Mathwire.com is a great source for addition and subtraction classroom games and templates you can use for assessment or as center activities. The site also includes good ideas and resources for incorporating writing in math. UEN.org provides a great resource for games and centers that will help students practice their addition and subtraction facts. Templates in pdf form are included for all the games as well as background information, instructional procedures, assessment plans, and extension ideas. There are some good short video clips on addition and subtraction that you could incorporate into lessons or use at listening stations. Many use music and interesting visuals and could be helpful for students who are having difficulty remembering certain facts. Here are a few good videos that I have found: Adding and Subtracting Song, Adding 9 + 1, Doubles Doubles, and Five Bees, which also counts in Spanish.
http://blog.richmond.edu/openwidelookinside/archives/author/kw4vy
Evolution of Thermophilic Archaea
From MicrobeWiki, the student-edited microbiology resource
by Julia DeNiro

Crenarchaeota and Hyperthermophiles: An Overview

Thermophilic and hyperthermophilic archaea, specifically crenarchaeotes in the class Thermoprotei, are known to inhabit environments such as hot springs, ocean vents, and geysers which are inhospitable to many other forms of life. (Figure 2) Such habitats not only have extremely high temperatures--at some ocean vents, temperature might even reach 400 degrees Celsius--but have high concentrations of dissolved minerals. At the same time, they have low concentrations of oxygen, if they are not completely anaerobic, and they are often low in pH as well. Though various genera of these archaea flourish at different temperatures, the most striking examples of hyperthermophiles include archaea in the genus Pyrodictium (Figure 1), which are adapted to live around thermal vents at the ocean floor, at temperatures ranging from 100-110 degrees Celsius. Acidophiles are also prevalent in this class, especially those belonging to the order Sulfolobales: Sulfolobus solfataricus flourishes at a pH ranging from only 2 to 4. The acidophiles are usually found in environments rich in sulfur, and they obtain energy using H2, H2S, and elemental sulfur as electron donors. Yet there are many archaea which grow at neutral or high pH: rod-shaped species in the genera Thermoproteus, Thermofilum, and Pyrobaculum, as well as coccoid species in the genus Desulfurococcus (11).

Though certain species of Thermoprotei use oxygen in respiration, most of the metabolisms of these archaea are anaerobic. Moreover, most of these archaea are also chemolithotrophs, meaning that they must take their energy from inorganic sources. Hydrogen sulfide, hydrogen, methane, sulfur, and nitrogen are all potential energy sources for chemolithotrophs. A particularly high number of species reduce sulfur and various sulfates for energy, particularly those which inhabit solfataric fields. Solfataric fields, composed of soils heated up by volcanic emissions from magma chambers, are known for their high elemental sulfur content (11). However, sulfur use is not limited to archaea which thrive on land. Ignicoccus islandicus oxidizes hydrogen with sulfur, forming hydrogen sulfide. Pyrodictium might oxidize hydrogen with sulfur or else use anaerobic fermentation.

The hostile habitats in which crenarchaeotes live resemble, in many cases, the conditions on early Earth. The planet was extremely hot and radioactive, much more so than today; though it is thought that a crust developed relatively soon after Earth's formation, this crust was thin and made entirely of igneous rock. The early atmosphere consisted of water vapor, CO2, nitrogen, hydrogen, methane, NH3, and CO (12). As the early atmosphere is believed to have had no oxygen in it, the first life forms to develop must clearly have been anaerobic. If life forms evolved before Earth cooled--and it is quite possible that they did, as they could have flourished beneath Earth's forming crust, away from crashing meteorites--these organisms might have been the ancestors of today's thermophilic prokaryotes, including archaea. Though it is relatively certain among microbiologists that the first microbes, ancestors of all life on Earth today, evolved nearly 4 billion years ago, it is still quite uncertain whether these early microbes were thermophiles.
Microfossils and Isotopic Evidence

Though remains of microbial life nearly 4 billion years old have been found, it is typically impossible to tell whether this life was more closely related to bacteria or to archaea. The different forms of proteins and rRNA that distinguish archaea from bacteria are, of course, not preserved in microfossils. Isotopic records can be made of the lipids in ancient cells, but the isoprene chains that comprise archaeal lipids have not yet been found in rocks older than 1.6 billion years (6). If we were to draw conclusions about archaeal evolution solely from this evidence, we would assume that archaea in their present form evolved relatively late in the history of microbes. However, other isotopic records have provided evidence of the early existence of methanogenesis, about 2.7 Gya. Methanogenesis today only occurs in euryarchaeotes. If this process was occurring nearly 3 Gya, it is likely that archaea had evolved much earlier. Whether or not these early archaea were thermophilic or the ancestors of today's crenarchaea, however, is still a matter of speculation and hypothesis.

Microbial communities in hot springs can also be preserved as microfossils if their environment is saturated with silica. When waters of these hydrothermal environments cool--whether through plate tectonics or even through climate change--silica present in the water precipitates to form gelatinous, amorphous masses called sinters. Microbes and clay particles that fall into the water then become trapped in the sinters. Eventually, lithification occurs, as happened at Soro hot springs on Ol Kokwe, a volcanic island in Lake Baringo, Kenya. Three types of sinters are found here: structureless silica which fills pores in rocks, silica which lines pores in rocks, and crusts of silica that formed on particles of detritus. Though the sinters in Lake Baringo only date back to about one million years ago--from the late Pleistocene--the many microbial remains found in the silica illustrate a surprisingly effective method of preservation of microfossils. Possibly sinters that formed from cooling hot springs billions of years ago might have preserved microfossils as effectively as the hot springs in Lake Baringo (10).

In 2000, strange filaments were first reported in volcanic-hosted massive sulfide deposits (VHMS) at the Kangaroo Caves Formation, East Pilbara, Western Australia. (Figure 3) It was hypothesized that these filaments were microfossils, specifically of ancient thermophilic chemotrophs (1). Changes in orientation of filaments in different areas may even be evidence of biological variation between organisms in different microenvironments; such a phenomenon is common today in modern hot-spring thermophiles. Metal precipitation and volcanic rock textures indicate that East Pilbara was once covered by over 1000 m of seawater. It is estimated from looking at the minerals in the ore that temperature might have reached 300 degrees Celsius, and this hypothesis, coupled with the volcanic rock found in the deposit, is substantial evidence that Kangaroo Caves was once a hydrothermal vent in a primordial sea (2). The rocks and the structures in them are about 3.2 billion years old, meaning that if these filaments really are the remains of primitive microbes, they were flourishing in the Archaean Eon, at the same time as many ancient cyanobacteria.
Neither cell structure nor metabolism can be determined by looking at these filaments; even if we could be sure that they were microfossils, we could still not tell whether they were bacteria or archaea. However, it is an exciting possibility that thermophilic life did exist on early Earth and perhaps evolved even earlier, and that if the filaments are microfossils of thermophiles, they are the ancestors of today's thermophilic archaea.

Archaeal Traits as Possible Adaptations to High Temperatures

Three main features present in most species of archaea are possible adaptations to high temperatures, perhaps indicating that the common ancestor of all archaea was a hyperthermophile (4). The first characteristic is the ubiquitous presence of the topoisomerase reverse gyrase (Figure 5) (3). Most thermophiles, both bacteria and archaea, have positive supercoils in their DNA; as positive supercoils are stronger and more difficult to untangle than negative supercoils, they are useful to organisms which grow at high temperatures, as the heat cannot destroy the genetic material as easily. While not all the DNA in today's archaea is found in positive supercoils, reverse gyrase is found not only in thermophilic archaea, but in some species of mesophilic archaea, indicating that perhaps it was present in the common ancestor of all archaea. The second characteristic is the ether linkage of archaeal membrane lipids, which resists breakdown at high temperatures better than the ester linkage found in bacterial and eukaryotic membranes. The third is the extensive post-transcriptional modification of archaeal tRNA; most of the modification occurs in the bases of these molecules. Nine modified nucleosides are known to be present only in archaea, and all of them are most commonly found in thermophilic species, most of which were discovered in a 1991 experiment (5). Eleven species of thermophilic archaea were studied in this experiment: Thermoplasma acidophilum, Methanobacterium thermoautotrophicum, Archaeoglobus fulgidus, Methanothermus fervidus, Sulfolobus solfataricus, Thermoproteus neutrophilus, Acidianus infernus, Thermodiscus maritimus, Thermococcus sp., Pyrobaculum islandicum, and Pyrodictium occultum. The occurrence of the same modified nucleosides across these different genera of thermophiles indicates at least a common ancestor of thermophilic archaea, if not of archaea in general.

Horizontal Gene Transfer Between Thermophilic Bacteria and Thermophilic Archaea

Though archaea resemble bacteria in many ways, most of their genetic structure, not to mention their enzymes involved in genetic processes, is more similar to that of eukaryotes. Their DNA, though organized in one circular chromosome, is bound to DNA-binding proteins homologous to the histones that bind eukaryotic chromosomes. Archaeal RNA polymerase can initiate transcription in vitro, like eukaryotic RNA polymerase. Finally, though archaeal genes are often organized like operons, introns have been found in archaeal rRNA and tRNA genes. These archaeal features seem to suggest that archaea are quite a bit more complex than bacteria and therefore evolved much later; the possible thermophilic ancestors of modern life would have been bacteria, not archaea. In addition, if the molecular characteristics of archaea mentioned above--reverse gyrase, ether-linked lipids, and modified nucleosides--are ancient adaptations to thermophily, why are they only shared by archaea? Why have they not developed in thermophilic bacteria? There is no real answer to this question, but previous studies have shown that thermophilic bacteria such as Aquifex and Thermotoga diverged relatively early from the rest of the bacterial lineage.
In fact, Aquifex and Thermotoga share so many metabolic traits with thermophilic archaea--use of hydrogen for energy and growth at high temperatures, for example--and so much gene transfer has occurred between these bacteria and archaea, that these two genera of bacteria and archaea might be more closely related than previously thought. A 1998 experiment demonstrated that the bacterium Aquifex aeolicus and various species of hyperthermophilic archaea share remarkable genetic similarities (7). It was discovered that a high fraction of A. aeolicus gene products resemble proteins commonly found in archaea, a much higher fraction than for gene products present in non-thermophilic bacteria, such as Bacillus subtilis and E. coli. (Figure 8) According to the experiment, at least 10% of Aquifex genes have been horizontally transferred from archaea (7). This discovery does not necessarily indicate that the two lineages are closely related, as horizontal transfer occurs between nonrelated bacteria species, as well as between bacteria and archaea and even bacteria and eukaryotes. However, the archaeal species with which Aquifex shares its genes are all thermophilic, and so the gene exchange could have been a result of convergent evolution, in which tolerance of high temperatures was selected for. Therefore, probably both Aquifex and the archaea used in the 1998 experiment evolved at a time and in an environment in which growth at high temperatures was necessary for survival, perhaps during a time of uncontrolled global warming. Because of the relatively old age of such bacterial lineages as Aquifex and their apparent early divergence from other bacteria, this convergence possibly occurred in the Hadean Eon, while Earth was still cooling. However, as yet we can only infer these possibilities. There is not yet enough information on genome sequences in bacterial thermophiles to determine just how ancient they are, or how many genes they exchanged with archaeal thermophiles.

Unfortunately, despite the genomic, metabolic, and fossil evidence of the early evolution of thermophilic archaea, not enough information has been collected to draw any conclusions. The fossil record is incomplete; there is no concrete evidence of archaeal evolution before 1.6 Gya, long after microbes first evolved on this planet. It is not even certain whether certain earlier fossils found in sulfide deposits are really the remains of thermophiles. Though various traits useful to thermophiles are shared by all archaea, they could have developed because of convergent evolution, perhaps in response to a global warming event later than 3 Gya, not necessarily because all archaea share a common, thermophilic ancestor. Though there is a great deal of genetic similarity between thermophilic bacteria and thermophilic archaea, most of it is the result of horizontal gene transfer, and in any case, not enough genomes of thermophilic bacteria have yet been sequenced to draw conclusions about their relationship with thermophilic archaea. In short, much more research must be done and much more evidence, both ancient and modern, must be collected before microbiologists can reasonably hypothesize that archaeal lineages appeared early in the history of Earth, or that present-day archaea, let alone all life on Earth, evolved from a thermophilic ancestor.
References

(1) Wacey, David. Early Life on Earth. 2009. Topics in Geobiology. Volume 31. p. 221-227.
(2) Rasmussen, Birger. "Filamentous microfossils in a 3,235-million-year-old volcanogenic massive sulphide deposit". Nature. 2000. Volume 405. p. 676-679.
(3) Hsieh, Tao-shih and Plank, Jody L. "Reverse Gyrase Functions as a DNA Renaturase: Annealing of Complementary Single-Stranded Circles and Positive Supercoiling of a Bubble Substrate". J. Biol. Chem. 2006. Volume 281. p. 5640-5647.
(4) Wiegel, Jurgen and Adams, Michael W.W. Thermophiles: The Keys to Molecular Evolution and the Origin of Life?. 1998. CRC. p. 141-142.
(5) Edmonds, C.G., Crain, P.F., Gupta, R., Hashizume, T., Hocart, C.H., Kowalak, J.A., Pomerantz, S.C., Stetter, K.O., and McCloskey, J.A. "Posttranscriptional modification of tRNA in thermophilic archaea (Archaebacteria)". J. Bacteriol. 1991. Volume 173. p. 3138-3148.
(6) Gribaldo, Simonetta and Brochier-Armanet, Celine. "The Origin and Evolution of Archaea: A State of the Art". The Royal Society: Biological Sciences. 2006. Volume 361. p. 1007-1022.
(7) Aravind, L., Tatusov, Roman L., Wolf, Yuri I., Walker, D. Roland, and Koonin, Eugene V. "Evidence for massive gene exchange between archaeal and bacterial hyperthermophiles". Trends in Genetics. 1998. Volume 14. p. 442-444.
(9) "Archaea: Morphology". 2008.
(10) Renaut, R.W., Jones, B., Tiercelin, J.-J., and Tarits, C. "Sublacustrine precipitation of hydrothermal silica in rift lakes: evidence from Lake Baringo, central Kenya Rift Valley". Sedimentary Geology. 2002. Volume 148. p. 235-257.
(11) Stetter, Karl O. "Hyperthermophiles--Life in a Hot and Inorganic Environment". Nova Acta Leopoldina. 2008. Volume 96. p. 13-18.
(12) Crisp, Edward L., Ph.D. "The Origin of Life and Evolution of Early Life". West Virginia University at Parkersburg. 30 April, 2009.
http://microbewiki.kenyon.edu/index.php/Evolution_of_Thermophilic_Archaea
In this lesson, angles will be examined as drawn in a coordinate plane. This allows for the diagramming of angles which are much larger than 90º or which may be negative. Values of the six fundamental trig functions will be found for such angles.

The Lesson: In a right triangle, one angle is 90º and the side across from this angle is called the hypotenuse. The two sides which form the angle are called the legs of the right triangle. We show a right triangle below. The legs are defined as either "opposite" or "adjacent" (next to) the angle A.

Definitions: We shall call the opposite side "opp," the adjacent side "adj" and the hypotenuse "hyp." In the following definitions, sine is called "sin," cosine is called "cos" and tangent is called "tan." The origin of these terms relates to arcs and tangents to a circle. For more information on these functions reference the lesson on sine, cosine and tangent.
- sin(A) = opp/hyp
- cos(A) = adj/hyp
- tan(A) = opp/adj

Three more trigonometric ratios can be defined as the reciprocals of these fundamental ratios. They are cosecant, secant, and cotangent. The ratios are given by the following equations:
- csc(A) = hyp/opp
- sec(A) = hyp/adj
- cot(A) = adj/opp

Let's Practice:
- We diagram angles in quadrants two and three. Let A = -150º and B = 150º. Because these angles are in quadrants two and three, we draw a "reference triangle" in each quadrant. Using the fact that these triangles are 30º-60º-90º triangles, we can label the sides. The "reference angle" is the 30º angle in the triangles with vertex at the Origin of the coordinate axes. For angle A = -150º we use the opp, adj and hyp sides of the reference triangle (opp = -1, adj = -√3, hyp = 2) to calculate the values of the six trig functions: sin(A) = sin(-150º) = -1/2, cos(-150º) = -√3/2 and tan(-150º) = (-1)/(-√3) = √3/3; csc, sec and cot can be found by inverting these results. For angle B = +150º, the only difference is that the opposite side is +1 instead of -1. We have: sin(B) = sin(150º) = 1/2, cos(150º) = -√3/2 and tan(150º) = -√3/3. Again, csc, sec and cot can be found by inverting these results.
- Diagram 405º showing the reference angle and use the reference triangle to calculate the sin, cos and tan of 405º. We show the diagram below. The reference angle is 45º. The opp and adj sides of the reference triangle are both √2. The hyp is 2. This is true of any 45º-45º-90º triangle. Using the sides of the reference triangle, we can produce values of the trig functions of 405º: sin(405º) = √2/2, cos(405º) = √2/2 and tan(405º) = 1. The csc, sec and cot can be found by inverting these values.
- Find the sin and tan of a quadrant three angle with a reference triangle having opposite side -2 and hypotenuse 5. If this angle is between 0º and 360º, what is the measure of this angle? We have sin(q) = -2/5. To find tan(q) we diagram the angle below and use the Pythagorean Theorem to find that the adjacent side has measure -√21. We have tan(q) = (-2)/(-√21) = 2/√21 ≈ 0.436. We can find the reference angle using the inverse sine function on a calculator. We find ref angle = sin⁻¹(2/5) ≈ 23.6º. The angle q is 180º + 23.6º = 203.6º. We have q ≈ 203.6º.

We can generalize some of these results.
- Notice that the opposite side is negative in quadrants three and four. Therefore the sine, which is opp/hyp, is negative in quadrants three and four but positive in quadrants one and two.
- Notice that the adjacent side is negative in quadrants two and three. Therefore the cosine, which is adj/hyp, is negative in quadrants two and three and positive in quadrants one and four.
- The tangent will be positive when the opposite and adjacent sides have the same sign because the tangent is opp/adj. This occurs in quadrants one and three.

One way to remember which trig functions are positive in each quadrant is illustrated in the diagram below.
Using the letters in the diagram as a device for remembering positive values, we have “All Students Take Calculus” reminding us of the basic trig functions which are positive in each quadrant: All are positive in quadrant one; Sine is positive in quadrant two; Tangent is positive in quadrant three; and Cosine is positive in quadrant four. They are negative elsewhere, which becomes obvious when a good diagram is labeled showing the possible negative values of the opposite or adjacent sides. (Note that the hypotenuse is the distance from the origin and is always positive.)
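These results can be checked numerically with any language that has standard trig functions. The short Python sketch below (not part of the original lesson) evaluates the -150º example and confirms the “All Students Take Calculus” pattern; the test angles 60º, 150º, 210º and 300º are simply convenient choices, one per quadrant.

import math

# Numerical check of the reference-angle results for A = -150 degrees.
A = math.radians(-150)
print(round(math.sin(A), 4))   # -0.5    -> opp/hyp = -1/2
print(round(math.cos(A), 4))   # -0.866  -> adj/hyp = -sqrt(3)/2
print(round(math.tan(A), 4))   #  0.5774 -> opp/adj = 1/sqrt(3)

# "All Students Take Calculus": which of sin, cos, tan are positive in each quadrant.
for name, angle in [("QI", 60), ("QII", 150), ("QIII", 210), ("QIV", 300)]:
    t = math.radians(angle)
    positive = [fn for fn, value in (("sin", math.sin(t)),
                                     ("cos", math.cos(t)),
                                     ("tan", math.tan(t))) if value > 0]
    print(name, positive)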
http://www.algebralab.org/lessons/lesson.aspx?file=Trigonometry_TrigRefAngles.xml
13
41
Equality and other comparisons So far we have seen how to use the equals sign to define variables and functions in Haskell. Writing r = 5 in a source file will cause occurrences of r to be replaced by 5 in all places where it makes sense to do so according to the scope of the definition. Similarly, f x = x + 3 causes occurrences of f followed by a number (which is taken as f's argument) to be replaced by that number plus three. In Mathematics, however, the equals sign is also used in a subtly different and equally important way. For instance, consider this simple problem: Example: Solve the following equation: x + 3 = 5. When we look at a problem like this one, our immediate concern is not the ability to represent the value 5 as x + 3, or vice-versa. Instead, we read the equation as a proposition, which says that some number x gives 5 as result when added to 3. Solving the equation means finding which, if any, values of x make that proposition true. In this case, using elementary algebra we can convert the equation into x = 5 - 3 and finally to x = 2, which is the solution we were looking for. The fact that it makes the equation true can be verified by replacing x with 2 in the original equation, leading us to 2 + 3 = 5, which is of course true. The ability of comparing values to see if they are equal turns out to be extremely useful in programming. Haskell allows us to write such tests in a very natural way that looks just like an equation. The main difference is that, since the equals sign is already used for defining things, we use a double equals sign, ==. To see it at work, you can start GHCi and enter the proposition we wrote above like this: Prelude> 2 + 3 == 5 True GHCi returns "True" lending further confirmation of 2 + 3 being equal to 5. As 2 is the only value that satisfies the equation, we would expect to obtain different results with other numbers. Prelude> 7 + 3 == 5 False Nice and coherent. Another thing to point out is that nothing stops us from using our own functions in these tests. Let us try it with the function f we mentioned at the start of the module: Prelude> let f x = x + 3 Prelude> f 2 == 5 True Just as expected, since f 2 is just 2 + 3. In addition to tests for equality, we can just as easily compare two numerical values to see which one is larger. Haskell provides a number of tests including: < (less than), > (greater than), <= (less than or equal to) and >= (greater than or equal to), which work just like == (equal to). For a simple application, we could use < alongside the area function from the previous module to see whether a circle of a certain radius would have an area smaller than some value. Prelude> let area r = pi * r^2 Prelude> area 5 < 50 False At this point, GHCi might look like some kind of oracle (or not) which can tell you if propositions are true or false. That's all fine and dandy, but how could that help us to write programs? And what is actually going on when GHCi "answers" such "questions"? To understand that, we will start from a different but related question. If we enter an arithmetical expression in GHCi the expression gets evaluated, and the resulting numerical value is displayed on the screen: Prelude> 2 + 2 4 If we replace the arithmetical expression with an equality comparison, something similar seems to happen: Prelude> 2 == 2 True But what is that "True" that gets displayed? It certainly does not look like a number. We can think of it as something that tells us about the veracity of the proposition 2 == 2. 
From that point of view, it makes sense to regard it as a value – except that instead of representing some kind of count, quantity, etc. it stands for the truth of a proposition. Such values are called truth values, or boolean values. Naturally, there are only two possible boolean values – True and False. An introduction to types When we say True and False are values, we are not just making an analogy. Boolean values have the same status as numerical values in Haskell, and indeed you can manipulate them just as well. One trivial example would be equality tests on truth values: Prelude> True == True True Prelude> True == False False True is indeed equal to True, and True is not equal to False. Now, quickly: can you answer whether 2 is equal to True? Prelude> 2 == True <interactive>:1:0: No instance for (Num Bool) arising from the literal `2' at <interactive>:1:0 Possible fix: add an instance declaration for (Num Bool) In the first argument of `(==)', namely `2' In the expression: 2 == True In the definition of `it': it = 2 == True The correct answer is you can't, because the question just does not make sense. It is impossible to compare a number with something that is not a number, or a boolean with something that is not a boolean. Haskell incorporates that notion, and the ugly error message we got is, in essence, stating exactly that. Ignoring all of the obfuscating clutter (which we will get to understand eventually) what the message tells us is that, since there was a number (Num) on the left side of the ==, some kind of number was expected on the right side. But a boolean value (Bool) is not a number, and so the equality test exploded into flames. The general concept, therefore, is that values have types, and these types define what we can or cannot do with the values. In this case, for instance, True is a value of type Bool, just like False (as for the 2, while there is a well-defined concept of number in Haskell the situation is slightly more complicated, so we will defer the explanation for a little while). Types are a very powerful tool because they provide a way to regulate the behaviour of values with rules which make sense, making it easier to write programs that work correctly. We will come back to the topic of types many times as they are very important to Haskell, starting with the very next module of this book. What we have seen so far leads us to the conclusion that an equality test like 2 == 2 is an expression just like 2 + 2, and that it also evaluates to a value in pretty much the same way. That fact is actually given a passing mention on the ugly error message we got on the previous example: In the expression: 2 == True Therefore, when we type 2 == 2 in the prompt and GHCi "answers" True it is just evaluating an expression. But there is a deeper truth involved in this process. A hint is provided by the very same error message: In the first argument of `(==)', namely `2' GHCi called 2 the first argument of (==). In the previous module we used the term argument to describe the values we feed a function with so that it evaluates to a result. It turns out that == is just a function, which takes two arguments, namely the left side and the right side of the equality test. The only special thing about it is the syntax: Haskell allows two-argument functions with names composed only of non-alphanumeric characters to be used as infix operators, that is, placed between their arguments. 
The only caveat is that if you wish to use such a function in the "standard" way (writing the function name before the arguments, as a prefix operator) the function name must be enclosed in parentheses. So the following expressions are completely equivalent: Prelude> 4 + 9 == 13 True Prelude> (==) (4 + 9) 13 True Writing the expression in this alternative style further drives the point that (==) is a function with two arguments just like areaRect in the previous module was. What's more, the same considerations apply to the other relational operators we mentioned (<, >, <=, >=) and to the arithmetical operators (+, *, etc.) – all of them are just functions. This generality is an illustration of one of the strengths of Haskell – there are few "special cases", and that helps to keep things simple. In general, we could say that all tangible things in Haskell are either values, variables or functions. One nice and useful way of seeing both truth values and infix operators in action are the boolean operations, which allow us to manipulate truth values as in logic propositions. Haskell provides us with three basic functions for that purpose:
- (&&) performs the and operation. Given two boolean values, it evaluates to True if both the first and the second are True, and to False otherwise. Prelude> (3 < 8) && (False == False) True Prelude> (&&) (6 <= 5) (1 == 1) False
- (||) performs the or operation. Given two boolean values, it evaluates to True if either the first or the second are True (or if both are true), and to False otherwise. Prelude> (2 + 2 == 5) || (2 > 0) True Prelude> (||) (18 == 17) (9 >= 11) False
- not performs the negation of a boolean value; that is, it converts True to False and vice-versa. Prelude> not (5 * 2 == 10) False
One relational operator we didn't mention so far in our discussions about comparison of values is the not equal to operator. It is also provided by Haskell as the (/=) function, but if we had to implement it a very natural way of doing so would be: x /= y = not (x == y) Note that it is perfectly legal syntax to write the operators infix, even when defining them. Another detail to note is that completely new operators can be created out of ASCII symbols (basically, those that are found on the keyboard). Earlier on in this module we proposed two questions about the operations involving truth values: what was actually going on when we used them and how they could help us in the task of writing programs. While we now have a sound initial answer for the first question, the second one could well look a bit nebulous to you at this point, as we did little more than testing one-line expressions here. We will tackle this issue by introducing a feature that relies on boolean values and operations and allows us to write more interesting and useful functions: guards. To show how guards work, we are going to implement the absolute value function. The absolute value of a number is the number with its sign discarded; so if the number is negative (that is, smaller than zero) the sign is inverted; otherwise it remains unchanged. We could write the definition as: |x| = -x if x < 0, and |x| = x otherwise. The key feature of the definition is that the actual expression to be used for calculating |x| depends on a set of propositions made about x. If x < 0 we use the first expression, but if x >= 0 we use the second one instead. If we are going to implement the absolute value function in Haskell we need a way to express this decision process. That is exactly what guards help us to do. 
Using them, the implementation could look like this: Example: The abs function.
abs x
    | x < 0     = 0 - x
    | otherwise = x
Remarkably, the above code is almost as readable as the corresponding mathematical definition. In order to see how the guard syntax fits with the rest of the Haskell constructs, let us dissect the components of the definition:
- We start just like in a normal function definition, providing a name for the function, abs, and saying it will take a single parameter, which we will name x.
- Instead of just following with the = and the right-hand side of the definition, we entered a line break, and, following it, the two alternatives, placed in separate lines. These alternatives are the guards proper. An important observation is that the whitespace is not there just for aesthetic reasons, but it is necessary for the code to be parsed correctly.
- Each of the guards begins with a pipe character, |. After the pipe, we put an expression which evaluates to a boolean (also called a boolean condition or a predicate), which is followed by the rest of the definition – the equals sign and the right-hand side which should be used if the predicate evaluates to True.
- The otherwise deserves some additional explanation. If none of the preceding predicates evaluate to True, the otherwise guard will be deployed by default. In this case, if x is not smaller than zero, it must be greater than or equal to zero, so the final predicate could have just as easily been x >= 0; otherwise is used here for the sake of convenience and readability.
where and Guards
where clauses are particularly handy when used with guards. For instance, consider this function, which computes the number of (real) solutions for a quadratic equation, ax^2 + bx + c = 0:
numOfSolutions a b c
    | disc > 0  = 2
    | disc == 0 = 1
    | otherwise = 0
    where disc = b^2 - 4*a*c
The where definition is within the scope of all of the guards, sparing us from repeating the expression for disc.
- The term boolean is a tribute to the mathematician and philosopher George Boole.
- In case you found this statement to be quite bold, don't worry – we will go even further in due course.
- Technically, that just covers how to get the absolute value of a real number, but let's ignore this detail for now.
- abs is also provided by Haskell, so in a real-world situation you don't need to worry about providing an implementation yourself.
- We could have joined the lines and written everything in a single line, but in this case it would be a lot less readable.
http://en.m.wikibooks.org/wiki/Haskell/Truth_values
13
11
Max Planck was a German physicist and 1918 Nobel laureate who was the originator of the quantum theory. Planck recalled that his "original decision to devote myself to science was a direct result of the discovery . . . that the laws of human reasoning coincide with the laws governing the sequences of the impressions we receive from the world about us; that, therefore, pure reasoning can enable man to gain an insight into the mechanism of the [world]. . . ." He deliberately decided, in other words, to become a theoretical physicist at a time when theoretical physics was not yet recognized as a discipline in its own right. But he went further: he concluded that the existence of physical laws presupposes that the "outside world is something independent from man, something absolute, and the quest for the laws which apply to this absolute appeared . . . as the most sublime scientific pursuit in life." The first instance of an absolute in nature that impressed Planck deeply, even as a Gymnasium student, was the law of the conservation of energy, the first law of thermodynamics. Later, during his university years, he became equally convinced that the entropy law, the second law of thermodynamics, was also an absolute law of nature. The second law became the subject of his doctoral dissertation at Munich, and it lay at the core of the researches that led him to discover the quantum of action, now known as Planck's constant h, in 1900. In 1859-60 Kirchhoff had defined a blackbody as an object that reemits all of the radiant energy incident upon it; i.e., it is a perfect emitter and absorber of radiation. There was, therefore, something absolute about blackbody radiation, and by the 1890s various experimental and theoretical attempts had been made to determine its spectral energy distribution--the curve displaying how much radiant energy is emitted at different frequencies for a given temperature of the blackbody. Planck was particularly attracted to the formula found in 1896 by his colleague Wilhelm Wien at the Physikalisch-Technische Reichsanstalt (PTR) in Berlin-Charlottenburg, and he subsequently made a series of attempts to derive "Wien's law" on the basis of the second law of thermodynamics. By October 1900, however, other colleagues at the PTR, the experimentalists Otto Richard Lummer, Ernst Pringsheim, Heinrich Rubens, and Ferdinand Kurlbaum, had found definite indications that Wien's law, while valid at high frequencies, broke down completely at low frequencies. Planck learned of these results just before a meeting of the German Physical Society on October 19. He knew how the entropy of the radiation had to depend mathematically upon its energy in the high-frequency region if Wien's law held there. He also saw what this dependence had to be in the low-frequency region in order to reproduce the experimental results there. Planck guessed, therefore, that he should try to combine these two expressions in the simplest way possible, and to transform the result into a formula relating the energy of the radiation to its frequency. The result, which is known as Planck's radiation law, was hailed as indisputably correct. To Planck, however, it was simply a guess, a "lucky intuition." If it was to be taken seriously, it had to be derived somehow from first principles. That was the task to which Planck immediately directed his energies, and by December 14, 1900, he had succeeded--but at great cost. 
To achieve his goal, Planck found that he had to relinquish one of his own most cherished beliefs, that the second law of thermodynamics was an absolute law of nature. Instead he had to embrace Ludwig Boltzmann's interpretation, that the second law was a statistical law. In addition, Planck had to assume that the oscillators comprising the blackbody and re-emitting the radiant energy incident upon them could not absorb this energy continuously but only in discrete amounts, in quanta of energy; only by statistically distributing these quanta, each containing an amount of energy hν proportional to its frequency ν, over all of the oscillators present in the blackbody could Planck derive the formula he had hit upon two months earlier. He adduced additional evidence for the importance of his formula by using it to evaluate the constant h (his value was 6.55 × 10^-27 erg-second, close to the modern value), as well as the so-called Boltzmann constant (the fundamental constant in kinetic theory and statistical mechanics), Avogadro's number, and the charge of the electron. As time went on physicists recognized ever more clearly that--because Planck's constant was not zero but had a small but finite value--the microphysical world, the world of atomic dimensions, could not in principle be described by ordinary classical mechanics. A profound revolution in physical theory was in the making. Planck's concept of energy quanta, in other words, conflicted fundamentally with all past physical theory. He was driven to introduce it strictly by the force of his logic; he was, as one historian put it, a reluctant revolutionary. Indeed, it was years before the far-reaching consequences of Planck's achievement were generally recognized, and in this Einstein played a central role. In 1905, independently of Planck's work, Einstein argued that under certain circumstances radiant energy itself seemed to consist of quanta (light quanta, later called photons), and in 1907 he showed the generality of the quantum hypothesis by using it to interpret the temperature dependence of the specific heats of solids. In 1909 Einstein introduced the wave-particle duality into physics. In October 1911 he was among the group of prominent physicists who attended the first Solvay conference in Brussels. The discussions there stimulated Henri Poincare to provide a mathematical proof that Planck's radiation law necessarily required the introduction of quanta--a proof that converted James (later Sir James) Jeans and others into supporters of the quantum theory. In 1913 Niels Bohr also contributed greatly to its establishment through his quantum theory of the hydrogen atom. Ironically, Planck himself was one of the last to struggle for a return to classical theory, a stance he later regarded not with regret but as a means by which he had thoroughly convinced himself of the necessity of the quantum theory. Opposition to Einstein's radical light quantum hypothesis of 1905 persisted until after the discovery of the Compton effect in 1922. Planck became permanent secretary of the mathematics and physics sections of the Prussian Academy of Sciences in 1912 and held that position until 1938; he was also president of the Kaiser Wilhelm Society (now the Max Planck Society) from 1930 to 1937. These offices and others placed Planck in a position of great authority, especially among German physicists; seldom were his decisions or advice questioned. 
His authority, however, stemmed fundamentally not from the official appointments he held but from his personal moral force. His fairness, integrity, and wisdom were beyond question. It was completely in character that Planck went directly to Hitler in an attempt to reverse Hitler's devastating racial policies and that he chose to remain in Germany during the Nazi period to try to preserve what he could of German physics. Planck was a man of indomitable will. Had he been less stoic, and had he had less philosophical and religious conviction, he could scarcely have withstood the tragedies that entered his life after age 50. In 1909, his first wife, Marie Merck, the daughter of a Munich banker, died after 22 years of happy marriage, leaving Planck with two sons and twin daughters. The elder son, Karl, was killed in action in 1916. The following year, Margarete, one of his daughters, died in childbirth, and in 1919 the same fate befell Emma, his other daughter. World War II brought further tragedy. Planck's house in Berlin was completely destroyed by bombs in 1944. Far worse, the younger son, Erwin, was implicated in the attempt made on Hitler's life on July 20, 1944, and in early 1945 he died a horrible death at the hands of the Gestapo. That merciless act destroyed Planck's will to live. At war's end, American officers took Planck and his second wife, Marga von Hoesslin, whom he had married in 1910 and by whom he had had one son, to Gottingen. There, on October 4, 1947, in his 89th year, he died. Death, in the words of James Franck, came to him "as a redemption."
http://www.nobel-winners.com/Physics/max_karl_ernst_ludwig_planck.html
13
36
Yesterday’s look at black holes and their potential role in generating energy for advanced civilizations flows naturally into newly released work from Xavier Hernandez and William Lee (National Autonomous University of Mexico). The astronomers have been studying how dark matter behaves in the vicinity of black holes, simulating the way early galaxies would have interacted with it. Current theory suggests that clumps of dark matter drew together gas that eventually became the stars and galaxies we see around us in the cosmos. How to study material that is invisible save for its gravitational influence? Its effect on gravitational lensing is one way, but Hernandez and Lee have found another. The duo looked for clues in the massive black holes now thought to be at the center of most large galaxies. Assuming such black holes are common, then large haloes of dark matter have coexisted with massive black holes over most of the history of the universe. It follows that part of the growth of these central black holes has come at the expense of captured dark matter particles. By studying how central black holes grow through accretion, the scientists place upper limits on the maximum density of dark matter at the center of haloes. Image: Artist’s schematic impression of the distortion of spacetime by a supermassive black hole at the centre of a galaxy. The black hole will swallow dark matter at a rate which depends on its mass and on the amount of dark matter around it. Credit: Felipe Esquivel Reed. From the paper: We find the process to be characterised by the onset of a rapid runaway growth phase after a critical timescale. This timescale is a function of the mass of the black hole and the local density of dark matter. By requiring that the runaway phase does not occur, as then the swallowing up of the halo by the black hole would seriously distort the former, we can obtain upper limits to the maximum allowed density of dark matter at the centres of haloes. Concentrate dark matter greater than 250 solar masses per cubic parsec, the authors find, and you get a runaway black hole, one that engulfs so much dark matter that the galaxy involved would be fundamentally changed. “Over the billions of years since galaxies formed, such runaway absorption of dark matter in black holes would have altered the population of galaxies away from what we actually observe.” We don’t see that result, which implies that dark matter is distributed evenly through dark matter haloes rather than found in clumps of the sort that could lead to the runaway scenario. That’s a useful result because some dark matter models, analyzed via computer simulation, assume that dark matter is clumpy, following what dark matter researchers call ‘cuspy density profiles.’ The new work gives weight to dark halo density profiles that are constant, a sign that earlier modeling of these haloes is suspect. This is how science often progresses, one tweak at a time, as we deepen our understanding of the dark matter that observation suggests must shape the galaxies around us. The paper is “An upper limit to the central density of dark matter haloes from consistency with the presence of massive central black holes,” accepted by Monthly Notices of the Royal Astronomical Society (preprint).
http://www.centauri-dreams.org/?p=11778
13
11
Where is the center of a triangle? There are actually thousands of centers! Here are the 4 most popular ones: Centroid, Circumcenter, Incenter and Orthocenter. For each of those, the "center" is where special lines cross, so it all depends on those lines! Let's look at each one:
Centroid: Draw a line (called a "median") from a corner to the midpoint of the opposite side. Where all three medians intersect is the "centroid".
Circumcenter: Draw a line (called a "perpendicular bisector") at right angles to the midpoint of each side. Where all three lines intersect is the center of a triangle's "circumcircle", called the "circumcenter".
Incenter: Draw a line (called the "angle bisector") from a corner so that it splits the angle in half. Where all three lines intersect is the center of a triangle's "incircle", called the "incenter".
Orthocenter: Draw a line (called the "altitude") at right angles to a side and going through the opposite corner. Where all three lines intersect is the "orthocenter". Note that sometimes the edges of the triangle have to be extended outside the triangle in order to draw the altitudes. Then the orthocenter is also outside the triangle.
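Readers who want to check these constructions numerically can do so with a few lines of Python. The sketch below is not part of the original page: the vertex coordinates are made up, the circumcenter is found by solving the two perpendicular-bisector conditions as a small linear system, and the orthocenter is obtained from the Euler-line identity H = A + B + C - 2U (U being the circumcenter) rather than by intersecting altitudes directly.

import numpy as np

A, B, C = map(np.array, [(0.0, 0.0), (6.0, 0.0), (2.0, 4.0)])  # hypothetical triangle

# Centroid: intersection of the medians, which is the average of the vertices.
centroid = (A + B + C) / 3

# Circumcenter: equidistant from all three vertices.  |P-A|^2 = |P-B|^2 and
# |P-A|^2 = |P-C|^2 reduce to a 2x2 linear system in P.
M = np.array([2 * (B - A), 2 * (C - A)])
rhs = np.array([B @ B - A @ A, C @ C - A @ A])
circumcenter = np.linalg.solve(M, rhs)

# Incenter: weighted average of the vertices, weights = lengths of the opposite sides.
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
incenter = (a * A + b * B + c * C) / (a + b + c)

# Orthocenter: lies on the Euler line, H = A + B + C - 2 * circumcenter.
orthocenter = A + B + C - 2 * circumcenter

print(centroid, circumcenter, incenter, orthocenter)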
http://www.mathsisfun.com/geometry/triangle-centers.html
13
12
Your child’s science fair project is off to a great start. She has chosen a topic that she is excited about, but she needs your help figuring out how to use the steps of the scientific method to organize her project. If you’re like many parents, it’s been a while since you’ve had to use the scientific method. Here’s a quick rundown of the six steps. Note: Science teachers and books vary as to how many steps are involved. You may see as few as four or as many as seven; the difference lies in whether or not the steps are broken down into sub-steps.
1. Observation: Observation could also be stated as coming up with an idea or, more simply, curiosity. By observing the world around her, your child can begin to notice things happen or certain phenomena. Once she finds something that really piques her curiosity, she’ll move on to Step 2.
2. Question (also known as State the Problem): This step takes Observation a bit further. Now it’s time to verbalize what has piqued your child’s curiosity. What exactly did she wonder about? Does she want to know if there’s a connection between two things? Is she wondering if plants grow better in one set of circumstances than another?
3. Hypothesis: A hypothesis is an “educated guess” at the answer to the question posed. It doesn’t matter if the hypothesis is correct or incorrect; that’s what the science fair project is supposed to discover. It is important that the hypothesis is related to the question asked. For example, hypothesizing that raccoons sleep during the day because they are avoiding predators doesn’t have anything to do with why the weather patterns seem to be changing in the southern part of the United States.
4. Experimentation: Experimentation is coming up with a test to prove or disprove the hypothesis. It involves creating a test that looks at all the different variables and has a few sub-steps to it.
- 4a. Prediction: After the experiment is designed, your child will need to make note of what she thinks is going to happen.
- 4b. Gather data: Your child will gather information about what happened to each variable during and after the experiment.
5. Analysis: This is one of the trickier steps of the scientific method. Using charts, graphs or other ways of displaying the data gathered, your child will take a hard look at it to see if there are any visible patterns. Then she will need to decide whether or not she has enough information to support or disprove her hypothesis.
6. Conclusion: Using the data, your child will come to a conclusion that either supports her hypothesis or suggests another question. If the data brings forth a new question, then the scientific method begins all over again.
http://childparenting.about.com/od/schoollearning/a/steps-of-scientific-method.htm
13
26
picture of the day
Orbits of the three known main-belt comets (red lines), the five innermost planets (black lines; from the center outward: Mercury, Venus, Earth, Mars and Jupiter), a sample of 100 main-belt asteroids (orange lines), and two "typical" comets (Halley's Comet, and Tempel 1, target of Deep Impact mission) as blue lines. Positions of the main-belt objects and planets on March 1, 2006, are plotted with black dots. Image credit: Pedro Lacerda (Univ. Hawaii; Univ. Coimbra, Portugal)
Apr 07, 2006
When Asteroids Become Comets
The surprising discovery of asteroids with comet tails supports the longstanding claim of the electrical theorists—that the essential difference between asteroids and comets is the shape of their orbits. According to a recent story in USA Today, astronomers are “rethinking long-held beliefs about the distant domains of comets and asteroids, abodes they've always considered light-years apart”. The discovery has forced astronomers to speculate that some asteroids are actually “dirty snowballs in disguise”. For many years the standard view of asteroids asserted that they are composed of dust, rock, and metal and that most occupy a belt between Mars and Jupiter. In contrast, comets were claimed to arrive from a home in deep space, most coming from an imagined “Oort Cloud” at the outermost reaches of the solar system, where they are supposed to have accreted from leftover dust and ices from the formation of the solar system. But now, “the locales of comets and asteroids may not be such a key distinction”, states Dan Vergano, reporting on the work of two University of Hawaii astronomers, Henry Hsieh and David Jewitt. In a survey of 300 asteroids lurking in the asteroid belt, the astronomers detected three objects that “look a lot like comets … ejecting little comet tails at times from their surfaces”. The three red circles in the illustration above describe the orbits of these bodies. Of course, this is not the first instance of an 'asteroid' sporting a cometary tail. The asteroid Chiron, orbiting between Saturn and Uranus, was seen to develop a coma and tail between 1988 and 1989. It is now officially classified as both an asteroid and a comet. Chiron belongs to a class of objects called 'Centaurs' crossing the orbits of various gas giants. Though they move on minimally eccentric orbits through a relatively remote and weak region of the Sun’s electric field, Wallace Thornhill and other electrical theorists believe these bodies should all be watched carefully for telltale signs of minor cometary activity. And in fact the asteroid 60558 Echeclus, discovered in 2000, did display a cometary coma detected in 2005, and it too is now classified as both an asteroid and a comet. In the electric view, there is no fundamental distinction between a comet and an asteroid, apart from their orbits. Comets are not primordial objects formed by impact accretion – an improbable and unfalsifiable model (“it happened long, long ago and far, far away”). Asteroids, comets and meteorites are all 'born' in interplanetary electrical events. Their distinctive orbital groupings and spectral features simply point to separate catastrophic events and to different planetary bodies involved in different phases of solar system history. A comet is simply an electrical display and was recognized as such by scientists in the 19th century. So an 'asteroid' on a sufficiently elliptical orbit will do precisely what a comet does—it will discharge electrically. 
What distinguishes the cometary 'asteroids', observed by the University of Hawaii astronomers, are the paths they follow, moving them through the radial electric field of the Sun to a greater extent than is typical of other bodies in the 'asteroid belt' (See chart above). Cometary effects may also be expected from an asteroid if it passes through the huge electric comet tail [called the magnetosphere] of a giant planet. The astronomers’ recent investigation only reinforces the argument of the electrical theorists: The electric model is eminently testable, with highly specific and unique predictions; and it has so far met every test provided by the space age.
http://thunderbolts.info/tpod/2006/arch06/060407cometasteroid.htm
13
14
Accuracy and precision
Accuracy and precision are terms applied to individual measurements or methods of measurement. Accuracy refers to how closely a measurement agrees with the true value it is trying to measure, whereas precision refers to how close repeated measurements are (or would be) to each other. Precision may refer to the degree to which a measurement is rounded before being reported, but can also be an inherent characteristic of the method of measurement being used. A system of measurement is called valid if it is both accurate and precise. In the context of weighing an apple, for example, a reported weight of 503.276 grams would be more precise than a reported weight of 500 grams; however, neither measurement would be accurate if the apple actually weighs 452 grams. In this case, a reported weight of 460 grams would be more accurate than the 500-gram reported weight. The scale itself in the preceding example (the method of measurement) would be considered accurate if repeated measurements of the weight of the apple were correct on the average (i.e., if their arithmetic mean matches the actual weight of the apple); the scale would be considered precise if repeated measurements agree with each other (i.e., if the standard deviation of the weights is small).
Systematic error and bias
If a method of measurement results in values that are consistently too large or too small, the method may be called biased. In this case, the method has some source of so-called systematic error. The bias of the method is the difference between its average (or mean) measurement and the true value of what was being measured; in the 452-gram apple example above, if the scale reports an average value of 500 grams when the apple is repeatedly weighed, then the bias of the scale would be 48 grams. A measurement method that gives a bias of zero is called unbiased. A method of measurement would have the highest possible degree of precision if repeated measurements always result in the same value. Most methods of measurement in the real world, however, have some source of random error, which makes repeated measurements disagree with one another even if the method is unbiased. To overcome random error, taking repeated measurements and averaging them can greatly improve the precision of the final measurement, but this will not affect its accuracy. Reporting measurements to an unrealistic degree of precision is a fallacy.
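The apple example can be simulated in a few lines of Python. This sketch is not from the original article; it assumes a hypothetical scale with a fixed 48-gram bias and some random error, and shows that averaging many readings shrinks the random spread but leaves the systematic bias untouched.

import random
import statistics

true_weight = 452.0          # grams (the apple example above)
bias = 48.0                  # systematic error: the scale reads high on average
noise_sd = 5.0               # random error of a single reading

random.seed(1)
readings = [true_weight + bias + random.gauss(0, noise_sd) for _ in range(100)]

mean_reading = statistics.mean(readings)
estimated_bias = mean_reading - true_weight   # accuracy of the method
spread = statistics.stdev(readings)           # precision of single readings

print(f"mean of 100 readings: {mean_reading:.1f} g")
print(f"estimated bias:       {estimated_bias:.1f} g (averaging does not remove it)")
print(f"spread of readings:   {spread:.1f} g (random error only)")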
http://wiki.ironchariots.org/index.php?title=Accuracy_and_precision&oldid=11666
13
11
The second Section concentrates on applications of remote sensing to geological studies. A list of principal uses begins this page. Special attention is given to ways in which remote sensing (especially through image classification) can aid in making geologic maps. The notion of "formation" is discussed and reasons given as to why this standard geologic map unit cannot be recognized as such in imagery alone. One of the pitfalls of making these maps solely from imagery - namely, the presence of soil and/or vegetation cover - is mentioned. Some typical spectral signatures of different rock types are displayed. Many readers of the Tutorial and particularly Sections 2, 5, 17, and 19, will find the geological concepts and underlying principles unfamiliar or even unknown to them. To assist these individuals in building a quick background, a primer or review of the essential ideas of Geology is offered as an added page, which you can access by clicking here. A similar opportunity is provided in Section 14 dealing with Principles of Meteorology.
Geologists have used aerial photographs for decades to serve as databases from which they can do the following:
1. Pick out rock units (stratigraphy)
2. Study the expression and modes of the origin of landforms (geomorphology)
3. Determine the structural arrangements of disturbed strata (folds and faults)
4. Evaluate dynamic changes from natural events (e.g., floods; volcanic eruptions)
5. Seek surface clues (such as alteration and other signs of mineralization) to subsurface deposits of ore minerals, oil and gas, and groundwater.
6. Function as a visual base on which a geologic map is drawn either directly or on a transparent overlay.
With the advent of space imagery, geoscientists now can extend that use in three important ways:
1) The advantage of large area or synoptic coverage allows them to examine in single scenes (or in mosaics) the geological portrayal of Earth on a regional basis
2) The ability to analyze multispectral bands quantitatively in terms of numbers (DNs) permits them to apply special computer processing routines to discern and enhance certain compositional properties of Earth materials
3) The capability of merging different types of remote sensing products (e.g., reflectance images with radar or with thermal imagery) or combining these with topographic elevation data and with other kinds of information bases (e.g., thematic maps; geophysical measurements and chemical sampling surveys) enables new solutions to determining interrelations among various natural properties of earth phenomena.
While these new space-driven approaches have not yet revolutionized the ways in which geoscientists conduct their field studies, they have proven to be indispensable techniques for improving the geologic mapping process and carrying out practical exploration for mineral and energy resources on a grand scale. We now consider several examples of geologic applications using these new approaches. We concentrate initially on how Landsat Thematic Mapper (TM) data for a local region in Utah are manipulated to identify different rock types, map them over a large area using supervised classification, and correlate their spatial patterns with independent information on their structural arrangement. Next, our focus changes to examination of geologic structures, particularly lineaments, as displayed in regional settings in the U.S., Canada, and Africa. 
Then, in Section 5, we will look at how space-acquired data fit into current methods of exploring for mineral and hydrocarbon deposits by considering a case study of a mineralized zone and at a large-area Landsat scene in Oklahoma. In Section 18, we will return to a geologic theme by examining landforms at regional scales (so-called mega-geomorphology), as the principal subject in considering how remote sensing is used in basic science studies. Most geologic maps are also stratigraphic maps, that is, they record the location and identities of sequences of rock types according to their relative ages. The fundamental rock unit is the formation (abbreviated as Fm or fm), defined simply as a distinct mappable set of rocks (if sedimentary, then usually layered) that has a specific geographic distribution. A formation typically is characterized by one or two dominant types of rock materials. The term "formation" is most commonly associated with strata, namely layers of sediments that have hardened into sedimentary rocks. Under most conditions, sediments are laid down in horizontal, or nearly horizontal, layers on sea floors, lake bottoms, and transiently in river beds. Here is a typical set of sedimentary layers exposed in a road cut (note that the layers have been cut and slightly offset by a break which is termed a "fault"): If we see sedimentary rocks inclined at more than a few degrees from the horizontal, we should suspect that these are involved in displacements from their original horizontal state by forces (tectonic) that cause the rocks to bend and curve (folds) or break (faults). Here is a roadcut along a Maryland highway that is passing through the fold belt of the Appalachians. As we shall see later in this Section, inclined layers can produce curved structures called anticlines (uparched) and synclines (downwarped). Here is an example of the latter (also present in the above Maryland roadcut): Any given formation is emplaced over some finite span of geologic time. We can approximate its age by the fossils (evidence of past life) that were deposited with it during the time in which these life forms existed. Age dating by determining the amounts of radioactive elements and their decay-daughter products can usually produce even more accurate age estimates. Another, less precise, approach to fixing the age (span) of a rock unit is to note its position in the sequence of other rock units, some of whose ages are independently known. We can correlate the units with equivalent ones mapped elsewhere that have had their ages worked out. This method tends to bracket the time in which the sedimentary formation was deposited but erosional influences may lead to uncertainties. The association of sedimentary layers with specific time intervals constitutes the field of stratigraphy. Igneous and metamorphic rocks also have time significance and are treated as rock units (some may retain layered characteristics) on geologic maps (which show all stratigraphic units in a legend). Remote-sensing displays, whether they are aerial photos or space-acquired images, show the surface distribution of the multiple formations usually present and, under appropriate conditions, the type(s) of rocks in the formations. The formations show patterns that depend on their proximity to the surface, their extent over the surveyed area, their relative thicknesses, their structural attitude (horizontal or inclined layers), and their degree of erosion. 
Experienced geologists can recognize some rock types just by their appearance in the photo/image. They identify other types from their spectral signatures. Over the spectral range covered by the Landsat TM bands, the types and ages of rocks show distinct variations at specific wavelengths. This is evident in the following spectral plots showing laboratory-determined curves obtained by a reflectance spectrometer for a group of diverse sedimentary rocks from Wyoming:
2-1: From these spectra, predict the general color of these four rock units: Niobrara Fm; Chugwater Fm; Frontier Fm; Thermopolis Fm.
2-2: What spectrally distinguishes the Mowry Fm from the Thermopolis Fm; the Jelm Fm from the White River Conglomerate?
A common way of mapping formation distribution is to rely on training sites at locations within the photo/image. Geologists identify the rocks by consulting area maps or by visiting specific sites in the field. They then extrapolate the rocks' appearance photographically or by their spectral properties across the photo or image to locate the units in the areas beyond the site (in effect, the supervised classification approach). In doing geologic mapping from imagery, we know that formations are not necessarily exposed everywhere. Instead they may be covered with soil or vegetation. In drawing a map, a geologist learns to extrapolate surface exposures underneath covered areas, making logical deductions as to which hidden units are likely to occur below the surface. In working with imagery alone, these deductions may prove difficult and are a source of potential error. Also, rock ages are not directly determined from spectral data, so that identifying a particular formation requires some independent information (knowledge of a region's rock types and their sequence). In exceptional instances, such as those to be shown on the next three pages, when geologic strata are turned on their side (from folding; discussed on page 2-5) so that the successive geologic units are visible as a sequence, the changes within and between each discrete unit can be measured in terms of some spectral property, as for example, variations in the reflectance of a given band, or a ratio of bands. When plotted as shown below, the results are tracings that resemble (analogously) those made from well logging of such properties as electric resistivity, permeability, magnetic intensity and other geophysical parameters. Here are two figures, the top showing the succession of sedimentary strata exposed along the Casper Arch in central Wyoming; the bottom being reflectance "logs" derived from spectral traverses along one of the lines in the upper image: In the lower diagram, the bottom unit is the Permian Phosphoria Formation, extending upward from the Triassic Chugwater Formation to the Frontier sandstone (Cretaceous) at the top. The left tracing is of TM band 3 (red), with 0% reflectance on the right extending to 70% on the left, and the right tracing goes from 0% on the left to 50% on the right. Before looking at some specific examples of the use of space imagery for geologic structure analysis, this is a good point to introduce one particular advantage of having space observing systems that can repetitively cover the same large regions over the four seasons. Two Landsat images appear below: one taken during the southern Winter in South Africa; the other during the height of Spring. 
The area includes Johannesburg, some of the gold mines in the Witwatersrand district, and the Pilanesburg pluton (near the top). In the wintertime, some of the underlying rock units fail to show distinctly because the entire scene has its vegetation (mostly grasslands) dormant. But with plant reawakening in Spring, different units have different vegetation types and these variably modify the colors displayed, thus revealing the more complex structures in the region. Collaborators: Code 935 NASA GSFC, GST, USAF Academy
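Returning to the spectral measurements discussed above, the band-reflectance and band-ratio ideas lend themselves to a very small numerical illustration. The Python sketch below is not part of the tutorial: the reflectance values for a red and a near-infrared band are made up, and the threshold is arbitrary; it simply shows the kind of ratio quantity that can be mapped, thresholded, or plotted along a traverse like the reflectance "logs" described earlier.

import numpy as np

# Hypothetical reflectance grids (values 0-1) for two bands of the same scene.
band_red = np.array([[0.21, 0.35], [0.18, 0.40]])
band_nir = np.array([[0.30, 0.28], [0.45, 0.33]])

# A simple band ratio; the small epsilon guards against division by zero.
ratio = band_nir / (band_red + 1e-6)

# Thresholding the ratio is a crude stand-in for separating cover/rock classes.
flagged = ratio > 1.2
print(ratio.round(2))
print(flagged)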
http://www.fas.org/irp/imint/docs/rst/Sect2/Sect2_1.html
13
13
A density map depicts a number of objects per unit area. It is almost always represented as a grid (in raster format). The calculation typically computes a weighted average of objects found within a specified distance (the "radius") where the weights depend on distance. The pattern of weights (which usually decrease with distance) is called the kernel. A generalization of this procedure further weights the density by a numerical attribute of the object, such as a count. For example, the objects may be points designating housing units, the counts may represent people in those units, and the resulting density is population per unit area. The results of a density calculation, because they are values per unit area, typically have different numeric ranges than the original data. Continuing the previous example, there may be one to 12 people in the housing units, but the number of people per square kilometer may run into many thousands. Densities can be converted back to counts: multiplying the density in a grid cell by the cell's area (not its side length) estimates the total in that cell (such as the total number of people living in it). Summing those counts over a region (a "zonal sum" operation) uses the density map to count things lying within the region. A good check of a density calculation is to perform this summation for the entire map and compare it to the count of the original objects (or the sum of their attributes, as appropriate). Except for edge effects, the two values should be equal. Often a GIS will compute densities in units per square meter but the results are needed in units per square kilometer, square mile, or some other unit of area. To make the conversion, multiply by the factor needed to convert the new area units to the old area units. For example, one square kilometer equals 1,000,000 square meters, so to convert a density grid from people per square meter to people per square kilometer, multiply all its values by 1,000,000. Misunderstanding the areal units of measure used in the GIS density calculation is a likely explanation when large discrepancies are found in the check described in the preceding paragraph. Density maps are often thought of as a form of interpolation, but this can be deceiving, because they have little in common with true interpolation methods (such as trend surfaces, IDW, splines, or natural neighbor), except that they all produce a grid that looks like a continuous surface. It is useful to think of a kernel density calculation as spreading the original objects out. The kernel function itself describes the spread. The larger the radius, the greater the spreading. Two choices are made when specifying a kernel density calculation: the shape of the kernel and its radius. The radius is the most important parameter: a large radius spreads everything so far out that the resulting density map shows little variation, whereas a small radius results in sharp isolated peak-like features with little consolidation or clustering of points. Often experimentation is required to select an appropriate radius. The shape of the kernel determines the apparent smoothness of the density map. A kernel with abrupt changes in weights (such as one with constant weights, dropping to zero beyond the radius) can create evident discontinuities in the density map. A gaussian kernel tapers off with distance so smoothly that its density map is guaranteed to be very smooth (in theory, having derivatives of all orders). Another term for a kernel density calculation is convolution.
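As a concrete illustration of the ideas above (spreading counts with a kernel, converting between counts and densities, and checking the total), here is a short Python sketch. It is not tied to any particular GIS: the grid, cell size, and counts are hypothetical, and scipy's Gaussian filter stands in for a generic kernel whose sigma plays the role of the radius.

import numpy as np
from scipy.ndimage import gaussian_filter

cell_size = 100.0              # metres on a side (hypothetical grid)
cell_area = cell_size ** 2     # m^2 per cell

# Hypothetical counts of people per cell (e.g. rasterised housing-unit points).
counts = np.zeros((50, 50))
counts[10, 12] = 30
counts[35, 40] = 12
counts[20, 25] = 5

# Kernel density: spread the counts with a Gaussian kernel, then express the
# result per unit area.
density_per_m2 = gaussian_filter(counts, sigma=4) / cell_area
density_per_km2 = density_per_m2 * 1_000_000   # 1 km^2 = 1,000,000 m^2

# Consistency check from the text: densities times cell area, summed over the
# whole grid, should reproduce the original total count (up to edge effects).
recovered_total = (density_per_m2 * cell_area).sum()
print(counts.sum(), round(recovered_total, 6))  # both approximately 47.0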
http://gis.stackexchange.com/tags/density/info
13
10
General Chemistry w/Lab II Experiment: VSEPR & 3D Molecular Structures
In this laboratory activity you will gain additional practice predicting the three-dimensional structure of molecules and ions without the use of models. You will also use your knowledge of geometry and bond angles to predict the relative stability of several cyclic compounds, and will compare the structures of some larger drug molecules. For small molecules the shape or geometry can be accurately predicted using Valence-Shell Electron-Pair Repulsion (VSEPR) theory. This model is discussed at some length in your text, pages 373-382. Be sure to read this section carefully prior to attempting this activity. In this exercise you will begin by predicting the geometry of several molecules and ions. You will then be able to check your work by viewing a computer generated 3D model of the molecule or ion. Be sure that you complete all work in your lab notebook. You will work with a partner for this exercise.
Part I - VSEPR
Inorganic Molecules: For each molecule or ion draw the Lewis structure. Then make a 3D sketch of the molecule or ion that shows the arrangement of all electron pairs about the central atom, name the arrangement, label each using the AXmEn notation, and predict its shape/geometry (including angles). Each substance has one central atom.
1. BrF5   2. ClF3   3. PBr4+   4. I3-   5. SO3   6. CCl2O   7. SF4   8. PCl3   9. XeF4   10. XeF2
Organic Molecules: These molecules all have more than one central atom. After drawing each Lewis structure, again make a 3D sketch of the molecule, name the arrangement about each central atom, label each central atom using the AXmEn notation, and predict its shape/geometry (including angles). Note that the formulas given below have been written so as to indicate how the atoms are bonded.
11. CH3CHOHCOOH   12. COOHCH2COOH   13. CH2CHCONH2   14. CH3CCCH2Cl
Cyclic Organic Compounds: The following compounds are all cyclic, with the carbon atoms arranged in a ring:
15. C3H6   16. C4H8   17. C6H12
Some of these are more stable than others. Given what you know about the possible geometries around carbon, and considering how well (or not) your models fit the ideal geometries, which of these compounds do you think is the most stable? The least stable? To answer these questions first draw the Lewis structures, look at the bond angles, and then explain your choice. Once you have finished predicting the geometry for each molecule or ion above, show your work to your instructor before continuing on to the next step.
Now it's time to check your work. Using MS Internet Explorer, go to the Molecular Models web page, at Okanagan University College, to find the correct structures. Once at the web page, select the link to the Formula Index. Use this index to look up each of the substances. (Note that the index does not show the charge of an ion.) Each will first appear in a ball and stick representation. To change the way the molecule is displayed, choose the desired representation from the options to the right of the display. (Note that the models displayed on this web site do not show nonbonding pairs.) You can rotate a molecule by dragging on any part of the model.
Part II - Structural Similarities
The three-dimensional structure of a molecule is an essential part of its chemical function and reactivity. Sometimes molecules with similar structures have similar properties. Other times they are vastly different. Consider the following drugs. Two of them have very similar 3D structures. Which two are they? 
To find out, use MS Internet Explorer to return to the Molecular Models web page, and select Drugs. The drugs are listed by the number of carbon atoms. Compare the structures (Again, you can rotate a molecule by dragging on any part of the model. It is easiest to compare structures by opening two different windows and viewing the structures side by side.), and try to find the two which are most alike. In your lab analysis explain your choice in a few brief sentences.
Analysis and Report
In your report be sure that you answer the questions posed above, and include a paragraph summarizing the concepts you have learned in this exercise.
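As a self-check aid (not part of the original lab handout), the short Python lookup below maps the most common AXmEn designations to their standard VSEPR shape names; given the AXmEn label you assign to a molecule, it returns the geometry you should expect. It covers only the usual cases up to six electron domains.

# Standard VSEPR shapes keyed by AXmEn notation (m bonded atoms, n lone pairs).
VSEPR_SHAPES = {
    "AX2":   "linear",
    "AX3":   "trigonal planar",
    "AX2E":  "bent",
    "AX4":   "tetrahedral",
    "AX3E":  "trigonal pyramidal",
    "AX2E2": "bent",
    "AX5":   "trigonal bipyramidal",
    "AX4E":  "seesaw",
    "AX3E2": "T-shaped",
    "AX2E3": "linear",
    "AX6":   "octahedral",
    "AX5E":  "square pyramidal",
    "AX4E2": "square planar",
}

def shape(designation: str) -> str:
    """Return the VSEPR shape name for an AXmEn designation, e.g. 'AX3E2'."""
    return VSEPR_SHAPES.get(designation, "not in table")

# Example: a molecule you have labelled AX5E should come out square pyramidal.
print(shape("AX5E"))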
http://www.instruction.greenriver.edu/knutsen/chem150/vsepr.html
13
14
Geometry Chapter 4.6: Isosceles, Equilateral, and Right Triangles
A) Base angles - the two angles adjacent to the base of an isosceles triangle.
B) Vertex angle - the angle opposite the base of an isosceles triangle.
A) Base Angles Theorem: if two sides of a triangle are congruent, then the angles opposite them are congruent.
B) Converse of the Base Angles Theorem: if two angles of a triangle are congruent, then the sides opposite them are congruent.
C) Hypotenuse-Leg Congruence Theorem (HL): if the hypotenuse and a leg of a right triangle are congruent to the hypotenuse and a leg of a second right triangle, then the two triangles are congruent.
Example: Given that ΔABC and ΔDEF are both right triangles, BC ≅ EF and AC ≅ DF, then ΔABC ≅ ΔDEF.
A) If a triangle is equilateral, then it is equiangular.
B) If a triangle is equiangular, then it is equilateral.
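A quick worked example (added here; not from the original notes), combining the Base Angles Theorem with the triangle angle sum: if an isosceles triangle has a vertex angle of 40°, the two base angles are congruent, so letting each measure x gives
2x + 40° = 180°, so x = 70°.
Each base angle therefore measures 70°.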
http://mcuer.blogspot.com/2007/07/geometry-chapter-46-isosceles.html
13
32
Lesson Plan 1: Left Hand, Right Hand - Solving Systems of Equations
Teachers will need the following: Students will need the following:
- Notebook or journal
- Graphing calculators
1. Introduce the lesson by asking students to think about linear situations the class has already studied. Ask several students to each share a particular situation. (Possible responses include: the time in seconds between seeing lightning and hearing the thunder as a function of one's distance from the lightning; the cost of renting a car as a function of the miles one drives.)
2. Ask students to work in groups and brainstorm two different linear situations that they could plot on the same graph so that the two situations could be compared.
3. Solicit suggestions from the groups. Clarify and make sure everybody understands all of the scenarios and why it would be reasonable to want to compare and/or contrast them. If some of the scenarios aren't feasible or reasonable, this should be part of the discussion as well. (Possible responses include: comparing costs of driving two different cars as a function of the distance you drive; comparing costs of renting a car from two different rental companies as a function of the miles you drive.)
1. Hand out the Right Hand/Left Hand activity sheets and ask students to turn to the full-page grid. Students should hold the paper in such a way that there are more squares running horizontally than vertically, and they should hold their pencil in their right hand. When told to begin, they should start writing M, D, M, D - one letter per square - across the page for three seconds. (Teachers can use any two-letter combination - in the video Jenny used the letters for the abbreviation of Maryland.) When time is up, students should put down their pencils.
2. Students should record the results of the first round in the table provided on the activity sheet (e.g., time: 3 seconds, squares: 7). They will repeat the process right-handed four more times with the teacher timing them for different intervals (ranging from three to 10 seconds) each time. Students should record the resulting data in their table after each effort. (Note: The data should approximate a linear function, but there may be some variability caused by inconsistencies due to human error.)
3. Repeat the experiment with the left hand, recording the data each time.
4. Ask the students to draw a scatterplot for the two sets of data on the same graph, using the grid provided on the activity sheet.
5. Discuss whether or not the point (0, 0) should be part of the graph. Students should tell you that they do think the origin should be a part of the data because in zero seconds, zero letters can be written.
6. Have the students find an equation for the line of best fit for both sets of data. Most of the students will have two lines that intersect at the origin, but have different slopes.
7. Have students gather in groups and have each group discuss the similarities and differences of their graphs.
8. After the group discussions, ask students what an ambidextrous person's graph would look like. Have a student come to the front of the class and demonstrate what he or she thinks the two lines would look like.
9. Ideally, the two lines should lie on top of each other because an ambidextrous person would fill the same number of squares with each hand over the same period of time.
10. At this point, the class has looked at individual student graphs and at a graph for an ambidextrous person.
Students should realize that most of the graphs had one point of intersection - the origin - and they should understand that this intersection (0, 0) is the one point where the two sets of data are the same. They should also realize that an ambidextrous person's two lines would lie on top of one another and would thus share an infinite number of intersections (or solutions).
11. Hold a discussion in which the class investigates the graphs for several different scenarios. Using the three transparencies, show a graph on the overhead that has two intersecting lines, with two different slopes and two different y-intercepts. Ask students to think about the graph as it relates to solving systems of equations.
12. Show a graph of two parallel lines and ask students to draw conclusions. Students might comment, for instance, that this scenario will not work for the experiment they just performed. They should note that because these two lines never intersect, there is not a common point or solution.
13. Show a graph with two lines on top of each other. Ask students to draw some conclusions about the graph.
14. Ask students to think about all the possibilities when two linear equations are graphed on the same coordinate plane. At the end of the discussion, they should conclude that there are three possible scenarios:
- two intersecting lines
- two parallel lines
- two lines on top of each other (coincident lines).
15. Discuss the meaning of a "solution" for each of the three scenarios. At the end of the discussion, they should be able to understand that the two intersecting lines have one solution, the two parallel lines have no solution, and the two coincident lines have an infinite number of solutions.
16. Present the "Ace vs. Better Car Rentals" graph on the overhead. Ask the students to work in their groups to come up with a story, to discuss what is happening in this scenario, and to draw conclusions. The students should notice that the two rental companies charge different initial fees, making the y-intercepts different. They also charge different rates per mile, which give the lines different slopes. If a person drives 300 miles in one day, the cost is the same for both cars. If the person drives less than 300 miles, then Ace Car Rental is cheaper; more than 300 miles, Better Car Rental is cheaper.
17. Ask the groups to share their discussions. Ask the students to discuss all the different concepts that arose in the lesson, and have each group write a summary of what they learned.
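For teachers who want to verify the break-even point numerically, here is a minimal sketch in R (added; not part of the original lesson). The fixed fees and per-mile rates below are hypothetical values chosen only so that the two cost lines cross at 300 miles, matching the scenario described above.
# Hypothetical rates: Ace = $30 + $0.20/mile, Better = $45 + $0.15/mile
ace    <- function(m) 30 + 0.20 * m
better <- function(m) 45 + 0.15 * m
# Solve 30 + 0.20*m = 45 + 0.15*m as a 2x2 linear system in (miles, cost):
#   0.20*m - cost = -30
#   0.15*m - cost = -45
A <- matrix(c(0.20, -1,
              0.15, -1), nrow = 2, byrow = TRUE)
b <- c(-30, -45)
solve(A, b)              # miles = 300, cost = $90 for both companies
ace(100) < better(100)   # TRUE: under 300 miles Ace is cheaper
ace(400) < better(400)   # FALSE: over 300 miles Better is cheaper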
http://www.learner.org/workshops/algebra/workshop3/lessonplan1b.html
13
10
Over the years, there have been various hypotheses about the origin of the Moon. Historically, the major theories have been fission, capture, giant impact, and co-accretion. A chemical analysis of lunar rocks may force scientists to revise the leading theory for the Moon's formation: that the satellite was born when a Mars-sized body smacked into the infant Earth some 4.5 billion years ago. If that were the case, the Moon ought to bear the chemical signature of both Earth and its proposed second parent. But a study published today in Nature Geoscience suggests that the Moon’s isotopic composition reflects only Earth's contribution. Junjun Zhang at the University of Chicago in Illinois and her colleagues used a mass spectrometer to make the most precise measurement so far of the relative abundance of titanium-50 and titanium-47 in Moon rocks gathered by the Apollo missions in the 1970s. The authors report that the lunar ratio of the two isotopes is identical to that found in Earth’s mantle, within about 4 parts per million. This presents a conundrum for the lunar-formation model, Zhang says, because any Mars-sized body that might have collided with the fledgling Earth is believed to have been chemically distinct. Studies of meteorites — the modern stand-ins for planet-sized bodies that once roamed the Solar System — indicate that such objects had an isotopic titanium abundance that could have deviated from the terrestrial value by as much as 600 parts per million. And because simulations suggest that the second body contributed more than 40% of the Moon’s bulk, the lunar isotopic ratio shouldn’t mirror the terrestrial value so closely. Zhang and colleagues chemical analysis is not the first to challenge the impact theory. Researchers have long known that the isotopic ratio of oxygen in Moon rocks bears the same signature as Earth’s mantle. But because oxygen is easily vaporized in a collision, it could have been readily exchanged between Earth and the cloud of vapor and magma that was produced by the impact and coalesced to form the moon, allowing both bodies to reach the same isotopic abundance. Titanium does not vaporize as easily and it would have been more difficult — although not impossible — for both bodies to have reached the same ratio, notes Zhang. Other models that merit consideration, Zhang says, include the fission model, according to which the Moon was spun out of Earth’s mantle early on, when the planet’s centrifugal force might have exceeded its gravitational force. But Canup says that although the collision model may need revision, it need not be abandoned. She has modeled a collision between Earth and a renegade protoplanet about twice the mass of Mars — heavier than previously considered. A more massive second body would have substantially altered Earth's original isotopic composition, leading to a newborn Moon and evolving Earth that are more similar than in previous simulations. Zhang agrees there are still ways for the collision model to work. If the fledgling Moon had cooled more slowly than assumed, there could have been enough time for an exchange of titanium isotopes between the cloud of vapor and magma and the Earth. The Giant Impact Hypothesis emerged as the leading model after a conference about the Moon origin in Kona, Hawaii. In the 1990s, Robin Canup picked up the research and created simulations to determine possible scenarios that might explain the formation of the Moon. 
The theory does explain many matters such as lack of iron, oxygen isotope ratio, and angular momentum. For further information: http://www.nature.com/news/question-over-theory-of-lunar-formation-1.10300
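To see why the titanium result is so constraining, here is a rough back-of-the-envelope sketch in R (my own illustration, not a calculation from the paper). It assumes simple linear mixing of the two bodies' isotopic signatures and uses the figures quoted in the article: an impactor contribution of at least 40% of the Moon's bulk and a possible impactor-Earth difference of up to 600 parts per million.
# Expected lunar offset under simple mixing: fraction from impactor x possible difference
0.40 * 600      # ~240 ppm expected
# versus the ~4 ppm agreement actually measured -- roughly 60 times smaller
240 / 4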
http://www.enn.com/enn_original_news/article/44178
13
11
Which color are there more/fewer of? Graphing Skittles (or M&M's)
Objective: To solve problems by making and using data from a bar graph.
Materials for each child: hand-out of graph, small bag of Skittles
Materials for teacher: bag of Skittles
Review "bar graphing using data" on the overhead projector. Children will sit on the carpet facing the overhead projector. Tell the students that we want to find out which color this class likes most and which color this class likes least from red, orange, yellow, green, and purple (the colors in either Skittles or M&M's). Have students think about which color they like best from the colors we have to choose from. Ask each student which color they chose and mark it on the overhead in the proper column/box. After collecting data, model bar graphing. Fill in one square in the proper column for each color chosen. When graphing is complete, analyze the graph with the children. Ask children which color has the most votes and which color has the fewest.
Explain that the children are going to be making their own bar graphs. Demonstrate the following for one column on the overhead. Tell the children that they will each receive a blank graph and a bag of candy. First, they will need to separate the candy into groups by color. For each color of candy, they will need to fill in a square on their graph in the corresponding column using the same color of crayon. When they are finished, they are to raise their hand so the teacher can check their work and ask a few questions (e.g., which color has more/fewer, how many are there of _______ color, etc.). Once their work has been checked, they may enjoy the candy! When everyone is finished, have the children discuss their results with the rest of the class. Did everyone have the same results?
Assessment: Individual assessment with children after they have finished their bar graphs. Do their data and bar graph match? Do they understand the concepts of more and fewer when looking at a bar graph? Can they determine the amount in a column without counting?
Extension: Create a whole-class bar graph of all of the candy. Which color were there more/fewer of out of the entire class?
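For teachers who later want to build the whole-class graph quickly, here is a minimal sketch in R (added; not part of the original lesson). The candy counts are made-up example numbers, not data from any class.
# Hypothetical whole-class candy counts by color
counts <- c(red = 12, orange = 9, yellow = 7, green = 14, purple = 10)
barplot(counts,
        col  = c("red", "orange", "yellow", "green", "purple"),
        main = "Class candy colors",
        ylab = "Number of candies")
names(which.max(counts))   # color with the most
names(which.min(counts))   # color with the fewest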
http://teachers.net/lessons/posts/2024.html
13
16
Algebra, 9th grade: if I have the problem 3x+14=3x, would it solve to be 0=14?
If we consider each side of the equation separately, we see that they each represent straight lines in the form y=mx+b (the slope-intercept form). The equation y=3x+14 is a line with slope of 3 and y-intercept of 14. The equation y=3x is a line with slope of 3 and y-intercept of 0. These lines have the same slope so they must be parallel, and therefore they have no points in common. In other words, they never cross! There is no value of x that satisfies the equation 3x+14=3x. For short, we can just say "no solution." If we change the equation slightly so that 3x+14=3x+14 we get a very different answer. If we subtract 3x from both sides we get 14=14, which is always true. We could also subtract 14 from both sides then divide both sides by 3 and we get x=x, which is also always true. Using the graphical approach it's easy to see that each side of the equation is the SAME line. This means the two lines have ALL points in common and therefore EVERY value of x satisfies the equation!
A solution would be a value of x for which the equation is true. Since subtracting 3x from both sides gives the equation 14 = 0, which is false, there is no solution.
One can also prove that there is no solution by contradiction: if there were a solution, then after simplification we would get 14 = 0, which is false. So we conclude that there is no solution.
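A short numerical sketch of the same conclusion, in R (added as an illustration): writing each side as a line and asking for an intersection produces a singular system, which is the algebraic signature of parallel lines.
# y = 3x + 14 and y = 3x, rewritten as 3x - y = -14 and 3x - y = 0
A <- matrix(c(3, -1,
              3, -1), nrow = 2, byrow = TRUE)
det(A)                     # 0: the coefficient matrix is singular
try(solve(A, c(-14, 0)))   # solve() fails -- no point satisfies both equations
# By contrast, x = x (or 14 = 14) is true for every x: infinitely many solutions.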
http://www.wyzant.com/answers/2527/if_i_have_a_problem_3x143x_it_would_solve_to_be_014
13
14
This activity provides students with the plans for making a one-axis accelerometer that can be used to measure acceleration in different environments ranging from +3 g to -3 g. The device consists of a triangular-shaped poster board box they construct with a lead fishing sinker suspended in its middle with a single strand of a rubber band. Before using the device, students must calibrate it for the range of accelerations it can measure. The pattern for making the accelerometer box is included in this guide. It must be doubled in size. It is recommended that several patterns be available for the students to share. To save on materials, students can work in teams to make a single accelerometer. Old file folders can be substituted for the poster board. The student reader can be used at any time during the activity. The instructions call for three egg-shaped sinkers. Actually, only one is needed for the accelerometer. The other two are used for calibrating the accelerometer and can be shared between teams.
When the boxes are being assembled, the three sides are brought together to form a prism shape and held securely with masking tape. The ends should not be folded down yet. A rubber band is cut and one end is inserted into a hole punched into one of the box ends. Tie the rubber band to a small paper clip. This will prevent the end of the rubber band from sliding through the hole. The other end of the rubber band is slipped through the sinker first and then tied off at the other end of the box with another paper clip. As each rubber band end is tied, the box ends are closed and held with more tape. The two flaps on each end overlap the prism part of the box on the outside. It is likely that the rubber band will need some adjustment so it is at the right tension. This can be easily done by rolling one paper clip over so the rubber band winds up on it. When the rubber band is lightly stretched, tape the clip down.
After gluing the sinker in place on the rubber band, the accelerometer must be calibrated. The position of the sinker when the box is standing on one end indicates the acceleration of 1 gravity (1 g). By making a paper clip hook, a second sinker is hung from the first, and the new position of the first sinker indicates an acceleration of 2 g. A third sinker indicates 3 g. Inverting the box and repeating the procedure yields positions for negative 1, 2, and 3 g. Be sure the students understand that a negative g acceleration is an acceleration in a direction opposite gravity's pull. Finally, the halfway position of the sinker when the box is laid on its side is 0 g.
Students are then challenged to use their accelerometers to measure various accelerations. They will discover that tossing the device or letting it fall will cause the sinker to move, but it will be difficult to read the scale. It is easier to read if the students jump with the meter. In this case, they must keep the meter in front of their faces through the entire jump. Better still would be to take the accelerometer on a fast elevator, on a trampoline, or on a roller coaster at an amusement park.
Acceleration is the rate at which an object's velocity is changing. The change can be in how fast the object is moving, a direction change, or both. If you are driving an automobile and press down on the gas pedal (called the accelerator), your velocity changes. Let's say you go from 0 to 50 kilometers per hour in 10 seconds. Your acceleration is said to be 5 kilometers per hour per second.
In other words, each second you are going 5 kilometers per hour faster than the second before. In 10 seconds, you reach 50 kilometers per hour. You feel this acceleration by being pressed into the back of your car seat. Actually, it is the car seat pressing against you. Because of the property of inertia, your body resists acceleration. You also experience acceleration when there is a change in direction. Let's say you are driving again but this time at a constant speed in a straight line. Then, the road curves sharply to the right. Without changing speed, you make the turn and feel your body pushed into the left wall of the car. Again, it is actually the car pushing on you. This time, your acceleration was a change in direction. Can you think of situations in which acceleration is both a change in speed and direction? The reason for this discussion on acceleration is that it is important to understand that the force of gravity produces an acceleration on objects. Imagine you are standing at the edge of a cliff and you drop a baseball over the edge. Gravity accelerates the ball as it falls. The acceleration is 9.8 meters per second per second. After 5 seconds, the ball is traveling at a rate of nearly 50 meters per second. To create a microgravity environment where the effects of gravity on an experiment are reduced to zero, NASA would have to accelerate that experiment (make it fall) at exactly the same rate gravity does. In practice, this is hard to do. When you jump into the air, the microgravity environment you experience is about 1/100th the acceleration of Earth's gravity. The best microgravity environment that NASA's parabolic aircraft can create is about 1/1000th g. On the Space Shuttle in Earth orbit, microgravity is about one-millionth g. In practical terms, if you dropped a ball there, the ball would take about 17 minutes just to fall 5 meters! Accelerometer Construction and Calibration The instructions below are for making a measuring device called an accelerometer. Accelerometers are used to measure how fast an object changes its speed in one or more directions. This accelerometer uses a lead weight suspended by a rubber band to sense changes in an object's motion.
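Before the construction steps, a small sketch in R (added here) ties together the constant-acceleration numbers quoted in the reading above; only the values already mentioned are used.
# Car: 0 to 50 km/h in 10 s -> acceleration of 5 (km/h) per second
50 / 10
# Falling ball: v = a * t with a = 9.8 m/s^2
9.8 * 5                  # ~49 m/s after 5 seconds
# Space Shuttle microgravity: a is about one-millionth of g; time to fall 5 m from d = 0.5*a*t^2
a <- 9.8e-6
t <- sqrt(2 * 5 / a)     # seconds
t / 60                   # ~17 minutes, as stated in the reading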
http://quest.nasa.gov/space/teachers/microgravity/2accel.html
13
16
Linear
In mathematics, a linear function f(x) is one which satisfies the following two properties (but see below for a slightly different usage of the term):
- Superposition: f(x + y) = f(x) + f(y)
- Homogeneity: f(αx) = αf(x) for all α
The concept of linearity can be extended to linear operators, which are linear if they satisfy the superposition and homogeneity relations. Examples of linear operators are del and the derivative. When a differential equation can be expressed in linear form, it is particularly easy to solve by breaking the equation up into smaller pieces, solving each of those pieces, and adding the solutions up.
In a slightly different usage to the above, a polynomial of degree 1 is said to be linear. Over the reals, a linear function is one of the form:
- f(x) = mx + c
Note that this usage of the term linear is not the same as the above, because linear polynomials over the real numbers do not in general satisfy either superposition or homogeneity. In fact, they do so if and only if c = 0.
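A quick numerical illustration of the final point, in R (added here): f(x) = mx + c satisfies superposition and homogeneity only when c = 0. The particular values of m and c are arbitrary examples.
f <- function(x, m = 3, c = 5) m * x + c   # degree-1 polynomial with c != 0
f(2 + 4) == f(2) + f(4)                    # FALSE: superposition fails (23 vs 28)
f(10 * 2) == 10 * f(2)                     # FALSE: homogeneity fails (65 vs 110)
g <- function(x, m = 3) m * x              # the same slope through the origin (c = 0)
g(2 + 4) == g(2) + g(4)                    # TRUE
g(10 * 2) == 10 * g(2)                     # TRUE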
http://www.encyclopedia4u.com/l/linear.html
13
18
How to Construct Vectors in R
A vector is the simplest type of data structure in R. The R manual defines a vector as a single entity consisting of a collection of things. A collection of numbers, for example, is a numeric vector - the first five integer numbers form a numeric vector of length 5. To construct a vector, type the following in the console:
> c(1,2,3,4,5)
[1] 1 2 3 4 5
In constructing your vector, you have successfully used a function in R. In programming languages, a function is a piece of code that takes some inputs and does something specific with them. In constructing a vector, you tell the c() function to construct a vector with the first five integers. The entries inside the parentheses are referred to as arguments.
You also can construct a vector by using operators. An operator is a symbol you stick between two values to make a calculation. The symbols +, -, *, and / are all operators, and they have the same meaning they do in mathematics. Thus, 1+2 in R returns the value 3, just as you'd expect.
One very handy operator is called sequence, and it looks like a colon (:). Type the following in your console:
> 1:5
[1] 1 2 3 4 5
That's more like it. With three keystrokes, you've generated a vector with the values 1 through 5. Type the following in your console to calculate the sum of this vector:
> sum(1:5)
[1] 15
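Going one small step further (my own follow-on example, in the same console style): arithmetic operators in R work element-wise on whole vectors, which is much of what makes them convenient.
> (1:5) * 2
[1]  2  4  6  8 10
> 1:5 + 10:14
[1] 11 13 15 17 19
> mean(1:5)
[1] 3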
http://www.dummies.com/how-to/content/how-to-construct-vectors-in-r.navId-812016.html
13
11
QR Codes can be an excellent way of directing students to content. With a quick click of a smartphone QR Code Reader app, you can reveal a hidden code instantly and experience a popular trend in the Japanese culture. How can we use this tool in the classroom? Check out the following assignment: I encourage you to download a free QR Code Reader App and simply take a picture of the image. You will instantly be directed to the appropriate article. For this assignment, you will use the argumentative and rhetorical “tools” we have examined up to this point in class. Your task is to scan the QR Code displayed on the interactive board and analyze the structure of the argument. Once you have a good sense of how the argument is constructed and have determined whether that construction is or is not effective in making the argument, write an analysis of the argument. Remember, part of your task in writing this analysis isn’t just to show how the argument was constructed, but is to argue that your analysis is accurate and logical. Your task is not to argue with the argument, but to consider how that argument is made. As we prepare to transition to Common Core State Standards, it is imperative our students learn and understand how to construct an effective argument. The QR Code in this assignment simply allows students easy access to the article and adds a little mysterious creativity to the lesson. Whether the code includes content from a book in the library, a YouTube video, a seminar room at a given time, or a fun scavenger hunt, students will be directed immediately with a quick pic on a smartphone. What are some effective ways to use a QR Code in your classroom?
http://brainvibeforeducators.blogspot.com/2011/07/qr-codes-can-add-mysterious-creativity.html
13
29
The title of this article is viewed by some people as a contradiction of terms. Too often science fair projects are dreaded by teachers, librarians, and parents, as well as by the students. This is an unfortunate situation usually resulting from a lack of instructional materials to allow students, with a minimum of assistance depending on their age, to develop the project themselves. Science is a search for answers. Science projects are good ways to learn more about science as students search for answers to specific problems. Instructional materials are needed to give guidance and provide ideas, but students must do their part in the search by planning experiments, finding and recording information related to the problem, and organizing the collected data to find an answer. Presenting the project at a science fair can be a rewarding experience if the exhibit has been properly prepared. Trying to assemble a project overnight, however, only results in frustration and a poor grade. The student is also cheated out of the fun of being a science detective. Solving a scientific mystery, like solving a detective mystery, requires planning and the careful collecting of data. Students should be encouraged to start their projects with curiosity and a desire to learn something new. The following sections provide suggestions of how students can get started on this scientific quest. I divide a sample project into its parts and provide a format that can be used to guide students through other projects, regardless of the topic. Selecting a Topic Encourage students to look through many science fair instructional books before choosing the topic they like best and want to know more about. The most helpful books contain sample projects, and each project begins with a brief summary of the objectives to be determined. Regardless of the problem a student eventually chooses to solve, the discovery process will make the student more knowledgeable Keep a Journal I encourage students to purchase a bound notebook, which serves as their journal. Everything relating to the project should be kept in this book. It should contain all original ideas, as well as ideas obtained from books or from other people, like teachers and scientists. It should also include descriptions of experiments, as well as diagrams, photographs, and written observations of all results. Every entry should be dated and as neat as possible. A neat, orderly journal provides a complete and accurate record of the project from start to finish. It is also proof of the time spent sleuthing out the answers to the project's problem. Information from this journal can be used to write a project report, and the journal itself can be part of the project display. Creating an A+ Project I find that most students need more direction than just being provided a list of possible science fair topics to choose from. Most students are more successful if they start with a simple experiment and, using it as the core, develop their project around it. This general procedure in developing any experiment into a possible A+ project is described in the following sample science project. The first section is a "cookbook" experiment - follow the recipe and the result is guaranteed. In fact, the expected results and an explanation of why the results were achieved are given. This experiment not only provides a foundation experiment on which to build, it also can be considered part of the research material. There are two basic types of reproduction of living organisms. 
One type, sexual reproduction, requires the union of male and female sex cells, or gametes (sperm and eggs) in the formation of a new organism. The second type is asexual reproduction in which there is no union of sex cells. In this project, you will study one kind of asexual reproduction by examining the ability of plants to reproduce by vegetative propagation. You will also discover some methods and special plant organs by which plants can reproduce asexually. To produce a new plant by fragmentation. 2 1-qt (1-L) jars Fill the jars three-fourths full with distilled water. Use the scissors to cut four healthy stems with healthy leaves from the geranium plant. Place two stems, cut ends down, into each jar of distilled water (Fig. 1). Place the jars where they will receive direct sunlight. Observe the cut ends of the stems daily for two to three weeks. Transfer the cuttings to flowerpots filled with potting soil for further growth. Keep the plants watered and observe their growth for several months. In 10 to 14 days, small roots can be seen growing from the ends of the stems (Fig.1). These roots continue to grow. The potted stems mature into plants resembling the original (parent) plant. Why? Asexual reproduction is a method of reproducing a new organism from one parent. One type of asexual reproduction is vegetative propagation (the production of a new organism from a nonsexual part of one parent). In multicelled organisms such as plants, broken pieces from the plants can develop into new plants. Special roots, called adventitious roots, develop directly from stems or leaves instead of from the normal root system. Fragmentation is an example of vegetative asexual reproduction. In this process, a new plant grows from a part broken from a parent plant. The cutting taken from the geranium plant grows into a plant identical to the parent plant. Asexual reproduction has several advantages. First, this method can be used to grow identical plants faster and more successfully than a method that relies on seed germination. Second, seedless fruit can be produced and propagated via vegetative reproduction. Third, asexual reproduction preserves the status quo in that the offspring are always exactly like the parent. By making small changes to some part of the original experiment, a student can attain different results. The first step in changing the original experiment into a new science fair project is to try new approaches. In this stage of the project, questions are provided to encourage the student to explore the effects of changing one variable at a time, but answers are not provided. In this part of the asexual reproduction project, for example, here are three different ideas for expanding the original experiment. Do leaves affect the ability of a stem to reproduce by fragmentation? Repeat the experiment two times, first using stems with no leaves, and then using stems with a greater number of leaves than the stems used in the original experiment. Does the type of plant affect its ability to reproduce by fragmentation? Repeat the original experiment using stems from different types of houseplants. Discuss the project with a professional at a nursery and secure sample cuttings from different types of plants. For vegetative propagation to occur, adventitious roots must form. The development of these special roots depends on a hormone called auxin. Can the production of adventitious roots be hastened by pretreating the cuttings with a synthetic auxin solution? 
Repeat the original experiment using synthetic auxin purchased from a nursery. Follow the procedure on the product's packaging for treating Design Your Own Experiment Other known related experiments, or experiments designed by the student, can be performed to further investigate and solve the project problem. Student-designed experiments should receive adult approval before they are performed. The following suggestions for new experiments are based on answering the question "What parts of a plant can grow into an offspring?" The procedures allow you to determine the ability of plants to propagate from roots, stems, and leaves. Grow plants from carrot tops (roots) by filling a shallow container with sand (Fig. 2). Thoroughly wet the sand with water and insert the cut end of the carrot tops into the wet sand. Place the container in a lighted area and keep the sand wet. Observe the tops of the carrots for several weeks. Transfer them to a deeper container for further maturing of the plants. Check with a professional at a nursery for the best growing soil for carrots. Bulbs are plants with short, underground stems and thick, fleshy leaves. The leaves store food for the growth of the plant. Plant several bulbs, such as onions, tulips, daffodils, or lilies, in potting soil. After two weeks, make daily observations of one of the bulbs by removing and carefully brushing away the soil. Allow the other bulbs to continue growing undisturbed. Tubers, such as white potatoes, are plants with swollen, underground stems. The "eyes" on a potato are tuber buds from which a new plant will grow. Leave some potatoes in a closed cabinet for several weeks. Make daily observations of the eyes on the potatoes. Other ways to propagate plants from potatoes include (a) cutting the eyes from a potato and planting them in soil; and (b) placing four toothpicks around the center of a sweet potato and placing the potato, pointed side down, into a jar of water (Fig. 3). Place a bryophyllum or jade plant leaf on the surface of potting soil. Keep the soil moist and observe the edges of the leaf. Display photographs of the different stages of development of the plants in each of the preceding experiments. Display the pictures along with data tables of daily growth measurements. Use the healthier plants as part of the project display. Check It Out Research, the process of collecting information about the topic being studied, is an important part of the project. Research is not the last step but a continuing process starting with the formulation of the project purpose and hypothesis and continuing with the explanation of all experimental results. Here are some questions and tips to guide students in seeking out specific information related to the project. All McIntosh apple trees are clones of an original tree found 150 years ago on the farm of John McIntosh in Ontario, Canada. This cloning has been accomplished by grafting. Find out more about grafting of plants. What is a scion? Why is the stock often grown from seed? What are the advantages of grafting? Strawberries grow from runners, stems that grow horizontally rather than vertically. Find out more about this type of vegetative propagation. What are rhizomes? How does a stolon differ from a rhizome? Spores are small bodies containing a nucleus and a small amount of cytoplasm. Find out more about sporulation, the asexual production of spores. How do spores ensure survival of the plant during unfavorable environmental conditions? What type of plant produces spores? 
German biologist Theodor Boveri's experiments showed that heredity is a result of the nuclear material called chromosomes. Through the continuous process of cell division called mitosis, the blueprint material in chromosomes is duplicated. Use a biology text to find out more about Boveri's experiment and about the process of mitosis. How many steps are in the process. What happens in each step? Students must keep in mind that while their display represents all that they have done, it must tell the story of the project in such a way that it attracts and holds the viewers' interest. It should be kept simple. All the information should not be crammed into one place. To conserve space on the display, and still exhibit all of their work, students might choose to keep some of the charts, graphs, pictures, and other materials in the journal instead of on the display While the display should explain everything about a project, students also must discuss their project and answer the judges' questions. Practicing a speech in front of friends who will ask questions helps to "polish" the presentation. A presenter should never respond to any question with "I do not know." Instead, a student should admit that a particular piece of information was not discovered during the research and then offer other, relevant and interesting information that was found. My final advice to each student is be proud of your project, and approach the judges with enthusiasm about your work. Science Fair Books 1001 Ideas for Science Projects. Marion Brisk. 1992. 242 pages. Grades 8 and up. Soft cover. 45-1594C Each . . $12.00 The Complete Handbook of Science Fair Projects. Julianne Bochinski. 1991. 224 pages. Grades 7 and up. Soft cover. 45-1596 Each . . $12.95 666 Science Tricks and Experiments. Robert J. Brown. 1984. 427 pages. Grades 1-9. Soft cover. 45-1596C Per set . . $22.95 Science Fair: Developing a Successful and Fun Project. Maxine Iritz. 1987. 89 pages. Grades 7 and up. Soft cover. 45-1596E Each . . $10.95 The Thomas Edison Book of Easy and Incredible Experiments. James Cook. 1988. 146 pages. Grades 4 and up. Hard cover. 45-1597B Each . . $24.95 Simple Science Experiments with Everyday Materials. Muriel Mandell. 1989. 128 pages. Grades 1-9. Hard cover. 45-1597C Each . . $12.95 Botany: 49 Science Fair Projects. Robert Bonnet and Daniel Keen. 1989. 148 pages. Grades 3-7. Soft cover. 45-8007 Each . . $9.95 Physics for Kids: 49 Easy Experiments with Heat. Robert Wood. 1990. 150 pages. Grades 3-7. Soft cover. 45-9403D Each . . $9.95 Projects for Young Scientists Series These quality publications are written so junior and senior high school students can work on projects independently or as part of classroom assignments. Subjects introduce basic concepts involved, then provide a wide range of projects. Hard cover. 45-1599 Biology Projects (127 pg) . . $16.95 45-1599E Energy Projects (127 pg) . . $16.95 45-1599F Engineering Projects (126 pg) . . $16.95 45-1599G Space Science Projects (127 pg) . . $16.95 45-1599S Set (one each of above) . . $67.80 Janice VanCleave's Science Project Books Help students discover the FUN of learning science! Science for Every Kid Series. This popular series by Janice VanCleave for kids grades 2-7 sets a new standard in science activity books. Each contains 101 simple, hands-on, low-cost experiments that have been tested repeatedly - and really work! The author provides a statement of purpose, materials list, illustrated instructions, and a brief discussion for each experiment. 
240-256 pages each, soft covers. 45-1592G Biology for Every Kid . . $10.95 45-9010 Earth Science for Every Kid . . $10.95 45-9193 Astronomy for Every Kid . . $10.95 45-9402E Physics for Every Kid . . $10.95 45-9456B Chemistry for Every Kid . . $10.95 Science for Every Kid Set. One each of five very special books at a special price. For a limited time only, purchase all five books and receive more than a 10% discount. 45-1597S Per set . . $49.25 Math for Every Kid: Easy Activities that Make Learning Math Fun. Grades 3-7. Janice VanCleave. 1991. 215 pages. Provides simple problems and activities to teach about measurement, fractions, graphs, geometric figures, problem solving, and more. All activities can be performed safely and inexpensively in the classroom or at home, using activities that relate math to daily life. Soft cover. 91-7930 Each . . $10.95 Animals Book. Grades 2-7. 1993. 88 pages. How do birds eat without teeth? How were dinosaur tracks formed? Janice VanCleave uses 20 simple and fun-filled experiments to explore these and other questions about animals. Experiments use inexpensive household materials and require minimal preparation and cleanup. 96-0340 Each . . $9.95 Gravity. Grades 2-7. How are satellites launched into orbit? Does gravity affect plants? Former teacher Janice VanCleave uses marbles, cardboard, beans, and other commonplace items to teach about gravity and speed. All experiments require a minimum of preparation and cleanup. 88 pages. 1993. 96-0802 Each. . $9.95 Molecules. Grades 2-7. 1993. 88 pages. What are molecules made of? How does heat affect molecules? Janice VanCleave uses 20 simple experiments with everyday materials to explain how molecules work. These make exciting teacher demonstrations or hands-on activities for students. 96-0810 Each . . $9.95 200 Gooey, Slippery, Slimy, and Weird Fun Experiments. Grades 2-7. 1993. 116 pages. Packed with illustrations, this book uses simple problems, activities, and experiments to explain science principles through hands-on experience. Children learn how science relates to their everyday lives by making exciting discoveries about the world in which they live. 96-0723 Each . . $12.95 A+ Projects in Chemistry Janice VanCleave. 1993. 233 pages. You will be amazed at how easy it is for students to turn their ideas into winning science fair projects. For ages 12 and up, this title, written by a former Teacher of the Year, explores 30 different topics and offers dozens of ideas for experiments. Each topic explains how to get started with a basic project and then shows how to design your own experiment based on the original approach. 45-1597H Hard cover, each . . $22.95 45-1597P Soft cover, each . . $12.95 A+ Projects In Biology. Janice VanCleave. 1993. 217 pages. If you and your students have trouble turning their ideas into winning science fair projects, this is the book for you. For ages 12 and up, this title, written by a former Teacher of the Year, explores 30 different topics in botany, zoology, and the human body. Each topic explains how to get started with a basic project and then shows how to design your own experiment based on the original approach. 45-1593H Hard cover, each . . $22.95 45-1593P Soft cover, each . . $12.95 Deluxe Rosette Award Ribbons. Award beautiful rosette ribbons for achievements. Each ribbon is imprinted with award (1st, 2nd, or 3rd place) and "Science Fair." Two 10" streamers and 33/4" rosette. 90-1600A 1st Place . . $5.50 90-1601A 2nd Place . . $5.50 90-1602A 3rd Place . . 
$5.50 Rosette Award Ribbons. Each ribbon has a rosette and one streamer. Award Ribbons. Inspire your students with 1st, 2nd, or 3rd place ribbons. All participants can receive an award with our Honorable Mention ribbon. Box of 12. 90-1620 1st Place . . $11.50 90-1621 2nd Place . . $11.50 90-1622 3rd Place . . $11.50 90-1623 Honorable Mention . . $11.50 Science Fair Exhibit Panel at a Reduced Price! A great idea for science fairs, math expositions, learning centers, or any other projects where a display board is needed. This 48 x 36" white-surfaced board is lightweight and ready to use, making transportation of the project easy, as well as allowing the student more time to devote to the project itself. Special! Save Over 25% on Exhibit Panels Call for Quotes on Larger Quantities 65-9000 Each . . $7.30 $5.00 65-9001 Per box of 20 . . $138.95 $99.95
http://www.accessexcellence.org/RC/CT/fun_science_fair_projects.php
13
32
In the first billion years after the Big Bang, the universe was a smaller, emptier place, without the huge galaxies that dominate today. But there were already supermassive black holes that had grown shockingly huge by gorging themselves on gas. Generally speaking, the size of a black hole at the center of a galaxy will be proportional to the size of the galaxy as a whole. This makes sense — after all, a black hole needs a constant source of material to consume in order to grow larger, and a relatively tiny galaxy won't provide much food. That's why it was so shocking when the Sloan Digital Sky Survey (SDSS) detected supermassive black holes that existed just 700 million years after the Big Bang, long before the age of giant galaxies. Carnegie Mellon physicist Tiziana Di Matteo explains the conundrum that this discovery posed, in a press release: "The Sloan Digital Sky Survey found supermassive black holes at less than 1 billion years. They were the same size as today's most massive black holes, which are 13.6 billion years old. It was a puzzle. Why do some black holes form so early when it takes the whole age of the universe for others to reach the same mass?" Today's supermassive black holes are generally the result of galactic collisions, in which smaller black holes were merged together into one larger super-structure. But that explanation doesn't work for these ancient black holes, which predate the earliest galactic collisions and in fact formed at a time when these early, tiny galaxies were far more isolated than their counterparts are today. The theoretical model and the observational data for these black holes simply didn't line up, which is why the Carnegie Mellon researchers decided to take a new approach. They created an incredibly complex computer simulation called MassiveBlack, which was tasked with recreating the first billion years after the Big Bang. Di Matteo describes the scope of this simulation: "This simulation is truly gigantic. It's the largest in terms of the level of physics and the actual volume. We did that because we were interested in looking at rare things in the universe, like the first black holes. Because they are so rare, you need to search over a large volume of space." You can see a part of the simulation in the image on the left, which shows the distribution of gas across the entire volume of space. The image then zooms in three times, each instance increasing the magnification by a factor of ten, to reveal the supermassive quasar on the right side, with huge streams of gas flowing into it - but more on that in a moment. Of course, even the most colossal simulation can't make these black holes just magically appear — or at least that wouldn't be considered a particularly useful result. MassiveBlack still had to follow the known laws of physics, and that's why what happened next is so cool. Fellow researcher Rupert Croft explains: "We didn't put anything crazy in. There's no magic physics, no extra stuff. It's the same physics that forms galaxies in simulations of the later universe," said Croft. "But magically, these early quasars, just as had been observed, appear. We didn't know they were going to show up. It was amazing to measure their masses and go 'Wow! These are the exact right size and show up exactly at the right point in time.' It's a success story for the modern theory of cosmology." So just how did these ancient black holes get so massive? It's all about the movement of gas. 
In today's galaxies, cold gas flows towards the central black hole, but en route it slams into other gas. This temporarily heats up and slows down the gas, a process known as shock heating, which slows the rate of black hole growth. But these ancient black holes weren't surrounded by massive, fully formed galaxies, and so the gas was able to flow directly along filaments, which are the vast thread-like structures that link together different parts of the cosmos. With nothing to slow the gas down, the black holes were able to eat this constant diet of cold, fast food, growing exponentially faster than black holes do today. This new discovery may also help us understand better the formation of the first galaxies, which likely sprang up around these engorged black holes. Via the Astrophysical Journal Supplemental Series. Top image by NASA/JPL. Simulation image courtesy of Yu Feng.
http://io9.com/5867779/the-most-ancient-black-holes-grew-fat-on-cosmic-fast-food
13
20
NASA's Deep Impact mission will only prove what scientists think they already know about the birth of the solar system, says one University of Missouri-Rolla researcher. The July 4 "comet shot" is expected to yield data dating back 4.5 billion years, when most scientists believe the solar system was formed out of an interstellar cloud of gas and dust. Since the frozen interiors of comets are thought to possess information from that time, it is believed we can learn more about the original cloud of gas and dust by sending a projectile into the core of a passing comet. Not so, says Dr. Oliver Manuel, professor of nuclear chemistry at UMR. “Comets travel in and out of the solar system, toward the sun and away from the sun, losing and gaining material,” Manuel explains. “But the building blocks that made the outer parts of the solar system are different from the building blocks that made the inner solar system.” For the record, Manuel believes the sun was born in a catastrophic supernova explosion and not in a slowly evolving cloud of space stuff. According to Manuel’s model, heavy elements from the interior of the supernova created the rocky planets and the sun; and the lighter elements near the surface of the supernova created the outer, gaseous planets. Therefore, Manuel says, data from Deep Impact won't be useful. “The comet data will show a mixture of material from the inner and outer layers of the supernova, but it won't tell us anything about the beginnings of the solar system,” Manuel says. “NASA still says the solar system was born in an interstellar cloud and that the sun is a ball of hydrogen with a well-behaved hydrogen fusion reactor in the middle of it. But it’s not, and that will color the data from Deep Impact. It will appear to confirm a flawed theory about the birth of the solar system.” Manuel says the sun is the remains of a supernova, and that it has a neutron star at its core. According to a paper he presented last week at a nuclear research facility in Dubna, Russia, neutron emissions represent the greatest power source ever known, triggering hydrogen fusion in the sun, generating an enormous magnetic field, explaining phenomena like solar flares and causing climate change on earth. Findings published by other researchers last year in Science magazine (May 21, 2004) suggested that, in fact, a nearby supernova probably did contribute material (Iron-60) to an ambiguous cloud that formed the solar system. What Manuel reported 27 years earlier in Science (Jan. 14, 1977) is that the supernova blast created the entire solar system and all of its iron. “So Deep Impact is NASA’s big cosmic fireworks show for the Fourth of July, but they’re going to end up using smoke and mirrors to help validate this theory about a big cloud of dust that supposedly made the solar system,” Manuel says. Source: University of Missouri-Rolla Explore further: Slovenian flyer lands in France on return trip from Arctic
http://phys.org/news4899.html
13
16
Basic Radar Systems: Principle of Operation
Radar is an acronym for Radio Detection and Ranging. The term "radio" refers to the use of electromagnetic waves with wavelengths in the so-called radio wave portion of the spectrum, which covers a wide range from 10^4 km to 1 cm. Radar systems typically use wavelengths on the order of 10 cm, corresponding to frequencies of about 3 GHz. The detection and ranging part of the acronym is accomplished by timing the delay between transmission of a pulse of radio energy and its subsequent return. If the time delay is Δt, then the range may be determined by the simple formula
R = cΔt/2
where c = 3 × 10^8 m/s, the speed of light at which all electromagnetic waves propagate. The factor of two in the formula comes from the observation that the radar pulse must travel to the target and back before detection, or twice the range.
A radar pulse train is a type of amplitude modulation of the radar frequency carrier wave, similar to how carrier waves are modulated in communication systems. In this case, the information signal is quite simple: a single pulse repeated at regular intervals. The common radar carrier modulation, known as the pulse train, is shown below. The common parameters of radar are defined by referring to Figure 1.
PW = pulse width. PW has units of time and is commonly expressed in ms. PW is the duration of the pulse.
RT = rest time. RT is the interval between pulses. It is measured in ms.
PRT = pulse repetition time. PRT has units of time and is commonly expressed in ms. PRT is the interval between the start of one pulse and the start of another. PRT is also equal to the sum, PRT = PW + RT.
PRF = pulse repetition frequency. PRF has units of time^-1 and is commonly expressed in Hz (1 Hz = 1/s) or as pulses per second (pps). PRF is the number of pulses transmitted per second and is equal to the inverse of PRT.
RF = radio frequency. RF has units of time^-1 or Hz and is commonly expressed in GHz or MHz. RF is the frequency of the carrier wave which is being modulated to form the pulse train.
A practical radar system requires seven basic components, as illustrated below.
1. Transmitter. The transmitter creates the radio wave to be sent and modulates it to form the pulse train. The transmitter must also amplify the signal to a high power level to provide adequate range. The source of the carrier wave could be a Klystron, Traveling Wave Tube (TWT) or Magnetron. Each has its own characteristics and limitations.
2. Receiver. The receiver is sensitive to the range of frequencies being transmitted and provides amplification of the returned signal. In order to provide the greatest range, the receiver must be very sensitive without introducing excessive noise. The ability to discern a received signal from background noise depends on the signal-to-noise ratio (S/N). The background noise is specified by an average value, called the noise-equivalent-power (NEP). This directly equates the noise to a detected power level so that it may be compared to the return. Using these definitions, the criterion for successful detection of a target is
Pr > (S/N) NEP
where Pr is the power of the return signal. Since this is a significant quantity in determining radar system performance, it is given a unique designation, Smin, and is called the Minimum Signal for Detection:
Smin = (S/N) NEP
Since Smin, expressed in Watts, is usually a small number, it has proven useful to define the decibel equivalent, MDS, which stands for Minimum Discernible Signal:
MDS = 10 log (Smin / 1 mW)
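As a quick numerical illustration of the relationships defined so far, here is a minimal sketch in R; the delay, pulse parameters, and signal level are made-up example values, not figures from the text.
c0 <- 3e8                    # speed of light, m/s
dt <- 200e-6                 # example round-trip delay: 200 microseconds
c0 * dt / 2                  # R = c*dt/2 -> 30,000 m (30 km) to the target
PRT <- 1e-3                  # example pulse repetition time: 1 ms
PRF <- 1 / PRT               # 1000 pulses per second
Smin <- 1e-12                # example minimum signal for detection, in watts
10 * log10(Smin / 1e-3)      # MDS = -90 dBm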
When using decibels, the quantity inside the logarithm must be a number without units. In the definition of MDS, this number is the fraction Smin/1 mW. As a reminder, we use the special notation dBm for the units of MDS, where the "m" stands for 1 mW. This is shorthand for decibels referenced to 1 mW, which is sometimes written as dB//1mW.
In the receiver, S/N sets a threshold for detection which determines what will be displayed and what will not. In theory, if S/N = 1, then only returns with power equal to or greater than the background noise will be displayed. However, the noise is a statistical process and varies randomly. The NEP is just the average value of the noise. There will be times when the noise exceeds the threshold that is set by the receiver. Since this will be displayed and appear to be a legitimate target, it is called a false alarm. If the S/N threshold is set too high, then there will be few false alarms, but some actual targets may not be displayed (known as a miss). If the threshold is set too low, then there will be many false alarms, or a high false alarm rate (FAR). Some receivers monitor the background and constantly adjust the threshold to maintain a constant false alarm rate, and are therefore called CFAR receivers.
Some common receiver features include:
1.) Pulse Integration. The receiver takes an average return strength over many pulses. Random events like noise will not occur in every pulse and therefore, when averaged, will have a reduced effect as compared to actual targets that will be in every pulse.
2.) Sensitivity Time Control (STC). This feature reduces the impact of returns from sea state. It reduces the minimum SNR of the receiver for a short duration immediately after each pulse is transmitted. The effect of adjusting the STC is to reduce the clutter on the display in the region directly around the transmitter. The greater the value of STC, the greater the range from the transmitter in which clutter will be removed. However, an excessive STC will blank out potential returns close to the transmitter.
3.) Fast Time Constant (FTC). This feature is designed to reduce the effect of long-duration returns that come from rain. This processing requires that the strength of the return signal change quickly over its duration. Since rain occurs over an extended area, it will produce a long, steady return. The FTC processing will filter these returns out of the display. Only pulses that rise and fall quickly will be displayed. In technical terms, FTC is a differentiator, meaning it determines the rate of change in the signal, which it then uses to discriminate against pulses which are not changing rapidly.
3. Power Supply. The power supply provides the electrical power for all the components. The largest consumer of power is the transmitter, which may require several kW of average power. The actual power transmitted in the pulse may be much greater than 1 kW. The power supply only needs to be able to provide the average amount of power consumed, not the high power level during the actual pulse transmission. Energy can be stored, in a capacitor bank for instance, during the rest time. The stored energy then can be put into the pulse when transmitted, increasing the peak power. The peak power and the average power are related by the quantity called duty cycle, DC. Duty cycle is the fraction of each transmission cycle that the radar is actually transmitting. Referring to the pulse train in Figure 2, the duty cycle can be seen to be:
DC = PW / PRT
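Continuing the sketch in R, the duty cycle ties average power to peak power (Pavg = Ppeak × DC is the standard relation implied here); the numbers below are illustrative only.
PW  <- 1e-6                  # example pulse width: 1 microsecond
PRT <- 1e-3                  # example pulse repetition time: 1 ms
DC  <- PW / PRT              # duty cycle = 0.001, i.e. transmitting 0.1% of the time
P_avg  <- 100                # example: the supply provides 100 W of average power
P_peak <- P_avg / DC         # about 100 kW available during each pulse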
4. Synchronizer. The synchronizer coordinates the timing for range determination. It regulates the rate at which pulses are sent (i.e., sets the PRF) and resets the timing clock for range determination for each pulse. Signals from the synchronizer are sent simultaneously to the transmitter, which sends a new pulse, and to the display, which resets the return sweep.

5. Duplexer. This is a switch which alternately connects the transmitter or the receiver to the antenna. Its purpose is to protect the receiver from the high power output of the transmitter. During the transmission of an outgoing pulse, the duplexer will be aligned to the transmitter for the duration of the pulse, PW. After the pulse has been sent, the duplexer will align the antenna to the receiver. When the next pulse is sent, the duplexer will shift back to the transmitter. A duplexer is not required if the transmitted power is low.

6. Antenna. The antenna takes the radar pulse from the transmitter and puts it into the air. Furthermore, the antenna must focus the energy into a well-defined beam, which increases the power and permits a determination of the direction of the target. The antenna must keep track of its own orientation, which can be accomplished by a synchro-transmitter. There are also antenna systems which do not physically move but are steered electronically (in these cases, the orientation of the radar beam is already known a priori).

The beam-width of an antenna is a measure of the angular extent of the most powerful portion of the radiated energy. For our purposes the main portion, called the main lobe, will be all angles from the perpendicular where the power is not less than ½ of the peak power, or, in decibels, -3 dB. The beam-width is the range of angles in the main lobe, so defined. Usually this is resolved into a plane of interest, such as the horizontal or vertical plane; the antenna will have a separate horizontal and vertical beam-width. For a radar antenna, the beam-width can be predicted from the dimension of the antenna in the plane of interest:

θ = λ/L

where θ is the beam-width in radians, λ is the wavelength of the radar, and L is the dimension of the antenna in the direction of interest (i.e., width or height). In the discussion of communications antennas, it was stated that the beam-width for an antenna could be found using θ = 2λ/L. So it appears that radar antennas have one-half the beam-width of communications antennas. The difference is that radar antennas are used both to transmit and receive the signal. The interference effects from each direction combine, which has the effect of reducing the beam-width. Therefore, when describing two-way systems (like radar) it is appropriate to reduce the beam-width by a factor of ½.

The directional gain of an antenna is a measure of how well the beam is focused in all angles. If we were restricted to a single plane, the directional gain would merely be the ratio 2π/θ. Since the same power is distributed over a smaller range of angles, directional gain represents the amount by which the power in the beam is increased. Considering both angles, the directional gain is given by:

Gdir = 4π/(θφ)

since there are 4π steradians corresponding to all directions (solid angle, measured in steradians, is defined to be the area of the beam front divided by the range squared; a non-directional beam would therefore cover an area of 4πR² at distance R, giving 4π steradians). Here:

θ = horizontal beam-width (radians)
φ = vertical beam-width (radians)

Sometimes directional gain is measured in decibels, namely 10 log (Gdir). As an example, an antenna with a horizontal beam-width of 1.5° (0.025 radians) and a vertical beam-width of 20° (0.333 radians) will have:

directional gain (dB) = 10 log [4π/(0.025 × 0.333)] = 30.9 dB

Example: find the horizontal and vertical beam-width of the AN/SPS-49 long range radar system, and the directional gain in dB. The antenna is 7.3 m wide by 4.3 m tall, and operates at 900 MHz. The wavelength is λ = c/f = 0.33 m. Given that L = 7.3 m, then θ = λ/L = 0.33/7.3 = 0.045 radians, or about 3°. The antenna is 4.3 m tall, so a similar calculation gives φ = 0.076 radians, or about 4°. The directional gain is Gdir = 4π/(0.045 × 0.076) = 3638. Expressed in decibels, directional gain = 10 log(3638) = 35.6 dB.
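The same beam-width and directional-gain estimates can be reproduced with a short Python sketch; the numbers below repeat the AN/SPS-49 example just worked, and the code itself is an added illustration rather than part of the original text:

import math

def beamwidth_rad(wavelength_m, dimension_m):
    # Two-way (radar) beam-width estimate: theta = lambda / L
    return wavelength_m / dimension_m

def directional_gain_db(theta_rad, phi_rad):
    # Gdir = 4*pi / (theta * phi), expressed in decibels
    return 10.0 * math.log10(4.0 * math.pi / (theta_rad * phi_rad))

wavelength = 0.33                       # c/f for a 900 MHz carrier, about 0.33 m
theta = beamwidth_rad(wavelength, 7.3)  # horizontal beam-width, ~0.045 rad
phi = beamwidth_rad(wavelength, 4.3)    # vertical beam-width, ~0.077 rad
print(theta, phi, directional_gain_db(theta, phi))   # about 35.6 dB, as in the example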
7. Display. The display unit may take a variety of forms but in general is designed to present the received information to an operator. The most basic display type is called an A-scan (amplitude vs. time delay). The vertical axis is the strength of the return and the horizontal axis is the time delay, or range. The A-scan provides no information about the direction of the target. The most common display is the PPI (plan position indicator). The A-scan information is converted into brightness and then displayed in the same relative direction as the antenna orientation. The result is a top-down view of the situation where range is the distance from the origin. The PPI is perhaps the most natural display for the operator and therefore the most widely used. In both cases, the synchronizer resets the trace for each pulse so that the range information will begin at the origin. In the example shown, the use of increased STC to suppress the sea clutter would be helpful.

All of the parameters of the basic pulsed radar system will affect performance in some way. Here we find specific examples and quantify this dependence.

Pulse Width. The duration of the pulse and the length of the target along the radial direction determine the duration of the returned pulse. In most cases the length of the return is very similar to that of the transmitted pulse. In the display unit, the pulse (in time) will be converted into a pulse in distance. The range of values from the leading edge to the trailing edge will create some uncertainty in the range to the target. Taken at face value, the ability to accurately measure range is determined by the pulse width. If we designate the uncertainty in measured range as the range resolution, RRES, then it must be equal to the range equivalent of the pulse width, namely:

RRES = c PW/2

Now, you may wonder why not just take the leading edge of the pulse as the range, which could be determined with much finer accuracy. The problem is that it is virtually impossible to create a perfect leading edge; in practice, the rise of the pulse is spread over a finite time. To create a perfectly formed pulse with a vertical leading edge would require an infinite bandwidth. In fact, you may equate the bandwidth, b, of the transmitter to the minimum pulse width, PW, by:

PW = 1/(2b)

Given this insight, it is quite reasonable to say that the range can be determined no more accurately than c PW/2, or equivalently

RRES = c/(4b)

In fact, high resolution radar is often referred to as wide-band radar; you now see that these are equivalent statements, one referring to the time domain and the other to the frequency domain. The duration of the pulse also affects the minimum range at which the radar system can detect: the outgoing pulse must physically clear the antenna before the return can be processed. Since this lasts for a time interval equal to the pulse width, PW, the minimum displayed range is:

RMIN = c PW/2

The minimum range effect can be seen on a PPI display as a saturated or blank area around the origin.
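Again as an added illustration (with an assumed pulse width and bandwidth), the resolution and minimum-range relations can be written directly in Python:

C = 3.0e8  # speed of light, m/s

def range_resolution(pw_s):
    # R_RES = c * PW / 2  (the same expression gives the minimum range R_MIN)
    return C * pw_s / 2.0

def min_pulse_width(bandwidth_hz):
    # PW = 1 / (2b) for a transmitter of bandwidth b
    return 1.0 / (2.0 * bandwidth_hz)

print(range_resolution(1.0e-6))                      # a 1 microsecond pulse -> 150 m
print(range_resolution(min_pulse_width(10.0e6)))     # 10 MHz bandwidth -> c/(4b) = 7.5 m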
Increasing the pulse width while keeping the other parameters the same will also affect the duty cycle and therefore the average power. For many systems, it is desirable to keep the average power fixed. Then the PRF must be changed simultaneously with PW in order to keep the product PW × PRF the same. For example, if the pulse width is reduced by a factor of ½ in order to improve the resolution, then the PRF is usually doubled.

Pulse Repetition Frequency (PRF). The frequency of pulse transmission affects the maximum range that can be displayed. Recall that the synchronizer resets the timing clock as each new pulse is transmitted. Returns from distant targets that do not reach the receiver until after the next pulse has been sent will not be displayed correctly: since the timing clock has been reset, they will be displayed as if the range were less than actual. When this is possible, the range information is considered ambiguous, because an operator would not know whether the displayed range is the actual range or some greater value.

The maximum actual range that can be detected and displayed without ambiguity, or the maximum unambiguous range, is just the range corresponding to a time interval equal to the pulse repetition time, PRT. Therefore, the maximum unambiguous range is

RUNAMB = c PRT/2 = c/(2 PRF)

When a radar is scanning, it is necessary to control the scan rate so that a sufficient number of pulses will be transmitted in any particular direction in order to guarantee reliable detection. If too few pulses are used, then it will be more difficult to distinguish false targets from actual ones. False targets may be present in one or two pulses but certainly not in ten or twenty in a row. Therefore, to maintain a low false detection rate, the number of pulses transmitted in each direction should be kept high, usually above ten. For systems with high pulse repetition rates (frequencies), the radar beam can be repositioned more rapidly and therefore scan more quickly. Conversely, if the PRF is lowered, the scan rate needs to be reduced. For simple scans it is easy to quantify the number of pulses that will be returned from any particular target. Let t represent the dwell time, which is the duration that the target remains in the radar's beam during each scan. The number of pulses, N, that the target will be exposed to during the dwell time is:

N = t × PRF

We may rearrange this equation to make a requirement on the dwell time for a particular scan:

tmin = Nmin/PRF

So it is easy to see that high pulse repetition rates permit smaller dwell times. For a continuous circular scan, for example, the dwell time is related to the rotation rate and the beam-width:

t = θ/Ω

where θ = beam-width [degrees] and Ω = rotation rate [degrees/sec], which will give the dwell time in seconds. These relationships can be combined, giving the following equation from which the maximum scan rate may be determined for a minimum number of pulses per scan:

Ωmax = θ × PRF/Nmin
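These PRF relationships are summarized in the following Python sketch (an added illustration; the PRF, beam-width, rotation rate, and minimum pulse count are assumed values):

C = 3.0e8  # speed of light, m/s

def r_unambiguous(prf_hz):
    # Maximum unambiguous range: R_UNAMB = c / (2 * PRF)
    return C / (2.0 * prf_hz)

def pulses_per_dwell(beamwidth_deg, rotation_deg_per_s, prf_hz):
    # Dwell time t = theta / omega, number of pulses N = t * PRF
    return (beamwidth_deg / rotation_deg_per_s) * prf_hz

def max_scan_rate(beamwidth_deg, prf_hz, n_min):
    # Omega_max = theta * PRF / N_min  [degrees/sec]
    return beamwidth_deg * prf_hz / n_min

prf = 1000.0                                  # assumed 1 kHz PRF
print(r_unambiguous(prf) / 1000.0)            # 150 km unambiguous range
print(pulses_per_dwell(3.0, 36.0, prf))       # 3 deg beam at 36 deg/s (6 rpm) -> ~83 pulses
print(max_scan_rate(3.0, prf, 10))            # 300 deg/s for at least 10 pulses per look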
Finally, the frequency of the radio carrier wave will also affect how the radar beam propagates. At the low frequency extreme, radar beams will refract in the atmosphere and can be caught in "ducts" which result in long ranges. At the high extreme, the radar beam will behave much like visible light and travel in very straight lines. Very high frequency radar beams will suffer high losses and are not suitable for long range systems. The frequency will also affect the beam-width: for the same antenna size, a low frequency radar will have a larger beam-width than a high frequency one. In order to keep the beam-width constant, a low frequency radar will need a larger antenna.

Theoretical Maximum Range Equation

A radar receiver can detect a target only if the return is of sufficient strength. Let us designate the minimum return signal that can be detected as Smin, which has units of watts, W. The size and ability of a target to reflect radar energy can be summarized into a single term, σ, known as the radar cross-section, which has units of m². If absolutely all of the incident radar energy on the target were reflected equally in all directions, then the radar cross-section would be equal to the target's cross-sectional area as seen by the transmitter. In practice, some energy is absorbed and the reflected energy is not distributed equally in all directions. Therefore, the radar cross-section is quite difficult to estimate and is normally determined by measurement.

Given these new quantities, we can construct a simple model for the radar power that returns to the receiver:

Pr = Pt × G × (1/(4πR²)) × σ × (1/(4πR²)) × Ae

The terms in this equation have been grouped to illustrate the sequence from transmission to collection. The transmitter puts out peak power Pt into the antenna, which focuses it into a beam with gain G = ρGdir. The power gain is similar to the directional gain, Gdir, except that it must also include losses from the transmitter to the antenna; these losses are summarized by a single term for efficiency, ρ. The radar energy spreads out uniformly in all directions, so the power per unit area must decrease as the area increases; since the energy is spread out over the surface of a sphere, the factor of 1/(4πR²) accounts for the reduction. The radar energy is collected by the surface of the target and reflected; the radar cross-section σ accounts for both of these processes. The reflected energy then spreads out just like the transmitted energy. Finally, the receiving antenna collects the energy in proportion to its effective area, known as the antenna's aperture, Ae. This also includes losses in the reception process until the signal reaches the receiver, hence the subscript "e" for "effective." The effective aperture is related to the physical aperture, A, by the same efficiency term used in the power gain, given the symbol ρ, so that

Ae = ρA

Our criterion for detection is simply that the received power, Pr, exceed the minimum, Smin. Since the received power decreases with range, the maximum detection range will occur when the received power is equal to the minimum, i.e. Pr = Smin. If you solve for the range, you get an equation for the maximum theoretical range:

Rmax = [Pt G σ Ae / ((4π)² Smin)]^(1/4)

Perhaps the most important feature of this equation is the fourth-root dependence. The practical implication is that one must greatly increase the output power to get a modest increase in performance. For example, in order to double the range, the transmitted power would have to be increased 16-fold.
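The fourth-root dependence is easy to see numerically; the following lines are an added illustration rather than part of the original discussion:

# Because R scales as Pt**(1/4) when everything else is held fixed, multiplying
# the range by k requires k**4 times the transmitted power.
def power_factor_for_range_factor(k):
    return k ** 4

def range_factor_for_power_factor(p):
    return p ** 0.25

print(power_factor_for_range_factor(2.0))   # 16.0 -> doubling range needs 16x the power
print(range_factor_for_power_factor(2.0))   # ~1.19 -> doubling power gains only ~19% range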
You should also note that the minimum power level for detection, Smin, depends on the noise level. In practice, this quantity must constantly be varied in order to achieve the best balance between high sensitivity, which is susceptible to noise, and low sensitivity, which may limit the radar's ability to detect targets.

Example: find the maximum range of the AN/SPS-49 radar, given the following data:

Antenna size = 7.3 m wide by 4.3 m tall
Efficiency = 80%
Peak power = 360 kW
Cross-section = 1 m²
Smin = 1 × 10⁻¹² W

We know from the previous example that the directional antenna gain is Gdir = 4π/(θφ) ≈ 4π/(0.05 × 0.07) ≈ 3430. The power gain is G = ρGdir ≈ 2744. Likewise, the effective aperture is Ae = ρA = 0.8 × (7.3 × 4.3) = 25.1 m². Therefore the range is

R = [Pt G σ Ae / ((4π)² Smin)]^(1/4), or R ≈ 112 km.
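The worked example can be checked with a short Python function implementing the maximum-range equation; this is an added illustration, and the figures are those quoted above for the AN/SPS-49:

import math

def max_range(pt_w, gain, aperture_m2, sigma_m2, s_min_w):
    # Rmax = [Pt * G * sigma * Ae / ((4*pi)^2 * Smin)]^(1/4)
    return (pt_w * gain * sigma_m2 * aperture_m2 /
            ((4.0 * math.pi) ** 2 * s_min_w)) ** 0.25

r = max_range(360e3, 2744.0, 25.1, 1.0, 1.0e-12)
print(r / 1000.0)   # roughly 112 km, matching the worked example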
http://www.fas.org/man/dod-101/navy/docs/es310/radarsys/radarsys.htm
Blood pressure is the pressure of the blood against the blood vessel walls as blood flows through the vessels. The heart beats about 60 to 70 times a minute. With each beat, as the heart contracts, a surge of blood is pumped from the heart into the arteries. The pressure in the artery walls during this surge is measured as the systolic blood pressure (the higher number). Between beats, the heart is relaxed and there is much less pressure on the artery walls. This is measured as the diastolic blood pressure (the lower number). Blood pressure is given as two numbers, written as 120/80 mm Hg, and is measured in millimeters (mm) of mercury (Hg) with a device called a sphygmomanometer. The pressure depends on the amount of blood pumped by the heart as well as on the resistance and elasticity of the blood vessels. Blood pressure is necessary to sustain life. It continuously forces blood carrying oxygen and nutrients from the heart to the organs and tissues of the body. Blood pressure levels can go up or down in the course of a day depending on activity and stress levels, medications, or diet. A person's blood pressure is determined by the contraction of the heart's ventricles, which pump blood into the aorta and subsequently throughout the body. The normal adult blood pressure has a systolic number of 120 and a diastolic number of 80. Systolic pressure is taken when the heart contracts; diastolic pressure is taken when the heart is relaxed. Normally, about 5.5 quarts (5.25 liters) of blood goes through the heart and blood vessels each minute, an amount called cardiac output. The body is dependent on its volume of blood to maintain blood pressure. If a person experiences heavy blood loss, blood pressure will plunge. Similarly, an increase in blood volume, in cases like water retention, will cause blood pressure to rise. The brain's medulla contains a cluster of nerves, called the cardiovascular center, that controls heart rate, the contraction of the ventricles, and blood vessel diameter. Sensory receptors monitor the stretching of blood vessel walls. During exercise, the heart rate rises and the ventricles contract more forcefully. The cardiovascular center then directs the dilation (expansion) or constriction of peripheral blood vessels. For example, the blood vessels to organs directly involved in the exercise will expand. Blood flow to skeletal muscles may increase by a factor of 10, and that to the heart and skin can triple. Simultaneously, constriction will occur in the blood vessels of the digestive system. The sensory receptors in the walls of blood vessels continually monitor blood pressure.
When the receptors detect an increase in aortic pressure, for example, the cardiovascular center directs the lowering of the heart rate and the stretching of blood vessels, which decreases the blood pressure. A decrease in blood pressure causes an increased heart rate and vasoconstriction. As people age, the blood vessels become less flexible and the heart muscle is less strong, resulting in a smaller output and a lower maximum heart rate. Systolic pressure tends to rise as a person ages. Coronary artery disease, which causes the blood vessels in the heart to receive inadequate oxygenation, can cause chest pain or heart attack. Atherosclerosis (clogging of the arteries) can also cause an increase in blood pressure.

Classification of blood pressure (BP)
(SOURCE: Joint National Committee on Detection, Evaluation, and Treatment of High Blood Pressure.)

Category | Range (mm Hg) | Recommendation
Normal BP | Systolic <140; diastolic <85 | Recheck in 2 years
High-normal BP | Diastolic 85–89 | Recheck in 1 year
Mild hypertension | Diastolic 90–104 | Confirm within 2 months
Moderate hypertension | Diastolic 105–114 | Evaluate within 1 month
Severe hypertension | Diastolic ≥115 | Evaluate immediately or within 1 week
Borderline isolated systolic hypertension | Systolic 140–159; diastolic <90 | Confirm within 2 months
Isolated systolic hypertension | Systolic ≥160; diastolic <90 | Confirm within 2 months

Role in human health

The Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC) develops high blood pressure prevention and control recommendations for healthcare providers. According to the JNC, optimal blood pressure (BP) is a systolic blood pressure (SBP) of 120 mm Hg or less and a diastolic blood pressure (DBP) of 80 mm Hg or less. Blood pressure is still considered normal at levels of 130 mm Hg SBP or less and 85 mm Hg DBP or less. Periodic blood pressure measurement is recommended every one to two years for adults with normal blood pressure. A healthcare provider should determine the frequency of blood pressure measurement based on each patient's individual risk factors for high blood pressure. Individual risk factors that contribute to high blood pressure, such as diabetes, a family history of high blood pressure, a diet high in fat and cholesterol, being African-American, elderly, overweight, a smoker, or a heavy drinker, are important to consider when advising patients on the frequency of periodic blood pressure measurement. Prevention and management of high blood pressure require not only active participation by the patient but also education and support from health care providers. Patient education is a shared responsibility among physicians, nurses, dietitians, and allied health professionals. While patient education is time-consuming, it is very important to the process of maintaining health and preventing disease.

Common diseases and disorders

High blood pressure, also called hypertension, is a cardiovascular disease affecting nearly 50 million Americans. The higher than normal pressure pushes blood against the artery walls, causing the heart to work harder in order to pump blood to the body. The JNC defines high blood pressure as a systolic blood pressure (SBP) of 140 mm Hg or greater, a diastolic blood pressure (DBP) of 90 mm Hg or greater, or taking high blood pressure (antihypertensive) medications. High blood pressure often has no warning signs or symptoms.
So, if it is not identified or treated, high blood pressure can damage the arteries and organs, causing serious medical problems over time. If not properly managed, high blood pressure can increase the risk of developing, among other problems, the following:
- Atherosclerosis, also called "hardening of the arteries"—High blood pressure can cause atherosclerosis, a thickening and narrowing of the blood vessel walls. This can slow or prevent blood flow through the arteries and may lead to heart attack or stroke.
- Stroke—High blood pressure can cause the arteries to narrow and lead to a stroke if a blood clot blocks one of the narrowed arteries (thrombotic stroke) or if a weakened blood vessel in the brain ruptures (hemorrhagic stroke).
- Coronary heart disease—High blood pressure can cause the coronary arteries to narrow and harden. The coronary arteries carry oxygen to the heart muscle so it can function to pump blood to the body. If blood cannot flow properly through the coronary arteries to the heart, the heart cannot get enough oxygen. This can cause chest pain (angina). If the blood flow to the heart muscle is blocked, it can cause a heart attack. Heart disease is the leading cause of death in the United States.
- Congestive heart failure—Over years, uncontrolled high blood pressure can cause the heart muscle to compensate by becoming larger (dilatation) to allow more blood to fill it, by thickening the heart muscle (hypertrophy) to pump more forcefully, or by beating faster to increase circulation. According to the National Institutes of Health, uncontrolled high blood pressure increases the risk of heart failure by 200%, compared with those who do not have high blood pressure.
- Kidney failure—Over years, high blood pressure can damage the blood vessels of the kidney. The damage may cause the kidneys to no longer filter waste from the blood adequately, which could require dialysis treatment or possibly a kidney transplant.
The cause of high blood pressure is usually unknown, in which case it is called primary or essential hypertension. This cannot be cured. However, it can be easily diagnosed and, in most cases, controlled with lifestyle modifications and/or medications. Some of the lifestyle modifications for high blood pressure prevention and management include:
- Weight loss if the patient is overweight. As weight increases, blood pressure rises.
- Cutting down on alcohol: no more than one drink per day for women and no more than two drinks per day for men.
- Decreasing intake of salt and sodium, saturated fat, and cholesterol.
- Increasing physical activity, especially aerobic activity, for 30 to 45 minutes on most days.
- Stopping smoking.
High blood pressure medications work in various ways. They can affect the force of the heartbeat, the blood vessels, and the amount of fluid in the body. Some of the different types of medications prescribed to treat high blood pressure are:
- Diuretics, also called "water pills," decrease the amount of fluid in the body by flushing excess water and sodium from the body through the urine.
- Beta blockers make the heart beat less often and with less force by reducing nerve impulses to the heart and blood vessels.
- Calcium channel blockers relax the blood vessels by preventing calcium from entering the muscle cells of the heart.
- Alpha blockers relax the blood vessels by way of the nervous system. They decrease renin secretion, which is involved in angiotensin II formation.
- Vasodilators widen blood vessels by relaxing the muscle in the vessel walls.
- Angiotensin converting enzyme (ACE) inhibitors relax the blood vessels by preventing angiotensin II from being formed.
High blood pressure can sometimes be traced to a cause such as an adrenal gland tumor, kidney disease, hormone abnormalities, birth control pills, or pregnancy. This is called secondary hypertension and can usually be cured if the cause disappears or is corrected.

Key terms

Angiotensin converting enzyme (ACE) inhibitor—A drug used to decrease pressure inside blood vessels.
Artery—A blood vessel that carries blood from the heart to the body.
Beta blocker—A drug used to slow the heart rate and reduce pressure inside blood vessels.
Calcium channel blocker—A drug used to relax the blood vessels and the heart muscle.
Cardiovascular—Relating to the heart and blood vessels.
Congestive heart failure—A cardiovascular disease that involves the heart muscle's diminished or lost pumping ability, generally causing fluid that cannot be completely ejected from the heart to back up in the lungs.
Diastolic blood pressure—The lower number of a blood pressure measurement, or the pressure when the heart is at rest.
Diuretic—A drug that eliminates excess fluid in the body.
Fat—One of the nutrients that supply calories to the body.
Hypertension—High blood pressure.
Hypertrophy—Enlargement of tissue or an organ.
Millimeter (mm)—A unit of measurement equal to one-thousandth of a meter.
Risk factors—Behaviors, traits, or conditions in a person that are associated with an increased chance (risk) of disease.
Sign—An objective observation of an illness.
Sphygmomanometer—A manual device used to measure blood pressure.
Symptom—Any indication of disease noticed or felt by a patient.
Systolic blood pressure—The higher number of a blood pressure measurement, or the pressure when the heart is contracting.

Resources

Report of the United States Preventive Services Task Force. Guide to Clinical Preventive Services. International Medical Publishing, 1996.
Tortora, Gerard, and Sandra Grabowski. Principles of Anatomy and Physiology. 8th ed. New York: John Wiley and Sons, 1996.
American College of Cardiology. Heart House. 9111 Old Georgetown Rd., Bethesda, MD 20814-1699. (800) 253-4636. <http://www.acc.org>.
American Heart Association. National Center. 7272 Greenville Ave., Dallas, TX 75231. (800) AHA-USA1. <http://www.americanheart.org>.
American Society of Hypertension. 515 Madison Ave., Ste. 1212, New York, NY 10022. (212) 644-0650. <http://www.ash-us.org>.
National Heart, Lung, and Blood Institute. Information Center. PO Box 30105, Bethesda, MD 20824-0105. (800) 575-WELL. <http://www.nhlbi.nih.gov>.
National High Blood Pressure Education Program. NHLBI Health Information Center. PO Box 30105, Bethesda, MD 20824-0105. (301) 592-8573.
National Heart, Lung, and Blood Institute. Healthy Heart Handbook for Women. 1997. <http://www.nhlbi.nih.gov>.
The Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Pub. No. 98-4080. November 1997.

Deborah Eileen Parker, R.N.
http://www.healthline.com/galecontent/blood-pressure-1
Transcript: OK, so anyway, let's get started. So, the first unit of the class, so basically I'm going to go over the first half of the class today, and the second half of the class on Tuesday just because we have to start somewhere. So, the first things that we learned about in this class were vectors, and how to do dot-product of vectors. So, remember the formula that A dot B is the sum of ai times bi. And, geometrically, it's length A times length B times the cosine of the angle between them. And, in particular, we can use this to detect when two vectors are perpendicular. That's when their dot product is zero. And, we can use that to measure angles between vectors by solving for cosine in this. Hopefully, at this point, this looks a lot easier than it used to a few months ago. So, hopefully at this point, everyone has this kind of formula memorized and has some reasonable understanding of that. But, if you have any questions, now is the time. No? Good. Next we learned how to also do cross product of vectors in space -- and remember, we saw how to use that to find the area of, say, a triangle or a parallelogram in space because the length of the cross product is equal to the area of a parallelogram formed by the vectors a and b. And, we can also use that to find a vector perpendicular to two given vectors, A and B. And so, in particular, that comes in handy when we are looking for the equation of a plane because we've seen -- So, the next topic would be equations of planes. And, we've seen that when you put the equation of a plane in the form ax + by + cz = d, well, in there is actually the normal vector to the plane, or some normal vector to the plane. So, typically, we use the cross product to find plane equations. OK, is that still reasonably familiar to everyone? Yes, very good. OK, we've also seen how to look at equations of lines, and those were of a slightly different nature because we've been doing them as parametric equations. So, typically we had equations of a form, maybe x equals constant plus constant times t, y equals constant plus constant times t, z equals constant plus constant times t, where these constant terms here correspond to some point on the line. And, these coefficients here correspond to a vector parallel to the line. That's the velocity of the moving point on the line. And, well, we've learned in particular how to find where a line intersects a plane by plugging the parametric equation into the equation of a plane. We've learned more general things about parametric equations of curves. So, there are these infamous problems in particular where you have these rotating wheels and points on them, and you have to figure out, what's the position of a point? And, the general principle of those is that you want to decompose the position vector into a sum of simpler things. OK, so if you have a point on a wheel that's itself moving and something else, then you might want to first figure out the position of the center of the wheel, then find the angle by which the wheel has turned, and then get to the position of the moving point by adding together simpler vectors. So, the general principle is really to try to find one parameter that will let us understand what has happened, and then decompose the motion into a sum of simpler effects. So, we want to decompose the position vector into a sum of simpler vectors. OK, so maybe now we are getting a bit out of some people's comfort zone, but hopefully it's not too bad.
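As an added illustration (not part of the lecture), the plane-equation and line-intersection recipes reviewed above look like this in Python with numpy, using a made-up plane through three points and a made-up parametric line:

import numpy as np

# Made-up points defining a plane, and a made-up parametric line Q(t) = Q0 + t*v.
P0, P1, P2 = np.array([1., 0., 0.]), np.array([0., 2., 0.]), np.array([0., 0., 3.])

# Normal to the plane through P0, P1, P2 via a cross product.
n = np.cross(P1 - P0, P2 - P0)
d = np.dot(n, P0)                 # plane equation: n . (x, y, z) = d
print(n, d)

# Plug the parametric line into the plane equation and solve for t.
Q0, v = np.array([0., 0., 0.]), np.array([1., 1., 1.])
t = (d - np.dot(n, Q0)) / np.dot(n, v)
print(Q0 + t * v)                 # the intersection point of the line and the plane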
Do you have any general questions about how one would go about that, or, yes? Sorry? What about it? Parametric descriptions of a plane, so we haven't really done that because you would need two parameters to parameterize a plane just because it's a two dimensional object. So, we have mostly focused on the use of parametric equations just for one dimensional objects, lines, and curves. So, you won't need to know about parametric descriptions of planes on the final, but if you really wanted to, you would think of defining a point on a plane as starting from some given point. Then you have two vectors given on the plane. And then, you would add a multiple of each of these vectors to your starting point. But see, the difficulty is, to convert from that to the usual equation of a plane, you would still have to go back to this cross product method, and so on. So, it is possible to represent a plane, or, in general, a surface in parametric form. But, very often, that's not so useful. Yes? How do you parametrize an ellipse in space? Well, that depends on how it's given to you. But, OK, let's just do an example. Say that I give you an ellipse in space as maybe the more, well, one exciting way to parameterize an ellipse in space is maybe the intersection of a cylinder with a slanted plane. That's the kind of situation where you might end up with an ellipse. OK, so if I tell you that maybe I'm intersecting a cylinder with equation x squared plus y squared equals a squared with a slanted plane to get, I messed up my picture, to get this ellipse of intersection, so, of course you'd need the equation of a plane. And, let's say that this plane is maybe given to you. Or, you can switch it to a form where you can get z as a function of x and y. So, maybe it would be z equals, I've already used a; I need to use a new letter. Let's say c1 x plus c2 y plus d, whatever, something like that. So, what I would do is first I would look at what my ellipse does in the directions in which I understand it the best. And, those directions would be probably the xy plane. So, I would look at the xy coordinates. Well, if I look at it from above in the xy plane, my ellipse looks like just a circle of radius a. So, if I'm only concerned with x and y, presumably I can just do it the usual way for a circle: x equals a cosine t, y equals a sine t, OK? And then, z would end up being just, well, whatever the value of z has to be on the slanted plane above a given xy position. So, in fact, it would end up being a c1 cosine t plus a c2 sine t plus d, I guess. OK, that's not a particularly elegant parameterization, but that's the kind of thing you might end up with. Now, in general, when you have a curve in space, it would rarely be the case that you have to get a parameterization from scratch unless you are already being told information about how it looks in one of the coordinate planes, you know, this kind of method. Or, at least you'd have a lot of information that would quickly reduce it to a plane problem somehow. Of course, I could also just give you some formulas and let you figure out what's going on. But, in general, we've done more stuff with plane curves. With plane curves, certainly there's interesting things with all sorts of mechanical gadgets that we can study. OK, any other questions on that? No? OK, so let me move on a bit and point out that with parametric equations, we've looked also at things like velocity and acceleration. So, the velocity vector is the derivative of a position vector with respect to time. And, it's not to be confused with speed, which is the magnitude of v.
So, the velocity vector is going to be always tangent to the curve. And, its length will be the speed. That's the geometric interpretation. So, just to provoke you, I'm going to write, again, that formula that was that v equals T hat ds dt. What do I mean by that? If I have a curve, and I'm moving on the curve, well, I have the unit tangent vector which I think at the time I used to draw in blue. But, blue has been abolished since then. So, I'm going to draw it in red. OK, so that's a unit vector that goes along the curve, and then the actual velocity is going to be proportional to that. And, what's the length? Well, it's the speed. And, the speed is how much arc length on the curve I go per unit time, which is why I'm writing ds dt. That's another guy. That's another of these guys for the speed, OK? And, we've also learned about acceleration, which is the derivative of velocity. So, it's the second derivative of a position vector. And, as an example of the kinds of manipulations we can do, in class we've seen Kepler's second law, which explains how if the acceleration is parallel to the position vector, then r cross v is going to be constant, which means that the motion will be in an plane, and you will sweep area at a constant rate. So now, that is not in itself a topic for the exam, but the kinds of methods of differentiating vector quantities, applying the product rule to take the derivative of a dot or cross product and so on are definitely fair game. I mean, we've seen those on the first exam. They were there, and most likely they will be on the final. OK, so I mean that's the extent to which Kepler's law comes up, only just knowing the general type of manipulations and proving things with vector quantities, but not again the actual Kepler's law itself. I skipped something. I skipped matrices, determinants, and linear systems. OK, so we've seen how to multiply matrices, and how to write linear systems in matrix form. So, remember, if you have a 3x3 linear system in the usual sense, so, you can write this in a matrix form where you have a 3x3 matrix and you have an unknown column vector. And, their matrix product should be some given column vector. OK, so if you don't remember how to multiply matrices, please look at the notes on that again. And, also you should remember how to invert a matrix. So, how did we invert matrices? Let me just remind you very quickly. So, I should say 2x2 or 3x3 matrices. Well, you need to have a square matrix to be able to find an inverse. The method doesn't work, doesn't make sense. Otherwise, then the concept of inverse doesn't work. And, if it's larger than 3x3, then we haven't seen that. So, let's say that I have a 3x3 matrix. What I will do is I will start by forming the matrix of minors. So, remember that minors, so, each entry is a 2x2 determinant in the case of a 3x3 matrix formed by deleting one row and one column. OK, so for example, to get the first minor, especially in the upper left corner, I would delete the first row, the first column. And, I would be left with this 2x2 determinant. I take this times that minus this times that. I get a number that gives my first minor. And then, same with the others. Then, I flip signs according to this checkerboard pattern, and that gives me the matrix of cofactors. OK, so all it means is I'm just changing the signs of these four entries and leaving the others alone. And then, I take the transpose of that. So, that means I read it horizontally and write it down vertically. I swept the rows and the columns. 
And then, I divide by the inverse. Well, I divide by the determinant of the initial matrix. OK, so, of course, this is kind of very theoretical, and I write it like this. Probably it makes more sense to do it on an example. I will let you work out examples, or bug your recitation instructors so that they do one on Monday if you want to see that. It's a fairly straightforward method. You just have to remember the steps. But, of course, there's one condition, which is that the determinant of a matrix has to be nonzero. So, in fact, we've seen that, oh, there is still one board left. We've seen that a matrix is invertible -- -- exactly when its determinant is not zero. And, if that's the case, then we can solve the linear system, AX equals B by just setting X equals A inverse B. That's going to be the only solution to our linear system. Otherwise, well, AX equals B has either no solution, or infinitely many solutions. Yes? The determinant of a matrix real quick? Well, I can do it that quickly unless I start waving my hands very quickly, but remember we've seen that you have a matrix, a 3x3 matrix. Its determinant will be obtained by doing an expansion with respect to, well, your favorite. But usually, we are doing it with respect to the first row. So, we take this entry and multiply it by that determinant. Then, we take that entry, multiply it by that determinant but put a minus sign. And then, we take that entry and multiply it by this determinant here, and we put a plus sign for that. OK, so maybe I should write it down. That's actually the same formula that we are using for cross products. Right, when we do cross products, we are doing an expansion with respect to the first row. That's a special case. OK, I mean, do you still want to see it in more details, or is that OK? Yes? That's correct. So, if you do an expansion with respect to any row or column, then you would use the same signs that are in this checkerboard pattern there. So, if you did an expansion, actually, so indeed, maybe I should say, the more general way to determine it is you take your favorite row or column, and you just multiply the corresponding entries by the corresponding cofactors. So, the signs are plus or minus depending on what's in that diagram there. Now, in practice, in this class, again, all we need is to do it with respect to the first row. So, don't worry about it too much. OK, so, again, the way that we've officially seen it in this class is just if you have a1, a2, a3, b1, b2, b3, c1, c2, c3, so if the determinant is a1 times b2 b3, c2 c3, minus a2 b1 b3 c1 c3 plus a3 b1 b2 c1 c2. And, this minus is here basically because of the minus in the diagram up there. But, that's all we need to know. Yes? How do you tell the difference between infinitely many solutions or no solutions? That's a very good question. So, in full generality, the answer is we haven't quite seen a systematic method. So, you just have to try solving and see if you can find a solution or not. So, let me actually explain that more carefully. So, what happens to these two situations when a is invertible or not? So, remember, in the linear system, you can think of a linear system as asking you to find the intersection between three planes because each equation is the equation of a plane. So, Ax = B for a 3x3 system means that x should be in the intersection of three planes. And then, we have two cases. 
So, the case where the system is invertible corresponds to the general situation where your three planes somehow all just intersect in one point. And then, the situation where the determinant, that's when the determinant is not zero, you get just one point. However, sometimes it will happen that all the planes are parallel to the same direction. So, determinant a equals zero means the three planes are parallel to a same vector. And, in fact, you can find that vector explicitly because that vector has to be perpendicular to all the normals. So, at some point we saw other subtle things about how to find the direction of this line that's parallel to all the planes. So, now, this can happen either with all three planes containing the same line. You know, they can all pass through the same axis. Or it could be that they have somehow shifted with respect to each other. And so, it might look like this. Then, the last one is actually in front of that. So, see, the lines of intersections between two of the planes, so, here they all pass through the same line, and here, instead, they intersect in one line here, one line here, and one line there. And, there's no triple intersection. So, in general, we haven't really seen how to decide between these two cases. There's one important situation where we have seen we must be in the first case that when we have a homogeneous system, so that means if the right hand side is zero, then, well, x equals zero is always a solution. It's called the trivial solution. It's the obvious one, if you want. So, you know that, and why is that? Well, that's because all of your planes have to pass through the origin. So, you must be in this case if you have a noninvertible system where the right hand side is zero. So, in that case, if the right hand side is zero, there's two cases. Either the matrix is invertible. Then, the only solution is the trivial one. Or, if a matrix is not invertible, then you have infinitely many solutions. If B is not zero, then we haven't really seen how to decide. We've just seen how to decide between one solution or zero,infinitely many, but not how to decide between these last two cases. Yes? I think in principle, you would be able to, but that's, well, I mean, that's a slightly counterintuitive way of doing it. I think it would probably work. Well, I'll let you figure it out. OK, let me move on to the second unit, maybe, because we've seen a lot of stuff, or was there a quick question before that? No? OK. OK, so what was the second part of the class about? Well, hopefully you kind of vaguely remember that it was about functions of several variables and their partial derivatives. OK, so the first thing that we've seen is how to actually view a function of two variables in terms of its graph and its contour plot. So, just to remind you very quickly, if I have a function of two variables, x and y, then the graph will be just the surface given by the equation z equals f of xy. So, for each x and y, I plot a point at height given with the value of the a function. And then, the contour plot will be the topographical map for this graph. It will tell us, what are the various levels in there? So, what it amounts to is we slice the graph by horizontal planes, and we get a bunch of curves which are the points at given height on the plot. And, so we get all of these curves, and then we look at them from above, and that gives us this map with a bunch of curves on it. And, each of them has a number next to it which tells us the value of a function there. 
And, from that map, we can, of course, tell things about where we might be able to find minima or maxima of our function, and how it varies with respect to x or y or actually in any direction at a given point. So, now, the next thing that we've learned about is partial derivatives. So, for a function of two variables, there would be two of them. There's f sub x which is partial f partial x, and f sub y which is partial f partial y. And, in terms of a graph, they correspond to slicing by a plane that's parallel to one of the coordinate planes, so that we either keep x constant, or keep y constant. And, we look at the slope of a graph to see the rate of change of f with respect to one variable only when we hold the other one constant. And so, we've seen in particular how to use that in various places, but, for example, for linear approximation we've seen that the change in f is approximately equal to f sub x times the change in x plus f sub y times the change in y. So, you can think of f sub x and f sub y as telling you how sensitive the value of f is to changes in x and y. So, this linear approximation also tells us about the tangent plane to the graph of f. In fact, when we turn this into an equality, that would mean that we replace f by the tangent plane. We've also learned various ways of, before I go on, I should say, of course, we've seen these also for functions of three variables, right? So, we haven't seen how to plot them, and we don't really worry about that too much. But, if you have a function of three variables, you can do the same kinds of manipulations. So, we've learned about differentials and chain rules, which are a way of repackaging these partial derivatives. So, the differential is just, by definition, this thing called df which is f sub x times dx plus f sub y times dy. And, what we can do with it is just either plug values for changes in x and y, and get approximation formulas, or we can look at this in a situation where x and y will depend on something else, and we get a chain rule. So, for example, if f is a function of t time, for example, and so is y, then we can find the rate of change of f with respect to t just by dividing this by dt. So, we get df dt equals f sub x dx dt plus f sub y dy dt. We can also get other chain rules, say, if x and y depend on more than one variable, if you have a change of variables, for example, x and y are functions of two other guys that you call u and v, then you can express dx and dy in terms of du and dv, and plugging into df you will get the manner in which f depends on u and v. So, that will give you formulas for partial f partial u, and partial f partial v. They look just like these guys except there's a lot of curly d's instead of straight ones, and u's and v's in the denominators. OK, so that lets us understand rates of change. We've also seen yet another way to package partial derivatives into not a differential, but instead, a vector. That's the gradient vector, and I'm sure it was quite mysterious when we first saw it, but hopefully by now, well, it should be less mysterious. OK, so we've learned about the gradient vector which is del f is a vector whose components are just the partial derivatives. So, if I have a function of just two variables, then it's just this. And, so one observation that we've made is that if you look at a contour plot of your function, so maybe your function is zero, one, and two, then the gradient vector is always perpendicular to the contour plot, and always points towards higher ground. 
OK, so the reason for that was that if you take any direction, you can measure the directional derivative, which means the rate of change of f in that direction. So, given a unit vector, u, which represents some direction, so for example let's say I decide that I want to go in this direction, and I ask myself, how quickly will f change if I start from here and I start moving towards that direction? Well, the answer seems to be, it will start to increase a bit, and maybe at some point later on something else will happen. But at first, it will increase. So, the directional derivative is what we've called df by ds in the direction of this unit vector, and basically the only thing we know to be able to compute it, the only thing we need, is that it's the dot product between the gradient and this vector u hat. In particular, the directional derivatives in the direction of i hat or j hat are just the usual partial derivatives. That's what you would expect. OK, and so now you see in particular if you try to go in a direction that's perpendicular to the gradient, then the directional derivative will be zero because you are moving on the level curve. So, the value doesn't change, OK? Questions about that? Yes? Yeah, so let's see, so indeed to look at more recent things, if you are taking the flux through something given by an equation, so, if you have a surface given by an equation, say, f equals one. So, say that you have a surface here or a curve given by an equation, f equals constant, then the normal vector to the surface is given by taking the gradient of f. And that is, in general, not a unit normal vector. Now, if you wanted the unit normal vector to compute flux, then you would just scale this guy down to unit length, OK? So, if you wanted a unit normal, that would be the gradient divided by its length. However, for flux, that's still of limited usefulness because you would still need to know about ds. But, remember, we've seen a formula for flux in terms of a non-unit normal vector N, namely N over N dot k, dx dy. So, indeed, this is how you could actually handle calculations of flux through pretty much anything. Any other questions about that? OK, so let me continue with a couple more things we need to, so, we've seen how to do min/max problems, in particular, by looking at critical points. So, critical points, remember, are the points where all the partial derivatives are zero. So, if you prefer, that's where the gradient vector is zero. And, we know how to decide using the second derivative test whether a critical point is going to be a local min, a local max, or a saddle point. Actually, we can't always quite decide because, remember, we look at the second partials, and we compute this quantity ac minus b squared. And, if it happens to be zero, then actually we can't conclude. But, most of the time we can conclude. However, that's not all we need to do to look for an absolute global maximum or minimum. For that, we also need to check the boundary points, or look at the behavior of a function, at infinity. So, we also need to check the values of f at the boundary of its domain of definition or at infinity. Just to give you an example from single variable calculus, if you are trying to find the minimum and the maximum of f of x equals x squared, well, you'll find quickly that the minimum is at zero where x squared is zero. If you are looking for the maximum, you better not just look at the derivative because you won't find it that way.
However, if you think for a second, you'll see that if x becomes very large, then the function increases to infinity. And, similarly, if you try to find the minimum and the maximum of x squared when x varies only between one and two, well, you won't find the critical point, but you'll still find that the smallest value of x squared is when x is at one, and the largest is at x equals two. And, all this business about boundaries and infinity is exactly the same stuff, but with more than one variable. It's just the story that maybe the minimum and the maximum are not quite visible, but they are at the edges of a domain we are looking at. Well, in the last three minutes, I will just write down a couple more things we've seen there. So, how to do max/min problems with non-independent variables -- So, if your variables are related by some condition, g equals some constant. So, then we've seen the method of Lagrange multipliers. OK, and what this method says is that we should solve the equation gradient f equals some unknown scalar lambda times the gradient, g. So, that means each partial, f sub x equals lambda g sub x and so on, and of course we have to keep in mind the constraint equation so that we have the same number of equations as the number of unknowns because you have a new unknown here. And, the thing to remember is that you have to be careful that the second derivative test does not apply in this situation. I mean, this is only in the case of independent variables. So, if you want to know if something is a maximum or a minimum, you just have to use common sense or compare the values of a function at the various points you found. Yes? Will we actually have to calculate? Well, that depends on what the problem asks you. It might ask you to just set up the equations, or it might ask you to solve them. So, in general, solving might be difficult, but if it asks you to do it, then it means it shouldn't be too hard. I haven't written the final yet, so I don't know what it will be, but it might be an easy one. And, the last thing we've seen is constrained partial derivatives. So, for example, if you have a relation between x, y, and z, which are constrained to be a constant, then the notion of partial f partial x takes several meanings. So, just to remind you very quickly, there's the formal partial, partial f, partial x, which means x varies. Y and z are held constant. And, we forget the constraint. This is not compatible with a constraint, but we don't care. So, that's the guy that we compute just from the formula for f ignoring the constraints. And then, we have the partial f, partial x with y held constant, which means y held constant. X varies, and now we treat z as a dependent variable. It varies with x and y according to whatever is needed so that this constraint keeps holding. And, similarly, there's partial f partial x with z held constant, which means that, now, y is the dependent variable. And, the way in which we compute these, we've seen two methods which I'm not going to tell you now because otherwise we'll be even more over time. But, we've seen two methods for computing these based on either the chain rule or on differentials, solving and substituting into differentials.
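As an added illustration (not part of the lecture transcript), here is the 3x3 inversion recipe reviewed above, written out in Python: form the minors, apply the checkerboard signs to get the cofactors, transpose, and divide by the determinant. The matrix is a made-up example.

import numpy as np

def inverse_3x3(A):
    A = np.asarray(A, dtype=float)
    minors = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)  # delete row i, column j
            minors[i, j] = np.linalg.det(sub)                    # 2x2 determinant
    signs = np.array([[1, -1, 1], [-1, 1, -1], [1, -1, 1]])      # checkerboard pattern
    cofactors = minors * signs
    det = np.linalg.det(A)
    if np.isclose(det, 0.0):
        raise ValueError("matrix is not invertible")
    return cofactors.T / det     # transpose, then divide by the determinant

A = [[2., 0., 1.], [1., 3., 0.], [0., 1., 1.]]
print(inverse_3x3(A) @ np.array(A))   # approximately the identity matrix

And a second added illustration: the Lagrange multiplier setup, gradient of f equals lambda times gradient of g together with the constraint, written with sympy for an assumed example, maximizing f = xy subject to x + y = 4.

import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x * y                  # assumed objective function
g = x + y - 4              # assumed constraint, g = 0

equations = [sp.diff(f, x) - lam * sp.diff(g, x),   # f_x = lambda * g_x
             sp.diff(f, y) - lam * sp.diff(g, y),   # f_y = lambda * g_y
             g]                                     # the constraint itself
print(sp.solve(equations, [x, y, lam]))             # x = 2, y = 2, lambda = 2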
http://xoax.net/math/crs/multivariable_calculus_mit/lessons/Lecture34/
Originating Technology/NASA Contribution Imagine you are about to be dropped in the middle of a remote, inhospitable region—say the Kalahari Desert. What would you want to have with you on your journey back to civilization? Food and water, of course, but you can only carry so much. A truck would help, but what would you do when it runs out of gas? Any useful resources would have to be portable and—ideally—sustainable. Astronauts on future long-term missions would face similar circumstances as those in this survivalist scenario. Consider, for example, a manned mission to explore the surface of Mars. Given the extreme distance of the journey, the high cost of carrying cargo into space, and the impossibility of restocking any supplies that run out, astronauts on Mars would need to be able to “live off the land” as much as possible—a daunting proposition given the harsh, barren Martian landscape. Not to mention the lack of breathable air. Another consideration is fuel; spacecraft might have enough fuel to get to Mars, but not enough to return. The Moon is like a day trip on one tank of gas, but Mars is a considerably greater distance. In the course of planning and preparing for space missions, NASA engineers consistently run up against unprecedented challenges like these. Finding solutions to these challenges often requires the development of entirely new technologies. A number of these innovations—inspired by the extreme demands of the space environment—prove to be solutions for terrestrial challenges, as well. While developing a method for NASA to produce oxygen and fuel on Mars, one engineer realized the potential for the technology to generate something in high demand on Earth: energy. K.R. Sridhar was director of the Space Technologies Laboratory at the University of Arizona when Ames Research Center asked him to develop a solution for helping sustain life on Mars. Sridhar’s team created a fuel cell device that could use solar power to split Martian water into oxygen for breathing and hydrogen for use as fuel for vehicles. Sridhar saw potential for another application, though. When the NASA Mars project ended in 2001, Sridhar’s team shifted focus to develop a commercial venture exploring the possibility of using its NASA-derived technology in reverse—creating electricity from oxygen and fuel. On the surface, this sounds like standard hydrocarbon fuel cell technology, in which oxygen and a hydrocarbon fuel such as methanol flow into the cell where an electrolyte triggers an electrochemical reaction, producing water, carbon dioxide, and electrons. Fuel cells have provided tantalizing potential for a clean, alternative energy source since the first device was invented in 1839, and NASA has used fuel cells in nearly every mission since the 1960s. But conventional fuel cell technology features expensive, complicated systems requiring precious metals like platinum as a catalyst for the energy-producing reaction. Sridhar’s group believed it had emerged from its NASA work with innovations that, with further development, could result in an efficient, affordable fuel cell capable of supplying clean energy wherever it is needed. In 2001, Sridhar’s team founded Ion America and opened research and development offices on the campus of the NASA Research Park at Ames Research Center. There, with financial backing from investors who provided early funding to companies like Google, Genentech, Segway, and Amazon.com, the technology progressed and began attracting attention. 
In 2006, the company delivered a 5-kilowatt (kW) fuel cell system to The Sim Center, a national center for computational engineering, at The University of Tennessee Chattanooga, where the technology was successfully demonstrated. Now called Bloom Energy and headquartered in Sunnyvale, California, the company this year officially unveiled its NASA-inspired technology to worldwide media fanfare. Bloom Energy's ES-5000 Energy Server employs the planar solid oxide fuel cell technology Sridhar's team originally created for the NASA Mars project. At the core of the server are square ceramic fuel cells about the size of old-fashioned computer floppy disks. Crafted from an inexpensive sand-like powder, each square is coated with special inks (lime-green ink on the anode side, black on the cathode side) and is capable of producing 25 watts—enough to power a light bulb. Stacking the cells—with cheap metal alloy squares in between to serve as the electrolyte catalyst—increases the energy output: a stack about the size of a loaf of bread can power an average home, and a full-size Energy Server with the footprint of a parking space can produce 100 kW, enough to power a 30,000-square-foot office building or 100 average U.S. homes. Solid oxide fuel cells like those in Bloom's Energy Server operate at temperatures upwards of 1,800 °F. The high temperatures, efficiently harnessed by the Bloom system's materials and design, enable the server to use natural gas, any number of environmentally friendly biogases created from plant waste, or methane recaptured from landfills and farms. Fuel is fed into the system along with water. The high temperatures generate steam, which mixes with the fuel to create a reformed fuel called syngas on the surface of the cell. As the syngas moves across the anode, it draws oxygen ions from the cathode, and an electrochemical reaction results in electricity, water, and only a small amount of carbon dioxide—a process that according to Bloom is about 67-percent cleaner than that of a typical coal-fired power plant when using fossil fuels and 100-percent cleaner with renewable fuels. The server can switch between fuels on the fly and does not require an external chemical reformer or the expensive precious metals, corrosive acids, or molten materials required by other conventional fuel cell systems. The technology's "plug and play" modular architecture allows users to generate more power by simply adding more servers, resulting in a "pay as you grow" scenario in which customers can increase their energy output as their needs increase. The Bloom Energy Server also offers the benefits of localized power generation; the servers are located on site and off the grid, providing full-time power—as opposed to intermittent sources like solar and wind—without the inefficiencies of transmission and distribution, Bloom says. Future servers may even return to the original NASA function of using electricity to generate oxygen and hydrogen. The company envisions feeding electricity from wind or solar power into its servers along with water to produce storable hydrogen and oxygen. The server could then use the stored gases to generate electricity during cloudy, low-wind, and nighttime conditions. Stored hydrogen could even be used to provide fuel for hydrogen-powered cars.
Bloom quietly installed its first commercial Energy Server in 2008, and since then its servers have generated more than 11 million kilowatt hours (kWh) of electricity, along with a corresponding 14-million-pound reduction in carbon dioxide emissions, which the company says is the equivalent of powering about 1,000 American homes for 1 year and planting 1 million trees. Bloom’s current customers are a who’s-who of Fortune 500 companies, including Google, eBay, Bank of America, The Coca-Cola Company, and FedEx. Bloom says its customers can expect a return on their investment from energy cost savings within 3–5 years, and eBay has already claimed more than $100,000 in savings on electricity expenses. Sridhar believes it will be another 5 to 10 years before Bloom’s technology becomes cost-effective for home use. At that point, he sees the Bloom Energy Server as a solution for remote and underdeveloped areas in need of power. He says the company’s mission is “to make clean, reliable energy affordable to everyone in the world.” “One in three humans lives without power,” Sridhar says. “Energy demand exceeds supply.” Just within the United States, 281 gigawatts of new generating capacity—the output of 937 new 300-megawatt power plants—will be necessary by 2025 to meet national energy demands, according to the U.S. Energy Information Administration. The Bloom Energy Server may soon offer an environmentally sound option for meeting that challenge, a solution derived from the demands of space exploration. “NASA is a tremendous environment for encouraging innovation,” says Sridhar. “It’s all about solving problems that are seemingly unsolvable. After realizing we could make oxygen on Mars, making electrons on Earth seemed far less daunting. We’re grateful to NASA for giving us a challenge with serendipitous impact for mankind.” Energy Server™ is a trademark of Bloom Energy.
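A rough arithmetic check of the figure quoted above (added here for context, and assuming an average U.S. household consumption of roughly 11,000 kWh per year, a figure not given in the article): 11 million kWh divided by 11,000 kWh per home per year is about 1,000 home-years of electricity, which is consistent with the company's claim of powering about 1,000 American homes for one year.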
http://spinoff.nasa.gov/Spinoff2010/er_3.html
13
28
Before jumping to the applications of Rolle's theorem, let us study its definition. Rolle's theorem simply states that if a function f is continuous in the closed interval [a, b], differentiable in the open interval (a, b), and attains equal values at the two endpoints, i.e., f(a) = f(b), then there is at least one point c between a and b where the first derivative of the function (the slope of the tangent line to the graph of the function) is zero, i.e., f '(c) = 0. So it is clear that there exists a point c in (a, b) where the tangent line to the graph is parallel to the secant drawn between a and b, as shown in Figure 1. If we talk about the applications of Rolle's theorem, it plays an important intermediate role, serving as a basis for proving other important results such as Taylor's theorem and the mean value theorem. Rolle's theorem is in fact the special case of the mean value theorem in which f(a) = f(b). The mean value theorem states that if a function f is differentiable in the open interval (a, b) and continuous in the closed interval [a, b], then there exists a point c in (a, b) at which f '(c) = (f(b) – f(a)) / (b – a). This is one of the most useful consequences of Rolle's theorem in calculus. The extreme value theorem, which is usually studied alongside Rolle's theorem (and is in fact needed to prove it), states that if a function f is real valued and continuous in the closed interval [a, b], then f must attain at least one minimum and one maximum value; that is, there exist points c and d in [a, b] such that f(c) ≥ f(x) ≥ f(d) for all x in [a, b]. This theorem plays an important role in determining the maximum and minimum values of a function on an interval, complementing the earlier use of critical points for that purpose.
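As a quick worked example, added here for illustration, consider f(x) = x^2 – 4x + 3 on the interval [1, 3]. The function is a polynomial, so it is continuous on [1, 3] and differentiable on (1, 3), and f(1) = f(3) = 0, so the hypotheses of Rolle's theorem are satisfied. Its derivative is f '(x) = 2x – 4, which vanishes at c = 2, a point inside (1, 3), exactly as the theorem guarantees. For the mean value theorem, take the same function on [0, 3]: the secant slope is (f(3) – f(0)) / (3 – 0) = (0 – 3) / 3 = –1, and solving f '(c) = 2c – 4 = –1 gives c = 1.5, again a point inside the interval.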
http://math.tutorcircle.com/calculus/applications-of-rolles-theorem.html
13
14
Date: 14 Mar 1986 In 1986, the European spacecraft Giotto became one of the first spacecraft ever to encounter and photograph the nucleus of a comet, passing and imaging Halley's nucleus as it receded from the sun. Data from Giotto's camera were used to generate this enhanced image of the potato shaped nucleus that measures roughly 15 km across. Some surface features on the dark nucleus can also be made out. Every 76 years Comet Halley returns to the inner solar system and each time the nucleus sheds about a 6-m deep layer of its ice and rock into space. This debris shed from Halley's nucleus eventually disperses into an orbiting trail responsible for the Orionids meteor shower in October of every year, and the Eta Aquarids meteor shower every May. What Scientists/Engineers Say About This Image: "Going back further in time, ESA's Giotto mission to comet 1P/Halley was a milestone, showing for the first time that comets have nuclei. This was the first time a spacecraft came close enough to look through the fog surrounding a comet. Hard to believe that was only in 1986." --Peter Jenniskens: Research Scientist, SETI Institute and NASA Ames Research Center "Data from Giotto's camera, which used an automated targeting system, included a spectacular image of the potato shaped nucleus that measures roughly 15 km across. What surprised everyone was that the nucleus was not a snow-white ice ball, but dark as a lump of coal. Some craggy surface features and craters could be seen, and jets of gas and dust streaming into Halley's coma. This was the first-ever image of a "primitive body," and a highly active one at that. The automated targeting device was even fooled, homing in on a jet coming off the dark surface as the spacecraft flew past (rather than the surface itself, which was expected to be bright). Data obtained on the composition of single comet grains discovered something new -- grains of pure organic material or "CHON" -- and nothing else, proving that comets are largely organic material rather than snowballs!" --Jeff Cuzzi: Research Scientist, NASA Ames Research Center Credit: Halley Multicolor Camera Team, Giotto Project, ESA
http://solarsystem.nasa.gov/multimedia/display.cfm?Category=Planets&IM_ID=11423
13
11
For a given chemical element, every atom has the same number of protons in its nucleus, but the number of neutrons per atom may vary. In other words, the atoms of an element can have two or more different structures, which have the same atomic number (number of protons) but different mass numbers (number of protons plus neutrons). Based on these differences, the element can have different forms known as isotopes, each of which is made up of atoms with the same atomic structure. Isotopes that are radioactive are called radioisotopes. The term isotope comes from Greek and means "at the same place"—all the different isotopes of an element are placed at the same location on the periodic table. The isotopes of a given element have nearly identical chemical properties but their physical properties show somewhat greater variation. Thus the process of isotope separation represents a significant technological challenge. A particular atomic nucleus with a specific number of protons and neutrons is called a nuclide. The distinction between the terms isotope and nuclide has somewhat blurred, and they are often used interchangeably. Isotope is usually used when referring to several different nuclides of the same element; nuclide is more generic and is used when referencing only one nucleus or several nuclei of different elements. The properties of isotopes can be used for a variety of applications. Many people are aware that specific radioactive isotopes are used to produce nuclear power and nuclear weapons. In addition, radioactive isotopes or isotopes of different masses can be used as tracers in chemical and biochemical reactions, or to date geological samples. Also, several forms of spectroscopy rely on the unique nuclear properties of specific isotopes. In scientific nomenclature, isotopes and nuclides are specified by the name of the particular element (implicitly giving the atomic number) followed by a hyphen and the mass number. For example, carbon-12 and carbon-14 are isotopes of carbon; uranium-235 and uranium-238 are isotopes of uranium. Alternatively, the number of nucleons (protons and neutrons) per atomic nucleus may be denoted as a superscripted prefix attached to the chemical symbol of the element. Thus, the above examples would be denoted as 12C, 14C, 235U, and 238U, respectively. Isotones, Isobars, Nuclear isomers Isotopes are nuclides having the same atomic number (number of protons). They should be distinguished from isotones, isobars, and nuclear isomers. - Isotones are nuclides that have the same number of neutrons. For example, boron-12 and carbon-13 are isotones, because there are seven neutrons in each of their atoms. - Isobars are nuclides that have the same mass number (sum of protons plus neutrons). For example, carbon-12 and boron-12 are isobars. (In meteorology, however, an isobar is a line of constant pressure on a graph.) - Nuclear isomers are different excited states (energy states) of the same type of nucleus. A transition from one nuclear isomer to another is accompanied by emission or absorption of a gamma ray, or the process of internal conversion. (Nuclear isomers should not be confused with chemical isomers.) Variation in properties of isotopes A neutral atom has the same number of electrons as protons. Thus, the atoms of all the isotopes of an element have the same number of protons and electrons and the same electronic structure. 
Given that the chemical behavior of an atom is largely determined by its electronic structure, the isotopes of a particular element exhibit nearly identical chemical behavior. The main exception to this rule is what is called the "kinetic isotope effect": heavier isotopes tend to react somewhat more slowly than lighter isotopes of the same element. This "mass effect" is most pronounced for protium (1H) as compared with deuterium (2H), because deuterium has twice the mass of protium. For heavier elements, the differences between the atomic masses of the isotopes are not so pronounced, and the mass effect is much smaller, usually negligible. Likewise, two molecules that differ only in the isotopic nature of their atoms (isotopologues) will have identical electronic structures. Therefore, their physical and chemical properties will be almost indistinguishable (again with deuterium being the primary exception to this rule). The vibrational modes of a molecule are determined by its shape and the masses of its constituent atoms. Consequently, isotopologues will have different sets of vibrational modes. Given that vibrational modes allow a molecule to absorb photons of corresponding (infrared) energies, isotopologues have different optical properties in the infrared range. Although isotopes exhibit nearly identical electronic and chemical behavior, their nuclear behavior varies dramatically. Atomic nuclei consist of protons and neutrons bound together by the strong nuclear force. As protons are positively charged, they repel one another. Neutrons, being electrically neutral, allow some separation between the positively charged protons, reducing the electrostatic repulsion. Neutrons also stabilize the nucleus, because at short ranges they attract each other and protons equally by the strong nuclear force, and this attraction also offsets the electrical repulsion between protons. For this reason, one or more neutrons are necessary for two or more protons to be bound together in a nucleus. As the number of protons increases, additional neutrons are needed to form a stable nucleus. For example, the neutron/proton ratio of 3He is 1:2, but the neutron/proton ratio of 238U is greater than 3:2. If the atomic nucleus contains too many or too few neutrons, it is unstable and subject to nuclear decay. Occurrence in nature Most elements have several different isotopes that can be found in nature. The relative abundance of an isotope is strongly correlated with its tendency toward nuclear decay—short-lived nuclides decay quickly and their numbers are reduced just as fast, while their long-lived counterparts endure. This, however, does not mean that short-lived species disappear entirely—many are continually produced through the decay of longer-lived nuclides. Also, short-lived isotopes such as those of promethium have been detected in the spectra of stars, where they are presumably being made continuously, by a process called stellar nucleosynthesis. The tabulated atomic mass of an element is an average that takes into account the presence of multiple isotopes with different masses and in different proportions. According to generally accepted cosmology, virtually all nuclides—other than isotopes of hydrogen and helium, and traces of some isotopes of lithium, beryllium, and boron—were built in stars and supernovae. Their respective abundances result from the quantities formed by these processes, their spread through the galaxy, and their rates of decay. 
After the initial coalescence of the solar system, isotopes were redistributed according to mass (see also Origin of the Solar System). The isotopic composition of elements is different on different planets, making it possible to determine the origin of meteorites. Molecular mass of isotopes The atomic mass (Mr) of an element is determined by its nucleons. For example, carbon-12 has six protons and six neutrons, while carbon-14 has six protons and eight neutrons. When a sample contains two isotopes of an element, the atomic mass of the element is calculated by the following equation: Mr(element) = [Mr(1) × %abundance(1) + Mr(2) × %abundance(2)] / 100. Here, Mr(1) and Mr(2) are the molecular masses of each individual isotope, and "%abundance" is the percentage abundance of that isotope in the sample. Applications of isotopes Several applications capitalize on properties of the various isotopes of a given element. Use of chemical properties - One of the most common applications is known as "isotopic labeling"—the use of unusual isotopes as tracers or markers in chemical and biochemical reactions. For example, isotopes of different masses can be distinguished by techniques such as mass spectrometry or infrared spectroscopy (see "Properties"). Alternatively, if a radioactive isotope is used, it can be detected by the radiation it emits—a technique called radioisotopic labeling. - A technique similar to radioisotopic labeling is radiometric dating. Using the known half-life of an unstable element, one can estimate the amount of time that has elapsed since a known level of isotope came into existence. The most widely known example is radiocarbon dating, which is used to determine the age of carbon-containing materials. - The kinetic isotope effect can be used to determine the mechanism of a reaction, by substituting one isotope for another. Use of nuclear properties - The nuclear reactions of certain radioactive isotopes are utilized for the production of nuclear power and nuclear weapons. - Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes. For example, nuclear magnetic resonance (NMR) spectroscopy can be used for isotopes with a nonzero nuclear spin. The most common isotopes used with NMR spectroscopy are 1H, 2D, 15N, 13C, and 31P. - Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as 57Fe. - Isotope table - Table of nuclides - List of particles All links retrieved December 19, 2007: - Atomic weights of all isotopes - Atomgewichte, Zerfallsenergien und Halbwertszeiten aller Isotope - Exploring the Table of the Isotopes at the LBNL. New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. Note: Some restrictions may apply to use of individual images, which are separately licensed.
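As a worked illustration of the atomic-mass formula given earlier in this article (an example added here; the abundance figures are approximate), natural carbon is about 98.9 percent carbon-12 (mass 12.000) and about 1.1 percent carbon-13 (mass 13.003). The tabulated atomic mass is therefore roughly (12.000 × 98.9 + 13.003 × 1.1) / 100 ≈ 12.01, which matches the value listed for carbon on the periodic table.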
http://www.newworldencyclopedia.org/entry/Isotope
13
15
A percentage is a way of expressing a proportion or a fraction as a whole number. A number such as "45%" ("45 percent" or "45 per cent") is actually shorthand for the fraction 45/100. In British English, percent is always written as two words (per cent), and the spelling as one word is considered incorrect. In American English, percent is common, and is usually considered correct. In the early part of the twentieth century, there was a dotted abbreviation form per cent., which came from the original Latin per centum. The concept of considering values as parts of a hundred is originally Greek. As an illustration, - "45 percent of human beings..." is equivalent to both of the following: - "45 out of every 100 people..." - "0.45 of the human population..." A percentage may be a number larger than 100; for example, 200% of a number refers to twice the number. In fact, this would be a 100% increase, while a 200% increase would give a number three times the original value. Thus one can see the relationship between percent increase and times increase. The symbol for percent "%" is a stylised form of the two zeros. It evolved from a symbol similar except for a horizontal line instead of diagonal (c. 1650), which in turn evolved from a symbol representing "P cento" (c. 1425). Traditionally, the symbol follows the number to which it applies. Yet recently there are some examples on the net which use the symbol preceding the number. This may have something to do either with a firm's typographic style, or perhaps an international standard relating to the metric system. And in Unicode, there is also an "ARABIC PERCENT SIGN" (U+066A), which has the circles replaced by square dots set on edge. In computing, other names for the character include: ITU-T: percent sign; mod; grapes. INTERCAL: double-oh-seven. Confusion from the use of percentages Many confusions arise from the use of percentages, due to inconsistent usage or misunderstanding of basic arithmetic. Due to inconsistent usage, it is not always clear from the context what a percentage is relative to. When speaking of a "10% rise" or a "10% fall" in a quantity, the usual interpretation is that this is relative to the initial value of that quantity; for example, a 10% increase on an item initially priced at 100$ is 10$, giving a new price of 110$; to many people, any other usage is incorrect. In the case of interest rates, however, it is a common practice to use the percent change differently: suppose that an initial interest rate is given as a percentage like 10%. Suppose the interest rate rises to 20%. This could be described as a 100% increase, measuring the increase relative to the initial value of the interest rate. However, many people say in practice "The interest rate has risen by 10%," meaning 10% of 100% additional to the initial 10% (giving 20% in total), though it should mean according to the usual interpretation of percentages a 10% increase on the initial 10% (giving 11%). To counter this confusion, the expression "percentage points" is often used. So, in the previous example, "The interest rate has increased by 10 percentage points" would be an unambiguous expression that the rate is now 20%. Often also, the term "basis points" is used, one basis point being one one hundredth of a percentage point. Thus, the interest rate above increased by 1000 basis points. A common error when using percentages is to imagine that a percentage increase is cancelled out when followed by the same percentage decrease. 
A 50% increase from 100 is 100 + 50, or 150, but a 50% reduction from 150 is 150 - 75, or 75, so the two changes do not cancel: the overall result is a 25% decrease. In general, if x is the change expressed as a fraction, the net effect is (1 + x)(1 - x) = 1 - x², i.e., a net decrease proportional to the square of the percentage change. Owners of dot com stocks came to understand that even if a stock has sunk 99%, it can nevertheless still sink another 99%. Also, if a stock rises by a large percentage, you're still broke if the stock subsequently drops 100%, meaning it has a zero value.
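The arithmetic above is easy to check in a few lines of code. The short C++ sketch below is added purely as an illustration (the helper name applyPercent is made up for this example); it applies a +50% change followed by a -50% change and prints the net result.

#include <iostream>

// Apply a percentage change (e.g. +50 or -50) to a value.
double applyPercent(double value, double percent)
{
    return value * (1.0 + percent / 100.0);
}

int main()
{
    double start = 100.0;
    double up = applyPercent(start, 50.0);    // 150
    double down = applyPercent(up, -50.0);    // 75
    std::cout << "After +50% then -50%: " << down << std::endl;   // prints 75
    std::cout << "Net change: " << (down - start) / start * 100.0
              << "%" << std::endl;            // prints -25, matching 1 - 0.5*0.5 = 0.75
    return 0;
}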
http://www.biologydaily.com/biology/%25
13
18
The inverse of a function "undoes" that function. An example may be the best way to explain what this means: for nonnegative x, the inverse of x^2 is the square root of x, since taking the square root undoes squaring. For the Math IC, it is important to know how to find the inverse of a simple function mathematically. For example: what is the inverse of f(x) = 3x + 2? The easiest way to find the inverse of a function is to break the function apart step by step. The function f(x) = 3x + 2 requires that any value of x must first be multiplied by three and then have 2 added to it. The inverse of this function must begin by subtracting 2 and then dividing by three, undoing the original function: f^-1(x) = (x – 2)/3. You should know how an inverse works in order to deal with any conceptual inverse questions the Math IC might throw at you. But if you are ever asked to come up with the inverse of a particular function, there is an easy method that will always work:
- Replace f(x) with y.
- Switch the places of x and y.
- Solve for y.
- Replace y with f^-1(x).
Here's an example of the method in action: what is the inverse of the function f(x) = 3x + 2? First, replace f(x) with y: y = 3x + 2. Then switch the places of x and y, and solve for y: x = 3y + 2, so y = (x – 2)/3, and therefore f^-1(x) = (x – 2)/3. Finding Whether the Inverse of a Function Is a Function Contrary to their name, inverse functions are not necessarily functions at all. Take a look at this question: what is the inverse of f(x) = x^3? Begin by writing y = x^3. Next, switch the places of x and y: x = y^3. Solve for y: y is the cube root of x. Now you need to analyze the inverse of the function and decide whether for every x there is only one y. If only one y is associated with each x, you've got a function. Otherwise, you don't. In this case, every x that falls within the domain turns out one value for y, so f^-1 is a function. Here's another question: what is the inverse of f(x) = 2|x – 1|, and is it a function? Again, replace f(x) with y, switch the places of x and y, and solve for y. Now, since you're dealing with an absolute value, split the equation into two cases: y = 1 + x/2 and y = 1 – x/2. The inverse of f(x) is this set of two equations. As you can see, for every value of x except 0, the inverse of the function assigns two values of y, so f^-1 is not a function.
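Here is one more worked example of the four-step method, added for practice (the function is not from the original text): find the inverse of f(x) = 2x – 5. Replace f(x) with y: y = 2x – 5. Switch x and y: x = 2y – 5. Solve for y: y = (x + 5)/2. Replace y with f^-1(x): f^-1(x) = (x + 5)/2. As a check, f(f^-1(x)) = 2 · (x + 5)/2 – 5 = x, so the inverse really does undo the original function, and since each x gives exactly one y, this inverse is itself a function.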
http://www.sparknotes.com/testprep/books/sat2/math1c/chapter10section4.rhtml
13
15
In quantum mechanics, the concept of matter waves or de Broglie waves reflects the wave–particle duality of matter. The theory was proposed by Louis de Broglie in 1924 in his PhD thesis. The de Broglie relations show that the wavelength is inversely proportional to the momentum of a particle; this wavelength is also called the de Broglie wavelength. Also, the frequency of matter waves, as deduced by de Broglie, is directly proportional to the total energy E (sum of its rest energy and the kinetic energy) of a particle.
Historical context
At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (see history of wave and particle viewpoints). This division was challenged when, in his 1905 paper on the photoelectric effect, Albert Einstein postulated that light was emitted and absorbed as localized packets, or "quanta" (now called photons). These quanta would have an energy E = hf, where f is the frequency of the light and h is Planck's constant. Einstein's postulate was confirmed experimentally by Robert Millikan and Arthur Compton over the next two decades. Thus it became apparent that light has both wave-like and particle-like properties. De Broglie, in his 1924 PhD thesis, sought to expand this wave-particle duality to all particles: "When I conceived the first basic ideas of wave mechanics in 1923–24, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905."
In 1926, Erwin Schrödinger published an equation describing how this matter wave should evolve—the matter wave equivalent of Maxwell's equations—and used it to derive the energy spectrum of hydrogen. That same year Max Born published his now-standard interpretation that the square of the amplitude of the matter wave gives the probability to find the particle at a given place. This interpretation was in contrast to de Broglie's own interpretation, in which the wave corresponds to the physical motion of a localized particle.
de Broglie relations
The de Broglie relations are λ = h/p and f = E/h, where h is Planck's constant, p is the particle's momentum, and E is its total energy. The equations can be equivalently written as p = ħk and E = ħω, using the definitions
- ħ = h/2π is the reduced Planck's constant (also known as Dirac's constant, pronounced "h-bar"),
- k = 2π/λ is the angular wavenumber,
- ω = 2πf is the angular frequency.
Special relativity allows the equations to be written as λ = h/(γm0v) and f = γm0c²/h, where m0 is the particle's rest mass, v is the particle's velocity, γ is the Lorentz factor, and c is the speed of light in a vacuum. See group velocity for details of the derivation of the de Broglie relations. Group velocity (equal to the particle's speed) should not be confused with phase velocity (equal to the product of the particle's frequency and its wavelength). In the case of a non-dispersive medium, they happen to be equal, but otherwise they are not. The two relations can also be combined into the single four-vector equation P = ħK, where P is the energy–momentum four-vector and K is the wave four-vector, which is frame-independent.
Experimental confirmation
Matter waves were first experimentally confirmed to occur in the Davisson-Germer experiment for electrons, and the de Broglie hypothesis has been confirmed for other elementary particles. Furthermore, neutral atoms and even molecules have been shown to be wave-like. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target.
The angular dependence of the reflected electron intensity was measured, and was determined to have the same diffraction pattern as those predicted by Bragg for x-rays. Before the acceptance of the de Broglie hypothesis, diffraction was a property that was thought to be only exhibited by waves. Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. When the de Broglie wavelength was inserted into the Bragg condition, the observed diffraction pattern was predicted, thereby experimentally confirming the de Broglie hypothesis for electrons. This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, the Davisson–Germer experiment showed the wave-nature of matter, and completed the theory of wave-particle duality. For physicists this idea was important because it means that not only can any particle exhibit wave characteristics, but that one can use wave equations to describe phenomena in matter if one uses the de Broglie wavelength. Neutral atoms Experiments with Fresnel diffraction and specular reflection of neutral atoms confirm the application of the de Broglie hypothesis to atoms, i.e. the existence of atomic waves which undergo diffraction, interference and allow quantum reflection by the tails of the attractive potential. Advances in laser cooling have allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the thermal de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method. This effect has been used to demonstrate atomic holography, and it may allow the construction of an atom probe imaging system with nanometer resolution. The description of these phenomena is based on the wave properties of neutral atoms, confirming the de Broglie hypothesis. Recent experiments even confirm the relations for molecules and even macromolecules, which are normally considered too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes. The researchers calculated a De Broglie wavelength of the most probable C60 velocity as 2.5 pm. More recent experiments prove the quantum nature of molecules with a mass up to 6910 amu. In general, the De Broglie hypothesis is expected to apply to any well isolated object. Spatial Zeno effect The matter wave leads to the spatial version of the quantum Zeno effect. If an object (particle) is observed with frequency Ω >> ω in a half-space (say, y < 0), then this observation prevents the particle, which stays in the half-space y > 0 from entry into this half-space y < 0. Such an "observation" can be realized with a set of rapidly moving absorbing ridges, filling one half-space. In the system of coordinates related to the ridges, this phenomenon appears as a specular reflection of a particle from a ridged mirror, assuming the grazing incidence (small values of the grazing angle). Such a ridged mirror is universal; while we consider the idealised "absorption" of the de Broglie wave at the ridges, the reflectivity is determined by wavenumber k and does not depend on other properties of a particle. 
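As a quick numerical companion to the Davisson-Germer discussion above, the short C++ sketch below (added here as an illustration, not part of the original article) evaluates λ = h / sqrt(2 m e V) for an electron accelerated through a potential of about 54 volts, a value commonly quoted for the 1927 experiment; the result, roughly 0.167 nm, is comparable to the atomic spacings in a nickel crystal, which is why diffraction was observable.

#include <cmath>
#include <iostream>

int main()
{
    const double h  = 6.626e-34;   // Planck's constant, J*s
    const double me = 9.109e-31;   // electron rest mass, kg
    const double e  = 1.602e-19;   // elementary charge, C

    double V = 54.0;               // accelerating voltage, volts
    // Non-relativistic momentum gained from the accelerating potential: p = sqrt(2*m*e*V)
    double p = std::sqrt(2.0 * me * e * V);
    double lambda = h / p;         // de Broglie wavelength, metres

    std::cout << "de Broglie wavelength at " << V << " V: "
              << lambda * 1e9 << " nm" << std::endl;   // about 0.167 nm
    return 0;
}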
See also - Atom optics - Atomic de Broglie microscope - Atomic mirror - Bohr model - Electron diffraction - Faraday wave - Schrödinger equation - Theoretical and experimental justification for the Schrödinger equation - Thermal de Broglie wavelength - L. de Broglie, Recherches sur la théorie des quanta (Researches on the quantum theory), Thesis (Paris), 1924; L. de Broglie, Ann. Phys. (Paris) 3, 22 (1925). - Resnick, R.; Eisberg, R. (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-87373-X. - Louis de Broglie "The Reinterpretation of Wave Mechanics" Foundations of Physics, Vol. 1 No. 1 (1970) - Holden, Alan (1971). Stationary states. New York: Oxford University Press. ISBN 0-19-501497-9. - Mauro Dardo, Nobel Laureates and Twentieth-Century Physics, Cambridge University Press 2004, pp. 156–157 - R.B.Doak; R.E.Grisenti, S.Rehbein, G.Schmahl, J.P.Toennies2, and Ch. Wöll (1999). "Towards Realization of an Atomic de Broglie Microscope: Helium Atom Focusing Using Fresnel Zone Plates". Physical Review Letters 83 (21): 4229–4232. Bibcode:1999PhRvL..83.4229D. doi:10.1103/PhysRevLett.83.4229. - F. Shimizu (2000). "Specular Reflection of Very Slow Metastable Neon Atoms from a Solid Surface". Physical Review Letters 86 (6): 987–990. Bibcode:2001PhRvL..86..987S. doi:10.1103/PhysRevLett.86.987. PMID 11177991. - D. Kouznetsov; H. Oberst (2005). "Reflection of Waves from a Ridged Surface and the Zeno Effect". Optical Review 12 (5): 1605–1623. Bibcode:2005OptRv..12..363K. doi:10.1007/s10043-005-0363-9. - H.Friedrich; G.Jacoby, C.G.Meister (2002). "quantum reflection by Casimir–van der Waals potential tails". Physical Review A 65 (3): 032902. Bibcode:2002PhRvA..65c2902F. doi:10.1103/PhysRevA.65.032902. - Pierre Cladé; Changhyun Ryu, Anand Ramanathan, Kristian Helmerson, William D. Phillips (2008). "Observation of a 2D Bose Gas: From thermal to quasi-condensate to superfluid". arXiv:0805.3519. - Shimizu; J.Fujita (2002). "Reflection-Type Hologram for Atoms". Physical Review Letters 88 (12): 123201. Bibcode:2002PhRvL..88l3201S. doi:10.1103/PhysRevLett.88.123201. PMID 11909457. - D. Kouznetsov; H. Oberst, K. Shimizu, A. Neumann, Y. Kuznetsova, J.-F. Bisson, K. Ueda, S. R. J. Brueck (2006). "Ridged atomic mirrors and atomic nanoscope". Journal of Physics B 39 (7): 1605–1623. Bibcode:2006JPhB...39.1605K. doi:10.1088/0953-4075/39/7/005. - Arndt, M.; O. Nairz, J. Voss-Andreae, C. Keller, G. van der Zouw, A. Zeilinger (14 October 1999). "Wave-particle duality of C60". Nature 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170. - Gerlich, S.; S. Eibenberger, M. Tomandl, S. Nimmrichter, K. Hornberger, P. J. Fagan, J. Tüxen, M. Mayor & M. Arndt (05 April 2011). "Quantum interference of large organic molecules". Nature Communications 2 (263): 263–. Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC 3104521. PMID 21468015. Further reading - Broglie, Louis de, The wave nature of the electron Nobel Lecture, 12, 1929 - Tipler, Paul A. and Ralph A. Llewellyn (2003). Modern Physics. 4th ed. New York; W. H. Freeman and Co. ISBN 0-7167-4345-0. pp. 203–4, 222–3, 236. - Web version of Thesis, translated by Kracklauer (English) - Zumdahl, Steven S. (2005). Chemical Principles (5th ed.). Boston: Houghton Mifflin. ISBN 0-618-37206-7. - An extensive review article "Optics and interferometry with atoms and molecules" appeared in July 2009: http://www.atomwave.org/rmparticle/RMPLAO.pdf. 
http://en.wikipedia.org/wiki/Matter_wave
13
44
Introduction to Graphs and the Graphing Calculator. R. Basic Concepts of Algebra. The Real-Number System. Integer Exponents, Scientific Notation, and Order of Operations. Addition, Subtraction, and Multiplication of Polynomials. Factoring. Rational Expressions. Radical Notation and Rational Exponents. The Basics of Equation Solving. 1. Graphs, Functions, and Models. Functions, Graphs, and Graphers. Linear Functions, Slope, and Applications. Modeling: Data Analysis, Curve Fitting, and Linear Regression. More on Functions. Symmetry and Transformations. Variation and Applications. Distance, Midpoints, and Circles. 2. Functions and Equations: Zeros and Solutions. Zeros of Linear Functions and Models. The Complex Numbers. Zeros of Quadratic Functions and Models. Analyzing Graphs of Quadratic Functions. Modeling: Data Analysis, Curve Fitting, and Quadratic Regression. Zeros and More Equation Solving. Solving Inequalities. 3. Polynomial and Rational Functions. Polynomial Functions and Models. Polynomial Division; The Remainder and Factor Theorems. Theorems about Zeros of Polynomial Functions. Rational Functions. Polynomial and Rational Inequalities. 4. Exponential and Logarithmic Functions. Composite and Inverse Functions. Exponential Functions. Logarithmic Functions and Graphs. Properties of Logarithmic Functions. Solving Exponential and Logarithmic Equations. Applications and Models: Growth and Decay. 5. The Trigonometric Functions. Trigonometric Functions of Acute Angles. Applications of Right Triangles. Trigonometric Functions of Any Angle. Radians, Arc Length, and Angular Speed. Circular Functions: Graphs and Properties. Graphs of Transformed Sine and Cosine Functions. 6. Trigonometric Identities, Inverse Functions, and Equations. Identities: Pythagorean and Sum and Difference. Identities: Cofunction, Double-Angle, and Half-Angle. Proving Trigonometric Identities. Inverses of the Trigonometric Functions. Solving Trigonometric Equations. 7. Applications of Trigonometry. The Law of Sines. The Law of Cosines. Complex Numbers: Trigonometric Form. Polar Coordinates and Graphs. Vectors and Applications. Vector Operations. 8. Systems and Matrices. Systems of Equations in Two Variables. Systems of Equations in Three Variables. Matrices and Systems of Equations. Matrix Operations. Inverses of Matrices. Systems of Inequalities and Linear Programming. Partial Fractions. 9. Conic Sections. The Parabola. The Circle and the Ellipse. The Hyperbola. Nonlinear Systems of Equations. 10. Sequences, Series, and Combinatorics. Sequences and Series. Arithmetic Sequences and Series. Geometric Sequences and Series. Mathematical Induction. Combinatorics: Permutations. Combinatorics: Combinations. The Binomial Theorem. Probability. Appendixes. A. Descartes' Rule of Signs. B. Determinants and Cramer's Rule. C. Parametric Equations.
http://www.textbooks.com/Algebra-and-Trigonometry-Graphs-and-Models---Text-Only-3rd-Edition/9780321279118/Marvin-L-Bittinger.php?mppg=2
13
12
The Andromeda galaxy's dusty pink arms contrast with older stars that glow blue in a striking new image from the Spitzer Space Telescope. Using about 3000 separate exposures, Pauline Barmby of the Harvard-Smithsonian Center for Astrophysics in Cambridge, US, and her team found that the spiral galaxy has about a trillion stars. This measurement is the first census of Andromeda's stars in the infrared part of the spectrum and agrees with previous estimates of the stars' combined mass. "The Spitzer data trace with startling clarity the star-forming material all the way into the inner part of the galaxy," says George Helou, deputy director of NASA's Spitzer Science Center at the California Institute of Technology. Gas and dust "The challenge is to understand what shapes the distribution of this gas and dust, and what modulates the star formation at different locations." Andromeda, also called M31, has also been the target for the Gemini North Telescope on Mauna Kea, Hawaii, US. It took the deepest and highest-resolution pictures yet of the galaxy's central bulge and inner disc in near-infrared light. Using adaptive optics that compensate for distortions caused by turbulence in the atmosphere, Knut Olsen, an astronomer with the US National Optical Astronomy Observatory, found that most stars in the galaxy's bulge and inner disc are old. This suggests that the disc has existed in its current form for at least 6 billion years. Galaxies are thought to form by mergers of material over time, but a galactic disc needs a relatively stable environment to thrive. "The discs of galaxies can't form in these violent collisions," Olsen told New Scientist. Given the age of the observed stars, Olsen's data suggest that this violent phase of Andromeda's evolution lasted until 6 billion years ago. This concurs with computer simulations that indicate that the disc could have formed at this time. Another group, led by Tim Davidge of the Herzberg Institute of Astrophysics, Canada, took Gemini data and found a young, violent star near the centre of the Andromeda galaxy. The discovery of an "asymptotic giant branch" star came as a surprise because scientists thought that the gravitational field of the nearby supermassive black hole would destroy the gas and dust clouds needed to nurture young stars. This type of star is short-lived and is also found in the Milky Way. "Now we see that the centres of M31 and the Milky Way may be more similar than once thought," Davidge says. The two galaxies are expected to collide in roughly 5 billion years. The research was presented at the American Astronomical Society Meeting in Alberta, Canada, on Monday.
http://www.newscientist.com/article/dn9282
13
21
Primordial Black Holes One goal of Astropulse is to detect mini black holes that are evaporating due to "Hawking radiation". These mini black holes would have been created during the big bang, unlike currently known black holes.What is a black hole? When a huge star runs out of fuel, it collapses and then explodes in a supernova. What's left over is a massive but relatively small object called a black hole. To give you an idea of a black hole's density: the sun has a radius of 700,000 km. A black hole that weighs as much as the sun would have a radius of 3 km. If you could fill a typical pop can (12 oz) with the material in such a black hole, the can would weigh 7 trillion tons, almost as much as the weight of the water in Lake Superior.* Perhaps the most famous fact about black holes is the one that makes them "black": because of a black hole's strong gravity, nothing, not even light, can escape from within a black hole. One way to express this is to say that the escape velocity inside a black hole is greater than the speed of light.** What is escape velocity? It's the speed at which you have to throw something to escape a planet or star. For instance, how fast would you have to throw a baseball up in the air, in order to launch it into space, so that it would travel farther and farther away and never come back? (Neglecting air friction.) That's the Earth's escape velocity, about 25,000 miles per hour. This is somewhat more than the top speed of the space shuttle. (The shuttle can afford to travel more slowly, because it propels itself over a period of time, rather than being launched with a single throw.) If you could somehow go beneath the surface of a black hole, and shine a flashlight out into space, the light coming out of the flashlight would not be going fast enough to escape -- it would fall back toward the black hole. So a black hole never radiates light from inside itself. That makes it "black", so it cannot be seen through a telescope ... it blends right in with the blackness of space around it.Hawking radiation ... at least, that's what scientists used to think. But the astrophysicist Stephen Hawking theorized that black holes would radiate a tiny amount of light and matter into space. This happens because of virtual particles that appear and disappear in the vacuum. "Vacuum" is the word scientists use to describe space that it is as empty as possible, with no particles, dust, gas, light, or anything else in it. But it turns out that even a vacuum is not truly empty. It contains no ordinary particles, but it does contain "virtual" particles, which appear from nothing and then disappear before they have a chance to go anywhere or do anything. Virtual particles are created in pairs; one virtual particle is made of matter (like an electron), and the other is antimatter (like a positron.) According to Stephen Hawking, it's possible for virtual particles to be created just outside the surface of a black hole, with one of the two particles falling into the black hole, and the other escaping. This process takes matter from the black hole, and allows it to leave in the form of the escaping particles. It turns out that smaller black holes radiate more than large black holes. A black hole with the mass of the sun would radiate a negligible amount of energy; it would take 7E33 (7 with 33 zeroes after it) such black holes to equal a 60 watt light bulb. On the other hand, a "small" black hole of one billion tons (the mass of the Great Wall of China) would emit 300 million watts. 
Over time, this radiation would cause the mini black hole to shrink, and as it shrinks, it would radiate even more strongly. After billions of years, it would finally radiate all of its mass away, evaporating suddenly. We hope that this evaporation would produce radio waves that Astropulse can detect. The evaporation wouldn't create radio waves directly. Instead, it would create an expanding fireball of high energy gamma rays and particles. This fireball would interact with the surrounding magnetic field, pushing it out and generating radio waves.Birth of a black hole Where would a mini black hole come from? No known black hole is that small. In fact, all known black holes are created in one of two ways. First, they can come from the collapse and supernova explosion of a large star. A star collapses many times over its life span, each time burning a new kind of fuel. Initially, a star burns hydrogen in a nuclear fusion reaction, turning the hydrogen into helium. The heat from this reaction keeps the star from collapsing under its own weight. When the hydrogen is used up, the star collapses and becomes hotter, until it's hot enough to burn helium. Each time the star uses up one kind of fuel, it collapses further until it can burn the next kind of fuel. When the star uses up all of its fuel, it can settle down as a white dwarf star, or it can explode and leave behind a neutron star or black hole. Black holes resulting from this process weigh 4 to 16 times as much as the sun. Second, large black holes can result from mergers of smaller ones. The black hole at the center of the Milky Way galaxy weighs 4 million times as much as the sun. Black holes in other galaxies may weigh even more. These processes only allow for the creation of star-sized black holes or larger. But it has been suggested that tiny black holes could have been created at the beginning of the universe, during the big bang. If that were true, some of those mini black holes might be evaporating right now! Because they would have been created in the very early universe, they are called primordial black holes, and Astropulse tries to detect them. * The density of a black hole depends on how you count its volume. The pop can calculation assumes that the black hole extends to its event horizon, the surface at which the escape velocity equals the speed of light. It might be better to say that a black hole is a singularity, meaning its mass is concentrated at a single point (of zero volume) at the center. On this view, the black hole's density is infinite! ** Another way to say this is that the black hole warps time and space until "forward in time" becomes "into the black hole." Nothing inside the black hole can escape, or even start to travel away from the black hole, any more than it can go backward in time. |Copyright © 2013 University of California|
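Two of the figures quoted above, the roughly 3 km radius for a black hole of one solar mass and the roughly 25,000 mile-per-hour escape velocity of the Earth, both follow from the Newtonian escape-velocity formula v = sqrt(2GM/r); setting v equal to the speed of light and solving for r gives the Schwarzschild radius r = 2GM/c^2. The short C++ sketch below, added here purely as an illustration, checks both numbers:

#include <cmath>
#include <iostream>

int main()
{
    const double G      = 6.674e-11;  // gravitational constant, m^3 kg^-1 s^-2
    const double c      = 2.998e8;    // speed of light, m/s
    const double mSun   = 1.989e30;   // mass of the sun, kg
    const double mEarth = 5.972e24;   // mass of the Earth, kg
    const double rEarth = 6.371e6;    // mean radius of the Earth, m

    // Escape velocity from the Earth's surface: v = sqrt(2*G*M/r)
    double vEscape = std::sqrt(2.0 * G * mEarth / rEarth);
    std::cout << "Earth escape velocity: " << vEscape / 1609.344 * 3600.0
              << " mph" << std::endl;                        // about 25,000 mph

    // Schwarzschild radius of a one-solar-mass black hole: r = 2*G*M/c^2
    double rs = 2.0 * G * mSun / (c * c);
    std::cout << "Schwarzschild radius (1 solar mass): "
              << rs / 1000.0 << " km" << std::endl;          // about 3 km
    return 0;
}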
http://setiathome.berkeley.edu/ap_prbh.php
13
15
Functions are pieces of code that can be executed by calling them from another function. You know that a common C++ application must have at least one function: int main(). The general form of a function is: return_type function_name(parameter1, parameter2, ...) { statements }
- return_type is the type that the function returns as a result (e.g. int, float, string, ...). It can be void if the function doesn't return anything.
- function_name is the identifier of the function; we call the function using that identifier.
- parameters are the values that we give to the function so it can perform some operations with them.
- statements are the code in the function, surrounded by braces.
#include <iostream>
using namespace std;

int sum(int a, int b)
{
    return a + b;        // return the sum of the two parameters
}

int main()
{
    int x = 5;
    int y = 6;
    int z = sum(x, y);   // call sum() with x and y as arguments
    cout << z << endl;   // prints 11
    return 0;
}
The function expects two arguments: the first will be the value of x, the second will be the value of y, and it will return their sum. So we initialize two variables, x and y, and the variable z will get the value of the sum of x and y. That operation is performed by the function.
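For completeness, here is a second small example, added to this article as an illustration (the function name printGreeting is made up here). It shows a void function, which performs an action but returns nothing, so it is called on its own line rather than being assigned to a variable.

#include <iostream>
#include <string>
using namespace std;

// A void function: it prints a message but does not return a value.
void printGreeting(const string& name)
{
    cout << "Hello, " << name << "!" << endl;
}

int main()
{
    printGreeting("Alice");   // prints: Hello, Alice!
    printGreeting("Bob");     // prints: Hello, Bob!
    return 0;
}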
http://www.codeitwell.com/functions-in-cpp.html
13
20
This article is Part 1 of a series of 3 articles that I am going to post. The proposed article content will be as follows:
- Part 1: This one will be an introduction to Perceptron networks (single layer neural networks)
- Part 2: Will be about multi layer neural networks, and the back propagation training method to solve a non-linear classification problem such as the logic of an XOR logic gate. This is something that a Perceptron can't do. This is explained further within this article.
- Part 3: Will be about how to use a genetic algorithm (GA) to train a multi layer neural network to solve some logic problem
Let's start with some biology Nerve cells in the brain are called neurons. There are an estimated 10^10 to 10^13 neurons in the human brain. Each neuron can make contact with several thousand other neurons. Neurons are the units which the brain uses to process information. So what does a neuron look like? A neuron consists of a cell body, with various extensions from it. Most of these are branches called dendrites. There is one much longer process (possibly also branching) called the axon. The dashed line shows the axon hillock, where transmission of signals starts. The following diagram illustrates this. Figure 1 Neuron The boundary of the neuron is known as the cell membrane. There is a voltage difference (the membrane potential) between the inside and outside of the membrane. If the input is large enough, an action potential is then generated. The action potential (neuronal spike) then travels down the axon, away from the cell body. Figure 2 Neuron Spiking The connections between one neuron and another are called synapses. Information always leaves a neuron via its axon (see Figure 1 above), and is then transmitted across a synapse to the receiving neuron. Neurons only fire when input is bigger than some threshold. It should, however, be noted that firing doesn't get bigger as the stimulus increases; it's an all or nothing arrangement. Figure 3 Neuron Firing Spikes (signals) are important, since other neurons receive them. Neurons communicate with spikes. The information sent is coded by spikes. The input to a Neuron Synapses can be excitatory or inhibitory. Spikes (signals) arriving at an excitatory synapse tend to cause the receiving neuron to fire. Spikes (signals) arriving at an inhibitory synapse tend to inhibit the receiving neuron from firing. The cell body and synapses essentially compute (by a complicated chemical/electrical process) the difference between the incoming excitatory and inhibitory inputs (spatial and temporal summation). When this difference is large enough (compared to the neuron's threshold) then the neuron will fire. Roughly speaking, the faster excitatory spikes arrive at its synapses the faster it will fire (similarly for inhibitory spikes). So how about artificial neural networks? Suppose that we have a firing rate at each neuron. Also suppose that a neuron connects with m other neurons and so receives m-many inputs "x1 ... xm"; we could imagine this configuration looking something like: Figure 4 Artificial Neuron configuration This configuration is actually called a Perceptron. The perceptron (an invention of Rosenblatt) was one of the earliest neural network models.
A perceptron models a neuron by taking a weighted sum of inputs and sending the output 1 if the sum is greater than some adjustable threshold value (otherwise it sends 0 - this is the all or nothing spiking described in the biology, see the neuron firing section above), also called an activation function. The inputs (x1, x2, x3, ..., xm) and connection weights (w1, w2, w3, ..., wm) in Figure 4 are typically real values, both positive (+) and negative (-). If the feature of some xi tends to cause the perceptron to fire, the weight wi will be positive; if the feature xi inhibits the perceptron, the weight wi will be negative. The perceptron itself consists of weights, the summation processor, an activation function, and an adjustable threshold processor (called bias hereafter). For convenience, the normal practice is to treat the bias as just another input. The following diagram illustrates the revised configuration. Figure 5 Artificial Neuron configuration, with bias as additional input. The bias can be thought of as the propensity (a tendency towards a particular way of behaving) of the perceptron to fire irrespective of its inputs. The perceptron configuration network shown in Figure 5 fires if the weighted sum > 0, or, if you're into math-type explanations, if w1*x1 + w2*x2 + ... + wm*xm + b > 0. The activation usually uses one of the following functions. Sigmoid function: the stronger the input, the faster the neuron fires (the higher the firing rate). The sigmoid is also very useful in multi-layer networks, as the sigmoid curve allows for differentiation (which is required in back propagation training of multi-layer networks); in math terms, f(x) = 1 / (1 + e^(-x)). Step function: a basic on/off type function, if x < 0 then 0, else if x >= 0 then 1. A foreword on learning. Before we carry on to talk about perceptron learning, let's consider a real world example: How do you teach a child to recognize a chair? You show him examples, telling him, "This is a chair. That is not a chair," until the child learns the concept of what a chair is. In this stage, the child can look at the examples we have shown him and answer correctly when asked, "Is this object a chair?" Furthermore, if we show the child new objects that he hasn't seen before, we could expect him to recognize correctly whether the new object is a chair or not, providing that we've given him enough positive and negative examples. This is exactly the idea behind the perceptron. Learning in perceptrons is the process of modifying the weights and the bias. A perceptron computes a binary function of its input. Whatever a perceptron can compute it can learn to compute. "The perceptron is a program that learns concepts, i.e. it can learn to respond with True (1) or False (0) for inputs we present to it, by repeatedly "studying" examples presented to it. The Perceptron is a single layer neural network whose weights and biases could be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is called the perceptron learning rule. The perceptron generated great interest due to its ability to generalize from its training vectors and work with randomly distributed connections. Perceptrons are especially suited for simple problems in pattern classification." Professor Jianfeng Feng, Centre for Scientific Computing, Warwick University, England. The Learning Rule. The perceptron is trained to respond to each input vector with a corresponding target output of either 0 or 1.
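The weighted-sum-plus-bias rule and the step activation just described can be written out in a few lines of code. This is only a minimal sketch of my own to make the arithmetic concrete (it is not the article's accompanying source); the names stepActivation and perceptronOutput, and the sample weights, are made up for the illustration:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Step activation: 1 if the summed input is greater than 0, otherwise 0.
    int stepActivation(double sum)
    {
        return sum > 0.0 ? 1 : 0;
    }

    // Weighted sum of the inputs plus the bias, passed through the activation.
    int perceptronOutput(const std::vector<double>& weights,
                         const std::vector<double>& inputs,
                         double bias)
    {
        double sum = bias;
        for (std::size_t i = 0; i < weights.size(); ++i)
            sum += weights[i] * inputs[i];
        return stepActivation(sum);
    }

    int main()
    {
        // Hypothetical weights, inputs and bias, just to exercise the function.
        std::vector<double> w = {0.5, -0.3};
        std::vector<double> x = {1.0, 1.0};
        std::cout << perceptronOutput(w, x, -0.1) << std::endl;   // prints 1, since 0.5 - 0.3 - 0.1 > 0
        return 0;
    }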
The learning rule has been proven to converge on a solution in finite time if a solution exists. The learning rule can be summarized in the following two equations: b = b + [T - A] and, for all inputs i: W(i) = W(i) + [T - A] * P(i), where W is the vector of weights, P is the input vector presented to the network, T is the correct result that the neuron should have shown, A is the actual output of the neuron, and b is the bias. Vectors from a training set are presented to the network one after another. If the network's output is correct, no change is made. Otherwise, the weights and biases are updated using the perceptron learning rule (as shown above). When each epoch (an entire pass through all of the input training vectors is called an epoch) of the training set has occurred without error, training is complete. At this time any input training vector may be presented to the network and it will respond with the correct output vector. If a vector, P, not in the training set is presented to the network, the network will tend to exhibit generalization by responding with an output similar to target vectors for input vectors close to the previously unseen input vector P. So what can we do with neural networks? Well, if we are going to stick to using a single layer neural network, the tasks that can be achieved are different from those that can be achieved by multi-layer neural networks. As this article is mainly geared towards dealing with single layer networks, let's discuss those further. Single layer neural networks: single-layer neural networks (perceptron networks) are networks in which the output unit is independent of the others - each weight affects only one output. Using perceptron networks it is possible to achieve linear separability functions like the diagrams shown below (assuming we have a network with 2 inputs and 1 output). It can be seen that this is equivalent to the AND / OR logic gates, shown below. Figure 6 Classification tasks. So that's a simple example of what we could do with one perceptron (single neuron essentially), but what if we were to chain several perceptrons together? We could build some quite complex functionality. Basically we would be constructing the equivalent of an electronic circuit. Perceptron networks do, however, have limitations. If the vectors are not linearly separable, learning will never reach a point where all vectors are classified properly. The most famous example of the perceptron's inability to solve problems with linearly nonseparable vectors is the boolean XOR problem. Multi-layer neural networks: with multi-layer neural networks we can solve non-linearly separable problems such as the XOR problem mentioned above, which is not achievable using single layer (perceptron) networks. The next part of this article series will show how to do this using multi-layer neural networks, using the back propagation training method. Well, that's about it for this article. I hope it's a nice introduction to neural networks. I will try and publish the other two articles when I have some spare time (in between my MSc dissertation and other assignments). I want them to be pretty graphical so it may take me a while, but I'll get there soon, I promise. What Do You Think? That's it, I would just like to ask, if you liked the article please vote for it. Points of Interest: I think AI is fairly interesting, that's why I am taking the time to publish these articles. So I hope someone else finds it interesting, and that it might help further someone's knowledge, as it has my own.
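For readers who want to see the learning rule described above in executable form, here is a minimal training-loop sketch of my own (it is not the article's accompanying source code). It applies b = b + [T - A] and W(i) = W(i) + [T - A] * P(i) to the AND-gate training set until an epoch passes without error:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int stepActivation(double sum) { return sum > 0.0 ? 1 : 0; }

    int main()
    {
        // AND-gate training set: two inputs per vector and the target output.
        std::vector<std::vector<double>> P = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        std::vector<int> T = {0, 0, 0, 1};

        std::vector<double> W = {0.0, 0.0};   // weights
        double b = 0.0;                       // bias

        bool errorFree = false;
        while (!errorFree)                    // one pass through the set = one epoch
        {
            errorFree = true;
            for (std::size_t v = 0; v < P.size(); ++v)
            {
                double sum = b;
                for (std::size_t i = 0; i < W.size(); ++i)
                    sum += W[i] * P[v][i];
                int A = stepActivation(sum);          // actual output
                int error = T[v] - A;                 // [T - A]
                if (error != 0)
                {
                    errorFree = false;
                    b += error;                       // b = b + [T - A]
                    for (std::size_t i = 0; i < W.size(); ++i)
                        W[i] += error * P[v][i];      // W(i) = W(i) + [T - A] * P(i)
                }
            }
        }
        std::cout << "Trained weights: " << W[0] << ", " << W[1]
                  << "  bias: " << b << std::endl;
        return 0;
    }

Because the AND function is linearly separable, the perceptron convergence guarantee mentioned above applies and the loop terminates after a handful of epochs.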
Artificial Intelligence 2nd edition, Elaine Rich / Kevin Knight. McGraw Hill Inc. Artificial Intelligence, A Modern Approach, Stuart Russell / Peter Norvig. Prentice Hall.
http://www.codeproject.com/Articles/16419/AI-Neural-Network-for-beginners-Part-1-of-3?fid=359696&df=90&mpp=10&sort=Position&spc=None&noise=1&prof=True&view=Expanded
13
33
This exercise will help you learn how to use Kaleidagraph or Origin to plot and analyze data. In 1798 Henry Cavendish conducted an experiment to "weigh the earth." He did this by measuring the shift in the equilibrium position of a small dumbbell suspended from a torsion fiber caused by two cannonballs. The balls were placed near the dumbbell so as to make it twist. From the spring constant of the fiber, the geometry and masses of the dumbbell and cannonballs, and the magnitude of the angular shift the cannonballs cause, it is possible to measure the value of G, the universal constant of gravitation. Combined with knowledge of the radius of the earth and the local value of (little) g, the local acceleration due to gravity, one can deduce the mass of the earth. Data have been collected for the angular position of the dumbbell as a function of time, and are available by clicking here. You may retype them or select them in your browser for pasting into the data analysis program. The motion is described by a damped sinusoid. Its functional form is θ(t) = a exp(-t/τ) sin(2πt/T + φ) + θo, where the quantities in the equation are defined by: a, the (angular) amplitude of oscillation; T, the period of oscillation; φ, the initial phase of the motion; τ, the exponential decay time of the motion; θo, the shift in the equilibrium position of the dumbbell. There are six steps to this exercise. Each task has a help link for Kaleidagraph and Origin, which you may obtain by clicking the link. Do the data look like they could represent the motion of a twisting pendulum slowed down by air resistance? Save the plot. Fit your data. Be very careful to choose good guesses for your parameters. If you don't, you will very likely get an error like Singular Matrix Error or some such "useful" diagnostic information. In Origin, you can press the Update Plot button to see how you are doing. In Kaleidagraph, there is no such facility. If you're curious about more details of the fitting operation, click here. To make sure you've done the fit properly, fill out and submit the form below. The function χ2 is defined by χ2 = Σi [ (yi - f(xi)) / σi ]^2, where the sum runs over the i = 1 ... N data points; it looks more frightening than it is. This function adds up the differences between each data point and the fitted curve, symbolized here by f. In this equation, the xi are the values of the independent variable. In this example, the independent variable is time. The yi are the values of the dependent variable, which is θ in this problem. The sum is over the N data points. There are two subtleties. First, the differences are squared, to make them all add up positively. If they weren't, the average would be zero! Second, each difference is divided by the uncertainty of the data point. This means that you get penalized a lot when the curve is far from a "good" data point (that has small uncertainty), and not much at all when the curve is far from a "lousy" data point (that has a large uncertainty). Since χ2 is the sum of nonnegative quantities, the smallest value it can have is zero. This happens when the curve goes through each and every data point. You might think that this is the best you can do, but you'd be wrong. It means either that you cheat or that you're really sloppy! (Newton cheated with his data and got away with it, because nobody was very sophisticated about this sort of thing when Newton was working out the foundations of mechanics.) Real data have random noise at some level, and this means that they will virtually never lie right on the curve.
The uncertainty of a data point is an estimate of the likely discrepancy between the point and its true value. Roughly speaking, a typical data point should be within about one uncertainty ("one error bar") of its true value. This means that each data point should contribute a value of about one to the sum. So the sum should be about N. It is often more convenient to compute the "per point" value of χ2. This is called the reduced χ2 or χ2 per degree of freedom. It turns out that the best procedure is to divide χ2 not by the number of data points, N, but by N - m, where m is the number of fitting parameters. The number you get should be in the ballpark of unity for a good fit. If it's much smaller than one, you probably overestimated your errors. If it's much bigger, you may have underestimated your errors or your model may not describe the data very well. Origin reports a value of chisq. Don't be fooled. It is the reduced χ2. Kaleidagraph reports a value of chi^2. It is χ2. To get the reduced χ2, divide by the number of degrees of freedom (N - m).
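As a quick numerical illustration of the χ2 and reduced-χ2 definitions above (a minimal sketch of my own, not part of the original exercise; the data and the fitted parameter values are made up):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Damped sinusoid model: theta(t) = a*exp(-t/tau)*sin(2*pi*t/T + phi) + theta0
    double model(double t, double a, double T, double phi, double tau, double theta0)
    {
        const double pi = 3.14159265358979323846;
        return a * std::exp(-t / tau) * std::sin(2.0 * pi * t / T + phi) + theta0;
    }

    int main()
    {
        // Made-up "fitted" parameter values (m = 5 fitting parameters).
        double a = 1.0, T = 4.0, phi = 1.57, tau = 10.0, theta0 = 0.0;
        int m = 5;

        // Hypothetical data: 10 points scattered slightly about the model curve.
        std::vector<double> t, y, sigma;
        for (int i = 0; i < 10; ++i)
        {
            double ti = 0.5 * i;
            t.push_back(ti);
            y.push_back(model(ti, a, T, phi, tau, theta0) + 0.03 * ((i % 2) ? 1.0 : -1.0));
            sigma.push_back(0.05);   // uncertainty ("error bar") of each point
        }

        double chi2 = 0.0;
        for (std::size_t i = 0; i < t.size(); ++i)
        {
            double r = (y[i] - model(t[i], a, T, phi, tau, theta0)) / sigma[i];
            chi2 += r * r;   // each squared, uncertainty-weighted difference adds to the sum
        }
        int N = static_cast<int>(t.size());
        std::printf("chi2 = %.3f, reduced chi2 = %.3f\n", chi2, chi2 / (N - m));
        return 0;
    }

With scatter comparable to the error bars, the reduced χ2 printed here comes out near one, which is exactly the "ballpark of unity" rule of thumb described above.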
http://www.physics.hmc.edu/howto/FittingExercise.html
13
10
Landslides occur ultimately due to the effect of gravity, although other factors such as geology, topography, weathering, drainage and man-made construction can all contribute to the overall stability of a slope. Landslides are commonly divided into four categories: falls, topples, slides or flows. Landslides rarely comprise a single type of movement but are often the result of a combination of several types. Whilst the BGS currently has over 14 000 landslides in its National Landslide Database, many of these are ancient and occurred under different climatic conditions to those of the present day (e.g. Pleistocene). If left undisturbed, these ancient mass movement deposits may remain stable for many years; however, poorly planned development can sometimes reactivate these ancient slides. Downslope movement of materials through landsliding may damage buildings or infrastructure through loss of support or due to direct impact. Common causes of damage due to landslides relate to: The potential for landsliding (slope instability) to be a hazard has been assessed using 1:50 000 scale digital maps of superficial and bedrock deposits. These have been combined with information from the BGS National Landslide Database and scientific and engineering reports. The detailed digital data illustrated in the map are available as attributed vector polygons, as raster grids and in spreadsheet format. The BGS currently has three datasets providing information on landslides in Great Britain. These products differ in both their collection method and also their intended use. The BGS National Landslide Database currently documents over 14 000 landslides across Great Britain. The database was an inherited dataset, based on a search of secondary sources conducted for the Department of the Environment (DoE) in the 1980s and 1990s. Since the foundation of the DoE data set, BGS has continued to populate this database using the National Digital Geological Map (DiGMap). Other data has also been collected through media reports, site investigation reports, journal articles and by new mapping. Data is stored within a fully relational Oracle database, which can be accessed through Microsoft Access™ or an ESRI® ArcGIS interface. The purpose of this database is to provide a detailed record of landslide events across Great Britain. The database contains over 35 fields which can be attributed with information on the type of landslide, age and causes. The DiGMap mass movement layer displays mapped landslides that have been recorded by field geologists. The mass movement layer does not distinguish between different types of failures; it is purely an outline of the landslide deposit. GeoSure is a national dataset that assesses the potential for ground movement and subsidence across Great Britain. It contains six layers, one of which is for slope instability (landslides). The slope instability layer defines the susceptibility of an area to undergo landsliding. Unlike DiGMap it does not show where a landslide has already occurred, but where it may occur in the future, due to favourable conditions. GeoSure takes into account the local geology and slope of an area along with the geotechnical and structural characteristics of a geological formation. These conditioning factors are combined within a GIS and the result is a susceptibility map of the UK. The GeoSure assessment does not show risk or seek to determine the temporal distribution of landsliding.
http://www.bgs.ac.uk/products/geosure/landslides.html
13
21
Algebra Word Expressions Study Guide Introduction to Algebra Word Expressions We are mere operatives, empirics, and egotists until we learn to think in letters instead of figures. —Oliver Wendell Holmes (1841–1935), American Physician and Writer In this lesson, you'll learn how to translate word expressions into numerical and algebraic expressions. You might not believe it, but real-life situations can be turned into algebra. In fact, many real-life situations that involve numbers are really algebra problems. Let's say it costs $5 to enter a carnival, and raffle tickets cost $2 each. The number of tickets bought by each person varies, so we can use a variable, x, to represent that number. The total spent by a person is equal to 2x + 5. If we know how many raffle tickets a person bought, we can evaluate the expression to find how much money that person spent in total. How do we turn words into algebraic expressions? First, let's look at numerical expressions. Think about how you would describe "2 + 3" in words. You might say, two plus three, two added to three, two increased by three, the sum of two and three, or the total of two and three. You might even think of other ways to describe 2 + 3. Each of these phrases contains a keyword that signals addition. In the first phrase, the word plus tells us that the numbers should be added. In the last phrase, the word total tells us that we will be adding. For each of the four basic operations, there are keywords that can signal which operation will be used. The following chart lists some of those keywords and phrases. Some words and phrases can signal more than one operation. For example, the word each might mean multiplication, as it did in the raffle ticket example at the beginning of the lesson. However, if we were told that a class reads 250 books and we are looking for how many books each student read, each would signal division. Let's start with some basic phrases. The total of three and four would be an addition expression: 3 + 4. The keyword total tells us to add three and four. The phrase the difference between ten and six is a subtraction phrase, because of the keyword difference. The difference between ten and six is 10 – 6. Sometimes, the numbers given in a phrase appear in the opposite order from the one in which we will use them when we form our number sentence. This can happen with subtraction and division phrases, where the order of the numbers is important. The order of the addends in an addition sentence, or the order of the factors in a multiplication sentence, does not matter. The phrase one fewer than five is a subtraction phrase, but be careful. One fewer than five is 5 – 1, not 1 – 5. Some phrases combine more than one operation: ten more than five minus three can be written as 10 + 5 – 3 (or 5 – 3 + 10). As we learned in Lesson 4, the order of operations is important, and it is just as important when forming number sentences from phrases. Seven less than eight times negative two is (8)(–2) – 7. We must show that 8 and –2 are multiplied, and that 7 is subtracted from that product. Although multiplication comes before subtraction in the order of operations, a phrase might be written in such a way that subtraction must be performed first. The phrase five times the difference between eleven and four means that 4 must be subtracted from 11 before multiplication occurs. We must use parentheses to show that subtraction should be performed before multiplication: 5(11 – 4).
If you are writing a phrase as a number sentence, and you know that one operation must be performed before another, place that operation in parentheses. Even if the parentheses are unnecessary, it is never wrong to place parentheses around the operation that is to be performed first. Writing Phrases as Algebraic Expressions Writing algebraic phrases as algebraic expressions is very similar to writing numerical phrases as numerical expressions. What is the difference between numerical phrases and algebraic phrases? Numerical phrases, like the ones we have seen so far in this lesson, contain only numbers whose values are given (such as five or ten). Algebraic phrases contain at least one unknown quantity. That unknown is usually referred to as a number, as in "five times a number." A second unknown is usually referred to as another number or a second number. The variable x is usually used to represent "a number" in these expressions, although we could use any letter. Just as the phrase ten more than five is written as 10 + 5, the phrase ten more than a number is written as 10 + x. When a phrase refers to two unknown values, we usually use x to represent the first number and y to represent the second number. Twice a number plus four times another number would be written as 2x + 4y. Practice problems and solutions for these concepts at Algebra Word Expressions Practice Questions.
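To connect this with code (a minimal sketch of my own, not part of the lesson): once a phrase such as "twice a number plus four times another number" has been translated into 2x + 4y, evaluating it for particular values of the unknowns is a one-line computation. The values of x and y below are just hypothetical examples:

    #include <iostream>

    int main()
    {
        // "Twice a number plus four times another number" -> 2x + 4y
        int x = 3;                  // the first number (hypothetical value)
        int y = 5;                  // the second number (hypothetical value)
        int result = 2 * x + 4 * y;
        std::cout << "2x + 4y = " << result << std::endl;   // prints 26
        return 0;
    }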
http://www.education.com/study-help/article/algebra-word-expressions/
13
14
Comets are very small in size relative to planets. Their average diameters usually range from 750 meters (2,460 feet) or less to about 20 kilometers (12 miles). Recently, evidence has been found for much larger distant comets, perhaps having diameters of 300 kilometers (186 miles) or more, but these sizes are still small compared to planets. Planets are usually more or less spherical in shape, usually bulging slightly at the equator. Comets are irregular in shape, with their longest dimension often twice the shortest. (See Appendix A, Table 3.) The best evidence suggests that comets are very fragile. Their tensile strength (the stress they can take without being pulled apart) appears to be only about 1,000 dynes/cm^2 (about 2 lb./ft.^2). You could take a big piece of cometary material and simply pull it in two with your bare hands, something like a poorly compacted snowball. Comets, of course, must obey the same universal laws of motion as do all other bodies. Where the orbits of planets around the Sun are nearly circular, however, the orbits of comets are quite elongated. Nearly 100 known comets have periods (the time it takes them to make one complete trip around the Sun) five to seven Earth years in length. Their farthest point from the Sun (their aphelion) is near Jupiter's orbit, with the closest point (perihelion) being much nearer to Earth. A few comets like Halley have their aphelions beyond Neptune (which is six times as far from the Sun as Jupiter). Other comets come from much farther out yet, and it may take them thousands or even hundreds of thousands of years to make one complete orbit around the Sun. In all cases, if a comet approaches near to Jupiter, it is strongly attracted by the gravitational pull of that giant among planets, and its orbit is perturbed (changed), sometimes radically. This is part of what happened to Shoemaker-Levy 9. (See Sections 2 and 4 for more details.) The nucleus of a comet, which is its solid, persisting part, has been called an icy conglomerate, a dirty snowball, and other colorful but even less accurate descriptions. Certainly a comet nucleus contains silicates akin to some ordinary Earth rocks in composition, probably mostly in very small grains and pieces. Perhaps the grains are glued together into larger pieces by the frozen gases. A nucleus appears to include complex carbon compounds and perhaps some free carbon, which make it very black in color. Most notably, at least when young, it contains many frozen gases, the most common being ordinary water. In the low pressure conditions of space, water sublimes, that is, it goes directly from solid to gas -- just like dry ice does on Earth. Water probably makes up 75-80% of the volatile material in most comets. Other common ices are carbon monoxide (CO), carbon dioxide (CO2), methane (CH4), ammonia (NH3), and formaldehyde (H2CO). Volatiles and solids appear to be fairly well mixed throughout the nucleus of a new comet approaching the Sun for the first time. As a comet ages from many trips close to the Sun, there is evidence that it loses most of its ices, or at least those ices anywhere near the nucleus surface, and becomes just a very fragile old rock in appearance, indistinguishable at a distance from an asteroid. A comet nucleus is small, so its gravitational pull is very weak. You could run and jump completely off of it (if you could get traction). The escape velocity is only about 1 meter (3 feet) per second (compared to 11 km/s--7 miles/second--on Earth). 
As a result, the escaping gases and the small solid particles (dust) that they drag with them never fall back to the nucleus surface. Radiation pressure, the pressure of sunlight, forces the dust particles back into a dust tail in the direction opposite to the Sun. A comet's tail can be tens of millions of kilometers in length when seen in the reflected sunlight. The gas molecules are torn apart by solar ultraviolet light, often losing electrons and becoming electrically charged fragments or ions. The ions interact with the wind of charged particles flowing out from the Sun and are forced back into an ion tail, which again can extend for millions of kilometers in the direction opposite to the Sun. These ions can be seen as they fluoresce in sunlight. Every comet then really has two tails, a dust tail and an ion tail. If the comet is faint, only one or neither tail may be detectable, and the comet may appear just as a fuzzy blob of light, even in a big telescope. The density of material in the coma and tails is very low, lower than the best vacuum that can be produced in most laboratories. In 1986, the Giotto spacecraft flew right through Comet Halley only a few hundred kilometers from the nucleus. Though the coma and tails of a comet may extend for tens of millions of kilometers and become easily visible to the naked eye in Earth's night sky, as Comet West's were in 1976, the entire phenomenon is the product of a tiny nucleus only a few kilometers across. Because comet nuclei are so small, they are quite difficult to study from Earth. They always appear at most as a point of light in even the largest telescope, if not lost completely in the glare of the coma. A great deal was learned when the European Space Agency, the Soviet Union, and the Japanese sent spacecraft to fly by Comet Halley in 1986. For the first time, actual images of an active nucleus were obtained and the composition of the dust and gases flowing from it was directly measured. Early in the next century, the Europeans plan to send a spacecraft called Rosetta to rendezvous with a comet and watch it closely for a long period of time. Even this sophisticated mission is not likely to tell scientists a great deal about the interior structure of comets, however. Therefore, the opportunity to reconstruct the events that occurred when Shoemaker-Levy 9 split and to study what occurred when fragments were destroyed in Jupiter's atmosphere is uniquely important (see Section 4). Table of Contents Section 2
http://www.solarviews.com/eng/comet/whatis.htm
13
27
Graphing is a method of showing the relationship between two or more sets of data by means of a chart or sketch. Trends in data are easier to identify with a graph than a data table. A graph can be created using graphing paper (you purchase gridded paper or draw your own), a computer application such as Excel, or graphing applications for a personal digital assistant (PDA) or phone. A graph shows a set of data points plotted in relation to the horizontal axis and vertical axis. Example 1 - Draw a graph for pump performance showing the relationship between pressure (psi) and flow (gpm). Use the following table of pump performance data values. Step 1. Pump performance charts are typically drawn with the flow on the horizontal axis and pressure on the vertical axis. Label the horizontal axis as flow in gallons per minute. Label the vertical axis as pressure in pounds per square inch. Step 2. Mark the horizontal axis from 0 to 90 in even increments. Mark the vertical axis from 0 to 300 in even increments of 25 pounds per square inch, as the data points were collected in increments of 25 pounds per square inch. Step 3. Plot each data set by finding the pressure value on the vertical axis and then the flow value on the horizontal axis. Mark (plot) a point where the two values meet. Continue plotting points for all data sets. Step 4. Run a curved line through the points. Not all the points will be on the curve; some of the points will lie above the line and some below. Special statistical calculations are used to determine how far off the curve a data point can be and still be meaningful. Typically, if the point is off the curve enough to affect the shape of the curve, the data set should be rerun. Using a Graph to Find Approximate Values A curve can be used to find approximate values for data in between the data sets collected. The curve can also be used to show performance trends. For example, this curve shows that as pressure is decreasing, the flow rate increases proportionally throughout the range of performance. Example 2 - Using the graph above, find the indicated flow rate at a pressure of 263 pounds per square inch (psi). Step 1. Approximate the location of 263 pounds per square inch on the pressure axis. This location is approximately halfway between 275 and 250 pounds per square inch. Step 2. Move horizontally until the curved line is met. Step 3. Move vertically from the curved line to the flow rate axis. Read or approximate the flow rate. The flow rate at 263 psi is 35 gallons per minute. Determining the Slope of a Curve The slope of a line can be determined from a plot using the slope formula. slope = rise/run Example 3 - Find the slope of the line drawn on the plot above for the interval between 50 and 150 pounds per square inch. It's important to be aware of what interval is being used, because the line drawn is a curve and the slope will change with each section of line. Note from the curve that as the pressure varies from 50 to 150 pounds per square inch, the flow rate varies from about 79 to 63 gallons per minute. Pressure, pounds per square inch, is on the vertical axis, so it is the rise. Flow rate, gallons per minute, is on the horizontal axis, so it is the run. Slope = rise / run = (150 - 50) psi / (63 - 79) gpm = 100 psi / (-16 gpm) = -6.25 psi/gpm, or about -6 psi/gpm. The slope is about -6 psi/gpm. The negative slope indicates that as the horizontal value (flow rate) increases, the vertical value (pressure) decreases.
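Here is the same rise-over-run computation written as a small program (a sketch of my own, using the two points read off the pump curve above):

    #include <cstdio>

    int main()
    {
        // Two points read from the pump curve: (flow in gpm, pressure in psi).
        double flow1 = 79.0, pressure1 = 50.0;
        double flow2 = 63.0, pressure2 = 150.0;

        double rise = pressure2 - pressure1;   // change in the vertical value (psi)
        double run  = flow2 - flow1;           // change in the horizontal value (gpm)
        double slope = rise / run;

        std::printf("slope = %.2f psi/gpm\n", slope);   // about -6.25 psi/gpm
        return 0;
    }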
Reading Distance from a Map Chart Maps are generally broken into grids and labeled on the vertical and horizontal axes for ease of locating places or numbers. If the vertical and horizontal values are known, the area on the map can be obtained by finding where the two lines intersect or cross. Charts accompanying the map provide information about the distance between different map locations, which is found by reading the appropriate values from the horizontal and vertical axes. Example 4 - Use the mileage chart below to find the distance between Tampa, FL, and Albuquerque, NM. Step 1. Locate Tampa on the horizontal axis. Draw a vertical line through these grids. Step 2. Locate Albuquerque on the vertical axis. Draw a horizontal line across these grids. Step 3. Read the mileage amount where the two lines cross. The distance between Tampa and Albuquerque is 1,760 miles.
http://www.firefightermath.org/index.php?option=com_content&view=article&id=42&Itemid=56
13
12
AMES, Iowa – Two planets with very different densities and compositions are locked in surprisingly close orbits around their host star, according to astronomers working with NASA’s Kepler Mission. One planet is a rocky super-Earth about 1.5 times the size of our planet and 4.5 times the mass. The other is a Neptune-like gaseous planet 3.7 times the size of Earth and eight times the mass. The planets approach each other 30 times closer than any pair of planets in our solar system. The discovery of the Kepler-36 planetary system about 1,200 light years from Earth is an example of planets breaking with the planetary pattern of our solar system: rocky planets orbiting close to the sun and gas giants orbiting farther away. The discovery is reported June 21 in the online Science Express. Lead authors of the study are Joshua Carter, a Hubble Fellow at the Harvard-Smithsonian Center for Astrophysics, and Eric Agol, an associate professor of astronomy at the University of Washington. “The planetary system reported in this paper is another example of an ‘extreme’ planetary system that will serve as a stimulus to theories of planet migration and orbital rearrangement,” researchers wrote in the paper. Steve Kawaler, an Iowa State University professor of physics and astronomy, was part of the research team that provided information about the properties of the planets’ host star. He and other researchers measured changes in the star’s brightness to precisely identify the size, mass and age of the host star. Kawaler explained the importance of the discovery: “Small, rocky planets should form in the hot part of the solar system, close to their host star – like Mercury, Venus and Earth in our Solar System. Bigger, less dense planets – Jupiter, Uranus – can only form farther away from their host, where it is cool enough for volatile material like water ice, and methane ice to collect. In some cases, these large planets can migrate close in after they form, during the last stages of planet formation, but in so doing they should eject or destroy the low-mass inner planets. “Here, we have a pair of planets in nearby orbits but with very different densities. How they both got there and survived is a mystery.” The discovery was made possible by NASA’s Kepler Mission, a spacecraft launched in 2009 that’s carrying a photometer to measure changes in star brightness. Its primary job is to use tiny variations in the brightness of the stars within its view to find earth-like planets that might be able to support life. The Kepler Asteroseismic Investigation is also using data from that photometer to study star oscillations, or changes in brightness, that offer clues to a star’s interior structure. The investigation is led by a four-member steering committee: Kawaler, Chair Ron Gilliland of the Space Telescope Science Institute based in Baltimore, Jorgen Christensen-Dalsgaard and Hans Kjeldsen, both of Aarhus University in Aarhus, Denmark. Kawaler said the Kepler spacecraft was essential to discovering what the researchers called in their paper “this puzzling pair of planets.” “The seismic signal is very small, and only Kepler has the sensitivity and persistence to reveal it,” Kawaler said. “Also, the transit signal from the planets crossing in front of the star is very small, and only visible with Kepler’s level of sensitivity.”
http://www.news.iastate.edu/news/2012/06/21/kepler-36
13
26
SSL (Secure Sockets Layer) An Overview on Secure Sockets Layer (SSL) The Secure Sockets Layer (SSL) protocol was developed by Netscape Communications, and enables secure communication over the Internet. SSL works at the transport layer of Transmission Control Protocol/Internet Protocol (TCP/IP), which makes the protocol independent of the application layer protocol functioning on top of it. SSL is an open standard protocol and is supported by a range of both servers and clients. SSL can be utilized for the following: - Encrypt Web traffic using Hypertext Transfer Protocol (HTTP). When HTTP is utilized together with SSL, it is known as HTTPS. - SSL is generally utilized to authenticate Web servers, and to encrypt communications between Web browsers and Web servers. - Encrypt mail and newsgroup traffic. SSL provides the following features for securing confidential data as it transverses over the Internet: - Data integrity - Data confidentiality through encryption SSL works by combining public key cryptography and secret key encryption to ensure data confidentiality. The Rivest-Shamir-Adleman (RSA) public key algorithm is used to generate the certificates, and the public and private key pairs utilized in SSL. When a client Web browser connects to a Web server that is configured for SSL, a SSL handshake process is initiated with the Web server. The Web server at this stage has already obtained a server certificate from a certificate authority (CA). A server certificate is a digital certificate which the server utilizes to verify its identity to other parties. Digital certificates form the basis of a Public Key Infrastructure (PKI) because these certificates use cryptographic algorithms and key lengths to protect data as it is transmitted over the network. The X.509 standard, derived from the X.500 directory standard, defines digital certificates. It describes a certificate as the means by which the distinguished name of the user can be associated with the public key of the user. The distinguished name of the user is defined by a naming authority. The distinguished name is used by the issuing Certificate Authority (CA) as the unique name of the user. A digital certificate contains information such as the certificate version, serial number, signature, issuer, and validity period, among other information. A Certificate Authority (CA) can be defined as an entity that generates and validates digital certificates. The CA adds its own signature to the public key of the client. This essentially indicates that the public key can be considered valid, by those parties that trust the CA. Examples of third party entities that provide and issue digital certificates are VeriSign, Entrust and GlobalSign. Because these entities issue digital certificates for a fee, it can become a costly expense in a large organization. By using the tools provided by Microsoft, you can create an internal CA structure within your organization. You can use Windows Server 2003 Certificate Services to create certificates for users and computers in an Active Directory domain. The SSL handshake process occurs between a client and Web server to negotiate the secret key encryption algorithm which the client and Web server will utilize to encrypt the data which is transmitted in the SSL session. The client Web browser initiates the handshake process by using a URL starting with the following: https://. 
The SSL handshake process is described below: - The client initiates the SSL handshake process by sending a URL starting with the following: https:// to the server. - The client initially sends the Web server a list of each encryption algorithm which it supports. Algorithms supported by SSL include RC4 and Data Encryption Standard (DES). The client also sends the server its random challenge string, which will be utilized later in the process. - The Web server next performs the following tasks: - Selects an encryption algorithm from the list of encryption algorithms supported by, and received from, the client. - Sends the client a copy of its server certificate. - Sends the client its random challenge string. - The client utilizes the copy of the server certificate received from the server to authenticate the identity of the server. - The client obtains the public key of the server from the server certificate. - The client next generates a premaster secret. This is a different random string which will in turn be utilized to generate the session key for the SSL session. The client then encrypts the premaster secret using the public key of the server, and returns this encrypted value to the server. This is accompanied by a keyed hash of the handshake messages, and a master key. The hash is used to protect the messages exchanged in the handshake process. The hash is generated from the former two random strings transmitted between the server and the client. - The server sends the client a keyed hash of all the handshake messages exchanged between the two parties so far. - The server and the client then generate the session key from the different random values and keys, and by applying a mathematical calculation. - The session key is used as a shared secret key to encrypt and decrypt data exchanged between the server and the client. - The session key is discarded when the SSL session either times out or is terminated. What is Transport Layer Security (TLS)? TLS (Transport Layer Security), defined in RFC 2246, is a protocol for establishing a secure connection between a client and a server. TLS (Transport Layer Security) is capable of authenticating both the client and the server and creating an encrypted connection between the two. The TLS (Transport Layer Security) protocol is extensible, meaning that new algorithms can be added for any of these purposes, as long as both the server and the client are aware of the new algorithms. TLS is an Internet-standard version of Secure Sockets Layer (SSL), and is very similar to Secure Sockets Layer version 3 (SSLv3). The key differences between SSLv3 and TLS are: - You can extend TLS by adding new authentication methods. - TLS utilizes session caching, thereby improving on SSL performance. - TLS also distinctly separates the handshake process from the record layer. The record layer holds the data. SSLv3 uses the Message Authentication Code (MAC) algorithm, while TLS utilizes a hashed Message Authentication Code, also known as HMAC. Because the differences between SSL and TLS are so few, the protocols are typically called SSL/TLS. While being quite similar, SSL and TLS do not interoperate. For a secure session, both parties must utilize either SSL or TLS. SSL/TLS has the following layers. - Handshake layer: This layer deals with establishing the secure SSL session by negotiating key exchange using an asymmetric algorithm such as RSA or Diffie-Hellman.
The handshake layer is responsible for these key elements: - Authentication: Digital certificates are used in the authentication process managed by the handshake process. - Message encryption: For encryption, symmetric keys (shared secret keys) and asymmetric keys are utilized. With symmetric keys, the identical key is utilized to both encrypt and decrypt data. With asymmetric keys, a public key and a private key are utilized to encrypt and decrypt data. This essentially means that two separate keys are used for message encryption and decryption. - Hash algorithms: The hash algorithms supported are the Secure Hash Algorithm 1 (SHA1) and Message Digest 5 (MD5). SHA1 produces a 160-bit hash value, while MD5 produces a 128-bit hash value. - Record layer: This layer contains the data, and is also responsible for ensuring that the communications are not altered in transit. Hashing algorithms such as MD5 and SHA are used for this purpose. The benefits associated with utilizing SSL/TLS are: - It is easy to deploy. - Server authentication, and client authentication (optional), occurs. - Message confidentiality and integrity are ensured. - The parties partaking in the secure session can choose the authentication methods, and encryption and hash algorithms. The shortcomings associated with deploying SSL/TLS are: - SSL/TLS needs additional CPU resources to establish the secure session between the server and client. - Because SSL/TLS utilizes certificates, you would need administrators to manage these certificates, and the certificate systems. The different situations where an SSL/TLS implementation normally occurs are: - SSL/TLS can be utilized to authenticate client access to a secure site. You can require client and server certificates, and only allow access to the site to those clients that are authenticated. - Applications which support SSL can require authentication for remote users logging on to the system. - Exchange servers can use SSL/TLS to provide data confidentiality when data is transmitted between servers on the intranet or Internet. A Free Implementation of TLS The OpenSSL Project is a non-commercial toolkit implementing the TLS (Transport Layer Security) protocols. Configuring Firewalls to Allow Encrypted Traffic To enable SSL traffic to pass through the firewall, one of two methods can be used: - You can configure the firewall to permit all traffic with a specified port. The firewall will, however, only be able to use the source and destination of the SSL packets to determine whether to allow a packet to pass through. The firewall does not examine the contents of SSL packets. The common ports which applications utilize for SSL are listed below: - Hypertext Transfer Protocol (HTTP): SSL port 443; Standard port 80 - Simple Mail Transfer Protocol (SMTP): SSL port 465; Standard port 25 - Internet Message Access Protocol (IMAP): SSL port 993; Standard port 143 - Lightweight Directory Access Protocol (LDAP): SSL port 636; Standard port 389 - Network News Transfer Protocol (NNTP): SSL port 563; Standard port 119 - Post Office Protocol version 3 (POP3): SSL port 995; Standard port 110 - You can configure the firewall as a proxy server. In this configuration, the client establishes a session with the firewall, and the firewall establishes a session with the particular server.
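Since the OpenSSL toolkit is mentioned above, here is a bare-bones sketch of how an application might open a TLS connection using the OpenSSL C API (my own illustration, assuming OpenSSL 1.1 or later; error handling and certificate-verification setup are omitted for brevity, and example.com with port 443 are placeholder values). The program must be linked against libssl and libcrypto:

    #include <openssl/ssl.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>
    #include <cstring>
    #include <cstdio>

    int main()
    {
        // Create a TLS client context (the protocol version is negotiated automatically).
        SSL_CTX* ctx = SSL_CTX_new(TLS_client_method());

        // Ordinary TCP connection to the server (placeholder host and port).
        addrinfo hints{}, *res = nullptr;
        hints.ai_socktype = SOCK_STREAM;
        getaddrinfo("example.com", "443", &hints, &res);
        int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        connect(sock, res->ai_addr, res->ai_addrlen);
        freeaddrinfo(res);

        // Wrap the socket in an SSL object and perform the handshake.
        SSL* ssl = SSL_new(ctx);
        SSL_set_fd(ssl, sock);
        if (SSL_connect(ssl) == 1)          // handshake: negotiates the cipher and session key
        {
            const char* req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
            SSL_write(ssl, req, (int)std::strlen(req));   // encrypted with the session key
            char buf[256];
            int n = SSL_read(ssl, buf, sizeof(buf) - 1);  // decrypted reply
            if (n > 0) { buf[n] = '\0'; std::printf("%s\n", buf); }
        }
        SSL_shutdown(ssl);
        SSL_free(ssl);
        close(sock);
        SSL_CTX_free(ctx);
        return 0;
    }

A real deployment would also load a trusted CA store and check the server certificate, which is the authentication step described in the handshake section above.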
A Comparison of IPSec and SSL The Windows Server 2003 Public Key Infrastructure (PKI) is based on the following standards: - Public key infrastructure X.509 (PKIX) standard - Internet Engineering Task Force (IETF) standards: IETF recommends that the security standards listed below interoperate with a PKI implementation to further enhance the security in enterprise applications. - Transport Layer Security (TLS) - Secure Multipurpose Internet Mail Extensions (S/MIME) - Internet Protocol Security (IPSec) As is the case with SSL, IPSec is also utilized to ensure authentication, data confidentiality, and message integrity. A few key differences between IPSec and SSL are: - IPSec is implemented by the operating system (OS) and is transparent to those applications utilizing it. SSL is implemented by distinct applications. While SSL cannot be utilized to encrypt all the types of data communicated between two hosts, IPSec can be utilized to secure most types of network communications. - SSL can only be used to encrypt communications between two hosts, while IPSec can tunnel communications between networks. - IPSec can be used to encrypt data for absolutely any application. Applications have to support SSL in order for SSL to encrypt data for the application. - For authentication, IPSec utilizes public key certificates or a shared secret. SSL has to utilize public key certificates. - With IPSec, the server and the client need to be authenticated. With SSL, either the server, or the client, or both the server and client needs to be authenticated. While IPSec requires each end of the connection to authenticate, SSL does not.
http://www.tech-faq.com/ssl-secure-sockets-layer.html
13
48
The Herschel Space Observatory has focused on Mars, the four giant planets, and the two homes of comets to uncover new information about them and about the nebula from which our solar system formed. When Jupiter, Saturn, Uranus, and Neptune condensed out of the solar system's parent nebula some 4.6 billion years ago, there were millions of bits and pieces left over - the comets. A region of comets lying beyond Neptune's orbit has remained fairly undisturbed since then. It is known as the Kuiper Belt, and it's the source of short-period comets like Halley. But theory holds that the comets scattered among the giant planets were relocated - drawn to and then flung away by the planets' gravitational pull. (NASA has made good use of this "slingshot effect" to give spacecraft a boost on their way to the outer planets.) Some of the comets are thought to have headed sunward and collided with the inner planets. In fact, one school of thought has it that these comets provided the water for Earth's oceans, and may also have contributed the complex organic molecules that led to life. But most of the comets were thrown much farther. Massive Jupiter hurled most of the comets in its vicinity clear out of the solar system. The smaller giants propelled their comets with less force, and fewer gained the velocity needed to leave the solar system. The comets that didn't quite escape wound up forming the Oort Cloud, a vast storehouse of comets at the outskirts of the solar system, trillions of miles from the sun (compared to Pluto's average distance of less than 6 billion miles). Since they have undergone little change since the beginning of the solar system, comets are thought to contain fairly pristine samples of the materials that formed the primordial nebula. So studying them is a good way to glimpse the original cloud of gas and dust from which the Sun and planets formed. Herschel has made detailed observations of comets to help scientists reconstruct the early development of the solar system, and also determine whether comets were the source of water and pre-life chemicals on primitive Earth. (And if on Earth, then also perhaps on Mars and the moons of the giant planets.) Herschel has inventoried comets' chemical composition, and studied their physical and chemical processes. Comparing Kuiper Belt comets with those in the Oort Cloud has enabled scientists to infer conditions in the different parts of the nebula where they formed. The deuterium/hydrogen ratio One measurement of particular interest is the ratio of deuterium to hydrogen (D/H ratio) in cometary water. Hydrogen is the simplest atom, with one electron in orbit around a nucleus of one proton. Deuterium is an isotope of hydrogen, in which the nucleus also has one neutron. Chemical compounds that contain hydrogen also come in versions made with deuterium. Water that contains deuterium is known as "heavy water." Earth's oceans contain a characteristic percentage of heavy water. So do comets, which are mostly water, and some of the other planets and moons. Knowing the D/H ratio of water and other substances is useful to scientists for two basic reasons. First, it acts as a fingerprint. By comparing the D/H ratios in Earth's water with that in cometary water, for example, scientists can determine whether our planet's water could have come from comets. The same determination can be made for other planets and moons in the solar system. Second, the D/H ratio in comets provides a clue to conditions at the beginning of the Universe!
Scientists think the D/H ratio of pristine comets represents that of our original nebula, which in turn is typical of the rest of the Universe. And all of the hydrogen and deuterium in the Universe is thought to have formed during the first three minutes after the Big Bang (the nuclei, that is - stable atoms didn't develop for another 300,000 years or so), so the D/H ratio of the Universe was fixed in place at that time. Knowing that ultimate D/H ratio would help scientists determine the conditions that could have generated hydrogen and deuterium in that particular ratio, and therefore help them deduce the nature of the Universe in its earliest stages. Scientists are also interested in comparing the D/H ratios in the atmospheres of the four giant planets, and Herschel will measure them. (Image: Montage of planets photographed by the Voyager spacecraft; from upper right: Jupiter, Saturn, Uranus, and Neptune.) One important reason is to test a prevailing theory of planet formation. Scientists think that as our solar system developed, Jupiter and Saturn formed mostly from gas, while Uranus and Neptune - out in the more tenuous suburbs of the primordial nebula - formed from a much higher percentage of ice-covered dust grains. If that is correct, then Uranus and Neptune should have a higher D/H ratio than Jupiter and Saturn, since icy dust mantles tend to have a higher ratio than the surrounding gas. Comparing the ratios in the two pairs of planets will reveal information about the composition of that icy dust. And comparing them with the ratios in comets will further enhance our understanding of how all those bodies formed, and the structure of the primordial nebula. The Infrared Space Observatory (ISO) surprised scientists when it detected water in the high atmospheres of the four giants. Herschel will help to determine whether that water came from their rings and moons or from the icy crusts of interplanetary dust. Tracking water and other molecules at various altitudes (the "vertical profiles") provides information on how those atmospheres work - how convection moves gases up and down, and how winds blow them around. Herschel has studied the vertical mixing profiles of many molecules in the atmospheres of the giant planets, providing new insights about the chemistry and dynamics of the atmospheric layers. Ammonia and phosphine, which are affected by condensation, photochemical processes, and vertical transport, are expected to be among the most valuable guides to information about atmospheric circulation in the giant planets. Herschel has mapped the spectral continuum of Jupiter and Saturn in the far-infrared for the first time to reveal properties of their clouds, especially particle size and density. Looking further out to Uranus and Neptune, Herschel has measured the methane in their stratospheres. Herschel has also searched all four gas giants for chemicals never before seen in planetary atmospheres. Understanding the atmospheric chemistry of Mars is important both for an understanding of Mars' history - including the possibility that it was more like Earth in its earlier days - and as a tool for comparing how atmospheres differ on different planets. Such studies may provide insights into the workings and possible future of our own atmosphere here on Earth. Herschel has explored the Martian atmosphere in the 200-670 micron range for the first time, enabling scientists to determine the vertical profiles of water vapor and oxygen molecules.
Monitoring water at various times during the Martian year has revealed seasonal changes. Herschel has also measured deuterium and carbon monoxide, and may detect other compounds, such as hydrogen peroxide, that are predicted by models of Martian atmospheric chemistry. Finally, Herschel has obtained information about the composition and emissivity of minerals covering Mars' surface.
http://herschel.jpl.nasa.gov/solarSystem.shtml
13
15
Observed properties of our Solar System The observed properties of the Solar System. Scale and distribution of matter, the density of the planets (Mass/Volume) and what this tells you about the materials that make up a planet or Moon. Dynamical properties (orbits, inclination of spin-axis [obliquity], rotational properties). How evolved are the surfaces of the planets? How "primitive" are they? How do we know their relative ages? Their absolute ages? Have the primitive features been removed by recent activity? Know the order of the planets in increasing distance from the Sun (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto). Be able to recognize which planet is which by its appearance. Know which are inner "rocky" planets and which are the outer "gas-ball" planets. Planetary Guts: What is inside a planet? What is inside of a planet or moon? How can we tell? What methods of observations provide info on the interiors of planets? Use Mass and radius to determine DENSITY. Mass is determined from Kepler's laws if the planet has a moon, or from the change in orbit of a passing space probe. Use direct sampling of rock from the surface (Earth/Moon, Mars, Venus) to determine (INDIRECTLY) what the insides may contain CHEMICALLY. Use seismic data from planet quakes to get info about the STRUCTURE of interiors. Magnetic fields provide evidence of metallic conducting cores. Shape and rotation provide data about the PLASTICITY of the planet or Moon. Outer Skins of Planets and Moons Know what factors modify the surfaces of planets or Moons. Understand the four main processes: impact cratering, volcanism, tectonics, surface atmospheric and chemical weathering. Know which processes are important on what major objects in the solar system. How are the properties of the moons of the outer planets controlled and what similarities are there with the inner rocky planets? Actual atmospheric composition of the atmospheres of the major planets, especially Venus, Earth, Mars and the outer gas-ball planets. The absence of an atmosphere on the Moon and Mercury. Inventory of carbon dioxide (CO2), nitrogen (N2), and water (H2O) on Venus, Earth and Mars. Origin and change of planetary atmospheres. An understanding of the "greenhouse" effect on Earth and Venus. Where did the oxygen (O2) come from on Earth and when? What gases contribute to global warming on Earth? What factors contribute to global warming? How are the activities of human beings changing the balance and how significant is it? What can be done about it? The damaging effects of UV sunlight. What factors cause an increase in ozone depletion. The Outer Planets The structure of the outer planets and what we know about their cloud structures, including the Great Red Spot of Jupiter. The temperature, pressure and composition of the outer planets and the principal forms of clouds present. Differences between the belts and zones of Jupiter. Levels of clouds of different composition and color. Differences between Jupiter, Saturn, and the other gas-balls. Interior structure (liquid metallic hydrogen) in Jupiter and Saturn. Moons of the outer planets - types and origins. Factors influencing their surfaces. Ice geology and the effect of tidal stretching. Evidence of subsurface oceans and interior composition. Volcanic activity on Io and Triton. The atmosphere of Titan (Saturn's giant Moon). Etc. Tides and Rings The nature of tides and why they occur on Earth. Effects of tides on shape, rotation, and orbit of moons and planets.
The effect on the inner moons of the giant planets (especially Io in the Jupiter system). The rings of the outer gaseous planets and their composition and structure. Why they occur within the tidal radius of a typical moon. Why the Earth does not have a major ring system.

Meteors and Meteorites
What they are. Comets and meteor showers. The chemical makeup of meteorites (stones, irons, stony-irons): chondrites, chondrules, achondrites, carbonaceous chondrites. What meteorites tell us about the early solar system. The age of meteoroids and their relationship with the asteroid belt and Earth-crossing asteroids, based on an analysis of their orbits and composition. The impact record of the Moon compared with the inner planets. Terrestrial impacts. The impact hazard scale: small impacts are more frequent. Crater size is related to impactor size (the crater is roughly 10x larger than the impactor). Effects of large impacts on life on Earth. Extinction of the dinosaurs: evidence of an impact causing it. Other sample impacts: the Arizona meteor crater, Tunguska, and Comet SL9 and Jupiter.
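For the "Planetary Guts" item above, deriving a bulk density from mass and radius is the standard worked example. A minimal sketch follows; the reference values for Earth are my own assumptions rather than figures from the review sheet, and the last lines illustrate the crater-to-impactor rule of thumb from the impact section.

```python
# Bulk density from mass and radius: rho = M / ((4/3) * pi * R^3)
# Values below are standard reference figures for Earth (assumed, not from the review).
import math

def bulk_density(mass_kg, radius_m):
    """Return mean density in kg/m^3 for a spherical body."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume

earth = bulk_density(5.972e24, 6.371e6)
print(f"Earth: {earth:.0f} kg/m^3")            # ~5510 kg/m^3 -> rock plus an iron core
print("Water ice for comparison: ~920 kg/m^3") # icy bodies are far less dense

# Rule of thumb from the impact section: a crater is roughly 10x the impactor size.
impactor_diameter_km = 1.0
print(f"A {impactor_diameter_km} km impactor -> crater ~{10 * impactor_diameter_km:.0f} km across")
```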
http://www.public.iastate.edu/~f2003.astro.120/exam2rev.html
13
11
Middle School and High School Math. We will be adding more topics soon.
- Algebraic Expressions I: create one-step algebraic expressions worksheets.
- Algebraic Expressions II: create multi-step algebraic expressions worksheets.
- Combine Like Terms: combining like terms - create worksheets.
- One Step Equations: teach or learn one-step equations. Practice online or create worksheets.
- Two Step Equations: teach or learn two-step equations. Practice online or create worksheets.
- Solving Equations - Word Problems.
- Geometry - Rotating a Point: teach or learn rotating a point. Practice online or create worksheets.
- Y Varies as X: teach or learn solving "Y varies directly as X" or "Y varies inversely as X" problems. Practice online or create worksheets.
- Polynomial factoring, multiplying polynomials and quadratic equations. Practice online or create worksheets.
- Linear equations (2 variables and 3 variables) worksheets.
- Geometry - Distance: find the distance between 2 points. Practice online or create worksheets.
- Geometry - Mid Point: find the midpoint of a line segment. Practice online or create worksheets.
- Graph a Line: graph straight lines. Write the equations in slope-intercept form. Write the equations in point-slope form. Write the equations parallel to a line. Write the equations perpendicular to a line.
- Improve your vocabulary: vocabulary flash cards.
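Several of the geometry topics listed above (distance between two points, midpoint of a segment, slope-intercept form) come down to one-line formulas. A minimal sketch, with made-up sample points:

```python
# Distance, midpoint and slope-intercept form for two points (x1, y1), (x2, y2).
import math

def distance(p, q):
    """Straight-line distance between p and q."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Midpoint of the segment from p to q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def slope_intercept(p, q):
    """Return (m, b) for y = m*x + b through p and q (requires q[0] != p[0])."""
    m = (q[1] - p[1]) / (q[0] - p[0])
    b = p[1] - m * p[0]
    return m, b

a, c = (1, 2), (4, 6)
print(distance(a, c))         # 5.0
print(midpoint(a, c))         # (2.5, 4.0)
print(slope_intercept(a, c))  # (1.333..., 0.666...)
```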
http://www.mathslice.com/ms_hs_main.php
13
23
Photograph courtesy Steve Larson, University of Arizona Published January 3, 2011 A large asteroid known for more than a century appears to actually be a comet in disguise, astronomers say. Most asteroids are chunks of metallic rock that have virtually no atmospheres. Tens of thousands of asteroids circle the sun inside what's known as the main asteroid belt, a doughnut-like ring that lies between the orbits of Mars and Jupiter. (Explore an interactive solar system.) By contrast, most comets are loose clumps of dirt and ice thought to originate in the Kuiper belt, far beyond the orbit of Neptune. When a comet's oval-shaped orbit brings it close to the sun, its ices vaporize and the comet develops its signature halo of gases and dust. On December 11 astronomer Steven Larson of the Catalina Sky Survey in Arizona spotted what appeared to be a faint comet not currently in any comet databases. Larson later realized the cometlike body is traveling along the same circular, stable orbit as an asteroid named 596 Scheila. Discovered in 1906, the space rock is more than 70 miles (113 kilometers) wide. The scientist thinks the body belongs to a mysterious group of solar system hybrids called main belt comets, or MBCs, which have orbits like those of asteroids yet display comet-like activity. "Most MBCs are small things, but this is the first time that a large asteroid has been observed to show cometary activity," Larson said. Astronomers have positively identified only five other MBCs to date, but experts think there could be millions more such hybrids cruising through our solar system in inactive states. "It could be that all asteroids with a diameter of less than, say, 150 kilometers [93 miles] have ice inside," said Dale Cruikshank, a comet expert at NASA Ames Research Center in California who was not involved in the Catalina observations. (Also see "Water Discovered on an Asteroid—A First.") Comets Revived by Cracks in Insulation It's unclear why some comets would cease activity, but one idea is that they go into "stealth mode" after too many passes near the sun. "If they come near the sun often enough, they can build up a layer of nonvolatile and [non-reflective] material that acts as an insulating crust. This appears to be what happened" with 596 Scheila, Catalina's Larson said. Based on this idea, a comet that ceases spewing material will remain inactive unless something happens to reactivate it. The simplest trigger is if another object strikes the crusty comet and cracks its insulation, allowing ice and gas trapped inside to gush out. This scenario is "an attractive idea, but it's not the only option," NASA Ames' Cruikshank said. "These objects can crack and break for other reasons as well." Shifting material inside an MBC—caused, for example, by the gravitational tug of a passing body—might also loosen the surface layer. Researchers at NASA Ames have recently submitted a proposal for a space mission to visit an MBC. The mission "would go out and look at a couple of these things, because MBCs are potentially very important to the issue of where the water to make Earth's oceans came from," Cruikshank said. Hybrid Comets Brought Water to Earth? The origin of water on our planet is a mystery because—according to current models of planet formation—water could not have existed on early Earth. "We think the Earth formed hot and dry ... 
and hot rock is not very good at trapping water," said comet expert David Jewitt at the University of California, Los Angeles, who also did not participate in the new 596 Scheila observations. Some scientists have speculated that, as ancient Earth cooled down, water and other volatiles were added as a kind of "late veneer" by icy comets that collided with the planet. (See "Comet Swarm Delivered Earth's Oceans?") A problem with this theory is that recent spacecraft rendezvous with comets have shown that comet water has a slightly different makeup than Earth water, containing a higher concentration of a heavier version of hydrogen called deuterium. Some scientists point out, however, that so far all of the comets studied directly have come from far out in the solar system. MBCs have a different origin, so their water might be a chemical match for our planet, they say. MBCs "are a source of water the detailed composition of which we do not know," Cruikshank said. Also, because MBCs originate closer to Earth, they may have been more likely to strike the planet than conventional comets, UCLA's Jewitt added. "If you try to hit the Earth from the Kuiper belt, that's a hell of a long shot," Jewitt said. "But if you try to hit Earth from the asteroid belt, which is ten times closer, it's much easier, because Earth is a bigger and closer target."
http://news.nationalgeographic.com/news/2011/01/110103-weird-asteroid-comet-disguise-main-belt-science-space/
13
86
Solar sails (also called light sails or photon sails, especially when they use light sources other than the Sun) are a proposed form of spacecraft propulsion using large membrane mirrors. Radiation pressure is about 10⁻⁵ Pa at Earth's distance from the Sun and decreases with the square of the distance from the light source (e.g. the Sun), but unlike rockets, solar sails require no reaction mass. Although the thrust is small, it continues as long as the light source shines and the sail is deployed. In theory a lightsail (actually a system of lightsails) powered by an Earth-based laser could even be used to decelerate the spacecraft as it approaches its destination.

Solar collectors, temperature-control panels and sun shades are occasionally used as expedient solar sails, to help ordinary spacecraft and satellites make minor attitude-control corrections and orbit modifications without using fuel. This conserves fuel that would otherwise be used for maneuvering and attitude control. A few spacecraft have even had small purpose-built solar sails for this use. For example, EADS Astrium's Eurostar E3000 geostationary communications satellites use solar sail panels attached to their solar cell arrays to off-load transverse angular momentum, thereby saving fuel (angular momentum accumulates over time as the gyroscopic momentum wheels control the spacecraft's attitude; this excess momentum must be off-loaded to protect the wheels from overspin). Some unmanned spacecraft (such as Mariner 10) have substantially extended their service lives with this practice.

The science of solar sails is well proven, but the technology to manage large solar sails is still undeveloped. Mission planners are not yet willing to risk multimillion-dollar missions on unproven solar sail unfolding and steering mechanisms. This neglect has inspired some enthusiasts to attempt private development of the technology, such as Cosmos 1. The concept was first proposed by the German astronomer Johannes Kepler in the seventeenth century. It was proposed again by Friedrich Zander in the late 1920s and gradually refined over the decades. Recent serious interest in lightsails began with an article by engineer and science fiction author Robert L. Forward in 1984.

How they work
The spacecraft deploys a large membrane mirror which reflects light from the Sun or some other source. The radiation pressure on the mirror provides a small amount of thrust by reflecting photons. Tilting the reflective sail at an angle from the Sun produces thrust at an angle normal to the sail. In most designs, steering would be done with auxiliary vanes, acting as small solar sails to change the attitude of the large solar sail (as on the vanes of the Cosmos 1 design). The vanes would be adjusted by electric motors. In theory a lightsail driven by a laser or other beam from Earth can be used to decelerate a spacecraft approaching a distant star or planet, by detaching part of the sail and using it to focus the beam on the forward-facing surface of the rest of the sail. In practice, however, most of the deceleration would happen while the two parts are at a great distance from each other, which means that, to do that focusing, it would be necessary to give the detached part an accurate optical shape and orientation. A sail is in orbit, and therefore does not need to hover or move directly toward or away from the Sun. Almost all missions would use the sail to change orbit, rather than thrusting directly away from a planet or the Sun.
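As a rough sanity check on the figures above, the sketch below evaluates the ideal reflecting-sail pressure P = 2I/c and its 1/r² fall-off with distance from the Sun. The solar-constant value and the example sail area and spacecraft mass are illustrative assumptions of mine, not numbers taken from the text.

```python
# Radiation pressure on a flat, perfectly reflecting sail facing the Sun:
#   P = 2 * I / c, with I the solar irradiance at the sail's distance,
# and I scaling as 1/r^2 with distance from the Sun.
C = 299_792_458.0        # speed of light, m/s
I_1AU = 1361.0           # solar irradiance at 1 AU, W/m^2 (nominal solar constant)

def radiation_pressure(r_au):
    """Ideal reflecting-sail pressure in pascals at r_au astronomical units."""
    return 2.0 * I_1AU / (C * r_au ** 2)

print(radiation_pressure(1.0))   # ~9.1e-6 Pa, i.e. the ~1e-5 Pa quoted above
print(radiation_pressure(0.2))   # ~2.3e-4 Pa -- 25x stronger close to the Sun

# Illustrative sail: 40 m x 40 m at 1 AU pushing a 100 kg spacecraft (assumed numbers).
area, mass = 40.0 * 40.0, 100.0
thrust = radiation_pressure(1.0) * area      # newtons
print(thrust, thrust / mass)                 # ~0.015 N, ~1.5e-4 m/s^2 -- tiny but continuous
```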
The sail is rotated slowly as it orbits a planet, so that the thrust is in the direction of the orbital motion to move to a higher orbit, or against it to move to a lower orbit. When the orbit is far enough from the planet, the sail then begins similar maneuvers in orbit around the Sun. The best sort of mission for a solar sail involves a dive near the Sun, where the light is intense and sail efficiencies are high. Going close to the Sun may serve different mission aims: exploring the solar poles from a short distance, observing the Sun and its near environment from a non-Keplerian circular orbit whose plane may be shifted by some solar radii, or flying by the Sun so that the sail gains a very high speed. A feature of solar sail propulsion that went unsuspected until the first half of the 1990s is that it allows a sailcraft to escape the solar system with a cruise speed higher, or even much higher, than that of a spacecraft powered by a nuclear-electric rocket system. The spacecraft mass-to-sail-area ratio does not need to reach ultra-low values, although the sail should be an advanced all-metal sail. This flight mode is also known as fast solar sailing. Proven mathematically (like many other astronautical concepts, well in advance of any actual flight), this sailing mode has been considered by NASA/Marshall as one of the options for a future precursor interstellar probe exploring near-interstellar space beyond the heliosphere. Most theoretical studies of interstellar missions with a solar sail plan to push the sail with a very large laser (a form of beam-powered propulsion using direct impulse). The thrust vector would therefore point away from the Sun and toward the target.

Limitations of solar sails
Solar sails don't work well, if at all, in low Earth orbit below about 800 km altitude due to erosion and air drag. Above that altitude they give very small accelerations that take months to build up to useful speeds. Solar sails have to be physically large, and payload size is often small. Deploying solar sails is also highly challenging to date. Solar sails must face the Sun to decelerate. Therefore, on trips away from the Sun, they must arrange to loop behind the outer planet and decelerate into the sunlight. There is a common misunderstanding that solar sails cannot go towards their light source. This is false. In particular, sails can go toward the Sun by thrusting against their orbital motion. This reduces the energy of their orbit, spiraling the sail toward the Sun (compare tacking in sailing).

Investigated sail designs
"Parachutes" would have very low mass, but theoretical studies show that they would collapse from the forces exerted by the shrouds; radiation pressure does not behave like aerodynamic pressure. The highest thrust-to-mass designs known (2007) were theoretical designs developed by Eric Drexler. He designed a sail using reflective panels of thin aluminium film (30 to 100 nanometres thick) supported by a purely tensile structure. It rotated and would have to be continually under slight thrust. He made and handled samples of the film in the laboratory, but the material is too delicate to survive folding, launch, and deployment, hence the design relied on space-based production of the film panels, joining them to a deployable tension structure. Sails in this class would offer accelerations an order of magnitude higher than designs based on deployable plastic films.
The highest thrust-to-mass designs for ground-assembled deployable structures are square sails with the masts and guy lines on the dark side of the sail. Usually there are four masts that spread the corners of the sail, and a mast in the center to hold guide wires. One of the largest advantages is that there are no hot spots in the rigging from wrinkling or bagging, and the sail protects the structure from the Sun. This form can therefore go quite close to the Sun, where the maximum thrust is available. Control would probably use small sails on the ends of the spars.

In the 1970s JPL did extensive studies of rotating-blade and rotating-ring sails for a mission to rendezvous with Halley's Comet. The intention was that such structures would be stiffened by their angular momentum, eliminating the need for struts and saving mass. In all cases, surprisingly large amounts of tensile strength were needed to cope with dynamic loads. Weaker sails would ripple or oscillate when the sail's attitude changed, and the oscillations would add and cause structural failure. So the difference in the thrust-to-mass ratio was almost nil, and the static designs were much easier to control. JPL's reference design was called the "heliogyro" and had plastic-film blades deployed from rollers and held out by centrifugal force as it rotated. The spacecraft's attitude and direction were to be completely controlled by changing the angle of the blades in various ways, similar to the cyclic and collective pitch of a helicopter. Although the design had no mass advantage over a square sail, it remained attractive because the method of deploying the sail was simpler than a strut-based design. JPL also investigated "ring sails" (a "spinning disk" sail): panels attached to the edge of a rotating spacecraft. The panels would have slight gaps, about one to five percent of the total area. Lines would connect the edge of one sail to the other. Weights in the middles of these lines would pull the sails taut against the coning caused by the radiation pressure. JPL researchers said that this might be an attractive sail design for large manned structures. The inner ring, in particular, might be made to have artificial gravity roughly equal to the gravity on the surface of Mars. A solar sail can serve a dual function as a high-gain antenna. Designs differ, but most modify the metallization pattern to create a holographic monochromatic lens or mirror in the radio frequencies of interest, including visible light.

Pekka Janhunen from FMI has invented a type of solar wind sail called the electric solar wind sail. Externally it has little in common with the traditional solar sail design, because the sail is replaced by straightened conducting tethers (wires) placed radially around the host ship. The wires are electrically charged, and thus an electric field is created around them. The electric field of the wires extends a few tens of metres into the surrounding solar wind plasma. Because the solar wind particles react to the electric field much as they would to a solid sail, the effective radius of a wire is set by the electric field generated around it rather than by the wire itself. This also makes it possible to maneuver a ship with an electric solar wind sail by regulating the electric charge of the wires. A full-sized functioning electric solar wind sail would have 50-100 straightened wires with a length of about 20 km each.
Sail testing in space
NASA has successfully tested deployment technologies on small-scale sails in vacuum chambers. No solar sails have been successfully used in space as primary propulsion systems, but research in the area is continuing. It is noteworthy that the Mariner 10 mission, which flew by the planets Mercury and Venus, demonstrated the use of solar pressure as a method of attitude control in order to conserve attitude-control propellant. On February 4, 1993, Znamya 2, a 20-meter-wide aluminized-mylar reflector, was successfully tested from the Russian Mir space station. Although the deployment test was successful, the experiment only demonstrated deployment, not propulsion. A second test, Znamya 2.5, failed to deploy properly. On August 9, 2004, the Japanese ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-type sail was deployed at 122 km altitude and a fan-type sail was deployed at 169 km altitude. Both sails used 7.5 micrometer thick film. The experiment was purely a test of the deployment mechanisms, not of propulsion. A joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences launched Cosmos 1 on June 21, 2005, from a submarine in the Barents Sea, but the Volna rocket failed and the spacecraft did not reach orbit. A solar sail would have been used to gradually raise the spacecraft to a higher Earth orbit. The mission would have lasted for one month. A suborbital prototype test by the group failed in 2001 as well, also because of rocket failure. A 15-meter-diameter solar sail (SSP, solar sail sub payload, soraseiru sabupeiro-do) was launched together with ASTRO-F on an M-V rocket on February 21, 2006, and made it to orbit. It deployed from the stage, but opened incompletely. A team from the NASA Marshall Space Flight Center (Marshall), along with a team from the NASA Ames Research Center, developed a solar sail mission called NanoSail-D, which was lost in a launch failure aboard a Falcon 1 rocket on 3 August 2008. The primary objective of the mission had been to test sail deployment technologies. The spacecraft might not have returned useful data about solar sail propulsion, according to Edward E. Montgomery, technology manager of Solar Sail Propulsion at Marshall: "The orbit available to us in this launch opportunity is so low, it may not allow us to stay in orbit long enough for solar pressure effects to accumulate to a measurable degree." The NanoSail-D structure was made of aluminum and plastic, with the spacecraft weighing less than . The sail has about of light-catching surface.

The best known material is thought to be a thin mesh of aluminium with holes less than ½ the wavelength of most light. Nanometre-sized "antennas" would emit heat energy as infrared. Although samples have been created, the material is too fragile to unfold or unroll with known technology. The most common material in current designs is aluminized 2 µm Kapton film. It resists the heat of a pass close to the Sun and still remains reasonably strong. The aluminium reflecting film is on the Sun side. The sails of Cosmos 1 were made of aluminized PET film. Research by Dr. Geoffrey Landis in 1998-9, funded by the NASA Institute for Advanced Concepts, showed that various materials, such as alumina for laser lightsails and carbon fiber for microwave-pushed lightsails, were superior to the previously standard aluminium or Kapton films.
In 2000, Energy Science Laboratories developed a new carbon fiber material which might be useful for solar sails. The material is over 200 times thicker than conventional solar sail materials, but it is so porous that it has the same weight per unit area. The rigidity and durability of this material could make solar sails that are significantly sturdier than plastic films. The material could self-deploy and should withstand higher temperatures. There has been some theoretical speculation about using molecular manufacturing techniques to create advanced, strong, hyper-light sail material based on nanotube mesh weaves, where the weave "spaces" are less than ½ the wavelength of the light impinging on the sail. While such materials have as yet only been produced in laboratory conditions, and the means for manufacturing them on an industrial scale are not yet available, such materials could weigh less than 0.1 g/m², making them lighter than any current sail material by a factor of at least 30. For comparison, 5 micrometre thick Mylar sail material weighs 7 g/m², aluminized Kapton films weigh up to 12 g/m², and Energy Science Laboratories' new carbon fiber material weighs in at 3 g/m².

Robert L. Forward pointed out that a solar sail could be used to modify the orbit of a satellite around the Earth. In the limit, a sail could be used to "hover" a satellite above one pole of the Earth. Spacecraft fitted with solar sails could also be placed in close orbits about the Sun that are stationary with respect to either the Sun or the Earth, a type of satellite named by Forward a "statite". This is possible because the propulsion provided by the sail offsets the gravitational attraction of the Sun. Such an orbit could be useful for studying the properties of the Sun over long durations. Such a spacecraft could conceivably be placed directly over a pole of the Sun and remain at that station for lengthy durations. Likewise, a solar sail-equipped spacecraft could remain on station nearly above the polar terminator of a planet such as the Earth by tilting the sail at the appropriate angle needed to just counteract the planet's gravity.

Robert Forward also proposed the use of lasers to push solar sails, providing beam-powered propulsion. Given a sufficiently powerful laser and a large enough mirror to keep the laser focused on the sail for long enough, a solar sail could be accelerated to a significant fraction of the speed of light. To do so, however, would require the engineering of massive, precisely shaped optical mirrors or lenses (wider than the Earth for interstellar transport), incredibly powerful lasers, and more power for the lasers than humanity currently generates. A potentially easier approach would be to use a maser to drive a "solar sail" composed of a mesh of wires with the same spacing as the wavelength of the microwaves, since the manipulation of microwave radiation is somewhat easier than the manipulation of visible light. The hypothetical "Starwisp" interstellar probe design would use a maser to drive it. Masers spread out more rapidly than optical lasers owing to their longer wavelength, and so would not have as long an effective range. Masers could also be used to power a painted solar sail, a conventional sail coated with a layer of chemicals designed to evaporate when struck by microwave radiation. The momentum generated by this evaporation could significantly increase the thrust generated by solar sails, as a form of lightweight ablative laser propulsion.
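The "statite" idea above works because light pressure and solar gravity both fall off as 1/r², so whether a sail can hover depends only on its total loading (mass per unit area), not on its distance from the Sun. Here is a minimal sketch of that balance, assuming an ideal, perfectly reflecting, face-on sail with no payload, using standard solar constants, and comparing against the areal densities quoted above:

```python
# "Statite" condition: an ideal reflecting sail hovers when light pressure balances
# solar gravity. Both scale as 1/r^2, so the condition reduces to a pure sail loading:
#   sigma_critical = L_sun / (2 * pi * c * G * M_sun)   [kg per m^2]
import math

L_SUN = 3.828e26        # solar luminosity, W
C     = 2.998e8         # speed of light, m/s
G     = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg

sigma_crit = L_SUN / (2 * math.pi * C * G * M_SUN)
print(f"critical loading: {sigma_crit * 1000:.2f} g/m^2")   # ~1.5 g/m^2

# Compare with the areal densities quoted above (grams per square metre, sail only).
for name, gsm in [("5 um Mylar", 7.0), ("aluminized Kapton", 12.0),
                  ("ESL carbon fibre", 3.0), ("nanotube mesh (speculative)", 0.1)]:
    verdict = "could hover (sail alone, no payload)" if gsm / 1000 < sigma_crit else "too heavy to hover"
    print(f"{name}: {gsm} g/m^2 -> {verdict}")
```

The same critical-loading figure is why the 1-2 g/m² sailcraft discussed later in this article count as very high-performance designs.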
To further focus the energy on a distant solar sail, designs have considered the use of a large zone plate. This would be placed at a location between the laser or maser and the spacecraft. The plate could then be propelled outward using the same energy source, thus maintaining its position so as to focus the energy on the solar sail. Additionally, it has been theorized by da Vinci Project contributor T. Pesando that solar sail-utilizing spacecraft successful in interstellar travel could carry their own zone plates or perhaps even masers to be deployed during flybys at nearby stars. Such an endeavour could allow future solar-sailed craft to utilize focused energy from other stars rather than from the Earth or Sun, thus propelling them more swiftly through space and perhaps to more distant stars. However, the potential of such a theory remains uncertain, if not dubious, due to the high-speed precision involved and the possible payloads required.

Despite the loss of Cosmos 1 (which was due to a failure of the launcher), scientists and engineers around the world remain encouraged and continue to work on solar sails. While most direct applications created so far intend to use the sails as inexpensive modes of cargo transport, some scientists are investigating the possibility of using solar sails as a means of transporting humans. This goal is strongly related to the management of very large (i.e. well above 1 km²) surfaces in space and to advances in sail making. Thus, in the near/medium term, solar sail propulsion is aimed chiefly at accomplishing a very high number of non-crewed missions in any part of the solar system and beyond. Critics of the solar sail argue that solar sails are impractical for orbital and interplanetary missions because they move on an indirect course. However, once in Earth orbit, the majority of the mass of most interplanetary missions is taken up by fuel. A robotic solar sail could therefore multiply an interplanetary payload several times by reducing this significant fuel mass, and create a reusable, multi-mission spacecraft. Most near-term planetary missions involve robotic exploration craft, for which the directness of the course is unimportant compared with the fuel-mass savings of a solar sail. For example, most existing missions use multiple gravitational slingshots to reduce the necessary fuel mass, at the cost of transit time and directness of the route.

There is also a misunderstanding that solar sails capture energy primarily from the solar wind, the high-speed charged particles emitted by the Sun. These particles would impart a small amount of momentum upon striking the sail, but this effect would be small compared to the force due to radiation pressure from light reflected from the sail. The force due to light pressure is about 5,000 times as strong as that due to the solar wind. A much larger type of sail, called a magsail, would employ the solar wind instead. It has been proposed that momentum exchange from reflection of photons is an unproven effect that may violate the thermodynamical Carnot rule. This criticism was raised by Thomas Gold of Cornell, leading to a public debate in the spring of 2003. The criticism has been refuted by Benjamin Diedrich, who pointed out that the Carnot rule does not apply to an open system, and by laboratory results demonstrating the effect. James Oberg has also refuted Dr.
Gold's analysis: "But 'solar sailing' isn't theoretical at all, and photon pressure has been successfully calculated for all large spacecraft. Interplanetary missions would arrive thousands of kilometers off course if correct equations had not been used. The effect for a genuine 'solar sail' will be even more ..."

One way to see that the conservation of energy is not a problem is to note that, when reflected by a solar sail, a photon undergoes a Doppler shift; its wavelength increases (and its energy decreases) by a factor dependent on the velocity of the sail, transferring energy from the Sun-photon system to the sail. This change of energy can easily be verified to be exactly equal (and opposite) to the energy change of the sail.

The Extended Heliocentric Reference Frame
- In 1991-92 the classical equations of solar sail motion in the solar gravitational field were written using a different mathematical formalism, namely the lightness vector, which fully characterizes the sailcraft dynamics. In addition, it was shown that a solar-sail spacecraft can reverse its motion (in the solar system) provided that its sail is sufficiently light that the sailcraft sail loading (σ) is no higher than 2.1 g/m². This value entails genuinely high-performance technology, but probably within the capabilities of emerging technologies.
- For describing the concept of fast sailing and some related items, we need to define two frames of reference. The first is an inertial Cartesian coordinate system centred on the Sun, the heliocentric inertial frame (HIF for short). For instance, the plane of reference, or XY plane, of HIF can be the mean ecliptic at some standard epoch such as J2000. The second Cartesian reference frame is the so-called heliocentric orbital frame (HOF for short), with its origin at the sailcraft barycenter. The x-axis of HOF is the direction of the Sun-to-sailcraft vector, or position vector; the z-axis is along the sailcraft orbital angular momentum; and the y-axis completes the counterclockwise triad. This definition can be extended to sailcraft trajectories including both counterclockwise and clockwise arcs of motion, in such a way that HOF is always a continuous, positively oriented triad. The sail orientation unit vector n can be specified in HOF by a pair of angles, e.g. the azimuth α and the elevation δ. Elevation is the angle that n forms with the xy-plane of HOF (-90° ≤ δ ≤ 90°). Azimuth is the angle that the projection of n onto the HOF xy-plane forms with the HOF x-axis (0° ≤ α < 360°). In HOF, azimuth and elevation are equivalent to longitude and latitude, respectively.
- The sailcraft lightness vector L = [λr, λt, λn] depends on α and δ (non-linearly) and on the thermo-optical parameters of the sail materials (linearly). Neglecting a small contribution from the aberration of light, one has the following particular cases (irrespective of the sail material):
- α = 0, δ = 0 ⇔ [λr, 0, 0] ⇔ λ = |L| = λr
- α ≠ 0, δ = 0 ⇔ [λr, λt, 0]
- α = 0, δ ≠ 0 ⇔ [λr, 0, λn]

A Flight Example
- Now suppose we have built a sailcraft with an all-metal sail of aluminium and chromium such that σ = 2 g/m². A launcher delivers the (packed) sailcraft to some millions of kilometers from the Earth. There, the whole sailcraft is deployed and begins its flight in the solar system (here, for the sake of simplicity, we neglect any gravitational perturbation from planets). A conventional spacecraft would move approximately in a circular orbit at about 1 AU from the Sun.
In contrast, a sailcraft like this one is sufficiently light to be able to escape the solar system or to point to some distant object in the heliosphere. If n is parallel to the local sunlight direction, then λr = λ = 0.725 (i.e. 1/2 < λ < 1); as a result, this sailcraft moves on a hyperbolic orbit. Its speed at infinity is equal to 20 km/s. Strictly speaking, this potential solar sail mission would be faster than the current record speed for missions beyond the planetary range, namely the Voyager 1 speed, which amounts to 17 km/s or about 3.6 AU/yr (1 AU/yr = 4.7404 km/s). However, a gain of three kilometers per second is not meaningful in the context of very deep space missions.
- As a consequence, one has to resort to some L having more than one component different from zero. The classical way to gain speed is to tilt the sail at some suitable positive α. If α = +21°, the sailcraft begins by accelerating; after about two months it achieves 32 km/s. However, this is a speed peak, inasmuch as its subsequent motion is characterized by a monotonic speed decrease towards an asymptotic value, or cruise speed, of 26 km/s. After 18 years the sailcraft is 100 AU away from the Sun. This would be a fairly fast mission. However, considering that a sailcraft with σ = 2 g/m² is technologically advanced, is there any other way to increase its speed significantly? Yes, there is. Let us try to explain this effect of non-linear dynamics.
- The above figures show that spiralling out from a circular orbit is not a convenient mode for sending a sailcraft away from the Sun, since it would not have a high enough excess speed. On the other hand, it is known from astrodynamics that a conventional Earth satellite has to perform a rocket maneuver at/around its perigee to maximize its speed at "infinity". Similarly, one can think of delivering a sailcraft close to the Sun to get much more energy from the solar photon pressure (which scales as 1/R²). For instance, suppose one starts from a point at 1 AU on the ecliptic and achieves a perihelion distance of 0.2 AU in the same plane by a two-dimensional trajectory. In general, there are three ways to deliver a sailcraft, initially at R0 from the Sun, to some distance R < R0:
- using an additional propulsion system to send the folded-sail sailcraft to the perihelion of an elliptical orbit; there, the sail is deployed with its axis parallel to the sunlight to obtain the maximum solar flux at the chosen distance;
- spiralling in with α slightly negative, namely via a slow deceleration;
- strongly decelerating with a "sufficiently large" negative sail-axis angle in HOF.
- The first way - although usable as a good reference mode - requires another high-performance propulsion system.
- The second way is ruled out in the present case of σ = 2 g/m²; as a matter of fact, a small α < 0 entails a λr that is too high and a negative λt that is too low in absolute value: the sailcraft would go far from the Sun with a decreasing speed (as discussed above).
- In the third way, there is a critical negative sail-axis angle in HOF, say αcr, such that for sail orientation angles α < αcr the sailcraft trajectory is characterized as follows:
- 1. the distance (from the Sun) first increases, achieves a local maximum at some point M, then decreases. The orbital angular momentum (per unit mass), say H, of the sailcraft decreases in magnitude.
It is convenient to define the scalar H = H•k, where k is the unit vector of the HIF Z-axis;
- 2. after a short time (a few weeks or less, in general), the sailcraft speed V = |V| achieves a local minimum at a point P. H continues to decrease;
- 3. past P, the sailcraft speed increases because the total vector acceleration, say A, begins to form an acute angle with the vector velocity V; in mathematical terms, dV/dt = A • V / V > 0. This is the first key point;
- 4. eventually, the sailcraft reaches a point Q where H = 0; here the sailcraft's total energy (per unit mass), say E (including the contribution of the solar pressure on the sail), shows a (negative) local minimum. This is the second key point;
- 5. past Q, the sailcraft - keeping the negative value of the sail orientation - regains angular momentum by reversing its motion (that is, H is oriented downward and H < 0). R keeps decreasing while dV/dt increases. This is the third key point;
- 6. the sailcraft energy continues to increase and a point S is reached where E = 0, namely the escape condition is satisfied; the sailcraft keeps accelerating. S is located before the perihelion. The (negative) H continues to decrease;
- 7. if the sail attitude α has been chosen appropriately (about -25.9° in this example), the sailcraft flies by the Sun at the desired (0.2 AU) perihelion, say U; however, unlike a Keplerian orbit (for which the perihelion is the point of maximum speed), past the perihelion V increases further while the sailcraft recedes from the Sun;
- 8. past U, the sailcraft is very fast and passes through a point, say W, of local maximum speed, since λ < 1. Speed then decreases but, at a few AU from the Sun (about 2.7 AU in this example), both the (positive) E and the (negative) H begin a plateau or cruise phase; V becomes practically constant and, most importantly, takes on a cruise value considerably higher than the speed of the circular orbit of the departure planet (the Earth, in this case). This example gives a cruise speed of 14.75 AU/yr or 69.9 km/s. At 100 AU the sailcraft speed is 69.6 km/s.

H-reversal sun flyby trajectory
- The original article includes a figure of this sailcraft trajectory. Only the initial arc around the Sun is plotted; the remaining part is rectilinear, in practice, and represents the cruise phase of the spacecraft. The sail is represented by a short segment with a central arrow indicating its orientation. Note that the complicated change of sail direction in HIF is achieved very simply by a constant attitude in HOF. That gives the whole trajectory a distinctly non-Keplerian character. Some remarks are in order.
- As mentioned in point 3, the strong sailcraft speed increase is due to both the solar-light thrust and the gravity acceleration vectors. In particular, dV/dt, the along-track component of the total acceleration, is positive and particularly high from point Q to point U. This suggests that if a quick sail attitude maneuver, α → -α, is performed just before H vanishes, the sailcraft motion continues to be a direct motion with a final cruise velocity equal in magnitude to the reversal one (because the above maneuver keeps the perihelion value unchanged). The basic principle both sailing modes share may be summarised as follows: a sufficiently light sailcraft needs to lose most of its initial energy in order subsequently to achieve the absolute maximum of energy compatible with its given technology.
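The constant-attitude-in-HOF behaviour described above can be illustrated with a toy planar propagator. The sketch below assumes an ideal, flat, perfectly reflecting sail held at a fixed cone angle α from the Sun-sailcraft line, and borrows the lightness number λ = 0.725 from the flight example; everything else (step size, ideal-reflection force law, the +21° spiral-out case as the default) is a simplifying assumption of mine. It reproduces only the qualitative accelerate-then-settle behaviour, not the exact figures, and a genuine H-reversal run would additionally need the sign conventions of the full HOF definition given earlier.

```python
# Toy 2D sailcraft propagator: ideal flat sail at a fixed cone angle alpha measured
# from the radial (Sun-to-sailcraft) direction toward the transverse direction.
# Sketch only -- real studies use the full non-ideal lightness vector and 3D dynamics.
import numpy as np

MU = 1.327e20                   # GM of the Sun, m^3/s^2
AU = 1.496e11                   # astronomical unit, m

def accel(r_vec, lam, alpha):
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    t_hat = np.array([-r_hat[1], r_hat[0]])            # in-plane transverse direction
    grav = -MU / r**2 * r_hat
    n_hat = np.cos(alpha) * r_hat + np.sin(alpha) * t_hat
    sail = lam * MU / r**2 * np.cos(alpha)**2 * n_hat   # ideal-reflection force law
    return grav + sail

def propagate(lam=0.725, alpha=np.radians(21.0), years=5.0, dt=3600.0):
    """Leapfrog integration starting from a circular 1 AU orbit."""
    r = np.array([AU, 0.0])
    v = np.array([0.0, np.sqrt(MU / AU)])               # circular-orbit speed
    a = accel(r, lam, alpha)
    path = [r.copy()]
    for _ in range(int(years * 365.25 * 86400 / dt)):
        v = v + 0.5 * dt * a        # kick
        r = r + dt * v              # drift
        a = accel(r, lam, alpha)
        v = v + 0.5 * dt * a        # kick
        path.append(r.copy())
    return np.array(path), v

path, v_end = propagate()
print("distance after 5 years (AU):", np.linalg.norm(path[-1]) / AU)
print("speed after 5 years (km/s):", np.linalg.norm(v_end) / 1000.0)
```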
- The above 2D class of new trajectories represents an ideal case. Realistic 3D fast sailcraft trajectories are considerably more complicated than the 2D cases. However, the general feature of producing a fast cruise speed can be further enhanced. Some of the references below contain strict mathematical algorithms for dealing with this topic. Recently (July 2005), an evolution of the above concept of fast solar sailing was discussed at an international symposium. A sailcraft with σ = 1 g/m² could achieve over 30 AU/yr in cruise (by keeping the perihelion at 0.2 AU), namely well beyond the cruise speed of any nuclear-electric spacecraft (at least as conceived today). The paper was published in the Journal of the British Interplanetary Society (JBIS) in 2006.

Solar sails in fiction
- On the Waves of Ether (По волнам эфира) by B. Krasnogorsky (1913): a spacecraft propelled by solar light pressure.
- Sunjammer by Arthur C. Clarke, a short story (in The Wind from the Sun anthology) describing a solar sail craft Earth-Moon race. It was originally published under the name "Sunjammer", but when Clarke learned of Poul Anderson's short story of the same name, he quickly changed the title.
- Dust of Far Suns by Jack Vance, also published as Sail 25, depicts a crew of space cadets on a training mission aboard a malfunction-ridden solar sail craft.
- The Mote in God's Eye (1975) by Larry Niven and Jerry Pournelle depicts an interstellar alien spacecraft driven by laser-powered light sails.
- Both Green Mars and Blue Mars by Kim Stanley Robinson contain a solar-reflecting mirror called a soletta, made of Earth-to-Mars solar-powered shuttles.
- Rocheworld by Robert L. Forward, a novel about an interstellar mission driven by laser-powered light sails.
- In the movie Tron, the characters Tron, Flynn and Yori use a computer model of a solar sailer to escape from the MCP.
- Solar sails appear in Star Wars Episode II: Attack of the Clones, in which Count Dooku has a combination hyperdrive and starsail spacecraft dubbed the Solar Sailer. In episode 6 of the animated TV series Star Wars: Clone Wars, Count Dooku also appears travelling on a spaceship equipped with a solar sail.
- In the film Star Trek IV: The Voyage Home, an officer aboard a crippled spaceship discusses a plan to construct a solar sail to take his ship to the nearest port.
- A solar sail appears in the Star Trek: Deep Space Nine episode "Explorers" as the primary propulsion system of the "Bajoran lightship". The vessel inadvertently exceeds the speed of light by sailing on a stream of tachyons.
- The Lady Who Sailed The Soul by Cordwainer Smith, a short story (part of The Rediscovery of Man collection) describing journeys on solar sail craft.
- In David Brin's Heaven's Reach, sentient machines use solar sails to harvest carbon from a red giant star's atmosphere to repair a Dyson Sphere-like construct.
- The book Accelerando by Charles Stross depicts a solar sail craft, powered by a series of very powerful lasers, being used to contact alien intelligences outside our solar system.
- The GSX-401FW Stargazer, a primarily unmanned Gundam mobile suit from the Cosmic Era timeline of the Gundam Seed metaseries, employs a propulsion system dubbed "Voiture Lumière" which utilizes a nano-particle solar sail.
- The R.L.S. Legacy, seen in the Disney movie "Treasure Planet", was powered entirely by solar sails.
- A solar sail appears in an early episode of the most recent incarnation of "The Outer Limits" (season one, "The Message"). The description refers to it as a planet, perhaps to avoid being a "spoiler".
- The 1983 Doctor Who serial Enlightenment depicts a race through the solar system using solar sail ships.
- The 1985 Japanese original video animation Odin: Photon Sailer Starlight, directed by Eiichi Yamamoto and Takeshi Shirado, features a space ship that travels on beams of light, with its massive sails, over great distances in space.
- The 2006 science-fiction novel Le Papillon des Étoiles (lit. The Butterfly of the Stars) by Bernard Werber tells the story of a community of humans who escape from Earth and set off towards a new habitable planet aboard a large spaceship pulled by a gigantic solar sail (one million square kilometers when deployed).
- A space yacht rigged with solar sails is described in the science-fiction novel Planet of the Apes by Pierre Boulle (original 1963 work).

References
- G. Vulpetti, L. Johnson and G. L. Matloff, Solar Sails: A Novel Approach to Interplanetary Flight, Springer, August 2008.
- Jerome L. Wright, Space Sailing; Wright was involved with JPL's effort to use a solar sail for a rendezvous with Halley's comet.
- Colin R. McInnes, Solar Sailing: Technology, Dynamics and Mission Applications; presents the state of the art in his book.
- NASA/CR-2002-211730, Chapter IV; presents the theory and the optimal NASA-ISP trajectory via the H-reversal sailing mode.
- G. Vulpetti, "The Sailcraft Splitting Concept", JBIS, Vol. 59, pp. 48-53, February 2006.
- G. L. Matloff, Deep-Space Probes: To the Outer Solar System and Beyond, 2nd ed., Springer-Chichester, UK, 2005.
- T. Taylor, D. Robinson, T. Moton, T. C. Powell, G. Matloff and J. Hall, "Solar Sail Propulsion Systems Integration and Analysis (for Option Period)", Final Report for NASA/MSFC, Contract No. H-35191D Option Period, Teledyne Brown Engineering Inc., Huntsville, AL, May 11, 2004.
- G. Vulpetti, "Sailcraft Trajectory Options for the Interstellar Probe: Mathematical Theory and Numerical Results", Chapter IV of NASA/CR-2002-211730, "The Interstellar Probe (ISP): Pre-Perihelion Trajectories and Application of Holography", June 2002.
- G. Vulpetti, "Sailcraft-Based Mission to the Solar Gravitational Lens", STAIF-2000, Albuquerque (New Mexico, USA), 30 Jan - 3 Feb 2000.
- G. Vulpetti, "General 3D H-Reversal Trajectories for High-Speed Sailcraft", Acta Astronautica, Vol. 44, No. 1, pp. 67-73, 1999.
- C. R. McInnes, Solar Sailing: Technology, Dynamics, and Mission Applications, Springer-Praxis Publishing Ltd, Chichester, UK, 1999.
- G. Genta and E. Brusa, "The AURORA Project: A New Sail Layout", Acta Astronautica, Vol. 44, No. 2-4, pp. 141-146, 1999.
- S. Scaglione and G. Vulpetti, "The Aurora Project: Removal of Plastic Substrate to Obtain an All-Metal Solar Sail", special issue of Acta Astronautica, Vol. 44, No. 2-4, pp. 147-150, 1999.
- J. L. Wright, Space Sailing, Gordon and Breach Science Publishers, Amsterdam, 1993.
http://www.reference.com/browse/Keplerian+orbit
13
16
Mathematical Solution to the Triangle of Velocities
The motion of an aircraft relative to the surface of the Earth is made up of two velocities: the aircraft moving relative to the air mass, and the air mass moving relative to the surface of the Earth. Adding these two vectors together gives us the aircraft's motion over the ground. Together, they form the "triangle of velocities". A wind that is blowing from the left will carry an aircraft onto a track that is to the right of the heading. In order to achieve a particular track from A to B, the aircraft must be turned into wind by an amount that corrects for the drift. Each of the three vectors in the triangle of velocities has two properties - magnitude and direction - so there are a total of six components. These are the True Air Speed (TAS) and heading (HDG) of the aircraft, the speed and direction of the wind (W/V), and the Ground Speed (GS) and track (TR) of the path over the ground. This is shown in Figure 1 - The Triangle of Velocities. Typical navigation problems involve finding two of these properties when given the other four. For example, most of the time we know what track and air speed we would like, and we also have the forecast wind velocity. What heading do we need to steer to follow that track, and what ground speed will we achieve? The usual method of solving this problem is with a "flight computer" such as the E-6B, also known as a "whiz wheel". The wind side of the flight computer consists of a circular rotating compass rose, marked with an index at the top, with a transparent screen in the middle to allow viewing of the slide plate underneath. The slide plate is marked with concentric speed arcs and radial drift lines. The computer lets you physically visualise the triangle of velocities and read off the answer you require. But what calculations is the flight computer performing? How do you solve the problem mathematically?

Calculating the Required Heading
In order to find the heading required, we need to make use of the sine rule. The sine rule states that for any triangle, the ratio between a given side length and the sine of the opposite angle is the same for each side of the triangle (Figure 2). We simply substitute in the parameters from Figure 1 and rearrange to solve for our heading (HDG). Thus sin(HDG − TR) = (W/TAS) × sin(WD − TR), where W is the wind speed and WD the direction the wind is blowing from; the wind correction angle HDG − TR is then added to the track to give the heading.

Calculating the Ground Speed
The ground speed is simply the magnitude of our track vector. The easiest way to determine this value is to divide the triangle of velocities into two right-angled triangles (Figure 3). The length of the track vector is then just the sum of the along-track component of our velocity through the air mass and the along-track component of the wind velocity. These equations are quite cumbersome, and if working out the solutions by hand then by far the quickest approach is to use the whiz wheel. However, now that we understand the mathematical solution, it is possible to enter it into a spreadsheet and speed up the flight planning process considerably.
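The whole procedure is also easy to script. The sketch below solves the triangle for heading and ground speed; the conventions (directions in degrees true, wind given as the direction it blows from, speeds in any consistent unit) and the example numbers are assumptions for illustration.

```python
# Solve the triangle of velocities: given desired track, TAS and forecast wind,
# find the heading to steer and the resulting ground speed.
# Conventions assumed: directions in degrees true; wind direction is the direction
# the wind blows FROM, as in a standard forecast; speeds in any consistent unit.
import math

def wind_triangle(track_deg, tas, wind_dir_deg, wind_speed):
    beta = math.radians(wind_dir_deg - track_deg)        # wind angle off the track
    # Sine rule: sin(WCA) / wind_speed = sin(beta) / TAS
    wca = math.asin(wind_speed * math.sin(beta) / tas)   # wind correction angle
    heading = (track_deg + math.degrees(wca)) % 360
    ground_speed = tas * math.cos(wca) - wind_speed * math.cos(beta)
    return heading, ground_speed

# Example: track 090, TAS 100 kt, wind 045/20 (a left-front quartering wind).
hdg, gs = wind_triangle(90, 100, 45, 20)
print(f"steer {hdg:05.1f} deg, ground speed {gs:.0f} kt")   # steer ~082, GS ~85 kt
```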
http://www.stevenhale.co.uk/main/rotorcraft/triangleofvelocities/
13
39
Unit 2: Idealism, Realism and Pragmatism in Education
By the end of this topic, you should be able to: 1. Explain the major world views of philosophy - idealism, realism, and pragmatism; and 2. Identify the contributions of these world views of philosophy, namely idealism, realism, and pragmatism, to the field of education.

Traditionally, philosophical methods have consisted of the analysis and clarification of concepts, arguments, theories, and language. Philosophers have analyzed theories and arguments, enhancing previous arguments and raising powerful objections that lead to the revision or abandonment of theories and lines of argument (Noddings, 1998). This topic will provide readers with some general knowledge of these philosophies. Basically, there are three general or world philosophies: idealism, realism, and pragmatism. Educators confront philosophical issues on a daily basis, often not recognizing them as such. In fact, in their daily practice educators formulate goals, discuss values, and set priorities. Hence, an educator who gets involved in dealing with goals, values and priorities soon realizes that this is philosophical work. Philosophy is concerned primarily with identifying beliefs about human existence and evaluating the arguments that support those beliefs. Develop a set of questions that may drive philosophical investigations.

7.1 IDEALISM
In Western culture, idealism is perhaps the oldest systematic philosophy, dating back at least to Plato in ancient Greece. Idealism is the philosophical theory that maintains that the ultimate nature of reality is based on mind or ideas. It holds that the so-called external or real world is inseparable from mind, consciousness, or perception. Idealism is any philosophy which argues that the only things knowable are consciousness or the contents of consciousness, not anything in the outside world, if such a place actually exists. Indeed, idealism often takes the form of arguing that the only real things are mental entities, not physical things, and that reality is somehow dependent upon the mind rather than independent of it. Some narrow versions of idealism argue that our understanding of reality reflects the workings of our mind first and foremost - that the properties of objects have no standing independent of the minds perceiving them. The nature and identity of the mind upon which reality is dependent is one issue that has divided idealists of various sorts. Some argue that there is some objective mind outside of nature; some argue that it is simply the common power of reason or rationality; some argue that it is the collective mental faculties of society; and some focus simply on the minds of individual human beings. In short, the main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism was often referred to as "idea-ism". Idealists believe that ideas can change lives. The most important part of a person is the mind; it is to be nourished and developed. To achieve a sufficient understanding of idealism, it is necessary to examine the works of selected outstanding philosophers usually associated with this philosophy.
Idealism comes in several flavors: (a) Platonic idealism - there exists a perfect realm of forms and ideas, and our world merely contains shadows of that realm; only ideas can be known or have any reality; (b) Religious idealism - this theory argues that all knowledge originates in perceived phenomena which have been organized by categories; and (c) Modern idealism - all objects are identical with some idea, and ideal knowledge is itself the system of ideas. How does modern idealism compare with the idealism of earlier periods? Discuss.

7.1.1 Platonic Idealism
Plato was a Greek philosopher of the 4th century B.C.E., a student of Socrates and teacher of Aristotle. The Academy, an ancient school of philosophy, was founded by Plato; at the beginning, this school had a physical existence at a site just outside the walls of Athens. According to Platonic idealism, there exists a perfect realm of forms and ideas, and our world merely contains shadows of that realm. Plato was a follower of Socrates, a truly innovative thinker of his time, who did not record his ideas but shared them orally through a question-and-answer approach. Plato presented his ideas in two works: The Republic and Laws. He believed in the importance of searching for truth because truth was perfect and eternal. He wrote about separating the world of ideas from the world of matter. Ideas are constant, but in the world of matter information and ideas are constantly changing because of their sensory nature. Therefore Plato's idealism suggested moving from opinion to true knowledge in the form of critical discussion, or the dialectic, since at the end of the discussion the ideas or opinions begin to synthesize as they work closer to truth. Knowledge is a process of discovery that can be attained through skilful questioning. For example, a particular tree, with a branch or two missing, possibly alive, possibly dead, and with the initials of two lovers carved into its bark, is distinct from the abstract form of tree-ness. A tree is the ideal that each of us holds that allows us to identify the imperfect reflections of trees all around us. Platonism is considered, in mathematics departments all over the world, to be the predominant philosophy of mathematics, especially as regards the foundations of mathematics. One statement of this philosophy is the thesis that mathematics is not created but discovered. The absence in this thesis of a clear distinction between mathematical and non-mathematical creation leaves the question open. Plato believed in the importance of state involvement in education and in moving individuals from concrete to abstract thinking. He believed that individual differences exist and that outstanding people should be rewarded for their knowledge. With this thinking came the view that girls and boys should have equal opportunities for education. In Plato's utopian society there were three social classes with distinct educations: workers, military personnel, and rulers. He believed that the ruler or king would be a good person with much wisdom, because it was only ignorance that led to evil.

7.1.2 Religious Idealism: Augustine
Religion and idealism are closely attached. Judaism, the originator of Christianity, and Christianity itself were influenced by many of the Greek philosophers who held idealism strongly.
Saint Augustine of Hippo - a bishop, a confessor, a doctor of the church, and one of the great thinkers of the Catholic Church - described the universe as being divided into the City of God and the City of Man. This parallels Plato's scheme of the world of ideas and the world of matter. Religious thinkers believed that man did not create knowledge but discovered it. Augustine, like Plato, did not believe that one person could teach another; instead, learners must be led to understanding through skilful questioning. Religious idealists see individuals as creations of God who have souls and contain elements of godliness that need to be developed. Augustine connected the philosophy of the Platonists and Neo-Platonists with Christianity; for instance, he saw the World of Ideas as the City of God. According to Ozmon and Craver (2008), today one can see the tremendous influence religious idealism has had on American education. Early Christians implemented the idea of systematic teaching, which was used consistently throughout new and established schools. Many Greek and Jewish ideas about the nature of humanity were taught. For centuries, the Christian church educated generations with idealist philosophy. In addition, idealism and the Judeo-Christian religion were unified in European culture by the Middle Ages and thereafter.

Augustine was also very influential in the history of education, where he introduced the theory of three different types of students and instructed teachers to adapt their teaching styles to each student's individual learning style. The three different kinds of students are: (a) the student who has been well educated by knowledgeable teachers; (b) the student who has had no education; and (c) the student who has had a poor education but believes himself to be well educated. If a student has been well educated in a wide variety of subjects, the teacher must be careful not to repeat what they have already learned, but to challenge the student with material which they do not yet know thoroughly. With the student who has had no education, the teacher must be patient, willing to repeat things until the student understands, and sympathetic. Perhaps the most difficult student, however, is the one with an inferior education who believes he understands something when he does not. Augustine stressed the importance of showing this type of student the difference between having words and having understanding, and of helping the student to remain humble in his acquisition of knowledge. An additional fundamental idea which Augustine introduced is that teachers should respond positively to the questions they receive from their students, even if the student has interrupted the teacher. Augustine also founded the controlled style of teaching. This teaching style ensures the student's full understanding of a concept because the teacher does not bombard the student with too much material; focuses on one topic at a time; helps students discover what they don't understand, rather than moving on too quickly; anticipates questions; and helps them learn to solve difficulties and find solutions to problems. In a nutshell, Augustine claimed there are two basic styles a teacher uses when speaking to students: (i) the mixed style, which includes complex and sometimes showy language to help students see the beautiful artistry of the subject they are studying; and (ii) the grand style, which is not quite as elegant as the mixed style but is exciting and heartfelt, with the purpose of igniting the same passion in the students' hearts.
Augustine balanced his teaching philosophy with the traditional Bible-based practice of strict discipline, agreeing with the use of punishment as an incentive for children to learn. Augustine believed all people tend toward evil, and students must therefore be physically punished when they allow their evil desires to direct their actions.

Identify and explain the aims, content, and the methods of education based on the educational philosophy of Aristotle.

7.1.3 Modern Idealism: Rene Descartes, Immanuel Kant, and Friedrich Hegel

By the beginning of the modern period in the fifteenth and sixteenth centuries, idealism had come to be largely identified with systematization and subjectivism. Some major features of modern idealism are: (a) belief that reality includes, in addition to the physical universe, that which transcends it, is superior to it, and which is eternal; this ultimate reality is non-physical and is best characterized by the term mind; (b) physical realities draw their meaning from the transcendent realities to which they are related; (c) that which is distinctive of human nature is mind, and mind is more than the physical entity, brain; (d) human life has a predetermined purpose: to become more like the transcendent mind; (e) man's purpose is fulfilled by development of the intellect and is referred to as self-realization; (f) ultimate reality includes absolute values; (g) knowledge comes through the application of reason to sense experience, and in so far as the physical world reflects the transcendent world, we can determine the nature of the transcendent; and (h) learning is a personal process of developing the potential within; it is not conditioning or pouring in facts, but self-realization - learning is a process of discovery.

The identification of modern idealism was encouraged by the writings and thoughts of René Descartes, Immanuel Kant, and Georg Wilhelm Friedrich Hegel.

(i) René Descartes

Descartes, a French philosopher, was born in the town of La Haye en Touraine. In 1614, he studied civil and canon law at Poitiers. In 1637, he published his Geometry, in which his combination of algebra and geometry gave birth to analytical geometry, known as Cartesian geometry. But the most important contribution Descartes made was his philosophical writings. Descartes was convinced that science and mathematics could be used to explain everything in nature, so he was the first to describe the physical universe in terms of matter and motion - seeing the universe as a giant mathematically designed engine.

Descartes wrote three important texts: Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, Meditations on First Philosophy, and Principles of Philosophy. In his Discourse on Method, he attempts to arrive at a fundamental set of principles that one can know as true without any doubt. To achieve this, he employs a method called metaphysical doubt, sometimes also referred to as methodological skepticism, in which he rejects any ideas that can be doubted and then re-establishes them in order to acquire a firm foundation for genuine knowledge. Initially, Descartes arrives at only a single principle: thought exists. Thought cannot be separated from me, therefore I exist. Most famously, this is known as cogito ergo sum, meaning "I think, therefore I am".
Therefore, Descartes concluded, if he doubted, then something or someone must be doing the doubting; therefore the very fact that he doubted proved his existence. Descartes decides that he can be certain that he exists because he thinks. He perceives his body through the use of the senses, but these have previously been proven unreliable. Hence, Descartes assumes that the only indubitable knowledge is that he is a thinking thing. Thinking is his essence, as it is the only thing about him that cannot be doubted. Descartes defines thought, or cogitatio, as what happens in me such that I am immediately conscious of it, insofar as I am conscious of it. Thinking is thus every activity of a person of which he is immediately conscious.

(ii) Immanuel Kant

Immanuel Kant, one of the world's great philosophers, was born in the East Prussian city of Königsberg, studied at its schools and university, and worked there as a tutor and professor for more than forty years. He never travelled far from the city in his entire life. In writing his Critique of Pure Reason and Critique of Practical Reason, Kant tried to make sense of rationalism and empiricism within the idealist philosophy. In his system, individuals could have a valid knowledge of human experience that was established by the scientific laws of nature.

The Critique of Pure Reason spells out the conditions for mathematical, scientific, and metaphysical knowledge in its Transcendental Aesthetic, Transcendental Analytic, and Transcendental Dialectic. Carefully distinguishing judgments as analytic or synthetic and as a priori or a posteriori, Kant held that the most interesting and useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience. Thus, it is we who impose the forms of space and time upon all possible sensation in mathematics, and it is we who render all experience coherent as scientific knowledge governed by traditional notions of substance and causality by applying the pure concepts of the understanding to all possible experience. However, regulative principles of this sort hold only for the world as we know it, and since metaphysical propositions seek a truth beyond all experience, they cannot be established within the bounds of reason. In the Critique of Practical Reason, Kant grounded the conception of moral autonomy upon our postulation of God, freedom, and immortality.

Kant's philosophy of education involved some aspects of character education. He believed in the importance of treating each person as an end and not as a means. He thought that education should include training in discipline, culture, discretion, and moral training. Teaching children to think and an emphasis on duty toward self and others were also vital points in his philosophies. Teaching a child to think is associated closely with Kant's notion of will, and the education of will means living according to the duties flowing from the categorical imperative. Kant's idealism is based on his concentration on thought processes and on the nature of the relationship between the mind and its objects on the one hand and universal moral ideas on the other. These systematic thoughts have greatly influenced all subsequent Western philosophy, idealistic and otherwise.

(iii) Georg Wilhelm Friedrich Hegel

Georg Wilhelm Friedrich Hegel, a German philosopher, is one of the creators of German idealism.
He was born in Stuttgart. Hegel developed a comprehensive philosophical framework, or system, to account in an integrated and developmental way for the relation of mind and nature, the subject and object of knowledge, and psychology, the state, history, art, religion, and philosophy. In particular, he developed a concept of mind or spirit that manifested itself in a set of contradictions and oppositions that it ultimately integrated and united, such as those between nature and freedom, and immanence and transcendence, without eliminating either pole or reducing it to the other. Hegel's most influential conceptions, however, are those of speculative logic or dialectic, absolute idealism, absolute spirit, negativity, sublation, the master / slave dialectic, ethical life, and the importance of history.

Hegelianism is a collective term for schools of thought following Hegel's philosophy, which can be summed up by the saying that the rational alone is real, meaning that all reality is capable of being expressed in rational categories. His goal was to reduce reality to a more synthetic unity within the system of transcendental idealism. In fact, one major feature of the Hegelian system is movement towards richer, more complex, and more complete synthesis. Three of Hegel's most famous books are Phenomenology of Mind, Logic, and Philosophy of Right. In these books, Hegel emphasizes three major aspects: logic, nature, and spirit. Hegel maintained that if his logical system were applied accurately, one would arrive at the Absolute Idea, which is similar to Plato's unchanging ideas. The difference, however, is that Hegel was sensitive to change: change, development, and movement are all central and necessary in his system. Nature was considered to be the opposite of the Absolute Idea. Ideas and nature together form the Absolute Spirit, which is manifested by history, art, religion, and philosophy. Hegel's idealism lies in the search for the final Absolute Spirit. Examining any one thing required examining or referring to another thing. Hegel's thinking is not as prominent as it once was because his system led to the glorification of the state at the expense of individuals.

Hegel thought that to be truly educated an individual must pass through various stages of the cultural evolution of mankind. Additionally, he reasoned that it was possible for some individuals to know everything essential in the history of humanity. The far-reaching influence of Hegel is due in a measure to the undoubted vastness of the scheme of philosophical synthesis which he conceived and partly realized. A philosophy which undertook to organize under the single formula of triadic development every department of knowledge, from abstract logic up to the philosophy of history, has a great deal of attractiveness to those who are metaphysically inclined. Hegel's philosophy is the highest expression of that spirit of collectivism which characterized the nineteenth century. In theology, Hegel revolutionized the methods of inquiry. The application of his notion of development to biblical criticism and to historical investigation is obvious to anyone who compares the spirit and purpose of contemporary theology with the spirit and purpose of the theological literature of the first half of the nineteenth century. In science, as well, and in literature, the substitution of the category of becoming for the category of being is a very patent fact, and is due to the influence of Hegel's method.
In political economy and political science the effect of Hegel's collectivistic conception of the state supplanted to a large extent the individualistic conception which was handed down from the eighteenth century to the nineteenth. Hegel also had considerable influence on the philosophy and theory of education. He appeared to think that to be truly educated, an individual must pass through the various stages of the cultural evolution of humankind. This idea can readily be applied to the development of science and technology. For instance, to a person who lived 300 years ago, electricity was unknown except as a natural occurrence, such as lightning. Today, by contrast, practically everyone depends on electrical power for everyday use and has a working, practical knowledge of it entirely outside the experience of a person from the past. A contemporary person can easily learn elementary facts about electricity in a relatively short time; that is, he or she can pass through, or learn, an extremely important phase of our cultural evolution simply because of the passage of time. In short, in his philosophy of education Hegel believed that only mind is real and that human thought, through participation in the universal spirit, progresses toward a destined ideal by a dialectical process of resolving opposites through synthesis.

According to Ozmon and Craver (2008), the most central thread of realism is the principle of independence. Realists believe that the study of ideas can be enhanced by the study of material things. More generally, realism is any philosophical theory that emphasizes the existence of a real world; the term stands for the theory that there is a reality quite independent of the mind. To understand this complex philosophy, one must examine its development and its major thinkers: philosophers such as Aristotle, Thomas Aquinas, Francis Bacon, John Locke, Alfred North Whitehead, and Bertrand Russell have contributed much to realist ideology.

7.2.1 Aristotle Realism

Aristotle (384 - 322 B.C.E.), a great Greek philosopher, believed that the world could be understood at a fundamental level through the detailed observation and cataloguing of phenomena. As a result of this belief, Aristotle literally wrote about everything: poetics, politics, ethics, logic, and the natural sciences. Aristotle was the first person to assert that nature is understandable. Aristotelian realism holds that ideas, such as the idea of God or the idea of a tree, can exist without matter, but matter cannot exist without form. In order to get to form, it was necessary to study material things. As a result, Aristotle used the syllogism, which is a process of "ordering statements about reality in a logical, systematic form" (Ozmon & Craver, 2008). This systematic form includes a major premise, a minor premise, and a conclusion: All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

Aristotle described the relation between form and matter with the Four Causes: (a) material cause - the matter from which something is made; (b) formal cause - the design that shapes the material object; (c) efficient cause - the agent that produces the object; and (d) final cause - the direction toward which the object is tending. Through these causes, Aristotle demonstrated that matter was constantly in a process of change. He believed that God, the Ultimate Reality, held all creation together. Organization was very important in Aristotle's philosophy. It was his thought that human beings, as rational creatures, are fulfilling their purpose when they think, and thinking is their highest characteristic. According to Aristotle, each thing had a purpose, and education's purpose was to develop the capacity for reasoning.
Proper character was formed by following the Golden Mean, the middle path between extremes. The importance of education in the philosophy of Aristotle was enormous, since the individual man could learn to use his reason to arrive at virtue, happiness, and political harmony only through the process of education. For Aristotle, the purpose of education is to produce a good man. Man is not good by nature, so he must learn to control his animal activities through the use of reason. Only when man behaves by habit and reason, according to his nature as a rational being, is he capable of happiness. In short, education must aim at the development of the rational person.

7.2.2 Religious Realism: Thomas Aquinas

Saint Thomas Aquinas (1225 - 1274) was a priest of the Roman Catholic Church and Doctor Communis. He is frequently referred to as Thomas, since Aquinas refers to his residence rather than his surname. He was the foremost classical proponent of natural theology and the father of the Thomistic school of philosophy and theology. The philosophy of Aquinas has exerted enormous influence on subsequent Christian theology, especially the Roman Catholic Church, and extending to Western philosophy in general. He stands as a vehicle and modifier of Aristotelianism, which he merged with the thought of Augustine.

Aquinas believed that for the knowledge of any truth whatsoever man needs divine help, that the intellect may be moved by God to its act. Besides, he believed that human beings have the natural capacity to know many things without special divine revelation, even though such revelation occurs from time to time. Aquinas believed that truth is known through reason - the natural revelation - and faith - the supernatural revelation. Supernatural revelation has its origin in the inspiration of the Holy Spirit and is made available through the teaching of the prophets, summed up in Holy Scripture, and transmitted by the Magisterium, the sum of which is called Tradition. On the other hand, natural revelation is the truth available to all people through their human nature, where certain truths all men can attain from correct human reasoning.

Thomism is the philosophical school that arose as a legacy of the work and thought of Thomas Aquinas; it is based on the Summa Theologica, meaning summary of theology. The Summa Theologica, written from 1265 to 1274, is the most famous work of Thomas Aquinas and is arguably second only to the Bible in importance to the Roman Catholic Church. Although the book was never finished, it was intended as a manual for beginners, a compilation of all of the main theological teachings of that time. It summarizes the reasoning for almost all points of Christian theology in the West. The Summa's topics follow a cycle: (a) the existence of God; (b) God's creation; (c) Man; (d) Man's purpose; (e) Christ; (f) the Sacraments; and (g) back to God. In these works, faith and reason are harmonized into a grand theologico-philosophical system which inspired the medieval philosophical tradition known as Thomism and which has been favored by the Roman Catholic Church ever since.

Aquinas made an important contribution to epistemology, recognizing the central part played by sense perception in human cognition. It is through the senses that we first become acquainted with existent, material things. Moreover, in the Summa Theologica, Aquinas records his famous five ways, which seek to prove the existence of God from the facts of change, causation, contingency, variation and purpose.
These cosmological and teleological arguments can be neatly expressed in syllogistic form as below:

(i) Way 1
• The world is in motion or motus.
• All changes in the world are due to some prior cause.
• There must be a prior cause for this entire sequence of changes, that is, God.

(ii) Way 2
• The world is a sequence of events.
• Every event in the world has a cause.
• There must be a cause for the entire sequence of events, that is, God.

(iii) Way 3
• The world might not have been.
• Everything that exists in the world depends on some other thing for its existence.
• The world itself must depend upon some other thing for its existence, that is, God.

(iv) Way 4
• There are degrees of perfection in the world.
• Things are more perfect the closer they approach the maximum.
• There is a maximum perfection, that is, God.

(v) Way 5
• Each body has a natural tendency towards its goal.
• All order requires a designer.
• This end-directedness of natural bodies must have a designing force behind it. Therefore each natural body has a designer, that is, God.

Thomas Aquinas tried to balance the philosophy of Aristotle with Christian ideas. He believed that truth was passed to humans by God through divine revelation, and that humans had the ability to seek out truth. Unlike Aristotle, Aquinas's realism came to the forefront because he held that human reality is not only spiritual or mental but also physical and natural. From the standpoint of a human teacher, the path to the soul lies through the physical senses, and education must use this path to accomplish learning. Proper instruction thus directs the learner to knowledge that leads to true being, progressing from sense experience toward understanding.

In view of education, Aquinas believed that the primary agencies of education are the family and the church; the state - or organized society - runs a poor third. The family and the church have an obligation to teach those things that relate to the unchanging principles of moral and divine law. In fact, Aquinas mentioned that the mother is the child's first teacher, and because the child is molded easily, it is the mother's role to set the child's moral tone. The church stands for the source of knowledge of the divine and should set the grounds for understanding God's law. The state should formulate and enforce law on education, but it should not abridge the educational primacy of the home and church.

7.2.3 Modern Realism: Francis Bacon and John Locke

Modern realism began to develop because classical realism did not adequately include a method of inductive thinking. If the original premise or truth was incorrect, then there was a possibility of error in the logic of the rest of the thinking. Modern realists therefore believed that a process of induction must be used to establish ideas. Of all the philosophers engaged in this effort, the two most outstanding were Francis Bacon and John Locke, who were involved in developing systematic methods of thinking and ways to increase human understanding.

(a) Francis Bacon

Bacon (1561 - 1626) was an English philosopher, statesman, scientist, lawyer, jurist, and author. He also served as a politician in the courts of Elizabeth I and James I. He was not successful in his political efforts, but his record in philosophical thought remains extremely influential. The Novum Organum is a philosophical work by Francis Bacon published in 1620. The title is a reference to Aristotle's work Organon, which was his treatise on logic and syllogism.
In Novum Organum, Bacon details a new system of logic he believes to be superior to the old ways of the syllogism of Aristotle. In this work, we see the development of the Baconian Method, consisting of procedures for isolating the form, nature or cause of a phenomenon, employing the method of agreement, method of difference, and method of associated variation.

Bacon felt that the problem with religious realism was that it began with dogma or belief and then worked toward deducing conclusions. He felt that science could not work with this process because it was inappropriate and ineffective for the scientific process to begin with preconceived ideas. Bacon felt that developing effective means of inquiry was vital because knowledge was power that could be used to deal effectively with life. He therefore devised the inductive method of acquiring knowledge, which begins with observations and then uses reasoning to make general statements or laws. Verification was needed before a judgment could be made. When data was collected, if contradictions were found, then the ideas would be discarded.

The Baconian Method consists of procedures for isolating the form, nature, or cause of a phenomenon, including the method of agreement, method of difference, and method of concomitant or associated variation. Bacon suggests that we draw up a list of all things in which the phenomenon we are trying to explain occurs, as well as a list of things in which it does not occur. Then, we rank the lists according to the degree in which the phenomenon occurs in each one. After that, we should be able to deduce what factors match the occurrence of the phenomenon in one list and do not occur in the other list, and also what factors change in accordance with the way the data had been ranked. From this, Bacon concludes that we should be able to deduce by elimination and inductive reasoning what is the cause underlying the phenomenon.

Applications of the scientific or inductive approach uncovered many errors in propositions that had originally been taken for granted. Bacon urged that people should re-examine all previously accepted knowledge. At the least, he considered that people should attempt to rid their minds of the various idols before which they bow down and that cloud their thinking. Bacon identified four such idols:

(i) Idols of the Tribe (Idola Tribus): This is humans' tendency to perceive more order and regularity in systems than truly exists, and is due to people following their preconceived ideas about things.

(ii) Idols of the Cave or Den (Idola Specus): This is due to individuals' personal weaknesses in reasoning due to particular personalities, likes and dislikes. For instance, if a woman has had several bad experiences with men with moustaches, she might conclude that all moustached men are bad; this is a clear case of faulty generalization.

(iii) Idols of the Marketplace (Idola Fori): This is due to confusions in the use of language and taking some words in science to have a different meaning than their common usage. For example, such words as liberal and conservative might have little meaning when applied to people, because a person could be liberal on one issue and conservative on another.

(iv) Idols of the Theatre (Idola Theatri): This is due to using philosophical systems which have incorporated mistaken methods. Bacon insisted on housekeeping of the mind, in which we should break away from the dead ideas of the past and begin again by using the method of induction.
Bacon did not propose an actual philosophy, but rather a method of developing philosophy. He wrote that, although philosophy at the time used the deductive syllogism to interpret nature, the philosopher should instead proceed through inductive reasoning from fact to axiom to law.

(b) John Locke

John Locke (1632 - 1704) was an English philosopher. Locke is considered the first of the British empiricists. His ideas had enormous influence on the development of epistemology and political philosophy, and he is widely regarded as one of the most influential Enlightenment thinkers, classical republicans, and contributors to liberal theory. Locke's writings influenced Voltaire and Rousseau, many Scottish Enlightenment thinkers, as well as the American revolutionaries. This influence is reflected in the American Declaration of Independence.

Some Thoughts Concerning Education is a 1693 discourse on education written by John Locke; for over a century, it was the most important philosophical work on education in England. In his Essay Concerning Human Understanding, written in 1690, Locke outlined a new theory of mind, contending that the child's mind was a tabula rasa, a blank slate or empty mind; that is, it did not contain any innate or inborn ideas. In describing the mind in these terms, Locke was drawing on Plato's Theaetetus, which suggests that the mind is like a wax tablet. Although Locke argued vigorously for the tabula rasa theory of mind, he nevertheless did believe in innate talents and interests. For example, he advises parents to watch their children carefully in order to discover their aptitudes, and to nurture their children's own interests rather than force them to participate in activities which they dislike.

John Locke believed that the mind was a blank slate at birth; information and knowledge were added through experience, perception, and reflection. He felt that what we know is what we experience. Another of Locke's most important contributions to eighteenth-century educational theory also stems from his theory of the self. He writes that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." That is, the associations of ideas made when young are more significant than those made when mature because they are the foundation of the self - they mark the tabula rasa.

7.2.4 Contemporary Realism: Alfred North Whitehead and Bertrand Russell

Contemporary realism developed around the twentieth century due to concerns with science and scientific problems of a philosophical nature (Ozmon and Craver, 2008). Two outstanding twentieth-century figures of contemporary realism were Alfred North Whitehead and Bertrand Russell.

(a) Alfred North Whitehead

Alfred North Whitehead (1861 - 1947) was an English mathematician who became a philosopher. He wrote on algebra, logic, foundations of mathematics, philosophy of science, physics, metaphysics, and education. He co-authored the epochal Principia Mathematica, a three-volume work on the foundations of mathematics, with Bertrand Russell. Whitehead's philosophical influence can be felt in all three of the main areas in which he worked - logic and the foundations of mathematics, the philosophy of science, and metaphysics - as well as in other areas such as ethics, education and religion.
Whitehead was interested in actively utilizing the knowledge and skills that were taught to students to a particular end. He believed we should aim at producing men who possess both culture and expert knowledge in some special direction. He even thought that education has to impart an intimate sense of the power and beauty of ideas, coupled with structure for those ideas, together with a particular body of knowledge which has peculiar reference to the life of the being possessing it.

(b) Bertrand Arthur William Russell

Bertrand Arthur William Russell, a British mathematician and philosopher, had embraced materialism in his early writing career. Russell earned his reputation as a distinguished thinker by his work in mathematics and logic. In 1903 he published The Principles of Mathematics, and by 1913 he and Alfred North Whitehead had published the three volumes of Principia Mathematica. The research which Russell did during this period established him as one of the founding fathers of modern analytical philosophy, moving toward mathematical quantification as the basis of philosophical generalization. Russell appears to have discovered his paradox in the late spring of 1901, while working on his Principles of Mathematics of 1903.

Russell's paradox is the most famous of the logical or set-theoretical paradoxes. The paradox arises within naive set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself, hence the paradox. For instance, some sets, such as the set of all teacups, are not members of themselves; other sets, such as the set of all non-teacups, are members of themselves. Call the set of all sets that are not members of themselves R. If R is a member of itself, then by definition it must not be a member of itself. Similarly, if R is not a member of itself, then by definition it must be a member of itself. The paradox has prompted much work in logic, set theory and the philosophy and foundations of mathematics.

The root of the word pragmatism is a Greek word meaning work. According to pragmatism, the truth or meaning of an idea or a proposition lies in its observable practical consequences rather than in anything metaphysical. It can be summarized by the phrase "whatever works is likely true." Because reality changes, whatever works will also change - thus, truth must also be changeable. Pragmatism is also a practical, matter-of-fact way of approaching or assessing situations or of solving problems. However, we might wonder why people insist on doing things and using processes that do not work. Several reasons explain why this happens: the weight of custom and tradition, fear and apathy, and the fact that habitual ways of thinking and doing seem to work even though they have lost their use in today's world. Pragmatism as a philosophical movement began in the United States in the late nineteenth century. The background of pragmatism can be found in the works of such people as Francis Bacon and John Locke.

7.3.1 Centrality of Experience: Francis Bacon and John Locke

Human experience is an important ingredient of pragmatist philosophy. John Locke talked about the mind as a "tabula rasa" and the world of experience as the verification of thought, or in other words: the mind is a tabula rasa at birth; the world of experience verifies thought. Another philosopher, Rousseau, followed Locke's idea but expanded the "centrality of experience" as the basis for a philosophical belief.
Rousseau saw people as basically good but corrupted by civilization. If we would avoid that corruption, then we should focus on the educational connection between nature and experience by building the education of our youth around their natural inquisitiveness while attending to their physiological, psychological and social developmental stages. Locke believed that as people have more experiences, they have more ideas imprinted on the mind and more with which to relate. However, he argued that one could have false ideas as well as true ones. The only way people can be sure their ideas are correct is by verifying them in the world of experience, such as through physical proof. Locke emphasized the idea of placing children in the most desirable environment for their education and pointed out the importance of environment in making people who they are. Nevertheless, Locke's notion of experience contained an internal flaw and caused difficulties: his firmness that the mind is a tabula rasa established the mind as a passive, malleable instrument.

7.3.2 Science and Society: Auguste Comte, Charles Darwin, and John Dewey

Bridging the transition between the Age of Enlightenment and the Modern Age, Auguste Comte (1798 - 1857) and Charles Darwin (1809 - 1882) shared a belief that science could have a profound and positive effect on society. Comte's commitment to the use of science to address the ills of society resulted in the study of sociology. The effects of Charles Darwin and his five years aboard the HMS Beagle are still echoing throughout the world of religion and education. Basically, Comte wrote on the use of science to solve social problems through sociology, and he very much influenced John Dewey's (1859 - 1952) ideas regarding the role of science in society. Darwin's On the Origin of Species, meanwhile, held that nature operates by a process of development without predetermined directions or ends and that reality is found not in being but in becoming; this promoted the pragmatist view that education is tied directly to biological and social development.

Figure 7.12: From left: Auguste Comte, Charles Darwin, and John Dewey

Auguste Comte was a French philosopher and one of the founders of sociology and positivism. He is responsible for the coining and introduction of the term altruism. Altruism is an ethical doctrine that holds that individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self-interest. Auguste Comte's version of altruism calls for living for the sake of others. One who holds to either of these ethics is known as an "altruist." One universal law that Comte saw at work in all the sciences he called the law of three phases. It is by his statement of this law that he is best known in the English-speaking world; namely, that society has gone through three phases: theological, metaphysical, and scientific. In Comte's lifetime, his work was sometimes viewed skeptically, with perceptions that he had elevated positivism to a religion and had named himself the Pope of Positivism. His emphasis on the interconnectedness of social elements was a forerunner of modern functionalism. His emphasis on a quantitative, mathematical basis for decision-making remains with us today. It is a foundation of the modern notion of positivism, modern quantitative statistical analysis, and business decision making.
His description of the continuing cyclical relationship between theory and practice is seen in modern business systems of Total Quality Management and Continuous Quality Improvement, where advocates describe a similar continuous cycle of theory and practice.

Charles Darwin wrote On the Origin of Species, published in 1859, a seminal work of scientific literature considered to be the foundation of evolutionary biology. The full title was On the Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life. For the sixth edition of 1872, the short title was changed to The Origin of Species. Darwin's book introduced the theory that populations evolve over the course of generations through a process of natural selection, and presented a body of evidence that the diversity of life arose through a branching pattern of evolution and common descent. He included evidence that he had accumulated on the voyage of the Beagle in the 1830s, and his subsequent findings from research, correspondence, and experimentation.

Various evolutionary ideas had already been proposed to explain new findings in biology. There was growing support for such ideas among dissenting anatomists and the general public, but during the first half of the 19th century the English scientific establishment was closely tied to the Church of England, while science was part of natural theology. Ideas about the transmutation of species were controversial as they conflicted with the beliefs that species were unchanging parts of a designed hierarchy and that humans were unique, unrelated to animals. The political and theological implications were intensely debated, but transmutation was not accepted by the scientific mainstream. The book was written to be read by non-specialists and attracted widespread interest on its publication.

Dewey, on the other hand, attempted to create a philosophy that captured and reflected the influences of the contemporary world on the preparation of future leaders through the educational system. The reliance on the source of knowledge has to be tempered by an understanding of the societal effects if the learning is to be meaningful, beneficial, or productive. John Dewey discussed the nature of experience: experience and nature are not two different things separated from each other; rather, experience itself is of as well as in nature. Dewey viewed method, rather than abstract answers, as a central concern, and thought that modern industrial society has submerged both individuality and sociality. He defined individuality as the interplay of personal choice and freedom with objective conditions, whereas sociality refers to a milieu or medium conducive to individual development. Moreover, Dewey believed that most religions have a negative effect because they tend to classify people. Dewey thought that two schools of social and religious reform exist: one holds that people must be constantly watched, guided and controlled to see that they stay on the right path, and the other holds that people will control their own actions intelligently. Dewey also believed that a truly aesthetic experience is one in which people are unified with their activity. Finally, Dewey stated that we should project art into all human activities, such as the art of politics and the art of education.

(a) How is pragmatism similar to and different from idealism and realism? Explain.
(b) Discuss your thoughts about why pragmatism is seen as most effective in a democratic society.
(c) Compare and contrast Dewey's philosophical thoughts with your society's approach and your own.

7.4 IDEALISM, REALISM, AND PRAGMATISM AND THEIR CRITIQUE IN EDUCATION

Developing a philosophical perspective on education is not easy. However, it is very important if a person wants to become a more effective professional educator. A sound philosophical perspective helps one see the interaction among students, curriculum, and the aims and goals of education, and shows how the various types of philosophy can support a teacher's personal and professional undertakings.

7.4.1 Idealism in Philosophy of Education

Idealism as a philosophy had its greatest impact during the nineteenth century. Its influence in today's world is less important than it has been in the past. Much of what we know as idealism today was influenced by German ideas of idealism. The main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism was often referred to as "idea-ism". Idealists believe that ideas can change lives. The most important part of a person is the mind. It is to be nourished and developed. Table 7.1 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for idealism in philosophy of education:

Table 7.1: Idealism in Philosophy of Education

7.4.2 Realism in Philosophy of Education

According to Ozmon and Craver (2008), "the central thread of realism is the principle of independence." The world of ideas and matter defined in idealism by Plato and Socrates do not exist separately and apart from each other for realists. They contend that material things can exist whether or not there is a human being around to appreciate or perceive them. Table 7.2 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for realism in philosophy of education:

Table 7.2: Realism in Philosophy of Education

7.4.3 Pragmatism in Philosophy of Education

Pragmatism is basically an American philosophy, but it has its roots in European thinking. Pragmatists believe that ideas are tools that can be used to cope with the world. They believe that educators should seek out new processes, incorporate traditional and contemporary ideas, or create new ideas to deal with the changing world. There is a great deal of stress placed on sensitivity to consequences, but pragmatists are quick to state that consideration should also be given to the method of arriving at those consequences. The means to solving a problem is as important as the end. The scientific method is important in the thinking process for pragmatists, but it is not meant to seem like sterile laboratory thinking. Pragmatists want to apply the scientific method for the greater good of the world. They believe that although science has caused many problems in our world, it can still be used to improve it. However, the progressive pragmatic movement believed in separating children by intelligence and ability in order to meet the needs of society. The softer side of that philosophy believed in giving children a great deal of freedom to explore, leading many people to label the philosophy of pragmatism in education as permissive. Table 7.3 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for pragmatism in philosophy of education:

Table 7.3: Pragmatism in Philosophy of Education

Which of these philosophies is most compatible with your beliefs as an educator? Why?

• Basically, there are three general or world philosophies: idealism, realism, and pragmatism.
• Idealism is the philosophical theory that maintains that the ultimate nature of reality is based on mind or ideas. It holds that the so-called external or "real world" is inseparable from mind, consciousness, or perception.
• Platonic idealism says that there exists a perfect realm of form and ideas and our world merely contains shadows of that realm; only ideas can be known or have any reality.
• Religious idealism argues that all knowledge originates in perceived phenomena which have been organized by categories.
• Modern idealism says that all objects are identical with some idea and the ideal knowledge is itself the system of ideas.
• Platonic idealism usually refers to Plato's theory of forms or doctrine of ideas. Plato held the realm of ideas to be absolute reality. Plato's method was the dialectic: all thinking begins with a thesis, as exemplified in the Socratic dialogues.
• Augustine discussed the universe as being divided into the City of God and the City of Man.
• Augustine believed that faith-based knowledge is determined by the church and all true knowledge came from God.
• Descartes was convinced that science and mathematics could be used to explain everything in nature, so he was the first to describe the physical universe in terms of matter and motion - seeing the universe as a giant mathematically designed engine.
• Kant held that the most interesting and useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience.
• Kant's philosophy of education involved some aspects of character education. He believed in the importance of treating each person as an end and not as a means.
• Hegel developed a concept of mind or spirit that manifested itself in a set of contradictions and oppositions that it ultimately integrated and united, such as those between nature and freedom, and immanence and transcendence, without eliminating either pole or reducing it to the other.
• "Hegelianism" is a collective term for schools of thought following Hegel's philosophy, which can be summed up by the saying that "the rational alone is real", which means that all reality is capable of being expressed in rational categories.
• The most central thread of realism is the principle or thesis of independence. This thesis holds that reality, knowledge, and value exist independently of the human mind.
• Aristotle believed that the world could be understood at a fundamental level through the detailed observation and cataloguing of phenomena.
• Aquinas believed that truth is known through reason - the natural revelation - and faith - the supernatural revelation.
• Thomism is the philosophical school that arose as a legacy of the work and thought of Thomas Aquinas; it is based on the Summa Theologica, meaning "summary of theology".
• Aquinas mentioned that the mother is the child's first teacher, and because the child is molded easily, it is the mother's role to set the child's moral tone; the church stands for the source of knowledge of the divine and should set the grounds for understanding God's law. The state should formulate and enforce law on education.
• Bacon devised the inductive method of acquiring knowledge, which begins with observations and then uses reasoning to make general statements or laws. Verification was needed before a judgment could be made. When data was collected, if contradictions were found, then the ideas would be discarded.
• The "Baconian Method" consists of procedures for isolating the form, nature, or cause of a phenomenon, including the method of agreement, method of difference, and method of concomitant or associated variation.
• Bacon identified the "idols", called the Idols of the Mind, which he described as things that obstructed the path of correct scientific reasoning.
• John Locke sought to explain how we develop knowledge. He attempted a rather modest philosophical task: "to clear the ground of some of the rubbish" that deters people from gaining knowledge. He was trying to do away with the kind of thinking that Bacon called "idols".
• Locke outlined a new theory of mind, contending that the child's mind was a "tabula rasa" or "blank slate" or "empty mind"; that is, it did not contain any innate or inborn ideas.
• Whitehead was interested in actively "utilising the knowledge and skills that were taught to students to a particular end". He believed we should aim at "producing men who possess both culture and expert knowledge in some special direction".
• Russell was one of the founding fathers of modern analytical philosophy, moving toward mathematical quantification as the basis of philosophical generalization.
• Russell's paradox is the most famous of the logical or set-theoretical paradoxes. The paradox arises within naive set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself, hence the paradox.
• Pragmatism is a practical, matter-of-fact way of approaching or assessing situations or of solving problems.
• Human experience is an important ingredient of pragmatist philosophy.
• John Locke talked about the mind as a "tabula rasa" and the world of experience as the verification of thought, or in other words: the mind is a tabula rasa at birth; the world of experience verifies thought.
• Rousseau followed Locke's idea but with an expansion of the "centrality of experience" as the basis for a philosophical belief. Rousseau saw people as basically good but corrupted by civilization. If we would avoid that corruption, then we should focus on the educational connection between nature and experience by building the education of our youth around their natural inquisitiveness.
• Locke believed that as people have more experiences, they have more ideas imprinted on the mind and more with which to relate.
• Comte is responsible for the coining and introduction of the term altruism. Altruism is an ethical doctrine that holds that individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self-interest.
• One universal law that Comte saw at work in all the sciences he called the "law of three phases". It is by his statement of this law that he is best known in the English-speaking world; namely, that society has gone through three phases: theological, metaphysical, and scientific.
• Dewey attempted to create a philosophy that captured and reflected the influences of the contemporary world on the preparation of future leaders through the educational system. The reliance on the source of knowledge has to be tempered by an understanding of the societal effects if the learning is to be meaningful, beneficial, or productive.
• John Dewey discussed the nature of experience: experience and nature are not two different things separated from each other; rather, experience is of as well as in nature.
• Idealists believe that ideas can change lives. The most important part of a person is the mind.
It is to be nourished and developed.
• The world of ideas and matter defined in idealism by Plato and Socrates do not exist separately and apart from each other for realists. They contend that material things can exist whether or not there is a human being around to appreciate or perceive them.
• Pragmatists believe that educators should seek out new processes, incorporate traditional and contemporary ideas, or create new ideas to deal with the changing world.

References:
Dewey, J. Democracy and education.
Dewey, J. Experience and education.
Locke, J. Two treatises of government, ed. Peter Laslett.
Locke, J. (1975). An essay concerning human understanding, ed. P. H. Nidditch.
Locke, J. (1989). Some thoughts concerning education, ed. John W. Yolton and Jean S. Yolton.
Locke, J. Some thoughts concerning education; and Of the conduct of the understanding, ed. Ruth W. Grant and Nathan Tarcov.
Philosophy of education.
Ozmon, H. A., & Craver, S. M. (2008). Philosophical foundations of education (8th ed.).
Turner, W. (1910). Philosophy of Immanuel Kant. In The Catholic Encyclopedia.
Wirth, A. G. (1966). John Dewey as educator: His design for work in education (1894-1904).
Bohac, P. (2001, February 6). Dewey's pragmatism. Chapter 4: Pragmatism and Education. Retrieved September 3, 2009, from http://www.brendawelch.com/uwf/pragmatism.pdf
http://www.kheru2006.webs.com/4idealism_realism_and_pragmatigsm_in_education.htm
An adder is a device that will add together two bits and give the result as the output. The bits being added together are called the "addends". Adders can be concatenated in order to add together two binary numbers of an arbitrary length. There are two kinds of adders - half adders and full adders. A half adder just adds two bits together and gives a two-bit output. A full adder adds two inputs and a carried input from another adder, and also gives a two-bit output.

Half Adders

When adding two separate bits together there are four possible combinations. Each of these is shown to the left with its solution. It can easily be seen that the bit in the right-hand column (the "ones" column) is a 1 only when the addends are different. XORing the addends together can therefore give us the right-hand bit. This bit is called the sum and is the modulo-2 sum of the addends (i.e. the solution if you loop round to zero again once you pass one). The left-hand bit reads 1 only when both addends are 1, so an AND gate can be used to generate this bit, called the carried bit. As a summary, the diagram to the left shows the complete half adder with the addends being represented by A and B, the sum represented by S and the carried bit represented by C. The truth table is as follows (the numbers in brackets are the weights of the bits - each addend is one, as is the sum; the carried bit is two):

|A (1)|B (1)|S (1)|C (2)|
|0|0|0|0|
|0|1|1|0|
|1|0|1|0|
|1|1|0|1|

Full Adders

The downfall of half adders is that while they can generate a carry out output, they cannot deal with a carry in signal. This means that they can only ever be stand-alone units, and cannot be concatenated to add multiple bit numbers. A full adder solves this problem by adding three numbers together - the two addends as in the half adder, and a carry in input. A full adder can be constructed from two half adders by connecting A and B to the input of one half adder, connecting the sum from that to an input to the second adder, connecting the carry in, Cin, to the other input and ORing the two half adder carry outputs to give the final carry output, Cout. The diagram below shows a full adder at the gate level. A version of this diagram showing the two half adder modules outlined is available here. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers. The logic expressions for this full adder are S = A XOR B XOR Cin and Cout = (A AND B) OR (Cin AND (A XOR B)). By applications of Boolean algebra, the second statement simplifies to Cout = (A AND B) OR (A AND Cin) OR (B AND Cin). Giving the inputs as a three-digit number, where the first digit is the state of A, the second the state of B and the third the state of Cin, below is a description of the function of this full adder:
- All combinations resulting in one (100, 010, 001) produce a high at the sum output, as the presence of only one high at the second XOR gate (the first gate gives a low when A and B are low and a high when either A or B is high) gives a high output at the sum output. The fact that only one input is ever high prevents either AND gate from going high and giving a high Cout.
- All combinations resulting in two (011, 101, 110) produce a high carry out and a low sum:
- 011 sends the first XOR gate high as A and B are different. Combined with the high at Cin, this sends the top AND gate high and therefore the carry output high as well. The presence of two highs at the second XOR gate holds the sum low.
- 101 works in exactly the same way as 011, as A and B are interchangeable.
- 110: The high A and B force the first XOR low; this low, combined with the low from Cin, gives a low at the sum.
The highs at A and B send the lower AND gate high and therefore Cout is high.
- When the inputs are 111, the first XOR gate is low due to the two high inputs, and the second XOR gate goes high, as it has one low input (from the first XOR gate) and one high input (from the Cin input). This gives a high sum output. The fact that both A and B are high triggers the lower AND gate, and the Cout output goes high as well.

The truth table for a full adder is given below with the weights of the inputs and outputs:

|A (1)|B (1)|Cin (1)|S (1)|Cout (2)|
|0|0|0|0|0|
|0|0|1|1|0|
|0|1|0|1|0|
|0|1|1|0|1|
|1|0|0|1|0|
|1|0|1|0|1|
|1|1|0|0|1|
|1|1|1|1|1|

If A and B are both high then the AND gate connected to them gives a high output, but the XOR gate gives a low, forcing the other AND gate low. If the "sum" output of the first half adder is high, then one of A or B must be low, meaning that the AND gate connected to them is forced low. This means that the two inputs to the OR gate combining the half adder carry outs can never both be high, so this gate can be replaced with an XOR gate (OR and XOR differ only when both inputs are high). This means that only two kinds of gates are needed, and a full adder can be realised with only two ICs. These can be run in stages, with the carry out of one stage driving the carry in of the next. This is discussed in the next section.

Multiple-bit Addition

In the previous section we saw how to add two one-bit binary numbers, taking into account any carried digits from the previous binary order of magnitude (BOOM) and any digits carried to the next BOOM. By concatenating (stringing together) these one-bit adders, we can make an adder to add a binary number of arbitrary length. To do this, connect the carry in of the first stage to ground, as no bits are carried in from before the least significant bit. Then connect the carry out of the first stage to the carry in of the next, and so on for as many stages as you like. Below is the layout for a 4-bit adder.

Bear in mind that the result will only be complete when the carry outputs have registered in each of the stages in turn. This ripple effect is why this kind of adder is called a ripple carry adder. This takes a small amount of time, and for large (32-bit or more) adders it may take several hundred nanoseconds, which may be a problem if high speed is needed. There are much faster adders that can be made that don't have a ripple effect, such as carry lookahead adders.

IC Implementation

The 4008 IC is a 4-bit full adder, which can in turn be concatenated with others to provide any length of number.
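The half adder, full adder, and ripple carry behaviour described above are easy to verify in software. Below is a minimal Python sketch (not part of the original page; the function names are illustrative) that builds a full adder from two half adders exactly as in the gate-level description, then chains full adders into a ripple carry adder.

```python
def half_adder(a, b):
    """Return (sum, carry) for two one-bit addends."""
    return a ^ b, a & b            # XOR gives the sum, AND gives the carry

def full_adder(a, b, cin):
    """Build a full adder from two half adders plus an OR for the carries."""
    s1, c1 = half_adder(a, b)      # first half adder: A and B
    s, c2 = half_adder(s1, cin)    # second half adder: (A xor B) and Cin
    return s, c1 | c2              # the two carries can never both be high, so OR (or XOR) works

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)   # carry ripples into the next stage
        out.append(s)
    return out, carry

# 4-bit example: 0110 (6) + 0111 (7) = 1101 (13)
bits_a = [0, 1, 1, 0]   # LSB first
bits_b = [1, 1, 1, 0]   # LSB first
result, carry_out = ripple_carry_add(bits_a, bits_b)
print(result, carry_out)   # [1, 0, 1, 1] 0  -> 1101 binary = 13
```

The bit lists are ordered least significant bit first, mirroring the way the carry propagates from the first stage to the last in the ripple carry adder.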
http://en.wikibooks.org/wiki/Practical_Electronics/Adders
This chapter describes how to calculate the radiation fields. It also provides general information about the antenna characteristics that can be derived based on the radiation fields.

Once the currents on the circuit are known, the electromagnetic fields can be computed. They can be expressed in the spherical coordinate system attached to your circuit, as shown in Co-polarization angle. The electric and magnetic fields contain terms that vary as 1/r, 1/r^2, etc. It can be shown that the terms that vary as 1/r^2, 1/r^3, ... are associated with the energy storage around the circuit. They are called the reactive field or near-field components. The terms having a 1/r dependence become dominant at large distances and represent the power radiated by the circuit. Those are called the far-field components (E ff, H ff). In the direction parallel to the substrate (theta = 90 degrees), parallel plate modes or surface wave modes, which vary as 1/sqrt(r), may be present, too. Although they will dominate in this direction and account for a part of the power emitted by the circuit, they are not considered to be part of the far-fields.

The radiated power is a function of the angular position and the radial distance from the circuit. The variation of power density with angular position is determined by the type and design of the circuit. It can be graphically represented as a radiation pattern. The far-fields can only be computed at those frequencies that were calculated during a simulation. The far-fields will be computed for a specific frequency and for a specific excitation state. They will be computed in all directions (theta, phi) in the open half space above and/or below the circuit. Besides the far-fields, derived radiation pattern quantities such as gain, directivity, and axial ratio are computed.

Based on the radiation fields, polarization and other antenna characteristics such as gain, directivity, and radiated power can be derived. The far-field can be decomposed in several ways. You can work with the basic decomposition in (E theta, E phi). However, with linearly polarized antennas, it is sometimes more convenient to decompose the far-fields into (E co, E cross), which is a decomposition based on an antenna measurement set-up. For circularly polarized antennas, a decomposition into left hand and right hand polarized field components (E lhp, E rhp) is most appropriate. The different components are related to each other through the characteristic impedance of the open half sphere under consideration, and the fields can be normalized.

From the left hand and right hand circular polarized field components, the circular polarization axial ratio (AR cp) can be calculated. The axial ratio describes how well the antenna is circularly polarized. If its amplitude equals one, the fields are perfectly circularly polarized. It becomes infinite when the fields are linearly polarized. The far-fields can also be decomposed into a co polarized and a cross polarized field, based on the co-polarization angle. From those, a "linear polarization axial ratio" (AR lp) can be derived. This value illustrates how well the antenna is linearly polarized. It equals one when perfect linear polarization is observed and becomes infinite for a perfectly circularly polarized antenna.

The effective angle, or beam solid angle, is the solid angle through which all power emanating from the antenna would flow if the maximum radiation intensity were constant for all angles over the beam area.
It is measured in steradians and is given by Omega_A = P_rad / U_max, where U_max is the maximum radiation intensity and P_rad is the total radiated power. The maximum directivity is given by D_max = 4*pi*U_max / P_rad, and the maximum gain by G_max = 4*pi*U_max / P_inj, where P_inj is the real power, in watts, injected into the circuit. For the planar cut, the angle phi (Cut Angle), which is relative to the x-axis, is kept constant. The angle theta, which is relative to the z-axis, is swept to create a planar cut. Theta is swept from 0 to 360 degrees. This produces a view that is perpendicular to the circuit layout plane. Planar (vertical) cut illustrates a planar cut. In layout, there is a fixed coordinate system such that the monitor screen lies in the XY plane. The X-axis is horizontal, the Y-axis is vertical, and the Z-axis is normal to the screen. To choose which plane is probed for a radiation pattern, the cut angle must be specified. For example, if the circuit is rotated by 90 degrees, the cut angle must also be changed by 90 degrees if you wish to obtain the same radiation pattern from one orientation to the next. For a conical cut, the angle theta, which is relative to the z-axis, is kept constant. Phi, which is relative to the x-axis, is swept from 0 to 360 degrees to create a conical cut. This produces a view that is parallel to the circuit layout plane. Conical cut illustrates a conical cut. If you choose to view results immediately after the far-field computation is complete, enable Open display when computation completed. When Data Display is used for viewing the far-field data, a data display window containing the default plot types of the data display template of your choice will be opened automatically when the computation is finished. The default template, called FarFields, bundles four groups of plots: - Linear Polarization with E_co, E_cross, AR_lp. - Circular Polarization with E_lhp, E_rhp, AR_cp. - Absolute Fields with the absolute field values. - Power with Gain, Directivity, Radiation Intensity, Efficiency. For more information, please refer to About Antenna Characteristics. If 3D Visualization is selected in the Radiation Pattern dialog, the normalized electric far-field components for the complete hemisphere are saved in ASCII format in the file <project_dir>/mom_dsn/<design_name>/proj.fff. The data is saved in the following format: #Frequency <f> GHz /* loop over <f> */ #Excitation #<i> /* loop over <i> */ #Begin cut /* loop over phi */ <theta> <phi_0> <real(E_theta)> <imag(E_theta)> <real(E_phi)> <imag(E_phi)> /* loop over <theta> */ #End cut #Begin cut <theta> <phi_1> <real(E_theta)> <imag(E_theta)> <real(E_phi)> <imag(E_phi)> /* loop over <theta> */ #End cut : : #Begin cut <theta> <phi_n> <real(E_theta)> <imag(E_theta)> <real(E_phi)> <imag(E_phi)> /* loop over <theta> */ #End cut In the proj.fff file, E_theta and E_phi represent the theta and phi components, respectively, of the far-field values of the electric field. Note that the fields are described in the spherical coordinate system (r, theta, phi) and are normalized. The normalization constant for the fields can be derived from the values found in the proj.ant file. The proj.ant file, stored in the same directory, contains the antenna characteristics.
The data is saved in the following format: Excitation <i> /* loop over <i> */ Frequency <f> GHz /* loop over <f> */ Maximum radiation intensity <U> /* in Watts/steradian */ Angle of U_max <theta> <phi> /* both in deg */ E_theta_max <mag(E_theta_max)> ; E_phi_max <mag(E_phi_max)> E_theta_max <real(E_theta_max)> <imag(E_theta_max)> E_phi_max <real(E_phi_max)> <imag(E_phi_max)> Ex_max <real(Ex_max)> <imag(Ex_max)> Ey_max <real(Ey_max)> <imag(Ey_max)> Ez_max <real(Ez_max)> <imag(Ez_max)> Power radiated <excitation #i> <prad> /* in Watts */ Effective angle <eff_angle_st> steradians <eff_angle_deg> degrees Directivity <dir> dB /* in dB */ Gain <gain> dB /* in dB */ The maximum electric field components (E_theta_max, E_phi_max, etc.) are those found at the angular position where the radiation intensity is maximal. They are all in volts. The radiation results that can be viewed include: - Far-fields, including E fields for different polarizations and axial ratios, in 3D and 2D formats - Antenna parameters such as gain, directivity, and direction of main radiation, in tabular format This section describes how to view the data. In EMDS for ADS RF mode, radiation results are not available for display. For general information about radiation patterns and antenna parameters, refer to About Radiation Patterns. In EMDS for ADS, computing the radiation results is included as a post-processing step. The Far Field menu item appears in the main menu bar only if radiation results are available. If a radiation results file is available, it is loaded automatically. The command Set Port Solution Weights (in the Current menu) has no effect on the radiation results. The excitation state for the far-fields is specified in the radiation pattern dialog box before computation. You can also read in far-field data from other projects. First, select the project containing the far-field data that you want to view, then load the data: - Choose Projects > Select Project. - Select the name of the Momentum or Agilent EMDS project that you want to use. - Click Select Momentum or Select Agilent EMDS. - Choose Projects > Read Field Solution. - When the data is finished loading, it can be viewed in far-field plots and as antenna parameters. To display a 3D far-field plot: - Choose Far Field > Far Field Plot. - Select the view in which you want to insert the plot. - Select the E Field format: - E = sqrt(mag(E Theta)^2 + mag(E Phi)^2) - E Theta - E Phi - E Left - E Right - Circular Axial Ratio - E Co - E Cross - Linear Axial Ratio - If you want the data normalized to a value of one, enable Normalize. For Circular and Linear Axial Ratio choices, set the Minimum dB. Also set the Polarization Angle for E Co, E Cross, and Linear Axial Ratio. - By default, a linear scale is used to display the plot. If you want to use a logarithmic scale, enable Log Scale. Set the minimum magnitude that you want to display, in dB. - Click OK. - Click Display Options. - A white, dashed line appears lengthwise on the far-field. You can adjust the position of the line by setting the Constant Phi Value, in degrees, using the scroll bar. - Adjust the translucency of the far-field by using the scroll bar under Translucency. - Click Done. You can take a 2D cross section of the far-field and display it on a polar or rectangular plot. The cut type can be either planar (phi is fixed, theta is swept) or conical (theta is fixed, phi is swept).
The figure below illustrates a planar cut (or phi cut) and a conical cut (or theta cut), and the resulting 2D cross section as it would appear on a polar plot. The procedure that follows describes how to define the 2D cross section. To define a cross section of the 3D far-field: - Choose Far Field > Cut 3D Far Field. - If you want a conical cut, choose Theta Cut. If you want a planar cut, choose Phi Cut. - Set the angle of the conical cut using the Constant Theta Value scroll bar, or set the angle of the planar cut using the Constant Phi Value scroll bar. - Click Apply to accept the setting. The cross section is added to the Cut Plots list. - Repeat these steps to define any other cross sections. - Click Done to dismiss the dialog box. A cross section can be displayed: - On a polar plot - On a rectangular plot, in magnitude versus angle In the figure below, a cross section is displayed on a polar and a rectangular plot. To display a 2D far-field plot: - Choose Far Field > Plot Far Field Cut. - Select a 2D cross section from the 2D Far Field Plots list. The type of cut (phi or theta) and the angle identify each cross section. - Select the view that you want to use to display the plot. - Select the E-field format. - Select the plot type, either Cartesian or Polar. - If you want the data normalized to a value of one, enable Normalize. - By default, a linear scale is used to display the plot. If you want to use a logarithmic scale, enable Log Scale. If available, set the minimum magnitude that you want to display, in dB; also, set the polarization angle. - Click OK. Choose Far Field > Antenna Parameters to view gain, directivity, radiated power, maximum E-field, and direction of maximum radiation. The data is based on the frequency and excitation state specified in the radiation pattern dialog. The parameters include: - Radiated power, in watts - Effective angle, in degrees - Directivity, in dB - Gain, in dB - Maximum radiation intensity, in watts per steradian - Direction of maximum radiation intensity, theta and phi, both in degrees - E_theta, in magnitude and phase, in this direction - E_phi, in magnitude and phase, in this direction - E_x, in magnitude and phase, in this direction - E_y, in magnitude and phase, in this direction - E_z, in magnitude and phase, in this direction In the antenna parameters, the magnitude of the E-fields is in volts.
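As a rough illustration of how far-field data in the proj.fff layout described above could be post-processed, here is a minimal Python sketch. The parser and the helper names (read_fff_cuts, circular_components) are illustrative assumptions, not part of the ADS/EMDS product, and the left/right-hand decomposition uses one common sign convention that may differ from the tool's own definition; only the file layout itself is taken from the documentation above.

import math

def read_fff_cuts(path):
    """Collect (theta, phi, E_theta, E_phi) samples from a proj.fff-style file,
    skipping the '#Frequency', '#Excitation', '#Begin cut' and '#End cut' headers."""
    samples = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            if len(parts) != 6:
                continue  # not a data row
            theta, phi, re_t, im_t, re_p, im_p = map(float, parts)
            samples.append((theta, phi, complex(re_t, im_t), complex(re_p, im_p)))
    return samples

def circular_components(E_theta, E_phi):
    """Left/right-hand circular components and circular axial ratio AR_cp.
    AR_cp is 1 for pure circular polarization and tends to infinity for pure linear,
    matching the behaviour described in the text above."""
    E_lhp = (E_theta + 1j * E_phi) / math.sqrt(2.0)   # sign convention assumed
    E_rhp = (E_theta - 1j * E_phi) / math.sqrt(2.0)
    num = abs(E_lhp) + abs(E_rhp)
    den = abs(abs(E_lhp) - abs(E_rhp))
    ar_cp = float("inf") if den == 0.0 else num / den
    return E_lhp, E_rhp, ar_cp

if __name__ == "__main__":
    for theta, phi, Et, Ep in read_fff_cuts("proj.fff"):
        E_abs = math.sqrt(abs(Et) ** 2 + abs(Ep) ** 2)   # same |E| as the 3D plot format
        E_lhp, E_rhp, ar_cp = circular_components(Et, Ep)
        print(theta, phi, E_abs, abs(E_lhp), abs(E_rhp), ar_cp)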
http://cp.literature.agilent.com/litweb/pdf/ads2008/emds/ads2008/Radiation_Patterns_and_Antenna_Characteristics.html
13
10
This view shows a new picture of the dust ring around the bright star Fomalhaut from the Atacama Large Millimeter/submillimeter Array (ALMA). The underlying blue picture shows an earlier picture obtained by the NASA/ESA Hubble Space Telescope. The new ALMA image has given astronomers a major breakthrough in understanding a nearby planetary system and provided valuable clues about how such systems form and evolve. Note that ALMA has so far only observed a part of the ring. Photo by ALMA (ESO/NAOJ/NRAO); visible light image: NASA/ESA Hubble Space Telescope A new observatory still under construction has given astronomers a major breakthrough in understanding a nearby planetary system and provided valuable clues about how such systems form and evolve. Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have discovered that planets orbiting the star Fomalhaut must be much smaller than originally thought. This is the first published science result from ALMA in its first period of open observations for astronomers worldwide. The discovery was made possible by exceptionally sharp ALMA images of a disk, or ring, of dust orbiting Fomalhaut, which lies about 25 light-years from Earth. It helps resolve a controversy among earlier observers of the system. The ALMA images show that both the inner and outer edges of the thin, dusty disk have very sharp edges. That fact, combined with computer simulations, led the scientists to conclude that the dust particles in the disk are kept within it by the gravitational effect of two planets — one closer to the star than the disk and one more distant. Their calculations also indicated the probable size of the planets — larger than Mars but no larger than a few times the size of the Earth. This is much smaller than astronomers had previously thought. In 2008, a NASA/ESA Hubble Space Telescope image had revealed the inner planet, then thought to be larger than Saturn, the second-largest planet in our solar system. However, later observations with infrared telescopes failed to detect the planet. That failure led some astronomers to doubt the existence of the planet in the Hubble image. Also, the Hubble visible-light image detected very small dust grains that are pushed outward by the star’s radiation, thus blurring the structure of the dusty disk. The ALMA observations, at wavelengths longer than those of visible light, traced larger dust grains — about 1 millimeter in diameter — that are not moved by the star’s radiation. They clearly reveal the disk’s sharp edges and ring-like structure, which indicate the gravitational effect of two planets. “Combining ALMA observations of the ring’s shape with computer models, we can place very tight limits on the mass and orbit of any planet near the ring,” said Aaron Boley, a Sagan Fellow at the University of Florida and a leader of the study. “The masses of these planets must be small; otherwise the planets would destroy the ring.” The small sizes of the planets explain why the earlier infrared observations failed to detect them, the scientists said. The ALMA research shows that the ring’s width is about 16 times the distance from the Sun to Earth, and is only one-seventh as thick as it is wide. “The ring is even more narrow and thinner than previously thought,” said Matthew Payne, also of the University of Florida. The ring is about 140 times the Sun-Earth distance from the star. In our solar system, Pluto is about 40 times more distant from the Sun than Earth. 
“Because of the small size of the planets near this ring and their large distance from their host star, they are among the coldest planets yet found orbiting a normal star,” added Aaron Boley. The scientists observed the Fomalhaut system in September and October of 2011, when only about a quarter of ALMA’s planned 66 antennas were available. When construction is completed next year, the full system will be much more capable. Even in this Early Science phase, though, ALMA was powerful enough to reveal the telltale structure that had eluded earlier millimeter-wave observers. “ALMA may be still under construction, but it is already the most powerful telescope of its kind. This is just the beginning of an exciting new era in the study of discs and planet formation around other stars," concluded European Southern Observatory astronomer and team member Bill Dent.
http://www.astronomy.com/~/link.aspx?_id=2de6cfd6-1f2b-44fd-b17f-ba12d8775fdf
13
110
Quantitative Introduction to General Relativity For a general overview of the theory, see General Relativity. This article deals with mathematical concepts appropriate for a student at late university or graduate level. General Relativity is a mathematical extension of Special Relativity. GR views space-time as a 4-dimensional manifold, which looks locally like Minkowski space, and which acquires curvature due to the presence of massive bodies. Thus, near massive bodies, the geometry of space-time differs significantly from Euclidean geometry: for example, the sum of the angles in a triangle is not exactly 180 degrees. Just as in classical physics, objects travel along geodesics in the absence of external forces. Importantly, though, near a massive body, geodesics are no longer straight lines. It is this phenomenon of objects traveling along geodesics in a curved spacetime that accounts for gravity. The mathematical expression of the theory of general relativity takes the form of the Einstein field equations, a set of ten nonlinear partial differential equations. While solving these equations is quite difficult, examining them provides valuable insight into the structure and meaning of the theory. In their general form, the Einstein field equations are written as a single tensor equation in abstract index notation relating the curvature of spacetime to sources of curvature such as energy density and momentum: Gμν = (8πG/c^4) Tμν. In this form, Gμν represents the Einstein tensor, G is the same gravitational constant that appears in the law of universal gravitation, and Tμν is the stress-energy tensor (sometimes referred to as the energy-momentum tensor). The indices μ and ν range from zero to three, representing the time coordinate and the three space coordinates in a manner consistent with special relativity. The left side of the equation — the Einstein tensor — describes the curvature of spacetime in the region under examination. The right side of the equation describes everything in that region that affects the curvature of spacetime. As we can clearly see even in this simplified form, the Einstein field equations can be solved "in either direction." Given a description of the gravitating matter, energy, momentum and fields in a region of spacetime, we can calculate the curvature of spacetime surrounding that region. On the other hand, given a description of the curvature of a region of spacetime, we can calculate the motion of a test particle anywhere within that region. Even at this level of examination, the fundamental thesis of the general theory of relativity is obvious: motion is determined by the curvature of spacetime, and the curvature of spacetime is determined by the matter, energy, momentum and fields within it. The right side of the equation: the stress-energy tensor In the Newtonian approximation, the gravitational vector field is directly proportional to mass. In general relativity, mass is just one of several sources of spacetime curvature. The stress-energy tensor, Tμν, includes all of these sources. Put simply, the stress-energy tensor quantifies all the stuff that contributes to spacetime curvature, and thus to the gravitational field. First we will define the stress-energy tensor technically, then we'll examine what that definition means. In technical terms, the stress-energy tensor represents the flux of the μ component of 4-momentum across a surface of constant coordinate xν. Fine. But what does that mean?
In classical mechanics, it's customary to refer to coordinates in space as x, y and z. In general relativity, the convention is to talk instead about coordinates x0, x1, x2, and x3, where x0 is the time coordinate otherwise called t, and the other three are just the x, y and z coordinates. So "a surface of constant coordinate xν" simply means a 3-plane perpendicular to the xν axis. The flux of a quantity can be visualized as the magnitude of the current in a river: the flux of water is the amount of water that passes through a cross-section of the river in a given interval of time. So more generally, the flux of a quantity across a surface is the amount of that quantity that passes through that surface. Four-momentum is the special relativity analogue of the familiar momentum from classical mechanics, with the property that the time coordinate of a particle's four-momentum is simply the energy of the particle; the other three components of four-momentum are the same as in classical momentum. So putting that all together, the stress-energy tensor is the flux of 4-momentum across a surface of constant coordinate. In other words, the stress-energy tensor describes the density of energy and momentum, and the flux of energy and momentum in a region. Since under the mass-energy equivalence principle we can convert mass units to energy units and vice-versa, this means that the stress-energy tensor describes all the mass and energy in a given region of spacetime. Put even more simply, the stress-energy tensor represents everything that gravitates. The stress-energy tensor, being a tensor of rank two in four-dimensional spacetime, has sixteen components that can be written as a 4 × 4 matrix. The components can be grouped by their physical interpretations: - T00 - energy density, which is equivalent to mass-energy density; this component includes the mass contribution - T10, T20, T30 - the components of momentum density - T01, T02, T03 - the components of energy flux The space-space components of the stress-energy tensor are simply the stress tensor from classical mechanics. Those components can be interpreted as: - T12, T13, T21, T23, T31, T32 - the components of shear stress, or stress applied tangential to the region - T11, T22, T33 - the components of normal stress, or stress applied perpendicular to the region; normal stress is another term for pressure. Pay particular attention to the first column of the above matrix: the components T00, T10, T20, and T30 are interpreted as densities. A density is what you get when you measure the flux of 4-momentum across a 3-surface of constant time. Put another way, the instantaneous value of 4-momentum flux is density. Similarly, the diagonal space components of the stress-energy tensor — T11, T22 and T33 — represent normal stress, or pressure. Not some weird, relativistic pressure, but plain old ordinary pressure, like what keeps a balloon inflated. Pressure also contributes to gravitation, which raises a very interesting observation. Imagine a box of air, a rigid box that won't flex. Let's say that the pressure of the air inside the box is the same as the pressure of the air outside the box. If we heat the box — assuming of course that the box is airtight — then the temperature of the gas inside will rise. In turn, as predicted by the ideal gas law, the pressure within the box will increase. The box is now heavier than it was.
More precisely, increasing the pressure inside the box raised the value of the pressure contribution to the stress-energy tensor, which increases the curvature of spacetime around the box. What's more, merely increasing the temperature alone caused spacetime around the box to curve more, because the kinetic energy of the gas molecules inside the box also contributes to the stress-energy tensor, via the time-time component T00. All of these things contribute to the curvature of spacetime around the box, and thus to the gravitational field created by the box. Of course, in practice, the contributions of increased pressure and kinetic energy would be minuscule compared to the mass contribution, so it would be extremely difficult to measure the gravitational effect of heating the box. But on larger scales, such as the sun, pressure and temperature contribute significantly to the gravitational field. In this way, we can see that the stress-energy tensor neatly quantifies all static and dynamic properties of a region of spacetime, from mass to momentum to electric charge to temperature to pressure to shear stress. Thus, the stress-energy tensor is all we need on the right-hand side of the equation in order to relate matter, energy and, well, stuff to curvature, and thus to the gravitational field. Example 1: Stress-energy tensor for a vacuum The simplest possible stress-energy tensor is, of course, one in which all the values are zero. This tensor represents a region of space in which there is no matter, energy or fields, not just at a given instant, but over the entire period of time in which we're interested in the region. Nothing exists in this region, and nothing happens in this region. So one might assume that in a region where the stress-energy tensor is zero, the gravitational field must also necessarily be zero. There's nothing there to gravitate, so it follows naturally that there can be no gravitation. In fact, it's not that simple. We'll discuss this in greater detail in the next section, but even a cursory qualitative examination can tell us there's more going on than that. Consider the gravitational field of an isolated body. A test particle placed somewhere near but outside of the body will move along a geodesic in spacetime, freely falling inward toward the central mass. A test particle with some constant linear velocity component perpendicular to the interval between the particle and the mass will move in a conic section. This is true even though the stress-energy tensor in that region is exactly zero. This much is obvious from our intuitive understanding of gravity: gravity affects things at a distance. But exactly how and why this happens, in the model of the Einstein field equations, is an interesting question which will be explored in the next section. Example 2: Stress-energy tensor for an ideal dust Imagine a time-dependent distribution of identical, massive, non-interacting, electrically neutral particles. In general relativity, such a distribution is called a dust. Let's break down what this means. - Time-dependent: The distribution of particles in our dust is not constant; that is to say, the particles may be in motion. The overall configuration you see when you look at the dust depends on the time at which you look at it, so the dust is said to be time-dependent. - Identical: The particles that make up our dust are all exactly the same; they don't differ from each other in any way. - Massive: Each particle in our dust has some rest mass.
Because the particles are all identical, their rest masses must also be identical. We'll call the rest mass of an individual particle m0. - Non-interacting: The particles don't interact with each other in any way: they don't collide, and they don't attract or repel each other. This is, of course, an idealization; since the particles are said to have mass m0, they must at least interact with each other gravitationally, if not in other ways. But we're constructing our model in such a way that gravitational effects between the individual particles are so small as to be negligible. Either the individual particles are very tiny, or the average distance between them is very large. This same assumption neatly cancels out any other possible interactions, as long as we assume that the particles are far enough apart. - Electrically neutral: In addition to the obvious electrostatic effect of two charged particles either attracting or repelling each other — thus violating our "non-interacting" assumption — allowing the particles to be both charged and in motion would introduce electrodynamic effects that would have to be factored into the stress-energy tensor. We would greatly prefer to ignore these effects for the sake of simplicity, so by definition, the particles in our dust are all electrically neutral. The easiest way to visualize an ideal dust is to imagine, well, dust. Dust particles sometimes catch the light of the sun and can be seen if you look closely enough. Each particle is moving in apparent ignorance of the rest, its velocity at any given moment dependent only on the motion of the air around it. If we take away the air, each particle of dust will continue moving in a straight line at a constant velocity, whatever its velocity happened to be at the time. This is a good visualization of an ideal dust. We're now going to zoom out slightly from our model, such that we lose sight of the individual particles that make up our dust and can consider instead the dust as a whole. We can fully describe our dust at any event P — where an event is defined as a point in space at an instant in time — by measuring the density ρ and the 4-velocity u at P. If we have those two pieces of information about the dust at every point within it at every moment in time, then there's literally nothing else to say about the dust: it's been fully described. Let's start by figuring out the density of dust at the event P, as measured from the perspective of an observer moving along with the flow of dust at P. The density ρ is calculated very simply: ρ = m0 n, where m0 is the mass of each particle and n is the number of particles in a cubical volume one unit of length on a side centered on P. This quantity is called the proper density, meaning the density of the dust as measured within the dust's own reference frame. In other words, if we could somehow imagine the dust to measure its own density, the proper density is the number it would get. Clearly proper density is a function of position, since it varies from point to point within the dust; the dust might be more "crowded" over here, less "crowded" over there. But it's also a function of time, because the configuration of the dust itself is time-dependent. If you measure the proper density at some point in space at one instant of time, then measure it at the same point in space at a different instant of time, you may get a different measurement.
By convention, when dealing with a quantity that depends both on position in space and on time, physicists simply say that the quantity is a function of position, with the understanding that they're referring to a "position" in four-dimensional spacetime. The other quantity we need is 4-velocity. Four-velocity is an extension of three-dimensional velocity (or 3-velocity). In three-dimensional space, 3-velocity is a vector with three components. Likewise, in four-dimensional spacetime, 4-velocity is a vector with four components. Directly measuring 4-velocity is an inherently tricky business, since one of its components describes motion along a "direction" that we cannot see with our eyes: motion through time. The math of special relativity lets us calculate the 4-velocity of a moving particle given only its 3-velocity v (with components vi where i = 1,2,3) and the speed of light. The time component of 4-velocity is given by u0 = γc, and the space components u1, u2 and u3 by ui = γvi, where γ is the boost, or Lorentz factor, γ = 1/sqrt(1 − v²/c²), and where v², in turn, is the square of the Euclidean magnitude of the 3-velocity vector v: v² = (v1)² + (v2)² + (v3)². Therefore, if we know the 3-velocity of the dust at event P, then we can calculate its 4-velocity. (For more details on the how and why of 4-velocity, refer to the article on special relativity.) Just as proper density is a function of position in spacetime, 4-velocity also depends on position. The 4-velocity of our dust at a given point in space won't necessarily be the same as the 4-velocity of the dust at another point in space. Likewise, the 4-velocity at a given point at a given time may not be the same as the 4-velocity of the dust at the same point at a different time. It helps to think of 4-velocity as the velocity of the dust through a point in both space and time. Assembling the stress-energy tensor Since the density and the 4-velocity fully describe our dust, we have everything we need to calculate the stress-energy tensor: T = ρ u ⊗ u, where the symbol ⊗ indicates a tensor product. The tensor product of two vectors is a tensor of rank two, so the stress-energy tensor must be a tensor of rank two. In an arbitrary coordinate frame xμ, the contravariant components of the stress-energy tensor for an ideal dust are given by Tμν = ρ uμ uν. From this equation, we can now calculate the contravariant components of the stress-energy tensor for an ideal dust. We start with the contravariant time-time component: T00 = ρ u0 u0 = ρ γ²c². If we rearrange the terms in this equation slightly, something important becomes apparent: T00 = γ² (ρc²). Recall that ρ is a density quantity, in mass per unit volume. By the mass-energy equivalence principle, we know that E = mc². So we can interpret this component of the stress-energy tensor, which is written here in terms of mass-energy, to be equivalent to an energy density. The off-diagonal components of the tensor — Tμν where μ and ν are not equal — are calculated the same way; for example, T01 = ρ u0 u1 = γ²c (ρv1). Again, recall that ρ is a quantity of mass per unit volume. Multiplying a mass times a velocity gives momentum, so we can interpret ρv1 as the density of momentum along the x1 direction, multiplied by the constants c and γ². Momentum density is an extremely difficult quantity to visualize, but it's a quantity that comes up over and over in general relativity. If nothing else, one can take comfort in the fact that momentum density is mathematically equivalent to the product of mass density and velocity, both of which are much more intuitive quantities.
Note that the off-diagonal components of the tensor are equal to each other: Tμν = Tνμ (for example, T01 = γ²c ρv1 = T10). In other words, in the case of an ideal dust, the stress-energy tensor is symmetric. A rank two tensor is said to be symmetric if Tab = Tba. Diagonal space components The diagonal space components of the stress-energy tensor are calculated the same way; for example, T11 = ρ u1 u1 = γ²ρ (v1)². In this case, we're multiplying a mass density, ρ, by the square of a component of 4-velocity. By dimensional analysis, we can see that this product has units of kg·m⁻¹·s⁻². Recall that force has units of kg·m·s⁻². If we divide the units of the diagonal space component by the units of force, we get m⁻², the units of inverse area. So the diagonal space components of the stress-energy tensor are expressed in terms of force per unit area. Force per unit area is, of course, the traditional measure of pressure in three-dimensional mechanics. So we can interpret the diagonal space components of the stress-energy tensor as the components of "4-pressure" in spacetime. The big picture We now know everything we need to assemble the entire stress-energy tensor, all sixteen components, and look at it as a whole. Every entry has the form Tμν = ρ uμ uν, and the large-scale structure of the tensor now becomes apparent. This is the stress-energy tensor of an ideal dust. The tensor is composed entirely out of the proper density and the components of 4-velocity. When velocities are low, the coefficient γ², even though it's a squared value, remains extremely close to one. The time-time component includes a mass density multiplied by the square of the speed of light, so it has to do with energy. The rest of the top row and left column all include the speed of light as a coefficient, as well as density and velocity; in the case of an ideal dust, which is made up of non-interacting particles, the energy flux along any basis direction is the same as the momentum density along that direction. This is not the case in other, less simple models, but it's true here. The diagonal space components of the tensor represent pressure. For example, the T11 component represents the pressure that would be exerted on a plane perpendicular to the x1 direction. The off-diagonal space components represent shear stress. The T12 component, for instance, represents the stress that would be exerted in the x2 direction on a plane perpendicular to the x1 axis. The overall process for calculating the stress-energy tensor for any system is fairly similar to the example given here. It involves taking into account all the matter and energy in the system, describing how the system evolves over time, and breaking that evolution down into components which represent individual densities and fluxes along different directions relative to a chosen coordinate basis. As can easily be imagined, the task of constructing a stress-energy tensor for a system of arbitrary complexity can be a very daunting one. Fortunately, gravity is an extremely weak interaction, as interactions go, so on the scales where gravity is interesting, much of the complexity of a system can be approximated. For instance, there is absolutely nothing in the entire universe that behaves exactly like the ideal dust described here; every massive particle interacts, in one way or another, with other massive particles. No matter what, a real system is going to be very much more complex than this approximation. Yet, the ideal dust solution remains a much-used approximation in theoretical physics specifically because gravity is such a weak interaction.
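To make the assembly just described concrete, here is a minimal Python sketch that builds the 4 × 4 dust stress-energy tensor from a proper density and a 3-velocity. It is an illustrative numerical example only: the function name, the SI units, and the input values are assumptions introduced here, not part of the article.

import numpy as np

C = 299_792_458.0  # speed of light in m/s

def dust_stress_energy(rho0, v):
    """Contravariant T^{mu nu} = rho0 * u^mu * u^nu for an ideal dust.

    rho0 : proper (rest-frame) mass density in kg/m^3
    v    : 3-velocity (vx, vy, vz) in m/s
    """
    v = np.asarray(v, dtype=float)
    v2 = np.dot(v, v)                        # squared magnitude of the 3-velocity
    gamma = 1.0 / np.sqrt(1.0 - v2 / C**2)   # Lorentz factor
    u = gamma * np.concatenate(([C], v))     # 4-velocity u^mu = gamma * (c, vx, vy, vz)
    return rho0 * np.outer(u, u)             # outer product gives the rank-two tensor

T = dust_stress_energy(rho0=1.0e-20, v=(3.0e5, 0.0, 0.0))
print(T[0, 0])           # time-time component, rho0 * gamma^2 * c^2 (an energy density)
print(T[0, 1], T[1, 0])  # equal, as expected: the tensor is symmetric
print(T[1, 1])           # diagonal space component, the "pressure-like" term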
On the scales where gravity is worth studying, many distributions of matter, including interstellar nebulae, clusters of galaxies, even the whole universe really do behave very much like an ideal dust. The left side of the equation: the Einstein curvature tensor We will recall that the Einstein field equations can be written as a single tensor equation: The right side of the equation consists of some constants and the stress-energy tensor, described in significant detail in the previous section. The right side of the equation is the "matter" side. All matter and energy in a region of space is described by the right side of the equation. The left side of the equation, then, is the "space" side. Matter tells space how to curve, and space tells matter how to move. So the left side of the Einstein field equation must necessarily describe the curvature of spacetime in the presence of matter and energy. Some assumptions about the universe Before we proceed into a discussion of what curvature is and how the Einstein equation describes it, we must first pause to state some fundamental assumptions about the universe. The first assumption we're going to make is that spacetime is continuous. In essence, this means that for any event P in spacetime — that is, any point in space and moment in time — there exists some local neighborhood of P where the intrinsic properties of spacetime differ from those at P by only an infinitesimal amount. The second assumption we're going to make is that spacetime is differentiable everywhere. In other words, the geometry of spacetime doesn't have any sharp creases in it. If we hold these two assumptions to be true, then a convenient property of spacetime emerges: Given any event P, there exists a local neighborhood where spacetime can be treated as flat, that is, having zero curvature. It is not necessarily true that all of spacetime be flat — in fact, it most definitely is not — but given any event in spacetime, there exists some neighborhood around it that is flat. This neighborhood may be arbitrarily small in both time and space, but it is guaranteed to exist as long as our two assumptions remain valid. With these two assumptions and this convenient property in hand, we will now examine what it means to say that spacetime is curved. Flatness versus curvature The Euclidean plane is an infinite, flat, two-dimensional surface. A sheet of paper is a good approximation of the Euclidean plane. Onto this plane, we can project a set of Cartesian coordinates. By "Cartesian," we mean that the coordinate axes are straight lines, that they are perpendicular, and that the unit lengths of the axes are equal. A fancier term for a Cartesian coordinate system is an orthonormal basis. Note carefully the distinction between the Euclidean plane and Cartesian coordinates. The plane exists as a thing in and of itself, just as a blank piece of paper does. It has certain properties, which we'll get into below. Those properties are intrinsic to the plane. That is, the properties don't have anything to do with the coordinates we project onto the plane. The plane is a geometric object, and the coordinates are the method by which we measure the plane. (The emphasis on the word measure there is not accidental; please keep this idea in the foreground of your mind as we continue.) Cartesian coordinates are not the only coordinates we can use in the Euclidean plane. 
For example, instead of having axes that are perpendicular to each other, we could choose axes that are straight lines, but that meet at some non-perpendicular angle. These types of coordinates are called oblique. For that matter, we're not bound to use straight-line coordinates at all. We could instead choose polar coordinates, wherein every point on the plane is described by a distance from a fixed but arbitrary point and an angle from a fixed but arbitrary direction. Polar coordinates are often more convenient than Cartesian coordinates. For example, when navigating a ship on the ocean, the location of a fixed point is usually described in terms of a bearing and a distance, where the distance is the straight-line distance from the ship to the point, and the bearing is the clockwise angle relative to the direction in which the ship is sailing. Polar coordinates in two and three dimensions are often used in physics for similar reasons. But there's a fundamental problem with polar coordinates that is not present with Cartesian coordinates. In Cartesian coordinates, every point on the Euclidean plane is identified by exactly one set of real numbers: there is precisely one set of x and y coordinates for every point, and every point corresponds to precisely one set of coordinates. This is not true in polar coordinates. What are the unique polar coordinates for the origin? The radial distance is obviously zero, but what is the angle? In actuality, if the radial distance is zero, any angle can be used, and the coordinates will identify the same point. The one-to-one correspondence between points in the plane and pairs of coordinates breaks down at the origin. In mathematical terms, polar coordinates in the Euclidean plane have a coordinate singularity at the origin. A coordinate singularity is a point in space where ambiguities are introduced, not because of some intrinsic property of space, but because of the coordinate basis you chose. So clearly there may exist a reason to choose one coordinate system over another when measuring — there's that word again — the Euclidean plane. Polar coordinates have a singularity at the origin — in this case, a point of undefined angle — while Cartesian coordinates have no such singularities anywhere. So there may be good reason to choose Cartesian coordinates over polar coordinates when measuring the Euclidean plane. Fortunately, this is always possible. The Euclidean plane can always be measured by Cartesian coordinates; that is, coordinates wherein the axes are straight and perpendicular at their intersection, and where lines of constant coordinate — picture the grid on a sheet of graph paper — are always a constant distance apart no matter where you measure them. Imagine taking a piece of graph paper, which is printed in a pattern that lets us easily visualize the Cartesian coordinate system, and rolling it into a cylinder. Do any creases appear in the paper? No, it remains smooth all over. Do the lines printed on the paper remain a constant distance apart everywhere? Yes, they do. In technical mathematical terms, then, the surface of a cylinder is flat. That is, it can be measured by an orthonormal basis, and there is everywhere a one-to-one correspondence between sets of coordinates and points on the surface. It's possible not to use an orthonormal basis to measure the surface; one might reasonably choose polar coordinates, or some other arbitrary coordinate system, if it's more convenient. 
But whichever basis is actually used, it's always possible to switch to an orthonormal basis instead. Now imagine wrapping a sheet of graph paper around a basketball. Does the paper remain smooth? No, if we press it down, creases appear. Do the lines on the paper remain parallel? No, they have to bend in order to conform the paper to the shape of the ball. In the same technical mathematical terms, the surface of a sphere is not flat. It's curved. That is, it is not possible to measure the surface all over using an orthonormal basis. But what if we focus our attention only on a part of the sphere? What if instead of measuring a basketball, we want to measure the whole Earth? The Earth is a sphere, and therefore its surface is curved and can't be measured all over with Cartesian coordinates. But if we look only at a small section of the surface — a square one mile on a side, for instance — then we can project a set of Cartesian coordinates that work just fine. If we choose our region of interest to be sufficiently small, then Cartesian coordinates will fit on the surface to within the limits of our ability to measure the difference. The surface of a sphere, then, is globally curved, but locally flat. In physicist jargon, the surface of a sphere can be flattened over a sufficiently small region. Not the whole sphere all at once, nor half of it, nor a quarter of it. But a sufficiently small region can be dealt with as if it were a Euclidean plane. But this brings up an important point. The entire surface of the sphere is curved, and thus can't be approximated with Cartesian coordinates. But a sufficiently small patch of the surface can be approximated with Cartesian coordinates. This implies, then, that "curvedness" isn't an either-or property. Somewhere between the locally flat region of the surface and the entire surface, the amount of curvature goes from none to some value. Curvature, then, must be something we can measure. The metric tensor It is a fundamental property of the Euclidean plane that, when Cartesian coordinates are used, the distance s between any two points A and B is given by the following equation: s² = (Δx)² + (Δy)², where Δx and Δy are the distances between A and B in the x and y directions, respectively. This is essentially a restatement of the universally known Pythagorean theorem, and in the context of general relativity, it is called the metric equation. Metric, of course, comes from the same linguistic root as the word measure, and since this is the equation we use to measure distances, it makes sense to call it the metric equation. But this particular metric equation only works on the Euclidean plane with Cartesian coordinates. If we use polar coordinates, this equation won't work. If we're on a curved surface instead of a plane, this equation won't work. This metric equation is only valid on a flat surface with Cartesian coordinates. Which makes it pretty useless, since so much of physics revolves around curved spacetime and spherical coordinates. What we need is a generalized metric equation, some way of measuring the interval between any two points regardless of what coordinate system we're using or whether our local geometry is flat or curved. The metric tensor equation provides this generalization. If v is any vector having components vμ, the length of v is given by the following equation: |v|² = gμν vμ vν, where gμν is the metric tensor, and μ and ν range over the number of dimensions. Recall that Einstein summation notation means that this is actually a sum over the indices μ and ν.
If we assume that we're in the two-dimensional Euclidean plane, the metric tensor equation expands to |v|² = g11 (v1)² + g12 v1v2 + g21 v2v1 + g22 (v2)². The terms of the metric tensor, then, must be numerical coefficients in the metric equation. We already know what these coefficients need to be to make the metric equation work in the Euclidean plane with Cartesian coordinates: g11 = g22 = 1 and g12 = g21 = 0. Now we can write the metric tensor for the Euclidean plane in Cartesian coordinates in the form of a 2 × 2 matrix: it is simply the 2 × 2 identity matrix. So in the case of the Euclidean plane with Cartesian coordinates, the metric tensor is the Kronecker delta, gμν = δμν. Of course, the same concepts apply if we expand our interest from the plane to three-dimensional Euclidean space with Cartesian coordinates. We just have to let the indices of the Kronecker delta run from 1 to 3. Which gives us the following metric equation for the length of a vector v (omitting terms with zero coefficient): |v|² = (v1)² + (v2)² + (v3)². Which precisely agrees with the Pythagorean theorem in three dimensions. So given a metric tensor gμν for any space and coordinate basis, we can calculate the distance between any two points. The metric tensor, therefore, is what allows us to measure curved space. In a very real sense, the metric tensor describes the shape of both the underlying geometry and the chosen coordinate basis. But relativity is concerned not with geometrically abstract space; we're interested in very real spacetime, and that requires a slightly different kind of metric. The article continues with the following sections: The local Minkowski metric, Parallel transport and intrinsic curvature, The Riemann and Ricci tensors and the curvature scalar, The Einstein tensor, and The cosmological constant.
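Returning to the flat plane worked through above: the same geometry can be measured in Cartesian or in polar coordinates, and only the metric components change. These are standard textbook line elements, given here as an illustration rather than as equations taken from this article.

\[
ds^2 = dx^2 + dy^2 \qquad \text{(Cartesian coordinates: } g_{11} = g_{22} = 1,\; g_{12} = g_{21} = 0\text{)}
\]
\[
ds^2 = dr^2 + r^2\, d\theta^2 \qquad \text{(polar coordinates: } g_{rr} = 1,\; g_{\theta\theta} = r^2\text{)}
\]

Both metrics describe the same flat plane; only the coordinate basis, and hence the components g_{μν}, have changed.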
http://www.conservapedia.com/Quantitative_Introduction_to_General_Relativity
13
16
Sculpting a Star System: The Inner Planets The Solar System has a remarkable number of planets. It includes four rocky planets (Mercury, Venus, Earth, and Mars), four giant gaseous planets, and countless smaller worlds. Early on, there may even have been a fifth rocky planet that collided with the Earth, forming the Moon. We owe the survival of so many terrestrial planets (and our own evolution as a species) to the relatively stable orbits of Jupiter, Saturn, Uranus, and Neptune during the 100 million years it took to form the inner planets of the Solar System. Most extrasolar planetary systems may not have been so fortunate. They show signs of being survivors of violent instabilities that knocked at least one giant planet completely out of the system. The worst instabilities would have resulted in the destruction of all the system's rocky planets. Some "milder" instabilities left a single rocky planet intact, but badly shaken up. The survival of two or more rocky planets like those in our Solar System requires a stable set of giant gas and ice planets. And, theoretical calculations suggest that such stability may be relatively uncommon. Only 15–25% of planetary systems around Sun-like stars end up with three or four terrestrial planets, according to a recent simulation by Fellow Phil Armitage and his colleagues at the Université de Bordeaux, Princeton University, Cambridge University, Weber State University, NASA Goddard Space Flight Center, and Boston University. These planetary systems had calm environments conducive to the formation and preservation of terrestrial planets. Most also still have large disks of dust and other debris in orbits beyond those of the system's giant planets. These large debris disks mirror the favorable conditions for inner planet formation so well that Armitage and his colleagues say that bright cold dust emission around Sun-like stars could well serve as a signpost of terrestrial planet formation. A notable exception is our own Solar System. The Solar System's outer region has a relatively meager collection of dust, rocks, and planetesimals (rocky bodies a few kilometers or more in diameter) known as the Kuiper belt. The Kuiper belt's relatively small size tells us something important about the Solar System's unique history, Armitage says. About 700 million years after the birth of the Solar System, Uranus and Neptune moved far enough away from Jupiter and Saturn to interact with the then much-larger Kuiper belt. This interaction brought Uranus and Neptune into their present orbits. It also destabilized countless planetesimals, hurling many into outer space and others straight into the heart of the Solar System. This bombardment was responsible for some of the craters still visible today on the Moon. It must have inflicted even more damage on the Earth and Mars, and on the moons of the giant planets. Despite the scars we see today, the bombardment came much too late to significantly affect the evolution of our Solar System's inner planets. Fortunately for us, they remained stable. However, there is evidence that the bombardment increased the amount of wobble in the orbits of both Earth and Venus. On Earth, lulls in the rain of impacting comets and asteroids may even have allowed any primitive life forms to survive. S. N. Raymond, P. J. Armitage, A. Moro-Martin, M. Booth, M. C. Wyatt, J. C. Armstrong, A. M. Mandell, F. Selsis, and A. A. West, Astronomy & Astrophysics 530, A62 (2011).
http://jila.colorado.edu/content/sculpting-star-system-inner-planets
13
33
Function Definition Math We now need to move into the second topic of this chapter. The first thing that we need to do is define just what a function is. There are lots and lots of definitions for a function out there and most of them involve the terms rule, relation, or correspondence. While these are more technically accurate than the definition that we're going to use in this section, all the fancy words used in the other definitions tend to just confuse the issue and make it difficult to understand just what a function is. So, here is the definition of function that we're going to use. Again, I need to point out that this is NOT the most technically accurate definition of a function, but it is a good "working definition" of a function that helps us to understand just how a function works. "Working Definition" of Function: A function is an equation (this is where most definitions use one of the words given above) for which any x that can be plugged into the equation will yield exactly one y out of the equation. There it is. That is the definition of functions that we're going to use. Before we examine this a little more, note that we used the phrase "x that can be plugged into" in the definition. This tends to imply that not all x's can be plugged into an equation, and this is in fact correct. We will come back and discuss this in more detail towards the end of this section; however, at this point just remember that we can't divide by zero and, if we want real numbers out of the equation, we can't take the square root of a negative number. So, with these two examples it is clear that we will not always be able to plug every x into any equation. When dealing with functions we are always going to assume that both x and y will be real numbers. In other words, we are going to forget that we know anything about complex numbers for a little bit while we deal with this section. Okay, with that out of the way let's get back to the definition of a function. Now, we started off by saying that we weren't going to make the definition confusing. However, what we should have said was that we'll try not to make it too confusing, because no matter how we define it the definition is always going to be a little confusing at first. In order to clear up some of the confusion let's look at some examples of equations that are functions and equations that aren't functions. Example 1: Determine which of the following equations are functions and which are not functions. Solution: What the definition of function is saying is that if we take all possible values of x and plug them into the equation and solve for y, we will get exactly one value of y for each value of x. At this stage of the game it can be pretty difficult to actually show that an equation is a function, so we'll mostly talk our way through it. On the other hand, it's often quite easy to show that an equation isn't a function. (a) So, we need to show that no matter what x we plug into the equation and solve for y we will only get a single value of y. Note as well that the value of y will probably be different for each value of x, although it doesn't have to be. So, for each of these values of x we got a single value of y out of the equation. Now, this isn't sufficient to claim that this is a function.
In order to officially prove that this is a function we need to show that this will work no matter which value of x we plug into the equation. Of course we can't plug all possible values of x into the equation. That just isn't physically possible. However, let's go back and look at the ones that we did plug in. For each x, upon plugging in, we first multiplied the x by 5 and then added 1 onto it. Now, if we multiply a number by 5 we will get a single value from the multiplication. Likewise, we will only get a single value if we add 1 onto a number. Therefore, it seems plausible that, based on the operations involved with plugging x into the equation, we will only get a single value of y out of the equation.
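Based on the operations described above (multiply by 5, then add 1), the first example appears to be the equation y = 5x + 1; that specific form is an inference, since the equation itself is not reproduced in the text. The short Python check below illustrates the "exactly one y per x" test, using y² = x as a contrasting non-function chosen here for illustration rather than taken from the original examples.

# Working definition check: an equation defines y as a function of x
# if each admissible x yields exactly one y.

def y_values_linear(x):
    """Assumed first example: y = 5*x + 1 (inferred from the text)."""
    return [5 * x + 1]            # always exactly one y per x

def y_values_sideways_parabola(x):
    """Contrast case: y**2 = x, which does not define y as a function of x."""
    if x < 0:
        return []                 # no real y at all
    if x == 0:
        return [0.0]
    root = x ** 0.5
    return [root, -root]          # two y values for a single x

for x in (-1, 0, 1, 4):
    print(x, y_values_linear(x), y_values_sideways_parabola(x))

For x = 4 the first equation returns the single value 21, while the second returns both 2 and -2, which is exactly the failure of single-valuedness the text describes.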
http://pdfcast.org/pdf/function-definition-math
13
56
California Mathematics Standards, 6th Grade Number Sense 1.0 Students compare and order positive and negative fractions, decimals, and mixed numbers. Students solve problems involving fractions, ratios, proportions, and percentages: 1.1 Compare and order positive and negative fractions, decimals, and mixed numbers and place them on a number line. 1.2 Interpret and use ratios in different contexts (e.g., batting averages, miles per hour) to show the relative sizes of two quantities, using appropriate notations (a/b, a to b, a:b). 1.3 Use proportions to solve problems (e.g., determine the value of N if 4/7=N/21, find the length of a side of a polygon similar to a known polygon). Use cross-multiplication as a method for solving such problems, understanding it as the multiplication of both sides of an equation by a multiplicative inverse. 1.4 Calculate given percentages of quantities and solve problems involving discounts at sales, interest earned, and tips. 2.0 Students calculate and solve problems involving addition, subtraction, multiplication, and division: 2.1 Solve problems involving addition, subtraction, multiplication, and division of positive fractions and explain why a particular operation was used for a given situation. 2.2 Explain the meaning of multiplication and division of positive fractions and perform the calculations (e.g., 5/8 / 15/16 = 5/8 x 16/15 = 2/3). 2.3 Solve addition, subtraction, multiplication, and division problems, including those arising in concrete situations, that use positive and negative integers and combinations of these operations. 2.4 Determine the least common multiple and the greatest common divisor of whole numbers; use them to solve problems with fractions (e.g., to find a common denominator to add two fractions or to find the reduced form for a fraction). Algebra and Functions 1.0 Students write verbal expressions and sentences as algebraic expressions and equations; they evaluate algebraic expressions, solve simple linear equations, and graph and interpret their results: 1.1 Write and solve one-step linear equations in one variable 1.2 Write and evaluate an algebraic expression for a given situation, using up to three variables. 1.3 Apply algebraic order of operations and the commutative, associative, and distributive properties to evaluate expressions; and justify each step in the process. 1.4 Solve problems manually by using the correct order of operations or by using a scientific calculator. 2.0 Students analyze and use tables, graphs and rules to solve problems involving rates and proportions: 2.1 Convert one unit of measurement to another (e.g., from feet to miles, from centimeters to inches). 2.2 Demonstrate an understanding that rate is a measure of one quantity per unit value of another quantity. 2.3 Solve problems involving rates, average speed, distance, and time. 3.0 Students investigate geometric patterns and describe them algebraically: 3.1 Use variables in expressions describing geometric quantities (e.g., P=2w+2l, A=1/2bh, C=(pi)d - the formulas for the perimeter of a rectangle, the area of a triangle, and the circumference of a circle, respectively). 3.2 Express in symbolic form simple relationships arising from geometry. Measurement and Geometry 1.0 Students deepen their understanding of the measurement of plane and solid shapes and use this understanding to solve problems: 1.1 Understand the concept of a constant such as pi; know the formulas for the circumference and area of a circle.
2.0 Students identify and describe the properties of two-dimensional figures: 1.2 Know common estimates of pi (3.14; 22/7) and use these values to estimate and calculate the circumference and the area of circles; compare with actual measurements. 1.3 Know and use the formulas for the volume of triangular prisms and cylinders (area of base x height); compare these formulas and explain the similarity between them and the formula for the volume of a rectangular solid. 2.1 Identify angles as vertical, adjacent, complementary, or supplementary and provide descriptions of these terms. 2.2 Use the properties of complementary and supplementary angles and the sum of the angles of a triangle to solve problems involving an unknown angle. 2.3 Draw quadrilaterals and triangles from given information about them (e.g., a quadrilateral having equal sides but no right angles, a right isosceles triangle). Statistics, Data Analysis and Probability 1.0 Students compute and analyze statistical measurement for data sets: 1.1 Compute the range, mean, median, and mode of data sets. 2.0 Students use data samples of a population and describe the characteristics and limitations of the samples: 1.2 Understand how additional data added to data sets may affect these computations of measures of central tendency. 1.3 Understand how the inclusion or exclusion of outliers affects measures of central tendency. 1.4 Know why a specific measure of central tendency (mean, median, mode) provides the most useful information in a given context. 2.1 Compare different samples of a population with the data from the entire population and identify a situation in which it makes sense to use a sample. 3.0 Students determine theoretical and experimental probabilities and use these to make predictions about events: 2.2 Identify different ways of selecting a sample (e.g., convenience sampling, responses to a survey, random sampling) and which method makes a sample more representative for a population. 2.3Analyze data displays and explain why the way in which the question was asked might have influenced the results obtained and why the way in which the results were displayed might have influenced the conclusions reached. 2.4Identify data that represent sampling errors and explain why the sample (and the display) might be biased. 2.5Identify claims based on statistical data and, in simple cases, evaluate the validity of the claims. 3.1Represent all possible outcomes for compound events in an organized way (e.g., tables, grids, tree diagrams) and express the theoretical probability of each outcome. 3.2 Use data to estimate the probability of future events (e.g. batting averages or number of accidents per mile driven). 3.3 Represent probabilities as rations. proportions, decimals between 0 and 1, and percentages between 0 and 100 and verify that the probabilities computed are reasonable; know that if P is the probability of an even, 1-P is the probability of an event not occurring. 3.4 Understand the probability of either of two disjoint events occurring is the sum of the two individual probabilities and that the probability of one event following another, in independent trials, is the product of the two probabilities. 3.5 Understand the difference between independent and dependent events. 
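To make the cross-multiplication language in Number Sense standard 1.3 concrete, here is a minimal worked example using the proportion quoted in that standard, 4/7 = N/21, solved by multiplying both sides of the equation by the same factor:

\[
\frac{4}{7} = \frac{N}{21}
\quad\Longrightarrow\quad
21 \cdot \frac{4}{7} = 21 \cdot \frac{N}{21}
\quad\Longrightarrow\quad
N = \frac{84}{7} = 12.
\]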
Mathematical Reasoning

1.0 Students make decisions about how to approach problems:
1.1 Analyze problems by identifying relationships, discriminating relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns.
1.2 Formulate and justify mathematical conjectures based upon a general description of the mathematical question or problem posed.
1.3 Determine when and how to break a problem into simpler parts.

2.0 Students use strategies, skills, and concepts in finding solutions:
2.1 Use estimation to verify the reasonableness of calculated results.
2.2 Apply strategies and results from simpler problems to more complex problems.
2.3 Estimate unknown quantities graphically and solve for them using logical reasoning, and arithmetic and algebraic techniques.
2.4 Use a variety of methods such as words, numbers, symbols, charts, graphs, tables, diagrams, and models to explain mathematical reasoning.
2.5 Express the solution clearly and logically using appropriate mathematical notation and terms and clear language, and support solutions with evidence, in both verbal and symbolic work.
2.6 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy.
2.7 Make precise calculations and check the validity of the results from the context of the problem.

3.0 Students move beyond a particular problem by generalizing to other situations:
3.1 Evaluate the reasonableness of the solution in the context of the original situation.
3.2 Note the method of deriving the solution and demonstrate conceptual understanding of the derivation by solving similar problems.
3.3 Develop generalizations of the results obtained and the strategies used and extend them to new problem situations.
http://mathforum.org/alejandre/frisbie/math/standards6.html