Identify two elements that are commonly found in all three minerals in the data table

Updated February 16, 2020

By Kevin Beck

Reviewed by: Lana Bandoim, B.S.

If asked to list the chemical elements that make up most of the Earth, you might be surprised by how hard it is to guess the right elements without knowing more than the average person does about all of that dirt, rock and metal underfoot. (You would also want to know if your quizzer was inquiring about the composition of the atmosphere above the Earth, also a common geoscience topic.)

Elements are different kinds of atoms, and as of 2020, 118 of them had been identified, 92 of which occur in significant amounts in nature. While it's impossible to know for certain the precise composition of the deeper layers of Earth, four elements alone make up almost 90 percent of the Earth's uppermost portion, or crust; four more account for nine-tenths of the remainder.

The more-or-less-spherical Earth, about 8,000 miles (just under 13,000 kilometers, or km) through the middle, is divided into three geologic layers:

  • A very thin crust extending down from the surface, about 3 miles (5 km) thick beneath the oceans and considerably thicker beneath the continents;
  • a mantle about 1,800 miles (2,890 km) thick, composed mainly of magnesium- and iron-rich rock;
  • a core about 2,200 miles (3,400 km) in radius, consisting of a solid iron center surrounded by a molten (hot liquid) layer of iron and nickel (see below).

Earth's crust is made up almost entirely of eight elements, four of them alone claiming almost nine-tenths of this layer's total weight: oxygen comes in at 46.6 percent by mass, followed by silicon at 27.7 percent, aluminum at 8.1 percent and iron at 5 percent.

  • The remaining crust elements are calcium, 3.6 percent; sodium, 2.8 percent; potassium, 2.6 percent; and magnesium, 2.1 percent.
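Tallied up, those figures confirm the "nine-tenths" claim. A minimal sketch in Python (the values are the percentages quoted above):

```python
# Approximate crustal abundances by mass (percent), from the figures above.
crust = {
    "oxygen": 46.6, "silicon": 27.7, "aluminum": 8.1, "iron": 5.0,
    "calcium": 3.6, "sodium": 2.8, "potassium": 2.6, "magnesium": 2.1,
}

top_four = sum(list(crust.values())[:4])   # O, Si, Al, Fe
all_eight = sum(crust.values())

print(f"Top four elements: {top_four:.1f}% of crustal mass")
print(f"All eight elements: {all_eight:.1f}% of crustal mass")
```

The top four come to about 87 percent of the crust's mass, and all eight to roughly 98.5 percent.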

Nearly 100 percent of the mass of Earth's core is composed of the two elements iron and nickel. Scientists have concluded this from the knowledge that the core must be 13 times as dense as water, leaving only some combination of molten iron and nickel as plausible candidates.

Oxygen: This familiar element, number 8 on the periodic table of elements, makes up about 47 percent of the mass of the crust and is also abundant (fortunately) in the atmosphere. It is also the main component, by mass, of water.

Since oxygen atoms are light compared to the other predominant elements in Earth's crust, the fact that their total mass accounts for almost half of the crust means that the fraction of its atoms that consists of oxygen is even higher than the mass fraction.
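A short calculation makes this concrete: dividing each mass percentage by the element's atomic mass gives relative numbers of atoms. (The mass figures are those quoted above; the atomic masses are standard values.)

```python
# Convert crustal mass fractions to atom fractions using standard atomic masses.
mass_pct = {"O": 46.6, "Si": 27.7, "Al": 8.1, "Fe": 5.0,
            "Ca": 3.6, "Na": 2.8, "K": 2.6, "Mg": 2.1}
atomic_mass = {"O": 16.00, "Si": 28.09, "Al": 26.98, "Fe": 55.85,
               "Ca": 40.08, "Na": 22.99, "K": 39.10, "Mg": 24.31}

moles = {el: m / atomic_mass[el] for el, m in mass_pct.items()}  # moles per 100 g
total = sum(moles.values())
oxygen_atom_pct = 100 * moles["O"] / total
print(f"Oxygen: {mass_pct['O']}% by mass, about {oxygen_atom_pct:.0f}% of all atoms")
```

Oxygen turns out to be roughly 60 percent of the atoms in the crust, well above its 46.6 percent share of the mass.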

Silicon: This element, number 14 on the periodic table, exists as a crystalline solid. Though relatively unreactive at ordinary temperatures, it has a strong affinity for oxygen, and its tendency to remain "disguised" as silicon dioxide meant that it eluded isolation by chemists for a long time.

  • Silicon is not to be confused with silicone, which is a polymer made of silicon, oxygen and other elements. Silicone is commonly used in oils, greases and other products.

Aluminum: This metal is number 13 on the periodic table, one atomic number and thus one proton shy of silicon. It is non-magnetic and highly reactive, so much so that pure aluminum is rarely found in nature. Instead, it is usually found embedded within rocks, combined with other elements into compounds.

Iron: Iron, atomic number 26, is a famous element, vital in both construction and engineering (iron accounts for almost all of the mass of most kinds of steel, for instance) and human physiology (iron is a required component of hemoglobin, the oxygen-binding protein in red blood cells, or erythrocytes). It is found in all three of Earth's layers in significant amounts.

Rocks and Minerals

by Anne E. Egger, Ph.D.

Geologists have recently determined that the minerals goethite and hematite exist in abundance on Mars, sure signs of the presence of water (see Figure 1 for a picture). None of those geologists have been to Mars, of course, but the unmanned rovers Spirit and Opportunity have. These rovers are equipped with three spectrometers, each of which is capable of determining the chemical composition of a solid with a high degree of accuracy. With such a precise chemical analysis in hand, geologists on Earth had no problem identifying the minerals.

Figure 1: The small spheres in this picture were dubbed "berries" by geologists who first saw them. They sit on the surface of Mars and were photographed by the Mars rover Opportunity. A spectrometer on the rover was able to determine the chemical content of the berries, and geologists recognized the chemical formula for hematite (Fe2O3). © NASA/JPL/Cornell

A mineral is defined in part by a specific chemical composition. In theory, therefore, it is always easy to identify a mineral if you can determine its chemical composition, as the instruments on the Mars rovers do. In reality, however, even if you are looking at rocks on Earth, determining the exact chemical composition of a substance involves significant time preparing the sample and sophisticated laboratory equipment (and often significant money). Luckily, it is usually unnecessary to go to such lengths, because there are much easier ways that require little more than a magnifying lens and a penknife.

The most common minerals in Earth's crust can often be identified in the field using basic physical properties such as color, shape, and hardness. The context of a mineral is important, too – some minerals can form under the same conditions, so you are likely to find them in the same rock, while others form under very different conditions and will never occur in the same rock. For this reason, context (the other surrounding minerals and type of rock) can often be used to rule out minerals that have similar color, for example. Although there are many thousands of named minerals, only a dozen or so are common in Earth's crust. Testing a few physical properties therefore means that you can identify about 90% of what you are likely to encounter in the field.

Because the physical properties of a mineral are determined by its chemical composition and internal atomic structure, they can be used diagnostically, the way a runny nose and sore throat can be used to diagnose a cold. There are many physical properties of minerals that are testable with varying degrees of ease, including color, crystal form (or shape), hardness, luster (or shine), density, and cleavage or fracture (how the mineral breaks). In addition, many minerals have unique properties, such as radioactivity, fluorescence under black light, or reaction to acid. In most cases, it is necessary to observe a few properties to identify a mineral; to extend the medical analogy even further, a runny nose is a symptom of a cold virus, allergies, or a sinus infection among other things, so we have to use other symptoms to diagnose the problem – a headache, fever, watery eyes, and so on.

The most obvious property of a mineral, its color, is unfortunately also the least diagnostic. In the same way that a headache is a symptom for a whole host of problems from the flu to a head injury, many minerals share the same color. For example, several minerals are green in color – olivine, epidote, and actinolite, just to name a few. On the other extreme, one mineral can take on several different colors if there are impurities in the chemical composition, such as quartz, which can be clear, smoky, pink, purple, or yellow.

Part of the reason that the color of minerals is not uniquely diagnostic is that there are several aspects of a crystal's composition and structure that can produce color. The presence of some elements, such as iron, always results in a colored mineral, but iron can produce a wide variety of colors depending on its state of oxidation – black, red, or green, most commonly. Some minerals have color-producing elements in their crystal structure, like olivine (Fe2SiO4), while others incorporate them as impurities, like quartz (SiO2). All of this variability makes it difficult to use color alone to identify a mineral. However, in combination with other properties such as crystal form, color can help narrow the possibilities. As an example, hornblende, biotite, and muscovite are all very commonly found in rocks such as granite. Hornblende and biotite are both black, but they can be easily distinguished by their crystal form because biotite occurs in sheets, while hornblende forms stout prisms (Figure 2). Muscovite and biotite both form in sheets, but they are different colors – muscovite is colorless, in fact.
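The hornblende/biotite/muscovite example can be sketched as a two-property lookup. (A toy illustration only: the table and function name are invented here, with the properties taken from the text.)

```python
# Two properties together distinguish minerals that share one property.
TRAITS = {
    ("black", "sheets"): "biotite",
    ("black", "stout prisms"): "hornblende",
    ("colorless", "sheets"): "muscovite",
}

def identify(color, form):
    """Return a mineral name if color + form pin it down."""
    return TRAITS.get((color, form), "unknown - test more properties")

print(identify("black", "sheets"))        # biotite
print(identify("black", "stout prisms"))  # hornblende
```

Neither color nor form alone is decisive here, but the pair is.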

Figure 2: These three minerals can be distinguished using both color and form. Hornblende (left) and biotite (middle) share the same color, but are different forms; muscovite (right) and biotite share form but not color.

The external shape of a mineral crystal (or its crystal form) is determined largely by its internal atomic structure, which means that this property can be highly diagnostic. Specifically, the form of a crystal is defined by the angular relationships between crystal faces (recall Steno's Law of Interfacial Angles as discussed in our Minerals I module). Some minerals, like halite (NaCl, or salt) and pyrite (FeS2), have a cubic form (see Figure 3, left); others, like tourmaline (see Figure 3, middle), are prismatic. Some minerals, like azurite and malachite, which are both copper ores, don't form regular crystals and are amorphous (Figure 3).

Figure 3: Examples of different types of crystal forms. On the left, pyrite has a cubic form; tourmaline (middle) is prismatic; azurite and malachite (on the right) are often amorphous.

Unfortunately, we don't always get to see the crystal form. We see perfect crystals only when they have had a chance to grow into a cavity, such as in a geode. When crystals grow in the context of cooling magma, however, they are competing for space with all of the other crystals that are trying to grow and they tend to fill in whatever space they can. The shape of the crystal can vary quite a bit depending on the amount of space available, but the angle between the crystal faces will always be the same.

The hardness of a mineral can be tested in several ways. Most commonly, minerals are compared to an object of known hardness using a scratch test – if a nail, for example, can scratch a crystal, then the nail is harder than that mineral. In the early 1800s, Friedrich Mohs, an Austrian mineralogist, developed a relative hardness scale based on the scratch test. He assigned integer numbers to each mineral, where 1 is the softest and 10 is the hardest. This scale is shown in Figure 4.

Figure 4: Mohs' scale of mineral hardness, where 1 is the softest and 10 is the hardest.

The scale is not linear (corundum is actually 4 times as hard as quartz), and other methods have now provided more rigorous measurements of hardness. Despite the lack of precision in the Mohs scale, it remains useful because it is simple, easy to remember, and easy to test. The steel of a pocketknife (a common tool for geologists to carry in the field) falls almost right in the middle, so it is easy to distinguish the upper half from the lower half. For example, quartz and calcite can look exactly the same – both are colorless and translucent, and occur in a wide variety of rocks. But a simple scratch test can tell them apart; calcite will be scratched by a pocketknife or rock hammer and quartz will not. Gypsum can also look a lot like calcite, but is so soft that it can be scratched by a fingernail.
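The scratch-test logic above can be sketched as a small function. (The reference hardnesses of about 2.5 for a fingernail and 5.5 for a pocketknife are the commonly used approximations.)

```python
# Mohs hardness values for the ten index minerals of the scale.
MOHS = {"talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
        "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10}

def scratch_test(mineral, fingernail=2.5, knife=5.5):
    """Bracket a mineral's hardness using common reference objects."""
    h = MOHS[mineral]
    if h < fingernail:
        return "scratched by a fingernail"
    if h < knife:
        return "scratched by a pocketknife, not a fingernail"
    return "not scratched by a pocketknife"

print(scratch_test("gypsum"))   # scratched by a fingernail
print(scratch_test("calcite"))  # scratched by a pocketknife, not a fingernail
print(scratch_test("quartz"))   # not scratched by a pocketknife
```

This mirrors the quartz/calcite/gypsum distinctions described above.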

Variations in hardness make minerals useful for different purposes. The softness of calcite makes it a popular material for sculpture (marble is made up entirely of calcite), whereas the hardness of diamond means that it is used as an abrasive to polish rock.

The luster of a mineral is the way that it reflects light. This may seem like a difficult distinction to make, but picture the difference between the way light reflects off a glass window and the way it reflects off of a shiny chrome car bumper. A mineral that reflects light the way glass does has a vitreous (or glassy) luster; a mineral that reflects light like chrome has a metallic luster. There are a variety of additional possibilities for luster, including pearly, waxy, and resinous (see pictures in Figure 5). Minerals that are as brilliantly reflective as diamond have an adamantine luster. With a little practice, luster is as easily recognized as color and can be quite distinctive, particularly for minerals that occur in multiple colors like quartz.

Figure 5: Examples of only a few of the different lusters that can be seen in minerals. Galena (left) has a metallic luster, amber (middle) is resinous, and quartz (right) is glassy.

The density of minerals varies widely, from about 1 g/cm3 to about 19.3 g/cm3. The density of water is 1 g/cm3; pure iron has a density of about 7.9 g/cm3; pure gold, about 19.3 g/cm3. Minerals, therefore, occupy the range of densities between water and pure gold. Measuring the density of a specific mineral requires time-consuming techniques, and most geologists have instead developed a more intuitive sense for what is "normal" density, what is unusually heavy for its size, and what is unusually light. By "hefting" a rock, experienced geologists can usually guess if the rock is made up of minerals that contain iron or lead, for example, because it feels heavier than an average rock of the same size (see our Density module for more information).
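As a rough illustration of "hefting," here is the mass of a fist-sized sample at a few densities. (The 250 cm3 volume is an assumption chosen purely for illustration; the densities are approximate values.)

```python
# Approximate densities in g/cm^3; "average rock" is a typical crustal value.
densities = {"average rock": 2.7, "galena (lead ore)": 7.6, "gold": 19.3}

volume_cm3 = 250  # roughly fist-sized
masses = {name: rho * volume_cm3 / 1000 for name, rho in densities.items()}  # kg
for name, kg in masses.items():
    print(f"{name}: {kg:.2f} kg")
```

A fist-sized piece of galena weighs nearly three times as much as an ordinary rock of the same size, which is why the difference is obvious in the hand.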

Most minerals contain inherent weaknesses within their atomic structures, a plane along which the bond strength is lower than the surrounding bonds. When hit with a hammer or otherwise broken, a mineral will tend to break along that plane of pre-existing weakness. This type of breakage is called cleavage, and the quality of the cleavage varies with the strength of the bonds. Biotite, for example, is held together between its layers by extremely weak bonds that break very easily; thus biotite breaks along flat planes and is considered to have perfect cleavage (see Figure 6). Other minerals cleave along planar surfaces of varying roughness – these are considered to have good to poor cleavage.

Figure 6: Several conchoidal fractures are visible in the mineral samples above. Note the concave surface and the curved ribs.

Some minerals don't have any planes of weakness in their atomic structure. These minerals don't have any cleavage, and instead they fracture. Quartz fractures in a distinctive fashion, called conchoidal, which produces a concave surface with a series of arcuate ribs similar to the way that glass fractures (see Figure 6). For quartz, in fact, this lack of cleavage is a distinguishing property.

Physical properties provided the main basis for classification of minerals from the Middle Ages through the mid-1800s. Minerals were grouped according to characteristics such as hardness, so that diamond and corundum would be in the same class of minerals. As the ability to determine the chemical composition of minerals developed, so did a new classification system. Many scientists contributed to the discovery of mineral chemical formulas, but James Dwight Dana, a mineralogist at Yale University from 1850 to 1892 (see Biography link in the Resources section), developed a classification system for minerals based on chemical composition that has survived to the present day. He grouped minerals according to their anions, such as oxides (compounds with O2-), silicates (compounds with (SiO4)4-), and sulfates (compounds with (SO4)2-). A chemical classification system meant that minerals grouped together also tended to appear with each other in rocks, since they tended to develop under similar geochemical conditions.
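Dana's idea can be sketched as a simple anion-to-group mapping. (The dictionary names are invented for this sketch, and each mineral is reduced to its characteristic anion purely for illustration.)

```python
# Dana-style classification: group minerals by their anion.
ANION_GROUPS = {"O": "oxide", "SiO4": "silicate", "SO4": "sulfate"}

# Each mineral tagged with its characteristic anion (simplified).
minerals = {"hematite": "O", "corundum": "O", "olivine": "SiO4",
            "quartz": "SiO4", "gypsum": "SO4"}

groups = {name: ANION_GROUPS[anion] for name, anion in minerals.items()}
print(groups["corundum"])  # oxide
print(groups["olivine"])   # silicate
```

Note that corundum and diamond, grouped together by hardness in the old scheme, land in different places here: corundum is an oxide, while diamond (pure carbon) has no anion at all.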

Physical properties still provide the main means for identification of minerals, though they are no longer used to group them (from the example above, corundum is an oxide while diamond is a pure element, so by Dana's system, they are in separate groups). A composition-based grouping highlights some common mineral associations that allow geologists to make educated guesses about which minerals are present in a rock, even with only a quick glance. By far, the most common minerals are the silicates, which make up 90% of Earth's crust. Of the many hundreds of named silicate minerals, only about eight are common, one of which is quartz. The uncommon minerals are critical, however, as they include economically important ones such as galena, which is the primary ore for lead, and apatite, a phosphate mined for phosphoric acid that is added to fertilizers. The discovery of new ore deposits depends on the ability of geologists to identify what they see in the field and recognize unusual mineral occurrences that should be explored in more detail in the laboratory. A hand lens, a pocketknife, and a lot of practice still provide the easiest and cheapest methods of identifying minerals.

Minerals are classified on the basis of their chemical composition, which is expressed in their physical properties. This module, the second in a series on minerals, describes the physical properties that are commonly used to identify minerals. These include color, crystal form, hardness, density, luster, and cleavage.

Key Concepts

  • Properties that help geologists identify a mineral in a rock are: color, hardness, luster, crystal forms, density, and cleavage.

  • Crystal form, cleavage, and hardness are determined primarily by the crystal structure at the atomic level.

  • Color and density are determined primarily by the chemical composition.

  • Minerals are classified on the basis of their chemical composition.

Anne E. Egger, Ph.D. “Properties of Minerals” Visionlearning Vol. EAS-2 (8), 2005.


Rocks and Minerals

by Anne E. Egger, Ph.D.

The mineral quartz (SiO2) is found in all rock types and in all parts of the world. It occurs as sand grains in sedimentary rocks, as crystals in both igneous and metamorphic rocks, and in veins that cut through all rock types, sometimes bearing gold or other precious metals. It is so common on Earth's surface that until the late 1700s it was referred to simply as "rock crystal." Today, quartz is what most people picture when they think of the word "crystal."

Quartz falls into a group of minerals called the silicates, all of which contain the elements silicon and oxygen in some proportion. Silicates are by far the most common minerals in Earth's crust and mantle, making up 95% of the crust and 97% of the mantle by most estimates. Silicates have a wide variety of physical properties, despite the fact that they often have very similar chemical formulas. At first glance, for example, the formulas for quartz (SiO2) and olivine ((Fe,Mg)2SiO4) appear fairly similar; these seemingly minor differences, however, reflect very different underlying crystal structures and, therefore, very different physical properties. Among other differences, quartz crystallizes from cooling magma at relatively low temperatures (around 700° C), while olivine remains solid to temperatures nearly twice that; quartz is generally clear and colorless, whereas olivine received its name from its olive green color.

The variety and abundance of the silicate minerals is a result of the nature of the silicon atom, and even more specifically, the versatility and stability of silicon when it bonds with oxygen. In fact, pure silicon was not isolated until 1822, when the Swedish chemist Jöns Jakob Berzelius (see the Biography link in our Resources section) finally succeeded in separating silicon from its most common compound, the silicate anion (SiO4)4-. This anion takes the shape of a tetrahedron, with an Si4+ ion at the center and four O2- ions at the corners (see Figure 1); thus, the molecular anion has a net charge of -4.
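The -4 net charge follows directly from the ion charges, as a one-line check shows:

```python
# Net charge of the silica tetrahedron: one Si(4+) plus four O(2-).
si_charge, o_charge = +4, -2
net = si_charge + 4 * o_charge
print(net)  # -4
```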

Figure 1: Three ways of drawing the silica tetrahedron: a) At left, a ball & stick model, showing the silicon cation in orange surrounded by 4 oxygen anions in blue; b) At center, a space filling model; c) At right, a geometric shorthand.

The Si-O bonds within this tetrahedral structure are partially ionic and partially covalent, and they are very strong. Silica tetrahedra bond with each other and with a variety of cations in many different ways to form the silicate minerals. Despite the fact that there are many hundreds of silicate minerals, only about 25 are truly common. Therefore, by understanding how these silica tetrahedra form minerals, you will be able to name and identify 95% of the rocks you encounter on Earth's surface.

Early mineralogists grouped minerals according to physical properties, which spread the silicates across many groups because they have very different properties. By the early 1800s, however, Berzelius had begun classifying minerals based on their chemical composition rather than on their physical properties, defining groups such as the oxides and sulfides – and, of course, the silicates. At the time, Berzelius was able to determine the absolute proportions of elements within a mineral, but he could not see the internal arrangement of the atoms of those elements in their crystalline structure.

A detailed view of the internal arrangement of atoms within minerals would have to wait over 100 years for the development of X-ray diffraction (XRD) by Max von Laue, and its application to determine atomic distances by the father-son team of William Henry Bragg and William Lawrence Bragg a few years later (see their biographies in our Resources section). In the process of XRD, X-rays are aimed at a crystal. Electrons in the atoms within the crystal interact with the X-rays and cause them to undergo diffraction. In the same way that light can be diffracted by a grating (see our Light I: Particle or Wave? module for more information on this topic), X-rays are diffracted by the crystal, and a 2-dimensional pattern of constructive and destructive interference bands results. This pattern can be used to determine the distance between atoms within the crystal structure according to Bragg's Law. The Braggs' work opened up a new world of mineralogy, and they were awarded a Nobel Prize in 1915 for their work determining the crystal structures of NaCl, ZnS, and diamond. XRD revealed that even minerals with similar chemical formulas could have very different crystal structures, strongly influencing those minerals' chemical and physical properties.

As scientists created XRD images of the atomic structure of minerals, they were better able to understand the nature of the bonds between atoms in the silicate and other crystals. Within a silica tetrahedron, any single Si-O bond requires half of the available bonding electrons of the O2- ion, meaning that each O2- may bond with a second ion, including another Si4+ ion. The result of this is that the silica tetrahedra can polymerize, or form chain-like compounds, by sharing an oxygen atom with a neighboring silica tetrahedron. The silicates are, in fact, subdivided based on the shape and bonding pattern of these polymers, because the shape influences the external crystal form, the hardness and cleavage of the mineral, the melting temperature, and the resistance to weathering. These different atomic structures produce recognizable and consistent physical properties, so it is useful to understand the structures at an atomic level in order to identify and classify the silicate minerals. Identifying minerals in a rock may seem like an arcane exercise, but it is only by identifying minerals that we begin to understand the history of a given rock.

The most common silicate minerals fall into four types of structures, described in more detail below: isolated tetrahedra, chains of silica tetrahedra, sheets of tetrahedra, and a framework of interconnected tetrahedra. 
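Because each shared oxygen is split between two tetrahedra, the oxygen-to-silicon ratio of each structure type can be computed as 4 minus half the number of shared oxygens. (A small sketch; the 2.5 figure for double chains is the average implied by every other tetrahedron sharing a third oxygen.)

```python
# Shared oxygens per tetrahedron for each silicate structure type.
shared = {"isolated": 0, "single chain": 2, "double chain": 2.5,
          "sheet": 3, "framework": 4}

# Each shared oxygen counts half toward a given tetrahedron.
o_per_si = {s: 4 - n / 2 for s, n in shared.items()}
for s, r in o_per_si.items():
    print(f"{s}: O:Si = {r}:1")
```

The results match the familiar formulas: isolated (SiO4) gives 4:1, single chains (SiO3) 3:1, double chains (Si4O11) 2.75:1, sheets (Si2O5) 2.5:1, and frameworks (SiO2) 2:1.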

The simplest atomic structure involves individual silica anions and metal cations, usually iron (Fe) and magnesium (Mg), both of which exist most commonly as ions with a charge of +2. Therefore, it takes two ions of Fe2+ or Mg2+ (or one of each) to balance the -4 charge of the silica anion. Olivine (see Figures 2a and 2b below) is the most common silicate of this type, and it makes up most of the mantle. Because these minerals contain a relatively high proportion of iron and magnesium, they tend to be both dense and dark-colored. Because the tetrahedra are not polymerized, there are no consistent planes of internal atomic weakness, so they also have no cleavage. Garnet is another common mineral with this structure.
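The charge balance for olivine, (Fe,Mg)2SiO4, can be verified in one line:

```python
# Two 2+ cations (Fe2+ and/or Mg2+) offset the tetrahedron's -4 charge.
cation_charge = +2
tetrahedron_charge = -4   # (SiO4)4-
net = 2 * cation_charge + tetrahedron_charge
print(net)  # 0
```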

Figure 2a: Depiction of a single silicate tetrahedron.

Figure 2b: A picture of olivine (the green crystals), an example of a silicate structure composed of isolated tetrahedrons, with a vein of basalt (the gray material).

When silicate anions polymerize, they share an oxygen atom with a neighboring tetrahedron. Commonly, each tetrahedron will share two of its oxygen atoms, forming long chain structures. These chains still have a net negative charge, however, and the chains bond to metal cations like Fe2+, Mg2+, and Ca2+ to balance the negative charge. These metal cations commonly bond to multiple chains, forming bridges between the chains. Single-chain silicates include a common group called the pyroxenes, which are generally dark-colored (see Figures 3a and 3b). Because the bonds within the tetrahedra are strong, planes of atomic weakness do not cross the chains; instead, pyroxenes have two cleavage planes parallel to the chains and at nearly right angles to each other.

Figure 3a: A schematic diagram of the single chain silica structure. Where two tetrahedra touch, they share an oxygen ion.

Figure 3b: Pyroxene is one of the dominant minerals in this sample of gabbro. It is the dark mineral and can be hard to recognize.

Double chains form when every other tetrahedron in a single chain shares a third oxygen ion with an adjoining chain (see Figure 4a). Like single chains, the double chains still maintain a net negative charge and bond to cations that can form bridges between multiple double chains.

Figure 4a: A schematic diagram of the double chain silicate structure.

Double chain silicates, called amphiboles, host a wider variety of cations, including Fe2+, Mg2+, Ca2+, Al3+, and Na+, and have a wide variety of colors. The most common amphibole is hornblende, a black mineral found in igneous rocks like granite and andesite (see Figures 4b and 4c). Amphiboles tend to form prismatic crystals with two cleavage planes at 120 degrees to each other.

Figure 4b: Individual hornblende crystals where the characteristic cleavage can be seen.

Figure 4c: Hornblende is the dark mineral in this rock.

Pyroxenes and amphiboles can be difficult to distinguish from one another, as they are both dark-colored, blocky minerals. A careful examination of the angle between cleavage planes, described above, is required to identify them.
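That examination can be sketched as a simple angle test. (The tolerances here are illustrative; the approximate cleavage angles of ~90° for pyroxenes and ~120° for amphiboles are the ones given above.)

```python
# Distinguish chain silicates by the angle between their cleavage planes.
def chain_silicate(cleavage_angle_deg):
    if abs(cleavage_angle_deg - 90) <= 10:
        return "pyroxene (single chain)"
    if abs(cleavage_angle_deg - 120) <= 10:
        return "amphibole (double chain)"
    return "not a simple chain silicate?"

print(chain_silicate(93))   # pyroxene (single chain)
print(chain_silicate(124))  # amphibole (double chain)
```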

When every tetrahedron shares three of its oxygen ions with neighboring tetrahedra, sheets are formed (see Figure 5a). Micas such as muscovite and biotite (see Figure 5b) are both common sheet silicates, notable for their one perfect cleavage. This perfect cleavage results from the type of bonds that occur between sheets – van der Waals bonds. Because van der Waals bonds are weak, cleavage occurs between sheets, never across sheets. Clays are another very important sheet silicate that incorporate water into their atomic structure. The presence of water lubricates the sheets and is what makes clays easy to work with in forming pottery; the firing process heats the minerals to the point where the water is driven off, resulting in a rigid, durable structure such as a pot.

Figure 5b: An example of biotite.

Figure 5c: An example of muscovite. (Both biotite and muscovite are micas, which are one kind of sheet silicate.)

Figure 6a: An example of the 3-dimensional structure formed by a framework silicate

When each tetrahedron shares all of its oxygen atoms with adjacent tetrahedra, a very strong 3-dimensional framework of Si-O bonds is formed (see Figure 6a). Quartz is pure SiO2; note that the charge is now exactly balanced and no other bonding ions are needed. In the feldspars, one or two out of every four Si4+ ions are replaced by Al3+ ions, creating a charge imbalance that must be solved through the presence of additional cations: K+, Na+, and Ca2+. There are two kinds of feldspars, distinguished by which cations are incorporated into the structure. Feldspars that contain the K+ cation are called K-feldspars, or alkali feldspar, whereas those that contain Na+ and Ca2+ are called plagioclase feldspars (see Figure 6b). This separation occurs because K+ is a much larger cation than either Ca2+ or Na+, and its presence creates a slightly expanded framework structure.
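The charge bookkeeping works out exactly, as a quick check shows. (KAlSi3O8 and CaAl2Si2O8 are the standard K-feldspar and Ca-plagioclase end-member formulas; the helper function itself is invented for this sketch.)

```python
# Each Al3+ substituted for Si4+ leaves a -1 deficit that added cations repay.
def net_charge(si, al, o, extra):
    """extra: list of (charge, count) pairs for the added cations."""
    return 4 * si + 3 * al - 2 * o + sum(q * n for q, n in extra)

print(net_charge(si=1, al=0, o=2, extra=[]))         # quartz SiO2 -> 0
print(net_charge(si=3, al=1, o=8, extra=[(+1, 1)]))  # K-feldspar KAlSi3O8 -> 0
print(net_charge(si=2, al=2, o=8, extra=[(+2, 1)]))  # Ca-plagioclase CaAl2Si2O8 -> 0
```

One substitution calls for a single +1 cation (K+ or Na+); two substitutions call for a single +2 cation (Ca2+).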

Figure 6b: The white, blocky minerals in the rock on the left are plagioclase feldspar; the pink minerals in the rock on the right (granite) are K-feldspar.

Like olivine, quartz also has no cleavage, because there is no natural weakness within that 3-dimensional framework. The feldspars, on the other hand, have two good cleavage planes at ~90 degrees to each other, due in part to the way that the aluminum ion changes the structure slightly, opening up planes of weakness. Quartz and feldspar are generally light-colored as well, making them easily distinguishable from darker minerals like olivine and pyroxene.

Quartz and feldspar together make up the bulk of the rocks we see at the surface. Plagioclase feldspar is the single most common mineral in Earth's crust, making up an estimated 39% of both continental and oceanic crust. Quartz only makes up an estimated 12% of the entire crust, but it is by far the most common mineral we see on the surface because of its resistance to weathering.

Familiarity with these few minerals – olivine, garnet, pyroxene, hornblende, muscovite, biotite, K-feldspar, plagioclase, and quartz – prepares you to identify and interpret the vast majority of rocks you will see on Earth's surface.


Though we generally think of coal or oil when discussing natural resources, silicate minerals are a natural resource we can't live without on our planet, and not just because of our increasing reliance on computers. Without quartz, there would be no glass. Without the clay minerals, we would have no ceramics or pottery. We use silicate minerals in the manufacture of many building materials, including bricks and concrete. The weathering of silicate minerals on the surface of Earth produces the soils in which we grow our foods and the sand on our beaches. The properties of the minerals that are important to us are based on the versatility of the silicate anion in combination with other elements.

Understanding the structure of silicate minerals makes it possible to identify 95% of the rocks on Earth. This module covers the structure of silicates, the most common minerals in the Earth's crust. The module explains the significance of the silica tetrahedron and describes the variety of shapes it takes. X-ray diffraction is discussed in relation to understanding the atomic structure of minerals.

Key Concepts

  • Silicate minerals are the most common of Earth's minerals and include quartz, feldspar, mica, amphibole, pyroxene, and olivine.

  • Silica tetrahedra, made up of silicon and oxygen, form chains, sheets, and frameworks, and bond with other cations to form silicate minerals.

  • X-ray diffraction (XRD) allows scientists to determine the crystal structure of minerals.

  • The physical properties of silicate minerals are determined largely by the crystal structure.

Anne E. Egger, Ph.D. “The Silicate Minerals” Visionlearning Vol. EAS-2 (9), 2006.


Rocks and Minerals

by Anne E. Egger, Ph.D.

Humans have always used materials from the earth selectively. Early human artists who painted on rock walls made their own paints from red and yellow pigments present in soils, pigments we now know as the minerals hematite and ochre. Countries have fought wars and trade companies battled over deposits of table salt, also called halite, in the East Indies. Today, we build our houses out of drywall, made of gypsum; we make cement out of lime, a calcium oxide mineral; and we extract aluminum from the mineral bauxite to make aluminum foil and soda cans.

Hematite, halite, gypsum, lime, and bauxite are all minerals, naturally formed materials that have a specific chemical composition and crystal structure. Minerals are the building blocks of rocks, which can be composed of one or more minerals in varying amounts. Granite, for example, contains quartz, mica, feldspar, and other minerals. Marble, on the other hand, consists entirely of the mineral calcite. Although minerals combine to form rocks, they retain their own characteristics, much like the ingredients in a salad. You can make a salad that contains a variety of vegetables, like lettuce, carrots, bell peppers, and sprouts, or you can make a salad that consists solely of lettuce. In either case, the individual components are still identifiable, the way minerals can be identified within a rock.

Fortunately, most minerals form only under certain conditions, so by identifying the minerals present in a rock, scientists can start to understand how, where, and maybe even when that rock formed. Understanding mineral formation also means that scientists can predict where to find economically important minerals like bauxite, and gemstones like diamonds.

Initially, most miners knew little about how minerals formed, but a lot about extracting the materials they found valuable. Georgius Agricola, a German physician who was much more enthusiastic about mining than medicine, documented mining practices and mineral descriptions in his book De Re Metallica, published in 1556. The title literally translates as "On the Nature of Metals," but at that time the word "metal" was widely used to describe any material from the earth. Agricola describes every aspect of mining, from how to identify minerals to 16th-century techniques for crushing ore to the uses of minerals and the diseases that they could cause (see the Classics link under the Resources tab to see original woodcuts from De Re Metallica).

Agricola's book remained a mining standard for nearly two hundred years and is considered the first major contribution to the science of mineralogy. Despite the comprehensive nature of the book, Agricola had little understanding of the fundamental composition of minerals – in other words, he had no way of knowing their chemical formulas. Though much thought had been devoted to the concept of atoms, the experiments that would allow scientists to define the nature of atoms, and thus the chemical composition of minerals, were more than 200 years away when Agricola began writing. Thus, early on, the science of mineralogy advanced on the basis of describing the shape of minerals and their defining properties, like hardness, instead of their atomic structure.


The word "mineral" means something very specific to Earth scientists. By definition, a mineral:

  1. Is naturally formed;
  2. Is solid;
  3. Is formed by inorganic processes;
  4. Has a specific chemical composition; and
  5. Has a characteristic crystal structure

Though each of these aspects of a mineral may seem simple, they have important implications when considered together.

1. Naturally formed: Minerals form through natural processes, including volcanic eruptions, precipitation of a solid out of a liquid, and weathering of pre-existing minerals. Today, scientists, engineers, and manufacturers synthesize many ceramics, plastics, and other substances with a specific chemical composition and structure, but none of these synthetic substances is considered a true mineral.

2. Solid: Liquids and gases are not considered minerals, in large part because their structure is constantly changing, which means they do not have a characteristic crystal structure. A true mineral must be solid.

3. Formed by inorganic processes: Any material produced through organic activity – such as leaves, bones, peat, shell, or soft animal tissue – is not considered a mineral. Most fossils, although once living, have had their tissues completely replaced by inorganic processes after burial; thus, they are considered to be composed of minerals as well.

4. Specific chemical composition: Most minerals exist as chemical compounds whose compositions can be expressed using a chemical formula. The chemical formula of salt, or halite, is NaCl, meaning each molecule of salt consists of one sodium atom (Na) and one chlorine atom (Cl). Other common minerals have much more complicated formulas, such as muscovite (KAl2(AlSi3O10)(OH)2). A few minerals, such as graphite, consist of only one type of atom (carbon, in this case); therefore, the chemical formula for graphite is written simply as C. All minerals are defined by their chemical composition. If we tried to change the composition of muscovite by replacing the aluminum with iron and magnesium, for instance, we would end up with a totally new and different mineral called biotite.
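The idea that a formula specifies exact atom counts can be made concrete with a short sketch. This minimal parser handles only flat formulas such as NaCl or SiO2; nested groups like those in muscovite's formula would need more work:

```python
import re

def composition(formula):
    """Count atoms in a simple chemical formula without parentheses,
    e.g. 'SiO2' or 'NaCl'. Each token is an element symbol (capital letter,
    optional lowercase letter) followed by an optional count."""
    counts = {}
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + (int(n) if n else 1)
    return counts

print(composition("NaCl"))  # {'Na': 1, 'Cl': 1} - one sodium, one chlorine
print(composition("SiO2"))  # {'Si': 1, 'O': 2}
print(composition("C"))     # {'C': 1} - graphite, a single element
```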

On the other hand, many minerals do contain impurities, and these impurities can vary. Quartz, for example, has the chemical formula SiO2 and generally does not have any color in its pure form. The presence of a minute amount of titanium (Ti), however, causes the slight pinkish coloration present in rose quartz, as seen in Figure 1. The amount of titanium relative to the amount of silicon and oxygen is on the order of parts per million, however, so this is considered an impurity rather than a change in the chemical composition. In other words, rose quartz is still quartz. Similarly, the gemstone amethyst is a form of quartz that is colored pale to deep purple by the presence of the impurity iron (Fe).

Figure 1: An example of rose quartz, colored by trace amounts of titanium.

It was not until the 1900s, 350 years after Agricola's book, that scientists were able to determine the specific chemical composition of minerals. The invention of the mass spectrometer, ever more powerful microscopes, and the use of diffraction techniques allowed the kind of highly detailed analysis that caused the science of mineralogy to flourish.

5. Characteristic crystal structure: Nicolaus Steno, a Danish contemporary of Isaac Newton, made an important contribution to mineralogy in 1669 when he noted that the angles between faces (or sides) of quartz crystals were constant, no matter how big the crystals were or where they had formed. Today, we know that Steno's Law of Interfacial Angles concerning the external appearance of crystals reflects a regular, internal arrangement of atoms. The angles are constant between faces on quartz crystals because every single quartz crystal is made of the same atoms: one atom of silicon for every two atoms of oxygen, written with the chemical formula SiO2.

The chemical composition of a mineral is reflected internally in a regular, repeating arrangement of atoms, called the crystal structure of the mineral. The crystal structure of halite is shown in Figure 2. The internal structure (shown on the left) is reflected in a generally consistent external crystal form (shown on the right), as noted by Steno. The cubic shape of salt crystals very clearly reflects the right-angle bonds between the Na and Cl atoms in its atomic structure (see our Chemical Bonding module).

Figure 2a: A sodium chloride crystal.

Figure 2b: The cubic shape of salt crystals results from the regular arrangement of atoms forming the crystal.

Most importantly, this structure repeats itself. As the halite crystal is broken into smaller and smaller pieces, it retains its cubic structure. Take a look at a dash of table salt under a microscope and you will confirm that this is the case.
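The repeating pattern is easy to model. In this hypothetical sketch of halite, Na+ and Cl- alternate on a cubic grid by coordinate parity, so every piece cut from the grid shows the same arrangement (grid spacing and size are illustrative only):

```python
# A minimal model of halite's repeating cubic structure: ions alternate on a
# simple cubic grid so that each ion's nearest neighbors are of the opposite
# type. Breaking the crystal anywhere yields the same cubic pattern.
def halite_lattice(n):
    """Return a list of (x, y, z, ion) points for an n x n x n grid."""
    return [(x, y, z, "Na" if (x + y + z) % 2 == 0 else "Cl")
            for x in range(n) for y in range(n) for z in range(n)]

lattice = halite_lattice(4)
# Any sub-cube of this lattice repeats the same alternation, which is the
# structural reason a broken halite crystal retains its cubic form.
```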


The graphite-diamond mineral pair is an extreme example of the importance of crystal structure. These two very different minerals have exactly the same chemical formula (C), but the crystal structure of the two minerals is very different. In graphite, carbon atoms are bonded together along a flat plane, as shown in Figure 3. These sheets of carbon are held together only by weak attractive forces, which are easily broken, allowing the sheets to slide past one another. Thus graphite is a soft, slippery mineral that is often used as a lubricant in machines (see Figure 3b). When graphite is rubbed against another material, such as a piece of paper, it leaves a trail of small sheets that have broken free; thus it is also used in pencils.

Figure 3a: The internal structure of graphite shows strong bonds within planes and weak forces between them.

Figure 3b: Graphite has a metallic sheen, is soft, and can be easily broken into thin sheets.

In diamond, by comparison, every single carbon atom is bonded strongly to four surrounding carbon atoms in a 3-dimensional structure (see Figure 4a). This structure results in one of the hardest natural substances on the planet (see Figure 4b), a property that contributes to its value. The structure of each of these minerals is crucial to determining their physical properties.

Figure 4a: The internal structure of diamond shows equally strong bonds in all directions.

Figure 4b: An uncut diamond crystal is clear and is the hardest substance known.

Chemical composition and crystal structure are the most important factors in determining the properties of a mineral, including shape, density, hardness, and color. Geologists use these properties to identify which minerals are present in rocks. Hardness and fracture characteristics can be easily determined in the field with a small magnifying lens and a hammer, allowing for rapid identification of the mineral.

The internal atomic structure of graphite and diamond, shown in Figures 3 and 4, explains the properties of the two minerals.


By identifying the minerals present in a given rock, geologists can begin to understand the history of that rock. Some minerals form only when magma erupts out of a volcano and cools, others form only deep within Earth's crust under tremendous heat and pressure, and still others form only at the surface through evaporation. The basalt that erupts out of the volcanoes in Hawaii, for example, contains olivine, a mineral that forms only within Earth's mantle at depths greater than 70 km. This tells us that the source of the magma in the Hawaiian Islands is very deep. Sediment cores from the bottom of the Mediterranean Sea contain layers of gypsum and halite, two minerals that form only when water evaporates; this discovery led geologists to the conclusion that the Mediterranean Sea had dried up repeatedly in the past.

Identifying minerals on other planets has also led to a greater understanding of our solar system. Hematite is a mineral that forms most commonly on Earth's surface in the presence of water. It is, essentially, rust, and it forms during weathering of iron-bearing minerals. The discovery of hematite "blueberries" on Mars was part of the evidence that led geologists to conclude that there once was liquid water on the planet (see the News and Events links under the Resources tab).

The study of minerals began with mining, and we still use our knowledge of minerals to find important economic deposits. But our understanding of mineral composition and structure has become essential to many other areas of study as well. The environmental remediation of mines, the exploration of other planets and search for extraterrestrial life, and the study of the geologic history on our own planet are all areas that require knowledge of minerals and their sources.

The study of minerals provides a window into the history of Earth and other planets in our solar system. This first module in a three-part series describes the history of our understanding of minerals and then defines a mineral, focusing on chemical composition and structure.

Key Concepts

  • Minerals have specific chemical compositions, with a characteristic chemical structure.

  • Minerals are solids that are formed naturally through inorganic processes.

  • Chemical composition and crystal structure determine a mineral's properties, including density, shape, hardness, and color.

  • Because each mineral forms under specific conditions, examining minerals helps scientists understand the history of earth and the other planets within our solar system.

Anne E. Egger, Ph.D. “Defining Minerals” Visionlearning Vol. EAS-2 (6), 2005.


Earth Cycles

by John Arthur Harrison, Ph.D.

N2 → NH4+

Nitrogen (N) is an essential component of DNA, RNA, and proteins, the building blocks of life. All organisms require nitrogen to live and grow. Although the majority of the air we breathe is N2, most of the nitrogen in the atmosphere is unavailable for use by organisms. This is because the strong triple bond between the N atoms in N2 molecules makes it relatively inert, or unreactive, whereas organisms need reactive nitrogen to be able to incorporate it into cells. In order for plants and animals to be able to use nitrogen, N2 gas must first be converted to a more chemically available form such as ammonium (NH4+), nitrate (NO3-), or organic nitrogen (e.g., urea, which has the formula (NH2)2CO). The inert nature of N2 means that biologically available nitrogen is often in short supply in natural ecosystems, limiting plant growth.

Nitrogen is an incredibly versatile element, existing in both inorganic and organic forms as well as many different oxidation states. The movement of nitrogen between the atmosphere, biosphere, and geosphere in different forms is called the nitrogen cycle (Figure 1), one of the major biogeochemical cycles. Similar to the carbon cycle, the nitrogen cycle consists of various reservoirs of nitrogen and processes by which those reservoirs exchange nitrogen (note the arrows in the figure). (See The Carbon Cycle module for more information.)

Figure 1: The nitrogen cycle. Yellow arrows indicate human sources of nitrogen to the environment. Red arrows indicate processes in which microorganisms participate in the transformation of nitrogen. Blue arrows indicate physical forces acting on nitrogen, and green arrows indicate natural processes affecting the form and fate of nitrogen that do not involve microbes.

Five main processes cycle nitrogen through the biosphere, atmosphere, and geosphere: nitrogen fixation, nitrogen uptake through organismal growth, nitrogen mineralization through decay, nitrification, and denitrification. Microorganisms, particularly bacteria, play major roles in all of the principal nitrogen transformations. Because these processes are microbially mediated, or controlled by microorganisms, these nitrogen transformations tend to occur faster than geological processes like plate motion, a very slow, purely physical process that is a part of the carbon cycle. Instead, rates are affected by environmental factors that influence microbial activity, such as temperature, moisture, and resource availability.


Figure 2: Part of a clover root system bearing naturally occurring nodules of Rhizobium, bacteria that can transform atmospheric nitrogen through the process of nitrogen fixation. Each nodule is about 2-3 mm long. Image courtesy of http://helios.bto.ed.ac.uk/bto/microbes/nitrogen.htm. image © The Microbial World

Nitrogen fixation is the process wherein N2 is converted to ammonium, or NH4+. This is the only way that organisms can attain nitrogen directly from the atmosphere; the few that can do this are called nitrogen-fixing organisms. Certain bacteria, including those among the genus Rhizobium, are able to fix nitrogen (or convert it to ammonium) through metabolic processes, analogous to the way mammals convert oxygen to CO2 when they breathe. Nitrogen-fixing bacteria often form symbiotic relationships with host plants. This symbiosis is well known to occur in the legume family of plants (e.g., beans, peas, and clover). In this relationship, nitrogen-fixing bacteria inhabit legume root nodules (Figure 2) and receive carbohydrates and a favorable environment from their host plant in exchange for some of the nitrogen they fix. There are also nitrogen-fixing bacteria that exist without plant hosts, known as free-living nitrogen fixers. In aquatic environments, blue-green algae (really bacteria called cyanobacteria) are important free-living nitrogen fixers.

In addition to nitrogen-fixing bacteria, high-energy natural events such as lightning, forest fires, and even hot lava flows can cause the fixation of smaller, but significant, amounts of nitrogen. The high energy of these natural phenomena can break the triple bonds of N2 molecules, thereby making individual N atoms available for chemical transformation.

Within the last century, humans have become as important a source of fixed nitrogen as all natural sources combined. Burning fossil fuels, using synthetic nitrogen fertilizers, and cultivating legumes all fix nitrogen. Through these activities, humans have more than doubled the amount of fixed nitrogen that is pumped into the biosphere every year (Figure 3), the consequences of which are discussed below.

Figure 3: Recent increases in anthropogenic N fixation in relation to “natural” N fixation. Modified from Vitousek, P. M.. and Matson, P. A. (1993). Agriculture, the global nitrogen cycle, and trace gas flux. The Biogeochemistry of Global Change: Radiative Trace Gases. R. S. Oremland. New York, Chapman and Hall: 193-208.

NH4+ → Organic N

The ammonium (NH4+) produced by nitrogen-fixing bacteria is usually quickly taken up by a host plant, the bacteria itself, or another soil organism and incorporated into proteins and other organic nitrogen compounds, like DNA. When organisms nearer the top of the food chain (like us!) eat, we are taking up nitrogen that has been fixed initially by nitrogen-fixing bacteria.

Organic N → NH4+

After nitrogen is incorporated into organic matter, it is often converted back into inorganic nitrogen by a process called nitrogen mineralization, otherwise known as decay. When organisms die, decomposers (such as bacteria and fungi) consume the organic matter, driving the process of decomposition. During this process, a significant amount of the nitrogen contained within the dead organism is converted to ammonium. Once in the form of ammonium, nitrogen is available for use by plants or for further transformation into nitrate (NO3-) through the process called nitrification.

NH4+ → NO3-

Some of the ammonium produced by decomposition is converted to nitrate (NO3-) via a process called nitrification. The bacteria that carry out this reaction gain energy from it. Nitrification requires the presence of oxygen, so nitrification can happen only in oxygen-rich environments like circulating or flowing waters and the surface layers of soils and sediments. The process of nitrification has some important consequences. Ammonium ions (NH4+) are positively charged and therefore stick (are sorbed) to negatively charged clay particles and soil organic matter. The positive charge prevents ammonium nitrogen from being washed out of the soil (or leached) by rainfall. In contrast, the negatively charged nitrate ion is not held by soil particles and so can be washed out of the soil, leading to decreased soil fertility and nitrate enrichment of downstream surface and groundwater.

NO3- → N2 + N2O

Through denitrification, oxidized forms of nitrogen such as nitrate (NO3-) and nitrite (NO2-) are converted to dinitrogen (N2) and, to a lesser extent, nitrous oxide gas (N2O). Denitrification is an anaerobic process that is carried out by denitrifying bacteria, which convert nitrate to dinitrogen in the following sequence:

NO3- → NO2- → NO → N2O → N2.

Nitric oxide and nitrous oxide are gases that have environmental impacts. Nitric oxide (NO) contributes to smog, and nitrous oxide (N2O) is an important greenhouse gas, thereby contributing to global climate change.

Once converted to dinitrogen, nitrogen is unlikely to be reconverted to a biologically available form because it is a gas and is rapidly lost to the atmosphere. Denitrification is the only nitrogen transformation that removes nitrogen from ecosystems (essentially irreversibly), and it roughly balances the amount of nitrogen fixed by the nitrogen fixers described above.
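The stepwise reduction sequence above can be written out as data, as in this minimal sketch (species names as in the text):

```python
# The denitrification chain from the text: each step is a microbially
# mediated reduction. Intermediates like NO and N2O can escape to the
# atmosphere before the chain reaches inert N2.
DENITRIFICATION = ["NO3-", "NO2-", "NO", "N2O", "N2"]

def next_form(species):
    """Return the next species in the denitrification chain, or None at N2."""
    i = DENITRIFICATION.index(species)
    return DENITRIFICATION[i + 1] if i + 1 < len(DENITRIFICATION) else None

form = "NO3-"
while form is not None:
    print(form)  # prints NO3-, NO2-, NO, N2O, N2 in order
    form = next_form(form)
```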


Early in the 20th century, a German scientist named Fritz Haber figured out how to short-circuit the nitrogen cycle by fixing nitrogen chemically at high temperatures and pressures, creating fertilizers that could be added directly to soil. This technology spread rapidly over the 20th century, and, along with the advent of new crop varieties, the use of synthetic nitrogen fertilizers led to an enormous boom in agricultural productivity. This agricultural productivity has helped us to feed a rapidly growing world population, but the increase in nitrogen fixation has had some negative consequences as well. While the consequences are perhaps not as obvious as an increase in global temperatures (see our Data Analysis and Interpretation module) or a hole in the ozone layer (see The Practice of Science module), they are just as serious and potentially harmful for humans and other organisms.

Why? Not all of the nitrogen fertilizer applied to agricultural fields stays to nourish crops. Some is washed off of agricultural fields by rain or irrigation water, where it leaches into surface water or groundwater and can accumulate. In groundwater that is used as a drinking water source, excess nitrogen can lead to cancer in humans and respiratory distress in infants. The US Environmental Protection Agency has established a standard for nitrogen in drinking water of 10 mg per liter nitrate-N. Unfortunately, many systems (particularly in agricultural areas) already exceed this level. By comparison, nitrate levels in waters that have not been altered by human activity are rarely greater than 1 mg/L. In surface waters, added nitrogen can lead to nutrient over-enrichment, particularly in coastal waters receiving the inflow from polluted rivers. This nutrient over-enrichment, also called eutrophication, has been blamed for increased frequencies of coastal fish-kill events, increased frequencies of harmful algal blooms, and species shifts within coastal ecosystems.
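The drinking-water comparison above is simple to express in code. The EPA limit of 10 mg/L nitrate-N is from the text; the sample sites and concentrations below are made up for illustration:

```python
# Compare sample nitrate-N concentrations (mg/L) against the EPA drinking
# water standard of 10 mg/L cited in the text. Sample values are invented.
EPA_NITRATE_N_LIMIT = 10.0  # mg/L nitrate-N

def exceeds_standard(nitrate_mg_per_l):
    """True if a sample exceeds the EPA nitrate-N drinking water standard."""
    return nitrate_mg_per_l > EPA_NITRATE_N_LIMIT

samples = {"pristine stream": 0.4, "suburban well": 6.2, "farm-belt well": 14.8}
for site, conc in samples.items():
    status = "EXCEEDS standard" if exceeds_standard(conc) else "ok"
    print(f"{site}: {conc} mg/L -> {status}")
```

Note that unaltered waters rarely exceed 1 mg/L, so the hypothetical "pristine stream" value sits well below the limit while the agricultural well exceeds it.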

Reactive nitrogen (like NO3- and NH4+) present in surface waters and soils can also enter the atmosphere as the smog component nitric oxide (NO) and as the greenhouse gas nitrous oxide (N2O). Eventually, this atmospheric nitrogen can be blown into nitrogen-sensitive terrestrial environments, causing long-term changes. For example, nitrogen oxides comprise a significant portion of the acidity in acid rain, which has been blamed for forest death and decline in parts of Europe and the northeastern United States. Increases in atmospheric nitrogen deposition have also been blamed for more subtle shifts in dominant species and ecosystem function in some forest and grassland ecosystems. For example, on nitrogen-poor serpentine soils of northern Californian grasslands, plant communities have historically been limited to native species that can survive without a lot of nitrogen. There is now some evidence that elevated levels of atmospheric N input from nearby industrial and agricultural development have allowed invasion of these ecosystems by non-native plants. As noted earlier, NO is also a major factor in the formation of smog, which is known to cause respiratory illnesses like asthma in both children and adults.

Currently, much research is devoted to understanding the effects of nitrogen enrichment in the air, groundwater, and surface water. Scientists are also exploring alternative agricultural practices that will sustain high productivity while decreasing the negative impacts caused by fertilizer use. These studies not only help us quantify how humans have altered the natural world, but increase our understanding of the processes involved in the nitrogen cycle as a whole.

Although the majority of the air we breathe is N2, molecular nitrogen cannot be used directly to sustain life. This module provides an overview of the nitrogen cycle, one of the major biogeochemical cycles. The five main processes in the cycle are described. The module explores human impact on the nitrogen cycle, resulting in not only increased agricultural production but also smog, acid rain, climate change, and ecosystem upsets.

Key Concepts

  • The nitrogen cycle is the set of biogeochemical processes by which nitrogen undergoes chemical reactions, changes form, and moves through different reservoirs on Earth, including living organisms.

  • Nitrogen is required for all organisms to live and grow because it is the essential component of DNA, RNA, and protein. However, most organisms cannot use atmospheric nitrogen, the largest reservoir.

  • The five processes in the nitrogen cycle – fixation, uptake, mineralization, nitrification, and denitrification – are all driven by microorganisms.

  • Humans influence the global nitrogen cycle primarily through the use of nitrogen-based fertilizers.

  • HS-C5.2, HS-ESS2.A1, HS-ESS3.C1, HS-LS1.C3

John Arthur Harrison, Ph.D. “The Nitrogen Cycle” Visionlearning Vol. EAS-2 (4), 2003.


Earth Cycles

by John Arthur Harrison, Ph.D.

Carbon is the fourth most abundant element in the universe and is absolutely essential to life on Earth. In fact, carbon constitutes the very definition of life, as its presence or absence helps define whether a molecule is considered to be organic or inorganic. Every organism on Earth needs carbon either for structure, energy, or, as is the case of humans, for both. Discounting water, you are about half carbon. Additionally, carbon is found in forms as diverse as the gas carbon dioxide (CO2) and solids like limestone (CaCO3), wood, plastic, diamonds, and graphite.

The movement of carbon, in its many forms, between the atmosphere, oceans, biosphere, and geosphere is described by the carbon cycle, illustrated in Figure 1. This cycle consists of several carbon storage reservoirs and the processes by which carbon moves between them. Carbon reservoirs include the atmosphere, the oceans, vegetation, rocks, and soil; these are shown in black text along with their approximate carbon capacities in Figure 1. The purple numbers and arrows in Figure 1 show the fluxes between these reservoirs, or the amount of carbon that moves in and out of the reservoirs per year. If more carbon enters a pool than leaves it, that pool is considered a net carbon sink. If more carbon leaves a pool than enters it, that pool is considered a net carbon source.
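The sink-versus-source definition amounts to a one-line calculation on a reservoir's fluxes. The flux numbers in this sketch are illustrative only, not the values from Figure 1:

```python
# Classify a carbon reservoir as a net sink or source from its annual
# fluxes, following the definition in the text. Units are Gt carbon/year.
def classify(inflow_gt_per_yr, outflow_gt_per_yr):
    """Compare carbon entering and leaving a pool over a year."""
    net = inflow_gt_per_yr - outflow_gt_per_yr
    if net > 0:
        return "net carbon sink"    # more carbon enters than leaves
    if net < 0:
        return "net carbon source"  # more carbon leaves than enters
    return "in balance"

print(classify(122.0, 120.0))  # net carbon sink
print(classify(90.0, 92.0))    # net carbon source
```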

Figure 1: A cartoon of the global carbon cycle. Pools (in black) are gigatons (1Gt = 1x109 Tons) of carbon, and fluxes (in purple) are Gt carbon per year. Illustration courtesy NASA Earth Science Enterprise. image © NASA

The global carbon cycle, one of the major biogeochemical cycles, can be divided into geological and biological components. The geological carbon cycle operates on a timescale of millions of years, whereas the biological carbon cycle operates on a timescale of days to thousands of years.

The geological component of the carbon cycle is where it interacts with the rock cycle in the processes of weathering and dissolution, precipitation of minerals, burial and subduction, and volcanic eruptions (see The Rock Cycle module for more information). In the atmosphere, carbonic acid forms by the reaction of carbon dioxide (CO2) with water. As this weakly acidic water reaches the surface as rain, it reacts with minerals at Earth's surface, slowly dissolving them into their component ions through the process of chemical weathering. These component ions are carried in surface waters like streams and rivers eventually to the ocean, where they precipitate out as minerals like calcite (CaCO3). Through continued deposition and burial, this calcite sediment forms the rock called limestone.

This cycle continues as plate motion carries the seafloor beneath continental margins in the process of subduction. As seafloor carbon is pushed deeper into the Earth by tectonic forces, it heats up, eventually melts, and can rise back up to the surface, where it is released as CO2 and returned to the atmosphere. This return to the atmosphere can occur violently through volcanic eruptions, or more gradually in seeps, vents, and CO2-rich hotsprings. Tectonic uplift can also expose previously buried limestone. One example of this occurs in the Himalayas, where some of the world's highest peaks are formed of material that was once at the bottom of the ocean. Weathering, subduction, and volcanism control atmospheric carbon dioxide concentrations over time periods of hundreds of millions of years.

Biology plays an important role in the movement of carbon between land, ocean, and atmosphere through the processes of photosynthesis and respiration. Virtually all multicellular life on Earth depends on the production of sugars from sunlight and carbon dioxide (photosynthesis) and the metabolic breakdown (respiration) of those sugars to produce the energy needed for movement, growth, and reproduction. Plants take in carbon dioxide (CO2) from the atmosphere during photosynthesis, and release CO2 back into the atmosphere during respiration through the following chemical reactions:

Respiration:

C6H12O6 (organic matter) + 6O2 → 6CO2 + 6H2O + energy

Photosynthesis:

energy (sunlight) + 6CO2 + 6H2O → C6H12O6 + 6O2

Through photosynthesis, green plants use solar energy to turn atmospheric carbon dioxide into carbohydrates (sugars). Plants and animals use these carbohydrates (and other products derived from them) through a process called respiration, the reverse of photosynthesis. Respiration releases the energy contained in sugars for use in metabolism and changes carbohydrate "fuel" back into carbon dioxide, which is in turn released back to the atmosphere. The amount of carbon taken up by photosynthesis and released back to the atmosphere by respiration each year is about 1,000 times greater than the amount of carbon that moves through the geological cycle on an annual basis.

On land, the major exchange of carbon with the atmosphere results from photosynthesis and respiration. During daytime in the growing season, leaves absorb sunlight and take up carbon dioxide from the atmosphere. At the same time, plants, animals, and soil microbes consume the carbon in organic matter and return carbon dioxide to the atmosphere. Photosynthesis stops at night, when the sun cannot provide the driving energy for the reaction, though respiration continues. This imbalance between the two processes is reflected in seasonal changes in atmospheric CO2 concentrations. During winter in the northern hemisphere, photosynthesis ceases as many plants lose their leaves, but respiration continues. This condition leads to an increase in atmospheric CO2 concentrations during the northern hemisphere winter. With the onset of spring, however, photosynthesis resumes and atmospheric CO2 concentrations are reduced. This cycle is reflected in the monthly means (the light blue line) of atmospheric carbon dioxide concentrations shown in Figure 2.
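
The seasonal rise and fall described above can be illustrated with a toy model of the atmospheric CO2 record: a slow linear trend plus an annual oscillation driven by northern-hemisphere photosynthesis and respiration. All of the numbers here (baseline, trend, amplitude, phase) are round illustrative assumptions, not values fitted to the real Mauna Loa data.

```python
import math

def toy_keeling(year):
    """Toy monthly-mean CO2 in ppm: linear trend plus seasonal cycle.

    `year` is a decimal year, e.g. 2000.5. Baseline, trend, amplitude,
    and phase are illustrative assumptions, not fitted values.
    """
    baseline = 315.0   # ppm, roughly where the record began in 1958
    trend = 1.5        # assumed ppm of increase per year
    amplitude = 3.0    # assumed ppm seasonal swing
    frac = year % 1.0  # fraction of the year elapsed
    # Peak in late NH spring, trough in early NH autumn: photosynthesis
    # draws CO2 down through the growing season, while respiration
    # dominates through the winter.
    seasonal = amplitude * math.cos(2 * math.pi * (frac - 0.37))
    return baseline + trend * (year - 1958.0) + seasonal

# Seasonal oscillation: spring concentration exceeds autumn concentration
spring = toy_keeling(2000 + 4.5 / 12)   # mid-May
autumn = toy_keeling(2000 + 9.5 / 12)   # mid-October
print(spring > autumn)  # True

# Long-term trend: any given month is higher a decade later
print(toy_keeling(2010.0) > toy_keeling(2000.0))  # True
```

Plotting this function over a few decades reproduces the sawtooth-on-a-ramp shape of the Keeling curve in Figure 2: the oscillation is the biological cycle, the ramp is the human addition.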

Figure 2: The "Keeling Curve," a long-term record of atmospheric CO2 concentration measured at the Mauna Loa Observatory (Keeling et al.). Although the annual oscillations represent natural, seasonal variations, the long-term increase means that concentrations are higher than they have been in 400,000 years (see text and Figure 3). Graphic courtesy of NASA's Earth Observatory. image © NASA

In the oceans, phytoplankton (microscopic marine plants that form the base of the marine food chain) use carbon to make shells of calcium carbonate (CaCO3). When phytoplankton die, their shells settle to the bottom of the ocean and are buried in the sediments. Compressed over time as burial continues, the shells of phytoplankton and other creatures are often eventually transformed into limestone. Additionally, under certain geological conditions, organic matter can be buried and over time form deposits of the carbon-containing fuels coal and oil; it is this organic matter, not the carbonate shells, that is transformed into fossil fuels. Both limestone formation and fossil fuel formation are biologically controlled processes and represent long-term sinks for atmospheric CO2.

Recently, scientists have studied both short- and long-term measurements of atmospheric CO2 levels. Charles Keeling, an oceanographer at the Scripps Institution of Oceanography, is responsible for creating the longest continuous record of atmospheric CO2 concentrations, taken at the Mauna Loa Observatory in Hawaii. His data (now widely known as the "Keeling curve," shown in Figure 2) revealed that human activities are significantly altering the natural carbon cycle. Since the onset of the industrial revolution about 150 years ago, human activities such as the burning of fossil fuels and deforestation have accelerated, and both have contributed to a long-term rise in atmospheric CO2. Burning oil and coal releases carbon into the atmosphere far more rapidly than natural processes remove it, and this imbalance causes atmospheric carbon dioxide concentrations to increase. In addition, by clearing forests, we reduce the ability of photosynthesis to remove CO2 from the atmosphere, also resulting in a net increase. Because of these human activities, atmospheric carbon dioxide concentrations are higher today than they have been over the last half-million years or longer.

Because CO2 increases the atmosphere's ability to hold heat, it has been called a "greenhouse gas." Scientists believe that the increase in CO2 is already causing important changes in the global climate. Many attribute the observed 0.6 degree C increase in global average temperature over the past century mainly to increases in atmospheric CO2. Without substantive changes in global patterns of fossil fuel consumption and deforestation, warming trends are likely to continue. The best scientific estimate is that global mean temperature will increase between 1.4 and 5.8 degrees C over the next century as a result of increases in atmospheric CO2 and other greenhouse gases. This kind of increase in global temperature would cause a significant rise in average sea-level (0.09-0.88 meters), exposing low-lying coastal cities or cities located by tidal rivers, such as New Orleans, Portland, Washington, and Philadelphia, to increasingly frequent and severe floods. Glacial retreat and species range shifts are also likely to result from global warming, and it remains to be seen whether relatively immobile species such as trees can shift their ranges fast enough to keep pace with warming.

Even without the changes in climate, however, increased concentrations of CO2 could have an important impact on patterns of plant growth worldwide. Because some species of plants respond more favorably to increases in CO2 than others, scientists believe we may see pronounced shifts in plant species as a result of increasing atmospheric CO2 concentrations, even without any change in temperature. For example, under elevated CO2 conditions, shrubs are thought to respond more favorably than certain grass species due to their slightly different photosynthetic pathway. Because of this competitive inequality, some scientists have hypothesized that grasslands will be invaded by more CO2-responsive shrubby species as CO2 increases.

Figure 3: CO2 over the past 140,000 years as seen in an ice core and in the modern Mauna Loa record. The red line represents predicted concentrations. Figure courtesy of Rebecca Dorsey, University of Oregon.

In an attempt to understand whether recently observed changes in the global carbon cycle are a new phenomenon, or have instead occurred throughout geologic history, scientists have devoted considerable effort to developing methods for understanding Earth's past atmosphere and climate. These techniques include analysis of gas bubbles trapped in ice, tree rings, and ocean and lake floor sediments for clues about past climates and atmospheres. Together, these techniques suggest that over the past 20 million years, the Earth's climate has oscillated between relatively warm and relatively cold conditions called interglacial and glacial periods. During interglacial periods, atmospheric CO2 concentrations were relatively high, and during glacial periods, CO2 concentrations were relatively low. We are currently in an interglacial warm period, and human activities are pushing CO2 concentrations higher than they have been for hundreds of thousands of years (Figure 3).

Understanding and mitigating the negative impacts of atmospheric CO2 enrichment constitute two of the most central challenges that environmental scientists and policy makers currently face. In order to address this issue, the scientific community has formed the Intergovernmental Panel on Climate Change (IPCC), an international, interdisciplinary consortium composed of thousands of climate experts collaborating to produce consensus reports on climate change science. Many nations have agreed to the conditions specified by the Kyoto Protocol, a multilateral treaty aimed at averting the negative impacts associated with human-induced climate change. The United States, which is currently responsible for approximately one quarter of global CO2 emissions, has so far declined to participate in the Kyoto Protocol.

Carbon, the fourth most abundant element in the universe, moves between the atmosphere, oceans, biosphere, and geosphere in what is called the carbon cycle. This module provides an overview of the global carbon cycle, one of the major biogeochemical cycles. The module explains geological and biological components of the cycle. Major sources and sinks of carbon are discussed, as well as the impact of human activities on global carbon levels.

Key Concepts

  • The presence of carbon determines whether something is organic or inorganic; all living things require carbon to live.

  • Carbon cycles through the ecosystem in various ways, from photosynthesis and respiration to weathering and other geologic processes.

  • Many factors, such as seasons and human activities, influence the concentration of carbon in the global atmosphere.

  • HS-C1.5, HS-C5.2, HS-C7.2, HS-ESS2.A1, HS-ESS2.A3, HS-ESS2.D3, HS-LS2.B3

John Arthur Harrison, Ph.D. “The Carbon Cycle” Visionlearning Vol. EAS-2 (3), 2003.

Earth Cycles

by Anne E. Egger, Ph.D.

We all see changes in the landscape around us, but your view of how fast things change is probably determined by where you live. If you live near the coast, you see daily, monthly, and yearly changes in the shape of the coastline. Deep in the interior of continents, change is less evident – rivers may flood and change course only every 100 years or so. If you live near an active fault zone or volcano, you experience infrequent but catastrophic events like earthquakes and eruptions.

Throughout human history, different groups of people have held to a wide variety of beliefs to explain these changes. Early Greeks ascribed earthquakes to the god Poseidon expressing his wrath, an explanation that accounted for their unpredictability. The Navajo view processes on the surface as interactions between opposite but complementary entities: the sky and the Earth. Most 17th century European Christians believed that the Earth was essentially unchanged from the time of creation. When naturalists found fossils of marine creatures high in the Alps, many devout believers interpreted the Old Testament literally and suggested that the perched fossils were a result of the biblical Noah's flood.

In the mid-1700s, a Scottish physician named James Hutton began to challenge the literal interpretation of the Bible by making detailed observations of rivers near his home. Every year, these rivers would flood, depositing a thin layer of sediment in the floodplain. It would take many millions of years, reasoned Hutton, to deposit a hundred meters of sediment in this fashion, not just the few weeks allowed by the Biblical flood. Hutton called this the principle of uniformitarianism: Processes that occur today are the same ones that occurred in the past to create the landscape and rocks as we see them now. By comparison, the strict biblical interpretation, common at the time, suggested that the processes that had created the landscape were complete and no longer at work.

Figure 1: This image shows how James Hutton first envisioned the rock cycle.

Hutton argued that in order for uniformitarianism to work over very long periods of time, Earth materials had to be constantly recycled. If there were no recycling, mountains would erode (or continents would decay, in Hutton's terms), the sediments would be transported to the sea, and eventually the surface of the Earth would be perfectly flat and covered with a thin layer of water. Instead, those sediments once deposited in the sea must be frequently lifted back up to form new mountain ranges. Recycling was a radical departure from the prevailing notion of a largely unchanging Earth. As shown in Figure 1, Hutton first conceived of the rock cycle as a process driven by Earth's internal heat engine. Heat caused sediments deposited in basins to be converted to rock, heat caused the uplift of mountain ranges, and heat contributed in part to the weathering of rock. While many of Hutton's ideas about the rock cycle were either vague (such as "conversion to rock") or inaccurate (such as heat causing decay), he made the important first step of putting diverse processes together into a simple, coherent theory.

Hutton's ideas were not immediately embraced by the scientific community, largely because he was reluctant to publish. He was a far better thinker than writer – once he did get into print in 1788, few people were able to make sense of his highly technical and confusing writing (to learn more about Hutton and see a sample of his writing, visit the Resources for this module). His ideas became far more accessible after his death with the publication of John Playfair's "Illustrations of the Huttonian Theory of the Earth" (1802) and Charles Lyell's "Principles of Geology" (1830). By that time, the scientific revolution in Europe had led to widespread acceptance of the once-radical concept that the Earth was constantly changing.

A far more complete understanding of the rock cycle developed with the emergence of plate tectonics theory in the 1960s (see our Plate Tectonics I module). Our modern concept of the rock cycle is fundamentally different from Hutton's in a few important aspects: We now largely understand that plate tectonic activity determines how, where, and why uplift occurs, and we know that heat is generated in the interior of the Earth through radioactive decay and moved out to the Earth's surface through convection. Together, uniformitarianism, plate tectonics, and the rock cycle provide a powerful lens for looking at the Earth, allowing scientists to look back into Earth history and make predictions about the future.

The rock cycle consists of a series of constant processes through which Earth materials change from one form to another over time. As with the water cycle and the carbon cycle, some processes in the rock cycle occur over millions of years and others occur much more rapidly. There is no real beginning or end to the rock cycle, but it is convenient to begin exploring it with magma. You may want to refer to the rock cycle schematic in Figure 2 and follow along in the sketch.

Figure 2: A schematic sketch of the rock cycle. In this sketch, boxes represent Earth materials and arrows represent the processes that transform those materials. The processes are named in bold next to the arrows. The two major sources of energy for the rock cycle are also shown; the sun provides energy for surface processes such as weathering, erosion, and transport, and the Earth's internal heat provides energy for processes like subduction, melting, and metamorphism. The complexity of the diagram reflects a real complexity in the rock cycle. Notice that there are many possibilities at any step along the way.

Magma, or molten rock, forms only at certain locations within the Earth, mostly along plate boundaries. (It is a common misconception that the entire interior of the Earth is molten, but this is not the case. See our Earth Structure module for a more complete explanation.) When magma cools, it crystallizes, much the same way that ice crystals develop when water is cooled. We see this process occurring in places like Iceland, where magma erupts out of a volcano and cools on the surface of the Earth, forming a rock called basalt on the flanks of the volcano (Figure 3). Most magma never makes it to the surface, however; instead, it cools within Earth's crust. Deep in the crust below Iceland's surface, the magma that doesn't erupt cools to form gabbro. Rocks that form from cooled magma are called igneous rocks: intrusive igneous rocks if they cool below the surface (like gabbro), extrusive igneous rocks if they cool above it (like basalt).

Figure 3: This picture shows a basaltic eruption of Pu'u O'o, on the flanks of the Kilauea volcano in Hawaii. The red material is molten lava, which turns black as it cools and crystallizes.

Rocks like basalt are exposed to the atmosphere immediately and begin to weather. Rocks that form below the Earth's surface, like gabbro, must be uplifted, and all of the overlying material must be removed through erosion, in order for them to be exposed. In either case, as soon as rocks are exposed at the Earth's surface, the weathering process begins: physical and chemical reactions caused by interaction with air, water, and biological organisms cause the rocks to break down. Once rocks are broken down, wind, moving water, and glaciers carry pieces of the rocks away through a process called erosion. Moving water is the most common agent of erosion: the muddy Mississippi, the Amazon, the Hudson, and the Rio Grande all carry tons of sediment weathered and eroded from the mountains of their headwaters to the ocean every year. The sediment carried by these rivers is deposited and continually buried in floodplains and deltas. In fact, the US Army Corps of Engineers is kept busy dredging the sediments out of the Mississippi in order to keep shipping lanes open (see Figure 4).

Figure 4: Photograph from space of the Mississippi Delta. The brown color shows the river sediments and where they are being deposited in the Gulf of Mexico. image © NASA

Under natural conditions, the pressure created by the weight of the younger deposits compacts the older, buried sediments. As groundwater moves through these sediments, minerals like calcite and silica precipitate out of the water and coat the sediment grains. These precipitants fill in the pore spaces between grains and act as cement, gluing individual grains together. The compaction and cementation of sediments creates sedimentary rocks like sandstone and shale, which are forming right now in places like the very bottom of the Mississippi delta.

Because deposition of sediments often happens in seasonal or annual cycles, we often see layers preserved in sedimentary rocks when they are exposed (Figure 5). In order for us to see sedimentary rocks, however, they need to be uplifted and exposed by erosion. Most uplift happens along plate boundaries where two plates are moving towards each other and causing compression. As a result, we see sedimentary rocks that contain fossils of marine organisms (and therefore must have been deposited on the ocean floor) exposed high up in the Himalaya Mountains – this is where the Indian plate is running into the Eurasian plate.

Figure 5: The Grand Canyon is famous for its exposures of great thicknesses of sedimentary rocks. image © Anne Egger

If sedimentary rocks or intrusive igneous rocks are not brought to the Earth's surface by uplift and erosion, they may experience even deeper burial and be exposed to high temperatures and pressures. As a result, the rocks begin to change. Rocks that have changed below Earth's surface due to exposure to heat, pressure, and hot fluids are called metamorphic rocks. Geologists often refer to metamorphic rocks as "cooked" because they change in much the same way that cake batter changes into a cake when heat is added. Cake batter and cake contain the same ingredients, but they have very different textures, just like sandstone, a sedimentary rock, and quartzite, its metamorphic equivalent. In sandstone, individual sand grains are easily visible and often can even be rubbed off; in quartzite, the edges of the sand grains are no longer visible, and the rock is difficult to break with a hammer, much less rub pieces off of with your hands.

Some of the processes within the rock cycle, like volcanic eruptions, happen very rapidly, while others happen very slowly, like the uplift of mountain ranges and weathering of igneous rocks. Importantly, there are multiple pathways through the rock cycle. Any kind of rock can be uplifted and exposed to weathering and erosion; any kind of rock can be buried and metamorphosed. As Hutton correctly theorized, these processes have been occurring for millions and billions of years to create the Earth as we see it: a dynamic planet.

The rock cycle is not just theoretical; we can see all of these processes occurring at many different locations and at many different scales all over the world. As an example, the Cascade Range in North America illustrates many aspects of the rock cycle within a relatively small area, as shown in Figure 6.

Figure 6: Cross-section through the Cascade Range in Washington state. Image modified from the Cascade Volcano Observatory, USGS.

The Cascade Range in the northwestern United States is located near a convergent plate boundary, where the Juan de Fuca plate, which consists mostly of basalt saturated with ocean water, is being subducted, or pulled underneath, the North American plate. As the plate descends deeper into the Earth, heat and pressure increase and the basalt is metamorphosed into a very dense rock called eclogite. All of the ocean water that had been contained within the basalt is released into the overlying rocks, but it is no longer cold ocean water: it too has been heated, and it contains high concentrations of dissolved minerals, making it highly reactive, or volatile. These volatile fluids lower the melting temperature of the rocks, causing magma to form below the surface of the North American plate near the plate boundary. Some of that magma erupts out of volcanoes like Mt. St. Helens, cooling to form a rock called andesite, and some cools beneath the surface, forming a similar rock called diorite.

Storms coming off of the Pacific Ocean cause heavy rainfall in the Cascades, weathering and eroding the andesite. Small streams carry the weathered pieces of the andesite to large rivers like the Columbia and eventually to the Pacific Ocean, where the sediments are deposited. Continual deposition of sediments near the deep oceanic trench results in the formation of sedimentary rocks like sandstone. Eventually, some sandstone is carried down into the subduction zone, and the cycle begins again (see the Experiment! section in the Resources for this module).

The rock cycle is inextricably linked not only to plate tectonics, but to other Earth cycles as well. Weathering, erosion, deposition, and cementation of sediments all require the presence of water, which moves in and out of contact with rocks through the hydrologic cycle; thus weathering happens much more slowly in a dry climate like the desert southwest than in the rainforest (see our module The Hydrologic Cycle for more information). Burial of organic sediments takes carbon out of the atmosphere, part of the long-term geological component of the carbon cycle (see our module The Carbon Cycle); many scientists today are exploring ways we might be able to take advantage of this process and bury additional carbon dioxide produced by the burning of fossil fuels (see News & Events in Resources). The uplift of mountain ranges dramatically affects global and local climate by blocking prevailing winds and inducing precipitation. The interactions between all of these cycles produce the wide variety of dynamic landscapes we see around the globe.

Earth’s materials are in constant flux. Some processes that shape the Earth happen quickly; others take millions of years. This module describes the rock cycle, including the historical development of the concept. The relationship between uniformitarianism, the rock cycle, and plate tectonics is explored in general and through the specific example of the Cascade Range in the Pacific Northwest.

Key Concepts

  • The rock cycle is the set of processes by which Earth materials change from one form to another over time.

  • The concept of uniformitarianism, which says that the same Earth processes at work today have occurred throughout geologic time, helped develop the idea of the rock cycle in the 1700s.

  • Processes in the rock cycle occur at many different rates.

  • The rock cycle is driven by interactions between plate tectonics and the hydrologic cycle.

  • HS-C5.2, HS-C7.1, HS-ESS2.A3

Anne E. Egger, Ph.D. “The Rock Cycle” Visionlearning Vol. EAS-2 (7), 2005.

Earth Cycles

by Anne E. Egger, Ph.D.

As recently as 12,000 years ago, you could walk from Alaska to Siberia without having to don a wetsuit. At that time, glaciers and ice sheets covered North America down to the Great Lakes and Cape Cod, though coastal areas generally remained ice-free. These extensive ice sheets occurred at a time when sea level was very low, exposing land where water now fills the Bering Strait. In fact, throughout Earth’s history, times of extensive glaciers correlate with low sea level and times when only minor ice sheets exist (like today) correlate with high sea levels. These correlations are due to the fact that the amount of water on Earth is constant, and is divided up between reservoirs in the oceans, in the air, and on the land. In addition, Earth’s water is constantly cycling through these reservoirs in a process called the hydrologic cycle. Both of these facts together lead us to the conclusion that more water stored in ice sheets means less water in the oceans.

Earth is the only planet in our solar system with extensive liquid water – other planets are too hot or too cold, too big or too small. Though Mars appears to have had water on its surface in the past and may still harbor liquid water deep below its surface, our oceans, rivers, and rain are unique as far as we know, and they are life-sustaining. Understanding the processes and reservoirs of the hydrologic cycle is fundamental to dealing with many issues, including pollution and global climate change.

As early as 800 BCE, Homer wrote in the Iliad of the ocean "from whose deeps every river and sea, every spring and well flows," suggesting the interconnectedness of all of Earth's water. It wasn't until the 17th century, however, that the poetic notion of a finite water cycle was demonstrated in the Seine River basin by two French physicists, Edmé Mariotte and Pierre Perrault, who independently determined that the snowpack in the river's headwaters was more than sufficient to account for the river's discharge. These two studies marked the beginning of hydrology, the science of water, and of the modern concept of the hydrologic cycle.

The hydrologic cycle can be thought of as a series of reservoirs, or storage areas, and a set of processes that cause water to move between those reservoirs (see Figure 1). The largest reservoir by far is the oceans, which hold about 97% of Earth’s water. The remaining 3% is the freshwater so important to our survival, but about 78% of that is stored in ice in Antarctica and Greenland. About 21% of freshwater on Earth is groundwater, stored in sediments and rocks below the surface of Earth. The freshwater that we see in rivers, streams, lakes, and rain is less than 1% of the freshwater on Earth and less than 0.1% of all the water on Earth.
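
The nested percentages above are easy to misread, so here is the arithmetic spelled out, using the round figures quoted in the text (97% of all water in the oceans; of the remaining freshwater, about 78% in ice, 21% in groundwater, and roughly 1% at the surface):

```python
# Fractions of ALL water on Earth, using the round figures from the text
ocean = 0.97
freshwater = 1.0 - ocean           # 3% of all water is fresh

# Fractions *of the freshwater* only
ice = 0.78                         # Antarctic and Greenland ice
groundwater = 0.21                 # stored in sediments and rocks
surface = 1.0 - ice - groundwater  # about 1% of freshwater: rivers, lakes, rain

# Express surface freshwater as a share of ALL water on Earth
surface_of_total = freshwater * surface
print(f"{surface_of_total:.2%} of all water")  # 0.03% -- less than 0.1%
```

Multiplying the two small fractions shows why surface freshwater, the water we see and use most directly, is such a tiny sliver of the global total.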

Figure 1: The hydrologic cycle. Arrows indicate volume of water that moves from reservoir to reservoir.

Water moves constantly between these reservoirs through the processes of evaporation, condensation and precipitation, surface and underground flow, and others. The driving force for the hydrologic cycle is the sun, which provides the energy needed for evaporation just as the flame of a gas stove provides the energy necessary to boil water and create steam. Water changes from a liquid state to a gaseous state as it evaporates from the oceans, lakes, streams, and soil (see our Water: Properties and Behavior module for a further explanation). Because the oceans are the largest reservoir of liquid water, that is where most evaporation occurs. The amount of water vapor in the air varies widely over time and from place to place; we feel these variations as humidity.

The presence of water vapor in the atmosphere is one of the things that makes Earth livable for us. In 1859, Irish naturalist John Tyndall began studying the thermal properties of the gases in Earth's atmosphere. He found that some gases, like carbon dioxide (CO2) and water vapor, trap heat in the atmosphere (a property commonly called the greenhouse effect), while other gases like nitrogen (N2) and argon (Ar) allow heat to escape to space. The presence of water vapor in the atmosphere helps keep surface air temperatures on Earth in a range from about -40° C to 55° C. Temperatures on planets without water vapor in the atmosphere, like Mars, can drop as low as -100° C.

Once water vapor is in the air, it circulates within the atmosphere. When an air parcel rises and cools, the water vapor condenses back to liquid water around particulates like dust, called condensation nuclei. Initially these condensation droplets are much smaller than raindrops and are not heavy enough to fall as precipitation. These tiny water droplets create clouds. As the droplets continue to circulate within the clouds, they collide and form larger droplets, which eventually become heavy enough to fall as rain, snow, or hail. Though the amount of precipitation varies widely over Earth's surface, evaporation and precipitation are globally balanced: if evaporation increases, precipitation also increases. Rising global temperature is one factor that can cause a worldwide increase in evaporation from the world's oceans, leading to higher overall precipitation.

Since oceans cover around 70% of Earth’s surface, most precipitation falls right back into the ocean and the cycle begins again. A portion of precipitation falls on land, however, and it takes one of several paths through the hydrologic cycle. Some water is taken up by soil and plants, some runs off into streams and lakes, some percolates into the groundwater reservoir, and some falls on glaciers and accumulates as glacial ice.

The amount of precipitation that soaks into the soil depends on several factors: the amount and intensity of the precipitation, the prior condition of the soil, the slope of the landscape, and the presence of vegetation. These factors can interact in sometimes surprising ways – a very intense rainfall onto very dry soil, typical of the desert southwest, often will not soak into the ground at all, creating flash-flood conditions. Water that does soak in becomes available to plants through soil moisture and groundwater (see Figure 2). Plants take up water through their root systems, which mostly draw water from soil moisture; the water is then pulled up through all parts of the plant and evaporates from the surface of the leaves, a process called transpiration. Water that soaks into the soil can also continue to percolate down through the soil profile below the water table into groundwater reservoirs, called aquifers. Aquifers are often mistakenly visualized as great underground lakes; in reality, groundwater saturates the pore spaces within sediments or rocks (see Figure 2).

Figure 2: Groundwater exists below the water table, which divides unsaturated soil, rock, and sediments from saturated.

Water that doesn’t soak into the soil collects and moves across the surface as runoff, eventually flowing into streams and rivers to get back to the ocean. Precipitation that falls as snow in glacial regions takes a somewhat different journey through the water cycle, accumulating at the head of glaciers and causing them to flow slowly down valleys.


The properties of water and the hydrologic cycle are largely responsible for the circulation patterns we see in the atmosphere and the oceans on Earth. Atmospheric and oceanic circulation are two of the major factors that determine the distribution of climatic zones over the Earth. Changes in the cycle or circulation can result in major climatic shifts. For example, if average global temperatures continue to increase as they have in recent decades, water that is currently trapped as ice in the polar ice sheets will melt, causing a rise in sea level. Water also expands as it gets warmer, further exacerbating sea level rise. Many heavily populated coastal areas, like New Orleans, Miami, and much of Bangladesh, would be inundated by a rise of only 1.5 meters in sea level (see Figure 3). Additionally, the acceleration of the hydrologic cycle (higher temperatures mean more evaporation and thus more precipitation) may result in more severe weather and extreme conditions. Some scientists believe that the increased frequency and severity of El Niño events in recent decades are due to the acceleration of the hydrologic cycle induced by global warming.

Figure 3: Areas in red would be flooded with a 1.5 m rise in sea level; areas in blue would be flooded by a 3.5 m rise in sea level. Image has been modified from the original from the US Environmental Protection Agency (EPA).

Even more immediately, the finitude of Earth's freshwater resources is becoming more and more apparent. Groundwater can take thousands or millions of years to recharge naturally, and we are using these resources far faster than they are being replenished. The water table in the Ogallala Aquifer, which underlies 175,000 square miles of the US from Texas to South Dakota, is dropping at a rate of 10-60 cm per year due to extraction for irrigation. Surface waters around the world are largely contaminated by human and animal waste, most noticeably in countries like India and China, where untreated rivers provide the drinking and washing water for nearly 2 billion people. Although legislation like the Clean Water Act in the United States and water conservation practices, such as the use of low-flow toilets and showerheads in parts of the world, have begun to address these issues, the problems will only grow as world population increases. Every spring and well, every river and sea does indeed flow from the same source, and changes affect not just one river or lake, but the whole hydrologic cycle.

Powered by the sun, water constantly cycles through the Earth and its atmosphere. This module discusses the hydrologic cycle, including the various water reservoirs in the oceans, in the air, and on the land. The module addresses connections between the hydrologic cycle, climate, and the impacts humans have had on the cycle.

Key Concepts

  • Though the amount of water on Earth remains constant, it is regularly cycling through the ecosystem through various processes.

  • Earth's water supply is stored in a variety of ways, from ice sheets to oceans to underground reservoirs.

  • Like other processes occurring on Earth, the hydrologic cycle is affected by global warming and, as a result, influences climate and weather patterns.

  • HS-C5.2, HS-C6.1, HS-ESS2.C1

Anne E. Egger, Ph.D. “The Hydrologic Cycle” Visionlearning Vol. EAS-2 (2), 2003.


Earth Cycles

by Heather MacNeill Falconer, M.A./M.S.

For centuries, alchemists around the world searched tirelessly for the philosopher's stone – a substance rumored to have the ability to turn base metals, like lead, into gold (Figure 1). Stories claimed that, like the Holy Grail, this stone could also cure illness, prolong life, and even create the user's clone. The German alchemist Hennig Brand was one such pursuer of the philosopher's stone – so devoted that he completely depleted his first wife's significant inheritance in his pursuits, and then used his second wife's dowry to do the same!

Figure 1: The Alchemist in Search of the Philosopher's Stone, painting by Joseph Wright of Derby. image © Wikimedia Commons

In 1669, Brand was conducting an experiment using concentrated urine and sand when he came across something unique. After boiling his mixture down, Brand was left with a white, waxy substance that continued to glow in the dark after it had cooled. At first, he thought that he had found the famed stone, but soon discovered this was not the case. What Brand had discovered was phosphorus – one of the most important elements to life on Earth.

Like carbon, oxygen, hydrogen, and nitrogen, phosphorus is a limiting nutrient for all forms of life, which means that the potential for an organism’s growth is limited by the availability of this vital nutrient. It forms part of the structure of DNA and RNA, is needed for energy transport in cells, provides structure to cellular membranes, and assists in giving bones and teeth their rigidity. In short, without phosphorus, we simply could not exist. And yet, for something so crucial, it is one of the most difficult elements for living things to access in nature.

Prior to the 1800s, very little was known about phosphorus or how it moved through the environment. Early chemists like Robert Boyle knew that the element was highly flammable and would phosphoresce, or glow, when exposed to oxygen. In fact, in 1680 Boyle took advantage of this flammability and developed the first matchstick by using phosphorus to ignite wooden sticks dipped in sulfur. But, like other elements, phosphorus’ contribution to the growth and health of organisms remained a mystery.

For almost a century, scientists believed Sir Francis Bacon’s hypothesis that water was the “principle of vegetation” – the essential nutrient for plant growth (Tindall & Krunkel, 1998). This idea was supported by experiments conducted by notable scientists like Jan Baptiste van Helmont, John Evelyn, and Robert Boyle. For example, in 1629, the Flemish alchemist van Helmont put Bacon’s theory to the test with his famous willow tree experiment. Van Helmont’s experiment involved the growth of a willow tree in what he thought was a controlled environment. In his own words,

I took an Earthen vessel, in which I put 200 pounds of Earth that hadbeen [sic] dried in a Furnace, which I moystened with Rainwater, and I implanted therein the Trunk or Stem of a Willow Tree, weighing five pounds; and at length, five years being finished, the Tree sprung from thence, did weigh 169 pounds, and about three ounces: But I moystened the Earthen Vessel with Rain-water, or distilled water (alwayes when there was need) and it was large, and implanted into the Earth, and least the dust that flew about should be co-mingled with the Earth, I covered the lip or mouth of the Vessel with an Iron-Plate covered with Tin, and easily passable with many holes. I computed not the weight of the leaves that fell off in the four Autumnes. At length, I again dried the Earth of the Vessell, and there were found the same two hundred pounds, wanting about two ounces. Therefore 164 pounds of Wood, Barks, and Roots, arose out of water onely. (van Helmont, 1662)

We now know that there were flaws in van Helmont’s experiment, including his use of soil in an application meant to show that water alone was what nourished plants. However, his contributions to our knowledge of the role of elements in plant nutrition were significant. The willow tree experiment alone marked the beginning of experimental plant physiology, and is both one of the first quantitative experiments in biology and one of the first written accounts of the use of the scientific method (Hershey, 2003; Morton, 1981).

During the 17th century, however, a German chemist named Johann Glauber argued that soil, not water, was the sole source of nourishment for plants. This sparked a debate that continued in various forms into the 18th century. In 1775, Francis Home concluded that both arguments were correct. Home theorized that not one but many factors influence a plant’s growth – a conclusion that would open up new research in various fields.

In 1838, a competition was held by The Academy of Sciences in Göttingen, Germany (Tindall & Krunkel, 1998). The Academy asked members of the scientific community to determine whether the inorganic elements found in the ashes of plants are present in the living plant, and whether there was any evidence of these inorganic elements being necessary for plant growth and survival. Justus von Liebig, a German chemist, won the contest with his treatise Organic Chemistry and its Applications to Agriculture and Physiology (von Liebig, 1840).

Von Liebig explained that certain elements, like carbon (C), hydrogen (H), and phosphorus (P), are vital to the growth and sustainability of plants. His work drew clear connections between crop yield and the amount of fertilizer applied during the growing season, and identified a hierarchy of minerals in these interactions. One of the most important contributions of von Liebig’s work is his discussion of the “Law of Minimum.”

The Law of Minimum states that the growth and yield of a plant are limited by the nutrient in least abundance, regardless of which nutrient that might be. This law is often referred to as Liebig’s Law of Minimum, though it is understood now that the discovery actually belongs to Karl Sprengel, a German agronomist working at the same time. Because macronutrients like carbon, oxygen, hydrogen, and nitrogen are readily available in Earth’s atmosphere, more often than not the limiting nutrient for plant growth in natural ecosystems is phosphorus.
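The Law of Minimum lends itself to a short sketch: growth is capped by whichever nutrient is scarcest relative to the organism's needs. The nutrient pools and requirements below are hypothetical illustrative values, not measured data:

```python
# Minimal sketch of Liebig's (Sprengel's) Law of Minimum.
# Growth scales with the smallest ratio of supply to requirement,
# no matter how abundant the other nutrients are.

def limiting_nutrient(available, required):
    """Return (nutrient, ratio) for the nutrient in shortest supply
    relative to requirements."""
    ratios = {n: available[n] / required[n] for n in required}
    nutrient = min(ratios, key=ratios.get)
    return nutrient, ratios[nutrient]

# Hypothetical supplies and per-unit-growth requirements (arbitrary units)
available = {"C": 500.0, "N": 80.0, "P": 2.0, "K": 40.0}
required  = {"C": 100.0, "N": 20.0, "P": 1.0, "K": 10.0}

print(limiting_nutrient(available, required))  # ('P', 2.0)
```

Here carbon could support 5x baseline growth and nitrogen 4x, but phosphorus supports only 2x, so phosphorus sets the ceiling, exactly the situation in most natural ecosystems.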


Like many of Earth’s cycles, the phosphorus cycle involves movement through biological and geological systems, and this movement is driven by various chemical transformations. Unlike carbon or nitrogen, however, phosphorus moves only through the lithosphere, biosphere, and hydrosphere. The phosphorus cycle is one of the only biogeochemical cycles that does not involve a gaseous stage, meaning that phosphorus does not become part of Earth’s atmosphere in any significant way.

Figure 2: Phosphates are biological molecules that play an important role in the structure and function of living things. They contain at least one phosphorus atom bound to four oxygen atoms, but will bond with other atoms (like hydrogen) to create a wide variety of compounds necessary for life.

As Brand discovered, elemental phosphorus is a highly reactive substance. Simply exposing it to air will stimulate a chemical reaction with oxygen. This means that in nature the element is typically found as a phosphate (PO₄³⁻). Phosphates, in their most basic form, contain one phosphorus atom bound to four oxygen atoms, with one of those oxygen atoms being bonded to another atom, like hydrogen (Figure 2). A very common phosphate found in nature, for example, is HPO₄²⁻. There are a wide variety of combinations that take place with this simple PO₄³⁻ anion: bonding with carbon, nitrogen, and hydrogen to create the energy storage compound ATP, for example, or with calcium (and occasionally hydrogen) to create calcium phosphate (Figure 3). DNA, our genetic blueprint, relies on phosphate groups to provide the backbone to its double-helix structure (see our DNA II: The Structure of DNA module for more information), and cell membranes rely on phospholipids to give them structure (see our Membranes I: Introduction to Biological Membranes module).
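To make the chemistry concrete, a small sketch can compute how much of a phosphate compound's mass is actually phosphorus. The formulas are those mentioned in the text; the helper function and the (standard) atomic masses are ours:

```python
# Mass fraction of phosphorus in phosphate compounds,
# using standard atomic masses in g/mol.
ATOMIC_MASS = {"P": 30.974, "O": 15.999, "Ca": 40.078, "H": 1.008}

def mass_fraction(formula, element):
    """formula: dict of element -> atom count; returns element's share of mass."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return ATOMIC_MASS[element] * formula.get(element, 0) / total

po4 = {"P": 1, "O": 4}               # phosphate ion, PO4(3-)
ca3po42 = {"Ca": 3, "P": 2, "O": 8}  # calcium phosphate, Ca3(PO4)2

print(round(mass_fraction(po4, "P"), 3))      # ~0.326
print(round(mass_fraction(ca3po42, "P"), 3))  # ~0.200
```

So only about a third of the phosphate ion's mass, and a fifth of calcium phosphate's, is phosphorus itself; the rest is the oxygen (and calcium) it is bound to.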

Figure 3: Adenosine Tri-Phosphate (ATP) is responsible for the transport of chemical energy within cells for metabolism, and calcium phosphate is a primary component of milk, bones, and teeth.

In the environment, phosphates can be found in both organic and inorganic forms. Organic phosphates are mainly created through biological processes and include a bonded carbon, such as in plant or animal tissues. Inorganic phosphates, on the other hand, are not associated with carbon. They are produced through natural processes like chemical weathering of phosphorus-containing rocks, or man-made processes like the chemical manufacturing of fertilizers. While animals are able to use either of these forms, plants are able to use only the inorganic form.


The phosphorus cycle is similar to other elemental cycles and is often described in an overly-simplified way: As Earth’s tectonic plates shift, volcanic action, earthquakes, and movement at plate boundaries expose buried sediments and rock at the surface of the planet (to learn more, read our Plates, Plate Boundaries, and Driving Forces and The Rock Cycle: Uniformitarianism and Recycling modules). When exposed to elements like wind and water, mechanical and chemical weathering of these rocks takes place. These transformations release phosphates that have been bound in these reservoirs to the environment, where they become available in soil and water. After passing through biological systems via the food chain, phosphorus is eventually returned to the soil and then to aquatic systems, where it ultimately becomes sediment and can move back into the geological part of the cycle (Figure 4).

Figure 4: A simplified drawing of the phosphorus cycle. Phosphorus moves in multiple directions through a series of smaller processes.

Like all of Earth’s cycles, there is no start or finish to the phosphorus cycle, and certainly no single direction of movement. Earth’s cycles are complex webs where resources move in multiple directions. In fact, it might be even easier to think of the phosphorus cycle as being a process made up of a series of smaller processes that may or may not ever interact – processes that take place over a time frame as short as weeks and as long as millennia. To get a better sense of the movement of phosphorus through the lithosphere, biosphere, and hydrosphere, it helps to view it in terms of its movement on a shorter time-scale and through a specific ecosystem.

During the months of September and October 1967, scientists T.R. Cleugh and B.W. Hauser began a helicopter survey of 463 lakes in the Precambrian Shield in northwestern Ontario, Canada (Figure 5). Lakes were numbered in the order they were sampled, and data on maximum depth, visibility, dissolved solids, and conductivity were recorded to create lake profiles (Cleugh & Hauser, 1971). This was the first step in what would become one of the most well-known examples of extreme science: the creation of the Experimental Lakes Area (ELA), a project run by the Freshwater Institute to manipulate whole-lake ecosystems.

Figure 5: Map of the Precambrian Shield

The Experimental Lakes Project had its beginnings in 1965 when the US and Canadian governments were asked by the International Joint Commission (IJC), a commission that helps Canada and the US prevent disputes over boundary waters, to devote resources to understanding pollution in the lower Great Lakes/St. Lawrence Plain. This unique region – spanning the southern part of Ontario to areas of central New York, Vermont, Pennsylvania, and Ohio – had been transformed during the early 20th century from forests rich in oak, hemlock, and mixed-conifers to land primarily devoted to agriculture. Housing developments throughout the region also increased significantly over this time. The water bodies of interest to the IJC were beginning to show effects of eutrophication – a condition of excessive plant and algae growth that can kill fish and other wildlife in the water – and little to no information existed on the causes or controls.

As a result, the Experimental Lakes Area was created to study these questions. It consisted of isolated, pristine land containing 58 lakes and watersheds, free from cultural or industrial influence, where researchers could actively manipulate whole ecosystems. The first experiments involved researchers directly controlling nutrient influxes to isolate the factors that might influence eutrophication in the water bodies. One of the experiments – that on Lake 227 – was conducted over the course of 44 years. It and a shorter experiment in Lake 226 were the first of their kind to clearly identify phosphorus as a driving factor in eutrophication.


Lake 227 is small by most lake standards and offered limnologist David W. Schindler and his research team an ideal subject on which to test their ideas about eutrophication. In June 1969, Schindler and his team began to intentionally fertilize Lake 227 on a weekly basis, using a fertilizer with a 12:1 ratio by weight of nitrogen to phosphorus (Schindler, 2008). At the time, they were interested in testing a hypothesis popular in North America that the supply of carbon could limit the growth of phytoplankton in lakes. They chose Lake 227 specifically because it had a low concentration of dissolved inorganic carbon (Schindler, 2009). For the first five years of the experiment, the researchers added phosphorus and nitrogen to the lake to ensure phytoplankton had adequate amounts for growth and sustainability, but limited access to carbon.

Figure 6: The data from Schindler's research shows a clear connection between the amount of phosphorus added to the lakes and the algal growth. image © David W. Schindler

After nutrient-loading Lake 227 for the first time, Schindler and his team noticed that algae growth increased significantly (or “bloomed”) despite the low concentration of carbon. Further, they saw that the blooms had a direct correlation to the amount of phosphorus they added to the water (Figure 6). While Schindler began to suspect phosphorus as the culprit, he needed further evidence. Meanwhile, the soap and detergent industry, whose products contained phosphorus, worked to deflect attention from those products by arguing that nitrogen was as influential in aquatic systems as phosphorus. So, the team began to test the effects of nitrogen separately by adding nitrogen and carbon to Lake 226.

Figure 7: An aerial photograph of Lake 226 taken in August 1973. The plastic curtain dividing the lake at the narrows allowed Schindler's team to nutrient load each half of the lake with different amounts of phosphates. The northern basin, shown on the bottom half of the photo, became eutrophic in response to the excess phosphorus. image © David W. Schindler

Lake 226, shaped like an hourglass, had two basins that could be isolated from one another at the narrows with a heavy nylon curtain (Figure 7). Schindler’s group added nitrogen and carbon to both of the basins, but in the north basin they also added phosphorus. Again, algal blooms were in direct relation to the amount of phosphorus added – the south basin remained pristine, while the north basin bloomed within weeks (Schindler, 1977).

Schindler’s research began to show clearly that phosphorus, and not carbon or nitrogen, is the nutrient that has the greatest effect on plant growth in aquatic ecosystems. As Schindler notes in his personal account of the history of the ELA, the aerial photograph of the two basins shown in Figure 7

had more impact on policy makers than hours of testimony based on scientific data, helping to convince them that controlling phosphorus was the key to controlling eutrophication problems in lakes. (Schindler, 2009)

The reason for phosphorus’s impact is simple. Along with carbon, nitrogen, oxygen, and potassium, phosphorus is a macronutrient that determines whether an organism will grow and survive, or wither and die. Without it, living beings cannot grow, reproduce, move, or do much of anything. But because other macronutrients are readily available, more often than not the limiting nutrient for plant growth in natural ecosystems is phosphorus. This is partly because the largest reservoir of phosphorus is locked up in sedimentary rock and unavailable, and partly because its chemistry in the environment limits its availability.


Since humans began to walk the Earth, we have interacted with – and influenced – many natural processes, and the phosphorus cycle is no exception. Because phosphates are naturally quite limited in soil, modern agricultural practices frequently involve the application of fertilizers heavy in inorganic phosphates. When phosphorus is added to an ecosystem through non-natural or excessive means – run-off from farms (both fertilizers and animal excrement), sewage, or phosphate-containing detergents – the sudden increase in nutrient availability can have a dramatic effect on plant growth.

Soil has a saturation point with respect to how much phosphate it can hold, and plants have a limit as to how fast they can take it up, so the application of too much phosphate results in both leaching into the water supply and run-off into lakes, streams, and oceans. Since aquatic ecosystems have very low phosphate concentrations naturally, whenever phosphate enters the water column, phytoplankton like algae quickly consume it.

As Schindler and his team showed with the ELA, if the influx of phosphate steadily continues for a period of time, the algae and other aquatic phytoplankton are able to reproduce so quickly and efficiently that they literally form a mat on the surface of the water, blocking out light for other plants and organisms living below (Figure 8). This reduces the ability of bottom-dwelling plants to photosynthesize, reducing the amount of oxygen being released into the water.

Figure 8: Myvatn Lake - a shallow eutrophic lake in northern Iceland. image © Israel Hervas Bengochea/Shutterstock

As the algae die, they fall to the bottom where they are decomposed by bacteria – a process that uses a large amount of dissolved oxygen. As this dissolved oxygen is depleted, fish and other organisms living in the water body slowly suffocate and die.

Though we have learned better and made many efforts to change, the effects of these practices still linger. The over-application of fertilizers with high concentrations of phosphate is still a problem, and bodies of water in places with heavy agricultural communities suffer the greatest. Fortunately, as we learn more about the impacts our actions have on our environment, we can consciously make choices that will benefit rather than harm our surroundings.

The body of research on phosphorus conducted at the Experimental Lakes Area was a seminal contribution to environmental science. While the phosphorus cycle can be simplified, as we did above, to a cycle that includes a geological component and a biological component, the cycle is actually far more detailed than this.

All living organisms need phosphorus to survive and grow. This module describes the forms phosphorus takes in nature and how the element cycles through the natural world. A historical journey highlights how we came to understand this vital element. The Experimental Lakes Project shows the harmful effects of too much phosphorus on the environment as a result of human activities.

Key Concepts

  • The phosphorus cycle is the set of biogeochemical processes by which phosphorus undergoes chemical reactions, changes form, and moves through different reservoirs on Earth, including living organisms.

  • The phosphorus cycle is the only biogeochemical process that does not include a significant gaseous phase.

  • Phosphorus is required for all organisms to live and grow because it is an essential component of ATP, the structural framework holding DNA and RNA together, cellular membranes, and other critical compounds.

  • Agricultural runoff, over-fertilization, and sewage all increase the amount of phosphate available to plants and can cause significant ecological damage.

  • HS-C5.2, HS-ESS2.A1, HS-ESS3.C1, HS-LS1.C3
  • Cleugh, T. R., & Hauser, B. W. (1971). Results of the initial survey of the Experimental Lakes Area, northwestern Ontario. J. Fish. Res. Board Can., 28, 129-137.
  • Hershey, D. (2003). Misconceptions about Helmont’s Willow Experiment. Plant Science Bulletin, Fall (49.3).
  • Morton, A. G. (1981). History of Botanical Science, Academic Press, London, ISBN 0-12-508480-3, 474.
  • Schindler, D. W., Hecky, R. E., Findlay, D. L., Stainton, M. P., Parker B. R., Paterson, M. J.,...& Kasian, S. E. (2008). Eutrophication of lakes cannot be controlled by reducing nitrogen input: Results of a 37-year whole-ecosystem experiment. PNAS, 105(32), 11254–11258.
  • Schindler, D. W. (1977). Evolution of phosphorus limitation in lakes: Natural mechanisms compensate for deficiencies of nitrogen and carbon in eutrophied lakes. Science, 195, 260–262.
  • Schindler, D. W. (2009). A personal history of the Experimental Lakes Project. Can. J. Fish. Aquatic Sci. (66), 1837-1847.
  • Tindall, J. A., & Krunkel, J. R. (1998). Unsaturated Zone Hydrology for Scientists and Engineers. New York: Pearson.
  • van Helmont, J. B. (1662). Oriatrike or Physick Refined. London: Lodowick Loyd. (translated by John Chandler).
  • von Liebig, J. (1840). Organic Chemistry and its Applications to Agriculture and Physiology. London: Taylor and Watson.

Heather MacNeill Falconer, M.A./M.S. “The Phosphorus Cycle” Visionlearning Vol. EAS-3 (1), 2014.


Plate Tectonics

by Anne E. Egger, Ph.D.

The deepest places humans have reached on Earth are mines in South Africa, where mining companies have excavated 3.5 km into Earth to extract gold. No one has seen deeper into Earth than the South African miners because the heat and pressure at these depths prevent humans from going much deeper. Yet Earth's radius is 6,370 km – how do we begin to know what is below the thin skin of the Earth when we cannot see it?

Isaac Newton was one of the first scientists to theorize about the structure of Earth. Based on his studies of the force of gravity, Newton calculated the average density of Earth and found it to be more than twice the density of the rocks near the surface. From these results, Newton realized that the interior of Earth had to be much denser than the surface rocks. His findings excluded the possibility of a cavernous, fiery underworld inhabited by the dead, but still left many questions unanswered. Where does the denser material begin? How does the composition differ from surface rocks?

Volcanic vents like Shiprock occasionally bring up pieces of Earth from as deep as 150 km, but these rocks are rare, and we have little hope of taking Jules Verne's Journey to the Center of the Earth. Instead, much of our knowledge about the internal structure of the Earth comes from remote observations – specifically, from observations of earthquakes.

Earthquakes can be extremely destructive for humans, but they provide a wealth of information about Earth's interior. This is because every earthquake sends out an array of seismic waves in all directions, similar to the way that throwing a stone into a pond sends out waves through the water. Observing the behavior of these seismic waves as they travel through the Earth gives us insight into the materials the waves move through.


An earthquake occurs when rocks in a fault zone suddenly slip past each other, releasing stress that has built up over time. The slippage releases seismic energy, which is dissipated through two kinds of waves, P-waves and S-waves. The distinction between these two waves is easy to picture with a stretched-out Slinky®. If you push on one end, a compression wave passes through the Slinky® parallel to its length (see P-waves video). If instead you move one end up and down rapidly, a "ripple" wave moves through the Slinky® (see S-waves video). The compression waves are P-waves, and the ripple waves are S-waves.

Illustration of a P-wave/compression wave.

Illustration of an S-wave/ripple wave.

Both kinds of waves can reflect off of boundaries between different materials; they can also refract, or bend, when they cross a boundary into a different material. But the two types of waves behave differently depending on the composition of the material they are passing through. One of the biggest differences is that S-waves cannot travel through liquids, whereas P-waves can. We feel the arrival of the P- and S-waves at a given location as a ground-shaking earthquake.
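Because P-waves travel faster than S-waves, the delay between their arrivals at a station gives a rough distance to the earthquake. The sketch below assumes typical crustal velocities of 6.0 km/s for P-waves and 3.5 km/s for S-waves; these are textbook values for illustration, not figures from this module:

```python
# Estimate epicentral distance from the S-minus-P arrival-time lag.
# Assumed average velocities (km/s) for crustal paths:
VP = 6.0  # P-wave speed
VS = 3.5  # S-wave speed

def distance_from_sp_lag(lag_seconds):
    """Distance (km) to an earthquake: the lag grows linearly with distance,
    so distance = lag / (1/VS - 1/VP)."""
    return lag_seconds / (1.0 / VS - 1.0 / VP)

print(distance_from_sp_lag(10.0))  # a 10 s lag corresponds to ~84 km
```

Locating an epicenter in practice uses such distance estimates from three or more stations and finds where the circles intersect.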

If Earth were the same composition all the way through its interior, seismic waves would radiate outward from their source (an earthquake) and behave exactly as other waves behave – taking longer to travel further and losing strength with distance, a process called attenuation (see Figure 1).

Figure 1: Seismic waves in an Earth of the same composition.

Given Newton's observations, if we assume Earth's density increases evenly with depth because of the overlying pressure, wave velocity will also increase with depth and the waves will continuously refract, traveling along curved paths back towards the surface. Figure 1 shows the kind of pattern we would expect to see in this case. By the early 1900s, when seismographs were installed worldwide, it quickly became clear that Earth could not possibly be so simple.

Andrija Mohorovičić was a Croatian scientist who recognized the importance of establishing a network of seismometers. Though his scientific career had begun in meteorology, he shifted his research pursuits to seismology around 1900, and installed several of the most advanced seismometers around central Europe in 1908. His timing was fortuitous, as a large earthquake occurred in the Kupa Valley in October 1909, which Mohorovičić felt at his home in Zagreb, Croatia. He made careful observations of the arrivals of P- and S-waves at his newly installed stations, and noticed that P-waves recorded more than 200 km from the earthquake's epicenter arrived with higher average velocities than those recorded within a 200 km radius. Although these results ran counter to the concept of attenuation, they could be explained if the faster-arriving waves had traveled through a medium that allowed them to speed up, having encountered a structural boundary at depth.

This recognition allowed Mohorovičić to define the first major boundary within Earth’s interior – the boundary between the crust, which forms the surface of Earth, and a denser layer below, called the mantle (Mohorovičić, 1910). Seismic waves travel faster in the mantle than they do in the crust because it is composed of denser material. Thus, stations further away from the source of an earthquake received waves that had made part of their journey through the denser rocks of the mantle. The waves that reached the closer stations stayed within the crust the entire time. Although the official name of the crust-mantle boundary is the Mohorovičić discontinuity, in honor of its discoverer, it is usually called the Moho (see the interactive animation below).
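Mohorovičić's roughly 200 km observation can be illustrated with a simple two-layer model: beyond a "crossover distance," waves refracted along the top of the faster mantle overtake waves traveling directly through the crust. The velocities and crustal thickness below are assumed round textbook numbers, not his actual data:

```python
import math

# Crossover distance for a flat two-layer model: the distance beyond which
# the mantle-refracted (head) wave arrives before the direct crustal wave.
# Standard result: x_cross = 2 * h * sqrt((v2 + v1) / (v2 - v1)).

V_CRUST = 6.0    # P-wave speed in crust, km/s (assumed)
V_MANTLE = 8.0   # P-wave speed in mantle, km/s (assumed)
H_CRUST = 35.0   # crustal thickness, km (assumed)

def crossover_distance(v1, v2, h):
    """Distance (km) beyond which the refracted wave arrives first."""
    return 2.0 * h * math.sqrt((v2 + v1) / (v2 - v1))

print(round(crossover_distance(V_CRUST, V_MANTLE, H_CRUST), 1))  # ~185 km
```

With these assumed values the crossover falls near 185 km, in line with the ~200 km pattern Mohorovičić saw: stations beyond that distance receive waves that spent part of their journey in the faster mantle.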


Interactive Animation: Moho

Another observation made by seismologists was the fact that P-waves die out about 105 degrees away from an earthquake, and then reappear about 140 degrees away, arriving much later than expected. This region that lacks P-waves is called the P-wave shadow zone (Figure 2). S-waves, on the other hand, die out completely around 105 degrees from the earthquake (Figure 2). Remember that S-waves are unable to travel through liquid. The S-wave shadow zone indicates that there is a liquid layer deep within Earth that stops all S-waves but not the P-waves.

Figure 2:The P-wave and S-wave shadow zones.

In 1914, Beno Gutenberg, a German seismologist, used these shadow zones to calculate the size of another layer inside of the Earth, called its core. He defined a sharp core-mantle boundary at a depth of 2,900 km, where P-waves were refracted and slowed and S-waves were stopped.

Comprehension Checkpoint

Scientists figured out that there is a liquid layer deep within Earth by observing

On the basis of these and other observations, geophysicists have created a cross-section of Earth (Figure 3). The early seismological studies previously discussed led to definitions of compositional boundaries; for example, imagine oil floating on top of water – they are two different materials, so there is a compositional boundary between them.

Later studies highlighted mechanical boundaries, which are defined on the basis of how materials act, not on their composition. Water and oil have the same mechanical properties – they are both liquids. On the other hand, water and ice have the same composition, but water is a fluid with far different mechanical properties than solid ice.

Figure 3: Compositional and mechanical layers of Earth's structure.

There are two major types of crust: crust that makes up the ocean floors and crust that makes up the continents. Oceanic crust is composed entirely of basalt extruded at mid-ocean ridges, resulting in a thin (~ 5 km), relatively dense (~3.0 g/cm3) crust. Continental crust, on the other hand, is made primarily of less dense rock such as granite (~2.7 g/cm3). It is much thicker than oceanic crust, ranging from 15 to 70 km. At the base of the crust is the Moho, below which is the mantle, which contains rocks made of a denser material called peridotite (~3.4 g/cm3). This compositional change is predicted by the behavior of seismic waves and it is confirmed in the few samples of rocks from the mantle that we do have.

At the core-mantle boundary, composition changes again. Seismic waves suggest this material is of a very high density (10-13 g/cm3), which can only correspond to a composition of metals rather than rock. The presence of a magnetic field around Earth also indicates a molten metallic core. Unlike the crust and the mantle, we don't have any samples of the core to look at, and thus there is some controversy about its exact composition. Most scientists, however, believe that iron is the main constituent.

These compositional layers are shown in Figure 3.

Comprehension Checkpoint

The crust, mantle, and core are defined as compositional layers of Earth because

The compositional divisions of Earth were understood decades before the development of the theory of plate tectonics – the idea that Earth's surface consists of large plates that move (see our Plate Tectonics I module). By the 1970s, however, geologists began to realize that the plates had to be thicker than just the crust, or they would break apart as they moved. In fact, plates consist of the crust acting together with the uppermost part of the mantle; this rigid layer is called the lithosphere and it ranges in thickness from about 10 to 200 km. Rigid lithospheric plates "float" on a layer called the asthenosphere that flows like a very viscous fluid, like Silly Putty®. It is important to note that although the asthenosphere can flow, it is not a liquid, and thus both S- and P-waves can travel through it. At a depth of around 660 km, the pressure becomes so great that the mantle can no longer flow, and this solid part of the mantle is called the mesosphere. The lithospheric mantle, asthenosphere, and mesosphere all share the same composition (that of peridotite), but their mechanical properties are significantly different. Geologists often refer to the asthenosphere as the jelly in between two pieces of bread: the lithosphere and mesosphere.

The core is also subdivided into an inner and outer core. The outer core is liquid molten metal (and able to stop S-waves), while the inner core is solid. (Because the composition of the core is different than that of the mantle, it is possible for the core to remain a liquid at much higher pressures than peridotite.) The distinction between the inner and outer core was made in 1936 by Inge Lehmann, a Danish seismologist, after improvements in seismographs in the 1920s made it possible to "see" previously undetectable seismic waves within the P-wave shadow zone. These faint waves had been refracted once more when they hit the boundary between the inner and outer core, revealing the core's two-part structure.

The mechanical layers of Earth are also shown in Figure 3, in comparison to the compositional layers.

Our picture of the interior of Earth becomes clearer as imaging techniques improve. Seismic tomography is a relatively new technique that uses seismic waves to measure very slight temperature variations throughout the mantle. Because waves move faster through cold material and slower through hot material, the resulting images help scientists "see" the process of convection in the mantle (see our Plates, Plate Boundaries, and Driving Forces module). These and other images offer a virtual journey into the center of Earth.

Earth's interior structure is composed of layers that vary by composition and behavior. Using principles of physics like gravity and wave motion, this module explains how scientists have determined Earth's deep structure. Different types of seismic waves are discussed. The module details both compositional and mechanical layers of Earth.

Key Concepts

  • Our knowledge about the structure of Earth's interior comes from studying how different types of seismic waves, created by earthquakes, travel through Earth.

  • Earth is composed of multiple layers, which can be defined either by composition or by mechanical properties.

  • The crust, mantle, and core are defined by differences in composition.

  • The lithosphere, asthenosphere, mesosphere, and outer and inner cores are defined by differences in mechanical properties.

  • HS-C3.2, HS-ESS2.A2, HS-PS4.A4

Anne E. Egger, Ph.D. “Earth Structure” Visionlearning Vol. EAS (1), 2003.




Measurement

by Heather MacNeill Falconer, M.A./M.S., Anthony Carpi, Ph.D.

Measurement affects many different aspects of our lives. Our admittance to college depends on grades – the measure of our performance in various classes; we assess phone plans by how much data usage they allow; we count calories and our doctors look to see that our blood sugar and cholesterol levels are safe. In almost every facet of modern life, values – measurements – play an important role.

From the earliest documented days in ancient Egypt (see Figure 1), systems of measurement have allowed us to weigh and count objects, delineate boundaries, mark time, establish currencies, and describe natural phenomena. Yet, measurement comes with its own series of challenges. From human error and accidents in measuring to variability to the simply unknowable, even the most precise measures come with some margin of error.

Figure 1: An Egyptian ceremonial cubit rod in the Louvre Museum. The cubit was a standard linear measurement in ancient Egypt. image © Uriah Welcome

Archeological artifacts show us that systems of measurement date back before 2500 BCE – over 4,500 years ago. As ancient civilizations in parts of the world as disparate as Greece, China, and Egypt became more formalized, the acts of dividing up land or trading with others led to a need for standardized techniques for measuring things. Since measurement is largely a matter of comparing one thing to another, it isn’t surprising that early systems often began with objects that were common to the community. The weight of one grain of wheat, for example, or the volume of liquid that could be held by one goat skin were used as standards.

Interestingly, many of these systems originated with the human body. For example, the Egyptian "cubit" was defined as the length of a man’s forearm from the tip of the middle finger to the elbow (roughly 48 cm, or 19 in). In India’s Mauryan period (500 BCE), 1 "angul" was the width of a finger (roughly 1 cm, or 0.4 in). The Ancient Greeks and Romans used the units "pous" and "pes," both of which translate into "foot." Unsurprisingly, this measurement was based on the length of a man’s foot from the big toe to the heel (roughly 29.5 cm, or 11.6 in).

However, as any trip to a clothing or shoe store will show, not all bodies are the same. When measuring something small, like a table, the difference between one man’s foot and another’s might not make much difference. However, if what is being measured is much larger – say, a plot of land – those small differences add up (a magnification error that we’ll discuss shortly). In an effort to be fair to all their citizens, many civilizations moved to standardize measurements further. By 2500 BCE, the “royal cubit” in Egypt was determined by the forearm length of the Pharaoh and carved into black marble. It was approximately 52 cm in length (20.5 in) and was further divided into 28 equal segments, approximating the width of a finger. This provided a baseline for others and consistency across the kingdom. Individuals could bring a stick or other object that could be marked, lay it against the marble and, in effect, create a ruler that they could use to measure length, width, or height elsewhere.

As civilizations advanced and measurements became more standardized, systems of measurement were developed with increasing complexity. The ancient Mesopotamians were among the first to measure angles and time, dividing the path of the sun on the celestial sphere into twelve 30-degree intervals (each degree being 1/360 of the circumference of a circle; Figure 2). They also used the new crescent phase of the moon to mark the start of a new month. Celestial objects like the Sun and stars were used to track hours, through the use of sundials or the known seasonal positions of stars. Measurement has a long and complex history.

Figure 2: A portion of the Sumerian civil calendar from the city of Nippur. Using astronomical cycles, the calendar divides a month into 30 days of 12 watches (equal to 2 hours) and 1 year into 12 months of 30 days. Both systems result in the circumference of a circle: 360 degrees. image © Lamassu Design

Measurement gives us a way to communicate with one another and interact with our surroundings – but it only works if those you are communicating with understand the systems of measurement you are using. Imagine you open a recipe book and read the following:

Mix white sugar (10) with flour (1) and water (100). Wait for 1, and then bake for 1.

How would you go about using this recipe? How much sugar do you use? 10 grams? 10 teaspoons? 10 pounds? How much flour or water? Cups? Liters? Milliliters? How long do you wait? Minutes? Hours?

All measurement involves two parameters: the amount present (i.e., the number) and the unit within a system of measurement. The recipe lists the amounts (1, 10, and 100), but not the units. Without both parameters, the information is virtually useless. (To see a recipe with amounts and units, see Figure 3.)

Figure 3: A handwritten recipe for sweet potato & zucchini bread. image © Charles Willgren, Flickr

There are many different systems of measurement units in the world, but one commonly used in science is the metric system (described in more detail in our Metric System module). The metric system uses very precise base standards, such as the meter, a unit of length, which is defined as "the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second."

Standard units exist in the metric system for a host of things we might want to measure, ranging from the common such as the gram (mass) and liter (volume), to the more obscure such as the abampere, or biot, an electromagnetic unit of electrical current. Despite our best efforts to organize and standardize measurement, there still exist non-standard units that do not fit neatly into any formal systems of measurement. This may be because the exact quantity is not known, or because the unit has some historical relevance. For example, horses continue to be measured in the unit of height called the hand (equal to 4 inches) out of tradition. Other examples exist as well, such as the "serving size" we often encounter on pre-packaged food (Figure 4). Serving sizes vary depending on the type of item you are eating. One serving of dry cereal like Cheerios® is listed as one cup, but a serving of potato chips is commonly listed as 1 ounce, and a serving of a snack like a Twinkie® is often listed as a number of objects (for example, two Twinkies®).

Figure 4: An example of a food label. image © BruceBlaus

The question of how to measure has been the topic of great discussion since antiquity. Many of the systems of measure discussed in the previous section relate to direct measurement. Direct measurement gives us a very clear, quantifiable value of "this-equals-that." I can count the number of minutes or hours until my summer vacation, or the number of miles between my house and my favorite restaurant. But some quantities are not so easily measured. While you might be able to use a ruler to measure the dimensions of your bedroom, or even the distance to a neighbor’s house, you can’t simply use a long ruler to measure the depth of the ocean.

In cases like these, scientists are called upon to make measurements that are challenging or impossible to make in a direct way. Thus, indirect measurements are commonly used in science to determine values for properties that cannot be measured directly. Indirect measurement involves estimating an unknown value by measuring something that is known. For example, the National Oceanic and Atmospheric Administration (NOAA) of the United States government commonly relies on sonar-based measurements to create maps of ocean depth. This technique involves sending out sound waves into the water and then measuring the amount of time it takes for the sound to be reflected back to the instrument. Since the speed of sound is known, by measuring the time between the original transmission and reception of the response, a sonar operator can calculate the distance to the object, and thus the depth of the ocean (Figure 5).
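The sonar calculation described above is simple enough to sketch in a few lines. This is an illustrative sketch, not NOAA's actual processing; the 1,500 m/s figure is a typical approximate speed of sound in seawater.

```python
SPEED_OF_SOUND_IN_SEAWATER = 1500.0  # m/s, a typical approximate value

def ocean_depth(round_trip_time_s):
    """Depth from a sonar ping: the sound travels down AND back,
    so the one-way distance is half the total path."""
    return SPEED_OF_SOUND_IN_SEAWATER * round_trip_time_s / 2

# A ping that returns after 4 seconds implies about 3,000 m of water
print(ocean_depth(4.0))  # -> 3000.0
```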

Figure 5: Organizations like NOAA use sound waves (sonar) to determine the depth of the water (in this image, the depth is represented by color) and to identify objects. image © MingTsang Lin

Science has, over time, built a reputation as being objective, careful, and precise – or, at least, as precise as is possible given the latest knowledge and technology. This leads many to believe that when errors in science occur they are the result of human error. While mistakes certainly do happen, the "error" in measurement error does not mean a mistake has taken place; rather, it refers to the variability around a specific measurement.

All measurements have variability. Think about it: if you take a ruler and try to make a mark six inches from the edge of your paper, a number of things affect the measurement – from the width of the pencil you are using to how you mark your line in relation to the line on the ruler. Or consider the amount of calories in a pre-packaged snack food item, like a Twinkie®. If you look at the nutrition label on the package, you’ll see a serving size (77g, or two Twinkies®), calories per serving (290), and other nutritional details. However, you might imagine that the precise amount of batter or filling used can vary from cake to cake, and these differences will affect the number of calories per serving. The differences aren’t large, but there would be variation nonetheless.

While the fraction of a calorie difference in the size of a Twinkie® may not make or break your diet, measurement error is compounded across multiple steps of a calculation and can become a problem. Think of all of the measurements needed to send a spacecraft to Mars, for example. We need to accurately know the speed at which the craft will travel, which itself depends on measurements like the force the engines produce, the weight of the craft, the gravitational pull of Earth, etc. Small amounts of uncertainty in each of those measurements can add up to cause significant error in our calculations (Figure 6). This is why scientists not only report on the value of measurements they collect, but they also try to estimate the uncertainty associated with those measurements. For more information on measurement error, see our module Uncertainty, Error, and Confidence.
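To see how small uncertainties can compound, consider a toy worst-case model in which a small relative error is magnified at every step of an iterative calculation. The function and the 0.1 percent starting error below are illustrative assumptions, not the actual model behind Figure 6.

```python
def compounded_uncertainty(relative_error, steps):
    """Worst-case growth of a relative error that is magnified
    at every step of an iterative calculation."""
    return (1 + relative_error) ** steps - 1

# A 0.1% measurement error, compounded over many calculation steps
for n in (1, 100, 1000):
    print(n, compounded_uncertainty(0.001, n))
```

Under this toy model, a 0.1 percent error grows to roughly 10 percent after 100 steps and exceeds 100 percent (larger than the measured value itself) after about 700 steps, echoing the runaway behavior Figure 6 depicts.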

Figure 6: Representations of error propagation in an iterative, dynamic system. After ~1,000 iterations, the error is equivalent to the value of the measurement itself (~0.6) making the calculation fluctuate wildly. Adapted from IMO (2007).

We are constantly measuring the world around us and using that information to make decisions. From the casual choice of a snack to the important decision of how much medicine to take, we quantify and measure values. We have been measuring the world since very early times, continuously refining our methods and discovering new ways to measure. Even the most precise measurement includes a margin of error. But through awareness of these errors and careful attention to values and units, we can approach very high levels of accuracy in our measurements. And that is the ultimate goal of measurement – to provide accurate information that everyone can understand and use.

In almost every facet of modern life, values – measurements – play an important role. We count calories for a diet, stores measure the percentage of tax on our purchases, and our doctors measure important physiological indicators, like heart rate and blood pressure. From the earliest documented days in ancient Egypt, systems of measurement have allowed us to weigh and count objects, delineate boundaries, mark time, establish currencies, and describe natural phenomena. Yet, measurement comes with its own series of challenges. From human error and accidents in measuring to variability to the simply unknowable, even the most precise measures come with some margin of error.

Key Concepts

  • Since their earliest days, systems of measurement have provided a common ground for individuals to describe and understand their world. Measurement helps to give context to observations and a means to describe phenomena.

  • A measurement consists of two parts – the amount present or numeric measure, and the unit that the measurement represents within a standardized system.

  • When direct measurement is not possible, scientists can estimate parameters through indirect measurement.

  • While errors do occur in measurement, measurement error generally refers to the uncertainty or variability around a measure that occurs naturally due to the limitations of the tool we are using to measure the quantity.

Heather MacNeill Falconer, M.A./M.S., Anthony Carpi, Ph.D. “Measurement” Visionlearning Vol. MAT-3 (8), 2017.




Wave Mathematics

by Gary Leo Welz, M.A./M.S.

Waves are familiar to us from the ocean, the study of sound, earthquakes, and other natural phenomena. But as any surfer can tell you, ocean waves come in very different sizes, as can all waves. To fully understand waves, we need to understand the measurements associated with them, such as how often they repeat (their frequency), how long they are (their wavelength), and their vertical size (their amplitude).


While these measurements help describe waves, they do not help us make predictions about wave behavior. In order to do that, we need to look at waves more abstractly, which we can do using a mathematical formula.

It is possible to look at waves mathematically because a wave's shape repeats itself over a consistent interval of time and distance. This behavior mirrors the repetition of the circle. Imagine drawing a circle on a piece of paper. Now imagine drawing that same shape while your friend slowly pulled the piece of paper out from under your pencil – the line you would have drawn traces out the shape of a wave. One rotation around the circle completes one cycle of rising and falling in the wave, as seen in the picture below.

Figure 1: Circle on a cartesian plane.

Mathematicians use the sine function (Sin) to express the shape of a wave. The mathematical equation representing the simplest wave looks like this:

y = Sin(x)

This equation describes how a wave would be plotted on a graph, stating that y (the value of the vertical coordinate on the graph) is a function of the sine of the number x (the horizontal coordinate).

The sine function is one of the trigonometric ratios originally calculated by the astronomer Hipparchus of Nicaea in the second century B.C. when he was trying to make sense of the movement of the stars and moon in the night sky. More than 2000 years ago, when Hipparchus began to study astronomy, the movement of objects in the sky was a mystery. Hipparchus knew that the stars and moon tended to move through the night sky in a semi-circular fashion, so he felt that understanding the shape of a circle was important to understanding astronomy. Hipparchus observed that there was a relationship between the radius of a circle, the central angle made by a pie slice of that circle, and the length of the arc of that pie piece. If one knew any two of these values, the third could be calculated. It was later realized that this relationship also applies to right triangles: knowing one angle measure of a right triangle, you can calculate the ratio of the sides of the triangle. The exact size of the triangle can vary, but the ratio of the lengths of the sides is defined by the angle size. The specific relationships between the angle measure and the sides of the triangle are what we call the trigonometric functions, the three main functions being:

  • Sine A = opposite/hypotenuse
  • Cosine A = adjacent/hypotenuse
  • Tangent A = opposite/adjacent

Figure 2: Triangle.

The word trigonometry means "measurement of triangles" and sine along with the cosine and tangent are called the trigonometric ratios because they originated with the ancient study of triangles.
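The three ratios can be checked numerically with the classic 3-4-5 right triangle. The sketch below is purely illustrative; the side lengths are chosen for convenience.

```python
import math

# A 3-4-5 right triangle: the side of length 3 is opposite angle A,
# 4 is adjacent to it, and 5 is the hypotenuse.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0

sine    = opposite / hypotenuse   # 0.6
cosine  = adjacent / hypotenuse   # 0.8
tangent = opposite / adjacent     # 0.75

# The angle recovered from one ratio agrees with the other ratios
A = math.asin(sine)
print(round(math.degrees(A), 1))          # ~36.9 degrees
print(math.isclose(math.tan(A), tangent)) # True
```

Scaling all three sides by the same factor (a 6-8-10 triangle, say) leaves the ratios, and therefore the angle, unchanged.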

Comprehension Checkpoint

The ancient astronomer Hipparchus discovered that knowing one angle measure of a right triangle allows you to calculate

But how do triangles relate to waves? In the early 17th century, two Frenchmen, René Descartes and Pierre Fermat, independently developed what would become known as the Cartesian coordinate plane, more commonly known as the (x,y)-graphing plane. This invention was an extraordinary advance in the history of mathematics because it brought together, for the first time, two great but distinct branches of mathematics: geometry, the science of space and form, and algebra, the science of numbers. The invention of the Cartesian coordinate system soon led to the graphing of many mathematical relations, including the sine and cosine ratios.

As it turns out, the trigonometric functions can also be defined in relation to the "unit circle," i.e. a circle with a radius equal to 1. When we put the unit circle on the Cartesian plane, we can begin to see how this works if we draw a triangle within the circle, as seen in the diagram below. According to our earlier discussion, the sine of angle A in the diagram equals the ratio of the opposite side over the hypotenuse. However, remember that we are working with a unit circle and the length of the hypotenuse is equal to the radius of the circle, or 1. Therefore,

Sin(A) = opp/1 = opp

So the sine of A gives the length of the opposite side of the triangle, or the y-coordinate on our Cartesian plane. Similarly, the cosine of angle A equals the ratio of the adjacent side over the hypotenuse. Since the length of the hypotenuse equals 1, the cosine of A gives the length of the adjacent side, or the x-coordinate on the Cartesian plane.
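This relationship is easy to verify numerically: for any angle A, the point (cos A, sin A) lies on the unit circle, so its coordinates always satisfy x² + y² = 1. The snippet below is a simple illustration using a few arbitrary angles.

```python
import math

# For any angle A, the point where the radius of the unit circle meets
# the circle has coordinates (cos A, sin A); hence x**2 + y**2 == 1.
for degrees in (0, 30, 45, 90, 135, 270):
    A = math.radians(degrees)
    x, y = math.cos(A), math.sin(A)
    assert math.isclose(x**2 + y**2, 1.0)
    print(f"{degrees:>3} deg -> ({x:+.3f}, {y:+.3f})")
```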

Figure 3: Shows the unit circle on the Cartesian Plane with an inscribed triangle. The point on the circle touched by the radius has coordinates (x,y).

If we redraw this triangle as we move counterclockwise on the circle, we can begin to see that the trigonometric functions, in this case sine and cosine, take on a periodic quality. This means that sine, for example, increases to a maximum at the top of the circle, decreases to zero as we sweep left, and begins to take on negative values as we continue around the circle. At the bottom of the circle the sine function reaches a minimum value and the process begins again as we reach the right side of the circle. To better appreciate this idea, review the animation Sine, Cosine, and the Unit Circle linked below.

Sine, Cosine, and the Unit Circle
This animation illustrates how the values of the sine and cosine change as we sweep around the unit circle.

As you saw in the animation above, as angle A increases, the values of the trigonometric functions of A undergo a periodic cycle from 0, to a maximum of 1, down to a minimum of -1, and back to 0. There are several ways to express the measure of the angle A. One way is in degrees, where 360 degrees defines a complete circle. Another way to measure angles is in a unit called the radian, where 2π radians defines a complete circle. Angles smaller than a full circle can be written as fractions of these units; for example, 90° can be written as π/2 or about 1.57 radians, and 180° equals π or about 3.14 radians.
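Because 360 degrees corresponds to 2π radians, converting between the two is a one-line calculation. The helper function below is illustrative; Python's math module also provides radians() and degrees() directly.

```python
import math

# 360 degrees corresponds to 2*pi radians, so:
#   radians = degrees * pi / 180
def to_radians(degrees):
    return degrees * math.pi / 180

print(to_radians(90))             # ~1.5708, i.e. pi/2
print(to_radians(180))            # ~3.1416, i.e. pi
print(math.degrees(2 * math.pi))  # ~360
```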

Comprehension Checkpoint

The measure of angles can be expressed in degrees or in

If we now plot the sine of the angle measured in radians along the Cartesian coordinate system, we see that we again get the characteristic rise and fall. However, since the angle measure is plotted along the x-axis (instead of the cosine of the angle), the graph that results is a continuous curve on the coordinate plane that resembles a physical wave, as seen below.

Figure 4: Sine graph.

If you look closely at this graph you will see that the wave crosses the x-axis at multiples of 3.1416 – the value of pi. One full wave is completed at the value 6.2832, or 2π, exactly the circumference of the unit circle.

Understanding the origin of the sine function makes it easier to understand how it operates in relation to waves. As we saw earlier, the basic formula representing the sine function is:

y = Sin(x)

In this formula, y is the value on the y-axis obtained when one carries out the function Sin(x) for points on the x-axis. This results in the graph of the basic sine wave. But how can we represent other forms of waves, especially ones that are larger or longer? To graph waves of different sizes we need to add other terms to our formula. The first we will look at is amplitude.

y = ASin(x)

In this modification of the formula, A gives us the value of the amplitude of the wave – the distance it moves above and below the x-axis, or the height of the wave. In essence, what the modifier A does is increase (or amplify) the result of the function Sin(x), thus leading to larger resulting y values.

To modify the wavelength of a wave, or the distance from one point on a wave to an equal point on the following wave, the modifier k is used, as seen in the formula below.

y = ASin(k*x)

The multiplier k changes the length of the wave. Remember from our earlier discussion that the wavelength of our most simple wave is 2π; therefore, the wavelength in the final formula is determined simply by dividing 2π by the multiplier k, so wavelength (λ) = 2π/k.
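The roles of A and k in y = ASin(k*x) can be sketched numerically. The function names below are illustrative; note that λ = 2π/k means a larger k produces a shorter wavelength.

```python
import math

def wave(x, amplitude=1.0, k=1.0):
    """y = A*sin(k*x): A scales the height, k sets the wavelength."""
    return amplitude * math.sin(k * x)

def wavelength(k):
    # The basic sine wave repeats every 2*pi, so multiplying x by k
    # makes the pattern repeat every 2*pi/k instead.
    return 2 * math.pi / k

print(wavelength(1))                # ~6.2832 (the basic wave)
print(wavelength(2))                # ~3.1416 (k=2 halves the wavelength)
print(wave(0.5, amplitude=3, k=2))  # 3*sin(1)
```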

Comprehension Checkpoint

The multiplier k is used to modify the _____ of a wave.

Since waves are always moving, one more important term to describe a wave is the time it takes for one wavelength to pass a specific point in space. This term, referred to as the period, T, is calculated in the same way as the wavelength, T = 2π/k; however, it is given in units of time (seconds) rather than distance.

Understanding the mathematics behind wave functions allows us to better understand the natural world around us. For example, the differences between the colors you see on this page have to do with different wavelengths of light perceived by your eyes. Similarly, the difference between a bird’s song and the roar of a locomotive is due to the size of the sound waves emitted. Waves, and thus the mathematics of waves, constantly surround us.

Waves, circles, and triangles are closely related. In fact, this relatedness forms the basis of trigonometry. Basic trigonometric functions are explained in this module and applied to describe wave behavior. The module presents Cartesian coordinate (x, y) graphing, and shows how the sine function is used to plot a wave on a graph.

Key Concepts

  • The sine function is one of many trigonometric ratios calculated by astronomer Hipparchus over 2,000 years ago.

  • Understanding trigonometric functions allows for the understanding and prediction of an object’s movement.

Gary Leo Welz, M.A./M.S. “Wave Mathematics” Visionlearning Vol. MAT-1 (1), 2006.




Unit Conversion

by Donald G Wiggins, M.A./M.S.

  • Table of contents
    • Dimensional Analysis
    • Conversions of multiple units
  • Terms you should know
    • conversion : the act of changing from one form, unit, or state to another
    • ratio : the relationship between two or more quantities; relative amounts of two or more values expressed as a proportion
    • set up (verb) : to organize or arrange a problem or task in preparation for working it out
    • setup (noun) : the organization or arrangement of a problem or task in preparation for working it out

On September 23, 1999, NASA's $125 million Mars Climate Orbiter approached the red planet under guidance from a team of flight controllers at the Jet Propulsion Laboratory. The probe was one of several planned for Mars exploration, and would stay in orbit around the planet as the first extraterrestrial weather satellite. It had been in flight for over nine months, covering more than 415 million miles of empty space on its way to Mars. As the Orbiter reached its final destination, the flight controllers began to realize that something was wrong. They had planned for the probe to reach an orbit approximately 180 km off the surface of Mars – well beyond the planet's thin atmosphere. But new calculations based on the current flight trajectory showed the Orbiter skimming within 60 km of the Martian surface. Now the probe would actually enter the planet's thin atmosphere, something for which it was never designed. The consequences were catastrophic: when the scientists and engineers commanding the probe lost communication, they could only assume that the spacecraft was incinerated by the friction from an atmospheric entry that it was never supposed to make.

Figure 1: An artist's rendition of the Mars Climate Orbiter.

What caused this disaster? The problem arose in part from a simple, seemingly innocent mistake. Throughout the journey from Earth, solar winds pushed against the probe's solar panels, throwing the spacecraft slightly off course. The designers had planned for this, and the flight controllers fired jet thrusters to make numerous small corrections to readjust its course. Unfortunately, the NASA engineers measured this force in pounds (a non-metric unit), while the JPL team worked in newtons (a metric unit), and the software that calculated how long the thrusters should be fired did not make the proper conversion. Since 1 pound = 4.45 newtons, 4.45 times too much thrust was applied each time the thrusters fired. While each individual adjustment error was very small, the errors accumulated over multiple adjustments, resulting in the craft's premature demise in the Martian atmosphere.

The Orbiter loss illustrates the need for consistent use of units. Most people, however, work most comfortably in whatever units they grew up using, so unit consistency cannot always be guaranteed within or between teams around the world. Ideally, everyone should be comfortable with a variety of ways of converting units, so that individuals from different backgrounds can collaborate.

While most people are not controlling NASA space probes, unit conversion is something that happens every day, in all walks of life. Even such a simple problem as figuring out that two dozen eggs equals 24 eggs is, at its heart, a unit conversion problem. Whether you realize it or not, when you do this problem in your head, you're figuring it out like this:

2 dozen eggs × (12 eggs / 1 dozen eggs) = 24 eggs


Generally, unit conversions are most easily solved using a process called dimensional analysis, also known as the factor-label method. A notable exception is the conversions among temperature units (see our Temperature module for details). Dimensional analysis uses three fundamental facts to make these conversions, which lead to the steps in the conversion process:

1. A conversion factor is a statement of the equal relationship between two units. The first step in dimensional analysis is therefore identifying the conversion factor(s) you will need to make your conversion. In the egg problem, the statement that "1 dozen eggs = 12 eggs" is a conversion factor.

2. If you multiply by a conversion factor in the form of a ratio, you are really only multiplying by 1, since the two parts of the ratio equal each other:

(12 eggs / 1 dozen eggs) = 1, since 12 eggs = 1 dozen eggs

The second step in dimensional analysis is therefore to set up a mathematical problem that uses one or more conversion factors to get to the units you are interested in. In the egg problem, if you have 2 dozen eggs and want to know how many individual eggs you have, you would set up the problem like this:

2 dozen eggs × (12 eggs / 1 dozen eggs)

3. Units, just like numbers or variables, "cancel" when you divide a unit by itself. So the final step in dimensional analysis is to work the math problem you've set up, canceling units along the way. In the egg example, the "dozen eggs" in the bottom of the ratio cancels the "dozen eggs" in your original number, leaving "eggs" as the only unit left in the problem, as shown in the final answer, 24 eggs.

2 dozen eggs × (12 eggs / 1 dozen eggs) = 2 × 12 eggs = 24 eggs
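The three-step process can also be sketched in code. The `convert` helper below is my own illustration, not part of the module; it tracks units alongside values so that each conversion factor "cancels" the old unit, just as in the egg example:

```python
# Factor-label conversion: multiply a value by ratios that each equal 1,
# tracking units so the old unit "cancels" and the new unit remains.

def convert(value, unit, factors):
    """Apply conversion factors, each given as (to_amount, to_unit, from_amount, from_unit)."""
    for to_amount, to_unit, from_amount, from_unit in factors:
        assert unit == from_unit, f"cannot cancel {unit} with {from_unit}"
        value = value * to_amount / from_amount  # multiplying by a ratio equal to 1
        unit = to_unit                           # the old unit cancels out
    return value, unit

# The egg problem: 2 dozen eggs x (12 eggs / 1 dozen eggs) = 24 eggs
eggs, unit = convert(2, "dozen eggs", [(12, "eggs", 1, "dozen eggs")])
print(eggs, unit)  # 24.0 eggs
```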

Let's apply these steps to a slightly more complex problem than counting eggs. How much money would it cost to fill a truck's 23-gallon gas tank if gas costs $2.87 per gallon?

First, create the conversion factor. Given the price, you can say 1 gallon = $2.87. Then set up and work the equation:

23 gallons × ($2.87 / 1 gallon) = 23 × $2.87 = $66.01

Now that you've filled your tank, it's time to head off for your day trip to Mexico. As you cross the border from the US into Mexico, you notice that the speed limit sign reads 100. Wow! Can you step on the gas, or is there something else going on here? There are very few countries other than the United States where you will find speeds in miles per hour – almost everywhere else they would be in kilometers per hour. So, some converting is in order to know what the speed limit would be in a unit you're more familiar with.

First, we need to define what "100 kilometers per hour" means mathematically. The "per" tells you that the number is a ratio: 100 kilometers distance per 1 hour of time. Other than that, you need to know the conversion factor between kilometers and miles, namely 1 mile = 1.61 km. Now the setup is pretty simple. Give it a try yourself, and then run the animation below (an SWF file, Flash required) or see the equation to reveal the math needed to solve the problem.

Interactive Animation: Velocity Conversion

(100 km / 1 hour) × (1 mile / 1.61 km) = 62.1 miles per hour
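In code, this conversion is a single multiplication by the ratio (1 mile / 1.61 km); a minimal Python sketch (variable names are mine):

```python
# 100 km/hour converted to miles/hour by multiplying by (1 mile / 1.61 km):
# the "km" in the value cancels the "km" in the conversion ratio.
speed_kmh = 100            # km per hour
miles_per_km = 1 / 1.61    # conversion ratio: 1 mile = 1.61 km
speed_mph = speed_kmh * miles_per_km
print(round(speed_mph, 1))  # 62.1
```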


So far you've seen examples with only one conversion factor, but this method can be used for more complicated situations. When it's time to leave for home from your day trip in Mexico, you realize you have just enough gas to make it back across the border into the US before you have to fill up. You notice that you could buy gas for 6.50 pesos per liter before you head home. At first glance that seems more expensive than the $2.87 per gallon at home, but is it really? You need to convert to be sure. Fortunately you came prepared, and looked up the currency exchange rate (1 peso = 8.95 cents) and volume conversion (1 gallon = 3.79 Liters) the morning before you left.

This conversion is more complicated than the previous examples for two reasons. First, imagine that you do not have a single direct conversion factor for the monetary conversion (pesos to dollars). You know that 1 peso = 8.95 cents, and you also know that 100 cents = 1 dollar. Together, these two facts will let you convert the currency. The second twist is that you are not only changing the money unit – you also need to convert the volume unit. These two conversions can be done in a single setup. The order does not matter, but both must be done. Try to set this one up for yourself first, and then run the animation (an SWF file, Flash required) or see the equation below to reveal the solution:

Interactive Animation: Gas Price Conversion

(6.50 pesos / 1 L) × (8.95 cents / 1 peso) × (3.79 L / 1 gallon) × ($1 / 100 cents) = $2.20 per gallon

Notice that when set up properly, the "L" had to be placed above the division bar in the conversion factor in order to cancel out the "L" below the division bar in the original number. Also note that even though the "L" terms are separated by two conversion factors, they still cancel each other out. Now it is easier to decide whether you should fill up before or after you return to the United States.
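The chained setup translates directly to code. This Python sketch uses the module's conversion factors; each comment notes which unit cancels at that step:

```python
# Chain two unit changes (pesos -> cents -> dollars) and one volume change
# (liters -> gallons) in a single setup; the order of the factors does not matter.
price_pesos_per_liter = 6.50
cents_per_peso = 8.95       # 1 peso = 8.95 cents
liters_per_gallon = 3.79    # 1 gallon = 3.79 L
cents_per_dollar = 100      # 1 dollar = 100 cents

dollars_per_gallon = (price_pesos_per_liter
                      * cents_per_peso        # pesos cancel
                      * liters_per_gallon     # liters cancel
                      / cents_per_dollar)     # cents cancel
print(round(dollars_per_gallon, 2))  # 2.2
```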

You can see that you don't have to be an engineer at NASA to need dimensional analysis. You need to convert units in your everyday life (to budget for gas price increases, for example) as well as in scientific applications, like stoichiometry in chemistry and calculating past plate motions in geology. If you know what units you have to work with, and in what units you want your answer to be, you don't need to memorize a formula. If the teams working on the Mars Climate Orbiter had realized that they needed to go through these steps, we would be getting weather forecasts for Mars today.

When units of measurement are not used consistently in science, serious consequences can result, as seen in NASA’s Mars Climate Orbiter disaster. This module introduces dimensional analysis, or the factor-label method, of converting units of measurement to solve mathematical problems. The module takes readers through realistic scenarios where unit conversion is required and explains how to set up and solve problems using dimensional analysis.

Key Concepts

  • Most unit conversions can be solved through dimensional analysis, also known as the factor-label method.

  • Dimensional analysis uses three fundamental facts: (1) A conversion factor is a statement of the equal relationship between two units; (2) Multiplying by a conversion factor in the form of a ratio is multiplying by 1, since the two parts of the ratio equal each other; (3) Units "cancel" when you divide a unit by itself.

  • The steps in the conversion process are (a) identifying the conversion factor(s) needed, (b) setting up a mathematical problem that uses one or more conversion factors to get to the desired units, and (c) working the math problem, canceling units along the way.

Donald G Wiggins, M.A./M.S. “Unit Conversion” Visionlearning Vol. MAT-3 (2), 2008.



Page 13

Statistics

by Liz Roth-Johnson, Ph.D.

You may not think of beer brewing as a scientific pursuit, but consider how many variables must be carefully controlled in order to reproducibly brew the same beer with the right appearance, taste, and aroma. Small differences in the quality of the raw ingredients, the temperature of the brew, or the precise way in which microorganisms break down sugars to produce alcohol could have noticeable effects on the final beverage. If you were managing a brewery and needed to send a consistently crafted beer to market, would you be content leaving all of these different brewing variables up to chance?

Because of the complicated nature of brewing, beer companies have a long history of employing scientists to help them analyze and perfect their beers. In 1901, the Guinness Brewery in Dublin, Ireland, established its first official research laboratory. Guinness employed several young scientists as beer makers. These scientists-turned-brewers applied rigorous experimental approaches and analytical techniques to the process of brewing (Figure 1).

Figure 1: The process of beer making is a surprisingly scientific pursuit. The Guinness Brewery founded its first research laboratory in 1901 and hired a young scientist who made a lasting impact on the field of statistics. image © Morabito92

To see what this might have looked like, let’s imagine a scenario in which one of Guinness’ scientific brewers is working on a new quality control procedure. He (as the vast majority of beer workers at the time were men) is recording a variety of quantitative measurements to be used as benchmarks throughout the brewing process.

Today he is analyzing a set of density measurements taken on the tenth day of brewing from five different batches of the same type of beer (Figure 2). From this dataset, the brewer would like to establish a range of density measurements that can be used to assess the quality of all future beer batches on the tenth day of brewing.

Figure 2: Measurements of beer density recorded for five batches of the same type of beer on the tenth day of brewing. Beer density is reported as specific gravity, which is the density of the beer divided by the density of water. Specific gravity is typically measured with a hydrometer, pictured on the right. The higher the hydrometer floats, the higher the density of the fluid being tested. image © Schlemazl (hydrometer)

As we will see in this module, the brewer’s analysis would benefit greatly from the techniques of statistical inference. When studying a procedure like brewing, the statistical population includes all possible observations of the procedure in the past, present, and future. Because of the impracticality of studying the entire population, the brewer must instead use a smaller subsample, such as the data presented above, to make inferences about the entire population. Here we will highlight how one technique, the confidence interval, can be used to estimate a parameter from a subsample dataset. (To review the relationship between subsamples and populations, or to learn more about inferential statistics in general, see Introduction to Inferential Statistics.)

The histories of science and beer are surprisingly intertwined. Before the development of modern science, ancient brewers experimented through trial-and-error, testing different combinations of ingredients and brewing conditions to concoct palatable beverages. In later centuries, several important scientific advancements occurred in connection with brewing. For example, James Prescott Joule conducted some of his classic thermodynamics experiments in his family’s brewery where he had access to high-quality thermometers and other useful equipment (see Energy: An Introduction and Thermodynamics I); biochemist Sören Sörensen invented the pH scale while working for Carlsberg, a Danish brewing company (see Acids and Bases: An Introduction); and one of the most widely used tools used in inferential statistics was developed by William Sealy Gosset, a chemist working for Guinness.

Identifying the relationship between a subsample and a population is a key concept at the heart of inferential statistics. However, at the turn of the twentieth century, statisticians did not differentiate between subsample statistics and population parameters. Karl Pearson, one of the great statisticians of the time, typically worked with such large subsamples that any difference between the subsample statistics and population parameters would, in theory, be negligibly small. But this posed a problem for Gosset, who needed a way to select the best varieties of barley to use in Guinness’ beer by analyzing only very small subsamples of data collected from the farm. After spending a year studying with Pearson, Gosset developed a new mathematical tool that could be used to estimate a population mean based on a small subsample. Because Guinness’ board of directors was eager to protect company secrets, Gosset published his work in 1908 under the pseudonym ‘Student’.

Gosset’s mathematical tool, now known as “Student’s t-distribution,” has become an important component of several inferential statistics techniques used in science, including the construction of a confidence interval. Student’s t-distribution is a probability distribution that looks similar to a normal distribution but with more pronounced tails (Figure 3). The t-distribution can be used to help estimate a population mean when the subsample size is small and the population’s standard deviation is unknown. While the mean of a very large subsample is likely to fall close to the population mean, small subsamples can be more unpredictable. The t-distribution accounts for the inherent uncertainty associated with small subsamples by assigning less probability to the central values of the distribution and more probability to the extreme values at the tails. The larger a subsample becomes, the more the t-distribution looks like a normal distribution; the decision to use a t-distribution rests on the assumption that the underlying population is normally distributed.
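The narrowing of the t-distribution with sample size shows up directly in its critical values. The numbers below are standard t-table entries (97.5th percentile), hardcoded here because the Python standard library has no t-distribution:

```python
# 97.5th-percentile critical values of Student's t-distribution
# (standard t-table entries) for increasing degrees of freedom.
# As df grows, the value approaches the normal z-score of about 1.96.
t_975 = {1: 12.706, 4: 2.776, 9: 2.262, 29: 2.045}
z_975 = 1.960  # normal-distribution limit

for df, t in t_975.items():
    print(f"df={df:>2}: t={t}  (excess over z: {t - z_975:.3f})")
```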

Identify two elements that are commonly found in all three minerals in the data table
Figure 3: Student’s t-distribution looks similar to a normal distribution but has more pronounced tails when subsample sizes are small. Four different t-distributions are shown in varying shades of blue, each of which corresponds to a different subsample size (N). Notice how the t-distribution approaches a normal distribution (shown in red) as the degrees of freedom (i.e., the sample size) becomes larger.

Shortly after Gosset’s original paper was published, Ronald Fisher (whose own work is highlighted in Statistics in Science) further developed and applied the work. In particular, Fisher defined a metric called the “t-score” (or t-statistic). As we will see in a moment, this t-score can be used to construct a confidence interval, especially when subsample sizes are small. In 1937, Polish mathematician and statistician Jerzy Neyman built upon this work and formally introduced the confidence interval more or less as it is used by scientists today.


Confidence intervals use subsample statistics like the mean and standard deviation to estimate an underlying parameter such as a population mean. As the name suggests, confidence intervals are a type of interval estimate, meaning that they provide a range of values in which a parameter is thought to lie. This range reflects the uncertainty related to the estimation, which is largely determined by a confidence level selected at the beginning of the analysis. The higher the confidence level, the less uncertainty is associated with the estimation. For example, a confidence interval calculated at the 95% confidence level will be associated with less uncertainty than a confidence interval calculated at the 50% confidence level.

One common misconception is that the confidence level represents the probability that the estimated parameter lies within the range of any one particular confidence interval. This misconception can lead to false assumptions that the confidence interval itself provides some intrinsic measure of precision or certainty about the location of the estimated parameter. In fact, the confidence level represents the percentage of confidence intervals that should include the population parameter if multiple confidence intervals were calculated from different subsamples drawn from the same population.
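This coverage interpretation can be checked with a quick simulation. The sketch below is my own illustration, not part of the module; the population parameters are invented, and the t-score for df = 4 is taken from a t-table:

```python
import math
import random
import statistics

random.seed(42)
T_CRIT = 2.776   # t-score for 95% confidence, df = 4 (from a t-table)
POP_MEAN, POP_SD, N = 10.0, 3.0, 5

# Draw many small subsamples from a known normal population and count how
# often the 95% confidence interval actually contains the population mean.
covered = 0
trials = 2000
for _ in range(trials):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    m = statistics.mean(sample)
    margin = T_CRIT * statistics.stdev(sample) / math.sqrt(N)
    if m - margin <= POP_MEAN <= m + margin:
        covered += 1

print(covered / trials)  # close to 0.95, as the confidence level predicts
```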

To think about what this really means, let’s imagine that you are an environmental scientist studying chemical runoff from a local factory. You have detected lead in a pond near the factory and want to know how the lead has affected a small population of 25 frogs that live in the pond. Although you would like to test the entire population of 25 frogs, you only have five lead testing kits with you. Because of this, you can only test a random subsample of five frogs and then use your data to make inferences about the entire population. After collecting and testing blood samples from five random frogs, you get the following results:

Subsample #1
Frog # Lead in Blood (µg/dL)
1 10.3
2 12.5
3 9.7
4 5.6
5 14.5
Mean = 10.5 µg/dL Standard deviation = 3.3 µg/dL

95% confidence interval = 10.5 ± 4.2 µg/dL

Using this subsample dataset, you calculate a confidence interval at the 95% confidence level. Based on this interval estimate, you feel reasonably sure that the population mean (in this case the average level of lead in the blood of all 25 frogs in this pond) lies somewhere between 6.4 and 14.7 µg/dL – but what would happen if you repeated this entire process again? Let’s say you go back to the same pond, collect two more random subsamples from the same population of 25 frogs and find the following results:

Subsample #2
Frog # Lead in Blood (µg/dL)
1 12.9
2 12.9
3 15.8
4 10.7
5 16.9
Mean = 13.8 µg/dL Standard deviation = 2.5 µg/dL

95% confidence interval = 13.8 ± 3.1 µg/dL

Subsample #3
Frog # Lead in Blood (µg/dL)
1 16.9
2 5.6
3 11.0
4 12.9
5 3.7
Mean = 10.0 µg/dL Standard deviation = 5.4 µg/dL

95% confidence interval = 10.0 ± 6.7 µg/dL
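Each interval above can be reproduced with a few lines of Python. This sketch works through subsample #1 and hardcodes the t-table critical value for df = 4 (the endpoints match the module's up to rounding):

```python
import math
import statistics

# Reproduce the 95% confidence interval for frog subsample #1.
# The t-score 2.776 comes from a t-table (df = 4, cumulative p = 0.975).
T_CRIT = 2.776
lead = [10.3, 12.5, 9.7, 5.6, 14.5]   # blood lead, µg/dL

m = statistics.mean(lead)                      # subsample mean, ~10.5
s = statistics.stdev(lead)                     # subsample standard deviation, ~3.3
margin = T_CRIT * s / math.sqrt(len(lead))     # margin of error, ~4.1

print(f"{m - margin:.1f} to {m + margin:.1f} µg/dL")  # 6.4 to 14.7 µg/dL
```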

Although all three subsamples were randomly drawn from the same population, they generate three different 95% confidence interval estimates for the population mean (Figure 4). Perhaps most notable is the difference in size among the three confidence intervals. Subsample 3, for example, has a much larger confidence interval than either of the other two samples. Subsample 3 also has the most variation, or spread, among its five data points, which is quantified by its particularly large standard deviation. Greater variation within a subsample leads to a greater degree of uncertainty during the estimation process. As a result, the range of a confidence interval or other statistical estimate will be larger for a more varied sample compared to a less varied sample even when both estimates are calculated at the same confidence level.

Figure 4: Different subsamples generate different confidence intervals, even when randomly selected from the same population. Each subsample confidence interval is represented by black error bars. In this case, subsamples 1 and 3 generate confidence intervals that include the population mean (10.1 µg/dL) while subsample 2 does not. At the 95% confidence level, we would expect 95 out of every 100 subsamples drawn from the same population to generate a confidence interval that includes the population parameter of interest.

Because this is an illustrative example, it turns out that we know the actual parameters for the entire 25-frog population—a luxury we would not normally have in this kind of situation. The population mean we have been trying to estimate is in fact 10.1 µg/dL. Notice that the second subsample’s confidence interval, despite being at the 95% confidence level, does not include the population mean (Figure 4). If you were to collect 100 different random subsamples from the same frog population and then calculate 100 different confidence intervals based on each subsample, 95 out of the 100 subsamples should generate confidence intervals that contain the population parameter when calculated at the 95% confidence level. When calculated at the 50% confidence level, only 50 out of the 100 subsamples would be expected to generate confidence intervals that contain the population parameter. Thus, the confidence level provides a measure of probability that any particular subsample will generate a confidence interval that contains the population parameter of interest.

In practice, confidence intervals are generally thought of as providing a plausible range of values for a parameter. This may seem a little imprecise, but a confidence interval can be a valuable tool for getting a decent estimation for a completely unknown parameter. At the beginning of the frog scenario above, we knew absolutely nothing about the average level of lead in the population before analyzing any one of the three subsamples. In this case, all three subsamples allowed us to narrow down the value of the population mean from a potentially infinite number of options to a fairly small range. If, on the other hand, we had wanted to precisely pinpoint the population mean, then the analysis would not have been nearly as helpful. When dealing with confidence intervals—as with any statistical inference technique—it is ultimately up to researchers to choose appropriate techniques to use and ascribe reasonable meaning to their data (for more about deriving meaning from experimental results, see Introduction to Inferential Statistics).


To see how a confidence interval is constructed, we will use the brewer’s density dataset from the beginning of the module (Figure 2). This dataset gives us a subsample with a mean of 1.055 and a standard deviation of 0.009. (See Introduction to Descriptive Statistics for more information about calculating mean and standard deviation). To differentiate these subsample statistics from the population parameters (where µ represents the population mean and σ the standard deviation), it is common practice to use the variables m and s for the subsample mean and standard deviation, respectively. The size of our subsample (N) is 5. In the four steps below, we will use these values to construct a confidence interval for the population mean to answer our brewer’s original question: What is the average density of this beer on the tenth day of brewing?

First, we need to choose a confidence level for our calculation. A confidence level can be any value between, but not including, 0% and 100%, and it provides a measure of how probable it is that our interval estimate includes the population mean. In theory, any confidence level can be chosen, but scientists commonly use 90%, 95%, or 99% confidence levels in their data analysis. The higher the confidence level, the more probable it is that the confidence interval will include the actual population mean. For our calculation, we will choose a 95% confidence level.

The next step is to find the “critical value” that corresponds with our sample size and chosen confidence level. A critical value defines the cut-off region of the test statistic's distribution beyond which the null hypothesis can be rejected. We begin by calculating a value called alpha (α), which is determined by our chosen confidence level using the equation:

α = 1 − (confidence level / 100%)

For a confidence level of 95%, alpha equals 0.05. We can now use our subsample size and alpha value with a lookup table or online calculator to find the critical value. Because our subsample size is quite small (N = 5) and we know nothing about the variation of beer density among the entire population, we will express our critical value as a t-score. The t-score can be found using a lookup table like the one shown in Figure 5. Typically a t-score lookup table will organize t-scores by two metrics: the "cumulative probability" and the "degrees of freedom." Cumulative probability helps us to determine if a random variable's value falls within a specific range; the degrees of freedom are the number of observations in a sample that are free to vary when making estimations from subsample data.

  • The cumulative probability (p) is calculated using alpha: p = 1 – α/2. Because our alpha is 0.05, the cumulative probability we are interested in is 0.975.
  • The degrees of freedom is the subsample size minus one: N – 1. Because our subsample size is 5, the degrees of freedom equals 4.
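These two formulas are trivial to express in code; a quick Python sketch for our values:

```python
# Alpha, cumulative probability, and degrees of freedom for the brewer's analysis.
confidence_level = 95                         # percent
alpha = round(1 - confidence_level / 100, 4)  # 0.05 (rounded to avoid float noise)
p = 1 - alpha / 2                             # cumulative probability, 0.975
N = 5                                         # subsample size
df = N - 1                                    # degrees of freedom, 4
print(alpha, p, df)
```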

Using the lookup table, we now want to find where our cumulative probability (0.975) intersects with the degrees of freedom (4). As shown in Figure 5, this leads us to the t-score 2.776. This is our critical value.

Figure 5: A t-score lookup table shows several critical values for a wide range of sample sizes (expressed as degrees of freedom, or N-1) and confidence levels (expressed as the cumulative probability, p = 1 – alpha/2). The t-score corresponding to a confidence level of 95% and a sample size of 5 is highlighted.

Sometimes scientists will express a critical value as a z-score instead, which is more appropriate when subsample sizes are much larger and the population standard deviation is already known. Both the t-score and the z-score work with the assumption that the sampling distribution can be reasonably approximated by a normal distribution (see Introduction to Descriptive Statistics). If you know or have reason to believe that the subsample statistic you are analyzing is not normally distributed around the population parameter, then neither the t-score nor the z-score should be used to express the critical value.

Now that we have found our critical value, we can calculate the “margin of error” associated with our parameter estimation. The margin of error is a value that tells us the error or uncertainty associated with our point estimate. This value is calculated by multiplying the critical value by the standard error (an estimate of the standard deviation of a subsample distribution) of the subsample mean.

margin of error = (critical value) × (standard error of the mean)

For a subsample that has been chosen through simple random sampling, the standard error of the subsample mean is calculated as the subsample standard deviation (s) divided by the square root of the subsample size (N).

standard error of the mean = s / √N

In our case, the standard error of the mean density is (0.009)/√5 ≈ 0.004.

While the standard deviation and standard error may seem very similar, they have very different meanings. When measuring beer densities, the standard deviation is a descriptive statistic that represents the amount of variation in density from one batch of beer to the next. In contrast, the standard error of the mean is an inferential statistic that provides an estimate of how far the population mean is likely to be from the subsample mean.

With our standard error of the mean (0.004) and our critical value (2.776) we can calculate the margin of error: (0.004)(2.776) = 0.011.

At this point we are ready to assemble and report our final confidence interval. A confidence interval is commonly expressed as a point estimate (in this case our subsample mean) plus or minus a margin of error. This means that our confidence interval for beer density on the tenth day of brewing is 1.055 ± 0.011 at a confidence level of 95%. Sometimes scientists will simply report this as the “95% confidence interval.”

Now that we have constructed a confidence interval, what can we say about the density of the entire population? Although we still do not know the exact mean beer density for all batches that ever have been or will be brewed, we can be reasonably (though never absolutely) sure that the mean density falls somewhere between 1.044 and 1.066. This is therefore a good density range for brewers to aim for when analyzing the quality of future batches of beer.
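The whole construction fits in a few lines of Python. This sketch reproduces the brewer's interval, with the t-score taken from the t-table (df = 4, p = 0.975):

```python
import math

# Construct the brewer's 95% confidence interval step by step.
m, s, N = 1.055, 0.009, 5      # subsample mean, standard deviation, size
t_crit = 2.776                 # t-score from the lookup table (df = 4)

se = s / math.sqrt(N)          # standard error of the mean, ~0.004
margin = t_crit * se           # margin of error, ~0.011

print(f"{m} ± {margin:.3f}")                    # 1.055 ± 0.011
print(f"{m - margin:.3f} to {m + margin:.3f}")  # 1.044 to 1.066
```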


With computer programs like Excel, a confidence interval can be constructed at the click of a button. The entire process above can be completed using Excel’s CONFIDENCE.T function. This function requires three input values entered in this order: alpha, subsample standard deviation, and subsample size (Figure 6). It then reports the margin of error, which can be used to report the final confidence interval as mean ± margin of error.

Figure 6: The margin of error for a confidence interval can be easily calculated using Excel’s CONFIDENCE.T function. This function requires alpha, the subsample standard deviation, and the subsample size.

Excel has a second confidence interval function called CONFIDENCE.NORM (or CONFIDENCE in earlier versions of the program) that can also be used to calculate a margin of error (Figure 7). Whereas CONFIDENCE.T uses a t-distribution to find a t-score for the critical value, CONFIDENCE.NORM uses a normal distribution to find a z-score for the critical value. The CONFIDENCE.NORM function can be used when the subsample size is large and/or the population standard deviation is already known. In most cases, it is safest to use the CONFIDENCE.T function. In this example, the subsample size (5) is very small and using the two functions produces different margins of error: 0.011 using a t-score versus 0.008 using a z-score. The CONFIDENCE.T margin of error is larger, as this function is better at representing the increased error associated with small subsamples.
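The two Excel results can be replicated in Python; this sketch uses the t-table value for the t-score and the standard library's normal distribution for the z-score:

```python
import math
from statistics import NormalDist

# Compare the margins of error that Excel's CONFIDENCE.T and CONFIDENCE.NORM
# would report for the brewer's data (alpha = 0.05, s = 0.009, N = 5).
s, N = 0.009, 5
se = s / math.sqrt(N)                  # standard error of the mean

t_crit = 2.776                         # from the t-table, df = 4
z_crit = NormalDist().inv_cdf(0.975)   # ~1.96

print(round(t_crit * se, 3))  # 0.011  (matches CONFIDENCE.T)
print(round(z_crit * se, 3))  # 0.008  (matches CONFIDENCE.NORM)
```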

Figure 7: The margin of error for a confidence interval can also be calculated using Excel’s CONFIDENCE.NORM function. This function is more appropriate to use when the subsample size is much larger and/or the population standard deviation is already known.

The NASA Curiosity rover is traversing Mars and sending troves of data back to Earth. One important measurement Curiosity records is the amount of cosmic and solar radiation hitting the surface of Mars (Figure 8). As humans look forward to one day exploring the Red Planet in person, scientists will need to develop spacesuits capable of protecting astronauts from harmful levels of radiation. But how much radiation, on average, will a future Martian be exposed to?

Figure 8: The Curiosity rover (left) uses its radiation assessment detector (right) to record the surface radiation exposure on Mars. How can the data collected by Curiosity be used to make inferences about the typical levels of radiation a future Mars astronaut would be exposed to? image © NASA/JPL-Caltech/SwRI

Since it landed in August 2012, Curiosity has been using its radiation assessment detector to record the surface radiation exposure on Mars. A scientist in the future is analyzing this data and sees that there has been an average of 0.67 ± 0.29 millisieverts of radiation exposure per Martian day. (For comparison, sitting on the surface of Mars would be like taking approximately 35 chest X-rays every day.) This average is based on daily radiation exposure measurements recorded once every five Martian days over the past five Martian years for a total of 669 individual measurements.

Use this information to construct 50%, 80%, and 95% confidence intervals for the daily average radiation exposure on Mars. What are the subsample and population in this scenario? Can you identify any possible sources of sampling bias? (See our module Introduction to Inferential Statistics for a review of these terms.)

(Problem loosely based on Hassler et al., 2014)

Because we are interested in knowing the daily average radiation on Mars, the population would be the total surface radiation measured every single day on Mars over the entire time that its current atmospheric conditions have existed and continue to exist. Observing this population is clearly impossible! Instead we must analyze a subsample to make inferences about the daily average radiation on Mars.

The subsample presented in this problem is the daily radiation exposure measured over five Martian years. This is a reasonably random subsample given that radiation exposure was recorded at equal intervals over several Martian years. Bias could have easily been introduced into the subsample if radiation exposure had only been recorded during certain seasons throughout the Martian year or if the instrument recording the radiation levels had not been properly calibrated. The fact that the subsample was collected over several years also helps account for solar fluctuations and other changes that might occur from one year to the next.

To construct our three confidence intervals, we can start by using the subsample mean (m = 0.67 mSv day⁻¹) as a point estimate of the parameter mean. We can then calculate the margin of error in Excel using three values: the subsample size (N = 669), the subsample standard deviation (s = 0.29), and alpha. Because alpha is a function of the confidence level, we will need to compute a different value of alpha for each confidence interval:

  • alpha = 1 – (50% ÷ 100%) = 0.5 at the 50% confidence level
  • alpha = 1 – (80% ÷ 100%) = 0.2 at the 80% confidence level
  • alpha = 1 – (95% ÷ 100%) = 0.05 at the 95% confidence level

In this problem we do not know any population parameters, so we will use the CONFIDENCE.T function in Excel. However, because the subsample size is so large (N = 669), both the CONFIDENCE.T and the CONFIDENCE.NORM functions will generate nearly the same confidence interval. The CONFIDENCE.T function in Excel gives the following margins of error:

  • margin of error = 0.0076 mSv day⁻¹ at the 50% confidence level
  • margin of error = 0.014 mSv day⁻¹ at the 80% confidence level
  • margin of error = 0.022 mSv day⁻¹ at the 95% confidence level
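The whole calculation can also be reproduced outside of Excel. The following Python sketch (an illustration added here, assuming the scipy library is available) computes all three margins of error from the subsample statistics given in the problem.

```python
from math import sqrt
from scipy import stats

# Subsample statistics from the Curiosity radiation data described above
mean, s, n = 0.67, 0.29, 669

for level in (0.50, 0.80, 0.95):
    alpha = 1 - level  # alpha is a function of the confidence level
    # Critical t-score times the standard error gives the margin of error
    margin = stats.t.ppf(1 - alpha / 2, df=n - 1) * s / sqrt(n)
    print(f"{level:.0%} confidence level: {mean} ± {margin:.4f} mSv/day")
# prints margins of 0.0076, 0.0144, and 0.0220 mSv/day, matching the
# values calculated with Excel above
```

Because N = 669 is so large, substituting `stats.norm.ppf` for `stats.t.ppf` would change these margins only negligibly, just as CONFIDENCE.NORM and CONFIDENCE.T nearly agree in Excel.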

Taking these calculations together with our point estimate of the parameter mean gives us three confidence interval estimates for the daily average radiation exposure on Mars:

  • 0.67 ± 0.0076 mSv day⁻¹ at the 50% confidence level
  • 0.67 ± 0.014 mSv day⁻¹ at the 80% confidence level
  • 0.67 ± 0.022 mSv day⁻¹ at the 95% confidence level

We can show this graphically by plotting the confidence intervals as error bars on a bar graph (Figure 9). Notice how the size of the confidence interval changes for each confidence level. Out of the three confidence intervals we just constructed, the 50% confidence interval is the smallest but is also associated with the highest level of uncertainty. Conversely, the 95% confidence interval is the largest but is associated with the lowest level of uncertainty. In none of the cases can we know for certain where the true population mean lies – or whether the population mean falls within the confidence interval at all – but we can say that the interval estimate at the 95% confidence level is associated with a lower level of uncertainty than the interval estimates at lower confidence levels.

Figure 9: Confidence intervals calculated at three different confidence levels for the mean radiation on Mars as measured by the Curiosity rover. Notice how the size of the confidence interval gets smaller as the level of uncertainty associated with the interval estimate becomes larger.

So what does this mean for our future Martian? Based on these calculations, future scientists can move ahead with their spacesuit designs, fairly (though not absolutely) certain that the average daily radiation exposure on the surface of Mars – the true population mean in this scenario – falls somewhere within the interval of 0.67 ± 0.022 mSv day⁻¹.

Like all of inferential statistics, confidence intervals are a useful tool that scientists can use to analyze and interpret their data. In particular, confidence intervals allow a researcher to make statements about how a single experiment relates to a larger underlying population or phenomenon. While Gosset’s t-distribution and Neyman’s confidence interval are common tools of statistical inference used in science, it is important to remember that there are numerous other methods for calculating interval estimates and drawing inferences from experimental subsamples. Just as the confidence interval can be a valuable technique in some situations, it can also be an unhelpful analytical tool in others. For example, a chemist trying to precisely measure the atomic mass of a new element would find no use for a 95% confidence interval that only provides a plausible range of values. In such a case, a much more stringent confidence level, or perhaps an entirely different statistical technique, would be more appropriate. Ultimately, as with all of statistical inference, it is up to the researcher to use techniques wisely and interpret data appropriately.

Throughout history, important scientific advances have been made in connection with brewing beer. The module begins at the Guinness Brewery with the development of an important mathematical tool for inferential statistics. The focus of the module is confidence intervals, used when making statements about the relationship between a subsample and an entire population. Readers are shown how to construct and report a confidence interval. Topics include Student's t-distribution, confidence level, critical value, and margin of error. Examples and a sample problem illustrate the concepts introduced.

Key Concepts

  • Confidence intervals are a common type of inferential statistics estimate used in science. Starting with a subsample dataset, a scientist can construct a confidence interval that represents a plausible range for a population parameter while also indicating the level of error or uncertainty associated with the estimation.

  • A confidence level represents the degree of uncertainty associated with a confidence interval. The higher the confidence level, the less uncertainty is associated with the confidence interval’s estimation of a population parameter. Although any value between 0% and 100% can theoretically be chosen, scientists typically calculate confidence intervals at the 90%, 95%, or 99% confidence level.

  • Standard error is commonly encountered when using inferential statistics and is needed to calculate a confidence interval. It is important not to confuse standard error with standard deviation. Standard deviation is a descriptive statistic that represents the amount of variation in a sample, whereas standard error is an inferential statistic that represents a likely distance between a population parameter and a subsample statistic.

  • Hassler, D.M., et al. (2014). Mars' surface radiation environment measured with the Mars Science Laboratory's Curiosity rover. Science, 343(6169): 1244797. Retrieved from: http://science.sciencemag.org/content/343/6169/1244797
  • Neyman, J. (1937). Outline of a theory of statistical estimation based on the classical theory of probability. Philosophical Transactions of the Royal Society, 236: 333-380.
  • Student. (1908). The probable error of a mean. Biometrika, 6(1): 1-25.

Liz Roth-Johnson, Ph.D. “Confidence Intervals” Visionlearning Vol. MAT-3 (6), 2016.
