In which medium does diffusion happen at the fastest rate, and why?
Diffusion is defined as the net movement of molecules or ions from a region of high concentration to a region of lower concentration. Diffusion continues until a state of equilibrium is reached, which means that the molecules are randomly distributed throughout the system. Diffusion is considered a form of passive transport because no energy is required in the process. Diffusion can occur in a gas, a liquid, or a solid medium. Diffusion also occurs across the selectively permeable membranes of cells.
All molecules possess kinetic energy which provides the force for movement. Molecules are in constant motion and as they move, they collide with each other. The more molecules in an environment, the higher the concentration of molecules, the higher the frequency of molecular collisions and the faster the speed of diffusion.
How might these factors influence the rate of diffusion?
- Concentration gradient
- Medium molecules diffuse in (gas, liquid, solid)
- Molecular weight of the molecule
Molecular weight is an indication of the mass and size of a molecule. The purpose of this experiment is to determine the relationship between molecular weight and the rate of diffusion through a semisolid gel. You will investigate two dyes, methylene blue and potassium permanganate.
| Dye | Molecular weight (grams/mole) | Color |
|---|---|---|
| Methylene blue | 300 | blue |
| Potassium permanganate | 150 | purple |
- Petri dish of agar semi-solid gel
- Methylene blue solution
- Potassium permanganate solution
- Small straws
- Small plastic metric ruler
Placing a drop of dye into a small well on an agar plate
- Obtain a Petri dish of agar
- Take the plastic straw and gently push it straight down into the agar. Lift it up, withdrawing a small plug of agar. Repeat to make a second well.
- Place a single drop of each dye into the agar well. (Figure 4.1).
- After 20 minutes, place a small, clear metric ruler underneath the Petri dish to measure the distance (diameter) that the dye has moved. Enter the data in Table
| Dye | Molecular weight (grams/mole) | Diameter after 20 minutes (millimeters) | Diameter after 40 minutes (millimeters) |
|---|---|---|---|
Describe the relationship between molecular weight and speed of diffusion
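As a rough prediction for this experiment, the expected effect of molecular weight can be sketched in a few lines of code. The scaling used below (diffusivity falling off as the cube root of molecular weight, so diffusion distance as its sixth root) is an assumption borrowed from Stokes-Einstein-style reasoning, not a value from this lab manual:

```python
# Rough model (an assumption, not from the lab): diffusivity D scales as
# MW**(-1/3), since molecular radius grows roughly as MW**(1/3); diffusion
# distance scales as sqrt(D), i.e. as MW**(-1/6).

def relative_distance_ratio(mw_light, mw_heavy):
    """Expected ratio of diffusion distances: lighter dye over heavier dye."""
    return (mw_heavy / mw_light) ** (1.0 / 6.0)

# Molecular weights from the table above (grams/mole)
ratio = relative_distance_ratio(150.0, 300.0)
print(f"KMnO4 expected to spread ~{ratio:.2f}x farther than methylene blue")
```

Under this assumption the lighter potassium permanganate should produce a modestly larger diameter, which is the qualitative trend the lab asks you to observe.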
Cells acquire the molecules and ions they need from their surrounding extracellular fluid. In living cells, the ability of a molecule to cross the cell membrane is influenced by its size, charge, lipid solubility, and other characteristics. Small molecules such as water, oxygen, amino acids, and ions easily cross the membrane by passive transport processes that do not require energy (diffusion and osmosis). Other molecules do not easily fit through the lipid bilayer, and the cell must expend energy to bring them across.
You will investigate two molecules, starch and iodine, for their ability to cross a selectively permeable membrane. A colorimetric test is employed to assess the movement of these molecules. Dialysis tubing is a transparent material with microscopic pores that allow only small molecules to pass. It provides a model of the cell membrane and has many uses in industry and medicine.
- Dialysis tubing
- Starch solution
- Iodine (IKI)
- Obtain a piece of dialysis tubing that has been pre-cut by the instructor. Thoroughly wet the tubing and open the ends. Tie a knot in one end.
- Add approximately 2ml (~2cm) starch solution to the dialysis bag. Tie a knot at the top of the tubing. Rinse the bag briefly with tap water to remove any traces of starch.
- Fill the beaker approximately 1⁄2 full with tap water.
- Add iodine (IKI) to the beaker of water until a deep yellow color is obtained.
- Submerge the dialysis bag in the water and incubate at room temperature until a color change is observed (~15 minutes).
- Did iodine diffuse across the selectively permeable membrane? How do you know?
- Which is the smaller molecule, iodine or starch?
- Is diffusion a passive or an active transport process (choose one)?
Physical States and Properties
If you’ve ever made cookies and left the kitchen door open, you’re probably aware that the aroma spreads throughout the house. It is strongest in the kitchen, where the cookies are baking, a little less in the dining or living room, and least in the upstairs corner bedroom. And if the door is closed in the corner bedroom, the cookie scent is even weaker.
This is a delicious example of diffusion, or the movement of matter from a region of high concentration (the cookie pan in the kitchen) to a region of low concentration (the corner bedroom). This principle of diffusion is fundamental throughout science, from gas exchange in the lungs to the spread of carbon dioxide in the atmosphere to the movement of water from one side of a cell’s plasma membrane to the other. However, the concept of diffusion is rarely as simple as molecules moving from one place to another. Temperature, the size of the molecules involved, the distance molecules need to travel, the barriers they may encounter along the way, and other factors all influence the rate at which diffusion takes place.
The universe is in constant motion: from the orbiting of planets around the sun, to the movement of particles from one area to another. And while on a grand scale it may appear that there is a rationale to this movement – for example, the planets in our solar system have regular revolutions that can be predicted – in truth there is a great deal of motion that occurs randomly.
When we learn about diffusion, we often hear about the movement of particles from an area of high concentration to an area of low concentration, as if the particles themselves are somehow motivated to move in this direction. But this movement is in fact a by-product of what scientists refer to as the “random walk” of particles. Molecules do not move in straight paths from Point A to Point B. Instead, they interact with their environment, bumping into other molecules and barriers encountered along their way, as well as interacting with the medium through which they are moving.
The observation of the spontaneous, random movement of small particles was first recorded in the first century BCE. Lucretius, a Roman poet and philosopher, described the dust seen in sunbeams coming through a window (Figure 1):
Figure 1: Dust particles “dancing” in a ray of light. image © E.mil.mil
You will see a multitude of tiny particles mingling in a multitude of ways… their dancing is an actual indication of underlying movements of matter that are hidden from our sight… It originates with the atoms which move of themselves [i.e., spontaneously]… So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible.
While Lucretius’s “dancing” particles were likely dust particles or pollen grains that are affected by air currents and other phenomena, his description is a wonderfully accurate assessment of what goes on at the molecular level. Many scientists have since explored this random molecular motion in a variety of contexts, most famously the Scottish botanist Robert Brown in the 19th century.
In 1828, while observing pollen granules suspended in water under a microscope, Brown discovered that the motion of the granules was “neither from currents in the fluid, nor from its gradual evaporation, but belonged to the particle itself.” After suspending various organic and inorganic substances in water and seeing this same inherent, random movement, he concluded that this random walk of particles – later termed Brownian motion in his honor – was a general property of matter that is suspended in a liquid medium. However, it would take nearly a century for scientists to mathematically quantify Brownian motion and demonstrate that this random movement of molecules dictates diffusion.
About the same time that Brown was making his observations, a group of scientists including the French engineer Sadi Carnot and German physicist Rudolf Clausius were establishing a whole new field of scientific study: the field of thermodynamics (see our Thermodynamics I module for more information). Clausius’s work in particular led to the development of the kinetic theory of heat – the idea that atoms and molecules are in motion and the speed of that motion is related to a number of things, including the heat of the substance. The molecules of a solid are generally considered to be locked in place (though they vibrate); however, the molecules of a liquid or a gas are free to move around, and they do: bumping into one another or the walls of their container like balls on a pool table.
As molecules in a liquid or gas move through space, they bump into one another and follow random paths – moving in a straight line until something blocks their way and then bouncing off of that thing. This random molecular movement is constantly occurring and can be measured, giving a molecule’s mean free path – or, the average distance a particle moves between impacts with other particles.
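The mean free path can be estimated from kinetic theory as λ = kT / (√2 π d² p), where d is the molecular diameter and p the pressure. A minimal sketch follows; the nitrogen diameter used below is a typical textbook value, an assumption chosen for illustration:

```python
import math

def mean_free_path(temperature_k, pressure_pa, diameter_m):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temperature_k / (
        math.sqrt(2) * math.pi * diameter_m ** 2 * pressure_pa
    )

# Nitrogen at room conditions; d ~ 0.37 nm is a common textbook estimate
lam = mean_free_path(298.0, 101325.0, 3.7e-10)
print(f"Mean free path of N2 at room conditions: ~{lam * 1e9:.0f} nm")
```

The result, a few tens of nanometers, shows how astonishingly often a gas molecule collides: it travels only a few hundred molecular diameters between impacts.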
It is this spontaneous and random motion that leads to diffusion. For example, as the scent molecules from baking cookies move into the air, they interact with air molecules – crashing into them and changing direction. Over time, these random processes will cause the scent molecules to disperse throughout the room. Diffusion is presented as a process in which a substance moves down a concentration gradient – from an area of high concentration to an area of low concentration. However, it is important to recognize that there is no directional force at play – the scent molecules are not pushed to the edge of the room because the concentration is lower there. It is the random movement of these molecules within the roomful of moving air molecules that causes them to evenly spread out throughout the entire space – bouncing off walls, moving through doors, and eventually moving through the whole house. In this way, it appears to move along a concentration gradient – from the kitchen oven to the most distant rooms of the house.
It may sound like a paradox – the movement of individual molecules is random, yet at the same time appears to occur along a gradient – but in practice, it’s actually quite logical. A simple illustration of this process can be seen using a glass of water and food coloring. When a drop of food coloring enters the water, the food coloring molecules are highly concentrated at the location where the dye molecules meet the water molecules, giving the water in that area a very dark color (Figure 2). The bottom of the glass initially has few or no food coloring molecules and so remains clear. As the food coloring molecules begin to interact with the water molecules, molecular collisions cause them to move randomly around the glass. As collisions continue, the molecules spread out, or diffuse, over space.
Figure 2: Diffusion of a purple dye in a liquid.
Eventually, the molecules spread throughout the entire glass, becoming evenly distributed and filling the space. At this point, the molecules have reached a state of equilibrium in which no net diffusion is taking place and the concentration gradient no longer exists. In this state, the molecules are still moving haphazardly and colliding with each other; we just can’t see that motion because the water and color molecules are evenly dispersed throughout the space. Once equilibrium has been reached, the probability that a molecule will move from the top to the bottom is equal to the probability a molecule will move from the bottom to the top.
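This “no net diffusion at equilibrium” picture falls straight out of a random-walk simulation. In the sketch below (walker and step counts are arbitrary choices), each molecule takes ±1 steps at random; the average position stays near the starting point because no direction is favored, while the spread of positions grows in proportion to the number of steps taken:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is repeatable

def random_walk_positions(n_walkers, n_steps):
    """Final positions of 1-D random walkers taking +/-1 steps."""
    positions = []
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        positions.append(x)
    return positions

final = random_walk_positions(2000, 1000)
# No directional force: the mean stays near 0, but the variance of the
# positions grows linearly with the number of steps (theory: ~1000 here).
print(f"mean position ~ {statistics.mean(final):.1f}")
print(f"variance ~ {statistics.variance(final):.0f}")
```

The growing variance is exactly the macroscopic “spreading out” we observe as diffusion, even though no individual walker has any preferred direction.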
We know that diffusion involves the movement of particles from one place to another; thus, the speed at which those particles move affects diffusion. Since molecular motion can be measured by the heat of an object, it follows that the hotter a substance is the faster diffusion will take place in that substance. (Click the animation below to see how temperature affects diffusion.) If you were to repeat your food coloring and water experiment comparing a glass of cold to a glass of hot water, you would see that the color disperses much more quickly in the hot water. But what other factors influence the speed, or rate, at which diffusion takes place?
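The temperature effect can be made quantitative with the Stokes-Einstein relation, D = kT / (6πηr): diffusivity rises with temperature directly and also indirectly, because warmer liquids are less viscous. The viscosity values below are approximate handbook numbers for liquid water, used here as assumptions for illustration:

```python
# Stokes-Einstein: D = kT / (6 * pi * eta * r). For the same molecule,
# D2 / D1 = (T2 / T1) * (eta1 / eta2).

def diffusivity_ratio(t1_k, eta1_pa_s, t2_k, eta2_pa_s):
    """Ratio D(T2)/D(T1) under the Stokes-Einstein relation."""
    return (t2_k / t1_k) * (eta1_pa_s / eta2_pa_s)

# Approximate viscosities of water: ~1.002 mPa*s at 20 C, ~0.547 mPa*s at 50 C
ratio = diffusivity_ratio(293.0, 1.002e-3, 323.0, 0.547e-3)
print(f"Diffusion in water is ~{ratio:.1f}x faster at 50 C than at 20 C")
```

Notice that most of the speed-up comes from the drop in viscosity rather than from the 10% rise in absolute temperature, which is why the food coloring disperses so visibly faster in hot water.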
Interactive Animation: The Effect of Temperature on Diffusion
In 1829, before the idea of atoms and molecules was widely established, the Scottish physical chemist Thomas Graham became the first to quantify diffusion behavior. Working with real substances rather than theoretical particles, Graham measured the diffusion rates of gases through plaster plugs, fine tubes, and small orifices, all meant to slow down the diffusion process so that he could quantify it. One of his experiments, detailed in Figure 3, used an apparatus with the open end of a tube sitting in a beaker of water and the other end sealed with a plaster stopper containing holes large enough for gases to enter and leave the tube. Graham filled the open end of the tube with various gases (as indicated by the red tube in Figure 3), and observed the rate at which the gases effused, or escaped through the plaster plug. If the gas effused from the tube faster than the air outside of the tube moved in, the water level in the tube would rise. On the other hand, if the outside air moved through the plaster faster than the gas in the tube escaped to the outside, the water level in the tube would go down. He used the rate of change in the water level to determine the relative rate at which the different gases diffused into air.
Figure 3: Thomas Graham’s experiment to measure the diffusion rates of gases.
Graham experimented with many combinations of different gases and published his findings in an 1829 publication of the Quarterly Journal of Science, Literature, and Art titled “A Short Account of Experimental Researches on the Diffusion of Gases Through Each Other, and Their Separation by Mechanical Means.” He stated that when gases come into contact with each other, “indefinitely minute volumes” of the gases spontaneously intermix with each other until they reach equilibrium (Graham, 1829). However, he discovered that different types of gases did not mix at the same rate – rather, the rate at which a gas diffuses is inversely proportional to the square root of its density, a relationship now known as Graham’s law. Although Graham’s original relationship used density, or mass per unit volume, the modern form of the equation uses molar mass, or the mass of one mole of a substance.
What Graham showed was that the molecular weight of a molecule directly affects the speed at which that molecule can move. Graham’s work actually helped lay the foundations of kinetic molecular theory because it recognized that at a given temperature, a heavy molecule would move more slowly than a light molecule. In other words, more kinetic energy is needed to move a large molecule at the same speed as a small molecule. You can think of it this way: A small push will get a tennis ball rolling quickly; however, it takes a much harder push to move a bowling ball at the same speed. At a given temperature, small molecules move faster, and will diffuse more quickly than large ones. View the animation below to see how atomic mass affects diffusion.
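Graham’s law is easy to apply directly. A minimal sketch comparing helium with oxygen (the molar masses are standard values):

```python
import math

def graham_rate_ratio(molar_mass_a, molar_mass_b):
    """Graham's law: rate_A / rate_B = sqrt(M_B / M_A)."""
    return math.sqrt(molar_mass_b / molar_mass_a)

# Helium (4 g/mol) vs. oxygen (32 g/mol)
ratio = graham_rate_ratio(4.0, 32.0)
print(f"Helium diffuses ~{ratio:.2f}x faster than oxygen")  # ~2.83x
```

An eightfold difference in molar mass yields only a √8 ≈ 2.83-fold difference in rate, illustrating the square-root relationship rather than a linear one.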
Interactive Animation: The Effect of Atomic Mass on Diffusion
Graham later studied the diffusion of salts into liquids and discovered that the diffusion rate in liquids is several thousand times slower than in gases. This seems relatively obvious to us today, as we know that the molecules of a gas move faster and are more spread out than molecules in a liquid. Therefore, the movement of one substance within a gas occurs more freely than in a liquid. Diffusion in liquids is proportional to temperature, as it is in gases, and inversely related to the viscosity of the specific liquid into which the material is diffusing. (View the animation below to compare diffusion in gases and liquids.) Diffusion can even take place in solids, though very slowly. Sir William Chandler Roberts-Austen, a British metallurgist, fused gold plates to the ends of cylindrical rods made of lead. When he analyzed the lead rods after a period of 31 days, he found that gold atoms had actually “flowed” into the solid rods.
While we have talked extensively about diffusion and concentration gradients, it was not until the mid-1800s when a German-born physicist and physiologist named Adolf Fick built upon Graham’s work and introduced the notion of a diffusion coefficient, or diffusivity, to characterize how fast molecules diffuse.
In his 1855 publication “On Diffusion” in Annalen der Physik, Fick described an experimental setup in which he connected cylindrical and conical tubes with solid salt crystals at the bottom to an “infinitely large” reservoir filled with freshwater (Figure 4). The solid salt crystals dissolved into the water in the tubes and diffused toward the water reservoir. A stream of freshwater swept the saltwater out of the reservoir. This stream of water kept the salt concentration at the very top of the tubes (the point where the salt solution met the water reservoir) close to zero. The dissolving salt at the bottom of the tube maintained a high salt concentration in the water at that end of the tube. Because the tubes had a different shape (conical versus cylindrical), the concentration gradient in the tubes differed, setting up a system in which diffusion could be compared in relation to a concentration gradient.
Figure 4: Fick’s experimental setup in which he connected cylindrical and conical tubes to a reservoir filled with freshwater. (Image from the 1903 publication, Collected Works, I. Stahel’sche Verlags-Anstalt, Würzburg: Germany.) image © Fick
Fick then calculated the diffusion rate of the salt by measuring the amount of salt that passed through the top of the respective tubes (just before they met the freshwater in the reservoir) within a given time period. He discovered that the movement rate of the salt solution into the water reservoir depended on the concentration difference between the solution at the bottom of the tube and the concentration of the solution leaving the tube and entering the reservoir. In other words – the higher the concentration of salt at the top of the tube, the faster it diffused into the water reservoir. You can see how concentration affects diffusion in the animation below.
Interactive Animation: The Effect of Concentration on Diffusion
After studying the phenomenon, Fick hypothesized that the relationship between the concentration gradient and the diffusion rate was similar to what Joseph Fourier, a French mathematician and physicist, had found in his study of heat conduction in 1822. Fourier had described the rate of heat transfer through a substance as proportional to the difference in temperature between two regions. Heat moves from warmer to cooler objects, and the greater the temperature difference between the two objects, the faster the heat moves. (This is why your mug of hot coffee cools off much faster outside on a cold morning than in your heated apartment.)

Using Fourier’s law of thermal conduction as a model, Fick created a mathematical framework for the movement of salt into the water, proposing that the diffusion rate of a substance is proportional to the difference in concentration between the two regions. What this means is that if the concentration of a given substance is high relative to the substance it is diffusing into (e.g., food coloring into water), it will diffuse faster than if the concentration difference is low (e.g., food coloring into food coloring).

The application of a successful principle from one branch of science to another is not uncommon, and Fick is a classic example of this process. Fick knew of Fourier’s work because he had modeled his experimental apparatus on that of Fourier, so it was natural for him to apply Fourier’s law to diffusion. While he had no way to know that the underlying mechanisms of heat conduction and diffusion were both based on atomic collisions (in fact, some researchers at the time still doubted the existence of atoms), he had an intuition. That intuition, and the existence of atoms themselves, would be mathematically confirmed some 50 years later when Albert Einstein published his seminal work, Investigations on the Theory of the Brownian Movement (Einstein, 1905).
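Fick’s proportionality can be sketched numerically. The toy simulation below (grid size, diffusivity, and time step are arbitrary choices, not Fick’s numbers) applies his rule that the amount of material moved per step is proportional to the local concentration difference; a steep initial gradient flattens until the “salt” is evenly spread:

```python
def diffuse(conc, d_coeff, dt, dx, steps):
    """Explicit finite-difference form of Fick's law (flux ~ -D * dC/dx)."""
    r = d_coeff * dt / dx ** 2  # must be <= 0.5 for numerical stability
    c = list(conc)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        # closed (no-flux) boundaries at both ends, updated one-sidedly
        new[0] = c[0] + r * (c[1] - c[0])
        new[-1] = c[-1] + r * (c[-2] - c[-1])
        c = new
    return c

# All the "salt" starts in the first cell; diffusion evens it out, and each
# cell approaches the average concentration (here, 0.1).
profile = diffuse([1.0] + [0.0] * 9, 1.0, 0.1, 1.0, 5000)
print([round(v, 3) for v in profile])
```

Note that material moves fastest early on, when the gradient is steepest, and slows as the profile flattens, which is precisely the proportionality Fick observed in his salt tubes.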
The diffusion coefficient, or diffusivity D, defined by Fick is a proportionality constant between the diffusion rate and the concentration gradient. The diffusion coefficient is defined for a specific solute-solvent pair, and the higher the value for the coefficient, the faster two substances will diffuse into one another. For example, at 25°C the diffusivity of gaseous air into gaseous water is 0.282 cm²/sec (Cussler, 1997). At the same temperature, the diffusivity of dissolved air into liquid water is 2.00 × 10⁻⁵ cm²/sec, a much lower number than that for the two gases, representing the much slower diffusion rate in liquids compared to gases. And the diffusivity of dissolved helium into liquid water at 25°C is 6.28 × 10⁻⁵ cm²/sec – higher than that of dissolved air, representing the smaller size of helium atoms compared to the nitrogen and oxygen molecules in air.
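These diffusivities set the time scale for how far a substance spreads. A typical diffusion distance grows as x ~ √(2Dt), so the time to cover a distance x is roughly t ~ x²/(2D). A quick sketch using the values quoted above:

```python
# Rough time for diffusion alone (no stirring or advection) to carry
# material a given distance: t ~ x**2 / (2 * D).

def diffusion_time(distance_cm, diffusivity_cm2_per_s):
    """Approximate diffusion time over a distance, in seconds."""
    return distance_cm ** 2 / (2.0 * diffusivity_cm2_per_s)

# Dissolved air in liquid water (D ~ 2.00e-5 cm^2/sec) crossing 1 cm:
t_liquid = diffusion_time(1.0, 2.00e-5)
# Gaseous air into water vapor (D ~ 0.282 cm^2/sec) crossing 1 cm:
t_gas = diffusion_time(1.0, 0.282)
print(f"~{t_liquid / 3600:.0f} hours in liquid vs ~{t_gas:.1f} s in gas")
```

The hours-versus-seconds contrast over a single centimeter makes vivid why diffusion alone suffices for gas transport in air but biological systems need circulation to move dissolved substances over larger distances.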
Yet another factor that influences the rate at which diffusion occurs is the distance a molecule travels before bumping into something (referred to as a molecule’s mean free path). Imagine taking a container filled with a gas and putting it under pressure so that the molecules in the gas are squeezed together. This would slow the rate of diffusion through the gas because the molecules travel a shorter distance before colliding with something else and changing direction. (The animation below shows the effect of pressure on diffusion.)
Interactive Animation: The Effect of Pressure on Diffusion
This is an important factor affecting the difference in diffusion rates in gases versus liquids versus solids; because gas particles are the most spread out of the three, molecules travel the furthest between collisions and diffusion occurs most rapidly in this state (Figure 5).
Figure 5: The three states of matter at the atomic level: gas, liquid, and solid.
To fully understand why we can smell the cookies baking in the kitchen from the bedroom we also have to consider another process at work here – advection. Advection involves the transfer of a material or heat due to the movement of a fluid. So, because people walk through the rooms of your house and because heat rises from your radiators, the air is constantly moving, and that movement carries and mixes the scent molecules in your house. In many situations (such as your house), the effects of advection exceed those of diffusion, but these processes work in tandem to bring you the cookie smell.
From the traveling smells of cookies to the dissolving of salt into water, diffusion is a process happening around (and within!) us every second of every day. It is a process that is critical to moving oxygen across the membranes of our lungs, moving nutrients through soil to be taken up by plants, dispersing pollutants that are released into the atmosphere, and a whole host of other events that are necessary for life to exist.
The process of diffusion is critical to life, as it is necessary when our lungs exchange gas during breathing and when our cells take in nutrients. This module explains diffusion and describes factors that influence the process. The module looks at historical developments in our understanding of diffusion, from observations of “dancing” particles in the first century BCE to the discovery of Brownian motion to more recent experiments. Topics include concentration gradients, the diffusion coefficient, and advection.
Diffusion is the process by which molecules move through a substance, seemingly down a concentration gradient, because of random molecular motion and collisions between particles.
Many factors influence the rate at which diffusion takes place, including the medium through which a substance is diffusing, the size of the diffusing molecules, the temperature of the materials, and the distance molecules travel between collisions.
The diffusion coefficient, or diffusivity, provides a relative measure at specific conditions of the speed at which two substances will diffuse into one another.
- HS-C5.4, HS-PS3.A3, HS-PS3.B5
- Brown, R. (1828). A brief account of microscopical observations made on the particles contained in the pollen of plants. Philosophical Magazine, 4, 161-173.
- Cussler, E.L. (1997). Diffusion: Mass transfer in fluid systems (2nd ed). New York: Cambridge University Press.
- Einstein, A. (1956). Investigations on the theory of Brownian movement. New York: Dover.
- Fick, A. (1855). Ueber Diffusion [On diffusion]. Annalen der Physik und Chemie von J. C. Poggendorff, 94, 59-86.
- Graham, T. (1829). A short account of experimental researches on the diffusion of gases through each other, and their separation by mechanical means. Quarterly Journal of Science, Literature, and Art, 2, 74–83.
- Lucretius. (1880). On the nature of things. (trans. J.S. Watson). London: George Bell & Sons.
Heather MacNeill Falconer, M.A./M.S., Gina Battaglia, Ph.D., Anthony Carpi, Ph.D. “Diffusion I” Visionlearning Vol. CHE-3 (4), 2015.
Physical States and Properties
This is an updated version of the Water module.
Before we start, get yourself a glass of water. By the time you’ve reached the end, you’ll have a much greater appreciation for this miracle liquid.
Got your glass? Now take a sip and think about all the roles water plays in your life. For one thing, your body can’t function more than a few days without it. You use water to wash yourself, your clothes, and your car. Water puts out fires, cooks our food, makes our soap get sudsy, and hundreds of other things. Water is absolutely essential to our lives on Earth.
Water is so central to our existence that you might be surprised to learn that it’s a rare and unusual substance in the universe. Water is at once so vital and so scarce that exobiologists (scientists looking for life beyond Earth) set their sights on planets where water might exist. Life, it seems, can tough it out in acid, lye, extreme salt, extreme heat, and other conditions that would kill us humans. But it can’t exist without water.
Despite its scarcity across the universe, water is so abundant on Earth that we aren’t always aware of how special it is. For starters, water is the only substance that exists naturally on our planet as a solid (ice and snow), liquid (rivers, lakes, and oceans), and a gas (water in the atmosphere as humidity). As you might recall (or can read about in our module on States of Matter), water molecules are in a different energy state in each phase. The amount of energy required to go from solid to liquid and liquid to gas is related to how water molecules interact with each other. Those interactions are, in turn, related to how the atoms within a water molecule interact with each other.
Our Chemical Bonding: The Nature of the Chemical Bond module discussed how a dipole forms across a water molecule; in the bond between oxygen and hydrogen, the electrons are shared unequally, drawn a bit more to the oxygen. As a result, a partial negative charge (δ−) forms at the oxygen end of the molecule, and a partial positive charge (δ+) forms at each of the hydrogen atom ends (Figure 1).
Figure 1: The dipoles arise in a water molecule because of unequal sharing of electrons.
Since the hydrogen and oxygen atoms in the molecule carry opposite (though partial) charges, nearby water molecules are attracted to each other like tiny little magnets. The electrostatic attraction between the δ+ hydrogen (δ stands for a partial charge, a value less than the charge of an electron) and the δ− oxygen in adjacent molecules is called hydrogen bonding (Figure 2).
Figure 2: Hydrogen bonds between water molecules. The slight negative charge on the oxygen atom is attracted to the slight positive charge on a hydrogen atom.
Hydrogen bonds make water molecules “stick” together. These bonds are relatively weak compared to covalent or ionic bonds; in fact, they are often referred to as an attractive force as opposed to a true bond. Yet they have a big effect on how water behaves. Many other compounds form hydrogen bonds, but the ones between water molecules are particularly strong. Figure 2 shows why: if you look at the central molecule in this figure, you see that the oxygen end of the molecule forms hydrogen bonds with two other water molecules; in addition, each hydrogen on the central molecule is attracted to a separate water molecule. As the illustration shows, each water molecule forms attractions with four other water molecules, a network of connections that makes the hydrogen bonding in water particularly strong and lends the substance its many unique properties.
Now it’s time to make use of that glass of water. If you have some ice cubes, drop one in your glass. You’ll notice that it floats. Its ability to bob to the top of the water line means that the ice (water in its solid state) is less dense than liquid water. (To review density and buoyancy, see our Density module.) This isn’t a common state of affairs; if you put a chunk of solid wax into a vat of molten wax, it will sink toward the bottom (and possibly melt before it gets there).
To understand what causes ice to float but solid wax to sink, let’s think first about what happens when a liquid turns to a solid (again, the States of Matter module can be a handy review here). In a liquid, the molecules have enough kinetic energy to keep moving around. As molecules come near to each other, they are drawn together by intermolecular forces. At the same time, molecules have enough kinetic energy to break free of those forces and be drawn to other nearby molecules. Thus the liquid flows because intermolecular attractions can be broken and reformed.
A liquid freezes when the kinetic energy is reduced (i.e. the temperature is reduced) enough that the attractive forces between molecules can no longer be broken, and the molecules become locked in a static lattice. For nearly all compounds, the lower energy and lack of movement between molecules means the molecules in a solid are packed together more tightly than the liquid state. This is the case with wax and so solid wax is denser than the liquid and sinks.
In the case of water, though, the shape of the molecule and the strength of the hydrogen bonds affect the arrangement of the molecules. In liquid water, hydrogen bonding pulls molecules closely together. As water freezes, the dipole ends with like charges repel each other, forcing the molecules into a fixed lattice in which they are farther from each other than they are in liquid water (Figure 3). More space between molecules makes the ice less dense than liquid water, and thus it floats.
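This density difference also explains how much of a floating ice cube (or iceberg) sits below the surface: a floating object displaces its own weight of water, so the submerged fraction equals the ratio of the two densities. The densities below are standard handbook values, used as assumptions for illustration:

```python
# Archimedes' principle: a floating solid displaces its own weight of
# liquid, so submerged fraction = solid density / liquid density.

def submerged_fraction(solid_density, liquid_density):
    """Fraction of a floating solid sitting below the liquid surface."""
    return solid_density / liquid_density

# Ice ~0.917 g/cm^3 vs. liquid water ~1.000 g/cm^3 (handbook values)
frac = submerged_fraction(0.917, 1.000)
print(f"About {frac:.0%} of floating ice sits below the waterline")
```

So only roughly a tenth of the ice pokes above the surface, a direct consequence of the open hydrogen-bonded lattice described above.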
Figure 3: When water freezes, the similarly-charged ends of the dipoles repel each other, pushing molecules apart. This means there is more space between molecules in the solid than in the liquid, making the solid (aka, ice) less dense.
Water is sometimes referred to as the “universal solvent,” because it dissolves more compounds than any other liquid known. The polarity of the water molecule allows it to readily dissolve other polar molecules, as well as ions. (See our Solutions, solubility, and colligative properties module for a deeper discussion of dissolution.)
This ability to dissolve substances is one of the properties that makes water vital for life. Most biological molecules, such as DNA, proteins, and vitamins, are polar, and important ions such as sodium and potassium are also charged. In order for any of these compounds to carry out functions in the body, they have to be able to circulate in the blood and the fluid within and between cells, all of which are mostly water. Because of its polarity, water is able to dissolve these and other substances, allowing their free movement around the body. A few biomolecules, such as fats and cholesterol, aren’t polar and don’t dissolve in water; however, the body has developed unique ways to circulate and store these substances.
Water is also able to dissolve gases such as oxygen, allowing fish, plants, and other aquatic life to access this dissolved oxygen (Figure 4). O2 isn’t a polar molecule; it dissolves because the polar charges in the water molecule induce a dipole in the oxygen, making it soluble and so available to aquatic life. (Learn more about induced-dipole interactions in our Properties of Liquids module.)
Figure 4: When water and oxygen molecules meet (left), the negative dipole of water repels electrons around the oxygen molecule, creating a temporary dipole in the oxygen molecule (right).
Let’s return to your water glass. Fill the glass just to the rim and stop. Then, slowly, add a little bit more. You’ll see that you can actually fill the glass a bit past its rim, and the edges of the water will round out against the glass, holding the water in.
Once again, hydrogen bonding is behind this act, resulting in cohesion. Cohesion occurs when molecules of the same kind are attracted to each other. In the case of water, the molecules form strong hydrogen bonds, which hold the substance together. As a result, water is highly cohesive; in fact, it is the most cohesive of all non-metallic liquids.
Cohesion occurs throughout your glass of water, but it’s especially strong at the surface. Molecules there have fewer neighbors (because they have none above them), and so create stronger bonds with the molecules that are near them. The result is called surface tension, or the ability of a substance to resist disruption to its surface. Dip your finger into your water glass and then pull it out. The drop that forms at the end of your fingertip is held together by surface tension.
Surface tension was the misunderstood central player in a raucous debate between Galileo Galilei and his chief rival, Ludovico delle Colombe, in 1611. Delle Colombe, a philosopher, was at odds with some of Galileo’s ideas, including his explanation that ice floats on water because it is less dense. So the philosopher challenged Galileo to a debate, which delle Colombe believed would prove his own intellectual superiority.
Delle Colombe championed the (incorrect) idea that ice floats not because of density, but because of its shape, which he saw as broad and flat, as is ice on a lake. To prove the “truth” of his theory, he used ebony wood, which is slightly denser than water, in a demonstration before an audience of curious spectators. He dropped a sphere of the wood into water, and it sank. He then placed a thin wafer of the wood flat on the water’s surface, and it floated. Delle Colombe pronounced himself the winner.
Galileo left frustrated. His observations of the world gave him evidence that his explanation, not delle Colombe’s, was right, but he couldn’t explain the outcome of delle Colombe’s experiment.
Had he known about molecules and dipoles and hydrogen bonds at the time, Galileo certainly would have offered this explanation: When delle Colombe floated the thin ebony disc, he was taking advantage of the cohesive nature of water and the surface tension that arises from it (Figure 5). As the ebony wafer appeared to float on the water, the force exerted by its mass was distributed throughout the surface of the water beneath it. In other words, a single pinpoint-sized area of surface water only had to support the pinpoint-sized piece of ebony just above it. The hydrogen bonds between the water molecules were strong enough to support the weight of the disc. When delle Colombe placed the sphere in the water, however, the pinpoint-sized area that first touched the water bore the weight of the entire sphere, which was more than the water’s surface tension could support. Had Galileo known this at the time, he could have disproved delle Colombe easily – had he simply pushed the wafer through the surface to break the surface tension, the wafer would have sunk.
Figure 5: Water molecules at the surface form stronger hydrogen bonds between them than do molecules in the rest of the water. These stronger bonds are responsible for surface tension. image © USGS
This same surface tension is what allows leaves to stay at the surface of a lake and dewdrops to adhere to a spider’s web. Even some animals take advantage of this phenomenon – the Basilisk lizard (Figure 6), water striders, and a few other small animals and bugs appear to “walk” on water by taking advantage of the surface tension of water.
Figure 6: A Basilisk lizard (Basiliscus basiliscus) runs on the water surface. Movie S1 from Minetti A, Ivanenko Y, Cappellini G, Dominici N, Lacquaniti F. “Humans Running in Place on Water at Simulated Reduced Gravity”. PLOS ONE. DOI:10.1371/journal.pone.0037300
For your next observation, take another sip of water, and notice the side of the glass. Chances are you’ll see a few drops stuck to it. Gravity is pulling down on these drops, so something else must be keeping them stuck there. That something else is adhesion, the attraction of water to other kinds of molecules; in this case, the molecules that make up the glass. Because of the polarity of the molecule, water exhibits stronger adhesion to surfaces that have some net electrical charge, and glass is one such surface. But place a drop of water on a non-polar surface, such as a piece of wax paper, and you will see it take a different shape than it does on a surface to which it adheres. On the wax paper, the water takes the shape of a true droplet because there is little adhesion and the cohesive forces pull the drop into a sphere. But on glass you will see the droplets flatten and deform a bit as the adhesive forces draw them toward the surface of the glass.
Both cohesion and adhesion (Figure 7) occur with many compounds besides water. Pressure sensitive tapes, for example, stick to surfaces because they are coated with a high viscosity fluid that adheres to the surface to which they are pressed. Generally, you can overcome this adhesive force by pulling, for example – you can easily lift a Post-it® Note from a page. But sometimes the adhesive forces are stronger than the forces holding the surface together – pull tape off of a piece of paper and you remove pieces of the paper with the tape.
Let’s return to our glass of water, and look inside to where the water surface meets the glass. The very edge of the water surface curves upward slightly on the glass. That’s also adhesion – the water is drawn up the surface by adhesion with the glass. If you have a clear plastic straw, you can put one end of it into the water and see that the liquid climbs up the straw a bit, above the surface of the remaining glass of water. It’s actually moving upward against gravity!
What’s happening in your straw is a phenomenon called capillary action (Figure 7). Capillary action occurs in small tubes, where the surface area of the water is small, and the force of adhesion—water’s attraction to the polar glass or other material—overcomes the force of cohesion between those surface molecules.
Figure 7: The attraction of water molecules to the sides of a narrow vessel (adhesion, red arrows) is stronger than the cohesion (orange arrows) drawing water molecules together. The result is capillary action, in which the force of adhesion pulls the fluid upwards (purple arrows).
Another way to see the effects of adhesion and cohesion is to compare the behavior of polar and nonpolar liquids. When you put water in a test tube, adhesion makes the water along the edges move slightly upward and creates a concave meniscus. Liquid mercury, on the other hand, is not polar and therefore not attracted to glass. In a test tube, cohesion at the surface of the mercury is much stronger than adhesion to the glass. The surface tension in the mercury forms a convex meniscus, much the same as the way water forms a slight bulge over the top of your very full glass (Figure 8).
Figure 8: Water and mercury behave differently in a test tube made of polar glass. Water adheres to the glass, bringing the sides upwards and forming a concave surface. Nonpolar mercury is not attracted to the glass. Cohesion between the mercury atoms creates surface tension that forms a convex surface. image © USGS
Adhesion and capillary action are among the forces at play that help plants take up water (and dissolved nutrients) in their roots. Capillary action also keeps your eyes from drying out, as saline water flows from tiny ducts in the outer corners of your eyes. With each blink, you spread the water away from the duct, and capillary action brings more fluid to the surface.
If you want to see capillary action at work, put a few drops of red food coloring in your glass of water, and then drop a stalk or two of leafy celery into it. After a day or two, your green celery will be streaked with red.
Water is a truly unusual and important substance. The unique chemical properties of water that give rise to surface tension, capillary action, and the low density of ice play vital roles in life as we know it. Floating ice protects aquatic organisms and keeps them from being frozen in the winter. Capillary action keeps plants alive. Surface tension allows lily pads to stay on the surface of a lake. In fact, water’s chemistry is so complex and important that scientists today are still striving to understand all the feats this simple substance can perform.
Water has a number of unique properties that make it important in both the chemical and biological worlds.
The polarity of water molecules allows liquid water to act as a “universal solvent,” able to dissolve many ionic and polar covalent compounds.
The polarity of water molecules also results in strong hydrogen bonds that give rise to phenomena such as surface tension, adhesion, and cohesion.
Robin Marks, M.A., Anthony Carpi, Ph.D. “Water” Visionlearning Vol. CHE-4 (6), 2018.
It’s nearly the start of the school year, and you’ve gathered 10 friends for an end-of-semester bonfire. What would a bonfire be without s’mores? You pack supplies, making sure you have enough to make one s’more for everyone.
You can think of making a s’more as a chemical equation (see our Chemical Equations module for more on those):
2 Graham crackers + 1 piece of chocolate + 4 mini-marshmallows → 1 s’more
Just as with a chemical equation, the coefficients in front of the “reactants” and “products” show the proportions in which they react to produce the desired product—one s’more.
So, to make 10 s’mores, you would need:
20 Graham crackers + 10 pieces of chocolate + 40 mini-marshmallows → 10 s’mores
Congratulations, you’ve just made it through your first exercise in what chemists call stoichiometry. This mouthful of a term was coined in the 1790s by the chemist Jeremias Benjamin Richter, who became fascinated by the proportional mathematics of combining chemicals, convinced that it held clues to the nature of matter (which it does indeed; Dalton drew on this math to devise his early atomic theory, described in depth in our module Early Ideas about Matter: From Democritus to Dalton). Richter combined the Greek words stoicheion, which means “element,” and metron, which means “measure.” In other words, stoichiometry is a way of measuring the amounts of reactants combining in a chemical reaction; in our case, the amounts of reactants (20 graham crackers, 10 pieces of chocolate, and 40 mini-marshmallows) that in turn predict the amount of product (10 s’mores), or vice versa.
Stoichiometry may seem like a complicated word, but it’s a fairly straightforward concept when you apply it to chemical equations: the proportions expressed in a chemical equation (the coefficients) can be used to predict how much product will be produced from a given measure of reactants.
For example, we used stoichiometry to determine how many s’more “reactants” we would need to make 10 s’mores. We can also use stoichiometry to predict how much product we’ll get with the amount of each reactant we have. If we have lots and lots of chocolate and marshmallows but only 12 graham crackers, how many s’mores can we make?
Again, our equation is: 2 Graham crackers + 1 piece of chocolate + 4 mini-marshmallows → 1 s’more
If we have 12 graham crackers, that’s enough to make 6 s’mores. It doesn’t matter how much extra chocolate we have, because without the graham crackers, it isn’t a s’more.
So the mole ratio of graham crackers to s’mores produced is:
(2 graham crackers) / (1 s’more)
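The proportional scaling described above can be sketched in a few lines of Python. This is an illustrative example only; the dictionary and function names are invented, and the coefficients come straight from the s’mores "equation" in the text:

```python
# Coefficients from the article's recipe equation:
# 2 graham crackers + 1 piece of chocolate + 4 mini-marshmallows -> 1 s'more
RECIPE = {"graham crackers": 2, "chocolate pieces": 1, "mini-marshmallows": 4}

def ingredients_for(smores):
    """Scale each coefficient by the desired number of s'mores."""
    return {item: count * smores for item, count in RECIPE.items()}

print(ingredients_for(10))
# {'graham crackers': 20, 'chocolate pieces': 10, 'mini-marshmallows': 40}
```

This mirrors how coefficients in a balanced chemical equation scale: multiplying every coefficient by the same factor leaves the proportions unchanged.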
Using the same concept of mole ratios as explained above, stoichiometry is used to figure out how much reactant is needed to make a desired quantity of product in a laboratory or manufacturing facility. An important industrial example is the production of nitrogen-based fertilizer, which provides important nutrients to the soil and allows modern farmers to grow more food per acre.
For centuries, farmers have understood the importance of adding nutrients to the soil in which they grow crops, but prior to the 1900s they were limited to using animal manure or expensive, naturally occurring mineral deposits as fertilizer. In the 1840s, the German chemist Justus von Liebig identified nitrogen as fertilizer’s key ingredient. However, despite the abundance of nitrogen in the atmosphere, there was no easy way to convert nitrogen to a form that could be taken up by plants.
This all changed in the early 1900s when the German chemist Fritz Haber invented a chemical process for converting nitrogen to ammonia (NH3), the compound that often gives household cleaners their characteristic smell, and which plants can use as a source of nitrogen. His initial method was only economical on a small scale, so Haber worked with a German colleague, Carl Bosch, to adapt this process to work at an industrial scale. The Haber-Bosch process is sometimes referred to as one of the most significant inventions of the 20th century, and it led to Haber winning the Nobel Prize in Chemistry in 1918. In its equation form, the Haber-Bosch process is relatively simple:
N2 + 3H2 → 2NH3
The ability to perform this simple reaction on a large scale had important historical consequences. Cheap ammonia provided an avenue to widely available inexpensive fertilizers, which created a boom in agriculture (and an associated increase in population) in the 20th century. And it indirectly prolonged World War I by providing Germany with an inexpensive source of the nitrogen necessary to make gunpowder. Some scientists have more recently questioned whether the Haber-Bosch process is a sustainable practice, given the environmental impact of agriculture and a growing population, as well as the fact that considerable energy is required to generate the hydrogen gas.
Spraying rice fields with fertilizer. The manufacturing of nitrogen-based fertilizers relies on stoichiometry to calculate how much starting material (N2 and H2) is needed to produce the desired amount of ammonia (NH3) to be used in the fertilizer. image © Jan Amiss
Let’s apply our stoichiometry discussion here and imagine that an agricultural company needs to manufacture 1,500 kilograms of NH3 to meet the demand for fertilizer. How much N2 and H2 would they need to start with?
Again, the equation is: N2 + 3H2 → 2NH3
Looking at the equation, we see that the mole ratio of N2 required to produce NH3 is:
(1 mol N2) / (2 mol NH3)
Now we’ll use this mole ratio to determine how much reactant we need to start with to make 1,500 kilograms of ammonia.
First, a reminder: whenever we are calculating amounts of substance in a reaction, we have to convert the mass of each substance into moles. Why? Because the substances involved don’t have equal weights. Think of it in terms of the s’more: 1 piece of chocolate weighs a lot more than 1 mini-marshmallow. If we used mass in the equation instead of number of pieces, we might say that one s’more requires 1 gram of chocolate and 4 grams of mini-marshmallows. But in reality, that would amount to one piece of chocolate and about 50 mini-marshmallows! (For more on converting from grams to moles, see our module about the mole and atomic mass.)
So, we know that we want to make 1,500 kg of NH3. Let’s start by converting kilograms to grams:
1,500 kg of NH3 × 1,000 g/kg = 1,500,000 g of NH3
Then, we need to calculate how many moles that is. To do that, we divide the number of grams by the molar mass of NH3 (17 g per mole), setting the equation up so that the grams cancel and the answer is in moles. We see that:
1,500,000 g NH3 × (1 mol NH3) / (17 g NH3) = 88,235 mol NH3
Next, we can use the mole ratio to figure out how many moles of N2 will be needed. Since we need 1 mole of N2 to produce 2 moles of NH3, we use that mole ratio to determine how many moles of N2 will be needed to produce 88,235 moles of NH3:
88,235 mol NH3 × (1 mol N2) / (2 mol NH3) = 44,117 mol N2
Now, use the molecular mass of N2 to figure out how many grams of N2 are required, then convert the grams of N2 to kg of N2, since that’s the units we want for our answer:
44,117 mol N2 × (28 g N2) / (1 mol N2) = 1,235,276 g N2, or about 1,235 kg
Now that we know how many kilograms of N2 we would need, we can use the mole ratio of the reactants (N2 and H2) to figure out how many moles of H2 are required.
Remember, the equation states:
N2 + 3H2 → 2NH3
The mole ratio for H2 to N2 is 3 to 1. So, for every mole of nitrogen, we’ll need three times as many moles of hydrogen:
(3 mol H2) / (1 mol N2 )
Remember, we need to calculate how many moles of hydrogen are needed, and then convert those moles of hydrogen into grams of hydrogen. Using the moles of nitrogen we calculated above, we can get kg of H2:
44,117 mol N2 × (3 mol H2) / (1 mol N2) × (2 g H2) / (1 mol H2) = 264,702 g H2, or about 265 kg
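The whole chain of conversions above can be checked with a short Python sketch. The variable names are invented for illustration; the rounded molar masses (17 g/mol for NH3, 28 for N2, 2 for H2) are the same ones used in the text:

```python
# Target amount of product, from the worked example in the text
target_nh3_kg = 1500
grams_nh3 = target_nh3_kg * 1000          # kg -> g

mol_nh3 = grams_nh3 / 17                  # g -> mol, using NH3 = 17 g/mol
mol_n2 = mol_nh3 * 1 / 2                  # mole ratio: 1 N2 per 2 NH3
mol_h2 = mol_n2 * 3 / 1                   # mole ratio: 3 H2 per 1 N2

kg_n2 = mol_n2 * 28 / 1000                # mol -> g -> kg, using N2 = 28 g/mol
kg_h2 = mol_h2 * 2 / 1000                 # mol -> g -> kg, using H2 = 2 g/mol
print(round(kg_n2), round(kg_h2))         # roughly 1235 kg N2 and 265 kg H2
```

Note that every step works in moles first and converts back to mass only at the end, for the reason given above: mole ratios, not mass ratios, are what the balanced equation expresses.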
This is important information for the fertilizer manufacturer. While nitrogen is readily available from the air, hydrogen gas is not. So the manufacturer would likely have to purchase the hydrogen gas, which is expensive to generate, potentially explosive, and difficult to transport and store. Therefore, the manufacturer needs to know precisely how much hydrogen gas is required.
In the case above, the manufacturer will have an unlimited amount of nitrogen gas, but only a fixed amount of hydrogen gas. Therefore, the amount of hydrogen gas will limit the amount of ammonia that can be made (just like the number of graham crackers can limit the number of s’mores that can be made).
We would say that hydrogen is the limiting reactant, meaning that this is the reactant that will be used up first. As a result, the amount of it will determine how much product is produced. Determining how much reactant is required to produce a specific amount of product is one of the most important applications of stoichiometry.
We’ll illustrate this first with the s’mores. Let’s say you have the following amounts of s’more “reactants”:
120 graham crackers
70 pieces of chocolate
Here, again, is the s’mores equation: 2 Graham crackers + 1 piece of chocolate + 4 mini-marshmallows → 1 s’more
How many s’mores can you make from your reactants? That will depend on the limiting reactant, the one which will run out first. To determine which reactant is limiting, you will first need to calculate how many s’mores you can make with each of the reactants. You can do this using the mole ratio: 120 graham crackers × (1 s’more) / (2 graham crackers) = 60 s’mores, while 70 pieces of chocolate × (1 s’more) / (1 piece of chocolate) = 70 s’mores. The graham crackers run out first, so they are the limiting reactant, and you can make only 60 s’mores.
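This comparison can be sketched in Python. The names here are invented for illustration, and the marshmallow supply is left out because the text lists counts only for crackers and chocolate:

```python
# Per-s'more requirements, from the article's s'mores equation
recipe = {"graham crackers": 2, "chocolate pieces": 1}
on_hand = {"graham crackers": 120, "chocolate pieces": 70}

# How many s'mores each ingredient alone would allow (integer division,
# since you can't make a fraction of a s'more)
possible = {item: on_hand[item] // recipe[item] for item in recipe}

# The limiting "reactant" is the one that allows the fewest s'mores
limiting = min(possible, key=possible.get)
print(limiting, possible[limiting])  # graham crackers 60
```

The same pattern generalizes to any reaction: divide each amount on hand by its coefficient, and the smallest quotient identifies the limiting reactant.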
The limiting reactant is an important concept in any manufacturing process. A manufacturer knows they want to make a certain amount of a specific product and will purchase the reactants accordingly. In many cases, it is more economical to make the most expensive reactant the limiting one, reducing the cost of excess and waste.
Silver nitrate is a good example. This compound, AgNO3, has been used since ancient times as a disinfectant and wound-healing agent. Today it is used in bandages and other medical applications, as well as water purification. It can be easily made by reacting pure silver with nitric acid, according to the equation:
3Ag + 4HNO3 → 3AgNO3 + 2H2O + NO
Silver is a much more expensive reactant than nitric acid, so someone using it to produce silver nitrate will probably want to make silver the limiting reactant.
By starting with a set amount of each reactant, you can determine not only the limiting reactant but also the mass of product that will be produced and the amount of reactant that remains in excess.
Let’s say we start with 150g of silver and 150 g of nitric acid. How much AgNO3 can we make, and which reactant is the limiting one?
To find the answer takes a few steps:
1) Convert each reactant to moles.
2) Use the mole ratio to determine how many moles of one reactant would be required to use up the other.
3) Calculate the amount of product based on using up all of the limiting reactant.
Step 1: Converting to moles
150 g Ag × (1 mol Ag) / (108 g Ag) = 1.39 mol Ag
150 g HNO3 × (1 mol HNO3) / (63 g HNO3) = 2.38 mol HNO3
Step 2: Using the mole ratio to equate moles of Ag to moles of HNO3
(4 mol HNO3) / (3 mol Ag)
Since silver is our expensive reactant, we want to use it all up. We can calculate how many moles of HNO3 are required to react with the whole 1.39 mol of Ag, setting up the equation so that moles of Ag cancel:
1.39 mol Ag × (4 mol HNO3) / (3 mol Ag) = 1.85 mol HNO3 required
To use up all the Ag, we need 1.85 moles of HNO3. Look at our calculations above. How much HNO3 do we have? We have 2.38 moles, more than we need. In other words, if we put all of both reactants together, the silver will be used up first, and there will be HNO3 left over. That makes silver the limiting reactant.
This example shows the importance of converting to moles first. We started with the same mass of each reactant, 150 g. But mass doesn’t tell us how many particles there are. That is what the unit of moles tells us.
Knowing that silver is the limiting reactant, we can go further and determine how many moles of AgNO3 are produced from the 1.39 moles of Ag we are starting with. This time we use the mole ratio between Ag and AgNO3.
In the reaction:
3Ag + 4HNO3 → 3AgNO3 + 2H2O + NO
There are 3 moles of AgNO3 produced for every 3 moles of Ag used. So:
1.39 mol Ag × (3 mol AgNO3) / (3 mol Ag) = 1.39 mol AgNO3 produced
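The three steps can be sketched in Python. Variable names are invented for illustration; the molar masses (Ag = 108 g/mol, HNO3 = 63 g/mol) are the rounded values used in the text:

```python
# Step 1: convert each 150 g sample to moles
mol_ag = 150 / 108       # about 1.39 mol Ag
mol_hno3 = 150 / 63      # about 2.38 mol HNO3

# Step 2: mole ratio 4 HNO3 : 3 Ag tells us how much acid the silver needs
hno3_needed = mol_ag * 4 / 3
limiting = "Ag" if hno3_needed <= mol_hno3 else "HNO3"

# Step 3: 3 mol AgNO3 per 3 mol Ag, so moles of product equal moles of Ag
mol_agno3 = mol_ag * 3 / 3
print(limiting, round(mol_agno3, 2))  # Ag 1.39
```

Because we have more HNO3 on hand (2.38 mol) than the silver can consume (1.85 mol), silver is confirmed as the limiting reactant, just as the worked example above concludes.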
Calculations such as these are vital to our ability to manufacture and use chemicals efficiently, as well as to our ability to understand the impact of the reactions that take place in our everyday world. For example, an engineer for a paint manufacturer must consider the mole ratios of different chemicals in the paint, which will determine the cost of producing that paint. On a grander scale, stoichiometry plays a role in understanding climate change: if we know the quantities of different types of fossil fuels burned in a year, we can determine how much CO2 has been added to the atmosphere. From planning for s’mores to streamlining manufacturing and generating environmental data, we can use stoichiometry to predict and plan the outcome of many chemical processes.
Stoichiometry is the mathematics of chemistry. Starting with a balanced chemical equation, we make use of the proportional nature of chemical reactions to calculate the amount of reactant needed at the start or predict the amount of product that will be produced. While it may not seem all that “chemical,” stoichiometry is a concept that underlies our ability to understand the impact and implications of many chemical processes. A bandage manufacturer may use mole ratios to determine how much silver is required (and therefore the cost) to treat a batch of bandages with silver nitrate. A fertilizer company might apply the concept of limiting reactant to figure out how much product they can produce with a given amount of hydrogen gas. And so on. Stoichiometry, mole ratios, and limiting reactants are indispensable concepts for fully understanding any chemical process.
Stoichiometry uses the proportional nature of chemical equations to determine the amount of reactant needed to produce a given amount of product or predict the amount that will be produced from a given amount of reactant.
The mole ratio shows the proportion of one reactant or product in a reaction to another, and is derived from the balanced chemical equation. While we may need to adjust the amount of reactants to yield more product, the ratio of reactants to products is always the same as in the balanced reaction.
The limiting reactant is the chemical used up first in a reaction. It can be determined by comparing the number of moles of each reactant on hand and the mole ratio between reactants and products in the balanced reaction.
Robin Marks, M.A., Anthony Carpi, Ph.D. “Stoichiometry” Visionlearning Vol. CHE-4 (8), 2019.
Atomic Theory and Structure
This is an updated version of our Atomic Theory I module.
By the late 1800s, John Dalton’s view of atoms as the smallest particles that made up all matter had held sway for about 100 years, but that idea was about to be challenged. Several scientists working on atomic models found that atoms were not the smallest possible particles that made up matter, and that different parts of the atom had very distinct characteristics.
The English scientist Michael Faraday can reasonably be considered one of the greatest minds ever in the fields of electrochemistry and electromagnetism. Somewhat paradoxically, all of Faraday’s pioneering work was carried out prior to the discovery of the fundamental particle that these electrical phenomena depend upon. However, one of Faraday’s earliest experimental observations was a crucial precursor to the discovery of the first subatomic particle, the electron.
As early as the mid-17th century, scientists had been experimenting with glass tubes filled with what was known then as rarefied air. Rarefied air referred to a system in which most of the gaseous atoms had been removed, but where the vacuum was not complete. In 1838, Faraday noted that when passing a current through such a tube, an arc of electricity was observed. The arc started at the negative plate (known as the cathode) and traveled through the tube to the oppositely charged anode (Faraday, 1838).
In his experiments, Faraday observed a luminescence that started part way down the tube, and traveled toward the anode. This left an area between the cathode and the start of the luminescence that was not illuminated, and subsequently became known as Faraday’s dark space (Figure 1). Faraday couldn’t fully explain his observations, and it took a number of further developments in terms of the technology of the tubes, before a greater understanding emerged.
Figure 1: Glow discharge in a low-pressure tube caused by electric current. Like what Faraday saw, the tube shows a dark space between the glows around the cathode (left, negatively charged) and anode (right, positively charged). image © Andrejdam/Wikimedia
In 1857 the German glassblower Heinrich Geissler, while working for fellow countryman and physicist Julius Plücker at the University of Bonn, improved the quality of the vacuum that could be achieved in such tubes. However, Geissler’s tubes still contained enough gaseous atoms that when the electrical current travelled in the tube, there was an interaction between the two, causing the tubes to glow. In the mid-19th century these Geissler tubes were largely nothing more than a curiosity, but interestingly, a curiosity that proved to be a forerunner of neon lights.
Englishman William Crookes repeated experiments similar to those of Faraday and Geissler, but this time with the ‘new and improved’ vacuums. The number of gas atoms (and hence the pressure) was drastically reduced in Crookes’ tubes. This caused an interesting effect: Faraday’s dark space was observed even further down the tube, again extending away from cathode toward the anode.
In addition to the extension of the dark space, fluorescence was observed on the glass behind the anode at the positive end of the tube. When further experimentation revealed that shadows of objects placed in the tube were cast onto the glass behind the anode, the German physicist Johann Hittorf proposed that the shadows must have been created by something travelling in a straight line from the cathode to the anode. Yet another German physicist, Eugen Goldstein, christened these invisible beams cathode rays.
As discussed in our module Early Ideas about Matter, Dalton’s atomic theory suggested that the atom was indivisible, i.e., that it was the smallest particle that made up matter, and that all matter was based upon that single unit. Experiments with cathode ray tubes dramatically changed that view when they led to the discovery of the first subatomic particle.
J.J. Thomson was an English physicist who worked with cathode ray tubes similar to those used by Crookes and others in the mid-19th century. Thomson’s experiments (Thomson, 1897) went further than those before him and provided evidence of the properties of the “something” hinted at by Hittorf. Thomson noted that cathode rays were deflected by magnetic fields and that the deflection was the same no matter what the source of the rays. This suggested that the rays were universal in their properties and that they carried some kind of charge. Thomson further demonstrated that cathode rays were charged because they could be deflected by an electric field. He found that the rays were deflected toward a positive plate and away from a negative plate, thus determining that they were made up of negatively charged particles of some sort.
Finally, Thomson applied both electrical and magnetic fields to the cathode ray at the same time. Knowing the strength of the fields applied and measuring the deflection of the particle stream in the tube, he was first able to measure the velocity of the particles in the stream. Then, by measuring the deflection of the ray while varying the two fields, Thomson was able to measure the mass-to-charge ratio of the particles in the stream and he found something astonishing. The negative particles had a mass-to-charge ratio that was over 1,000 times lower than that of a hydrogen atom, suggesting that the particles were incredibly tiny – much smaller than the smallest atom known. This fact allowed Thomson to definitively say that the atom was not the fundamental building block of matter and that smaller (subatomic) particles existed. Thomson originally called these particles corpuscles, but later they became known as electrons.
Thomson’s discovery made sense of all of the previous observations made by Faraday, Geissler, and Crookes. Zipping through a tube filled with gas but partially under vacuum, electrons would eventually slam into those gas atoms, knocking off some of their electrons and making them fluoresce. The dark space that Faraday first noted was due to the distance needed for the electrons to accelerate to the speed necessary to ionize the tube’s gas atoms.
In the better vacuums achieved in the Crookes’ tubes, the electrons could travel further distances without interacting with gas molecules because of the lower density of molecules in the tube, thus extending the dark space.
With the electron now discovered, Thomson went on to propose an entirely new model of the atom, known as the “Plum Pudding Model.” The model was so called because it mimicked the British dessert of the same name, in which dried fruit (primarily raisins, not plums) is dispersed in a body of suet and eggs that makes a dough.
In his model Thomson proposed that the negatively charged electrons (analogous to the raisins) were randomly spread out among what he called “a sphere of uniform positive electrification” (analogous with the dough or body of the pudding) (see Figure 2).
Figure 2: Thomson’s “plum pudding model” of the atom, showing a positively-charged sphere containing many negatively-charged electrons in a random arrangement.
Thomson’s model of the atom as a doughy clump of positive and negative particles persisted until 1911, when Ernest Rutherford, a former student of Thomson’s, advanced atomic theory yet another notch.
During the years 1908–1911, Ernest Marsden and Hans Geiger performed a series of experiments under the direction of Ernest Rutherford at the University of Manchester in England. In these experiments alpha particles (tiny, positively charged particles) were fired at a thin piece of gold foil (Figure 3). Under the Thomson Plum Pudding model of the atom, the “sphere of uniform positive electrification” was thought to be so diffuse that the tiny, fast moving alpha particles would pass straight through. Similarly, the electrons in the model were thought to be so tiny that any electrostatic interactions between them and the positive alpha particles would be minimal, so the path of the alpha particles would hardly be affected.
Figure 3: The gold foil experiment designed by Rutherford, Marsden, and Geiger. A beam of positively charged alpha particles was shot at a piece of gold foil. A screen around the foil captured the impact of the alpha particles.
As predicted, Rutherford and his co-workers observed that most of the alpha particles passed straight through the gold foil, and some particles were deflected at small angles. However, contradictory to what the Plum Pudding model predicted, a few rebounded at very sharp angles, some even flying straight back toward the source! These particles were acting as if they were encountering a hard object, like a tennis ball bouncing off a brick wall (Figure 4).
Figure 4: In the gold foil experiment, Rutherford and his colleagues expected to see the alpha particles passing through the mostly empty “Plum Pudding”-style atoms. However, what they observed was that the alpha particles occasionally ricocheted at sharp angles, indicating there was something more solid in the atom than previously thought.
The fact that most of the alpha particles passed straight through the gold foil suggested to Rutherford that atoms are made up of largely empty space. However, contrary to the Thomson Plum Pudding model, Rutherford’s work suggested that there was a dense, positively charged area in an atom that caused the observed repulsion and backscattering of alpha particles. Rutherford was astonished by these observations and famously said:
It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus. It was then that I had the idea of an atom with a minute massive centre, carrying a charge.
Over a series of experiments and papers (Rutherford, 1911, 1913, 1914), Rutherford developed a model of the atom with a dense, positively charged area of the atom at the center, now known as the nucleus – and the nuclear model of the atom was born.
Following the discovery of the electron, Nobel Prize-winning physicist Robert Millikan conducted an ingenious experiment that allowed the specific value of the negative charge of the electron to be calculated. In his famous oil drop experiment, Millikan and co-workers sprayed tiny oil droplets from an atomizer into a sealed chamber (Millikan, 1913). The oil drops fell downward, under the influence of gravity, into a space between two electrical plates. There they became charged by interacting with air that had been ionized by X-rays.
Figure 5: Millikan’s oil drop experiment in which he observed droplets of oil fall between two electrical plates, where the droplets became ionized by X-rays.
By adjusting the voltage between the two electrical plates, Millikan applied an electrical force upward that exactly matched the gravitational force downward, thus suspending the drops motionless. When a drop was suspended, the electrical force and the force of gravity were working in opposite directions but were equal in magnitude. Hence:

qE = mg

where q is the charge on the oil drop, E is the electric field, m is the mass of the oil drop, and g is the gravitational field. By measuring the mass of each oil drop and knowing both the gravitational and the electrical field, the charge on each drop could be determined.
Millikan found that there were differing charges on different oil drops. However, in each case the charges on the oil drops were found to be multiples of 1.60 × 10⁻¹⁹ coulombs. He concluded that the differing charges were due to different numbers of electrons, each having a negative charge of 1.60 × 10⁻¹⁹ coulombs, and hence the charge on the electron was found.
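The arithmetic of a single suspended drop can be sketched as follows. The field strength and drop mass are illustrative assumptions, not Millikan’s measured values:

```python
# Balance condition for a drop held motionless between the plates:
# electrical force up equals gravitational force down, qE = mg.
# The field strength and drop mass below are assumed illustrative values.

g = 9.81          # gravitational acceleration, m/s^2
E = 3.0e5         # electric field between the plates, V/m (assumed)
m = 9.786e-15     # mass of the oil drop, kg (assumed)

q = m * g / E     # charge on the drop, coulombs

# Millikan's insight: every measured q is an integer multiple of one
# elementary charge, e = 1.60e-19 C.
e = 1.60e-19
n = round(q / e)  # number of excess electrons on this drop

print(f"q = {q:.2e} C, i.e. {n} electrons' worth of charge")
```

For this assumed drop the computed charge is twice the elementary charge, meaning the drop carries two excess electrons.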
Thomson’s electron and Rutherford’s nuclear model were tremendous advancements. The Japanese scientist Hantaro Nagaoka had previously rejected Thomson’s Plum Pudding model on the grounds that opposing charges could not penetrate each other, and he counter-proposed a model of the atom that resembled the planet Saturn with rings of electrons revolving around a positive center. Upon hearing of Rutherford’s work, he wrote to him in 1911 saying, “Congratulations on the simpleness of the apparatus you employ and the brilliant results you obtained.”
But the planetary model was not perfect, and several inconsistent experimental observations meant much work was still to be done. At the time the electron was still thought of as a small particle, and it was thought to spin almost randomly around the nucleus of the atom. It would take additional experiments and the genius of Niels Bohr, Max Planck, and others to make the paradigm shift from classical physics, in which atoms consist of tiny particles and are governed by laws of motion, to quantum mechanics, in which electrons behave like waves and exhibit strange and exotic behaviors. (See our interactive animation comparing orbital and quantum models of the first 12 elements.) To learn more about the strange behaviors of quantum physics, read the other entries in our Atomic Theory series: II: Bohr and the Beginnings of Quantum Theory, III: Wave-Particle Duality and the Electron, and IV: Quantum Numbers and Orbitals.
The 19th and early 20th centuries saw great advances in our understanding of the atom. This module takes readers through experiments with cathode ray tubes that led to the discovery of the first subatomic particle: the electron. The module then describes Thomson’s plum pudding model of the atom along with Rutherford’s gold foil experiment that resulted in the nuclear model of the atom. Also explained is Millikan’s oil drop experiment, which allowed him to determine an electron’s charge. Readers will see how the work of many scientists was critical in this period of rapid development in atomic theory.
Atoms are not dense spheres but consist of smaller particles including the negatively charged electron.
The research on passing electrical currents through vacuum tubes by Faraday, Geissler, Crookes, and others laid the groundwork for discovery of the first subatomic particle.
J.J. Thomson’s observations of cathode rays provide the basis for the discovery of the electron.
Rutherford, Geiger, and Marsden performed a series of gold foil experiments that indicated that atoms have small, dense, positively-charged centers – later named the nucleus.
Millikan’s oil drop experiment determines the fundamental charge on the electron as 1.60 × 10⁻¹⁹ coulombs.
- HS-C4.4, HS-C6.2, HS-PS1.A1, HS-PS1.A3
- Faraday, M. (1838). VIII. Experimental researches in electricity. Thirteenth series. Philosophical Transactions of the Royal Society of London, 128: 125-168.
- Millikan, R.A. (1913). On the elementary electric charge and the Avogadro Constant. Physical Review, 2(2): 109–143.
- Rutherford, E. (1911). The scattering of α and β particles by matter and the structure of the atom. Philosophical Magazine, Series 6, 21(125): 669–688.
- Rutherford, E., & Nuttall, J.M. (1913). Scattering of α-particles by gases. Philosophical Magazine, Series 6, 26(154): 702–712.
- Rutherford, E. (1914). The structure of the atom. Philosophical Magazine. Series 6, 27(159): 488–498.
- Thomson, J.J. (1897). Cathode rays. Philosophical Magazine, Series 5, 44(269): 293-316.
Adrian Dingle, B.Sc., Anthony Carpi, Ph.D. “Atomic Theory I” Visionlearning Vol. CHE-1 (2), 2003.
Atomic Theory and Structure
The earliest ideas about matter at the atomic level were built over many centuries. Starting with the ancient Greeks, and moving through to the beginning of the 19th century, the story unfolds relatively slowly. (You can read more about this in our modules Early Ideas about Matter: From Democritus to Dalton and Atomic Theory I: The Early Days.) Despite the slow pace, it is crucial to understand that the process was a methodical one, as each scientist built upon earlier ideas. This gradual, logical progression, in which atomic structure evolved from a simple philosophical idea through to the ultra-sophisticated world of the Higgs boson particle discovered in the early part of the 21st century, represents a wonderful example of the evolution of a scientific idea and the application of the scientific process. In fact, one could argue that the history, struggle, and achievement threaded through the development of understanding matter at the atomic level is the quintessential story of the scientific method.
The story of atomic theory first encounters reproducible, scientific (evidence based) proof in the late 18th century. French chemists Antoine Lavoisier and Joseph Proust, with their Law of Conservation of Mass in 1789 and Law of Definite Proportions in 1799, respectively, each laid the groundwork for Englishman John Dalton’s work on the Law of Multiple Proportions (Dalton, 1803). Given that many centuries had elapsed between the earliest ideas of the atom and Dalton’s work, it would be fair to say that the evolution of atomic theory had been a gradual one, with progression in the field being steady rather than spectacular. But that was all about to change, and quite dramatically.
The most intense period of progress took place between the late 19th and early 20th century, and it hinged heavily on the work of a Danish physicist named Niels Bohr. Like so many before him, Bohr built upon the work of his predecessors, and for Bohr, part of that foundation had been built by Ernest Rutherford.
Based upon a series of experiments, Rutherford proposed the planetary model of the atom in which electrons swirled around a hard, dense nucleus (see Atomic Theory I: The Early Days). While Rutherford’s model explained many observations accurately, it was found to have flaws.
Rutherford’s planetary model of the atom was based upon classical physics – a system that deals with physical particles, force, and momentum. Unfortunately, this same system predicted that electrons orbiting in the manner that Rutherford described would lose energy, give off radiation, and ultimately crash into the nucleus and destroy the atom. However, for the most part, atoms are stable, lasting literally billions of years. Furthermore, the radiation predicted by the Rutherford model would have been a continuous spectrum of every color – in essence white light that when passed through a prism would display all of the colors of the rainbow (Figure 1).
Figure 1: When passed through a prism, white light displays the color spectrum.
But when pure gases of different elements are excited by electricity, as they would have been when placed in the newly discovered electric-discharge tube, they emit radiation at distinct frequencies. In other words, different elements do not emit white light, they emit light of different colors, and when that light is passed through a prism it does not produce a continuous rainbow of colors, but a pattern of colored lines, now referred to as line spectra (Figure 2). Clearly, Rutherford’s model did not fit with all of the observations, and Bohr made it his business to address these inconsistencies.
Figure 2: The visible light spectrum is displayed at the top and line spectra for three elements – hydrogen, neon, and iron – are below. image © Neon spectrum: Deo Favente
In 1911, Niels Bohr (Figure 3) had just completed his doctorate in physics at the University of Copenhagen and was invited by Rutherford to continue his work at the University of Manchester in England. Rutherford had already significantly advanced atomic theory with his groundbreaking gold foil experiment, but Bohr’s genius was in taking the Rutherford model and advancing it further.
Figure 3: Niels Bohr
It wasn’t Bohr who had come up with the original idea of a planetary model of the atom, but he was the one who took the fundamental concept and applied new ideas about quantum theory to it. This leap was necessary to explain the new evidence that challenged the old model, and to subsequently formulate a new, ‘better’ model.
It’s easy to think of light, and other forms of energy, as continuous. Turn up the dimmer switch on your lamp, and the lamp gets gradually brighter. However, by the late 1800s, physicists were beginning to suspect that this was not in fact true. Classical physics models failed to accurately predict black-body radiation; in other words, classical physics did not accurately predict the energy given off by an object when it was heated.
The German physicist Max Planck solved this problem in 1903 by proposing that black-body radiation energy had to be quantized, i.e., that it could only be released or absorbed in specific ‘packets’ that were associated with specific frequencies. This solved the black-body problem and was consistent with the observed experimental data. Thus, quantum mechanics was born.
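Planck’s quantization is captured by the standard relation E = hf, which ties the energy of one “packet” of radiation to its frequency. A minimal sketch, using an illustrative frequency in the visible range:

```python
# Planck's relation for a single quantum ("packet") of radiation: E = h * f.
# The frequency below is an illustrative value for green visible light.

h = 6.626e-34   # Planck constant, J*s
f = 5.0e14      # frequency of the radiation, Hz (illustrative)

E_packet = h * f   # energy carried by one quantum, joules

print(f"one quantum at {f:.1e} Hz carries {E_packet:.2e} J")
```

Energy at that frequency can only be emitted or absorbed in whole multiples of this tiny packet, which is exactly the restriction that resolved the black-body problem.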
Despite the advances that he and others made using this idea, interestingly, Planck remained quite skeptical of quantized energy for many years. He insisted that the calculations that he had done, and the conclusions that he had reached, were somehow a sophisticated mathematical trick and that ultimately the old, classical model would prevail. After all, it had been around for approximately 200 years and had stood up to some pretty intense scrutiny.
In 1905, Albert Einstein published a series of papers proposing that light also exhibited quantum behavior (Einstein, 1905). Sometimes described as Einstein’s Annus mirabilis (miracle year), the papers taken together, and combined with Planck’s work, allowed Bohr to marry the nature of the atom with physics to usher in a new dawn of understanding in atomic theory.
In 1913, building on Planck’s and Einstein’s theories of quantization, Bohr proposed that the electron itself was quantized – that it could not exist just anywhere around an atom (as suggested by the Rutherford model) but instead could only be found in specific positions, with specific energies. The electron could transition to different positions, but only in discrete, defined steps. It could not spin in any location around the nucleus of an atom but instead was restricted to specific areas of space – much like the planets in our solar system are restricted to specific paths.
Being negatively charged, electrons are attracted to the positive protons in the nucleus of an atom, and will normally occupy the orbital, or path, within an atom that is closest to the nucleus if it is available. This state, which has low potential energy, is called the ground state. By exposing the electrons to an external source of energy such as an electric discharge, it is possible to promote the electrons from their ground state to positions with higher potential energies, called excited states. These ‘excited’ electrons quickly return to lower energy positions (in order to regain the stability associated with lower energies), and in doing so they release energy at specific frequencies that correspond to the energy differences between electron orbitals, or shells (see quantum behavior simulation). Bohr’s mathematical equations further predicted that electrons would not crash into the nucleus in the manner that classical physics – and Rutherford’s model – had predicted. This was another crucial realization in the jump from one paradigm (classical physics) to a new one (quantum physics).
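Bohr’s quantized levels can be made concrete with the textbook formula for hydrogen, E_n = −13.6 eV / n² (a standard result of his model, stated here rather than in the text above). The sketch below computes the energy and wavelength of the familiar red Balmer line:

```python
# Bohr's quantized energy levels for hydrogen, and the photon released
# when an excited electron falls back toward a lower level.
# Uses the standard Bohr formula E_n = -13.6 eV / n**2.

RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, eV

def energy_level(n: int) -> float:
    """Energy of the nth Bohr orbit of hydrogen, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def photon_energy(n_high: int, n_low: int) -> float:
    """Energy released when an electron drops from n_high to n_low, in eV."""
    return energy_level(n_high) - energy_level(n_low)

# The n=3 -> n=2 transition produces the red Balmer line of hydrogen.
dE = photon_energy(3, 2)        # about 1.89 eV
wavelength_nm = 1239.84 / dE    # convert eV to nm via hc = 1239.84 eV*nm

print(f"n=3 -> n=2: {dE:.2f} eV, {wavelength_nm:.0f} nm")
```

Because only these discrete jumps are allowed, only discrete frequencies appear, which is exactly the line-spectrum behavior that the classical Rutherford model could not explain.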
Bohr’s discovery that Planck’s quantum theory could be applied to the classical Rutherford model of the atom, and could account for the observed shortcomings in the original model, is another beautiful example of how scientific theory uses prior evidence, coupled with new experimental observations, to adapt, develop, and change models and understanding over time. Science is usually advanced by contemporary scientists building on the work of predecessors and, as Isaac Newton put it in his 1676 letter to Robert Hooke (both prominent scientists of their time), by their “standing on the shoulders of giants.” Bohr’s work built on the theories of those before him, and extended them to explain the experimentally observed line spectra of atoms in a mathematical proof that made perfect sense.
While Bohr’s work seemed to explain the curious phenomenon of line spectra, a number of lines had been observed in the spectra of hydrogen that did not fit Bohr’s theory. At first glance this appeared to poke holes in Bohr’s ideas, but Bohr was quick to offer an explanation. He suggested that the lines in the spectrum for hydrogen that could not be accounted for were actually caused not by hydrogen atoms, but rather by an entirely different species altogether. So, what were these different species, and how did they come to be?
Almost 30 years before Bohr published his famous trilogy of papers in the Philosophical Magazine and Journal of Science in 1913, the idea that particles could carry some kind of charge had been established by the Swedish scientist Svante Arrhenius and the Englishman Michael Faraday. These charged particles had been christened ions.
Atoms are electrically neutral, meaning that the number of positive protons in any given atom equals the number of negative electrons in the same atom. Since the positive charge is exactly cancelled by the negative, the atom has no overall electrical charge. However, just as it is possible to excite an electron to a higher orbital, as Bohr’s work had shown, it is also possible to give an electron sufficient energy to overcome the attraction of the nucleus completely and to remove it from the atom entirely. This has the effect of unbalancing the electrical charge and results in the formation of a species with an overall positive charge – called a cation. For example, a sodium atom can lose an electron to form a positively charged sodium cation (Equation 1); the energy associated with the ejection of the first electron from any atom is called the first ionization energy.
Na(g) → Na⁺(g) + e⁻
Cations that are formed by the ejection of one electron can be further ionized by losing additional electrons, and in the process forming another ion, this time with a 2+ charge. The energy required for the ejection of the second electron is known as the second ionization energy. Although it rarely occurs with larger atoms (or indeed with the sodium example given above), it is theoretically possible to remove all of the electrons from any given atom, leading to third, fourth, fifth, etc. ionization energies for atoms with large numbers of electrons.
Figure 4: Using the element hydrogen, examples of a cation and anion. image © Jkwchui
Once an electron (or electrons) has been ejected from an atom in this manner, it can be accepted by other atoms, and as such, electrons can be transferred from one atom to another. Just as releasing electrons unbalances the charge, accepting electrons causes atoms to become unbalanced in terms of their charge as well, and once again an ion is formed. This time, the ion has a negative charge (a greater number of electrons than protons), and the species is called an anion. (A hydrogen cation and anion are shown in Figure 4.) For example, a neutral chlorine atom, with equal numbers of protons and electrons, can accept an electron from an external source to form a negatively charged chloride anion (Equation 2).
Cl(g) + e⁻ → Cl⁻(g)
Ions have wildly different properties when compared to their parent atoms, and this transfer of electrons from one atom to another is an example of how a small change in the structure of an atom can make a large difference in the behavior and nature of the particles. For example, sodium metal is unstable, reacting violently with water and corroding instantaneously in air. Thus, coming into contact with free sodium metal would be extremely dangerous. Similarly, chlorine exists as a gas under ambient conditions and it is highly poisonous, scarring the lungs of anyone who breathes it (in fact, free chlorine gas was used as a chemical weapon in World War I). However, when the two substances react with one another, sodium loses an electron forming a cation, and chlorine accepts the same electron to form an anion. The two resulting ions then bond together as a result of their charges, and together create a very common substance – table salt, which is neither reactive nor poisonous.
In Bohr’s work, the ionization of one particular element, helium, proved to be the key that unlocked the explanation for the unexpected lines he observed in hydrogen’s spectrum. When a helium atom, which has two electrons, loses an electron to form the helium ion, He+, its electronic structure mimics that of atomic hydrogen, since both species have only one electron – the helium ion is said to be isoelectronic with the hydrogen atom. However, the helium ion possesses a nucleus with double the charge of a hydrogen atom (two protons as opposed to one proton). Bohr realized this, and suggested that the greater attraction between the electron and the He nucleus accounted for the spectral lines that were previously unexplained – the charge of the nucleus affected the energy associated with transfers of electrons between the orbitals. Bohr’s theory was proven correct when spectra were generated using ionized helium that had been purged of hydrogen.
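The effect of the doubled nuclear charge can be sketched with the one-electron extension of the Bohr formula, E_n = −13.6 · Z² / n² eV (a standard result; Z is the nuclear charge):

```python
# Why He+ lines differ from hydrogen's even though both species have a
# single electron: in Bohr's model the energy scales with the square of
# the nuclear charge Z, via E_n = -13.6 * Z**2 / n**2 (in eV).

RYDBERG_EV = 13.6057

def energy_level(Z: int, n: int) -> float:
    """Bohr energy (eV) of a one-electron species with nuclear charge Z."""
    return -RYDBERG_EV * Z**2 / n**2

# The same n=3 -> n=2 transition in hydrogen (Z=1) versus He+ (Z=2):
dE_H  = energy_level(1, 3) - energy_level(1, 2)
dE_He = energy_level(2, 3) - energy_level(2, 2)

print(f"H:   {dE_H:.2f} eV")
print(f"He+: {dE_He:.2f} eV")   # four times larger -> a different line
```

The factor of Z² = 4 shifts every He+ transition to four times the energy of the corresponding hydrogen transition, producing the "extra" lines that had been misattributed to hydrogen.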
By the early 20th century, two different particles were known to exist in the atom, and both possessed an electrical charge – the very small and negatively charged electron and the much larger and positively charged proton. However, evidence began to mount that this was not a complete picture of the atom. Specifically, the mass of the protons and electrons in an atom did not appear sufficient to account for the mass of the whole atom, and certain types of nuclear decay suggested that something else might be going on in the nucleus. In 1932, James Chadwick, a British physicist who had studied with, and was working for, Ernest Rutherford at the time, set out to solve the problem. Rutherford had proposed the idea of a neutral atomic particle that had mass as early as 1920, but he never managed to gain traction in the hunt for this mysterious particle.
In 1932, Chadwick further developed an experiment that had first been performed by Frédéric and Irène Joliot-Curie. They found that by using polonium as a source of alpha particles, they could cause beryllium to emit radiation that, in turn, could be used to knock protons out of a piece of paraffin wax. The Joliot-Curies proposed that this radiation was gamma radiation, a packet of energy with no true mass. As an accomplished researcher of gamma rays and of the nucleus of the atom, Chadwick realized what others had not – that protons were too massive to be ejected from paraffin by mass-less gamma rays. By more carefully measuring the impact of the mystery particle on the paraffin wax and combining this with other measurements, Chadwick concluded that the particles being emitted were not gamma radiation, but a relatively heavy particle with no charge – a particle named the neutron (Figure 5).
Figure 5: Artistic model of an atom showing the nucleus, with protons and neutrons, and orbiting electrons. image © Visionlearning
Chadwick wrote a paper about his discovery entitled “The Possible Existence of a Neutron,” and it was published in the journal Nature (Chadwick, 1932). In 1935, he was awarded the Nobel Prize in Physics for his discovery. The Joliot-Curies did not go without recognition either. Their work on radioactivity and radioactive isotopes won them the Nobel Prize for Chemistry, also in 1935.
Chadwick’s discovery marked the genesis of induced nuclear reactions, where the neutron is accelerated and crashed into the nuclei of other elements, generating massive amounts of energy (the neutron can easily do this since being neutral it is not repelled from the nucleus in the way that positively charged particles would be). These reactions had a massive impact upon the world as a whole since they spawned thoughts of the atomic bomb and of nuclear energy.
The neutron also explains the existence of atoms of the same element that have different atomic masses. Isotopes are atoms of the same element (i.e., they have the same numbers of protons) but that differ in the number of neutrons they possess. As a result, different isotopes have similar chemical properties, but their masses, and in some cases their physical behavior, differ. Isotopes are differentiated by their atomic mass, which can be indicated by writing the element’s symbol, followed by a dash and then the mass, or, more commonly, by writing the mass as a superscript before the element symbol. For example, carbon-12 (C-12, or 12C) and carbon-14 (C-14, or 14C) are both naturally occurring isotopes of carbon. Carbon-12 is a stable isotope that accounts for almost 99% of naturally occurring carbon. Carbon-14 is a radioactive isotope that accounts for only about 1 × 10⁻¹⁰% of naturally occurring carbon, but because it decays to nitrogen with a half-life of approximately 5,730 years, it can be used to date some carbon-containing objects. (Three isotopes of carbon are depicted in Figure 6.)
Figure 6: Carbon isotopes. Each has the same number of protons but a different number of neutrons.
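The half-life arithmetic behind carbon-14 dating can be sketched in a few lines (the function names are illustrative):

```python
import math

# Exponential decay of carbon-14 (half-life ~5,730 years), the basis of
# radiocarbon dating. The remaining fraction after t years is
# (1/2) ** (t / half_life).

HALF_LIFE_C14 = 5730.0  # years

def remaining_fraction(t_years: float) -> float:
    """Fraction of the original C-14 left after t_years."""
    return 0.5 ** (t_years / HALF_LIFE_C14)

def age_from_fraction(fraction: float) -> float:
    """Invert the decay law: age in years from the surviving C-14 fraction."""
    return -HALF_LIFE_C14 * math.log2(fraction)

print(remaining_fraction(5730))   # after one half-life, half remains
print(age_from_fraction(0.25))    # 25% left means two half-lives have passed
```

A sample retaining a quarter of its original carbon-14 is two half-lives old, about 11,460 years.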
There is often more than one naturally occurring isotope of any given element. As a result, the atomic masses given on a modern periodic table are the weighted means of all the known isotopes of each element. For example, naturally occurring chlorine has two principal isotopes, one with 18 neutrons (mass of 35) and one with 20 neutrons (mass of 37). The lighter isotope, 35Cl, accounts for almost 76% of natural chlorine, while the 37Cl isotope accounts for only about 24%. Thus the weighted mean – the average mass of naturally occurring chlorine atoms weighted by their relative abundance – is 35.45.
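The weighted-mean calculation behind that periodic-table value can be sketched directly, using the standard isotopic masses (34.969 u and 36.966 u) and abundances (75.77% and 24.23%) that the text rounds to whole numbers:

```python
# Weighted mean atomic mass of chlorine from its two principal isotopes.
# Masses and abundances are standard reference values; the text above
# rounds them to mass numbers 35 and 37 and to ~76% / ~24%.

isotopes = [
    # (isotopic mass in u, fractional abundance)
    (34.969, 0.7577),   # chlorine-35
    (36.966, 0.2423),   # chlorine-37
]

weighted_mass = sum(mass * abundance for mass, abundance in isotopes)

print(f"weighted mean mass of Cl = {weighted_mass:.2f} u")
```

The result, 35.45 u, matches the chlorine entry on a modern periodic table.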
Bohr’s work provided a bridge between several disparate ideas that may never have been linked together without his intervention. He changed the paradigm from applying classical (particle) physics to the model of the atom, to thinking about the application of quantum theory and waves – a truly crucial development in the grand scheme of atomic theory and one that laid the groundwork for future scientists to build upon.
Having traveled from the earliest work on atomic theory through the crucial early part of the 20th century, we still have some way to go in the story of the atom. The advancements described in this module were grounded in Rutherford’s work, modified by Planck’s insights, and pieced together by Bohr’s genius. Still, there would be at least another decade that had to pass before work by Pauli, Heisenberg, and ultimately Schrödinger led to the full development of modern quantum mechanics that completely describes the atom as we know it today.
The 20th century brought a major shift in our understanding of the atom, from the planetary model that Ernest Rutherford proposed to Niels Bohr’s application of quantum theory and waves to the behavior of electrons. With a focus on Bohr’s work, the developments explored in this module were based on the advancements of many scientists over time and laid the groundwork for future scientists to build upon further. The module also describes James Chadwick’s discovery of the neutron. Among other topics are anions, cations, and isotopes.
Drawing on experimental and theoretical evidence, Niels Bohr changed the paradigm of modern atomic theory from one that was based on physical particles and classical physics, to one based in quantum principles.
Under Bohr’s model of the atom, electrons cannot rotate freely around the atom, but are bound to certain atomic orbitals that both constrain and define an atom’s electronic behavior.
Atoms can gain or lose electrons to become electrically charged ions.
James Chadwick completed the early picture of the atom with his discovery of the neutron, a neutral, nuclear particle that affects an atom’s mass and the different physical properties of atomic isotopes.
- HS-C4.4, HS-C6.2, HS-PS1.A1, HS-PS1.A3
- Bohr, N. (1913). On the Constitution of Atoms and Molecules. Philosophical Magazine (London), Series 6(26), 1–25.
- Chadwick, J. (1932). The Possible Existence of a Neutron. Nature, 129(3252), 312.
- Dalton, J. (1805). On the Absorption of Gases by Water and Other Liquids. Memoirs of the Literary and Philosophical Society of Manchester, Series 2(1), 271–287.
- Einstein, A. (1905). A New Determination of Molecular Dimensions. Annalen der Physik, Series 4(19), 289–306.
- Einstein, A. (1905). Does the inertia of a body depend on its energy content? Annalen der Physik, Series 4(18), 639–641.
- Einstein, A. (1905). On the electrodynamics of moving bodies. Annalen der Physik, Series 4(17), 891–921.
- Einstein, A. (1905). On a heuristic viewpoint concerning the production and transformation of light. Annalen der Physik, Series 4(17), 132–148.
- Einstein, A. (1905). On the motion of small particles suspended in liquids at rest required by the molecular-kinetic theory of heat. Annalen der Physik, Series 4(19), 371–381.
- Planck, M. (1903). Treatise on Thermodynamics. Ogg, A. (trans.). London: Longmans, Green & Co.
Adrian Dingle, B.Sc., Anthony Carpi, Ph.D. “Atomic Theory II” Visionlearning Vol. CHE-1 (3), 2003.
Atomic Theory and Structure
As discussed in our Atomic Theory II module, at the end of 1913 Niels Bohr facilitated the leap to a new paradigm of atomic theory – quantum mechanics. Bohr’s new idea that electrons could only be found in specified, quantized orbits was revolutionary (Bohr, 1913). As is consistent with all new scientific discoveries, a fresh way of thinking about the universe at the atomic level would only lead to more questions, the need for additional experimentation and collection of evidence, and the development of expanded theories. As such, at the beginning of the second decade of the 20th century, another rich vein of scientific work was about to be mined.
In the late 19th century, the father of the periodic table, Russian chemist Dmitri Mendeleev, had already determined that the elements could be grouped together in a manner that showed gradual changes in their observed properties. (This is discussed in more detail in our module The Periodic Table of Elements.) By the early 1920s, other periodic trends, such as atomic volume and ionization energy, were also well established.
The German physicist Wolfgang Pauli made a quantum leap by realizing that in order for there to be differences in ionization energies and atomic volumes among atoms with many electrons, there had to be a way that the electrons were not all placed in the lowest energy levels. If multi-electron atoms did have all of their electrons placed in the lowest energy levels, then periodic patterns very different from those actually observed would have resulted. However, before we reach Pauli and his work, we need to establish a number of more fundamental ideas.
The development of early quantum theory leaned heavily on the concept of wave-particle duality. This simultaneously simple and complex idea is that light (as well as other particles) has properties that are consistent with both waves and particles. The idea had been first seriously hinted at in relation to light in the late 17th century. Two camps formed over the nature of light: one in favor of light as a particle and one in favor of light as a wave. (See our Light I: Particle or Wave? module for more details.) Although both groups presented effective arguments supported by data, it wasn’t until some two hundred years later that the debate was settled.
At the end of the 19th century the wave-particle debate continued. James Clerk Maxwell, a Scottish physicist, developed a series of equations that accurately described the behavior of light as an electromagnetic wave, seemingly tipping the debate in favor of waves. However, at the beginning of the 20th century, both Max Planck and Albert Einstein showed that certain experimental results could be explained only if light behaved as a particle. In fact, they developed theories suggesting that light was a wave-particle – a hybrid of the two properties. By the time of Bohr’s watershed papers, the time was right for the expansion of this new idea of wave-particle duality in the context of quantum theory, and in stepped French physicist Louis de Broglie.
In 1924, de Broglie published his PhD thesis (de Broglie, 1924). He proposed the extension of the wave-particle duality of light to all matter, but in particular to electrons. The starting point for de Broglie was Einstein’s equation that described the dual nature of photons, and he used an analogy, backed up by mathematics, to derive an equation that came to be known as the “de Broglie wavelength” (see Figure 1 for a visual representation of the wavelength).
The de Broglie wavelength equation is, in the grand scheme of things, a profoundly simple one that relates two variables and a constant: momentum, wavelength, and Planck’s constant. There was support for de Broglie’s idea since it made theoretical sense, but the very nature of science demands that good ideas be tested and ultimately demonstrated by experiment. Unfortunately, de Broglie did not have any experimental data, so his idea remained unconfirmed for a number of years.
Figure 1: Two representations of a de Broglie wavelength (the blue line) using a hydrogen atom: a radial view (A) and a 3D view (B).
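As a rough numerical sketch of the de Broglie relation (λ = h/p, where h is Planck's constant and p is momentum), the snippet below computes the wavelength of a moving electron. The electron speed chosen here is an arbitrary illustrative value, not taken from the text:

```python
# Sketch: computing a de Broglie wavelength, lambda = h / p, for an electron.
# Constants are standard values (rounded); the speed is an assumed example.

PLANCK_H = 6.626e-34       # Planck's constant, J*s
ELECTRON_MASS = 9.109e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength (in meters) of a particle, from its momentum p = m * v."""
    momentum = mass_kg * speed_m_s
    return PLANCK_H / momentum

# An electron moving at about 1% of the speed of light:
wavelength = de_broglie_wavelength(ELECTRON_MASS, 3.0e6)
print(f"{wavelength:.3e} m")  # on the order of 1e-10 m, i.e., atomic dimensions
```

The result, a few ångströms, is comparable to atomic spacing in a crystal, which is why the nickel lattice in the Davisson-Germer experiment could diffract electrons.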
It wasn’t until 1927 that de Broglie’s hypothesis was demonstrated via the Davisson-Germer experiment (Davisson, 1928). In their experiment, Clinton Davisson and Lester Germer fired electrons at a piece of nickel metal and collected data on the diffraction patterns observed (Figure 2). The diffraction pattern of the electrons was entirely consistent with the pattern already measured for X-rays and, since X-rays were known to be electromagnetic radiation (i.e., waves), the experiment confirmed that electrons had a wave component. This confirmation meant that de Broglie’s hypothesis was correct.
Figure 2: A drawing of the experiment conducted by Davisson and Germer where they fired electrons at a piece of nickel metal and observed the diffraction patterns. image © Roshan220195
Interestingly, it was the (experimental) efforts of others (Davisson and Germer), that led to de Broglie winning the Nobel Prize in Physics in 1929 for his theoretical discovery of the wave-nature of electrons. Without the proof that the Davisson-Germer experiment provided, de Broglie’s 1924 hypothesis would have remained just that – a hypothesis. This sequence of events is a quintessential example of a theory being corroborated by experimental data.
Theories must be backed up by
In 1926, Erwin Schrödinger derived his now famous equation (Schrödinger, 1926). For approximately 200 years prior to Schrödinger’s work, the infinitely simpler F = ma (Newton’s second law) had been used to describe the motion of particles in classical mechanics. With the advent of quantum mechanics, a completely new equation was required to describe the properties of subatomic particles. Since these particles were no longer thought of as classical particles but as particle-waves, Schrödinger’s partial differential equation was the answer. In the simplest terms, just as Newton’s second law describes how the motion of physical objects changes with changing conditions, the Schrödinger equation describes how the wave function (Ψ) of a quantum system changes over time (Equation 1). The Schrödinger equation was found to be consistent with the description of the electron as a wave, and to correctly predict the parameters of the energy levels of the hydrogen atom that Bohr had proposed.
Equation 1: The Schrödinger equation, iℏ ∂Ψ/∂t = ĤΨ, where Ĥ is the Hamiltonian (total energy) operator.
Schrödinger’s equation is perhaps most commonly used to define a three-dimensional region of space where a given electron is most likely to be found. Each such region is known as an atomic orbital and is characterized by a set of three quantum numbers. These numbers describe the properties of the atomic orbital: its size (n, the principal quantum number), its shape (l, the angular or azimuthal quantum number), and its orientation in space (m, the magnetic quantum number). There is also a fourth quantum number that is exclusive to a particular electron rather than a particular orbital (s, the spin quantum number; see below for more information).
Schrödinger’s equation allows the calculation of each of these three quantum numbers. This equation was a critical piece in the quantum mechanics puzzle, since it brought quantum theory into sharp focus via what amounted to a mathematical demonstration of Bohr’s fundamental quantum idea. The Schrödinger wave equation is important since it bridges the gap between classical Newtonian physics (which breaks down at the atomic level) and quantum mechanics.
The Schrödinger equation is rightfully considered to be a monumental contribution to the advancement and understanding of quantum theory, but there are three additional considerations, detailed below, that must also be understood. Without these, we would have an incomplete picture of our non-relativistic understanding of electrons in atoms.
German mathematician and physicist Max Born made a very specific and crucially important contribution to quantum mechanics relating to the Schrödinger equation. Born took the wave functions that Schrödinger produced, and said that the solutions to the equation could be interpreted as three-dimensional probability “maps” of where an electron may most likely be found around an atom (Born, 1926). These maps have come to be known as the s, p, d, and f orbitals (Figure 3).
Figure 3: Based on Born’s theories, these are representations of the three-dimensional probabilities of an electron’s location around an atom. The four orbitals, in increasing complexity, are: s, p, d, and f. Additional information is given about the orbital’s magnetic quantum number (m). image © UC Davis/ChemWiki
In the year following the publication of Schrödinger’s work, the German physicist Werner Heisenberg published a paper that outlined his uncertainty principle (Heisenberg, 1927). He realized that there were limitations on the extent to which the momentum of an electron and its position could be described. The Heisenberg Uncertainty Principle places a limit on the accuracy with which the position and momentum of a particle can be known simultaneously: as the certainty of one increases, the uncertainty of the other increases.
The crucial thing about the uncertainty principle is that it fits with the quantum mechanical model in which electrons are not found in very specific, planetary-like orbits – the original Bohr model – and it also dovetails with Born’s probability maps. The two contributions (Born and Heisenberg’s) taken together with the solution to the Schrödinger equation, reveal that the position of the electron in an atom can only be accurately predicted in a statistical way. That is to say, we know where the electron is most likely to be found in the atom, but we can never be absolutely sure of its exact position.
The Heisenberg uncertainty principle concerning the position and momentum of a particle states that as the certainty of one increases, the _____ of the other increases.
In 1922 the German physicists Otto Stern, an assistant of Born’s, and Walther Gerlach conducted an experiment in which they passed silver atoms through a magnetic field and observed the deflection pattern. In simple terms, the results showed that the single 5s valence electron in each atom could take on one of two very distinct states. This was an unexpected observation, and at the time nobody could explain the phenomenon the experiment had demonstrated. It took a number of scientists, working both independently and together and drawing on earlier experimental observations, several years to work it out.
In the early 1920s, Bohr’s quantum model and various spectra that had been produced could be adequately described by the use of only three quantum numbers. However, there were experimental observations that could not be explained via only three mathematical parameters. In particular, as far back as 1896, the Dutch physicist Pieter Zeeman noted that the single valence electron present in the sodium atom could yield two different spectral lines in the presence of a magnetic field. This same phenomenon was observed with other atoms with odd numbers of valence electrons. These observations were problematic since they failed to fit the working model.
In 1925, Dutch physicist George Uhlenbeck and his graduate student Samuel Goudsmit proposed that these odd observations could be explained if electrons possessed an intrinsic angular momentum, a concept that came to be called “spin.” As a result, the existence of a fourth quantum number was revealed, one that is independent of the orbital in which the electron resides, but unique to an individual electron.
By considering spin, the observations by Stern and Gerlach made sense. If an electron could be thought of as a rotating, electrically-charged body, it would create its own magnetic moment. If the electron had two different orientations (one right-handed and one left-handed), it would produce two different ‘spins,’ and these two different states would explain the anomalous behavior noted by Zeeman. This observation meant that there was a need for a fourth quantum number, ultimately known as the “spin quantum number,” to fully describe electrons. Later it was determined that the spin number was indeed needed, but for a different reason – either way, a fourth quantum number was required.
Some experimental observations could not be explained mathematically using three parameters because
In 1922, Niels Bohr visited his colleague Wolfgang Pauli at Göttingen, where Pauli was working at the time. Bohr was still wrestling with the idea that there was something important about the number of electrons that were found in ‘closed shells’ (shells that had been filled).
In his own later account (1946), Pauli describes how building upon Bohr’s ideas and drawing inspiration from others’ work, he proposed the idea that only two electrons (with opposite spins) should be allowed in any one quantum state. He called this ‘two-valuedness’ – a somewhat inelegant translation of the German zweideutigkeit (Pauli, 1925). The consequence was that once a pair of electrons occupies a low energy quantum state (orbitals), any subsequent electrons would have to enter higher energy quantum states, also restricted to pairs at each level.
Using this idea, Bohr and Pauli were able to construct models of all of the electronic structures of the atoms from hydrogen to uranium, and they found that their predicted electronic structures matched the periodic trends that were known to exist from the periodic table – theory met experimental evidence once again.
Pauli ultimately formed what came to be known as the exclusion principle (1925), which used a fourth quantum number (introduced by others) to distinguish between the two electrons that make up the maximum number of electrons that could be in any given quantum level. In its simplest form, the Pauli exclusion principle states that no two electrons in an atom can have the same set of four quantum numbers. The first three quantum numbers for any two electrons can be the same (which places them in the same orbital), but the fourth number must be either +½ or -½, i.e., they must have different ‘spins’ (Figure 4). This is what Uhlenbeck and Goudsmit’s research suggested, following Pauli’s original publication of his theories.
Figure 4: A model of the fourth quantum number, spin (s). Shown here are models for particles with spin (s) of ½, or half angular momentum.
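The quantum-number rules combined with the exclusion principle fix how many electrons each shell can hold. As a small sketch (not from the original text, but following the rules stated above: for shell n, l runs from 0 to n − 1; for each l, m runs from −l to +l; and each orbital holds at most two electrons of opposite spin):

```python
# Sketch: counting electron capacity per shell from the quantum-number rules
# plus the Pauli exclusion principle (at most two electrons per orbital).

def shell_capacity(n):
    """Maximum number of electrons in shell n."""
    orbitals = sum(2 * l + 1 for l in range(n))  # number of (l, m) combinations
    return 2 * orbitals                          # two spin states per orbital

for n in range(1, 5):
    print(n, shell_capacity(n))
# capacities follow the familiar 2, 8, 18, 32 pattern (i.e., 2n^2)
```

This 2, 8, 18, 32 pattern is exactly the periodicity Bohr and Pauli matched against the periodic table when constructing electronic structures from hydrogen to uranium.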
The period described here was rich in the development of the quantum theory of atomic structure. Literally dozens of individuals, some mentioned throughout this module and others not, contributed to this process by providing theoretical insights or experimental results that helped shape our understanding of the atom. Many of the individuals worked in the same laboratories, collaborated together, or communicated with one another during the period, allowing the rapid transfer of ideas and refinements that would shape modern physics. All these contributions can certainly be seen as an incremental building process, where one idea leads to the next, each adding to the refinement of thinking and understanding, and advancing the science of the field.
The 20th century was a period rich in advancing our knowledge of quantum mechanics, shaping modern physics. Tracing developments during this time, this module covers ideas and refinements that built on Bohr’s groundbreaking work in quantum theory. Contributions by many scientists highlight how theoretical insights and experimental results revolutionized our understanding of the atom. Concepts include the Schrödinger equation, Born’s three-dimensional probability maps, the Heisenberg uncertainty principle, and electron spin.
Electrons, like light, have been shown to be wave-particles, exhibiting the behavior of both waves and particles.
The Schrödinger equation describes how the wave function of a wave-particle changes with time in a similar fashion to the way Newton’s second law describes the motion of a classic particle. Using quantum numbers, one can write the wave function, and find a solution to the equation that helps to define the most likely position of an electron within an atom.
Max Born’s interpretation of the Schrödinger equation allows for the construction of three-dimensional probability maps of where electrons may be found around an atom. These ‘maps’ have come to be known as the s, p, d, and f orbitals.
The Heisenberg Uncertainty Principle establishes that an electron’s position and momentum cannot both be precisely known; instead, we can only calculate the statistical likelihood of an electron’s location.
The discovery of electron spin defines a fourth quantum number independent of the electron orbital but unique to an electron. The Pauli exclusion principle states that no two electrons with the same spin can occupy the same orbital.
- HS-C1.4, HS-C4.4, HS-PS1.A2, HS-PS2.B3
- Bohr, N. (1913). On the constitution of atoms and molecules. Philosophical Magazine (London), Series 6, 26, 1–25.
- Born, M. (1926). Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 37(12), 863–867.
- Davisson, C. J. (1928). Are electrons waves? Franklin Institute Journal, 205(5), 597-623.
- de Broglie, L. (1924). Recherches sur la théorie des quanta. Annales de Physique, 10(3), 22-128.
- Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 43(3-4), 172-198.
- Pauli, W. (1925). Ueber den Einfluss der Geschwindigkeitsabhaengigkeit der Elektronenmasse auf den Zeeman-Effekt. Zeitschrift für Physik, 31(1), 373-385.
- Pauli, W. (1946). Remarks on the history of the exclusion principle. Science, New Series, 103(2669), 213-215.
- Schrödinger, E. (1926). Quantisierung als Eigenwertproblem. Annalen der Physik, 384(4), 273–376.
- Stoner, E. C. (1924). The distribution of electrons among atomic energy levels. The London, Edinburgh and Dublin Philosophical Magazine (6th series), 48(286), 719–736.
Adrian Dingle, B.Sc., Anthony Carpi, Ph.D. “Atomic Theory III” Visionlearning Vol. CHE-3 (6), 2015.
In Atomic Theory III: Wave-Particle Duality and the Electron, we discussed the advances that were made by Schrödinger, Born, Pauli, and others in the application of the quantum model to atomic theory. The Schrödinger equation was seen as a key mathematical link between the theory and the application of the quantum model. Born took the wave functions that Schrödinger produced and said that the solutions to the equation could define the energies and the most probable positions of electrons within atoms, thus allowing us to build a much more detailed description of where electrons might be found within an atom. This module further explores these solutions, the position of electrons, the shape of atomic orbitals, and the implications of these ideas.
As we saw in earlier reading, the electron is not a true particle, but a wave-particle similar to the photon. Since we measure the position of objects with light, the small size of the electron introduces a challenge. If we shine a light beam on a moving tennis ball, the light has little effect on the tennis ball and we can measure both its position and momentum with a high degree of accuracy. However, the electron is so tiny that even a single photon will influence its trajectory – thus if we shine a beam of light on it to measure its position, the energy of the photon will affect its momentum, and vice versa. Werner Heisenberg developed a principle to describe this uncertainty, called appropriately the Heisenberg Uncertainty Principle (Heisenberg, 1927). It tells us that mathematically, the product of the uncertainty in position (Δx) and the uncertainty in momentum (Δp) of an electron cannot be less than the reduced Planck constant ℏ/2 (Equation 1).
Equation 1: The Heisenberg Uncertainty Principle, Δx · Δp ≥ ℏ/2, where Δx is the uncertainty in position, Δp is the uncertainty in momentum of an electron, and ℏ is the reduced Planck constant. (Equation created with CodeCogs online tool.)
This is a very small number that can usually be ignored, but when dealing with a particle as small as an electron, it is significant. Because of this, it becomes necessary to describe the position of an electron in terms of probability, rather than that of absolute certainty. We have to say that an electron is likely to be found within the atom in certain areas of high probability, but we cannot be 100% sure of its precise position.
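To see why the uncertainty matters at atomic scale, we can compute the minimum momentum uncertainty implied by Δx · Δp ≥ ℏ/2 for an electron confined to a region about the size of an atom. The confinement length below is an illustrative assumption:

```python
# Sketch: minimum momentum uncertainty from delta_x * delta_p >= hbar / 2.
# hbar is the standard reduced Planck constant (rounded); the confinement
# length (~one atomic diameter) is an assumed example value.

HBAR = 1.055e-34           # reduced Planck constant, J*s
ELECTRON_MASS = 9.109e-31  # kg

def min_momentum_uncertainty(delta_x_m):
    """Smallest possible delta_p (kg*m/s) for a given position uncertainty."""
    return HBAR / (2 * delta_x_m)

dp = min_momentum_uncertainty(1e-10)   # electron localized to ~1e-10 m
dv = dp / ELECTRON_MASS                # corresponding speed uncertainty
print(f"delta_p >= {dp:.2e} kg*m/s, delta_v >= {dv:.2e} m/s")
# The speed uncertainty comes out near 6e5 m/s - enormous for an electron,
# which is why only probabilities, not trajectories, can be given.
```

For a tennis ball the same calculation gives an immeasurably small uncertainty, which is why the principle can be ignored for everyday objects.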
The Heisenberg Uncertainty Principle tells us that
Because the electron is not a true particle, we cannot describe its movement or location in traditional terms, that is, by simple x, y, and z coordinates. The challenges raised by the combination of wave-particle duality and the Heisenberg Uncertainty Principle are expressed by Schrödinger’s equation which, when solved, produces wave functions denoted by Ψ. When Ψ is squared, the resulting solution gives the probability of finding an electron at a particular place in the atom. The Ψ2 term describes how electron density and electron probability are distributed in space around the nucleus of an atom.
The application of the Schrödinger equation is most easily understood in the case of the hydrogen atom because the one-electron atom allows us to avoid the complex interactions of multiple electrons. The time-independent form of the Schrödinger equation (Equation 2) can be solved to provide solutions that correspond to the energy levels in a hydrogen atom. In solving this equation, we find that there are multiple solutions for Ψ that we call Ψ1, Ψ2, Ψ3, etc.
Equation 2: The time-independent Schrödinger equation, ĤΨ = EΨ, for hydrogen’s energy levels. (Equation created with CodeCogs online tool.)
Each of these solutions has a different energy that corresponds to what we think of as different energy levels, or electron shells, within the atom. The lowest energy electron distribution is that which is closest to the nucleus, and it is called the ground state. The ground state is the most stable state of the single electron within the hydrogen atom. Other, higher energy states exist and are called excited states. As the energy of each level increases above the ground state, we refer to them as the second, third, fourth, etc. energy levels, respectively.
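The energies of these solutions for hydrogen follow a simple pattern, E_n = −13.6 eV / n². (The −13.6 eV value is hydrogen's measured ionization energy, a standard constant not stated in the text above.) A short numerical sketch:

```python
# Sketch: hydrogen's quantized energy levels, E_n = -13.6 eV / n^2.
# The -13.606 eV ground-state energy is a standard measured value.

def hydrogen_level_ev(n):
    """Energy (eV) of the nth level of hydrogen; n = 1 is the ground state."""
    return -13.606 / n**2

for n in range(1, 5):
    print(f"n={n}: {hydrogen_level_ev(n):.3f} eV")
# The ground state (n=1) is lowest; excited states rise toward 0 eV
# (the ionization limit) as n increases.
```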
The energy of the electron shells ______ as you move away from the nucleus of the atom.
Acceptable solutions to the wave equation for an electron cannot be just any values, but rather they are restricted to those that obey certain parameters. Those parameters are described by a set of three quantum numbers that are named the principal quantum number, the azimuthal quantum number, and the magnetic quantum number. These quantum numbers are given the symbols n, l, and m, respectively.
The principal quantum number n is a positive integer starting with a value of 1, where 1 corresponds to the first (lowest energy) shell known as the ground state in hydrogen. The principal quantum number then increases with the increasing energy of the shells (the excited states in hydrogen) to values of n = 2, n = 3, n = 4, etc., as one moves farther away from the nucleus to higher energy levels.
The azimuthal quantum number l (also called the orbital angular momentum quantum number) determines the physical, three-dimensional shape of the orbitals in any subshell. The value of l depends on the principal quantum number n: for each value of n, l can take any integer value from 0 up to n − 1, so there are multiple values of l for each value of n. For example, when n = 3, l can be 0, 1, or 2. Thus the ground state shell in hydrogen (principal quantum number 1) has only one s subshell, shell 2 has s and p subshells, and so on. The subshells indicated by l are also given the letter designations s, p, d, and f, where l = 0 corresponds to an s subshell, l = 1 a p subshell, l = 2 a d subshell, and l = 3 an f subshell. Each subshell has a unique shape in 3D-space.
The magnetic quantum number m defines the orientation (i.e., the position) and the number of orbitals within any given subshell. Values of m depend upon the value of l, the azimuthal quantum number: m can take on all integer values from −l to +l, including 0. So, for example, when l = 1, m can be +1, 0, or −1, yielding three separate orbitals oriented along the x, y, and z axes in space. Thus in the case of l = 1 (the p subshell), there are three orbitals oriented in different directions in space.
Thus each energy level is given a unique set of quantum numbers to fully describe it. Only certain values for quantum numbers in any one energy level are allowed, and those combinations are summarized in Table 1. Analysis of all of the allowed solutions to the wave equation shows that orbitals can be grouped together in sets according to their l values: s, p, d, and f sets.
Table 1: The allowed solutions to the wave equation show that orbitals can be grouped together in sets according to their l values (i.e., the s, p, d, and f sets).
|n (principal quantum number)||l (azimuthal quantum number, with letter designations)||m (magnetic quantum number, with letter designations)|
|1||0 (s)||0 (s)|
|2||0 (s), 1 (p)||0 (s), -1/0/+1 (p)|
|3||0 (s), 1 (p), 2 (d)||0 (s), -1/0/+1 (p), -2/-1/0/+1/+2 (d)|
|4||0 (s), 1 (p), 2 (d), 3 (f)||0 (s), -1/0/+1 (p), -2/-1/0/+1/+2 (d), -3/-2/-1/0/+1/+2/+3 (f)|
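The allowed combinations in the table follow mechanically from the two rules just stated (l = 0 to n − 1, m = −l to +l), which a few lines of code can enumerate. This is a sketch, not from the original text:

```python
# Sketch: generating the allowed quantum-number combinations.
# Rules from the text: for shell n, l = 0..n-1; for each l, m = -l..+l.

SUBSHELLS = {0: "s", 1: "p", 2: "d", 3: "f"}

def allowed_orbitals(n):
    """Return (n, l, m) triples for every orbital in shell n."""
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n, l, m in allowed_orbitals(2):
    print(f"n={n} l={l} ({SUBSHELLS[l]}) m={m:+d}")
# shell 2 yields one s orbital (m=0) and three p orbitals (m = -1, 0, +1)
```

Running this for n = 1 through 4 reproduces Table 1 row by row.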
The first set of orbitals is described by a type of wave function whose probability density depends only on the distance from the nucleus, and whose probability is the same in all directions. These orbitals therefore have a spherical shape and are called s orbitals. These wave functions all have values of l = 0 and therefore values of m = 0, and every energy level has one such wave function, starting with 1s and moving to 2s, 3s, etc. (Figure 1).
Figure 1: The spherical shaped s orbital. image © UC Davis ChemWiki
The second type of wave function has a probability density that depends on both the distance from the nucleus and the orientation along either the x-, y-, or z-axis in space. This leads to three, separate, equivalent (or degenerate) wave functions that have a “figure eight” shape in 3D-space. These wave functions are all found to have values of l = 1 and can take on three, different m values. These are called p orbitals and exist for every energy level except the first, thus appearing as 2p, 3p, 4p, etc. (Figure 2).
Figure 2: The “figure eight” p orbitals. image © UC Davis ChemWiki
The third type of wave function has a probability density that depends on both the distance from the nucleus and the orientation along two of the x-, y-, and z-axes in space. This leads to five separate, equivalent wave functions. When the wave functions have equivalent energy, they are given the label “degenerate.” They have complex shapes in 3D-space. These wave functions are all found to have values of l = 2 and can take on five different m values. These are called d orbitals, and every energy level except the first and second has such a wave function (Figure 3).
Figure 3: The d orbitals, the beginning of more complex orbital shapes. image © UC Davis ChemWiki
The fourth type is a set of orbitals with even more complexity in terms of their wave function, positions, and shape in 3D-space. These wave functions are all found to have values of l = 3 and can take on seven different m values. They, too, are all degenerate. These are called f orbitals and every energy level except the first, second, and third has such a wave function, starting with 4f and moving to 5f (Figure 4).
Figure 4: The f orbitals, which continue the complexity of shape as seen in the d orbitals. image © UC Davis ChemWiki
To summarize, the orbitals available in the first four energy levels are as follows in Table 2:
Table 2: Orbitals associated with the first four energy levels.
|n (energy level)||orbitals available|
|1||1s|
|2||2s, 2p|
|3||3s, 3p, 3d|
|4||4s, 4p, 4d, 4f|
The principal quantum number n
Since orbitals are defined by very specific solutions to the Schrödinger equation, electrons must absorb very specific quantities of energy to be promoted from one energy level to another. By exposing electrons to an external source of energy, such as light, it is possible to promote the electrons from their ground state to other positions that have higher potential energies, called excited states. These ‘excited’ electrons quickly return to lower energy positions (in order to regain the stability associated with lower energies), and in doing so they release energy in specific frequencies that correspond to the energy differences between electron orbitals, or shells. Light energy is related to frequency (f) and Planck’s constant (h) in the equation E = hf.
Electrons within a hydrogen atom will absorb specific frequencies of energy that correspond to the energy gaps (ΔE) between two energy levels, thus allowing the promotion of an electron from a relatively low energy level to a relatively high energy level. Examining the absorption spectrum produced when hydrogen is irradiated, we observe a pattern of lines that supports this (Figure 5).
Figure 5: Hydrogen’s emission and absorption spectra from the Balmer series. image © Chem1 Virtual Textbook, adapted from the Online Journey through Astronomy site
Bohr proposed that electrons can transition to different positions, but only in discrete, defined steps. He thought that electrons were restricted to a specific area of space around the nucleus, much like the planets in our solar system are restricted to specific paths. The hydrogen absorption spectrum of discrete lines (rather than a continuous spectrum) shows that only very specific transitions can be made and is evidence for the quantum model.
In addition to the absorption spectrum shown above, another identical spectrum can be observed, this time with the energy being emitted rather than absorbed. In this case the electrons fall back from higher energies to lower energies rather than being promoted as in the absorption spectrum. This is called the emission spectrum of the atom and is once again made up of discrete, individual lines.
When energy is emitted, electrons
Each series of lines that appears in such spectra is named after its discoverer. The Lyman series comprises those lines in which the electrons are either promoted from, or fall to, the 1s orbital. In the Balmer, Paschen, Brackett, and Pfund series, electrons travel either from or to the levels where n = 2, 3, 4, and 5, respectively. The differing energy gaps (and therefore the differing frequencies) correspond to lines in different regions of the electromagnetic spectrum as a whole (Figure 6). Lyman lines are in the ultraviolet region, Balmer in the visible, and Paschen, Brackett, and Pfund in various parts (near and far) of the infrared.
Figure 6: An energy level transition diagram for hydrogen.
In hydrogen atoms, and other single electron species such as He+ and Li2+, it is found that the energy of the orbitals is only dependent on their distance from the nucleus, with increasing energies as n increases. The lowest energy orbital is 1s, followed by 2s and 2p (that have the same energy), and 3s, 3p, and 3d (that have the same energy) etc. As such, the ground state (lowest energy state) for a hydrogen atom with only one electron is with the electron residing in the 1s orbital. The differences in energies (ΔE) required to promote electrons from one level to another can be determined by using one version of the Rydberg Formula, where RH is a constant for hydrogen and ni and nf are the initial and final energy levels respectively.
Equation 3: The Rydberg Formula: ΔE = RH (1/ni² − 1/nf²)
Since other single electron species such as He+ and Li2+ contain more protons than the hydrogen atom, the electron experiences a greater attraction from the nucleus. As such, differing, larger amounts of energy are required for promotion of electrons in these species and a modified Rydberg constant is required in the calculation of ΔE for single electron species other than the hydrogen atom.
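As a rough illustration, the Rydberg relationship and its Z² scaling for other one-electron species can be sketched in a few lines of Python. The value of RH in joules is the standard energy-form constant; the function name and structure are illustrative choices, not part of the original text.

```python
# Sketch: computing hydrogen-like transition energies with the Rydberg formula.
# R_H is the Rydberg constant for hydrogen expressed as an energy; for other
# one-electron species (He+, Li2+) the energies scale with Z squared.

R_H = 2.18e-18  # J, Rydberg constant for hydrogen (energy form)

def delta_e(n_i, n_f, z=1):
    """Energy change for an electron moving from level n_i to n_f.
    Positive result = energy absorbed (promotion); negative = emitted."""
    return (z ** 2) * R_H * (1 / n_i ** 2 - 1 / n_f ** 2)

# First line of the Lyman series: n = 1 -> 2 absorption in hydrogen
e_absorbed = delta_e(1, 2)   # ~1.6e-18 J absorbed
# The reverse transition emits exactly the same amount of energy
e_emitted = delta_e(2, 1)    # same magnitude, negative sign
```

Note how the emission spectrum mirrors the absorption spectrum: the same gaps appear with opposite sign, which is why the two sets of spectral lines are identical in position.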
The hydrogen atom is the simplest case to study since its single electron is both uninfluenced by other electrons and does not influence any other electrons. In this case, the wave function is relatively easy to compute. However, in more complex atoms the calculations are not so easy. For example, when two electrons are present, as in helium, things become considerably more complex. Rather than there being just one simple potential energy attraction between the nucleus and the single electron in a hydrogen atom, in helium there are now three potential energy terms to consider: the attractive forces between the nucleus and each electron, and the repulsion between the two electrons. The addition of even more electrons leads to the equation and the calculations becoming increasingly complex.
The complexity of the Schrödinger equation under these many electron circumstances means that it has to be solved by a series of approximations rather than directly. One method is to effectively apply the single electron system over and over again to produce what amounts to an approximate answer. Although not entirely correct, such approximations prove to be very workable and produce reasonable answers to the multi-electron problem.
The approximation works well enough to once again produce Ψ2 wave functions that predict the electron density in space of multi-electron atoms. The orbitals in the single electron situation and those in the multi-electron situation are still defined by the same set of restricted quantum numbers, but one difference arises regarding the specific energies of those orbitals. Previously, the energy depended only on the distance from the nucleus (that is, only on n); in multi-electron systems the energy is found to depend on the value of l as well. This complicates the simple pattern that related electron energy to distance from the nucleus (i.e., orbital energy no longer follows 1s < (2s = 2p) < (3s = 3p = 3d), etc.). Instead, it produces the following sequence of relative energies for the orbitals.
1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p, etc.
Note the insertion of 4s and 5s earlier in the sequence than expected. This order is reflected on the periodic table. When reaching the end of period three on the periodic table (element 18, argon), we will have filled the 1s, 2s, 2p, 3s, and 3p orbitals giving a total (as expected) of 18 electrons. One would perhaps expect the next orbitals to be filled to be the 3d, since after all, they are also in the third energy level. However, the next elements on the periodic table, potassium and calcium (elements 19 and 20, respectively), fill their 4s orbitals before their 3d. Moving further along the fourth period to element 21, scandium, we find that the 3d subshell now begins to populate. When that subshell is full at element 30, zinc, we then fill the 4p subshell beginning with element 31, gallium. In short, the periodic table is ordered in such a way that it reflects the sequence of relative energies of orbitals given above.
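The filling sequence above can be generated from the n + l (Madelung) rule: orbitals fill in order of increasing n + l, with ties broken by lower n. A minimal sketch of that rule in Python (the function and labels are illustrative):

```python
# Generate the orbital filling order from the n + l (Madelung) rule:
# sort orbitals by n + l, breaking ties with the lower value of n.

L_LABELS = "spdf"

def filling_order(max_n=5):
    """Return orbital labels (e.g. '1s', '3d') in filling order."""
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{L_LABELS[l]}" for n, l in orbitals]

print(filling_order())
# Begins: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, ...
```

This reproduces the insertion of 4s before 3d and 5s before 4d, matching the order of the periodic table described above.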
As seen previously, the concept of electron spin introduced the need for the fourth and final quantum number, the spin quantum number, s. Its introduction ensures that each electron has a unique set of quantum numbers according to the Pauli Exclusion Principle. With there being only two values for s (+½ or -½), it follows that each orbital may only hold a maximum of two electrons.
The notation used to describe the electronic configuration of an atom involves the use of a superscript to denote the number of electrons in any given orbital. For example, in the case of beryllium that has a total of four electrons, 1s2 2s2 shows that both the 1s and the 2s orbitals are each filled with two electrons.
When identifying which orbitals the electrons in an atom occupy, we generally follow the principle referred to as the Aufbau principle, where lower energy orbitals are filled first. So, for example, 1s is filled before 2s, and 2s before 2p, etc. But what about degenerate orbitals, those that have the same energy, such as the three orbitals that exist in the 2p sub-level?
Hund’s rule states that when electrons are placed into any set of degenerate orbitals, one electron is placed into each orbital before any spin pairing takes place. This means, for example, if we consider three electrons being placed into the degenerate 2p orbitals, one would see an electronic configuration of 2px1 2py1 2pz1 (recall the p orbitals are oriented along x, y, and z axes, respectively) before we see any pairing. The fourth electron to enter the 2p sub-shell would create the need for an unavoidable pairing, hence 2px2 2py1 2pz1 (Figure 7).
Figure 7: An illustration of Hund’s Rule showing the placement of electrons in various orbitals of nitrogen (N) and oxygen (O).
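Hund's rule lends itself to a tiny sketch: place one electron in each degenerate orbital before any pairing occurs. The function below is purely illustrative.

```python
# Hund's rule sketch: electrons fill a set of degenerate orbitals singly
# before any orbital accepts a second (paired) electron.

def fill_degenerate(n_electrons, n_orbitals=3):
    """Occupancy counts for n_orbitals degenerate orbitals (e.g. 2px, 2py,
    2pz) after placing n_electrons according to Hund's rule."""
    occ = [0] * n_orbitals
    for i in range(n_electrons):
        # the first pass fills each orbital once; the second pass pairs them
        occ[i % n_orbitals] += 1
    return occ

print(fill_degenerate(3))  # nitrogen's 2p electrons: [1, 1, 1]
print(fill_degenerate(4))  # oxygen's 2p electrons:   [2, 1, 1]
```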
Now that we have a sense of electron configuration, their energies, and their most probable positions within atoms, the next step is to describe the behavior of electrons when atoms interact with one another. Electron interaction forms the basis of a key chemical process called chemical bonding. And understanding the behavior of electrons provides us with a better understanding of the chemical behavior of the elements and their compounds.
Our Atomic Theory series continues, exploring the quantum model of the atom in greater detail. This module takes a closer look at the Schrödinger equation that defines the energies and probable positions of electrons within atoms. Using the hydrogen atom as an example, the module explains how orbitals can be described by a particular type of wave function. Evidence for orbitals and the quantum model is provided by the absorption and emission spectra of hydrogen. Other concepts include multi-electron atoms, the Aufbau Principle, and Hund’s Rule.
The wave-particle nature of electrons means that their position and momentum cannot be described in simple physical terms but must be described by wave functions.
The Schrödinger equation describes how the wave function of a wave-particle changes with time in a similar fashion to the way Newton’s second law describes the motion of a classical particle. The equation allows the calculation of each of the three quantum numbers related to individual atomic orbitals (principal, azimuthal, and magnetic).
The Heisenberg uncertainty principle establishes that an electron’s position and momentum cannot be precisely known together; instead we can only calculate statistical likelihood of an electron’s location.
The discovery of electron spin defines a fourth quantum number independent of the electron orbital but unique to an electron. The Pauli exclusion principle states that no two electrons with the same spin can occupy the same orbital.
Quantum numbers, when taken as a set of four (principal, azimuthal, magnetic and spin) describe acceptable solutions to the Schrödinger equation, and as such, describe the most probable positions of electrons within atoms.
Orbitals can be thought of as the three dimensional areas of space, defined by the quantum numbers, that describe the most probable position and energy of an electron within an atom.
- HS-C1.4, HS-C4.4, HS-PS1.A2, HS-PS2.B3
- Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 43(3-4), 172-198.
- Pauli, W. (1925). Ueber den Einfluss der Geschwindigkeitsabhaengigkeit der Elektronenmasse auf den Zeeman-Effekt. Zeitschrift für Physik 31(1), 373-385.
- Pauli, W. (1925). Ueber den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren. Zeitschrift für Physik 31(1), 765-783.
- Pauli, W. (1946). Remarks on the history of the Exclusion Principle. Science, New Series, 103(2669), 213-215.
- Schrödinger, E. (1926). Quantisierung als Eigenwertproblem. Annalen der Physik, 384(4), 361-376.
- Stoner, E.C. (1924). The distribution of electrons among atomic energy levels. The London, Edinburgh and Dublin Philosophical Magazine (6th series), 48, 719-736.
Adrian Dingle, B.Sc., Anthony Carpi, Ph.D. “Atomic Theory IV” Visionlearning Vol. CHE-3 (7), 2016.
Physical States and Properties
If you’ve ever changed an old incandescent light bulb, you might have noticed what looks like black powder coating the inside of the bulb. That black coating is actually metal atoms that escaped from the bulb’s tungsten filament and condensed on its glass (Figure 1). While this little bit of tungsten residue is annoying for modern people who like to read at night, in the early 1900s light bulbs used to burn out their filaments and turn black very quickly. Then in 1913, the American chemist Irving Langmuir figured out a surprising solution to keep bulbs burning bright: fill the bulb with an inert, non-toxic gas called argon.
Figure 1: A new, clear light bulb compared with a blackened one. The black coating is from metal atoms that escaped from the bulb’s tungsten filament and condensed on its glass. image © new bulb, Dave Gough / burned-out bulb, Paul Cowan
Before Langmuir, manufacturers made light bulbs with a vacuum inside to prevent oxygen from contacting the filament. This was because when current ran through the filament, it heated to 3,000°C—hot enough to oxidize the metal in the filament. While this temperature helpfully caused the filament to radiate visible light, it occasionally caused a tungsten atom to sublime (change directly from solid to gas phase) off the filament and onto the bulb’s glass, deteriorating the filament and blackening the bulb.
Langmuir figured out that by filling the bulb with argon gas, the tungsten atoms would take much longer to blacken the bulb. Instead of streaking straight towards the glass walls, they would collide and bounce off the argon atoms, sometimes even ricocheting back into the filament.
Langmuir was able to solve the problem of blackening light bulbs because he was familiar with kinetic-molecular theory (KMT). By making several assumptions about the motion and energy of molecules, KMT provides scientists with a useful framework for understanding how the behavior of molecules influences the behaviors of different states of matter, particularly the gas state. As the story of Langmuir’s light bulbs shows, this framework can be a useful tool for understanding and solving real-world problems. But KMT hasn’t always existed: When Langmuir figured out how to make light bulbs last longer in 1913, he was relying on many centuries of work by scientists who had developed the assumptions at the core of modern KMT.
In the 17th century, the Italian mathematician Evangelista Torricelli built the first mercury barometer by filling a glass tube sealed at one end with mercury and then inverting the open end into a tub full of the liquid metal. To the surprise of his contemporaries, the tube remained partially filled—almost as if something was pushing down on the mercury in the tub, and forcing the liquid metal up the tube (Figure 2). Most significantly, the level at which mercury rose in the tube changed from day to day, challenging scientists to explain how mercury was forced up the closed glass tube.
Figure 2: An example of Evangelista Torricelli’s experiment with a mercury barometer, where he filled a glass tube sealed at one end with mercury and then inverted the open end into a tub full of the liquid metal. The tube of mercury remained partially filled even though inverted. image © Technica Curiosa […] by Gasparis Schott
The British scientists Robert Boyle and Robert Hooke devised an experiment to figure out what was pressing down on the mercury. Working with a candy cane-shaped glass tube that had its short leg sealed off, Boyle poured in just enough mercury to fill the tube’s curve and trap air inside the short leg. When he poured in still more mercury, Boyle saw that while the volume of the trapped air shrank, the air somehow pushed against the mercury and forced it back up the long leg. Boyle reasoned that air must also weigh down on the mercury in Torricelli’s tub and exert the force driving mercury partially up the tube (to learn more about Boyle’s experiment, see our module Properties of Gases).
But how could air—which many scientists had regarded as an indivisible element—weigh down on mercury? By the early 18th century, scientists realized that air was made up of tiny particles. However, these same scientists couldn’t imagine air particles just floating in space. They assumed that the particles vibrated and spun while being held in place by an invisible substance called ether.
The Swiss mathematician Daniel Bernoulli had a different idea about how particles could be suspended in air. In his 1738 book Hydrodynamica, Bernoulli sketched out a thought experiment illustrating how the linear motion of air particles could exert pressure. Bernoulli first asked his readers to imagine a cylinder fitted with a movable piston. Next, he directed the reader to picture the air particles as tiny spheres that zipped around in all directions, colliding with each other and the piston. These numerous, constant collisions would “kick” the piston aloft. Furthermore, Bernoulli suggested that if the air was heated, the particles would zoom faster, striking the piston more often and kicking the piston still higher in the cylinder.
With his thought experiment, Bernoulli was the first to develop several assumptions about molecules and heat that are integral to modern KMT. Like modern KMT, Bernoulli assumed that molecules behave like tiny spheres in constant linear motion. Working from this assumption, he reasoned that the molecules would constantly collide with each other and the walls of a container, thereby exerting pressure on these walls. Importantly, he also assumed that heat affects the movement of molecules.
Though largely correct, Bernoulli’s ideas were mostly dismissed by contemporary scientists. He didn’t offer any experimental data to support them, and accepting them required a literal belief in atoms, which many scientists doubted well into the 19th century. Furthermore, equating motion with temperature implied that there must be an absolute minimum temperature at which all motion ceased.
And finally, Bernoulli’s kinetic theory competed with caloric theory, a prominent idea at the time. According to caloric theory, caloric was a “heat substance” that engulfed gas molecules, causing them to repel each other so forcefully they would shoot into the walls of a container. This competing idea about what caused air pressure was championed by influential chemists such as Antoine Lavoisier and John Dalton, while Bernoulli’s idea was largely ignored until the 19th century. Unfortunately, it is not uncommon for good ideas to take time to be accepted in science as previously accepted theories are disproven. However, over time, scientific progress assures that theories which better explain the data collected take hold, as Bernoulli’s did.
Unlike Lavoisier and Dalton, the 19th century German physicist Rudolf Clausius rejected caloric theory. Instead of regarding heat as a substance that surrounds molecules, Clausius proposed that heat is a form of energy that affects the temperature of matter by changing the motion of molecules in matter. This kinetic theory of heat enabled Clausius to study and predict the flow of heat—a field we now call thermodynamics (for more information, see our module Thermodynamics I).
In his 1857 paper, “On the nature of the motion which we call heat,” Clausius speculated on how heat energy, temperature, and molecule motion could explain gas behavior. In doing this, he proposed several ideas about the molecules of gases. These ideas have come to be accepted for ideal gases—theoretical gases that perfectly obey the ideal gas equation (for more information, see our Properties of Gases module). First, Clausius proposed that the space taken up by ideal gas molecules should be regarded as infinitesimal when compared to the space occupied by the whole gas—in other words, a gas consists mostly of empty space. Second, he suggested that the intermolecular forces between molecules should be treated as infinitesimal.
A key part of Clausius’s ideas was his work on the mathematical relationship between heat, temperature, molecule motion, and kinetic energy—the energy of motion. He proposed that the net kinetic energy of the molecules in an ideal gas is directly proportional to the gas’s absolute temperature, T. Its kinetic energy, Ek, is therefore determined by the number of gas molecules, n, which each have a molecule mass of m and are moving with velocity u, as shown below:

Ek = ½ n m u²
With this equation and observational data on the weights and volumes of gases at specific temperatures, Clausius was able to calculate the average speeds of gas molecules such as oxygen (an astonishing 461 m/s!). However, the Dutch meteorologist Christoph Hendrik Diederik Buys-Ballot quickly pointed out a problem with these speed calculations. If gas molecules moved hundreds of meters per second, then an odorous gas (like in perfume) should spread across a room almost instantly. Instead, perfumes and other scents usually took several minutes to reach people across a room. This suggested that either Clausius’s mathematical relationship was wrong, or that something more complicated was happening with real gas molecules.
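Clausius's oxygen figure can be reproduced from the root-mean-square speed, u = √(3RT/M), which follows from setting average kinetic energy proportional to absolute temperature. A minimal Python sketch (function names are illustrative):

```python
# Estimate average molecular speeds the way Clausius did, via the
# root-mean-square speed u_rms = sqrt(3RT/M).

import math

R = 8.314  # J/(mol K), gas constant

def rms_speed(molar_mass_kg, temp_k):
    """Root-mean-square molecular speed in m/s."""
    return math.sqrt(3 * R * temp_k / molar_mass_kg)

# Oxygen (O2, 0.032 kg/mol) at 0 degrees C (273 K):
print(round(rms_speed(0.032, 273)))  # ~461 m/s, Clausius's astonishing figure
```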
Buys-Ballot’s objection forced Clausius to re-think his ideas about gas molecules. If a gas molecule could travel at 461 m/s, but still take minutes to cross a room, it must be encountering lots of obstacles—such as other gas molecules. Clausius realized then that one of his core ideas about ideal gas molecules had to change.
In 1859, Clausius published a paper proposing that, instead of being infinitesimally small, a gas molecule had to be big enough that it could collide with other molecules, and there had to be so many fast-moving molecules present that it couldn’t travel far before doing so. The average distance that a molecule travels between collisions has come to be known as its mean free path. Clausius realized that while the mean free path must be very big compared to the actual size of the molecules, it would still have to be small enough that a fast-moving molecule would collide with other molecules many times each second (Figure 3).
Figure 3: A simplified illustration of a molecule’s (blue dot) mean free path, which is the average distance that a molecule travels between collisions (dotted lines). The solid line indicates the actual distance between the beginning and end of the molecule’s journey.
Thus, gas molecules are constantly colliding and changing directions. While it’s tempting to picture molecules colliding every few seconds like bumper cars at an amusement park, the collisions are far more frequent. For example, at room temperature, one oxygen molecule travels an average distance of 67 nm (almost 1,500 times narrower than the width of a human hair) before colliding with another molecule. And this single molecule collides with others 7.2 billion times per second! This astounding frequency of collisions explains how gas molecules can zip along at hundreds of meters per second, but still take minutes to cross a room.
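As a rough numerical check on these figures, the standard kinetic-theory expression for the mean free path, λ = kT/(√2 π d² P), can be evaluated in a few lines. The molecular diameter used for oxygen below is an assumed textbook-style value, so the results are order-of-magnitude only:

```python
# Order-of-magnitude check on the mean free path and collision frequency
# of an oxygen molecule at room temperature and 1 atm.

import math

K_B = 1.381e-23  # J/K, Boltzmann constant
R = 8.314        # J/(mol K), gas constant

def mean_free_path(temp_k, pressure_pa, diameter_m):
    """Average distance (m) a molecule travels between collisions."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m ** 2 * pressure_pa)

def collision_rate(molar_mass_kg, temp_k, mfp_m):
    """Collisions per second: mean molecular speed divided by mean free path."""
    mean_speed = math.sqrt(8 * R * temp_k / (math.pi * molar_mass_kg))
    return mean_speed / mfp_m

# Oxygen at 298 K and 101325 Pa, with an assumed diameter of ~3.6e-10 m:
lam = mean_free_path(298, 101325, 3.6e-10)   # roughly 7e-8 m (~70 nm)
rate = collision_rate(0.032, 298, lam)        # on the order of billions per second
```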
Clausius’s idea about mean free path was vital to how Langmuir solved the blackening light bulb problem. Thanks to Clausius, Langmuir understood that he needed to decrease the mean free path of the tungsten atoms subliming off the filament. In a vacuum, the mean free path was very long, and the tungsten atoms quickly made their way from the filament to the inside of the bulb. By adding an inert gas like argon, Langmuir increased the number of molecules in the bulb and the frequency of collisions—thereby decreasing the mean free path and increasing the bulb’s life.
Clausius’s ideas about mean free path, molecule motion, and kinetic energy intrigued James Clerk Maxwell, a contemporary Scottish physicist. In 1860, Maxwell published his first kinetic theory paper expanding on Clausius’s work. Building on Clausius’s calculations for the average speed of oxygen and other molecules, Maxwell further developed the idea that the gas molecules in a sample of gas are moving at different speeds. Within this range of possible speeds, some molecules are slower than the average and some faster (Figure 4); furthermore, a molecule’s speed can change when it collides with another molecule. Only a tiny number of the gas molecules are actually moving at the slowest and fastest speeds possible—but we know now that this small number of speedy molecules are especially important, because they are the most likely molecules to undergo a chemical reaction.
Figure 4: Since heavier molecules move more slowly than lighter molecules, the heavier molecules (xenon, argon) have a narrow speed distribution, while the lighter molecules (neon, helium) have a more spread-out speed distribution. image © Pdbailey
Along with these ideas, Maxwell proposed that gas particles should be treated mathematically as spheres that undergo perfectly elastic collisions. This means that the net kinetic energy of the spheres is the same before and after they collide, even if their velocities change. Together, Clausius and Maxwell developed several key assumptions that are vital to modern KMT.
A major use of modern KMT is as a framework for understanding gases and predicting their behavior. KMT links the microscopic behaviors of ideal gas molecules to the macroscopic properties of gases. In its current form, KMT makes five assumptions about ideal gas molecules:
- Gases consist of many molecules in constant, random, linear motion.
- The volume of all the molecules is negligible compared to the gas’s total volume.
- Intermolecular forces are negligible.
- The average kinetic energy of all molecules does not change, so long as the gas’s temperature is constant. In other words, collisions between molecules are perfectly elastic.
- The average kinetic energy of all molecules is proportional to the absolute temperature of the gas. This means that, at any temperature, gas molecules in equilibrium have the same average kinetic energy (but NOT the same velocity and mass).
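The last assumption can be illustrated numerically: at a given temperature, all gases share the same average kinetic energy per molecule, (3/2)kT, so lighter molecules must move faster. A short sketch (names are illustrative; helium and xenon chosen arbitrarily):

```python
# At a fixed temperature, every gas has the same average kinetic energy
# per molecule, so lighter molecules compensate with higher speeds.

import math

R = 8.314        # J/(mol K), gas constant
K_B = 1.381e-23  # J/K, Boltzmann constant

def avg_kinetic_energy(temp_k):
    """Average translational kinetic energy per molecule (J): (3/2) k T."""
    return 1.5 * K_B * temp_k

def rms_speed(molar_mass_kg, temp_k):
    """Root-mean-square molecular speed in m/s."""
    return math.sqrt(3 * R * temp_k / molar_mass_kg)

T = 298
# Helium (0.004 kg/mol) vs. xenon (0.131 kg/mol): identical average energy,
# very different speeds.
print(avg_kinetic_energy(T))
print(rms_speed(0.004, T), rms_speed(0.131, T))  # helium is much faster
```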
With KMT’s assumptions, scientists are able to describe on a molecular level the behaviors of gases. These behaviors are common to all gases because of the relationships between gas pressure, volume, temperature, and amount, which are described and predicted by the gas laws (for more on the gas laws, please see our Properties of Gases module). But KMT and the gas laws are useful for understanding more than abstract ideas about chemistry. With KMT and the gas laws, we can better understand the behaviors of real gases, such as the air we use to inflate tires, as we’ll explore more below.
Boyle’s law describes how, for a fixed amount of gas, its volume is inversely proportional to its pressure (Figure 5). This means that if you took all the air from a fully inflated bike tire and put the air inside a much larger, empty car tire, the air would not be able to exert enough pressure to inflate the car tire. While this example about the relationship between gas volume and pressure may seem intuitive, KMT can help us understand the relationship on a molecular level. According to KMT, air pressure depends on how often and how forcefully air molecules collide with tire walls. So when the volume of the container increases (like when we transfer air from the bike tire to the car tire), the air molecules have to travel farther before they can collide with the tire’s walls. This means that there are fewer collisions per unit of time, which results in lower pressure (and an underinflated car tire).
Figure 5: Boyle’s law states that so long as temperature is kept constant, the volume of a fixed amount of gas is inversely proportional to the pressure placed on the gas.
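Boyle's law can be applied directly to the tire example. The tire volumes below are made-up illustrative numbers, not real measurements:

```python
# Boyle's law for a fixed amount of gas at constant temperature:
# P1 * V1 = P2 * V2, so pressure falls as volume grows.

def boyle_new_pressure(p1, v1, v2):
    """Pressure after an isothermal volume change of a fixed gas sample."""
    return p1 * v1 / v2

# Air at 60 psi in a (hypothetical) 2 L bike tire, moved into a 30 L car tire:
print(boyle_new_pressure(60, 2, 30))  # 4.0 psi: far too low to inflate it
```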
Charles’s law describes how a gas’s volume is directly proportional to its absolute temperature (Figure 6)—and also why your car tire pressure increases the longer you drive (and thus why you should always measure tire pressure after your car has been parked for a long period). Using KMT, we can understand that as the friction between the tires and road raises the air temperature inside the tires, the air molecules’ kinetic energy and speed are also increasing. Because the molecules are zipping around faster, they collide more frequently and more forcefully with the tire’s walls, thereby increasing the pressure. Both Charles’s law and KMT also explain why tire pressure decreases after you park. As the tires cool down, the air molecules move more slowly and collide less with the tire’s walls, thereby exerting less pressure.
Figure 6: Charles’s law states that when pressure is kept constant, a fixed amount of gas linearly increases its volume as its temperature increases.
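Because a tire's volume is nearly fixed, the pressure rise with temperature is most directly computed with the constant-volume companion of Charles's law, P1/T1 = P2/T2 (temperatures in kelvin). The tire figures below are illustrative, not measured values:

```python
# Pressure change of a fixed volume of gas as its temperature changes:
# P1 / T1 = P2 / T2, with temperatures in kelvin.

def pressure_after_heating(p1, t1_k, t2_k):
    """New pressure at constant volume after a temperature change."""
    return p1 * t2_k / t1_k

# A tire at 32 psi while parked at 290 K; driving warms the air to 320 K:
p2 = pressure_after_heating(32.0, 290, 320)
print(round(p2, 1))  # ~35.3 psi: warmer air, faster molecules, more pressure
```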
While KMT is a useful tool for understanding the linked behaviors of molecules and matter, particularly gases, KMT does have limitations related to how its theoretical assumptions differ from the behavior of real matter. In particular, KMT’s assumptions that intermolecular forces are negligible, and the volume of molecules is negligible, aren’t always valid. Real gas molecules do experience intermolecular forces. As pressure on a real gas increases and forces its molecules closer together, the molecules can attract one another. This attraction slows down the molecules just a little bit before they slam into one another or the walls of a container, so that the pressure inside a container of real gas molecules is slightly lower than we would expect based on KMT. These intermolecular forces are particularly influential when gas molecules are moving more slowly, such as at low temperature.
While growing pressure on a real gas initially allows its intermolecular forces to have more influence, a different factor takes over as the pressure continues to grow. While KMT assumes that gas molecules have no volume, real gas molecules do have volume. This gives a real gas a greater volume at high pressure than KMT would predict. Furthermore, as a real gas is compressed, the mean free path of its molecules decreases and the molecules collide more often—thereby increasing the pressure exerted by a real gas compared to KMT’s prediction.
Ultimately, KMT is most useful and accurate when gases are under conditions that cause molecules to behave consistently with KMT’s assumptions. These conditions often happen at low pressure, where molecules have lots of empty space to move in, and the molecule volumes are very small compared to the total volume. And the conditions often occur at high temperature, when the molecules possess a high kinetic energy and fast speed, which lets them overcome the attractive forces between molecules.
Ultimately, KMT provides assumptions about molecule behavior that can be used both as the basis for other theories about molecules, and to solve real-world problems. Clausius’ concept of a molecule’s mean free path underlies our modern ideas of diffusion and Brownian motion, which can explain why scents from perfume or baking cookies take so long to cross a room. And while Langmuir used KMT’s assumptions to develop a longer-lasting light bulb, still other scientists are deploying their knowledge of KMT in more dramatic and controversial ways. By understanding how real gas molecules behave and move, scientists are able to separate gas molecules from each other based on tiny differences in mass—a key principle behind, for example, how uranium isotopes are enriched for use in nuclear weapons.
Over four hundred years, scientists including Rudolf Clausius and James Clerk Maxwell developed the kinetic-molecular theory (KMT) of gases, which describes how molecule properties relate to the macroscopic behaviors of an ideal gas—a theoretical gas that always obeys the ideal gas equation. KMT provides assumptions about molecule behavior that can be used both as the basis for other theories about molecules and to solve real-world problems.
Kinetic-molecular theory states that molecules have an energy of motion (kinetic energy) that depends on temperature.
Rudolf Clausius developed the kinetic theory of heat, which relates energy in the form of heat to the kinetic energy of molecules.
Over four hundred years, scientists have developed the kinetic-molecular theory of gases, which describes how molecule properties relate to the macroscopic behaviors of an ideal gas—a theoretical gas that always obeys the ideal gas equation.
The kinetic-molecular theory of gases assumes that ideal gas molecules (1) are constantly moving; (2) have negligible volume; (3) have negligible intermolecular forces; (4) undergo perfectly elastic collisions; and (5) have an average kinetic energy proportional to the ideal gas’s absolute temperature.
Megan Cartwright, Ph.D. “Kinetic-Molecular Theory” Visionlearning Vol. CHE-4 (2), 2017.
Reactions and Changes
Traditional chemical reactions occur as a result of the interaction between valence electrons around an atom’s nucleus (see our Chemical Reactions module for more information). In 1896, Henri Becquerel expanded the field of chemistry to include nuclear changes when he discovered that uranium emitted radiation. Soon after Becquerel’s discovery, Marie Sklodowska Curie began studying radioactivity and completed much of the pioneering work on nuclear changes. Curie found that radiation was proportional to the amount of radioactive element present, and she proposed that radiation was a property of atoms (as opposed to a chemical property of a compound). Marie Curie was the first woman to win a Nobel Prize and the first person to win two (the first, shared with her husband Pierre Curie and Henri Becquerel for discovering radioactivity; the second for discovering the radioactive elements radium and polonium).
In 1902, Frederick Soddy proposed the theory that “radioactivity is the result of a natural change of an isotope of one element into an isotope of a different element.” Nuclear reactions involve changes in particles in an atom’s nucleus and thus cause a change in the atom itself. All elements heavier than bismuth (Bi) (and some lighter) exhibit natural radioactivity and thus can “decay” into lighter elements. Unlike normal chemical reactions that form molecules, nuclear reactions result in the transmutation of one element into a different isotope or a different element altogether (remember that the number of protons in an atom defines the element, so a change in protons results in a change in the atom). There are three common types of radiation and nuclear changes:
Alpha Radiation (α) is the emission of an alpha particle from an atom’s nucleus. An α particle contains two protons and two neutrons, and is identical to a helium nucleus (⁴₂He). When an atom emits an α particle, the atom’s atomic mass will decrease by four units (because two protons and two neutrons are lost) and the atomic number (z) will decrease by two units. The element is said to “transmutate” into another element that is two z units smaller. An example of an α transmutation takes place when uranium decays into the element thorium (Th) by emitting an alpha particle, as depicted in the following equation:

²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He
(Note: in nuclear chemistry, element symbols are traditionally preceded by their mass number [upper left] and atomic number [lower left].)
Beta Radiation (β) is the transmutation of a neutron into a proton and an electron, followed by the emission of the electron (e⁻) from the atom’s nucleus. When an atom emits a β particle, the atom’s mass will not change (since there is no change in the total number of nuclear particles); however, the atomic number will increase by one (because the neutron transmutated into an additional proton). An example of this is the decay of the isotope of carbon called carbon-14 into the element nitrogen:
¹⁴₆C → ¹⁴₇N + e⁻
Gamma Radiation (γ) involves the emission of electromagnetic energy (similar to light energy) from an atom’s nucleus. No particles are emitted during gamma radiation, and thus gamma radiation does not itself cause the transmutation of atoms; however, γ radiation is often emitted during, and simultaneous to, α or β radioactive decay. The gamma rays emitted during the beta decay of cobalt-60 are a common example of gamma radiation.
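The bookkeeping behind these three decay modes can be sketched in a few lines of Python. This is an illustrative toy model (not part of the original module): each isotope is represented simply as a (mass number, atomic number) pair, and the function names are hypothetical.

```python
# Toy model of how each decay mode changes a nucleus, tracked as
# (A, Z) = (mass number, atomic number). Illustrative sketch only.

def alpha_decay(a, z):
    # Alpha emission removes 2 protons and 2 neutrons.
    return a - 4, z - 2

def beta_decay(a, z):
    # Beta emission turns a neutron into a proton: A unchanged, Z + 1.
    return a, z + 1

def gamma_emission(a, z):
    # Gamma radiation emits energy only: no change to A or Z.
    return a, z

print(alpha_decay(238, 92))    # (234, 90): uranium-238 -> thorium-234
print(beta_decay(14, 6))       # (14, 7): carbon-14 -> nitrogen-14
print(gamma_emission(60, 27))  # (60, 27): cobalt-60, unchanged
```

Note the pattern the text describes: only α decay changes the mass number, and only α and β decay change the element.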
Radiation can result in an atom having a different atomic number.
Radioactive decay proceeds according to a principle called the half-life. The half-life (T½) is the amount of time necessary for one-half of the radioactive material to decay. For example, the radioactive element bismuth (210Bi) can undergo alpha decay to form the element thallium (206Tl) with a reaction half-life equal to five days. If we begin an experiment starting with 100 g of bismuth in a sealed lead container, after five days we will have 50 g of bismuth and 50 g of thallium in the jar. After another five days (ten from the starting point), one-half of the remaining bismuth will decay and we will be left with 25 g of bismuth and 75 g of thallium in the jar. As illustrated, the reaction proceeds in halves, with half of whatever is left of the radioactive element decaying every half-life period.
Radioactive Decay of Bismuth-210 (T½ = 5 days)
The fraction of parent material that remains after radioactive decay can be calculated using the equation:
Fraction remaining = 1 / 2ⁿ (where n = number of half-lives elapsed)
The amount of a radioactive material that remains after a given number of half-lives is therefore:
Amount remaining = Original amount × Fraction remaining
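Plugging the bismuth-210 example into these two formulas, a short Python sketch (with hypothetical helper names) reproduces the numbers above:

```python
# Fraction remaining = 1 / 2**n, where n = number of half-lives elapsed.

def fraction_remaining(elapsed, half_life):
    n = elapsed / half_life            # half-lives elapsed
    return 1 / 2 ** n

def amount_remaining(original, elapsed, half_life):
    return original * fraction_remaining(elapsed, half_life)

# 100 g of Bi-210 (T1/2 = 5 days):
print(amount_remaining(100, 5, 5))     # 50.0 g left after 5 days
print(amount_remaining(100, 10, 5))    # 25.0 g left after 10 days
```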
The decay reaction and T½ of a substance are specific to the isotope of the element undergoing radioactive decay. For example, 210Bi can undergo α decay to 206Tl with a T½ of five days. 215Bi, by comparison, undergoes β decay to 215Po with a T½ of 7.6 minutes, and 208Bi undergoes yet another mode of radioactive decay (called electron capture) with a T½ of 368,000 years!
All radioactive material decays at the same rate.
While many elements undergo radioactive decay naturally, nuclear reactions can also be stimulated artificially. Although these reactions also occur naturally, we are most familiar with them as stimulated reactions. There are two such types of nuclear reactions:
1) Nuclear fission: reactions in which an atom’s nucleus splits into smaller parts, releasing a large amount of energy in the process. Most commonly this is done by “firing” a neutron at the nucleus of an atom. The energy of the neutron “bullet” causes the target element to split into two (or more) elements that are lighter than the parent atom.
The Fission Reaction of Uranium-235
During the fission of U235, three neutrons are released in addition to the two daughter products. If these released neutrons collide with nearby U235 nuclei, they can stimulate the fission of these atoms and start a self-sustaining nuclear chain reaction. This chain reaction is the basis of nuclear power. As uranium atoms continue to split, a significant amount of energy is released from the reaction. The heat released during this reaction is harvested and used to generate electrical energy.
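To see why the reaction is self-sustaining, consider an idealized chain reaction in which every released neutron triggers another fission. The following toy calculation (a sketch under that assumption, not a reactor model) shows the resulting geometric growth:

```python
# Idealized U-235 chain reaction: each fission releases 3 neutrons, and in
# this toy model every neutron causes one new fission in the next generation.

NEUTRONS_PER_FISSION = 3

def fissions_in_generation(generation):
    # Generation 0 is the single initial fission.
    return NEUTRONS_PER_FISSION ** generation

for g in range(5):
    print(f"generation {g}: {fissions_in_generation(g)} fissions")
# generations 0..4 -> 1, 3, 9, 27, 81 fissions
```

In a real power plant the growth is held in check: neutron-absorbing materials keep the average number of neutrons that go on to cause new fissions at about one, so the reaction proceeds steadily rather than explosively.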
2) Nuclear fusion: reactions in which two or more elements “fuse” together to form one larger element, releasing energy in the process. A good example is the fusion of two “heavy” isotopes of hydrogen (deuterium, ²H, and tritium, ³H) into the element helium.
Nuclear Fusion of Two Hydrogen Isotopes
Fusion reactions release tremendous amounts of energy and are commonly referred to as thermonuclear reactions. Although many people think of the sun as a large fireball, the sun (and all stars) are actually enormous fusion reactors. Stars are primarily gigantic balls of hydrogen gas under tremendous pressure due to gravitational forces. Hydrogen nuclei are fused into helium and heavier elements inside of stars, releasing energy that we receive as light and heat.
Beginning with the work of Marie Curie and others, this module traces the development of nuclear chemistry. It describes different types of radiation: alpha, beta, and gamma. The module then applies the principle of half-life to radioactive decay and explains the difference between nuclear fission and nuclear fusion.
- HS-C5.5, HS-PS1.C1, HS-PS3.A1
Anthony Carpi, Ph.D. “Nuclear Chemistry” Visionlearning Vol. CHE-2 (3), 2003.
To understand life as we know it, we must first understand a little bit of organic chemistry. Organic molecules contain both carbon and hydrogen. Though many organic chemicals also contain other elements, it is the carbon-hydrogen bond that defines them as organic. Organic chemistry defines life. Just as there are millions of different types of living organisms on this planet, there are millions of different organic molecules, each with different chemical and physical properties. There are organic chemicals that make up your hair, your skin, your fingernails, and so on. The diversity of organic chemicals is due to the versatility of the carbon atom. Why is carbon such a special element? Let’s look at its chemistry in a little more detail.
Carbon (C) appears in the second row of the periodic table and has four bonding electrons in its valence shell (see our Periodic Table module for more information). Similar to other non-metals, carbon needs eight electrons to satisfy its valence shell. Carbon therefore forms four bonds with other atoms (each bond consisting of one of carbon’s electrons and one of the bonding atom’s electrons). Every valence electron participates in bonding; thus, a carbon atom’s bonds will be distributed evenly over the atom’s surface. These bonds form a tetrahedron (a pyramid with four triangular faces), as illustrated below:
Carbon forms 4 bonds
Organic chemicals get their diversity from the many different ways carbon can bond to other atoms. The simplest organic chemicals, called hydrocarbons, contain only carbon and hydrogen atoms; the simplest hydrocarbon (called methane) contains a single carbon atom bonded to four hydrogen atoms:
Methane – a carbon atom bonded to 4 hydrogen atoms
But carbon can bond to other carbon atoms in addition to hydrogen, as illustrated in the molecule ethane below:
Ethane – a carbon-carbon bond
In fact, the uniqueness of carbon comes from the fact that it can bond to itself in many different ways. Carbon atoms can form long chains:
Hexane – a 6-carbon chain
Isohexane – a branched-carbon chain
Cyclohexane – a ringed hydrocarbon
There appears to be almost no limit to the number of different structures that carbon can form. To add to the complexity of organic chemistry, neighboring carbon atoms can form double and triple bonds in addition to single carbon-carbon bonds:
|Single bonding (C–C)||Double bonding (C=C)||Triple bonding (C≡C)|
Keep in mind that each carbon atom forms four bonds. As the number of bonds between any two carbon atoms increases, the number of hydrogen atoms in the molecule decreases (as can be seen in the figures above).
__________ can form long chains, branched chains, and rings.
The simplest hydrocarbons are those that contain only carbon and hydrogen. These simple hydrocarbons come in three varieties depending on the type of carbon-carbon bonds that occur in the molecule.
Alkanes are the first class of simple hydrocarbons and contain only carbon-carbon single bonds. The alkanes are named by combining a prefix that describes the number of carbon atoms in the molecule with the root ending “ane”. The names and prefixes for the first ten alkanes are given in the following table.
|Name||Prefix||Chemical formula||Structural formula|
|Methane||meth-||CH4||CH4|
|Ethane||eth-||C2H6||CH3CH3|
|Propane||prop-||C3H8||CH3CH2CH3|
|Butane||but-||C4H10||CH3CH2CH2CH3|
|Pentane||pent-||C5H12||CH3CH2CH2CH2CH3|
|Hexane||hex-||C6H14|| |
|Heptane||hept-||C7H16|| |
|Octane||oct-||C8H18|| |
|Nonane||non-||C9H20|| |
|Decane||dec-||C10H22|| |
The chemical formula for any alkane is given by the expression CnH2n+2. The structural formula, shown for the first five alkanes in the table, shows each carbon atom and the elements that are attached to it. This structural formula is important when we begin to discuss more complex hydrocarbons. The simple alkanes share many properties in common. All enter into combustion reactions with oxygen to produce carbon dioxide and water vapor. In other words, many alkanes are flammable. This makes them good fuels. For example, methane is the main component of natural gas, and butane is common lighter fluid.
Combustion: the chemical reaction between a fuel (for example, wood) and an oxidizing agent.
The second class of simple hydrocarbons, the alkenes, consists of molecules that contain at least one double-bonded carbon pair. Alkenes follow the same naming convention used for alkanes. A prefix (to describe the number of carbon atoms) is combined with the ending “ene” to denote an alkene. Ethene, for example, is the two-carbon molecule that contains one double bond. The chemical formula for the simple alkenes follows the expression CnH2n. Because one of the carbon pairs is double bonded, simple alkenes have two fewer hydrogen atoms than alkanes.
Alkynes are the third class of simple hydrocarbons and are molecules that contain at least one triple-bonded carbon pair. Like the alkanes and alkenes, alkynes are named by combining a prefix with the ending “yne” to denote the triple bond. The chemical formula for the simple alkynes follows the expression CnH2n-2.
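The three general formulas above differ only in their hydrogen counts, which a short Python sketch (the function name is hypothetical) makes explicit:

```python
# Chemical formulas for simple hydrocarbons, from the general expressions in
# the text: alkanes CnH2n+2, alkenes CnH2n, alkynes CnH2n-2.

HYDROGEN_OFFSET = {"alkane": 2, "alkene": 0, "alkyne": -2}

def hydrocarbon_formula(n, family):
    hydrogens = 2 * n + HYDROGEN_OFFSET[family]
    return f"C{n}H{hydrogens}"

print(hydrocarbon_formula(2, "alkane"))  # C2H6 (ethane)
print(hydrocarbon_formula(2, "alkene"))  # C2H4 (ethene)
print(hydrocarbon_formula(2, "alkyne"))  # C2H2 (ethyne)
```

Each added multiple bond "uses up" two bonding positions that would otherwise hold hydrogen atoms, which is why the hydrogen count drops by two from alkane to alkene to alkyne.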
The simplest of hydrocarbons are called
Because carbon can bond in so many different ways, a single molecule can have different bonding configurations. Consider the two molecules illustrated here:
|C6H14||CH3–CH2–CH2–CH2–CH2–CH3|
|C6H14||CH3–CH2–CH(CH3)–CH2–CH3|
Both molecules have identical chemical formulas (shown in the left column); however, their structural formulas (and thus some chemical properties) are different. These two molecules are called isomers. Isomers are molecules that have the same chemical formula but different structural formulas.
When molecules have the same number and type of atoms, they must have the same structure.
In addition to carbon and hydrogen, hydrocarbons can also contain other elements. In fact, many common groups of atoms can occur within organic molecules; these groups of atoms are called functional groups. One good example is the hydroxyl functional group. The hydroxyl group consists of a single oxygen atom bound to a single hydrogen atom (-OH). Hydrocarbons that contain a hydroxyl functional group are called alcohols. The alcohols are named in a similar fashion to the simple hydrocarbons: a prefix is attached to a root ending (in this case “anol”) that designates the alcohol. The existence of the functional group completely changes the chemical properties of the molecule. Ethane, the two-carbon alkane, is a gas at room temperature; ethanol, the two-carbon alcohol, is a liquid.
Ethanol, common drinking alcohol, is the active ingredient in “alcoholic” beverages such as beer and wine.
The chemical basis of all living organisms is linked to the way that carbon bonds with other atoms. This introduction to organic chemistry explains the many ways that carbon and hydrogen form bonds. Basic hydrocarbon nomenclature is described, including alkanes, alkenes, alkynes, and isomers. Functional groups of atoms within organic molecules are discussed.
Anthony Carpi, Ph.D. “Carbon Chemistry” Visionlearning Vol. CHE-2 (4), 2003.
Think of all the amazing substances you use during the first hours of your day. You take a shower with soap and water. You eat your favorite breakfast cereal and brush your teeth with toothpaste. On your way to class, you may grab a drink in an aluminum can from a vending machine.
What are these substances made from? What makes them similar or different from one another? Let’s find out, beginning with a substance you use so often that you probably take it for granted: water.
Have you ever thought about what you are drinking when you drink a glass of water? Take a look at the two glasses in Figure 1. What do they contain? They contain a clear liquid that looks like water. But is the water identical in the two glasses?
Figure 1: Two glasses of water, Glass A and Glass B, that appear the same. Are they the same? image © CC BY-SA 3.0 S Nova
You can’t really tell any differences between the water in the two glasses using your naked eye. However, if you were able to view them at the molecular level, you might see something as shown in Figure 2.
Figure 2: A model of the water in Glass A and in Glass B as viewed at the molecular level. Glass A contains only one type of molecule, while glass B contains a mixture of elements and molecules.
In what ways are the models of water in Glass A and B similar? In what ways are they different?
First, consider Glass A. Glass A contains water “molecules” that all look the same. Molecules are formed by the chemical bonding of two or more “atoms,” and atoms are the smallest particles of “elements.” One exception: some elements exist alone in nature as diatomic molecules, such as nitrogen gas existing as N2.
Do you know the chemical formula for water? It is H2O, meaning that two atoms of the element hydrogen are bonded to one atom of the element oxygen to form each molecule of water. In Figure 2, the white spheres represent hydrogen atoms, and the red spheres represent oxygen atoms. These molecules collectively make up the “compound,” water.
The depiction of water in Glass A (Figure 2) is considered a “pure substance” because only water molecules are present. Pure substances can be made up of atoms or molecules. Atoms are the smallest parts of elements – pure substances found on the periodic table. They cannot be chemically made or separated. Pure substances made up of molecules are called compounds. Compounds are chemically made and can only be chemically separated.
Now look at Glass B. Glass B contains molecules of water mixed with atoms of elements. This water is not pure but part of a mixture. The blue and green spheres in the model represent atoms or ions (atoms that have lost or gained electrons to become charged) of other elements. To figure out the elements, you should know that Glass B contains tap water, which can come from a lake, river, or groundwater source. Tap water is filtered at a water treatment plant to remove many particles from the mixture. Then it is disinfected with the addition of compounds that release chloride ions. Additionally, fluoride ions are often similarly added to tap water to reduce tooth decay. The blue and green spheres in Glass B represent chloride and fluoride ions. Because the particles in the model do not all look the same, Glass B is a model of a mixture. Mixtures can be made up of atoms, molecules, or a combination of both. Additionally, mixtures are physically mixed and can be physically separated. The atoms, molecules, or a combination of the two mixed together do not change but retain their unique characteristics and simply exist in a shared space.
What is another name for a pure substance made up of one type of atom?
You learned that water is considered a pure substance when it consists of only water molecules and nothing else. While you can drink this water, you may not like its taste. It does not have any flavor because there are no minerals in it. This type of water can be made through “distillation” or other purification techniques. In the distillation process, water is heated to form steam through evaporation. The impurities remain behind. Then steam is cooled to form distilled water through condensation.
Figure 3 shows the distillation process in the laboratory setting. Water is boiled in the round-bottom flask on the left. The steam travels upward and down the narrow tube on the right into the condenser. Cold water is forced through the outer part of the condenser tube to cool the steam in the narrow tube. Distilled water drips from the narrow tube into the flask on the right.
Figure 3: The distillation process to purify water. Water, in the flask on the left, is boiled and condenses in the long tube leaving impurities behind. image © CC BY-SA 3.0 William Crochot
You also learned that water is a mixture when it consists of water molecules and other atoms or ions, like chloride and fluoride. The water we drink is a mixture. Remember that water can come from a lake, river, or groundwater source. It is mixed with lots of substances: dust, chemicals, parasites, bacteria, viruses, and more. It must be filtered and disinfected at a water treatment plant to make it safe for your consumption. But when did humans figure out the need to filter and disinfect water?
Ancient Greek, Egyptian, and Sanskrit writings indicate that water was filtered as early as 2000 BCE. Historical records and archaeological sites have indicated that water was treated in ancient Egypt, which is located in the Sahara Desert. Ancient Egyptians harvested rainwater. Rainwater was naturally purified and safe for drinking if it was stored in clean containers.
Rainwater is a “homogeneous mixture” or “solution.” The word “homogeneous” is derived from the Greek roots homos (meaning the same) and genos (meaning kind). When you look at rainwater, it looks the same throughout, but dissolved gases from the atmosphere are mixed in. The atmospheric gases are uniformly mixed or evenly distributed in the water, so rainwater is a solution.
Ancient Egyptians also collected water from the Nile River. The river water was a “heterogeneous mixture,” meaning you could see sediments and tiny organisms mixed in the water. The word “heterogeneous” is derived from the Greek roots heteros (meaning different) and genos (meaning kind).
The ancient Egyptians could see that the Nile River’s water was unclean. They were the first to add the chemical alum, a naturally occurring mineral, to river water because they found that it caused particles to settle at the bottom of the water container. Ancient Egyptians used the cleaner water at the top of the container and discarded the settled matter at the bottom of the container.
The ancient Egyptians also used ceramic filters to clean the river water. They added herbs, seeds, and stones to improve the water quality and taste. Other ancient cultures used methods such as gravel filters, sand filters, and boiling to improve water quality. Regardless of the method, these examples show how river water (a heterogeneous mixture) could be physically separated to provide cleaner water for humans.
The term “heterogeneous” refers to rainwater because it is a mixture of water and dissolved gases.
Ancient civilizations focused on physically separating water from the other particles they could see in the heterogeneous mixture. What was not seen were the disease-causing organisms from animal waste or other sources that can contaminate water. But the causes of these diseases were unknown in ancient times. In fact, many people believed that diseases were caused by humans upsetting the gods.
The disinfection of water began centuries later after a cholera outbreak in London in 1854. Cholera is an intestinal disease caused by bacteria that leads to vomiting and diarrhea. In the mid-1800s, Dr. John Snow, a British medical doctor, mapped the location of each cholera death in the Soho area of London to look for a pattern, as shown in Figure 4.
Figure 4: A representation of the 1854 map of cholera deaths by Snow. The map revealed a pattern of cases occurring near the Broad Street water pump, as shown in red. image © Public Domain
At the time, indoor plumbing was rare. Instead, most people used one of several central pumping stations to get their water or wash household items. By mapping the locations of cholera deaths, Snow discovered something interesting—many of the people who developed cholera lived near and used London’s Broad Street pump.
Snow studied the conditions at the Broad Street pump. He learned that many residents washed their clothes near the pump since it was their only water source. One resident, a mother whose baby suffered from cholera, washed contaminated diapers three feet from the pump. Snow realized the contaminated wash water could leak into the water supply.
Snow appealed to the local governing body, the Board of Guardians, who took action by preventing residents from accessing the Broad Street water supply. With access to the Broad Street pump closed, the number of cholera cases in the city declined quickly. Snow had used his maps, a form of a scientific model, to understand London’s cholera outbreak and take action to control it. By the late 1800s, the British took advantage of another scientific discovery—that chloride ions could be used to control outbreaks of infectious diseases. The British began adding compounds with chloride ions to drinking water to kill the bacteria that caused cholera and other diseases.
The practice of disinfecting water supplies with chloride ions began in the United States shortly after that. Over time, treatment methods and technologies have improved. Today our water supply is constantly monitored to ensure it is safe for human consumption. The Environmental Protection Agency (EPA) sets national standards for safe drinking water in the United States.
How can you find out if the tap water you drink is safe? Communities release consumer confidence reports annually explaining the substances in the water supply. You can find information from your local community about what substances might be present in the homogeneous solution that is your tap water. While water testing has made our drinking water much cleaner and safer than in the 1800s, there are occasional breakdowns in the system, as infamously happened in Flint, Michigan, in the mid-2010s.
What did the British add to their drinking water to kill harmful bacteria that cause cholera and other diseases?
Dr. Mona Hanna-Attisha knows all too well what can happen if governments do not take on the responsibility of treating drinking water. Hanna-Attisha immigrated to the United States as a child refugee from Iraq. She received college degrees in environmental studies, sustainability, and public health before attending medical school. In 2015, Hanna-Attisha was practicing pediatrics in Flint, Michigan, when she visited an old high school friend after work one day. She learned from her friend, a water quality expert, that the water in Flint was not being treated properly and probably contained lead. Hanna-Attisha was alarmed.
She knew that the city water supply had changed over the previous year and a half to save money. Rather than receiving treated water from Detroit, water from the Flint River was pumped to a water treatment plant to provide tap water for residents. However, what was unknown was that the water was not treated correctly—the treatment did not account for the old pipes in the city’s infrastructure made of iron, copper, and lead. High levels of chloride ions were added to kill pathogens in the river water. However, the high levels of chloride ions also caused lead from the pipes to dissolve into the water, forming an invisible solution.
Hanna-Attisha knew that children exposed to lead might be affected both physically and behaviorally. The exposure could impair children’s nervous systems, hearing, growth, and blood cell function and formation. Further, exposed children may have learning problems, including hyperactivity.
Hanna-Attisha knew that she needed data to immediately sound the alarm on the presence of high levels of lead in the tap water. So, she looked at medical records to see if blood lead levels in children across the city had changed since the shift in the water supply. What she found was alarming: Blood lead levels had doubled and even tripled for some children.
Hanna-Attisha knew that publishing her findings would take months for peer review, but this problem was immediate. So, because of her concern for children’s health, Hanna-Attisha risked her career by publicly announcing her findings in 2015 before publishing them. Although she initially faced criticism, ultimately, she was praised for exposing the Flint water crisis. Figure 5 shows the results of Dr. Hanna-Attisha’s study that was published in 2016 (Hanna-Attisha et al. 2016). This bar graph shows the percent of children with elevated blood lead levels outside of Flint (left) and in Flint (right). The blue bars show the percentages before the change in the water source, and the red bars show the percentages after. The percentage of children with elevated blood lead levels increased significantly in all of Flint after the change in the water source.
Figure 5: A bar graph showing the results of Hanna-Attisha’s study. The graph shows that a higher percentage of children in the city of Flint, Michigan had elevated blood lead levels compared to those who lived outside of Flint. image © data from Hanna-Attisha et al. 2016
Although residents had previously complained of the bad taste and smell of the water, city officials ignored the complaints. High levels of E. coli, an indicator of animal or sewage waste contamination, had even prompted a boil-water order several months after the shift in the water supply. Still, the city did not change its supply. The third-largest outbreak of Legionnaires disease in U.S. history occurred in 2015 because of the tainted water supply, killing 12 people and causing more than 87 people to report being ill.
In Flint, Michigan, government safeguards had failed. Part of the problem was that the city overlooked many complaints because of the demographics. Nearly 41% of the city’s population lived below the poverty line. Demographically, about 57% of the population was Black, 37% white, 4% Latino, and 4% mixed race.
Advocates for the victims say that the citizens of Flint experienced environmental racism. These events, and Hanna-Attisha’s research, led to a group of citizens and activists suing the city and state. As a result, the city gave bottled water to residents, improved water testing significantly, and eventually replaced lead-containing pipes.
How did they identify lead in Flint drinking water?
Let’s take what you have learned about water as a substance and use it to build a model for the classification of substances. The terms and examples you learned about water as a substance are used in the model.
Figure 6: This is the classification system used by scientists to describe substances. The system categorizes substances as either pure substances, which are made up of a single type of element or compound, or mixtures of more than one.
Figure 6 shows a classification system that most scientists use to describe substances as pure or mixtures. Pure substances include elements made up of the same atoms and compounds made up of the same molecules. Mixtures are either homogeneous or heterogeneous. They can be made up of atoms, molecules, or a combination of the two.
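The decision process in Figure 6 can be expressed as a tiny Python sketch. The particle-list representation below is an assumption made only for this illustration: each particle is written as a tuple of the element symbols it contains.

```python
# Toy classifier following the Figure 6 scheme: a pure substance (element or
# compound) contains only one kind of particle; anything else is a mixture.

def classify(particles):
    if len(set(particles)) == 1:       # all particles identical -> pure substance
        elements = set(particles[0])
        return "element" if len(elements) == 1 else "compound"
    return "mixture"                   # more than one kind of particle

distilled = [("H", "H", "O")] * 4                # only water molecules
oxygen    = [("O", "O")] * 4                     # only diatomic O2 molecules
tap_water = [("H", "H", "O")] * 3 + [("Cl",)]    # water plus chloride ions

print(classify(distilled))  # compound
print(classify(oxygen))     # element
print(classify(tap_water))  # mixture
```

Note that the sketch cannot distinguish homogeneous from heterogeneous mixtures: as the text explains, that depends on how evenly the particles are distributed, not on which particles are present.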
Remember the description of water treatment in ancient Egypt? We learned that rainwater is a homogeneous mixture (or solution). If you look at a sample of rainwater closely, it appears to look the same throughout, although there are gases dissolved within. River water is a heterogeneous mixture. If you look at a sample of river water closely, you will see sediments and tiny organisms. It does not look the same throughout. Both types of mixtures can be physically separated through distillation or filtration, as described earlier.
Let’s look at some of the substances you use daily and use the model in Figure 6 to classify them. When you use liquid soap in the shower, you use a mixture of water, detergent, oil, fragrance, and color compounds. It is most likely homogeneous because it is the same throughout the container. Toothpaste is fun because it can be either a homogeneous or heterogeneous mixture. Some toothpaste is the same throughout the container—often a chalky white color. These are homogeneous mixtures. However, some types of toothpaste have different colored stripes or sparkles running through the tube. In this case, they are heterogeneous mixtures because they look different throughout. Toothpaste contains fluoride, abrasives, flavoring, glycerol, and detergent compounds.
What about breakfast foods and drinks? They are usually mixtures of compounds, elements, or a combination of both. For example, many breakfast cereals contain whole grains and compounds like sugar, flavoring, vitamins, and minerals. What is sugar by itself when sprinkled on your cereal? It is a compound or pure substance. If you have an energy drink, you are having a homogeneous mixture or solution of mostly the compounds water, sugar, and caffeine. However, the aluminum can that it comes in is a pure substance because aluminum is an element. And the coins you used to buy the drink from a vending machine? They are likely a mixture of copper and nickel—even solid substances can be mixtures.
Let’s see if you recognize the classification of substances based on models. Look at Figure 7.
Figure 7: Models of two different substances at the molecular level. How would you classify these substances?
In Figure 7, the model on the left includes molecules that are all the same. This is a pure substance containing molecules of the same compound. What about the model on the right? There are molecules of two different substances. One is a compound containing two atoms of different elements, represented by the blue and orange colors. The other molecule has two atoms of the same element, as shown in green—a “diatomic” molecule of an element. We can’t tell whether this substance is heterogeneous or homogeneous on this scale. Sometimes it is difficult to classify a substance without more information.
Pure substances can either be elements or compounds. Remember that compounds are chemically made and can be chemically separated; mixtures are physically mixed and can be physically separated. Let’s see if the properties of each substance can help.
Mixtures may include different combinations of elements and compounds.
Pure substances have different characteristics than mixtures. Let’s learn what these characteristics are. Table 1 compares typical properties of pure substances and mixtures to help you recognize some patterns.
Table 1: A comparison table of the properties of pure substances versus mixtures. What patterns do you see between pure substances and mixtures in the table?
|Property||Pure substances||Mixtures|
|Makeup||A single type of element or compound||Two or more elements and/or compounds|
|Chemical formula||Definite chemical formula||No single formula; composition varies|
|How formed||Compounds are chemically made||Physically mixed|
|How separated||Compounds can only be chemically separated||Can be physically separated|
Pure substances are just that—pure. They are made up of a single type of compound or element and thus have a chemical formula defined by that compound or element. Pure substances cannot be separated into parts, except in the case of compounds, which can be chemically broken down into their elements.
Mixtures are combinations of two or more pure substances made by physical means (pouring them together, for example). They can be separated by physical means (like filtering), though sometimes this can take a lot of work. Mixtures have no single chemical formulas because they do not have definite compositions—they are made up of different amounts of elements or compounds.
For example, you may use a recipe to stir a chocolate chip cookie mixture, but the recipe varies according to individual tastes. You physically mix up chocolate chip cookies, but how could you physically separate them? Do you ever pick the chocolate chips out of the dough and eat them? The dough could also be separated by dissolving it, evaporating the water, and sorting the ingredients. The ingredients retain their unique properties in the dough. Chocolate chip cookie dough is a heterogeneous mixture because you can see the particles that make it up.
You have learned about many amazing substances that we use every day. In its purest form, water is a pure substance made up of only molecules containing hydrogen and oxygen. But as this same water travels to your home, it becomes a mixture (or solution). As it moves through a local river or lake, the water picks up particles and organisms that make it a heterogeneous mixture. That heterogeneous mixture is physically separated by filtration at a water treatment plant. Then the water is treated with chlorine and fluoride ions, forming a homogeneous mixture that comes out of our faucets and that you can drink from a glass. Mixtures and pure substances are all around us, and we depend on them for life!
This module explores substances through hypothetical and real-world examples. Substances are broadly classified as pure substances, such as elements and compounds, or mixtures, such as rainwater, but the classification system goes further. How a substance is classified depends on its makeup and properties, and understanding the differences helps scientists solve major issues, such as creating clean drinking water.
Substances can be classified as pure substances or mixtures. This classification helps scientists understand what particular substances are made up of and their properties.
Experiments over many years have helped scientists recognize that pure substances include elements, which cannot be broken down, and compounds, which can be broken down chemically into the elements that make them up. Compounds have a definite composition and are chemically formed.
Mixtures are physical combinations of pure substances that can be homogeneous or heterogeneous. Homogeneous mixtures are also called solutions and look the same throughout. Heterogeneous mixtures have clearly distinguished parts.
Mixtures do not have a definite composition and may be separated physically, such as the distillation of water to separate salts and impurities from pure water.
Elements are made up of atoms and compounds are made up of molecules. Mixtures may be any combination of atoms, molecules, or a combination of the two.
- MS-PS1.A1, MS-PS1.A2, MS-PS1.A3, MS-PS1.A4, MS-PS1.A5, MS-PS1.A6
- Hanna-Attisha, Mona, Jenny LaChance, Richard C. Sadler, and Allison C. Schnepp. “Elevated blood lead levels in children associated with the Flint drinking water crisis: a spatial analysis of risk and public health response.” American journal of public health 106, no. 2 (2016): 283-290. https://doi.org/10.2105/AJPH.2015.303003
Judi Luepke, PhD. “Substances” Visionlearning Vol. CHE-5 (3), 2022.
Physical States and Properties
As a young child, I remember staring in wonder at a pot of boiling water. Searching for an explanation for the bubbles that formed, I believed for a time that the motion of the hot water drew air down into the pot, which then bubbled back to the surface. Little did I know that what was happening was even more magical than I imagined – the bubbles were not air, but actually water in the form of a gas.
The different states of matter have long confused people. The ancient Greeks were the first to identify three classes (what we now call states) of matter based on their observations of water. But these same Greeks, in particular the philosopher Thales (624 – 545 BCE), incorrectly suggested that since water could exist as a solid, liquid, or even a gas under natural conditions, it must be the single principal element in the universe from which all other substances are made. We now know that water is not the fundamental substance of the universe; in fact, it is not even an element.
To understand the different states in which matter can exist, we need to understand something called the Kinetic Molecular Theory of Matter. Kinetic Molecular Theory has many parts, but we will introduce just a few here. One of the basic concepts of the theory states that atoms and molecules possess an energy of motion that we perceive as temperature. In other words, atoms and molecules are constantly moving, and we measure the energy of these movements as the temperature of the substance. The more energy a substance has, the more molecular movement there will be, and the higher the perceived temperature will be. An important point that follows this is that the amount of energy that atoms and molecules have (and thus the amount of movement) influences their interaction with each other. Unlike simple billiard balls, many atoms and molecules are attracted to each other as a result of various intermolecular forces such as hydrogen bonds, van der Waals forces, and others. Atoms and molecules that have relatively small amounts of energy (and movement) will interact strongly with each other, while those that have relatively high energy will interact only slightly, if even at all, with others.
How does this produce different states of matter? Atoms that have low energy interact strongly and tend to “lock” in place with respect to other atoms. Thus, collectively, these atoms form a hard substance, what we call a solid. Atoms that possess high energy will move past each other freely, flying about a room, and forming what we call a gas. As it turns out, there are several known states of matter; a few of them are detailed below.
Solids are formed when the attractive forces between individual molecules are greater than the energy causing them to move apart. Individual molecules are locked in position near each other and cannot move past one another. The atoms or molecules of solids remain in motion, but that motion is limited to vibrational energy; individual molecules stay fixed in place and vibrate next to each other. As the temperature of a solid is increased, the amount of vibration increases, but the solid retains its shape and volume because the molecules are locked in place relative to each other. The molecular structure of ice crystals is a good example of this arrangement.
Liquids are formed when the energy (usually in the form of heat) of a system is increased and the rigid structure of the solid state breaks down. In liquids, molecules can move past one another and bump into other molecules; however, they remain relatively close to each other, as in solids. Often in liquids, intermolecular forces (such as hydrogen bonds) briefly pull molecules together before being broken again. As the temperature of a liquid is increased, the amount of movement of individual molecules increases. As a result, liquids can “flow” to take the shape of their container, but they cannot be easily compressed because the molecules are already close together. Thus, liquids have an undefined shape but a defined volume. Liquid water, for example, is made up of molecules that move freely past one another yet remain relatively close to each other.
Gases are formed when the energy in the system exceeds all of the attractive forces between molecules. Thus gas molecules have little interaction with each other beyond occasionally bumping into one another. In the gas state, molecules move quickly and are free to move in any direction, spreading out long distances. As the temperature of a gas increases, the amount of movement of individual molecules increases. Gases expand to fill their containers and have low density. Because individual molecules are widely separated and can move around easily in the gas state, gases can be compressed easily and they have an undefined shape.
Solids, liquids, and gases are the most common states of matter that exist on our planet. The key difference among the three states is the amount of molecular motion: water molecules vibrate in place in ice, slide past one another in liquid water, and fly about freely in steam.
Plasmas are hot, ionized gases. Plasmas are formed under conditions of extremely high energy, so high, in fact, that molecules are ripped apart and only free atoms exist. More astounding, plasmas have so much energy that the outer electrons are actually ripped off of individual atoms, thus forming a gas of highly energetic, charged ions. Because the atoms in plasma exist as charged ions, plasmas behave differently than gases, thus representing a fourth state of matter. Plasmas can be commonly seen simply by looking upward; the high energy conditions that exist in stars such as our sun force individual atoms into the plasma state.
As we have seen, increasing energy leads to more molecular motion. Conversely, decreasing energy results in less molecular motion. As a result, one prediction of Kinetic Molecular Theory is that if we continue to decrease the energy (measured as temperature) of a substance, we will reach a point at which all molecular motion stops. The temperature at which molecular motion stops is called absolute zero and has been calculated to be -273.15 degrees Celsius. While scientists have cooled substances to temperatures close to absolute zero, they have never actually reached absolute zero. The difficulty with observing a substance at absolute zero is that to “see” the substance, light is needed, and light itself transfers energy to the substance, thus raising the temperature. Despite these challenges, scientists have recently observed a fifth state of matter that only exists at temperatures very close to absolute zero.
Bose-Einstein Condensates represent a fifth state of matter only seen for the first time in 1995. The state is named after Satyendra Nath Bose and Albert Einstein who predicted its existence in the 1920s. B-E condensates are gaseous superfluids cooled to temperatures very near absolute zero. In this weird state, all the atoms of the condensate attain the same quantum-mechanical state and can flow past one another without friction. Even more strangely, B-E condensates can actually “trap” light, releasing it when the state breaks down.
Several other less common states of matter have also either been described or actually seen. Some of these states include liquid crystals, fermionic condensates, superfluids, supersolids, and the aptly named strange matter. To read more about these phases, see “Phase” in our Resources for this module.
The transformation of one state of matter into another state is called a phase transition. The more common phase transitions even have names; for example, the terms melting and freezing describe phase transitions between the solid and liquid state, and the terms evaporation and condensation describe transitions between the liquid and gas state.
Phase transitions occur at very precise points, when the energy (measured as temperature) of a substance in a given state exceeds that allowed in the state. For example, liquid water can exist at a range of temperatures. Cold drinking water may be around 4ºC. Hot shower water has more energy and thus may be around 40ºC. However, at 100°C under normal conditions, water will begin to undergo a phase transition into the gas phase. At this point, energy introduced into the liquid will not go into increasing the temperature; it will be used to send molecules of water into the gas state. Thus, no matter how high the flame is on the stove, a pot of boiling water will remain at 100ºC until all of the water has undergone transition to the gas phase. The excess energy introduced by a high flame will accelerate the liquid-to-gas transition; it will not change the temperature. The heat curve below illustrates the corresponding changes in energy (shown in calories) and temperature of water as it undergoes a phase transition between the liquid and gas states.
As can be seen in the graph above, as we move from left to right, the temperature of liquid water increases as energy (heat) is introduced. At 100ºC, water begins to undergo a phase transition and the temperature remains constant even as energy is added (the flat part of the graph). The energy that is introduced during this period goes toward breaking intermolecular forces so that individual water molecules can “escape” into the gas state. Finally, once the transition is complete, if further energy is added to the system, the heat of the gaseous water, or steam, will increase.
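The flat region of the heat curve can be made concrete with two standard tabulated values for water (assumed here, not given in the text): a specific heat of about 4.18 J/g·°C for liquid water and a heat of vaporization of about 2,260 J/g. A minimal sketch:

```python
# Widely tabulated constants for water (assumed values, not from the text):
SPECIFIC_HEAT_LIQUID = 4.18    # J/(g * degC), liquid water
HEAT_OF_VAPORIZATION = 2260.0  # J/g, at 100 degC

def energy_to_boil_away(mass_g, start_temp_c):
    """Energy to warm liquid water to 100 degC, then vaporize all of it.

    The warming term raises the temperature (the sloped part of the
    heat curve); the vaporization term breaks intermolecular forces
    at a constant 100 degC (the flat part of the curve).
    """
    warming = mass_g * SPECIFIC_HEAT_LIQUID * (100.0 - start_temp_c)
    vaporizing = mass_g * HEAT_OF_VAPORIZATION
    return warming + vaporizing  # joules

# For 100 g of water starting at 20 degC, warming takes ~33,440 J while
# vaporizing takes ~226,000 J -- most of the energy goes into the phase
# transition, not into raising the temperature.
print(energy_to_boil_away(100.0, 20.0))
```

Note how lopsided the two terms are: breaking the intermolecular forces during the phase transition consumes far more energy than all the heating that precedes it.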
This same process can be seen in reverse if we simply look at the graph above starting on the right side and moving left. As steam is cooled, the movement of gaseous water molecules and thus temperature will decrease. When the gas reaches 100ºC, more energy will be lost from the system as the attractive forces between molecules re-form; however, the temperature remains constant during the transition (the flat part of the graph). Finally, when condensation is complete, the temperature of the liquid will begin to fall as energy is withdrawn.
Phase transitions are an important part of the world around us. For example, the energy withdrawn when perspiration evaporates from the surface of your skin allows your body to correctly regulate its temperature during hot days. Phase transitions play an important part in geology, influencing mineral formation and possibly even earthquakes. And who can ignore the phase transition that occurs at about -3ºC, when cream, perhaps with a few strawberries or chocolate chunks, begins to form solid ice cream?
Now we understand what is happening in a pot of boiling water. The energy (heat) introduced at the bottom of the pot causes a localized phase transition of liquid water to the gaseous state. Because gases are less dense than liquids, these localized phase transitions form pockets (or bubbles) of gas, which rise to the surface of the pot and burst. But nature is often more magical than our imagination. Despite all that we know about the states of matter and phase transitions, we still cannot predict where the individual bubbles will form in a pot of boiling water.
There are many states of matter beyond solids, liquids, and gases, including plasmas, condensates, superfluids, supersolids, and strange matter. This module introduces Kinetic Molecular Theory, which explains how the energy of atoms and molecules results in different states of matter. The module also explains the process of phase transitions in matter.
- HS-C5.2, HS-PS1.A3, HS-PS1.A4, HS-PS2.B3
Anthony Carpi, Ph.D. “States of Matter” Visionlearning Vol. CHE-3 (1), 2004.
Physical States and Properties
Since 1924, the Macy’s Thanksgiving Day Parade has wound through 2.5 miles of New York City once a year. More than three million people gather to enjoy the boisterous marching bands, laugh at the hundreds of clowns, and gawk at the gigantic balloons floating above the parade. Designed to look like cartoon characters such as Snoopy (Figure 1), each massive, helium-filled balloon needs 90 handlers to safely tow it through the parade.
Figure 1: The Snoopy balloon at the 2008 Macy’s Thanksgiving Day Parade in New York City. image © Ben W. (https://www.flickr.com/photos/wlscience/)
At first glance, the helium gas inside these balloons seems very different from the air outside them. For one thing, the balloons would be a lot less impressive if filled with air—instead of floating above the parade, they would be dragged along the ground. For another, while everyone enjoying the parade must breathe air to survive, they should only breathe helium if they want a squeaky voice. Even on a molecular level, air and helium are different: Air is a mixture of nitrogen, oxygen, and other gases, while helium is a single gas.
But helium and air have many things in common with each other, and even with substances like deadly carbon monoxide and flammable hydrogen. At standard temperature and pressure, these substances are all gases, one of the common states of matter (see our module States of Matter for more information). All gases share common physical properties. Like liquids, gases freely flow to fill the container they are in. But while liquids have a defined volume, gases have neither a defined volume nor shape. And unlike liquids and solids, gases are highly compressible.
These common properties relate to a unique characteristic of gases: Gas molecules are incredibly far apart and rarely interact with each other. In solids, the attractive and repulsive forces between molecules—the intermolecular forces—are so strong they lock the solid into a fixed shape and size, as discussed in our Properties of Solids module. In liquids, the intermolecular forces are weaker, and liquid molecules can move around each other. But a liquid’s molecules are still close enough that intermolecular forces affect nearby molecules (see our Properties of Liquids module). A gas’s molecules are so far apart that the intermolecular forces are negligible.
Because gas molecules don’t interact with one another, gases don’t exist as different types like liquids and solids do. The different types of liquids and solids (such as molecular and network solids) have properties that reflect the unique ways their molecules interact. As a result, all gases share some common behaviors. We can understand how any gas—whether it’s helium or carbon monoxide—behaves by understanding the laws governing gas behavior.
Over the past four centuries, scientists have performed many experiments to understand the common behaviors of gases. They have observed that a gas’s physical condition—its state—depends on four variables: pressure (P), volume (V), temperature (T), and amount (n, in moles; see our module The Mole: Its History and Use for more information). The relationships between these variables are now known as the gas laws, which describe our current knowledge about how gases behave on a macroscopic level.
But the relationships behind the gas laws weren’t obvious at first—they were uncovered by many scientists examining and testing their ideas about gases over many years.
We now understand that air is a gas made of physical molecules (for more information, see our module on Atomic Theory). As these molecules move about inside a container, they exert force—known as pressure—on the container when they ricochet off its walls. Thanks to this behavior, we can inflate car tires, rubber rafts, and Macy’s Day Parade balloons with gases. However, the idea that air is a substance made of molecules that exert pressure would have been a strange idea to scientists before the 17th century. Along with fire, water, and earth, air was generally considered a fundamental substance, and not one made up of other things. (For more information on this concept, see our Early Ideas about Matter: From Democritus to Dalton module.)
However, in 1644, the Italian mathematician and physicist Evangelista Torricelli proposed a strange idea. In a letter to a fellow mathematician, Torricelli described how he had filled a long glass tube full of mercury. When he sealed one end and inverted the tube into a basin, only some mercury flowed into the basin. The rest of the mercury stayed up in the tube, filling it to a height of approximately 29 inches or 73.6 centimeters (Figure 2). Torricelli proposed that it was the weight of air that pressed down on the mercury in the basin that forced the liquid up into the tube (this was one of the first known devices that we now call barometers).
Figure 2: Evangelista Torricelli experimenting with a tube of mercury and inventing the barometer. (Image from L’Atmosphere published in 1873.)
The Jesuit scientist Franciscus Linus had a different idea about what was holding the mercury up in the tube. He proposed that the mercury was being pulled up by “funiculus”—an invisible substance that materialized to prevent a vacuum forming between the mercury and sealed tube top.
The British scientist Robert Boyle disagreed, and came up with an experiment to disprove Linus’s funiculus idea. Working with the English physicist Robert Hooke, Boyle made a long glass tube that was curved like a cane, and he sealed off the short leg of the cane. Resting the curve on the ground so that both ends pointed up, Boyle poured in just enough mercury so that the silver liquid filled the curve and rose to the same height in each leg. This trapped air inside the sealed short leg.
Boyle then poured in more mercury and observed with “delight and satisfaction” that the trapped air in the sealed short end supported a 29-inch-tall (73.6 cm) column of mercury in the long leg—the same height that the mercury reached in Torricelli’s barometer. However, because there was no cap on the long leg, there could be no funiculus pulling up the extra mercury. Boyle reasoned that it must be the trapped air’s pressure (which he called “spring”) that pushed the mercury up those 29 inches.
To understand more about air’s pressure, Boyle poured more mercury into the curved tube. He recorded the height of the mercury column in the long leg, and the height of the trapped air in the short leg. After repeating these steps many times, Boyle was able to observe the relationship between the height of the trapped air—its volume—and the height of the growing mercury column—an indicator of the pressure in the tube. Even though scientists in Boyle’s time generally didn’t graph data, we can best see this relationship by graphing Boyle’s data (Figure 3).
Figure 3: The plot of Robert Boyle’s data that he recorded during his experiment on mercury and trapped air in glass tubes. image © Krishnavedala
Boyle’s data showed that when air was squeezed to half its original volume, it doubled its pressure. In 1661, Boyle published his conclusion that air’s volume was inversely related to its pressure. This observation about air’s behavior—and therefore, gas behavior—is a critical part of what we now call Boyle’s law.
Boyle’s law states that so long as temperature is kept constant, the volume (V) of a fixed amount of gas is inversely proportional to its pressure (P) (Figure 4):

V ∝ 1 / P
Figure 4: Boyle’s law states that so long as temperature is kept constant, the volume of a fixed amount of gas is inversely proportional to the pressure placed on the gas.
Boyle’s law can also be written as:

P × V = constant
For a fixed amount of gas at a fixed temperature, this constant stays the same even if the gas’s pressure and volume change from (P1, V1) to (P2, V2): as the pressure increases, the volume decreases proportionally. Therefore, P1 × V1 must equal the constant, and P2 × V2 must also equal the constant. Because they both equal the same constant, the gas’s pressure and volume under two different conditions are related like this:

P1 × V1 = P2 × V2
Going back to that helium balloon shaped like Snoopy, Boyle’s law means that if you took the balloon deep under the ocean, poor Snoopy would shrivel because the pressure is very high and the helium would significantly decrease in volume. And if you took the balloon to the top of Mount Everest, Snoopy would get even bigger (and might even pop!) because the atmospheric pressure is low and the helium would increase in volume.
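Boyle’s law is easy to check numerically. The sketch below uses invented pressures and volumes for illustration, not measurements from the text:

```python
def boyle_v2(p1, v1, p2):
    """Boyle's law: P1 * V1 = P2 * V2 at fixed temperature and amount,
    so the new volume is V2 = P1 * V1 / P2."""
    return p1 * v1 / p2

# Illustrative numbers: a balloon holding 100 L of helium at 1.0 atm
# is taken underwater to a depth where the pressure is 4.0 atm.
print(boyle_v2(1.0, 100.0, 4.0))  # 25.0 -- quadruple the pressure, quarter the volume
```

The same helper run in reverse (lowering the pressure) shows why the balloon would swell at the low-pressure summit of Mount Everest.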
More than a century after Boyle’s work, scientists had figured out another important behavior of air: Air expands when heated, and hot air rises above cooler air. Taking advantage of this air behavior, the French brothers Joseph-Michel and Jacques-Étienne Montgolfier launched the first successful hot-air balloon in Paris in 1783.
The Montgolfiers’ balloon fascinated Jacques-Alexandre-César Charles, a self-taught French scientist interested in aeronautics. He had an idea about how to make an even better balloon. From his familiarity with contemporary chemistry research, Charles knew that hydrogen was much lighter than air. In 1783, Charles built and launched the first hydrogen balloon (see Figure 4 for an example of the balloon launch). Later that year, he became the first human to ride in a hydrogen balloon, which reached almost 10,000 feet above Earth.
Figure 4: Jacques Charles and Nicolas Marie-Noel Robert standing in their hydrogen-filled balloon waving flags, beginning their ascent in Paris. Thousands of spectators are gathered in the foreground to witness the first manned gas balloon flight.
Charles was very fortunate that he survived riding in a hydrogen balloon: On May 6, 1937, 36 people died when the Hindenburg airship, a dirigible filled with flammable hydrogen, caught fire and crashed to the ground. The airship’s flammable hydrogen gas may have been ignited by a lightning bolt or spark from static electricity, and the fire spread explosively throughout the ship in a matter of seconds.
While Charles never rode a balloon again, he remained fascinated with the gases inside balloons. In 1787, Charles conducted experiments comparing how balloons filled with different gases behaved when heated. Intriguingly, he found that balloons filled with gases as different as oxygen, hydrogen, and nitrogen expanded the same amount when they were heated from 0 to 80°C. However, Charles did not publish his findings. We only know about his experiments because they were mentioned in the work of another French chemist and balloonist, Joseph-Louis Gay-Lussac.
In 1802, Gay-Lussac published his results from similar experiments comparing nine different gases. Like Charles, Gay-Lussac concluded that it was a common property of all gases to increase their volume the same amount when their temperature was increased by the same degree. Gay-Lussac graciously gave Charles credit for first observing this common gas behavior.
This relationship between a gas’s volume (V) and absolute temperature (T, in Kelvin; to learn more about absolute temperature, see our Temperature module) is now known as Charles’s law. Charles’s law states that when pressure is kept constant, a fixed amount of gas linearly increases its volume as its temperature increases (Figure 5):

V ∝ T
Figure 5: Charles’s Law states that when pressure is kept constant, a fixed amount of gas linearly increases its volume as its temperature increases.
Charles’s law can also be understood as:

V / T = constant
For a fixed amount of gas at a fixed pressure, this constant will be the same, even if the gas’s volume and temperature change from (V1, T1) to (V2, T2). Therefore, V1/T1 must equal the constant, and V2/T2 must also equal the constant. As a result, the gas’s temperature and volume under different conditions are related like this:

V1 / T1 = V2 / T2
This means that if we took the Snoopy balloon to the North Pole, the balloon would shrink as the helium cooled and decreased in volume. However, if we took the balloon to a hot tropical island and the helium’s temperature increased, the helium would increase in volume, expanding the balloon.
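As with Boyle’s law, the proportionality can be sketched in a few lines (the volumes and temperatures are invented for illustration; note that Charles’s law requires absolute temperature in Kelvin):

```python
def charles_v2(v1, t1_kelvin, t2_kelvin):
    """Charles's law: V1 / T1 = V2 / T2 at fixed pressure and amount,
    so the new volume is V2 = V1 * T2 / T1. Temperatures in Kelvin."""
    return v1 * t2_kelvin / t1_kelvin

# Illustrative: 100 L of helium cooled from 25 degC (298 K) to -40 degC (233 K).
print(round(charles_v2(100.0, 298.0, 233.0), 1))  # 78.2 -- the balloon shrinks
```

Plugging in a higher final temperature instead shows the balloon expanding, as on the hot tropical island.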
After his work on Charles’s law, Gay-Lussac focused on figuring out how gases reacted and combined. In 1808, he observed that many gases combined their volumes in simple, whole-number ratios. While we understand now that volumes of gases combine in whole-number ratios because that is how the gas molecules react, Gay-Lussac didn’t suggest this explanation. This was probably because the idea of whole-number molecular combinations had only recently been proposed by John Dalton, who was Gay-Lussac’s scientific rival. (For further exploration of how gas molecules react, see our Chemical Equations module).
It was the Italian mathematician Amedeo Avogadro who realized that Dalton’s and Gay-Lussac’s ideas complemented each other. Gay-Lussac’s claim that gas volumes combined in whole-number ratios resembled Dalton’s claim that atoms combined in whole-number ratios to form molecules. Avogadro reasoned that a gas’s volume must then be related to the number of its molecules. In 1811, Avogadro published his hypothesis that equal volumes of different gases have an equal number of molecules.
Avogadro’s hypothesis was ground-breaking though largely overlooked. The mathematician rarely interacted with other scientists, and he published his hypothesis with mathematical expressions that were unfamiliar to chemists. He also didn’t publish experimental data to support his hypothesis.
It was 47 years before Avogadro’s hypothesis would be broadly recognized. In 1858, a former student of Avogadro, the Italian chemist Stanislao Cannizzaro, published an influential work on atomic theory. This work drew on Avogadro’s hypothesis and presented experimental data supporting the hypothesis.
Avogadro’s law is based on Avogadro’s hypothesis. Avogadro’s law states that at a constant pressure and temperature, a gas’s volume (V) is directly proportional to the number of molecules (n, in moles) (Figure 6):

V ∝ n
Figure 6: Avogadro’s Law states that at a constant pressure and temperature, a gas’s volume is directly proportional to the number of molecules.
We know that a Snoopy balloon filled with helium will float above the parade, while the same balloon filled with air will drag along the ground. While helium and air are different in many ways, Avogadro’s law means that if we compared the number of helium molecules and the number of air molecules needed to inflate the same Snoopy balloon, we would find that the numbers are the same.
Because gases have common behaviors described by the gas laws, we can understand and predict the behavior of real gases through the concept of an ideal gas—a theoretical, idealized gas that always behaves according to the ideal gas equation.
The ideal gas equation is derived from the gas laws. This equation describes the relationships between all of the variables examined in the gas laws: pressure (P), volume (V), amount (n, in moles), and absolute temperature (T, in Kelvins). Along with the gas constant, R, these variables combine into the ideal gas equation:

PV = nRT
Using the ideal gas equation, we can solve for any one of the unknown variables, so long as we know the others. The value for R depends on the units used for the other variables (Table 2).
Table 2: Values of the gas constant, R, in different units.

| Units | Value of R |
|---|---|
| cal K⁻¹ mol⁻¹ | 1.9872 |
| J K⁻¹ mol⁻¹ | 8.3145 |
| L atm K⁻¹ mol⁻¹ | 0.0821 |
| L Torr K⁻¹ mol⁻¹ | 62.364 |
| Pa m³ K⁻¹ mol⁻¹ | 8.3145 |
The ideal gas law assumes that an ideal gas’s molecules have no volume, and experience no intermolecular attraction or repulsion. But real gas molecules do have finite volume and often have some (very small) interactions with each other. Nonetheless, the behavior and state of a real gas can often be predicted from the ideal gas equation, especially at standard temperature and pressure. Under most conditions the difference between a real gas’s behavior and an ideal gas’s behavior is so small that we can use the ideal gas equation for real gases.
We’ll explore several conditions where a real gas behaves differently from an ideal gas at the end of this module.
The ideal gas law is also useful in situations where the amount, n, of gas is fixed, but its pressure, volume, and temperature change. Using the ideal gas law, we can relate the value of these three variables under different conditions. To do this, we have to first rearrange the ideal gas equation so that the three changing variables equal nR:

PV / T = nR
This relationship is referred to as the combined gas law. Because nR is a constant, we can relate the initial (P1, V1, T1) and the final conditions (P2, V2, T2) of a gas like this:
P1V1 / T1 = P2V2 / T2
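A hypothetical helper function makes the combined gas law easy to experiment with (the numbers in the example are invented for illustration):

```python
def combined_gas_v2(p1, v1, t1_k, p2, t2_k):
    """Combined gas law: P1*V1/T1 = P2*V2/T2 for a fixed amount of gas,
    rearranged to solve for the final volume V2. Temperatures in Kelvin."""
    return p1 * v1 * t2_k / (t1_k * p2)

# Illustrative: 10.0 L of gas at 1.0 atm and 300 K is compressed to
# 2.0 atm while being heated to 600 K. The two changes exactly cancel:
print(combined_gas_v2(1.0, 10.0, 300.0, 2.0, 600.0))  # 10.0
```

Doubling the pressure alone would halve the volume, and doubling the absolute temperature alone would double it; the combined law captures both effects at once.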
From 1987 to 2012, air bags (Figure 6) saved almost 37,000 American lives in car crashes. Air bags save lives because when a car stops hard during a crash, a sensor triggers a chemical reaction to generate nitrogen gas. The nitrogen gas inflates the air bag, which essentially forms a big pillow between the driver and the steering wheel. The pillow spreads out the force of the crash’s impact, helping reduce the severity of the driver’s injuries.
For an air bag to work, it has to inflate full of nitrogen incredibly fast—within 40 milliseconds of the collision. For a 60-liter cylindrical air bag to work properly, the nitrogen gas has to reach a pressure of 2.37 atm. At 25°C, how many moles of nitrogen gas are needed to pressurize the air bag?
We can figure this out using the ideal gas equation. First, we list the values we know, and convert them so they have the same units as the gas constant, R (0.0821 L-atm/mol-K).
P = 2.37 atm
V = 60 L
T = 25°C = (25 + 273) K = 298 K
Next, we rearrange the ideal gas equation to solve for the number of moles, n:
n = PV / RT
Finally, we solve for the number of moles of nitrogen gas to pressurize the air bag:
n = (2.37 atm × 60 L) / (0.0821 L·atm/mol·K × 298 K) ≈ 5.8 moles of nitrogen gas
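The same arithmetic can be checked in a few lines of Python, using the air-bag values from the example above:

```python
# Ideal gas law solved for moles: n = PV/(RT)
R = 0.0821   # gas constant, L·atm/(mol·K)
P = 2.37     # required pressure, atm
V = 60.0     # air bag volume, L
T = 298.0    # temperature, K (25°C)

n = P * V / (R * T)
print(round(n, 1))  # 5.8 moles of N2
```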
Real gases frequently deviate from ideal gases when their temperature gets low, especially when it’s close to where the gas would undergo a phase change into its liquid form. When a gas’s temperature decreases, its molecules move slower. These slower molecules are less able to overcome even the weak intermolecular forces in a gas.
This means that when a gas molecule is about to strike the wall of a container, the very small attraction it experiences for nearby gas molecules reduces its impact and the pressure it exerts on the container. Therefore, a real gas at a low temperature exerts a lower pressure in a container (Figure 7), compared to an ideal gas.
Figure 7: A real gas at low temperature exerts a lower pressure than predicted due to the attraction between molecules of the gas.
Under high pressure, a real gas frequently deviates from ideal gases because the real gas’s molecules do have volume, and do attract each other.
When a real gas is under high pressure, its molecules are forced into a smaller volume. This smaller volume reduces the amount of free space the molecules have to move around (Figure 8). The amount of space a gas’s molecules take up compared to the total space in the container—the molecules’ relative volume—gets bigger.
Figure 8: Under high pressure, a real gas has a larger volume than predicted due to the volume of the molecules it contains.
This means that under high pressures a real gas has a higher volume than an ideal gas, because an ideal gas’s molecules have no volume.
Furthermore, when a real gas’s molecules are crowded close together, intermolecular forces can have more influence on the molecules’ behavior. The attractive intermolecular forces draw molecules towards each other, which slows down the molecules and reduces their impact on a container’s walls. Therefore, when it’s under high pressure, a real gas has a slightly lower pressure than an ideal gas.
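One common way to quantify these deviations (a model not discussed in the text itself) is the van der Waals equation, which adds correction terms for molecular volume and intermolecular attraction. The constants a and b below are approximate literature values for nitrogen, included here as assumptions for illustration:

```python
# Sketch comparing ideal-gas and van der Waals pressures for nitrogen.
R = 0.0821            # L·atm/(mol·K)
a, b = 1.39, 0.0391   # approximate N2 constants: L²·atm/mol², L/mol

def p_ideal(n, v, t):
    """Ideal gas law solved for pressure: P = nRT/V."""
    return n * R * t / v

def p_vdw(n, v, t):
    """van der Waals equation, (P + a*n²/V²)(V - n*b) = nRT, solved for P."""
    return n * R * t / (v - n * b) - a * n**2 / v**2

# Crowd 1 mol of N2 into 1 L at 300 K: the van der Waals pressure comes
# out slightly below the ideal prediction, as the text describes.
print(p_ideal(1.0, 1.0, 300.0))  # ~24.6 atm
print(p_vdw(1.0, 1.0, 300.0))    # slightly lower, due to attraction
```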
The characteristics of gases affect many important things, from Earth’s atmosphere to air bags to how we breathe. Between breaths, the air pressure inside our lungs is the same as the atmospheric pressure around us. When we inhale and use our rib cage and diaphragm to expand our lungs’ volume, the air pressure decreases and external pressure forces air inside our lungs until the pressures are the same again—thereby filling our lungs with the oxygen we need to survive.
In this module, we have focused on the common properties of gases, and explored how these properties relate to a common set of behaviors called the gas laws. We have also gained an understanding of the ideal gas equation, and when this equation can—and cannot—be used to predict the behavior of real gases. In other modules, we examine the properties of solid and liquid states of matter, and further explore molecular explanations for gas behavior with kinetic molecular theory.
This module describes the properties of gases and explores how these properties relate to a common set of behaviors called the gas laws. With a focus on Boyle’s Law, Charles’s Law, and Avogadro’s Law, an overview of 400 years of research shows the development of our understanding of gas behavior. The module presents the ideal gas equation and explains when this equation can—and cannot—be used to predict the behavior of real gases.
Unlike solids or liquids, the molecules in a gas are very far apart and rarely interact with each other, which is why gases made out of different molecules share similar behaviors.
The gas laws describe the relationships between a gas’s temperature, pressure, volume, and amount. These laws were identified in experiments performed by multiple scientists over four centuries.
Because gases share common behaviors, the behavior of a real gas at a given pressure (P), absolute temperature (T), volume (V), and amount (n, in moles) can often be predicted by the ideal gas equation, PV = nRT, which perfectly describes the behavior of an idealized gas.
The behavior of real gases deviates from ideal gases at very low temperatures and high pressures.
Megan Cartwright, Ph.D., Anthony Carpi, Ph.D. “Properties of Gases” Visionlearning Vol. CHE-3 (9), 2016.
Physical States and Properties
It’s a classic prank: Fill a saltshaker with sugar and wait for the meal to take an unexpected turn as your dining companions wonder why their chicken is oddly sweet. Or go the other direction and you can really disrupt someone’s morning when they take the first salty sip of coffee. These pranks work well because salt and sugar are almost indistinguishable by the naked eye: Both are crystalline solids with similar structures. Nonetheless, they have very different flavors, and they behave differently too. For example, you can pass an electrical current through salt water and light a light bulb (you might have done this experiment yourself); but you can’t do this with sugar water. Differences arise from the different properties of the two crystals, including the atoms that compose them and the actual structure of the crystal itself. In this module we will explore different types of solids and discuss how their structures relate to their behavior.
Figure 1: A pile of salt (left) and sugar (right).
From ancient Greece until the birth of modern chemistry in the 17th century, people may have been confused about what made salt and sugar so different. Without today’s tools to identify the components in the crystals and their structures, the two would have looked as similar to them as they do to our naked eye today (see Figures 1 and 2). As scientists began identifying and characterizing elements in the 17th and 18th centuries, they would have been able to determine that salt is made of sodium and chlorine, while sugar consists of carbon, hydrogen, and oxygen, but they would probably still have wondered how such combinations of completely different elements lead to such similar-looking crystals.
Figure 2: Close-up views of salt (left) and sugar (right) crystals. image © Salt: kevindooley; Sugar: Lauri Andler
It wasn’t until the early 1900s that scientists were first able to look inside crystals, when German scientist Max von Laue and English father and son scientists William Bragg and Lawrence Bragg developed a method that uses X-rays to determine the microscopic structures of crystalline solids. In fact, salt was the first solid investigated by this method, called X-ray crystallography, which revealed a regular lattice of sodium and chlorine atoms. Applying X-ray crystallography to sugar reveals a similar but not identical well-ordered crystal (Figure 3). Similarities in their crystal structure account for similarities in crystal appearance; however, the different types of atoms that make up each crystal and the different arrangements of the atoms account for the differences in behavior between the two solids. X-ray crystallography has also become a critical tool in modern biology research, helping to reveal the double helix structure of DNA in the 1950s (see our DNA II: The Structure of DNA module) and the structure of many simple and complex biological systems since that time.
Now that researchers can see this level of detail through X-ray crystallography and other methods, they can understand why some solids behave the way they do. And they can also use their understanding of the relationship between structure and behavior to design new and useful materials.
Figure 3: Atomic-level representations of salt (NaCl) and sugar (sucrose, C12H22O11).
You may not think of salt and sugar as solids because when you see them in the kitchen they are such small particles. But each of these particles is as much a solid as a wooden table, a glass window, or a gold piece of jewelry. A solid is a collection of atoms or molecules that are held together so that, under constant conditions, they maintain a defined shape and size. Solids, of course, are not necessarily permanent. Solid ice can melt to form liquid water at room temperature, and extremely high temperatures can be used to melt solid iron so it can be shaped into a skillet, for example. Once that skillet is formed and cools back to room temperature, though, its shape and size will not change on its own, as opposed to molten metal, which can be made to drip and change shape by gravity and molds. The same is true for ice cubes that are kept in the freezer: Once they are formed, their size and shape doesn’t change. Solids have constant shape and size because they are formed when the attractive forces between individual atoms or molecules are greater than the energy causing them to move apart. In other words, the atoms or molecules don’t have enough energy to move and are stuck together in whatever shape they were in when they lost the energy to separate. (See our States of Matter module for more about how solids differ from other states of matter.)
Salt and sugar are both crystalline solids. The other main category of solids is called amorphous. While crystalline solids are well ordered at the atomic level, with each atom or molecule inhabiting a specific point on a lattice, amorphous solids are disordered at the atomic level, with the atoms or molecules held together in a completely random arrangement. Consider a game of checkers: a board carefully set up with a checker in each square is analogous to a crystalline solid, while an amorphous solid could be represented as checker pieces randomly scattered across the board.
Figure 4: Atomic-level representations of glass (silica) and quartz.
Quartz and glass are atomic-level examples of these two categories of solids. Quartz is a crystalline solid containing a high silicate (SiO2) content. If we were to examine the structure of quartz, we could see that the silicate subunits are arranged very precisely (see Figure 4). Glass, on the other hand, is an amorphous solid. Although its typical smooth, transparent appearance may make it seem like it must have a neat, organized microscopic structure, the opposite is true: The silicate units are unevenly scattered throughout the solid in a completely disordered fashion.
Like quartz, glass has a very high silicate (SiO2) content. (See our Defining Minerals and The Silicate Minerals modules for more about silicates and quartz.) The crucial difference between crystalline and amorphous solids is not what they are made of, but how they are made, and more precisely how their structures are arranged. Quartz forms on a very slow, geological timescale, so the atoms have time to achieve a highly ordered crystal structure in which attractive forces are optimized and repulsive forces minimized; this arrangement is therefore energetically favorable. Glass, on the other hand, is made by melting sand (among other methods) and letting it cool very quickly, “freezing” the atoms in place and resulting in a disordered amorphous solid. Amorphous solids often form when atoms and molecules are frozen in place before they have a chance to reach the crystalline arrangement, which would otherwise be the preferred structure because it is energetically favored.
One important consequence of the irregular structure of amorphous solids is that they don’t always behave consistently or uniformly. For example, they may melt over a wide range of temperatures, in contrast to a crystalline solid’s very precise melting point. Returning to the glass versus quartz example, the most prevalent type of glass, called soda lime glass, can melt anywhere between 550°C and 1450°C, while cristobalite, a quartz polymorph, melts precisely at 1713°C. In addition, amorphous solids break unpredictably and produce fragments with irregular, often curved surfaces, while crystalline solids break along specific planes and at specific angles defined by the crystal’s geometry. (See our Defining Minerals module for more about how a crystal’s external appearance reflects the regular arrangement of its atoms.)
Crystal structure determines a lot more about a solid than simply how it breaks. Structure is directly related to a number of important properties, including, for example, conductivity and density, among others. To explain these relationships, we first need to introduce the four main types of crystalline solids – molecular, network, ionic, and metallic – which are each described below.
Individual molecules are composed of atoms held together by strong covalent bonds (see our Chemical Bonding module for more about covalent bonding). To form molecular solids, these molecules are then arranged in a specific pattern and held together by relatively weak intermolecular forces. Examples include ice (H2O(s) – s here stands for “solid”) and table sugar (sucrose, C12H22O11). The individual water and sugar molecules each exist as their own independent entities that interact with their neighbors in specific ways to create an ordered crystalline solid. (See Figure 5).
Figure 5: Two representations of ice: the atomic-level organization of molecules and the common ice cube. image © FDA (ice cubes)
In network solids, on the other hand, there are no individually defined molecules. A continuous network of covalent bonds holds together all the atoms. For example, carbon can form two different network solids: diamond and graphite. These materials are made up of only carbon atoms that are arranged in two different ways. Diamond is a three-dimensional crystal that is the hardest known natural material in the world. In contrast, graphite is a two-dimensional network solid. The carbon atoms essentially form flat sheets, which are relatively slippery and can slide past each other. While these two materials are made of the same very simple component – just carbon atoms – their appearance and behavior are completely different because of the different types of bonding in the solids. (See our Defining Minerals module for more about diamond and graphite.) This ability of a single element to form multiple solids is called allotropy.
Figure 6: Representations of diamonds and graphite, including their atomic structures showing the arrangement of carbon atoms. image © Itub
Network solids can also incorporate multiple elements. For example, consider quartz, the second most abundant material in the earth’s crust. The chemical formula for quartz is SiO2, but this formula indicates the ratio of silicon to oxygen and is not meant to imply that there are distinct SiO2 molecules present. Each silicon atom is bonded to four different oxygen atoms and each oxygen atom is bonded to two different silicon atoms, creating a large network of covalent bonds, as shown in Figure 4. (See our Defining Minerals and The Silicate Minerals modules for more about quartz.)
Ionic solids are similar to network solids in one way: There are no distinct molecules. But instead of atoms held together by covalent bonds, ionic solids are composed of positively and negatively charged ions held together by ionic bonds. (See our Chemical Bonding module for more about ionic bonding.) Table salt (sodium chloride, NaCl) is a common ionic solid, as is just about anything that is called a “salt.” Simple salts usually consist of one metal ion and one non-metal ion. In the case of sodium chloride, sodium is the metal and chloride is the non-metal. Salts can also consist of more complex ions, such as ammonium sulfate, whose component ions, ammonium (NH4+) and sulfate (SO42-), are individually held together by covalent bonds and attracted to each other through ionic bonds.
Finally, metallic solids are a type all their own. Although we are discussing them last here, about three quarters of the known elements are metals. You can read more about these metallic elements in The Periodic Table of Elements module. Here we will focus on how these elements behave as metallic solids.
Figure 7: This atomic-level representation of a metallic solid shows that the electrons can easily move around within the solid. image © Rafaelgarcia
Metal atoms are held together by metallic bonds, in which the atoms pack together and the outer electrons can easily move around within the solid (Figure 7). Metallic bonds are nondirectional, meaning that metal atoms can remain bonded while they roll against each other as long as some parts of their surfaces are in contact. These unique properties of metallic bonds are largely responsible for some of the valuable behavior of metals, including their conductivity and malleability, which we discuss in the next section.
As described in the previous section, crystalline solids can vary in their atomic compositions, bonding, and structure. Together, these attributes determine how the different solids behave under different conditions. Solids have many different properties, including conductivity, malleability, density, hardness, and optical transmission, to name a few. We will discuss just a handful of these properties to illustrate some of the ways that atomic and molecular structure drives function.
As you read this lesson on your computer, you’re probably not thinking about the wires your computer uses to get the electrical power it needs to run. Those wires are made of metal, probably copper, because metals generally have good electrical conductivity. Electricity is essentially a flow of electrons from one place to another, and in metallic bonds the outer electrons are relatively free to move between adjacent atoms. This electron mobility means it is easy for an electrical current to move from one end of a piece of metal to the other. When electrons are introduced at one end of a wire by an electric current, electrons shift from one metal atom to the next continuously down the wire, allowing the current to flow. In other solids, though, the electrons are engaged in the covalent or ionic bonds and therefore are not able to conduct electricity, or do so only poorly. Materials that do not conduct electricity are called electrical insulators.
Heat, or thermal, conductivity is closely related to electrical conductivity. Just as metals are good electrical conductors, you probably know from experience that they’re good at conducting heat too. (That’s why most kitchen pots, pans, and baking sheets are metal, so they can absorb the heat from the stove or oven and pass it on to the food that’s being cooked.) To understand how this works, consider that temperature is a measurement of how much molecules are moving (see our States of Matter and Temperature modules). For a solid to conduct heat, the movement of one molecule or atom needs to be easily transferrable to its neighbor. The non-directional nature of the metallic bond makes this type of transfer relatively easy, so metals conduct heat well. In a network solid, on the other hand, where the bonds are more rigid and the angles between the atoms are strictly defined, such transfer is more difficult. Such solids would be expected to have low heat conductivity and would be called heat insulators.
Figure 8: A graphite sheet and carbon nanotube. image © NASA (nanotube)
Graphite is an interesting exception to this trend. Because of the specific energy and orientation of the typical bonds in graphite sheets, they are relatively good at conducting heat and electricity. You may have heard about carbon nanotubes, which are similar to graphite sheets but exist in the form of tubes (Figure 8). These tubes can conduct electricity and heat from one end to the other and are being tested for many possible applications, including in electrical circuits, solar cells, and textiles.
Two additional properties, malleability and ductility, follow trends similar to those for electrical and thermal conductivity. Malleability describes the ability to hammer a solid into a sheet without breaking it, and ductility refers to whether a solid can be stretched to form a wire. As you may have guessed, metals tend to be both malleable and ductile, largely due to the non-directionality of metallic bonds. In contrast, covalent and ionic bonds, which are directional and require specific geometries resulting in fixed three-dimensional lattice structures, make many other types of solids brittle so they break under force.
Metallic malleability and ductility are a crucial reason that metals are so useful. Their electrical conductivity would be much less useful if it weren’t possible to stretch them into wires that could then be bent and shaped at room temperature for an incredible array of applications. They also create some drawbacks though. Metal jewelry can be crushed and deformed in the bottom of a purse, or a metal figurine can be dented if it’s dropped. Manufacturers must consider all the properties of the materials they plan to work with to find the best option for each application.
Another way to deform a solid is to melt it. A solid’s melting point depends on the strength of the interactions between its components: Stronger interactions mean a higher melting point. For molecular solids, melting means breaking the weak intermolecular forces (the forces between different molecules), not the strong covalent bonds that hold the individual molecules together, so a compound like sugar can be easily melted on your stovetop. For network solids (held together by covalent bonds), ionic solids (held together by ionic bonds), and metallic solids (held together by metallic bonds), though, the melting temperature depends on the strength of the specific bonds in each solid. Some metals have relatively low melting points, like mercury, which is actually a liquid at room temperature (its melting point is -38°C), while others, such as tungsten, melt only at extremely high temperatures (tungsten’s melting point is 3,422°C). Among network solids, a type of quartz called tridymite melts at 1,670°C while graphite melts at 4,489°C, and among ionic solids, sodium chloride melts at 801°C while lithium bromide melts at 552°C. Ionic bonds tend to be weaker than covalent and metallic bonds, which is why the melting points for these salts are somewhat lower than most of the other example melting points included here.
Melting is one way of changing a solid’s shape. Another approach is dissolving the solid into some type of liquid, in this case referred to as a solvent. The extent to which a solid dissolves in a particular solvent is called its solubility. Solids can be dissolved into a variety of types of solvents, but for now we will focus on solubility in water.
Dissolving a solid requires breaking different types of bonds for different types of solids. Dissolving a metal requires breaking metallic bonds, and dissolving a network solid requires breaking covalent bonds. Both of these types of bonds are very strong and hard to break. Therefore, metals and network solids are generally not soluble in water. (Diamond rings probably wouldn’t be as valuable if the band and the stone dissolved in the shower.) In contrast, dissolving a molecular solid requires breaking only weak intermolecular forces, not the covalent bonds that actually hold the individual molecules together. Therefore, molecular solids are relatively soluble, as you might have been able to guess given how we use sugar in so many drinks.
Finally, to dissolve ionic solids, the ionic bonds between the atoms or molecules must be broken, which water does particularly well. Each atom or molecule within an ionic solid carries a charge, and water molecules also carry a charge due to polarity (see our Water: Properties and Behavior module for more information). As a result, the negative charges within water are attracted to the positively charged ions, and the positive charges within water are attracted to the negative ions. This allows the water molecules to dissolve ionic solids by separating the parts, essentially trading the favorable ionic interaction in the solid crystal with favorable ionic interactions between the individual ions and the water molecules. Therefore, most salts are relatively water-soluble.
Both salt and sugar are quite soluble in water, but because of the differences between ionic solids (salt) and molecular solids (sugar), salt water behaves differently than sugar water (remember the light bulb experiment from the previous section). When salt dissolves in water, the positively (Na+) and negatively (Cl-) charged ions that compose the solid separate, creating a liquid solution of charged particles. These mobile ions can carry charge across the solution, effectively conducting electricity. When salts such as ammonium sulfate dissolve, the ionic bonds between the ions break, but the covalent bonds holding the individual complex ions together remain intact. By comparison, when sugar dissolves, each individual sucrose molecule separates from its neighbors but the sucrose molecules themselves remain intact and without charge, so they don’t conduct electricity.
Density, defined as the amount of mass that exists in a certain volume (see our Density module for more information), is another important property that depends on the solid’s structure and composition. It’s important to note that although we described the different types of crystal solids as having certain structural characteristics, there is significant variation within each type as well. For example, metallic solids do not all share a similar arrangement of atoms. The atoms and molecules that make up crystals can pack in many different ways, which affects density (Figure 9). Imagine a jar of neatly ordered marbles, with each dimple between marbles in one row filled with a marble in the row above. This closely packed arrangement leads to a very high density. Gold takes on approximately this type of packing, resulting in its high density of 19.3 grams per cubic centimeter. Now imagine another jar where the marbles are still neatly ordered, but each marble is stacked directly on top of another instead of in the dimple. This type of packing leaves a lot more empty space in the jar because those dimples aren’t filled, so if the jar is the same size as the first jar, it can’t hold as many marbles and is less dense. Lithium, which is the least dense metal at 0.534 grams per cubic centimeter, is an example of this type of packing.
Figure 9: Two packing geometries. The one on the left is a closely packed arrangement, which results in a high density; the one on the right is more neatly ordered, yet less packed and leaves more space, resulting in a lower density. image © Vinícius Machado Vogt
Another important variable is size. Bigger marbles can’t pack as closely as smaller marbles, even if they are in the same arrangement, so contents of the jar will be less dense. However, if you are allowed to use marbles of different sizes, you might be able to fit small marbles in the holes left between big marbles, which could lead to an even higher density than you would get from just the small marbles alone. This principle is particularly relevant for ionic solids, which are made up of two different ions that are usually different sizes. Lithium bromide, for example, is denser than potassium chloride. The size difference between lithium and bromide is greater than the size difference between potassium and chloride, so the lithium and bromide ions leave less empty space when they pack together than the potassium and chloride do, resulting in a higher density.
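The marble analogy above can be made quantitative. Standard sphere-packing geometry (an assumption layered onto the text’s analogy, not a result from it) gives the fraction of space filled by equal spheres in each arrangement:

```python
from math import pi, sqrt

# Fraction of space occupied by equal spheres in two ideal packings:
# close packing (marbles nestled in the dimples of the row below, as in
# gold) vs. simple cubic stacking (marbles directly on top of each other).
close_packed = pi / (3 * sqrt(2))  # ≈ 0.740, the densest possible packing
simple_cubic = pi / 6              # ≈ 0.524, leaves far more empty space

print(round(close_packed, 3))  # 0.74
print(round(simple_cubic, 3))  # 0.524
```

The roughly 74% versus 52% space filling illustrates why, all else being equal, a closely packed crystal is substantially denser than a simple stacked one.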
While the properties of solids may at first appear trivial, the unique characteristics of different solids influence almost every aspect of daily life in more ways than you may think. Fine watches and, increasingly, other electronic devices use sapphire crystals instead of glass because the strong network bonding makes sapphire incredibly hard (in fact, it is the third hardest substance known) and scratch-resistant. The peculiar molecular structure of ice results in its being less dense than liquid water, and it can be argued that without this property life on Earth would never have come into existence. On a less existential level, it means that we can go ice skating on frozen ponds in the winter even if it hasn’t frozen all the way through. Developing new solid materials with specific properties, such as electrical semiconductors and superconductors, is an active area of research with many potential applications. But solids aren’t the only substances with useful and entertaining properties, as we will see in the next modules on liquids and gases.
Solids are formed when the forces holding atoms or molecules together are stronger than the energy moving them apart. This module shows how the structure and composition of various solids determine their properties, including conductivity, solubility, density, and melting point. The module distinguishes the two main categories of solids: crystalline and amorphous. It then describes the four types of crystalline solids: molecular, network, ionic, and metallic. A look at different solids makes clear how atomic and molecular structure drives function.
A solid is a collection of atoms or molecules that are held together so that, under constant conditions, they maintain a defined shape and size.
There are two main categories of solids: crystalline and amorphous. Crystalline solids are well ordered at the atomic level, and amorphous solids are disordered.
There are four different types of crystalline solids: molecular solids, network solids, ionic solids, and metallic solids. A solid’s atomic-level structure and composition determine many of its macroscopic properties, including, for example, electrical and heat conductivity, density, and solubility.
Rachel Bernstein, Ph.D., Anthony Carpi, Ph.D. “Properties of Solids” Visionlearning Vol. CHE-3 (2), 2015.
Physical States and Properties
Water gushes out of the faucet. Honey oozes out of a squeeze bottle. Gasoline flows out of the pump. These are just three examples of a highly diverse state of matter: liquids. One of the key defining properties of liquids is their ability to flow. Beyond this feature, though, the behaviors of different liquids span a broad range. Some liquids flow relatively easily, like water or oil, while others, like honey or molasses, flow quite slowly. Some are slippery, and some are sticky. Where do these different behaviors come from?
When it comes to interactions between different liquids, some mix well: Think of a Shirley Temple, made of ginger ale and grenadine. Others, though, don’t seem to mix at all. Consider oil spills, where the oil floats in a sticky, iridescent layer on top of the water. You may also notice a similar phenomenon in some salad dressings that separate into an oil layer that rests atop a layer of vinegar, which is primarily water. Why don’t these liquids mix well?
These varied behaviors arise primarily from the different types of intermolecular forces that are present in liquids. In this module we’ll first discuss liquids in the context of the other two main states of matter, solids and gases. Then we will go through a brief overview of intermolecular forces, and finally we’ll explore how intermolecular forces govern the way that liquids behave.
Liquids flow because the intermolecular forces between molecules are weak enough to allow the molecules to move around relative to one another. Intermolecular forces are the forces between neighboring molecules. (These are not to be confused with intramolecular forces, such as covalent and ionic bonds, which are the forces exerted within individual molecules to keep the atoms together.) The forces are attractive when a negative charge interacts with a nearby positive charge and repulsive when the neighboring charges are the same, either both positive or both negative. In liquids, the intermolecular forces can shift between molecules and allow them to move past one another and flow. (See Figure 1 for an illustration of the various intermolecular forces and interactions.)
Figure 1: Panel A shows the variety of attractive and repulsive dipole–dipole interactions. Attractive interactions are shown in (a) and (b) with orientations where the positive end is near the negative end of another molecule. In (c) and (d), repulsive interactions are shown with orientations that juxtapose the positive or negative ends of the dipoles on adjacent molecules. Panel B shows a sample liquid with several molecules both attracting and repelling one another through dipole–dipole interactions. image © UC Davis ChemWiki
Contrast that with a solid, in which the intermolecular forces are so strong that they allow very little movement. While molecules may vibrate in a solid, they are essentially locked into a rigid structure, as described in the Properties of Solids module. At the other end of the spectrum are gases, in which the molecules are so far apart that the intermolecular forces are effectively nonexistent and the molecules are completely free to move and flow independently.
At a molecular level, liquids have some properties of gases and some of solids. First, liquids share the ability to flow with gases. Both liquid and gas phases are fluid, meaning that the intermolecular forces allow the molecules to move around. In both of these phases, the materials don’t have fixed shapes and instead are shaped by the containers holding them.
Solids are not fluid, but liquids share a different important property with them. Liquids and solids are both held together by strong intermolecular forces and are much denser than gases, leading to their description as “condensed matter” phases because they are both relatively incompressible. (Figure 2 shows the differences of gases, liquids, and solids at the atomic level.)
Figure 2: The three states of matter at the atomic level: solid, liquid, and gas. image © Yelod
Most substances can move between the solid, liquid, and gas phases when the temperature is changed. Consider the molecule H2O: It takes the form of ice, a crystalline solid, below 0°C; water, a liquid, between 0°C and 100°C; and water vapor, or steam, a gas, above 100°C. These transitions occur because temperature affects the intermolecular attraction between molecules. When H2O is converted from a liquid to gas, for instance, the rising temperature makes the molecules’ kinetic energy increase such that it eventually overcomes the intermolecular forces and the molecules are able to move freely about in the gas phase. However, the intramolecular forces that hold the H2O molecule together are unchanged; H2O is still H2O, regardless of its state of matter. You can read more about phase transitions in the States of Matter module.
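The temperature ranges above can be captured in a few lines of code. This is a simplified sketch for H2O at standard atmospheric pressure only (real phase behavior also depends on pressure, which is why this is not a general-purpose phase function); the function name is our own.

```python
def water_phase(temp_celsius):
    """Phase of H2O at 1 atm, using the transition temperatures 0 and 100 C.
    A simplified sketch: ignores pressure and the mixed-phase transition points."""
    if temp_celsius < 0:
        return "solid (ice)"
    elif temp_celsius < 100:
        return "liquid (water)"
    else:
        return "gas (steam)"

for t in (-10, 25, 110):
    print(f"{t} C -> {water_phase(t)}")
```

Running it for -10, 25, and 110 degrees returns ice, liquid water, and steam, matching the ranges given in the text.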
Now that we’ve discussed how liquids are similar to and different from solids and gases, we can focus on the wide world of liquids. First, though, we need to briefly introduce the different types of intermolecular forces that dictate how liquids, and other states of matter, behave.
As we described earlier, intermolecular forces are attractive or repulsive forces between molecules, distinct from the intramolecular forces that hold molecules together. Intramolecular forces do, however, play a role in determining the types of intermolecular forces that can form. Intermolecular forces come in a range of varieties, but the overall idea is the same for all of them: A charge within one molecule interacts with a charge in another molecule. Depending on which intramolecular forces, such as polar covalent bonds or nonpolar covalent bonds, are present, the charges can have varying permanence and strengths, allowing for different types of intermolecular forces.
So, where do these charges come from? In some cases, molecules are held together by polar covalent bonds – which means that the electrons are not evenly distributed between the bonded atoms. (This type of bonding is described in more detail in the Chemical Bonding module.) This uneven distribution results in a partial charge: The atom with more electron affinity, that is, the more electronegative atom, has a partial negative charge, and the atom with less electron affinity, the less electronegative atom, has a partial positive charge. This uneven electron sharing is called a dipole. When two molecules with polar covalent bonds are near each other, they can form favorable interactions if the partial charges align appropriately, as shown in Figure 3, forming a dipole-dipole interaction.
Figure 3: In panel A, a molecule of water, H2O, is shown with uneven electron sharing resulting in a partial negative charge around the oxygen atom and partial positive charges around the hydrogen atoms. In panel B, three H2O molecules interact favorably, forming a dipole-dipole interaction between the partial charges.
Hydrogen bonds are a particularly strong type of dipole-dipole interaction. (Note that although they are called “bonds,” they are not covalent or ionic bonds; they are a strong intermolecular force.) Hydrogen bonds occur when a hydrogen atom is covalently bonded to one of a few non-metals with high electronegativity, including oxygen, nitrogen, and fluorine, creating a strong dipole. The hydrogen bond is the interaction of the hydrogen from one of these molecules and the more electronegative atom in another molecule. Hydrogen bonds are present, and very important, in water, and are described in more detail in our Water: Properties and Behavior module.
Hydrogen bonds and dipole-dipole interactions require polar bonds, but another type of intermolecular force, called London dispersion forces, can form between any molecules, polar or not. The basic idea is that the electrons in any molecule are constantly moving around and sometimes, just by chance, the electrons can end up distributed unequally, creating a temporary partial negative charge on the part of the molecule with more electrons. This partial negative charge is balanced by a partial positive charge of equal magnitude on the part of the molecule with fewer electrons, with the positive charge coming from the protons in the nucleus (Figure 4). These temporary partial charges in neighboring molecules can interact in much the same way that permanent dipoles interact. The overall strength of London dispersion forces depends on the size of the molecules: larger molecules can have larger temporary dipoles, leading to stronger London dispersion forces.
Figure 4: Two nonpolar molecules with symmetrical electron distributions (panel A) can become polar (panel B) when the random movement of electrons results in a temporary negative charge in one of the molecules, inducing an attractive (positive) charge in the other.
Now, you might ask, if molecules can develop temporary partial charges that interact with each other, these temporary charges should also be able to interact with permanent dipoles, right? And you would be correct. These interactions are called, very creatively, dipole-induced dipole interactions. The partial charge of the polar molecule interacts with the electrons in the nonpolar molecule and “induces” them to move so they’re not evenly distributed anymore, creating an induced dipole that can interact favorably with the polar molecule’s permanent dipole (Figure 5).
Figure 5: When a polar molecule interacts with the electrons in a nonpolar molecule (panel A), the nonpolar molecule is induced to become a dipole and interacts favorably with the polar molecule (panel B).
As you might have guessed, London dispersion forces and dipole-induced dipole interactions are generally weaker than dipole-dipole interactions. These forces, as well as hydrogen bonds, are all van der Waals forces, which is a general term for attractive forces between uncharged molecules.
There’s a lot more to intermolecular forces than what we’ve covered here, but with this brief introduction, we’re ready to get back to the main event: liquids, and how intermolecular forces determine their properties and behavior.
If you’ve ever used oil for cooking or working on a car, you know that it’s nice and slippery. That’s probably why you used it: it keeps stir-fry pieces from sticking to each other or the pan, and it helps engine pistons and other moving parts slide easily.
One of the reasons oils are good for these applications is because they have low cohesion: the liquid molecules don’t interact particularly strongly with each other because the intermolecular forces are weak. The primary intermolecular forces present in most oils and many other organic liquids – liquids made predominantly of carbon and hydrogen atoms, also referred to as non-polar liquids – are London dispersion forces, which for small molecules are the weakest types of intermolecular forces. These weak forces lead to low cohesion. The molecules don’t interact strongly with each other, so they can slide right past one another.
On the other end of the cohesion spectrum, consider a dewdrop on a leaf in the early morning (Figure 6). How can such a thing exist if, as explained earlier, liquids flow and take the shape of the container holding them? As described above and in the Water module, water molecules are held together by strong hydrogen bonds. These strong forces lead to high cohesion: The water molecules interact with each other more strongly than they interact with the air or the leaf itself. (The interaction of the water with the leaf is an example of adhesion, or the interaction of a liquid with something other than itself; we’ll discuss adhesion in the next section.) Because of water’s high cohesion, the molecules form a spherical shape to maximize their interactions with each other.
Figure 6: Dew drops on a leaf. image © Cameron Whitman/iStockphoto
This high cohesion also creates surface tension. You may have noticed insects walking on water on an outdoor pond (Figure 7), or seen a small object such as a paperclip resting on water’s surface instead of sinking; these are two examples of water’s surface tension in action. Surface tension results from the strong cohesive forces of some liquids. These forces are strong enough to be maintained even when they experience external forces like the weight of an insect walking across its surface.
Figure 7: The water strider (Gerris remigis), a common water-walking insect. image © John Bush, MIT/NSF
Adhesion is the tendency of a compound to interact with another compound. (Remember that, in contrast, cohesion is the tendency of a compound to interact with itself.) Adhesion helps explain how liquids interact with their containers and with other liquids.
One example of an interaction with high adhesion is that between water and glass. Both water and glass are held together by polar bonds. Therefore, the two materials can also form favorable polar interactions with each other, leading to high adhesion. You may have even seen these attractive adhesive forces in action in lab. When water is in a glass graduated cylinder, for example, the water creeps up the sides of the glass, creating a concave curve at the top called a meniscus, as shown in the figure below. Water in graduated cylinders made out of some types of non-polar plastic, on the other hand, forms a flat meniscus because there are neither attractive nor repellent adhesive forces between the water and the plastic. (See Figure 8 for a comparison of polar and non-polar graduated cylinders.)
Figure 8: In graduated cylinder A, made of glass, the meniscus is concave; in cylinder B, made of plastic, the meniscus is flat. image © Achim Prill/iStockphoto
At the beginning of the module, we said that one of the defining features of liquids is their ability to flow. But among liquids there is a huge range in how easily this happens. Consider the ease with which you can pour yourself a glass of water, as compared to the relative challenge of pouring thick, slow-moving motor oil into an engine. The difference is their viscosity, or resistance to flow. Motor oil is quite viscous; water, not so much. But why?
Before we dive into the differences between water and motor oil, let’s compare water with another liquid: pentane (C5H12). While we don’t think of water as viscous, it’s actually more viscous than pentane. Remember, water molecules form strong hydrogen bonds with each other. Pentane, on the other hand, made up of just hydrogen and carbon atoms, is nonpolar, so the only types of intermolecular forces it can form are the relatively weak London dispersion forces. The weaker intermolecular forces mean that the molecules can more easily move past each other, or flow – hence, lower viscosity.
But both water and pentane are relatively small molecules. When we’re looking at liquids made up of bigger molecules, size comes into play as well. For example, compare pentane to motor oil, which is a complex mixture of hydrocarbons much larger than pentane, some with dozens or even hundreds of carbons in a chain. If you’ve ever poured motor oil into an engine, you know it’s pretty viscous. Both liquids are nonpolar, and so have relatively weak intermolecular forces; the difference is the size. The big, bendy motor oil hydrocarbons can literally get tangled with their neighbors, which slows the flow. It’s almost like a pot of spaghetti: If you don’t prepare it correctly, you can end up with a blob of tangled noodles that are very hard to serve because they’re all stuck together – in a sense, it’s a viscous pasta blob. Shorter noodles – or smaller molecules – don’t tangle as much, so they tend to be less viscous (Figure 9).
Figure 9: Group A consists of large molecules in a tangled blob (a viscous liquid) and Group B consists of smaller molecules with fewer entanglements (a less viscous liquid).
Returning to our original comparison of motor oil versus water, even though water has such strong intermolecular forces, the much larger size of the molecules in the motor oil makes the oil more viscous.
There’s one more piece to the story: temperature. Warming a liquid makes it less viscous, as you may have observed if you’ve ever experienced how much easier it is to pour maple syrup onto your pancakes when the syrup has been warmed than when it is cold. This is the case because temperature affects both of the factors that determine viscosity in the first place. First, increasing the temperature increases the molecules’ kinetic energy, which allows them to overcome the intermolecular forces more easily. It also makes the molecules move around more, so those big molecules that got tangled up when they were cold become more dynamic and are more able to slide past each other, allowing the liquid to flow more easily.
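The falling viscosity of a warming liquid is often modeled with an Arrhenius-type relation, eta = A * exp(E / (R*T)), where E is an activation energy for flow: as T rises, the exponent shrinks and the liquid flows more easily. The sketch below uses that standard model with illustrative, made-up parameters (A and E are not measured values for maple syrup) purely to show the trend.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def viscosity(temp_kelvin, prefactor, activation_energy):
    """Arrhenius-type viscosity model: eta = A * exp(E / (R*T)).
    Higher temperature -> smaller exponent -> lower viscosity."""
    return prefactor * math.exp(activation_energy / (R * temp_kelvin))

# Hypothetical parameters chosen only to illustrate the temperature trend
A = 1e-6   # Pa*s, illustrative prefactor
E = 30e3   # J/mol, illustrative flow activation energy

cold = viscosity(278.15, A, E)   # syrup from the fridge, 5 C
warm = viscosity(318.15, A, E)   # warmed syrup, 45 C
print(f"cold/warm viscosity ratio: {cold / warm:.1f}")  # ~5x with these numbers
```

Even a modest 40-degree warming cuts the modeled viscosity severalfold, which is consistent with the everyday observation that warm syrup pours much more readily than cold.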
When you think of water, you might think of its chemical formula, H2O. This formula describes a pure liquid composed only of H2O molecules, with absolutely no other components. The reality, though, is that the vast majority of liquids we encounter are complex mixtures of many compounds.
Solutions are made of a liquid solvent in which one or more solutes are dissolved. Solutes can be solids, liquids, and gases. There are many, many common solutions that use water as the solvent, including salt water and pretty much any type of flavored drink. Carbon dioxide (CO2) gas is a common gaseous solute in carbonated drinks, and ethanol is a liquid solute in any alcoholic drink. Although solutions are mixtures of multiple compounds, the properties discussed in the previous section still apply.
Not all solutes dissolve in all solvents. You can dissolve huge amounts of some solutes in some liquids, and other solutes are only marginally soluble in any solvent. The underlying explanation for solubility is that “like dissolves like.” Nonpolar solutes generally dissolve better in nonpolar liquids, and polar solutes dissolve better in polar liquids. For example, oil-based (and therefore nonpolar) paints require a non-polar solvent such as turpentine for clean up; they will not dissolve in water, which is polar. Table salt or sugar, on the other hand, both polar solids, easily dissolve at high concentrations in water.
More complex solutions include emulsions, colloids, and suspensions. Briefly, an emulsion is a well-dispersed mixture of two or more liquids that don’t normally mix. Mayonnaise, for example, is an emulsion of oil, egg yolk, and vinegar or lemon juice, which is made by very vigorous mixing.
Colloids and suspensions both consist of insoluble particles in a liquid. In a colloid, the minuscule insoluble particles are distributed in a liquid and won’t separate. A suspension, on the other hand, is a liquid that contains larger insoluble particles that will eventually separate. Milk is a useful example of the difference between these two. Fresh milk is a suspension. It’s a complex mixture of components that don’t normally mix – water, fats, proteins, carbohydrates, and more – and if left alone the fat globules separate from the water-based portion of the mixture. (Remember the separation of vinegar and oil in salad dressing? The milk separation process is similar, with the oily fat separating from the water.) The milk at most grocery stores, on the other hand, is a colloid. The components don’t separate thanks to a process called homogenization, which breaks the fat globules into small enough particles that they can remain suspended in the liquid.
We’ve discussed a lot of different liquids, with varying cohesion, adhesion, and viscosity, as well as other properties. But in addition to this already wide variety, there are some substances that blur the distinction between liquid and solid. For example, as a kid you may have played with oobleck, a mixture of water and starch that gets its name from a Dr. Seuss book. Oobleck is a slimy substance that can flow between your fingers if you hold it gently in your hands but becomes hard and firm, almost solid, if you squeeze it.
For a more technical example, consider the material used in LCD television displays and other electronic screens. LCD stands for Liquid-Crystal Display. That doesn’t mean that the displays use both liquids and crystals; it means that they use a material that is both liquid and crystal, at the same time. This might sound like a contradiction – crystals are solids, not liquids, you say – but such materials exist.
The first liquid crystal discovered was a modified version of cholesterol, called cholesteryl benzoate. It’s a solid at room temperature and melts at around 150°C, but then things get weird. At about 180°C, it changes phase again, but not from liquid to gas; it goes from cloudy liquid to clear liquid. Austrian botanist and chemist Friedrich Reinitzer observed this unusual behavior in 1888 and discussed it with his colleague, German physicist Otto Lehmann. Lehmann then took over the investigation, studying cholesteryl benzoate and other compounds with similar double-melting behavior. When he looked at the cloudy phase under his microscope, he found that the material appeared crystalline, a defining feature of solids. But the phase also flowed, like a liquid. In 1904 he coined the term “liquid crystal” to describe this phase, with properties between those of a conventional liquid and crystalline solid. Liquid crystals play an important role in biology, particularly in membranes, which need to be fluid but also must retain a regular structure.
There are also some liquids that are so viscous you wouldn’t be blamed for thinking that they’re solid, such as pitch, a substance derived from plants and petroleum. It appears almost solid, and shatters if hit with a hammer, but if left to gravity it will flow extremely, extremely slowly. A few labs around the world are running so-called pitch drop experiments, in which they leave some pitch in a funnel and wait for it to drip; about 10 years pass between each drop (Figure 10).
Figure 10: The Pitch Drop Experiment at the University of Queensland (battery shown for size comparison). image © John Mainstone & Amada44
These examples of substances behaving in ways that seem to defy the traditional definitions for the phases of matter illustrate the inherent complexity of science and the natural world, even when it comes to something as seemingly simple as determining whether a substance is a liquid or a solid. In this module we have focused on defining and explaining the basic properties of liquids, which provides the foundation for you to think about states of matter in all their complexity. In other modules we discuss the solid and gas phases to help you contrast the different physical properties of these states.
When it comes to different liquids, some mix well while others don’t; some pour quickly while others flow slowly. This module provides a foundation for considering states of matter in all their complexity. It explains the basic properties of liquids, and explores how intermolecular forces determine their behavior. The concepts of cohesion, adhesion, and viscosity are defined. The module also examines how temperature and molecule size and type affect the properties of liquids.
Liquids share some properties with solids – both are considered condensed matter and are relatively incompressible – and some with gases, such as their ability to flow and take the shape of their container.
A number of properties of liquids, such as cohesion and adhesion, are influenced by the intermolecular forces within the liquid itself.
Viscosity is influenced by both the intermolecular forces and molecular size of a compound.
Most liquids we encounter in everyday life are in fact solutions, mixtures of a solid, liquid or gas solute within a liquid solvent.
- HS-C6.2, HS-PS1.A3, HS-PS1.A4
Rachel Bernstein, Ph.D., Anthony Carpi, Ph.D. “Properties of Liquids” Visionlearning Vol. CHE-3 (5), 2015.