March 29, 2006

The Size of the Cosmos

Henrietta Leavitt, Periods of 25 Variable Stars in the Small Magellanic Cloud, 1912

Much of astronomy is tied up in attempts to determine the distance to various objects, such as planets, stars, and galaxies. This seemingly simple task is actually shockingly difficult when astronomical distances are involved. The easiest way to measure a distance is to actually travel the distance, effectively counting your steps. This is impractical even on Earth when the destination is too far away or too inaccessible (e.g. when measuring the height of Mt. Everest). In these cases, geometry can be used. If you start with two points that are a known distance apart, you can measure the direction from these two points to a third point and work out the distances. Effectively, you measure one side and two angles of a triangle, which provides enough information to work out the lengths of the other two sides.
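
To make this concrete, here is the two-angles-and-a-side calculation in code, with made-up survey numbers; the law of sines does the work:

    import math

    # Hypothetical survey: two observers 1000 m apart both sight the same
    # distant peak and measure the angle between the baseline and the peak.
    baseline = 1000.0                       # the known side, in metres
    angle_a = math.radians(80)              # angle measured at observer A
    angle_b = math.radians(85)              # angle measured at observer B
    angle_c = math.pi - angle_a - angle_b   # angles of a triangle sum to 180 degrees

    # Law of sines: each side is proportional to the sine of the opposite angle.
    dist_a_to_peak = baseline * math.sin(angle_b) / math.sin(angle_c)
    print(f"distance from A to the peak: {dist_a_to_peak:.0f} m")  # ~3849 m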

This method only works if the known side's length is large enough compared to the unknown lengths. Otherwise there will be no measurable difference between the two angles. In 1672, the distances to the planets were measured in this way, with the two known points far apart on the Earth (Paris, France, and Cayenne, French Guiana). By 1838, the distances to some stars were measured, with the two known points on opposite sides of the solar system: one measurement is taken in summer, and the other in winter, when the Earth is on the opposite side of the sun!

Unfortunately, only the nearest stars can have their distances measured in this way. Out of literally millions of stars visible in telescopes, only those closer than about 500 light years can have their distances measured with any accuracy using geometry alone.

Enter Henrietta Leavitt. She was a "computer" at Harvard College Observatory, literally a person who was hired to compute numbers. She computed, and studied, the brightnesses of stars. There is a simple relationship between the amount of light a star produces, its distance, and how bright it appears to us. To understand this, imagine all the light produced by a star at a particular instant in time. As time passes, it heads away from the star in all directions at the same speed. The light appears, in other words, to be spread evenly across the surface of an expanding sphere. As the distance grows, the surface area of the sphere grows, and the light is spread more and more thinly. Because the surface area of a sphere grows with the square of its radius, the light reaching an observer decreases in intensity with the square of the distance to the source of light. In other words, if you know how much light a star is putting out, and you can see how bright it is, you can work out how far away it must be.
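
In symbols: a star of luminosity L delivers a flux F = L / (4πd²) at distance d, so d = √(L / 4πF). A toy version in code, with made-up numbers and the star's light output assumed known:

    import math

    # Made-up example: a star whose total light output we somehow know,
    # and whose apparent brightness we can measure.
    L_SUN = 3.8e26                  # luminosity of the sun, in watts
    luminosity = 2.0 * L_SUN        # assume the star emits twice the sun's output
    flux = 1.0e-9                   # measured brightness at Earth, watts per m^2

    # Inverse-square law: flux = luminosity / (4 * pi * distance^2)
    distance_m = math.sqrt(luminosity / (4 * math.pi * flux))
    print(f"distance: {distance_m / 9.46e15:.0f} light years")  # ~26 light years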

Isaac Newton guesstimated some stellar distances by assuming that all stars are identical to the sun. He correctly concluded that stars are very, very far away. Unfortunately, once nearby stars had their distances measured by geometry, it was shown that stars vary a lot in the amount of light they put out: they aren't all identical to the sun.

Anyway, back to Miss Leavitt. One of the tasks she accomplished was to assign known brightnesses to sets of stars. The ideal way to do this is to take many pictures of a star of known brightness at carefully varied exposure times. This produces a set of pictures that vary in brightness in exactly the same way that the exposure times were varied. Then you can compare these to pictures of other stars taken with the same equipment and determine their brightness. Keep in mind that you're doing this by comparing the size of spots of light on photographic negatives using a magnifying glass.

Miss Leavitt was given 277 photographs from 13 telescopes, with varying apertures and exposure times, and asked to calculate the brightnesses of 96 stars in the pictures, so that they could be used as a standard by which other astronomers could measure brightnesses. She succeeded. From this, you can infer that she was an expert at measuring the brightnesses of stars.

At about the same time as this project, she started studying stars whose brightness varied noticeably over time. In her landmark paper, she looked at 25 variables of a particular type, called Cepheid variable stars, all in the Small Magellanic Cloud, a group of stars thought to be at roughly the same distance from us. Because of this, she knew that their apparent brightnesses were not being significantly distorted by distance: if one of them appeared to be twice as bright as another, it had to be because it was putting out twice as much light, not because of a difference in distance between the two stars.

Each Cepheid variable varies in brightness over a particular period: e.g. the brightness of star #7 in her paper peaked every 45 days, while the brightness of star #2 peaked every 20 days. Much to everyone's surprise, Miss Leavitt discovered that the period of each star was related to its apparent brightness, and therefore to the amount of light it was putting out. In other words, if you can find a Cepheid variable star anywhere, you can measure its period, use that to work out the amount of light it puts out, measure its apparent brightness, and combine those to work out how far away it is. With this, you can estimate the distance to any group of stars that happens to contain a Cepheid.
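
The whole chain fits in a few lines of code. This is only a sketch: the period-luminosity coefficients below are rough, modern-style illustrative values, not Leavitt's calibration:

    import math

    def cepheid_distance_parsecs(period_days, apparent_mag):
        """Estimate a Cepheid's distance from its period and apparent brightness.

        The period-luminosity coefficients are rough illustrative values,
        not Leavitt's own calibration."""
        # The period tells us the star's absolute magnitude (its light output).
        absolute_mag = -2.8 * math.log10(period_days) - 1.4
        # Comparing absolute to apparent magnitude gives the distance; this is
        # the distance-modulus relation, a log form of the inverse-square law.
        return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

    # A hypothetical Cepheid with a 20-day period and apparent magnitude 14:
    print(f"{cepheid_distance_parsecs(20.0, 14.0):.0f} parsecs")  # ~64,000 pc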

Although the Cepheids Miss Leavitt studied were not at a known distance, within a year someone found a Cepheid close enough to measure its distance by simple geometry. With this discovery, the measurement of distances expanded from around 500 light years to over 20 million light years. Within a decade, studies of globular clusters had mapped out the shape of the Milky Way, our home galaxy, and Hubble soon discovered the existence of stars well outside the Milky Way, in other galaxies entirely.

March 25, 2006

Sidebar: properties of atoms

So, how do you measure the size and mass of individual atoms?

Chemists discovered long ago that various substances combine in exact relative amounts during chemical reactions. For example, 16 grams of Oxygen will combine with 2 grams of Hydrogen to form 18 grams of Water. From the vast catalog of reactions such as that, you can work out the relative masses of each type of atom. Carbon weighs 12 times as much as Hydrogen, and 0.75 times as much as Oxygen, and so on. So, if you can figure out the exact weight of any one type of atom, you've got them all.
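
The bookkeeping looks something like this, assuming the molecular formulas (two hydrogens per oxygen in water, and so on) are already known, which was itself hard-won knowledge:

    # Combining weights, in grams (illustrative round numbers).
    # Water is H2O: 2 g of hydrogen combines with 16 g of oxygen,
    # and each oxygen atom is paired with 2 hydrogen atoms, so per atom:
    oxygen_vs_hydrogen = (16 / 1) / (2 / 2)     # 16.0

    # Carbon dioxide is CO2: 12 g of carbon combines with 32 g of oxygen,
    # with 2 oxygen atoms per carbon atom:
    carbon_vs_oxygen = (12 / 1) / (32 / 2)      # 0.75

    print(carbon_vs_oxygen * oxygen_vs_hydrogen)  # carbon vs hydrogen: 12.0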

Actually, all you really need to know is how many atoms there are in a particular lump of stuff. Then you can divide the mass of that lump by the number of atoms in it, and estimate the mass of a single atom. Nowadays, we have several tools for figuring out how many atoms there are in something (at least one of which is a Discovery in its own right), but in the 1800s your choices were more limited. I've heard that one way it was done was through kinetic theory, which I don't have the details of. Another was to look at the way air scatters light. Electromagnetic theory was well developed, and there was already an equation governing the scattering of light passing through air, which depended, among other things, on the number of atoms or molecules in a cubic centimeter of air. So, in 1890 Lorenz did the math, weighed a cubic centimeter of air, and came up with a pretty good estimate of the masses of all the different types of atoms.

With this information in hand, it becomes relatively easy to estimate the volume of an atom too: just divide the volume of a lump of stuff by the number of atoms in it, which in turn can be worked out by dividing the lump's mass by the mass of a single atom. Of course, atoms aren't perfectly packed together in solid matter, but it's within a factor of 2 or so, which is good enough for some purposes.
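
Putting both steps into numbers, using modern values for concreteness (the 19th-century estimates were much rougher):

    # Modern values, for concreteness.
    AVOGADRO = 6.022e23       # atoms in a mole
    molar_mass = 55.85        # iron, grams per mole
    density = 7.87            # iron, grams per cubic centimetre

    mass_per_atom = molar_mass / AVOGADRO                     # ~9.3e-23 g
    volume_per_atom = (molar_mass / density) / AVOGADRO       # ~1.2e-23 cm^3

    # Treat that volume as a little cube to get a rough atomic diameter.
    print(f"diameter: {volume_per_atom ** (1 / 3):.1e} cm")   # ~2.3e-8 cm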

Quite a chain of logic, eh? Anyway, once these ballpark estimates were well known, lots of work got done on refining the numbers to the point at which we find them today.

The Nucleus of the Atom

Ernest Rutherford, The Scattering of α and β Particles by Matter and the Structure of the Atom, 1911

This one is really easy to understand, conceptually. Before this paper came along, scientists had a good idea of the size, mass, and electric charge of each type of atom. The then-current model of the atom had its mass and positive charge distributed evenly throughout its volume, with electrons embedded essentially at random. Electromagnetic forces were well understood, so collisions between charged particles could be accurately modelled.

Rutherford, working with Geiger, built some radiation guns (which fire the α and β particles referred to in the paper's title), aimed them at thin foils, and built detectors to measure deflections of the radiation particles caused by collisions with the atoms in the foils. The detectors had to be set up in particular arcs, so they looked in the expected places first, and found the expected deflections. Then Rutherford assigned a young student, Ernest Marsden, to look for deflections where none were expected. To everyone's surprise, very large deflections were discovered. Some radiation actually bounced back from the atoms, a result Rutherford described as "... as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you."

Rutherford's paper proposes a new model of the atom: practically all of its mass, and all of its positive charge, is concentrated in the center of the atom. If an atom were the size of a sports stadium, the nucleus would be about the size of a pea. Because of this concentration, a particle that passes close to the nucleus can have its path greatly perturbed, where it would pass through a diffuse atom almost unaffected. Rutherford demolishes the notion that the observed deflections could be caused by an accumulation of small encounters by working out the probabilities and noting that the observations show far more large deflections than the old model would predict.
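
As a back-of-the-envelope illustration (my numbers, not Rutherford's calculation): in a head-on collision, the α particle stops where all of its kinetic energy has been converted into electrostatic potential energy, and that stopping distance puts an upper bound on the size of the nucleus:

    # Head-on collision of an alpha particle with a gold nucleus.
    # Kinetic energy all turns into potential energy: E = k * (2e) * (Z*e) / d.
    # All numbers below are illustrative assumptions, not Rutherford's own.
    k = 8.988e9           # Coulomb constant, N m^2 / C^2
    e = 1.602e-19         # elementary charge, coulombs
    Z = 79                # atomic number of gold
    energy = 7.7e6 * e    # a ~7.7 MeV alpha particle, in joules

    d = k * 2 * Z * e**2 / energy
    print(f"closest approach: {d:.1e} m")  # ~3e-14 m, thousands of times
                                           # smaller than the atom (~1e-10 m)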

Rutherford also notes that the concentration of positive charge in the nucleus implies a vast amount of energy is preventing the charges from flying apart, and proposes that α particles are expelled from the nucleus of the atom. This naturally explains the high velocities of α particles when they are expelled from radioactive atoms.

So: the atom, previously thought to be an indivisible unit of matter, was found to have internal structure and smaller parts. That opens up a whole new can of worms: are these smaller parts also divisible? Is there, in fact, a smallest, indivisible piece of matter?

March 18, 2006

Special Relativity

Albert Einstein, On the Electrodynamics of Moving Bodies, 1905

In 1887 Michelson and Morley experimentally determined that light in a vacuum appears to travel at exactly the same speed, no matter what direction it moves and no matter what speed or direction the detectors are moving. This effectively means that whether you are running towards or away from a beam of light makes no difference to how fast it appears to be going.

Many scientists couldn't accept this, including Michelson, who kept repeating the experiments until being awarded the Nobel Prize in 1907. Einstein was the first to start with the assumption that the observations were correct and work out the consequences in detail.

Actually, he starts with two assumptions. Along with assuming light always appears to travel at the same speed, he assumes that all the known laws of motion and time work normally for objects at rest relative to each other. A key insight here is that, relative to the speed of light, nothing around us is moving. Any distortions to the normal laws of motion and time caused by the fixed speed of light are so tiny that we can't detect them in our daily lives.

Imagine, if you will, a bullet train with a lightbulb at each end and a light detector exactly between them. The detector is rigged to go off only if light from both bulbs arrives at the same time. The people on the train, the light bulbs, and the detector are all at rest relative to each other, so normal physics applies: if both light bulbs are turned on at the same time, the light will reach the detector at the same time. To people watching the train whip by, however, the light travelling from the front of the train towards the back must cover a smaller distance than the light travelling from back to front. However, since the two light beams must arrive at the detector at the same time (otherwise the detector won't go off), and must travel at the same speed, the lightbulbs cannot have been switched on at the same time.

This is counter to our day-to-day experience only because bullet trains, at a mere 300 km/h, are barely moving relative to the speed of light. If you were to perform this experiment in real life, the lightbulb at the back of the train would appear to be switched on less than a trillionth of a second before the lightbulb at the front.
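
To check that figure, assuming a 200 m train (equating the two light travel times gives an offset of about Dv/c² for a train of length D):

    # Offset between the two switch-on times, as seen from the ground.
    c = 3.0e8         # speed of light, m/s
    v = 300 / 3.6     # 300 km/h converted to m/s
    D = 200.0         # assumed train length, metres

    print(f"{D * v / c**2:.1e} seconds")  # ~1.9e-13 s, under a trillionth of a second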

Through similar constructions, Einstein worked out exactly how time, distance, and simultaneity change as relative velocity increases. The math is high-school level geometry, but Einstein takes his analysis all the way, working out every detail. In later years, thousands of concrete predictions were made based on his work, all of which have been confirmed by experiment. The results are staggering: time and space are not constant; light is. This is a paradigm shift comparable to the notion that the Earth orbits the Sun rather than the other way around. Light is, in some way, fundamental to the universe.
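
The heart of those details is the Lorentz factor, γ = 1/√(1 - v²/c²), the amount by which a moving clock runs slow and a moving ruler shrinks. A quick sketch:

    import math

    def lorentz_factor(v, c=3.0e8):
        """Factor by which a clock moving at v (m/s) runs slow."""
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    print(lorentz_factor(300 / 3.6))  # a bullet train: ~1.00000000000004
    print(lorentz_factor(1.5e8))      # half the speed of light: ~1.15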

March 11, 2006

The Particle Nature of Light

Albert Einstein, On a Heuristic Point of View Concerning the Production and Transformation of Light, 1905

In 1905, Einstein didn't look anything like the later popular images of an older man with wildly unkempt white hair. When he published this paper, which later won him the Nobel Prize, he was a patent clerk, and looked it:

[photograph: Einstein as a young patent clerk]

In this paper, Einstein points out that there is a mathematical similarity between the way black-body radiation behaves and the way a gas of particles behaves, and takes the crucial step of suggesting that light is, in fact, particulate. This goes one step further than Planck, who only showed that atoms absorb and emit light energy in discrete quanta.

Along the way, Einstein derives h, Planck's constant, in a new way, confirming that the energy of a quantum of light is h times the frequency of the light. After this theoretical triumph, Einstein uses his model of light to explain some puzzling experimental observations:

In 1902, Philipp Lenard observed the photoelectric effect: when you shine ultraviolet light on an object, it emits electrons. Lenard noted that the kinetic energy of the escaping electrons did not vary with the intensity of the light, but instead increased with the frequency of the light, which was unexpected and difficult to explain.

Einstein proposed that an electron absorbs one quantum of light, and if this imparts enough energy to allow it to escape its atom, it does, with a kinetic energy equal to the energy of that quantum minus the energy needed to escape from the atom. This prediction was confirmed a decade later. More to the point, increasing the intensity of light increases the number of incoming quanta, and thus the number of escaping electrons, while increasing the frequency increases the energy of the quanta and electrons. This effect is simple, definitive evidence that light energy can only be transferred to electrons in discrete chunks of known, fixed size.
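
In equation form, Einstein's prediction is KE = hf - W, where f is the frequency of the light and W is the energy needed to escape (what we now call the work function). A sketch with illustrative numbers:

    # Photoelectric effect: kinetic energy of an ejected electron, KE = h*f - W.
    h = 6.626e-34     # Planck's constant, joule-seconds
    eV = 1.602e-19    # joules per electron-volt

    def electron_ke_ev(frequency_hz, work_function_ev):
        """Returns KE in eV, or 0 if one quantum can't free the electron."""
        ke_joules = h * frequency_hz - work_function_ev * eV
        return max(ke_joules, 0.0) / eV

    # Ultraviolet light at 1.5e15 Hz on a metal needing ~4.3 eV to free an
    # electron (an illustrative value, roughly that of zinc):
    print(f"{electron_ke_ev(1.5e15, 4.3):.2f} eV")  # ~1.90 eV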

Einstein's genius was partly in the simplicity of his arguments. His conclusions are, on occasion, mind-boggling, but the arguments that lead to those conclusions are often easy to understand. (As with Special Relativity.)

Interestingly, the idea that light is made of particles may be an illusion created by the fact that electrons can only absorb and emit light in discrete amounts. I think this is why Einstein gave his paper the title On a Heuristic Point of View Concerning the Production and Transformation of Light. Are there any experiments that show that a quantum of light energy is concentrated into a point in space? Can there be, if electrons can only absorb or emit light in quanta?

March 06, 2006

Hormones

William Bayliss and Ernest Starling, The Mechanism of Pancreatic Secretion, 1902

Nerves were discovered by the ancient Greeks, and until Bayliss and Starling discovered the first hormone, nerves were considered to be the only method used by the body to communicate with and regulate itself. To modern biochemists, used to thinking of the body primarily as a chemical machine, this can be somewhat surprising, but it really makes perfect sense. Nerves are large enough to be discovered during a dissection, as indeed they were, while chemical messengers are impossible to detect directly with the unaided senses.

How, then, did Bayliss and Starling figure out that a chemical messenger was being created by the intestine to signal the pancreas that it should produce digestive juices? This is the point at which we leave the abstract and enter the gory. Those easily upset by vivisection should skip the next paragraph.

Bayliss and Starling took an anesthetized dog, immersed it in saline solution, and pumped its lungs with oxygen to keep it alive. Then they cut open the dog's abdomen, attached a device to measure the amount of fluid being produced by the pancreas, isolated a loop of the small intestine, and severed all the nerves attaching it to the rest of the body while leaving the blood vessels intact. Then they poured dilute hydrochloric acid (about what you would expect to enter the intestine from the stomach) into one end of the isolated segment of intestine. Much to their surprise, the pancreas started producing digestive juices.

Up until this point, Bayliss and Starling had been performing a control experiment. That is, they thought that the secretion of the pancreas was triggered by a nerve, and they were doing a sanity check. When they severed all the nerves, they expected the pancreas to stop reacting to hydrochloric acid in the small intestine. Instead, they discovered a new communication system in the body, one unrelated to nerves.

Since there were no theories to support them, Bayliss and Starling relied on experiment... lots and lots of experiments. In the most important one, they ground up some of the mucous membrane of a small intestine with sand and hydrochloric acid, filtered it, and injected the remaining fluid into a vein. The result: the pancreas secreted its juices. In further experiments, they showed: secretin (their name for the substance causing the effect) is unaffected by boiling; it is not produced by the lower end of the intestine; it has no effect on any other glands in the body; the same effect cannot be reproduced by extracts from any other part of the body; the effect cannot be reproduced with the ash of the extract, proving that the inorganic constituents are not responsible; secretin does not evaporate or diffuse easily; secretin is destroyed by tryptic solutions (digestive juices that work on proteins); secretin made from the intestines of cats, rabbits, oxen, monkeys, men, and frogs is active on the pancreases of dogs; and secretin from dogs, rabbits, monkeys, and men is active on rabbits and monkeys.

I have to admit that although I took a lot of biology courses, it never occurred to me to ask "Just how did anyone figure out that secretin exists?" or similar questions. The dissections and experiments on living animals I did for classes, although interesting and informative, seemed like a needlessly wasteful way to teach students biology. Why not teach them with pictures, theory, and simulation?

I see now that lab dissections are not just teaching known information, but teaching the next generation of experimentalists the tools of their trade. Without the training, many future researchers would be uncomfortable, and worse, unskilled at their work, and much knowledge would remain undiscovered. And there is definitely a lot left to discover in biology: I know of at least a dozen things that have been learned through experiment in the last 5 years, and that's just the stuff interesting enough to make it to the mainstream press.