Quantum physics started out as a rather desperate measure to avoid some of the spectacular failures of what we now call “classical physics.” The story begins with the discovery by Max Planck, in 1900, of the law that perfectly describes the radiation spectrum of a glowing hot object. One of the things classical physics had predicted was that you would get blinded by ultraviolet light if you looked at the burner of your stove.

At first it was just a fit to the data — “a fortuitous guess at an interpolation formula,” as Planck himself described his law. A few weeks later, however, it was found to imply the quantization of energy in the emission of electromagnetic radiation and thus to be irreconcilable with classical physics. According to the classical theory, a glowing hot object emits energy continuously. Planck’s formula implies that it emits energy in discrete quantities proportional to the frequency *f* (in cycles per second) of the radiation:

*E* = *hf*,

where *h* = 6.626 069×10^{−34} joule-seconds is Planck’s constant. Often it is more convenient to use the reduced Planck constant ℏ (“h-bar”), which equals *h* divided by 2π (the ratio of a circle’s circumference to its radius). With this we can write

*E* = ℏω,

where the angular frequency ω (in radians per second) replaces *f*.
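As a quick numerical check that the two formulas agree (a sketch; the frequency 5×10^14 Hz is an illustrative value roughly corresponding to green light):

```python
import math

h = 6.62607015e-34        # Planck's constant in joule-seconds (exact, 2019 SI)
hbar = h / (2 * math.pi)  # reduced Planck constant

f = 5.0e14                # illustrative frequency, Hz (roughly green light)
omega = 2 * math.pi * f   # angular frequency, radians per second

E_from_f = h * f              # E = h*f
E_from_omega = hbar * omega   # E = hbar*omega -- the same energy
print(E_from_f)               # about 3.3e-19 joules per quantum
```

The minuteness of this energy is why the discreteness of emission went unnoticed before Planck.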

In 1911, Ernest Rutherford proposed a model of the atom based on experiments conducted by Hans Geiger and Ernest Marsden. Geiger and Marsden had directed a beam of alpha particles (helium nuclei) at a thin gold foil. As expected, most of the alpha particles were deflected by at most a few degrees. Yet a tiny fraction of the particles were deflected through angles much larger than 90°. In Rutherford’s own words,^{[1]}

It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus.

The resulting model, which described the atom as a miniature solar system, with electrons orbiting the nucleus the way planets orbit a star, was, however, short-lived. Classical electromagnetic theory predicts that an orbiting electron will radiate away its energy and spiral into the nucleus in less than a nanosecond.

In 1913, Niels Bohr postulated that the angular momentum *L* of an orbiting atomic electron was quantized: its possible values are integral multiples of the reduced Planck constant:

*L* = *n*ℏ, *n* = 1,2,3….

Bohr’s postulate not only explained the stability of atoms but also accounted for the by then well-established fact that atoms absorb and emit electromagnetic radiation only at specific frequencies. It even enabled Bohr to calculate with remarkable accuracy the spectrum of atomic hydrogen — the particular frequencies at which it absorbs and emits light, visible as well as infrared and ultraviolet.

Yet apart from his quantization postulate, Bohr’s reasoning at the time remained completely classical. He assumed that the orbit of the hydrogen atom’s single electron was a circle, and that the atom’s nucleus — a single proton — was at the center. He used classical laws to calculate the orbiting electron’s energy *E*, expressed *E* as a function of the classical expression for *L*, and only then replaced this expression by *L* = *n*ℏ.
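For a circular orbit, the procedure just described can be sketched as a short calculation (a condensed version, writing *k* for the Coulomb constant, *m* for the electron mass, and *e* for the elementary charge):

```latex
% Coulomb attraction provides the centripetal force:
\frac{ke^2}{r^2} = \frac{mv^2}{r}
\quad\Longrightarrow\quad mv^2 = \frac{ke^2}{r}.
% Total energy = kinetic + potential:
E = \tfrac{1}{2}mv^2 - \frac{ke^2}{r} = -\frac{ke^2}{2r}.
% Express E through the angular momentum L = mvr:
L^2 = m^2v^2r^2 = mke^2\,r
\quad\Longrightarrow\quad E = -\frac{mk^2e^4}{2L^2}.
% Only now substitute the quantization postulate L = n\hbar:
E_n = -\frac{mk^2e^4}{2\hbar^2}\,\frac{1}{n^2} \approx -\frac{13.6\ \text{eV}}{n^2}.
```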

In this way Bohr obtained a discrete sequence of values *E*_{n}. And since these were the only values the energy of the orbiting electron was “allowed” to take, the energy that a hydrogen atom could emit or absorb had to equal the difference between two of them. The atom could “jump” from a state of lower energy *E*_{m} to a state of higher energy *E*_{n}, absorbing a photon of frequency (*E*_{n} − *E*_{m})/*h*, and it could “jump” back from the higher energy *E*_{n} to the lower energy *E*_{m}, emitting a photon of the same frequency (*E*_{n} − *E*_{m})/*h*. Or so the story went.
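As a numerical illustration of such jumps (a sketch: the function name is a choice made here, and the levels are taken as *E*_{n} = −Ry/*n*², with the Rydberg energy Ry ≈ 13.6057 eV):

```python
RY_EV = 13.605693        # Rydberg energy in electron volts
HC_EV_NM = 1239.841984   # h*c in eV*nm, converts photon energy to wavelength

def transition_wavelength_nm(n_hi, n_lo):
    """Wavelength of the photon emitted in the jump n_hi -> n_lo."""
    delta_e = RY_EV * (1 / n_lo**2 - 1 / n_hi**2)  # (E_hi - E_lo) in eV
    return HC_EV_NM / delta_e

print(round(transition_wavelength_nm(3, 2), 1))  # ~656 nm: the red Balmer-alpha line
```

The 3 → 2 jump lands squarely on the red line of the visible hydrogen spectrum, which is the kind of agreement that made Bohr’s model so striking.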

It took ten years, from 1913 to 1923, before someone finally found an explanation for the quantization of angular momentum. Planck’s radiation formula had implied a relation between a particle property (*E*) and a wave property (*f* or ω) for the quanta of electromagnetic radiation we now call photons. Einstein’s explanation of the photoelectric effect, published in 1905, established another such relation, between the momentum *p* of a photon and its wavelength λ:

*p* = *h*/λ.
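This relation follows from Planck’s *E* = *hf* together with two facts about light: its energy and momentum are related by *E* = *pc*, and its speed, frequency, and wavelength by *c* = *f*λ:

```latex
p = \frac{E}{c} = \frac{hf}{c} = \frac{hf}{f\lambda} = \frac{h}{\lambda}.
```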

If electromagnetic waves have particle properties, Louis de Broglie reasoned, why should electrons not have wave properties? Imagine that the electron in a hydrogen atom is a standing wave on a circle rather than some sort of corpuscle moving in a circle. (A standing wave does not travel; its crests and troughs stay put.)

Such a wave has to satisfy the condition

2π*r* = *n*λ, *n* = 1,2,3….

In other words, the circumference 2π*r* of the circle must be an integral multiple of the wavelength λ. With ℏ = *h*/2π and de Broglie’s formula *p* = *h*/λ, this implies that

*pr* = *n*ℏ.

But *pr* is just the angular momentum *L* of a classical particle moving in a circle of radius *r*. In this way de Broglie arrived at the quantization condition *L* = *n*ℏ, which Bohr had simply postulated.
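As a numerical check of the fit (a sketch; the values used are the Bohr radius and the electron’s speed in the *n* = 1 Bohr orbit):

```python
import math

h = 6.62607015e-34   # Planck's constant, J*s
m_e = 9.1093837e-31  # electron mass, kg
a0 = 5.29177e-11     # Bohr radius (radius of the n = 1 orbit), m
v1 = 2.18769e6       # electron speed in the n = 1 Bohr orbit, m/s

wavelength = h / (m_e * v1)       # de Broglie wavelength, lambda = h/p
circumference = 2 * math.pi * a0  # 2*pi*r of the n = 1 orbit

# For n = 1, exactly one wavelength should fit around the circle:
print(round(circumference / wavelength, 3))
```

The ratio comes out equal to 1 to within rounding of the input constants, as de Broglie’s condition requires for the ground state.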

1. Cassidy, D., Holton, G., and Rutherford, J. (2002). *Understanding Physics*, Springer, p. 632.