Earlier we defined “ordinary objects” as
- having spatial extent (they “occupy space”),
- being composed of a (large but) finite number of objects without spatial extent (particles that do not “occupy space”),
- and being stable (they neither explode nor collapse as soon as they are created).
The stability of such objects rests on the stability of atoms and molecules, and this requires (among other things) that the relative positions and momenta of their constituents be fuzzy (with a lower limit for the product of their respective “uncertainties”), that the Pauli exclusion principle should hold, and that (as a consequence) the constituents of atoms and molecules be fermions.
More is required. The stability of atoms and molecules rests on a stable equilibrium between two tendencies — the tendency of fuzzy positions to grow fuzzier, which is due to the fuzziness of the corresponding momenta, and the tendency of fuzzy positions to get less fuzzy, which is due to the electrostatic attraction between the atomic nucleus and its surrounding electrons. Without the electromagnetic interaction, neither atoms nor molecules would exist.
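The balance between these two tendencies can be made quantitative with a textbook back-of-the-envelope estimate (not claimed in the text above, but a standard illustration): model the kinetic cost of confining the electron's fuzzy position to a radius a as ħ²/(2m a²), and the electrostatic attraction as −e²/(4πε₀a). Minimizing the sum reproduces the Bohr radius and the hydrogen ground-state energy:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
k    = e**2 / (4 * np.pi * eps0)  # Coulomb energy scale e^2/(4 pi eps0)

def energy(a):
    """Kinetic cost of confinement plus (negative) electrostatic energy."""
    return hbar**2 / (2 * m_e * a**2) - k / a

# Setting dE/da = 0 gives a = hbar^2 / (m_e k): the Bohr radius.
a_bohr = hbar**2 / (m_e * k)
print(f"equilibrium radius ~ {a_bohr:.3e} m")           # ~5.29e-11 m
print(f"minimum energy ~ {energy(a_bohr) / e:.2f} eV")  # ~ -13.6 eV
```

Too small a radius and the kinetic term dominates (the fuzzy position resists further squeezing); too large and the attraction pulls the electron back in. The stable minimum between the two is the atom.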
As you will remember, the proper — mathematically rigorous and philosophically sound — way to define and quantify a fuzzy physical property is to assign nontrivial probabilities to the possible outcomes of a measurement. This is the reason (or at least one reason) why quantum mechanics is a probability calculus, and why the events to which it serves to assign probabilities are measurement outcomes.
Quantum mechanics thus presupposes the existence of measurement outcomes — actual property-indicating (or value-indicating) events. But for an event to be an actual property-indicating event, it must leave a persistent trace. There must be a persistent record of the event, and the existence of such a record does not seem possible in the absence of a richly structured environment, which only a planet can provide. But without gravity, planets do not exist.
So the existence of “ordinary” objects — stable objects that “occupy space” yet are composed of finite numbers of objects that don’t — requires both electromagnetism and gravity. These are also the only forces we know how to incorporate into the quantum formalism without using multi-component wave functions. In other words, they are the only forces that can be formulated in terms of a differential geometry (the former in terms of a particle-specific Finsler geometry, the latter in terms of a universal pseudo-Riemannian geometry), and that therefore have classical analogues.
Yet electromagnetism and gravity are not sufficient. The existence of “ordinary” objects requires at least one other force.
The simplest and most straightforward way to introduce another type of interaction is to generalize from QED. To see how this is done, we first observe that the QED Lagrangian is invariant under the following substitutions (known as a gauge transformation):
ψ → e^{iqα}ψ, V → V−∂tα, Ax → Ax+∂xα, Ay → Ay+∂yα, Az → Az+∂zα.
The invariance under the substitution ψ → e^{iqα}ψ gives us the freedom to associate a different complex plane with each spacetime point and to independently rotate each of these planes. (If we make use of this freedom, then α is a function of the spacetime coordinates t,x,y,z, and the gauge transformation is referred to as a local transformation.) This is quite analogous to the freedom we have to associate a different system of inertial coordinates with each spacetime point and to independently subject each to a Lorentz transformation. It is this kind of freedom that makes it possible to formulate an interaction. While the latter freedom makes room for gravity, the former makes it not merely possible but, in fact, necessary to introduce the electromagnetic “force.”
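The invariance mechanism can be checked symbolically. Under ψ → e^{iqα}ψ together with Ax → Ax+∂xα, the combination (∂x − iqAx)ψ — the spatial “covariant derivative,” the standard building block of gauge-invariant Lagrangians — picks up the same phase factor as ψ itself, so anything built from its squared magnitude is unchanged. A sketch (sign conventions follow the substitutions above):

```python
import sympy as sp

x, t, q = sp.symbols('x t q', real=True)
alpha = sp.Function('alpha', real=True)(x, t)   # gauge function alpha(x, t)
Ax = sp.Function('A_x', real=True)(x, t)        # vector-potential component
psi = sp.Function('psi')(x, t)                  # complex field

# The gauge-transformed field and potential
psi_p = sp.exp(sp.I * q * alpha) * psi
Ax_p = Ax + sp.diff(alpha, x)

def D(field, A):
    """Spatial covariant derivative D_x = d/dx - i q A_x."""
    return sp.diff(field, x) - sp.I * q * A * field

# The covariant derivative of the transformed field equals the
# phase factor times the covariant derivative of the original field:
lhs = D(psi_p, Ax_p)
rhs = sp.exp(sp.I * q * alpha) * D(psi, Ax)
print(sp.simplify(lhs - rhs))  # 0
```

The extra term iq(∂xα)ψ produced by differentiating the phase is exactly cancelled by the shift in Ax — which is why a locally rotating phase *forces* the introduction of the potentials.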
The transformations of the form ψ → e^{iβ}ψ constitute what mathematicians call a “group.” We may think of the elements of this group, which goes by the name U(1), as rotations in (or of) the complex plane, β specifying the angle of rotation. The complex plane is a 1-dimensional complex vector space. The most direct generalization of U(1) is the group of “rotations” in a 2-dimensional complex vector space, which is known as SU(2). These “rotations” are themselves the most direct generalizations of rotations in a real vector space. (You will recall that the components of a vector in a real vector space are real numbers.) The most direct generalization of SU(2) is — no surprise there — SU(3), the group of “rotations” in a 3-dimensional complex vector space.
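What “rotation in a 2-dimensional complex vector space” means concretely: an SU(2) element is a 2×2 complex matrix U with U†U = 1 and det U = 1, and — just like a real rotation — it preserves the lengths of, and inner products between, complex 2-component vectors. A sketch (the parametrization used is the standard one; the particular angles are arbitrary):

```python
import numpy as np

# A generic SU(2) element: U = [[a, b], [-conj(b), conj(a)]]
# with |a|^2 + |b|^2 = 1.
theta = 0.7
a = np.cos(theta) * np.exp(0.3j)
b = np.sin(theta) * np.exp(-0.5j)
U = np.array([[a, b], [-np.conj(b), np.conj(a)]])

# The defining properties: unitarity and unit determinant
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
print(np.isclose(np.linalg.det(U), 1.0))       # True

# Like a real rotation, U preserves inner products of complex vectors
v = np.array([1.0 + 2.0j, -0.5j])
w = np.array([0.3 + 0.0j, 1.0 - 1.0j])
print(np.isclose(np.vdot(v, w), np.vdot(U @ v, U @ w)))  # True
```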
As it turns out, U(1), SU(2), and SU(3) are all the groups that are needed to formulate the so-called Standard Model, which comprises all well-tested theories of particle physics:
- the theory of the strong nuclear interactions called quantum chromodynamics (QCD), which is constructed just like QED except that U(1) is replaced by SU(3),
- the so-called electroweak theory, which is a partially unified theory of the weak and electromagnetic interactions (“partially” because it involves two symmetry groups — U(1) and SU(2) — rather than a single one).
The findings of every high energy physics experiment carried out to date are consistent with the Standard Model. While the Standard Model has real and/or perceived shortcomings, its proposed extensions have so far remained in the realm of pure, untestable speculation.
Because the symmetry group of QCD is SU(3), the strongly interacting fundamental particles — the quarks — come in three “colors” (hence quantum chromodynamics, from the Greek word for color, chroma). As the electromagnetic interaction is said to be “mediated by the exchange of photons,” so the strong force is said to be “mediated by the exchange of gluons,” of which there are 3²−1=8 varieties. (This terminology involves an illegitimate reification of correlations between measurement outcomes.) Unlike photons, which are electrically neutral, the gluons “carry” color charges, owing to the fact that the result of two successive SU(3) transformations depends on the order in which they are performed.
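The counting 3²−1 = 8 comes from the generators of the group: every SU(n) “rotation” can be written as e^{iH} with H a traceless Hermitian n×n matrix, and such matrices form a real vector space of dimension n²−1. The order-dependence can be seen directly with two of the standard Gell-Mann matrices (the conventional SU(3) generators):

```python
import numpy as np

# Two of the eight Gell-Mann matrices, plus the one their commutator yields
lam1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
lam2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
lam3 = np.diag([1, -1, 0]).astype(complex)

# Dimension count: traceless Hermitian 3x3 matrices have 3**2 - 1 = 8
# real parameters -- one per gluon variety.
print(3**2 - 1)  # 8

# The generators do not commute: [lam1, lam2] = 2i * lam3. This is why
# the result of two successive SU(3) transformations depends on their
# order -- and, ultimately, why gluons themselves carry color charge.
comm = lam1 @ lam2 - lam2 @ lam1
print(np.allclose(comm, 2j * lam3))  # True
```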
The two most important characteristics of QCD are asymptotic freedom and quark confinement. The first means that the shorter the distance over which quarks interact, the more weakly they interact. The second means that only color-neutral combinations of quarks appear in isolation. Baryons and mesons are such combinations. While baryons — among them the proton and the neutron — are bound states of three quarks, mesons are bound states of a quark and an antiquark.
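Asymptotic freedom can be illustrated with the one-loop formula for the running strong coupling, α_s(Q) = α_s(μ) / [1 + α_s(μ)·b₀/(2π)·ln(Q/μ)], with b₀ = 11 − 2n_f/3. A sketch, anchored at the commonly quoted value α_s ≈ 0.118 at the Z mass (the one-loop truncation is illustrative, not precision physics):

```python
import numpy as np

def alpha_s(Q, mu=91.19, alpha_mu=0.118, n_f=5):
    """One-loop running of the strong coupling (illustrative only).

    Q and mu in GeV; alpha_mu is the coupling measured at scale mu
    (here the Z-boson mass). Because b0 > 0, the coupling shrinks
    as Q grows: asymptotic freedom.
    """
    b0 = 11 - 2 * n_f / 3
    return alpha_mu / (1 + alpha_mu * b0 / (2 * np.pi) * np.log(Q / mu))

for Q in [2.0, 10.0, 91.19, 1000.0]:
    print(f"alpha_s({Q:7.2f} GeV) = {alpha_s(Q):.3f}")
```

The coupling grows at low momentum scales (long distances) and shrinks at high ones (short distances) — the quantitative face of asymptotic freedom, and the flip side of confinement.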
Here is a simple (if not simplistic) illustration of quark confinement. Imagine trying to separate the quark q from the antiquark q̄ of a meson qq̄. The energy required to do this grows with the distance between q and q̄. Since at some point this energy exceeds the energy required to create another meson, you end up with two separate mesons instead of two separate quarks.
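The same cartoon in numbers (both figures below are rough, illustrative values, not claims from the text): the energy stored in the color “string” between the quark and the antiquark grows approximately linearly with separation, E(r) = σr, so the string “breaks” — a new quark–antiquark pair appears — once E(r) reaches the pair-creation cost:

```python
# A cartoon of string breaking. The color "string" between q and q-bar
# stores energy E(r) = sigma * r; once that exceeds the cost of creating
# a new quark-antiquark pair, two mesons are cheaper than one stretched
# string. Both numbers are illustrative order-of-magnitude values.
sigma = 1.0        # GeV per femtometer -- a commonly quoted string tension
pair_cost = 0.8    # GeV -- rough cost of creating a new q-qbar pair

def string_energy(r_fm):
    """Energy stored in the color string at separation r_fm (in fm)."""
    return sigma * r_fm

r_break = pair_cost / sigma
print(f"string 'breaks' near r ~ {r_break:.1f} fm")
```

On this estimate the breakup happens at well under the size of a proton’s neighborhood — which is why no isolated quark has ever been observed.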
Studied at sufficiently high resolution, the atomic nucleus thus presents itself as containing quarks that interact via gluons, while at lower resolution it presents itself as containing protons and neutrons that interact via mesons. The force that binds the protons and neutrons in a nucleus is thus a residue of the strong force that binds quarks.
According to the Standard Model, the fundamental constituents of matter are of two kinds: the quarks, which interact via the strong force, and the leptons, which don’t. Besides coming in three “colors,” the quarks come in six “flavors”: down, up, strange, charm, bottom, and top. There are six leptons: the electron e, the muon μ, the tauon τ, and three corresponding neutrinos (νe, νμ, ντ). These particles group themselves into three “generations.” The first contains the down and up quarks, the electron, and the electron-neutrino; the second contains the quarks with strangeness and charm, the muon, and the muon-neutrino; the third contains the bottom and top quarks, the tauon, and the tauon-neutrino. Except for their masses, the particles of the second and third generations have the same properties, and thus interact in the same way, as the corresponding particles of the first generation. It will therefore suffice to consider only the first generation.
One salient feature of the weak force is that while the “left-handed” leptons (and therefore also the right-handed antileptons) interact via the weak force, the right-handed leptons are immune to it. To get the hang of this handedness business, recall that all leptons have a spin equal to 1/2, then consider a measurement of the spin of such a particle with respect to the direction in which it is moving. If this measurement yields “up” with probability 1, the particle is said to be right-handed, and if it yields “down” with probability 1, the particle is said to be left-handed. The lepton part of the electroweak Lagrangian therefore contains two terms, a “right-handed” one constructed so as to make it invariant only under local U(1) transformations, and a “left-handed” one constructed so as to make it invariant under both local U(1) transformations and local SU(2) transformations.
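The handedness criterion can be stated as a small calculation. For a spin-1/2 particle moving in direction n, the spin-along-motion observable is represented by n·σ (the Pauli matrices); its +1 eigenstate yields “up” with probability 1 (right-handed), its −1 eigenstate yields “down” with probability 1 (left-handed). A sketch, with an arbitrarily chosen direction of motion:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Unit vector along the (arbitrarily chosen) direction of motion
n = np.array([0.0, 0.6, 0.8])
helicity_op = n[0] * sx + n[1] * sy + n[2] * sz

# Eigenvectors of spin-along-motion: +1 = right-handed, -1 = left-handed
vals, vecs = np.linalg.eigh(helicity_op)       # eigenvalues sorted ascending
left, right = vecs[:, 0], vecs[:, 1]

def p_up(state):
    """Probability that measuring spin along n yields 'up' (+1)."""
    proj = (np.eye(2) + helicity_op) / 2       # projector onto +1 eigenspace
    return np.vdot(state, proj @ state).real

print(f"right-handed: p(up) = {p_up(right):.1f}")  # 1.0
print(f"left-handed:  p(up) = {p_up(left):.1f}")   # 0.0
```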
Another salient feature of the electroweak Lagrangian is the absence of a term containing particle masses. The presence of such a term would destroy the invariance of the Lagrangian under gauge transformations and thereby render the theory non-renormalizable. (In order to yield numbers that can be checked against experimental results, a relativistic quantum theory must be renormalizable. The charges and masses that appear in the Lagrangian of such a theory — they are often referred to as the theory’s “bare” charges and masses — are not the charges and masses that can be measured; they are physically meaningless. Renormalization makes it possible to theoretically link the charges and masses that are measured at one momentum scale to the charges and masses that are measured at another momentum scale. It thus allows us to map the dependence of physical masses and charges on the momentum scale at which they are measured.)
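The scale-linking that renormalization makes possible can be illustrated with the one-loop QED formula for a single charged lepton, α(Q) = α(μ) / [1 − (α(μ)/3π) ln(Q²/μ²)], which connects the coupling measured at scale μ to the coupling at scale Q. A sketch (one lepton in the loop only, so it undershoots the full-QED value of roughly 1/128 near the Z mass; the trend is the point):

```python
import numpy as np

def alpha_qed(Q, mu, alpha_mu):
    """One-loop QED running with a single charged lepton (illustrative).

    Links the coupling measured at momentum scale mu to the coupling
    at scale Q (both in GeV):
        alpha(Q) = alpha(mu) / (1 - (alpha(mu)/3 pi) ln(Q^2 / mu^2))
    """
    return alpha_mu / (1 - alpha_mu / (3 * np.pi) * np.log(Q**2 / mu**2))

alpha_low = 1 / 137.036     # fine-structure constant at low momentum
mu_e = 0.000511             # electron mass scale in GeV
for Q in [1.0, 10.0, 91.19]:
    print(f"alpha(Q = {Q:6.2f} GeV) ~ 1/{1 / alpha_qed(Q, mu_e, alpha_low):.1f}")
```

Note the direction: the measured electromagnetic coupling *grows* with the momentum scale, the opposite of the strong coupling’s asymptotic freedom.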
Since most of the particles the electroweak theory deals with have masses, a theoretical procedure was needed to generate effective mass terms without destroying the invariance of the Lagrangian under gauge transformations. Unsurprisingly, this procedure — usually referred to as the “Higgs mechanism” — has been hailed as explaining why particles have mass. However, a theoretical scheme that gives rise to mass terms without jeopardizing the theory’s renormalizability is one thing, while a physical mechanism or process leading to the creation of mass would be quite another. It bears repetition: physical theories are calculational tools, not descriptions of physical entities or processes. It also pays to bear in mind that mass and matter are entirely different things, the former a parameter in some calculational schemes, the latter either a folk notion or a philosophical concept, neither of which has any part in the theoretical formalism of contemporary physics.