We again ask: how are “ordinary” objects possible? Ordinary objects have spatial extent (they “occupy space”), are composed of a finite number of objects without spatial extent (particles that do not “occupy space”), and are stable (they neither explode nor collapse as soon as they are created).
Ordinary objects occupy as much space as they do because
- atoms (and therefore molecules) occupy as much space as they do, and
- the volume occupied by a large number of atoms grows linearly with the number of atoms it contains.
In order that atoms occupy as much space as they do, their internal relative positions and the corresponding momenta must both be fuzzy, and the product of the standard deviations of each position–momentum pair must have a positive lower limit. And in order that the volume occupied by a large number of atoms grow linearly with the number of atoms it contains, the Pauli exclusion principle must hold. This is why the constituents of ordinary matter must be fermions with a half-integral spin of at least 1/2. In addition, the number of protons in a nucleus must have an upper limit.
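In standard notation, the required lower limit is the Heisenberg uncertainty relation; the argument only needs the bound to be positive, and ℏ/2 is its conventional value:

\[
\Delta x\,\Delta p_x \;\ge\; \frac{\hbar}{2}\,,
\]

and likewise for the y and z components. Loosely speaking, this is what keeps electrons from collapsing onto the nucleus: squeezing the fuzziness of their positions increases the fuzziness of their momenta, and with it their mean kinetic energy.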
The proper way to define and quantify a fuzzy observable is to assign nontrivial probabilities to the possible outcomes of a measurement of that observable. The most straightforward way to make room for nontrivial probabilities (in fact, the all but inevitable way) is to upgrade from the 0-dimensional probability algorithm of classical physics to a 1-dimensional one — that is, from a point in a phase space to a 1-dimensional subspace of a vector space. The measurement outcomes to which probabilities are assigned are then represented by the subspaces of this vector space, or by the corresponding projectors. Since the particles that make up stable atoms must themselves be stable, this vector space has to be a complex one. Hence our first postulate:
- Measurement outcomes are represented by projectors in a complex vector space.
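For reference, a projector onto the subspace spanned by orthonormal vectors |e1⟩, …, |ek⟩ of a complex vector space has the standard form

\[
P \;=\; \sum_{i=1}^{k} |e_i\rangle\langle e_i|\,, \qquad P^2 = P\,, \qquad P^\dagger = P\,.
\]

In the usual terminology, an elementary test has the two possible outcomes “yes” and “no”; “yes” is represented by a single such projector P, and “no” by its complement 1 − P.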
Because the product of the standard deviations of every position–momentum pair has a positive lower limit, the two quantities cannot both be measured simultaneously with arbitrary precision: they are incompatible. A formal definition of incompatibility gave us our second postulate:
- The outcomes of compatible elementary tests correspond to commuting projectors.
If A and B are two possible outcomes of the same measurement, we want to make sure that the probability of “either A or B” is the sum of the respective probabilities of A and B. The following postulate takes care of this:
- If PA and PB are orthogonal projectors, then the probability of the outcome represented by their sum is the sum of the probabilities of the outcomes represented by PA and PB, respectively.
These postulates are sufficient to prove the trace rule, which tells us that the probabilities of the possible outcomes of measurements are encoded in a density operator, and how they can be extracted from it. Nor is there anything ambiguous about how density operators depend on measurement outcomes, or about how probabilities depend on the times of measurements.
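In its standard form, the trace rule states that the probability of the outcome represented by a projector P is

\[
p(P) \;=\; \mathrm{Tr}\,(W P)\,, \qquad W = W^{\dagger}\,, \quad W \ge 0\,, \quad \mathrm{Tr}\,W = 1\,,
\]

where W is the density operator. The additivity required by the third postulate then follows from the linearity of the trace: Tr(W(PA + PB)) = Tr(W PA) + Tr(W PB).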
Subsequently we derived the two Rules we had initially postulated. Rule B took us to the particle propagator, which for a free and stable particle takes the form
⟨B,tB|A,tA⟩ = ∫DC [1: −b s(C)].
It sums contributions from all paths that lead from (A,tA) to (B,tB). The constant b is the particle’s mass measured in its natural units, and the “length” s(C) turns out to be the relativistic proper time associated with C.
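Spelled out for a path C parametrized by the time t, with v(t) the particle’s speed, the proper time is given by the standard expression

\[
s(C) \;=\; \int_{t_A}^{t_B} \sqrt{\,1 - \frac{v^{2}(t)}{c^{2}}\,}\; dt\,,
\]

and for the phase −b s(C) to be a pure number, b must be the particle’s mass expressed as an angular frequency, b = mc²/ℏ.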
The classical or long-range forces
A general way of accommodating influences on the behavior of a stable particle (albeit not the most general, as we have seen) is to modify the rate at which it “ticks.” It is customary to write the modified amplitude associated with an infinitesimal path segment dC in the form
Z(dC) = [1:dS/ℏ],
where the infinitesimal increment dS of the action S is a function of x, y, z, t, dx, dy, dz, and dt, which must be invariant under Lorentz transformations as well as homogeneous (of degree one) in dx, dy, dz, and dt. Owing to this homogeneity, the number of ticks associated with an infinitesimal path segment defines a Finsler geometry.[1,2] This geometry can be influenced in two ways, and so can the behavior of a particle: in a species-specific way, represented by the potentials V and A, which bends geodesics relative to local inertial frames, and in a species-independent way, represented by the metric g, which bends the geodesics of the pseudo-Riemannian spacetime geometry.
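As a sketch of what such an action increment looks like (in Gaussian units, for a particle of mass m and charge q; the specific couplings are quoted for orientation rather than derived here):

\[
dS \;=\; -\,mc^{2}\,d\tau \;-\; qV\,dt \;+\; \frac{q}{c}\,\mathbf{A}\cdot d\mathbf{r}\,,
\qquad
d\tau \;=\; \frac{1}{c}\sqrt{\,g_{\mu\nu}\,dx^{\mu}dx^{\nu}\,}\,.
\]

This dS is Lorentz invariant and homogeneous of degree one in dt, dx, dy, and dz; with the flat metric, dτ reduces to √(dt² − (dx² + dy² + dz²)/c²).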
Since the sources of these fields — V, A, and the metric g — are fuzzy, the components of the fields cannot be sharp. We take this into account by summing over paths in the corresponding configuration spacetimes. This calls for the addition of terms that contain only the fields.
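In the standard formulations (up to sign and unit conventions), the field-only terms are the Maxwell action for V and A and the Einstein–Hilbert action for the metric g:

\[
S_{\text{EM}} \;=\; -\frac{1}{16\pi c}\int F_{\mu\nu}F^{\mu\nu}\, d^{4}x\,,
\qquad
S_{\text{grav}} \;=\; \frac{c^{3}}{16\pi G}\int R\,\sqrt{-g}\;d^{4}x\,.
\]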
To obtain the appropriate equation for a particle with spin, we need to let the wave function have more than one component. This equation will therefore be a matrix equation. The simplest version is linear in the operators of energy and momentum. If we require in addition that each component of ψ satisfy the Klein–Gordon equation, we find that the lowest possible number of components is four, and that this number yields the appropriate equation for a spin-1/2 fermion — the Dirac equation.
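In standard form, with four 4×4 matrices γ^μ satisfying the Clifford algebra, the Dirac equation reads

\[
\left(i\hbar\,\gamma^{\mu}\partial_{\mu} - mc\right)\psi \;=\; 0\,,
\qquad
\{\gamma^{\mu},\gamma^{\nu}\} \;=\; 2\,\eta^{\mu\nu}\,\mathbf{1}\,.
\]

Acting on it with (iℏγ^ν∂ν + mc) and using the anticommutation relations yields (ℏ²∂μ∂^μ + m²c²)ψ = 0, the Klein–Gordon equation for each of the four components.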
To arrive at QED, we still need to take account of the fuzziness of particle numbers, which arises because only charges, not particle numbers, are conserved.
The stability of matter rests on the stability of atoms, and without the electromagnetic force atoms would not exist. The stability of matter further rests on the validity of quantum mechanics, and quantum mechanics presupposes not only value-indicating events but also persistent records of such events. The existence of such events and such records does not seem possible without the relatively hospitable environment of a planet, and without gravity there would be no planets.
If we further take into account the evolutionary nature of our world, we have reason to expect that the requisite variety of chemical elements was not present from the start, and we can point to the fact that without gravity there would be no stars to synthesize elements beyond beryllium.
The nuclear or short-range forces
Quantum mechanics presupposes measurement outcomes. It thus presupposes macroscopic objects, including objects that can perform the function of a measurement apparatus. Yet it seems all but certain that the existence of outcome-indicating devices requires a variety of chemical elements well beyond the four — hydrogen, helium, and a sprinkling of lithium and beryllium — that are known to have existed before the formation of stars. If so, QED and gravity are necessary parts of the laws governing UR’s spatial self-relations, but they are not sufficient. Interactions of a different kind are needed to carry further the nucleosynthesis that took place before stars were formed. (Nucleosynthesis is the process of creating atomic nuclei from protons and neutrons.)
With hindsight we know that QCD bears much of the responsibility for nucleosynthesis, both primordial and inside stars. (The elements created by stellar nucleosynthesis range in atomic number from six to at least ninety-eight, that is, from carbon to at least californium.) QCD is also responsible for the formation of the first protons and neutrons, which are thought to have condensed from the quark-gluon plasma that emerged from the Big Bang — the hot and dense condition from which the Universe began to expand some 13–14 billion years ago.
But if most of the chemical elements are created inside stars, how do they get out to form planets? Sufficiently massive stars end their lives with an explosion, as Type II supernovae, spewing the elements created in their interiors into the interstellar medium — dust clouds that have been produced by explosions of earlier generations of stars. New stars and eventually planets condense from these clouds, sometimes triggered by shock waves generated by other supernova explosions. It has taken many stellar life cycles to build up the variety and concentration of heavy elements that is found on Earth.
A Type II supernova occurs when a star’s nuclear fuel is depleted to the point that fusion can no longer sustain the pressure required to support the star against gravity. During the ensuing collapse, electrons and protons are converted into neutrons and neutrinos. The central core ends up as either a neutron star or a black hole, while almost all of the energy released by its collapse is carried away by prodigious quantities of neutrinos, which blow off the star’s outer mantle. But if neutrinos are crucial for the release into the interstellar medium of the products of stellar nucleosynthesis, then so is the weak force.
Supernova explosions not only release the products of stellar nucleosynthesis but themselves contribute significantly to the synthesis of the heavier elements. The weak force, for its part, not only is crucial to supernova explosions but also plays an essential part in stellar nucleosynthesis.
It is also clear why the “carriers” of the weak force need to have large masses, and why a mathematical procedure like the Higgs mechanism is needed to “create” them. The range of a force is characteristically given by the Compton wavelengths of the particles mediating it. For a particle of mass m this equals h/mc. If the masses of the W± and the Z0 were too small, the weak force would cause the beta decay of neutrons in the atomic nucleus through interactions with atomic electrons, as well as the decay of atomic electrons into neutrinos through interactions with nucleonic quarks. All matter would be unstable.
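As a rough numerical check, using the measured W mass of about 80 GeV/c² and hc ≈ 1240 MeV·fm:

\[
\lambda_{W} \;=\; \frac{h}{m_{W}c} \;=\; \frac{hc}{m_{W}c^{2}}
\;\approx\; \frac{1240\ \text{MeV·fm}}{8\times 10^{4}\ \text{MeV}}
\;\approx\; 1.5\times 10^{-2}\ \text{fm}
\;\approx\; 1.5\times 10^{-17}\ \text{m}\,,
\]

a small fraction of a proton’s radius (about 0.8 fm). This is why the weak force is effectively confined to subnuclear distances.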
It thus appears safe to say that the well-tested physical theories are preconditions of the possibility of objects that (i) have spatial extent, (ii) are composed of a finite number of objects without spatial extent, and (iii) neither explode nor collapse as soon as they are created. In other words, the existence of such objects appears to require quantum mechanics, special relativity, general relativity, and the standard model of particle physics, at least as effective theories. (An effective theory includes appropriate degrees of freedom to describe physical phenomena occurring above a given length scale, while ignoring substructure and degrees of freedom that exist or may exist at shorter distances.)
1. [↑] Antonelli, P.L., Ingarden, R.S., and Matsumoto, M. (1993). The Theory of Sprays and Finsler Spaces with Applications in Physics and Biology, Kluwer.
2. [↑] Rund, H. (1969). The Differential Geometry of Finsler Spaces, Springer.