13 Why the laws of physics are just so

We again ask: how are “ordinary” objects possible? Ordinary objects have spatial extent (they “occupy space”), are composed of a finite number of objects without spatial extent (particles that do not “occupy space”), and are stable (they neither explode nor collapse as soon as they are created).

Ordinary objects occupy as much space as they do because

  • atoms (and therefore molecules) occupy as much space as they do, and
  • the volume occupied by a large number of atoms grows linearly with the number of atoms it contains.

In order that atoms occupy as much space as they do, their internal relative positions and the corresponding momenta must both be fuzzy, and the product of the standard deviations of each position–momentum pair must have a positive lower limit. And in order that the volume occupied by a large number of atoms grow linearly with the number of atoms it contains, the Pauli exclusion principle must hold. This is why the constituents of ordinary matter must be fermions with a half-integral spin of at least 1/2. In addition, the number of protons in a nucleus must have an upper limit.
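In standard notation, this positive lower limit is the Heisenberg uncertainty relation: for each Cartesian component of position and its conjugate momentum component,

$$\Delta x\,\Delta p_{x} \;\geq\; \frac{\hbar}{2}.$$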

The proper way to define and quantify a fuzzy observable is to assign nontrivial probabilities to the possible outcomes of a measurement of that observable. The most straightforward way to make room for nontrivial probabilities (in fact, the all but inevitable way) is to upgrade from the 0-dimensional probability algorithm of classical physics to a 1-dimensional one, that is, from a point in a phase space to a 1-dimensional subspace of a vector space. The measurement outcomes to which probabilities are assigned are then represented by the subspaces of this vector space, or by the corresponding projectors. Since the particles that make up stable atoms must themselves be stable, this vector space has to be a complex one. Hence our first postulate:

  • Measurement outcomes are represented by projectors in a complex vector space.

Because the product of the standard deviations of every position–momentum pair has a positive lower limit, the two quantities cannot both be measured simultaneously with arbitrary precision: they are incompatible. A formal definition of incompatibility gave us our second postulate:

  • The outcomes of compatible elementary tests correspond to commuting projectors.

If A and B are two possible outcomes of the same measurement, we want to make sure that the probability of “either A or B” is the sum of the respective probabilities of A and B. The following postulate takes care of this:

  • If P_A and P_B are orthogonal projectors, then the probability of the outcome represented by their sum is the sum of the probabilities of the outcomes represented by P_A and P_B, respectively.
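In symbols, with the orthogonality of the projectors expressed as P_A P_B = 0:

$$P_{A}P_{B} = 0 \;\Longrightarrow\; \Pr(P_{A}+P_{B}) \;=\; \Pr(P_{A}) + \Pr(P_{B}).$$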

These postulates are sufficient to prove the trace rule, which tells us that the probabilities of the possible outcomes of measurements are encoded in a density operator, and how they can be extracted from it. Nor is there anything ambiguous about how density operators depend on measurement outcomes and how probabilities depend on the times of measurements.
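In symbols: if W is the density operator and P the projector representing a possible outcome, the trace rule reads

$$\Pr(P) \;=\; \mathrm{Tr}(W\,P).$$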

Subsequently we derived the two Rules we had initially postulated. Rule B took us to the particle propagator, which for a free and stable particle takes the form

⟨B,t_B|A,t_A⟩ = ∫DC [1: −b s(C)].

It sums contributions from all paths that lead from (A,t_A) to (B,t_B). The constant b is the particle’s mass measured in its natural units, and the “length” s(C) turns out to be the relativistic proper time associated with C.
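Assuming the polar notation [1:φ] denotes the unit-magnitude complex number e^{iφ}, so that b = mc²/ℏ (the mass in natural units), this is the familiar path-integral form of the free-particle propagator:

$$\langle B,t_{B}\,|\,A,t_{A}\rangle \;=\; \int\!\mathcal{D}C\;\, e^{-i\,(mc^{2}/\hbar)\,s(C)},$$

which is e^{iS[C]/ℏ} summed over paths, with S[C] = −mc² s(C) the free-particle action.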


The classical or long-range forces

A general way of accommodating influences on the behavior of a stable particle (albeit not the most general, as we have seen) is to modify the rate at which it “ticks.” It is customary to write the modified amplitude associated with an infinitesimal path segment dC in the form

Z(dC) = [1:dS/ℏ],

where the infinitesimal increment dS of the action S is a function of x, y, z, t, dx, dy, dz, and dt, which must be invariant under Lorentz transformations as well as homogeneous (of degree 1) in dx, dy, dz, and dt. Owing to this homogeneity, the number of ticks associated with an infinitesimal path segment defines a Finsler geometry.[1,2] This geometry can be influenced in two ways, and so can the behavior of a particle: in a species-specific way represented by V and A, which bends geodesics relative to local inertial frames, and in a species-independent way represented by the metric g, which bends the geodesics of the pseudo-Riemannian spacetime geometry.
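A minimal sketch of an action increment with these properties, assuming a particle of charge q (a parameter not given in the text): V and A enter through q, hence species-specifically, while the metric g enters universally:

$$dS \;=\; -\,mc\,\sqrt{g_{\mu\nu}\,dx^{\mu}dx^{\nu}} \;-\; \frac{q}{c}\,\bigl(c\,V\,dt \;-\; \mathbf{A}\cdot d\mathbf{x}\bigr).$$

Both terms are Lorentz-invariant and homogeneous of degree 1 in the coordinate differentials, as required.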

Since the sources of these fields (V, A, and the metric g) are fuzzy, the components of the fields cannot be sharp. We take this into account by summing over paths in the corresponding configuration spacetimes. This calls for the addition of terms that contain only the fields.
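The text does not spell these out, but in the standard formulation the field-only terms are the free-field actions, for instance the Maxwell term for V and A and the Einstein–Hilbert term for g (in suitable units):

$$S_{\text{Maxwell}} \;=\; -\frac{1}{4}\int F_{\mu\nu}F^{\mu\nu}\,d^{4}x, \qquad S_{\text{EH}} \;=\; \frac{c^{4}}{16\pi G}\int R\,\sqrt{-g}\;d^{4}x.$$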

To obtain the appropriate equation for a particle with spin, we need to let the wave function have more than one component. This equation will therefore be a matrix equation. The simplest version is linear in the operators of energy and momentum. If we require in addition that each component of ψ satisfy the Klein–Gordon equation, we find that the lowest possible number of components is four, and that this number yields the appropriate equation for a spin-1/2 fermion: the Dirac equation.
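Explicitly, with four 4×4 matrices γ^μ satisfying the anticommutation relations {γ^μ, γ^ν} = 2η^{μν}, the Dirac equation reads

$$\bigl(i\hbar\,\gamma^{\mu}\partial_{\mu} \;-\; mc\bigr)\,\psi \;=\; 0.$$

Applying the operator (iℏγ^ν∂_ν + mc) to the left-hand side shows that each component of ψ indeed satisfies the Klein–Gordon equation, (ℏ²∂^μ∂_μ + m²c²)ψ = 0.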

To arrive at QED, we still need to take account of the fuzziness of particle numbers, which is due to the fact that only charges are conserved.

The stability of matter rests on the stability of atoms, and without the electromagnetic force atoms would not exist. The stability of matter further rests on the validity of quantum mechanics, and quantum mechanics presupposes not only value-indicating events but also persistent records of such events. The existence of such events and such records does not seem possible without the relatively hospitable environment of a planet, and without gravity there would be no planets.

If we further take into account the evolutionary nature of our world, we have reason to expect that the requisite variety of chemical elements was not present from the start, and we can point to the fact that without gravity there would be no stars to synthesize elements beyond beryllium.


The nuclear or short-range forces

Quantum mechanics presupposes measurement outcomes. It thus presupposes macroscopic objects, including objects that can perform the function of a measurement apparatus. Yet it seems all but certain that the existence of outcome-indicating devices requires a variety of chemical elements well beyond the four (hydrogen, helium, and a sprinkling of lithium and beryllium) that are known to have existed before the formation of stars. If so, QED and gravity are necessary parts of the laws governing UR’s spatial self-relations, but they are not sufficient. Interactions of a different kind are needed to carry further the nucleosynthesis that took place before stars were formed. (Nucleosynthesis is the process of creating atomic nuclei from protons and neutrons.)

With hindsight we know that QCD bears much of the responsibility for nucleosynthesis, both primordial and stellar. (The elements created by stellar nucleosynthesis range in atomic number from six to at least ninety-eight, that is, from carbon to at least californium.) QCD is also responsible for the formation of the first protons and neutrons, which are thought to have condensed from the quark–gluon plasma that emerged from the Big Bang, the hot and dense condition from which the Universe began to expand some 13–14 billion years ago.

But if most of the chemical elements are created inside stars, how do they get out to form planets? Sufficiently massive stars end their lives with an explosion, as Type II supernovae, spewing the elements created in their interiors into the interstellar medium, the dust clouds that were produced by explosions of earlier generations of stars. New stars and eventually planets condense from these clouds, sometimes triggered by shock waves generated by other supernova explosions. It has taken many stellar life cycles to build up the variety and concentration of heavy elements found on Earth.

A Type II supernova occurs when a star’s nuclear fuel is depleted to the point that fusion reactions can no longer sustain the pressure required to support the star against gravity. During the ensuing collapse, electrons and protons are converted into neutrons and neutrinos. The central core ends up as either a neutron star or a black hole, while almost all of the energy released by the collapse is carried away by prodigious quantities of neutrinos, which blow off the star’s outer mantle. But if neutrinos are crucial for the release into the interstellar medium of the products of stellar nucleosynthesis, then so is the weak force.

Supernova explosions not only release the products of stellar nucleosynthesis but themselves contribute significantly to the synthesis of the heavier elements. The weak force, for its part, is not only crucial to supernova explosions but also plays an essential part in stellar nucleosynthesis.

It is also clear why the “carriers” of the weak force need to have large masses, and why a mathematical procedure like the Higgs mechanism is needed to “create” them. The range of a force is characteristically given by the Compton wavelength of the particles mediating it, which for a particle of mass m equals h/mc. If the masses of the W± and the Z⁰ were too small, the weak force would cause the beta decay of neutrons in the atomic nucleus through interactions with atomic electrons, as well as the decay of atomic electrons into neutrinos through interactions with nucleonic quarks. All matter would be unstable.
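A quick order-of-magnitude check, using the measured mass m_W ≈ 80.4 GeV/c² and hc ≈ 1240 MeV·fm:

$$\lambda_{W} \;=\; \frac{h}{m_{W}\,c} \;=\; \frac{hc}{m_{W}c^{2}} \;\approx\; \frac{1240\ \mathrm{MeV\,fm}}{8.04\times10^{4}\ \mathrm{MeV}} \;\approx\; 1.5\times10^{-2}\ \mathrm{fm},$$

about a hundredth of the diameter of a nucleon, which is why the weak force is, for all practical purposes, a contact interaction.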

It thus appears safe to say that the well-tested physical theories are preconditions of the possibility of objects that (i) have spatial extent, (ii) are composed of a finite number of objects without spatial extent, and (iii) neither explode nor collapse as soon as they are created. In other words, the existence of such objects appears to require quantum mechanics, special relativity, general relativity, and the standard model of particle physics, at least as effective theories. (An effective theory includes appropriate degrees of freedom to describe physical phenomena occurring above a given length scale, while ignoring substructure and degrees of freedom that exist or may exist at shorter distances.)



1. Antonelli, P.L., Ingarden, R.S., and Matsumoto, M. (1993). The Theory of Sprays and Finsler Spaces with Applications in Physics and Biology, Kluwer.

2. Rund, H. (1969). The Differential Geometry of Finsler Spaces, Springer.