Material objects occupy as much space as they do because atoms and molecules occupy as much space as they do. So how is it that a hydrogen atom in its ground state (its state of lowest energy) occupies a space roughly one tenth of a nanometer across? Thanks to quantum mechanics, we now understand that the stability of “ordinary” material objects rests on the stability of atoms and molecules, and that this rests on the fuzziness of the relative positions and momenta of their constituents.
What, then, is the proper — that is, mathematically rigorous and philosophically sound — way to define and quantify a fuzzy property? It is to assign nontrivial probabilities — probabilities greater than 0 and less than 1 — to the possible outcomes of a measurement of this property.
To be precise, the proper way of quantifying a fuzzy property or value is to make counterfactual probability assignments. (To assign a probability to a possible measurement outcome counterfactually — contrary to the facts — is to assign it to a possible outcome of a measurement that is not actually made.) In order to quantitatively describe a fuzzy property or value, we must assume that a measurement is made, and if we do not want our description to change the property or value described, we must assume that no measurement is made. We must assign probabilities to the possible outcomes of unperformed measurements.
Arguably the most straightforward way to make room for nontrivial probabilities is to upgrade from a 0-dimensional point P to a 1-dimensional line L. Instead of representing a probability algorithm by a point in a phase space, we represent it by a 1-dimensional subspace of a vector space V. And instead of representing elementary tests by subsets of a phase space S, we represent them by the subspaces of V.
Examples of vector spaces are lines, planes, and the familiar 3-dimensional space in which we appear to exist. Vector spaces contain vectors (no surprise there). Vectors can be added, and they can be multiplied by either real or complex numbers. If only real numbers are allowed, we can think of vectors as arrows, and we can visualize the addition of two vectors as we did in Fig. 2.2.1. If complex numbers are admitted, we have to go beyond visualization. (Nobody should expect the inner workings of a probability algorithm to be visualizable.)
What is characteristic of a vector space V is that if the two vectors a, b are contained in it, and if α and β are real or complex numbers, then αa + βb is also contained in it. A vector space U is a subspace of V if the vectors contained in U are also contained in V.
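By way of illustration (a minimal numerical sketch, not part of the argument; the vectors, the coefficients, and the choice of subspace are arbitrary), here is the defining property checked for a 2-dimensional subspace of the complex vector space C³:

```python
import numpy as np

# Two vectors spanning a 2-dimensional subspace U of C^3:
# the set of all vectors whose third component is zero.
a = np.array([1 + 2j, 0 - 1j, 0])
b = np.array([3 + 0j, 1 + 1j, 0])

# Arbitrary complex coefficients.
alpha, beta = 0.5 - 1.5j, 2 + 0.25j

# The defining property: alpha*a + beta*b is again a vector,
# and it still lies in U (its third component is still zero).
c = alpha * a + beta * b
print(c)          # a vector in C^3
print(c[2] == 0)  # True: the combination stays inside the subspace U
```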
Mathematically it is easy to introduce vector spaces of more than three (and even infinitely many) dimensions. The dimension of a vector space is the largest number of mutually orthogonal (perpendicular) vectors one can find in it. What makes it possible to define orthogonality for vectors is an inner product (a.k.a. scalar product). This is a machine that accepts two vectors a, b and returns a real or complex number ⟨b|a⟩. Two vectors are orthogonal if their scalar product vanishes (equals zero).
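The inner product can likewise be made concrete (again an illustrative sketch with arbitrarily chosen vectors; numpy's vdot conjugates its first argument, which matches the convention that ⟨b|a⟩ sums the products of the conjugated components of b with the components of a):

```python
import numpy as np

a = np.array([1 + 1j, 2 - 1j, 0 + 0j])
b = np.array([0 + 1j, 1 + 0j, 3 - 2j])

# The inner product <b|a>: np.vdot conjugates its first argument,
# so np.vdot(b, a) = sum of conj(b_i) * a_i over the components.
print(np.vdot(b, a))

# Two vectors are orthogonal if their inner product vanishes.
e1 = np.array([1, 0, 0])
e2 = np.array([0, 1, 0])
e3 = np.array([0, 0, 1])
print(np.vdot(e1, e2))  # 0: e1 and e2 are orthogonal

# e1, e2, e3 are mutually orthogonal, and no fourth vector is
# orthogonal to all three: the space has dimension 3.
```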
A 1-dimensional subspace L can be contained in a subspace U, and it can be orthogonal to U; but it may also be neither contained in nor orthogonal to U. This third possibility is what makes room for nontrivial probabilities. As a probability algorithm, L assigns probability 1 to elementary tests represented by subspaces containing L, it assigns probability 0 to elementary tests represented by subspaces orthogonal to L, and it assigns probabilities greater than 0 and less than 1 to tests represented by subspaces that neither contain nor are orthogonal to L.
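To see how L generates these numbers, the following sketch assumes the standard quadratic rule of quantum mechanics (the Born rule, which has not yet been introduced here): the probability that L, spanned by a unit vector v, assigns to the test represented by a subspace U is the squared norm of the projection of v onto U.

```python
import numpy as np

def probability(v, U_basis):
    """Probability assigned by the line spanned by the unit vector v
    to the test represented by the subspace U.
    U_basis: a list of orthonormal vectors spanning U.
    Rule assumed here: p = |projection of v onto U|^2 (Born rule)."""
    proj = sum(np.vdot(u, v) * u for u in U_basis)
    return float(np.linalg.norm(proj) ** 2)

# U: the 2-dimensional subspace of C^3 spanned by e1 and e2.
e1 = np.array([1, 0, 0], dtype=complex)
e2 = np.array([0, 1, 0], dtype=complex)
e3 = np.array([0, 0, 1], dtype=complex)
U = [e1, e2]

# Case 1: L is contained in U -> probability 1.
print(probability(e1, U))           # 1.0

# Case 2: L is orthogonal to U -> probability 0.
print(probability(e3, U))           # 0.0

# Case 3: L neither contained in nor orthogonal to U
#         -> probability strictly between 0 and 1.
v = (e1 + e3) / np.sqrt(2)
print(probability(v, U))            # 0.5
```

With this rule, containment yields probability 1, orthogonality yields probability 0, and the third possibility yields values strictly between the two, which is exactly the room needed for fuzzy properties.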
The virtual inevitability of this “upgrade” has been demonstrated by J.M. Jauch.[1]
1. Jauch, J.M. (1968). Foundations of Quantum Mechanics. Addison-Wesley.