TRIALITY IN THE DEPTH OF PHYSICS? ON THE FUNDAMENTAL UNIFICATION

 

From: Triality in Evolution, eds. B. Lukács & al., KFKI-1995-21/C

 

 

B. Lukács

Central Research Institute for Physics RMKI, H-1525 Bp. 114. Pf. 49., Budapest, Hungary

 

ABSTRACT

            It seems as if physics suggested 3 fundamental phenomena, accompanied by 3 fundamental constants known as G, c and h. If so, then a contradiction-free description would need a trialistic unification. Dual unifications are known, e.g. General Relativity, but the trial unification is not yet available. Here the reasons suggesting a trial unification, the possible phenomena described by it, and the difficulties are discussed.

 

1. INTRODUCTION

            There are, to our present knowledge, at least 3 fundamental and generally valid phenomena in physics. These we call here symbolically Gravity, Relativity and Quantumness. We do not know exactly what is behind these phenomena; we know only some of their consequences, and some theories which have been made to describe them. Still, it is easy to define the meaning of the above 3 keywords in such a way that all physicists can agree.

            Gravity is a phenomenon by which any pair of physical entities disturb each other through "empty space", albeit in degrees decreasing with distance. There are many ways by which mutual disturbances are possible; but only gravity is "general". E.g. electromagnetism mimics gravity quite well, but acts only on electrically charged bodies. As we know, gravity acts on everything.

            Relativity is a phenomenon showing that only relative motion means anything, and that 3-space and time, as we know them, do not mean too much; only the space-time is "objective", but causality is strict.

            Quantumness (a term whose clumsiness reflects how late the phenomenon was recognised) is behind a lot of correlated violations of basic concepts of physics from Aristotle to Mach. It means that the point particle is a nonexistent idealisation; that there may be two bodies at the same time at the same place. Maybe its most direct manifestation is the existence of "uncertainty relations", inequalities stating the nonexistence of states sharply defined simultaneously in two quantities "canonically conjugate" to each other.

            Interestingly enough, each such Phenomenon possesses its Fundamental constant. For Gravity, there is the Cavendish constant: G = 6.67*10^-8 cm^3/(g s^2). For Relativity, space and time scales are converted to each other via the light velocity: c = 3.00*10^10 cm/s. Finally, in Quantumness elemental and indivisible portions of energy, angular momentum, uncertainties, &c. are all defined by the Planck constant h = 1.05*10^-27 g cm^2/s.

            Obviously, for a phenomenon at least one scale parameter is needed, and if the phenomenon is of general validity, the constant is such as well. We do not know what would happen if two scale constants were to belong to one fundamental phenomenon; no such case has been reported. One may believe that further fundamental and generally valid physical phenomena exist as well: up to now none has been reported beyond doubt. One may question the general validity of any of the 3 phenomena mentioned above; but the loss of general validity would result in anomalies.

            Now, fundamental and generally valid phenomena cannot be contradictory, but theories can be, and are. As long as this is so, physics does not work: in principle in all cases, in practice in some. So some answers are impossible, ambiguous or arbitrary, and some "final" questions cannot be answered. It seems that this (not necessarily only this) is behind some unsolved questions in cosmology, biology or psychology.

            The standard way of solution is Unification. If two theories are unified, the new unified theory is not self-contradictory (a self-contradictory theory is not a correctly made theory, so it is not a real theory but rather educated guesswork). The problem is that in unification the general structures of the theories must change (they differ from each other even at the beginning). But then nothing leads the hand of the unifier: great inspiration and intuition are needed, and even if a unification has been made, there is no guarantee that the result is correct.

            We do not yet have the unified theory of all 3 Phenomena. But an overview of successes and failures will be edifying.

 

2. GRAVITY IN ITSELF: THE NEWTONIAN THEORY OF MUTUAL ATTRACTION

            In ancient ages the whole problem did not exist at all. Our weight, and in some rare cases the orbital motion of Moon, was explained by Universal Weight, the universal tendency of bodies to approach an exceptional point of the Universe. The most consequent theories put this point into Earth's center, because then the formation of the spherical Earth got a simple explanation, and experience was absent at places far from this point. Asimov points out [1] that this would have been impossible if Venus or Mars, instead of Earth, had possessed a moon; similarly, the Galilean moons of Jupiter are not too far from visibility by the naked eye. In either case the early astronomers, many millennia ago, would have established at least the multiplicity of centers of attraction, but (by accidents at the formation of the Solar System?) this became possible only with the invention of the telescope. Then it happened within one year. The logical consequence was the theory that gravity is mutual between bodies.

            The first mathematical formalism of mutual gravitational attraction was given by Newton [2]. He formulated gravity as a mutual force causing acceleration. From the data of parabolic (really elliptic) paths of projectiles on terrestrial grounds, from the orbital velocity and radius of Moon, and from the radius of Earth (all known with some accuracy in his time) he found that if an inverse square law in distance holds (which is rather natural from the geometric viewpoint and very simple mathematically), then the same force can explain the motion of the cannon ball and that of Moon, a description conforming to Occam's razor, Entia non sunt multiplicanda praeter necessitatem ("entities must not be multiplied beyond necessity"), the principle of all economical theories. So he stated

              ma = F = -GMm/r^2                                                                                        (2.1)

Such a formula predicts that the accelerations of all falling bodies are equal at the same place; this had already been established, to within a few percent of accuracy, by Galileo's experiments at the Tower of Pisa. Later the fact was confirmed by Bessel to 4 digits from astronomical observations, and in this century to 9 digits in the laboratory [3].

            After the formulation of the force law of gravity a science called celestial mechanics came into existence, and sooner or later all but two observed details of the motions of celestial bodies of the Solar System were explained in detail, up to observational errors, by the Newtonian Theory of Gravity. The two exceptions were the secular deceleration of the angular velocity of Moon and the anomalous perihelion advance of Mercury. Both effects are very minute: the rate of lunar angular deceleration is some 10^-10/year, and Mercury's perihelion advances 5.75"/y instead of the 5.32"/y calculated as the perturbation of all known planets (while in a year Mercury itself moves some 1500°, some 10^7 times more than the anomalous advance).

            The lunar anomaly was explained by G. H. Darwin as a consequence of tidal friction [4]. Moon generates a tidal protuberance on the oceans just below herself, but friction with the other oceanic waters carries this protuberance forward. In consequence, the gravitational attraction between the protuberance and Moon pulls Moon forward and some water backward. So Moon gets some extra (orbital) angular momentum at the price of the rotational angular momentum of Earth. Therefore Moon spirals outwards with increasing angular momentum but decreasing angular velocity. While the actual rate of angular momentum transfer could not be calculated from first principles (friction with the seabed is a complicated phenomenon), the rates did not contradict anything known. As for the anomalous perihelion advance of Mercury, the simplest solution was an unseen planet, called Vulcan, within Mercury's orbit. Once the discovery of Vulcan was reported [5], but without a calculated orbit, and the result was irreproducible.

 

3. RELATIVITY IN ITSELF: EINSTEIN'S SPECIAL THEORY OF RELATIVITY

            The relativity of motions comes from Galileo, but there it was true only for "mechanical" motions [6]. It came from common sense. In Newton's mechanics it turned out to be true if all forces depend only on relative distances and velocities, an obvious assumption. However, it had no further consequence, because the general belief was that optics was not subject to this relativity principle. Namely, the propagation of the transversally oscillating light wave was explained via a very rigid medium, the aether, of course permeating space up to at least the visible stars. The (local) rest frame of this medium defines a natural and physically preferred coordinate system, and by combinations of mechanical and optical experiments velocities can be measured in this system. While this rest frame may not be absolute in the philosophical sense, it is quite universal in physics.

            However, serious attempts to measure Earth's velocity in this K_0 frame failed, i.e. they gave 0 velocity with a margin much below Earth's orbital velocity around Sun [7]. For a while ad hoc explanations existed for the particular failures, but finally Einstein invented a general explanation [8], soon supplemented by Minkowski's geometric interpretation. This supplemented version is now known as Einstein's theory of Special Relativity.

            The fundaments of this theory go as follows. The fundamental manifold of physics is not Space, but Space-Time, a four-dimensional manifold whose points are physical events. An event is labelled by its spatial coordinates x and its time t.

            By using four coordinates (x,y,z,ct) one gets a 4 dimensional pseudo-Euclidean space with the invariant elementary distance

              ds^2 = dx^2 + dy^2 + dz^2 - (c dt)^2                                   (3.1)

the same for any observer, where, at first, c is merely a scale factor of velocity dimension.

            Any linear transformation of the coordinates preserving this invariant form gives another permitted coordinate system; the systems are related to each other via unaccelerated motions, and none of them is preferred. Only such formulae are permitted which transform covariantly with the coordinate transformations. Therefore, when comparing the observations of two observers of relative velocity v on some noninvariant, 3 dimensional quantity (lengths of rods, time intervals between events &c.), the ratio v/c appears in the relation of the two observations.

            If, in addition, one postulates that real physical motion can connect events only if

              ds^2 ≤ 0                                                                                                 (3.2)

then events happen in a strict causal order for any observer (time evolution goes forward), as always seen.

            Since the whole theory was based on the systematic failures of optical measurements of absolute velocity, c turned out to be the velocity of light; but it became a universal limiting velocity (in 3 dimensional language).

 

4. QUANTUMNESS IN ITSELF: QUANTUM (WAVE) MECHANICS

            At the end of the last century there were problems in explaining the frequency distribution of blackbody or cavity radiation. It turned out that sufficiently "black" bodies emit similar radiation when heated to the same temperature. The radiation was, of course, proportional to the surface F, depended on the temperature T, and its distribution changed with the frequency x, but otherwise the formula was universal:

              dE ~ Ff(x;T)dx                                                                                               (4.1)

with f ~ x^2 at low x and exponentially decaying, f ~ e^(-ßx/T), at high x. The frequency of the maximum obeyed

              x_m = T/ß                                                                                                (4.2)

where ß ~ 10^-27 erg·s (the temperature being measured in energy units).

            The explanation might have been anything about the microscopic structure of matter. However, experimenters showed that the same radiation emerged from empty containers with reflecting or reradiating inner walls if a negligible hole was opened. Since no particular matter was inside, a fundamental thermodynamical explanation was needed, of course with the possibility that c appears too, via Maxwell's equations.

            With combined electromagnetism and thermodynamics Wien arrived at a scaling

              f(x;T) = c^-3 x^3 g(x/T)                                                               (4.3)

and there he became stuck. The experiments suggested

              lim_(y->0) g(y) ~ 1/y

              lim_(y->∞) g(y) ~ e^(-ßy)                                                              (4.4)

              y ≡ x/T

and, by choosing any smooth function for g between the asymptotics, eq. (4.2) follows with a number factor of order 1. However, the new constant ß seemed to belong to the vacuum, which is strange.

            In 1900 Planck was able to get a function g(x/T), conforming to observations, from the postulate that all emissions and absorptions of energy in atomic systems happen in portions

              E = hx                                                                                                                        (4.5)

[9]. Then the principles of thermodynamics will result in

              g(y) = 8πh(e^(hy) - 1)^(-1)                                                            (4.6)

where h, one 2π-th of Planck's original constant (i.e. the reduced constant usually written ħ; it is written simply h throughout this paper), was a completely new constant, to be taken from measurement. With h = 1.054*10^-27 erg·s this Planck spectrum agrees with all measurements.
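
            It is instructive to check eq. (4.6) numerically against the displacement law (4.2). The sketch below (Python; the frequency grid and the two temperatures, taken in energy units as above, are illustrative choices, not from the text) locates the maximum of x^3 g(x/T):

```python
import numpy as np

hbar = 1.054e-27                     # erg*s; the constant written h in this paper

def g(y):
    # Planck's scaling function, eq. (4.6)
    return 8.0 * np.pi * hbar / np.expm1(hbar * y)

for T in (1.0e-13, 2.0e-13):              # temperatures in energy units (erg)
    x = np.linspace(1e12, 1e15, 200001)   # trial frequency grid, 1/s
    f = x**3 * g(x / T)                   # eq. (4.3) up to the c**-3 factor
    xm = x[np.argmax(f)]
    print(T, xm, hbar * xm / T)           # last column: the order-1 factor
```

The last printed column is ≈ 2.82 at both temperatures, so x_m is indeed proportional to T, with ß = h/2.82 ≈ 4*10^-28 erg·s, of the order quoted at eq. (4.2).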

            In the following years the new constant appeared in various rôles, e.g. in the photoelectric electron emission from alkali metals, in atomic spectra and in the specific heat of crystalline matter. The synthesis was reached in 1925-6, when Heisenberg formulated matrix mechanics [10] and Schrödinger wave mechanics [11]. They turned out to be equivalent, and Dirac formulated the fundamental postulate of Quantum Mechanics [12] as follows.

            Each physical quantity is replaced by an operator. The commutator of two operators is ih times their classical Poisson bracket. The operator acts on the physical state vector or function, and the measurable values of a physical quantity are the eigenvalues of the corresponding operator. Then uncertainty relations appear if two operators do not commute, and the uncertainties are proportional to h. Later von Neumann supplemented the theory with the axiomatic description of measurement [13]. According to him, if we measure the system (whatever that means, e.g. an interaction with a macrosystem, us or the apparatus), then it jumps into an eigenstate of the operator of the measured quantity, with a probability calculable from the actual state.

            Since we will refer back to this point later, let us see the so-called Schrödinger equation, describing the evolution of a simple point system in Quantum Mechanics in the wave language. We assume that only potential and kinetic energies exist; the potential is external and depends only on the coordinates as V(x). Then we need the operators of coordinates, momenta, energy and time derivative. First, let us choose the correspondence

              x_i -> x_i·                                                                                              (4.7)

Since the [x,p] Poisson bracket is 1, one now gets

              p_k -> -ih(∂/∂x_k)                                                                                       (4.8)

Via eq. (4.7)

              V(x) ‑> V(x)·                                                                                     (4.9)

Finally, Hamiltonian mechanics + analogy with eqs. (4.7-8) suggest that

              E -> ih(∂/∂t)                                                                          (4.10)

Then, since E = p^2/2m + V in classical mechanics, the evolution of the wave function ψ between two measurements is described by the equation

              ih∂ψ/∂t = -(h^2/2m)(∂^2/∂x^2)ψ + Vψ                                          (4.11)

This equation is fully deterministic for ψ; the deterministic evolution is interrupted by measurement, where a stochastic step appears, but not between measurements.
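
            To illustrate this deterministic evolution between measurements, here is a minimal sketch for the free case V = 0 of eq. (4.11). It uses the standard closed-form width of an initially Gaussian wave packet rather than a grid solver; the mass and initial spread are illustrative choices, not taken from the text:

```python
import math

hbar = 1.054e-27                          # erg*s

def width(sigma0, m, t):
    # position spread of a free Gaussian packet at time t (cm)
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

m = 9.11e-28                              # electron mass, g (illustrative)
sigma0 = 1.0e-8                           # initial spread ~ atomic size, cm
for t in (1e-16, 1e-15, 1e-14):           # seconds
    print(t, width(sigma0, m, t))
```

Note that the packet spreads appreciably on the timescale t ~ m·σ0^2/h; this is the relation that reappears as eq. (8.1) in Sect. 8.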

            Quantum Mechanics states that the measurable physical quantities are all represented by operators. Then in the measurement of 2 quantities commutators always enter, which, in turn, contain h. In addition, the evolution equations always contain h too. So Quantumness is a universal phenomenon, and h is its fundamental constant.

 

5. ON CONTRADICTIONS

            The above 3 fundamental theories have been proven correct in themselves, i.e. their predictions have been verified in measurements innumerable times, when nothing else disturbed the results. However, the 3 fundamental theories contradict each other pairwise.

            Special Relativity Theory predicts that light propagates with a constant velocity c. On the other hand, the mass m drops out from eq. (2.1), so anything moving with v=c at infinity would move with some other velocity near a mass. A ball thrown towards a mass M with almost v=c would exceed c nearer to the center, but this is impossible in Special Relativity. Both Newtonian Gravity and Special Relativity claim universality, but they cannot be simultaneously true.

            Quantum Mechanics predicts an inevitable spread of momentum (and so of velocity) of a point mass enclosed in a box of linear size L as

             Δv ≥ h/(2mL)                                                                                            (5.1)

If L < h/(2mc), then the point mass would have probabilities for v > c, prohibited by the also universal Special Relativity Theory.
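
            A quick numeric illustration (a sketch, assuming the electron as the point mass): the critical box size below which eq. (5.1) alone would force velocity probabilities beyond c is

```python
hbar = 1.054e-27      # erg*s (the paper's h)
c = 2.998e10          # cm/s
m_e = 9.11e-28        # electron mass, g

L_crit = hbar / (2.0 * m_e * c)
print(L_crit)         # ~1.9e-11 cm
```

about half the electron's reduced Compton wavelength: deep in the microscopic domain, but far above the Planck length of Sect. 11.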

            The potential or force field of universal Newtonian Gravity is measured from the ballistic paths of test bodies. In the theory the gravitational potential is a c-number. On the other hand, Quantum Mechanics claims universal validity too, so the test bodies have inevitable spatial and momentum spreads. Then there is a universal limit when measuring Vgr(x) (see Sect. 8). So the sharp potential of Newtonian gravity cannot be measured in any way, i.e. it does not exist.

            Since everything contradicts everything, at least two of the 3 "fundamental theories" must be wrong, maybe all 3. Now we turn to pairwise unifications.

 

6. GRAVITY + RELATIVITY: GENERAL RELATIVITY

            In 1916 Einstein was able to formulate the common theory of Gravity and Relativity [14]. In it the manifold is a 4 dimensional pseudo-Riemannian one, with the infinitesimal distance

              ds^2 = g_rs(x) dx^r dx^s                                                               (6.1)

(there is a summation over indices occurring pairwise, above and below). The metric tensor g_ik is the solution of the Einstein equation

              R_ik(g_lm) - (1/2)g_ik R_rs g^rs = -8πGc^-4 T_ik                            (6.2)

where R_ik is the Ricci tensor, formed from g_ik and its first and second derivatives, while T_ik is the energy-momentum tensor of the matter present. Motions happen between events with ds^2 > 0 (note the sign convention opposite to that of eq. (3.1)), and force-free motions go on straight(est) lines called geodesics. Gravitational force does not exist; rather, a central body changes the space-time geometry, and another body orbits it on the straightest lines of the influenced geometry, as if it were attracted. Of course these geodesics are independent of the orbiting body, so the measured acceleration is matter-independent to any number of digits.

            This is really a unified theory. For small masses the Minkowski space-time is recovered, and so Special Relativity. For slow motions far from the gravity center the motion mimics the Newtonian motion under the force (2.1), which will not be shown here. However, the formalism differs from those of both precursor theories.

            Mercury, the planet closest to Sun, does not move very slowly, and not at a very weak curvature. So deviations are expected from the Newtonian orbit, and they are obtained by calculation as 0.43"/year. The exact value of the observed perihelion anomaly is obtained without further assumptions.
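
            The calculation itself is not reproduced in this paper; for the curious reader, here is a hedged numeric check using the standard first-order GR result Δφ = 6πGM/(c^2 a(1-e^2)) per orbit (the formula and Mercury's orbital data are textbook values, not taken from the text):

```python
import math

G = 6.674e-8          # cm^3/(g s^2)
c = 2.998e10          # cm/s
M_sun = 1.989e33      # g
a = 5.791e12          # Mercury's semi-major axis, cm
ecc = 0.2056          # Mercury's orbital eccentricity
P = 0.2408            # Mercury's orbital period, yr

dphi = 6.0 * math.pi * G * M_sun / (c**2 * a * (1.0 - ecc**2))  # rad per orbit
print(dphi * (180.0 / math.pi) * 3600.0 / P)                    # ~0.43 arcsec/yr
```

The result, ~0.43 arcsec/year, is exactly the anomalous advance quoted above.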

            In the unified theory both previous constants appear. For an example where both appear in a physical situation, consider the horizon distance

              r_h = 2GM/c^2                                                                                  (6.3)

around a point mass M. Below this shell even light is trapped.
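
            For orientation, eq. (6.3) can be evaluated for two familiar masses (a trivial sketch; the mass values are standard, not from the text):

```python
G = 6.674e-8          # cm^3/(g s^2)
c = 2.998e10          # cm/s

for name, M in (("Sun", 1.989e33), ("Earth", 5.972e27)):
    print(name, 2.0 * G * M / c**2, "cm")   # eq. (6.3)
```

About 3 km for Sun and below a centimetre for Earth, which is why horizons play no role in everyday gravity.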

 

7. RELATIVITY + QUANTUMNESS: QUANTUM FIELD THEORY

            The first attempt to unify Quantum Mechanics and Special Relativity was made by Dirac [15], in a straightforward way. Starting with relativistic mechanics, and then again replacing physical quantities by operators as in Section 4, one arrives at relativistically covariant quantum mechanical equations. However, this formalism does not define a self-consistent theory.

            Namely, when starting in Quantum Mechanics, one must fix the number of particles, since the wave function ψ depends on the particle coordinates. On the other hand, putting more energy into the system than twice the rest energy of the particle considered, a particle-antiparticle pair may appear. Since the uncertainty principles permit energy fluctuations for a short time, additional pairs are always present with a small probability, so the structure of Quantum Mechanics loses its validity if Relativity is included.

            The solution was the so-called "second quantization" in Quantum Field Theories [16]. Instead of the wave function ψ another operator appears, producing the actual, changing state from a fixed one. The actual physical field considered must be defined in advance.

            There are as many independent Quantum Field Theories as independent interactions (only one in Grand Unification). Two fundamental constants, c and h, are always present, and the coupling constants of the fields appear too, but the latter are dimensionless.

            Quantum Field Theories tend to give infinite results via divergences, but for a class of theories these divergences can be removed by renormalisation. If the coupling constant is not <<1, technical problems may appear. However, for Quantum Electrodynamics e^2/(hc) ~ 1/137, so the theory can be evaluated via perturbations, and the results (e.g. small corrections in the spectrum of the H atom) are correct.

 

8. GRAVITY + QUANTUMNESS: NEWTONIAN QUANTUM GRAVITY

            This pairwise unification is not yet ready, mainly because for decades nobody tried to formulate it. Namely, we have seen that in 1925, when Quantum Mechanics appeared and any contradiction between Quantum Mechanics and Newtonian Gravity could have shown up, Newtonian Gravity was no longer the contemporary theory of Gravity, General Relativity being already available.

            However, this was a historical accident, not an inherent ontologic fact. So let us assume for a moment that Quantum Mechanics had preceded General, moreover even Special Relativity; what contradiction appears then between Newtonian Gravity and Quantum Mechanics, and how is it to be resolved?

            In Newtonian Gravity there is a well-defined sharp gravity potential V, giving a force, acting on bodies. In Quantum Mechanics bodies have wave functions, so their positions have uncertainties. We shall see that these two statements contradict each other. For the details see [17].

            Assume first a sharp (c-number) gravity potential V(r). We want to measure it. The proper way is to throw test bodies through the region affected by V; the accelerations will show the local gradients of V, and then V can be reconstructed (up to a zero point constant, always appearing in potentials). Without Quantumness this is possible in the limit of infinitely many test bodies. With Quantumness the process does not converge. Namely, take a test body of mass M which is point-like (but, of course, its wave function is not). We want to measure the average of the potential or of the gravitational acceleration g in a volume R^3 and a time interval T. Obviously the wave function should be concentrated in the volume at least during T. Starting with a very concentrated wave packet, it spreads too rapidly. The optimal strategy is to choose an initial spread below but on the order of R, in such a way that at t=T it has grown to just R. Only order of magnitude formulae are given here; choose a spread R/2 and wait until it becomes R. This time will be T if

              T ~ MR^2/h                                                                                     (8.1)

which gives a lower limit, M ~ hT/R^2, for the mass of the test particle. In addition, the momentum P must be 0 at the middle of the measurement, otherwise the particle would leave the volume prematurely. Now, the particle picks up a momentum

              P ~ MTg                                                                                                                    (8.2)

with a quantum uncertainty

              ΔP ~ h/R                                                                                       (8.3)

Then the sensitivity of the measurement is

              σ(g) ~ h/(MRT)                                                                                           (8.4)

So the accuracy of the measurement increases with increasing M. However, with a high enough M the test body is no longer a test body, because it disturbs the gravity to be measured, from an unknown point. Therefore

              σ^2(g) ~ (h/MRT)^2 + (GM/R^2)^2                                                          (8.5)

Now, our only free parameter is M, because R and T were fixed at the beginning. Therefore σ(g) has its minimum at

              M_opt ~ (hR/GT)^(1/2)                                                                          (8.6)

whence

              σ^2(g) ~ hG/(R^3 T)                                                                              (8.7)

This is the final accuracy to which the gravitational acceleration is observable in a domain of size R during a time T. Eq. (8.7) is a special kind of uncertainty relation.
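
            The optimisation in eqs. (8.5)-(8.7) is easy to reproduce numerically. The sketch below (Python; R and T of 1 cm and 1 s are illustrative choices, and only orders of magnitude are meaningful) scans over test masses M and compares the numerical optimum with the closed forms:

```python
import numpy as np

hbar = 1.054e-27      # erg*s
G = 6.674e-8          # cm^3/(g s^2)
R, T = 1.0, 1.0       # cm, s (illustrative choices)

M = np.logspace(-15, -5, 2001)                        # trial test masses, g
sigma2 = (hbar / (M * R * T))**2 + (G * M / R**2)**2  # eq. (8.5)
i = np.argmin(sigma2)
print(M[i], (hbar * R / (G * T))**0.5)   # numerical optimum vs. eq. (8.6)
print(sigma2[i], hbar * G / (R**3 * T))  # minimal variance vs. eq. (8.7)
```

The numerical minimum agrees with (8.6)-(8.7) up to the expected order-unity factors.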

            Now let us make one more step by postulating that physical theories should not contain unobservable quantities. Then eq. (8.7) indicates that the gravity potential is not a c-number; it contains an inherent "smearing". The simplest way to this is a stochastic description. The gravitational acceleration g (gradient of the potential) has a deterministic and an indeterministic part, as

              g = g_cl + g_st                                                                            (8.8)

and V is built up in the same way. We get eq. (8.7) if

              <g_st(x,t)> = 0                                                                            (8.9)

              <g_st(x,t)g_st(x',t')> = hG δ^3(x-x')δ(t-t')                      (8.10)

which is just a white noise type fluctuation.

            Of course, there is a back reaction on Quantum Mechanics. The Schrödinger equation automatically takes the form

              ih∂ψ/∂t = -(h^2/2m)(∂^2/∂x^2)ψ + (V_cl+V_st)ψ                       (8.11)

So quantum objects are subject to a general stochastic force, determined by the fundamental constants h and G. This would lead to an "anomalous" Brownian motion or, for microparticles, simply to the breakdown of quantum superpositions.

            Such effects are, in principle, observable. Some efforts have been made to find hopeful scenarios since, some 30 years ago, Károlyházy proposed a theory which belongs to Sect. 10, but in which Quantum Mechanics was also disturbed via a stochastic gravity [18]. The result up to now is that the anomalous Brownian motion is on the verge of observability if circumstances are lucky; but they probably are not [19], [20].

            On the other hand, there is a phenomenon without good explanation whose characteristic data conform to eq. (8.11), although at this moment it is rather hard to calculate the theoretical predictions. The phenomenon is called spurious scattering. Spurious scattering was observed when evaluating particle tracks in photoemulsions. In the 30's cosmic radiation was measured by balloons carrying photoemulsion, but it was impossible to establish the standard electric and magnetic fields on board the balloon to determine the particle energies. However, via Coulomb scattering the track winds in a way whose statistics is definite, with a parameter that is energy- and emulsion-dependent. Calibrating the same emulsion on the ground, one can then determine the particle energy from the winding of the track. However, it turned out that besides the measurement errors and Coulomb scattering a third effect exists too, mimicking Coulomb scattering but with a different statistics, similar to that of a random walk. It was necessary to determine its amplitude too when determining the energy of the particle. There is no generally accepted explanation.

            Being a mere side effect, publications on spurious scattering were rare; an exception is Ref. [21]. L. Jánossy, involved in cosmic radiation measurements at that time, later believed that some limitation of quantum superposition is behind it [22], and proposed a methodical measurement of it [23]. The result is that many observations point to a characteristic length of the "random walk" on the order of 10^-8 cm; such a characteristic length can be obtained from formulae containing only h, G and a mass M with a mass ~10^-13 g; and this is the characteristic grain size in emulsions [24]. If anyone is interested further, the details of the argumentation are in Ref. 24. At this time no new data are expected, because of changes in detection techniques in particle physics.
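
            The dimensional argument can be checked in one line: the unique length formed from h, G and a mass M is l = h^2/(GM^3), and for the grain mass quoted above it lands at the observed scale (a trivial sketch; M is the ~10^-13 g figure from the text):

```python
hbar = 1.054e-27      # erg*s
G = 6.674e-8          # cm^3/(g s^2)
M = 1.0e-13           # g, the grain mass quoted in the text

print(hbar**2 / (G * M**3))    # ~1.7e-8 cm
```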

            As told above, the simplest way to satisfy the uncertainty relation (8.7) is to introduce a stochastic element into Quantum Mechanics and Newtonian Gravity (both completely deterministic in their own spheres). The unified theory (Newtonian Quantum Gravity) is not yet ready, but proper stochastic terms have been introduced into Quantum Mechanics; see Ref. 25 and citations therein.

            We note that in NQG a borderline can be derived in the form

              h^2/G ~ M^3 R                                                                            (8.12)

Well above this line the body moves approximately on a trajectory of NG and we can forget about wave functions; well below, its evolution is almost exactly as in QM, with superpositions and so on. So this dividing line is the border between Macroscopy and Microscopy. Objects near the line would need the full NQG, not yet ready. For terrestrial densities ~1 g/cm^3, grains of R ~ 10^-5 cm and M ~ 10^-14 g are on the borderline. There one can expect new, unexplained behaviour.
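
            The borderline figures quoted above follow from combining eq. (8.12) with M = ρR^3; a minimal check, assuming terrestrial density ρ = 1 g/cm^3:

```python
hbar = 1.054e-27      # erg*s
G = 6.674e-8          # cm^3/(g s^2)
rho = 1.0             # g/cm^3, terrestrial density

R = (hbar**2 / (G * rho**3))**0.1   # from rho**3 * R**10 = h**2/G
print(R, rho * R**3)                # ~2e-5 cm and ~1e-14 g
```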

            Now, these characteristic data hold for colloid grains. What is more, they hold also for the smallest known true living organisms below bacteria: Mycoplasmatales and Rickettsia. In addition, they hold for the relevant parts of neurons too. Now, the extra stochastic nature of NQG means that all the argumentation about determinism and indeterminism up to now is invalid for objects near the dividing line, and the unsolved problem of "free will" is obviously intimately connected with neural networks, whose elements are on the dividing line.

 

9. FURTHER CONTRADICTIONS AND LIMITATIONS

            Now we have 3 unified theories (one is still under construction, but never mind), but they are unified pairwise. So they cannot be the unified theory. One can see this if limitations of the theories are detected, or if two unified theories still contradict each other.

            Limitations are ample. For example, in a wide class of material sources GR cosmologies start or end with a singularity, where the density is infinite [26]. One may or may not believe in physical infinities but, without doubt, histories cannot be continued through infinities. Quantum Field Theories are able to describe some interactions, although in the strict mathematical sense they do not exist [27]; but without doubt they cannot give the masses of the elementary particles appearing in them.

            Let us clarify this statement. Consider a QFT. It describes an interaction, mediated by a vector boson of mass m, with a coupling constant g^2, between various particles of masses Mi. In addition, of course, h and c appear.

            First question: can h, c and g^2 determine m? The answer is no, because the dimension of g^2 is that of hc. One cannot get anything of the dimension of m from h, c and g^2.

            Second question: may h, c and g^2 determine m? The answer is again no, from the structure of QFT's, since QFT's with massive vector bosons are not renormalisable, so they give infinities, not removable, as results. Non-zero masses must be generated in a way called spontaneous symmetry breaking. We will not go into its details, but it needs a scalar boson (Higgs) with a quartic potential. Then an effective mass is generated, determined by h, c, g^2 and the coefficients of the quartic potential. As we have seen, h, c and g^2 are insufficient, and the Mi must not appear, because m is unique while the Mi are various. So the coefficients of the quartic polynomial must set the mass m; however, this is a description, not an explanation.

            In addition, m is generally of the same order of magnitude as the Mi's. This suggests a common particle mass scale ~1 GeV. However, this characteristic mass cannot be obtained from QFT's; it can only be introduced into them from outside.

            As for contradictions, we mention only one and a half, from confronting GR and QFT.

            First consider the Einstein equation (6.2) of GR. The Einstein equation is necessary to determine the metric of the space-time, but it does not exist outside pure GR. Namely, the left hand side is a c-tensor, but the right hand side is a q-one, so they cannot be equal.

            Second, try the idea that the Einstein equation holds, e.g., in expectation value. Then we have a generally non-Minkowski space-time with Quantum Fields in it. Now, the vacua of QFT's are Lorentz-invariant, but not invariant under the general transformations of GR. So no covariant answer exists if we ask for the states of the quantum fields. Even if somehow there is a "most natural" coordinate system, strange results are obtained. Here only one will be mentioned.

            Consider a Universe solution. Then there is a "natural" coordinate system, whose time coordinate lines are trajectories orthogonal to the 3-spaces of constant curvature [26]. Let us perform the QFT calculations in this coordinate system. Since the geometry is generally time-dependent, the fields become excited, and vector bosons are created at some rate. This is a kind of Hawking radiation [28].

            Now, there is no problem with the existence of such a radiation. The problem is that, when it was calculated for a de Sitter Universe, the result turned out to be rather strange. A de Sitter Universe is a metric of the form

              ds^2 = dt^2 - e^(αt)(dx^2 + dy^2 + dz^2)                                    (9.1)

and it has 10 Killing vectors of space-time symmetry [29]. Now let us calculate the Hawking radiation in the special coordinate system of an observer moving with a "cosmologic" velocity, a velocity field orthogonal to the 6 spatial symmetries. A more or less thermal energy distribution is obtained. Such particles may have been created by the expanding Universe.

            However, consider another observer moving with a relative velocity to the first, but still orthogonal to the 6 spatial symmetries. This is possible because there are 4 timelike symmetries too, and linear combinations of Killing vectors with constant coefficients are Killing vectors too. He will, of course, see the same distribution, for symmetry reasons. But for a real radiation, external to both observers, a Doppler shift would be expected due to the relative motion, and it does not appear here [28]. So there are ontologic problems with this radiation. That is only philosophy; but then, should we write the energy-momentum tensor of this radiation into the Einstein equation for determining the expansion, or should we not?

            Without doubt, the 3 pairwise unified theories cannot be the final word of natural sciences.

 

10. TRIALISTIC UNIFICATIONS

            Contradictions would automatically vanish in a trially unified theory if it were free of self-contradictions. Here we note first that no such theory, describing in addition correctly the known phenomena, has been found up to now. We mention some examples, without too much detail or completeness.

            The first such example is the so-called K-model [18]. It was not intended as a trial unification, but formally it is one, containing G, c and h as fundamental constants. It is a space-time, approximately Minkowskian, but with "gravity waves" travelling in it. They are stochastic, with such amplitudes that some combined "uncertainty relations" hold. So in any prediction h, G and c appear together. Unfortunately, the theory gives infinities, and even with physically reasonable cutoffs it gives unphysical results [30], [31], [32].

            There is a clear kinship between the K-model and Newtonian Quantum Gravity (Sect. 8), but c does not appear in the latter. The structural similarities and differences are discussed in Ref. 33. Indeed, NQG can be extended to give a reasonable near-Minkowskian space-time. However, that is a first approximation, not a trial unification. Quite recently Ref. 32 suggested using space-times conformal to Minkowski (or to those of maximal symmetry) and putting fluctuations into the conformal factor. This is a good idea, e.g. causality (the lightcone structure) would not fluctuate, but such a theory does not contain QFT's, so it is not a trial unification.

            Another, slightly related, approach was taken by Lánczos [34]. He generalised the Einstein equation (6.2) to a fourth order equation via an elegant variational principle. All vacuum solutions of the old equation remain valid, but there are new ones. He showed that there are solutions periodic in all 4 directions. Now let us assume that (for a reason unknown to us) such a solution came into existence in the early Universe, with a period length l ~ 10^-33 cm, the same in all four directions. This periodicity is of course directly unobservable in macroscopy, being averaged out. Still, some fluctuations remain, and their characteristic parameter (we have seen in Sect. 4 that a parameter of dimension g cm^2/s is needed) contains, of course, l and the two fundamental constants of the (modified) GR. Then the only possibility is

              H ~ c^3 l^2/G ~ 10^-27 g cm^2/s                                              (10.1)

of the order of the Planck constant. So it is possible that Quantumness is not fundamental, but a consequence of the microstructure of our space-time. It is possible; but then, first, one should (approximately) derive QFT's from this structure, and, second, what did produce this particular structure?
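
            An order-of-magnitude check of eq. (10.1), taking the assumed period length l ~ 10^-33 cm at face value (a one-line sketch):

```python
G = 6.674e-8          # cm^3/(g s^2)
c = 2.998e10          # cm/s
l = 1.0e-33           # cm, the assumed period length

print(c**3 * l**2 / G)   # ~4e-28 g*cm^2/s, the order of h
```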

            For another approach we must first see something about multidimensional space-times. The topic will be treated in another paper of this Volume [35], but something must be repeated here. The 4-dimensionality of the space-time is an observed fact, but only in macroscopy. Extra timelike dimensions would disturb causality, but extra spacelike ones, closed (compact) at microscopic sizes, do not disturb anything fundamental. Their existence or nonexistence is a matter of fact. Therefore one may invent space-times with N spatial and 1 temporal coordinates. Still, the arguments leading to GR remain valid, so some lightcone structure (SR) and Einstein-type equations are advisable.

            After the pioneering work of Kaluza [36], particle physicists have been eager to generate interactions from extra dimensions. The problem is that electromagnetic or stronger interactions can be four-dimensional projections of particles moving "too fast" in the extra dimensions, so that their full N+1 dimensional velocity would be spacelike [37]. Then either 4 dimensional causality remains unexplained, which is an unpleasant idea, or electrodynamics and the strong and weak interactions do not come from the extra dimensions. Now we can continue with supergravity.

            There is a theory in Minkowski space-time called supersymmetry (see e.g. Ref. 38 and citations therein). The idea is that there are operators of space-time transformations, e.g. rotation and translation, with some commutators. There are operators transforming quantum numbers (connecting for example different quarks, &c.), with some other commutators (e.g. SU(3)). Now, it is unnatural to have non-simple groups for fundamental operators, so the commutators between the two sets of operators should be nontrivial too. The new theory, e.g., predicts new particles, boson partners for each fermion and vice versa. They are not seen, but they may be very massive. Now we arrive at a trial unification if the "external" and "internal" operators act in the macroscopic 4 and the microscopic N-3 dimensions, respectively, and we apply the formalism of GR to all the N+1 dimensions. Automatically, h appears in the commutators. That is the theory of supergravity [39]. Unfortunately it seems unrenormalisable, so if it gave infinities, they could not be removed; in addition, as told above, one may have serious doubts about deriving the stronger interactions from extra dimensions.

            Another attempt uses superstrings [40], fundamental objects whose first excited quantum states lie at 10^19 GeV (10^16 erg). This quantity also contains h, similarly to the Lánczos theory. If the macroscopic space-time structure is derived from the actions of superstrings, then there is hope that such a construction might explain all the pairwise unified theories, but until now no great success has been reported.

            We stop here with the list of attempts. The problem is not hopeless, but obviously the trially unified theory is not yet at hand. However, in the next Section it will be shown that some very general features of the unified theory can be guessed even now.

 

11. THE FUNDAMENTAL SCALES OF THE UNIFIED THEORY

            A trially unified theory contains the 3 fundamental constants of Gravity, Relativity and Quantumness (or their independent combinations); and the minimal theory does not contain more. Then the characteristic data of its fundamental objects or phenomena can be determined in order of magnitude because, with 3 and only 3 fundamental constants, the length, time and mass scales of the unified theory can be uniquely calculated. Namely, besides G, c and h everything else in the results of the calculations must be a pure number and, without postulating very large or very small number constants in the theory (unnatural) or very special formulae (also unnatural), the number factors will be in the "neighbourhood" of the order of unity.

            Consider the set (G,c,h). All of them have different dimensions. Imagine an arbitrary function of them. The only function which can have a dimensional argument is the power function; all the others must have dimensionless arguments. Let us see only one example. Consider the exponential function; it can be expanded into a Taylor series as

              e^x = ∑_(n=0)^∞ x^n/n!                                                                 (11.1)

Now if x is not dimensionless, then the sum does not exist at all, quantities of different dimensions being unsummable.

            But then all lengths derived from the unified theory must contain powers of the 3 constants, so the result must be αG^A c^B h^C, where α is a number and, as told above, one generally expects it not too far from unity. The exponents must be so arranged that (cm^3/g s^2)^A (cm/s)^B (g cm^2/s)^C = cm. This gives 3 equations for the 3 exponents, so the result is unique. In this way we get the scales, traditionally called Planck scales, as

              L_Pl = (hG/c^3)^(1/2) ~ 10^-33 cm                                              (11.2)

              T_Pl = (hG/c^5)^(1/2) ~ 10^-44 s                                               (11.3)

              M_Pl = (hc/G)^(1/2) ~ 10^-5 g                                                  (11.4)

The first 2 scales are completely strange and inaccessible for us; the third is familiar, being the mass of a dust grain, for example. However, the fundamental "particle" or fluctuation of the unified theory would be of ~L_Pl size and ~M_Pl mass. Therefore it would be of density ~10^93 g/cm^3, rather high. All other scales, if needed, can be combined from eqs. (11.2-4).
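
            The uniqueness argument above is literally a 3x3 linear solve. A minimal sketch (standard CGS constant values assumed) builds the dimension matrix of (G, c, h) over (cm, g, s), inverts it for a pure length, mass and time, and then evaluates eqs. (11.2-4):

```python
import numpy as np

G, c, hbar = 6.674e-8, 2.998e10, 1.054e-27
# rows: exponents of cm, g, s; columns: G, c, h
D = np.array([[ 3.0,  1.0,  2.0],
              [-1.0,  0.0,  1.0],
              [-2.0, -1.0, -1.0]])

targets = {"L_Pl (cm)": [1.0, 0.0, 0.0],
           "M_Pl (g)":  [0.0, 1.0, 0.0],
           "T_Pl (s)":  [0.0, 0.0, 1.0]}
for name, t in targets.items():
    A, B, C = np.linalg.solve(D, np.array(t))
    print(name, (A, B, C), G**A * c**B * hbar**C)
```

The exponents come out as (1/2, -3/2, 1/2) for the length, (-1/2, 1/2, 1/2) for the mass and (1/2, -5/2, 1/2) for the time, with values ~1.6*10^-33 cm, ~2.2*10^-5 g and ~5.4*10^-44 s.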

            This means that with our present techniques the genuine phenomena of the trially unified theory cannot be directly investigated. However, from any proposed unified theory some side effects can be calculated within the reach of our possibilities. An example comes from supergravity. It predicts fermionic partners of bosons, say the photino (the gravitino is doubtful, because Gravitation is not an interaction in General Relativity, and supergravity is a multidimensional generalisation of GR). Now, the photino is not seen, but its possible mass was guessed to be just above the reach of present accelerators.

            In a unified theory the expansion of the Universe would not start from a singularity (infinite density) but from a state of density ~M_Pl/L_Pl^3, and such a state may occur spontaneously, for a time T_Pl, as a quantum fluctuation. Such a result can be seen even from a very crude approximation, by using the Einstein equation (6.2) with Universe symmetry and writing the Hawking radiation, calculated from QFT on that geometry, on the right hand side [41]. Then one gets a solution incomplete in the past time direction, i.e. starting from something other than a singularity. The initial curvature radius is, in order of magnitude, L_Pl and the initial density is ~M_Pl/L_Pl^3 [41], but of course it cannot be anything else.

            As seen, a trially unified theory could answer some "final" questions. Unfortunately it is not ready.

 

12. DOUBTS AND CONCLUSIONS

            Unfortunately there are some doubts about the finality of the trially unified theory to be looked for. One such serious signal exists: the particle masses. As told above, the general scale for particle masses is M0 ~ 1 GeV. Let us see the generality of this statement.

            In pre-quark particle physics all non-massless elementary particles had masses from 0.105 GeV (µ±) upwards to several GeV, with the only exception of the e±, possessing 0.000511 GeV. There was some hope of explaining the ratio mp/me = 1836, and tricky mathematical formulae were invented for it.

            In the present Grand Unification the elementary particles are leptons, quarks, Higgs bosons and vector bosons. The masses of the last two groups are derived, effective masses, and for the first two groups the masses (again excepting the electron), if not zero, are of the order of GeV magnitude. Two further mass scales seem to appear in the effective masses: one of ~100 GeV in the electroweak sector, hopefully somehow derivable from the GeV scale, and one of 10^15 GeV for the X and Y bosons (see proton decay) [42], maybe derivable from M_Pl c^2 ~ 10^19 GeV.

            So if we are very lucky, then only the GeV scale needs explanation. But a very exceptional formula would be needed to derive 10^0 GeV from 10^19 GeV. It seems as if there were a fourth fundamental constant.

            But this fourth constant does not appear anywhere in Gravity, Relativity or Quantumness. It seems as if Quantum Field Theories were the (still incomplete) pairwise unification of the individual theories of Relativity and Quantumness, not containing any M0. Before the trial unification this idea was tenable, because some external parameters might get explanations from the third theory; but it seems as if the trial unification could not be successful at this point.

            There are various possibilities, and no one can choose among them at the present state of the art. It is possible that there is a fourth fundamental phenomenon, independent of Quantumness, behind the non-massless elementary particles; but what is it, and in what else is it manifested? For ninety years everybody has expected the quantization of particle masses from Quantum Theories. It is possible that somehow the elementary particles are "accidental": they dropped out somehow from the unity of the original Oneness in the early stage of expansion of the Universe and thinning of the matter; but then why are the particle masses so uniform in the whole observable Universe? No mass difference is seen even in far spectra, and at least small inhomogeneities would be expected in a decaying Oneness. Or, it is marginally possible that someone will finally be able to explain a number 10^-19 between the Planck and particle mass scales. It is not impossible by an exponential formula: e^-N ~ 10^-19 if N = 44, and why could a number in an exponent not be 44?

            It could; only such a dirty trick has never happened up to now in the history of physics. So we stop here with an example given by Eddington. As known from observations, the present average particle density of the Universe is ca. 1 particle per cubic meter, i.e. ~10^-6 cm^-3. The observable part of the Universe is some 10 billion lightyears, or ~10^28 cm, and it is still unknown whether the Universe is closed or not. It is still possible that it is the surface of a hypersphere of radius not too far above this size. If so, then the total particle number is ~10^78.

            Now, this is a dimensionless number and very large. In any case, the number of all the conserved particles in a closed (finite) Universe is a fundamental number; let us denote it by N0 and try to guess how to explain it.

            Eddington, as told, guessed that

              N_0 ~ 10^78 >> 1                                                                       (12.1)

and found a useful formula in a 16th century theology book which calculated the exact number of independent graces of the Holy Virgin (gratia plena). The author got an exponential formula

                          N_G = 2^Θ, where Θ ≡ 2^Ξ and Ξ ≡ 2^3                                                       (12.2)

(exponential formulae are natural in classifications counting independent possibilities). Now

              N_G = 1.158*10^77 ~ 0.1 N_0                                                    (12.3)

so if N0 has a fundamental explanation at all, it probably contains the construction (12.2). We cannot derive the formula (12.2) from today's physics. Now observe that

              e^2/(G M_p M_e) ~ 10^39 ~ N_0^(1/2)                                            (12.4)

Eddington interpreted the left hand side as the ratio of strengths of fundamental interactions and guessed that the total number of existing particles might govern the strengths of the interactions. This is obviously impossible in the present construction of physics, without instantaneous long-range connections; it might or might not come naturally in a Machian world description. And observe that

              M_Pl/M_0 ~ N_0^(1/4)                                                                     (12.5)
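
            These coincidences are easy to reproduce. In the sketch below (Python; the charge, the masses and M_0 = 1 GeV/c^2 are standard CGS values, not taken from the text) exact integer arithmetic makes (12.2) trivial:

```python
G = 6.674e-8                      # cm^3/(g s^2)
e2 = (4.803e-10)**2               # electron charge squared, esu^2
M_p, M_e = 1.673e-24, 9.109e-28   # proton and electron masses, g
M_Pl, M_0 = 2.18e-5, 1.783e-24    # Planck mass and 1 GeV/c^2, g

NG = 2**(2**(2**3))               # eq. (12.2), exact integer arithmetic
print(NG)                         # 1.1579...e77, cf. eq. (12.3)
print(e2 / (G * M_p * M_e))       # ~2.3e39, cf. eq. (12.4)
print(M_Pl / M_0)                 # ~1.2e19, cf. eq. (12.5)
```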

            But it is quite possible that N0 is 0 (or 1 or such). Namely, in Grand Unification the number of conserved charges is only 2: Z and B-L [42]. Now, for the electric charge Z we have local density data, and the local electric charge density is 0 (equal numbers of protons and electrons). As for the baryon and lepton charges, we can make an incomplete count. In our neighbourhood we have mainly protons, electrons, neutrons, e-neutrinos and e-antineutrinos. Then, for densities,

              (B-L)/V ~ n_proton - n_electron + n_neutron - (n_neutrino - n_antineutrino)             (12.6)

Now, the first two terms cancel, the third is some 15% of the first, and we do not know anything about the bracketed difference. So with a bit of luck both conserved charges may be 0, which is a natural initial condition.

            But then the present N ~ 10^78 particles, conserved in the SU(3)*SU(2)*U(1) Standard Theory, having emerged somewhere at the breakdown of the Grand Unified symmetries at about 10^28 K temperature and 10^-34 s from the Beginning, is a cosmologic accident. Then M_Pl/M0 ~ N^(1/4) can be another accident, or a consequence of the same accident, and the trial unification can be complete. But how to explain a very old accident?

 

ACKNOWLEDGEMENT

            Partly supported by OTKA T/01822.

 

REFERENCES

 [1]       I. Asimov: The Tragedy of the Moon. Doubleday & Co., New York, 1973

 [2]       I. Newton: Philosophiae Naturalis Principia Mathematica. London, 1687

 [3]       L. Eötvös, D. Pekár & E. Fekete: Annln. Phys. 68, 11 (1922)

 [4]       Sir G. H. Darwin: Phil. Trans. Roy. Soc. 172, 491 (1881)

 [5]       M. Lescarbault d'Orgères: Cosmos 16, 22 (1860)

 [6]       G. Galileo: Dialogo dei due massimi sistemi del mondo. Landini, Florence, 1632

 [7]       A. A. Michelson & E. W. Morley: Amer. J. Sci. 31, 377 (1886)

 [8]       A. Einstein: Annln. Phys. 17, 891 (1905)

 [9]       M. Planck: Lecture at the German Physical Society (Berlin), 14th Dec., 1900

[10]      W. Heisenberg: Z. Phys. 33, 879 (1925)

[11]      E. Schrödinger: Annln. Phys. 79, 361 (1926)

[12]      P. A. M. Dirac: Proc. Roy. Soc. London 109, 642 (1925)

[13]      J. v. Neumann: Gött. Nachr. 1, 1 (1927)

[14]      A. Einstein: Annln. Phys. 49, 898 (1916)

[15]      P. A. M. Dirac: Proc. Roy. Soc. Lond. 117, 610 (1928)

[16]      S. Tomonaga: Prog. Theor. Phys. 1, 27 (1946)

[17]      L. Diósi & B. Lukács: Annln. Phys. 44, 488 (1987)

[18]      F. Károlyházy: Nuovo Cim. 42, 390 (1966)

[19]      F. Károlyházy, A. Frenkel & B. Lukács: in Physics as Natural Philosophy, eds. A. Shimony & H. Feshbach, MIT Press, Cambridge, Mass. 1972, p. 204

[20]      F. Károlyházy, A. Frenkel & B. Lukács: Quantum Concepts in Space and Time, eds. R. Penrose & C. J Isham, Clarendon Press, Oxford, 1986, p. 109

[21]      P. J. Lavakare & E. C. G. Sudarshan: Nuovo Cim. Suppl. XXVI, 251 (1962)

[22]      L. Jánossy: Lecture given at his 60th birthday at CRIP, Budapest, May, 1971, unpublished

[23]      Ágnes Holba & B. Lukács: Acta Phys. Hung. 70, 121 (1991)

[24]      Ágnes Holba & B. Lukács: in Stochastic Evolution of Quantum States in Open Systems and in Measurement Pocesses, eds. L. Diósi & B. Lukács, World Scientific, Singapore, 1994, p. 69

[25]      L. Diósi: Europhys. Lett. 22, 1 (1993)

[26]      S. Hawking & G. F. R. Ellis: The Large Scale Structure of Space-Time. Cambridge University Press, Cambridge, 1973

[27]      M. Banai: J. Math. Phys. 28, 193 (1987)

[28]      G. W. Gibbons & S. Hawking: Phys. Rev. D15, 2738 (1977)

[29]      H. P. Robertson & T. W. Noonan: Relativity and Cosmology. Saunders, New York, 1969

[30]      L. Diósi, B. Lukács: Nuovo Cim. 108B, 1419 (1993)

[31]      L. Diósi, B. Lukács: Phys. Lett. A181, 366 (1993)

[32]      J. L. Rosales & J. L. Sanchez-Gomez: Phys. Lett. A199, 320 (1995)

[33]      L. Diósi, B. Lukács: Phys. Lett. A142, 331 (1989)

[34]      C. Lánczos: Found. Phys. 2, 271 (1972)

[35]      B. Lukács: in this Volume, p. 2

[36]      Th. Kaluza: Sitzungber. Preuss. Akad. Wiss. Phys. Math. Kl. LIV, 966 (1921)

[37]      B. Lukács & T. Pacher: Phys. Lett. 113A, 200 (1985)

[38]      J. Ellis: Proc. Neutrino '82 Balatonfüred, p. 304

[39]      J. Wess & B. Zumino: Nucl. Phys. B70, 39 (1974)

[40]      M. Green: Sci. Amer. 1986/9

[41]      L. Diósi & al.: Astroph. Space Sci. 122, 371 (1986)

[42]      P. Langacker: Phys. Rep. 72C, 185 (1981)

 
