Rotation of the Triangulum Galaxy M33
Baryonic mass ~ 10^10 solar masses ~ 2 x 10^40 kg.
Radius ~ 3 x 10^4 light years ~ 3 x 10^20 metres.
If w = 2 sqrt(pi G d), (Gödel), where d = density, then:
w = 2 sqrt(pi x 6.67 x 10^-11 x 2 x 10^40 x 3 / (4 x pi x 27 x 10^60))
w ~ 3.8 x 10^-16 radians/second.
Compare with Gomel and Zimmerman
Where w = 4.58 x 10^-16 radians/second.
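A short Python sketch reproduces this rough calculation from the round figures above (mass, radius and G as stated in the text):

```python
import math

G = 6.67e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 2e40       # baryonic mass of M33, kg (figure from the text)
r = 3e20       # radius of M33, m (figure from the text)

# Mean density of the galaxy treated as a uniform sphere.
density = M / ((4.0 / 3.0) * math.pi * r**3)

# Gödel's intrinsic angular velocity: w = 2 sqrt(pi G d).
w = 2 * math.sqrt(math.pi * G * density)

print(f"w = {w:.2e} rad/s")   # ~3.8e-16 rad/s
```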
Gödel (1949) proposed that any dust-like cloud of gravitationally bound matter would have an intrinsic angular velocity w.
Gomel and Zimmerman (2019) propose that a galaxy will have an intrinsic inertial frame of reference which gives rise to an angular velocity w.
The above very rough calculation shows that the angular velocity proposed by Gödel may equal the component of the galactic rotation curve identified as w by Gomel and Zimmerman, in which case the hypothesis of ‘Dark Matter’ to explain galactic rotation curves becomes completely unnecessary.
This calculation depends on assessing the overall density of the galactic disc and the spherical gas halo together.
Peter J Carroll 29/2/2020
The above may supply the answer to the conundrum of Galactic Rotation Curves not matching the Newtonian-Relativistic predictions, but the following now looks like a better proposition: https://www.specularium.org/component/k2/item/290-rotation-of-the-triangulum-m33-galaxy
Lensing, Redshift and Distance for Type 1A Supernovae.
All data input from Perlmutter et al., https://arxiv.org/pdf/astro-ph/9812133.pdf
The measured Apparent Magnitudes of type 1A Supernovae become converted into fluxes measured in Janskys, and these fluxes become converted into Apparent Distances on the basis of the inverse square law, taking the Absolute Magnitude of a type 1A supernova as -19.3 at 10 parsecs as the basis of calculation.
The Lensing Equation then gives the Actual Distance for each supernova.
The Redshift-Distance Equation then gives the Antipode distance of the Universe, which comes out at ~13 billion light years in every case, with only minor divergences.
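The magnitude-to-distance step follows the standard distance modulus, which encodes the inverse square law. A minimal sketch (the apparent magnitude 24.0 here serves only as an illustration, not a value from the Perlmutter data set, and the intermediate Jansky conversion gets skipped):

```python
ABS_MAG = -19.3   # absolute magnitude of a type 1A supernova at 10 parsecs

def apparent_distance_parsecs(apparent_mag: float) -> float:
    """Apparent distance from the inverse square law via the
    distance modulus: m - M = 5 log10(d / 10 pc)."""
    return 10.0 * 10.0 ** ((apparent_mag - ABS_MAG) / 5.0)

# Illustrative apparent magnitude, not taken from the data set:
d = apparent_distance_parsecs(24.0)
print(f"apparent distance ~ {d:.2e} parsecs")
```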
The physical principles underlying the Lensing Equation and the Redshift-Distance Equation lie here https://www.specularium.org/hypersphere-cosmology in sections 8 and 5 respectively.
This analysis suggests that the small positive spacetime curvature of a Hyperspherical Universe can account for the apparent discrepancies between the observed redshifts and observed apparent magnitudes of type 1A supernovae, and that the hypotheses of an expanding universe with an accelerating expansion driven by a mysterious ‘dark energy’ become unnecessary.
As a consequence of the hypothesis that the universe exists as a hypersphere, the main parameters of the universe all come out at the same multiple of the corresponding Planck quantities:
L = U lp, T = U tp, M = U mp
Where U stands for a very large dimensionless number, ~10^60, now called the Ubiquity Constant.
Now the Bekenstein-Hawking conjecture about black hole entropy initiated speculation about the idea of a ‘holographic universe’: if we equate entropy with information, then it suggests that the amount of information in the universe corresponds to its surface area, and perhaps the information somehow lies on its surface with the apparent 3d universe consisting of a projection from that. Personally I don’t buy into that hypothesis, but I do think the basic conjecture tells us something about reality, because the universe does not appear to contain enough information to allow for quantisation right down to the Planck scales.
Various experiments are in progress to see if the universe actually exhibits spacetime quantisation, see http://en.wikipedia.org/wiki/Holometer for links.
Now the Bekenstein-Hawking conjecture states:
I = A / 4lp^2
Where I = information in bits if we equate information with entropy (eliminating Boltzmann’s constant in the process). A = the surface area of the universe, and here we can use either the ‘causal horizon’ from the big bang or the spatial horizon from the hypersphere cosmology hypothesis. 4lp^2 means four times the Planck length squared.
So a calculation can proceed as follows: -
Calculating the surface area of the universe A, and then dividing by 4lp^2:
A = 4πL^2
Area/4lp^2 = I, where lp^2 = ħG/c^3 and ħ means Planck’s constant reduced.
This comes out at 2.0559 x 10^120, the number of bits for the universe.
Calculating the number of Planck volumes in the universe:
Volume of universe / Planck volume lp^3 = 3.314 x 10^182
Calculating the number of Planck volumes in the universe per bit in the universe:
3.314 x 10^182 / 2.0559 x 10^120 = 1.6 x 10^62
Taking the cube root of this for the number of Planck lengths per bit = 5.4 x 10^20
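The last two steps of the arithmetic can be checked directly, taking the bit count and Planck-volume count as given in the text (recomputing them from scratch depends on the exact horizon radius chosen):

```python
bits = 2.0559e120            # information content from A/4lp^2 (text's figure)
planck_volumes = 3.314e182   # Planck volumes in the universe (text's figure)

# Planck volumes per bit of information.
ratio = planck_volumes / bits

# Cube root converts the volume ratio to a linear scale in Planck lengths.
pixel = ratio ** (1.0 / 3.0)

print(f"{ratio:.2e} Planck volumes per bit")        # ~1.6e62
print(f"pixel scale ~ {pixel:.1e} Planck lengths")  # ~5.4e20
```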
Thus the model predicts that:
*The universe will exhibit an effective ‘pixilation’ or ‘grain size’ or quantisation of spacetime at 5.4 x 10^20 times the Planck length and Planck time.
Note that a derivation using the same Bekenstein-Hawking formula for a 13.8 bnlyr universe will give a different prediction, because this model uses the hypersphere volume rather than the conventional spherical volume.
*Note also that we have yet to observe anything ‘real’ below these new limits.
This may also have interesting implications for the Heisenberg uncertainty relationships.
The Gödel 3d rotating universe has this solution derived from General Relativity: -
“Matter everywhere rotates relative to the compass of inertia with an angular velocity equal to twice the square root of pi times the gravitational constant times the density.” – Gödel.
So, squaring W = 2 sqrt(πGd) and substituting mass over volume for d yields:
W^2 = 4πGM/V
Substituting 4/3 πr^3 for the volume of a sphere yields:
W^2 = 3GM/r^3
Substituting the formula for a photon sphere (where orbital velocity equals lightspeed), GM/r = c^2, yields:
W^2 = 3c^2/r^2
Taking the square root:
(1*) W = sqrt(3) c/r
Substituting L for r as we wish to find the solution for a hypersphere:
(2*) W = sqrt(3) c/L
Substituting GM/L = c^2, the formula for a hypersphere, yields:
W = sqrt(3GM/L^3)
Which we can also express as:
W = sqrt(3GM)/(L sqrt(L)) or as W = sqrt(3) c/L
Substituting W = 2πf in (1*) and (2*), to turn radians into frequency, we obtain:
f = sqrt(3) c/2πr for a sphere and f = sqrt(3) c/2πL for a hypersphere.
Centripetal acceleration a, where a = W^2 r for a sphere and a = W^2 L for a hypersphere, yields:
a = 3c^2/r for a sphere and a = 3c^2/L for a hypersphere.
Now for a hypersphere a centripetal acceleration a, of a = 3GM/L^2 or more simply a = 3c^2/L, will exist, to exactly balance the centrifugal effects of rotation.
In a hypersphere of about L = 13.8 billion light years, the rotation will correspond to about 0.0056 arcseconds per century. As this consists of a 4-rotation of the 3d ‘surface’, visualisation or measurement of its effects may prove problematical. The rotation should eventually cause every point within the 3d ‘surface’ to exchange its position with its antipode point over a 13.8 billion year period, turning the universe into a mirror image of itself.
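The quoted rate can be checked on the assumption that a half-rotation (every point reaching its antipode) completes in 13.8 billion years; the result comes out in the same few-thousandths-of-an-arcsecond range as the figure above:

```python
import math

YEARS_TO_ANTIPODE = 13.8e9     # years for a point to reach its antipode
SECONDS_PER_YEAR = 3.156e7
ARCSEC_PER_RADIAN = 206265.0

# Half a turn (pi radians) spread over the antipode-exchange period.
w = math.pi / (YEARS_TO_ANTIPODE * SECONDS_PER_YEAR)   # rad/s

per_century = w * 100 * SECONDS_PER_YEAR * ARCSEC_PER_RADIAN
print(f"~{per_century:.4f} arcseconds per century")
```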
Note that if the universe does have the same number of dimensions of time as of space then it will rotate back and forth between matter and anti-matter as well.
A 4-rotation does not submit to easy visualisation but consider this: -
A needle rotated in front of an observer and viewed in the plane of rotation will appear to shrink to a small point and then become restored to its original length with further rotation.
Likewise a sheet of card will appear to shrink to a line and then expand back into a plane if rotated in front of an observer.
In both cases the apparent shrinkage and expansion depends on the observer not seeing the needle or card in their full dimensionality.
A cube (or a sphere) rotated through a fourth dimension would appear to shrink to a point and then expand back to its original size to an observer viewing it in only three dimensions and ignoring the curvature.
The alternative mechanism for redshift works as follows:
Redshift = Z = λo/λe - 1
Where λo = observed wavelength, λe = expected wavelength, and the -1 simply starts the scale at zero rather than 1.
Now wavelength, λ, times frequency, f, still always equals lightspeed, c: λf = c.
For light crossing the hypersphere: λf = c - dA/c
Where d = astronomical distance, A = Anderson acceleration. The Anderson acceleration (the small positive curvature of the hypersphere of the universe) works against the passage of light over the astronomical distance, d. It cannot actually decrease lightspeed, but it acts on the frequency component.
So, substituting the diminished frequency f = (c - dA/c)/λe:
Therefore Redshift, Z = c/(c - dA/c) - 1, which with A = c^2/L becomes Z = d/(L - d).
Thus Z becomes just a function of d, astronomical distance, not recession velocity.
To put it in simple terms, redshift does not arise from a huge and inexplicable expansion of the underlying spacetime increasing the wavelength; it arises from the resistance of the small positive gravitational curvature of hyperspherical spacetime to the passage of light decreasing its frequency.
Thus the ‘Hubble Constant’ has a value of precisely zero kilometres per second per megaparsec, but the ‘Hubble Time’ does give a reasonably accurate indication of the temporal horizon, and hence the spatial horizon/antipode distance of the hypersphere of the universe.
Redshift and Distance.
We can rearrange the following equation, derived from considerations of the effect of a positive spacetime curvature on the frequency of light from distant galaxies: -
Z = c/(c - dA/c) - 1
To give an expression for cosmological distance in terms of redshift: -
d = Zc^2/(A(Z + 1))
This simplifies, with A = c^2/L, to: -
d = ZL/(Z + 1)
Graphically we can represent this as: -
The vertical scale represents Redshift.
The horizontal scale represents observer to antipode distance L, in both space and time in the Hypersphere Cosmology model. In the conventional big bang model it would represent only the time since the emission of the light.
Note that when Z = 0, d = 0. When Z = 1, d = L/2. When Z = 10, d ~ 0.91 L.
And when Z → ∞, d = L
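The relation d = ZL/(Z + 1) reproduces all of these checkpoint values directly:

```python
def distance_from_redshift(z: float, antipode: float = 1.0) -> float:
    """Hypersphere Cosmology redshift-distance relation: d = ZL/(Z + 1),
    with distance expressed as a fraction of the antipode length L."""
    return z * antipode / (z + 1.0)

print(distance_from_redshift(0))    # 0.0
print(distance_from_redshift(1))    # 0.5, half way to the antipode
print(distance_from_redshift(10))   # ~0.909 of the way to the antipode
```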
A consideration of Frequency Shift gives a clearer and more intuitive picture of the effect of the positive spacetime curvature on light from cosmological sources: -
Hypersphere Cosmology. (5) P J Carroll 20/12/20
Abstract. Hypersphere Cosmology presents an alternative to the expanding universe of the standard LCDM-Big Bang model. In Hypersphere Cosmology, a universe finite and unbounded in both space and time has a small positive spacetime curvature and a form of rotation. This positive spacetime curvature appears as an acceleration A that accounts for the cosmological redshift of distant galaxies and a stereographic projection of radiant flux from distant sources that makes them appear dimmer and hence more distant.
By calculating Apparent Distances for type 1A Supernovae based on an inverse square diminution of flux with distance, then using their redshifts in hyperspherical lensing to obtain their Actual Distances, and then using their redshifts in the redshift-distance equation to calculate the antipode length of the universe, a value of L close to 1.23 x 10^26 metres appears in all cases.
Perlmutter et al., https://arxiv.org/pdf/astro-ph/9812133.pdf
See the calculations here https://www.specularium.org/hypersphere-cosmology/item/270-type-1a-supernovae-and-hypersphere-cosmology
See an example calculation here: https://www.specularium.org/component/k2/item/290-rotation-of-the-triangulum-m33-galaxy
Commentary. The Hypersphere Cosmology model seeks to replace the standard LCDM Big Bang cosmological model with something approaching its exact opposite.
In HC the universe does not expand, it remains as a finite but unbounded structure in both space and time with spatial and temporal horizons of about 13bn light years and 13bn years. The HC universe does not collapse because its major gravitationally bound structures all rotate back and forth to their antipode positions over a 26bnyr period about randomly aligned axes, giving the universe no overall angular momentum and no observable axis of rotation.
The small positive spacetime curvature of the Glome type Hypersphere of the universe has many effects: it redshifts light traveling across it, it lenses distant objects making them look further away, it prevents smaller hyperspheres or singularities forming within black holes, and it gradually causes black holes to eject mass and energy.
In HC the CMBR simply represents the temperature of the universe, and it consists of redshifted radiation that has reached thermodynamic equilibrium with the thin intergalactic medium***
The universe will appear to observers as having the flux from distant sources in stereographic projection due to the geometry of the small positive spacetime curvature.
In terms of the enormously dimmed flux from very distant sources the antipode will appear to lie an infinite distance away, and beyond direct observation, even though the antipode of any point in the universe lies about 13bnlyr distant.
The antipode thus in a sense plays the anti-role of the Big Bang Singularity in LCDM cosmology. We can never observe either, but instead of an apparently infinitely dense and infinitely hot singularity a finite distance away in a universe undergoing an accelerating expansion in space and time, Hypersphere Cosmology posits an Antipode that will appear infinitely distant in space and time and infinitely diffuse and cold, even though actual conditions at the antipode of any point will appear broadly similar on the large scale for any observer anywhere in space and time within the hypersphere.
HC and LCDM-BB can both model many of the important cosmological observations, but in radically different ways. HC has more economical concepts: a small positive spacetime curvature alone can account for redshift without expansion and for the dimming of distant sources of light without an accelerating expansion driven by dark energy, and it also offers a singularity free universe.
Neither model really explains where the universe ‘came from’, but we have no reason to regard non-existence as somehow more fundamental than existence.
The evidence for one-way cosmological evolution remains mixed. The entropy of a vast Glome Hypersphere may remain constant as a function of its hypersurface area. On the very large scale the universe needs only the ability to break neutrons to maintain constant entropy. Very distant parts of the universe appear to contain structures far too large to have evolved in the BB timescale.
Hypersphere addenda. Notes and Further Speculations
2) Gödel derived an exact solution of General Relativity in which ‘Matter everywhere rotates relative to the compass of inertia with an angular velocity of twice the square root of pi times the gravitational constant times the density’. This solution became largely ignored because of the apparent lack of observational evidence for an axis of rotation. However, in a hypersphere the galaxies can rotate back and forth to their antipode positions about randomly aligned axes (most probably around the circles of a Hopf Fibration of the hypersphere), thus resulting in a universe with no net overall angular momentum. Such a ‘Vorticitation’ would stabilise a hypersphere against implosion under its own gravity and result in the universe rotating at a mere fraction of an arcsecond per century – well below levels that we can currently observe.
3) Mach’s Principle can only work in a universe of constant size and density. Strong evidence exists to show that the gravitational constant and inertial masses have remained constant for billions of years.
***The Cosmic Microwave Background Radiation (CMB/CMBR) may consist of redshifted trans-antipodal light that has reached thermodynamic equilibrium with the thin intergalactic medium, but Hyperspherical Lensing raises another possibility: -
Widely spatially separated observers within the universe may see a quite different CMBR or perhaps none at all, because the CMBR originates from near their antipodes.
Consider that a galaxy like our own lies near to our antipode point. Such a galaxy will have a spherical Hot Gas Halo extending for several hundred thousand light years around it, accounting for about half of its mass, and at a temperature of about 1 million Kelvin.
The angular size of such a distant galaxy in Euclidean space would come out at a paltry 10^21/10^26 radians, about 1/100,000th of a radian.
However, lensing at a redshift of 300,000 would give it an apparent angular size of about 6 radians (it would thus fill the whole sky) and reduce its temperature to around 3 Kelvin.
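The temperature figure follows from the standard scaling of blackbody temperature with redshift, T_observed = T_emitted/(1 + Z):

```python
T_halo = 1e6    # Hot Gas Halo temperature in Kelvin (figure from the text)
Z = 300_000     # redshift of light arriving from near the antipode

# Blackbody radiation redshifted by Z cools by the same factor.
T_observed = T_halo / (1 + Z)
print(f"observed temperature ~ {T_observed:.2f} K")   # ~3.33 K
```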
The lower temperature light from the starry part of the distant galaxy would become redshifted far beyond observability, but all the light from the Hot Gas Halo would end up here, coming in from every direction, providing a microwave background radiation that remains location dependent rather than cosmically uniform.
Created by Peter J Carroll. This upgrade 20/12/20.
In this paper 'Chaos' denotes randomness and indeterminacy, not the mere so-called 'deterministic chaos' of systems with extreme sensitivity to initial conditions.
For many decades it has seemed from theoretical considerations that true Chaos or indeterminacy applies only at the Planck length scale of ~10^-35 metres or the Planck time scale of ~10^-44 seconds. This hardly accords with either life as we experience it or what we know about the behaviour of fundamental particles or our own minds.
Evidence mounts for a substantial upward revision of the level at which Chaos actually shapes the behaviour of the universe and everything within it, including ourselves.
This paper considers the emerging holographic universe principle in terms of vorticitating hypersphere cosmology, (VHC).
The Bekenstein-Hawking conjecture asserts a proportionality between the surface area of a gravitationally closed structure like a black hole or a hypersphere, and its internal entropy.
The Claude Shannon information theory relates information to entropy.
Combined, these theories lead to the idea of a Holographic Universe in which each Planck area of the surface of the universe carries one bit of information. These bits of information serve to define what happens in all the Planck volumes within the universe.
This results in far less than one bit of information for each Planck volume within the universe. A holographic universe thus contains a substantial information deficit, and thus it behaves with more degrees of freedom, or more chaotically than simple quantum theories suggest.
In VHC the calculation of the ratio becomes quite simple. L, the antipode length of the universe has the value lp times U, where lp equals the Planck length, ~ 10^-35m, and U represents the huge number ~ 10^60. (U also relates the time and mass of the universe to the corresponding Planck quantities).
Thus if Vu represents the volume of the universe and Vp represents Planck volume then
(Vu / Vp) / (Au / Ap) = U
where Au equals the surface area of the universe and Ap equals the Planck area.
Thus the ratio of bits to Planck volumes comes out at only 1 bit per 10^60 Planck volumes.
This may seem a counter-intuitively small figure. However, if we take the cube root of this volume figure to reduce it to a length measurement then it comes out at 10^20 Planck lengths. As the Planck length equals ~10^-35 m, this results in a figure of ~10^-15 metres, merely down at the fundamental particle level.
The information content of the universe cannot then specify its conditions down to the Planck scale, but only down to the scale of ~10^-15 metres, or the corresponding time of just ~10^-24 seconds.
Now we do not actually observe any real quantities below these levels anyway. Although fundamental particles theoretically exist as dimensionless points they have effective sizes down at the 10^-15m size when they interact, and similarly nothing ever seems to happen in less than 10^-24 seconds. These lower limits seem to represent the actual pixelation level of the universe, its 'grain size', and any real event either occupies a whole pixel or it doesn't happen. This minimum level of divisibility represents the point where causality ceases to apply and processes become indeterminate and Chaotic. Thus it's a whole lot more chaotic than Heisenberg thought.
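The scale arithmetic above can be checked in a few lines:

```python
PLANCK_LENGTH = 1.6e-35   # metres
PLANCK_TIME = 5.4e-44     # seconds
U_CUBE_ROOT = 1e20        # cube root of the Ubiquity Constant U ~ 10^60

# Effective pixelation of the universe: Planck scales enlarged by U^(1/3).
grain_length = PLANCK_LENGTH * U_CUBE_ROOT   # ~1.6e-15 m, fundamental particle scale
grain_time = PLANCK_TIME * U_CUBE_ROOT       # ~5.4e-24 s

print(f"grain size ~ {grain_length:.1e} m, ~ {grain_time:.1e} s")
```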
This suggests that we should add the factor of the cube root of U to all of the Heisenberg uncertainty (indeterminacy) relationships, as in the following examples, where D stands for Delta, the uncertainty, and h stands for slash-aitch, Planck's constant over 2 pi.
Position and Momentum: Dx Dp ~ h becomes Dx Dp ~ h (U)^1/3
Energy and time: DE Dt ~ h becomes DE Dt ~ h (U)^1/3
And also the esoteric Entropy times temperature (SK) and time t uncertainty, which then becomes:
DSK Dt ~ h (U)^1/3
This gives considerably more scope for the local spontaneous reversal of entropy that may allow the emergence of more complex forms from simpler ones.
The factoring in of cube root U seems to bring the indeterminacy of the world up towards the scale at which we actually experience it, and it has two other consequences: -
Firstly it answers all those sarcastic physicists who opine that quantum based magical and mystical ideas remain invalid because quantum effects occur at a level far below the scale of brain cells for example:
Typical nerve conduction energy equates to~10^-15 joules and a typical nerve transmission time between neurones takes ~10^-3 seconds.
Applying this to DE Dt ~h (U)^1/3 suggests that Ur-Chaos enhanced quantum effects will certainly come into play at a brain cell level corresponding to actual thinking, the activation of about 1,000 neurones per second.
Thus Sir Roger Penrose's attempt (in The Emperor's New Mind) to hypothesize such effects merely at the intracellular microtubule level perhaps seems rather too conservative now.
Secondly the value of U depends on the size of the universe. Bigger universes have a lower ratio of surface area to volume than smaller ones and hence behave more chaotically, so the value of U will affect the physical laws of the universe. A bigger universe will for example have fundamental particles with larger effective sizes, thus changing chemistry and radiation. When we examine 'fossil' light from distant galaxies or even the fossil or geological record of our own fairly old planet it appears that the laws of physics have remained essentially unchanged.
Thus if we inhabit a universe subject to the holographic principle it cannot have expanded.
So welcome to the wonderful world of Ur-Chaos, where everything has twenty more orders of magnitude of Chaos than previous calculations suggested, but we suspected something like this all along.
The title Ur-Chaos Theory derives from two mnemonics, as it suggests both U-root(3) and Uber or 'greater' Chaos.
An afterthought. In a lyrical sort of way, one might note that the bigger one's universe, i.e. the wider one's horizons, the more chaotic and interesting one's life becomes.
I have repeatedly asserted in many of my magical and scientific books and papers that this universe runs on probability rather than deterministic certainty, yet I have never formally demonstrated this. I assumed, perhaps unfairly, that everyone would have a familiarity with the arguments that lead to this assertion. However, I have received so many questions about it from those who seem to cling to various deterministic superstitions, from simple Newtonianism to theories of Predestination to old ideas about Cause and Effect, that I feel I must present a simple killer example to the contrary.
Herewith, perhaps the simplest proof that we inhabit a probabilistic rather than a deterministic universe. Philosophers have torn their hair out over this one for the last 50 years. It underlies the Schrödinger’s Cat Paradox, which to me doesn’t constitute a paradox at all, merely a realization that cause and effect operates only because the universe acts randomly.
Consider the so-called ‘Half-Life’ of radioactive isotopes. Every radioactive substance, whether made by natural processes, (e.g. Uranium) or by humans playing around with reactors, (e.g. Plutonium), has a half-life. This means that after a certain amount of time, half of the atoms in a sample will have gone off like miniature bombs and spat out some radiation and decayed into some other element. After the same amount of time has elapsed again, a further half of the remainder will have done the same leaving only a quarter of the original atoms, and so on. Highly unstable atoms may have a half life measured in seconds; somewhat more stable ones may have half lives measured in tens of thousands of years.
Now the half-life of a lump of radioactive material remains perfectly predictable so long as that lump consists of millions or billions of individual atoms. However that predictability depends entirely upon the behavior of individual atoms remaining entirely probabilistic and random. The half-life effect means that during the half-life period, half of the atoms will explode. So if one takes the case of a single atom, one can only say that it has an evens chance of exploding during its half-life period. So the half-life time defines the period in which it has an evens chance of exploding.
Suppose you threw two million coin tosses. If you got anything but close to a million heads and a million tails you would naturally suspect something non-random about the tossing procedure or the coin itself. Thus the nice smooth predictable exponential decrease in the number of unexploded atoms in a radioactive isotope, a half, a quarter, and eighth, and so on, with each passing of the half life period, overwhelmingly suggests that the individual atoms behave randomly. Imagine that after tossing the two million coins you discarded all the tails and tossed the heads again, and then repeated the process on and on, you would expect to halve the number of coins each time so long as the number of coins remained fairly large. Random behavior means that the outcome of an event has no connection to its past, the coin may have come down heads in the previous toss but that gives no clue as to what it will do subsequently, many gamblers willfully ignore this fact.
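The coin-toss analogy can be simulated directly: each surviving atom gets an independent evens chance of decaying in each half-life period, and the random individual behaviour yields an almost perfectly predictable halving en masse:

```python
import random

random.seed(42)   # fixed seed so the run repeats exactly

atoms = 2_000_000
history = []
# Each half-life period, every surviving atom independently has an
# evens chance of decaying, like a tossed coin coming down tails.
for _ in range(4):
    atoms = sum(1 for _ in range(atoms) if random.random() < 0.5)
    history.append(atoms)

print(history)   # close to [1000000, 500000, 250000, 125000]
```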
So here we have an odd insight: random behavior in detail leads to perfectly predictable behavior en masse. Indeed it seems difficult to see how anything but random behavior could lead to such predictability.
So what, you may ask, okay, so individual atoms may behave randomly, but surely doesn’t human scale reality behave itself according to cause and effect?
Well sometimes it does reflect the apparently causal predictability of randomness in bulk, but often it does not. The Schrödinger’s Cat thought experiment provides a seminal example. In principle one can easily rig up a device to measure whether or not a single atom has exploded within its half life period, and then use that measurement to trigger a larger scale event like shooting a captive cat concealed in a box. The poor cat has only an evens chance of surviving the experiment and nobody can tell what happened until they look in the box afterwards. This thought experiment demonstrates a fundamental randomness and unpredictability about the universe. Schrödinger thought it up to demonstrate that the cause and effect based thinking to which science had become dedicated, and which also forms a central plank of our ordinary thinking and language structure, does not always accurately describe how the universe works.
It would seem that a lot of what goes on in the universe, particularly at the human scale, remains subject ultimately to the random behavior of individual atomic particles. The long term behavior of the weather, the fall of a dice that bounces more than about seven times, human decisions, they all seem to depend on atomic randomness to some extent.
And if the universe permits even a single random event, then its entire future history becomes unpredictable!
Not many philosophers have managed to get their heads around this insight yet, although it was first realized decades ago. However I did see recently an apologist for one of the monotheist religions claim that his god must therefore do his business by tweaking probability in favor of what he or his devotees want. That doesn’t look like such a bad idea. A tweak here, a tweak there, and after a few hundreds of millions of years he can evolve an image of himself from the primeval slime for company.
Of course chaos magicians claim something similar, a spell here, a spell there, and after a while modified probabilities should deliver the goods. However the Chaoists do have some experimental data in their favor. Quite a number of parapsychological experiments indicate that events at the atomic scale remain surprisingly sensitive to psychic meddling.
Perhaps you do not even need sentience to tweak probability. The concept of emergence suggests that whenever nature accidentally throws up something more complex or sophisticated or interesting then some sort of feedback may occur which makes its subsequent appearance more likely. The very laws of the universe, and the conventions of chemistry and biology, and perhaps even those of thought, may owe something to this mechanism.
At the atomic or quantum level, experiments have shown that matter and energy behave in a way that seems very strange compared to the way they seem to behave on the macroscopic or human sized scale.
At the quantum level even basic concepts such as causality or the idea of a particle having a single definite location and momentum seem to break down. The very idea of thing-ness itself seems inapplicable to quantum particles. Particles simply do not behave like tiny little balls, and few verbal analogies or visualise-able images provide much of a guide as to what they actually do.
Now the whole of the observable universe consists of systems made up of quantum particles, yet their almost unimaginably strange individual behaviours add up to give the world we see around ourselves which seems to run on entirely different principles.
Nevertheless it has proved possible to devise mathematical models of what quantum particles do. However these mathematical models involve the use of imaginary and complex numbers which do not work in quite the same way as simple arithmetic and they give answers in terms of accurate probabilities rather than definite yes or no outcomes.
All this has led to endless debate about the reality of what actually goes on in the quantum realm.
Some theorists have taken the position that the mathematics which model quantum physics represents nothing real, it just gives good results because we fudged it to fit the observations, and that the underlying physical reality remains impossible to comprehend in any other way at the moment.
Others have taken the position that quantum physics remains incomplete and that further discoveries will allow us to make more sense of it.
This paper will attempt to show that the hypothesis of 3 dimensional time can allow a novel reinterpretation of the observed phenomena of quantum physics which allows us to form some idea of its underlying reality.
At the quantum level the basic constituents of matter and energy behave with some of the characteristics of both waves and particles. To a simple approximation they seem to move around rather like waves spread out over space, but they seem to interact and to register on detectors as localised particles. Heisenberg’s uncertainty principle models this behaviour mathematically: the more a particle reveals its momentum, the less certain does its position become, and vice versa. This situation does not represent merely the technical impossibility of measuring both of these quantities simultaneously. It seems to represent the physical reality that a particle’s momentum becomes progressively more objectively indeterminate as its position becomes more objectively determinate, and vice versa, at least in the 4-dimensional spacetime in which we observe its behaviour. If particles did not have this indeterminate aspect to their behaviour they could not act as they do and we would have a completely different reality.
Such wave/particle behaviour appears in a simple, definitive, and notorious experiment known as the Double Slit Experiment. This experiment has numerous variations and elaborations, and it works just as well with energy particles like light photons or matter particles such as electrons. Apparently it will even work with quite large clusters of atoms such as entire C60 buckyballs. Feynman identified it as encapsulating the entire mystery of quantum behaviour.
In the double slit experiment a single particle apparently passes somehow through both of two slits in a screen simultaneously and then recombines with itself to form a single particle impact on a target screen or detector. If the experimenter closes one of the slits, the particle can still hit the target but it will hit it in a different place. If both slits remain open the particle’s position on the target indicates that the presence of two open slits has somehow contributed to the final position of the particle. Now common sense dictates that a particle cannot go through two separate slits simultaneously, although a wave can do this. So how does something that arrives at its target as a particle apparently switch to wave mode during flight to get part of itself through both slits at once and then switch back to particle mode for landing? Big objects like aircraft never seem to do this sort of thing, even though they consist of particles which can.
The mathematical model of a particle in flight describes it as in a state of superposition, the notorious quantum condition in which a particle can apparently exist in more than one state simultaneously. The so-called wave function of a particle does not constrain it to choose between apparently mutually contradictory qualities, like having two different positions, or two different spins in opposite directions. The choice seems only to occur when the particle gets measured or hits something, and it then manifests just one of its possible alternatives. The choice it makes when it takes its quantum jump, however, seems completely random.
This has led theorists into endless arguments and debates about what a quantum particle really ‘is’. Such Quantum Ontology seems very questionable; we cannot really ask questions about being because we do not actually observe any kind of being anywhere. Basic kinetic theory shows us this: nothing just sits there and exhibits being. Everything actually has a lot of internal atomic motion and exchanges heat and radiation with its environment. To observe something just being we would have to stop it doing anything at all by freezing it to absolute zero, but at that point it would simply permanently cease to exist.
Some theorists have argued that the wave function which describes particles as existing mainly in superposed states cannot really model reality because we always observe things in singular rather than superposed states. However this seems debatable: whilst photons and electrons, for example, do have singular characteristics when we catch them in detectors or at the point in time when they interact with other particles, most of the properties of materials that we observe actually arise quite naturally from a superposition of states. For example the strength of metallic crystals, the strength of bonds between the atoms in molecular gases, the behaviour of molecules like benzene and indeed the behaviour of atoms in general, all find explanation in terms of particles occupying superposed states. Thus it seems more reasonable to suppose that for most of the time, matter and energy do superposition, and that when particles undergo measurement or other forms of interaction, they do monoposition. In the HD8 model, monoposition corresponds to the manifestation of a particle in one dimensional time, whilst superposition corresponds to what it does in three dimensional time.
A lot of theorists have a philosophical objection to the way particles seem to make a completely random choice when reverting from superposition to monoposition. They do not like the way in which the wave function seems to collapse in a completely probabilistic way without a sufficient cause for the observed effect. Science has depended on the principle of material cause and effect for centuries, they argue, and we cannot abandon it just because we cannot find it in quantum behaviour.
Nevertheless the probabilistic collapse of the wave function does lead to the fairly predictable behaviour of matter and energy as seen on the human scale. Toss a single coin and either alternative may result, but toss a million of them and the deviation from half a million heads will rarely stray beyond a fraction of one percent. Probability can thus lead to near certainty, and most of the apparent cause and effect relationships that we observe in the human-scale world can in fact arise precisely because quantum superpositions collapse randomly. Indeed, assuming that superpositions mainly define the state of the universe for most of the time, turn the idea on its head and consider how bizarrely it might behave if those superpositions collapsed non-randomly. Ordinary lightbulbs might suddenly start emitting laser beams, and the radioisotopes in smoke detectors might sometimes go off like small nukes instead of sedately decaying.
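The coin-toss argument can be sketched numerically; this fragment (my illustration) shows the fractional deviation from half heads shrinking as the number of tosses grows:

```python
# Sketch: random individual outcomes yield near-certainty in aggregate.
# Toss n fair coins and report the percentage deviation from n/2 heads.
import random

random.seed(1)  # fixed seed so the run is repeatable
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    deviation = abs(heads - n / 2) / n * 100  # percent of all tosses
    print(f"{n:>9} tosses: {deviation:.3f}% deviation from half heads")
```

The deviation shrinks roughly as 1/√n, so at a million tosses it sits well below one percent, exactly as the text claims.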
The idea of quantum particles in a superposed wave function mode perhaps becomes easier to understand in the HD8 model where 3 dimensions of time complement the 3 dimensions of space. A superposition of two or more states in the same place can occur if the extra states lie in the plane of time orthogonal to what we perceive as ordinary time. In effect orthogonal time provides a sort of pseudospace for parallel universes. I am not implying here that I have doppelgangers for example, or perhaps millions of them, in full scale parallel universes, but merely that I could have a slight thickness in sideways time which allows the superpositions of my constituent quantum particles to manifest my normal electro-chemical properties that ordinary 4 dimensional classical mechanics cannot explain. (I have a suspicion that such superpositions may also have some relevance to mental and perhaps parapsychological phenomena as well, but let’s not open that can of worms for a while yet.)
The notorious Double Slit Experiment and its variants also reveal another aspect to quantum superposition: the phenomenon of quantum entanglement. When presented with two possible flightpaths to follow, some quantum phenomena appear to take both paths, but the parts which go their separate ways seem to remain in some sort of instantaneous communication with each other, no matter how great the spatial distance between them. This entanglement of widely separated parts appears to violate the Special Relativistic principle that no signal can travel faster than light. However quantum physicists usually point out that no proper information actually gets transmitted because of the random nature of the outcome.
Now, as with superpositions, we cannot observe quantum entanglements directly; we can only make observations from which we can infer that the observed result must have come from an entanglement. HD8 explains entanglement in terms of multiple histories. When an entanglement collapses due to an interaction or a measurement, some of the alternative histories collapse. Alternative histories exist as superpositions in sideways time.
Entanglements occur when a single superposed quantum state splits into two (or more) parts, each of which then travels away in a superposed state. The original quantum state could consist of a single particle which apparently splits into two as in the double slit experiment, or it could consist of a pair of particles in intimate contact which forces them into the same quantum state. Now when one of the parts of the entangled system falls out of superposition because someone measures it, or because it interacts and decoheres into its environment, then the other half has to fall out of superposition into the opposite mode. If one component has say, spin up, then the other will have spin down, or the corresponding other half of a number of other quantum properties.
Thus if the initial quantum state has the superposed qualities of AB and splits into two parts, both parts seem to carry the AB superposition. Yet if we intercept one of the parts and find that it appears to us as A, then we know with certainty that the other part will have to manifest as B. Experiment has repeatedly confirmed this over macroscopic distances, and we have no reason to suspect that it will not work over astronomical or cosmic distances. We know that we cannot explain this by simply assuming that the original superposed state splits into an A component and a B component because in experiments where we recombine the two parts they recombine in such a way as to show that each must have carried the AB superposition.
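The statistics described above can be caricatured in a few lines (a toy sketch of the correlations only, not of the underlying multiple-histories mechanism the author proposes):

```python
# Toy sketch: each entangled pair carries the superposed 'AB' state.
# Measuring one part gives a random outcome; the partner must then
# manifest the opposite one, however far away it has travelled.
import random

def measure_pair(rng):
    first = rng.choice("AB")               # random collapse of one part
    second = "B" if first == "A" else "A"  # partner forced opposite
    return first, second

rng = random.Random(42)
pairs = [measure_pair(rng) for _ in range(10_000)]

# Individual outcomes look random, yet the anti-correlation is perfect.
assert all(a != b for a, b in pairs)
frac_a = sum(a == "A" for a, _ in pairs) / len(pairs)
print(f"fraction of first measurements giving A: {frac_a:.3f}")
```

Note what the sketch cannot capture: a hidden-variable list like this fails the recombination tests mentioned above, which show each part really carried the full AB superposition.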
Ordinary cause and effect cannot explain what happens in entanglement. From the point of view of classical physics the phenomenon seems completely impossible. Whatever mechanism constrains each part of an entanglement to jump into the opposite mode that the other part jumps to, must either act instantaneously across arbitrarily large distances, in violation of relativity, or it must act retroactively across time.
A non-local effect across space seems the least likely alternative. It would require that something from one half of the entangled pair somehow found its way to the other half which could have travelled anywhere within billions of cubic miles of space. Considering that the universe contains unimaginably vast numbers of virtually identical entangled pairs of quantum states, this seems fantastically improbable.
Temporal retroactivity on the other hand, requires no more than a certain two way traffic across time which can allow for the cancellation of some of the alternate histories.
Time does not appear to run both forwards and backwards on the macroscopic scale because energy dissipates in what we call entropy, and because gravity acts attractively only. Nevertheless, nothing seems to constrain quantum processes to progress in only one temporal direction, and some interpretations suggest that they actually proceed in both directions at once. In particular Cramer’s Transactional Interpretation of quantum physics models a photon exchange as comprising a sort of superposition of a photon travelling from emitter to receiver and an antiphoton travelling backwards in time from receiver to emitter. This perhaps explains an oddity of Maxwell’s equations of electromagnetism. These time symmetric equations also yield a set of solutions for so-called advanced waves travelling backwards in time, but physicists usually quietly ignore them because they appear materially indistinguishable from the so-called retarded waves which travel forwards in time.
The material indistinguishability of a quantum process proceeding forwards through time from the corresponding anti-process proceeding backwards through time has intriguing implications.
We can interpret it as an exchange between the past and the future which has bizarrely counterintuitive aspects. The event which will end up as the past can send multiple contradictory signals to the future. These signals eventually collapse at a moment of interaction or measurement to give a singular present. However those signals which do not manifest in the present cannot send a time reversed signal back to the past to complete the exchange, so that those particular signal paths cease to exist, effectively modifying the past. Thus when a superposition or an entanglement collapses it erases the multiple history of its own past. The whole concept of being or ‘is-ness’ falls apart here, and not just for particles flying around in specially contrived apparatus. As quantum systems generally behave as if they had just collapsed out of superposition or entanglement, and as quantum systems underlie the behaviour of all matter and energy, we must inhabit an almost unimaginably stranger world than our ordinary senses reveal.
Some theorists have spoken of the Omnium or the Multiverse underlying and supporting the mere surface reality that we directly experience. At the time of writing we have few concepts and little vocabulary to describe the signals exchanged across time and space out of which our perceived reality coalesces. Antiparticles in the conventional sense cannot act as the agents of information transfer into the past. Neither can particles as we understand them in 4D spacetime, carry superpositions and entanglements into the future. However we can form a partial visualisation of the processes involved by using reduced dimension graphs derived from a modified Minkowski formula (itself derived from Pythagoras), for the distance D, between points in 6 dimensions, 3 of space and 3 of time.
In simplified form the equation looks like this:
D = [s^2 - (ct)^2 + (ct1)^2 + (ct2)^2]^(1/2)
Where s = spatial distance,
t = time, (reference direction),
c = lightspeed,
and t1 and t2 represent the axes of the plane of imaginary time orthogonal to the temporal reference direction, which acts as a kind of pseudospace.
This equation yields two obvious null paths in addition to that conventionally reserved for ordinary light (s = 1, ct = 1, ct1&2 = 0):
1) s = 0, ct = ct1&2.
2) s ~ ct, where ct1&2 > 0.
The graphs of these equations represent Superposition and Entanglement respectively.
See Null Path Superposition & Entanglement, Figures 1 and 2.
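The null paths themselves can be checked directly from the modified interval; in this sketch I read "ct1&2" as the combined magnitude of the displacement in the orthogonal plane of time (my interpretation of the notation):

```python
# Sketch of the modified interval in the text:
#   D^2 = s^2 - (ct)^2 + (ct1)^2 + (ct2)^2
# Checking the null (D = 0) paths listed above.
import math

def interval_sq(s, ct, ct1=0.0, ct2=0.0):
    """Squared interval D^2 from the modified Minkowski formula."""
    return s**2 - ct**2 + ct1**2 + ct2**2

# Ordinary light: s = 1, ct = 1, no imaginary-time component.
assert interval_sq(1.0, 1.0) == 0.0

# Path 1 (superposition): s = 0, with the displacement in the orthogonal
# plane matching ct in magnitude (my reading of 'ct = ct1&2').
assert abs(interval_sq(0.0, 1.0, ct1=math.cos(0.3), ct2=math.sin(0.3))) < 1e-12

# Path 2 (entanglement): s ~ ct is only approximately null; the residue
# grows with the imaginary-time components.
print(interval_sq(1.0, 1.0, ct1=0.01))  # small but nonzero
```

The second path being only approximately null seems consistent with the "s ~ ct" wording in the text; the smaller the excursion into imaginary time, the closer the path sits to the ordinary light cone.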
Figure 1 shows a mechanism for superposition, representing a superposed quantum state at rest relative to an observer, so we can omit spatial coordinates for simplicity.
An event at the origin subtends a plethora of superposed states in imaginary time pseudospace, represented by the circle (which appears in perspective as an ellipse), at right angles to the temporal reference direction.
These states collapse at a further unit distance along the t-axis to produce a random unitary outcome. As soon as one of the paths from the origin to the final state completes an exchange, all other paths collapse and the exchange path becomes the 'new' temporal reference direction. This effectively changes history.
Figure 2 shows a mechanism for entanglement. The Figure shows single dimensions of space and time (reference directions for a moving particle) and a single dimension of orthogonal time for simplicity. The Figure could, for example, represent entangled particles flying apart at lightspeed in opposite spatial directions. Note that as each particle flies away, it diverges into two parts in the plane of imaginary time. This represents the superposed condition of each of the particles. In practice the two particles may well have a number of superpositions, but we cannot show these with only a single dimension of imaginary time in the Figure. When one of the particles interacts, only one of the two paths shown will actually complete a transaction and become real. The other path will then cease to exist all along its history. This will have the effect of cancelling the superpositions all the way back down the time line to the origin, hence the other particle will then have to manifest with the opposite particle property when it interacts. In the new history of the two particles it will then appear that each set off from the origin with one of the two opposite properties.
What actually happens between moments of particle interaction remains an interesting question. Any attempt to look at the intervening period merely shortens it to the point at which we choose to take a measurement. Except at the points where it interacts, a particle seems to consist of a multitude of superposed and/or entangled states. However, as soon as it interacts, all but one of the particle’s multitudinous states become eliminated from history. At that point the path which remains in imaginary time becomes the real-time history of the particle.
Some theorists have argued that we cannot answer the question in principle. Others have gone further and opined that we cannot even ask such a question because it would beg an answer in terms of objective reality about a realm in which objective reality does not apply. In other words they dismiss the question as being as fruitless as theology.
However a combination of the hypothesis of 3-dimensional time with Heisenberg’s Uncertainty (Indeterminacy) Principle can perhaps offer some sort of an answer.
The Uncertainty Principle allows nature to violate the conservation of pretty well any particle property or behaviour, including that of existence itself, so long as the violations remain very small and/or get paid back very quickly. Planck’s constant sets a precise limit to the imprecision with which quantum phenomena can behave.
Thus, for example:
ΔE Δt ≈ ħ and Δp Δl ≈ ħ
Where ΔE means energy indeterminacy,
Δt means temporal (durational) indeterminacy,
Δp means momentum indeterminacy,
Δl means spatial (positional) indeterminacy,
and ħ means Planck’s constant over 2 pi.
This means that the universe can allow the spontaneous creation of particles from the void, or the background energy if you prefer. Such particles persist for a time inversely proportional to their masses, so massless photons can persist indefinitely whilst massive fermion/antifermion pairs can persist for only the briefest instants. Now HD8 models bosons as consisting of particle/reverse particle pairs, and we can conceive of fermions as having a similar configuration whilst in flight (between interactions).
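That inverse relationship can be sketched roughly (my numbers): taking ΔE as the rest energy 2mc² of an electron/positron pair, Δt ≈ ħ/ΔE gives the permitted lifetime of the "borrowed" pair:

```python
# Rough sketch: how long a virtual fermion/antifermion pair may persist,
# from dt ~ hbar / dE with dE the pair's rest energy 2*m*c^2.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # lightspeed, m/s
M_E = 9.1093837015e-31   # electron mass, kg

pair_energy = 2 * M_E * C**2   # dE for an e+/e- pair, J
dt = HBAR / pair_energy        # permitted duration, s
print(f"a virtual e+/e- pair may persist ~ {dt:.1e} s")
```

The answer comes out below 10⁻²¹ seconds, which is why such pairs count as "the briefest instants"; a heavier pair such as proton/antiproton gets proportionally less time still.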
Now conventional theory calls for the existence of so-called virtual particles to account for the electrostatic and magnetic fields and for various other fields. HD8 rejects this idea and suggests instead that such fields arise from the warping of spacetime in various dimensions in the vicinity of charges. To account for the behaviour of fields, virtual photons would have to have properties at variance with relativity. The distinction between so-called real particles and virtual particles has become progressively blurred in Standard Theory, particularly as we can now make apparently real particles perform the double slit trick. I think that the hypothesis of virtual particles has outlived its usefulness.
It would seem more reasonable to describe all particles as real at the instant of their interaction, and that whilst they remain in flight as it were, they spread out into multiple form in the pseudospace of imaginary time, using the freedom of quantum indeterminacy.
At the risk of undermining the idea that particles in between interactions do have some actual reality we should perhaps consider calling them Imaginary Particles.
Consider again the double slit experiment, but now in terms of real and imaginary particles. An electron in an imaginary state in orbit about an atom in a light source emits an arbitrarily large number of imaginary photons towards two slits in a screen. As two imaginary photons, one having gone through each slit, fly towards the detector, their higher dimensional spacetime curvatures interact and they combine to hit an imaginary electron in a detector. However only one of the two imaginary photons can become real by making a time reversed exchange with the electron at the origin of its path. As it does so it becomes momentarily real, as does the electron it interacts with.
In summary something does actually go through both slits but after the completion of the exchange only one of the paths remains real (and we cannot tell which). The electron at the emitter becomes real only momentarily as it emits. Both electrons go back into superposed states around their atomic nuclei.
Particles spend only a vanishingly small part of their time in interactions that confer a momentary reality upon them. So almost the entire universe consists of imaginary particles at any moment.
We ourselves must also consist mainly of particles in an imaginary condition. Imaginary particles interact with each other to create the reality that our senses detect, but can the imaginary part of ourselves interact more directly with the Omnium, that Multiverse of superposition and entanglement underlying our perceived reality?
I propose to return to this question in a later paper, but for now I leave you with an odd thought.
Thought itself feels to me very much like a series of collapses of superposed and entangled mental states into real ideas, actions and decisions. It seems as though I fill my head with ideas and then let them become a bit fuzzy, and then somehow something definite springs into reality. It feels like a stochastic process. Most of it of course ends up in the wastebasket, the most powerful tool of the thinker according to Einstein.
Houston, we have a problem.
If the human race does not develop starships that can reach other star systems as easily as sea freighters or jet planes now traverse the Pacific Ocean, it has no long-term future. Either an asteroid will eventually smash the planet, or we shall exhaust its natural resources, or the lack of a new frontier will lead to cultural decay. Our star will eventually run out of fuel anyway.
At the time of writing we face only a single obstacle to interstellar travel: physics as we understand her in the opening years of the twenty-first century. We have the technology to sustain life support in space; telemetry and astrogation present no real problems, but propulsion by reaction-thrust (firework) principles remains pathetically inadequate. The billions of dollars expended annually worldwide to improve chemical or even nuclear reaction thrust vehicles cannot in principle yield dividends in fields other than materials science, warfare, orbital astronomy, telecommunications, and a very modest exploration of the local solar system.
Any serious attempt to penetrate interstellar space must come from ideas developed in fundamental physics. No conceivable increment in rocket style technology can possibly get us to anywhere useful. We may as well abandon schemes based on Bussard ramjets, Orions, or matter-antimatter reaction thrust vehicles. Such schemes compare to crossing the Pacific on surfboards. Whilst they remain just about possible, they would represent a heroic, if not suicidal, waste of effort compared to the quest to develop more useful means of transport.
Even if such drives could achieve a comfortable one gravity of acceleration for a couple of years without an impossible power input, they could achieve relativistic speeds but interstellar gas particles would hit the vessel like hard gamma rays, and dust particles would strike like battleship shells. Thus we have to look within and beyond special and general relativity or quantum physics for some principle which may allow us to 'warp' or 'teleport' across space rather than hammer our way through it.
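A back-of-envelope estimate (my numbers, not the author's) shows why interstellar dust becomes lethal at relativistic speeds:

```python
# Sketch: relativistic kinetic energy of a 1-microgram dust grain
# striking a ship travelling at 0.99 c, via KE = (gamma - 1) * m * c^2.
import math

C = 2.99792458e8     # lightspeed, m/s
m = 1e-9             # 1 microgram of dust, kg
v = 0.99 * C

gamma = 1 / math.sqrt(1 - (v / C)**2)  # Lorentz factor
ke = (gamma - 1) * m * C**2            # kinetic energy in ship frame, J

TNT_J_PER_KG = 4.184e6
print(f"KE ~ {ke:.2e} J ~ {ke / TNT_J_PER_KG:.0f} kg of TNT equivalent")
```

A speck too small to see carries the energy of a sizeable bomb, which is the "battleship shells" point in quantitative form.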
Some of the ideas on this site may suggest a possible means of moving between star systems without traversing the daunting interstellar voids between them.
Consider the analogy with terrestrial travel. Friction against the solid or liquid surfaces of the planet limits ground and sea vehicles moving around the two dimensional surface to rather modest speeds. However aircraft moving in three rather than two dimensions escape from surface friction and can travel much faster. A tunnel dug right through the planet could provide an exceptionally efficient means of travelling to the antipodes of any point, although no known material could serve as a lining for the tunnel. Simply dive headfirst into the tunnel and freefall to the centre of the earth which you will pass at extreme speed. You will then begin to slow down as you continue, coming, in theory, to a momentary halt at the other end of the tunnel, allowing you just time to grab the lip and clamber out before you fall back again. Elapsed time: 42 minutes, fuel required none, g forces nil. In practice you really need to have a vacuum in the tunnel to prevent air friction and a spacesuit to prevent death.
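The 42-minute figure can be checked: a frictionless fall through a uniform-density Earth is simple harmonic motion, and a full traverse takes half a period, t = π√(R³/GM):

```python
# Checking the 42-minute gravity-tunnel transit time.
# Fall through a uniform-density Earth is simple harmonic motion;
# a full traverse takes half a period: t = pi * sqrt(R^3 / (G * M)).
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
R = 6.371e6     # mean radius of Earth, m

t = math.pi * math.sqrt(R**3 / (G * M))
print(f"transit time ~ {t / 60:.1f} minutes")
```

The same half-period holds for any straight chord through the planet, not just the diameter, which is part of the scheme's charm.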
So can we somehow exploit the extra degrees of freedom offered by six dimensions to achieve rapid transit across the universe? Six dimensions seem to offer the higher dimensional equivalents of both aircraft travel and tunnel travel, which for the purposes of this paper we can call Warp 1 and Warp 2 respectively.
Now as two dimensions of time lie orthogonal to all points of 4D spacetime, much as the up/down spatial dimension lies orthogonal to every point on a planetary surface, so Warp 1 and 2 journeys can, in principle, begin and end anywhere. As with a Tardis, launch from within your garden shed or basement should not present a problem.
Warp 1 craft would basically travel "outside" the hyperspherical three-dimensional "surface" of the observable universe in the "ana" or forward direction of the plane of imaginary time. By analogy with aircraft travel we might expect that such vessels may have to expend energy to resist the spacetime curvature (gravity) of the universe. Warp 2 craft would travel through the "kata" or past direction of imaginary time to any point of equigravitational potential without expending energy, although opening a tunnel may have an energy requirement.
Warp 1. The invention of aircraft required the coming together of two technologies, the technology of aerodynamic lift, and the technology to push persistently against the air. By analogy Warp 1 craft will need something to hold them in imaginary time and something to give them a shove. Now as the plane of imaginary time corresponds in some sense to probability, perhaps we should look at the phenomenon of quantum superposition in which a particle appears to occupy (or to have occupied) an indeterminately large number of possible states simultaneously. At the time of writing, such superpositions only seem to involve small numbers of particles and they seem prone to decohere rapidly back into "normal" states. However perhaps one day we will discover how to place an entire ship into a quantum superposition with a wave function that has a non-zero probability at every spatial point in the universe. Then perhaps we will only have to give it the gentlest push, and then allow it to decohere back into "ordinary" reality wherever we want, perhaps trillions of miles away, perhaps almost instantly.
Warp 2. Trying to dig a tunnel through spacetime using general relativity seems less sensible than at first it may appear, and Warp 2 travel does not involve this. Black holes would not consist of "holes" in spacetime; they would consist of impassable "knots". Four dimensional wormhole throats would require titanic amounts of "negative energy" to hold them open for ships to pass, and negative energy seems to have no real meaning or physical reality in this universe. To tunnel across the universe from one point to another we only actually need to penetrate beneath the four dimensional hyperspherical surface of which it consists, to gain the extra degrees of freedom on the "inside" in the kata or past direction of the plane of imaginary time. Warp 2 drive may require essentially the same technology as Warp 1 drive, except that the craft will require a push in the opposite "direction". Neither type of travel requires that the vessel risk ending up in the far past or future probabilities of its destination. Well perhaps it runs no more risk than an air traveller or tunnel diver from London to Australia risks in ending a journey a long way up in the air or some distance underground. (Timecrash- a new sci fi theme perhaps?)
A paper as wildly speculative and analogically based as this would certainly benefit from some supporting maths.
As faith began to make way for reason during the Enlightenment, humans began to deduce the relationships between certain mathematical patterns in nature. Of course a certain aesthetic appreciation for symmetry and proportion had existed for millennia; witness, for example, the extraordinary precision with which the later Pyramids, Stonehenge, and much of Classical Architecture were erected. The Platonic school of philosophy had a strong intuitive grasp of the idea that geometry underlay the structure of nature, although it relied too heavily on theory and not enough on actual measurement.
The Savants and Cabals of the Enlightenment chose their symbolism with some subtlety.
The profession of Arianism (Christ as a non-divine but merely enlightened human), or outright Atheism, could get you into a lot of fatal trouble in those days. Thus they adopted a sort of compromise position and replaced the mad capricious Deity of the bible with the Architect of the Universe, an altogether more reasonable sort of creature whose works lay amenable to rational understanding. Out of this idea Freemasonry evolved to promote the ideas of the Enlightenment under various degrees of secrecy and obfuscation, depending upon circumstances. Originally it had strong anti-monarchist and anti-clerical (particularly anti-Catholic) currents, but these have ameliorated with time. It has become rather superfluous since science gained the ascendancy in the westernised world, and has since descended into mere clubbishness, and sometimes petty corruption.
Interestingly, soon after the Enlightenment had started, Geometry became superseded by Algebra. Now Algebra simply consists of Geometry without the diagrams, and it gives you the freedom to do Geometry in as many ‘dimensions’ as you want. The ancient Egyptians appear to have understood Pythagoras’ theorem to the extent that they could construct a right angle by tightening a rope knotted to make a 5x4x3 triangle. However it took the Greeks to work out that the square on the hypotenuse equals the sum of the squares on the other two sides for ANY right angled triangle.
Astonishingly, Isaac Newton, one of the greatest figures in the Enlightenment, worked out planetary motion entirely by geometry, using painstakingly collected measurements and gruesomely complicated looking diagrams before distilling it all down to elegant algebraic form as F = Gm1m2/r^2. Despite the algebraic notation this remains, like all algebraic equations, an essentially geometric relationship.
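As a worked instance of that equation (standard textbook values, not figures from this document), consider the Sun's pull on the Earth:

```python
# Worked instance of Newton's F = G * m1 * m2 / r^2:
# the gravitational force between the Sun and the Earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
M_EARTH = 5.972e24   # mass of the Earth, kg
r = 1.496e11         # mean Earth-Sun distance, m

F = G * M_SUN * M_EARTH / r**2
print(f"F ~ {F:.2e} N")
```

The result, a few times 10²² newtons, is exactly the centripetal force needed to hold the Earth on its orbit, which was Newton's geometric point.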
Now when the modern mind looks at many of the architectural and symbolic marvels of ancient cultures it discerns, within at least some of it, the encoding of geometrical relationships to many of the phenomena in the natural world. Some ancient structures exhibit alignment with celestial events, some exhibit proportions such as the so called Golden Mean which reflect geometric relationships which occur naturally in biological forms or in shapes such as the pentagram or in spirals.
Perhaps we should not get too excited about such secret or sacred relationships and let our apophenia run away with itself. After all, it seems that as well as tool making apes and talking social apes, we have also made an evolutionary career for ourselves as pattern recognising apes.
However, let’s run with the concept of sacred geometry for a while and see where it leads, for the occult so often holds the door open to the sciences of the future.
A really sacred geometry should show us the proportions and relationships inherent in the entire macrocosm and microcosm, real secret of the universe stuff.
Can such a thing exist?
I suspect that it does, and that we already have a fair proportion of it, although we seldom recognise it as such. Most of it appears as algebra rather than the more easily visualised geometry; however the algebra reflects the basically geometric structure of reality.
Earlier attempts to distil from the microcosm and the macrocosm a sacred geometry such as exhibited by the cabalistic Tree of Life or the musings of Ramon Lull, (author of the original Liber Chaos!), suffer from a lack of good observational data and from contamination with theological nostrums.
The accurate macrocosmic stuff begins with Newton and becomes magnificently expanded by Einstein.
Einstein worked out the basic geometric relationships between space and time, and between mass and energy, and presented the answer as what he called Special Relativity. Such fundamental things basically exist as geometric relationships between one phenomenon and another.
He then went further and developed General Relativity to include the effects of acceleration and gravity. General Relativity describes the macrocosm in exclusively geometric terms: -
Space/time curvature = Mass/energy density
Thus what we perceive as matter actually consists of a curvature in space and time, and what we perceive as gravity actually consists of the effect of that curvature on matter.
Whilst we can draw out the diagrams to represent the Special Relativity algebra as geometry on paper, we would need multidimensional paper to do this with General Relativity, so we usually have to leave it as algebra.
Now in contradistinction to many other rebel physicists, I don't think Einstein got it wrong, I think he got it very very right. However I think more will eventually come to expand upon his insights, in particular I suspect that: -
6D Space/time curvature(s) = 6D Charge/spin densities
Now when Einstein intuited and calculated the geometry of space/time and gravitation we had only an embryonic understanding of particle physics, the microcosm, and he could not include it into a grand unified theory, although he strove mightily to do so.
The particle physics theory of the microcosm has since taken off in a quite different direction to the geometric model of the macrocosm. Particle physics theory depends on the idea of 'Quanta', (minimum sized indivisible bits of reality), hence quantum physics, in which the fundamental building blocks of reality occur as dimensionless points with a probabilistic wavelike distribution in space and time; hence they cannot have a properly geometric description. Although the quantum theory gave predictions that modelled observations, Einstein considered that it still contained a profound metaphysical flaw.
I suspect that a fully geometric model of quantum physics requires 6 dimensions, three of space and three of time, rather than the three of space and the one of time that both relativistic and quantum theories currently employ.
At the time of writing, General Relativity still provides our best official understanding of gravity which defines the large scale structure of the universe, but Quantum Physics provides the official descriptions of the other 3 forces in nature, the electromagnetic, the weak nuclear and the strong nuclear forces. Nevertheless the quantum descriptions seem to imply some very strange and paradoxical metaphysical ideas and the accompanying Standard Model of particle physics still seems largely phenomenological because it fails to provide much in the way of a mechanistic description and it contains many seemingly arbitrary constants. The description of the strong nuclear force remains particularly messy. The standard model conspicuously fails to explain the existence of 3 generations of each type of fundamental fermion particle, quark, electron, and neutrino.
Yet having described three of the four fundamental forces, if only imperfectly, with quantum physics, physicists have attempted to bring gravity into the same fold to create a Grand Unified Theory of all four, and the search for a theory of Quantum Gravity has dominated the agenda of a generation of fundamental physicists.
A quantum theory of gravity would imply that Einstein got it wrong in describing gravity as curvature in spacetime geometry and would instead describe gravitational fields as mediated by the exchange of virtual graviton particles, whilst gravitational waves would arise from the exchange of real gravitons. Nobody, incidentally, has ever captured a virtual particle; theorists merely hypothesise their existence to explain field effects.
The search for a theory of quantum gravity has failed to bear fruit, and the particles implied by various versions of it, gravitons, the supersymmetry particles, and the Higgs particles, have all failed to appear in experiments so far.
Rather than try to quantize gravity, I suggest that we should try to geometricate the quanta. In a 'Quantum Geometry' virtual particles would not exist, all apparent fields would arise from various spacetime curvatures subtended by real particles. Various papers on this site attempt to show that quantum geometry can exist in a spacetime having six dimensions, three of space and three of time. I suspect that we need to investigate the geometry of time more fully to get to a fully unified grand 'theory of everything'.
This paper provides a method of visualising the four dimensional hypersphere as a perspective construction in three dimensions, and it also shows how the curvature of hyperspherical space will distort images from distant galaxies and supernovae.
Figure 1. Any attempt to project the surface of the Earth onto a flat surface necessarily leads to some kind of distortion. In addition to the usual Mercator projection that distorts distances towards the poles, we can also make a polar projection by cutting through the equator and then "photographing" each hemisphere from above the poles. Such a projection gives a good representation of distances near the poles but it leads to progressive distortions as we near the equator. Cartographers normally place the two halves of the polar projection in contact at some point, to remind us that all points around the equator of one hemisphere actually touch a corresponding point on the other hemisphere.
Similarly we can represent the 3-sphere or hypersphere as two spheres in which every point on the surface of one of the spheres corresponds to a point on the other sphere, despite the fact that we can only represent them with a single point of contact. Now when we make a polar projection of the Earth's surface, convention dictates that we centre the projection on the poles, but we could choose any two opposite points and cut the sphere across a great circle other than the equator. An egomaniac might delight in a projection with his house at the very centre of the projection, but it would remain a valid projection.
Thus an observer A in an hypersphere can define her map of it with herself at the centre of one of the spheres. This then defines a second point B, her hyperspherical antipode, analogous to the point furthest away from her on the surface of the world. In a hypersphere it represents the furthest point you can travel to without starting to come back to where you started from.
Figure 2. An observer looking into the deep space of an hypersphere could in theory see an object at her antipode point B, by looking in any direction, analogous to the way in which all routes from the North Pole lead to the South Pole on the Earth. Note that lines of sight curve within an hypersphere, in a way analogous to the way in which meridians curve around the surface of the Earth. Light follows geodesics in space, so if space curves, light has to follow the curvature. Theoretically our observer could see right past her antipode and catch sight of the back of her own head. In practice light from near the antipode becomes so red-shifted by the time it gets to A that A cannot even see quite as far as the antipode.
VHC argues that the curvature of the universe also causes the progressive red-shift of light travelling across it, and that conventional cosmology has mistaken this for recession velocity, an hypothesis which implies an expanding universe. This paper will attempt to show that the supposed acceleration of that expansion arises from the lensing effect of hyperspherical space.
Figure 3. Imagine that we take the polar projection of the Earth and then roll the equator of the Southern Hemisphere around that of the Northern one. This will have the effect of stretching out Antarctica so that it goes all the way around the circumference of the whole map. We would then have a circular and highly topological map of the world with huge distance distortions towards the South Pole which itself now stretches around the entire edge. With a little effort at visualisation we can do something analogous with the two sphere map of the hypersphere, by rolling one sphere all over the entire surface of the other so that all corresponding points come into contact. This will result in the antipode point becoming spread out over the entire surface of the resulting sphere. Astronomers who assume a "flat" Euclidean universe will have effectively and unwittingly distorted their view of the universe in exactly this way if it does in fact have an hyperspherical geometry.
Figure 4. Astronomers who assume a flat universe with no curvature will assume that they can see in straight lines. If they look out into the apparent sphere of space that surrounds them, they will actually see along geodesics which curve relative to the assumed flat space of their maps.
The positive curvature of the Universe has a lensing effect on the passage of light across it: -
LH = 1/(1+(d-d^2)^0.5-d), where LH = hyperspherical lensing and d stands for the ratio a/L, astronomical distance over antipode distance. The lensing distorts apparent magnitude and thus creates a mismatch with redshift, leading to the dubious assumption of an accelerating expansion.
The vertical axis for a number of factors runs from zero to three, marked in divisions of 0.5 with the unity line highlighted in purple for clarity. The horizontal axis runs from observer to antipode.
The red line shows redshift Z, where Z = c/(c-(dA)^0.5) - 1, and d = astronomical distance.
Note that a redshift of 1 at about 7 billion light years denotes the halfway point to the antipode distance. Redshift climbs exponentially towards infinity at the antipode; observations become increasingly difficult up to redshift 10 and then virtually impossible beyond.
The yellow line represents schematically the hyperspherical geodesic from the observer to the observer’s antipode; the curved path that light actually takes in the cosmic hypersphere.
The blue line represents schematically the observer’s assumed sight line for flat space.
The green line represents schematically the difference between the actual and the assumed sight line, and thus the degree of Hyperspherical Lensing LH, that light becomes subject to at various distances. Note that in this revised version of the model, the line has the inverse configuration to previous models on this site and the equation governing it has the form
LH = 1/(1+(d-d^2)^0.5-d), where here d = astronomical distance/antipode distance.
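Both curves can be evaluated numerically. The sketch below (Python) tabulates Z and LH at a few fractions of the antipode distance, under the assumption that A = c^2/L so that (dA)^0.5/c reduces to (d/L)^0.5; the function names are mine, for illustration only.

```python
# Evaluate the redshift and hyperspherical-lensing curves described in the text.
# Assumption: distances expressed as a fraction of the antipode length L,
# with A = c^2/L, so that (d*A)**0.5 / c = (d/L)**0.5.

def redshift(frac):
    """Z = c/(c - (dA)^0.5) - 1, with d/L = frac and A = c^2/L."""
    return 1.0 / (1.0 - frac ** 0.5) - 1.0

def lensing(frac):
    """LH = 1/(1 + (d - d^2)^0.5 - d), with d = astronomical/antipode = frac."""
    return 1.0 / (1.0 + (frac - frac ** 2) ** 0.5 - frac)

for frac in (0.1, 0.25, 0.5, 0.75, 0.99):
    print(f"d/L = {frac:4}:  Z = {redshift(frac):8.3f}   LH = {lensing(frac):6.3f}")
```

Note that LH comes out below the unity line for the nearer half of the antipode distance and above it for the farther half, matching the negative and positive lensing regimes described below, while Z grows without bound as d approaches the antipode.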
The negative lensing at distances below 7 billion light years explains the anomalously low luminosity of type 1A supernovae without recourse to the hypothesis of an accelerating expansion driven by some mysterious dark energy.
The positive lensing at distances greater than 7 billion light years explains the increase in angular size of very distant structures without recourse to the hypothesis of an expanding universe at all.
Dark energy does not exist.
Thus the Universe consists of a hypersphere, finite but unbounded in both space and time and constant in size, and vorticitating with a small positive spacetime curvature, despite the apparent temporal and spatial horizons.
Inflation did not occur.
The overwhelming majority of university and state funded cosmologists believe that the observable universe has expanded from a much smaller and much hotter and denser volume of space over the last 13 billion years or so. Some believe that the original volume of the observable universe may have had a size as small as a grapefruit, or possibly the size of a single atom, or that it had a quantum size or even the infinite density of a zero-sized singularity. The eruption of the universe from the initially very compact state commonly gets called the Big Bang.
Several items of evidence led to this novel idea.
Firstly, it would seem that if we inhabited a universe that tried to stay still then its own gravity would cause it to collapse.
Secondly, light from distant galaxies appears shifted towards the red or lower energy end of the spectrum, and the greater the distance to the far galaxy the more pronounced this red shift becomes. Cosmologists have interpreted the galactic redshifts as evidence for the galaxies flying away from us and stretching out the light waves and lowering their energy as the universe expands.
Thirdly, a thin drizzle of microwave radiation permeates all of space and it comes in from very deep space, from way beyond the local galaxies. This radiation has the name of the Cosmic Microwave Background Radiation (CMBR). Cosmologists have interpreted this as the remnant of a time when the early universe had a very high temperature, and that subsequent expansion has stretched and cooled the radiation from the early fireball.
Fourthly, the universe contains about 25% helium and 75% hydrogen, with only a fraction of a percent of the heavier elements that make up planets and rocks and us. Cosmologists use this observation to support a rather curious circular argument which says that if hydrogen constitutes the primordial form of matter but it gets transmuted into helium in stars, then 13 billion years of star activity will not suffice to produce that much helium. Ergo much of the helium must have got made in the violence of the Big Bang.
A whole massive edifice of official scientific theory depends on these three and a half items of evidence. Hardly anyone dares to question the big bang idea in principle, for fear of losing their academic funding, so they just argue about the precise details of it and try to fit new observations within the theory.
However the theory has many holes in it, and the attempt to patch them has led to all sorts of fudges and the multiplication of dubious sub-hypotheses.
For a start the Big Bang theory does not explain the origin of the universe, nor how the mass of billions of galaxies, each containing billions of stars, could have got into a state of near infinite density and almost zero size. Perhaps it collapsed into this state from a state not unlike its present one and then somehow bounced back. Maybe it does this endlessly, but cosmologists generally think that our universe's expansion has started to accelerate with no chance of it ever collapsing again.
The purported age of the universe at a mere 13 billion years in BB theory seems too short for many of the mega-structures that we can observe in the universe to have evolved. Plus some galaxies look like they have been in business for at least that long, if not longer.
Plus cosmologists seem to have encountered a couple of problems with the way that gravity seems to work on very large scales. A lot of galaxies seem to spin too fast. They do not appear to contain enough visible matter to hold themselves together by gravity, and according to conventional gravity ideas they should fly apart. This has led to the ad-hoc hypothesis that they must contain a lot of so-called ‘dark matter’, a mysterious substance that does not behave much like ordinary matter at all, although it still does gravity. I’m surprised that they did not call it Phlogiston instead, in memory of the imaginary substance that the ancients thought of as the cause of fire.
Additionally another problem has arisen with gravity. One might expect that after an initial BB explosion, the expansion of the universe would begin to slow down as the gravity of all the separating pieces began to pull on each other and slow each other down. However from observations of the apparent speed of far galaxies, most cosmologists have concluded that the apparent expansion rate has actually increased rather than decreased with time. They put this acceleration down to yet another ad-hoc mysterious substance that they call dark energy which exhibits anti-gravity, (or levity as some wag put it).
On top of all these problems with gravity comes the fairly recent observation that our space probes to the limits of the solar system have somehow mysteriously decelerated more than we expected from conventional gravity calculations.
Big Bang theory thus looks increasingly dubious as the anomalies pile up and the epicycle style ad-hoc adjustments keep multiplying to paper them over. The three and a half major items of evidence for the Big Bang hypothesis do however submit to at least one alternative interpretation.
Einstein originally favoured the idea that the universe might consist of a hypersphere which represents the four dimensional equivalent of a sphere. Such a structure has a finite but unbounded extent, it does not go on forever, but you cannot ever reach the edge of it because its gravity causes it to close in on itself so on a very long journey you would find yourself just going round it again. The earth itself has a finite surface, and unless you try to go underground or into the sky it remains unbounded, it doesn’t have edges and you cannot fall off. In a hypersphere the three dimensional ‘surface’ has this same property of finite and unbounded extent.
A hyperspherical universe would have a definite shape, although we cannot easily visualise it, and it would not have infinite extent and all the paradoxes and nonsense that go with the idea of infinite anything. However a hyperspherical universe would collapse in on itself, so Einstein added a fudge factor he called the cosmological constant which had anti-gravity effects to prevent it doing so. This looked grossly inelegant, so Gödel suggested instead that it could remain stable if it rotated. However if it rotated in the same sort of way that planets rotate about stars then astronomers would easily notice the effect, and they hadn’t.
Then Hubble came up with the red-shift to distance relationship and the Hyperspherical Universe model became generally abandoned in favour of an Expanding Universe model. Finally the discovery of the CMBR seemed to settle the issue as evidence of the remnant of a primeval fireball following the Big Bang.
However several features of a hypersphere seem to have become overlooked in the rush to form a new theory and these features can account for the non-collapse of the universe, the red-shifts, and the CMBR. Moreover they can also explain the apparent acceleration of the apparent expansion, the apparently anomalous rotation of galaxies, and the mysterious deceleration of our space probes as well. This paper will deal with each of these effects in turn:
A ‘rotating’ universe. Gödel found a solution to Einstein’s equations of general relativity that allowed for a rotating universe:
W = 2(piGd)^1/2, where W = angular velocity, G = gravitational constant, and d = density.
Now this solution assumes a simple spherical universe rotating about a single spatial axis like a spinning ball, and plainly we do not observe such a phenomenon in the heavens.
For a hypersphere the corresponding equation becomes:
W = (2piGd)^1/2
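For a rough sense of scale, the sketch below (Python) applies both forms to an M33-like galaxy, using the round figures of ~2 x 10^40 kg of baryonic mass within ~3 x 10^20 metres from the M33 rotation note on this site and treating the galaxy as a uniform sphere; the numbers are illustrative, not a precise model.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
m = 2e40            # baryonic mass of an M33-like galaxy, kg (rough figure)
R = 3e20            # radius, m (rough figure)

# Mean density, treating the galaxy as a uniform sphere for simplicity.
d = m / ((4.0 / 3.0) * math.pi * R ** 3)

w_godel = 2 * math.sqrt(math.pi * G * d)     # Godel's W = 2(piGd)^1/2
w_hyper = math.sqrt(2 * math.pi * G * d)     # hyperspherical W = (2piGd)^1/2

print(f"Godel form:          {w_godel:.2e} rad/s")
print(f"Hyperspherical form: {w_hyper:.2e} rad/s")
```

Both forms give an intrinsic angular velocity of a few times 10^-16 radians per second for these inputs, the same order as the rotation-curve component discussed in the M33 note; the two differ by a constant factor of root 2.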
The angular velocity of a hypersphere represents something a little more complicated than a simple rotation as it involves an extra dimension. A hypersphere undergoes ‘vorticitation’ in which all points on its 3 dimensional ‘surface’ move to the opposite position on the other side of the hypersphere and then back again.
Now a hypersphere has the property that GM/L = c^2, where M = its mass, and L = its length (antipode distance), and c = lightspeed. Thus it has an orbital velocity of lightspeed and an escape velocity of root 2 lightspeed, so nothing can escape from it.
A hyperspherical universe will have a natural centripetal acceleration of -c^2/L due to its gravity and a natural centrifugal acceleration of c^2/L due to its vorticitation, and these balance to give a stable universe that does not collapse or expand.
The opposing centripetal and centrifugal accelerations give rise to an omni-directional resistance to linear motion and an omni-directional boost to orbital motion.
We can measure this acceleration A = c^2/L by recording the amount of deceleration of our space probes which does not arise from the gravity of our star. This ‘Anderson Acceleration’ comes out at about 8.74 x 10^-10 m/s^2, a very tiny deceleration that we only notice over vast distances.
With this figure we can calculate the mass, antipode length, and vorticitation rate of our hyperspherical universe to give the following figures:
(Remember to use 2L^3/pi for hyperspherical surface volume when calculating density)
Mass ~ 10^53 kilograms.
Antipode length ~ 10^26 metres (About 11 billion light years).
Temporal horizon ~ 10^17 seconds.
Vorticitation rate ~ 0.006 arc-seconds per century.
Vorticitation cycle ~ 22 billion years.
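These figures follow directly from A = c^2/L. A quick numerical check (Python; the constants and the Anderson value come from the text, the unit conversions are mine):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # lightspeed, m/s
A = 8.74e-10      # 'Anderson acceleration' from the text, m/s^2

L = c ** 2 / A                          # antipode length, from A = c^2/L
M = c ** 2 * L / G                      # mass, from GM/L = c^2
density = M / (2 * L ** 3 / math.pi)    # using the hyperspherical surface volume
W = math.sqrt(2 * math.pi * G * density)    # vorticitation rate, rad/s

century = 100 * 365.25 * 24 * 3600      # seconds per century
arcsec = 206265                         # arc-seconds per radian
arcsec_per_century = W * arcsec * century
cycle_years = (2 * math.pi / W) / (365.25 * 24 * 3600)

print(f"Antipode length ~ {L:.1e} m, mass ~ {M:.1e} kg")
print(f"Vorticitation rate ~ {arcsec_per_century:.4f} arcsec/century")
print(f"Vorticitation cycle ~ {cycle_years:.1e} years")
```

Running this reproduces the listed figures: an antipode length of about 10^26 metres, a mass of about 10^53 kilograms, a vorticitation rate of about 0.006 arc-seconds per century, and a cycle of about 22 billion years.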
The observed universe actually appears slightly larger than the antipode distance shown because of the hyperspherical lensing arising from the small positive curvature to space-time.
The omni-directional boost to orbital motion, the acceleration A, contributes to all orbital velocities V, in the following way:
V = (Gm/r + rA)^1/2
This has negligible effects at planetary scales where the + rA factor has immeasurably small effects, but on galactic scales it explains the unexpectedly high orbital velocities, without the need for dark matter.
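A rough comparison shows why the rA term only matters at galactic scales. The sketch below (Python) uses round illustrative figures for the Earth-Sun orbit and for a star at the edge of a galactic disc; the masses and radii are my placeholders, not values from the text.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
A = 8.74e-10    # hyperspherical acceleration A = c^2/L from the text, m/s^2

def orbital_velocity(m, r):
    """V = (Gm/r + rA)^1/2: the Newtonian term plus the hyperspherical boost."""
    return (G * m / r + r * A) ** 0.5

# Earth orbiting the Sun: the rA boost is utterly negligible.
m_sun, r_earth = 1.989e30, 1.496e11
newton_term = G * m_sun / r_earth    # Newtonian term, m^2/s^2
boost_term = r_earth * A             # rA boost, m^2/s^2

# A star at the edge of a galactic disc (illustrative round figures):
m_gal, r_gal = 2e40, 3e20
newton_gal = G * m_gal / r_gal       # Newtonian term, m^2/s^2
boost_gal = r_gal * A                # rA boost, m^2/s^2 -- now dominant

print(orbital_velocity(m_sun, r_earth), orbital_velocity(m_gal, r_gal))
```

At the planetary scale the boost amounts to well under one part in a million of the Newtonian term, so it stays unmeasurable; at the galactic figures above the rA term exceeds the Newtonian one, raising the predicted orbital velocity well above the Newtonian expectation.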
The omni-directional resistance to linear motion, c^2/L, arising from centripetal/centrifugal effects in the vorticitating hypersphere acts not only on space-probes but also on light and all other forms of electromagnetic radiation. As light refuses to travel at anything less than lightspeed in empty space, it simply loses energy rather than velocity, and thus light reaching an observer from faraway galaxies will appear red-shifted in proportion to the distance that it has travelled.
When c - AT = 0, an apparent temporal horizon will exist at a time T, the travel time that it takes to red-shift light (almost*) to oblivion.
The Cosmic Microwave Background Radiation.
Light that has travelled an almost antipodal distance to an observer will arrive highly red-shifted and hence very low on energy. However space contains a thin gas of about one hydrogen atom per cubic metre, at a temperature only a couple of degrees above absolute zero. When the highly red-shifted light from distant sources drops down to an energy level corresponding to the temperature of the hydrogen in the interstellar and intergalactic medium* it will start to become absorbed and re-emitted by it, and this effect will dominate over further red-shifting. Thus the CMBR that we observe corresponds to the temperature of the universe in general, which comes in at a rather chilly 2.7 degrees above absolute zero, because apart from the widely spaced hotspots of stars, it mostly consists of freezing cold space. The CMBR does not come from a primeval fireball some 13 billion years ago; it results from radiation that has circulated the finite and unbounded universe for a time that may exceed the temporal horizon several times for some lucky photons.
In a hypersphere, spacetime itself has a small positive curvature and light and other forms of radiation have to follow that curvature. This lensing effect leads to two optical illusions if observers assume a ‘flat’ space without curvature. (I.e., if they assume that the light that they see has travelled to them in straight lines). Firstly it makes the observable universe look about a quarter larger than it should, and secondly it makes the universe look like it has an accelerating expansion. The following entire paper deals with this concept, as it involves some rather challenging geometry.
However, now for something rather astonishing:
Macrocosm and Microcosm.
Gödel’s equation for a rotating cosmos, W = 2(piGd)^1/2, led him to exclaim:
‘Matter everywhere rotates relative to the compass of inertia with the angular velocity of twice the square root of pi times the gravitational constant times density’
Note the presence of the word ‘everywhere’. This solution of Einstein’s field equations supposedly applied to matter at all scales. However the solution did not appear physical because we do not seem to inhabit a spherical universe. Yet the hyperspherical variant of this equation, W = (2piGd)^1/2, does seem to apply to matter everywhere at all scales.
In cosmological terms it can explain the non-collapse of the universe.
In astrophysical terms it can explain the apparently anomalous rotations of galaxies.
In terms of solar system mechanics it makes very little difference because of the comparatively large ratio of masses to distances.
However its effect re-appears on the quantum scale, indeed it seems to dominate what happens there.
Taking W = (2piGd)^1/2, and applying d = m/(2L^3/pi), Gm/L = c^2, and W = 2pif, where f = frequency, we obtain f = c/2L, the familiar formula for fundamental fermionic particles relating frequency to wavelength!
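The algebra checks out: substituting d = pi m/(2L^3) and Gm/L = c^2 into W = (2piGd)^1/2 gives W = pi c/L, hence f = W/2pi = c/2L. A minimal numerical verification (Python; the particular value of L here is an arbitrary test input, not a physical claim):

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                 # lightspeed, m/s

L = 1.0e-15                 # arbitrary test 'antipode length', m
m = c ** 2 * L / G          # mass fixed by the Gm/L = c^2 condition

d = m / (2 * L ** 3 / math.pi)          # density over hyperspherical volume
W = math.sqrt(2 * math.pi * G * d)      # the vorticitation/hyperspin rate
f = W / (2 * math.pi)                   # frequency, from W = 2pi f

print(f / (c / (2 * L)))    # the ratio f : c/2L, which should come out as 1
```

The ratio equals 1 to machine precision for any choice of L, confirming that the hyperspherical form of the equation reduces to f = c/2L whenever the Gm/L = c^2 condition holds.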
This strongly suggests that fundamental particles also consist of vorticitating hyperspheres, just like the entire universe itself.
As Above, So Below, as Hermes Trismegistus apparently wrote on his Emerald Tablet.
I haven’t a clue how to derive the hyperspin equation:
w = (2piGM/V)^1/2
from the fearsome field equations, but I have a strong intuitive feeling that it does constitute a valid and exact solution because it gives such a beautiful result with such explanatory power.
P.S. I suspect that the spacetime GR matrices may require a 6 x 6 rather than a 4 x 4 expression to allow a complete unification with quantum physics.