Arcanorium College News and Views

Hypersphere Cosmology (13)

Saturday, 29 February 2020 16:38

Rotation of the Triangulum M33 Galaxy


Baryonic mass ~ 10^10 solar masses ~ 2 x 10^40 kg.

Radius ~ 3 x 10^4 light years ~ 3 x 10^20 metres.

If w = 2 sqrt(pi G d)                  *Gödel, where d = density.

w = 2 sqrt(pi x 6.67 x 10^-11 x 2 x 10^40 x 3 / (4 x pi x 27 x 10^60))

w ~ 3.8 x 10^-16 radians/second.
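
As a check on the arithmetic, a minimal Python sketch of the same calculation, assuming the rounded mass and radius quoted above and treating the disc and gas halo together as one uniform sphere:

    import math

    G = 6.67e-11          # gravitational constant, m^3 kg^-1 s^-2
    M = 2e40              # baryonic mass of M33, kg (rounded figure from above)
    r = 3e20              # radius, metres (rounded figure from above)

    # Mean density of the galaxy plus halo treated as a uniform sphere.
    d = M / ((4.0 / 3.0) * math.pi * r**3)

    # Gödel: w = 2 * sqrt(pi * G * d)
    w = 2.0 * math.sqrt(math.pi * G * d)
    print(f"mean density ~ {d:.2e} kg/m^3")
    print(f"w ~ {w:.2e} rad/s")   # close to the ~3.8 x 10^-16 rad/s quoted above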

Compare with Gomel and Zimmerman

https://www.preprints.org/manuscript/201908.0046/v1

Where w = 4.58 x 10^-16 radians/second.

*Gödel https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.21.447

Gödel in 1949 proposed that any dust-like cloud of gravitationally bound matter would have an intrinsic angular velocity w.

Gomel and Zimmerman 2019 propose that a galaxy will have an intrinsic inertial frame of reference which gives rise to an angular velocity w.

The above very rough calculation shows that the angular velocity proposed by Gödel may equal the component of the galactic rotation curve identified as w by Gomel and Zimmerman, and that the hypothesis of ‘Dark Matter’ invoked to explain galactic rotation curves would then become completely unnecessary.

This calculation depends on assessing the overall density of the galactic disc and the spherical gas halo together.

Peter J Carroll 29/2/2020

Monday, 20 January 2020 20:33

Modified Newtonian Dynamics

The above may supply the answer to the conundrum of Galactic Rotation Curves not matching the Newtonian-Relativistic predictions, but the following now looks like a better proposition: https://www.specularium.org/component/k2/item/290-rotation-of-the-triangulum-m33-galaxy

Tuesday, 17 December 2019 21:15

Type 1A supernovae and Hypersphere Cosmology

Lensing, Redshift and Distance for Type1A Supernovae.

All data input from Perlmutter et al., https://arxiv.org/pdf/astro-ph/9812133.pdf

The measured Apparent Magnitudes of type 1A Supernovae become converted into fluxes measured in Janskys, and these fluxes become converted into Apparent Distances on the basis of the inverse square law, taking the Absolute Magnitude of a type 1A supernova as -19.3 at 10 parsecs as the basis of calculation.
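
A minimal sketch of that conversion, assuming the standard distance-modulus form of the inverse square law, which amounts to the same thing as going via fluxes in Janskys; the apparent magnitude of 24 used in the example is purely hypothetical:

    M_ABS = -19.3                      # assumed absolute magnitude of a type 1A supernova

    def apparent_distance_parsecs(m_app):
        """Inverse square law in distance-modulus form: m - M = 5 log10(d / 10 pc)."""
        return 10.0 * 10.0 ** ((m_app - M_ABS) / 5.0)

    # Hypothetical example: a supernova observed at apparent magnitude 24
    d_pc = apparent_distance_parsecs(24.0)
    print(f"apparent distance ~ {d_pc:.2e} parsecs "
          f"~ {d_pc * 3.262 / 1e9:.1f} billion light years")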

The Lensing Equation  then gives the Actual Distance for each supernova.

The Redshift-Distance Equation then gives the Antipode distance of the Universe, which comes out at ~13 billion light years in every case, with only minor divergences.

The physical principles underlying the Lensing Equation and the Redshift-Distance Equation lie here https://www.specularium.org/hypersphere-cosmology in sections 8 and 5 respectively.

This analysis suggests that the small positive spacetime curvature of a Hyperspherical Universe can account for the apparent discrepancies between observed redshifts and observed apparent magnitudes of type 1A supernovae, and the hypotheses of an expanding universe with an accelerating expansion driven by a mysterious ‘dark energy’ become unnecessary.

Thursday, 04 December 2014 09:38

Hypersphere Holometry - A Prediction

As a consequence of the hypothesis that the universe exists as a hypersphere where \frac{{Gm}}{{{c^2}}} = L  the main parameters of the universe all come out at the same multiple of the corresponding Planck quantities:

\frac{L}{{{l_p}}} = \frac{M}{{{m_p}}} = \frac{T}{{{t_p}}} = \frac{E}{{{e_p}}} = \frac{{{a_p}}}{A} = U

Where U stands for a very large dimensionless number now called the Ubiquity Constant.

Now the Bekenstein-Hawking conjecture about black hole entropy initiated speculation about the idea of a ‘holographic universe’, because if we equate entropy with information then it suggests that the amount of information in the universe corresponds to its surface area, and perhaps the information somehow lies on its surface and the apparent 3d universe just consists of a projection from that. Personally I don’t buy into that hypothesis, but I do think the basic conjecture tells us something about reality, because the universe does not appear to contain enough information to allow for quantisation right down to the Planck scales.

Various experiments are in progress to see if the universe actually exhibits spacetime quantisation, see http://en.wikipedia.org/wiki/Holometer for links.

Now the Bekenstein-Hawking conjecture states:

I = \frac{A}{{4l_p^2}}

Where I = information in bits if we equate information with entropy, (eliminating Boltzmann’s constant in the process). A = the surface area of the universe, and here we can use either the ‘causal horizon’ from the big bang or the spatial horizon from the hypersphere cosmology hypothesis. 4l_p^2 means four times the Planck length squared.

So a calculation can proceed as follows: -

Calculating the surface area of the universe A, and then dividing by 4l_p^2:

A = 4π{r^2}

r = \frac{{Gm}}{{\pi {c^2}}}

A = \frac{{4{G^{2}}{m^2}}}{{\pi {c^4}}}

Dividing area by 4l_p^2  

Area/4l_p^2 = \frac{{4{G^2}{m^2}}}{{\pi {c^4}}} \times \frac{{{c^3}}}{{4\hbar G}} = \frac{{G{m^2}}}{{\pi \hbar c}}, where ħ means Planck’s constant reduced.

This comes out at 2.0559 x {10^{120}}, the number of bits for the universe.

Calculating the number of Planck volumes in the universe:

Volume of universe = \frac{{2{L^3}}}{\pi } = 3.314 x {10^{183}} Planck volumes

Calculating the number of Planck volumes in the universe per bit in the universe:

3.314 x {10^{183}} / 2.0559 x {10^{120}} = 160 x {10^{60}}

Taking the cube root of this gives the number of Planck lengths per bit = 5.4 x {10^{20}}
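
The whole chain of arithmetic can be sketched as follows, assuming the mass M = 1.66 x 10^53 kg quoted in the Hypersphere Cosmology entry further down this page; the printed figures shift with the values adopted for m, G, c and ħ, so they will not reproduce the numbers above digit for digit, though the ~10^20 linear grain survives:

    import math

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    hbar = 1.055e-34     # J s
    m = 1.66e53          # assumed mass of the universe, kg (value quoted later on this page)

    l_p = math.sqrt(hbar * G / c**3)              # Planck length
    L = G * m / c**2                              # antipode length, Gm/c^2 = L

    area = 4.0 * G**2 * m**2 / (math.pi * c**4)   # A = 4 pi r^2 with r = Gm/(pi c^2)
    bits = area / (4.0 * l_p**2)                  # Bekenstein-Hawking information in bits

    volume = 2.0 * L**3 / math.pi                 # hypersphere volume, 2 L^3 / pi
    planck_volumes = volume / l_p**3

    ratio = planck_volumes / bits                 # Planck volumes per bit
    print(f"bits ~ {bits:.2e}")
    print(f"Planck volumes ~ {planck_volumes:.2e}")
    print(f"Planck volumes per bit ~ {ratio:.2e}, "
          f"linear grain ~ {ratio ** (1/3):.2e} Planck lengths per bit")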

 

Thus the model predicts that:

*The universe will exhibit an effective ‘pixelation’ or ‘grain size’ or quantisation of spacetime at the Planck length and Planck time x 5.4 x {10^{20}}

Note that a derivation using the same Bekenstein-Hawking formula for a 13.8 bnlyr universe will give a different prediction because this model uses \frac{{2{L^3}}}{\pi } for the hypersphere volume rather than \frac{{4\pi {r^3}}}{3}

 

*Note also that we have yet to observe anything ‘real’ below these new limits.

 http://en.wikipedia.org/wiki/Orders_of_magnitude_(length)

http://en.wikipedia.org/wiki/Orders_of_magnitude_(time)

Plus this may have interesting implications for the Heisenberg uncertainty relationships.

 

 

Thursday, 04 December 2014 09:15

Hypersphere Rotation

The Gödel 3d rotating universe has this solution derived from General Relativity: -

W = 2\sqrt {\pi {\rm{Gd}}}

“Matter everywhere rotates relative to the compass of inertia with an angular velocity equal to twice the square root of pi times the gravitational constant times the density.” – Gödel.

So, squaring and substituting mass over volume for d yields:

{W^2}= 4πGm/v

Substituting 4/3 π{{\rm{r}}^3}  for the volume of a sphere yields:

{W^2} = 3{\pi ^2}Gm/{r^3}

Substituting r = \frac{{3Gm}}{{{c^2}}}  the formula for a photon sphere (where orbital velocity equals lightspeed) yields:

{{\rm{W}}^2} = \frac{{{{\rm{\pi }}^{2{\rm{}}}}{{\rm{c}}^2}}}{{{{\rm{r}}^2}}}

Taking the square root:

(1*)  W = \frac{{\pi c}}{r}

Substituting L for r as we wish to find the solution for a hypersphere:

(2*) W = \frac{{\pi c}}{L}

Squaring yields:

{{\rm{W}}^2} = \frac{{{{\rm{\pi }}^{2{\rm{}}}}{{\rm{c}}^2}}}{{{{\rm{L}}^2}}}

Substituting GM/L = {c^2}  the formula for a hypersphere yields:

{{\rm{W}}^2} = \frac{{2{\rm{\pi Gm}}}}{{{{\rm{V}}_{\rm{h}}}}}

Which we can also express as:

W=\sqrt {\frac{{2{\rm{\pi Gm}}}}{{{{\rm{V}}_{\rm{h}}}}}}   or as W = \sqrt {2{\rm{\pi Gd}}}

Substituting W = 2πf in (1*) and (2*), to turn angular velocity into frequency, we obtain:

f = \frac{{\rm{c}}}{{2{\rm{r}}}} for a sphere and f = \frac{c}{{2L}} for a hypersphere.

Centripetal acceleration a, where a = \frac{{{v^2}}}{r} for a sphere and a = \frac{{{v^2}}}{L} for a hypersphere, yields:

a = \frac{{{c^2}}}{r} for a sphere and a = \frac{{{c^2}}}{L} for a hypersphere.

Now for a hypersphere where \frac{{{\rm{Gm}}}}{{\rm{L}}} = {{\rm{c}}^2}, a centripetal acceleration a, of a = \frac{{{\rm{Gm}}}}{{{{\rm{L}}^2}}} or more simply a = \frac{{{c^2}}}{L}, will exist to exactly balance the centrifugal effects of rotation.

In a hypersphere of about L = 13.8 billion light years, the rotation will correspond to about 0.0056 arcseconds per century. As this consists of a 4-rotation of the 3d ‘surface’, visualisation or measurement of its effects may prove problematical. The rotation should eventually cause every point within the 3d ‘surface’ to exchange its position with its antipode point over a 13.8 billion year period, turning the universe into a mirror image of itself.
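
A short sketch of that rotation rate, assuming W = πc/L from (2*) above and L = 13.8 billion light years; the exact arcsecond figure depends on the value taken for L:

    import math

    c = 2.998e8                          # m/s
    LIGHT_YEAR = 9.461e15                # metres
    L = 13.8e9 * LIGHT_YEAR              # assumed antipode length, ~13.8 billion light years

    W = math.pi * c / L                  # angular velocity from (2*), rad/s
    century = 100 * 365.25 * 86400       # seconds in a century
    arcsec_per_century = W * century * (180.0 / math.pi) * 3600.0
    print(f"W ~ {W:.2e} rad/s ~ {arcsec_per_century:.4f} arcsec per century")

    # Half a turn (pi radians) at this rate takes L/c seconds, i.e. ~13.8 billion years,
    # which matches the antipode-exchange period described above.
    years = (math.pi / W) / (365.25 * 86400)
    print(f"antipode exchange period ~ {years / 1e9:.1f} billion years")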

Note that if the universe does have the same number of dimensions of time as of space then it will rotate back and forth between matter and anti-matter as well.

A 4-rotation does not submit to easy visualisation but consider this: -

A needle rotated in front of an observer and viewed in the plane of rotation will appear to shrink to a small point and then become restored to its original length with further rotation.

 Likewise a sheet of card will appear to shrink to a line and then expand back into a plane if rotated in front of an observer.

In both cases the apparent shrinkage and expansion depends on the observer not seeing the needle or card in their full dimensionality.

A cube (or a sphere) rotated through a fourth dimension would appear to shrink to a point and then expand back to its original size to an observer viewing it in only three dimensions and ignoring the curvature.

Monday, 01 September 2014 06:31

Hypersphere Redshift

The alternative mechanism for redshift works as follows:

Redshift = Z = \frac{{{\lambda _o}}}{{{\lambda _e}}}  - 1

Where {\lambda _o}= observed wavelength, {\lambda _e}= expected wavelength, and the -1 simply starts the scale at zero rather than 1.

Now wavelength, \lambda , times frequency, f, still always equals lightspeed, c .

{\lambda _{e}}{f_e} = {\lambda _o}{f_o} = c

However

{\lambda _e}{f_o}< c

{\lambda _e}{f_o}= c - \sqrt {dA}

Where d = astronomical distance, A = Anderson acceleration. The Anderson acceleration (the small positive curvature of the hypersphere of the universe) works against the passage of light over the astronomical distance, d. It cannot actually decrease lightspeed but it acts on the frequency component.

So substituting {f_o}= \frac{c}{{{\lambda _o}}}

We obtain \frac{{c{\lambda _e}}}{{{\lambda _o}}} = c - \sqrt {dA}

Therefore Redshift, Z = \frac{c}{{c - \sqrt {dA} }}- 1

Thus Z becomes just a function of d, astronomical distance, not recession velocity.

To put it in simple terms, redshift does not arise from a huge and inexplicable expansion of the underlying spacetime increasing the wavelength; it arises from the resistance of the small positive gravitational curvature of hyperspherical spacetime to the passage of light decreasing its frequency.

Thus the ‘Hubble Constant’ has a value of precisely zero kilometres per second per megaparsec, but the ‘Hubble Time’ does give a reasonably accurate indication of the temporal horizon and hence the spatial horizon/antipode distance of the hypersphere of the universe.

 

Redshift and Distance.

We can rearrange the following equation derived from considerations of the effect of a positive spacetime curvature on the frequency of light from distant galaxies: -

Z = \frac{c}{{c - \sqrt {dA} }} - 1

To give an expression for cosmological distance in terms of redshift; -

d = \frac{{{c^2}}}{A}\left( {1 - \frac{1}{{Z + 1}}} \right)

This simplifies to: -

d = L\left( {1 - \frac{1}{{Z + 1}}} \right)

Graphically we can represent this as: -
 

The vertical scale represents Redshift.

The horizontal scale represents observer to antipode distance L, in both space and time in the Hypersphere Cosmology model. In the conventional big bang model it would represent only the time since the emission of the light.

Note that when Z = 0, d = 0. When Z = 1, d = \frac{L}{2}. When Z = 10, d ~ 0.91 L.

And when Z →∞, d = L
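
A minimal sketch of this distance-redshift relation, taking L as roughly 13 billion light years as found from the supernova data elsewhere on this page:

    LIGHT_YEAR = 9.461e15                # metres
    L = 13e9 * LIGHT_YEAR                # assumed antipode distance, ~13 billion light years

    def distance_from_redshift(z):
        """d = L * (1 - 1/(Z+1)), the relation described above."""
        return L * (1.0 - 1.0 / (z + 1.0))

    for z in (0.0, 1.0, 10.0, 1000.0):
        d = distance_from_redshift(z)
        print(f"Z = {z:>6}: d = {d / L:.3f} L")
    # Z = 0 gives d = 0, Z = 1 gives d = L/2, Z = 10 gives d ~ 0.909 L,
    # and d approaches L as Z grows without limit.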

A consideration of Frequency Shift gives a clearer and more intuitive picture of the effect of the positive spacetime curvature on light from cosmological sources: -

\frac{{{f_o}}}{{{f_e}}} = \frac{{c - \sqrt {dA} }}{c}

Wednesday, 02 July 2014 16:50

HyperSphere Cosmology

 

                       Hypersphere Cosmology. (5)    P J Carroll 1/4/20

Abstract. Hypersphere Cosmology presents an alternative to the expanding universe of the standard LCDM-Big Bang model. In Hypersphere Cosmology, a universe finite and unbounded in both space and time has a small positive spacetime curvature and a form of rotation. This positive spacetime curvature appears as an acceleration A that accounts for the cosmological redshift of distant galaxies and a stereographic projection of radiant flux from distant sources that makes them appear dimmer and hence more distant.

Hypersphere Cosmology.

‘Hypersphere’ in this model corresponds to the mathematical object known as a 3-Sphere or a Glome. Such a hypersphere has an antipode length and a volume as shown in the zeroth equation.

A/2 = 7.32 x 10^-10 m/s^2, the Pioneer Anomaly corrected for thermal recoil.

L = 1.23 x 10^26 metres or 13 billion light years, see 7) for an exact determination of L.

M = 1.66 x 10^53 kg.

Equations 1 & 2 yield a stable hypersphere, finite and unbounded in space and time. Galaxies rotate/vorticitate around the randomly aligned Hopf circles of the hyperspherical universe. The universe does not have an axis of rotation.

The small positive spacetime curvature of the hypersphere appears as an omnidirectional deceleration A, which increases the wavelength and decreases frequency of light over long distances, giving rise to both redshift and time dilation.

 

The omnidirectional deceleration A also slightly modifies all accelerations, and this results in a general very small boost to all orbital velocities. Thus, large hyperspheres cannot form within the hypersphere of the universe as their orbital velocities would tend to exceed lightspeed. Singularities cannot form. Black holes become unstable as hyperspheres form inside them.

Hyperspherical lensing converts the distance observed to the actual distance, arising from stereographic projection of flux in a hypersphere. In this diagram the circle represents the Glome. In a Glome, the antipode distance L equals half the ‘circumference’.

 

Here we calculate the apparent distances of Type 1A supernovae from the flux given by their apparent magnitudes in Perlmutter et al., https://arxiv.org/pdf/astro-ph/9812133.pdf

See the calculations here https://www.specularium.org/hypersphere-cosmology/item/270-type-1a-supernovae-and-hypersphere-cosmology

When combined with the redshift-distance relationship these results show an antipode distance of L = 1.23 x 10^26 metres, 13 billion light years.

No accelerating expansion exists. Dark energy does not exist.

Galactic rotation curves increase over Newtonian expectations, re Gomel & Zimmerman. Dark matter does not exist.

See an example calculation here: https://www.specularium.org/component/k2/item/290-rotation-of-the-triangulum-m33-galaxy

The application of the Bekenstein-Hawking Conjecture that ‘the information content of a spatially closed volume depends on its surface area in Planck units’ yields something of the order of 10^-20 units of information per Planck length within the hypersphere.

 This raises the effective level of the Uncertainty Principle and suggests a reason for the mass and number of nucleons in the universe.

11) Commentary. The Hypersphere Cosmology model seeks to replace the standard LCDM Big Bang cosmological model with something approaching its exact opposite.

In HC the universe does not expand; it remains a finite but unbounded structure in both space and time with spatial and temporal horizons of about 13bn light years and 13bn years. The HC universe does not collapse because its major gravitationally bound structures all rotate back and forth to their antipode positions over a 26bnyr period about randomly aligned axes, giving the universe no overall angular momentum and no observable axis of rotation.

The small positive spacetime curvature of the Glome type Hypersphere of the universe has many effects: it redshifts light traveling across it, it lenses distant objects making them look further away, it prevents smaller hyperspheres or singularities forming within black holes, and it gradually causes black holes to eject mass and energy.

In HC the CMBR simply represents the temperature of the universe, and it consists of redshifted radiation that has reached thermodynamic equilibrium with the thin intergalactic medium***

The universe will appear to observers as having the flux from distant sources in stereographic projection due to the geometry of the small positive spacetime curvature.

In terms of the enormously dimmed flux from very distant sources the antipode will appear to lie an infinite distance away, and beyond direct observation, even though the antipode of any point in the universe lies about 13bnlyr distant.

The antipode thus in a sense plays the anti-role of the Big Bang Singularity in LCDM cosmology. We can never observe either, but instead of an apparently infinitely dense and infinitely hot singularity a finite distance away in a universe undergoing an accelerating expansion in space and time, Hypersphere Cosmology posits an Antipode that will appear infinitely distant in space and time and infinitely diffuse and cold, even though actual conditions at the antipode of any point will appear broadly similar on the large scale for any observer anywhere in space and time within the hypersphere.

HC and LCDM-BB can both model many of the important cosmological observations, but in radically different ways. HC has more economical concepts, as a small positive spacetime curvature alone can account for redshift without expansion and for the dimming of distant sources of light without an accelerating expansion driven by dark energy, and it also offers a singularity free universe.

Neither model really explains where the universe ‘came from’, but we have no reason to regard non-existence as somehow more fundamental than existence.

The evidence for one-way cosmological evolution remains mixed. The entropy of a vast Glome Hypersphere may remain constant as a function of its hypersurface area. On the very large scale the universe needs only the ability to break neutrons to maintain constant entropy. Very distant parts of the universe appear to contain structures far too large to have evolved in the BB timescale.

Hypersphere addenda. Notes and Further Speculations

2) Gödel derived an exact solution of General Relativity in which ‘Matter everywhere rotates relative to the compass of inertia with an angular velocity of twice the square root of pi times the gravitational constant times the density’. This solution became largely ignored because of the apparent lack of observational evidence for an axis of rotation. However, in a hypersphere the galaxies can rotate back and forth to their antipode positions about randomly aligned axes (most probably around the circles of a Hopf Fibration of the hypersphere), thus resulting in a universe with no net overall angular momentum. Such a ‘Vorticitation’ would stabilise a hypersphere against implosion under its own gravity and result in the universe rotating at a mere fraction of an arcsecond per century – well below levels that we can currently observe.

3) Mach’s Principle can only work in a universe of constant size and density. Strong evidence exists to show that the gravitational constant and inertial masses have remained constant for billions of years.

***The Cosmic Microwave Background Radiation (CMB/CMBR) may consist of redshifted trans-antipodal light that has reached thermodynamic equilibrium with the thin intergalactic medium, but Hyperspherical Lensing raises another possibility: -

Widely spatially separated observers within the universe may see a quite different CMBR or perhaps none at all, because the CMBR originates from near their antipodes.

Consider that a galaxy like our own lies near to our antipode point. Such a galaxy will have a spherical Hot Gas Halo extending for several hundred thousand light years around it, accounting for about half of its mass, and at a temperature of about 1 million Kelvin.

The angular size of such a distant galaxy in Euclidean space would come out at a paltry ~10^21 m / ~10^26 m radians, about 1/10,000th of a radian.

However, lensing at a redshift of 300,000 would give it an apparent angular size of about 6 radians (it would thus fill the whole sky) and reduce its temperature to around 3 Kelvin.
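
The temperature part of that estimate follows from the usual 1/(1+z) cooling of a thermal spectrum under redshift; a sketch, assuming the one million Kelvin halo and the redshift of 300,000 quoted above:

    T_halo = 1.0e6        # assumed hot gas halo temperature, Kelvin
    z = 300_000           # assumed redshift of the trans-antipodal halo light

    T_observed = T_halo / (1.0 + z)    # thermal spectra cool as T/(1+z) under redshift
    print(f"observed temperature ~ {T_observed:.2f} K")   # roughly 3.3 K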

The lower temperature light from the starry part of the distant galaxy would become redshifted far beyond observability, but all the light from the Hot Gas Halo would end up here, coming in from every direction, providing a microwave background radiation that remains location dependent rather than cosmically uniform.

   

Created by Peter J Carroll. This upgrade 1/4/20.

Wednesday, 02 July 2014 16:10

Ur Chaos Theory

In this paper 'Chaos' denotes randomness and indeterminacy, not the mere so-called 'deterministic chaos' of systems with extreme sensitivity to initial conditions.
 
For many decades it has seemed from theoretical considerations that true Chaos or indeterminacy applies only at the Planck length scale of ~10^-35 metres or the Planck time scale of ~10^-44 seconds. This hardly accords with either life as we experience it or what we know about the behaviour of fundamental particles or our own minds.
 
Evidence mounts for a substantial upward revision of the level at which Chaos actually shapes the behaviour of the universe and everything within it, including ourselves.
 
This paper considers the emerging holographic universe principle in terms of vorticitating hypersphere cosmology, (VHC).
 
The Bekenstein-Hawking conjecture asserts a proportionality between the surface area of a gravitationally closed structure like a black hole or a hypersphere, and its internal entropy.
 
The Claude Shannon information theory relates information to entropy.

Combined, these theories lead to the idea of a Holographic Universe in which each Planck area of the surface of the universe carries one bit of information. These bits of information serve to define what happens in all the Planck volumes within the universe.

This results in far less than one bit of information for each Planck volume within the universe. A holographic universe thus contains a substantial information deficit, and so it behaves with more degrees of freedom, or more chaotically, than simple quantum theories suggest.

In VHC the calculation of the ratio becomes quite simple. L, the antipode length of the universe has the value lp times U, where lp equals the Planck length, ~ 10^-35m, and U represents the huge number ~ 10^60. (U also relates the time and mass of the universe to the corresponding Planck quantities).

Thus if Vu represents the volume of the universe and Vp represents the Planck volume then
(Vu / Vp) divided by (Au / Ap) = U
where Au equals the surface area of the universe and Ap equals the Planck area.

Thus the ratio of bits to Planck volumes comes out at only 1 bit per 10^60 Planck volumes.

This may seem a counter-intuitively small figure; however, if we take the cube root of this volume figure to reduce it to a length measurement then it comes out at 10^20 Planck lengths. As the Planck length equals ~10^-35m this results in a figure of ~10^-15 metres, merely down at the fundamental particle level.

The information content of the universe cannot then specify its conditions down to the Planck scale, but only down to the scale of ~10^-15 metres, or the corresponding time of just ~10^-24 seconds.
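
A one-line sketch of those scales, assuming U ~ 10^60 and the usual Planck length and time; the two printed figures correspond to the ~10^-15 metres and ~10^-24 seconds quoted above:

    U = 1e60              # assumed Ubiquity constant
    l_p = 1.6e-35         # Planck length, metres
    t_p = 5.4e-44         # Planck time, seconds

    grain = U ** (1.0 / 3.0)              # cube root of U, ~1e20
    print(f"length grain ~ {l_p * grain:.1e} m")   # ~1.6e-15 m
    print(f"time grain   ~ {t_p * grain:.1e} s")   # ~5.4e-24 s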

Now we do not actually observe any real quantities below these levels anyway. Although fundamental particles theoretically exist as dimensionless points they have effective sizes down at the 10^-15m size when they interact, and similarly nothing ever seems to happen in less than 10^-24 seconds. These lower limits seem to represent the actual pixelation level of the universe, its 'grain size', and any real event either occupies a whole pixel or it doesn't happen. This minimum level of divisibility represents the point where causality ceases to apply and processes become indeterminate and Chaotic. Thus it's a whole lot more chaotic than Heisenberg thought.

This suggests that we should add the factor of the cube root of U to all of the Heisenberg uncertainty (indeterminacy) relationships, as in the following examples where D stands for Delta, the uncertainty, and h stands for slash-aitch, Planck's constant over 2 pi.

Momentum and Position. Dm Dp ~ h becomes Dm Dp ~ h (U)^1/3
and
Energy and time. DE Dt ~ h becomes DE Dt ~ h (U)^1/3
and also to the esoteric Entropy times temperature (SK) and time t, uncertainties which then becomes
DSK Dt ~ h (U)^1/3

This gives considerably more scope for the local spontaneous reversal of entropy that may allow the emergence of more complex forms from simpler ones.

The factoring in of cube root U seems to bring the indeterminacy of the world up towards the scale at which we actually experience it, and it has two other consequences: -

Firstly it answers all those sarcastic physicists who opine that quantum based magical and mystical ideas remain invalid because quantum effects occur at a level far below the scale of brain cells for example:

Typical nerve conduction energy equates to ~10^-15 joules and a typical nerve transmission time between neurones takes ~10^-3 seconds.

Applying this to DE Dt ~ h (U)^1/3 suggests that Ur-Chaos enhanced quantum effects will certainly come into play at a brain cell level corresponding to actual thinking, the activation of about 1,000 neurones per second.
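
A quick check of that claim on the page's own figures, comparing the characteristic action of a neural event with the enlarged indeterminacy bound; the cube root of U is taken as ~5.4 x 10^20 from the holometry calculation above:

    hbar = 1.055e-34          # J s
    U_cube_root = 5.4e20      # cube root of U, figure from the holometry entry above

    enhanced_bound = hbar * U_cube_root          # h-bar times U^(1/3), J s
    nerve_action = 1e-15 * 1e-3                  # ~10^-15 J over ~10^-3 s, J s

    print(f"enhanced bound ~ {enhanced_bound:.1e} J s")    # ~5.7e-14 J s
    print(f"nerve-scale action ~ {nerve_action:.1e} J s")  # ~1.0e-18 J s
    # The nerve-scale action falls well below the enlarged bound, which is the
    # sense in which the text argues enhanced indeterminacy reaches brain-cell scales.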

Thus Sir Roger Penrose's attempt (in The Emperor's New Mind) to hypothesize such effects merely at the intracellular microtubule level perhaps seems rather too conservative now.

Secondly the value of U depends on the size of the universe. Bigger universes have a lower ratio of surface area to volume than smaller ones and hence behave more chaotically, so the value of U will affect the physical laws of the universe. A bigger universe will for example have fundamental particles with larger effective sizes, thus changing chemistry and radiation. When we examine 'fossil' light from distant galaxies or even the fossil or geological record of our own fairly old planet it appears that the laws of physics have remained essentially unchanged.

Thus if we inhabit a universe subject to the holographic principle it cannot have expanded.

So welcome to the wonderful world of Ur-Chaos, where everything has twenty more orders of magnitude of Chaos than previous calculations suggested, but we suspected something like this all along.

The title Ur-Chaos Theory derives from two mnemonics, as it suggests both U-root(3) and Uber or 'greater' Chaos.

An afterthought. In a lyrical sort of way, one might note that the bigger one's universe, i.e. the wider one's horizons, the more chaotic and interesting one's life becomes.

Wednesday, 02 July 2014 16:03

Quantum & Probability

I have repeatedly asserted in many of my magical and scientific books and papers that this universe runs on probability rather than deterministic certainty, yet I have never formally demonstrated this. I assumed, perhaps unfairly, that everyone would have a familiarity with the arguments that lead to this assertion. However, I have received so many questions about it from those who seem to cling to various deterministic superstitions, from simple Newtonianism to theories of Predestination to old ideas about Cause and Effect, that I feel I must present a simple killer example to the contrary.


Herewith, perhaps the simplest proof that we inhabit a probabilistic rather than a deterministic universe. Philosophers have torn their hair out over this one for the last 50 years. It underlies the Schrödinger’s Cat Paradox, which to me doesn’t constitute a paradox at all, merely a realization that cause and effect operates only because the universe acts randomly.


Consider the so-called ‘Half-Life’ of radioactive isotopes. Every radioactive substance, whether made by natural processes, (e.g. Uranium) or by humans playing around with reactors, (e.g. Plutonium), has a half-life. This means that after a certain amount of time, half of the atoms in a sample will have gone off like miniature bombs and spat out some radiation and decayed into some other element. After the same amount of time has elapsed again, a further half of the remainder will have done the same leaving only a quarter of the original atoms, and so on. Highly unstable atoms may have a half life measured in seconds; somewhat more stable ones may have half lives measured in tens of thousands of years.


Now the half-life of a lump of radioactive material remains perfectly predictable so long as that lump consists of millions or billions of individual atoms. However that predictability depends entirely upon the behavior of individual atoms remaining entirely probabilistic and random. The half-life effect means that during the half-life period, half of the atoms will explode. So if one takes the case of a single atom, one can only say that it has an evens chance of exploding during its half-life period.


Suppose you threw two million coin tosses. If you got anything but close to a million heads and a million tails you would naturally suspect something non-random about the tossing procedure or the coin itself. Thus the nice smooth predictable exponential decrease in the number of unexploded atoms in a radioactive isotope, a half, a quarter, an eighth, and so on, with each passing of the half-life period, overwhelmingly suggests that the individual atoms behave randomly. Imagine that after tossing the two million coins you discarded all the tails and tossed the heads again, and then repeated the process on and on; you would expect to halve the number of coins each time so long as the number of coins remained fairly large. Random behavior means that the outcome of an event has no connection to its past; the coin may have come down heads in the previous toss but that gives no clue as to what it will do subsequently, a fact that many gamblers willfully ignore.


So here we have an odd insight: random behavior in detail leads to perfectly predictable behavior en masse. Indeed it seems difficult to see how anything but random behavior could lead to such predictability.
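
A small simulation makes the point: treat each atom as a fair coin tossed once per half-life period and the population halves with near-perfect regularity, even though no individual outcome can be predicted. A sketch, not tied to any particular isotope:

    import random

    random.seed(1)
    atoms = 1_000_000                 # start with a million undecayed atoms

    for period in range(1, 6):
        # Each atom independently has an evens chance of surviving each half-life period.
        atoms = sum(1 for _ in range(atoms) if random.random() < 0.5)
        print(f"after {period} half-life period(s): {atoms} atoms remain")
    # The counts track 500000, 250000, 125000, ... to within a fraction of a percent.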


So what, you may ask, okay, so individual atoms may behave randomly, but surely doesn’t human scale reality behave itself according to cause and effect?


Well sometimes it does reflect the apparently causal predictability of randomness in bulk, but often it does not. The Schrödinger’s Cat thought experiment provides a seminal example. In principle one can easily rig up a device to measure whether or not a single atom has exploded within its half life period, and then use that measurement to trigger a larger scale event like shooting a captive cat concealed in a box. The poor cat has only an evens chance of surviving the experiment and nobody can tell what happened until they look in the box afterwards. This thought experiment demonstrates a fundamental randomness and unpredictability about the universe. Schrödinger thought it up to demonstrate that the cause and effect based thinking to which science had become dedicated, and which also forms a central plank of our ordinary thinking and language structure, does not always accurately describe how the universe works.


It would seem that a lot of what goes on in the universe, particularly at the human scale, remains subject ultimately to the random behavior of individual atomic particles. The long term behavior of the weather, the fall of a dice that bounces more than about seven times, human decisions, they all seem to depend on atomic randomness to some extent.


And if the universe permits even a single random event, then its entire future history becomes unpredictable!


Not many philosophers have managed to get their heads around this insight yet, although it was first realized decades ago. However I did see recently an apologist for one of the monotheist religions claim that his god must therefore do his business by tweaking probability in favor of what he or his devotees want. That doesn’t look like such a bad idea. A tweak here, a tweak there, and after a few hundreds of millions of years he can evolve an image of himself from the primeval slime for company.


Of course chaos magicians claim something similar, a spell here, a spell there, and after a while modified probabilities should deliver the goods. However the Chaoists do have some experimental data in their favor. Quite a number of parapsychological experiments indicate that events at the atomic scale remain surprisingly sensitive to psychic meddling.


Perhaps you do not even need sentience to tweak probability. The concept of emergence suggests that whenever nature accidentally throws up something more complex or sophisticated or interesting then some sort of feedback may occur which makes its subsequent appearance more likely. The very laws of the universe, and the conventions of chemistry and biology, and perhaps even those of thought, may owe something to this mechanism.

At the atomic or quantum level, experiments have shown that matter and energy behave in a way that seems very strange compared to the way they seem to behave on the macroscopic or human sized scale.


At the quantum level even basic concepts such as causality or the idea of a particle having a single definite location and momentum seem to break down. The very idea of thing-ness itself seems inapplicable to quantum particles. Particles simply do not behave like tiny little balls, and few verbal analogies or visualise-able images provide much of a guide as to what they actually do.


Now the whole of the observable universe consists of systems made up of quantum particles, yet their almost unimaginably strange individual behaviours add up to give the world we see around ourselves which seems to run on entirely different principles.


Nevertheless it has proved possible to devise mathematical models of what quantum particles do. However these mathematical models involve the use of imaginary and complex numbers which do not work in quite the same way as simple arithmetic and they give answers in terms of accurate probabilities rather than definite yes or no outcomes.


All this has led to endless debate about the reality of what actually goes on in the quantum realm.


Some theorists have taken the position that the mathematics which model quantum physics represents nothing real, it just gives good results because we fudged it to fit the observations, and that the underlying physical reality remains impossible to comprehend in any other way at the moment.


Others have taken the position that quantum physics remains incomplete and that further discoveries will allow us to make more sense of it.
This paper will attempt to show that the hypothesis of 3 dimensional time can allow a novel reinterpretation of the observed phenomena of quantum physics which allows us to form some idea of its underlying reality.


At the quantum level the basic constituents of matter and energy behave with some of the characteristics of both waves and particles. To a simple approximation they seem to move around rather like waves spread out over space, but they seem to interact and to register on detectors as localised particles. Heisenberg’s uncertainty principle models this behaviour mathematically: the more a particle reveals its momentum the less certain does its position become, and vice-versa. This situation does not represent merely the technical impossibility of measuring both of these quantities simultaneously. It seems to represent the physical reality that a particle’s momentum becomes progressively more objectively indeterminate as its position becomes more objectively determinate, and vice versa, at least in the 4-dimensional spacetime in which we observe its behaviour. If particles did not have this indeterminate aspect to their behaviour they could not act as they do and we would have a completely different reality.


Such wave/particle behaviour appears in a simple, definitive, and notorious experiment known as the Double Slit Experiment. This experiment has numerous variations and elaborations, and it works just as well with energy particles like light photons or matter particles such as electrons. Apparently it will even work with quite large clusters of atoms such as entire C60 buckyballs. Feynman identified it as encapsulating the entire mystery of quantum behaviour.


In the double slit experiment a single particle apparently passes somehow through both of two slits in a screen simultaneously and then recombines with itself to form a single particle impact on a target screen or detector. If the experimenter closes one of the slits, the particle can still hit the target but it will hit it in a different place. If both slits remain open the particle’s position on the target indicates that the presence of two open slits has somehow contributed to the final position of the particle. Now common sense dictates that a particle cannot go through two separate slits simultaneously, although a wave can do this. So how does something that arrives at its target as a particle apparently switch to wave mode during flight to get part of itself through both slits at once and then switch back to particle mode for landing? Big objects like aircraft never seem to do this sort of thing, even though they consist of particles which can.


The mathematical model of a particle in flight describes it as in a state of superposition, the notorious quantum condition in which a particle can apparently exist in more than one state simultaneously. The so called wave function of a particle does not constrain it to choose between apparently mutually contradictory qualities, like having two different positions, or two different spins in opposite directions. The choice seems only to occur when the particle gets measured or hits something and then manifests just one of its possible alternatives. The choice it makes however seems completely random when it takes its quantum jump.


This has led theorists into endless arguments and debates about what a quantum particle really ‘is’. Such Quantum Ontology seems very questionable; we cannot really ask questions about being because we do not actually observe any kind of being anywhere. Basic kinetic theory shows us this: nothing just sits there and exhibits being. Everything actually has a lot of internal atomic motion and exchanges heat and radiation with its environment. To observe something just being we would have to stop it doing anything at all by freezing it to absolute zero, but at that point it would simply permanently cease to exist.


Some theorists have argued that the wave function which describes particles as existing mainly in superposed states cannot really model reality because we always observe things in singular rather than superposed states. However this seems debatable, whilst photons and electrons for example do have singular characteristics when we catch them in detectors or at the point in time when they interact with other particles, most of the properties of materials that we observe actually arise quite naturally from a superposition of states. For example the strength of metallic crystals, the strength of bonds between the atoms in molecular gasses, the behaviour of molecules like benzene and indeed the behaviour of atoms in general, all find explanation in terms of particles occupying superposed states. Thus it seems more reasonable to suppose that for most of the time, matter and energy do superposition, and that when particles undergo measurement or other forms of interaction, they do monoposition. In the HD8 model, monoposition corresponds to the manifestation of a particle in one dimensional time, whilst superposition corresponds to what it does in three dimensional time.


A lot of theorists have a philosophical objection to the way particles seem to make a completely random choice when reverting from superposition to monoposition. They do not like the way in which the wave function seems to collapse in a completely probabilistic way without a sufficient cause for the observed effect. Science has depended on the principle of material cause and effect for centuries they argue, and we cannot abandon it just because we cannot find it in quantum behaviour.


Nevertheless the probabilistic collapse of the wave functions does lead to the fairly predictable behaviour of matter and energy as seen on the human scale. Toss a single coin and either alternative may result, but toss a million of them and the deviation from half a million heads will rarely stray beyond a fraction of one percent. Probability can thus lead to near certainty and most of the apparent cause and effect relationships that we observe in the human scale world can in fact arise precisely because quantum superpositions collapse randomly. Indeed, assuming that superpositions mainly define the state of the universe for most of the time, then turn the idea on its head and consider how bizarrely it might behave if those superpositions collapsed non-randomly. Ordinary lightbulbs might suddenly start emitting laser beams, the radioisotopes in smoke detectors might sometimes go off like small nukes instead of sedately decaying.


The idea of quantum particles in a superposed wave function mode perhaps becomes easier to understand in the HD8 model where 3 dimensions of time complement the 3 dimensions of space. A superposition of two or more states in the same place can occur if the extra states lie in the plane of time orthogonal to what we perceive as ordinary time. In effect orthogonal time provides a sort of pseudospace for parallel universes. I am not implying here that I have doppelgangers for example, or perhaps millions of them, in full scale parallel universes, but merely that I could have a slight thickness in sideways time which allows the superpositions of my constituent quantum particles to manifest my normal electro-chemical properties that ordinary 4 dimensional classical mechanics cannot explain. (I have a suspicion that such superpositions may also have some relevance to mental and perhaps parapsychological phenomena as well, but let’s not open that can of worms for a while yet.)


The notorious Double Slit Experiment and its variants also reveal another aspect to quantum superposition, the phenomena of quantum entanglement. When presented with two possible flightpaths to follow, some quantum phenomena appear to take both paths but the parts which go their separate ways seem to remain in some sort of instantaneous communication with each other, no matter how great the spatial distance between them. This entanglement of widely separated parts appears to violate the Special Relativistic principle that no signal can travel faster than light. However quantum physicists usually point out that no proper information actually gets transmitted because of the random nature of the outcome.


Now, as with superpositions, we cannot observe quantum entanglements directly, we can only make observations from which we can infer that the observed result must have come from an entanglement. HD8 explains entanglement in terms of multiple histories. When an entanglement collapses due to an interaction or a measurement, some of the alternative histories collapse. Alternative histories exist as superpositions in sideways time.


Entanglements occur when a single superposed quantum state splits into two (or more) parts, each of which then travels away in a superposed state. The original quantum state could consist of a single particle which apparently splits into two as in the double slit experiment or it could consist of a pair of particles in an intimate contact which forces them into the same quantum state. Now when one of the parts of the entangled system falls out of superposition because someone measures it, or because it interacts and decoheres into its environment, then the other half has to fall out of superposition into the opposite mode. If one component has say, spin up, then the other will have spin down, or the corresponding other half of a number of other quantum properties.


Thus if the initial quantum state has the superposed qualities of AB and splits into two parts, both parts seem to carry the AB superposition. Yet if we intercept one of the parts and find that it appears to us as A, then we know with certainty that the other part will have to manifest as B. Experiment has repeatedly confirmed this over macroscopic distances, and we have no reason to suspect that it will not work over astronomical or cosmic distances. We know that we cannot explain this by simply assuming that the original superposed state splits into an A component and a B component because in experiments where we recombine the two parts they recombine in such a way as to show that each must have carried the AB superposition.


Ordinary cause and effect cannot explain what happens in entanglement. From the point of view of classical physics the phenomenon seems completely impossible. Whatever mechanism constrains each part of an entanglement to jump into the opposite mode that the other part jumps to, must either act instantaneously across arbitrarily large distances, in violation of relativity, or it must act retroactively across time.
A non-local effect across space seems the least likely alternative. It would require that something from one half of the entangled pair somehow found its way to the other half which could have travelled anywhere within billions of cubic miles of space. Considering that the universe contains unimaginably vast numbers of virtually identical entangled pairs of quantum states, this seems fantastically improbable.
Temporal retroactivity on the other hand, requires no more than a certain two way traffic across time which can allow for the cancellation of some of the alternate histories.


Time does not appear to run both forwards and backwards on the macroscopic scale because energy dissipates in what we call entropy, and because gravity acts attractively only. Nevertheless, nothing seems to constrain quantum processes to progress in only one temporal direction, and some interpretations suggest that they actually proceed in both directions at once. In particular Cramer’s Transactional Interpretation of quantum physics models a photon exchange as comprising a sort of superposition of a photon travelling from emitter to receiver and an antiphoton travelling backwards in time from receiver to emitter. This perhaps explains an oddity of Maxwell’s equations of electromagnetism. These time symmetric equations also yield a set of solutions for so-called advanced waves travelling backwards in time, but physicists usually quietly ignore them because they appear materially indistinguishable from the so-called retarded waves which travel forwards in time.
The material indistinguishability of a quantum process proceeding forwards through time from the corresponding anti-process proceeding backwards through time has intriguing implications.


We can interpret it as an exchange between the past and the future which has bizarrely counterintuitive aspects. The event which will end up as the past can send multiple contradictory signals to the future. These signals eventually collapse at a moment of interaction or measurement to give a singular present. However those signals which do not manifest in the present cannot send a time reversed signal back to the past to complete the exchange, so that those particular signal paths cease to exist, effectively modifying the past. Thus when a superposition or an entanglement collapses it erases the multiple history of its own past. The whole concept of being or ‘is-ness’ falls apart here, and not just for particles flying around in specially contrived apparatus. As quantum systems generally behave as if they had just collapsed out of superposition or entanglement, and as quantum systems underlie the behaviour of all matter and energy, we must inhabit an almost unimaginably stranger world than our ordinary senses reveal.


Some theorists have spoken of the Omnium or the Multiverse underlying and supporting the mere surface reality that we directly experience. At the time of writing we have few concepts and little vocabulary to describe the signals exchanged across time and space out of which our perceived reality coalesces. Antiparticles in the conventional sense cannot act as the agents of information transfer into the past. Neither can particles as we understand them in 4D spacetime, carry superpositions and entanglements into the future. However we can form a partial visualisation of the processes involved by using reduced dimension graphs derived from a modified Minkowski formula (itself derived from Pythagoras), for the distance D, between points in 6 dimensions, 3 of space and 3 of time.


In simplified form the equation looks like this:

D = [s^2 - (ct)^2 + (ct1)^2 + (ct2)^2]^1/2

Where s = spatial distance,
t = time, (reference direction),
c = lightspeed,


and t1 and t2 represent the axes of the plane of imaginary time orthogonal to the temporal reference direction, which acts as a kind of pseudospace.


This equation yields two obvious null paths in addition to that conventionally reserved for ordinary light ( s = 1, ct = 1, ct1&2 = 0),
Namely,


1) s = 0, ct = ct1&2.


And


2) s ~ ct, where ct1&2 >0


The graphs of these equations represent Superposition and Entanglement respectively.


See Null Path Superposition & Entanglement, Figure 1 and 2.


Figure 1 shows a mechanism for superposition, representing a superposed quantum state at rest relative to an observer, so we can omit spatial coordinates for simplicity.


An event at the origin subtends a plethora of superposed states in imaginary time pseudospace, represented by the circle (which appears in perspective as an ellipse), at right angles to the temporal reference direction.


These states collapse at a further unit distance along the t-axis to produce a random unitary outcome. As soon as one of the paths from the origin to the final state completes an exchange, all other paths collapse and the exchange path becomes the 'new' temporal reference direction. This effectively changes history.


Figure 2 shows a mechanism for entanglement. The Figure shows single dimensions of space and time (reference directions for a moving particle) and a single dimension of orthogonal time for simplicity. The Figure could, for example, represent entangled particles flying apart at lightspeed in opposite spatial directions. Note that as each particle flies away, it diverges into two parts in the plane of imaginary time. This represents the superposed condition of each of the particles. In practice the two particles may well have a number of superpositions, but we cannot show these with only a single dimension of imaginary time in the Figure. When one of the particles interacts, only one of the two paths shown will actually complete a transaction and become real. The other path will then cease to exist all along its history. This will have the effect of cancelling the superpositions all the way back down the time line to the origin, hence the other particle will then have to manifest with the opposite particle property when it interacts. In the new history of the two particles it will then appear that each set off from the origin with one of the two opposite properties.


What actually happens between moments of particle interaction remains an interesting question. Any attempt to look at the intervening period merely shortens it to the point at which we choose to take a measurement. Except at the point where a particle interacts it seems to consist of a multitude of superposed and/or entangled states. However as soon as it interacts, all but one of the particles multitudinous states become eliminated from history. At that point the path which remains in imaginary time becomes the real time history of the particle.


Some theorists have argued that we cannot answer the question in principle. Others have gone further and opined that we cannot even ask such a question because it would beg an answer in terms of objective reality about a realm in which objective reality does not apply. In other words they dismiss the question as being as fruitless as theology.


However a combination of the hypothesis of 3-dimensional time with Heisenberg’s Uncertainty (Indeterminacy) Principle can perhaps offer some sort of an answer.


The Uncertainty principle allows nature to violate the conservation of pretty well any particle property or behaviour, including that of existence itself, so long as the violations remain very small and/or get paid back very quickly. Planck’s constant sets a precise limit to the imprecision with which quantum phenomena can behave.


Thus for example; -

DE Dt = h and Dp Dl = h

Where D (delta) E means energy indeterminacy,
Dt means temporal (durational) indeterminacy,
Dp means momentum indeterminacy
Dl means spatial (positional) indeterminacy
h means Planck’s constant over 2 pi.

This means that the universe can allow the spontaneous creation of particles from the void or the background energy if you prefer. Such particles persist for a time inversely proportional to their masses so massless photons can persist indefinitely whilst massive fermion/antifermion pairs can persist for only the briefest instants. Now HD8 models bosons as consisting of particle/reverse particle pairs, and we can conceive of fermions as having a similar configuration whilst in flight (between interactions).
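
As a rough worked example of that inverse relation, assuming DE Dt ~ h and taking an electron/positron pair whose rest energy sets DE:

    m_e = 9.109e-31       # electron mass, kg
    c = 2.998e8           # m/s
    hbar = 1.055e-34      # J s

    delta_E = 2.0 * m_e * c**2            # energy borrowed to make an electron/positron pair
    delta_t = hbar / delta_E              # DE Dt ~ h gives the permitted duration
    print(f"pair energy ~ {delta_E:.2e} J, permitted lifetime ~ {delta_t:.1e} s")
    # Roughly 6e-22 seconds; a massless photon (delta_E -> 0) faces no such limit.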


Now conventional theory calls for the existence of so called virtual particles to account for the electrostatic and magnetic fields and for various other fields. HD8 rejects this idea and suggests instead that such fields arise from the warping of spacetime in various dimensions in the vicinity of charges. To account for the behaviour of fields, virtual particles would have to have properties at variance with relativity. The distinction between so called real particles and virtual particles has become progressively blurred in Standard Theory, particularly as we can now make apparently real particles perform the double slit trick. I think that the hypothesis of virtual particles has outlived its usefulness.
It would seem more reasonable to describe all particles as real at the instant of their interaction, and that whilst they remain in flight as it were, they spread out into multiple form in the pseudospace of imaginary time, using the freedom of quantum indeterminacy.


At the risk of undermining the idea that particles in between interactions do have some actual reality we should perhaps consider calling them Imaginary Particles.


Consider again the double slit experiment, but now in terms of real and imaginary particles. An electron in an imaginary state in orbit about an atom in a light source emits an arbitrarily large number of imaginary photons towards two slits in a screen. As two imaginary photons, one having gone through each slit, fly towards the detector, their higher dimensional spacetime curvatures interact and they combine to hit an imaginary electron in a detector. However only one of the two imaginary photons can become real by making a time reversed exchange with the electron at the origin of its path. As it does so it becomes momentarily real, as does the electron it interacts with.


In summary something does actually go through both slits but after the completion of the exchange only one of the paths remains real (and we cannot tell which). The electron at the emitter becomes real only momentarily as it emits. Both electrons go back into superposed states around their atomic nuclei.


Particles spend only a vanishingly small part of their time in interactions that confer a momentary reality upon them. So almost the entire universe consists of imaginary particles at any moment.


We ourselves must also consist mainly of particles in an imaginary condition. Imaginary particles interact with each other to create the reality that our senses detect, but can the imaginary part of ourselves interact more directly with the Omnium, that Multiverse of superposition and entanglement underlying our perceived reality?


I propose to return to this question in a later paper, but for now I leave you with an odd thought.


Thought itself feels to me very much like a series of collapses of superposed and entangled mental states into real ideas, actions and decisions. It seems as though I fill my head with ideas and then let them become a bit fuzzy, and then somehow something definite springs into reality. It feels like a stochastic process. Most of it of course ends up in the wastebasket, the most powerful tool of the thinker according to Einstein.

Wednesday, 02 July 2014 15:51

Starships

Houston, we have a problem.


If the human race does not develop starships that can reach other star systems as easily as sea freighters or jet planes now traverse the Pacific Ocean, it has no long-term future. Either an asteroid will eventually smash the planet, or we shall exhaust its natural resources, or the lack of a new frontier will lead to cultural decay. Our star will eventually run out of fuel anyway.

At the time of writing we face only a single obstacle to interstellar travel: physics as we understand her in the opening years of the twenty-first century. We have the technology to sustain life support in space, and telemetry and astrogation present no real problems, but propulsion by reaction-thrust (firework) principles remains pathetically inadequate. The billions of dollars expended annually worldwide to improve chemical or even nuclear reaction-thrust vehicles cannot in principle yield dividends in fields other than materials science, warfare, orbital astronomy, telecommunications, and a very modest exploration of the local solar system.

Any serious attempt to penetrate interstellar space must come from ideas developed in fundamental physics. No conceivable increment in rocket-style technology can possibly get us anywhere useful. We may as well abandon schemes based on Bussard ramjets, Orions, or matter/antimatter reaction-thrust vehicles. Such schemes compare to crossing the Pacific on surfboards. Whilst they remain just about possible, they would represent a heroic, if not suicidal, waste of effort compared to the quest to develop more useful means of transport.
Even if such drives could achieve a comfortable one gravity of acceleration for a couple of years without an impossible power input, and thereby reach relativistic speeds, interstellar gas particles would then hit the vessel like hard gamma rays, and dust particles would strike like battleship shells. Thus we have to look within and beyond special and general relativity or quantum physics for some principle which may allow us to 'warp' or 'teleport' across space rather than hammer our way through it.
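To put rough, illustrative numbers on that hazard (a 0.9 c cruise speed and a one-microgram dust grain are my own assumptions, not figures from the text):

# Rough kinetic energies, in the ship frame, of interstellar material striking
# a vessel at relativistic speed. Speed and grain mass are illustrative choices.
import math

C = 2.99792458e8            # speed of light, m/s
M_PROTON = 1.67262192e-27   # kg; a hydrogen atom, near enough
M_DUST = 1e-9               # kg; an assumed one-microgram dust grain
BETA = 0.9                  # assumed cruise speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - BETA**2)

def kinetic_energy(mass_kg):
    """Relativistic kinetic energy (gamma - 1) m c^2, in joules."""
    return (gamma - 1.0) * mass_kg * C**2

ke_gas = kinetic_energy(M_PROTON)
ke_dust = kinetic_energy(M_DUST)

print(f"hydrogen atom: ~{ke_gas / 1.602e-10:.1f} GeV per impact")      # gamma-ray photons are typically ~MeV
print(f"1 microgram grain: ~{ke_dust / 1e6:.0f} MJ, ~{ke_dust / 4.184e6:.0f} kg of TNT")

A single hydrogen atom then arrives carrying over a GeV, well above ordinary gamma-ray photon energies, and the microgram grain delivers on the order of a hundred megajoules, comparable to a heavy shell.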

Some of the ideas on this site may suggest a possible means of moving between star systems without traversing the daunting interstellar voids between them.
Consider the analogy with terrestrial travel. Friction against the solid or liquid surfaces of the planet limits ground and sea vehicles moving around the two-dimensional surface to rather modest speeds. However, aircraft moving in three rather than two dimensions escape from surface friction and can travel much faster. A tunnel dug right through the planet could provide an exceptionally efficient means of travelling to the antipodes of any point, although no known material could serve as a lining for the tunnel. Simply dive headfirst into the tunnel and freefall to the centre of the Earth, which you will pass at extreme speed. You will then begin to slow down as you continue, coming, in theory, to a momentary halt at the other end of the tunnel, allowing you just time to grab the lip and clamber out before you fall back again. Elapsed time: 42 minutes; fuel required: none; g-forces: nil. In practice you really need a vacuum in the tunnel to prevent air friction and a spacesuit to prevent death.
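The 42-minute figure is the textbook gravity-train result: inside a uniform-density Earth the pull is proportional to the distance from the centre, so the fall is simple harmonic motion and the one-way time comes out at pi * sqrt(R/g), whatever the route. A quick check, assuming uniform density:

# One-way fall time through a frictionless tunnel in a uniform-density Earth.
# Interior gravity is proportional to radius, so the motion is simple harmonic
# with one-way time pi * sqrt(R / g), independent of the chord chosen.
import math

R_EARTH = 6.371e6   # mean radius, m
G_SURFACE = 9.81    # surface gravity, m/s^2

one_way = math.pi * math.sqrt(R_EARTH / G_SURFACE)
print(f"one-way trip: {one_way / 60:.1f} minutes")   # ~42 minutes, no fuel required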
So can we somehow exploit the extra degrees of freedom offered by six dimensions to achieve rapid transit across the universe? Six dimensions seem to offer the higher-dimensional equivalents of both aircraft travel and tunnel travel, which for the purposes of this paper we can call Warp 1 and Warp 2 respectively.
Now as two dimensions of time lie orthogonal to all points of 4D spacetime, much as the up/down spatial dimension lies orthogonal to every point on a planetary surface, Warp 1 and Warp 2 journeys can, in principle, begin and end anywhere. As with a Tardis, launch from within your garden shed or basement should not present a problem.
Warp 1 craft would basically travel "outside" the hyperspherical three-dimensional "surface" of the observable universe in the "ana" or forward direction of the plane of imaginary time. By analogy with aircraft travel we might expect that such vessels may have to expend energy to resist the spacetime curvature (gravity) of the universe. Warp 2 craft would travel through the "kata" or past direction of imaginary time to any point of equigravitational potential without expending energy, although opening a tunnel may have an energy requirement.

Warp 1. The invention of aircraft required the coming together of two technologies: the technology of aerodynamic lift, and the technology to push persistently against the air. By analogy, Warp 1 craft will need something to hold them in imaginary time and something to give them a shove. Now as the plane of imaginary time corresponds in some sense to probability, perhaps we should look at the phenomenon of quantum superposition, in which a particle appears to occupy (or to have occupied) an indeterminately large number of possible states simultaneously. At the time of writing, such superpositions only seem to involve small numbers of particles, and they seem prone to decohere rapidly back into "normal" states. However, perhaps one day we will discover how to place an entire ship into a quantum superposition with a wave function that has a non-zero probability at every spatial point in the universe. Then perhaps we will only have to give it the gentlest push, and then allow it to decohere back into "ordinary" reality wherever we want, perhaps trillions of miles away, perhaps almost instantly.

Warp 2. Trying to dig a tunnel through spacetime using general relativity seems less sensible than it may at first appear, and Warp 2 travel does not involve this. Black holes would not consist of "holes" in spacetime; they would consist of impassable "knots". Four-dimensional wormhole throats would require titanic amounts of "negative energy" to hold them open for ships to pass, and negative energy seems to have no real meaning or physical reality in this universe. To tunnel across the universe from one point to another we only actually need to penetrate beneath the four-dimensional hyperspherical surface of which it consists, to gain the extra degrees of freedom on the "inside", in the kata or past direction of the plane of imaginary time. Warp 2 drive may require essentially the same technology as Warp 1 drive, except that the craft will require a push in the opposite "direction". Neither type of travel requires that the vessel risk ending up in the far past or future probabilities of its destination. Well, perhaps it runs no more risk than an air traveller or tunnel diver from London to Australia risks in ending a journey a long way up in the air or some distance underground. (Timecrash: a new sci-fi theme, perhaps?)
A paper as wildly speculative and analogically based as this would certainly benefit from some supporting maths.
