I have repeatedly asserted in many of my magical and scientific books and papers that this universe runs on probability rather than deterministic certainty, yet I have never formally demonstrated this. I assumed, perhaps unfairly, that everyone would have a familiarity with the arguments that lead to this assertion. However, I have received so many questions about it from those who seem to cling to various deterministic superstitions, from simple Newtonianism to theories of Predestination to old ideas about Cause and Effect, that I feel I must present a simple killer example to the contrary.
Herewith, perhaps the simplest proof that we inhabit a probabilistic rather than a deterministic universe. Philosophers have torn their hair out over this one for the last 50 years; it underlies the Schrödinger’s Cat Paradox, which to me doesn’t constitute a paradox at all, merely a realization that cause and effect operate only because the universe acts randomly.
Consider the so-called ‘Half-Life’ of radioactive isotopes. Every radioactive substance, whether made by natural processes (e.g. Uranium) or by humans playing around with reactors (e.g. Plutonium), has a half-life. This means that after a certain amount of time, half of the atoms in a sample will have gone off like miniature bombs, spat out some radiation, and decayed into some other element. After the same amount of time has elapsed again, a further half of the remainder will have done the same, leaving only a quarter of the original atoms, and so on. Highly unstable atoms may have a half-life measured in seconds; somewhat more stable ones may have half-lives measured in tens of thousands of years.
Now the half-life of a lump of radioactive material remains perfectly predictable so long as that lump consists of millions or billions of individual atoms. However, that predictability depends entirely upon the behavior of individual atoms remaining entirely probabilistic and random. The half-life effect means that during the half-life period, half of the atoms will explode. So if one takes the case of a single atom, one can only say that it has an evens chance of exploding during its half-life period; the half-life thus defines the period in which a single atom has an evens chance of exploding.
Suppose you made two million coin tosses. If you got anything but close to a million heads and a million tails you would naturally suspect something non-random about the tossing procedure or the coin itself. Thus the nice smooth predictable exponential decrease in the number of unexploded atoms in a radioactive isotope, a half, a quarter, an eighth, and so on with each passing of the half-life period, overwhelmingly suggests that the individual atoms behave randomly. Imagine that after tossing the two million coins you discarded all the tails and tossed the heads again, and then repeated the process on and on; you would expect to halve the number of coins each time, so long as the number of coins remained fairly large. Random behavior means that the outcome of an event has no connection to its past: the coin may have come down heads in the previous toss, but that gives no clue as to what it will do subsequently, a fact many gamblers willfully ignore.
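The coin-tossing picture above translates directly into a little simulation. Here a minimal sketch in Python; the function name and the choice of five half-life periods are mine, purely for illustration:

```python
import random

def decay_by_coin_toss(n_atoms, half_lives, seed=1):
    """Model radioactive decay as coin tossing: each atom 'survives'
    a half-life period with probability 1/2, like a coin coming up
    heads, with no memory of previous periods."""
    rng = random.Random(seed)
    counts = [n_atoms]
    for _ in range(half_lives):
        survivors = sum(1 for _ in range(counts[-1]) if rng.random() < 0.5)
        counts.append(survivors)
    return counts

# Starting with two million "atoms", each period leaves very close to
# half the previous count: the smooth exponential decay curve emerges
# from nothing but individually random events.
print(decay_by_coin_toss(2_000_000, 5))
```

Run it and the counts fall from two million to roughly a million, then half a million, and so on, with only fractional-percent deviations, just as the argument predicts.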
So here we have an odd insight: random behavior in detail leads to perfectly predictable behavior en masse. Indeed it seems difficult to see how anything but random behavior could lead to such predictability.
So what, you may ask: okay, individual atoms may behave randomly, but surely human-scale reality behaves according to cause and effect?
Well sometimes it does reflect the apparently causal predictability of randomness in bulk, but often it does not. The Schrödinger’s Cat thought experiment provides a seminal example. In principle one can easily rig up a device to measure whether or not a single atom has exploded within its half life period, and then use that measurement to trigger a larger scale event like shooting a captive cat concealed in a box. The poor cat has only an evens chance of surviving the experiment and nobody can tell what happened until they look in the box afterwards. This thought experiment demonstrates a fundamental randomness and unpredictability about the universe. Schrödinger thought it up to demonstrate that the cause and effect based thinking to which science had become dedicated, and which also forms a central plank of our ordinary thinking and language structure, does not always accurately describe how the universe works.
It would seem that a lot of what goes on in the universe, particularly at the human scale, remains subject ultimately to the random behavior of individual atomic particles. The long term behavior of the weather, the fall of a die that bounces more than about seven times, human decisions: they all seem to depend on atomic randomness to some extent.
And if the universe permits even a single random event, then its entire future history becomes unpredictable!
Not many philosophers have managed to get their heads around this insight yet, although it was first realized decades ago. However I did see recently an apologist for one of the monotheist religions claim that his god must therefore do his business by tweaking probability in favor of what he or his devotees want. That doesn’t look like such a bad idea. A tweak here, a tweak there, and after a few hundreds of millions of years he can evolve an image of himself from the primeval slime for company.
Of course chaos magicians claim something similar: a spell here, a spell there, and after a while modified probabilities should deliver the goods. However the Chaoists do have some experimental data in their favor. Quite a number of parapsychological experiments indicate that events at the atomic scale remain surprisingly sensitive to psychic meddling.
Perhaps you do not even need sentience to tweak probability. The concept of emergence suggests that whenever nature accidentally throws up something more complex or sophisticated or interesting then some sort of feedback may occur which makes its subsequent appearance more likely. The very laws of the universe, and the conventions of chemistry and biology, and perhaps even those of thought, may owe something to this mechanism.
At the atomic or quantum level, experiments have shown that matter and energy behave in a way that seems very strange compared to the way they seem to behave on the macroscopic or human sized scale.
At the quantum level even basic concepts such as causality, or the idea of a particle having a single definite location and momentum, seem to break down. The very idea of thing-ness itself seems inapplicable to quantum particles. Particles simply do not behave like tiny little balls, and few verbal analogies or visualisable images provide much of a guide as to what they actually do.
Now the whole of the observable universe consists of systems made up of quantum particles, yet their almost unimaginably strange individual behaviours add up to give the world we see around us, which seems to run on entirely different principles.
Nevertheless it has proved possible to devise mathematical models of what quantum particles do. However these mathematical models involve the use of imaginary and complex numbers, which do not work in quite the same way as simple arithmetic, and they give answers in terms of accurate probabilities rather than definite yes or no outcomes.
All this has led to endless debate about the reality of what actually goes on in the quantum realm.
Some theorists have taken the position that the mathematics which models quantum physics represents nothing real; it just gives good results because we fudged it to fit the observations, and the underlying physical reality remains impossible to comprehend in any other way at the moment.
Others have taken the position that quantum physics remains incomplete and that further discoveries will allow us to make more sense of it.
This paper will attempt to show that the hypothesis of 3-dimensional time allows a novel reinterpretation of the observed phenomena of quantum physics, one which lets us form some idea of its underlying reality.
At the quantum level the basic constituents of matter and energy behave with some of the characteristics of both waves and particles. To a simple approximation they seem to move around rather like waves spread out over space, but they seem to interact and to register on detectors as localised particles. Heisenberg’s uncertainty principle models this behaviour mathematically: the more a particle reveals its momentum, the less certain its position becomes, and vice-versa. This situation does not represent merely the technical impossibility of measuring both of these quantities simultaneously. It seems to represent the physical reality that a particle’s momentum becomes progressively more objectively indeterminate as its position becomes more objectively determinate, and vice versa, at least in the 4-dimensional spacetime in which we observe its behaviour. If particles did not have this indeterminate aspect to their behaviour they could not act as they do and we would have a completely different reality.
Such wave/particle behaviour appears in a simple, definitive, and notorious experiment known as the Double Slit Experiment. This experiment has numerous variations and elaborations, and it works just as well with energy particles like light photons or matter particles such as electrons. Apparently it will even work with quite large clusters of atoms such as entire C60 buckyballs. Feynman identified it as encapsulating the entire mystery of quantum behaviour.
In the double slit experiment a single particle apparently passes somehow through both of two slits in a screen simultaneously and then recombines with itself to form a single particle impact on a target screen or detector. If the experimenter closes one of the slits, the particle can still hit the target but it will hit it in a different place. If both slits remain open the particle’s position on the target indicates that the presence of two open slits has somehow contributed to the final position of the particle. Now common sense dictates that a particle cannot go through two separate slits simultaneously, although a wave can do this. So how does something that arrives at its target as a particle apparently switch to wave mode during flight to get part of itself through both slits at once and then switch back to particle mode for landing? Big objects like aircraft never seem to do this sort of thing, even though they consist of particles which can.
The mathematical model of a particle in flight describes it as in a state of superposition, the notorious quantum condition in which a particle can apparently exist in more than one state simultaneously. The so-called wave function of a particle does not constrain it to choose between apparently mutually contradictory qualities, like having two different positions, or two different spins in opposite directions. The choice seems to occur only when the particle gets measured or hits something, whereupon it manifests just one of its possible alternatives. The choice it makes when it takes its quantum jump, however, seems completely random.
This has led theorists into endless arguments and debates about what a quantum particle really ‘is’. Such Quantum Ontology seems very questionable; we cannot really ask questions about being because we do not actually observe any kind of being anywhere. Basic kinetic theory shows us this: nothing just sits there and exhibits being. Everything actually has a lot of internal atomic motion and exchanges heat and radiation with its environment. To observe something just being we would have to stop it doing anything at all by freezing it to absolute zero, but at that point it would simply permanently cease to exist.
Some theorists have argued that the wave function, which describes particles as existing mainly in superposed states, cannot really model reality because we always observe things in singular rather than superposed states. However this seems debatable: whilst photons and electrons, for example, do have singular characteristics when we catch them in detectors or at the point in time when they interact with other particles, most of the properties of materials that we observe actually arise quite naturally from a superposition of states. For example the strength of metallic crystals, the strength of bonds between the atoms in molecular gases, the behaviour of molecules like benzene, and indeed the behaviour of atoms in general, all find explanation in terms of particles occupying superposed states. Thus it seems more reasonable to suppose that for most of the time matter and energy do superposition, and that when particles undergo measurement or other forms of interaction, they do monoposition. In the HD8 model, monoposition corresponds to the manifestation of a particle in one dimensional time, whilst superposition corresponds to what it does in three dimensional time.
A lot of theorists have a philosophical objection to the way particles seem to make a completely random choice when reverting from superposition to monoposition. They do not like the way in which the wave function seems to collapse in a completely probabilistic way without a sufficient cause for the observed effect. Science has depended on the principle of material cause and effect for centuries they argue, and we cannot abandon it just because we cannot find it in quantum behaviour.
Nevertheless the probabilistic collapse of the wave function does lead to the fairly predictable behaviour of matter and energy as seen on the human scale. Toss a single coin and either alternative may result, but toss a million of them and the deviation from half a million heads will rarely stray beyond a fraction of one percent. Probability can thus lead to near certainty, and most of the apparent cause and effect relationships that we observe in the human scale world can in fact arise precisely because quantum superpositions collapse randomly. Indeed, assuming that superpositions mainly define the state of the universe for most of the time, turn the idea on its head and consider how bizarrely it might behave if those superpositions collapsed non-randomly: ordinary lightbulbs might suddenly start emitting laser beams, and the radioisotopes in smoke detectors might sometimes go off like small nukes instead of sedately decaying.
The idea of quantum particles in a superposed wave function mode perhaps becomes easier to understand in the HD8 model where 3 dimensions of time complement the 3 dimensions of space. A superposition of two or more states in the same place can occur if the extra states lie in the plane of time orthogonal to what we perceive as ordinary time. In effect orthogonal time provides a sort of pseudospace for parallel universes. I am not implying here that I have doppelgangers, for example, or perhaps millions of them, in full scale parallel universes, but merely that I could have a slight thickness in sideways time which allows the superpositions of my constituent quantum particles to manifest my normal electro-chemical properties that ordinary 4 dimensional classical mechanics cannot explain. (I have a suspicion that such superpositions may also have some relevance to mental and perhaps parapsychological phenomena as well, but let’s not open that can of worms for a while yet.)
The notorious Double Slit Experiment and its variants also reveal another aspect of quantum superposition, the phenomenon of quantum entanglement. When presented with two possible flightpaths to follow, some quantum phenomena appear to take both paths, but the parts which go their separate ways seem to remain in some sort of instantaneous communication with each other, no matter how great the spatial distance between them. This entanglement of widely separated parts appears to violate the Special Relativistic principle that no signal can travel faster than light. However quantum physicists usually point out that no proper information actually gets transmitted because of the random nature of the outcome.
Now, as with superpositions, we cannot observe quantum entanglements directly, we can only make observations from which we can infer that the observed result must have come from an entanglement. HD8 explains entanglement in terms of multiple histories. When an entanglement collapses due to an interaction or a measurement, some of the alternative histories collapse. Alternative histories exist as superpositions in sideways time.
Entanglements occur when a single superposed quantum state splits into two (or more) parts, each of which then travels away in a superposed state. The original quantum state could consist of a single particle which apparently splits into two as in the double slit experiment or it could consist of a pair of particles in an intimate contact which forces them into the same quantum state. Now when one of the parts of the entangled system falls out of superposition because someone measures it, or because it interacts and decoheres into its environment, then the other half has to fall out of superposition into the opposite mode. If one component has say, spin up, then the other will have spin down, or the corresponding other half of a number of other quantum properties.
Thus if the initial quantum state has the superposed qualities of AB and splits into two parts, both parts seem to carry the AB superposition. Yet if we intercept one of the parts and find that it appears to us as A, then we know with certainty that the other part will have to manifest as B. Experiment has repeatedly confirmed this over macroscopic distances, and we have no reason to suspect that it will not work over astronomical or cosmic distances. We know that we cannot explain this by simply assuming that the original superposed state splits into an A component and a B component because in experiments where we recombine the two parts they recombine in such a way as to show that each must have carried the AB superposition.
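A toy simulation makes the claimed statistics concrete. The Python sketch below (all names my own invention) reproduces only the single-property measurement statistics described above, i.e. individually random outcomes with perfect anti-correlation; it deliberately says nothing about the recombination experiments, which rule out any simple picture of pre-assigned A and B components:

```python
import random

def measure_entangled_pair(rng):
    """Toy model of a pair carrying the AB superposition: the first
    member measured collapses randomly to A or B; the partner must
    then manifest the opposite quality."""
    first = rng.choice(("A", "B"))          # random collapse
    second = "B" if first == "A" else "A"   # forced opposite outcome
    return first, second

rng = random.Random(7)
pairs = [measure_entangled_pair(rng) for _ in range(100_000)]

# Each individual outcome looks like a fair coin toss...
share_a = sum(1 for first, _ in pairs if first == "A") / len(pairs)
# ...yet the two members always disagree, so an observer at either end
# sees pure randomness and no usable signal travels between them.
assert all(first != second for first, second in pairs)
```

The fraction of A outcomes hovers near one half, which illustrates the physicists’ point above: the perfect correlation carries no proper information, because neither party can control or predict their own random result.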
Ordinary cause and effect cannot explain what happens in entanglement. From the point of view of classical physics the phenomenon seems completely impossible. Whatever mechanism constrains each part of an entanglement to jump into the opposite mode that the other part jumps to, must either act instantaneously across arbitrarily large distances, in violation of relativity, or it must act retroactively across time.
A non-local effect across space seems the least likely alternative. It would require that something from one half of the entangled pair somehow found its way to the other half which could have travelled anywhere within billions of cubic miles of space. Considering that the universe contains unimaginably vast numbers of virtually identical entangled pairs of quantum states, this seems fantastically improbable.
Temporal retroactivity on the other hand, requires no more than a certain two way traffic across time which can allow for the cancellation of some of the alternate histories.
Time does not appear to run both forwards and backwards on the macroscopic scale because energy dissipates in what we call entropy, and because gravity acts attractively only. Nevertheless, nothing seems to constrain quantum processes to progress in only one temporal direction, and some interpretations suggest that they actually proceed in both directions at once. In particular Cramer’s Transactional Interpretation of quantum physics models a photon exchange as comprising a sort of superposition of a photon travelling from emitter to receiver and an antiphoton travelling backwards in time from receiver to emitter. This perhaps explains an oddity of Maxwell’s equations of electromagnetism. These time symmetric equations also yield a set of solutions for so-called advanced waves travelling backwards in time, but physicists usually quietly ignore them because they appear materially indistinguishable from the so-called retarded waves which travel forwards in time.
The material indistinguishability of a quantum process proceeding forwards through time from the corresponding anti-process proceeding backwards through time has intriguing implications.
We can interpret it as an exchange between the past and the future which has bizarrely counterintuitive aspects. The event which will end up as the past can send multiple contradictory signals to the future. These signals eventually collapse at a moment of interaction or measurement to give a singular present. However those signals which do not manifest in the present cannot send a time reversed signal back to the past to complete the exchange, so that those particular signal paths cease to exist, effectively modifying the past. Thus when a superposition or an entanglement collapses it erases the multiple history of its own past. The whole concept of being or ‘is-ness’ falls apart here, and not just for particles flying around in specially contrived apparatus. As quantum systems generally behave as if they had just collapsed out of superposition or entanglement, and as quantum systems underlie the behaviour of all matter and energy, we must inhabit an almost unimaginably stranger world than our ordinary senses reveal.
Some theorists have spoken of the Omnium or the Multiverse underlying and supporting the mere surface reality that we directly experience. At the time of writing we have few concepts and little vocabulary to describe the signals exchanged across time and space out of which our perceived reality coalesces. Antiparticles in the conventional sense cannot act as the agents of information transfer into the past. Neither can particles as we understand them in 4D spacetime, carry superpositions and entanglements into the future. However we can form a partial visualisation of the processes involved by using reduced dimension graphs derived from a modified Minkowski formula (itself derived from Pythagoras), for the distance D, between points in 6 dimensions, 3 of space and 3 of time.
In simplified form the equation looks like this:
D = [s^2 - (ct)^2 + (ct1)^2 + (ct2)^2]^(1/2)
Where s = spatial distance,
t = time, (reference direction),
c = lightspeed,
and t1 and t2 represent the axes of the plane of imaginary time orthogonal to the temporal reference direction, which acts as a kind of pseudospace.
This equation yields two obvious null paths in addition to that conventionally reserved for ordinary light (s = 1, ct = 1, ct1 = ct2 = 0):
1) s = 0, with (ct)^2 = (ct1)^2 + (ct2)^2.
2) s ~ ct, where ct1 and/or ct2 > 0.
The graphs of these equations represent Superposition and Entanglement respectively.
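A few lines of Python can confirm that such paths come out null under the simplified formula. Reading the shorthand for path 1 as (ct)^2 = (ct1)^2 + (ct2)^2 is my interpretation, and the sample numbers are arbitrary:

```python
import math

def interval(s, ct, ct1, ct2):
    """Squared 6D interval from the modified Minkowski formula:
    D^2 = s^2 - (ct)^2 + (ct1)^2 + (ct2)^2."""
    return s ** 2 - ct ** 2 + ct1 ** 2 + ct2 ** 2

# Ordinary light: s = 1, ct = 1, no excursion into sideways time.
print(interval(1, 1, 0, 0))  # -> 0, a null path

# 1) Superposition: s = 0, with ct balanced by the imaginary-time
#    components, (ct)^2 = (ct1)^2 + (ct2)^2.
print(interval(0, math.hypot(0.6, 0.8), 0.6, 0.8))  # ~ 0

# 2) Entanglement: s close to ct, small excursion into ct1.
print(interval(1.0, math.hypot(1.0, 0.1), 0.1, 0))  # ~ 0
```

Note that in case 2 the null condition forces ct slightly greater than s whenever ct1 or ct2 stays nonzero, which matches the "s ~ ct" shorthand above.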
See Null Path Superposition & Entanglement, Figures 1 and 2.
Figure 1 shows a mechanism for superposition, representing a superposed quantum state at rest relative to an observer, so we can omit spatial coordinates for simplicity.
An event at the origin subtends a plethora of superposed states in imaginary time pseudospace, represented by the circle (which appears in perspective as an ellipse), at right angles to the temporal reference direction.
These states collapse at a further unit distance along the t-axis to produce a random unitary outcome. As soon as one of the paths from the origin to the final state completes an exchange, all other paths collapse and the exchange path becomes the 'new' temporal reference direction. This effectively changes history.
Figure 2 shows a mechanism for entanglement. The Figure shows single dimensions of space and time (reference directions for a moving particle) and a single dimension of orthogonal time for simplicity. The Figure could, for example, represent entangled particles flying apart at lightspeed in opposite spatial directions. Note that as each particle flies away, it diverges into two parts in the plane of imaginary time. This represents the superposed condition of each of the particles. In practice the two particles may well have a number of superpositions, but we cannot show these with only a single dimension of imaginary time in the Figure. When one of the particles interacts, only one of the two paths shown will actually complete a transaction and become real. The other path will then cease to exist all along its history. This will have the effect of cancelling the superpositions all the way back down the time line to the origin, hence the other particle will then have to manifest with the opposite particle property when it interacts. In the new history of the two particles it will then appear that each set off from the origin with one of the two opposite properties.
What actually happens between moments of particle interaction remains an interesting question. Any attempt to look at the intervening period merely shortens it to the point at which we choose to take a measurement. Except at the point where a particle interacts, it seems to consist of a multitude of superposed and/or entangled states. However as soon as it interacts, all but one of the particle’s multitudinous states become eliminated from history. At that point the path which remains in imaginary time becomes the real time history of the particle.
Some theorists have argued that we cannot answer the question in principle. Others have gone further and opined that we cannot even ask such a question because it would beg an answer in terms of objective reality about a realm in which objective reality does not apply. In other words they dismiss the question as fruitless as theology.
However a combination of the hypothesis of 3-dimensional time with Heisenberg’s Uncertainty (Indeterminacy) Principle can perhaps offer some sort of an answer.
The Uncertainty principle allows nature to violate the conservation of pretty well any particle property or behaviour, including that of existence itself, so long as the violations remain very small and/or get paid back very quickly. Planck’s constant sets a precise limit to the imprecision with which quantum phenomena can behave.
Thus, for example:

ΔE Δt = ħ and Δp Δl = ħ

Where ΔE means energy indeterminacy,
Δt means temporal (durational) indeterminacy,
Δp means momentum indeterminacy,
Δl means spatial (positional) indeterminacy,
and ħ means Planck’s constant over 2 pi.
This means that the universe can allow the spontaneous creation of particles from the void or the background energy if you prefer. Such particles persist for a time inversely proportional to their masses so massless photons can persist indefinitely whilst massive fermion/antifermion pairs can persist for only the briefest instants. Now HD8 models bosons as consisting of particle/reverse particle pairs, and we can conceive of fermions as having a similar configuration whilst in flight (between interactions).
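The bookkeeping of borrowed existence can be made concrete with a back-of-envelope Python calculation. Taking the pair’s rest energy as 2mc^2 and using the plain equality Δt = ħ/ΔE rather than an inequality are illustrative simplifications of mine:

```python
HBAR = 1.054_571_817e-34  # J*s, Planck's constant over 2 pi
C = 2.997_924_58e8        # m/s, lightspeed

def virtual_pair_lifetime(mass_kg):
    """Rough time for which indeterminacy lets a particle/antiparticle
    pair of total rest energy 2*m*c^2 exist 'unpaid-for': dt = hbar/dE."""
    if mass_kg == 0:
        return float("inf")  # massless photons can persist indefinitely
    return HBAR / (2 * mass_kg * C ** 2)

ELECTRON_MASS = 9.109_383_7e-31  # kg

# An electron/positron pair can persist for only ~6.4e-22 seconds,
# the briefest of instants; zero mass gives an unlimited lifetime.
print(virtual_pair_lifetime(ELECTRON_MASS))
print(virtual_pair_lifetime(0.0))  # inf
```

The inverse proportionality between mass and lifetime mentioned above falls straight out of the formula: double the mass and the borrowed time halves.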
Now conventional theory calls for the existence of so called virtual particles to account for the electrostatic and magnetic fields and for various other fields. HD8 rejects this idea and suggests instead that such fields arise from the warping of spacetime in various dimensions in the vicinity of charges. To account for the behaviour of fields, virtual photons would have to have properties at variance with relativity. The distinction between so called real particles and virtual particles has become progressively blurred in Standard Theory, particularly as we can now make apparently real particles perform the double slit trick. I think that the hypothesis of virtual particles has outlived its usefulness.
It would seem more reasonable to describe all particles as real at the instant of their interaction, and that whilst they remain in flight as it were, they spread out into multiple form in the pseudospace of imaginary time, using the freedom of quantum indeterminacy.
At the risk of undermining the idea that particles in between interactions do have some actual reality we should perhaps consider calling them Imaginary Particles.
Consider again the double slit experiment, but now in terms of real and imaginary particles. An electron in an imaginary state in orbit about an atom in a light source emits an arbitrarily large number of imaginary photons towards two slits in a screen. As two imaginary photons, one having gone through each slit, fly towards the detector, their higher dimensional spacetime curvatures interact and they combine to hit an imaginary electron in a detector. However only one of the two imaginary photons can become real by making a time reversed exchange with the electron at the origin of its path. As it does so it becomes momentarily real, as does the electron it interacts with.
In summary something does actually go through both slits but after the completion of the exchange only one of the paths remains real (and we cannot tell which). The electron at the emitter becomes real only momentarily as it emits. Both electrons go back into superposed states around their atomic nuclei.
Particles spend only a vanishingly small part of their time in interactions that confer a momentary reality upon them. So almost the entire universe consists of imaginary particles at any moment.
We ourselves must also consist mainly of particles in an imaginary condition. Imaginary particles interact with each other to create the reality that our senses detect, but can the imaginary part of ourselves interact more directly with the Omnium, that Multiverse of superposition and entanglement underlying our perceived reality?
I propose to return to this question in a later paper, but for now I leave you with an odd thought.
Thought itself feels to me very much like a series of collapses of superposed and entangled mental states into real ideas, actions and decisions. It seems as though I fill my head with ideas and then let them become a bit fuzzy, and then somehow something definite springs into reality. It feels like a stochastic process. Most of it of course ends up in the wastebasket, the most powerful tool of the thinker according to Einstein.