Quantum free will & Entropic irreversibility
The manifestation of free will
The 19th century was marked by a great philosophical conflict between the apparent universality of deterministic theories of physical reality and the concept of free will (namely: if the laws of nature determine how your arm moves, then you cannot move it yourself, and if you do move it yourself, how can something like conservation of energy hold?). Free will is both rooted in daily experience and a basic requirement for science (namely: experiments must be prepared, and their results observed, independently of their surroundings, otherwise it cannot be said which causes which). Moreover, the theory is built up from experience, and it is not the experiences that should be explained (away) by the theory!
Back then some non-deterministic elements already existed (e.g. Statistical Mechanics), but those only reflected a lack of information about the system and so lacked universality. This changed with the advent of Quantum Mechanics in the 20th century. The central new concept of the theory is the universal wave-particle duality: from then on, (quantum) particles did not only have well-defined values like mass and charge, but also showed a certain fuzziness that made interference possible (as waves do). Philosophically this is actually not so strange. Particles (with a clear identity) were always assumed to have no extension (because otherwise they could be split and lose their identity, right?), and waves, which do have a clear extension, were for the same reason considered to have no identity. But without identity waves could be split up indefinitely, and without extension particles cannot fill space, neither of which makes sense. So there is also a philosophical need for a universal wave-particle duality: objects with both extension and identity. But to remain a unity over some distance, those objects must display a certain 'spooky' non-locality ('action at a distance'), and to avoid signals faster than light (which is prohibited by relativity theory) that non-locality must also be non-deterministic (and so discontinuous)! [NB as relativity theory clearly teaches that space and time are connected, non-locality also implies non-temporality ('action at a different time'); yet this concept is rarely considered, probably because it is a bit too far-fetched for most.]
The de Broglie relations [NB named after the originally Catholic, French physicist Louis de Broglie, 1892-1987 AD] from 1923 (namely p = h/λ and E = hν, with h Planck's constant) mathematically express the failure of determinism in the double-slit experiment: they connect the wavelength ('width'), respectively frequency ('duration'), of a particle with its measure of movement (momentum), respectively its energy (the right-hand, respectively left-hand, sides of the relations) [NB Heisenberg's uncertainty relation from 1927 is derived from these; 'uncertainty', however, is a poor translation of the original German 'Unschärfe' ('unsharpness') and has caused a lot of misunderstanding. Because of the small value of h, in practice quantum effects only show up for few, small particles; at larger scales they are 'washed out']. John von Neumann [NB Hungarian/American, Jewish-Catholic convert and one of the greatest geniuses of the 20th century, 1903-1957 AD] wrote the complete mathematical foundation of the new Quantum Mechanics in 1932; it has become the most successful scientific theory since (it has in fact never been contradicted by experiment).
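The 'washing out' at larger scales follows directly from λ = h/p. A minimal numerical sketch (the masses and speeds are just illustrative round numbers):

```python
# de Broglie wavelength lambda = h / p for two objects, illustrating why
# quantum effects are only noticeable for small, slow particles.
h = 6.626e-34  # Planck's constant in J*s

# An electron (mass ~9.1e-31 kg) at a typical atomic speed of ~1e6 m/s.
p_electron = 9.109e-31 * 1e6
lam_electron = h / p_electron  # ~7e-10 m, comparable to atomic sizes

# A 0.1 kg ball thrown at 10 m/s.
p_ball = 0.1 * 10
lam_ball = h / p_ball  # ~7e-34 m, hopelessly below any detector resolution

print(f"electron: {lam_electron:.2e} m")
print(f"ball:     {lam_ball:.2e} m")
```

The electron's wavelength is on the scale of atoms, so its wave character dominates; the ball's is some 24 orders of magnitude below a nucleus, so its wave character is unobservable.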
Yet the results of individual measurements are often unpredictable. The two-slit experiment illustrates this most clearly (my professor of Introduction to Modern Physics even said: if you understand this experiment, you understand all of Quantum Mechanics!). Quanta from a source move through a screen with two slits and hit another screen where they are detected. An interference pattern becomes visible on the second screen, built up 'point' by 'point'; the individual positions are random [NB the initial state was namely identical each time, and otherwise the pattern could not arise] and their width depends on the resolution of the detector [NB since this is necessarily finite, it gives us another reason to do away with the 19th-century concept of point particles].
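This point-by-point build-up can be sketched numerically: each detection position is drawn at random, but from the wave's interference intensity, so the fringes only emerge statistically. (The cos² intensity is the idealised far-field pattern with the single-slit envelope left out; units are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealised far-field double-slit intensity (arbitrary units):
# two overlapping waves give |psi1 + psi2|^2 proportional to cos^2.
def intensity(x):
    return np.cos(np.pi * x) ** 2

# Build the pattern 'point' by 'point' with rejection sampling:
# each individual detection is random, the pattern arises statistically.
def sample_hits(n):
    hits = []
    while len(hits) < n:
        x = rng.uniform(-3, 3)          # candidate position on the screen
        if rng.uniform(0, 1) < intensity(x):
            hits.append(x)              # the detector 'clicks' here
    return np.array(hits)

hits = sample_hits(20000)
counts, edges = np.histogram(hits, bins=60, range=(-3, 3))
# Bins near the fringe maxima (integer x) fill up; bins near the
# minima (half-integer x) stay nearly empty.
```

No single click is predictable, yet after thousands of clicks the histogram reproduces the wave pattern, exactly as on the second screen.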
It follows that the wavy pattern is irreversibly 'collapsed' by detection. This is the central issue of quantum mechanics, the so-called measurement problem: when and by what exactly does this non-deterministic and discontinuous 'collapse' (the transition from a delocalised to a localised description) occur? Note that localised does not mean 'point': detector states necessarily have extension, and so the wave-particle duality remains, as it philosophically must. Moreover, delocalised does not mean virtual / potential / in need of realisation either! Unfortunately, physicists generally cannot let go of the classical concept of point particles and so implicitly imagine it as some additional 'hidden variable.' Yet how can it be that a measurement only involves the particles, while otherwise only the waves are of importance?! Collapse is also to be distinguished from the determinate and continuous process of decoherence (see further on); by the way, in practice the collapse never takes place before decoherence, which makes its effects unobservable.
And so collapse must have an immaterial cause, which is a requirement for the expression of free will. The rigorous analysis of Von Neumann in his book likewise concludes that it is the non-material element in man (others use the term 'consciousness', although I dislike that as it is badly defined) that causes the collapse of the wave function. Many physicists, however, cannot accept a meta-physical reality because it cannot be physically controlled, while it is actually precisely the thing that makes any physical control possible (but from the meta-physical side)! And to give a pretence of reasonability to their personal dislike of metaphysical 'free will', they come up with silly counterarguments: What then causes the collapse in the rest of the universe and/or in the past, where there are no observers? (as if quantum mechanics needs collapse to function…) Free will / consciousness are physically not well defined. (because they are precisely non-physical) Consciousness is a result of evolution… (no need even to address that) The explanation is unfalsifiable. (it is an explanation, not a new theory) It introduces unnecessary elements / is inconsistent with materialism. (materialism itself is inconsistent with quantum mechanics, so something else is needed)
For a long time it was unclear how the collapse could be of any use (the other requirement for free will), until Alan Turing [NB British mathematician and computer scientist, 1912-1954 AD] described a side-effect of it in 1954. In 1977 the Indian physicists Sudarshan and Misra called this the Quantum Zeno effect. It allows full control of the quantum dynamics by continuous observation (by the way, decoherence also works, but is not necessary). A simple proof of concept exists for a two-state system: a continuous measurement of the states halts the proper oscillations of the system completely [NB free decay is namely first order in time, while the decay between measurements is second order]. The full control follows because we can determine ourselves what exactly those states are (e.g. observing rotations).
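The two-state proof of concept can be checked with a few lines of arithmetic. For an idealised two-level system oscillating with Rabi frequency Ω, the survival probability after N equally spaced projective measurements over a time T is [cos²(ΩT/2N)]^N, which tends to 1 as N grows (this is a standard textbook calculation, not the original derivation):

```python
import math

# Two-state system oscillating between |0> and |1>:
# without measurement, P(still in |0> at time T) = cos^2(Omega*T/2).
# Each of N measurement intervals contributes cos^2(Omega*T/(2N)),
# because a projective measurement resets the evolution, so
#     P_N = [cos^2(Omega*T/(2N))]^N  ->  1  as N -> infinity.
def survival(omega, T, n_measurements):
    theta = omega * T / (2 * n_measurements)
    return math.cos(theta) ** (2 * n_measurements)

omega, T = 1.0, math.pi  # T chosen so the unmeasured system fully flips
print(survival(omega, T, 1))     # one final measurement: ~0, state flipped
print(survival(omega, T, 10))    # frequent observation already helps
print(survival(omega, T, 1000))  # ~1: the oscillation is frozen
```

The quadratic (second-order) short-time behaviour of each interval is exactly what makes repeated observation suppress the transition.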
The last remaining question is via which states and processes precisely our free will expresses itself (what exactly are we observing?). A few years back there was a promising paper that discussed exactly this question [Quantum Cognition: The possibility of processing with nuclear spins in the brain by Matthew P. A. Fisher, September 2015]. It showed that the most promising candidates are the nuclear spins of Posner molecules, a phosphorus complex in our brain [NB see also the work of Henry P. Stapp, retired professor in California, born in 1928 AD: www.henrystapp.org].
Misunderstandings regarding the non-determinism in Quantum Mechanics (like 'uncertainty', collapse, decoherence) are often connected with misunderstandings about irreversibility from Statistical Mechanics (the microscopic variant of Thermodynamics, or heat theory). The concept 'arrow of time' is often mentioned in this context as well, although irreversibility itself has nothing more to do with time than any other physical phenomenon. Time is simply the experience that things change, and so it is an ingredient of each theory of physics, not a process that should be explained by the theory.
Irreversible processes (e.g. the breaking of glass or the escape of gas), on the other hand, can be studied scientifically. For this Statistical Mechanics was constructed, which describes systems of many particles. It has three laws (numbered 0, 1 and 2), one for each of its basic concepts: temperature is universal and has an absolute zero, energy is always conserved, and entropy (disorder) never diminishes. From a (microscopic) Hamiltonian containing all particle interactions, the (macroscopic) thermodynamic properties of the system are deduced via the (partition) sum over all possible (system) states. But how can a non-reversible increase of entropy be deduced from reversible interactions?!
The famous Boltzmann equation [NB named after Ludwig Boltzmann, Austrian physicist and mathematician who suffered from depression and hanged himself, 1844-1906 AD], with kB the Boltzmann constant, is:
S(t) = kB · ln g(t)  ⇔  g(t) = e^(S(t)/kB)
It expresses that the entropy S of each collection t of (macroscopically) indistinguishable states depends only on the number of states g(t) in that collection. Because it is more probable that system states move from collections with few states to collections with many states, the entropy increases. The underlying microscopic dynamics remains reversible, though.
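A minimal counting example makes this concrete: take N particles, each in the left or right half of a box, and let the macrostate be the number k on the left. Then g(k) = C(N, k), and almost all microstates sit in the near-equal macrostates:

```python
from math import comb, log

N = 100  # particles, each in the left or right half of a box

# Macrostate: k particles on the left. Its number of microstates is
# g(k) = C(N, k), so its dimensionless entropy is S/kB = ln g(k).
def entropy_over_kB(k):
    return log(comb(N, k))

# All particles on one side: a single microstate, entropy 0.
S_start = entropy_over_kB(0)
# Even split: the overwhelmingly most probable macrostate.
S_equilibrium = entropy_over_kB(N // 2)

# Fraction of ALL 2^N microstates in the near-equal macrostates
# 40 <= k <= 60: almost everything, which is why entropy 'always' rises.
near_equal = sum(comb(N, k) for k in range(40, 61)) / 2 ** N
```

A system prepared with all particles on one side (entropy 0) will, under any reasonable reversible dynamics, almost surely wander into the huge near-equal collections; nothing in the microscopic laws forbids the reverse, it is just absurdly improbable.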
The breaking of glass and the escape of gas are two irreversible processes from daily life, but the underlying reversible dynamics means that each system has a recurrence time, the time it takes to return to its original state. Nevertheless, a system of only 100 particles that can each occupy only 10 different states already has a state space of size 10^100 [NB this number is also called a googol, the inspiration for the name of the search engine], and this is greater than the estimated number of particles in the universe!
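A toy model (my own illustration, not the actual Hamiltonian dynamics) shows why recurrence is guaranteed yet hopeless: reversible dynamics on a finite state space is a bijection, i.e. a permutation, so every state lies on a cycle and must return; but typical cycle lengths grow with the state space.

```python
import random

# Toy model: a reversible update rule on a finite state space is a
# bijection (a permutation), so every state eventually returns.
# The recurrence time of a state is the length of its permutation cycle.
def recurrence_time(permutation, start):
    state, steps = permutation[start], 1
    while state != start:
        state = permutation[state]
        steps += 1
    return steps

rng = random.Random(42)
n = 10 ** 6  # a 'tiny' state space; 100 ten-state particles give 10^100
step = list(range(n))
rng.shuffle(step)  # one fixed, deterministic, invertible update rule

t = recurrence_time(step, 0)
print(t)  # typically a sizeable fraction of n already at n = 10^6
```

Scale n up to a googol and the guaranteed return takes unimaginably longer than the age of the universe, so irreversibility holds for all practical purposes.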
An impression of the limited fluctuations in entropy that can be expected within a reasonable amount of time follows from the calculation that 63% {1 − 1/e} of all states are located within the upper 1 kB of entropy! It appears that a system out of (thermal) equilibrium (like our universe?!) is truly something remarkable…
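The 63% follows directly from the Boltzmann equation: since g(S) = e^(S/kB) grows exponentially, the cumulative number of states up to entropy S is dominated by g(S) itself, so the states below S_max − kB number about g_max/e, leaving a fraction 1 − 1/e in the top kB:

```python
import math

# With g(S) = exp(S/kB), the states with entropy within 1 kB of the
# maximum number approximately
#   g(S_max) - g(S_max - kB) = g_max * (1 - 1/e),
# independent of the system, because the count is exponential in S.
fraction_in_top_kB = 1 - math.exp(-1)
print(f"{fraction_in_top_kB:.1%}")
```

Hence fluctuations of even a few kB below the maximum are already exponentially rare.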
In order to make the connection with the collapse of the wave function and decoherence from Quantum Mechanics, we best consider the increase of entropy via the equivalent concept of information loss. The macroscopic observables then constitute the known system, and the microscopic variables the unknown surroundings. Through interaction of the system with its surroundings, it transfers its information there, and so its own disorder increases. A classical measurement of the system yields new information and thus again an ordered state function with low entropy. The measurement, however, has not changed the system, in contrast to a quantum measurement, which makes interference impossible. Yet in that case, both before and after the collapse of the wave function there is a pure state with entropy 0 [NB so collapse is only irreversible for practical reasons?!].
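The entropy-0 claim can be made quantitative with the von Neumann entropy S = −Tr(ρ ln ρ) (in units of kB), computed here with numpy for a single qubit as an illustration:

```python
import numpy as np

# Von Neumann entropy S = -Tr(rho ln rho), in units of kB.
def vn_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 * ln 0 = 0 by convention
    return float(-np.sum(evals * np.log(evals)))

# A delocalised superposition (|0> + |1>)/sqrt(2) is still a PURE state.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi)

# After collapse to |0>: again a pure state.
rho_collapsed = np.diag([1.0, 0.0])

# After losing the phase information to the surroundings: a MIXED state.
rho_mixed = np.diag([0.5, 0.5])

print(vn_entropy(rho_pure))       # 0: a superposition hides no information
print(vn_entropy(rho_collapsed))  # 0: collapse leaves entropy at 0 as well
print(vn_entropy(rho_mixed))      # ln 2: information genuinely lost
```

Both the delocalised and the collapsed state have entropy 0; only the loss of information to the surroundings (the mixed state) carries thermodynamic entropy.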
Decoherence is the quantum version of the interaction of a system with its surroundings and the corresponding loss of information about the system. The coherence of the states within the system is broken by correlations with states of the surroundings. These correlations are still quantum mechanical, though, because they can be stronger than what is classically allowed. And the states themselves have not fundamentally changed, so decoherence does not collapse the wave function (which anyhow is a discontinuous and indeterministic process).
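This distinction can be sketched with the smallest possible 'environment', a single extra qubit (a textbook CNOT-style coupling, chosen here for illustration): after the interaction the system's reduced density matrix has lost its coherences, yet the global state is still pure, so nothing has collapsed.

```python
import numpy as np

# System qubit in (|0> + |1>)/sqrt(2); before decoherence its density
# matrix carries off-diagonal 'coherences' that enable interference.
s = np.array([1.0, 1.0]) / np.sqrt(2)
rho_before = np.outer(s, s)                # off-diagonals are 0.5

# Decoherence: a CNOT-style coupling correlates the system with an
# environment qubit prepared in |0>:
#   (|0> + |1>)|0>  ->  (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # basis |se>
rho_full = np.outer(psi, psi).reshape(2, 2, 2, 2)    # indices (s, e, s', e')

# The system alone: trace out the environment (sum over e = e').
rho_after = np.einsum('aibi->ab', rho_full)

print(rho_before)  # coherences present: interference possible
print(rho_after)   # diagonal: coherences gone, yet no collapse occurred;
                   # the global system+environment state is still pure
```

The off-diagonal terms have not vanished from the world; they have moved into the (still fully quantum) correlations with the environment, which is exactly why decoherence is continuous and reversible in principle, unlike collapse.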