Ordered thoughts regarding important stuff like God, Science, and the Universe. The author will endeavor to answer all sincere questions in these matters, including help with math homework, genuine questions about God, etc.

Thursday, January 05, 2006

RECONCILING RANDOMNESS AND DETERMINISM / J. Colannino

Abstract
Neither the scientific method nor statistics requires formally unknowable events. Random error is the result of physical realities and follows logically via mathematical rigor. In no case does the concept of randomness demand or allow for uncaused effects.

The Scientific Method
The scientific method is a systematic search for regularity confined to subject matter that is observable, testable, repeatable, and falsifiable.[1] This much seems uncontroversial among scientists of various stripes. However, the reason for this normative view differs dramatically among theists and non-theists. Theists believe that an all-wise, all-powerful, and rational Creator – in a word, God – formed the heavens and the earth. Modern theists therefore expect that God’s works unmistakably bear the mark of their Author. Since God is rational, so is the universe. God knows His work certainly and intimately, and the consequence of every event that may or may not transpire[2] is completely determined and foreknown in His mind. This is the theistic scientist’s raison d’être for science, substantiated by God’s word.[3] Historians generally credit the origin of the scientific method to Francis Bacon and Christianity.[4]

Non-theists (e.g., materialists) admit to no transcendent cause beyond the physical universe. As such, they believe that strict rules (physics) govern the universe, but without any a priori reason for such a belief, only a pragmatic view that the scientific method corroborates their experience. Today some advocates overstate the scientific method as a philosophy about everything, which it certainly is not.[5] Notwithstanding, for those subjects that science is qualified to speak about, it has much to say.

All scientists believe in a cause-and-effect universe; otherwise, there is no point to any aspect of a scientific investigation. Without cause and effect, observation is futile because it correlates with nothing in particular. Experimentation is pointless if effects have no causes. For the same reason, repeating an experiment would have no confirmatory value. If there is no meaning to observation, testing, or repetition, then nothing is falsifiable or provable. In short, science subsumes a cause-and-effect universe and cannot exist without it.

A deterministic universe
Despite the inconsistency, and in contrast to a cause-and-effect (or a deterministic) universe, some scientists theorize that absolute knowability is impossible in principle. This argument is self-defeating because it asserts, as certain and absolute truth, that one can never know absolute truth. Some base their claim (incorrectly) on scientific postulates of uncertainty, randomness, or chance (used synonymously here)[6] as embodied in the statistical sciences. As I will show, this results from an incorrect or equivocal understanding of the statistical concept of randomness.

On Impossibility
Some things are impossible even for God. For example, it is impossible for God to lie[7] or to err[8] due to His very nature. However, if it is impossible in this same way for God to determine the outcome of a random trial then God cannot be all-knowing. Once we admit that there is knowledge about the universe that God cannot apprehend, then history as a whole is impossible to predict, even for God. The theist will at once see the seriousness of this accusation and the conflict with God’s revealed will.[9] I shall refer to the concept of something being unknowable in principle as that which is formally unknowable.

In a book-length treatment, Sproul[10] has pointed out that the idea of randomness or chance as a causative agent is philosophically impossible and relies on equivocation of the term chance. As Sproul notes, when a person says, “I met my spouse by chance,” this does not imply an uncaused union, but an unanticipated one. Neither partner spontaneously appears on the scene. Planes, trains, and automobiles transport them to their destination. They are there for a reason – perhaps a business meeting or conference. Nothing about the union is uncaused, but because neither party is clairvoyant, they are delightfully surprised to meet one another. As I will show, neither may an antagonist take refuge in the formal statistical concept of random error as a device to prove the existence of a formally unknowable event.

On random error
At this point, we must ask, if the scientific method presumes a rational cause-and-effect (deterministic) universe, then what is random error and how is it compatible with science? We presume to live in a universe governed by rules and laws, i.e., physics. Yet, experience shows that whenever we repeat a well-executed, well-planned experiment we obtain slightly different results each time. Scientists call this experimental error; statisticians term it random error; but how can random error be compatible with a cause-and-effect universe?

We do indeed live in a universe governed by rules, law, and physics, but we never perform exactly the same experiment twice. We cannot set our input conditions perfectly. We cannot measure our output exactly. As a practical matter, we fail to account fully for all possible influential factors. As an example, let us consider the timed flight of a ball dropped from a given height.

We may derive an equation from Newtonian physics such as t = √(2h/g), where t is the time, h is the height, and g is the acceleration due to gravity (9.8 m/s^2). In other words, if we drop a ball from 4.9 meters, it will take 1 second to hit the floor [t = √(2 · 4.9/9.8) = 1 s]. A ball dropped from 19.6 meters will take 2 seconds [t = √(2 · 19.6/9.8) = 2 s]. But in fact, when we perform an experiment, we never obtain exactly the theoretical result. Why not? The equation is wrong! It only accounts for gravitational acceleration. It does not account for errors in measurement, such as when I start and stop the stopwatch or exactly when I release the ball, or for other factors such as air friction, variations in the local gravitational constant, the Coriolis effect (the rotational effect of the earth), wind currents and air movements, gravitational effects of nearby bodies, relativistic effects, etc. There is a good reason for not including these in the model: their effects are real but vanishingly small for the matter at hand. To account for them, we can lump all of these into an error term (e). Then our model becomes t = √(2h/g) + e.
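A short simulation can make this concrete. The sketch below uses the standard kinematic result t = √(2h/g) and lumps the unmodeled influences into an error term; the decomposition into fifty small, equal-magnitude perturbations is purely an illustrative assumption, not a physical model of any particular factor.

```python
import math
import random

def drop_time(h, g=9.8, n_factors=50, scale=0.001):
    """Ideal fall time plus many small unmodeled perturbations (the error term e)."""
    t_ideal = math.sqrt(2 * h / g)
    # Each hypothetical factor (timing jitter, air currents, etc.)
    # nudges the measured time slightly up or down.
    e = sum(random.uniform(-scale, scale) for _ in range(n_factors))
    return t_ideal + e

random.seed(0)
trials = [drop_time(4.9) for _ in range(10)]
# Every trial is fully determined by the code and the seed, yet each
# repetition of the "experiment" differs slightly from the ideal 1 s.
print(trials)
```

Repeating the loop reproduces the everyday experience described above: a well-planned experiment, rerun, never gives exactly the same number twice, even though nothing in the simulation is uncaused.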

What can we deduce about e? Let us presume that e aggregates many independent factors and that, on average, some of these factors slightly shorten our measured time while some slightly lengthen it. Then e will distribute around some mean according to a normal (bell-shaped) distribution. This is what we call random error. Why can we expect a normal distribution? Because aggregating the responses of many independent, identically distributed factors gives a bell-shaped curve as a matter of mathematical certainty. The theorem is known as the central limit theorem of statistics.
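The claim is easy to check numerically. In the sketch below the individual factors are uniformly distributed (decidedly not bell-shaped); the choice of the uniform distribution and the factor count are arbitrary assumptions for illustration. Their sum nevertheless behaves like a normal variable, as the central limit theorem guarantees.

```python
import random
import statistics

random.seed(1)

def aggregate_error(n_factors=100):
    # Each factor is uniform on [-1, 1]: flat, not bell-shaped.
    return sum(random.uniform(-1, 1) for _ in range(n_factors))

samples = [aggregate_error() for _ in range(10_000)]

# By the central limit theorem the sum is approximately normal with
# mean 0 and variance n/3 (each uniform factor has variance 1/3).
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
within_1sd = sum(abs(s - mean) < stdev for s in samples) / len(samples)
print(mean, stdev, within_1sd)  # within_1sd comes out near 0.68, the normal value
```

About 68% of the samples fall within one standard deviation of the mean, the signature of the bell curve, even though no single factor is itself normally distributed.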

However, never does this imply that a ball’s motion is uncaused, or that God would somehow be confused about the time of flight for any trial. Therefore, randomness does not mean unpredictable in principle or formally unknowable. It would be better to say that out of ignorance and convenience we must model the universe as a stochastic system comprising deterministic and random factors. By a stochastic universe I mean a universe with significant influential factors which we account for individually (so-called deterministic factors) and a large number of less influential factors which we account for in aggregate (so-called random factors) by means of the probability distribution and its properties. In no case does a stochastic universe imply a causeless or formally unknowable universe.

The analogy of the random number generator
At this point, a useful analogy is the random number generator; the phrase itself is oxymoronic. Generation implies a definite numerical procedure. Random, at least colloquially, implies lack of a formal rule or procedure. Which is it? Knowing the actual algorithm and starting number (seed) of a random number generator permits one to predict the “random” number sequence with absolute certainty. For this reason, the output sequence from such generators is termed a pseudorandom sequence. In the author’s opinion, there is no qualitative distinction. Notwithstanding, one may regard statistically random events (e.g., a coin toss) as completely determined by physics (analogous to the algorithm) and the initial conditions (initial velocity and rotational speed to name two – analogous to the seed). In either case, the ignorance of finite beings makes the outcome unanticipated – one may even say unknowable in a practical sense – but it does not render the matter formally unknowable.

The critical equivocation
Therefore, statistical randomness is equivocated to mean “without rule or cause,” and this without justification, because the normal probability distribution requires order and rule to come to be. In conclusion, the science of statistics in no way demands a formally unknowable universe. The statistical concept of randomness is the result of the aggregated contribution of many independent causes acting according to physical rules and mathematical law to generate a repeatable aggregate behavior: the normal probability distribution. This is as far from anarchy as a thing can be.

Notes
[1] One may remember these four pillars of the scientific method with the acronym “only trust reliable facts.” The method has grown to include a hypothesis/validation cycle, a documentation/publication cycle, and inductive and deductive reasoning. Such a definition of science, pragmatic though it be, has been under challenge for some time, and is no longer fashionable. In my estimation, this is due to a self-defeating denial of absolutes and a confusion of normative science with the politics of science. See for example, Kuhn, Thomas, The Structure of Scientific Revolutions, University of Chicago Press, 1962.

[2] See for example 1 Sam 23:11-23, Matt 11:21, 23, and 2 Kings 13:19 for examples of conditional events that God knew were conditionally possible but not actual.

[3] Proverbs 25:2

[4] The Greeks were in a position to develop the scientific method, but were unwilling to consider empirical verification of their philosophy. A polytheistic worldview embraced a universe governed by caprice and whim. This appears to have been a major impediment toward development of science as we know it. Historically, the scientific method would have to wait for belief in a rational universe. By the sixteenth century, the spread of Christianity provided such a worldview. Historians generally credit Sir Francis Bacon (1561-1626) with the development of the scientific method, emphasizing experimentation and inductive reasoning in addition to deduction from general principles.

[5] The idea that science is explanatory of everything is a philosophical view more properly called scientism.

[6] Although there are technical and mathematical distinctions among these terms, all issue from the concept of an underlying probability distribution. Therefore, for the points I wish to emphasize in this paper, I shall treat the terms as synonymous.

[7] Titus 1:2

[8] Gen 18:25

[9] For example, predictive prophecy would be reduced to probable but not strictly certain events. In contrast, the scripture boldly declares “I declare the end from the beginning” (Isaiah 46:10).

[10] Sproul, R.C., Not a Chance: The Myth of Chance in Modern Science and Cosmology, Baker Books, Grand Rapids, Michigan, 1994.
