Saturday, February 29, 2020

Do dice play God?


This essay was recovered from a defunct web account.


A discussion of

Irreligion

by John Allen Paulos.

By PAUL CONANT
John Allen Paulos has done a service by compiling the various purported proofs of the existence of a (monotheistic) God and then shooting them down in his book Irreligion: a mathematician explains why the arguments for God just don't add up.

Paulos, a Temple University mathematician who writes a column for ABC News, would be the first to admit that he has not disproved the existence of God. But, he is quite skeptical of such existence, and I suppose much of the impetus for his book comes from the intelligent design versus accidental evolution controversy [1].

Really, this essay isn't exactly kosher, because I am going to cede most of the ground. My thinking is that if one could use logico-mathematical methods to prove God's existence, this would be tantamount to being able to see God, or to plumb the depths of God. Supposing there is such a God, is he likely to permit his creatures, without special permission, to go so deep?

This essay might also be thought rather unfair because Paulos is writing for the general reader and thus walks a fine line on how much mathematics to use. Still, he is expert at describing the general import of certain mathematical ideas, such as Gregory Chaitin's retooling of Kurt Goedel's undecidability theorem and its application to arguments about what a human can grasp about a "higher power."

Many of Paulos's counterarguments essentially arise from a Laplacian philosophy wherein Newtonian mechanics and statistical randomness rule all and are all. The world of phenomena, of appearances, is everything. There is nothing beyond. As long as we agree with those assumptions, we're liable to agree with Paulos. 

Just because...
Yet a caveat: though mathematics is remarkably effective at describing physical relations, mathematical abstractions are not themselves the essence of being (though even on this point there is a Platonic dispute), but are typically devices used for prediction. The deepest essence of being may well be beyond mathematical or scientific description -- perhaps, in fact, beyond human ken (as Paulos implies, albeit mechanistically, when discussing Chaitin and Goedel) [2].

Paulos's response to the First Cause problem is to question whether postulating a highly complex Creator provides a real solution. All we have done is push back the problem, he is saying. But here we must wonder whether phenomenal, Laplacian reality is all there is. Why shouldn't there be something deeper that doesn't conform to the notion of God as gigantic robot?

But of course it is the concept of randomness that is the nub of Paulos's book, and this concept is at root philosophical, and a rather thorny bit of philosophy it is at that. The topic of randomness certainly has some wrinkles that are worth examining with respect to the intelligent design controversy.

One of Paulos's main points is that merely because some postulated event has a terribly small probability doesn't mean that event hasn't or can't happen. There is a terribly small probability that you will be struck by lightning this year. But every year, someone is nevertheless stricken. Why not you?

In fact, zero probability doesn't mean impossible. Many probability distributions closely follow the normal curve, a continuous distribution on which each distinct point outcome has probability exactly zero; and yet one assumes that some particular outcome can be realized (perhaps by resort to the Axiom of Choice). Paulos applies this point to the probabilities for the origin of life, which the astrophysicist Fred Hoyle once likened to the chance of a tornado whipping through a junkyard and leaving a fully assembled jumbo jet in its wake. (Nick Lane in Life Ascending: The Ten Great Inventions of Evolution (W.W. Norton 2009) relates some interesting speculations about life self-organizing around undersea hydrothermal vents. So perhaps the probabilities aren't so remote after all, but, really, we don't know.) 

Shake it up, baby
What is the probability of a specific permutation of heads and tails in, say, 20 fair coin tosses? This is usually given as 0.5^20, or about one chance in a million. What is the probability of 18 heads followed by 2 tails? The same, according to one outlook.

Now that probability holds if we take all permutations, shake them up in a hat and then draw one. All permutations in that case are equiprobable [4]. Intuitively, however, it is hard to accept that 18 heads followed by 2 tails is just as probable as any other ordering. In fact, there are various statistical methods for challenging that idea [5].

One, which is quite useful, is the runs test, which determines the probability that a particular sequence falls within the random area of the related normal curve. A runs test of 18H followed by 2T gives a z score of 3.71, which isn't ridiculously high, but implies, with a confidence of about 0.999, that the ordering did not occur randomly.

Now compare that score with this permutation: HH TTT H TT H TT HH T HH TTT H. A runs test gives a z score of 0.046, which is very near the normal mean. To recap: the probability of drawing a number with 18 ones (or heads) followed by 2 zeros (or tails) from a hat full of all 20-digit strings is on the order of 10^-6. The probability that that sequence is random is on the order of 10^-4. For comparison, we can be highly confident the second sequence is, absent further information, random. (I actually took it from irrational root digit strings.)
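Both z scores above can be reproduced with the standard Wald-Wolfowitz runs test formulas. A minimal sketch (the function name is my own):

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test: z score for a two-symbol sequence."""
    n1 = seq.count(seq[0])          # count of one symbol
    n2 = len(seq) - n1              # count of the other
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1        # expected number of runs
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / math.sqrt(var)

print(round(abs(runs_test_z("H" * 18 + "T" * 2)), 2))      # ~3.7
print(round(abs(runs_test_z("HHTTTHTTHTTHHTHHTTTH")), 3))  # ~0.046
```

The 18H-2T sequence has only 2 runs against an expectation of 4.6, hence the large z; the second sequence has 11 runs against an expectation of 10.9, hence a z near zero.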

Again, those permutations with high runs test z scores are considered to be almost certainly non-random [3].

At the risk of flogging a dead horse, let us review Paulos's example of a very well-shuffled deck of ordinary playing cards. The probability of any particular permutation is about one in 10^68, as he rightly notes. But suppose we mark each card's face with a number, ordering the deck from 1 to 52. When the well-shuffled deck is turned over one card at a time, we find that the cards come out in exact sequential order. Yes, that might be random luck. Yet the runs test z score is a very large 7.563, which implies effectively 0 probability of randomness as compared to a typical sequence. (We would feel certain that the deck had been ordered by intelligent design.) 
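As a sanity check on the 10^68 figure, the number of orderings of a 52-card deck is 52!, which a couple of lines of Python confirm is about 8 x 10^67:

```python
import math

deck_orderings = math.factorial(52)   # 52! = number of deck permutations
print(len(str(deck_orderings)))       # 68 digits, i.e. about 8.07 x 10^67
```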

Does not compute
The intelligent design proponents, in my view, are trying to get at this particular point. That is, some probabilities fall, even with a lot of time, into the nonrandom area. I can't say whether they are correct about that view when it comes to the origin of life. But I would comment that when probabilities fall far out in a tail, statisticians will say that the probability of non-random influence is significantly high. They will say this if they are seeking either mechanical bias or human influence. But if human influence is out of the question, and we are not talking about mechanical bias, then some scientists dismiss the non-randomness argument simply because they don't like it.

Another issue raised by Paulos is the fact that some of Stephen Wolfram's cellular automata yield "complex" outputs. (I am currently going through Wolfram's A New Kind of Science (Wolfram Media 2002) carefully, and there are many issues worth discussing, which I'll do, hopefully, at a later date.)

Like mathematician Eric Schechter (see link below), Paulos sees cellular automaton complexity as giving plausibility to the notion that life could have resulted when some molecules knocked together in a certain way. Wolfram's Rule 110 is equivalent to a universal Turing machine, and this shows that a simple algorithm could yield any computer program, Paulos points out.
Paulos might have added that there is a countable infinity of computer programs. Each such program is computed according to the initial conditions of the Rule 110 automaton. Those conditions are the length of the starter cell block and the colors (black or white) of each cell.

So, a relevant issue is: if one feeds a randomly selected initial state into a UTM, what is the probability it will spit out a highly ordered (or complex or non-random) string versus a random string? In other words, what is the probability such a string would emulate some Turing machine? Runs test scores would show the obvious: so-called complex strings will fall way out under a normal curve tail. 

Grammar tool
I have run across quite a few ways of gauging complexity, but, barring an exact molecular approach, it seems to me the concept of a grammatical string is relevant.

Any cell, including the first, may be described as a machine. It transforms energy and does work (as in W = (1/2)mv^2). Hence it may be described with a series of logic gates. These logic gates can be combined in many ways, but most permutations won't work (the jumbo jet effect).

For example, if we have 8 symbols and a string of length 20, we have 125,970 different arrangements. But how likely is it that a random arrangement will be grammatical?

Let's consider a toy grammar with the symbols a,b,c. Our only grammatical rule is that b may not immediately follow a.

So for the first three symbols, abc and cab are illegal and the other four possibilities are legal. This gives a (1/3) probability of error on the first step. In this case, the probability of error at every third step is not independent of the previous probability, as can be seen by the permutations:
 abc  bca  acb  bac  cba  cab
That is, for example, bca followed by bac gives an illegal ordering. So the probability of error increases with n.

However, suppose we hold the probability of error at (1/3) per three-symbol block. In that case the probability of a legal string where n = 30 is less than (2/3)^10 = 1.73%. Even if the string can tolerate noise, the error probabilities rise rapidly. Suppose a string of 80 can tolerate 20 percent of its digits being wrong. In that case our effective exponent becomes 80(0.8)/3 = 21.333, so that the probability of success is (2/3)^21.333 = 0.000175.
And this is a toy model. The actual probabilities for long grammatical strings are found far out under a normal curve tail. 
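The toy grammar can also be checked by brute force. The sketch below (my own illustration, not the author's calculation) counts, for small n, the exact fraction of strings over {a, b, c} that avoid the forbidden substring "ab"; that fraction falls steadily with n, in the same spirit as the (2/3)-per-three-symbols estimate.

```python
from itertools import product

def legal_fraction(n):
    """Exact fraction of length-n strings over {a,b,c} containing no 'ab'."""
    total = 0
    legal = 0
    for p in product('abc', repeat=n):
        total += 1
        if 'ab' not in ''.join(p):
            legal += 1
    return legal / total

for n in (3, 6, 9):
    print(n, round(legal_fraction(n), 4))   # fraction shrinks as n grows
```

For n = 3 the exact fraction is 21/27; by n = 9 it has fallen to 6765/19683, about a third.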

This is to inform you
A point that arises in such discussions concerns entropy (the tendency toward decrease of order) and the related idea of information, which is sometimes thought of as the surprisal value of a digit string. Sometimes a pattern such as HHHH... is considered to have low information because we can easily calculate the nth value (assuming we are using some algorithm to obtain the string). So the Chaitin-Kolmogorov complexity is low, or that is, the information is low. On the other hand a string that by some measure is effectively random is considered here to be highly informative because the observer has almost no chance of knowing the string in detail in advance.
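One crude but concrete proxy for Chaitin-Kolmogorov complexity (which is uncomputable in general) is compressed length: a highly patterned string compresses to almost nothing, while a random-like string barely compresses at all. A sketch using only Python's standard zlib:

```python
import random
import zlib

ordered = b"H" * 1000                      # the HHHH... pattern
random.seed(0)                             # reproducible pseudo-random bytes
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(len(zlib.compress(ordered)))   # a handful of bytes: low complexity
print(len(zlib.compress(noisy)))     # near 1000 bytes: effectively incompressible
```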

However, we can also take the opposite tack. Using runs testing, most digit strings (multi-value strings can often be transformed, for test purposes, to bi-value strings) are found under the bulge in the runs test bell curve and represent probable randomness. So it is unsurprising to encounter such a string. It is far more surprising to come across a string with far "too few" or far "too many" runs. These highly ordered strings would then be considered to have high information value.

This distinction may help address Wolfram's attempt to cope with "highly complex" automata. By these, he means those with irregular, random-like structures running through periodic "backgrounds." If a sufficiently long runs test were done on such automata, we would obtain, I suggest, z scores in the high but not outlandish range. The z score would give a gauge of complexity.

We might distinguish complicatedness from complexity by saying that a random-like permutation of our grammatical symbols is merely complicated, but a grammatical permutation, possibly adjusted for noise, is complex. (We see, by the way, that grammatical strings require conditional probabilities.) 

A jungle out there
Paulos's defense of the theory of evolution is precise as far as it goes but does not acknowledge the various controversies on speciation among biologists, paleontologists and others.

Let us look at one of his counterarguments:

The creationist argument goes roughly as follows: "A very long sequence of individually improbable mutations must occur in order for a species or a biological process to evolve. If we assume these are independent events, then the probability that all of them will occur in the right order is the product of their respective probabilities" and hence a speciation probability is miniscule. "This line of argument," says Paulos, "is deeply flawed."

He writes: "Note that there are always a fantastically huge number of evolutionary paths that might be taken by an organism (or a process), but there is only one that actually will be taken. So, if, after the fact, we observe the particular evolutionary path actually taken and then calculate the a priori probability of its having been taken, we will get the miniscule probability that creationists mistakenly attach to the process as a whole."

Though we have dealt with this argument in terms of probability of the original biological cell, we must also consider its application to evolution via mutation. We can consider mutations to follow conditional probabilities. And though a particular mutation may be rather probable by being conditioned by the state of the organism (previous mutation and current environment), we must consider the entire chain of mutations represented by an extant species.

If we consider each species as representing a chain of mutations from the primeval organism, then we have for each a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.

As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9 and 60 with 0.7 and 30 with 0.5 yields an overall probability of 1.65 x 10^-19. In other words, the more mutations and ancestral species attributed to an extant species, the less likely that species is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.
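The arithmetic of that example is a one-line check:

```python
# Product of the 100 branch probabilities: 10 at 0.9, 60 at 0.7, 30 at 0.5
p = (0.9 ** 10) * (0.7 ** 60) * (0.5 ** 30)
print(f"{p:.2e}")   # 1.65e-19
```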

Think of it this way: During an organism's lifetime, there is a fantastically large number of possible mutations. What is the probability that the organism will happen upon one that is beneficial? That event would, if we are talking only about passive natural selection, be found under a probability distribution tail (whether normal, Poisson or other). The probability of even a few useful mutations occurring over 3.5 billion years isn't all that great (though I don't know a good estimate).

A botific vision
Let us, for example, consider Wolfram's cellular automata, which he puts into four qualitative classes of complexity. One of Wolfram's findings is that adding complexity to an already complex system does little or nothing to increase the complexity, though randomized initial conditions might speed the trend toward a random-like output (a fact which, we acknowledge, could be relevant to evolution theory).

Now suppose we take some cellular automata and, at every nth or so step, halt the program and revise the initial conditions slightly or greatly, based on a cell block between cell n and cell n+m. What is the likelihood of increasing complexity to the extent that a Turing machine is devised? Or suppose an automaton is already a Turing machine. What is the probability that it remains one or that a more complex-output Turing machine results from the mutation?

I haven't calculated the probabilities, but I would suppose they are all out under a tail.

In countering the idea that "self-organization" is unlikely, Paulos has elsewhere underscored the importance of Ramsey theory, which has an important role in network theory. Actually, with sufficient n, "highly organized" networks are very likely [6]. Whether this implies sufficient resources for the self-organization of a machine is another matter. True, a high n seems to guarantee such a possibility. But the n may be too high to be reasonable. 

Darwin on the Lam?
However, it seems passive natural selection has an active accomplice in the extraordinarily subtle genetic machinery. It seems that some form of neo-Lamarckianism is necessary, or at any rate a negative feedback system which tends to damp out minor harmful mutations without ending the lineage altogether (catastrophic mutations usually go nowhere, the offspring most often not getting a chance to mate). 

Matchmaking
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.

For example, if one has n addressed envelopes and n identically addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the alternating sum 1 - 1/2! + 1/3! - 1/4! ... up to 1/n!. For n greater than 10 the probability converges to about 63%.

That is, we don't calculate, say, 11^-11 (about 3.5 x 10^-12), but find that our series very closely approximates 1 - e^-1 = 0.632.
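The alternating series and its limit are easy to verify; a sketch (the function name is my own):

```python
from math import exp, factorial

def p_at_least_one_match(n):
    """P(at least one of n shuffled letters lands in its own envelope)."""
    return sum((-1) ** (k + 1) / factorial(k) for k in range(1, n + 1))

for n in (3, 5, 11):
    print(n, round(p_at_least_one_match(n), 4))
print(round(1 - exp(-1), 4))   # the limit, ~0.6321
```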

Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?

The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
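The sock computation can be carried out in exact fractions; a sketch:

```python
from fractions import Fraction

# 8 pairs = 16 socks; draw 6 one by one. Before each new draw, every sock
# already in hand has exactly one partner still in the drawer.
p_no_match = Fraction(1)
for in_hand, remaining in zip(range(1, 6), range(15, 10, -1)):
    p_no_match *= Fraction(remaining - in_hand, remaining)

print(p_no_match)             # 32/143
print(float(1 - p_no_match))  # ~0.776, at least one matching pair
```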

It may be that the ins and outs of evolution arguments were beyond the scope of Irreligion, but I don't think Paulos has entirely refuted the skeptics [7].

Nevertheless, the book is a succinct reference work and deserves a place on one's bookshelf.

1. Paulos finds himself disconcerted by the "overbearing religiosity of so many humorless people." Whenever one upholds an unpopular idea, one can expect all sorts of objections from all sorts of people, not all of them well mannered or well informed. Comes with the territory. Unfortunately, I think this backlash may have blinded him to the many kind, cheerful and non-judgmental Christians and other religious types in his vicinity. Some people, unable to persuade Paulos of God's existence, end the conversation with "I'll pray for you..." I can well imagine that he senses that the pride of the other person is motivating a put-down. Some of these souls might try not letting the left hand know what the right hand is doing.

2. Paulos recounts this amusing fable: The great mathematician Euler was called to court to debate the necessity of God's existence with a well-known atheist. Euler opens with: "Sir, (a + b^n)/n = x. Hence, God exists. Reply." Flabbergasted, his mathematically illiterate opponent walked away, speechless. Yet, is this joke as silly as it at first seems? After all, one might say that the mental activity of mathematics is so profound (even if the specific equation is trivial) that the existence of a Great Mind is implied.

3. We should caution that the runs test, which works when n1 and n2 are each at least 8, fails for the pattern HH TT HH TT... This failure seems to be an artifact of the runs test assumption that a usual number of runs is about n/2. I suggest that we simply say that the probability of that pattern is less than or equal to that of H T H T H T..., a pattern whose z score rises rapidly with n. Other patterns such as HHH TTT HHH... also climb away from the randomness area slowly with n. With these cautions, however, the runs test gives striking results.

4. Thanks to John Paulos for pointing out an embarrassing misstatement in a previous draft. I somehow mangled the probabilities during the editing. By the way, my tendency to write flubs when I actually know better is a real problem for me and a reason I need attentive readers to help me out.

5. I also muddled this section. Josh Mitteldorf's sharp eyes forced a rewrite.

6. Paulos in a column writes: 'A more profound version of this line of thought can be traced back to British mathematician Frank Ramsey, who proved a strange theorem. It stated that if you have a sufficiently large set of geometric points and every pair of them is connected by either a red line or a green line (but not by both), then no matter how you color the lines, there will always be a large subset of the original set with a special property. Either every pair of the subset's members will be connected by a red line or every pair of the subset's members will be connected by a green line.

If, for example, you want to be certain of having at least three points all connected by red lines or at least three points all connected by green lines, you will need at least six points. (The answer is not as obvious as it may seem, but the proof isn't difficult.) For you to be certain that you will have four points, every pair of which is connected by a red line, or four points, every pair of which is connected by a green line, you will need 18 points, and for you to be certain that there will be five points with this property, you will need -- it's not known exactly -- between 43 and 55. With enough points, you will inevitably find unicolored islands of order as big as you want, no matter how you color the lines.'
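Paulos's six-point claim (the Ramsey number R(3,3) = 6) is small enough to confirm exhaustively. A brute-force sketch: every red/green coloring of the 15 edges of K6 contains a one-color triangle, while K5 admits a coloring (the pentagon/pentagram coloring) that avoids one.

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """Does this red/green edge-coloring of K_n contain a one-color triangle?"""
    color = dict(zip(combinations(range(n), 2), coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

# Every coloring of K6's 15 edges contains a one-color triangle...
print(all(has_mono_triangle(c, 6) for c in product('RG', repeat=15)))  # True
# ...but K5 (10 edges) can be colored to avoid one.
print(all(has_mono_triangle(c, 5) for c in product('RG', repeat=10)))  # False
```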

7. Paulos, interestingly, tells of how he lost a great deal of money by an ill-advised enthusiasm for WorldCom stock in A Mathematician Plays the Stock Market (Basic Books, 2003). The expert probabilist and statistician found himself under a delusion which his own background should have fortified him against. (The book, by the way, is full of penetrating insights about probability and the market.) One wonders whether Paulos might also be suffering from another delusion: that probabilities favor atheism.


The knowledge delusion: a rebuttal of Dawkins
Hilbert's 6th problem and Boolean circuits
Wikipedia article on Chaitin-Kolmogorov complexity
In search of a blind watchmaker
Wikipedia article on runs test
Eric Schechter on Wolfram vs intelligent design
The scientific embrace of atheism (by David Berlinski)
John Allen Paulos's home page
The many worlds of probability, reality and cognition

On Hilbert's sixth problem


This essay was recovered from a now defunct internet account.

There is no consensus on whether Hilbert's sixth problem -- can physics be axiomatized? -- has been answered.

From Wikipedia, we have this statement attributed to Hilbert:

6. Mathematical treatment of the axioms of physics. The investigations of the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.

Hilbert proposed his problems near the dawn of the Planck revolution, while the debate was raging about statistical methods and entropy, and the atomic hypothesis. It would be another five years before Einstein conclusively proved the existence of atoms.

It would be another year before Russell discovered the set of all sets paradox, which is similar to Cantor's power set paradox. Though Cantor uncovered this paradox, or perhaps theorem, in the late 1890s, I am uncertain how cognizant of it Hilbert was.

Interestingly, by the 1920s, Zermelo, Fraenkel and Skolem had axiomatized set theory, specifically forbidding that a set could be an element of itself and hence getting rid of the annoying self-referencing issues that so challenged Russell and Whitehead. But, in the early 1930s, along came Goedel and proved that ZF set theory was either inconsistent or incomplete. His proof actually used Russell's Principia Mathematica as a basis, but it generalizes to apply to all but very limited mathematical systems of deduction. Since mathematical systems can be defined in terms of ZF, it follows that ZF must contain some theorems that cannot be traced back to axioms. So the attempt to axiomatize ZF didn't completely succeed.

In turn, it would seem that Goedel, who began his career as a physicist, had knocked the wind out of Problem 6. Of course, many physicists have not accepted this point, arguing that Goedel's incompleteness theorem applies to only one essentially trivial matter.

In a previous essay, I have discussed the impossibility of modeling the universe as a Turing machine. If that argument is correct, then it would seem that Hilbert's sixth problem has been answered. But I propose here to skip the Turing machine concept and use another idea.

Conceptually, if a number is computable, a Turing machine can compute it. Church's lambda calculus, a recursive method, can also compute any computable number. Turing showed that the general Turing machine and the lambda calculus are equivalent in power; the Church-Turing thesis is the further, unprovable conjecture that these formalisms capture everything that is effectively computable (rationals or rational approximations to irrationals).

But Boolean algebra is the real-world venue used by computer scientists. If an output can't be computed with a Boolean system, no one will bother with it. So it seems appropriate to define an algorithm as anything that can be modeled by an mxn truth table and its corresponding Boolean statement.

The truth table has a Boolean statement where each element is above the relevant column. So a sequence of truth tables can be redrawn as a single truth table under a statement combined from the sub-statements. If a sequence of truth tables branches into parallel sequences, the parallel sequences can be placed consecutively and recombined with an appropriate connective.
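The recombination step can be sketched concretely. Below, two sub-statements over the same variables are combined with a connective into a single statement, whose one truth table subsumes the two sub-tables (the names and the two sample statements are illustrative):

```python
from itertools import product

def truth_table(statement, num_vars):
    """All rows (inputs, output) for a Boolean statement of num_vars inputs."""
    return [(row, statement(*row))
            for row in product((False, True), repeat=num_vars)]

f = lambda p, q: p and not q                  # first sub-statement
g = lambda p, q: p or q                       # second sub-statement
combined = lambda p, q: f(p, q) and g(p, q)   # recombined with the connective AND

for row, value in truth_table(combined, 2):
    print(row, value)
```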

One may ask about more than one simultaneous output value. We regard this as a single output set with n output elements.

So then, if something is computable, we expect that there is some finite mxn truth table and corresponding Boolean statement. Now we already know that Goedel has proved that, for any sufficiently rich system, there is a Boolean statement that is true, but NOT provably so. That is, the statement is constructible using lawful combinations of Boolean symbols, but the statement cannot be derived from axioms without extension of the axioms, which in turn implies another statement that cannot be derived from the extended axioms, ad infinitum.

Hence, not every truth table, and not every algorithm, can be reduced to axioms. That is, there must always be an algorithm or truth table that shows that a "scientific" system of deduction is always either inconsistent or incomplete.

Now suppose we ignore that point and assume that human minds are able to model the universe as an algorithm, perhaps as some mathematico-logical theory; i.e., a group of "cause-effect" logic gates, or specifically, as some mxn truth table. Obviously, we have to account for quantum uncertainty. Yet, suppose we can do that and also suppose that the truth table need only work with rational numbers, perhaps on grounds that continuous phenomena are a convenient fiction and that the universe operates in quantum spurts.

Yet there is another proof of incompleteness. The algorithm, or its associated truth table, is an output value of the universe -- though some might argue that the algorithm is a Platonic idea that one discovers rather than constructs. Still, once scientists arrive at this table, we must agree that the laws of mechanics supposedly were at work so that the thoughts and actions of these scientists were part of a massively complex system of logic gate equivalents.

So then the n-character, grammatically correct Boolean statement for the universe must have itself as an output value. Now, we can regard this statement as a unique number by simply assigning integer values to each element of the set of Boolean symbols. The integers then follow a specific order, yielding a corresponding integer. (The number of symbols n may be regarded as corresponding to some finite time interval.)
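The encoding step described here can be sketched in a few lines: assign each Boolean symbol an integer and read the whole statement as a number in that base. The symbol set below is illustrative, not a claim about any particular formal system.

```python
SYMBOLS = ['p', 'q', 'r', '(', ')', '&', '|', '~']   # illustrative symbol set
CODE = {s: i for i, s in enumerate(SYMBOLS)}
BASE = len(SYMBOLS)

def encode(statement):
    """Read the statement as a base-BASE integer."""
    n = 0
    for ch in statement:
        n = n * BASE + CODE[ch]
    return n

def decode(n, length):
    """Recover the statement from its number (length must be supplied)."""
    chars = []
    for _ in range(length):
        n, r = divmod(n, BASE)
        chars.append(SYMBOLS[r])
    return ''.join(reversed(chars))

s = '(p&~q)|r'
print(encode(s))
print(decode(encode(s), len(s)) == s)   # True: the mapping is reversible
```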

Now then, supposing the cosmos is a machine governed by the cosmic program, the cosmic number should be computable by this machine (again the scientists involved acting as relays, logic gates and so forth). However, the machine needs to be instructed to compute this number. So the machine must compute the basis of the "choice." So it must have a program to compute the program that selects which Boolean statement to use, which in turn implies another such program, ad infinitum.

In fact, there are two related issues here: the Boolean algebra used to represent the cosmic physical system requires a set of axioms, such as Hutchinson's postulates, in order to be of service. But how does the program decide which axioms it needs for itself? Similarly, the specific Boolean statement requires its own set of axioms. Again, how does the program decide on the proper axioms?

So then, the cosmos cannot be fully modeled according to normal scientific logic -- though one can use such logic to find intersections of sets of "events." Then one is left to wonder whether a different system of representation might also be valid, though the systems might not be fully equivalent.

At any rate, the verdict is clear: what is normally regarded as the discipline of physics cannot be axiomatized without resort to infinite regression.

So, we now face the possibility that two scientific systems of representation may each be correct and yet not equivalent.

To illustrate this idea, consider the base 10 and base 2 number systems. There are some non-integer rationals with terminating expansions in base 10 that have no terminating expansion in base 2, although approximations can be made as close as we like. These two systems of representation of rationals are, strictly speaking, not equivalent.
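A reduced fraction has a terminating expansion in base b exactly when every prime factor of its denominator divides b; a small check (sketch, function name my own):

```python
from fractions import Fraction
from math import gcd

def terminates(frac, base):
    """True if frac has a finite (terminating) expansion in the given base."""
    d = frac.denominator
    g = gcd(d, base)
    while g > 1:                 # strip out the prime factors shared with base
        while d % g == 0:
            d //= g
        g = gcd(d, base)
    return d == 1                # anything left cannot terminate

print(terminates(Fraction(1, 10), 10))  # True: 0.1 in decimal
print(terminates(Fraction(1, 10), 2))   # False: the binary expansion repeats
```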

(Cantor's algorithm to compute all the rationals uses the base 10 system. However, he did not show that all base n rationals appear in the base 10 system.)
2 Comments:
At 6:41 PM , Blogger Unknown said...
Hilbert's Sixth has been solved.

www.EinsteinGravity.com

At 1:42 PM , Anonymous Anonymous said...
A) Your assertion that ZF must contain some theorems which cannot be proved is rather sloppy. A theorem is a proposition which can be proved. You meant to say that ZF contains some propositions which cannot be proved, and whose negations cannot be proved either. This is incompleteness.

B) Goedel's Incompleteness Theorem has nothing to do with whether Physics can be axiomatised. It might have something to do with whether Theoretical Physics can ever be complete. But it might not. After all, Theoretical Physics does not have to include all of ZF. Set theory might go way beyond physical reality, so that Theoretical Physics would not actually contain all of mathematics, and thus Goedel's theorem would not apply. In fact, Goedel also proved a completeness theorem: First order Logic is complete. Now, why would a physicist want second-order quantifiers? So, who knows.

C) Forget Boolean algorithms and Turing Machines. No algorithm can be exactly modelled by a physical machine: there is always some noise. E.g., no square wave, and hence no string of bits, can be completely reliably produced in the real world, and if it could be produced, the information in it, the «signal», still could not be reliably extracted in an error-free fashion by any finite physical apparatus.

The cosmos cannot be fully modeled as a Turing machine


¶ This essay, which had been deleted by Angelfire, was recovered via the Wayback Machine.
¶ Note: The word "or" is usually used in the following discussion in the inclusive sense.

Many subscribe to the view that the cosmos is essentially a big machine which can be analyzed and understood in terms of other machines.

A well-known machine is the general Turing machine, which is a logic system that can be modified to obtain any discrete-input computation. Richard Feynman, the brilliant physicist, is said to have been fascinated by the question of whether the cosmos is a computer -- originally saying no but later insisting the opposite. As a quantum physicist, Feynman would have realized that the question was difficult. If the cosmos is a computer, it certainly must be a quantum computer. But what does that certainty mean? Feynman, one assumes, would also have concluded that the cosmos cannot be modeled as a classical computer, or Turing machine.1

Let's entertain the idea that the cosmos can be represented as a Turing machine or Turing computation. This notion is equivalent to the idea that neo-classical science (including relativity theory) can explain the cosmos. That is, we could conceive of every "neo-classical action" in the cosmos to date -- using absolute cosmic time, if such exists -- as being represented by a huge logic circuit, which in turn can be reduced to some instance (computation) of a Turing algorithm. God wouldn't be playing dice. A logic circuit always follows if-then rules, which we interpret as causation. But, as we know, at the quantum level, if-then rules only work (with respect to the observer) within constraints, so we might very well argue that QM rules out the cosmos being a "classical" computer.

On the other hand, some would respond by arguing that quantum fuzziness is so minuscule on a macroscopic (human) scale that the cosmos can be quite well represented as a classical machine. That is, the fuzziness cancels out on average. They might also note that quantum fluctuations in electrons do not have any significant effect on the accuracy of computers -- though this may not be true as computer parts head toward the nanometer scale. (My personal position is that there are numerous examples of the scaling up or amplification of quantum effects. "Schrodinger's cat" is the archetypal example.)

Of course, another issue is that the cosmos should itself have a wave function that is a superposition of all possible states -- until observed by someone (who?). (I will not proceed any further on the measurement problem of quantum physics, despite its many fascinating aspects.)

Before going any further on the subject at hand, we note that a Turing machine is finite (although the set of such machines is denumerably infinite). So if one takes the position that the cosmos -- or specifically, the cosmic initial conditions (or "singularity") -- are effectively infinite, then no Turing algorithm can model the cosmos. So let us consider a mechanical computer-robot, A, whose program is a general Turing machine. A is given a program that instructs the robotic part of A to select a specific Turing machine, and to select the finite set of initial values (perhaps the "constants of nature"), that models the cosmos.

What algorithm is used to instruct A to choose a specific cosmos-outcome algorithm and computation? This is a typical chicken-or-the-egg self-referencing question and as such is related to Turing's halting problem, Godel's incompleteness theorem and Russell's paradox.
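The regress invoked here is, at bottom, Turing's diagonal argument. As a hedged illustration (the names `halts` and `contrary` are hypothetical, not part of any real library), here is the classic contradiction sketched in Python: if a total halting oracle existed, one could build a program that defeats it.

```python
# A sketch of Turing's diagonal argument, which underlies the
# regress above. Suppose (hypothetically) a total function
# halts(f, x) existed that decides whether f(x) ever halts.

def make_contrary(halts):
    """Build a program that does the opposite of what the
    supposed halting oracle predicts about it."""
    def contrary(f):
        if halts(f, f):      # oracle says f(f) halts...
            while True:      # ...so loop forever
                pass
        return "halted"      # oracle says f(f) loops, so halt
    return contrary

# Feeding contrary to itself forces a contradiction:
# halts(contrary, contrary) can be neither True nor False,
# so no such total oracle exists.
```

Whatever fixed answer the oracle gives about `contrary(contrary)`, the program does the opposite -- the same self-referencing bind the essay finds in an algorithm that must select itself.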

If there is an algorithm B to select an algorithm A, what algorithm selected B? -- leading us to an infinite regression. Well, suppose that A has the specific cosmic algorithm, with a set of discrete initial input numbers, a priori? That algorithm, call it Tc, and its instance (the finite set of initial input numbers and the computation, which we regard as still running), imply the general Turing algorithm Tg. We know this from the fact that, by assumption, a formalistic description of Alan Turing and his mathematical logic result were implied by Tc. On the other hand, we know that every computable result is programmable by modifying Tg. All computable results can be cast in the form of "if-then" logic circuits, as is evident from Turing's result.

So we have

Tc <--> Tg

Though this result isn't clearly paradoxical, it is a bit disquieting in that we have no way of explaining why Turing's result didn't "cause" the universe. That is, why didn't it happen that Tg implied Turing who (which) in turn implied the Big Bang? That is, wouldn't it be just as probable that the universe kicked off as Alan Turing's result, with the Big Bang to follow? (This is not a philosophical question so much as a question of logic.)

Be that as it may, the point is that we have not succeeded in fully modeling the universe as a Turing machine.

The issue in a nutshell: how did the cosmos instruct itself to unfold? Since the universe contains everything, it must contain the instructions for its unfoldment. Hence, we have the Tc instructing its program to be formed.

Another way to say this: If the universe can be modeled as a Turing computation, can it also be modeled as a program? If it can be modeled as a program, can it then be modeled as a robot forming a program and then carrying it out?

In fact, by Godel's incompleteness theorem, we know that the issue of Tc "choosing" itself to run implies that the Tc is a model (mathematically formal theory) that is inconsistent or incomplete. This assertion follows from the fact that the Tc requires a set of axioms in order to exist (and hence "run"). That is, there must be a set of instructions that orders the layout of the logic circuit. However, by Godel's result, the Turing machine is unable to determine a truth value for some statements relating to the axioms without extending the theory ("rewiring the logic circuit") to include a new axiom.

This holds even if Tc = Tg (though such an equality implies a continuity between the program and the computation which perforce bars an accurate model using any Turing machines).

So then, any model of the cosmos as a Boolean logic circuit is inconsistent or incomplete. In other words, a Turing machine cannot fully describe the cosmos.

If by "Theory of Everything" is meant a formal logico-mathematical system built from a finite set of axioms [though, in fact, Zermelo-Fraenkel set theory includes infinitely many axioms, by way of its axiom schemas], then that TOE is either incomplete or inconsistent. Previously, one might have argued that no one has formally established that a TOE is necessarily rich enough for Godel's incompleteness theorem to be known to apply. Or, as is common, the self-referencing issue is brushed aside as a minor technicality.

Of course, the Church thesis essentially tells us that any logico-mathematical system can be represented as a Turing machine or set of machines and that any logico-mathematical value that can be expressed from such a system can be expressed as a Turing machine output. (Again, Godel puts limits on what a Turing machine can do.)

So, if we accept the Church thesis -- as most logicians do -- then our result says that there is always "something" about the cosmos that Boolean logic -- and hence the standard "scientific method" -- cannot explain.

Even if we try representing "parallel" universes as a denumerable family of computations of one or more Turing algorithms, with the computational instance varying by input values, we face the issue of what would be used to model the master programmer.

Similarly, one might imagine a larger "container" universe in which a full model of "our" universe is embedded. Then it might seem that "our" universe could be modeled in principle, even if not modeled by a machine or computation modeled in "our" universe. Of course, we then apply our argument to the container universe. This reminds us that every sufficiently rich theory requires an infinity of extensions in order to incorporate the next stage of axioms, and that, in order to avoid the paradox inherent in the set of all universes, we would have to resort to a Zermelo-Fraenkel-type axiomatic ban on such a set.

Now we arrive at another point: If the universe is modeled as a quantum computation, would not such a framework possibly resolve our difficulty?

If we use a quantum computer and computation to model the universe, we will not be able to use a formal logical system to answer all questions about it, including what we loosely call the "frame" question -- unless we come up with new methods and standards of mathematical proof that go beyond traditional Boolean analysis.

Let us examine the hope expressed in Stephen Wolfram's A New Kind of Science that the cosmos can be summarized in some basic rule of the type found in his cellular automata graphs.

We have no reason to dispute Wolfram's claim that his cellular automata rules can be tweaked to mimic any Turing machine. (And it is of considerable interest that he finds specific CA/TM that can be used for a universal machine.)

So if the cosmos can be modeled as a Turing machine then it can be modeled as a cellular automaton. However, a CA always has a first row, where the algorithm starts. So the algorithm's design -- the Turing machine -- must be axiomatic. In that case, the TM has not modeled the design of the TM nor the specific initial conditions, which are both parts of a universe (with that word used in the sense of totality of material existence).
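For concreteness, here is a minimal sketch of an elementary cellular automaton of the kind Wolfram studies (Rule 110, which Matthew Cook proved computationally universal). The wrap-around boundary and the particular seed row are illustrative implementation choices; note that the computation still needs an explicit first row, which is exactly the point at issue.

```python
def step(row, rule=110):
    """One step of an elementary cellular automaton: each new
    cell depends on the 3-cell neighborhood above it (with
    wrap-around at the edges)."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The computation needs an explicit first row -- the initial
# condition that, per the argument above, the machine itself
# does not model.
row = [0] * 31 + [1] + [0] * 31
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row, rule=110)
```

The rule number encodes the new cell for each of the eight possible neighborhoods; changing it to any value from 0 to 255 gives a different elementary CA, but every one of them starts from a designed rule and a given first row.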

We could of course think of a CA in which the first row is attached to the last row and a cylinder formed. There would be no specific start row. Still, we would need a CA whereby the rule applied with arbitrary row n as a start yields the same total output as the rule applied at arbitrary row m. This might resolve the time problem, but it is yet to be demonstrated that such a CA -- with an extraordinarily complex output -- exists. (Forgive the qualitative term extraordinarily complex. I hope to address this matter elsewhere soon.)

However, even with time out of the way, we still have the problem of the specific rule to be used. What mechanism selects that? Obviously it cannot be something from within the universe. (Shades of Russell's paradox.)
Footnote
1. Informally, one can think of a general Turing machine as a set of logic gates that can compose any Boolean network. That is, we have a set of gates such as "not," "and," "or," "exclusive or," "copy," and so forth. If-then is set up as "not-P or Q," where P and Q themselves are networks constructed from such gates. A specific Turing machine will then yield the same computation as a specific logic circuit composed of the sequence of gates.

By this, we can number any computable output by its gates. Assuming we have fewer than 10 gate types (which is more than enough), we can assign a base-10 digit to each gate. In that case, the code number of the circuit is simply the digit string representing the sequence of gates.
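As a hedged sketch of this numbering scheme (the particular digit assignments below are arbitrary, chosen only for illustration):

```python
# Assign one base-10 digit to each gate type; the assignment
# itself is arbitrary -- these digits are illustrative only.
GATES = {"not": 0, "and": 1, "or": 2, "xor": 3, "copy": 4}

def circuit_number(gates):
    """Encode a gate sequence as a digit string -- the
    circuit's code number."""
    return "".join(str(GATES[g]) for g in gates)

# "if P then Q" rendered as "not-P or Q" uses the gates not, or:
print(circuit_number(["not", "or"]))  # "02"
```

Since every circuit maps to a finite digit string, the set of circuits -- and hence of programs -- is countable, as the text goes on to use.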

Note that circuit A and circuit B may yield the same computation. Still, there is a countable infinity of such programs, though, if we use any real for an input value, we would have an uncountable infinity of outputs. But this cannot be, because an algorithm for producing a real number in a finite number of steps can only produce a rational approximation of an irrational. Hence, there is only a countable number of outputs.
¶ Dave Selke, an electrical engineer with a computer background, has made a number of interesting comments concerning this page, spurring me to redo the argument in another form. The new essay is entitled On Hilbert's sixth problem.
¶ Thanks to Josh Mitteldorf, a mathematician and physicist, for his incisive and helpful comments. Based upon a previous draft, Dr. Mitteldorf said he believes I have shown that, if the universe is finite, it cannot be modeled by a subset of itself but he expressed wariness over the merit of this point.

Draft 3 (includes discussion of Wolfram cellular automata)


Sunday, February 9, 2020

Some archived philosophy posts by Conant

In this single post is an archive of older posts from another account which I no longer control.


Tuesday, May 7, 2019


What is a continuum?
Russell knocks Hegel's logic (1903)


Bertrand Russell, in his Principles of Mathematics (1903), comments on G.W. Hegel's Logic:
271. The notion of continuity has been treated by philosophers, as a rule, as though it were incapable of analysis. They have said many things about it, including the Hegelian dictum that everything discrete is also continuous and vice versâ. This remark, as being an exemplification of Hegel’s usual habit of combining opposites, has been tamely repeated by all his followers. But as to what they meant by continuity and discreteness, they preserved a discreet and continuous silence; only one thing was evident, that whatever they did mean could not be relevant to mathematics, or to the philosophy of space and time. -- Chapter XXXV, Section 271.
Though Russell gives a page number from William Wallace's translation of Hegel's logic, I could not pin down the reference exactly. Still, the following excerpt from Wallace's version of "Being Part One of the Encyclopaedia of The Philosophical Sciences (1830)" [known as The Science of Logic] gives you an idea of what Russell meant.

From Wallace's translation:
Quantity, as we saw, has two sources: the exclusive unit, and the identification or equalisation of these units. When we look therefore at its immediate relation to self, or at the characteristic of self-sameness made explicit by attraction, quantity is Continuous magnitude; but when we look at the other characteristic, the One implied in it, it is Discrete magnitude. Still continuous quantity has also a certain discreteness, being but a continuity of the Many; and discrete quantity is no less continuous, its continuity being the One or Unit, that is, the self-same point of the many Ones.

(1) Continuous and Discrete magnitude, therefore, must not be supposed two species of magnitude, as if the characteristic of the one did not attach to the other. The only distinction between them is that the same whole (of quantity) is at one time explicitly put under the one, at another under the other of its characteristics.

(2) The Antinomy of space, of time, or of matter, which discusses the question of their being divisible for ever, or of consisting of indivisible units, just means that we maintain quantity as at one time Discrete, at another Continuous.

If we explicitly invest time, space, or matter with the attribute of Continuous quantity alone, they are divisible ad infinitum. When, on the contrary, they are invested with the attribute of Discrete quantity, they are potentially divided already, and consist of indivisible units. The one view is as inadequate as the other. Quantity, as the proximate result of Being-for-self, involves the two sides in the process of the latter, attraction and repulsion, as constitutive elements of its own idea. It is consequently Continuous as well as Discrete. Each of these two elements involves the other also, and hence there is no such thing as a merely Continuous or a merely Discrete quantity.

We may speak of the two as two particular and opposite species of magnitude; but that is merely the result of our abstracting reflection, which in viewing definite magnitudes waives now the one, now the other, of the elements contained in inseparable unity in the notion of quantity. Thus, it may be said, the space occupied by this room is a continuous magnitude and the hundred men assembled in it form a discrete magnitude. And yet the space is continuous and discrete at the same time; hence we speak of points of space, or we divide space, a certain length, into so many feet, inches, etc., which can be done only on the hypothesis that space is also potentially discrete. Similarly, on the other hand, the discrete magnitude, made up of a hundred men, is also continuous; and the circumstance on which this continuity depends is the common element, the species man, which pervades all the individuals and unites them with each other.

(b) Quantum (How Much) §101 Quantity, essentially invested with the exclusionist character which it involves, is Quantum (or How Much): i.e. limited quantity. Quantum is, as it were, the determinate Being of quantity: whereas mere quantity corresponds to abstract Being, and the Degree, which is next to be considered, corresponds to Being-for-self. As for the details of the advance from mere quantity to quantum, it is founded on this: that while in mere quantity the distinction, as a distinction of continuity and discreteness, is at first only implicit, in a quantum the distinction is actually made, so that quantity in general now appears as distinguished or limited. But in this way the quantum breaks up at the same time into an indefinite multitude of quanta or definite magnitudes. Each of these definite magnitudes, as distinguished from the others, forms a unity, while on the other hand, viewed per se, it is a many. And, when that is done, the quantum is described as Number.

§102 In Number the quantum reaches its development and perfect mode. Like the One, the medium in which it exists, Number involves two qualitative factors or functions; Annumeration or Sum, which depends on the factor discreteness, and Unity, which depends on continuity. In arithmetic the several kinds of operation are usually presented as accidental modes of dealing with numbers. If necessary and meaning is to be found in these operations, it must be by a principle: and that must come from the characteristic element in the notion of number itself. (This principle must here be briefly exhibited.) These characteristic elements are Annumeration on the one hand, and Unity on the other, of which number is the unity. But this latter Unity, when applied to empirical numbers, is only the equality of these numbers: hence the principle of arithmetical operations must be to put numbers in the ratio of Unity and Sum (or amount), and to elicit the equality of these two modes.
I would mildly disagree with Russell to the extent that continuity and discreteness do seem to imply each other and do seem to be two sides of the same coin, rather like Change and the Absolute. If one, for example, divides a line segment into say 2^n finite segments, then we assume that "at" infinity the finite segments have reached zero length, and yet each location (point) is discrete. But how can a discrete point of 0 width abut another discrete point of 0 width? The points sort of have a peculiar state of discreteness and non-discreteness that is, one might say, superposed.

Russell took a dim view of Hegel's logic in general. And, from the above passage, one can see why. Yet, we should at least grant that Hegel's Idealism meant that the mind of the observer was a candle of the flame of Spirit (or Mind), so that Russell's traditional objectivism would not do for Hegel. (Even so, Russell eventually came to the view that the cosmos must be made up of something weird, such as a mind-matter composite.)

Wednesday, May 1, 2019


No Turing machine can model the cosmos

Versions of this paper are found elsewhere on my blogs.


Dave Selke, an electrical engineer with a computer background, has made a number of interesting comments concerning this page, spurring me to redo the argument in another form. The new essay is entitled "On Hilbert's sixth problem" and may be found at
http://paulpages.blogspot.com/2011/11/first-published-tuesday-june-26-2007-on.html

Draft 3 (Includes discussion of Wolfram cellular automata)
Comments and suggestions welcome  

Note: The word "or" is usually used in the following discussion in the inclusive sense.

Many subscribe to the view that the cosmos is essentially a big machine which can be analyzed and understood in terms of other machines. A well-known machine is the general Turing machine, which is a logic system that can be modified to obtain any discrete-input computation. Richard Feynman, the brilliant physicist, is said to have been fascinated by the question of whether the cosmos is a computer -- originally saying no but later insisting the opposite. As a quantum physicist, Feynman would have realized that the question was difficult. If the cosmos is a computer, it certainly must be a quantum computer. But what does that certainty mean? Feynman, one assumes, would also have concluded that the cosmos cannot be modeled as a classical computer, or Turing machine [see footnote below].
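The sense in which one "general" machine can be modified to obtain any discrete computation can be illustrated with a toy simulator. The table-driven format and the bit-flipping example below are my own illustrative choices, not anything from Feynman or Turing's papers; changing the transition table yields a different machine.

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (new_symbol, move, new_state). Supplying a
# different table "reprograms" the same general simulator.

def run_turing(table, tape, state="start", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape; blank cell is "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = table[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example table: flip every bit on the tape, halting at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing(flipper, "1011"))   # -> 0100
```

The point of the sketch is only that the machine's "hardware" (the simulator) is fixed while its behavior is wholly determined by a finite table of if-then rules -- the feature the argument below trades on.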
Let's entertain the idea that the cosmos can be represented as a Turing machine or Turing computation. This notion is equivalent to the idea that neo-classical science (including relativity theory) can explain the cosmos. That is, we could conceive of every "neo-classical action" in the cosmos to date -- using absolute cosmic time, if such exists -- as being represented by a huge logic circuit, which in turn can be reduced to some instance (computation) of a Turing algorithm. God wouldn't be playing dice.
A logic circuit always follows if-then rules, which we interpret as causation. But, as we know, at the quantum level, if-then rules only work (with respect to the observer) within constraints, so we might very well argue that QM rules out the cosmos being a "classical" computer.
On the other hand, some would respond by arguing that quantum fuzziness is so minuscule on a macroscopic (human) scale that the cosmos can be quite well represented as a classical machine. That is, the fuzziness cancels out on average. They might also note that quantum fluctuations in electrons do not have any significant effect on the accuracy of computers -- though this may not be true as computer parts head toward the nanometer scale. (My personal position is that there are numerous examples of the scaling up or amplification of quantum effects. "Schrodinger's cat" is the archetypal example.)
Of course, another issue is that the cosmos should itself have a wave function that is a superposition of all possible states -- until observed by someone (who?). (I will not proceed any further on the measurement problem of quantum physics, despite its many fascinating aspects.)
Before going any further on the subject at hand, we note that a Turing machine is finite (although the set of such machines is denumerably infinite). So if one takes the position that the cosmos -- or specifically, the cosmic initial conditions (or "singularity") -- are effectively infinite, then no Turing algorithm can model the cosmos.
So let us consider a mechanical computer-robot, A, whose program is a general Turing machine. A is given a program that instructs the robotic part of A to select a specific Turing machine, and to select the finite set of initial values (perhaps the "constants of nature"), that models the cosmos.
What algorithm is used to instruct A to choose a specific cosmos-outcome algorithm and computation? This is a typical chicken-or-the-egg self-referencing question and as such is related to Turing's halting problem, Godel's incompleteness theorem and Russell's paradox.
If there is an algorithm B to select an algorithm A, what algorithm selected B? -- leading us to an infinite regression.
Well, suppose that A has the specific cosmic algorithm, with a set of discrete initial input numbers, a priori? That algorithm, call it Tc, and its instance (the finite set of initial input numbers and the computation, which we regard as still running), imply the general Turing algorithm Tg. We know this from the fact that, by assumption, a formalistic description of Alan Turing and his mathematical logic result were implied by Tc. On the other hand, we know that every computable result is programmable by modifying Tg. All computable results can be cast in the form of "if-then" logic circuits, as is evident from Turing's result.
So we have
Tc <--> Tg
Though this result isn't clearly paradoxical, it is a bit disquieting in that we have no way of explaining why Turing's result didn't "cause" the universe. That is, why didn't it happen that Tg implied Turing who (which) in turn implied the Big Bang? That is, wouldn't it be just as probable that the universe kicked off as Alan Turing's result, with the Big Bang to follow? (This is not a philosophical question so much as a question of logic.)
Be that as it may, the point is that we have not succeeded in fully modeling the universe as a Turing machine.
The issue in a nutshell: how did the cosmos instruct itself to unfold? Since the universe contains everything, it must contain the instructions for its unfoldment. Hence, we have the Tc instructing its program to be formed.
Another way to say this: If the universe can be modeled as a Turing computation, can it also be modeled as a program? If it can be modeled as a program, can it then be modeled as a robot forming a program and then carrying it out?
In fact, by Godel's incompleteness theorem, we know that the issue of Tc "choosing" itself to run implies that the Tc is a model (mathematically formal theory) that is inconsistent or incomplete. This assertion follows from the fact that the Tc requires a set of axioms in order to exist (and hence "run"). That is, there must be a set of instructions that orders the layout of the logic circuit. However, by Godel's result, the Turing machine is unable to determine a truth value for some statements relating to the axioms without extending the theory ("rewiring the logic circuit") to include a new axiom.
This holds even if Tc = Tg (though such an equality implies a continuity between the program and the computation which perforce bars an accurate model using any Turing machines).
So then, any model of the cosmos as a Boolean logic circuit is inconsistent or incomplete. In other words, a Turing machine cannot fully describe the cosmos.
If by "Theory of Everything" is meant a formal logico-mathematical system built from a finite set of axioms [though, in fact, Zermelo-Fraenkel set theory includes an infinite subset of axioms], then that TOE is either incomplete or inconsistent. Previously, one might have argued that no one has formally established that a TOE is necessarily rich enough for Godel's incompleteness theorem to be known to apply. Or, as is common, the self-referencing issue is brushed aside as a minor technicality.
Of course, the Church thesis essentially tells us that any logico-mathematical system can be represented as a Turing machine or set of machines and that any logico-mathematical value that can be expressed from such a system can be expressed as a Turing machine output. (Again, Godel puts limits on what a Turing machine can do.)
So, if we accept the Church thesis -- as most logicians do -- then our result says that there is always "something" about the cosmos that Boolean logic -- and hence the standard "scientific method" -- cannot explain.
Even if we try representing "parallel" universes as a denumerable family of computations of one or more Turing algorithms, with the computational instance varying by input values, we face the issue of what would be used to model the master programmer.
Similarly, one might imagine a larger "container" universe in which a full model of "our" universe is embedded. Then it might seem that "our" universe could be modeled in principle, even if not modeled by a machine or computation modeled in "our" universe. Of course, then we apply our argument to the container universe, reminding us of the necessity of an infinity of extensions of every sufficiently rich theory in order to incorporate the next stage of axioms and also reminding us that in order to avoid the paradox inherent in the set of all universes, we would have to resort to a Zermelo-Fraenkel-type axiomatic ban on such a set.
Now we arrive at another point: If the universe is modeled as a quantum computation, would not such a framework possibly resolve our difficulty?
If we use a quantum computer and computation to model the universe, we will not be able to use a formal logical system to answer all questions about it, including what we loosely call the "frame" question -- unless we come up with new methods and standards of mathematical proof that go beyond traditional Boolean analysis.
Let us examine the hope expressed in Stephen Wolfram's A New Kind of Science that the cosmos can be summarized in some basic rule of the type found in his cellular automata graphs.
We have no reason to dispute Wolfram's claim that his cellular automata rules can be tweaked to mimic any Turing machine. (And it is of considerable interest that he finds specific CA/TM that can be used for a universal machine.)
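For concreteness, here is a minimal sketch of the elementary cellular automata under discussion (my own illustrative code, not Wolfram's). Rule 110, used in the example, is the rule Wolfram showed to be computation-universal; the wrap-around cell indexing is used only to keep the example short.

```python
# One step of an elementary cellular automaton: each cell's new value
# depends on its left neighbor, itself, and its right neighbor. The
# rule number's binary digits ARE the lookup table: the bit at index
# (l*4 + c*2 + r) gives the new value for neighborhood (l, c, r).

def ca_step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15      # single black cell in the first row
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = ca_step(row)
```

Note that the whole "design" of the machine is the 8-bit rule number plus the contents of the first row -- which is just the point at issue: the rule does not account for its own selection or for the initial row.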
So if the cosmos can be modeled as a Turing machine then it can be modeled as a cellular automaton. However, a CA always has a first row, where the algorithm starts. So the algorithm's design -- the Turing machine -- must be axiomatic. In that case, the TM has not modeled the design of the TM nor the specific initial conditions, which are both parts of a universe (with that word used in the sense of totality of material existence).
We could of course think of a CA in which the first row is attached to the last row and a cylinder formed. There would be no specific start row. Still, we would need a CA whereby the rule applied with arbitrary row n as a start yields the same total output as the rule applied at arbitrary row m. This might resolve the time problem, but it is yet to be demonstrated that such a CA -- with an extraordinarily complex output -- exists. (Forgive the qualitative term extraordinarily complex. I hope to address this matter elsewhere soon.)
However, even with time out of the way, we still have the problem of the specific rule to be used. What mechanism selects that? Obviously it cannot be something from within the universe. (Shades of Russell's paradox.)


Footnote Informally, one can think of a general Turing machine as a set of logic gates that can compose any Boolean network. That is, we have a set of gates such as "not", "and," "or," "exclusive or," "copy," and so forth. If-then is set up as "not-P or Q," where P and Q themselves are networks constructed from such gates. A specific Turing machine will then yield the same computation as a specific logic circuit composed of the sequence of gates.
By this, we can number any computable output by its gates. Assuming we have less than 10 gates (which is more than necessary), we can assign a base-10 digit to each gate. In that case, the code number of the circuit is simply the digit string representing the sequence of gates.
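The digit-coding of circuits can be made concrete in a few lines; the particular gate-to-digit assignment below is an arbitrary choice of my own, for illustration only.

```python
# Assign a decimal digit to each gate type; a circuit (a sequence of
# gates) is then coded by the digit string read off in order, so every
# circuit receives a natural number and the set of circuits is countable.

GATE_DIGITS = {"not": 1, "and": 2, "or": 3, "xor": 4, "copy": 5}

def circuit_number(gates):
    return int("".join(str(GATE_DIGITS[g]) for g in gates))

# "if P then Q" rendered as "not-P or Q" uses a not-gate then an or-gate:
print(circuit_number(["not", "or"]))   # -> 13
```

Two different digit strings may, of course, code circuits that compute the same function -- which is the observation the next paragraph makes.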
Note that circuit A and circuit B may yield the same computation. Still, there is a countable infinity of such programs, though, if we use any real for an input value, we would have an uncountable infinity of outputs. But this cannot be, because an algorithm for producing a real number in a finite number of steps can only produce a rational approximation of an irrational. Hence, there is only a countable number of outputs.

Thanks to Josh Mitteldorf, a mathematician and physicist, for his incisive and helpful comments. Referring to a previous draft, Dr. Mitteldorf said he believes I have shown that, if the universe is finite, it cannot be modeled by a subset of itself, but he expressed wariness over the merit of this point.

Objection to Proposition I of 'Tractatus'


An old post for which I no longer vouch.

From Wittgenstein's 'Tractatus Logico-Philosophicus,' proposition 1:
1. The world is all that is the case.
1.1 The world is the totality of facts, not of things.
1.11 The world is determined by the facts, and by their being ALL the facts.
1.12 For the totality of facts determines what is the case, and also whatever is not the case.
1.13. The facts in logical space are the world.
1.2 The world divides into facts.
1.21 Each item can be the case or not the case while everything else remains the same.
We include proposition 2.0, which includes a key concept:
2.0 What is the case -- a fact -- is the existence of states of affairs [or, atomic propositions].
According to Ray Monk's astute biography, 'Ludwig Wittgenstein, the Duty of Genius' (Free Press division of Macmillan, 1990), Gottlob Frege aggravated Wittgenstein by apparently never getting beyond the first page of 'Tractatus' and quibbling over definitions.
However, it seems to me there is merit in taking exception to the initial assumption, even if perhaps definitions can be clarified (as we know, Wittgenstein later repudiated the theory of pictures that underlay the 'Tractatus'; nevertheless, a great value of 'Tractatus' is the compression of concepts that makes the book a goldmine of topics for discussion).
Before doing that, however, I recast proposition 1 as follows:
1. The world is a theorem.
1.1 The world is the set of all theorems, not of things [a thing requires definition and this definition is either a 'higher' theorem or an axiom]
1.12 The set of all theorems determines what is accepted as true and what is not.
1.13 The set of theorems is the world [redundancy acknowledged]
2. It is a theorem -- a true proposition -- that axioms exist.
This world view, founded in Wittgenstein's extensive mining of Russell's 'Principia' and fascination with Russell's paradox is reflected in the following:
Suppose we have a set of axioms (two will do here). We can build all theorems and anti-theorems from the axioms (though not necessarily solve basic philosophical issues).
With p and q as axioms (atomic propositions that can't be further divided by connectives and other symbols except for vacuous tautologies and contradictions), we can begin:
1. p, 2. ~p
3. q, 4. ~q
and call these 4 statements Level 0 set of theorems and anti-theorems. If we say 'it is true that p is a theorem' or 'it is true that ~p is an anti-theorem' then we must use a higher order system of numbering. That is, such a statement must be numbered in such a way as to indicate that it is a statement about a statement.
We now can form set Level 1:
5. p & q [theorem]
6. ~p v ~q [anti-theorem]
7. p v q
8. ~p & ~q
9. p v ~q
10. ~p & q
11. ~p v q
12. p & ~q
Level 2 is composed of all possible combinations of p's, q's and connectives, with the Level 1 statements forming a subset of Level 2.
By wise choice of numbering algorithms, we can associate any positive integer with a statement. Also, the truth value of any statement can be ascertained by the truth table method of analyzing such statements. And, it may be possible to find the truth value of statement n by knowing the truth value of sub-statement m, so that reduction to axioms can be avoided in the interest of efficiency.
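The Level 1 construction and its truth-table check can be mechanized in a few lines. The sketch below (mine, leaning on Python's eval purely for brevity) evaluates each Level 1 statement under the axioms p and q and classifies it as a theorem or anti-theorem.

```python
# With p and q taken as axioms (assigned True), a compound statement
# is a "theorem" if it evaluates to True and an "anti-theorem" if it
# evaluates to False. Statements are written as Python boolean
# expressions over the names p and q.

def classify(statement, p=True, q=True):
    return "theorem" if eval(statement) else "anti-theorem"

level1 = ["p and q", "not p or not q", "p or q", "not p and not q",
          "p or not q", "not p and q", "not p or q", "p and not q"]

for s in level1:
    print(f"{s:>15}: {classify(s)}")
```

As expected, the eight statements alternate theorem, anti-theorem in pairs, each anti-theorem being the De Morgan negation of the theorem before it.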
I have no objection to trying to establish an abstract system using axioms. But the concept of a single system as having a priori existence gives pause.
If I am to agree with Prop 1.0, I must qualify it by insisting on the presence of a human mind, so that 1.0 then means that there is for each mind a corresponding arena of facts. A 'fact' here is a proposition that is assumed true until the mind decides it is false.
I also don't see how we can bypass the notion of 'culture,' which implies a collective set of beliefs and behaviors which acts as an auxiliary memory for each mind that grows within that culture. The interaction of the minds of course yields the evolution of the culture and its collective memory.
Words and word groups are a means of prompting responses from minds (including one's own mind). It seems that most cultures divide words into noun types and verb types. Verbs that cover common occurrences can be noun-ized as in gerunds.
A word may be seen as an auditory association with a specific set of stimuli. When an early man shouted to alert his group to imminent danger, he was at the doorstep of abstraction. When he discovered that use of specific sounds to denote specific threats permitted better responses by the group, he passed through the door of abstraction.
Still, we are assuming that such men had a sense of time and motion about like our own. Beings that perceive without resort to time would not develop language akin to modern speech forms.
In other words, their world would not be our world.
Even beings with a sense of time might differ in their perception of reality. The concept of 'now' is quite difficult to define. However, 'now' does appear to have different meaning in accord with metabolic rate. The smallest meaningful moment of a fly is possibly below the threshold of meaningful human perception. A fly might respond to a motion that is too short for a human to cognize as a motion.
Similarly, another lifeform might have a 'now' considerably longer than ours, with the ultimate 'now' being, theoretically, eternity. Some mystics claim such a time sense.
The word 'deer' (perhaps it is an atomic proposition) does not prove anything about the phenomenon with which it is associated. Deer exist even if a word for a deer doesn't.
Or does it? They exist for us 'because' they have importance for us. That's why we give it a name.
Consider the Eskimo who has numerous words for phenomena all of which we English-speakers name 'snow.' We assume that each of these phenomena is an element of a class named 'snow.' But it cannot be assumed that the Eskimo perceives these phenomena as types of a single phenomenon. They might be as different as sails and nails as far as he is concerned.
These phenomena are individually named because they are important to him in the sense that his responses to the sets of stimuli that 'signal' a particular phenomenon potentially affect his survival. (We use 'signal' reservedly because the mind knows of the phenomenon only through the sensors [which MIGHT include unconventional sensors, such as spirit detectors].)
Suppose a space alien arrived on earth and was able to locomote through trees as if they were gaseous. That being might have very little idea of the concept of tree. Perhaps if it were some sort of scientist, using special detection methods, it might categorize trees by type. Otherwise, a tree would not be part of its world, a self-evident fact.
What a human is forced to concede is important, at root, is the recurrence of a stimuli set that the memory associates with a pleasure-pain ratio. The brain can add various pleasure-pain ratios as a means of forecasting a probable result.
A stimuli set is normally, but not always, composed of elements closely associated in time. It is when these elements are themselves sets of elements that abstraction occurs.
Much more can be said on the issue of learning, perception and mind, but the point I wish to make is that when we come upon logical scenarios, such as syllogisms, we are using a human abstraction or association system that reflects our way of learning and coping with pleasure and pain. The fact that, for example, some pain is not directly physical but is 'worry' does not materially affect my point.
That is, 'reality' is quite subjective, though I have not tried to utterly justify the solipsist point of view. And, if reality is deeply subjective, then the laws of form which seem to describe said reality may well be incomplete.
I suggest this issue is behind the rigid determinism of Einstein, Bohm and Deutsch (though Bohm's 'implicate order' is a subtle and useful concept).
Deutsch, for example, is correct to endorse the idea that reality might be far bigger than ordinarily presumed. Yet, it is his faith that reality must be fully deterministic that indicates that he thinks that 'objective reality' (the source of inputs into his mind) can be matched point for point with the perception system that is the reality he apprehends (subjective reality).
For example, his reality requires that if a photon can go to point A or point B, there must be a reason in some larger scheme whereby the photon MUST go to either A or B, even if we are utterly unable to predict the correct point. But this 'scientific' assumption stems from the pleasure-pain ratio for stimuli sets in furtherance of the organism's probability of survival. That is, determinism is rooted in our perceptual apparatus. Even 'unscientific' thinking is determinist. 'Causes' however are perhaps identified as gods, demons, spells and counter-spells.
Determinism rests in our sense of 'passage of time.'
In the quantum area, we can use a 'Russell's paradox' approach to perhaps justify the Copenhagen interpretation.
Let's use a symmetrical photon interferometer. If a single photon passes through and is left undetected in transit, it reliably exits in only one direction. If it is detected in transit, the detection results in a change of exit direction in 50 percent of trials. That is, the photon as a wave interferes with itself, exiting in a single direction. But once the wave 'collapses' because of detection, its position is irrevocably fixed and so it exits in the direction established at detection point A or detection point B.
Deutsch, a disciple of Hugh Everett who proposed the 'many worlds' theory, argues that the universe splits into two nearly-identical universes when the photon seems to arbitrarily choose A or B, and in fact follows path A in Universe A and path B in Universe B.
Yet, we might use the determinism of conservation to argue for the Copenhagen interpretation. That is, we may consider a light wave to have a minimum quantum of energy, which we call a quantum amount. If two detectors intercept this wave, only one detector can respond because a detector can't be activated by half a quantum unit. Half a quantum unit is effectively nothing. Well, why are the detectors activated probabilistically, you say? Shouldn't some force determine the choice?
Here is where the issue of reality enters.
From a classical standpoint, determinism requires ENERGY. Event A at time(0) is linked to event B at time(1) by an expenditure of energy. But the energy needed for 'throwing the switch on the logic gate' is not present.
We might argue that a necessary feature of a logically consistent deterministic world view founded on discrete calculations requires that determinism is also discrete (not continuous) and hence limited and hence non-deterministic at the quantum level.
[This page first posted January 2002]

Sunday, March 17, 2019


Maybe you're God



What if the kooky idea of solipsism is a divine hint?

A solipsist is a person who thinks his is the only mind in the universe and that all the "reality" he perceives is an invention of his mind. Of course, we know that one's mind does have a unique and rather strong influence on what is taken for reality, but a solipsist takes this view to the ultimate extreme.

It is unlikely that anyone who is not schizophrenic really believes in such a philosophy.

And yet, what if there is a single mind of God who is talking to himself in a multiplicity of human voices? That is, what if your mind is, at root, actually God's? Doesn't Genesis say that we are made in his image? Doesn't Jesus, speaking to men, quote scripture: "You are gods"? (He also warns that some people around us are tares, the work of the devil, who will vanish in flame.)

The Fall represents the spiritual death of a man, whereby he is unable to commune with God. God has become alienated from Himself (you).

The reconciliation provided by Jesus permits these alienated sons (us) to commune with God as Jesus does. One can compare the Trinity -- three aspects of God in the individuals of Father, Son and Spirit -- to the light from a spectrum, which decomposes into three colors, but then recomposes into a single white light godhead when a reverse prism is set in place. But, even further, another prism can decompose white light into many colors, representing Jesus and his brothers (us), who all share in God's single mind.

Wednesday, February 13, 2019


The value of Mill's canons of logic (reply to Bradley)


Yes, of course J.S. Mill oversold his famous "canons of induction" in his book A System of Logic (1843). No doubt Francis Herbert Bradley in his Principles of Logic (1883 version) had some right to play the curmudgeon in scorning the canons as worthless insofar as logical proofs go.

A useful Wikipedia exposition of the canons:
https://en.wikipedia.org/wiki/Mill%27s_Methods

Here are the canons (from Bradley):

1. If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree is the cause (or effect) of the given phenomenon.

2. If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former: the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.

3. If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common but the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.

4. Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents.

5. Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation.
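Canon 1, the Method of Agreement, has a natural set-theoretic reading: describe each instance of the phenomenon by its set of attendant circumstances, intersect the sets, and see whether exactly one circumstance survives. A minimal sketch (the illness example is hypothetical, chosen only for illustration):

```python
# Mill's Method of Agreement as set intersection: the candidate cause
# is whatever circumstance is common to every instance of the phenomenon.
from functools import reduce

def method_of_agreement(instances):
    common = reduce(set.intersection, (set(i) for i in instances))
    # Canon 1 is decisive only when exactly one circumstance survives.
    return common.pop() if len(common) == 1 else None

# Hypothetical illustration: three cases of illness with one shared factor.
cases = [
    {"ate oysters", "drank wine", "swam"},
    {"ate oysters", "ate bread"},
    {"ate oysters", "drank wine"},
]
print(method_of_agreement(cases))   # -> ate oysters
```

The sketch also makes Mill's caveat visible: the method points to a candidate cause only relative to the circumstances one thought to list, which is why it cannot amount to logical proof.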

That Mill did not see these canons as hard and fast rules is even noted by Bradley who pointed out that Mill had observed that Method 1, in particular, along with other methods, had on occasion given rise to false results.

Still, Bradley claims that scientific men such as Rudolf Hermann Lotze, Christoph von Sigwart, William Whewell and William Stanley Jevons "had taken a view of the process of scientific discovery that was not favorable" to the canons.

No doubt not only Mill, but Bradley, would have benefited from the advances of modern statistics -- in particular the methods of quantitative correlation. Further, a solid course in the logic of set theory would have been enlightening to these gentlemen. But these developments were yet some years off.

On the other hand, though Mill may have been imprudent in his zeal for his canons, I think it is quite obvious that any empirical approach to discovery is subject to the criticism that an airtight logical proof is impossible. I expect that Bradley would have been familiar with Hume.

The "inductive logic" of Mill and his followers was fundamentally flawed, Bradley says. Because Mill's canons "presuppose universal truths, therefore they are not the only way of proving them. But if they are the only way of proving them, then every universal truth is unproved."

Again, any reader of Hume already is aware of this issue. (Further, one might question Bradley's reliance on universals [which came to replace Plato's Forms]. Bertrand Russell was one philosopher who questioned whether universals can be said to exist.)

Bradley, while generally praising Jevons, devoted a chapter to errors he had uncovered in Jevons' Principles of Science (1874), which outlined Jevons' equational logic, a precursor of modern symbolic logic that supplanted Boole's pioneering work. Bradley objects to Jevons' use of the sign "=" to indicate equality. Rather, says Bradley, the sign indicates identity. But in that case, if two propositions are identical, then to say that "A = B" is to say that "A = A" is to say nothing. (Russell later addressed this point; see posts on this blog.)

Bradley goes on to caution that, though Jevons' "logical machine" (a "logic piano" he invented) could calculate and do a form of reasoning, it fell short of what the human mind can do.

There may have been some mathematical insecurity showing here.

Readers, notes Bradley, may wonder why his critique of Jevons didn't include a mathematical analysis. Bradley is "compelled to throw the burden of the answer on those who had charge of my education, and who failed to give me the requisite instruction." Even so, the "mathematical logician" failed to impress Bradley, considering that "so long as he fails to treat (for example) such simple arguments as 'A before B and B with C, therefore A before C,' he has no strict right to demand a hearing."

In any case, it seems likely that when Mill used the word logic he had in mind processes of thought that were somewhat larger than those entertained by the scholastic philosophers and their modern formalist descendants.

On the ground that Mill's methods are not truly inductive, Bradley thumps Mill's logic as a fiasco. "And if I am told that these flaws, or most of them, are already admitted by Inductive Logicians, I will not retract the word I have used. But to satisfy the objector I will give way so far as to write for fiasco, confessed fiasco."

Admittedly, Mill's writings in general show a not brilliant -- though very intelligent -- mind at work. Yet, I can only admire the brilliance of his analysis of "induction." Do we not have here a precise portrayal of the scientific method of investigation? He gives us rules for weeding out probable causes and for reduction of error. Though his rules are useless for absolute proof, as in a mathematical theorem, they would certainly be helpful if studied by potential jurors, judges and officers of the court.

A case adjudicated by Mill's standards would result in a verdict in which jurors usually were convinced beyond a reasonable doubt. In that sense, Mill's canons do indeed undergird what is accepted as proof.

Scientists and engineers similarly use Millsian standards -- perhaps bolstered by modern statistical methods which in fact stand upon Millsian assumptions -- and accept "proofs" that they realize are subject to revocation and revision.

So, though I am quite enjoying Bradley's quirky and bombastic book on logic (a subject he eventually dropped in favor of Hegelian metaphysics), I cannot quite accept his pedantic bruising of Mill. Allowances, I think, should be made for Mill's rather loquacious and often impressionistic style, even when he is homing in on fine points.

Bradley's Logic, for all its irascibility, is a fascinating critique of the various assumptions of 19th century logicians. He had grounds for questioning the law of the excluded middle[1] long before Ludwig Brouwer and the intuitionists. And his off-beat analyses demonstrate that there is more to the subject of logic than most of us have been taught.

Take this tidbit, for example:
... if we refuse to isolate a relation within [a conceptual] whole, if we prefer to treat the entire compound synthesis as the conclusion we want, are we logically wrong? Is there any law which orders us to eliminate, and, where we cannot eliminate, forbids us to argue?...

... for the conclusion is not always a new relation of the extremes; it may be merely the relation of the whole which does not permit the ideal separation of a new relation. And, having gone so far, we are led to go farther. If, the synthesis being made, we do not always go on to get from that the fresh relation, if we sometimes rest in the whole we have constructed, why not sometimes again do something else? Why not try a new exit? There are other things in the world besides relations; we all know there are qualities, and a whole put together may surely, if not always at least sometimes, develop new qualities. If then by construction we can get to a quality, and not to a relation, once more we shall have passed from the limit of our formula.
Earlier in his book, Bradley had taken a dim view of the whole notion of relation as it applies to logic.

Another interesting thought: Does comparison count as a form of inference? Bradley asks. The suggestion that whenever we compare, we are reasoning runs counter to our established ideas. But how can we repulse it? he adds. "We start from data, we subject these data to an ideal process, and we get a new truth about these data."

We are liable to compare ABC and DBF and observe that they are alike in B, notes Bradley. "No doubt we may question the validity of this inference, but I do not see how we can deny its existence."

In modern parlance, we would say X = {A,B,C} and Y = {D,B,F}. Hence X ∩ Y = {B}. One might even contend that the set theoretic mode helps us pin down better what we mean by likeness.
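The set-theoretic reading of likeness can be run directly; a minimal sketch in Python, where the letters simply stand in for Bradley's terms:

```python
# Bradley's comparison of ABC and DBF, recast as set intersection.
X = {"A", "B", "C"}
Y = {"D", "B", "F"}

likeness = X & Y  # X ∩ Y: what the two collections are "alike in"
print(likeness)   # {'B'}
```

The intersection operator does here exactly what the prose asks of "comparison": it extracts the shared element from the two data sets.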

I suppose that when logic is restricted to meaning-free formulas, many of Bradley's insights and concerns are bypassed. That is to say, for example, if P is held true and Q is held true, the formula "P implies Q" is deemed to be true.

But, one is uncomfortable with the proposition, "If Socrates is a man, then Mars is a planet." We demand more. The proposition "If Socrates is a man and all men are mortal, then Socrates is mortal" is usually accepted. The Socrates syllogism of course dovetails nicely with naive set theory, as in

(all x ∈ M)(x has property of mortality) & xo ∈ M. ∴ xo has the property of mortality

(where xo stands for Socrates).

By accepting parts of naive set theory as axiomatic, we avoid Russell's troubled efforts to define sets in terms of propositions.

In this regard, some logicians make a distinction between adjunctive operations and connective operations. If adjunctive reasoning is used, a false proposition implies every proposition and a true proposition is implied by every proposition. Connective reasoning is more common in routine discourse. The distinction works out to mean that a truth table may be read in both directions in the adjunctive case but in only one direction in the connective case.

The adjunctive camp has been warring against the connective camp since antiquity, according to Hans Reichenbach, U1 who is comfortable with both approaches. In any case, those who believe that "implication" should mean formal derivability, as in the Socrates set theoretic formulation, would favor the connective interpretation. (Reichenbach notes that he prefers the term "adjunctive implication" to Russell's "material implication.")
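The adjunctive (material) reading can be tabulated mechanically; a minimal sketch, assuming only the standard truth-functional definition (the `implies` helper is illustrative, not Reichenbach's notation):

```python
from itertools import product

def implies(p, q):
    """Adjunctive (material) implication: false only when p is true and q is false."""
    return (not p) or q

# The rows with a false antecedent come out true -- precisely the feature
# that lets a false proposition "imply" every proposition.
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))
```

The connective camp's complaint is not visible in such a table; it concerns which direction the table may legitimately be read, which is a matter of interpretation rather than computation.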

Returning to the issue of syllogisms, Bradley insists that the concept of major and minor premise is delusional. Though he may have a point as to how these terms were generally used, it seems to me that in a set theoretic context, we have the set which contains the members. A particular member then might be seen as equivalent to a minor premise. Of course, something I have not shown is my view that the various syllogisms (and their symbolic equivalents) yield Euclidean-style proofs when we can establish set membership (even though we don't generally bother).

Bradley, I assume, wrote his Logic in order to pave the way for his coming foray into metaphysics, Appearance and Reality (1893). His entire Logic is an attack on the assumptions of "popular" logic and reasoning. I will agree that he raises many interesting points, though I would say that at least some have been effectively answered by recent developments, such as information theory.

1. Sometime in the 1930s (I'll have to track down the precise reference at some point), Bertrand Russell pointed out that the assumption that a proposition was either true or false was based on a metaphysical notion that the cosmos is completely describable with facts (propositions with a true or false value). But this notion is assuredly unprovable. In the thirties there was no way to know whether there were any mountains on the dark side of the moon. It was presumed that the assertion or its negation must be true. Yet, one could as well have said the proposition was undecidable and hence neither true nor false.
It is noteworthy that experiments in quantum mechanics demonstrate that propositions such as "Schroedinger's poor cat has expired" are undecidable until a quantum event has been recorded. And, it is not altogether absurd to surmise that the lunar dark side's mountainous terrain was in superposition with a flat surface prior to the dark side's observation via spaceborne cameras three decades after Russell's comment.


U1. Elements of Symbolic Logic by Hans Reichenbach (Macmillan 1947). Chapter II, Section 7.

Sunday, November 18, 2018


A muse on Nietzsche and Hegel


B.A.G. Fuller observed of Nietzsche that "the Nazi-Fascist movement has publicly adopted him as its official philosopher..." which was not surprising
since certain of his ideas can be so construed as to lend themselves to the support and justification of the Nazi-Fascist ideology. For instance, his glorification of the Will for Power as the sum and substance of the universe; his praise of strength and virility as the essence of human virtue; his insistence on the decadent character of the Christian cult of meekness and weakness and upon its destructive influence on western culture; his appeal for the regeneration of western society by liberating the Will for Power from its bondage to Christian "slave-morality"; and his prophecy of the coming of the Superman [Overman] in whom the Will for Power will be given free play: -- all these can easily be turned into grist for the Nazi-Fascist mill.

But it can also be argued that such grist can be obtained only by lifting passages and portions of Nietzsche's teaching from the general context of his thought, and ignoring others, and by perverting the general character and trend of his philosophy in the interests of wishful thinking and to suit special needs. For he can be quoted in condemnation of such fundamental Nazi-Fascist tenets as anti-Semitism, the superiority of any one race over all others, and the dominance of the individual by the state. Furthermore, Nietzsche's concept of the Will for power is metaphysical and ethical rather than physical and political in its nature and implications, bound up as it is in his view that the Real is a complex of energies, activities and tensions.
Superior brute force is not what characterizes the superior man; superior moral force is what counts, according to Fuller's reading of Nietzsche.z1

Similarly, Walter Kaufmann in his seminal Nietzsche: Philosopher, Psychologist, Antichristz2, argued persuasively that Nietzsche's true aims had been subverted by his proto-Nazi sister, Elisabeth, once she got control of her disabled brother's manuscripts.

Kaufmann had a penetrating intellect, and his forensic work on Nietzsche and Hegelz3 was remarkable, helping me to understand something about those two Germans.

Kaufmann rejected his parents' Lutheranism at about 11 years old and converted to Judaism, only to learn later that all his grandparents were Jewish. He came to America ca. 1939 and served in the U.S. armed forces during World War II. No word on what happened to his parents. They would have been expelled from their church under Nazi edict and may well have ended up in a slave labor and/or death camp.

It's interesting that Kaufmann very strongly identified with Kierkegaard (= "Churchyard" in Danish), whose eccentric approaches to writing and philosophical topics he greatly appreciated -- this despite the fact that he rejected Kierkegaard's ardent Christianity and found fault with the Dane's denunciations of Hegel, of whom, said Kaufmann, Kierkegaard had no first-hand knowledge. Kaufmann admired Kierkegaard despite Kierkegaard's insistence that the philosophers were obstructing the path to the necessity of becoming a true Christian (not simply a nominal member of "Christendom").

Nietzsche had an adversarial relationship with Christianity. Hegel adopted his own form of Christianity, becoming an adversary on the one hand of disbelieving rationalists and on the other of faith-driven pietists. I doubt Hegel had a personal relationship with Christ. Hegel believed that a special function of the human mind, Reason (as opposed to Understanding), could apprehend the mind of God through a dialectical process -- as opposed to direct revelation by the Holy Spirit.

I caution here that I am no expert on Nietzsche or Hegel. I rely on what others say about Hegel's highly abstruse writings, the absorption of which, it seems to me, would require too much effort on my part for a result that would not be all that worthy.

There was once, according to J.N. Findlay, a group of people who could read Hegel's The Phenomenology of the Spirit with "ease and pleasure," but that day is long past; the student now requires interpretative efforts to discern what the man was really saying.z4

Findlay argues that though Hegel was no theist, neither was he a humanist who had enthroned man in God's place.
Though Hegel has veiled his treatment of Religion in much orthodox-sounding language, its outcome is quite clear. Theism in all its forms is an imaginative distortion of final truth. The God outside of us who saves us by His grace, is a misleading pictorial expression for saving forces intrinsic to self-conscious Spirit, wherever this may be present. And the religious approach must be transcended (even if after a fashion preserved) in the final illumination. At the same time it would be wrong to regard Hegel as some sort of humanist: he has not dethroned God in order to put Man, whether as an individual or group of individuals, in His place. The self-conscious Spirit which plays the part of God in his system is not the complex, existent person, but the impersonal, reasonable element in him, which, by a necessary process, more and more "takes over" the individual, and becomes manifest and conscious in him. Hegel's religion, like that of Aristotle, consists of "straining every nerve to live in accordance with the best thing in us."
I must say that I don't hear the Master's voice in this quotation.

Findlay adds,
As a method, the dialectic is plainly one for rearward gazing admiration, and not for contemporary use. Through the grandiose sweep of its failure it, however, makes plain the profound affinity of notions too often and too lightly thought to be unrelated, and their constant connection with certain central ideals of intelligibility which are rightly held to spring from man's spiritual nature.z4
At some point, I suppose, I would like to talk more about Hegel's dialectical method of merging contrasting concepts into a "higher" one. At this point, I'll say: A neat trick, that doesn't always work.

And I add: Modern set theory would have spared Herr Hegel much haggle.

I hope to read Hegel's lectures on religion soon.

Though I can see that there are at some points strong parallels between Hegelian philosophy and traditional theology, I would be much more comfortable had he extensively quoted New Testament scripture in his works. I suppose I will have to take that as a note to myself: that I should not neglect Paul and the other writers when doing philosophico-theological commentary.

Since writing the previous paragraphs, I have delved into Hegel's lectures on religion -- and did the same as I have with his other works: stopped dead, aware that Hegel's line of thought is incommensurate with mine, meaning the more I read the more exasperated I become. I did notice however that his later theology/philosophy strove to show itself properly Christian at a time when he was under fire for being an atheist.

In any case, I found it worthwhile reading various commentaries. I wanted to know what it was that drew men's minds toward his philosophy. For those who were drawn by The Phenomenology of the Spirit, I would venture that they were impressed by the idea that Lutheranism, along with other Christian manifestations, was a front -- candy for the masses -- for the secret doctrine known only to a select few philosophers. The idea that one could find a (dialectical) logical, and thus scientific, version of the force behind reality must have had much appeal. Hegel larded his later work with commentary on scientific matters. Though much of that science is outdated, Hegel's defenders say, "No matter. It is the principle that counts." Perhaps. Yet, if much of the science is wrong (through no fault of his), then why should we expect the philosophical system to be all that accurate?

Well, that's another good reason for me to avoid spending too much time with Hegel.

I should add that I suspect another reason for the draw on the minds of reputable thinkers is simple fascination. The bad kind of fascination, I hasten to add. That is to say, one is entranced by a mysterious, alluring object, only to find that there is nothing there worth one's time. Bertrand Russell remarks somewhere that trying to master Hegel hardly seemed worth all the bother.

I realize I am doing rather a lot of carping for someone who declines to read Hegel through. Yet, I have read excerpts, and these excerpts are enough to convince me that Hegel left a lifework of straw. I don't like being quite so rude about another's efforts, especially as I did not walk a mile in his shoes. But my purpose is to forewarn others away. In my opinion, it is unlikely that this line of thinking can be of much help to the soul seeking God. Certainly Hegel's work contributes virtually nothing to Marxism, other than a propagandistic patina.

Though much of Hegel's theorizing strikes me as plain rubbish, I concede that many of the observations he makes while trying to state a case are quite interesting. But that is insufficient reason to recommend him.

Done with Hegel.

Probably.

z1. A History of Philosophy, Revised Edition by B.A.G. Fuller (Henry Holt, 1938, 1945).

z2. Princeton University Press 1950.

z3. Hegel, a Reinterpretation by Walter Kaufmann (Anchor Books Edition 1966).

z4. Hegel, a Re-examination by J.N. Findlay (Oxford 1958).

Thursday, October 4, 2018


Notable quotations

If a technician is a sort of a super-ape, the same cannot be said of Plato.
-- Nietzsche
as paraphrased by Walter Kaufmann
in Nietzsche: philosopher psychologist antichrist (World Publishing 1956).
That is, some individuals embody the perfection of a species -- but only rarely.


There is … a certain plausibility to Nietzsche's doctrine, though it is dynamite. He maintains in effect that the gulf separating Plato from the average man is greater than the cleft between the average man and a chimpanzee.
-- Walter Kaufmann
in Nietzsche: philosopher psychologist antichrist (World Publishing 1956).


One beautiful, starry-skied evening, [Hegel and I] stood next to each other at a window, and I, a young man of twenty-two who had just eaten well and had good coffee, enthused about the stars and called them the abode of the blessed. But the master grumbled to himself: "The stars, hum! hum! the stars are only a gleaming leprosy in the sky." For God's sake, I shouted, then there is no happy locality up there to reward virtue after death? But he, staring at me with his pale eyes, said cuttingly: "So you want to get a tip for having nursed your sick mother and for not having poisoned your brother?"
--Heinrich Heine,
as translated by Walter Kaufmann
in Kaufmann's Hegel, a Reinterpretation
(Anchor Books edition, 1966)

Monday, May 28, 2018


The Socrates syllogism in modern parlance


Socrates is a man
All men are mortal
Hence, Socrates is mortal
Implied above is a basic statement of naive (non-axiomatic) set theory. The notion of Russell and Whitehead that one can dispense with classes when establishing the logical foundations of arithmetic escapes me, as the basic logic form of implication takes classes for granted. But then, I have not read Principia Mathematica in detail, and so I cannot make any dogmatic assertion.

In any case, here is how I would put the Socrates syllogism:

1. It is required that any x in A be paired with b in {b}. (Given rule)

2. xo ∈ A. (Given condition)

3. The ordered pair < xo, b > is required. (By Rule 1)

4. I.e., < xo, b > ∈ A X {b}. (Same as 3)

By substitution, A is the set of men; {b} is the set containing the property or attribute b, signifying "mortality;" xo is an instantiation of A otherwise known as "Socrates."
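The four numbered steps can be mimicked over finite sets; a toy sketch in Python (the data is illustrative, not a claim about the essay's full system):

```python
# Rule 1: every x in A is to be paired with b in {b}.
A = {"Socrates", "Plato", "Aristotle"}   # the set of men (toy data)
b = "mortality"                          # the lone attribute in {b}

pairs = {(x, b) for x in A}              # the required pairs, i.e. A X {b}

x0 = "Socrates"                          # step 2: x0 ∈ A (given condition)
print((x0, b) in pairs)                  # steps 3-4: the required ordered pair exists
```

Membership of the pair < x0, b > in the cross product is all the "proof" the syllogism delivers, which is just Mill's point that nothing emerges beyond what the premises already state.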

John Stuart Mill, in his A System of Logic, takes many pains to drive home the point that the syllogism proves nothing that is not already stated in the premise. Others have picked up the theme, arguing that formal logic does not really prove anything, but is a means of saying the same thing in different ways.

Doubtless, they are correct, but we must not lose sight of the fact that syllogistic reasoning is used in practical problem-solving, and seeking a solution to a problem is not necessarily a trivial endeavor.

This is how we may reason:
I know that P is true.
Now if Q is also true, then P•Q means R.
But if Q is false, then I won't be able to say that P•Q --> R,
and so R doesn't necessarily hold.
But would P•~Q --> S?
and so on...

In response to Mill's objection that once a syllogism's premise (or its major and minor premises) has been stated the conclusion sheds no new light, I would argue that, though this is correct if we ignore the issue of time, syllogistic logic is in fact very often tied up with the sequential arrival of information.

The specific Socrates syllogism concerns things that are so universally accepted that one can agree with Mill that it is so trivial as to be pointless, although that does not mean the general form is trivial.

As an aside, we observe here that the major and minor premises are interchangeable in the Socrates-style syllogism. That is,

both of these syllogisms are equivalent:
Socrates is a man
All men are mortal
Hence, Socrates is mortal

All men are mortal
Socrates is a man
Hence, Socrates is mortal
Now consider this example:
You learn that everyone in a town you are visiting who
 has a swastika tattoo is a member of a particular Aryan cult.
You've long known that your friend Joe Smith has a
 swastika tattoo but laughed it off as a juvenile affectation.
You suddenly remember that Joe is from the town you're visiting.
Aha! you exclaim. Joe must be in that Aryan cult!
Your logic is not quite impeccable, as Joe may no longer be connected to the Aryans, or he may have got the tattoo for some other reason, such as boyish vanity. But the syllogism has given you strong reason to believe that Joe might be among the Aryans, thus perhaps warranting some effort on your part to nail down the truth one way or the other.

In any case, what we see is that the syllogism is valuable when new information arrives. It is not necessarily dead wood for the practical human user of it.

This last gives a good example of real-world thinking. In fact, this is indeed how scientists reason -- in which case the notion that scientific knowledge is purely inductive cannot be supported. Induction is necessary to establish some statements taken as "facts" but deduction is also required.

In other words, repeated experimentation, say, tells us that the velocity of light in a vacuum is constant, regardless of the velocity of its source. We regard this fact as inductively established. Yet, when Lorentz took this fact, he said that if it is so, then an object's length, relative to the observer, must shorten. This was a deduction.

As Mill and others have noted, the Socrates syllogism brings in a question of proof of a reality as opposed to a logical form. That is to say, What about the resurrected Jesus? He is immortal. Oh, well, perhaps he does not really count for a man, despite orthodox theology. Well, let's leave Christ out of it (as usual). Still, as Mill points out, we are not reasoning from a generality or a universal; we are considering the fact that all our forefathers, as far as we can tell, are dead.

There is no proof that no man is immortal. As Mill would say, we are reasoning from particulars to particulars.

This little rub is handled when we change the syllogism into the if-then form.
If all men are mortal and Socrates is a man, then Socrates is mortal.
Here we are not claiming that it is generally true that all men are mortal, but merely saying that if that property holds for all men, then it holds for Socrates. We are not required to assume the truth of the premise.

Saturday, May 26, 2018


Furious dreams of colorless green


Draft 1. 04/28/18. Updated twice as of 08/17/18. Updated again 02/04/19.

Rather than a Chomskyan approach to language, I find the use of relations, or ordered pairs, more pleasing.

Actually, this notion came to mind while reading Russell (one of his two mid-century books on epistemology; citation to come), where he talked about meaningless sentences. His position was that sentences tokenize facts, and those that don't are meaningless. I.e., there is a correspondence between a sentence and some event in spacetime if the sentence (proposition) is true; if there is a proposed correspondence to a fact that turns out not to be a spacetime event, then the sentence is false. Russell went into some subtleties about the logic of the not coupler, but those will not detain us.

Though I am sympathetic to the notion that sentences -- or, anyway, propositions -- represent or tokenize facts, it seems we must then cope with pseudo-facts, as in the pseudo-events of Hamlet and the pseudo-person Hamlet. How many ways must we slice "to be" and "not to be"? Russell found quite a few, but be that as it may, I highly recommend Russell's analysis, regardless of the fact that I prefer a different approach than he advised.

What is my approach? I consider a propositional sentence in English (and I suggest in any earthly human language) to be decomposable into sets of relations. (I have yet to read Carnap on this topic. Once I do, I may modify this essay accordingly.)

Let's begin with a complete sentence that lacks an object or other non-verb predicate.

Sally ran.

In this case we have the relation ran which pairs the subject, Sally, with nothing, or the null set.

So this gives sR∅ or ran < s, ∅ >. Because s refers to a unique, non-variable (Sally), the matrix for this ordered pair contains only that pair. (In the interests of completeness, we note that the empty matrix is a subset of every relation, as in R <∅,∅>. Though one considers two elements to be "related" by some verb, a relation here means a set which includes the elements under the relation-word. So the set of all declarative sentence relations, for example, must have a null set as a subset, which would be ∪ Ri <∅,∅>.)

Note that in our method, the principal verb (which can include a "composite verb" -- see below) serves as the relation.

A more common situation is given by sentences such as:

Sally ran home.

Sally ran fast.

Sally ran yesterday.

Sally ran for her life.


The object answers the question, "Where did she run?"

The adverb (non-verb predicate word) answers the question, "How did she run?"

The time element (non-verb predicate word) answers the question, "When did she run?"

The explainer answers the question, "Why did she run?"

So then we have for the relation ran a set A composed of all words suitable for a subject and a set B composed of all words that are formally usable as a non-verb predicate word. The set A X B is then the set of ordered pairs of subjects and non-verb predicates under the relation ran.

A X B contains any such pair, and many of those pairs may not tokenize facts. The problem here is: suppose the "meaningless" sentence has some ineffable meaning in some poem somewhere? It seems more useful to say that there is a C ⊂ A X B which contains pairs that have a relatively high probability of tokenizing some idea, concept or fact familiar to many people. I have not mathematically quantified the terms "relatively high probability" or "many people" because a high degree of complexity is implied, though I suggest these areas can be got at via information theory concepts.

A sentence such as Sally ran home for her life yesterday can be handled as the relation R < a, x > where a is the constant Sally and x is a member of the set {home, for her life, yesterday}. To be precise, we should write ∪ Ri < a,x >.
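That union of pairs is easy to exhibit; a sketch using the sentence's own predicate set:

```python
# "Sally ran home for her life yesterday" as R<a, x>:
# a constant subject paired with each member of the predicate set.
a = "Sally"
predicates = {"home", "for her life", "yesterday"}

ran = {(a, x) for x in predicates}   # the union of the R_i <a, x>

print(("Sally", "home") in ran, len(ran))
```

Since the subject is a constant, the whole "matrix" for the sentence is just these three pairs, one per non-verb predicate.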

With this approach in mind we look at Chomsky's classic example:

Colorless green ideas sleep furiously.

This is the sort of proposition Russell would have termed meaningless because it represents no fact in our actual world. Yet, others would say that for the typical English speaker it resonates as formally correct even if silly. And what if it is part of a poem?

Colorless green ideas sleep furiously
when the sun goes down in Argotha,
a timeless, though industrious hamlet
which straddles the borderline
between Earth and Limbo


Umm, well, let's not stretch this too far, and get back to business.

We have the subject, Colorless green ideas. Here we can decompose the structure into ordered pairs thus:

The subject has the property, or attribute, colorless green. But we notice that the adjective colorless is inconsistent with the subject modifier green. That is, we are unlikely to accept an ordered pair used as a subject modifier that is logically inconsistent. But, we might. For example, the writer might be trying to convey that the green was so dull as to metaphorically qualify as colorless.

So for the subject's modifiers we use the relation "has the property or attribute of," as in:

(cPg)Pi

The section in parentheses is the relation green has property of colorlessness which is part of the relation idea has the property of colorless green.

The full sentence is then {(cPg)Pi}Sf, where S is the relation sleep and furiously the adverb.

The ordered pair notation gives < < < c,g >, i >, f >, where I have left the relation symbols implicit.
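The structure of {(cPg)Pi}Sf can be mirrored as nested tuples; a toy rendering, with the relation symbols kept implicit as in the text:

```python
# "Colorless green ideas sleep furiously" as nested ordered pairs.
c, g, i, f = "colorless", "green", "ideas", "furiously"

sentence = (((c, g), i), f)   # innermost pair first: modifier of a modifier

subject, adverb = sentence    # peel the adverb off the subject complex
print(subject)                # (('colorless', 'green'), 'ideas')
```

The nesting order records which modifier modifies which: the innermost pair is the modifier-of-a-modifier, the outermost pairing attaches the adverb.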

We note here that green can serve as either a noun or secondary modifier (adjective), but not so for colorless. So the form green colorless ideas requires an implied comma between green and colorless so as to indicate that each modifier modifies the subject as a primary modifier and not as a modifier that modifies a modifier. Of course, by altering the morphology of colorless so as to project a noun, colorlessness, we are able to show that the relation is, if so desired, reflexive, as in green colorlessness.

It will be objected, ideas don't sleep; people sleep. Yet, ideas do, it seems, percolate. An idea that is ahead of its time might very well be said to be slumbering in the collective consciousness of the intelligentsia. If the idea is being suppressed for ideological reasons, it could be said to be persevering in an agitated, even furious sleep. So what we say of Chomsky's sentence is that it is structurable as a relation in a full matrix of ordered pairs, but that in the subset of routine ordered pairs it is not to be found. I have not defined "routine" -- again because this would require some spadework in information theory, which I have not done.

In any event, a relation for, say, a simple declarative sentence can be styled xRy, or R< x, y >. This compares to such notation as P(x,y), in which P represents a predicate and, in this case, x and y two terms. So the proposition 2 + 1 = 3 requires that "2 + 1 =" be the predicate, with 3 a term, or that "3 =" be the predicate with "2 + 1" a term. I.e., P tokenizes "2 + 1 =" and "3" is a term. We abandon this standard notation in favor of the more compact and logically succinct relation notation.

So then we are able to write R< x, y > in which the only required constant is the principal verb, which is the relation.

Of course, many sentences use specific or particular descriptives for subjects. Even the word "they" is usually implicitly particular. Generally, we have some idea of who is meant by the words, They laughed. In other words, the subject term of a relation pair will often be a constant. Yet, that set would be a subset of the set of ordered pairs where all variables are used for subject and non-verb predicate.

A point of which to be aware: Reflexivity obtains in the general matrix even if some of the pairs are deemed meaningless in Russell's sense. But reflexivity may very well not be acceptable in a "probabilistic" subset. Horses eat hay tokenizes a Russellian fact, but Hay eats horses will be cast into outer darkness.

This seems a good place to reflect on the quantifier all. In honor of Russell, we shall write "all x" as (x). If we want to say that the proposition P holds for all x we may write (x)P or (x)Px. Now how does this quantifier work out in our system of relations? I suppose we have to be aware of levels. We write R < x, y >, where the use of the letters x and y implies arbitrary elements of the entire cross product. So we may apply a truth value to the whole cross product, to a cross product subset or to a single ordered pair, in which each element is constant. This line of thought corresponds to the "all," "some" or "none" quantifiers.

So for Russell, Horses eat hay is a proper proposition with a truth value T, and Hay eats horses is also a proper proposition that carries a truth value of F. But, Ideas sleep furiously correlates with no fact and so is meaningless and so carries no truth value. But, to controvert Russell, we note that language entwines the art of metaphor, invention and novelty, and so we cannot be sure that a "meaningless" construct won't say something meaningful to someone sometime.

Let us note something further on quantification, while we're at it. Horses eat hay is generally accepted as true. But does this mean all horses eat hay? or perhaps all horses are inclined to eat hay? or maybe most horses will thrive on hay? I cast my vote for the last option. This sort of ambiguity prompted Russell to argue in favor of strict symbolic notation, which, it was hoped, would remove it. From my point of view, the relation E < a,b > expresses a subset of ordered pairs of things that eat and things that are eaten, to wit the subset of horses and the set of hay (where elements might be individual shoots, or bales, or packets). So we may be claiming that T holds for this entire subset. Or, we may wish to apply a truth value only after inserting the standard quantifier "some," which is a noun or subject modifier. One may write ∃E< a,b > if one can endure the abuse of notation. The notion of most is rather convenient and seems to warrant a quantifier-like symbol. We will use ⥽. Hence ⥽E means most elements of this set have the truth value T. Or, better, there is a subset of E< a,b > such that its complement contains at least one fewer member than it contains. By the way, use of the "exactly one" quantifier ∃! permits us to justify the "most" symbol, as the reader can easily work out for himself.
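The "all," "some" and "most" readings of Horses eat hay can each be checked over a finite pair set; a sketch with invented toy data:

```python
# Three quantifier readings over an invented "eats hay" relation.
horses = ["Dobbin", "Trigger", "Silver", "Ned"]
eats_hay = {"Dobbin": True, "Trigger": True, "Silver": True, "Ned": False}

universal = all(eats_hay[h] for h in horses)     # (x)P: every horse
existential = any(eats_hay[h] for h in horses)   # the "some" quantifier
count = sum(eats_hay[h] for h in horses)
most = count > len(horses) - count               # majority: the "most" reading

print(universal, existential, most)   # False True True
```

The majority test makes the essay's gloss literal: the true subset beats its complement by at least one member.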

We should also account for transitivity.

Case 1) Different relations

Example: Horses eat hay, hay feeds some animals

or E< a,b > • F < b,c >

Case 2) Same relation

Example: Men marry women, women marry for security

or M < a,b > • M < b,c >

We also have symmetry.

Example: Women marry men, men marry women

M < a,b > <--> M < b,a >
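Both transitivity (Case 1) and symmetry are mechanical checks on pair sets; a sketch with invented pairs:

```python
# Symmetry: M<a,b> <--> M<b,a>, checked over a toy "marry" relation.
marry = {("Alice", "Bob"), ("Bob", "Alice"), ("Carol", "Dan"), ("Dan", "Carol")}
symmetric = all((b, a) in marry for (a, b) in marry)

# Case 1 transitivity: composing E<a,b> ("eats") with F<b,c> ("feeds"),
# linking pairs wherever the middle terms agree.
E = {("horse", "hay")}
F = {("hay", "animal")}
composed = {(a, c) for (a, b1) in E for (b2, c) in F if b1 == b2}

print(symmetric, composed)
```

The composition step is just the shared middle term b doing the work the syllogism's middle term does.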

Other examples of symmetry:

u = v

Al was forced to face Al

In this last example, notice that we permit the action element to be a composite, where exist, is and be are regarded as actions. I.e., the relation is "was forced to face."

I have not carefully analyzed such composites, as I am mostly interested in the fact that the action element relates a subject to something that is acted upon. It is the function that seems important to me, as opposed to the details of the interior of the function.

A quick look at the is or existential relation:

The king of France is bald,

which may be taken as equivalent to

The king of France has the property or attribute of baldness.

Associated with the is relation is a complete set A X B such that A contains all English nouns and B contains all English words used for properties. Aside from an adverb, one might have an object in the form of a horse or the concierge.

Russell in 1905 argued that, because the sentence implicitly asserts the proposition, "There at present exists a king of France and that king is bald," the sentence is false because the first clause in the rewritten proposition is false. Our view is that the subset of relations to which truth values are attached is somewhat malleable. I.e., as discussed above, something might be true in a very limited context while in general not being taken as true.

For example, suppose the sentence is part of a limerick: The king of France is bald, and also quite the cuckold... etc. One would not apply a truth value to the sentence in a general way, but for the case of "suspended disbelief" that we humans deploy in order to enjoy fictions, the sentence is held to be true in a very narrow sense. At any rate, we find that, though it has no general truth value, we cannot consign it to the set of "meaningless" sentences. We have managed to put it into a context that gives it meaning, if we mean by that word something beyond gibberish.

What of such counterfactual objects as "the gold mountain" or "the round square"?

We add the implicit verb exists so as to obtain the relation:

gE∅ and rE∅

It is claimed that these objects do not exist and so fall under the subset of false propositions. That would be the conventional judgment. In the case of a gold mountain, we are talking about something that has never been observed, but there is always the faint possibility that such an object may be encountered. So in that case the ordered pair < g, ∅ > would perhaps be placed in the complement set of the truth value set.

As for "the round square," we are faced with a contradiction. We can make this absurdly plain by writing "A square object contains four right angles on its perimeter" and "A round (or circular) object has no finite angles on its perimeter, or it has an infinitude of what some call infinitesimal angles."

And that gives: "A perimeter with four right angles, which are represented by a finite number, is a perimeter with no finite number of angles." So there is no issue with placing the "round square exists" relation in the subset of relations deemed false.
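In compressed form (a restatement in standard notation, not the essay's own), the clash is:

```latex
\begin{align*}
\text{square}(x) &\;\Rightarrow\; \#\text{angles}(x) = 4,\\
\text{round}(x) &\;\Rightarrow\; \neg\,\exists\, n \in \mathbb{N}\;\; \#\text{angles}(x) = n,\\
\text{square}(x) \wedge \text{round}(x) &\;\Rightarrow\;
  \bigl(\#\text{angles}(x) = 4\bigr) \wedge \neg\bigl(\#\text{angles}(x) = 4\bigr).
\end{align*}
```

Since 4 is a natural number, the conjunction yields a proposition and its negation, so the relation is consigned to the false subset.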

Russell, in a lecture published in 1918, objects to Meinong's way of treating the round square as an object.
Meinong maintains that there is such an object as the round square only it does not exist, and it does not even subsist, but nevertheless there is such an object, and when you say “The round square is a fiction,” he takes it that there is an object “the round square” and there is a predicate “fiction.” No one with a sense of reality would so analyze that proposition. He would see that the proposition wants analyzing in such a way that you won’t have to regard the round square as a constituent of that proposition. To suppose that in the actual world of nature there is a whole set of false propositions going about is to my mind monstrous. I cannot bring myself to suppose it. I cannot believe that they are there in the sense in which facts are there. There seems to me something about the fact that “Today is Tuesday” on a different level of reality from the supposition “That today is Wednesday.” When I speak of the proposition “That today is Wednesday” I do not mean the occurrence in future of a state of mind in which you think it is Wednesday, but I am talking about the theory that there is something quite logical, something not involving mind in any way; and such a thing as that I do not think you can take a false proposition to be. I think a false proposition must, wherever it occurs, be subject to analysis, be taken to pieces, pulled to bits, and shown to be simply separate pieces of one fact in which the false proposition has been analyzed away. I say that simply on the ground of what I should call an instinct of reality. I ought to say a word or two about “reality.” It is a vague word, and most of its uses are improper. When I talk about reality as I am now doing, I can explain best what I mean by saying that I mean everything you would have to mention in a complete description of the world; that will convey to you what I mean. Now I do not think that false propositions would have to be mentioned in a complete description of the world. 
False beliefs would, of course, false suppositions would, and desires for what does not come to pass, but not false propositions all alone, and therefore when you, as one says, believe a false proposition, that cannot be an accurate account of what occurs.
I suppose Russell preferred a different definition of "object" over Meinong's.

But our way of treating the proposed object shows that no such object is "acceptable" because of the internal contradiction, which makes the relation "A/the round square exists" an element of the falsehood subset of R X ∅.

Point to Russell.

Russell saw the value of the use of relations for linguistic purposes as far back as 1918, as we can see from this excerpt from The Philosophy of Logical Atomism, Lecture V:
Now I want to come to the subject of completely general propositions and propositional functions. By those I mean propositions and propositional functions that contain only variables and nothing else at all. This covers the whole of logic. Every logical proposition consists wholly and solely of variables, though it is not true that every proposition consisting wholly and solely of variables is logical. You can consider stages of generalizations as, e.g.,

“Socrates loves Plato” “x loves Plato” “x loves y” “x R y.”

There you have been going through a process of successive generalization. When you have got to xRy, you have got a schema consisting only of variables, containing no constants at all, the pure schema of dual relations, and it is clear that any proposition which expresses a dual relation can be derived from xRy by assigning values to x and R and y. So that that is, as you might say, the pure form of all those propositions. I mean by the form of a proposition that which you get when for every single one of its constituents you substitute a variable. If you want a different definition of the form of a proposition, you might be inclined to define it as the class of all those propositions that you can obtain from a given one by substituting other constituents for one or more of the constituents the proposition contains. E.g., in “Socrates loves Plato,” you can substitute somebody else for Socrates, somebody else for Plato, and some other verb for “loves.” In that way there are a certain number of propositions which you can derive from the proposition “Socrates loves Plato,” by replacing the constituents of that proposition by other constituents, so that you have there a certain class of propositions, and those propositions all have a certain form, and one can, if one likes, say that the form they all have is the class consisting of all of them. That is rather a provisional definition, because as a matter of fact, the idea of form is more fundamental than the idea of class. I should not suggest that as a really good definition, but it will do provisionally to explain the sort of thing one means by the form of a proposition. The form of a proposition is that which is in common between any two propositions of which the one can be obtained from the other by substituting other constituents for the original ones. When you have got down to those formulas that contain only variables, like xRy, you are on the way to the sort of thing that you can assert in logic.

To give an illustration, you know what I mean by the domain of a relation: I mean all the terms that have that relation to something. Suppose I say: “xRy implies that x belongs to the domain of R,” that would be a proposition of logic and is one that contains only variables. You might think it contains such words as “belong” and “domain,” but that is an error. It is only the habit of using ordinary language that makes those words appear. They are not really there. That is a proposition of pure logic. It does not mention any particular thing at all. This is to be understood as being asserted whatever x and R and y may be. All the statements of logic are of that sort.

It is not a very easy thing to see what are the constituents of a logical proposition. When one takes “Socrates loves Plato,” “Socrates” is a constituent, “loves” is a constituent, and “Plato” is a constituent. Then you turn “Socrates” into x, “loves” into R, and “Plato” into y. x and R and y are nothing, and they are not constituents, so it seems as though all the propositions of logic were entirely devoid of constituents. I do not think that can quite be true. But then the only other thing you can seem to say is that the form is a constituent, that propositions of a certain form are always true: that may be the right analysis, though I very much doubt whether it is.

There is, however, just this to observe, viz., that the form of a proposition is never a constituent of that proposition itself. If you assert that “Socrates loves Plato,” the form of that proposition is the form of the dual relation, but this is not a constituent of the proposition. If it were you would have to have that constituent related to the other constituents. You will make the form much too substantial if you think of it as really one of the things that have that form, so that the form of a proposition is certainly not a constituent of the proposition itself. Nevertheless it may possibly be a constituent of general statements about propositions that have that form, so I think it is possible that logical propositions might be interpreted as being about forms.
Russell gave a discussion of the philosophy of relations in The Principles of Mathematics (1903) in which he contrasted the "monadistic" outlook with the "monistic." The monistic view holds that a relation is all of a piece with the relata (the objects of the relation). For example, if we regard "x is greater than y" as a relation, we find that there is a relation between the string within quotation marks and the constituents x and y, which, says Russell, leads to contradiction. He favors a string written "x is greater than" (from what I can gather), but this form has its own problems.

My form requires that nouns or gerunds, or noun or gerund phrases, be paired by a relation, which, admittedly, we have not defined very well. I would argue that the relation is a verb-like (action, or pseudo-action as in existence) word or phrase. For example, the relation "greater than" may be finely elucidated in NBG set theory, or it may be left as a primitive concept.

A dashed-off note on the cult of Moloch/Mammon


The unwilling mother "knows" that she is more important than her baby. This belief is promoted by the idea that human beings do not have souls and hence "nothing happens" to a very young baby that will bring it suffering.

Abortion in a very great many cases puts the self ahead of the baby and rationalizes this selfish decision by dehumanizing the baby, by arguing that its extreme youth makes it not-yet human, that it is a non-person because the state has not granted it a certificate of personhood, that as a dehumanized construct, its mother's right to serve herself overrides a baby's right to life; in fact, by the dehumanization of the baby, that now non-person is stripped of all rights.

Isn't it wonderful that Irish women have won the right to be merciless toward the unborn, who are not real people and so can be easily thrown away?

"That guy doesn't love me and I don't want his child," is often the unspoken motivation.

A problem with the idea of disposable people (or "almost people") is that the boundaries of what is acceptable may shift. If abortion is socially sanctioned, then we may see cases in which women are court-ordered to have an abortion. Why not? The proto-human is a meaningless lump with no rights, and so a woman could get involved in a legal situation that results in forced abortion.

But more disturbingly, why does the mother think she should live and the baby should die? Unless she is suicidal, her normal inclination is to feel strongly that "I wish to survive." She has the will to live. That instinct is something precious, perhaps God-given. But doesn't the baby have that very same instinct? There is something not quite right about the woman favoring her own life instinct over her baby's, which she assumes doesn't count.

The trick is to twist words in such a way as to help the woman play a confidence game on herself. The unborn being isn't a baby, it's not even a being. It's little more than an inert thing. Really, it's not at all inert; it's lively, it's animate. But we must pretend that it's a mere assemblage of parts that are not up and running in synchronous order yet. Of course, that's not true either. Anyway, thank heaven, she's not a mother merely because she has something inside that makes her pregnant. We're not to say that she is with child. Children are human (all too...).

Well, perhaps this will do: It is too young to have a mind, and so it won't know what it will miss. This could be true, if we are all very advanced forms of robotic intelligence. But suppose there is a Mind behind the mind? What if there is a soul? Many silly people assume that "Science shows that people don't have souls." But as the saying goes, "Absence of evidence is not evidence of absence."

Many women are ecstatic with the heady notion that they have overcome an unfair (to whom?) religious obstacle. They won't face the fact that they are using shallow euphemisms to help them evade moral responsibility for the souls nourished in their bodies. They don't see that they are following an ancient pagan practice of offering their children to Moloch as human sacrifices. Yes, that is what they do -- with the connivance of boyfriends and casual partners who don't want the bother of a child to care for. That's a burden. That requires commitment, with a plentiful dose of faith. Why wreck my life just so that that miserable little blot can live?

But isn't sisterhood wonderful? We women have won the battle to determine what goes on in our own bodies. And if we don't want some proto-human parasite, we can choose to kill it. Of course, the sisters won't word that last thought that way. They must marshal the mushy euphemisms that permit them to glide past the annoying moral crux.

What the woman wants to expel is not so much the baby, but what she regards as an obstacle to her grand expectations of a Wonderful Life. In other words, she is offering up her unborn child to the god Mammon, to the idol of a Pleasant, Self-based Life, to the god Moloch, who craved to devour human children.

Saturday, February 24, 2018


Some of Paul Conant's writings archived at Perma.cc

Created February 24, 2018
Kosmic kiks: Appendix A: anecdotal accounts of synchronicity
Created February 24, 2018
The Invisible Man: A new look: Feds twisted facts to pin anthrax blame on ailing scientist
Created February 24, 2018
Kosmic kiks: The many worlds of probability, reality and cognition
Created February 24, 2018

What is a continuum? Russell knocks Hegel's logic (1903)

Bertrand Russell, in his Principles of Mathematics (1903), comments on G.W.F. Hegel's Logic: 271. The notion of continuity has be...





AC and the subset axiom

AC may be incorporated into the subset axiom. The subset axiom says that, assuming the use of "vacuous truth," any set X has a s...