Sunday, December 26, 2021

A snapshot of undecidability


Notation: The logical product, or conjunction, of two declarations P and Q is expressed PQ (P and Q) or P.Q.
A logic language uses a finite set of symbols. A declaration is a legal arrangement of symbols in a string of finite length. It doesn't matter whether the declaration "makes sense."

A theory T contains a denumerable (finite or not) set of axioms and a denumerable set of declarations. That the set of declarations is infinite is shown by the simple chain P --> P --> P --> ... ad infinitum.

In any case, we need worry about nothing but the axiom subset of T. That is, if a declaration is regarded as provable, then it must be derived from axioms via modus ponens, as in AxAy --> Q, where Ax and Ay are axioms and Q is a theorem.

Note that though the axiom set may be infinite, a proof demands that only a finite subset be used, since an infinite chain of conjunctions means that a deduction is never realized. So any provable, or derivable, theorem has a finite chain of declarations linking it to some finite subset of axioms. The deduction theorem then lets us fold those finitely many axioms into a single implication. Hence, we can simply cite the axioms that yield the theorem; the intervening chain is built and checked by a mechanical procedure, of course.
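For reference, the deduction theorem, in standard notation (with \vdash for "is derivable from"): if \Gamma \cup \{A\} \vdash B, then \Gamma \vdash A \to B. Applied repeatedly to a derivation from finitely many axioms, it gives

A_1, \ldots, A_n \vdash P \quad \Longrightarrow \quad \vdash (A_1 \land \cdots \land A_n) \to P,

which is the form used below (the conjunction is written there as a product).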

So we may express any provable theorem of T thus:

Π Ai --> P, where Π Ai denotes the conjunction of the finitely many axioms used; P is customarily taken as an element of T, though not always.

Now suppose we write Π Ai --> P. By material implication, this may be rewritten ~(Π Ai) v P. Then, adding a double negation, ~~[~(Π Ai) v P]. Then, by De Morgan's law applied inside the outer negation, ~[(Π Ai).~P].

We denote that last declaration by R (some would call it a meta-declaration). R denies that the axiom subset can be true while P is false. But suppose P says, "This declaration cannot be derived from axioms" (or "This declaration is not provable in T"). Then, granting the axioms, R assures us P cannot be false. So, intuitively, we are inclined to accept P as true. Yet P -- by what it asserts -- cannot then be proved. So, if P is true, then T is incomplete. If P is false, then it can be derived from the axioms; but then a false declaration is provable, and R exposes an inconsistency, one to be found in the axioms.

This is only a snapshot because, as given, the argument leans on what the declaration means, whereas a proof requires the use of logic symbols only. We must be able to use symbols alone to justify a declaration like "This declaration is not provable in T." By the use of a Goedel-type method, including Goedel numbers, this snapshot can be made rigorous. That is, it is possible to encode the string represented by P as an element of Number Theory. That, in turn, shows that not all well-formed statements -- or posed problems -- of Number Theory are decidable. (Goedel and others base their results on a subset of Peano's axioms.) Because the logic L of symbol strings can represent Axiomatic Set Theory, we then have that even that theory (usually ZFC) cannot be fully provable. Any system rich enough to be encoded by Goedel's tool will not be fully provable or decidable.
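As an illustration of the encoding idea, here is a toy sketch in Python of how a symbol string can be mapped to a single natural number. This is my own illustration, not Goedel's actual numbering, and the symbol codes are arbitrary choices.

SYMBOL_CODES = {'~': 1, 'v': 2, '.': 3, '(': 4, ')': 5, '-': 6, '>': 7, 'P': 8, 'A': 9}

def first_primes(n):
    # Return the first n primes by trial division.
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def goedel_number(declaration):
    # Encode the string s1 s2 ... sn as 2^c1 * 3^c2 * 5^c3 * ...
    # Unique prime factorization makes the encoding reversible, so
    # claims about strings become claims about numbers.
    codes = [SYMBOL_CODES[ch] for ch in declaration]
    number = 1
    for p, c in zip(first_primes(len(codes)), codes):
        number *= p ** c
    return number

print(goedel_number("~(P.~P)"))  # a unique number for this declaration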

It has been shown that when Number Theory is limited to addition alone (or to multiplication alone), the resulting system is complete and consistent. So only a system too rich to be captured at that restricted level -- one with both addition and multiplication -- falls under Goedel's result and must be inconsistent or incomplete.

Saturday, December 25, 2021

Monty Hall II

A peculiar scenario
Behind three doors are two booby prizes and a luxury car.

You are asked to select a door and, after reflection, you choose one.

Monty says, "Have you chosen a door?"

"Yes."

"Are you very certain?"

"Yes."

"Before I open that door, would you like to change your choice?"

Now you must decide whether it would help you to change your choice even though Monty hasn't opened a door.

On your first choice, you had a 1/3 probability of being right.

So, your probability of being wrong is 2/3.

Thus, it makes sense to change your choice, even though nothing else has changed.

You now have a 2/3 probability of being right.

Similarly, Monty could hear you answer, "I pick door A."

Monty asks, "Are you very certain? Would you like to change your choice?"

The same reasoning applies as above -- even though there is no new information. That is, so-called Bayesian reasoning does not appear to be terribly relevant here. The probabilities have not been updated by new information. The probabilities change based only on a person's firm mental act.

Weird, but not contradictory in the context of the usual logic of probability.

But a Bayesian solution to Monty Hall I DOES require a door to be opened. Yes, we can see that the Bayesian solution makes sense. But that does not answer the puzzle of why the non-Bayesian solution works for both Monty Hall I and II. That is, the non-Bayesian solution seems to be the more general of the two.

Some would argue that this poser proves that one cannot use the simple solution given above. Yet, the logic itself is not flawed, though one may question any potential application. But perhaps we have an insight here into the interpretation problem of quantum physics.

You say, hold up! What we have here is two independent cases. Wrong. If that were so, we would be told -- before Monty asks "Are you sure?" -- that a reshuffle of prizes may have taken place.

Bayesian solution to Monty I
I found a very good Bayesian analysis online at Geek Data Guy
https://towardsdatascience.com/solving-the-monty-hall-problem-with-bayes-theorem-893289953e16

which I reproduce below:

Solving the Monty Hall Problem with Bayes Theorem

How your intuition can lose you money on gameshows

You’re on a gameshow called “Let’s Make a Deal”. There are 3 closed doors in front of you.

Behind each door is a prize. One door has a car, one door has breath mints, and one door has a bar of soap. You’ll get the prize behind the door you pick, but you don’t know which prize is behind which door. Obviously you want the car!

So you pick door A.

Before opening door A, the host of the show, Monty Hall, now opens door B, revealing a bar of soap. He then asks you if you’d like to change your guess. Should you?

My gut told me it doesn’t matter if I change my guess or not. There are 2 doors, so the odds of winning the car with each are 50%. Unfortunately for me, that’s 100% wrong.

This is the famous Monty Hall problem.

By working through Bayes Theorem, we can calculate the actual odds of winning the car if we stick with door A, or switch to door C.

Bayes Theorem

Bayes Theorem describes probabilities related to an event, given another event occurs.
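For reference, the theorem itself reads:

P(A|B) = P(B|A) x P(A) / P(B)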

A = An event.

B = Another event.

P(A|B) = posterior = The probability of an event occurring, given another event occurs.

P(B|A)= likelihood = The probability event B occurs, if event A occurs.

P(A) = prior = The probability an event occurs, before you know if the other event occurs.

P(B) = The normalizing constant.

Bayes Theorem + Monty Hall

Note: A, B and C in calculations here are the names of doors, not A and B in Bayes Theorem.

Now let’s calculate the components of Bayes Theorem in the context of the Monty Hall problem.

Let’s assume we pick door A, then Monty opens door B.

Monty wouldn’t open C if the car was behind C so we only need to calculate 2 posteriors:

  1. P(door=A|opens=B), the probability A is correct if Monty opened B,
  2. P(door=C|opens=B), the probability C is correct if Monty opened B.

Prior: P(A)

The probability of any door being correct before we pick a door is 1/3. Prizes are randomly arranged behind doors and we have no other information. So the prior, P(A), of any door being correct is 1/3.

  1. P(door=A), the prior probability that door A contains the car = 1/3
  2. P(door=C), the prior probability that door C contains the car = 1/3

Likelihood: P(B|A)

If the car is actually behind door A, then Monty can open door B or C. So the probability of opening either is 50%.

If the car is actually behind door C, then Monty can only open door B. He cannot open A, the door we picked. He also cannot open door C because it has the car behind it.

  1. P(opens=B|door=A), the likelihood Monty opened door B if door A is correct = 1/2
  2. P(opens=B|door=C), the likelihood Monty opened door B if door C is correct = 1

Numerator: P(A) x P(B|A)

  1. P(door=A) x P(opens=B|door=A) = 1/3 x 1/2 = 1/6
  2. P(door=C) x P(opens=B|door=C) = 1/3 x 1 = 1/3

Normalizing Constant: P(B)

In cases where analyzed events cover all possible options and don’t overlap, we can take the sum of the numerators.

P(B) = 1/6 + 1/3 = 3/6 = 1/2

Posterior: P(A|B)

Now we just need to do the remaining math.

  1. P(door=A|opens=B) = (1/6) / (1/2) = 1/3
  2. P(door=C|opens=B) = (1/3) / (1/2) = 2/3

This leaves us with a higher probability of winning if we change doors after Monty opens a door.
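Here is the same arithmetic in a few lines of Python (a sketch added here; the priors and likelihoods are exactly those derived above):

prior_A, prior_C = 1/3, 1/3          # P(door=A), P(door=C)
like_B_given_A = 1/2                 # P(opens=B|door=A): Monty may open B or C
like_B_given_C = 1.0                 # P(opens=B|door=C): Monty must open B

p_opens_B = prior_A * like_B_given_A + prior_C * like_B_given_C   # 1/2
print(prior_A * like_B_given_A / p_opens_B)   # P(door=A|opens=B) = 1/3
print(prior_C * like_B_given_C / p_opens_B)   # P(door=C|opens=B) = 2/3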

If this is super counter-intuitive, I think it’s important to remember 2 pieces of information:

  • Monty NEEDS to open a door
  • He can’t open the door with the car behind it
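Those two constraints are easy to check with a quick Monte Carlo simulation (a sketch of my own, not part of the original article):

import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty must open a door, and it can be neither your pick nor the car.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3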

Conclusion

I found this to be a super interesting case where intuition is at odds with probability.

Try asking people (especially smart people) what they would do if playing the Monty Hall problem. Sometimes it’s fun to see how they answer :)

AC and the subset axiom

AC may be incorporated into the subset axiom. The subset axiom says that, assuming the use of "vacuous truth," any set X has a s...