
> A theory Γ is said to be complete whenever for all sentences we have either Γ ⊨ φ or Γ ⊨ ¬φ.

You mean Γ ⊢ φ or Γ ⊢ ¬φ.

But note that what you said is not mutually exclusive with what /u/Tainnor said, i.e.

Γ ⊨ φ → Γ ⊢ φ

The proof of Godel's theorem is to exhibit a φ for which "Γ ⊢ φ or Γ ⊢ ¬φ" is false. From that you conclude that "Γ ⊨ φ → Γ ⊢ φ" is false on the assumption that "Γ ⊨ φ" is true. But that last part is an informal argument. We believe that "Γ ⊨ G" is true for the Godel sentence G but we (obviously) can't prove it.

Indeed, adding ¬G as an axiom to PA produces a consistent system where "Γ ⊢ ¬G" is (trivially) true. The semantics of that system are quite interesting: they say that there exists a Godel number of a proof of G, but no natural number is that number. It is similar to adding axioms like "there exists a number whose successor is zero" or "there exists a number whose square is the number whose successor is zero", which have obviously been very fruitful extensions of PA.



> You mean Γ ⊢ φ or Γ ⊢ ¬φ.

In principle no, though it's possible that it's defined like that in certain places, since in first-order logic, that amounts to the same thing (given the completeness theorem). But "completeness" is a notion from model theory that doesn't care about proof systems.

In a system where the completeness theorem is not true, such as second-order logic with standard semantics, we do have complete theories (such as second-order PA), where we always have Γ ⊨ φ or Γ ⊨ ¬φ, but that doesn't mean that every statement is either provable or disprovable.

> The proof of Godel's theorem is to exhibit a φ for which "Γ ⊢ φ or Γ ⊢ ¬φ" is false. From that you conclude that "Γ ⊨ φ → Γ ⊢ φ" is false on the assumption that "Γ ⊨ φ" is true. But that last part is an informal argument. We believe that "Γ ⊨ G" is true for the Godel sentence G but we (obviously) can't prove it.

I think that has it a bit backwards. It's not that we believe that PA ⊨ G - in fact, if we believed that, it would immediately imply PA ⊢ G, which is exactly what we showed to be impossible. It's that we believe that G is true in the natural numbers, which is not the same thing as being true in PA, and that belief rests on our belief that PA is sound (which we can't prove, but seems reasonable).

(There are some different "flavours" of the incompleteness theorems; some of them rely on soundness of PA and others only on it being consistent.)

> But note that what you said is not mutually exclusive with what /u/Tainnor said, i.e.

That's because that's me. ;)


> That's because that's me. ;)

Oh, sorry, I thought I was responding to the parent comment by /u/dvt.

Let's go back to basics. To begin with, let's make sure we're on the same page with regards to notation.

"Γ ⊢ φ" means that the string φ is a theorem under Γ. This is a syntactic property of Γ. The claim that some sequence of strings P is a valid proof of φ (and that therefore Γ ⊢ φ is indeed true) can be verified mechanically.

"Γ ⊨ φ" means that the string φ is true under some model of Γ. This is a semantic property of Γ and it cannot be checked mechanically.

We want "Γ ⊨ φ → Γ ⊢ φ" to be true, that is, we would like it to be the case that for every statement φ that is true under some model of Γ, we can mechanically prove that φ is a theorem of Γ, i.e. we can show that there exists a P which is a proof of φ. If we have such a P in hand we can always check it, but if we don't have a P the question of whether or not one exists is generally an open one.

But we actually want more than that. We also want Γ to be "interesting" in some sense. There is no sport in coming up with uninteresting systems for which "Γ ⊨ φ → Γ ⊢ φ" is true. The obvious example is the empty system and a corresponding empty model where every φ is both false and not a theorem.

So we impose some minimal structure on Γ, not because the rules of formal systems demand it, but because systems where, say, P and ¬P are both false are not very interesting. So we demand that e.g. there exist at least one true statement and at least one false statement in our model, not because there is any cosmic rule that requires this, but simply because the whole enterprise becomes a pointless exercise in navel-gazing if we don't.

That raises the natural question: what is the minimal amount of structure we can impose on Γ to make it "interesting" and worthy of study? And there are many answers to this, because there are many possible things that are potentially "interesting". Propositional logic. Set theory. Yada yada yada.

The surprise is that it turns out that with only a very small amount of structure we lose "Γ ⊨ φ → Γ ⊢ φ". As with the imposition of structure in general in order to make systems interesting and worthy of study, there are many different ways in which to impose enough structure to lose this desirable property. One way, the way which historically was the first to be discovered, is to impose enough structure that the system can be modeled by the natural numbers. But this is not the only way. Another way is to impose enough structure that the system can be modeled by a simple machine consisting of a tape and a lookup table. Yet another way is to impose enough structure that the system can be modeled by two urns that contain stones, together with the ability to add and remove stones from the urns and to look in an urn to see whether it is empty or not. There are many, many variations on the theme.
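
To make the urns-and-stones flavour concrete, here is a rough interpreter sketch in Python for such a machine, i.e. a two-counter (Minsky) machine; the program format here is invented for illustration:

    # A program is a lookup table from states to instructions:
    #   ("inc", urn, next)            -- add a stone, go to next
    #   ("dec", urn, next, if_empty)  -- remove a stone and go to next,
    #                                    or go to if_empty if the urn is empty
    # The machine halts on any state not in the table.
    def run(program, a=0, b=0, state=0):
        urns = [a, b]
        while state in program:
            op, urn, *jumps = program[state]
            if op == "inc":
                urns[urn] += 1
                state = jumps[0]
            elif urns[urn] > 0:
                urns[urn] -= 1
                state = jumps[0]
            else:
                state = jumps[1]
        return urns

    # Example: move every stone from urn 0 into urn 1.
    program = {
        0: ("dec", 0, 1, 99),  # take a stone from urn 0, or halt if empty
        1: ("inc", 1, 0),      # drop a stone into urn 1, repeat
    }
    print(run(program, a=3))   # [0, 3]

Two-counter machines of this kind are known to be Turing-complete (up to how input is encoded), which is why this little structure already crosses the threshold.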

All of these things turn out to be in some sense "equivalent", and because of that the details of the structure that we impose on the system are uninteresting. It doesn't matter if we use numbers or tapes or stones, there is a certain point beyond which we lose "Γ ⊨ φ → Γ ⊢ φ". And the big surprise is that the threshold is very, very low. It takes very little structure to cross the threshold. There are systems for which "Γ ⊨ φ → Γ ⊢ φ" does hold, but those are necessarily of very limited utility.

The proof of the falseness of "Γ ⊨ φ → Γ ⊢ φ" for any given system depends on the details of the system, but the general procedure is always the same: you produce a mapping from proofs to elements of the model. You then use that mapping to produce a sentence (called G) which is true under the model if and only if there does not exist an element of the model corresponding, under your mapping, to a proof of G.
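
The programming analogue of that self-referential construction is a quine, i.e. a program that manages to talk about (here: print) its own source. A classic two-line Python example:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Loosely speaking, the diagonal trick that lets this program quote itself is the same trick used to build G; G "quotes" its own encoding instead of printing it.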

The key is the mapping from proofs to elements of the model, and Godel numbers were the first such map, but they are not the only such map, and there is nothing particularly special about them other than that they were the first to be discovered. They might have some advantages if you are actually writing out a detailed formal proof of Godel's theorem by hand, just as buggy whips can be useful if you are driving a horse and buggy. But there aren't many compelling reasons to do either one in today's world except as an exercise in nostalgia.
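
For the curious, here is a rough sketch of the classic prime-exponent scheme in Python, over a made-up six-symbol alphabet (the alphabet and the symbol codes are arbitrary; all that matters is that encoding and decoding are computable):

    SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
    INV = {v: k for k, v in SYMBOLS.items()}

    def primes(n):
        """First n primes, by trial division (fine for a demo)."""
        found, k = [], 2
        while len(found) < n:
            if all(k % p for p in found):
                found.append(k)
            k += 1
        return found

    def godel_number(formula):
        """Encode symbol_1 ... symbol_k as p_1^code(s_1) * ... * p_k^code(s_k)."""
        n = 1
        for p, ch in zip(primes(len(formula)), formula):
            n *= p ** SYMBOLS[ch]
        return n

    def decode(n):
        """Recover the formula by factoring out 2, 3, 5, ... in turn."""
        out, k = [], 2
        while n > 1:
            e = 0
            while n % k == 0:
                n //= k
                e += 1
            if e:
                out.append(INV[e])
            k += 1
        return "".join(out)

    g = godel_number("S0=S0")  # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
    print(g, decode(g))        # 808500 S0=S0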


> "Γ ⊨ φ" means that the string φ is true under some model of Γ.

It means that φ is true in every model of Γ.[0]

> If we have such a P in hand we can always check it, but if we don't have a P the question of whether or not one exists is generally an open one.

> The surprise is that it turns out that with only a very small amount of structure we lose "Γ ⊨ φ → Γ ⊢ φ".

That is not true for first-order logic. In first-order logic Γ ⊨ φ → Γ ⊢ φ is always true. That's the completeness theorem.[1] This really only starts to become false when you go to e.g. second-order logic, but first-order logic is really the most commonly used type of logic and plenty sufficient for almost all of mathematics (though that becomes a philosophical debate to some extent).

Gödel's theorem is really about first-order logic, and it says that even though completeness of FOL is a nice thing, it doesn't help us with the goal of finding an effective theory of the natural numbers (effective in the sense that proofs are mechanically verifiable).

[0]: That's the standard definition, as it can be found e.g. in https://milneopentextbooks.org/download/friendly-intro-2e-up... (Definition 1.9.1) or https://books.google.it/books?id=u0wlXPHATDcC&pg=PA67#v=onep... (Definition 3.4.4 (iv)) and many other textbooks on logic.

[1]: See e.g. https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_th... - and every introductory textbook on mathematical logic, including the ones referenced above.


> It means that φ is true in every model of Γ.[0]

> (Definition 1.9.1)

That is not a definition of ⊨; that is a definition of logical implication. Not the same thing.

The ⊨ symbol is defined in definition 1.7.4, and it is used in a different sense than we are using it here. The left-hand argument to the ⊨ operator in that book is a model (an "L-structure" using the book's terminology), not a theory. But Γ is a theory. So writing Γ ⊨ φ is fundamentally incompatible with the definition of ⊨ given in that book.

And our disagreement turns on this, at least in part, because one of the consequences of the incompleteness theorem is that there are models of PA (or RA or whatever your favorite axiomatization of arithmetic happens to be) where G is true, and there are models where G is false.

> That is not true for first-order logic. In first-order logic Γ ⊨ φ → Γ ⊢ φ is always true.

Only if you restrict what you mean by "first-order logic". Under some definition it is manifestly not true: any (first-order) theory that includes arithmetic is incomplete.

> first-order logic is really the most commonly used type of logic and plenty sufficient for almost all of mathematics

Yes, and when you include arithmetic (or any of the other variants on that theme) it becomes incomplete.


> That is not a definition of ⊨; that is a definition of logical implication. Not the same thing.

You should read that definition again. It's really hard to have a discussion if we can't agree on standard notation. The double turnstile symbol has multiple uses (sad as it is), and when there is a structure on the LHS it means something different than when there is a set of formulas on the LHS. The second sense is defined in 1.9.1 and is based on the first sense.
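
Spelled out, the two senses are roughly:

    M ⊨ φ  --  the single structure M makes φ true                        (Definition 1.7.4)
    Γ ⊨ φ  --  every structure that satisfies all of Γ also makes φ true  (Definition 1.9.1)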

> And our disagreement turns on this, at least in part, because one of the consequences of the incompleteness theorem is that there are models of PA (or RA or whatever your favorite axiomatization of arithmetic happens to be) where G is true, and there are models where G is false.

This is true.

> any (first-order) theory that includes arithmetic is incomplete.

That is basically true, assuming that the theory is also sound (because we can construct unsound complete theories of arithmetic), and if by "arithmetic" we mean "addition and multiplication" (otherwise, see Presburger Arithmetic).

Gödel's First Incompleteness theorem does give you a bit more than that though (at least in its strongest variants): it doesn't make any assumptions about the theory being sound (mainly because that's very hard to prove), it just needs the theory to be "strong enough" and to be consistent (i.e. not leading to a contradiction). So even if we were willing to accept some false statements we can't get a complete theory (technically, that's the Gödel-Rosser proof; Gödel's original proof had a slightly different condition). It also gives you a concrete example of a Gödel sentence that can be constructed, which leads rather directly to the second theorem.


> The double turnstile symbol has multiple uses

Ah. TIL.

> This is true.

OK, it's good that we agree on that, but it doesn't really matter. What matters is whether or not the English statement "G cannot be proven" (or some variation on that theme) is true or false. More to the point, what matters is what it takes to persuade a human being that it is true or false. The double-turnstile symbol may be useful, but it is not the main event.

> That is basically true, assuming that the theory is also sound

OK, so we are basically in agreement on the things that actually matter. (Ironically, AFAICT Leary & Kristiansen don't actually define the term "sound". But again, whatever. It doesn't really matter for the substance of the point I am trying to make.)

The point I am trying to make is this: the reason Godel's theorem matters is that it is a negative result on the quest to reduce all of mathematics to a mechanical process. Implicit in this goal is that the process yield results that are in some sense "reasonable", i.e. if a putative mechanical process for doing math ever yields the result that 2+2=5 then we can be pretty sure that something has gone wrong even without knowing anything about double-turnstiles or soundness or validity or consistency or any of the other myriad things that logicians concern themselves with. These are all just ways of battening down various hatches, but none of them actually matter. What matters, the reason Godel's theorem is a thing, is that it is what informs us that it is impossible to mechanize all of mathematics, and that attempting to do so is a fool's errand. Finding the weakest possible conditions under which this result holds is of secondary importance, of interest only to logicians. If you can add 1 and check for equality -- or any of a myriad other combinations of trivial operations that allow you to enumerate proofs -- you've lost. That is what matters. And that can be proven without Godel numbers.


"Soundness" is a term that AFAIK comes from philosophy. An argument is correct if it follows logically from the premises, a sound argument is one in which the premises are also "true". In the context of theories, theories are sound if they are not merely consistent but also "true", i.e. correct with respect to reality.

You quickly see that this gets us into murky territory where you have to argue, outside of logic itself, whether or not certain things are "true". That's why soundness is not necessarily talked about much in an introductory mathematical logic text such as L&K.

The reason why this matters in the context of Gödel's theorems, though, besides a purely philosophical point of view, is that "PA is sound" cannot be expressed in the language of arithmetic, but "PA is consistent" can be expressed in that language, and therefore Gödel's first theorem itself can be expressed in arithmetic. That is the key that establishes the link between the first and the second incompleteness theorem, the latter of which expresses that a theory of sufficient strength can't prove itself consistent.[0]
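
Roughly (glossing over the exact shape of the provability predicate): with Prf(p, n) expressing "p codes a PA-proof of the formula coded by n", consistency is the single arithmetic sentence

    Con(PA)  :=  ¬∃p Prf(p, ⌜0=1⌝)

whereas "PA is sound" would require a truth predicate for arithmetic, which by Tarski's undefinability theorem cannot be defined within arithmetic.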

A naïve proof via Turing Machines, to my understanding, gives you something equivalent to Godel's theorem provided PA is sound, but not simply provided PA is consistent.

Otherwise, I agree with your conclusions about the meaning of Gödel's theorems,[1] and I even agree that knowing all the technical conditions under which they apply shouldn't be of importance to non-logicians. I don't object to teaching CS students a version of Gödel's first incompleteness theorem via Turing Machines or even to some hand-waving where necessary.

But that's not the same thing as claiming that Turing Machines make the original proof ideas obsolete.

> And that can be proven without Godel numbers.

I've said it before but Gödel numbers are a trivial technical detail. What matters is that an encoding scheme exists, not which one it is. You can take Gödel's original proof and just replace Gödel numbering with your favourite encoding scheme and nothing substantial about the proof structure will change (except that you will want to substitute the "factoring primes" routine with a corresponding decoding routine for your preferred scheme).[2]

It just so happens that mathematicians actually like Gödel numbering because primes are very simple objects and every maths major learns about unique factorisation and the infinity of primes very early on. If, as a computer scientist, you disagree, no big deal: just substitute your favourite encoding scheme.

That's an entirely different discussion, though, from the one about whether you want to prove Gödel's theorems via Turing machines or via purely logical means and the theory of primitive recursion.

[0]: That, by itself, is not terribly interesting - if the theory was inconsistent, its own consistency proof wouldn't matter (since an inconsistent theory proves everything). The reason it becomes interesting is because that also implies that a theory can't be proven consistent by a weaker theory either (since every proof in the weaker theory is a proof in the original theory). This is what essentially kills the Hilbert program: We can't, say, agree that PA is sound and consistent (which seems easy enough to believe) and use that in order to prove ZFC consistent, or some such.

[1]: Although I'm not sure they're necessarily "practically" relevant. Had it turned out that model checking of PA is "only" NP-complete, instead of uncomputable, it wouldn't practically have made it much better.

[2]: This is, for example, briefly discussed in section 19.2 of this textbook about Gödel's theorems, which I highly recommend to anyone who does care about the technical details: https://www.logicmatters.net/resources/pdfs/godelbook/GodelB...


> I've said it before but Gödel numbers are a trivial technical detail.

Actually, you seemed to be saying the exact opposite before when you wrote:

> Your point about ASCII codes ignores that we don't just want individual symbols to be numbers, we want entire formulas to be numbers too.

And at this point I'm still not sure if you really understand this or not because:

> you will want to substitute the "factoring primes" routine with a corresponding decoding routine for your preferred scheme

Well, yeah, obviously, but translating between strings of symbols and numbers is trivial when your alphabet is ASCII (or LATIN1 or UTF8 or whatever). You don't need to factor; all you need to do is divide/modulo by powers of 2. This is just obvious to anyone who has even the most elementary knowledge of modern computers.
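
For instance, a minimal sketch (ignoring edge cases such as leading NUL bytes):

    # A string simply *is* a number written in base 256; decoding is
    # repeated divide/modulo, no factoring required.
    def encode(s: str) -> int:
        n = 0
        for byte in s.encode("ascii"):
            n = n * 256 + byte        # shift left one "digit", append the byte
        return n

    def decode(n: int) -> str:
        out = []
        while n > 0:
            out.append(chr(n % 256))  # peel off the low byte
            n //= 256
        return "".join(reversed(out))

    n = encode("S0=S0")
    print(n, decode(n))  # the code, followed by "S0=S0" again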

> A naïve proof via Turing Machines, to my understanding, gives you something equivalent to Godel's theorem provided PA is sound, but not simply provided PA is consistent.

No, it is vastly simpler than that. All you need is a computable mapping between programs and numbers. Once you have that, any claim about a program can be translated into a claim about a number (and vice versa). "P is a program that halts when run on input I" has a corresponding "N is a number that describes a program that halts when run on input I". From that, Godel's theorem follows by an elementary reductio: if arithmetic were decidable, you could solve the halting problem by translating the question of whether or not a program P halts into the question of whether a number N describes a program that halts.
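
Schematically (every name in this sketch is hypothetical; it only shows the shape of the reductio):

    from typing import Callable

    def make_halting_decider(
        arithmetic_truth: Callable[[str], bool],  # hypothetical decider for arithmetic
        translate: Callable[[str, str], str],     # computable: builds the sentence
    ) -> Callable[[str, str], bool]:              #   "N describes a program that halts on I"
        """If a decider for arithmetic truth existed, composing it with a
        computable translation would yield a halting decider, which cannot exist."""
        def halts(program: str, input_data: str) -> bool:
            return arithmetic_truth(translate(program, input_data))
        return halts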

Note that you essentially get soundness "for free" because the halting theorem is not grounded in axioms; it is grounded in physics. We don't have to wonder if our models of Turing Machines are "true". We can simply build a TM and observe that it behaves according to the model. And so the details of your axiomatization of math don't matter at all. Whatever axiomatization you can come up with, as long as you can use it to do arithmetic, and as long as someone can write a proof-checker for it, then Godel's theorem pops out as an elementary result.

> mathematicians actually like Gödel numbering because primes are very simple objects

Yes, I get that. But Turing machines are pretty simple too. The thing that makes TMs better than primes is that we can actually build a physical artifact whose behavior corresponds to the mathematical model of a TM. You can't do the same thing for primes because primes are not models of physical systems.


I really don't wish to belabour the primes vs. ASCII discussion, because it really is but a trivial implementation detail. You want to have a bijection between formulas and the natural numbers (such that certain operations on formulas, such as composition and extraction of subformulas, can be computed). Primes do that, as do ASCII sequences.

> From that, Godel's theorem follows by an elementary reductio: if arithmetic were decidable, you could solve the halting problem by translating the question of whether or not a program P halts into the question of whether a number N describes a program that halts.

I think you again misunderstand the full implications of Gödel's first theorem.

Gödel's first theorem:

> Any theory of arithmetic that is sufficiently strong and consistent is incomplete.

What you have just proven is a corollary of that theorem, but actually a weaker result:

> Arithmetic truth is not decidable.

Gödel's theorem works even if PA is not sound, but merely consistent. And it also works for other, potentially unsound, theories of arithmetic, as long as they're strong enough and consistent.

Your proof via Turing machines does not: if PA is unsound, and if from PA you derive that the "impossible" program halts (or does not halt), it doesn't tell you anything, because the conclusions of PA can't be trusted. Therefore you can't derive the necessary contradiction.

Again, this matters not only philosophically, but also technically, because "If PA is sound, it is incomplete" can't be written as a statement of arithmetic, but "If PA is consistent, it is incomplete" can, and this is the necessary link between the first and the second theorem.

---

I think there's a consistent thread in your comments where you think things are "obvious" and "true" because we can observe them somehow physically (e.g. your assertion that the Halting Problem is "not grounded in axioms, but in physics" - to which I would again object, but let's not go into that), but this is not how mathematics (and logic) work.

It's fine if you don't care about it personally, but Gödel's theorems have deep mathematical and philosophical implications about systems of reasoning which are not dependent on our physical world.

But I don't think this discussion will go anywhere.


> I really don't wish to belabour the primes vs. ASCII discussion, because it really is but a trivial implementation detail

OK, so we agree.

> What you have just proven is a corollary of that theorem, but actually a weaker result: "Arithmetic truth is not decidable"

No. My argument does not depend on the soundness of the system. All it depends on is that the system be able to do arithmetic correctly, so it requires consistency (because it needs to be able to identify e.g. "1+1=3" as false), but not soundness. You can throw in as many additional axioms as you like, including false ones. As long as the false axioms do not destroy the system's ability to correctly do arithmetic, the proof holds. And this is true for Godel's proof as well.

[UPDATE] I just realized that what I wrote earlier:

> Note that you essentially get soundness "for free"

is misleading at best, and probably just flat-out wrong. I'm actually not quite sure what I was thinking when I wrote that. It might have been something along the lines of, "The fact that we can build TMs is evidence that the mathematical models of TMs are in some sense 'true'", but this is obviously not on point. So I retract that part of my argument.

[END UPDATE]

> systems of reasoning which are not dependent on our physical world

You cannot have systems of reasoning that are not dependent on our physical world. Reasoning is a physical process conducted by physical systems. Mathematicians are physical systems. Symbols are physical things: patterns of ink on paper, or electrons moving around, or vibrations in a medium. Any claim about a system of reasoning is necessarily a claim about a physical system. Platonic ideals do not actually exist, not even in math.

We can imagine situations where this is not the case, but these are counterfactual. We can think about "systems of reasoning which are not dependent on our physical world" just as we can think about Santa Claus or Darth Vader. We can even make factual claims about them, e.g. "Santa Claus has a beard" or "Darth Vader is Luke Skywalker's father". Claims about "systems of reasoning which are not dependent on our physical world" have exactly the same ontological status as claims about Santa Claus and Darth Vader.

The reason that we care about "systems of reasoning" at all is that some "systems of reasoning" correspond to physical reality in ways that give us leverage to navigate that reality more effectively. Absent that, the entire enterprise would be a complete waste of time.


> consistency (because it needs to be able to identify e.g. "1+1=3" as false)

That's soundness, not consistency.

You can build consistent theories of arithmetic in which 1+1=3 is true. The easiest model for such a theory is a structure with a single element.
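
A concrete sketch of that one-element model:

    # One-element structure: every term denotes the single element, so
    # every closed equation, including "1+1=3", comes out true.
    DOT = 0                        # the only element
    S = lambda x: DOT              # successor collapses to the element
    add = lambda x, y: DOT         # so does addition
    one, three = S(DOT), S(S(S(DOT)))
    print(add(one, one) == three)  # True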

But that's maybe a bad example, as it's too weak for Gödel's theorems to apply to it. So let's consider instead models of Robinson arithmetic. There are models of Robinson arithmetic for which the sentence "for all x, Sx != x" is false,[0] even though we clearly want it to be true in the natural numbers. So if we add the axiom "there is an x for which Sx = x" to Robinson arithmetic, we get a theory which is arithmetically unsound, so whatever we derive from it does not readily apply to Turing machines. Yet, Gödel's theorems apply to this theory just as well (assuming it is consistent, which we can't prove, same as we can't prove it for PA).

I'll leave it at that. We're talking at cross-purposes, since we can't even get terminology straight.

[0]: https://math.stackexchange.com/questions/1066087/model-of-ro...


> But that's maybe a bad example

Gee, ya think?

> So let's consider instead models of Robinson arithmetic.

OK.

> There are models of Robinson arithmetic for which the sentence "for all x, Sx != x" is false

OK. So?

> even though we clearly want it to be true in the natural numbers.

Who is "we"?

> if we add the axiom "there is an x for which Sx = x" to Robinson arithmetic, we get a theory which is arithmetically unsound

OK. So?

> whatever we derive from it does not readily apply to Turing machines

Why do you think that this does not "readily apply to Turing machines"? Turing machines do not natively do arithmetic. They just write symbols to a tape according to a state transition table.

Do you really believe that writing a proof checker for Robinson arithmetic, with or without your new axiom, is different in any salient way than writing a proof checker for PA?

(And would you please consider the possibility that I might not be a complete idiot?)


> (And would you please consider the possibility that I might not be a complete idiot?)

I do not believe you're an idiot. As a matter of fact, I think you're probably a quite intelligent individual who knows a lot about many topics. Looking at your resume, I do get a strong sense that you're a much better engineer than I ever will be, for example.

I do however think that, when it comes to this topic, you're out of your depth, since you don't seem to be aware of concepts and ideas that are taught in a very introductory course on mathematical logic.

I don't even think you have digested many of the points I've made in my last post - or in previous ones. It seems like you're refusing to engage, opting instead for picking individual sentences apart as if they were devoid of context.

I encourage you to take some time to actually study Gödel's theorems, with all their nuances and the proper historical, mathematical and logical context. The books by Peter Smith seem like a good start: https://www.logicmatters.net/igt/ - I have only read the long one (which I've linked to previously above), but I suppose the short one should be good too.

(And maybe consider that I'm not quite dumb, either. And that mathematical logic is a topic I know a thing or two about.)


It's not your understanding of mathematical logic that I'm questioning, it's your understanding of the theory of computation. In particular, you didn't answer my question:

> Do you really believe that writing a proof checker for Robinson arithmetic, with or without your new axiom, is different in any salient way than writing a proof checker for PA?


> Do you really believe that writing a proof checker for Robinson arithmetic, with or without your new axiom, is different in any salient way than writing a proof checker for PA?

No, and this has exactly nothing to do with my point.


Then why did you bring up RA at all? You must think there is some salient difference between RA and PA with respect to my proof or (I presume) you would not have brought it up. What is it?

For that matter, what is the point you are trying to make? You originally wrote [1]:

> A theory Γ is said to be complete whenever for all sentences we have either Γ ⊨ φ or Γ ⊨ ¬φ.

If you recall, at the time I thought this was a simple mistake on your part, and that what you really meant was "Γ ⊢ φ or Γ ⊢ ¬φ", not "Γ ⊨ φ or Γ ⊨ ¬φ". But you never conceded this. So I'm going to ask you again: is that really what you meant? Because that is not the sense in which the term "completeness" is used in Goedel's incompleteness theorem. The word "complete" is used there to mean syntactic completeness. To quote Wikipedia [2]:

"A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms (Smith 2007, p. 24). THIS IS THE NOTION RELEVANT for Gödel's first Incompleteness theorem. It is NOT TO BE CONFUSED WITH SEMANTIC COMPLETENESS, which means that the set of axioms proves all the semantic tautologies of the given language." [emphasis added]

I'm trying to give you the benefit of the doubt here, but you really do appear to be confused on this point, particularly since you keep talking about soundness, which is a semantic property, not a syntactic one.

---

[1] https://news.ycombinator.com/item?id=33250168

[2] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...



