
> We do it all the time. Whatever it is that you want your program to do, and any interesting functional property of it, is a non-compositional property. I know that because if it were compositional, you wouldn't need to write the program.

I don't think I understand what you're saying here?

Say I want to write a program that tells me the duplicated members of a list. My program might call helper routines like, "stable sort the list," and then "collect adjacent duplicates."
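As a hedged sketch of that two-helper structure (the names `stable_sort`, `collect_adjacent_duplicates`, and `duplicated_members` are illustrative, not from any library):

```python
def stable_sort(xs):
    # Python's built-in sorted() is a stable sort.
    return sorted(xs)

def collect_adjacent_duplicates(xs):
    # In a sorted list, equal elements sit in adjacent runs;
    # keep one copy of each element whose run has length > 1.
    dups = []
    i = 0
    while i < len(xs):
        j = i
        while j < len(xs) and xs[j] == xs[i]:
            j += 1
        if j - i > 1:
            dups.append(xs[i])
        i = j
    return dups

def duplicated_members(xs):
    return collect_adjacent_duplicates(stable_sort(xs))
```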

I might want to prove my program is "correct", e.g., a statement like, "the output is a list that contains exactly those elements of the input list whose multiplicity exceeds 1." To do this, I can first reason about the helper functions in isolation, then compose these results about the helper functions to establish my desired property.

Why is this not an interesting property, or a program that I don't need to write?



> Why is this not an interesting property, or a program that I don't need to write?

It's both of those things, but it's not compositional, because "finds duplicate members in a list" is not a property of either of your components. If it were a property of one of them, you wouldn't need the other.

All proofs proceed in stages, and "compose" known facts. But that's not the meaning of compositional, and the fact that the property you want easily arises from the properties of the components is luck. I gave an example on this page where this is not the case (the `foo bar` example), and it is provably not the case in general.


Thanks. I think we may differ in our understanding of "compositional."

My understanding of "compositional" reasoning is something like, an approach where we (1) establish some properties about our helper functions, and then (2) reason about larger functions using these properties, instead of the definitions of the helper functions.

It seems like you have a different notion of compositional, which I'm still not sure I can articulate in a clear way... something more akin to transitivity?


A property P is compositional with respect to terms A and B and a composition ∘ if P(A) ∧ P(B) ⇔ P(A∘B); the equivalence is sometimes relaxed to an implication in one direction. (This is the special case for a predicate, i.e. a boolean-valued function; more generally, a function F is compositional if F(A∘B) = F(A) ⋆ F(B) for some operator ⋆.)
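To make the definition concrete, here's a small sketch in Python, taking list concatenation as the composition ∘: "all elements are even" is compositional (it holds of A + B exactly when it holds of A and of B), while "the list is sorted" is not, since both halves being sorted does not imply the concatenation is. The property names are illustrative only.

```python
def all_even(xs):
    return all(x % 2 == 0 for x in xs)

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

A, B = [4, 2], [6, 8]
# "all even" is compositional w.r.t. concatenation:
# P(A) and P(B) hold, and so does P(A + B).
assert all_even(A) and all_even(B) and all_even(A + B)

# "is sorted" is not compositional: it holds of each half here,
# but not of their concatenation, so the equivalence fails.
assert is_sorted([2, 4]) and is_sorted([1, 3])
assert not is_sorted([2, 4] + [1, 3])
```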

What you're saying is that proofs can be composed. That is true, but unhelpful, as it doesn't make proofs easy; all proofs work this way. Compositionality, however, does make them easy (in fact, all properties that are easily proven by type inference are compositional; type safety itself is compositional). I've seen compositionality described somewhere as the opposite of emergence[1], which makes sense: a property is emergent in a composition if it is not a property of the components.

[1]: https://julesh.com/2017/04/22/on-compositionality/


Thanks, this seems like a perfectly coherent definition of compositional. I think my earlier confusion was a purely terminological matter.

Regarding "What you're saying is that proofs can be composed. That is true, but unhelpful, as it doesn't make proofs easy." I agree that the ability to compose proofs does not make proof easy. But I would push back somewhat against the idea that this is unhelpful.

The ability to prove, in isolation, the relevant properties about some entity (e.g., a function, a hardware module, or what have you), and then to make use of these properties, without ever looking inside the entity again, to prove properties of some larger system, seems to me to be absolutely essential for scaling proofs to large systems.

This ability is, of course, commonplace in tools like interactive proof assistants, but less common in more automated procedures.


> The ability to prove, in isolation, the relevant properties about some entity (e.g., a function, a hardware module, or what have you), and then to make use of these properties, without ever looking inside the entity again, to prove properties of some larger system, seems to me to be absolutely essential for scaling proofs to large systems.

I understand the motivation, and of course you're right that it would be "essential"; it's just that it has been mathematically proven (see my post) that this strategy does not work in general. Results in computational complexity theory, which studies how hard it is to do things, show that "making use of these properties, without ever looking inside the entity again" does not, in fact, make verification easier. More precisely, the difficulty of proving a property of A_1 ∘ ... ∘ A_n is not any function of n, because we cannot hide the inner details of the components in a way that would make the effort lower, even though that is what we want to do and even though it is essential for success.

In practice, we have not been able to scale proofs to large systems, which means that maybe there is some truth to those mathematical results.



