Hacker News | szany's comments

This is a bit defeatist. Parsing the definition in your head is only the first level of understanding you can have about a mathematical structure. You don't really understand something until you can reinvent it (and in particular give a plausible answer to "why these axioms and not others?")

For example, to motivate groups, you could introduce the concept of a symmetry as a mapping from an object to itself that doesn't change its properties, introduce the idea of an isomorphism as a mapping with an inverse (where f and g being inverse means they compose to identity maps), put them together and postulate that a reasonable formalization of a symmetry is an automorphism (isomorphism from an object to itself), note that isomorphisms are closed under composition, and arrive at the definition of a group by considering sets of symmetries closed under finite composition (thinking of identity maps as the composition of 0 morphisms).
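To make that concrete, here's a minimal Haskell sketch (the permutation representation and helper names are my own): the symmetries of a 3-element set, represented as permutations, are closed under composition, contain the identity, and have inverses.

```haskell
import Data.List (sort)

-- A symmetry of {0,1,2} as a permutation, given by its image list:
-- (p !! i) is where i is sent.
type Perm = [Int]

-- "first apply q, then p"; the composite of two permutations is a permutation
compose :: Perm -> Perm -> Perm
compose p q = map (p !!) q

-- the permutation sending (p !! i) back to i
inverse :: Perm -> Perm
inverse p = map snd (sort (zip p [0 ..]))

main :: IO ()
main = do
  let r = [1, 2, 0]                  -- a 3-cycle (rotation)
  print (compose r (inverse r))      -- composes to the identity [0,1,2]
  print (compose r (compose r r))    -- r^3 is also the identity
```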

I'm sure there's a similarly conceptual way to motivate monads in functional programming. Hyland and Power have papers on algebraic theories of "effectful" operations and how they give rise to (finitary) monads, as one starting point.
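I can at least give a toy version of that idea (a hedged sketch, not Hyland and Power's actual construction): take a single algebraic operation, "tell", whose sequencing behaviour is governed by a monoid; the monad it gives rise to is exactly the familiar Writer monad.

```haskell
-- A single effect, "tell", sequenced via a monoid (here: lists under (++)).
-- The resulting monad is the standard Writer monad.
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Monoid w => Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

instance Monoid w => Applicative (Writer w) where
  pure a = Writer (a, mempty)
  Writer (f, w1) <*> Writer (a, w2) = Writer (f a, w1 <> w2)

instance Monoid w => Monad (Writer w) where
  Writer (a, w1) >>= k = let Writer (b, w2) = k a in Writer (b, w1 <> w2)

-- the one algebraic operation of this tiny "theory of output"
tell :: w -> Writer w ()
tell w = Writer ((), w)

main :: IO ()
main = do
  let Writer (n, logMsgs) = do
        tell ["start"]
        tell ["done"]
        pure (42 :: Int)
  print (n, logMsgs)
```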


Indeed the typical wisdom in math is that the best way to understand a group (and the reason we usually care about one) is its actions.

There's some quote to the effect of: "Just as with men, a group is known by its actions." This was the only reference I could find off the top of my head https://gowers.wordpress.com/2011/11/06/group-actions-i/, but I've definitely heard the sentiment repeated in a couple different texts.

The sentiment expressed in this article might be useful to hear if you're solely interested in learning how to use monad transformers or whatever in Haskell. E.g., I know a fair amount of math, but very little category theory, and manage to use monads just fine without any deep intuition as to why they're an important formalism. But if you want to develop your problem-solving skills, for example, I think "quiet contemplation" of axioms is pretty much always worse than working through several examples and absorbing some good exposition.

Here's a recent article that I found very thought-provoking: http://cognitivemedium.com/invisible_visible/invisible_visib.... They quote some mathematicians discussing their visual intuition for one property of groups.


If I remember correctly, you could forget the "inverses" bit and define a group action with only the monoid corresponding to the group.

So not all the "groupiness" is captured by actions.

Also, Bill Thurston's beautiful "On Proof and Progress in Mathematics" [1] contains several ways to understand the derivative, in addition to it being a look into what a very distinguished mathematician thought of the nature of the mathematical pursuit. Pretty similar ideas, explored in depth.

Edit: Apparently this is linked to in TFA :)

[1]: https://arxiv.org/abs/math/9404236


This isn't full-time — one of the goals is explicitly "to complement rather than conflict with the tenure of a full-time position elsewhere".


Thanks for raising that question. Actually the house was rented for the purpose of hosting the program, not the other way around.

Of course we're trying not to operate at a loss, but we're willing to accept that if necessary to accomplish the stated goals.


>Actually the house was rented for the purpose of hosting the program, not the other way around.

From the CFP I got the impression that the residency is hosted in the same place you all are currently living:

>live for four months in a 10,000 ft² mansion in San Francisco (of which the organizers are long-term residents)

Is that not the case?


Part of starting this was finding a place and moving in.


See also Bob Harper's blog for a more leisurely exposition:

https://existentialtype.wordpress.com/2011/03/27/the-holy-tr...


"Imagine a world in which logic, programming, and mathematics are unified, in which every proof corresponds to a program, every program to a mapping, every mapping to a proof!"

The thing I never understood about statements of this sort is that, in my understanding of model theory, Gödel's theorem and so forth, a proof is a rare thing. Most of the true statements in a given model don't have proofs. Any consistent proof system admits an uncountable set of independent statements, and so forth.


While Goedel's first incompleteness theorem indeed shows that there will always be true statements that don't follow from a given set of (computable) axioms, this is almost never a problem in practice. It is hard to find natural examples of such statements. Almost every mathematical statement (or its negation) you or I can come up with is a consequence of the axioms of ZFC set theory, or whatever other foundation you prefer.


I think it rarely comes up in normal mathematical practice because mathematicians want natural, sane axioms that lead to human-understandable models.

Computing already starts out much messier in its activities. Determining what will happen when a large system gets input is tricky.

I'm not sure what could be proved about the operations of a "deep neural net", for example.


Bob is coming from the perspective of Brouwerian Intuitionism, and so he is not really that interested in a closed "formal system". So, "proofs" in the setting that Bob cares about are not derivations in a formal system, so Gödel's result doesn't really apply.


I'm pretty sure that if you are talking about proofs having to do with a determined computational system, Gödel applies.

I mean, the unsolvability of the halting problem certainly applies and this gives a somewhat similar picture of the world.


In Bob's setting, the computation system is not determined or fixed. It is open-ended...

In intuitionistic mathematics, we of course accept that there is no Turing computable halting oracle, but we do not rule out the possibility that there is some other effective oracle that can decide halting. (This is in contrast to recursive mathematics / Russian constructivism)


Unfortunately, that isn't a world in which you could connect a USB camera to your laptop and just have it work after the system finds the right driver.


Most true statements are uninteresting.


Indeed but so are most programs.


What makes ambition/creativity/drive/perspective/attitude/inspiration any more intrinsic than intelligence/knowledge?


I think the idea would be that those are more complex traits that will be harder to change through new neural technology (compared to simply learning new information) and thus less likely to change in the medium term.


Because a working fool will do more than a sitting genius? I don't remember the saying (English is not my first language).


It's a foundation of mathematics whose native objects are structures fundamental to higher mathematics (∞-groupoids, which are more or less equivalent to topological spaces up to homotopy), rather than awkward encodings (ZFC "sets", which are tree-like things), and where everything is automatically invariant with respect to abstract notions of equivalence (think isomorphism vs. equality).

The point is that it makes it much more realistic to check complicated math using computers, which will become more important as math inevitably gets more complicated.

Oh, and it happens to be a functional programming language. Which says something about functional programming.


Well, yes, it's a functional programming language. But not one like most people here are familiar with, so that's a little misleading. It's also an object-oriented language with polymorphism and inheritance.

Specifically, it's (almost exactly) the Calculus of Constructions (http://en.wikipedia.org/wiki/Calculus_of_constructions), the language at the top of the lambda cube (http://en.wikipedia.org/wiki/Lambda_cube).


No, HoTT is not based on the Calculus of Constructions, but instead on intensional Martin-Loef type theory with identity types. The novel part is the univalence axiom, which MLTT doesn't have.

If you erase types, the underlying programming language is the lambda-calculus, familiar to every programmer.
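To illustrate the type-erasure claim (an illustrative sketch in HOAS style, not the erasure procedure of any particular proof assistant): after erasure, natural-number programs are just Church numerals in the untyped lambda-calculus.

```haskell
-- Untyped lambda-calculus in higher-order abstract syntax, plus an
-- integer constant so results can be observed.
data Term = Lam (Term -> Term) | App Term Term | Con Int

-- call-by-name evaluation to weak head normal form
eval :: Term -> Term
eval (App f a) = case eval f of
  Lam body -> eval (body a)
  g        -> App g a
eval t = t

-- Church numeral: n = \f x. f^n x (what a natural number erases to)
church :: Int -> Term
church n = Lam (\f -> Lam (\x -> foldr (\_ acc -> App f acc) x [1 .. n]))

-- addition on Church numerals: \m n f x. m f (n f x)
plus :: Term
plus = Lam (\m -> Lam (\n -> Lam (\f -> Lam (\x ->
         App (App m f) (App (App n f) x)))))

-- decode a numeral by feeding it a successor on Con and Con 0
toInt :: Term -> Int
toInt t = case eval (App (App t succT) (Con 0)) of
    Con k -> k
    _     -> error "not a Church numeral"
  where
    succT = Lam (\v -> case eval v of
                         Con k -> Con (k + 1)
                         _     -> error "stuck")

main :: IO ()
main = print (toInt (App (App plus (church 2)) (church 3)))
```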


Yes, Vladimir Voevodsky says HoTT is based on Martin-Loef type theory, but he discovered HoTT by learning the CoC used in Coq, and is implementing his Univalent Foundations library of proofs in Coq. That implies that HoTT is implementable in Coq (with a couple of minor tweaks), which means that CoC is at least as powerful as HoTT. And anyway, according to Voevodsky, they are the same thing.

Really, Martin-Loef type theory is the mathematics foundation, and the Calculus of Constructions is the computer science foundation. HoTT ties them together.


It depends on what you mean by "embedding". If one does a deep embedding then powerful calculi can be embedded in much weaker calculi. E.g. Isabelle uses a variant of LF as meta-language to embed all manner of more powerful logics. That doesn't mean LF has a lot of expressive power.

Coq's logic isn't quite the CoC anymore: impredicativity is switched off by default, Coq has universes, etc.

With this in mind, I'm not sure what Voevodsky means when he says HoTT and CoC are the same thing.


I would like to see more clarification of why you'd call HoTT OO. I have no doubt you can encode those things, but I find it strange to try to argue that it doesn't have more in common with the FP tribe than the OO tribe. Given that neither term is terrifically well-defined, I think that's the best you could argue.


Inevitably? Strong words.


Sort of. As t approaches infinity, more and more of the simple parts of math will have been discovered. Therefore the only parts left to discover will be those which are more complicated; hence the overall idea is necessarily correct.


It could instead be the case that all of maths is simple, but that complexity tends to appear whenever we have an incomplete understanding of a domain.


You are misled into believing that mathematics has simple parts and complex parts, and that the simple parts are easier to "discover" than the complex parts.

In fact, the simple parts are usually the most difficult to discover. Why? Precisely because it takes extreme genius to give simple answers to mathematical questions. Over time, complicated proofs and definitions get simplified and clarified (as it was with calculus, set theory, logic, group theory, and countless other topics).

So there's no precedent to believe that current mathematics won't be simplified in the future (unless, of course, you believe that all mathematics is complicated; but then you seem to be trying to make an objective point about complexity, so we'll ignore that possibility).


This would seem to be true as long as mathematics is discovered rather than invented. Otherwise I believe one may be able to invent generalisations that result in a simplification of mathematics.


This assumes that the purpose of math is to prove things, whereas another worthy goal for me at least is enabling humans to truly understand why things are true, and not just what things are true.


No disrespect, but your purpose for math is entirely irrelevant unless you speak for the majority of research mathematicians.


No disrespect, but dkural's idea of what mathematics is about is no more irrelevant than sillysaurus2's or yours, and dkural's view is quite well represented among research mathematicians, first-rate ones included. For a famous example, see William Thurston's "On proof and progress in mathematics" at http://arxiv.org/pdf/math/9404236v1.pdf .


I have a mathematics degree from Harvard, though I haven't taken a poll of research mathematicians. I do think algorithmic theorem proving is a great field of inquiry!


  SET : Group
  MAN : Lie Group
  TOP : Topological Group
  GRP : Objects
The last line should really be "GRP : Abelian Group"

See http://en.wikipedia.org/wiki/Eckmann-Hilton_argument


This is good advice, because the right sort of person to be a Thiel fellow would read this and scoff anyway.


“Programming languages are not arbitrary. They are manifestations of deep invariants of human thought, i.e. they are grounded in logic.” —Bob Harper

Logic ~ Type theory ~ Functional programming
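One more hedged Haskell sketch of the slogan (the names are mine): disjunction corresponds to Either, falsity to the empty type Void, and negation to a function into Void.

```haskell
import Data.Void (Void)

-- A proof of  A → (A ∨ B):
introL :: a -> Either a b
introL = Left

-- A proof of  ¬A → ¬B → ¬(A ∨ B), constructively valid (a de Morgan law):
deMorgan :: (a -> Void) -> (b -> Void) -> Either a b -> Void
deMorgan na _  (Left  x) = na x
deMorgan _  nb (Right y) = nb y

main :: IO ()
main = do
  let x = introL 7 :: Either Int String
  case x of
    Left n  -> print n
    Right s -> putStrLn s
```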


