This is a bit defeatist. Parsing the definition in your head is only the first level of understanding you can have about a mathematical structure. You don't really understand something until you can reinvent it (and in particular give a plausible answer to "why these axioms and not others?")
For example, to motivate groups, you could introduce the concept of a symmetry as a mapping from an object to itself that doesn't change its properties, and the idea of an isomorphism as a mapping with an inverse (where f and g being inverses means they compose to identity maps). Putting these together, postulate that a reasonable formalization of a symmetry is an automorphism (an isomorphism from an object to itself), note that isomorphisms are closed under composition, and arrive at the definition of a group by considering sets of symmetries closed under finite composition (thinking of identity maps as the composition of 0 morphisms).
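To make that concrete, here's a minimal Haskell sketch (my own illustration, not anything standard): the rotations of a square, viewed as functions on corner labels, are automorphisms closed under composition, and that closure is exactly the group structure.

```haskell
-- Rotations of a square as functions on corner labels 0..3.
-- Composing symmetries is function composition; the identity is
-- the composition of zero symmetries.

type Corner = Int

rotate :: Corner -> Corner   -- one quarter turn
rotate c = (c + 1) `mod` 4

-- Closed under finite composition: all four rotations.
rotations :: [Corner -> Corner]
rotations = [id, rotate, rotate . rotate, rotate . rotate . rotate]

-- Inverses come for free inside the set: three quarter turns undo one.
-- (rotate . rotate . rotate) . rotate  behaves as  id  on 0..3.
```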
I'm sure there's a similarly conceptual way to motivate monads in functional programming. Hyland and Power have papers on algebraic theories of "effectful" operations and how they give rise to (finitary) monads, as one starting point.
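As a hedged taste of where such a motivation might land (my example, not Hyland and Power's): Maybe models the effect of possible failure, and the monad operations are exactly what you need to chain failure-prone steps, with the monad laws saying the chaining is well-behaved.

```haskell
-- Maybe as the monad of possible failure (standard library types).

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chaining with (>>=): any Nothing aborts the rest of the pipeline.
compute :: Int -> Int -> Int -> Maybe Int
compute a b c = safeDiv a b >>= \q -> safeDiv q c

-- The monad laws, as equations:
--   return x >>= f   ==  f x                      -- left identity
--   m >>= return     ==  m                        -- right identity
--   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)  -- associativity
```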
Indeed the typical wisdom in math is that the best way to understand a group (and the reason we usually care about one) is its actions.
There's some quote to the effect of: "Just as with men, a group is known by its actions." The only reference I could find off the top of my head is https://gowers.wordpress.com/2011/11/06/group-actions-i/, but I've definitely heard the sentiment repeated in a couple of different texts.
The sentiment expressed in this article might be useful to hear if you're solely interested in learning how to use monad transformers or whatever in Haskell. E.g., I know a fair amount of math but very little category theory, and I manage to use monads just fine without any deep intuition as to why they're an important formalism. But if you want to develop your problem-solving skills, for example, I think "quiet contemplation" of axioms is pretty much always worse than working through several examples and absorbing some good exposition.
If I remember correctly, you could forget the "inverses" bit and define an action using only the underlying monoid of the group.
So not all the "groupiness" is captured by actions.
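A quick Haskell sketch of why (the Action class here is my own illustration, not a standard library one): the action laws mention only the identity and composition, so any monoid can act; inverses never appear in the definition.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- An action of a monoid m on a type a. The laws below use only
-- mempty and (<>): nothing about inverses.
class Monoid m => Action m a where
  act :: m -> a -> a
  -- laws (not checked by the compiler):
  --   act mempty x   == x
  --   act (m <> n) x == act m (act n x)

-- Example: shifts acting on the integers.
newtype Shift = Shift Int

instance Semigroup Shift where
  Shift m <> Shift n = Shift (m + n)

instance Monoid Shift where
  mempty = Shift 0

instance Action Shift Int where
  act (Shift n) x = x + n
```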
Also, Bill Thurston's beautiful "On Proof and Progress in Mathematics" [1] contains several ways to understand the derivative, in addition to being a look into what a very distinguished mathematician thought of the nature of the mathematical pursuit. Pretty similar ideas, explored in depth.

[1] http://arxiv.org/pdf/math/9404236v1.pdf
"Imagine a world in which logic, programming, and mathematics are unified, in which every proof corresponds to a program, every program to a mapping, every mapping to a proof!"
The thing I never understood about statements of this sort is that, in my understanding of model theory, Gödel's theorem and so forth, a proof is a rare thing. Most of the true statements in a given model don't have proofs. Any consistent proof system admits uncountably many independent statements, and so forth.
While Goedel's first incompleteness theorem indeed shows that there will always be true statements that don't follow from a given set of (computable) axioms, this is almost never a problem in practice. It is hard to find natural examples of such statements. Almost every mathematical statement (or its negation) you or I can come up with is a consequence of the axioms of ZFC set theory, or whatever other foundation you prefer.
Bob is coming from the perspective of Brouwerian Intuitionism, and so he is not really that interested in a closed "formal system". So, "proofs" in the setting that Bob cares about are not derivations in a formal system, so Gödel's result doesn't really apply.
In Bob's setting, the computation system is not determined or fixed. It is open-ended...
In intuitionistic mathematics, we of course accept that there is no Turing computable halting oracle, but we do not rule out the possibility that there is some other effective oracle that can decide halting. (This is in contrast to recursive mathematics / Russian constructivism)
Unfortunately, that isn't a world in which you could connect a USB camera to your laptop and just have it work after the system finds the right driver.
I think the idea would be that those are more complex traits that will be harder to change through new neural technology (compared to simply learning new information) and thus less likely to change in the medium term.
It's a foundation of mathematics whose native objects are structures fundamental to higher mathematics (∞-groupoids, which are more or less equivalent to topological spaces up to homotopy), rather than awkward encodings (ZFC "sets", which are tree-like things), and where everything is automatically invariant with respect to abstract notions of equivalence (think isomorphism vs. equality).
The point is that it makes it much more realistic to check complicated math using computers, which will become more important as math inevitably gets more complicated.
Oh, and it happens to be a functional programming language. Which says something about functional programming.
Well, yes, it's a functional programming language. But not one like most people here are familiar with, so that's a little misleading. It's also an object-oriented language with polymorphism and inheritance.
No, HoTT is not based on the Calculus of Constructions, but instead on intensional Martin-Loef type theory with identity types. The novel part is the univalence axiom, which MLTT doesn't have.
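For reference, univalence says, schematically, that identity of types is equivalent to equivalence of types, i.e. the canonical map from identities to equivalences is itself an equivalence:

```latex
% Univalence, schematically:
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```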
If you erase types, the underlying programming language is the lambda-calculus, familiar to every programmer.
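Schematically (a standard example, not from this thread), erasure drops type abstractions and annotations, leaving a plain lambda term:

```latex
% Erasure |-| of the polymorphic identity:
\left| \Lambda\alpha.\ \lambda(x{:}\alpha).\ x \right| \;=\; \lambda x.\ x
```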
Yes, Vladimir Voevodsky says HoTT is based on Martin-Loef type theory, but he discovered HoTT by learning the CoC used in Coq, and is implementing his Univalent Foundations library of proofs in Coq. That implies that HoTT is implementable in Coq (with a couple of minor tweaks), which means that CoC is at least as powerful as HoTT. And anyway, according to Voevodsky, they are the same thing.
Really, Martin-Loef type theory is the mathematics foundation, and the Calculus of Constructions is the computer science foundation. HoTT ties them together.
It depends on what you mean by "embedding". If one does a deep embedding, then powerful calculi can be embedded in much weaker calculi. E.g., Isabelle uses a variant of LF as its meta-language to embed all manner of more powerful logics. That doesn't mean LF has a lot of expressive power.
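A toy Haskell illustration of the deep-embedding point (mine, not Isabelle's actual mechanism): object-language terms are represented as plain data, so the host needs none of the object language's expressive power.

```haskell
-- A deep embedding of the untyped lambda calculus: object-language
-- terms are ordinary data in the host, not host-language programs.

data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

-- (\x -> x) y, represented as data rather than as a Haskell function:
example :: Term
example = App (Lam "x" (Var "x")) (Var "y")
```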
Coq's logic isn't quite the CoC anymore: impredicativity is switched off by default, Coq has universes, etc.
With this in mind, I'm not sure what Voevodsky means when he says HoTT and CoC are the same thing.
I would like to see more clarification of why you'd call HoTT OO. I have no doubt you can encode those things, but I find it strange to try to argue that it doesn't have more in common with the FP tribe than the OO tribe. Given that neither term is terrifically well-defined, I think that's the best you could argue.
Sort of. As t approaches infinity, more and more of the simple parts of math will have been discovered. Therefore the only parts left to discover will be those which are more complicated; hence the overall idea is necessarily correct.
It could instead be the case that all of maths is simple, but that complexity tends to appear whenever we have an incomplete understanding of a domain.
You have been misled into believing that mathematics has simple parts and complex parts, and that the simple parts are easier to "discover" than the complex ones.
In fact, the simple parts are usually the most difficult to discover. Why? Precisely because it takes extreme genius to give simple answers to mathematical questions. Over time, complicated proofs and definitions get simplified and clarified (as happened with calculus, set theory, logic, group theory, and countless other topics).
So there's no precedent to believe that current mathematics won't be simplified in the future (unless, of course, you believe that all mathematics is complicated; but then you seem to be trying to make an objective point about complexity, so we'll ignore that possibility).
This would seem to be true as long as mathematics is discovered rather than invented. Otherwise I believe that one may be able to invent generalisations that result in a simplification of mathematics.
This assumes that the purpose of math is to prove things, whereas another worthy goal, for me at least, is enabling humans to truly understand why things are true, and not just what things are true.
No disrespect, but dkural's idea of what mathematics is about is no more irrelevant than sillysaurus2's or yours, and dkural's view is quite well represented among research mathematicians, first-rate ones included. For a famous example, see William Thurston's "On proof and progress in mathematics" at http://arxiv.org/pdf/math/9404236v1.pdf .
I have a mathematics degree from Harvard, though I haven't taken a poll of research mathematicians. I do think algorithmic theorem proving is a great field of inquiry!