Hacker News | rivalis's comments

If you're talking about the linked post, I think the person who wrote it is Colin Wright, and he has a PhD from Cambridge in Combinatorics and Graph Theory.

And anyway, at least they're talking about it... Is being ignored because math is "boring" better? Maybe yes, maybe no. I suppose it depends on how "evangelical" you want to be with respect to mathematics.


Not always. Sometimes "outsiders" just want some kind of insight as to why you do what you do all day; they're curious. A mathematician interested in engaging with the public has to try and identify which question is being asked. It's actually slightly dangerous to assume that your interlocutor is only interested in utility, because this type of answer can come across as disingenuous or dismissive.

I don't think that whenever someone asks an artist or writer what the "point" of their work is that they're looking for a justification for allocating public funds to it. They might want that, but they also might want some kind of insight into or identification with the intrinsic motivation for the work.


Mathematical truths and objects are real things with existence independent of our minds that we "discover," not just designed things. The author seems to believe that the language used to describe mathematics (which is indeed a designed thing, just like software) is the only thing "there." She is probably a formalist.

I think it is important to remember this, because mathematics, like a computer, "fights back." You cannot simply dream up whatever structure you want and have it mean what you want and behave how you want. See Godel's incompleteness theorems. No matter what you are doing, your mathematical constructs (including your implicit Turing Machines in your computer programs) must obey certain underlying constraints that are completely mind-independent. These constraints are what mathematicians study, albeit through a glass, darkly.

Regardless of ontological issues with the post, I like that it emphasizes the designed nature of our mathematical tools. The space of possible tools is so large that there is near-limitless room for human creativity and design in mathematical research. It is a shame that most mathematics classes don't really get that across.

edit: fixed misgendering, sorry, that was sexist.


You can simply dream up whatever axioms, undefined terms, and rules of logic you want. However, one runs the risk of having an inconsistent system or a system that is not interesting to others. Godel's Incompleteness Theorem does not say that this can't be done. Furthermore, the "underlying constraints" imposed by Godel's Incompleteness Theorem are not at all what most mathematicians study. Unless I'm misinterpreting your meaning here.

There are knowledgeable people who do not believe that mathematics is independent of our minds. It's not too far-fetched an idea. While I do not personally agree with this, I won't downplay such beliefs.


>You cannot simply dream up whatever structure you want and have it mean what you want and behave how you want. See Godel's incompleteness theorems.

That is not at all what the Incompleteness Theorems actually say. They say literally nothing whatsoever about what sorts of structures you can implement inside a given foundational theory, except that there will always be more, because given any foundational theory, you can construct two more foundational theories as extensions (one in which the Goedel statement is unprovable, and one in which the theory believes it's inconsistent).


>Mathematical truths and objects are real things with existence independent of our minds

What makes you say this? Isn't this an open philosophical question? What makes you say that mathematical objects exist independent of our minds? I can dream up a set of axioms of my own and do maths from there, so I don't think mathematics necessarily exists in some Platonic ideal dimension independent of our minds.


>Mathematical truths and objects are real things with existence independent of our minds that we "discover," not just designed things.

While it's almost certainly true that the content of mathematics is mind-independent, it is far from obvious that these objects are "real things". The real meat of the issue is how exactly the mind-independence is cashed out. Different ideas paint a vastly different picture of mathematics and even the universe: for example, platonism vs. nominalism. Let's not be so quick to put forward as an obvious truth the critical issue in question.


Can't a mathematical theory compress, generalize, and map out many relevant empirical facts very well without needing ontological commitments to the generalizations themselves?

The real numbers seem to be a perfect example: if you work in physics at scales where quantization doesn't noticeably apply, the only way to calculate correct predictions is really to use real numbers and continuous (mostly Euclidean) spaces. But that doesn't mean physical objects are ontological shadows of our mathematical abstractions, as Plato's Allegory of the Cave portrayed it. Quite the reverse: when you get down to a sufficiently small, fundamental level, objects, space, and time stop being continuous and correct experimental predictions only come from using discrete formalisms.

You can then proceed to ask, which one is Platonically real, the continuous mathematical spaces or the discrete physical ones? But I think the answer there might be, "Who says anything is Platonically real? The map is not the territory, so shut up and calculate."


>Can't a mathematical theory compress, generalize, and map out many relevant empirical facts very well without needing ontological commitments to the generalizations themselves?

Maybe, but it's not obvious. The fact that the same generalizations are multiply realizable in different processes/structures certainly says something interesting. The consequences of this multiple realizability haven't been fully investigated.

Your response seems to be arguing against my post which was mainly about mind-independence, by arguing against platonism. I don't see that mind-independence necessarily implies platonism. In fact, I find all forms of platonism extremely distasteful.

>The map is not the territory, so shut up and calculate.

Right, but this in fact goes to the heart of the question of the philosophy of mathematics. When someone says that mathematical objects are mind-independent, they are not talking about the notation itself (the map), but rather the content of the notations (the territory), i.e. the structure revealed through the notation. It should be pretty obvious that there are many interesting questions about the mind-independent structure of the territory. "Shut up and calculate" isn't an answer to this question, but rather the attitude that the answer simply doesn't matter. For many fields the answer doesn't matter, but the question is worth asking nonetheless.


I don't think math exists without sentience; it is a construct. What math describes can be and is "real" in the traditional sense, but that doesn't necessarily make the objects and concepts in mathematics "real objects." I take this position with language as well: it's all a metaphor.


But observations of real-world systems can be identical to a specific mathematical system: e.g., the time it takes for an object to fall under a specific gravity from a specific height, the frequency of a particular pendulum, and so on (i.e., the rest of modern physics).
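For instance, both of those observations reduce to one-line formulas. A quick sketch (the constant and function names are mine, and air resistance is ignored):

```python
import math

G = 9.81  # m/s^2, standard surface gravity

def fall_time(height_m):
    """Time for an object to fall from rest: h = g * t^2 / 2."""
    return math.sqrt(2 * height_m / G)

def pendulum_period(length_m):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / G)

print(round(fall_time(20.0), 2))       # ~2.02 s from 20 m
print(round(pendulum_period(1.0), 2))  # ~2.01 s for a 1 m pendulum
```

The point being: measure these in the lab and the numbers come out matching the formulas, which is exactly the similarity being discussed.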

True, it is our observation and our model which are similar, so I suppose the philosophical question then is up to what point we can trust our observations. And if we trust our observations, I would conclude that the similarity of our observations and math means that the real world can at least exhibit 'maths', which means our minds are not the only place where math can exist.

The smartphone I'm typing this on leads me to conclude that lots of our observations are highly trustworthy :)


Yes, I think that humans have some ways to make sense of the external world, and math is one of them. May not be highly developed, could be buggy. Probably lots of things we could never figure out, like rats who can't solve mazes where they need to turn at prime numbers.

Our math sense could even conflict with our other useful faculties (as is the case with how easily fooled humans are when it comes to statistics).


>He is probably a formalist.

It's actually a she =)

Anyway, I do agree with you (and the author) that mathematics has the potential of being a superb pedagogical vehicle in teaching design thinking.


I am a graduate student in theoretical computer science.

I don't know that I agree with the poster above with respect to:

"a top school would require you to study a bunch of mathematics to gain a certain level of rigor of your thinking"

Most good schools require only two courses in "pure theory": first an introduction to discrete mathematics, where you learn about logic and how to prove stuff, a smattering of number theory/crypto, graph theory, probability, and automata theory; second an algorithms course that teaches you to analyse and construct algorithms according to certain design methods. Together this is really only a year of mathematics, and both courses are quite fun.

Note that discrete mathematics as taught by a good CS program is NOTHING like the math you learned in high school or elementary school; it teaches you to think creatively about mathematical structures and then justify that creativity with logic.

If you're already a developer, you'll probably have a bunch of latent structures hanging around in your brain that will let you have an easy/fun time with the material in these two courses. I suggest that you study discrete mathematics and algorithms in depth, to the level of some decent undergrad course on each. Learning this material will give your thinking the sort of quantitative/logical "edge" you might want -- these two courses contain the basic mathematical tools common to much of computer science. I do agree with the post above that they are not the most practical things in terms of everyday engineering, though.

Addendum: if you find that you really enjoy the mathematical angle and you think machine learning is cool, also learn linear algebra, because a bunch of machine learning theory and implementation relies on linear algebraic algorithms and concepts. Linear algebra actually would be practical to know if you ever do anything with data analysis.
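As a small taste of why linear algebra shows up in machine learning, here's power iteration, the idea underneath PCA and PageRank. This is just an illustrative pure-Python sketch (function names are mine, no libraries assumed):

```python
def power_iteration(matrix, steps=100):
    """Estimate the dominant eigenvector of a square matrix by
    repeatedly multiplying a vector by the matrix and normalizing."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(steps):
        # Matrix-vector product: w = matrix @ v
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Normalize to unit length so the vector doesn't blow up.
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# The dominant eigenvector of [[2, 1], [1, 2]] is proportional to [1, 1].
v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print([round(x, 3) for x in v])  # ~[0.707, 0.707]
```

Once you can read this, numpy/scipy versions of the same ideas (and the ML algorithms built on them) stop looking like magic.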


I think you're off base with this. A good school with a good theoretical CS program will always require up to multivariable calc, linear algebra, stats with calc, and discrete math. These are required because they will almost always be essential for doing theoretical work. Sometimes you just need the math to properly express and explain the complexity of a problem, but many times in theoretical work knowledge of advanced mathematics is integral to the actual research. Your advice is more geared towards what most good schools consider a general CS degree.


My advice is indeed geared towards general CS; it was unclear to me if the original poster wanted to get into heavy-duty theory or just gain a more rigorous understanding of CS in general.

Even at good schools (in the US, anyway) the calc up to multivar that is required isn't done with what I would call good proofs. They'll generally do epsilon-delta definitions of the limit and a few other fun things, but will fall far short of a truly rigorous development of the calculus. Same goes for many introductions to linear algebra and stats with calc courses. Some math departments use linear algebra to introduce proofs and creative mathematical thinking, though, so this isn't always the case.

So if someone is looking to develop how they think in mathematics and they already have a good programming background, they are much better served by taking a discrete math course than any of those sorts of math courses. If they then want to learn calculus, I would recommend self-study from Spivak's "Calculus" book which is a fun and elementary but rigorous treatment of the subject. You get to see not just how calculus works, but why it works, which is sorely lacking in most undergrad calc courses, even up to multivar.


What do you call good proofs? I think what you consider a "good school" is not what I consider a good school. In my calc classes we studied tons of awesome proofs that were very applicable to theoretical cs stuff. Plus most of my linear algebra classes and stats classes were just engineering, cs, and math majors so we once again focused on very useful real world applications and fields of active research.

If someone wants to develop their mathematical thinking and is good at programming, discrete math isn't going to help them much because they have probably already seen much of it. They need to be exposed to high-level calc and linear algebra to develop their mathematical thinking.


I am talking mostly from my own experience (from two or three European universities). A lot of my undergraduate CS programme was very mathematics-heavy, with a lot of focus on proofs (a lot of calculus, linear algebra, discrete mathematics, graph theory, algebra, even set theory for some, first-order and predicate logic), and then of course a lot of foundational CS subjects like automata theory, basic data structures, crypto, .... I would argue that proper rigorous proofs and thinking are maybe easier taught on the example of calculus or set theory rather than graph theory (I am not talking about discrete mathematics such as combinatorics without much graph content, etc.). In graph theory, you often resort to doing proofs "by drawing", assuming that a skilled reader would be able to fill in the details in a formal manner. That said, it's just my opinion and your course may have ways around it.


I am a graduate student at a big state university, studying complexity theory. I TA for the (required) introduction to discrete mathematics sometimes, and when we study FSM's I get to have a lot of fun motivating them for the students.

Automata theory may seem arcane, but if you want to truly understand concurrent programming, protocol design, robust systems, etc, you need good cognitive models. Heck, Erlang (one of my favorite languages for massively distributed computing) has some nice OTP stuff (http://www.erlang.org/documentation/doc-4.8.2/doc/design_pri...) built in for using FSMs to make your code sane and robust.

FSM's are one of the theoretical CS concepts whose practical use is easiest to see, but other TCS tends to be just as useful if you look at it right. E.g., space complexity, right? For the most part that doesn't matter, does it? Nope. A bunch of modern internet-sized problems end up being streaming problems (http://geomblog.blogspot.com/2005/05/streaming-algorithms.ht...), and you need to understand basic space complexity, linear algebra, and probability, all of which a good CS degree will get into your head. I think the role of a good CS degree is to get some theory into people's heads, so that they have the right cognitive models for tackling difficult problems that come up in the real world.
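To make the space-complexity point concrete, here's a sketch of one classic streaming algorithm, Misra-Gries frequent items (my own example, not from the linked post): it finds heavy hitters in a single pass using a fixed number of counters, no matter how long the stream is.

```python
def misra_gries(stream, k):
    """Return candidates that may appear more than len(stream)/k times,
    using only O(k) counters -- constant space in the stream length."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop the ones that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)

stream = ["a", "b", "a", "c", "a", "a", "b", "a"]
# "a" appears 5 times out of 8 (> 8/2), so it must survive with k=2.
print(misra_gries(stream, 2))  # {'a'}
```

Knowing why this works, and why you can't do better without more space, is exactly the kind of cognitive model a theory course builds.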

I'm not saying that a CS degree is necessary to be a good programmer, or that you can't pick up those mental tools without a CS degree if you need them. But, it is easiest for most people to learn that kind of stuff in a university environment. I for one didn't know that I needed theory to work on the sorts of massive-data problems I was interested in, before going to university. A good CS degree knows about your unknown unknowns.


NLP folks call those "stopwords," because they don't contribute much to statistical understanding of text. That is, in most NLP applications, those words are removed to leave more meaningful text behind. How did this make the front page?
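For illustration, stopword removal is only a few lines (the word list below is a toy one I made up, not any library's canonical list):

```python
# Toy stopword list for demonstration; real NLP toolkits ship
# much larger, language-specific lists.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def remove_stopwords(text):
    """Drop common function words, keeping the content-bearing ones."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

print(remove_stopwords("The cat sat in the middle of the room"))
# ['cat', 'sat', 'middle', 'room']
```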


Random stuff. Really high-entropy, high-novelty stuff, as different from research papers and math/cs textbooks as I can manage (these make up the bulk of my reading material). Currently:

- Portrait of a Lady
- The Iliad
- A collection of Walter Benjamin's very early work
- The Braddock Essays (a collection of award-winning essays about teaching composition to college students)


The first thing I do when someone tells me that they want to learn how to program is teach them how finite state automata work. If they seem to enjoy that, I walk them through the NFA-to-regular-expression proofs. Then we have a talk about the data structures that would be involved in storing and manipulating automata, regular expressions, and their input. Finally, I recommend that they start playing with SICP or How to Design Programs. The people I've introduced to programming this way have a fairly sophisticated, "no-magic" attitude earlier than is normal. Introducing automata first provides a really good way to talk about the machine model (the automaton itself) and the programming language (how to specify the automaton).
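For anyone curious what that first conversation looks like in code, here's a minimal DFA simulator (my own teaching sketch, not from SICP or HtDP). This particular machine accepts binary strings containing an even number of 1s:

```python
# Transition table: (current state, input symbol) -> next state.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(string, start="even", accepting=frozenset({"even"})):
    """Run the DFA over the input and report whether it ends
    in an accepting state."""
    state = start
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in accepting

print(accepts("1011"))  # False: three 1s
print(accepts("1001"))  # True: two 1s
```

The nice part pedagogically is that the "machine" is just a dictionary and a loop: no magic, and the machine model vs. its specification are visibly separate things.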


On the one hand, this does not inspire confidence. It is disturbing to have magic in one's software. On the other hand, the speed gains are really impressive: I'm sure there are times when it is reasonable to make a magic vs. utility tradeoff. Also, it's OSS: I'm sure someone will eventually want to make a well-understood and documented version, and the devs seem like people who would be willing to accept that.


I completely agree. There is, of course, always a balance to be struck, but it's pretty difficult to ignore a 100x speed-up in something non-trivial.

Also, I read part of the paper linked in the post. It doesn't seem completely inaccessible, just dense. I'm sure someone will eventually come up with a non-hacky version.


Even when I'm working in a language that doesn't have first class functions, I find it easier to lay out my code by writing functional pseudocode and then "unrolling" maps into loops, closures into structs/objects, compositions into a sequence of calls, etc. It probably leads to idiomatically awful Java, but I find it easier to read and write, and nobody else needs to deal with my code. So...
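A toy example of the "unrolling" I mean, sketched in Python with a made-up scenario (functional version first, then the explicit loop you'd write in a language without first-class functions):

```python
# Functional sketch: add a flat shipping fee to each price.
def with_fee_functional(prices, fee):
    return list(map(lambda p: p + fee, prices))

# The same logic "unrolled" into an explicit loop, the way you'd
# translate it into a language without first-class functions.
def with_fee_unrolled(prices, fee):
    result = []
    for p in prices:
        result.append(p + fee)
    return result

print(with_fee_functional([10, 20], 5))  # [15, 25]
print(with_fee_unrolled([10, 20], 5))    # [15, 25]
```

Closures become structs holding their captured variables, and compositions become a sequence of calls, by the same mechanical translation.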

