This opinion is also biased. We have no theoretical method for determining which design philosophy is better than the other.
We can't know whether the OOP religion is better, we can't know whether the Haskell religion is better, and we can't know whether NEITHER is better. (This is key: even the neutral point of view, where both are "good," can't be proven.)
We do have theories to determine algorithmic efficiency. Computational complexity allows us to quantify which algorithm is faster and better. But whether that algorithm was better implemented using FP concepts or OOP concepts, we don't know... we can't know.
A lot of people like you just pick a random religion. It may seem more reasonable and measured to pick the neutral ground. But this in itself is A Religion.
It's the "it's all apples and oranges approach" or the "FP and OOP are just different tools in a toolbox" approach.... but without any mathematical theory to quantify "better" there's no way we can really ever know. Rotten apples and rotten oranges ALSO exist in a world full of apples and oranges.
You can't see it, but even on an intuitive level this "opinion" is really, really biased. It seems reasonable when you have two options to choose from, "OOP" and "FP", but what if you have more options? We have declarative programming, Lisp-style programming, assembly language programming, logic programming, regular expressions... Are we really to apply this philosophy to ALL possible styles of programming? Is every single thing in the universe truly apples and oranges, or just a tool in a toolbox?
With this many options, it's unlikely they're all equally good. Something must be bad, something must be good, and many things are better than other things.
I am of the opinion that normal procedural and imperative programming with functions is superior to OOP for the majority of applications. I am not saying FP is better than imperative programming, I am saying OOP is overall a bad tool even compared with normal programming. But I can't prove my opinion to be right, and you can't prove it to be wrong.
Without proof, all we can do is move in circles and argue endlessly. But, psychologically, people tend to fall for your argument because it's less extreme; it seemingly takes the "reasonable" mediator approach. But like I said, even this approach is itself an extreme, and it is not reasonable at all.
I mean your evidence is just a bunch of qualitative factoids. An opponent to your opinion will come at you with another list of qualitative factoids. You mix all the factoids together and you have a bigger list of factoids with no definitive conclusion.
> without any mathematical theory to quantify "better" there's no way we can really ever know. Rotten apples and rotten oranges ALSO exist in a world full of apples and oranges.
So you believe that the only way things can be compared is on quantitative measurements? Not with how they impress their users within whatever context they're in?
> I mean your evidence is just a bunch of qualitative factoids. An opponent to your opinion will come at you with another list of qualitative factoids. You mix all the factoids together and you have a bigger list of factoids with no definitive conclusion.
This is the process in which we gain knowledge in an uncertain world. I guess you could take the nihilistic stance and ignore it, but what's the use of arguing with nihilists?
>So you believe that the only way things can be compared is on quantitative measurements? Not with how they impress their users within whatever context they're in?
No but I believe that quantitative measurements are the ONLY way to definitively verify certain things.
>This is the process in which we gain knowledge in an uncertain world. I guess you could take the nihilistic stance and ignore it, but what's the use of arguing with nihilists?
I'm not ignoring anything. I'm saying that, especially for programming, nobody knows anything. Which is actually better, OOP or FP? Nobody knows. This isn't philosophy, there is no definitive proof for which is better.
> Computational complexity allows us to quantify which algorithm is faster and better. But whether that algorithm was better implemented using FP concepts or OOP concepts, we don't know... we can't know.
The CPUs that code runs on are imperative, with a lot of complexities and details hidden from programmers by magic the CPU does involving things like reordering and automatic parallelization.
However, none of the current languages are great at writing code that maps to how the CPU works. One can comment that functional programming does a better job of breaking up data dependencies, but imperative code can also do that just fine.
The problem with mapping paradigms to performance is that none of the paradigm purists care about performance; at the end of the day they care about theoretical purity.
CPUs don't care about paradigms, they care about keeping execution units busy and cache lines filled up.
>The problem with mapping paradigms to performance is that none of the paradigm purists care about performance; at the end of the day they care about theoretical purity.
It's not theoretical purity. It's more about tech debt: how do I code things in a way where there's zero tech debt, such that all code can be reused anywhere at any time?
The answer is good code review and design practices. Real CRs early on in the process, not just before signing off on feature complete.
I've seen horribly unusable code that was "good examples" of both OOP and FP. The OOP peeps have so much DI going on that tracing what actually happens is impossible, not to even get started on debugging.
The FP purists have so many layers of indirection before stuff actually happens (partial function application all over the place and then shove everything through a custom built pipe operator and abuse the hell out of /map/ to do the simplest of things).
Meanwhile some crusty old C programmer writes a for loop and gets the job done in 10 obvious, easy-to-read lines.
I am from the camp that believes FP code produces much less tech debt than other forms of programming.
But the problem here is that no one here can prove or disprove what I just said. And that is the point of my thread.
In fact I believe tech debt is a fuzzy word that is ripe for formalism, such that we can develop a theory around what tech debt is and how to definitively eliminate it through calculation... the same way you calculate the shortest distance between two points. I believe that the FP style is a small part of that theory.
But that's beside the point. Because since this theory doesn't exist yet, you and I have no way of verifying anything. You can leave me a code review and I can disagree with every qualitative opinion you leave in it.
> We have no theoretical method for determining which design philosophy is better than the other.
We do have a theoretical method. It's the scientific method. Other than that, I'm largely of the same thinking. Also, confusing language implementation with overall design is a major source of confusion (e.g. Java vs OOP vs Erlang vs FP vs Haskell, etc.)
How to measure "better" and how the data is interpreted, are the major stopping points to improving software language usability. There have been some attempts (re: Quorum). Classic OOP (inheritance, et al) is simpler to use than mixins for many projects. So now we have to worry about project size as another axis. Then we have to address the issue of median developer effort. What about memory? How do you weigh FP allocate-stack-as-a-for-loop vs reduced mutability? It's more complex than FP good OOP bad.
This is the problem. You didn't even read my argument. Go read it again, carefully, instead of skimming through it.
My point is:
Maybe one of these religions is right. Maybe something is the best. Maybe a side must be picked.
You didn't argue for anything being better. You argued that everything is the same, that all things are good and nothing is bad and that every single thing in the programming universe is a tool in a toolbox.
I disagree. Violently.
The point is that neither the culting acolytes NOR people like you can prove it either way.
But labeling people who don't share your opinion "culting acolytes" is manipulative. The words have negative connotations and it's wrong. Extreme opinions in science and logic are often proven to be true, they are often validated. To assume that anyone without a neutral opinion is a cultist is very biased in itself.
Here's a good analogy: I believe the world is round. I'm an extremist. You on the other hand embrace all theories as tools in a toolbox. The world could be round or it could be flat; you're the reasonable neutral arbiter taking neither the side of the flat-earther nor the round-earther.
The illusion now is more clear. All 3 sides are a form of bias, but clearly our science says only one of these sides is true, and this side is NOT the "neutral arbiter" side.
What makes sense for Haskell does not necessarily make sense for other languages.
Also there is no "side" that needs to be picked. What's a good idea in one context could be a terrible idea in some other context.
But people have been blindly copying Haskell lately.
The issue is that this happens blindly — again without questioning anything about the underlying premises.
Doing so is called "cargo culting". And that's something done by acolytes. (The words are loaded for a reason, btw.)
I'm coming from a language (Scala) where it took almost 15 years to recognize that Haskell isn't anything that should be imitated. Now that most people there are starting to get it, people elsewhere are starting to fall for the exact same fallacy. But this time this could become so big that this will end up like the "OOP dark ages" which we're just about to finally leave. People are seemingly starting to replace one religion with the other. This won't make anything better… It's just equally stupid. It makes no difference whether you replace one "hammer" with another but still pretend that everything is a nail.
You did argue that everything is the same. Basically, by "same" I mean everything is "equally good" depending on context. The whole hammers-are-for-hammering and screwdrivers-are-for-screwing thing... I explicitly said your argument was that everything was a tool in a toolbox, and you exactly replicated what I said.
My point is: something can be truly bad and something can be truly good EVEN when considering all possible contexts.
You can't prove definitively whether this is the case for FP or OOP or any programming style for that matter. You can't know whether someone's "side" is a cargo cult or not when there's no theoretical way for measuring this.
The cultish following may even be correct in the same way I cargo cult my belief that the world is ROUND and not flat.
> My point is: something can be truly bad and something can be truly good EVEN when considering all possible contexts.
No, that's impossible. "Truly good" or "truly bad" are moral categories. Something closely related to religion, BTW…
> You can't know whether someone's "side" is a cargo cult […]
Of course I can.
If it objectively makes no sense (in some context), and is only blindly copied from somewhere else without understanding why things were done there the way they were done, this is called "cargo cult". That's the definition of this term.
How can I tell whether there is no understanding behind something? If the cultists understood what they are actually copying they wouldn't copy it at all. ;-)
Replacing methods with free-standing functions is for example one of those things: In Haskell there are no methods. So free-standing functions are all you have. But imitating this style in a language with methods makes no sense at all! It complicates things for no reason. This is obviously something where someone does not understand why Haskell is the way it is. They just copy, on the syntax level, something that they think is "functional programming". But surface syntax should not be mistaken for the actual concepts! Even though it's easy to copy the syntax instead of actually adapting the ideas behind it (only where it makes sense, of course!).
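A minimal sketch of what I mean (TypeScript-flavoured; the names are made up purely for illustration):

    // Idiomatic: use the method that already lives on the prototype.
    const shout = (s: string): string => s.toUpperCase() + "!";
    console.log(["a", "b", "c"].map(shout)); // [ 'A!', 'B!', 'C!' ]

    // Haskell-imitating style: wrap the same operation in a free-standing
    // "List module", adding a layer the language does not need.
    const List = {
      map: <A, B>(xs: A[], f: (a: A) => B): B[] => xs.map(f),
    };
    console.log(List.map(["a", "b", "c"], shout)); // same result, extra indirection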
>No, that's impossible. "Truly good" or "truly bad" are moral categories. Something closely related to religion, BTW…
Wrong. Good and bad are used in a fuzzy way here; I'm OBVIOUSLY not talking about morality OR religion. What I am talking about are things that can potentially be quantified by a formal theory. For example we know the shortest distance between two points is a line. We have formalized algorithmic speed with computational complexity theory. O(N) is definitively more "good" than O(N^2).
Right now we don't have an optimization theory or formal definitions for logic organization. We can't quantify it, so we resort to opinionated stuff. And the whole thing goes in circles. But that is not to say this is impossible to formalize. We just haven't yet, so all arguments go nowhere. But the shortest distance between two points? Nobody argues about that (I hope some pedantic person doesn't bring up non-euclidean geometry because come on).
All we can say right now is because there is no theory, nothing definitive can be said.
>Of course I can.
>If it objectively makes no sense (in some context), and is only blindly copied from somewhere else without understanding why things were done there the way they were done, this is called "cargo cult". That's the definition of this term.
You can't. The definition of bias is that the person who is biased is unaware of it. You can talk with every single religious person in the world. They all think they arrived at their beliefs logically. Almost everyone thinks the way they interpret the world is logical and consistent and it makes sense. They assume everyone else is wrong.
To be truly unbiased is to recognize the possibility of your own fallibility. To assume that your point of view is objective is bias in itself. If you ask those people who "blindly" copy things whether they did it blindly, they will tell you "No." They think their conclusions are logical; they don't think they're blind. The same way you don't think you're blind, the same way I don't think I'm blind. All blind people point at other blind people and say everyone else is blind except for them.
The truly unbiased person recognizes the possibility of their own blindness. But almost nobody thinks this way.
Nobody truly knows who is blind and who is not. So they argue endlessly and present factoids to each other like this one here you just threw at me:
"Replacing methods with free standing functions is for example on of such things: In Haskell there are no methods. So free standing functions are all you have. But imitating this style in a language with methods makes no sense at all! It complicates things for no reason. This is obviously something where someone does not understand why Haskell is like it is. They just copy on the syntax level something that they think is "functional programming". But surface syntax should not be missed for the actual concepts! Even it's easy to copy the syntax instead of actually adapting the ideas behind it (only where it makes sense of course!)."
I mean how do you want me to respond to this factoid? I'll throw out another factoid:
Forcing people to use methods complicates things for no reason. Why not just have state and logic separated? Why force everything into some horrible combination? If I want to use my method in another place I have to bring all the state along with it. I can't move my logic anywhere because it's tied to the contextual state. The style of the program itself is a weakness and that's why people imitate another style.
And boom. What are you gonna do? Obviously throw another factoid at me. We can pelt each other with factoids and the needle doesn't move forward at all.
> Forcing people to use methods complicates things for no reason.
No, it doesn't.
All functions are in fact objects and most are methods in JavaScript, and there is nothing else.
Methods (== properties whose values are function objects) are the natural way to express things in JavaScript.
Trying to pretend that this is not the case, and trying really hard to emulate (free) functions (which, to stress this point once more, do not exist in JavaScript) on the other hand makes everything more complicated than strictly needed.
> Why not just have state and logic separated?
That's a good idea.
This is also completely orthogonal to the question on how JavaScript is supposed to be used.
JavaScript is a hybrid language. Part Smalltalk, part Lisp.
It's common in JavaScript since inception to separate data (in the form of objects that are serializable to and from JSON) from functionality (in the form of function objects).
JavaScript was never used like Java / C++ / C#, where you glue together data and functionality into classes, and still isn't used like that (even though it got some syntax sugar called "class" at some point).
> Why force everything into some horrible combination?
Nobody does that. At least not in JavaScript.
Still, that doesn't prevent you from using methods.
Functions themselves are objects. Using objects is the natural way for everything in JavaScript, as there is nothing else than objects. Everything in JavaScript is an object. And any functionality the language provides is through methods.
Working against the basic principles of a language is a terrible idea! (In every language, btw). It complicates everything for no reason and has horrible code abominations as a consequence.
> If I want to use my method in another place I have to bring all the state along with it.
No, you don't. You need only to bring the data that you want to operate on.
The nice thing is: You get the functionality right at the same place as the data. You don't need to carry around anything besides the data that you work on.
The alternative is needing, besides the data, to also have around the modules that carry the functionality you want to apply to it… As an example: `items.map(encode)` is nicer to write and read than `List.map items encode`.
You don't need to carry around the `List` module when the method can already be found on the prototype of the data object. Also, it's clearer what the subject and what the object of the operation is.
> I can't move my logic anywhere because it's tied to the contextual state.
That's just not true in JavaScript.
Nothing is easier than passing function objects around, or changing the values of properties that reference such function objects.
JavaScript is one of the most flexible languages out there in this regard!
You can even rewrite built-in types while you process them. (Not that I'm advocating for doing so, but it's possible).
> The style of the program itself is a weakness […]
You did not present any facts that would prove that claim.
> […] that's why people imitate another style.
No, that's not the reason.
You don't need to imitate Haskell when you want to write functional programs in a Lisp derived language… ;-)
People are obviously holding some cargo cult ceremonies when trying to write Haskell in JavaScript.
Your other opinions are based on wrong assumptions. I'm not going into that in detail, but some small remarks:
> For example we know the shortest distance between two points is a line.
In Manhattan¹? ;-)
> O(N) is definitively more "good" than O(N^2).
Maybe it's more "good"…
But it's for sure not always faster, or even more efficient, in reality.
Depending on the question and your resources (e.g. hardware) a brute force solution may be favorable against a solution with a much lower complexity on paper.
Welcome to the physical world. Where practice differs from theory.
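An illustrative sketch (TypeScript; the exact crossover point depends on hardware and runtime, so take the numbers with a grain of salt): an O(N^2) insertion sort that, for very small inputs, tends to beat a generic O(N log N) sort simply because of lower constant factors and cache-friendly access; that's exactly why many production sort routines fall back to it for short runs.

    // Textbook O(N^2) insertion sort over a numeric array.
    function insertionSort(xs: number[]): number[] {
      const a = xs.slice();
      for (let i = 1; i < a.length; i++) {
        const key = a[i];
        let j = i - 1;
        while (j >= 0 && a[j] > key) {
          a[j + 1] = a[j]; // shift larger elements one slot to the right
          j--;
        }
        a[j + 1] = key;
      }
      return a;
    }

    // For tiny inputs the "worse" complexity class can win on the clock,
    // which is why many real-world sorts switch to insertion sort below
    // some small length threshold.
    console.log(insertionSort([3, 1, 2, 5, 4])); // [ 1, 2, 3, 4, 5 ]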
> But the shortest distance between two points? Nobody argues about that (I hope some pedantic person doesn't bring up non-euclidean geometry because come on).
You don't need to look into non-euclidean geometry.
Actually, even there the shortest distance between two points is a "straight line". Only that the straight line may have some curvature (because of the curved space).
But you didn't even consider that "distance" is actually something² about which one can actually argue…
> You can't. The definition of bias is that the person who is biased is unaware of it.
It is, I just worded it differently. See the "cognitive biases" part of your citation. They use "reality" in place of what I mean by "unaware". If you think something incorrect is reality, then you are "unaware" of how incorrect your thinking is, because you think it's reality. These are just pedantic factoids we're throwing at each other.
>What was actually the point of your comment, btw?
The point is that FOR PROGRAMMING, nobody truly knows which camp is the cargo cult. Everyone is blind. Stick with the program.
>Welcome to the physical world. Where practice differs from theory.
This is called pedantry. Like, did you really think you're telling me something I'm not aware of? Everyone knows this. But the pedantic details of the optimizations the compiler and the CPU go through to execute code are beside the point; obviously I'm not referring to this stuff when I'm trying to convey a point.
>In Manhattan¹? ;-)
You're again just being pedantic. I'm perfectly aware of non-euclidean geometry, but you had to go there. I had a point, stick to the point; pedantry is a side track designed to muddy the conversation. Why are you muddying the conversation?
Is it perhaps that you're completely and utterly wrong and you're trying to distract me from that fact?
>Trying to pretend that this is not the case, and trying really hard to emulate (free) functions (which, to stress this point once more, do not exist in JavaScript) on the other hand makes everything more complicated than strictly needed.
Bro, my little paragraph arguing against you was just a random factoid. I don't care for your argument and I don't even care for mine. The whole main point is to say that we can endlessly spew this garbage at each other and the needle doesn't move forward at all. Nobody can win, because we have no way of establishing an actual winner. Thus, with no way of knowing who's right, there's no POINT in it.
All I am saying and this is the heart of my argument, is that YOUR topic, your team of "don't be a cargo cult" is no different from all the other teams.
I thought I made it obvious that my factoid was sarcastic, but you fell for it quickly and instantly retaliated with your own factoid. Man, not gonna go down that rabbit hole.
Not deltasevennine, but giving the CPU fewer things to do sounds good to me (in any context), even if it is currently unpopular. Some cults are popular and some aren't.
>I do not see where he did that. He argued simply that context matters. (And yes a "bad" tool can be the right tool, if it is the only tool available.)
Well I see it. If you don't see it, I urge you to reread what he said.
A bad tool can be the right tool but some tools are so bad that it is never the right tool.
>And diving deeper into philosophy here, can you name one example?
Running is better than walking for short distances when optimizing for shorter time. In this case walking is definitively "bad." No argument by anyone.
Please don't take this into a pedantic segue with your counter here.
"Running is better then walking for short distances when optimizing for shorter time"
Yeah, but then the context is optimizing for shorter time. You said context does not matter. But it always does. And depending on the greater context, there would be plenty of examples where running is not adequate, even when optimizing for short time, because maybe you don't want to draw attention, you don't want to be sweaty when you reach the goal, or your knee would hurt again, etc.
Is this pedantic? Well, yes, but if you make absolutist statements then this is what you get.
But again, no one here ever said it is all the same. It was said that it is always about context, to which I agree.
When you only have a low quality tool available, or your people are only trained in that low quality tool (and there's no time to retrain), then this is still the right tool for the job.
I made an absolutist statement which is definitely true. You failed to prove it wrong. Instead you had to do the pathetic move of redefining the statement in order to get anywhere. You weren't pedantic, you changed the entire subject with your redefinition.
As for context, I am saying I can make an absolute statement about things and this statement is true for all contexts.
My point for this entire thread is that I can say OOP is horrible for all contexts and this statement cannot be proven wrong. Neither can the statement OOP is good for all contexts or OOP is good for some contexts. All of these statements are biased.
If you were to be pedantic here you would be digging into what context means. You might say that if the context was that everyone was trained on OOP and not FP, then OOP is the superior choice. To which I respond: by context I mean contexts for practical consideration. If you can't figure out what set of contexts lives in that set for "practical consideration" then you are just too pedantic of a person for me to have a reasonable conversation with.
There are shit paradigms out there, shit design patterns and shit programming languages. But without proof this is an endless argument. You can't prove your side either; you're just going to throw me one-off examples, to which I can do the same. No point. I'm sorry but let's end it here, I don't want to descend further into that rabbit hole of endless qualitative arguments.
If you feel the need for personal attacks over a philosophical debate, where you consistently insist on understanding the other side wrong, then you might want to check your tools of communication. They are clearly not working optimally but granted, they might be the best you have available - but you could still improve them.
"Nothing is true, nothing is false."
No one ever claimed that in this debate, except you.
>If you feel the need for personal attacks over a philosophical debate, where you consistently insist on understanding the other side wrong,
No personal attack was conducted here. It's just that you're sensitive. I mean, I could take this: "where you consistently insist on understanding the other side wrong" as an insult.
I could also take this: "Well, if you think so, then consider yourself the tautological winner." as a sarcastic insult.
But I don't. Because I'm not sensitive. Nobody was insulted here. You need to relax. Calling you a loser was just me turning your sarcastic "tautological winner" statement around and showing you how YOU are at the other side of the extreme. I'm not saying you're a "loser" any more than you were sarcastically calling me a "winner."
Put it this way: the "loser" comment is an insult IF and ONLY IF your "winner" comment was an insult too. If it wasn't, we should be good.
>No one ever claimed that in this debate, except you.
You never directly claimed this, but it's the logical consequence of your statements. You literally said my statement was flawed because it was "absolute". You're like "this is what you get when you make absolute claims." And my first thought was, "what on earth is wrong with an absolute claim?" We do not live in a universe where absolute claims are invalid because if we did then "Nothing is true and nothing is false" and everybody loses.
If this isn't the case then go ahead and clarify your points.
This isn't an opinion or a belief, it's a verifiable fact. We know the world is round because: if you travel east you'll eventually arrive back at your starting point; if you stand in a ship's crow's nest you can see further than the crew on the deck because of the curvature of the earth; if you fly a plane and make two 90 degree turns and travel an equal distance, you will end up at your starting point due to the curvature of the earth; if you go to space in a space station, you can visually verify that the earth is round.
Cargo culting acolytes will believe the earth is round with no explanation as to why. Just because you believe the right thing doesn't mean you're not cargo culting. If you can't explain why you believe what you believe, you're simply believing to follow a particular crowd, regardless of the validity of the belief.
I simply use this "world" example because everyone here is in the same cult as me: "The world is round cult." When I use it, they get it. If I were speaking to a different cult, I would use a different example.
You will note, both flat earthers and non-flat earthers have very detailed and complex reasoning for why they "believe" what they believe. But of course most members of either group have NOT actually gone to space to verify the conclusions for themselves.
My point was most people do cargo cult and that's bad, no matter what. And the notion that you have to go to space to know the earth is round is flawed, as I tried to illustrate using several examples that didn't necessitate traveling to space to infer that the earth is not flat.
After all, Eratosthenes was able to calculate the circumference of the earth with approximately a 0.5% margin of error[0]. Since they didn't have rockets in 250 B.C., it should be clear that there are other empirical methods to test these hypotheses.
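(With the traditionally reported figures the arithmetic is easy to check: the sun was about 7.2° off vertical at Alexandria, i.e. 1/50 of a full circle, while directly overhead at Syene roughly 5,000 stadia away, giving a circumference of about 50 × 5,000 = 250,000 stadia.)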
To reiterate, cargo culting is always bad. If you don't have a reason for what you believe, then there's a chance your belief is flawed and it would behoove you to research your question and prove to yourself the validity or invalidity of your belief.
Yeah, I get it. But what I'm saying is that there are many times when nobody can truly prove whether they're in the cargo cult or the other people are in the cargo cult.
So programming is one such thing. There are stylistic camps everywhere and nobody knows which one is the cargo cult, INCLUDING the neutral camp where people say everything is a tool depending on the context.
Nope. You're the one who isn't listening or understanding. still_grokking's point is that even if one approach is better than the other, a slavish cargo-cult adherence to that approach will still produce bad results.
Nope. You aren't listening to me. I am getting exactly what he's saying. What I am saying is that the cargo culters could be right. You don't know. Nobody knows.
Additionally he DID say that the approaches were all tools in a toolbox.
yeah but sometimes it's useful to use a flat earth model, when for instance the ground you're going to build something like a shed on is relatively flat. in the big picture sense i agree, but in different contexts an alternative abstract model can suffice and actually be more efficient if the aim is to build the shed in this case
If you have not noticed that many people find it very easy to lie to you, I hope you discover it without suffering much. Elon Musk and Trump fans, e.g., seem not to have noticed.
I find Lambda calculus and term rewriting much more elegant and easy to understand. I don't think there is a simpler Turing-complete language (maybe Game of Life?).
For Turing machines I recommend the book "The annotated Turing".
Maxwell’s equations are arguably the most successful existing physical theory. They are incredibly accurate over a huge range of scales. They are used in essentially unaltered form all over modern day engineering and have astonishing predictive power.
On top of this, they are a useful tool without modification! They are the working tool for all electrical engineers. They’re not some lower-level substrate that exists in the background. They are used directly to model and generate other, simpler approximate theories (such as geometric and physical optics) which are powerful and elegant in their own right.
I don’t think Lisp, lambda calculus, Turing machines, or similar can make this kind of claim.
The lambda calculus is useful without modification. It can also be used to model other simpler languages or logics which are powerful and elegant in their own right.
So, yes, since I've used the lc in these ways I believe the same claim can be made regarding the lambda calculus.
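As a toy illustration of "useful without modification", here is a minimal sketch (TypeScript, using the standard textbook Church encoding; nothing specific to this thread): numbers and addition expressed purely as function application.

    // Church numerals: the number n is "apply f n times".
    type Church = <A>(f: (x: A) => A) => (x: A) => A;

    const zero: Church = f => x => x;
    const succ = (n: Church): Church => f => x => f(n(f)(x));
    const add = (m: Church) => (n: Church): Church => f => x => m(f)(n(f)(x));

    // Interpret a Church numeral back into an ordinary number.
    const toNumber = (n: Church): number => n((k: number) => k + 1)(0);

    const one = succ(zero);
    const two = succ(one);
    console.log(toNumber(add(one)(two))); // 3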
what it means to say maxwell's equations have 'predictive power' is that we can
1. take a situation observed or designed in the contingent universe,
2. translate it into the abstract entities maxwell's equations talk about,
3. deduce consequences in the world of abstract entities,
4. translate those consequences back into the contingent universe, and then
5. find that the consequences in the contingent universe are within narrow uncertainty bounds we translated from the abstract world of ideas
the meaning of turing universality is precisely that any turing-complete programmable system can be used to model any other logical or mathematical system, including other turing-complete systems, in exactly the same way that maxwell's equations model electromagnetism
for example, you can model risc-v execution in lisp and predict what a risc-v processor will do, you can model lisp execution in the λ-calculus and predict what a lisp interpreter will do, you can model the λ-calculus in a turing machine and predict what λ-reduction will do, and you can model a turing machine in a risc-v processor and predict what the turing machine will do
there is a significant sense in which this sort of modeling is much more perfect than the kind done with maxwell's equations
when we apply maxwell's equations, we are subject to measurement error in steps 1 and 5; our measurements are never complete and correct, and heisenberg's uncertainty principle strongly suggests that they never can be. and in step 3, because maxwell's equations are continuous-time continuous-space differential equations, we often also introduce numerical error in our calculations as well, because we usually have to integrate them numerically rather than algebraically
on the other hand, in the case of computational universality all the entities being discussed are discrete, algebraic, mathematically abstract entities, so our simulations are absolutely perfect unless we run out of memory or suffer a rare hardware error
obviously these universal machines are not limited to modeling other universal machines; we can also use lisp or turing machines or risc-v processors to model things like gravitation, taxation, or maxwell's equations. and they are obviously also the main working tool for all electrical engineers today, having displaced slide rules and load lines generations ago
ultimately, though, we are also using maxwell's equations (and other equations describing electromagnetism, like the ebers-moll transistor model) to design our electronic computers which we use to simulate lisp
I think you're confused about the premise of this discussion. The comparison is not between Maxwell's equations and lambda calculus, it's between lambda calculus and other theoretical constructs in CS.
Maxwell's equations are "the Maxwell's equations" of physics because they are "the most successful" physical theory: i.e., incredible accuracy, wide applicability, and concision. This is relative to other physical theories. (I guess you could debate whether this is actually the case---obviously this is all subjective, and not everyone agrees. Maybe someone thinks Einstein's field equations should really be "the Maxwell's equations" of physics.)
To say that the lambda calculus or anything else are "the Maxwell's equations" of computer science is to say that they have the same set of properties relative to other theoretical constructs in computer science. But is this actually the case? It seems like Turing machines, lambda calculus, etc. are all similar in terms of relative modeling utility and concision. I disagree that they are widely applicable---in practice, just about any commonly used programming language is far more widely applicable than any of these. And as you pointed out, by Turing universality, they're all logically equivalent---so the question of difference in modeling "power" is not relevant here.
I guess I just don't think it makes much sense to talk about anything in CS being "the Maxwell's equations" of CS. In physics, a wide variety of disparate physical phenomena are being modeled, with individual models having a quite varied range of power and applicability. In CS, things are much different.
Incidentally, even if you simulate Maxwell's equations on a computer using Lisp, a Turing machine, or whatever else, you will not go far if you insist on algebraically exact computations. :-) As you say yourself, you must integrate them numerically, which means numerical error introduced through your discretization. But I honestly can't say I understand the point of including this information above.
there are lots of equivalent formulations of maxwell's equations (cf. https://en.wikipedia.org/wiki/Mathematical_descriptions_of_t... for a few); the ∇× ∇· one we most commonly use is quite a bit more compact and usable than maxwell's own formulation because vector calculus was, to a significant degree, invented to systematize maxwell's equations. there's apparently an even more compact form of them in terms of clifford algebras that i don't understand yet, \left(\frac{1}{c} \dfrac{\partial}{\partial t} + \boldsymbol\nabla \right) \mathbf F = \mu_0 c (c \rho - \mathbf J).
similarly there are lots of equivalent formulations of universal computation. lisp and the λ-calculus are analogous to the geometric-algebra formulation above and the more commonly used form in terms of vector field derivatives: they look very different, and they make different problems easy, but ultimately they all model the same thing, just like the various formulations of maxwell's equations
maxwell's equations are the maxwell's equations of classical electrodynamics, not of physics. without a lot of extra assumptions they won't get you very far in understanding why solid things are solid (which depends on the pauli exclusion principle) or why some nuclei break down or how transistors work or how stars can burn or why mercury's orbit precesses or why hot things go from red to orange to yellow when you heat them further
my point about modeling maxwell's equations in lisp (etc.) is that lisp (etc.) can model maxwell's equations (at least, as well as any other effective means of calculation we know of can model them — discretization and rounding error also happen when you numerically integrate with pencil and paper), so if we're looking for incredible accuracy, wide applicability, and concision, lisp (etc.) would seem to trump maxwell's equations — but good luck trying to build a computer without electromagnetism, maybe you can in some sense do it on a neutron star but not with atoms
A lot of people don't agree. But I feel category theory is the Maxwell's equations for software.
What is the shortest distance between two points? A straight line. I calculate the solution to that problem. I don't design it. What's the best way to travel between two points? Do I take a car or a plane? Which is cheaper, faster, or more comfortable? I can't calculate a solution because the problem is too complex. Instead I design the solution to the problem. That is the fundamental difference between design and calculation.
Whenever you use the word "design" you are operating in a zone where humans have no foundational optimization theory. You are guessing, using your instincts, your common sense and your gut to find solutions for the problem. It's unlikely you'll hit the most optimal solution, but you can very likely arrive at a "good" solution.
If you squint, Category theory looks to fill this gap for designing programs. Depending on how you look at it, category theory looks like a fundamental theory on how to organize code. It can be thought of as a fundamental theory on abstraction or a fundamental theory on interfaces.
Maybe another way to put it is that the concept of interfaces is the Maxwell's equations of software.
Even Turing machines/lambda calculus seem oddly specific, let alone Lisp. If we want something fundamental and general I think category theory is it.
My guess is, if the advancement of computer science was allowed to play out for centuries and we got rid of all the historical baggage, some form of category theory would lie at the heart of it all.
Every time I try to understand what category theory is, I hit an impassable wall, I don't understand it. Alexander Grothendieck is much too intelligent for me. But I don't despair of getting there one day...
I've found Bartosz Milewski's lectures on category theory for programmers to be very understandable and enlightening. (And I'm weak on math) He explains it really well in plain terms.
It's the idea that most structures in math can be put upon a foundation of a structure consisting of objects and the relationships between those objects.
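A minimal sketch of that idea in code (TypeScript here, purely as an illustration): take types as the objects and ordinary functions as the arrows; identity and composition are the whole interface, and the category laws are the contract you reason about.

    // Objects: TypeScript types. Arrows: functions between them.
    const id = <A>(a: A): A => a;
    const compose = <A, B, C>(g: (b: B) => C, f: (a: A) => B) =>
      (a: A): C => g(f(a));

    // Category laws (properties to reason about, not compiler-enforced):
    //   compose(id, f)            behaves like f                          (identity)
    //   compose(h, compose(g, f)) behaves like compose(compose(h, g), f)  (associativity)

    const length = (s: string): number => s.length;
    const isEven = (n: number): boolean => n % 2 === 0;
    console.log(compose(isEven, length)("four")); // true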
I was going to make the same recommendation. In particular I'd recommend her new book, _The Joy of Abstraction_. It's a real textbook that goes up through the Yoneda Lemma, but it isn't too scary and it doesn't assume that the reader already knows any theoretical math.
Yes, that book in particular. I have Emily Riehl's Category Theory in Context which was not immediately accessible to me at all. Joy of Abstraction aims to be an on-ramp to that.
I read the first couple chapters. It's a bit too watered down in the beginning IMO. I couldn't get past the first part especially when she tries to relate category theory to feminism. Might come back to it later.
As a non-math expert and programmer I highly, highly recommend Bartosz's stuff; I linked it in another branch under my original comment.
Bartosz's material is great but it has a practical programming focus, so Cheng's books, which focus more on teaching Category Theory as mathematics, make a good complement.
Yeah, you're probably right. I'll dive back into her book eventually.
It's mostly that the beginning chapters don't even get into the meat of CT quickly enough. Instead it spends a lot of time justifying things. I feel it's written for people who hated math even more than I did.
Lambda Calculus is a much better representation of computation than Category Theory. I can teach somebody functional programming using Lambda Calculus fairly quickly. It would take a lot longer using Category Theory. I have studied quite a bit of Category Theory but I would still struggle to explain beta-reduction using Category Theory. Another better system to use IMHO is Martin-Löf style Type Theory. Much simpler. Much more useful in practice.
I do not agree with this (however, I only have one grad category theory course). I think category theory is a fine tool for formalizing computation and thinking about type systems, but wouldn't approach it as a program design tool.
You are taking Type A and converting it to Type B. That is the entire point of computation. All else is abstractions on top of that and algorithms below.
Category theory is what lives on top. Haskell is a programming language and style that borrows very, very heavily from category theory. Getting a certain level of mastery in Haskell will help you see how programming is related to CT.
I have studied haskell from this perspective. Like I said, you learn how to model computation and how the haskell language works, you don't learn how to design programs. I actually have gotten much more program design ideas from traditional branches of math (analysis, algebra, etc).
> If you designed programs in haskell then you designed them using concepts from category theory.
It's the level of abstraction and focus. Category theory can describe "mapping over promises" it doesn't give a lot of insight into how to design an internet communication protocol with low latency (for example).
> I would say you didn't get very far then
Sounds like you put a lot of value in your experience with this topic and I am unlikely to convince you.
I actually agree with you. It's not a good theory for finding optimal speed.
I would say it's more of a better theory for optimal organization and reuse of code. It's a theory of interfaces. If programming is all about abstraction then CT is the theory of design and abstraction.
I agree that detail and speed and execution are important in computer science, but these things maybe aren't general enough, as the efficiency of speed also relies on which Turing-complete language (out of an unlimited set of languages) is used to drive the computation.
We also do have a general theory for optimizing speed: computational complexity theory, which Knuth made concrete with his little assembly language (MIX). This theory can literally help you converge onto the best possible solution. Not by calculation, but it can quantify the speed of an algorithm for definitive comparisons.
But then would complexity theory be the Maxwell's equations of programming? I think there's a good argument for that route but I don't think that's what the OP is thinking about in his article.
The OP is thinking about design, because Lisp is definitely a design philosophy. Lisp won't help you design an internet communication protocol for low latency either.
Maybe you did study category theory deeply, but you're thinking about it from an angle of computational cost. This is certainly valid. Apologies for that comment; Haskell and CT, like any other design pattern out there, are usually not targeted specifically at optimizing speed, but more at optimizing code reuse and increasing modularity. It's for dealing with technical debt and how to handle the complexity of programming through abstraction.
If you're thinking about it from that angle then I think the universe already has your answer. Computational complexity theory is the Maxwell's equations of software. It's not a very complete theory but it's much more concrete than all the other "design" stuff in computer science.
> Haskell is basically a category theory framework.
Haskell probably draws more from category theory than any other mainstream language, but in absolute terms that's still not very much. It's okayish for modelling cartesian closed categories, but if you want any more structure than that things get quite painful. Even something as simple as a category with finitely many objects requires stupid amounts of type-level boilerplate.
There is no best mental tool. This is design; we don't have methods to converge on optimums. So you probably can't say any mental tool is better. It's all just different sets of primitives, and category theory's primitives, if you squint, feel like the proper theory of design.
Category theory is also so abstract that you can literally find it in everything. So it's not like it doesn't apply, it applies to everything.
When I say Haskell is a category theory framework I mean that Haskell has primitives that are explicitly modeled after CT and named with the same mathematical names. You don't have to know CT from a mathematical angle to use Haskell but it doesn't change the fact you are using CT primitives like functor to build things.
Other languages have CT concepts like functors but they don't crystallize the concept in such a pure and explicit form. That's just my opinion on it. You are welcome to disagree.
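As an illustrative sketch (TypeScript, nothing specific to this thread): `Array.prototype.map` and a hand-rolled Option `map` are two instances of the same functor pattern that Haskell names explicitly, even though TypeScript can't state the shared `Functor` abstraction directly without heavy type-level tricks.

    // Two "map"s with the same shape: apply a function inside a container
    // without changing the container's structure.
    type Option<A> = { tag: "some"; value: A } | { tag: "none" };

    const mapOption = <A, B>(f: (a: A) => B, o: Option<A>): Option<B> =>
      o.tag === "some" ? { tag: "some", value: f(o.value) } : o;

    const double = (n: number): number => n * 2;

    console.log([1, 2, 3].map(double));                        // [ 2, 4, 6 ]
    console.log(mapOption(double, { tag: "some", value: 3 })); // { tag: 'some', value: 6 }
    console.log(mapOption(double, { tag: "none" }));           // { tag: 'none' }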
But when you say garbage like "100% nonsense" it's just fucking rude and against the rules here at HN. Don't say it again. Speak respectfully or don't speak at all.
> But when you say garbage like "100% nonsense" it's just fucking rude and against the rules here at HN. Don't say it again. Speak respectfully or don't speak at all.
I found that comment to be seriously rude and disrespectful. Yes, I should have avoided the "100% nonsense" comment. However that doesn't give you licence to be rude and abusive in return.
It is rude, but it is reasonable. When a child acts unruly and immature he must be disciplined with a forceful hand. I called you out harshly and the harshness was fucking deserved, no matter how miffed you get about it.
I mean, you expect me to just stand here and get raped by your rudeness? No reasonable man with a spine will take that shit in real life, and that's typically why you wouldn't speak that way in real life.
Why stop at the c-suite? We may not be close to being ready to disrupt software engineering but the trend is heading in that direction. We already passed a milestone for code generation.
Realistically, the C-suite will probably target engineers first before letting themselves get replaced by AI. It may be partially responsible for the current layoffs.
Of course. It's not about the best move or what looks better. Nobody cares for that.
It's about the truth. That's what people care about in the end. And if none of it was said here, parent is pointing out that Mark is truly an ass. Something like "laying off people because other companies are doing it" is pretty fucked up.
Yup. I don't think HN can avoid this at scale, though. It's been a problem on sites like this since early in the days of Slashdot.
The fundamental problem is that the voting and karma system actively incentivizes this kind of behavior. No amount of "did you read the article?" comments can counteract that force. All they do is increase the noise level even further.
>Perhaps, like any other tool-set... C has an optimal problem domain not everyone can understand
Maybe this is true specifically for C. However you imply this is true for "any other tool-set".
This is also a form of bias. It is illogical and irrational to think that every tool set is good for something. Things that exist can be horribly bad for everything and things that exist can in theory be good for almost anything.
There is no magical rule that says all tool-sets are great because they're always optimal for some niche problem domain.
The universe is not made up of apples and oranges. There are rotten oranges and rotten apples as well.
Languages like Python/C# have fundamentally broken threading models. And yet people like to use these for cluster work. Kind of like eating steak with a spoon, as you never knew about forks.
There are often language specific features that are not isomorphic, and are the primary reason a language was developed in the first place. =)
Sure there are forks and there are spoons. But there are also lumps of coal.
You have a problem: you need to eat soup and steak. So you use a spoon or a fork. The lump of coal is useless. You can probably smash the steak with it and then eat the remains, or lick the soup off the wet coal you dipped into it. Possible, but a horrible tool overall.
There is an argument to be made whether certain languages are lumps of coal rather than a spoon or a fork =).
Why do 100,000 lines of Python code tend to be safer and more manageable than 100,000 lines of C++, despite the fact that Python has no type checker and C++ has a relatively advanced type checker?
Why do startups choose a Python web stack over a C++ web stack?
I don't think it's "self-evident." I think there's something more nuanced going on here. Hear me out. I think type systems are GREAT. I think python type hints and typescripts are the way forward. HOWEVER, the paradox is real.
Think about it this way. If you have errors in your program, does it matter that much if those errors are caught during runtime or compile time? An error at compile time is caught sooner rather than later, but either way it's caught. YOU are protected regardless.
So basically compile time type checking just makes some of the errors get caught earlier, which is a slight benefit but not a KEY differentiator. I mean, we all run our code and test it anyway, whether the system is typed or not, so the programmer usually finds most of these errors anyway.
So what was it that makes Python easier to use than C++?
Traceability and determinism. Errors are easily reproduced; the language always displays the same symptoms for a given error and in turn delivers error messages that are clear and readable. These are really the key factors. C++, on top of non-deterministic segfaults, astonishingly even has compile-time messages that can confuse users even further.
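To make the compile-time-vs-runtime point concrete, here's an illustrative TypeScript sketch (names invented for the example): the same typo is either rejected at build time with a message pointing at the call site, or it surfaces later at runtime as a vaguer symptom you have to trace back. The disagreement above is about how much that earliness is worth.

    interface User {
      name: string;
      signupYear: number;
    }

    const accountAge = (u: User, currentYear: number): number =>
      currentYear - u.signupYear;

    // With static checking, this typo fails the build right here:
    //   accountAge({ name: "Ada", signupYears: 1999 }, 2024);
    //   // error: object literal may only specify known properties
    //
    // Without it, the same mistake runs, produces NaN (2024 - undefined),
    // and has to be traced back from wherever the NaN finally shows up.
    console.log(accountAge({ name: "Ada", signupYear: 1999 }, 2024)); // 25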
There is no "paradox". C++ is dangerous because of memory management and awful semantics (undefined behavior/etc), both of which are orthogonal to static typing.
It's a bit like saying that there's a paradox: everyone says that flying is safer than driving, but experimental test pilots die at a much higher rate than school bus drivers!
Paradoxes don't exist in reality. It's a figure of speech based on something that was perceived as a paradox. This much is obvious.
Much of the fervor around dynamically typed languages in the past was driven largely by the dichotomy between C++ and the dynamically typed languages.
Nowadays it's more obvious what the differentiator was. But the point I'm making here is that type checking is NOT the key differentiator.
> So basically compile time type checking just makes some of the errors get caught earlier, which is a slight benefit but not a KEY differentiator.
Unfortunately, I have to completely disagree here, at least based on my experience. Shifting software error detection from runtime to compile time is absolutely paramount and, in the long run, worth any additional effort required to take advantage of a strong type system.
Firstly, writing unit tests that examine all the possible combinations and edge cases of software component input and state is... an art that requires enormous effort. (If you don't believe me, talk to the SQLite guys and gals, whose codebase is 5% product code and 95% unit test code.)
Secondly, writing automated UI tests that examine all the possible combinations and edge cases of UI event processing and UI state is... next to impossible. (If you don't believe me, talk to all the iOS XCUI guys and gals who had to invent entire dedicated Functional Reactive paradigms such as Combine and SwiftUI. ;) J/K)
Thirdly, I don't even want to get into the topic of writing tests for detecting advanced software problems such as memory corruption or multi-threaded race conditions. Almost nobody really seems to know how to write those truly effectively.
> So what was it that makes Python easier to use than C++?
The Garbage Collector, which side-steps all the memory management problems possible with careless C++. However, a GC programming language probably cannot be the tool of choice for all the possible problem domains (e.g., resource-constrained environments such as embedded and serverless; high-performance environments such as operating systems, database internals, financial trading systems, etc.)
"financial trading systems" This is a myth. Many financial trading systems are written in C# and Java. Don't be distracted by the 1% of hedge funds with lousy funding that need nanosecond reactions to make money. If you have good funding, product diversity matters more than speed.
Otherwise, your post is excellent. Lots of good points. SQLite is something that EADS/ESA/NASA/JAXA would write for an aeroplane / jet fighter / satellite / rocket.
I'm sure C# and Java make excellent programming languages for many if not most financial applications, but I meant that in the context of high-volume Enterprise Application Integration (EAI). Basically financial message transformation, explosion, summarization, audit, etc. across multiple financial institutions. The volume of messages to be processed was quite considerable, so nobody even thought about taking the risk of switching from battle-tested C++ to anything else.
I am sure your use case was incredibly specific. For insane performance requirements plus enterprise software that is not greenfield, basically everything is C++.
No trolling. Have you ever seen the high-frequency Java stuff from Peter Lawrey's Higher Frequency Ltd.? It is insanely fast. Also, LMAX Disruptor (Java) data structure (ring buffer) is also legendary. I have seen it ported to C++. That said, you can beat all of this with C++, given enough time and resources!
Another thing you're not addressing here is that type checking basically solves none of the problems you describe. You claim it's extraordinarily hard to write tests for UI and for memory corruption. And that's your argument for type checkers? It's next to impossible to type check UI and memory corruption. So your argument has no point here.
SQLite is written in C. It has type checking. Yet people still write unit tests for it. Why? Because type checking is mostly practically inconsequential. All your points don't prove anything. It proves my point.
All the problems you talk about can be solved with more advanced proof based checkers. These systems can literally proof-check your entire program to be fully in spec at compile time. It goes far beyond just types. Agda, Idris, Coq, and Microsoft's Lean have facilities to prove your programs to be fully correct 100% of the time. They exist. But they're not popular. And there's a reason for that.
You say it's paramount to move error detection to compile time. I say, this problem is ALREADY solved, but remains unused because these methods aren't PRACTICAL.
Incorrect. Have a look at the Swift OpenCombine library. Multiple Publishers of a particular type that emits a single boolean value (e.g., an "Agree to Terms" UI checkmark and an "Agree to Privacy Policy" UI checkmark) are combined at compile-time to be transformed into a single Publisher of a type that emits only a single boolean value (e.g., the enabled/disabled state of a "Submit" button). Effectively, it is not possible to even compile an app that incorrectly ignores one of the "Agree" checkmarks before enabling/disabling the "Submit" button.
> It's next to impossible to type check (...) memory corruption
Incorrect. Have a look at the Rust standard library. Sharing data across multiple threads requires passing a multi-threaded Mutex type; attempting to share data through a single-threaded Rc (reference-counted) type will not compile. Once the Mutex type is passed, each thread can only access the memory the Mutex type represents by acquiring another type, a MutexGuard, through locking. Effectively, it is not possible to even compile a program that incorrectly ignores multi-threading or incorrectly accesses memory in a race condition with other threads thus possibly corrupting that memory. Moreover, it is also not possible for a thread not to properly release a lock once the MutexGuard type goes out of scope.
> All the problems you talk about can be solved with more advanced proof based checkers.
Unlikely. Without feeding strong type information that describes your problem domain into a checker, the checker cannot reason about your code and figure out possible errors. A strong type system is a "language" for a programmer to communicate with his or her checker.
> You say it's paramount to move error detection to compile time. I say, this problem is ALREADY solved, but remains unused because these methods aren't PRACTICAL.
> [the C language] has type checking. Yet people still write unit tests for it. Why? Because type checking is mostly practically inconsequential.
Please do not hold it against me if I do not continue commenting here - you must be from a different, parallel Universe. (How is Elvis doin' on your end? ;) J/K)
>Incorrect. Have a look at the Swift OpenCombine library. Multiple Publishers of a particular type that emits a single boolean value
First off, types can't emit values. Types don't exist at runtime; they're simply meta-information for the compiler to run checks, so they can't emit anything. Second, if you're talking about something that emits a value, then it involves logic that doesn't have to do with UI. A UI is not about logic; it is simply a presentation given to the user, and all logic is handled by things that AREN'T UI based.
UI would be like HTML and CSS. Can you type check HTML and CSS to make sure the Hacker News UI is correct? There is no definition of correctness in UI, thus it can't be type checked. The example you're talking about is actually type checking the logic UNDERNEATH the UI.
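To illustrate what I mean, that "logic underneath the UI" can be written in plain Python and checked by an external tool like mypy; here's a minimal sketch (the names are hypothetical):

def submit_enabled(agreed_terms: bool, agreed_privacy: bool) -> bool:
    # pure logic: the UI only renders whatever this function returns
    return agreed_terms and agreed_privacy

submit_enabled(True, "yes")   # mypy rejects this call ("yes" is not a bool); the UI itself is never checked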
>Effectively, it is not possible to even compile a program that incorrectly ignores multi-threading or incorrectly accesses memory in a race condition with other threads thus possibly corrupting that memory. Moreover, it is also not possible for a thread not to properly release a lock once the MutexGuard type goes out of scope.
This is different. It's not type checking memory corruption. It's preventing certain race conditions by restricting your code such that you can't create a race condition. There's a subtle difference here. You can violate Rust's constraints in C++ yet still have correct code. Type checking memory corruption would involve code that actually HAS a memory corruption, and some checker proving it has a memory violation. My statement still stands: memory corruption cannot be type checked.
Think about it. A memory corruption is an error because we interpret it as an error. Logically it's not an error; the code is doing what you told it to do. You can't check for an error that's a matter of interpretation.
At best you can only restrict your code such that ownership lives in a single thread and a single function, which prevents certain race conditions. That is what Rust does. This has a cost: implementing a doubly linked list is hugely overcomplicated in Rust: https://news.ycombinator.com/item?id=16442743. Safety at the cost of highly restricting the expressiveness of the language is very different from type checking. Type checking literally finds type errors in your code; borrow checking does NOT find memory corruption... it prevents certain corruption from happening, that's about it.
>Unlikely. Without feeding strong type information that describes your problem domain into a checker, the checker cannot reason about your code and figure out possible errors. A strong type system is a "language" for a programmer to communicate with his or her checker.
No no, you're literally ignorant about this. There's a whole industry out there of automated proof checking of code via type theory and type systems, and there's technology that enables this. It's just not mainstream. It's more obscure than Haskell, but it's very real.
It's only unlikely to you because you're completely ignorant about type theory. You're unaware of how "complex" that "language" can get. Dependent types are one example of how that "type language" can actually "type check" your entire program to be not just type correct but logically correct. Lean, Idris, Coq and Agda literally are technologies that enable proof checking at the type level. It's not unlikely at all. It's reality.
>Please do not hold it against me if I do not continue commenting here - you must be from a different, parallel Universe. (How is Elvis doin' on your end? ;) J/K)
It's quick sort implemented in a language called Idris. The implementation is long because it is not just quick sort: the programmer is utilizing the type system to PROVE that quick sort actually does what it's supposed to do (sort ordinal values).
I'd appreciate an apology if you had any gall. But you likely won't "continue commenting here". Wow just wow. I am holding it against you 100%. I didn't realize how stupid and rude people can actually be.
"Typestates are a technique for moving properties of state (the dynamic information a program is processing) into the type level (the static world that the compiler can check ahead-of-time)."
> you're completely ignorant about type theory. (...) This is just fucking rude. (...) I didn't realize how stupid and rude people can actually be.
Yes, of course, naturally, you must be right, how blind could I have been?
> I'd appreciate an apology if you had any gall.
Sure, sorry about my little previous joke[1], meant no factual offense. The very best of luck to you as a programmer and a wonderfully polite human being with a great sense of humor.
[1] "Topper: I thought I saw Elvis. Block: Let it go, Topper. The King is gone. Let's head for home." ("Hot Shots!", 1991)
>Sure, sorry about my little previous joke[1], meant no factual offense. The very best of luck to you as a programmer and a wonderfully polite human being with a great sense of humor.
Jokes are supposed to be funny. Not offensive. Your intent was offense under the guise of humor. Common tactic. Anyone serious doesn't take well to the other party being sarcastic or joking, you know this, yet you still play games. It's a typical strategy to win the crowd by making someone overly serious look like a fool. But there is no crowd here, nobody is laughing. Just me and you.
So your real intent is just to piss me off given that you know nobody is here to laugh at your stupid joke. You're just a vile human being. Go ahead, crack more jokes. Be more sarcastic, it just shows off your character. We're done.
> Your intent was offense (...) you still play games (...) a typical strategy to win the crowd (...) But there is no crowd here (...) your real intent is just to piss me off (...) You're just a vile human being (...) it just shows off your character
I assure you that I am not joking when I say the following: you are beginning to act in a disturbing manner at this point, please consider speaking to a mental health professional.
Again, sorry to have caused you discomfort with my little joke and best of luck to you.
Bro. If someone was truly disturbing and you truly wanted to help them, you wouldn't walk up to them and tell them to speak to a mental health professional. Telling them that is even more offensive. We both know this.
You're not joking. You're just being an even bigger ass, but now instead of jokes, you're feigning concern. It's stupid.
There's subtle motivations behind everything. A genuine apology comes without insulting the other party. Clearly you didn't do that here, and clearly you and everyone else knows what a genuine apology should NOT look like: "go get help with your mental problems, I'm really sorry."
It shows just what kind of person you are. It's not me who's disturbing... it's you, the person behind a mask.
Also, clearly my words come from a place of anger and seriousness, not mental issues. Mental problems are a very grave issue; they are a far bigger problem and the symptoms are far more extreme than what's happening here. But you know this. And you're trying to falsely re-frame the situation by disgustingly using mental issues as some kind of tool to discredit the other party. It's just vile.
I don't wish you the best of luck. I think someone like you doesn't deserve it.
Your argument makes no sense. I say the type checker is not the key differentiator then you say for python the key differentiator is the garbage collector.
So that makes your statement contradictory. You think type checkers are important but you think python works because of garbage collection.
Either way I'm not talking about the implementation of the language. I'm talking about the user interface. Why is one user interface better than the other?
I bet you if C++ had sane error messages and was able to deliver the exact location of segfaults, nobody would be complaining about it as much. (There's an implementation cost to this, but I am not talking about that.)
Even an ugly ass language like golang is loved simply because the user interface is straightforward. You don't get non-deterministic errors or unclear messages.
No contradiction, really, it's just that we are talking about two different programming goals: I emphasize the goal of producing well-behaved software (especially when it comes to large software systems), while you emphasize the goal of producing software in an easier (more productive) manner. For my goal, a strong type system is a key differentiator. For your goal, a garbage collector is a key differentiator. The discussion probably comes down to the question of whether garbage-collected, dynamically typed Python is as "bug-prone" as manually memory-managed, statically typed C++. I have no significant experience with Python, so I cannot answer authoritatively, but I suspect your assumption that "100,000 lines of code of Python tend to be safer and more manageable than 100,000 lines of C++" might be wrong. In a large codebase, there will probably be many more dynamic-typing error opportunities (after all, the correct type has to be used for every operation, every function call, every calculation, every concatenation, etc.) than memory-management error opportunities (the correct alloc/dealloc/size has to be used for every pointer to a memory chunk; but only if C++ smart pointers are not used).
>but I suspect your assumption that "100,000 lines of code of Python tend to be safer and more manageable than 100,000 lines of C++" might be wrong.
I can give you my anecdotal experience on this aka "authoritative" in your words. I am a really really really good python engineer with over a decade of experience. For C++ I have 4 years of experience, I would say I'm just ok with it.
Python is indeed safer than C++. Basically, when you check for type errors at runtime, you actually hit all reasonable use cases pretty quickly. This is why unit testing works in reality even though you're only testing a fraction of the domain.
Sure, this isn't a static proof, but in practical terms static type checking is only minimally better than runtime type checking. You can only see this once you have extensive experience with both languages and you see how trivial type errors are. Practicality of technologies isn't a property you can mathematically derive; it's something you get a feel for once you've programmed enough in the relevant technologies. It helps you answer the question "How often do type errors slip past tests, and how hard are they to debug?" Not that much more often, and not hard at all.
The things that actually make C++ less usable are the errors outside of type checking: the memory leaks, the segfaults, etc. The GC basically makes memory leaks nearly impossible, and Python doesn't have segfaults, period. What Python does is fail fast and hard once you write something outside of memory bounds. Basically it has extra runtime checks that aren't zero cost but that make it much, much safer.
All of this being said, I am talking about type-less Python above... when I write Python, I am in actuality a type Nazi. I extensively use all available Python type hints, including building powerful compositional sum types to a far more creative extent than you can with C++. I am extremely familiar with types and with Python's types. I have a very detailed viewpoint from both sides of the spectrum and both languages. That's why I feel I'm qualified to say this.
>No contradiction, really, it's just that we are talking about two different programming goals: I emphasize the goal of producing well-behaved software (especially when it comes to large software systems), while you emphasize the goal of producing software in an easier (more productive) manner.
I'm actually partly saying both. Python is easier, more well-behaved, and safer. The "well-behaved" aspect has a causal relationship to "easier". It makes sense if you think about it: Python behaves as expected more than C++ does.
Literally, I repeat: Python (even without types) is categorically safer than C++. I have a total of 14 years of experience across both. I would say that's enough to form a realistic picture.
GC was one of the most important and relevant features (if not the most important) that allowed Java to penetrate, and eventually dominate the space where C++ used to be relevant in terms of middleware/business type applications. This detail matters a lot in this discussion. Then once that is taken as a given, you can compare different GC enabled languages based on other factors, such as type safety (or lack thereof in the case of python).
If it does matter to the conversation then it's evidence supporting my point. I'm saying type checking isn't a key differentiator between something like JS/ruby/python vs. C++. You're implying the GC is the key differentiator.
If you're saying that you CAN'T compare Python to C++ because of the GC, then I disagree. GC only stops memory leaks, and that is not the most frequent error that happens with C++. Clearly, if you just subtract memory-leak issues from C++, there's still a usability problem with C++.
GC is not just for memory leaks, but memory safety in general. It also enables several paradigms that are extremely difficult to get right without memory safety.
In order to have a proper comparison, you should control for variables that are irrelevant to the experiment. In this case, you want to look at the effect of typing, so you should control for GC. Which is why you should compare python to other GC'd static languages, but not to static non-GC'd languages.
>GC is not just for memory leaks, but memory safety in general.
No, this is not true. Memory safety and memory leaks are different concepts. You can trigger a memory leak without violating memory safety. In fact, a memory leak is not really an error recognized by an interpreter or a compiler or a GC. It is a logic error: a memory leak is only a leak because you interpret it as a leak. Otherwise the code is literally doing what you told it to do. I mean, think about it: the interpreter can't know whether you purposefully allocated 1 GB of memory or whether you accidentally allocated it.
Memory safety on the other hand is protection against violation of certain runtime protocols. The interpreter or runtime knows something went wrong and immediately crashes the program. It is a provable violation of rules and it is actually not open to interpretation like the memory leak was.
See Python: https://docs.python.org/3/library/gc.html. You can literally disable the GC (at runtime), and the only additional crash error that becomes more frequent is OOM. The GC literally just does reference counting and generational garbage collection... that's it.
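A minimal CPython sketch of that claim (Node is just an illustrative class): the cyclic collector can be switched off at runtime, and plain reference counting keeps freeing everything that isn't part of a cycle.

import gc

gc.disable()                  # turn the cyclic collector off at runtime

class Node:
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a       # reference cycle: refcounts can never reach zero
del a, b                      # the cycle is now unreachable, but nothing reclaims it

print(gc.collect())           # an explicit collection still works and reports the unreachable objects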
I can tell you what makes Python MORE memory safe than C++: it's just additional runtime checks that are not zero cost.
x = [1,2]
print(x[2])
The above triggers an immediate exception that names the type of error (IndexError: list index out of range) and the exact line that triggered it. This error occurs regardless of whether or not you disabled the GC. It happens because every index access on a list is checked against the stored length; if you're past that length, it raises an exception. It's not zero cost, but it's safer.
For C++:
int x[] = {1,2};
std::cout<<x[2]<<std::endl;
This triggers nothing. It will run even though index 2 is beyond the bounds of the array (it's undefined behavior: you'll typically read whatever garbage sits past the array). There is no runtime check, because adding one would mean the array data structure is no longer zero cost. This is what happens during buffer overflows. It's one of the things that makes C++ a huge security problem.
Let's look at the type issue.
from typing import List, Optional

def head(input_list: List[int]) -> Optional[int]:
    return input_list[0] if len(input_list) > 0 else None

x: int = head(2)
--------------------
#include <optional>
#include <vector>

std::optional<int> head(const std::vector<int>& input_list) {
    return (input_list.size() > 0) ? std::optional<int>(input_list[0]) : std::nullopt;
}

int main() {
    auto x = head(2);   // the intentional type error: 2 is not a std::vector<int>
    return 0;
}
Both pieces of code are equivalent. The Python is type annotated for readability (not type checked). But both produce essentially the same error message (wrong input type on the call to head). Both will tell you there's a type error; it's just that Python reports it at runtime and C++ reports it at compile time. C++ has a slight edge in that the error is caught as a static check, but this is only a SLIGHT advantage. Hopefully this example lets you see what I'm talking about, since both examples have practically the same outcome: a type error. Only a minority of bugs are caught exclusively by type checking, because the runtime still catches a huge portion of the same bugs... and in general this is why C++ is still MUCH worse in terms of usability than Python despite type checking.
I don't think anyone is arguing that C++ is more difficult to use than Python, and much less safe. The question is how does python stack up to Java or C#? As you can see in this thread and many other discussions on this forum and elsewhere, people with experience working on larger systems will tell you that it doesn't.
If you had held jobs in both stacks, as I have, you'd see that the differences are trivial. Python can get just as complex as either C# or Java.
The people you're copying your argument from likely only had jobs doing Java or C#, did some Python scripts on the side, and came to their conclusions like that. I have extensive production experience in both, and I can assure you my conclusions are much more nuanced.
Python and Java stack up pretty similarly in my experience. There are no hard red flags that make either language a nightmare to use when compared to the other. People panic about runtime errors, but like I said, those errors happen anyway.
Python does, however, have a slight edge in that it promotes a more humane style of coding by not enforcing the OOP style. Java programmers, on the other hand, are herded into doing OOP, so you get all kinds of service objects with dependency injection and mutating state everywhere. So what happens is that in Java you tend to get more complex code, while Python code can be more straightforward as long as the programmer doesn't migrate their OOP design patterns over to Python.
That's the difference between the two in my personal experience. You're most likely thinking about types. My experience is that those types are not that important, but either way, modern Python with external type checkers actually has a type system that is more powerful than Java's or C#'s. So in modern times there is no argument. Python wins.
But prior to that new Python type system, my personal anecdotal experience is more relevant and accurate than other people's, given my background in Java and Python and C++. Types aren't that important, period. They are certainly better than no types, but any practical contribution to safety is minimal.
> If you have errors in your program, does it matter that much if those errors are caught during runtime or compile time?
Of course it matters. If an error can be caught by the compiler, it will never get to production. Big win.
With typeless languages like Python the code will get to production unless you have 100% perfect test coverage (corollary: nobody has 100% perfect test coverage), and then at some unexpected moment it'll blow up there, causing an outage.
This happens with metronomic regularity at my current startup (python codebase), at least once a month. It is so frustrating that in this day and age we are still making such basic mistakes when superior technology exists and the benefits are well understood.
That's fine. A type checker won't catch everything. Runtime errors happen regardless. I find it unlikely that all the errors your codebase is experiencing are the result of type errors.
With something like C++, you get a runtime error and you have no idea where it lives or what caused it.
Your Python codebase delivers an error, but a patch should be trivial because Python tells you what happened. Over time these errors should become much less frequent.
That's a strawman, nobody has claimed a statically typed language will catch all possible errors.
It will however catch an important category of common errors at compile-time, thus preventing them from reaching production and blowing up there. Other types of logic error of course exist, in all languages.
> With something like C++, you get a runtime error and you have no idea where it lives or what caused it.
I don't know what this means? You seem to be suggesting that code in a statically typed language cannot be debugged? Clearly that's not true. Debugging is in fact usually easier because you can rule out the type errors that can't happen.
>I don't know what this means? You seem to be suggesting that code in a statically typed language cannot be debugged?
You don't know what it means probably because you don't have experience with C++. These types of errors are littered throughout C++. What you think I'm suggesting was invented by your own imagination. I am suggesting no such thing.
You talk about strawmen? What you said can be viewed as deception at its finest. I in no way suggested what you accused me of suggesting. Accusatory language is offensive. Just attack the argument... don't use words like "strawman" to accuse people of being deliberately manipulative here. We both believe what we're saying; there's no need to accuse someone of an ulterior agenda when ZERO motive for one exists.
What I am suggesting is that there is an EXAMPLE of a statically typed language that is FAR less safe and FAR harder to debug than a dynamically typed language (C++ and Python). This EXAMPLE can function as evidence that static type checking is not a key differentiator for safety, ease of use, or ease of debugging.
>Debugging is in fact usually easier because you can rule out the type errors that can't happen.
You don't get it. Type errors that happen at runtime or compile time carry the same error message. You get the same information, therefore you rule out the same things. Type checking is only doing extra work in the sense that it checks code that doesn't execute, while the runtime checks code that does execute.
Python was built with sane error messages and runtime checks that immediately fail the program and give you relevant information about where the error occurred. This is the key differentiator that allows it to beat out a language like C++, which has none of this. C++ does have static type checking, but that does little to make it better than Python in terms of safety and ease of use.
> You don't know what it means probably because you don't have experience with C++.
I started developing in C++ in 1992, so I have a few years with it. I've never run into the problems you seem to be experiencing.
> Type errors that happen at runtime or compile time contain the same error message.
Yes. But for the runtime error to occur, you need to trigger it by passing the wrong object. Unless you have a test case for every possible wrong object in every possible call sequence (approximately nobody has such thorough test coverage) then you have untested combinations and some day someone will modify some seemingly unrelated code in a way that ends up calling some distant function with the wrong object and now you have a production outage to deal with.
If you had been catching these during compile time, like a static type system allows, that can never happen.
>Yes. But for the runtime error to occur, you need to trigger it by passing the wrong object. Unless you have a test case for every possible wrong object in every possible call sequence (approximately nobody has such thorough test coverage)
And I'm saying from a practical standpoint manual tests and unit tests PRACTICALLY cover most of what you need.
Think about it. Examine addOne(x: int) -> int. The domain of the addition function is huge, almost infinite. Thus, from a probabilistic standpoint, why would you write unit tests with one or two numbers? It seems to make no sense, as you're only testing 2 inputs out of an effectively infinite domain. But that reasoning is flawed because it is in direct conflict with our behavior and intuition. Unit tests are an industry standard because they work.
The explanation for why it works is statistical. Let's say I have a function f:
assert(f(6) == 5).
The domain and the range are practically infinite. Thus, for f(6) to randomly produce 5 is a very low-probability event, because of the huge number of possibilities. This must mean f is not random. A couple of unit tests confirming that f produces these non-random, low-probability results demonstrates that the statistical sample you took carries high confidence. So, statistically, unit tests are practically almost as good as static checking. They are quite close.
This is what I'm saying. Yes, static checks catch more. But not that much more. Unit tests and manual tests cover the "practical" (keyword) majority of what you need to ensure correctness without going for an all-out proof.
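Here's a toy sketch of that sampling idea (the function and the bounds are made up): each test run checks the property on a random handful of points drawn from a domain far too large to enumerate.

import random

def add_one(x: int) -> int:
    # hypothetical function under test
    return x + 1

# a unit test is a tiny random sample of an effectively infinite input domain
for _ in range(1000):
    x = random.randint(-10**9, 10**9)
    assert add_one(x) - x == 1    # the property must hold on every sampled input

print("1000 sampled inputs passed")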
>If you had been catching these during compile time, like a static type system allows, that can never happen.
>I started developing in C++ in 1992, so I have a few years with it.
The other part of what I'm saying is that most errors that are non-trivial happen outside of a type system. Seg faults, memory leaks, race conditions etc... These errors happen outside of a type system. C++ is notorious for hiding these types of errors. You should know about this if you did C++.
Python solves the problem of segfaults completely and reduces the prevalence of memory leaks with the GC.
So, to give a rough anecdotal number: in practice a type system only catches roughly 10% more errors than a dynamically typed system would have caught anyway. That is why the type checker isn't the deal breaker, in my opinion.
I don't understand why you're talking about statistical sampling. Aside from random functions, functions are deterministic, unit testing isn't about random sampling. That's not the problem here.
Problem is you have a python function that takes, say, 5 arguments. The first one is supposed to be an object representing json data so that's how it is used in the implementation. You may have some unit tests passing a few of those json objects. Great.
Next month some code elsewhere changes and that function ends up getting called with a string containing json instead, so now it blows up in production, you have an outage until someone fixed it. Not great. You might think maybe you were so careful that you actually earlier had unit tests passing a string instead, so maybe it could've been caught before causing an outage. But unlikely.
Following month some code elsewhere ends up pulling a different json library which produces subtly incompatible json objects and one of those gets passed in, again blowing up in production. You definitely didn't have unit tests for this one because two months ago when the code was written you had never heard of this incompatible json library. Another outage, CEO is getting angry.
And this is one of the 5 arguments, same applies for all of them so there is exponential complexity in attempting to cover every scenario with unit tests. So you can't.
Had this been written in a statically typed language, none of this could ever happen. It's the wrong object, it won't compile, no outage, happy CEO.
This isn't a theoretical example, it's happening in our service very regularly. It was a huge mistake to use python for production code but it's too expensive to change now, at least for now.
> I don't understand why you're talking about statistical sampling. Aside from random functions, functions are deterministic, unit testing isn't about random sampling. That's not the problem here.
Completely and utterly incorrect. You are not understanding. Your preconceived notion that unit testing has nothing to do with random sampling is WRONG. Unit testing IS random sampling.
If you want 100% coverage from your unit tests you need to test EVERY POSSIBILITY. You don't, because every possibility is too much. Instead you test a few possibilities. How you select those few possibilities is "random." You sample a few random possibilities OUT OF a domain. Unit testing IS random sampling. They are one and the same. That random sample says something about the entire population of possible inputs.
>Next month some code elsewhere changes and that function ends up getting called with a string containing json instead, so now it blows up in production, you have an outage until someone fixed it. Not great. You might think maybe you were so careful that you actually earlier had unit tests passing a string instead, so maybe it could've been caught before causing an outage. But unlikely.
Rare. In theory, what you write is true. In practice, people are careful not to do this, and unit tests mostly prevent it. I can prove it to you: entire web stacks are written in Python without types. That means most of those unit tests were successful. Random sampling statistically covers most of what you need.
If it blows up production, the fix for Python happens in minutes. A segfault in C++? Well, that won't happen in minutes. Even locating the offending line, let alone fixing it, could take days.
>Following month some code elsewhere ends up pulling a different json library which produces subtly incompatible json objects and one of those gets passed in, again blowing up in production. You definitely didn't have unit tests for this one because two months ago when the code was written you had never heard of this incompatible json library. Another outage, CEO is getting angry.
Yeah, except, first off, in practice most people tend not to be so stupid as to do this; additionally, unit tests will catch this. How do I know? Because companies like Yelp have run typeless Python web stacks for years and years and this mostly works. C++ isn't used because it's mostly a bigger nightmare.
There are plenty of companies that have functioned very successfully for years using Python without types. To say that those companies are all wrong is a mistake. Your company is likely doing something wrong... Python functions just fine with or without types.
>And this is one of the 5 arguments, same applies for all of them so there is exponential complexity in attempting to cover every scenario with unit tests. So you can't.
I think you should think very carefully about what I said. You're not understanding it. Unit testing works. You know this. It's used in industry; there's a reason why WE use it. But your logic here implies something false.
You're implying that because of exponential complexity it's useless to write unit tests, because you are only covering a fraction of possible inputs (aka the domain). But that doesn't make sense, because we both know unit testing works to an extent.
What you're not getting is WHY it works. It works because it's a statistical sample of all possible inputs. It's like taking a statistical sample of the population of people: a small sample of people says something about the ENTIRE population. Just like a small number of unit tests says something about the correctness of the entire population of possible inputs.
>This isn't a theoretical example, it's happening in our service very regularly. It was a huge mistake to use python for production code but it's too expensive to change now, at least for now.
The problem here is that there are practical examples of Python in production that do work. Entire frameworks have been written in Python. Django. You look at your company but blindly ignore the rest of the industry. Explain why this is so popular if it doesn't work: https://www.djangoproject.com/ It literally makes no sense.
Also, if you're so in love with types, you can actually use Python with type annotations and an external type checker like mypy. These annotations can be added to your codebase without changing its runtime behavior. Python types with an external checker are actually more powerful than C++ types. It will give you type safety equivalent to a static language (with greater flexibility than C++) if you choose to go this route. I believe both Yelp and Instagram decided to add type annotations and type checking to their code and CI pipelines to grab the additional 10% of safety you get from types.
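As a sketch of that workflow (the function is hypothetical, and it assumes mypy is installed): the annotations are added to existing code, the interpreter ignores them at runtime, and a separate mypy pass in CI reports the type errors.

from typing import Dict, Optional

def find_user(users: Dict[str, int], name: str) -> Optional[int]:
    # the body is unchanged from the untyped version; only the signature gained annotations
    return users.get(name)

find_user({"alice": 1}, 42)   # plain `python` runs this without complaint; `mypy` rejects the int argument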
But do note, both of those companies handled production Python JUST FINE before Python type annotations existed. You'd do well to analyze why your company has so many problems and why Yelp and Instagram supported a typeless Python stack just fine.
I think it is simpler than that: C++ is an incredibly complex and verbose language. Most of web development is working with strings, and C++ kinda sucks there. There is also a compilation/build step, so overall productivity is lower. Python is "easier" all the way around (we'll ignore the dependency management/packaging debates.)
It depends on how you define "safer." Runtime errors with Python happen frequently in large programs due to poor type checking. Often internal code is not well documented (or documented incorrectly), so you may get back a surprise under certain conditions. Unless you have very strict tooling like mypy, very high test coverage, etc., there is less determinism with Python.
Also, this may come as a surprise, but many people do not run or test their code. I've seen Python code committed that was copy-pasta'd from elsewhere and has missing imports, for example. Generally this is in some unhappy path that handles an error condition, which was obviously never tested or run.
I know it happens "all the time," but these runtime errors surface fast and loud. You catch most of these issues while testing your program.
Statistically, more errors are caught by the Python runtime than by an equivalent type-checked C++ program, simply because the Python user interface fails hard and fast with a clear error message. C++, on the other hand, doesn't do this at all; the symptoms of the error are often unrelated to the cause. Python is safer than C++. And this dichotomy causes insight to emerge: why did Python beat C++?
In this case the type checker is irrelevant. Python is better because of clear and deterministic errors and hard, fast failures. If this is exemplary of the dichotomy between C++ and Python, and if type checkers are irrelevant in this dichotomy, it points to the possibility that type checking isn't truly what makes a language easier to use and safer.
The current paradigm says Rust and Haskell are great because of type checking. This is an illusion. I initially thought this as well.
Imagine a type checker that worked like C++: non-deterministic errors and obscure error messages. Sure, your program can't compile, but you are suffering from many of the same problems; it's just that everything is moved to compile time.
It's not about type checking. It's all about traceability. This is the key.
>there is less determinism with Python
You don't understand the meaning of the word determinism. Python is almost 100 percent deterministic. The same program run anywhere with an error will produce the same error message at the same location, all the time. That is determinism. Type checking and unit testing do not correlate with this at all.
I think it's better to catch errors sooner than later. This is where type checking helps. I've seen plenty of Python code that takes a poorly named argument (say "data").. is it a dict? list? something from a third party library like boto3? If it's a dict, what's in the dict? What if someone suddenly starts passing in 'None' values for the dict? Does the function still work? Almost nobody documents this stuff. Unless you read the code, you have no idea. "Determinism" of code is determined based on inputs. Type checking helps constrain those inputs.
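A small sketch of that point (the shapes here are invented): the annotated version documents and constrains what `data` is allowed to be, while the untyped one forces you to read the implementation.

from typing import List, Optional, TypedDict

def first_name(data):
    # untyped: is `data` a dict? a list? a boto3 response? you have to read the body to know
    return data["items"][0]["name"]

class Item(TypedDict):
    name: str

class Payload(TypedDict):
    items: List[Item]

def first_name_typed(data: Payload) -> Optional[str]:
    # typed: the signature itself documents and constrains the input
    return data["items"][0]["name"] if data["items"] else None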
As for C++ "non-determinism": If you write buggy code that overwrites memory, then of course you're going to get segfaults. This isn't C++'s fault.
I've seen plenty of code in all languages (including Python) that appears to exhibit chaotic runtime behavior. At a previous company, we had Python apps that would bloat to gigabytes in size and eventually OOM. Is this "non-determinism"? No, it's buggy code or dependencies.
>I think it's better to catch errors sooner than later. This is where type checking helps.
Agreed. It is better. But it's not that much better. That's why python is able to beat out C++ by leagues in terms of usability and ease of debugging and safety. This is my entire point. That type checking is not the deal breaker here. Type checking is just some extra seasoning on top of good fundamentals, but it is NOT fundamental in itself.
>As for C++ "non-determinism": If you write buggy code that overwrites memory, then of course you're going to get segfaults. This isn't C++'s fault.
This doesn't happen in python. You can't segfault in python. No language is at "fault" but in terms of safety python is safer.
This language of "which language is at fault" is the wrong angle. There is nothing at "fault" here. There is only what is and what isn't.
Also, my point was that when you write outside of memory bounds, anything could happen. You might not even get a segfault. That's part of what makes C++ so user-unfriendly.
>I've seen plenty of code in all languages (including Python) that appears to exhibit chaotic run time behavior. At a previous company, we had apps that Python would bloat to gigabytes in size and eventually OOM. Is this "non-determinism"? No, it's buggy code or dependencies.
This is literally one of the few things that is non-deterministic in Python or dynamic languages: memory leaks. But these are very, very hard to trigger in Python. Another thing you should realize is that this error has nothing to do with type checking; type checking is completely orthogonal to this kind of error.
>I think it's better to catch errors sooner than later. This is where type checking helps. I've seen plenty of Python code that takes a poorly named argument (say "data").. is it a dict? list? something from a third party library like boto3? If it's a dict, what's in the dict? What if someone suddenly starts passing in 'None' values for the dict? Does the function still work? Almost nobody documents this stuff. Unless you read the code, you have no idea. "Determinism" of code is determined based on inputs. Type checking helps constrain those inputs.
When you get a lot of experience, you realize that "sooner" rather than "later" is better, but not by that much. Again, the paradox rears its head here: Python forwards ALL type errors to "later" while C++ makes all type errors happen sooner, and Python is STILL FAR EASIER to program in. This is evidence that type checking does not improve things by much. Other aspects of programming have FAR more weight on the safety and ease of use of the language. <-- That's my thesis.
Well, we do agree on something! I too much prefer programming in Python over C++. I honestly hope I never have to touch C++ code again. It's been about 5 years.
I try to add typing in Python where it makes sense (especially external interfaces), mostly as documentation, but am not overly zealous about them like some others I know. Mostly I look at them as better comments.
>I try to add typing in Python where it makes sense (especially external interfaces), mostly as documentation, but am not overly zealous about them like some others I know. Mostly I look at them as better comments.
See, you don't type everything because it doesn't improve things from a practical standpoint. You view annotations as better comments rather than additional type safety. You leave holes in your program where certain parts aren't type checked. It's as if only half of C++ were type checked; one would think that'd be a nightmare to program in, given that we can't assume type correctness everywhere in the code. But this is not the case.
Your practical usage of types actually proves my point. You don't type everything. You have type holes everywhere and things still function just fine.
I type everything for that extra 1% in safety. But I'm not biased: I know 1% isn't a practically significant number. I do it partly out of habit from my days programming in Haskell.
You shouldn't compare a Python web stack with a C++ web stack, as C++ and Python target very different use cases.
You can compare however with a Java or C# web stack, both of which offer a superior developer experience, as well as a superior production experience (monitoring, performance, package management, etc.).
And C++ is a worse language, in so many aspects, that you need everything available just to tame it.
In contrast, other langs like Python have the luxury of seeing what C/C++ did wrong and improving on it.
Just having a `String` type, for example, is a massive boost.
So for them, the type system has already improved the experience!
---
So this is key: langs like Python have a type system (and that includes the whole space from syntax to ergonomics - like `for i in x` - to semantics), and the impact of adding a "static type checker analysis" on top is reduced thanks to that.
And if the benchmark for a "static type checker analysis" is what C++/C#/Java offered (at least at the start), then the added value is not much.
It's only when you go for ML-style type systems that the value of a static checker becomes much more profitable.
Hindley-Milner allows for flexibility in your types, and thus high abstraction and usability in code. The full abstraction of categories allows for beautiful and efficient use of logic and code, but it's not safety per se.
Simple type systems can also offer equivalent safety with less flexibility. What makes Haskell seem safer is more the functional part combined with type safety. Functional programming eliminates out-of-order errors where imperative procedures were done in the wrong order.
Well, there's an irony to your statement. The programmers who write embedded systems (I'm one of them) tend to use C++. C++ lacks memory safety and has segfaults; Python doesn't. They literally use one of the most unsafe programming languages ever, one that often doesn't alert you to these errors at compile time or at runtime.
C++ is chosen for speed, not for safety. The number of runtime and compile-time checks C++ skips is astronomical. The passengers may think it matters, but the programmers of those systems, by NOT using a language that does those compile-time or runtime checks, are saying it doesn't.
> Why does 100,000 lines of code of Python tend to be safer and more manageable than 100,000 lines of C++ despite the fact that Python has no type checker and C++ has a relatively advanced type checker?
Because C++ sucks, but static types are not to blame for that.
My point here is that static types didn't do much to improve C++. We should be focusing on what made C++ bad. The things that made C++ bad, and the fixes for those things, are what make Python good.
I'm saying type checking is not one of those things.
Of course. I'm a Python guru. I know Python type annotations inside and out. I'm a type Nazi when it comes to writing Python.
That's why I know exactly what I'm talking about. I can say, without bias, that from a practical standpoint the type checker simply means you run the "python" application less and the "mypy" application more.
Example:
def addOne(x: int) -> int:
    return x + 1

addOne(None)
The above... if you run the interpreter on it, you get a type error. Pretty convenient, you can't add one to None.
But if you want type checking, you run mypy on it and get essentially the SAME type error. They are effectively the same thing; one error happens at runtime, the other happens before runtime. No practical difference. Your manual testing and unit testing should give you practically all the safety and coverage you need.
Keyword here is "practically." yes type checking covers more. But in practice not much more.
Sure, but the time delta is inconsequential. Why? Because you're going to run that program anyway. You're going to at the very least manually test it to see if it works. The error will be caught. You spend delta T time to run the program; either you catch the error after delta T or at the beginning of delta T. Either way, you spent delta T time.
Additionally, your example code looks like data science work, as nobody loads huge databases into memory like that. Usually web developers will stream such data or preload it for efficiency. You'll never do this kind of thing in a server loop.
I admit it is slightly better to have type checking here. But my point still stands. I talk about practical examples where code usually executes instantly. You came up with a specialized example where code blocks for what you imply to be hours. I mean, it has to be hours for that time delta to matter; otherwise minutes of extra execution time is hardly an argument for type checking.
Let's be real, you cherry-picked this example. It's not a practical example, unfortunately. Most code executes instantaneously from the human perspective. Blocking code to the point where you can't practically run a test is very rare.
Data scientists, mind you, from the ones I've seen, typically don't use types with the little test scripts and model building that they do. They're the ones most likely to write that kind of code. It goes to show that type checking gives them relatively little improvement to their workflow.
One other possibility is that expensive_computation() lives in a worker processing jobs off a queue. A possible but not the most common use case. Again, for this, the end-to-end or manual testing procedures will likely load a very small dataset, which in turn makes the computation fast. Typical engineering practices and common sense lead you to uncover the error WITHOUT type checking being involved.
To prove your point you need to give me a scenario where the programmer won't ever run his code. And this scenario has to be quite common for it to be a practical scenario, as that's my thesis. Practicality is the keyword here: types are not "practically" that much better.
I would not use C++ in your comparison. Try with C# or Java. Not even close. They will crush in developer productivity and maintenance over Python, Ruby, Perl, JavaScript.
First off, Python now has types (you can add type annotations to the code and run an external type checker), and JavaScript people use TypeScript. In terms of type safety I would argue Python and JavaScript are now EQUAL to C# and Java.
Developer productivity in these scripting languages is also even higher, simply because of how much faster their code-then-run/test loop is. Java and C# can have long compile times; Python and TypeScript are typically much, much quicker. With the additional type safety, Python and TypeScript are actually categorically higher in developer productivity than C# or Java.
But that's beside my point. Let's assume we aren't using modern conventions and that JavaScript and Python are typeless. My point is that whether or not C# or Java crushes Python and JavaScript on maintenance, it doesn't win because of type checking.
You wrote: <<Java and C# can have loong compile times.>> Yes, for initial build. After, it is only incremental. I have worked on three 1M+ line Java projects in my career. All of them could do initial compile with top spec desktop PC in less than 5 mins. Incremental builds were just a few seconds. If your incremental build in Java or C# isn't a few seconds, then your build is broken. Example: Apache Maven multi-module builds are notoriously slow. Most projects don't really need modules, but someone years ago thought it was a good idea. Removing modules can improve compile time by 5x. I have seen it with my own eyes.
><<Java and C# can have loong compile times.>> Yes, for initial build. After, it is only incremental.
I work with C++ currently. Even the incremental build is too slow. Also, eventually you have to clear the cache for various reasons: debugging, a new library, etc.
1M lines of Python is 0s of compilation time. You hit the runtime section instantaneously.
Go was created with fast compilation times to get around this problem. I would say that in terms of compilation, Go is basically the closest to the Python experience.
Basically, when things are fast enough, "5x compilation time" isn't even thought about, because things are too fast to matter anyway. Go hits this area as well as Python does (given that Python has no compile step).
Types got a bad rap because of C++. There was a strange dichotomy between languages like Python/JavaScript and C++: if type systems were so good, why was it easier to program in JavaScript and Python than in C++? People got confused and promoted dynamically typed languages as better.
What many people didn't realize was that C++ was hard DESPITE the type system, not because of it. This was soon rectified with TypeScript, which eventually caused a complete flip of opinion in the industry once JavaScript developers realized how much better it is.
The other question in this equation is: why was Python so easy to program in DESPITE not having a type checker (it has external type checkers now, but I'm talking about before this)?
The answer is deterministic errors and easy traceability. If you have an error that happens either at runtime or at compile time, you want to easily know what the error is, where it came from, and why it occurred. Python makes it VERY easy to do this. Not all type checkers make this easy (see C++).
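A trivial sketch of what I mean by traceability (the function is made up): the failure names the exception, the offending line, and the full call chain, so the what, where and why are handed to you.

def parse_port(config: dict) -> int:
    # hypothetical config lookup
    return int(config["port"])

parse_port({})   # raises KeyError: 'port', with a traceback pointing at the exact line above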
In actuality type checking is sort of sugar on top of it all imo. Rust is great. But really the key factor to make programming more productive is traceability. Type checking, while good is not the key factor here.
Think about it. Whether the error occurs at runtime or compile time is beside the point. Compile time adds a bit of additional safety, but really if an error exists, it will usually trigger at some point anyway.
The thing that is important is that when this error occurs whether compile time or runtime you need as much information about it as possible. That is the key differentiator.
That is why typeless python and typed rust, despite being opposites, are relatively easy to write complex code for when compared to something like C++.
> Whether the error occurs at runtime or compile time is beside the point. Compile time adds a bit of additional safety, but really if an error exists, it will usually trigger at some point anyway.
Well, if the error is at compile time, there's no chance that code makes it to production and affects customers.
If the error is at runtime, you need to have tested that edge case and if you haven't, there could be customer impact.
I mean, once you see a few TypeErrors in Python code with no type annotations, or a few NullPointerExceptions in Java where there's no compile time null checking by default, I think it becomes very clear that catching things at compile time is much better...
I agree with you, but I'm saying things from a practicality standpoint.
Let me put it this way. If you have type checking, I'm saying from my anecdotal experience that you probably catch 10% more errors before deployment than you would normally catch. The reason is that you're bound to run runtime tests anyway, and those tests cause you to correct all your little type bugs anyway.
And these aren't even errors you wouldn't have caught. It's just about catching the errors earlier.
That's it. Catching 10% of errors after deployment rather than before... that is not a huge deal breaker. Type checking benefits are marginal in this sense. Yes, agreed, it's better, but it's not the deal breaker.
I'm trying to point out the deal-breaker feature: the delta that causes Python to be BETTER than C++ in terms of usability and safety. That type checking is a negligible factor in that delta is basically my thesis. This is subtle. A lot of people are going on tangents, but that is my point.
This is offensive. To suggest my opinion has no connection to reality?
Read what I wrote carefully. I'm not talking about type errors. I'm talking about errors in GENERAL.
Your comment is carefully tailored to incite a flame war. You represent HN bias at its finest: reading something and giving a casual dismissal without really interpreting it.
But the topic here is type systems, and Rust's is more like C++'s than any other, and vice versa. C++ compilers used to have error messages that were hard to interpret, but competition between compilers has improved them.
Meanwhile, C++ itself has changed enabling better error messages because it is clearer what your code is trying to do.
You just can't stop, can you? There's really zero need to say this other than to be an ass. I don't hate C++. I chose to write C++ as my day-to-day job, after quite some time doing Python, because it's a challenge. There is no hate. But I have no loyalty to the language either. That is my way: no loyalty and therefore no bias. C++ is definitively less safe than Python. This is fact, and that is why I program in it.
I am also talking about type systems, not just C++. However, I AM using C++ as an example. It is flawed despite having a stronger type system than Python. The paradoxical dichotomy between the two languages is the quintessential example of my point: the type system is not essential to safety. It's an illusion. Types are simply sugar on top of it all, because in essence you're just relying on error messages to resolve all the errors. Whether those errors happen at compile time or runtime isn't a big deal.
That is the point. I'm not waging a war against C++. I'm EXPLAINING to people like you who don't bother to read carefully or think carefully.
>But the topic here is type systems, and Rust's is more like C++'s than any other, and vice versa.
Categorically wrong. Rust's type system is derived from Hindley-Milner. See: https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_sy.... Haskell is the language famous for using this type system. In short, Rust's type system comes from functional programming, while C++'s type system is derived from OOP origins.
>but competition between compilers has improved them.
I use C++ every day for my work. It may have improved, but it's still horrible overall.