Not reading an article because of that would be like not reading other code because it uses 2 spaces for tabs instead of 4. Makes no difference to the language, only to you.
The article (justifying not using semicolons) seems like a rant with no real aim, beyond "I am right, semicolons are wrong, and I have all these half-baked ideas to back me up".
The author was actually annoying me by the time I reached "It's good coding style".
One example:
> My advice on JSLint: don’t use it. Why would you use it? If you believed that it helps you have less bugs in your code, here’s a newsflash; only people can detect and solve software bugs, not tools. So instead of tools, get more people to look at your code.
Pretty sure lots of people use JSLint, pep8 checkers, gofmt, or whatever the equivalent tool for the language at hand is. They certainly help; one cannot deny that.
Then the author goes on to pick at Crockford for suggesting people space their JS with four spaces… Yep, I'm done.
Giving advice to not use JSLint without a real alternative is very naive in my opinion. When I was working with a team of developers with some novice JS developers, JSLint was a godsend to pick up simple, avoidable bugs while enforcing some sanity with regards to code style.
The thing is, semicolons do make a difference with JavaScript.
It's now 404ing, but there was a pretty well-known argument on one of Twitter Bootstrap's GitHub issues[1]. The Bootstrap guys didn't use semicolons, and it caused problems when minifying the code. It's an edge case, I know, but it was a problem.
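The failure mode behind that kind of issue can be sketched generically (hypothetical code, not the actual Bootstrap source): without a semicolon, a following line that starts with "[" or "(" silently continues the previous statement, which is exactly what happens when a minifier concatenates files.

```javascript
// ASI does NOT insert a semicolon after "scores" here, because the next
// line begins with "[" and is parsed as a property access on "scores".
const scores = [10, 20, 30]
const first = scores
[0, 1]              // parses as scores[0, 1]: the comma operator yields 1,
                    // so this is scores[1]
console.log(first)  // 20, not the array [0, 1] the author may have intended
```

A trailing semicolon after `scores` (or a leading one before `[0, 1]`, the "defensive semicolon" some no-semicolon codebases use) removes the ambiguity.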
I never understood the whole anti-semicolon thing anyway. It just seems really hipster to me. Use CoffeeScript if you don't want semicolons.
Also note that ultimately, regardless of personal semicolon preference, this was an issue because of a parsing bug in JSMin. It was fixed in JSMin with this commit[0]. Other minifiers such as Closure and YUI did not exhibit any issues with the code as it had been written.
I've always equated javascript without semi-colons with missing comments and poor commit messages. It's just inconsiderate to other developers who have to read your code.
I understand that it is a "feature" of javascript (a misfeature, in my opinion), but the claim that "everyone else does it" is not a good argument is false. We've got something like 15 years of javascript written with semicolons. Stop being an inconsiderate prick and just friggin write the code like everyone else!
Don't be that guy (or gal). Comment your code. Write good commit messages. Use semicolons in your javascript.
It shouldn't; they genuinely are optional, and JavaScript coders need to know when semicolons will be inserted, because you can't turn that behaviour off.
However, needing to understand the rules is one thing - I much prefer to see the semi-colons in there. I dislike seeing any formatting that is substantially different from the accepted norm and find it harms readability.
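A minimal sketch of the rule you can't turn off, using the classic bare-return case:

```javascript
// ASI terminates a bare "return" at the newline, so this function
// returns undefined, not the object literal below it.
function getConfig() {
  return
  {
    mode: "dark"   // parsed as a labeled statement inside an unreachable block
  }
}
console.log(getConfig()) // undefined
```

Whether or not you write semicolons elsewhere, you have to know this rule, because no semicolon style prevents it; only keeping the `{` on the same line as `return` does.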
If it caused problems when minifying the code, the problem was with the minifier not parsing correctly - a minifier shouldn't change the way the code is interpreted.
I understand that using existing minifiers is a possible reason to use semicolons, but not using semicolons is not an inherent issue here.
Why does it seem hipster to use a language feature? Is it hipster to use null coalesce as well?
The article linked above explains the reasons to omit semi-colons very well. I would personally prefer if JS forced you to terminate all statements with a semi-colon to avoid any ambiguity, but there you go.
Speaking of ambiguity caused by whitespace, CoffeeScript is a first-degree offender here. All you have to do is indent the wrong block of code and you completely change the scope of a nested function, with no visual indication of your mistake whatsoever.
My interpretation of "hipster" is being different because you think it's cool to be different. Being different because you think it's a better approach is called "making progress", either because you'll be proven right or proven wrong.
I've thought about the de-facto standard way a lot and I think that it does nothing to avoid bugs while potentially misleading a coder about the language. Therefore I think it's worse.
The only reason I follow the de-facto standard is because the time spent arguing about it with my peers is better spent getting work done.
Nobody has good reasons that apply to most people, either for or against semicolons. There really isn't a huge difference in either style except in rare cases.
> The only reason I follow the de-facto standard is because the time spent arguing about it with my peers is better spent getting work done.
Which is why that is an excellent reason. Standards are useful, so if there is one, stick to it. If there is no reason to go against the standard, then don't.
> I think that it does nothing to avoid bugs while potentially misleading a coder about the language. Therefore I think it's worse.
Not seeing any semicolons can also potentially mislead about the language. It is not worse; both are misleading until you realise ASI exists.
Omitting semicolons also does nothing to avoid bugs, and introduces a different (additional?) set of edge cases where bugs may appear.
> Also, a lot of the weird ambiguous cases are disambiguated by including parenthesis, so if you're unsure just include the parens and you should be ok.
this is exactly the parallel I was going to make to the discussion we're currently having here. People who want semicolons on every line, even when they don't matter, are the same people who want to parenthesize every infix operation, even when the natural precedence the expression has without parentheses is already correct. I'm not sure I understand it in either case--are you afraid that someone might edit the code without understanding the "implicit defaults" of the language's syntax? Why are you letting that person near your codebase?
Some programmers are good at remembering large numbers of arbitrary rules. Others aren't, and I've worked with good and bad programmers of both kinds.
So yes, I'm worried that a colleague might read the code and not know what the precedence is, and they will have to waste their time looking it up (thankfully some IDEs now have a command to add parentheses quickly, but it's still a distraction from their actual task).
Pretty much all languages have some features that are more confusing than helpful, and good codebases avoid using those features (whether via formal policy or not). IMO most precedence rules fall into that category; it would be better if e.g. "a && b || c" were a syntax error until bracketed properly.
If the code currently works, then they can read it, and infer that whatever precedence the operators have is the correct one for producing the result the code produces. If "a + b * c" is producing 17 where (a=2,b=3,c=5), then you know that your language makes multiplication precede addition.
If the code doesn't currently work, then they'll have to figure out via some external method (looking up the original formula used in the code, say) what the precedence needs to be, in order to parenthesize to make it work.
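The inference described above can be checked directly; with a=2, b=3, c=5:

```javascript
// If the working code produces 17, multiplication must bind tighter
// than addition; left-to-right grouping would have given 25.
const a = 2, b = 3, c = 5
const result = a + b * c
console.log(result)        // 17 → parsed as a + (b * c)
console.log((a + b) * c)   // 25 → the alternative grouping
```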
On a separate note,
> it would be better if e.g. "a && b || c" were a syntax error until bracketed properly.
this reminds me of the horribly-confusing practice of using "a && b || c" to mean "a ? b : c" in shell-scripting. It almost works, too... unless (a=true,b=false), in which case you unintentionally get the side-effects of c.
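The same mismatch can be sketched in JavaScript, where the logic is identical to the shell idiom: "a && b || c" falls through to c whenever b is falsy, even though a succeeded.

```javascript
// "a && b || c" is NOT equivalent to "a ? b : c" when b is falsy.
const andOr   = (a, b, c) => (a && b) || c
const ternary = (a, b, c) => (a ? b : c)

console.log(andOr(true, false, "fallback"))   // "fallback" — c is evaluated although a was true
console.log(ternary(true, false, "fallback")) // false — the intended result
```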
>If the code currently works, then they can read it, and infer that whatever precedence the operators have is the correct one for producing the result the code produces. If "a + b * c" is producing 17 where (a=2,b=3,c=5), then you know that your language makes multiplication precede addition.
By that logic why bother using a font in which * and + look like different symbols? Heck, why read the code at all? If the code is working you can infer what it must be doing by observing what comes out when you feed it different inputs.
You read code precisely because you don't know what it does for every input, or don't know how it implements the algorithm; you want to be able to look at a line and see what it does, without having to fire up a repl and run through several examples. I mean, the idea that code should be readable - i.e. that you should be able to tell what a given line of code does without having to run it or look it up - is about as fundamental a good coding principle as it gets.
When you read "a + b * c"--and then test your assumption of what it does in a REPL--the result is a learning moment where that knowledge now sticks to you; from then on, you know which of the two operators comes first. You only have to do it once.
On the other hand, "using a font in which * and + look like [the same symbol]" means never being able to recognize the pattern, which means never learning anything and having to check every time.
Also,
> the idea that code should be readable - i.e. that you should be able to tell what a given line of code does without having to run it or look it up - is about as fundamental a good coding principle as it gets.
I would agree that that is a good coding principle for low-level C/C++/Java code; in these languages, you can't separate the abstract meaning of code from its implementation, so it's better to just keep the two things together.
But on the other hand, the equivalent coding principle for Lisp is "create a set of macros which form a DSL to perfectly articulate your problem domain--and then specify your solution in that DSL." The equivalent for Haskell is "find a Mathematical domain isomorphic to your problem domain; import a set of new operators which match the known Mathematical syntax of that domain; and then state your problem in terms of that Mathematical domain by using those operators." In either of these cases, nobody can really be expected to just jump in and read the code without looking something, or a lot of somethings, up.
This isn't because the code is "objectively bad", but rather that it has externalized abstractions which in a lower-level language like C would have to be written explicitly into the boilerplate-structure of your code. These are just two different cultures.
>When you read "a + b * c"--and then test your assumption of what it does in a REPL--the result is a learning moment where that knowledge now sticks to you; from then on, you know which of the two operators come first. You only have to do it once.
If you're good at memorizing essentially arbitrary rules, then you only have to do it once. For + and * you can argue that the precedence is standard in the domain language (mathematics), but C-like languages often have a dozen or more operators, each with its own place in the order, and there's no natural reason why >> should be higher or lower than /.
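A concrete instance of that arbitrariness, in JavaScript (which inherits C's table): >> binds looser than arithmetic, which regularly surprises readers.

```javascript
// ">>" has lower precedence than "+", so the addition happens first.
const shifted = 16 >> 1 + 1   // parsed as 16 >> (1 + 1)
console.log(shifted)          // 4
console.log((16 >> 1) + 1)    // 9 — the other plausible reading
```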
>The equivalent for Haskell is "find a Mathematical domain isomorphic to your problem domain; import a set of new operators which match the known Mathematical syntax of that domain; and then state your problem in terms of that Mathematical domain by using those operators." In either of these cases, nobody can really be expected to just jump in and read the code without looking something, or a lot of somethings, up.
Sure - code is written in the language of the domain. If you don't know what an interest rate calculation is then even the best-written implementation won't be readable to you. But that domain terminology should make sense (indeed a large part of understanding a field is understanding its terminology), whereas many language precedence rules don't - they're completely arbitrary, there's no way to derive them from first principles if you forget.
I'd argue that a language where you don't have to memorize a precedence list (such as Lisp, where precedence is always explicit from the syntax: one simply can't write an ambiguous (+ a b * c), so I'm kind of surprised you mention it; I'd be surprised if Haskell is different in this regard) is, all other things being equal, better than a language where you do have to memorize one. But rather than having to write a whole new language, it's more lightweight to form a "dialect" by declaring "we will write C (or whatever), but only use constructs that do not require memorizing the precedence table".
Terrible article, in my opinion, teaching some really bad habits that will cause hard-to-find bugs over time. Truth is, omitting semi-colons is probably fine at small scale, but for a larger javascript code base with multiple maintainers, using them is definitely the better practice in my opinion. Oh well, no point in me saying what has already been said :)
I agree with you, but still, I look at (a lot of) JavaScript every day and it's hard to explain, but for some reason I find the lack of semicolons disturbing. I have no problem with CoffeeScript and I've also worked with Python a lot, so I appreciate a language without semicolons, but I don't approve of mixing the styles.
I think this is more of a psychological thing than a syntax thing, because if you're used to the syntax of a language, it gets embedded in your brain, and with years of practice you can spot a missing semicolon from a mile away after only a quick glance over the code. People are afraid of change, and if something they are used to is taken away, they become unsettled.
Or their brain explodes if they see JavaScript with missing semicolons :P.
Maybe it is not necessary, but I don't see anything against it. That was the weird thing about the article: all kinds of semi-legit reasons why it is not necessary to use them, but no reason why not to use them. I would say if there are some semi-legit reasons to use them and no reason not to, then just use them :)
I totally understand the preference for using explicit statement termination in JS. I can even understand the general preference for a non-whitespace token to terminate a statement.
But I don't understand why that would cause someone's brain to explode or otherwise make it difficult for them to pick out the author's points about the language from the example code.
While I was looking at the JavaScript code my brain almost exploded.