Really? I think it is terrible when compared to other languages (C#, Java, Ruby, Python, etc.). I have only done a few applications on Node, but my experience with including modules was painful, to say the least.
I'd have to say the biggest thing that npm has over the module systems found in Java, Ruby, Python, etc. is the complete isolation of transitive dependencies. It is nice to use two dependencies and not waste a day or two because:
* module A depends on module C v1.0.2
* module B depends on module C v1.4.3
In all the languages you mentioned it becomes a pain because you can only use one version of module C, meaning either module A or B simply will not work until you find a way around it.
This is a cultural problem, not a module management problem per se. Blame the author of module C, not the package management system.
Semantic versioning's raison d'être is to prevent these sorts of issues. Keeping the same major version number is supposed to constitute a promise not to remove or change the call signatures of any functions published by your library. This is necessary so that a dependent library can declare a dependency on version 1.x.x of your library when version 1.1.0 is released, without having to worry that version 1.2.0 of your library will break things.
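To make the promise concrete, here's a hand-rolled sketch of the "same major = compatible" rule; this is an illustration only, not the real `semver` package that npm uses:

```javascript
// A dependent that declared a dependency on 1.x.x (at or above some
// base version) should accept any later 1.x.x, but never a 2.0.0.
function compatibleWith(version, base) {
  const [maj, min, pat] = version.split('.').map(Number);
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  if (maj !== bMaj) return false;      // major bump = breaking change
  if (min !== bMin) return min > bMin; // newer minor within same major is fine
  return pat >= bPat;                  // newer patch is fine
}

console.log(compatibleWith('1.1.0', '1.0.0')); // true  -- safe upgrade
console.log(compatibleWith('1.2.3', '1.0.0')); // true
console.log(compatibleWith('2.0.0', '1.0.0')); // false -- breaking
console.log(compatibleWith('1.0.0', '1.1.0')); // false -- older than required
```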
The problem is that too many library and module authors (who are otherwise talented, or are simply the first provider of a useful library that ends up gaining traction) refuse to follow the rules, and there's no effective sanction in the OSS marketplace for this sort of antisocial behavior.
As soon as a backward-incompatible change is introduced without bumping the major version number, the dependent module author becomes paranoid (and, being closer to the user, he's going to wrongly get a disproportionate amount of blame), and (rightly) feels he has no other option but to declare a strict version dependency. And when there is more than one dependent module involved, the misbehavior of the independent module author can cause a dependency graph that is impossible to satisfy.
As far as I can tell, this whole situation started with Ruby, and is the main reason (along with second-class documentation) why I am generally averse to its ecosystem. rvm and its ilk shouldn't even have to exist.
> Semantic versioning's raison d'être is to prevent these sorts of issues.
Semver may surface them by making it very clear (assuming all involved libraries use semver) where they can occur, but if you have a package management/loading system that only allows one version of a particular package to be loaded, it obviously can't do anything to prevent the situation where different dependencies rely on incompatible versions of the same underlying library.
Sure, with semver it won't happen if A depends on C v1.0.1 and B depends on C v1.4.3 (as A and B can both use C v1.4.3), but it will still happen if A depends on C v1.0.1 and B depends on C v2.0.0.
To actually avoid the problem, you need to isolate dependencies so that they aren't included globally but only into the package, namespace, source file, or other scope where they are required.
That's all good and well, but these packaging problems happen and they'll continue to happen, so wouldn't you rather have a system like npm that can tolerate mediocre packaging than one that doesn't? When you're trying to fix clashing dependencies, are you really going to care about whether those clashes are an intrinsic or a cultural problem?
Regarding semantic versioning: it works in theory, but in practice applications often end up relying on bugs, private APIs, or other kinds of non-public behavior. For example, the GNOME libraries have been following semantic versioning forever, yet sometimes an upgrade breaks something else because that something else was relying on a bug. In 2004 there was a famous case where upgrading your glib would break gnome-panel. This is of course not to say that semantic versioning is useless, but in practice you will still need some kind of version pinning system.
As for "rvm and its ilk shouldn't even have to exist": you do realize that rvm and its ilk are not just to allow you to pin your software to a specific Ruby version, right? They're also there to allow you to easily upgrade to newer versions. Let's face it, compiling stuff by hand sucks, and your hair has turned white by the time the distro has caught up.
> because you can only use one version of module C
This is not strictly true in Java. You can set up your own classloaders that allow you to load multiple different versions of the same class and hand out the right instances on demand. (This requires some work of your own, since classes by themselves are not versioned, but you can layer a simple versioning scheme on top of, say, jar files to solve this.)
Obviously not trivial, especially if you are rolling your own. But you can use an OSGi implementation to do most of this for you in a standard way. JSR 277, if and when it is implemented, should provide another solution in "standard" Java.
NPM's way of managing dependencies can still waste a day or two (or more) of your time.
For example: get a C object from module B, then pass it into module A. If A bundles a different copy of C, A's `instanceof` checks against its own copy will reject B's object, even though the two copies are nearly identical.
Things are even more twisted when you have a half dozen versions of C floating around in your node_modules, and the problem isn't in your code, but a dependency of a dependency.
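A minimal sketch of that trap. The module names are invented, and two evaluations of the same class definition stand in for the two separate copies of C that npm's nesting would load:

```javascript
// Each call simulates a distinct require() of a separate
// node_modules/.../c -- a fresh module instance each time.
function loadCopyOfC() {
  return class Thing {};
}
const CFromA = loadCopyOfC(); // A's private copy
const CFromB = loadCopyOfC(); // B's private copy

const obj = new CFromB(); // "get a C object from B"
console.log(obj instanceof CFromB); // true
console.log(obj instanceof CFromA); // false -- A's checks reject it
```

The code in both copies is identical; the constructors are simply different objects, so `instanceof` fails across the boundary.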
Another issue I've run into is patching a bug in a module, and then having to figure out how to get that patch into all of the other versions that cropped up in node_modules.
NPM is one way to solve the modules problem, but it's no panacea.
That's great, but it's not without cost. Here, the cost is that you end up with deeply nested directory trees (which breaks Jenkins's ability to properly purge the directory after a job). Node modules are also extremely liberal in the number of files they create -- even a "simple" app using just a few common modules can end up with 1k+ extra files. This can cause problems in your IDE, as well as with your source control or continuous delivery systems, among other things.
Maybe you need to run jenkins under a better/faster filesystem. We use jenkins as well, and our deeply nested directories are deleted in under a tenth of a second.
I feel like your complaints are a user problem. I don't have the "too many files" issue when I use vim.
The OS is Windows, and Jenkins handles everything else just fine. It's just the Node projects that ever have issues. Of course, it's easier to blame the OS.
Are your concerns more than just theoretical? I've been developing in Node for a long, long time now and have never had any issue with any of this. Take source control for instance: isn't the first thing you do to put `node_modules` in your gitignore?
What makes you think they're just theoretical? Are you insinuating that I'm just here to argue arbitrary crap for the hell of it?
Also, gitignore does not work in SVN (*omg he uses SVN! the shame!), and the node_modules do actually have to be included in the source since the runtime is disconnected from the internet (intranet app).
Since for some reason that's what everyone else is doing in this thread ("I once read the Node documentation two years ago and am therefore in a position to make grand and sweeping judgments about it") I'm afraid I lumped you in with that crowd. Apologies.
Did you use bnd/bndtools? Without those I heartily agree: you'll have the steep learning curve of OSGi upfront, and the manual maintenance of metadata for the duration of your project. Using bnd/bndtools, it's only the initial learning curve that you have to worry about.
Weird conventions, like including index.js by default. It does not play nicely with CoffeeScript's default behavior of wrapping a module. With modules being a hack built into the language, I have to read library docs and follow external conventions (versus a holistic approach built in by the language maintainers). Introducing a variable to contain a library is unconventional and potentially limiting (specifically for metaprogramming).
Like I said, I didn't spend much time with them. They are a solution. NPM feels like the old Rails gems and is certainly better than most package managers. Requiring external libraries and files specifically is a hack and feels like one. I would gladly take any of the languages' inclusion techniques I listed above over Node's.
Edit: I am not trying to put down the hard work that went into creating the system. JavaScript does not support modules natively, so adding support is a momentous accomplishment. The unfortunate reality is that it will always be a limited solution. JavaScript modules (in any form) are going to be inferior to languages that natively support such a basic part of programming.
I see your point, but really, what he said about CoffeeScript doesn't make any sense. It doesn't have any 'default behaviour of including a module' because it's just syntax sugar, you use the same require() and libraries.
The default behavior for coffeescript is to hide the code in a JS module (the pattern). This is to prevent turning variables into globals. This behavior can be disabled, but I see it as one of the many benefits of coffeescript.
"JS modules" (what you're actually referring to are called IIFEs, which stands for "immediately-invoked function expressions" — JS modules are something else, and CoffeeScript doesn't support them) don't have anything to do with importing/exporting code. In browser-CoffeeScript the only way to "import" and "export" code is to make global whatever you intend to export.
IIFEs are also pretty orthogonal to Node's module system — they work perfectly fine together, and don't have much to do with each other at all. Not sure why you'd feel the need to disable IIFEs in the CoffeeScript compiler except out of desire for cleanliness since they're an ugly hack in the first place.
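For anyone unfamiliar, a quick sketch of what the wrapper buys you in a plain browser script (inside a Node module, neither variable would leak anyway, since Node wraps each file in a function of its own):

```javascript
// In a browser <script>, a bare top-level `var` becomes a global;
// the IIFE wrapper (what CoffeeScript emits) keeps its vars private.
var leaked = 'visible outside in a browser script';

(function () {
  var hidden = 'scoped to the IIFE';
})();

console.log(typeof leaked); // "string"
console.log(typeof hidden); // "undefined" -- never escaped the IIFE
```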
Could you provide an example scenario in which this would cause a problem? This has never had any noticeable effects in the 2 years I've been using Node and CoffeeScript together.
A closure. That's true, but it doesn't change the behavior inside a Node module at all: modules already have their own contained scope, you have access to the same `module`/`exports`/`global`/`require` variables, and `this` is preserved.
Fair points, though I quite like the concept of a "variable containing a library" (or module).
Personally, with JavaScript being the first language I seriously learned, it's intuitive for me to reason about. JavaScript is built around closures, and you're simply importing a closure that returns (exports) its public interface.
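That mental model looks roughly like this. `makeCounter` is a hypothetical stand-in for a module body: it closes over private state and returns (exports) its public interface:

```javascript
// A module-as-closure: private state stays inside, only the
// returned interface is visible to the "importer".
function makeCounter() {
  let count = 0; // private state, closed over
  return {       // the "exports"
    increment() { return ++count; },
    current() { return count; },
  };
}

const counter = makeCounter(); // analogous to require('./counter')
console.log(counter.increment()); // 1
console.log(counter.current());   // 1
```

In CommonJS the module file plays the role of `makeCounter`, and `module.exports` is the returned object.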
In fact, the main issue I have with ES6 modules is that they differentiate a "module object" from its default exports, which can be tricky to reason about (basically, what I wanted from ES6 modules was just syntactic sugar over CommonJS, whereas what we have is more powerful but more complex).
Why do you feel node modules are a hack? They are part of node.js core and have very well-defined behavior. Most developers prefer them over the proposed `import` syntax in ES6.
I have nothing but praise for NPM - it's simple, fast, keeps dependencies isolated (but can dedupe if wanted), and did I mention fast? It runs circles around any other package manager I've used, except maybe for homebrew.
Which is not the same thing as Node's modules. There's nothing stopping me from making global variables within a module "pattern". Contrast that with a node.js module, where top-level `var` declarations stay private to the file: there is no chance of name collisions, because each Node module gets its own clean scope. (An assignment that omits `var` entirely will still create a real global in sloppy mode, though, which is one reason to use strict mode.)
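A rough simulation of why top-level `var` stays private per file. Node really does wrap each file's code in a function; this `fakeModule` is an invented stand-in for that wrapper, not Node's actual loader:

```javascript
// Evaluate "module" source inside a function scope, the way
// Node's wrapper does, so top-level vars never collide.
function fakeModule(code) {
  const module = { exports: {} };
  new Function('module', 'exports', code)(module, module.exports);
  return module.exports;
}

// Two "files" both declare `var name` -- no clash.
const m1 = fakeModule('var name = "alice"; module.exports.who = () => name;');
const m2 = fakeModule('var name = "bob";   module.exports.who = () => name;');

console.log(m1.who()); // "alice"
console.log(m2.who()); // "bob"
```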
True, but the question was whether JS had any native ways to support modules, and closures are literally right there. That's a good way to start them. Add more shit on top to make it robust, but they're still there.
index.js is just a fallback, not something you're required to include in your package. You'd better go read and experiment before spouting bullshit. JavaScript is a language with multiple platforms, and Node.js is just one of them.
I personally found that some packages wouldn't install because they had dependencies that were broken in some way. This was about a year ago though, so I don't know how things have changed since then. Also, node packages seems to be small and specific so npm downloads many dependencies for each package. Again, I only played around with node for a little but I got a bit frustrated with all the packages and dependencies.
I think the issue you are describing really stems from poor semantic versioning. People will peg to specific revisions that they happen to use rather than depending on the general major or minor revision level, so small fixes don't naturally propagate to second- or third-level dependencies. There's no good solution to the problem that doesn't lead to other sorts of hell (imagine asking users to manage the revisions of third-level dependencies).
Were you using Windows? Although node.js "works" on Windows, there are a number of modules that only compile under UNIX, which makes Windows basically useless for node development.
While I like using NuGet packages with C#, I'm not really wild about how they can get magically linked into a project and then required. I had NUnit and Fluent Assertions become inextricable from a project I was working on even after all the tests were removed. Just a total mind-f*ck. Python with pip is a whole lot better, but I've had some issues finding things there too. Ruby... it depends. Are we talking a Rails Gemfile or "gem install $package"? Conflicting versions can become an issue. Java with Gradle has been pretty cool so far. NPM, as a whole, has just worked. Packages are referenced in ONE place (package.json), and I can do an "npm install $package --save" during development and it gets included automatically.
There's nothing magic going on. A C# project's settings are located in the .csproj file (other .NET languages work similarly). This file is normally edited from within Visual Studio, but it's just XML and can be edited by hand fairly easily.
A project file has references [1], which define the project's external dependencies. Removing references is easy. [2][3]
Didn't understand the NuGet sentence. It's just binaries and XML descriptions. Worst case: just delete packages.config and all the package folders. A one-minute fix I used to revert, back when I wasn't using source control.