Hacker News: ghodss's comments

One of the premises of this article - that "competing standards" exist primarily because of political reasons or shallow decision making - demonstrates a fundamental misunderstanding of software engineering.

Sure, some engineers (especially junior ones) enjoy reinventing the wheel more than using something that already exists to solve users' problems. We all like to trick ourselves into thinking we're unique snowflakes (this problem extends into our personal lives as well). But more often than not, when multiple popular standards emerge, it's because there are legitimate engineering tradeoffs that are being made.

JSON vs. XML? XML is far more capable but complex. JSON is far simpler but less capable. Sometimes when you're delivering value to customers, you need that extra complexity, so you use XML. Other times you don't need it, so you use JSON. It has nothing to do with ego or politics. Multiple different JSON libraries? One may prioritize ease of use to get up and running, the other may prioritize strong typing for serialization speed. One may prioritize strict compliance with the spec, the other might prioritize speed above all else. JSON vs. a binary format? JSON is more widely compatible and easier to debug. Binary is faster but more complex to set up. It goes on and on.

Again, sometimes competing standards or libraries or languages emerge because of political or capitalistic concerns. But usually when you're talking about competing open standards, there are multiple because there are legitimate engineering tradeoffs being made because engineering is not a one-size-fits-all science. (Few sciences are, otherwise they wouldn't still have people working on them.)

When an open source project rejects a contribution, or five projects exist to solve the same problem, they're often making legitimate engineering tradeoffs that are in no way arbitrary, and any one project would suffer if it tried to be everything to every person. The article in question doesn't even point out a single example of a set of standards or libraries that are entirely arbitrary in their differences or could be collapsed into one solution, which further highlights how this is a theoretical argument, not a practical one.


Indeed. Sometimes things are complicated/messy because we're stupid or lazy or ignorant.

But sometimes things are complicated/messy because we're trying to solve complex problems.

That's not to say that there isn't a simple solution to a particular problem nor that we should strive to find it.

But to assume that there are always simple solutions to complex problems is itself a form of ignorance.


Another thing to remember is that our collective understanding of the problems evolves over time, and subsequent standards often reflect this. XML vs. JSON is definitely a case of this, as is the progression of CORBA -> SOAP -> JSON-based REST services.


Author here. Thanks for your comment.

I clarified that "Sometimes" the causes are political; it used to say "Usually".

But if, after a 30-minute read, you picked up on a stupid example and started a discussion on XML vs. JSON, you are just proving my point. The expectation in the '60s was that machines would figure out protocols, even invent protocols, on the fly, by querying each other. Yet you want to have another discussion about XML and JSON.


Machines do figure out the protocol on the fly by asking each other; see the Accept and Upgrade headers in HTTP, for example. Of course, HTTP is itself a protocol, because you can't "ask each other", or do any kind of communication at all, without a common protocol to start with.

As for making protocols on the fly, that makes roughly as much sense as two people inventing their own language to talk to each other.


And of course, since it doesn't make sense to you, it is impossible.

Turns out that people invented their own language by talking to each other.


I never said it was impossible.


That was the great time of belief in "4GL". I think what we've learned from that is that the hard part is not so much implementing what we want, but making all the decisions. Coping with all the interacting possibilities and working out what we want in each case.


You hit the nail right on its head :)


You think the ISO/OSI vs. TCP/IP fight didn't have some element of politics?


For those who want to try it out in San Francisco, Reboot Float Spa[1] is a great facility.

[1] http://rebootfloatspa.com/


My personal favorite... great colors/contrast on a Macbook Pro. http://vimcolors.com/1/jellybeans/dark


It's my favourite dark colorscheme too.


One major limitation of this approach is that any project that wishes to vendor or lock their dependencies can no longer be used as a dependency for another project. From the gb GitHub:

> A project is the consumer of your own source code, and possibly dependencies that your code consumes; nothing consumes the code from a project.

This seems to imply that any code outside of a project (i.e. the code inside vendor/src) has no recourse for indicating the versions of its dependencies. This is nice in that it simplifies the problem, but to completely remove the ability for any and all libraries to indicate the versions of their dependencies seems unnecessarily restrictive. If I build a library for others to use, and I have a dependency, I want to be able to lock to a specific version, or at least give my preference for a version.

Of course, this creates its own issues - what do you do when two libraries depend on two different versions of the same library? (Also known as the diamond dependency problem.) This is where the Go culture helps, where as long as you pick a later version, things are likely to work. But I'd rather have the tooling let me detect the two versions that the two libraries want, show that there is a mismatch, and give me the ability to override and pick one (probably the later one). Instead, the gb approach eliminates the ability for the libraries to even have the ability to indicate what version they would prefer, which makes it even more difficult to get a bunch of libraries that share dependencies to work correctly together.

godep (https://github.com/tools/godep) seems to have the best compromise: vendor dependencies without path rewriting (though with GOPATH rewriting), but also keep track of their versions in a Godeps.json file. You can gracefully pick between conflicting versions upstream if need be.
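For reference, godep records pinned revisions in a small JSON manifest (kept at Godeps/Godeps.json). A minimal sketch, with made-up import paths and revision hashes, field names from memory of godep's format:

```json
{
  "ImportPath": "github.com/example/myproject",
  "GoVersion": "go1.4",
  "Deps": [
    {
      "ImportPath": "github.com/example/somelib",
      "Comment": "v1.2.0",
      "Rev": "0123456789abcdef0123456789abcdef01234567"
    }
  ]
}
```

Because each library can carry a manifest like this, tooling can at least detect when two libraries pin different revisions of a shared dependency and let you choose.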


+1 for godep without import path rewriting. It isn't addressed in the presentation at all; only using it with path rewriting is mentioned, and all the problems associated with it stem from the path rewriting.

IMO it is the best solution right now, with just one issue: it is not included with Go. This is a pain with CI systems (e.g. Jenkins), where you have plugins to provide Go itself but have to figure out a way to get godep in place to run your build. Right now I'm punting and just doing a `go get` of godep, then using it to build my project. I'm not happy with that, though.


But it's not really material, because bar doesn't touch x (and if it did, this problem wouldn't exist to begin with).

In other words, it might be a bit strange and cause a slight detour in your quest to discover the cause of a bug, but it wouldn't actually change any behavior or cause any issues.


Your argument is essentially: the barriers you insert, even if they block optimization, will never block optimization in a way that changes behavior. This is demonstrably false, since you are, among other things, taking the address of a variable, which means escape analysis won't do things to it, etc.

You can argue "the behavior it changes doesn't matter." As I've shown, 1. it does in a threaded environment (like, you know, Go); 2. it depends on whether your code is buggy or not.

It's certainly true that it never, on its own, causes bugs. But as I've shown, it can make bugs appear to come or go.

If you don't think that will ever happen, I don't know what to tell you, other than "it has happened in literally every compiler that has ever had barriers like this."

Without any evidence why Go should be different here, I don't see why Go will be different here.


I have always wondered why, if functional programming is such a good deal in terms of better abstractions, entire classes of bugs eliminated, cleaner code, etc., it doesn't come near-universally recommended by the world's most experienced programmers for mid- to high-level tasks. I'm thinking of people like Martin Fowler, Donald Knuth, etc., but especially people like Rob Pike, Russ Cox, Guido van Rossum, and Yukihiro Matsumoto, who are all incredibly smart and experienced engineers who have dedicated their lives to developing non-FP languages. There must be some trade-offs to FP that almost never make it into these kinds of "FP has dramatically improved my life" articles.

(BTW I don't buy the "they're just not used to it" or "they're comfortable with what they know" explanation for this kind of people. These are not stodgy Java programmers who are working in programming as a day-job who are resistant to learning new things, they're people who know more about programming than a hundred average programmers combined and spend nearly every waking minute thinking about how to make it better.)


This is pure argument from authority, but I'm gonna take the bait anyway.

> Martin Fowler

What has this guy done except write books about "the best way to program" without ever designing a full system himself?

> Guido van Rossum, Yukihiro Matsumoto

Those guys are just language designers, and the languages they designed are just as questionable as the functional ones, so why do they get a pass, actually? Because their languages are more used? Do you want to follow suit and say that COBOL's creators are probably part of the "world's most experienced programmers"? What about PHP?

There are some good points and critiques about the practicality of functional languages, but you don't actually touch on any of them here.


I had a conversation with another semi-famous PL designer. Not one of the ones you list, but only a little below them. He primarily worked on languages in the C/C++ family, and had no conception of the value of a closure.

There are lots of smart PL designers working on systems languages, and there are lots of smart PL designers working on high-level, functional languages. That doesn't mean that either group is necessarily aware of everything the other group is doing, and it doesn't mean they share the same goals, experience, or taste.


Being smart doesn't make you a good engineer, and being a great language or library designer doesn't necessarily make you an authority on language selection for most engineering uses. Finally, take Pike and Cox and their work on Go. Go is a great minimum-change-for-engineers language for Google's purposes. Is it a great language for most? No. But if you're trying to introduce new concepts to thousands of C++ programmers at the same time, Go is a safer bet than trying to go full Haskell.

To further answer your question, "functional programming" isn't always so well-defined. We know, realistically, that pure functional programming isn't going to work for all use cases. Once you're grounded in FP, you think of mutable state (or, in databases, destructive updates and deletes) as an optimization... but sometimes it's an optimization that you need. No language is FP-only because no language can be; even Haskell has the "dirty" IO monad.

I think that most good programmers (like, 99%) recognize the importance of immutability and referential transparency, when possible, and in the function rather than the action being the standard compositional unit for programs. Where there is disagreement is on when, how, and how often to depart from the functional ideal.


As codygman mentions, the IO monad doesn't make Haskell impure.

unsafePerformIO :: IO a -> a, on the other hand, does make Haskell impure when it is used. And it is used in many libraries.


> unsafePerformIO :: IO a -> a, on the other hand, does make Haskell impure when it is used. And it is used in many libraries.

Hmm, does it really? Is 'unsafePerformIO (return 1)' impure?


"And it is used in many libraries."

Citation needed - or at least clarification as to what "many" means here.


I don't think I'd call the IO monad "dirty", since it's still pure. Also, "dirty" makes it sound like a hack.


Pure manipulation of values describing impure computation.


Good point.


"By studying the output of the TraceThreadId method we see that in ASP.NET/GUI it’s the same thread that enters ReadTask and that exits ReadTask ie no problems. When we run it as a Console application we see that ReadTask is entered by one thread and exited by another ie readingFiles is accessed by two separate threads with no synchronization primitives which mean we have a race-condition."

This is not entirely true - the code as written does not have a race condition because the two accesses are run sequentially. Accessing the same variable by two separate threads with no synchronization primitives is actually okay if those two threads never run in parallel. Now, if you called many ReadTask()'s in a row and you had thread_pool > 1 (as in the GUI/ASP.NET application), then you would have a race condition. But if you're accessing a shared variable from a multithreaded context that should be somewhat obvious. It would depend on the programmer's understanding of the async/await paradigm, which I think is the author's point. ;)

