In some ways, this is correct. Go is much more simple and consistent than Ruby or Java, has a better deployment story, and better tooling in some ways.
As a language, it's much less enjoyable to write. Yes, the concurrency primitives are much better, and they're a genuinely useful tool. But to a developer coming from Ruby, the code is often needlessly repetitive and obtuse to write. It ends up overly verbose, full of copypasta, and much less expressive.
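To make the repetition concrete, here's a small hypothetical example (my own sketch, not from the original comment): summing a list of numeric strings. In Ruby this is roughly `strs.map { |s| Integer(s) }.sum`; in Go every conversion needs its own explicit error check.

```go
package main

import (
	"fmt"
	"strconv"
)

// sumStrings parses each string as an integer and sums them.
// The `if err != nil` block after each fallible call is the kind of
// boilerplate a Rubyist tends to find needlessly repetitive.
func sumStrings(strs []string) (int, error) {
	total := 0
	for _, s := range strs {
		n, err := strconv.Atoi(s)
		if err != nil {
			return 0, fmt.Errorf("parsing %q: %w", s, err)
		}
		total += n
	}
	return total, nil
}

func main() {
	total, err := sumStrings([]string{"1", "2", "3"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(total) // prints 6
}
```

The flip side, of course, is that the error path is explicit and visible at every call site, which is part of the "sweet spot" argument made below.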
It's got its niche – I've found it useful for writing small command-line utilities, and it's been surprisingly helpful at putting together a good deploy story for some work I've done on the Raspberry Pi (with the benefit of being well-structured and reasonably performant).
But I wouldn't like to use it full-time – mostly because it's just so weirdly irritating to write.
"code is often needlessly repetitive and obtuse to write."
I think it helps to remember that Go's use case is a lot of teams interacting to produce fairly large code bases, i.e., at Google. I'm getting into it for my job because it has the same use case, and I find it hits a nice sweet spot in what you can do, vs. what you can't do, and I happen to be in a position in which I am routinely hit hard by other people's "clever" code. I don't do my personal coding in it, though.
Part of the problem with the "clever" code is that it often uses the clever features, but wrong. Like, all the pain of seeing someone do something "clever"... and you'll note how I keep scare-quoting that... and none of the benefits. If the clever code were actually more concise and performant than my generally-non-clever replacements, I wouldn't mind so much, but it's amazing how often I rip out a "fluent" API or something and replace it with something simpler and faster that still comes out several hundred lines lighter in the commit.
I've developed a theory that it isn't even because the developers are "dumb" or something, especially since in many cases they manifestly are not. I think it's that once you pass a certain size of organization, but are still sharing some code bases, you encounter an increasing number of instances where a developer has to fix something in the shared code, parachutes in to make the minimum possible change that might work with the minimal cognitive effort, and gets out and back to their own code as quickly as possible. The net effect is that even if your code is being written by nothing but senior devs with decades of experience, from the code's point of view it might as well be getting bashed on by a series of above-average gorillas on keyboards.
Languages that tend to point you at a single right answer, keep the very busy, very distracted gorillas from beating on the code too hard, and make it obvious when that's happening, can be advantageous in those cases.