Hacker News | ben_pr's comments

> but I dream of the day when I can carry my whole computer in my pocket and be able to use it anywhere.

Have you tried the compute stick[1]?

http://www.intel.com/content/www/us/en/compute-stick/intel-c...


Is there something like this with a battery that can make it through an entire day? (Needs to run Linux.)


A 2-amp power bank? The Compute Stick can run Linux (Ubuntu) or Windows, depending on which you get...


I'm looking forward to my first project with Go. It appears to offer a lot with minimal complexity.

> Because Go has so little magic, I think this was easier than it would have been in other languages. You don’t have the magic that other languages have that can make seemingly simple lines of code have unexpected functionality. You never have to ask “how does this work?”, because it’s just plain old Go code.

That lack of magic and his comparison to C# sounds like a really good mix.


> It appears to offer a lot with minimal complexity.

Actually I think it offers little with minimal complexity.


Here's a blog post from Rob Pike about the design philosophies inherent in Go, and how that affected adoption from C/C++ developers vs. Python, Ruby, etc.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


> That lack of magic

I have a really hard time understanding what people mean when they say magic. In every language I've ever worked in I spend a fair bit of time saying "how does this work". Go doesn't seem any different in that regard to me.


Rails is the epitome of the magic philosophy. Stuff "happens" through inference because you touched some part of the code (check out routes, foo_{url,path} methods, url_for, and passing some models as argument to those), or the database schema (defining accessors from DB fields or adding features when magic-imbued names are used, such as type, version or foo_id/foo_type). This makes one feel fast and powerful at first, but as the application grows one has to memorise all sorts of conventions and DSLs of Rails's as well as one's own, and this is more and more stuff the developers have to remember instead of being explicitly stated in the code. As the application grows in scope, it is bound to veer ever so slightly away from the Holy Conventional Way and trip onto something lurking in a dark corner, and that's where things start to break for seemingly no reason at all unless you want to dive deeper into the cave where dragons born out of someone's eagerness at being smart lie asleep.

IOW "Magic" is wanting to achieve extreme generalisation through combined use of conventions and dynamic features of languages, which inevitably leads to gotchas†, corner cases, and pitfalls[0] as well as significant cognitive dead weight due to the very nature of its implicitness.

† ever tried to mix STI, polymorphism and url_for?

[0]: http://urbanautomaton.com/blog/2013/08/27/rails-autoloading-...


I've been pondering this topic lately, as I come back to Rails (seems every few years I'll write a Rails app on the side, and my brain has totally forgotten everything since last time): one other way to look at this "magic", is it makes programming feel "intuitive". Not sure how to do something, I often find I can just "guess" the right and most natural way, and the code will just work. For that reason, I always feel like I'm most productive when writing Ruby (I really love Go too, just for different reasons).

I can totally see how the situation you describe would be frustrating too, I felt the same about Java annotations when they came out, and on massive code bases it could become a nightmare when used to the extreme. My own experience has been that Go scales very well to large code bases, I've never wanted to try the same with Rails.


I've had this same experience bouncing back to Rails for contract gigs. You always feel this temptation like you're missing out on 'real' programming, performance with low level code or super clever languages like Haskell or Clojure. But ultimately Rails is just a great programming experience for getting the job done.

Despite its faults and the problems with using 'magic' frameworks like Rails, it's a really great language/framework for what it's meant to do. And it still is in 2017, despite what some people say (although Elixir/Phoenix is getting there, if it can reach Rails's scale of adoption).

That's the end-of-the-road lesson: there are right tools for different jobs. There is no 'perfect' solution. No rabbit to keep chasing.

Either way, though, it's still good to get exposure to as many different languages as possible (low-level à la C, easily parallel languages à la Erlang, some Lisps, typed languages like Haskell, dynamic FP à la Clojure, etc.).


For example, properties in C# can be method calls; while they appear to have the cost of accessing a field, they can actually be arbitrarily complex algorithmically. This leads to a programmer down the road calling one in a tight loop, expecting field-access overhead and getting someone's complex property-method logic. That is an example of magic.

IMHO magic is when the run-time or space complexity of code isn't obvious from its on-screen representation.


So by that definition of magic, something like ranging over a channel is magic?

It looks like a simple for each loop, just like over a slice or map, but under the covers involves locking semantics.

If so, I guess I'll buy that definition of magic; I'm not sure that's any different from knowing what the language does.
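To make the example concrete, here's a minimal sketch (function and variable names are mine) of what that loop hides: ranging over a channel reads like an ordinary for-each, but every iteration is a blocking receive, and the loop only ends once the channel is closed by the sender.

```go
package main

import "fmt"

// collect receives from a channel using range. The loop looks like a
// plain for-each over a slice, but each iteration blocks on a channel
// receive, and the loop only terminates when the channel is closed.
func collect() []int {
	ch := make(chan int)
	go func() {
		for i := 1; i <= 3; i++ {
			ch <- i // each send synchronizes with a receive below
		}
		close(ch) // without this, range would block forever
	}()

	var got []int
	for v := range ch {
		got = append(got, v)
	}
	return got
}

func main() {
	fmt.Println(collect())
}
```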


In Go, there is a minimal set of primitives (like channels and slices) to learn. Once you know them, they're fairly intuitive.

In C#, properties can be arbitrarily complex. You can't just know how properties "work" and then do mental shorthand on them. Every time you look at a new codebase you might have to dig through several files to find out what one line does.


But the "magic" in this case is that properties can be methods. Once you know that, how is it any different from methods?

In Go, methods can be arbitrarily complex. You can't know how they work without digging through several files to find what one line does.

Another example of magic in Go would be method names. You have no idea if it is safe to change the name of a method, because it could be satisfying an interface far away from the definition site (or, in the case of exported methods, somewhere you don't have access to).

We could go back and forth all day about what is and is not magic, but it still just seems like "language differences" to me. If the claim is "Go does a lot less for you than other languages, so has a lot less opportunity for magic", I could probably concede that.
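A small sketch of the rename hazard being described (type and method names are mine): nothing at the definition site says the method participates in an interface, so the connection is invisible until some distant call site breaks.

```go
package main

import "fmt"

// Greeter is satisfied structurally: nothing on Dog declares
// "implements Greeter".
type Greeter interface {
	Greet() string
}

type Dog struct{}

// Renaming this method (say, to Bark) would silently stop Dog from
// satisfying Greeter; within one compilation the error surfaces at
// whatever call site assigns a Dog to a Greeter, but for exported
// methods the interface may live in a codebase you never see.
func (Dog) Greet() string { return "woof" }

func Announce(g Greeter) string { return "says: " + g.Greet() }

func main() {
	fmt.Println(Announce(Dog{}))
}
```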


>But the "magic" in this case is that properties can be methods. Once you know that, how is it any different from methods?

It's different because you have to apply that general knowledge in every single case when reading code that you are not familiar with.

In Go or Java, the information on whether a.x is a constant-time variable access or a function call of arbitrary complexity is available at the call site. You don't have to look it up. It's one less thing to do when reading code.

And when you do have to look up what an expression means, how straightforward is it? Consider this expression:

  f(x)
In Go f(x) means whatever the function f does, and f is exactly one function in the current package.

f(x) in C++ (and to a slightly lesser degree in Java, C# or Swift) is one of a set of functions called f. Knowing which one actually gets called requires knowledge of tens of pages of name lookup rules plus knowledge of possibly large swaths of the codebase.

It is often claimed that languages more powerful than Go just have a steeper learning curve. But it's not true. Even if you know all the name lookup rules of your favorite language (do you?), you still have to apply them every single time you read unfamiliar code.

In my view it's pretty simple. If you have to read a lot of unfamiliar code all the time then Go is great. If you can know both a more powerful language and your codebase inside out, then Go will be frustrating for its lack of abstraction features.


> Another example of magic in Go would be method names. You have no idea if it is safe to change the name of a method, because it could be satisfying an interface far away from the definition site (or, in the case of exported methods, somewhere you don't have access to).

Is this true? If you changed the name of a method, it will no longer satisfy that interface and your code would not compile.


I don't think that's quite right. E.g. if you do call a method, its complexity is unknowable with only local context, so it can't be obvious. I would rewrite that to:

> Magic is when the run-time or space complexity of code is misrepresented by its on-screen representation.

An apparent field access that is actually a method is misleading. Calling a method explicitly just directs you to check that method to know for sure.


I think we are on the same page.


Yes, I'm just being pedantic about how you say it. :)


So these will be made into functions with uncertain O complexity. How's this situation preferable?


Most of the time in Go, people use fields directly, so there is a clear difference between struct.Field and struct.Method(); struct.Field is preferred, and you only have to worry about uncertain complexity if you see struct.Method(). The parent is saying that in C#, struct.Field might be a simple access or it might be a complex method.
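A minimal sketch of that convention (type and method names are mine): the field access is always a plain load, and anything with call cost is visibly a call.

```go
package main

import "fmt"

type Account struct {
	Balance int // a.Balance is always a plain field load, never a call
}

// a.WithInterest() is visibly a method call, so arbitrary cost is
// expected and the reader knows to check the implementation.
func (a Account) WithInterest() int {
	return a.Balance + a.Balance/20 // 5% interest, purely illustrative
}

func main() {
	a := Account{Balance: 100}
	fmt.Println(a.Balance, a.WithInterest())
}
```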


When the programmer sees a function call, they know its complexity is O(?) and will (hopefully) vet it before calling it in a tight, performance-sensitive loop.


When a C# programmer sees any call to code they don't know - property or function - they should know its complexity is O(?) and will (hopefully) vet it before calling it in a tight performance sensitive loop.

I get that someone could say "Go has fields, and they're always fast" and that seems like a great facility of the language, but any C# developer that says similar about instance members is wrong, and has some invalid assumptions about the language they use.


A corollary to this is that C# property accesses can have side effects. I recently started working on a legacy code base where reordering accesses to a set of properties on an object produced different results!

And I'm not saying that properties are strictly a negative; in some cases, they can be very useful for refactoring an underlying implementation without having to change the API exposed to callers. But just like any magic, it needs to be applied thoughtfully and judiciously.


Take a look at a java codebase that uses:

  * Complex DI frameworks (Spring Bean*Processors, event listeners, XML config)
  * classpath scanning-based autowiring (See Spring @Component)
  * aspect weaving-based autowiring (See Spring @Configurable)
  * Code littered with annotations that invite aspect-based pointcuts
  * Complex ORMs like hibernate that are incredibly difficult to use properly
And you'll start to get an idea of how ridiculous things can be. Golang is making a huge mistake by not adding generics: 99.9% of the complexity in a typical Java codebase has zero to do with generics and everything to do with the insane abuses of the JVM classloading system that the Java community has subjected itself to, as well as abuses of overly complex libraries like Spring and Hibernate.

If the Java community allowed itself to write simple golang-like code the majority of the time, there'd be much less defection to golang in my opinion.


There is nothing language-specific or magic about those things. You could write those things in Go (and you will see people do it) as the language starts getting more adoption.

Go goes further and encourages code gen, so that will probably be the way you start seeing terrible frameworks being built.

In any case, "configuration as code" doesn't seem like a good definition of "magic" to me.


My point exactly. The issue isn't Java the language. The issue is the flexibility of the JVM runtime and how people are abusing it.

Also, if load-time aspect-weaving and classpath-scanning-based autodiscovery don't count as magic to you, then not much will. Code generation at least has the huge, huge, humongous advantage that you have code on disk that you can read and debug.

I also admire Racket's macro system for coming with IDE support for introspecting and debugging the code generated by macros. Macros are a much better design because they generally run at compile time and they generally only make local code transformations that are much easier to reason about, as opposed to the sweeping global changes a weaver will make.


When people say "magic", what they often mean is "code over here can affect the execution of code over there in an implicit way". Like in Ruby, I could conditionally monkey-patch a function into an object someone way over there was using, causing code to break.

Other languages, like those with stronger type systems, will not allow this to happen.


Yea, monkey patching is helpful when dealing with a 3rd party library that needs to be tweaked 10 layers up the inheritance chain without having to change the object type all over the whole system.

If it gets overused it causes problems but there are times when it is close to a miracle. That said, there is a reason ruby devs are so test conscious.


Yeah - of course monkey patching has good uses :) The problem is that when you're trying to debug an issue, it's another thing that you'll have to remember - "is anyone monkey patching something in here?"


Yea, I can't work on Ruby codebases without something like Rubymine where I can jump straight to the declaration for that exact reason.


One of the things that Go eschews is operator overloading.

    a := b + c
What's the runtime complexity of this statement? How much memory will it cause to be allocated? In Go there are only two possibilities for what this code is doing: either this is string concatenation, or it's adding two numbers. Both are immediately comprehensible in their impact on run time and memory.

In C#, you can overload operators, so the + could in theory do anything. And what's bad about that is that it is deceptive. It's easy to miss the fact that this line might actually be doing something complex.

It also means that if someone is looking at your code, they can't make any assumptions about what any particular line of code is doing, without complete understanding of a vast amount of code.

This is one of the pieces of magic that I'm glad Go doesn't have.
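A tiny sketch of the claim: these are the only two things + can mean in Go, with no user-defined overload available to hide arbitrary work behind the operator.

```go
package main

import "fmt"

func main() {
	// The only meanings of + in Go: numeric addition...
	n := 1 + 2
	// ...or string concatenation. No other type can define +,
	// so the cost of the expression is bounded and predictable.
	s := "foo" + "bar"
	fmt.Println(n, s)
}
```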


Hogwash. In a language with operator overloading, this would be something like `a.assign(b.plus(c))`, which really doesn't tell you more about what goes on under the wraps than the operator form.

What can be confusing is what the meaning of `+` (or `plus`) is. In some cases it can be fairly obvious (e.g. concatenating sequences), while in others not so much. Operator overloading is nice, but it has to be used tastefully (like every abstraction or language tool).


> In C#, you can overload operators, so the + could in theory do anything.

> What can be confusing is what the meaning of `+` (or `plus`) is

QED


Any function can do anything. I can write a function called "read_from_file()" that doesn't read any files.

Amazing, I know.

Also please actually read the comment before replying:

> Operator overloading is nice, but it has to be used tastefully.


The point the OP made was that operator overloading is not nice - it means that any operator (not just function) can do anything. It makes code harder to read and reason about.


    a := Sum(b, c)
How can you be sure that Sum actually does a sum without looking at its implementation?


I think you are missing the point. Of course you can't assume what a function will do with certainty.


From a CS point of view, + is just a function name like any other.

It's a concept used in lambda calculus, and it has been present in computing since Lisp came into existence.

It's also part of abstract mathematics, where operator symbols get defined for proofs.


> From a CS point of view, + is just a function name like any other.

From a Go point of view, it isn't.


Go eschewing decades of CS knowledge in the name of making programmers easy to hire for Google[1] doesn't make it any less true.

[1] - According to the language designers own words


What you wrote isn't a universal truth. In Go, + is not a function like any other. There's no argument to this.


What is the difference, apart from the notation?


The point is in most languages operator overloading is no more complex than a method call.

It's not exactly magic.


Overloading + could be magic if you want it to be. In Go, + is exactly what you think it is. In languages with operator overloading, I could literally make + do whatever I wanted.


Just like you can make a function do something totally unrelated to how it is called, do what it is actually in the name, wipe out the hard drive, launch missiles, whatever.


> How much memory will it cause to be allocated?

In Go, it's impossible to tell because "a" might be captured by a closure, in which case it will be heap-allocated. But if escape analysis promoted it to the stack or a register, then it will not allocate memory.
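A minimal sketch of that point (names are mine): whether a local variable allocates depends on whether it escapes, not on the expression itself. The same `n := 0` in a closure-free function would stay on the stack.

```go
package main

import "fmt"

// counter returns a closure over n. Because n must outlive the call,
// escape analysis moves it to the heap (`go build -gcflags=-m` reports
// "moved to heap: n"), so this innocent-looking := does allocate.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	c := counter()
	fmt.Println(c(), c(), c())
}
```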


I honestly don't understand the difference from `a := add(b, c)`. Does it really make that much of a difference? And the lack of overloading makes maths-heavy code horrible to read.

Of course, you can go to the C extreme: no overloading, no specialisation, nothing; every function name means one thing. That does add some nice features, but it's a pain in the ass when naming things!


The difference is more apparent in the other (non-overloaded) case. When in Go (or C, …) you see an expression "x << y", you know immediately that this is just a shift operation, mapping to at most a couple of machine instructions. In certain other languages it's most likely an integer shift, too. Still, you have to carefully consider the context lest this simple shift expression cause synchronous I/O to some space probe near Mars.


"they can't make any assumptions about what any particular line of code is doing, without complete understanding of a vast amount of code."

Yes, this is a problem with many designs. More importantly, they can't easily look up what an operator does. But this problem could easily be avoided if the set of operators used in a particular scope had to be explicitly specified. For example, if you wanted "+" to mean bigint addition from a set of bigint math operators, you would have to import that set into the scope, kind of like this:

  import_operators "bigint"
  a := b + c
Now you still have overloading, but it is very clear where to look up an operator and which set of operators is used in this scope.

"In Go there are only two possibilities for what this code is doing..."

There should be only one possibility, though. Dual-meaning operators provide no value that would justify ever having them, apart from familiarity with design mistakes of the past.


I agree, I wish + wasn't string concatenation either.


One of the few good decisions PHP made is not using + for concatenation.


> In Go there are only two possibilities for what this code is doing... either this is string concatenation, or it's adding two numbers

That is overloading! What Go eschews is user-defined overloading.


One way to look at it: in some languages (I would suggest C, C++, and Clojure), when you start learning the language and then compare the code you write to the code of popular major or standard libraries, they look completely different.

They say that Go is different and special in this way: advanced Go code doesn't look all that different from average Go code.

Anyway, I can't really judge; I never looked at or tried to learn Go, hence the "they say".


I don't know about this axis of "magic vs. muggle", but for me the phenomenon we're discussing seems best described by the ease with which you can find the code that implements specific behavior. Go (and Java) are pretty good at this. Python is OK. Ruby is awful.


This is true except in the case of structural satisfaction of interfaces. You can't trust a rename refactoring in an IDE in Go, for instance, because of this.

And in the case of channel select behavior...and ranges over a channel...and the context object, etc.

My point being that "magic" seems to be code for "familiarity with the language and its idioms", which I'll grant might be easier in Go because of how limiting it is.


Magic is when you access struct pointer members w/o the asterisk. ;-)


I see that JavaScript has its place in the browser, but the whole back-end thing scares me. The ugly code (callbacks, etc.), npm injecting God only knows what into your back-end servers, and the tons of workarounds for trying to make JS not so ugly are over the top.

The JS everywhere is so much like the "only tool you have is a hammer, so every problem looks like a nail" thing, it's amazing.

Creating a simple, secure, extensible middle-tier is a solved problem and is not in need of JS trying to solve it in a much more obtuse way. I've created many myself in everything from Delphi, PHP, C#, Groovy, to Java and I would never pick JS for that layer.

And a final thought, PHP used to get tons of bad press for being messy, etc, etc. But this JS stuff takes that mess to a whole new level. Perhaps PHP devs moved to node/js so they could make a mess and everyone would still think they are the cool kids?


I'd probably still prefer .NET, but ES6 and TypeScript have made nodejs pretty bearable for me. I'd prefer it to, and maybe even argue it's, a better platform for concurrency than Python at this point, even 3.5, for most situations.

The concurrency you get with async/await is nice enough and performant enough over zero concurrency to make it pretty good. You don't get thread pools, and it's not the fastest at single-threaded processing. And there is only one number representation, which is a hassle, forcing the use of BigNum occasionally. The stdlib is a mix of callbacks and events and requires wrapping for good async/await consumption and... well, there are a lot of drawbacks.

C# on the CLR is superior IMHO as a platform and language in nearly every way except... it's not JS. If you are constantly moving around between a ton of stuff, you'll end up doing a lot of JS (and in my case TypeScript, because I've successfully introduced it multiple times) due to frontend work and the employee common denominator.

Edit: This is a bit of a ramble. I don't think nodejs is the best thing going. But while I personally would prefer C#, Go (depending on the project), F#, potentially Clojure, etc., after removing personal preference and adding in all the other factors that come into play when selecting a technology that a team has to use and support, nodejs/TypeScript is often a pretty good option.


> ES6 and TypeScript have made nodejs pretty bearable for me

I'm glad you found a solution, but I think the fact that you need typescript to make javascript bearable is an issue. Typescript isn't javascript, so the implication is that what we get out of the box is not bearable, and we then have competing languages, tooling, workflows, etc. and no "right way" to do things. Just more fragmentation, and an unbearable default.


JavaScript on the backend is HORRIBLE. From a dev-UX perspective it's a terrible cluster of random errors and strange workarounds.

One time, a project I was working on wouldn't run, with some random-ass error from some random-ass node module. Try, try, try: not working. I spent 2 hours trying to find an answer online. For the hell of it I tried again, and this time it ran with NO code changes.

That's the day JavaScript as a backend language died for me. It's a house of cards stuck together with Chinese knockoff glue.


Every time I try to get excited about JavaScript on the server and build anything of substance, every damned module's example code is nothing but a bunch of console.logs. There must be some magical framework I'm missing where that's a way to build apps. (Console Dot Logs on Fails?)


Except something did change, and it wasn't JS's fault. If it wasn't the code, it was the environment (time, variables, FS, system resources, etc.).



If you have NPM on the server, uninstall it ASAP! Running NPM on the server is very dangerous! Use rsync from development/staging instead. Personally, I avoid bloated modules and keep everything in SCM. NPM (the archive) is a superb service, but you should not depend on it.


Full-stack polyglot here. I have no idea how you justify anything you just said with facts. It sounds a lot more like you didn't learn the language, or you're repeating something you read on the internet.


You don't need many callbacks on the backend. Use async/await or promises for most problems instead of callbacks. Of course, callbacks are a fine tool when suitable.

I don't know what "work-arounds" you're talking about.


I just replaced a refrigerator, and the stove is basically dead; both are 8 years old, Whirlpool brand. My Mom is still using the same appliances from when I was a kid. Nearly every family member I have says the same thing: new stuff lasts about 8-10 years, and why can't we just get one like Mom's that will last 35+ years? My Bosch dishwasher runs like new and is 8 years old.


Counter-anecdote: my Bosch dishwasher lasted barely six years before the impeller motor failed.


I'm a little shocked at this decision. The issues faced are all solved problems, from PB-scale storage to HA on server clusters, even across data centers. Good solutions do have upfront costs, but the math that cloud hosting is 5-10X more than co-location or company-owned/leased hardware is still in the ballpark. It sounds like they may need an architect with enterprise experience to help them out rather than random comments on HN.


Yea, every growing company I've been at has been in the process of moving off of hosted solutions and onto their own hardware to cut costs.

Two such companies made the mistake of doing that with OpenStack, which is terrible and should die in a fire.

But one later switched to DC/OS and containers, and it has worked really well. They've been migrating apps running on EC2 instances into Docker containers that can run on Marathon in our local data center, and the savings are pretty substantial (even adding in the cost of the teams needed to maintain our own platforms).

Managed solutions are great for startups. There is a lot of value in not having to set up, maintain, and manage your own hardware... but that does reach a limit, and companies need to be prepared for that transition and avoid lock-in.


A few years ago, BlueCross BlueShield in Chattanooga would hire almost anyone who wanted to learn Cobol and train them; I have a friend who got in on this. I also have a relative making insane amounts of $ as a Cobol developer, but honestly Cobol is no fun (for me) and I wouldn't do it. There are a lot of Cobol devs retiring now and in the next few years, and not many people want to replace them.

I can't think of anything that is web dev, remote, $100/hour, and requires less than 5 years of experience.


What's the insane amount of $ ? (For the Cobol development.)


But are those Cobol jobs remote?


Any job can be remote if you're in demand and the employer needs you more than they need you.

Find their pain points and negotiate.


Typo: the employer needs you more than you need them.


One of the best hires I ever had was someone who was 50+, about 10+ years ago. He didn't have a technical background but wanted to be a programmer. It was a big risk, but in two years' time he became my top developer, outperforming those with 10+ years more experience. Now I primarily look to hire those with a few white hairs, as they are much more stable and cause a lot fewer personal issues than those right out of school. If I were hiring and you really wanted to be a programmer, I would certainly give you a shot at it. If you want to be in technology, I don't see anything stopping you.


This. Old guys don't get enough respect. When I was a young coder I had several over-50 co-workers who showed me the ropes. Now that I have gotten old myself, I have come to realize that there is wisdom in age.

The only thing that's stopping you from writing code is you. Pick up a book, take a class (think community college; don't spend a ton of money on it). If you have made it this far in business, you're probably fairly pragmatic already, a skill most young coders have to learn.


Most of the older (50+) programmers I've worked with can code up a storm and are virtually drama free. Not sure what all the shade is about!


A lot of it is the toxic Silicon Valley "everyone-is-a-rebel-rockstar-Ruby-hacker-with-purple-hair-and-a-skateboard" bullshit perpetuated image of what a "programmer" should be. A 50+ guy in nice slacks who is perhaps not even a (gasp) Bernie Sanders supporter might find himself out-of-place in the typical Silicon Valley shop.

It's a culture thing, not a skills thing.


If you are not a Bernie supporter I suggest staying away from Silicon Valley period.


So I shouldn't dye my hair before running the interview gauntlet?


I won't let go of my Just For Men until they pry it from my cold, dead hands!


Shave your head instead.


Blasphemy. My locks are impressively full.


But if you do shave your head, you must get some of those stylish glasses. So says the fashion guidelines of the Programmers of the Middle Age.


What made you want to hire that person?


I used to use OVH, and most of the time their stuff works, but when it doesn't there is very little chance of their tech support figuring out what the problem is and fixing it. They basically deny anything is on their side, and you really have to move to a different VPS/dedicated server or whatever instead of the issue actually being resolved. I use a dedicated server from a place in FL now that is a small shop, but I get a real person (with a brain) when something goes wrong, and they actually take me seriously and fix their stuff.


Fundamentally, if you also use their backup service for critical services, is that at least reliable? Rackspace was... lackluster.


I use NoMachine; it works on Linux and Windows. I have a VPN installed at all locations, and the software runs behind that.


Having worked with phone systems for call centers and financial services for more than a decade, I found this very useful.

Interesting uses: You want to use sentiment analysis to automatically pinpoint calls with angry customers and bridge in a supervisor.

You want to detect the language of inbound messages to route them to a person who can respond quickly in native tongue.

You want to identify demographics of an inbound sales call so you can prioritize people with the best buying profile.

You want to use spam or fraud scoring on inbound calls and messages so you can drop them on the floor before they distract your staff.

