No, I'm not. Through university (and even before) I have access to their full suite. I have tried to use PyCharm, GoLand and Idea.
Idea was useful for Java but felt quite slow, and even with vim bindings it was a pain to navigate. Learning the shortcuts helped, but it never got quite as fast as helix/vim for general editing (especially as my work is usually polyglot). It might be the best for Java (compared to Eclipse or BlueJ), but that does not mean it fits my workflow/way of working.
PyCharm and GoLand are both "nice", but neither felt better or more efficient than pylance/pyright/gopls + vscode/helix. The only one I still occasionally use is DataGrip, as it is quite a nice SQLite/PostgreSQL client.
No, not just like. You're downplaying significant differences between the two that do in fact matter. So much so in fact, that you're just wrong. Stop spreading misinformation.
Go use cases overlap the most with Java. I think the reputation you mentioned comes from Google using a lot of C++ for high-level things others would likely do in Java, so they see Go as a replacement for C++ in some areas. (assuming you meant C++ not C)
It's replete with oddities and limitations that signal "ah, this is because systems language."
Go’s type system, for example, is very much a systems-language artifact. The designers chose structural typing because it was lighter weight but still provided enough type safety to get by. It sucks, though, for enterprise app development, where your team (and your tooling) are desperate for the clarity and determinism of nominal typing.
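To make the structural-vs-nominal point concrete, here is a minimal Java sketch (Java 16+ for records; the type names are hypothetical). In Java, two structurally identical types are unrelated unless one explicitly declares the interface; in Go, any type with a matching method set satisfies the interface implicitly.

```java
// Sketch: nominal typing in Java. Celsius and Fahrenheit have the exact
// same shape, but only the one that *declares* the interface satisfies it.
// In Go's structural system, both would satisfy an equivalent interface.
interface Temperature { double degrees(); }

record Celsius(double degrees) {}                        // no `implements`
record Fahrenheit(double degrees) implements Temperature {}

class Check {
    static boolean isTemperature(Object o) {
        // Matches only the declared implementer, despite identical structure.
        return o instanceof Temperature;
    }
}
```

The declared relationship is what gives enterprise tooling its determinism: you can find every `Temperature` in the codebase by its declaration.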
The error handling is like a systems language for sure, I'll agree on that.
But where do Go's docs or founders call it a C replacement? gf000 asked where this is mentioned besides marketing, but I don't see it in the marketing either.
Thanks. I'm not surprised they called it a C++ competitor back then. All those systems-style features do make it awkward now that it's targeting Java-like use cases. It has pointers but no pointer arithmetic, and it's not very clear what you're supposed to pass by value vs. by reference. You can do things like a DBMS in Go that count as systems programming, but it's certainly not a competitor to C.
Go happened to be attractive for web backends too because it had good greenthreading before Java etc did, and better performance than JS, so it makes sense that they changed the marketing.
Go's runtime is thin: goroutines, a GC specialized for concurrency, networking, and little else. Java, by contrast, assumes a JVM plus massive stdlibs to handle everything from enterprise apps to big-data, making its platform genuinely "fat" and layered. Other Java-tier languages, C# included, follow the same model.
I agree Go's runtime is as thin as runtimes get. But having a runtime at all disqualifies it from being a C replacement. Rust is trying to replace C, Go is trying to replace Java and some of C++.
> We now know that we prefer composition over inheritance
When people say "composition over inheritance" in Java discussions, they usually mean the trivial modeling rule: prefer has-a over is-a.
But that’s not what composition is really about.
The deeper idea is interface composition -- building types by composing multiple behavioral contracts behind a single cohesive surface.
Java provides inheritance and interfaces, but it doesn’t provide first-class delegation or traits. So most developers never really practice interface composition. They either subclass, or they wire objects together and expose the wiring.
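A small sketch of what "wiring objects together and exposing the wiring" looks like in practice (names are hypothetical). Without first-class delegation, every composed interface means hand-written forwarding methods, and any method added to the interface means more forwarding boilerplate:

```java
// Illustrative only: composition by manual forwarding, the pattern Java
// pushes you toward in the absence of first-class delegation or traits.
interface Greeter {
    String greet(String name);
}

class FriendlyGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

class LoggingGreeter implements Greeter {
    private final Greeter inner;                 // the "wiring"
    LoggingGreeter(Greeter inner) { this.inner = inner; }

    public String greet(String name) {
        System.out.println("greet(" + name + ")");
        return inner.greet(name);                // manual forwarding, not delegation
    }
}
```

True delegation would let `LoggingGreeter` declare `inner` as its `Greeter` implementation once, with no per-method forwarding; this is the gap that pushes most developers back to subclassing.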
The slogan survived. The concept mostly didn’t.
The manifold project, for example, experiments with language-level delegation to make interface composition practical with Java.
> Java provides inheritance and interfaces, but it doesn’t provide first-class delegation or traits.
I'm not sure I miss first-class delegation much (not a lot of UI projects in Java these days).
But interfaces with default (and static) method implementations are actually quite usable as traits / mixins. Since Java 8 IIRC.
You can also pass around functions / lambdas (coincidentally also since Java 8) to compose functionality together. A bit harder to follow and/or understand, but another perfectly legitimate and very powerful tool nevertheless.
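For what it's worth, here is a quick sketch of both techniques from this comment: default methods used as lightweight traits/mixins, and lambda composition via `Function.andThen` (Java 16+ for the record; the types are made up for illustration).

```java
import java.util.function.Function;

// Default methods as trait-like mixins (available since Java 8).
interface Named {
    String name();
    default String shout() { return name().toUpperCase() + "!"; }  // mixed-in behavior
}

interface Aged {
    int age();
    default boolean adult() { return age() >= 18; }
}

// One type mixes in both behavioral contracts.
record Person(String name, int age) implements Named, Aged {}

class Compose {
    // Composing functionality with lambdas: trim, then upper-case.
    static final Function<String, String> clean =
        ((Function<String, String>) String::trim).andThen(String::toUpperCase);
}
```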
How does a type class help with composition? They do help with the expression problem (adding support for an "interface" after definition), and via parametric polymorphism they might help a bit with composing two traits, but you do also have generics in Java, even if they're not as good as type classes.
So anyway, I don't see as big a change here. There was a Brian Goetz mail/presentation somewhere where he talked about adding "basically type classes" to Java, but unfortunately I couldn't find it for you now.
Kotlin's "delegation" feature isn't true delegation, it's just call forwarding, which is better than nothing, but it falls down pretty quickly as an alternative to implementation inheritance.
The manifold project provides true delegation[1] for Java.
Yes, the empty infix operator is often called the "juxt" operator, which is an apt term here.
However, I use the term "binding expressions" intentionally because there’s more going on than ordinary juxtaposition. In a normal juxt expression such as:
a b c
the evaluation order is static and independent of the type system:
(a b) c
With binding expressions the precedence is type-directed, so the type system determines which grouping is valid:
(a b) c
a (b c)
Additionally, the operation itself can be provided by either operand. For a given expression a b, the compiler may resolve it as:
a.prefixBind(b)
b.postfixBind(a)
For example:
10kg
Here kg is a MassUnit, and MassUnit defines postfixBind(Number) returning Mass, so given there is no left-to-right binding, the expression resolves right-to-left as:
kg.postfixBind(10)
So while juxtaposition is the syntactic surface, the semantics are type-directed binding.
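As a plain-Java sketch of the operator pair described above (the `Mass`/`MassUnit` shapes here are assumptions for illustration; manifold's actual binding-operator API may differ in its details):

```java
// Illustrative only: the right operand of `10 kg` supplies the operation.
record Mass(double grams) {}

record MassUnit(double gramsPerUnit) {
    static final MassUnit kg = new MassUnit(1000.0);

    // For `10 kg`, with no left-to-right binding available, the compiler
    // resolves the expression right-to-left as kg.postfixBind(10).
    Mass postfixBind(Number amount) {
        return new Mass(amount.doubleValue() * gramsPerUnit);
    }
}
```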
> California's new speed camera pilot (AB 645) explicitly solves for this... like parking tickets
That makes the Florida judge's framing of red light cameras as a revenue-generating scheme even more applicable. More than that, it makes the nature of the offense ambiguous.
> The drawback is that building an AST now requires a symbol table
Well, yes and no. During AST building a binding expression resolves as an untyped polyadic expression. Only later during the compiler's type attribution phase does a binding expression's structure resolve based on the operand types.
var plumber = contacts.select(contactId)
var date = inputForm.getDateTime()
var building = findLocation(warehouse)
Schedule plumber on date at building
But, honestly, I can't say I personally use it that way ;)
Yes, technically this is a form of backtracking, similar to what a parser does. The key difference is that the search is drastically constrained by the type system: reductions are only attempted where the types actually support a binding operator. Unlike a parser exploring all grammar possibilities, this mechanism prunes most candidates automatically, so the compiler efficiently "solves" the expression rather than blindly exploring every syntactic alternative.
Here is the high-level explanation of the mechanism:
But the short answer is that it’s not parser-style backtracking over a grammar.
The Java parser still produces a normal AST for the sequence of tokens. What happens afterward is a type-directed binding phase where adjacent expressions may bind if their types agree on a binding operator. The compiler effectively reduces the expression by forming larger typed expressions until it reaches a stable form.
The algorithm favors left associativity, but since a type can implement the binding operator as either the left or right operand, the overall structure of the expression can emerge in different ways depending on the participating types.
So rather than exploring grammar productions, the compiler is solving a set of type-compatible reductions across the expression.
The meaning is therefore determined entirely by which types participate in binding e.g., `LocalDate`, `Month`, `Integer`, `Range`, etc. and which reductions they define.
If a competing interpretation exists but the types don’t support the necessary bindings, it simply never forms.
In that sense it behaves less like a traditional parser and more like a typed reduction system layered on top of the Java AST.
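A toy model of that reduction loop, under stated assumptions: this is not manifold's implementation, just a sketch of "adjacent expressions bind only when their types agree on a binding operator", with a left-to-right scan to favor left associativity (Java 16+ for records and pattern matching).

```java
import java.util.ArrayList;
import java.util.List;

// Toy typed-reduction system. Every node may offer a binding operator as
// the left operand (prefixBind) or the right operand (postfixBind);
// returning null means "the types don't support this binding".
interface Expr {
    default Expr prefixBind(Expr right) { return null; }   // this ⊕ right
    default Expr postfixBind(Expr left) { return null; }   // left ⊕ this
}

record Num(double v) implements Expr {}
record Mass(double grams) implements Expr {}

record Unit(double gramsPer) implements Expr {
    // A unit binds a preceding number: `10 kg` -> Mass(10000).
    public Expr postfixBind(Expr left) {
        return (left instanceof Num n) ? new Mass(n.v() * gramsPer) : null;
    }
}

class Reducer {
    // Repeatedly scan left-to-right and reduce the first adjacent pair
    // whose types support a binding; competing groupings whose types
    // don't support a binding simply never form.
    static Expr solve(List<Expr> nodes) {
        List<Expr> work = new ArrayList<>(nodes);
        boolean progressed = true;
        while (work.size() > 1 && progressed) {
            progressed = false;
            for (int i = 0; i + 1 < work.size(); i++) {
                Expr a = work.get(i), b = work.get(i + 1);
                Expr bound = a.prefixBind(b);
                if (bound == null) bound = b.postfixBind(a);
                if (bound != null) {
                    work.set(i, bound);
                    work.remove(i + 1);
                    progressed = true;
                    break;
                }
            }
        }
        return work.size() == 1 ? work.get(0) : null;  // null: no stable form
    }
}
```

The real compiler works on typed ASTs during attribution rather than a flat list, but the pruning behavior is the same: reductions are only attempted where the operand types define one.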
The frequency of primarily AI-guided PRs is getting out of hand, particularly the whole-codebase-oriented type: “this PR improves performance, fixes bugs, and refactors random chunks of code.” The cherry on top is the author never attempting to verify the claims of the PR.
A PR should be a focused, cohesive body of code that targets a predetermined, well articulated goal.
Personally, I do not review widely scoped PRs, especially the AI-driven ones. I've wasted far too much time chasing down false-positive claims and reviewing junk code.