Explicit term inference with Scala 3 (scala-lang.org)
91 points by _lbaq on Nov 10, 2020 | 74 comments


A summary of what Scala is great for:

- Safe and highly performant concurrent programming. Scala is miles ahead of most other languages here and pretty much on par with the best ones (such as Haskell)

- ETL / data transformations. Python is a big player here - but for stable and performant data pipelines, I strongly believe Scala is the better choice. For exploratory things, Python has the edge though.

- Actor systems on the JVM. I believe that most "scalable" systems are over-engineered. A well-configured Postgres / Elasticsearch and a machine running your application on good hardware go _very_ far. But sometimes that's not enough, and then Scala offers a great solution with Akka on the JVM. Erlang/Elixir with their runtime are probably even better, but Scala offers the better type system and one can stay in the JVM world if they want.

- In-language custom DSLs. No, seriously: Scala is the best language I know for building typesafe, customizable DSLs within the language, making them almost look like another language (a small sketch follows this list). This is great when less technical folks need to make changes to actual code in an easy way. The link in this post shows some examples, but it goes beyond that.

- Writing glue code. Surprisingly, I think Scala is better than Python and many other languages here, because of the sheer composability of the language and the ways it lets you connect pieces with each other in a typesafe manner. The reason for this lies mostly in the concept of implicits, which enables great power and reusability but also makes the learning curve much steeper.

This list might be a bit subjective and I don't know every language out there, but as the question comes up almost every time, I wanted to list the strengths of Scala in the areas where most other languages can't compete with it.
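To make the DSL point concrete, here is a minimal sketch of the kind of in-language DSL I mean, built from nothing but ordinary methods, by-name parameters and an implicit class (all the names here are made up for illustration):

    // A tiny, hypothetical test DSL built from ordinary Scala features.
    object MiniSpec {
      // By-name parameter: the body runs when we choose, not at call time.
      def spec(name: String)(body: => Unit): Unit = {
        println(s"spec: $name")
        body
      }

      // Implicit class adds an infix `shouldBe` to any value.
      implicit class ShouldOps[A](actual: A) {
        def shouldBe(expected: A): Unit =
          assert(actual == expected, s"$actual was not $expected")
      }
    }

    object Example extends App {
      import MiniSpec._

      spec("addition works") {
        (1 + 1) shouldBe 2 // reads like a sentence, but is fully type-checked
      }
    }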


I agree with your first point; Scala allows writing incredibly performant concurrent code. However, if - like me - you can't bear the thought of working with Scala anymore, Rust is an excellent competitor.

Note, I spent around 5 years working with Scala full-time and grew a profound aversion to the language and its ecosystem. It definitely affects my judgment. I found in Rust all the "good parts" of Scala minus the things I resented.


what did you resent?


In no particular order:

- the JVM

- the tooling for writing with VSCode

- SBT

- NullPointerExceptions bubbling up from Java libs

- the split between the functional programming community and the "Scala is just Java without ;" one

- everything related to implicits (creation, resolution, etc) and everyone using them

- the ability to write the same 0-100 loop with 4 different keywords and 3 different styles

- the inability to enforce a coding style properly and consistently with a team of more than 1 engineer

- too many people writing DSLs for no particular reason

- ++, +:, :+, :++, ++=, _+_, ::, :::, _, `, <%, %, %%, %%%, <-, =>, :>, >:, (╯°□°)╯︵ ┻━┻, ., .., ...

- compiler errors


There have been some great improvements in Scala tooling in recent years. Li Haoyi talks about all these new tools in the Hands-On Scala programming book. Some specific responses:

* tooling for writing VSCode: the metals project is great: https://github.com/scalameta/metals

* NullPointerExceptions: those should be handled with Option / Some / None (see the sketch below)

* the inability to enforce a coding style properly - scalafmt is great for this and provides a "Go-like" automatic formatting experience https://scalameta.org/scalafmt/

* SBT - there are now alternatives like Mill: https://github.com/lihaoyi/mill

Glad you're happier with Rust, but know that Scala is a lot better now if you ever come back ;)
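On the NullPointerException point, here's a minimal sketch of wrapping a null-returning Java API at the boundary (legacyLookup is a made-up stand-in for some Java method):

    // Wrap a null-returning Java API at the boundary with Option.
    def legacyLookup(key: String): String =
      if (key == "known") "value" else null // stands in for a Java call

    def safeLookup(key: String): Option[String] =
      Option(legacyLookup(key)) // Option(null) == None, Option(x) == Some(x)

    safeLookup("known").map(_.toUpperCase) // Some("VALUE")
    safeLookup("missing").getOrElse("n/a") // "n/a", no NPE possible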


I wrote Scala for three years or so professionally and agree with most of your points. A couple of comments though that haven’t been addressed by the sibling commenter who also addressed some of your points:

- JVM is JVM I guess. You either love it or you don’t. I personally really enjoyed the access to the ecosystem so that was a major boon for me.

- Gradle is just so much better than SBT in almost every way aside from simplicity. I found building Scala projects with Gradle to be quite straightforward and nearly every project that supports SBT also supports Gradle.

- I agree about the VSCode tooling, but I do want to establish that there are actually really nice tools for writing code in Scala - specifically JetBrains' IntelliJ is quite great. I am not saying your point is any less worthwhile, only that I don't want someone to read your comment and take away that there are no good code assistance tools for writing Scala.

- Agree with the Java NPE’s. This is annoying

- Entirely agree with implicits. The Scala team either needs to better educate people on how to use them or just tell people not to use them unless it’s a very specific circumstance. I cannot tell you how many times I’ve dived into someone else’s code and spent literally hours trying to coax the compiler to do something that was bizarrely prevented by a poorly thought out usage of implicits.

- it is definitely possible to write “Scava”, and I wouldn’t really recommend a team adopting it without someone having at least a decent background in functional programming. Otherwise you might as well write Java if you are going to write imperatively.

- Yes, I find that the usage of symbols in Scala is a bit excessive, but once you learn what they do it does feel more terse and concise. It doesn’t stray into the illegibility of Perl IMO.

Overall it’s a nice language, fun to write in, but a bit frustrating at times. I wouldn’t use it personally at home but it does fill a nice niche professionally and it’s absolutely great for writing Spark code.


> Gradle is just so much better than SBT in almost every way aside from simplicity.

I'm picking this up because it was also mentioned in a sibling comment. Let's continue this discussion.

It doesn't matter if we have multiple alternatives to SBT, some more usable than others. Rust has one package manager (+ build tool + distribution tool): Cargo. That's it. Whatever the project you just cloned and whatever the platform you're using, `cargo build` is going to compile it. Now, if every Scala project starts using a different package manager, that's going to be a problem. The best solution would have been to make SBT usable without a prescription from your doctor. It didn't happen. That's okay, programming languages are just tools anyway, and they get replaced too.


The canonicalization does happen with Gradle like with Rust, only at a higher (JVM) level. If you work somewhere that has Java, Groovy, Scala and Clojure projects, being able to canonically clone any of them and build them with ./gradlew clean build is amazing. That forces your org to completely adopt Gradle, but that's not terribly difficult for a technically minded leader to enforce. Of course, that only applies in certain circumstances, and as you imply, using the right tool for the job is the most important thing.


We have recently switched from SBT to Gradle and I'm finding Gradle a lot more painful than SBT.

For starters, SBT had a nice interactive shell, while Gradle takes a quarter of a minute just to list the available tasks, after which it's another quarter minute to get anything else done...


Gradle is undoubtedly slower. However, it's at least 10x more capable of doing various things around building your app, like testing, building images, publishing, etc. Like if I needed to build, fat-jar, test, and publish a multiproject repository, I would almost certainly want to use Gradle over SBT.

But Gradle is probably a bit overkill for a single entry point small app that does one thing and has minimal testing needs (though it can certainly do that!). You wouldn’t bring in a dump truck whenever a pickup would work just fine.


Yeah, the symbolic method names can be quite annoying and (often) make code unreadable. Scalaz especially went really overboard there.

I'm happy to say that the convention is becoming more strict in this regard and symbolic method names are (mostly) discouraged [0].

[0] https://docs.scala-lang.org/style/naming-conventions.html#sy...


- compilation time


Seriously.

I modify a single line, in a single file, and the Scala system takes 2-3 minutes to recompile at work. Apparently it spends over a minute inferring types!

Hello! The types are EXACTLY THE SAME AS ALWAYS. Please cache the whole typing phase as a build artifact based on file/directory hashes. Maybe even make this cache source-control-safe.


I am curious if you are using SBT incremental builds? (putting aside how hard it can be to get and keep SBT working for a project ... I was the guy that had to do that so I know that pain quite well). Because in my experience with really large scala projects that should not be the case, unless you modify a file at the very top of the dependency tree.

We used what I would call the "package object predef" pattern, which basically involved rolling your own predef to have certain functions / extension classes / objects always in scope, through package objects added to each sub-project. And the build would cross-compile both JVM Scala and Scala.js. It would not take more than a second or two unless one of those predef files was touched (which would fire off effectively a full rebuild).
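For anyone unfamiliar, a minimal sketch of that pattern (package and member names invented); since everything under the package sees this file, touching it invalidates the world, hence the full rebuild:

    // file: core/src/main/scala/com/example/package.scala
    // A hand-rolled "predef": everything declared here is in scope for all
    // code in the com.example package, with no explicit import needed.
    package com

    package object example {
      type Result[A] = Either[String, A]

      def ok[A](a: A): Result[A] = Right(a)

      // extension methods available everywhere under com.example
      implicit class StringOps(private val s: String) extends AnyVal {
        def indent(n: Int): String = (" " * n) + s
      }
    }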


Well, neither I nor anybody at the startup I work at (it would seem) has been able to figure it out.

Out of curiosity, how many hours do you think it would take to get the compile time down? I wonder if we could find somebody to help us with that.


At first I would try sbt-tmpfs, and then the largest factor in compile time imho is dependencies. Make sure that you split your project into submodules of semantically valid units. That reduces the amount of code that has to be analyzed. And then make sure that you don't have many unused imports.

In every project I've been on there are some former Eclipse users who are accustomed to collapsing all imports and adding new ones automatically. They never look at their ever-growing list of unused imports that are searched, loaded, and parsed.


It kind of depends on what you are looking for. It sounds like it's improving compilation times while you develop, so you can get error messages from the compiler and so on faster - which SBT incremental compilation should help with.

Are you guys currently using SBT? If so, is incremental compilation not working for some reason (or are you not aware of it?).

In terms of time, it would really depend on the size of the project and what is currently in place. It could be a couple of hours to a few days.

BTW my email is in my hn profile if you want to discuss more privately.


Yeah we are using SBT. I'm off this week, but next week I can try to see if this has legs. I checked your profile but didn't see an email.


Upgrade your sbt version and make sure that whatever file you update is not (transitively) imported/used by every other file. If that doesn't help, check for excessive macro/typelevel usage.

Incremental compile times should be seconds not minutes, something is probably wrong with your setup.


Scala is incredible for web apps.

I can write my React/JS front end [1] and highly concurrent, real-time scalable backend [2] using the same language, build tools etc.

[1] http://slinky.dev

[2] http://zio.dev


>ETL / data transformations. Python is a big player here - but for stable and performant data pipelines, I strongly believe Scala is the better choice. For exploratory things, Python has the edge though.

I disagree. And I've done a lot of Scala ETL work. The issue is that in Scala ETL work your data structures are either untyped (Spark DataFrame), case classes, or typed tuples. If they're untyped then there's little to gain from Scala vs. Python in terms of type safety, and a lot of overhead. If they're case classes then you're moving around a lot of unnecessary data fields and you've got the overhead of making fifty intermediate case classes yourself. If it's typed tuples then it's on you to remember which field is what, as nothing is named.
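A sketch of what I mean, using Spark (the file and field names are invented):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    // Untyped: column names are strings, so typos only fail at runtime.
    val df = spark.read.parquet("people.parquet")
    df.select("nmae") // compiles fine, blows up when the job runs

    // Typed: one case class per shape, checked at compile time - but you
    // end up hand-writing a case class for every intermediate projection.
    case class Person(name: String, age: Int)
    val ds = df.as[Person]
    ds.map(p => p.name.toUpperCase) // p.nmae would not compile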

Some sort of intelligent compile time case class subset generation would have helped the situation immensely. I think frameless (https://github.com/typelevel/frameless) gives you that power but it's a lot of shapeless black magic overhead and I've never seen it used in production (and probably various maximum field limitations and much slower compile times).

edit: Case classes also have (or had) a steeply increasing compile-time memory requirement as you add more fields, which is fun when your data is 200+ fields long. Almost as fun as having to make a 200-field case class to read in a single file of which you only need 50 fields but want to be type safe.

edit2: Scala is popular in ETL because the Hadoop ecosystem is JVM-based and Java used to be atrocious. So Spark (which wanted Hadoop compatibility) picked Scala because it was better than the alternatives. However, nowadays Databricks is investing a lot more into their Python support from what I can tell, and there are Python-native competitors out there (Dask, Ray, etc.).


Another possibility is to use "typesafe heterogeneous containers" as described in Joshua Bloch's "Effective Java", 2nd edition (and probably the 3rd edition). The idea is to use maps with specially constructed keys that give you type-safe access to their accompanying values. That is better than completely untyped: you have a place to document stuff at the declaration of those keys, and you can easily create runtime checks that check for the presence of a set of well-defined keys in the map. The fact that they are described in a Java book does not mean that they aren't a nice pattern in Scala and several other statically typed languages with generics.
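A minimal sketch of the pattern in Scala (the names are my own):

    // Typesafe heterogeneous container: the key carries the value's type.
    final class Key[A](val name: String)

    final class TypedRow private (m: Map[Key[_], Any]) {
      def put[A](k: Key[A], v: A): TypedRow = new TypedRow(m + (k -> v))
      // The cast is safe: put ties each key's type to its value's type.
      def get[A](k: Key[A]): Option[A] = m.get(k).map(_.asInstanceOf[A])
    }
    object TypedRow { val empty = new TypedRow(Map.empty) }

    val name = new Key[String]("name")
    val age  = new Key[Int]("age")

    val row = TypedRow.empty.put(name, "Ada").put(age, 36)
    row.get(name) // Some("Ada"), typed Option[String], no cast at the call site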


> they're case classes then you're moving around a lot of unnecessary data fields and you've got the overhead of making fifty intermediate case classes yourself.

Haskell developer here... What about case classes and lenses? Do they solve this?


As I understand it, lenses don't change the underlying data structure. For ETL you need a way to basically say "the code only uses fields X, Y and Z, so we will only load X, Y and Z during runtime" - automatically, based on usage, without having to keep updating your lens definition. Modern on-disk file formats are columnar, so they can very efficiently read subsets of the data. If your data has 200 columns then reading the 199 unnecessary ones can be very slow.

They could help with the intermediate data structures, but some of them aren't subsets or trivial derivatives. So you really need an inline way to create single-use case classes. I think frameless in Scala can do some of this for standard transformations, but that requires the black magic of shapeless.

Spark in Python (and the untyped DataFrame API in Scala) compiles everything internally before running it to achieve the above. So it's trivial to have unit tests on empty data structures which "type check" your Spark code.
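A sketch of that kind of test (schema and transformation invented): build the pipeline against an empty DataFrame with the production schema, so analysis errors surface in a fast unit test rather than a cluster run:

    import org.apache.spark.sql.{DataFrame, Row, SparkSession}
    import org.apache.spark.sql.functions.col
    import org.apache.spark.sql.types._

    val spark = SparkSession.builder().master("local[*]").getOrCreate()

    val schema = StructType(Seq(
      StructField("name", StringType),
      StructField("age", IntegerType)
    ))

    // The transformation under test only names the columns it needs,
    // so columnar formats can skip reading everything else.
    def adults(df: DataFrame): DataFrame = df.filter(col("age") >= 18)

    // Empty DataFrame with the production schema; explain() forces
    // analysis, so a misspelled column fails here, not in production.
    val empty = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)
    adults(empty).explain()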


"For explorative things, python has the edge" -- I'm a lazy programmer, how can I get contextual auto-complete like I can with Scala (and other type safe languages)?

From my experience, auto-complete for Python shows everything (like JavaScript's), so I end up googling.


From my experience, for exploration of data, you usually use the functionality you already know and you visualize things as much as possible. So there is not a huge advantage for Scala in terms of autocompletion and such, but there is quite a bit of advantage for Python in terms of batteries included, visualisation, and speed of execution.

But yeah, I get your point, I find it annoying too.


For exploratory work you're in a notebook so your auto-complete usually has access to the exact object instances in question. There's also some good machine learning based autocomplete systems out there nowadays.


Well said. In some of my older comments I've shat on Scala (compared to Haskell) but after being forced to write Python for a day job, I've realized how good I had it with Scala. Luckily I'll be writing some again soon.


This is a nice write-up; it's been some time since I've written Scala, and Scala 3 looks promising.

Though, I am curious if there is anyone else who shares my mindset on Scala.

In a corporate environment, I found it to be an extremely expressive and powerful language but that power comes at a grave price, which I'll try to summarize:

- it's difficult to understand other people's code compared to other languages (e.g. Go)

- it's so very implicit that you end up having to hold a lot of state in your head to understand what's going on.

- the language is basically impossible to read effectively without IDE features.

I usually enjoy reading most code bases, but Scala is downright painful to read in plaintext. You end up having to be a compiler.


In a good Scala codebase you use the implicitness to put the business logic front-and-center and push secondary concerns into the background, but without making them completely invisible. The plaintext becomes something akin to the DSLs that people write in e.g. Ruby (using metaclasses and other such magic), but when you open it in an IDE (or compile it in your head) all those extra details become visible to you in a reliable way, rather than having to guess what a given piece of code actually does.

IMO modern language designers should be making IDE-first languages - after all, most serious programmers do use IDEs (even if they build that IDE within Emacs or Vim). The problems with "visual languages" are that without a textual representation you can't meaningfully diff/merge/blame, not that using the GUI is an inherently bad idea. With Scala you get the best of both worlds: it's textual enough that version control works properly, but you have a standard-ish way of folding, hovering etc. that means you can "zoom in" on the details of unfamiliar code but also "zoom out" to get a clear overview, in a way that few other languages manage.


Agree. It's a shame because I do believe it is very much cultural - it is possible to write very clear, expressive and safe Scala code. But it doesn't seem the community has settled on that as a cultural idiom. Instead they frequently favor maxing out the power of the language and type system at every opportunity and that means you often encounter lines of code that look like a string of hieroglyphics, or functional programming concepts and higher order types thrown at basic simple logic that could be imperatively written with much more clarity.


Akka HTTP being the main offender imo. If something as braindead simple as "routing HTTP requests" can't be expressed without the magnet pattern / 5+ nested levels of curly braces / odd compile errors / magical punctuation functions / zero debuggability, then your library has failed.

If I can't understand the type of a route (or a db-query) because it's a 5-level nested type, how am I going to write a function that takes one as a parameter?

</rant>


I agree with most of what you said, but for the last point, I think I can add a different perspective.

Because, once I got used to the (pure) functional programming style, I found it more difficult to mix this style with imperative code. It's almost always easier to, for example, use the state monad than to write it with mutation. This is probably quite subjective and depends on what you are used to.

For me, I make an exception for tests. E.g. when I mock my key-value database with an in-memory map, I choose a mutable map. But I know that some people prefer FP style even for tests.
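To make the contrast concrete, a hand-rolled sketch (no library assumed) of threading key-value state functionally versus the mutable test double:

    // Functional: each step takes the state and returns the new state.
    type KV = Map[String, Int]

    def put(k: String, v: Int): KV => (KV, Unit) = kv => (kv + (k -> v), ())
    def get(k: String): KV => (KV, Option[Int]) = kv => (kv, kv.get(k))

    val (s1, _) = put("hits", 1)(Map.empty)
    val (_, n)  = get("hits")(s1) // n == Some(1); s1 itself never changed

    // Imperative: what I reach for when mocking the database in a test.
    val mockDb = scala.collection.mutable.Map.empty[String, Int]
    mockDb("hits") = 1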


> - it's so very implicit that you end up having to hold a lot of state in your head to understand what's going on.

I wouldn't use the word "state", since that has connotations that implicits are somehow changing at runtime. It's certainly true that scope is much more important and non-obvious than explicit variable names.

Type aliases can also get in the way when thinking about implicits (e.g. we might want an implicit 'Foo[Bar]', but if those are aliases then the compiler might start chaining Maps of Lists of Strings of whatever to resolve it).

Implicit conversions definitely seem like they could cause headaches (e.g. performing 'String' operations on a 'List[Char]' and having it implicitly and silently converted). I've only been using Scala for the last year, and thankfully implicit conversions seem to be avoided these days!


Scala is my favorite language, even though I don't work with it professionally anymore. And I've seen it all. And yes, bad code in Scala is probably harder to read than bad code in other languages. It can be outright painful. However, good clean code in Scala is easier to read IMO. I think it takes a strong culture and strict standards for an organization to adopt understandable Scala. Unfortunately, those organizations are rare and there will be people who will likely abuse some of the "clever" features of Scala. I've been fortunate to be in those organizations with good clean code (and places with unreadable Scala code).


Completely agree. I got to work with a Scala codebase for a few months, and those few months were painful indeed. It was well-nigh impossible to make sense of any pre-existing code.

I'm no stranger to diving into a new codebase and trawling my way through spaghetti to make sense of it ... but I absolutely do not want to do this with a Scala codebase ever again.


> it's difficult to understand other people's code compared to other languages

I think it depends a lot on the code. Scala 2 gives you a lot of guns to shoot yourself in the foot (see: implicit, as discussed here) - but you can also use it in its mostly-functional form, with immutable values, and then it can become very readable. Granted - I've always been using IntelliJ but I could read the scala code on Github too, just fine.


To make sure everyone can understand each other's code in a corporate environment, use strict linting (scalastyle, wartremover) and formatting (scalafmt).


This is necessary, but definitely not sufficient, I'd think.

Oftentimes one stares at a mile-long chain of methods on a list and is left wondering "exactly what did the original author want to do here, and why are things breaking"? In the end, the only way to find out part of the answer is to exercise the relevant codepath via some test data (if you're lucky) and use IntelliJ's excellent debugger to help you move forward. Even this just reveals the "what", not the intent.

No amount of consistent linting or formatting can make this better. And no, ExpressivelyVerboselyNamedFunctionsInAidOfSelfDocumentingCode don't help either.


After a couple of weeks of dabbling between Scala and F# to migrate Python and Node.js codebases (at a scaling startup serving big enterprises), I ended up choosing F#.

Scala is a beast and the package manager + build tools were giving me headaches. I’m optimistic about the current trajectory with Scala 3 and simplifying the language.


I think F# is really underused - it fills a very similar spot in the trade-off space and, from what I have read, has an excellent design.


My main beef with F# is the .NET library -- none of it is written from a functional perspective, which means you're constantly mixing functional and object-oriented paradigms.


Unfortunately Scala suffers a similar story on the JVM with Java libraries.


It lacks type classes and HKTs, unfortunately. But the ML-like way of defining types and data is top notch. Even with Scala 3, F#'s way of defining types looks cleaner to me.


> Scala is a beast and the package manager + build tools were giving me headaches.

Ignore them and use Maven. It's much better documented, consistent, and backwards-compatible. I struggle to understand why SBT ever gained any popularity, and it certainly should never have been recommended to newcomers.


SBT is easy to use at the beginning, simply because the build files are 1/10 the length of a Maven XML build description, the console output looks better, the build REPL has lots of commands that make things easier, and the simple projects just work fine.

Once you start to dig deeper, you find out that setting up multimodule projects is still easier in SBT. And if you decide to dig deep and really learn the build tool, it doesn't really matter which one you choose. But devs that chose SBT as their Scala build tool will probably keep using it.


> SBT is easy to use at the beginning, simply because the build files are 1/10 the length of a Maven XML build description, the console output looks better, the build REPL has lots of commands that make things easier, and the simple projects just work fine.

The build REPL has lots of commands that aren't documented, and when you search for tutorials the commands have changed (e.g. runMain is apparently now run-main, and if you write runMain you get the wonderfully helpful error "Expected ';'"). There's nothing so simple as Maven's list of phases https://maven.apache.org/guides/introduction/introduction-to... ; instead each project has its own slightly different set of build commands.

> Once you start to dig deeper, you find out that setting up multimodule projects is still easier in SBT. And if you decide to dig deep and really learn the build tool, it doesn't really matter which one you choose.

Disagree. Maven multimodule projects are very simple: you can have a module that contains a list of submodules to build and... that's it. Importantly, every submodule acts exactly like a normal top-level project, so if you only ever want to work on one module you don't have to understand anything about the bigger project. And moving between single-module and multi-module projects isn't a big conceptual leap.

SBT multimodule projects are not only their own unique thing, they make the above problem even more confusing, because each project has its own slightly different set of build commands, and each module has a slightly different subset of them that works.

And even if you really learn it in detail, SBT is still awful on multiple levels. https://www.lihaoyi.com/post/SowhatswrongwithSBT.html talks about the deeper problems.


"runMain" is still "runMain", I don't know where that comes from. There are lots of available commands indeed, due to the nature of SBT, but a basic workflow always uses the same commands: compile, test, run. Any extra tasks used usually come from plugins, just like in Maven. And reading documentation is needed for configuring and running those, just like in Maven. Projects should document why each plugin is included, and how to do things, because every project is indeed different. The difference is the extra configuration needed by each plugin is about 1/10 the size.

Maven multimodule projects are strange. Yes, modules can work without knowledge of what's above them, but that's not usually what we want to do, because it means lots of duplication (e.g. dependencies) which can then lead to inconsistencies. And if the common settings are defined within the parent project, it stops being self-contained. SBT accepts this reality and that's why multimodule projects are a thing. And having multiple subprojects in SBT doesn't mean that suddenly some of them work and some don't. By default, every project has the same settings (save for the obvious ones, like where the sources are). Any extra settings and task definitions just tell SBT to do extra things when certain tasks are run. One quick example: if you enable the integration test configuration just for some subprojects, the ones that don't have it don't start to fail. They correctly report a "successful, 0 tests run, 0 tests failed".

By the way: creating a separate integration test config in Maven is painful, up to the point that people recommend just adding a new subproject that contains them. Yay, more nesting!
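For reference, a minimal sketch of an SBT multimodule build (project names invented); the shared settings are applied to each subproject explicitly instead of being inherited from a parent POM:

    // build.sbt (sketch)
    val commonSettings = Seq(
      scalaVersion := "2.13.3",
      scalacOptions += "-deprecation"
    )

    lazy val core = project
      .settings(commonSettings)

    lazy val api = project
      .settings(commonSettings)
      .dependsOn(core) // compile-time dependency between modules

    lazy val root = (project in file("."))
      .aggregate(core, api) // "sbt test" at the root runs tests in both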

Regarding SBT's current shortcomings: that article correctly points out lots of issues, like the lack of tooling. To this day, IntelliJ still has issues downloading the correct SBT sources. And when they work, there's no easy way to find the actual code that does the work. That should be improved, because for lots of plugins, a quick look at the source is just what's needed when working on the build. Namespacing is a problem too. The internal complexity of the core SBT concepts is a common complaint too, but that's not as solvable as the other two things.

All in all, Maven and SBT are different tools that focus on different issues, and choose different sides in the consistency vs flexibility tradeoff.


> "runMain" is still "runMain", I don't know where that comes from.

Ah, I got it backwards: old tutorials say to use "run-main", and if you do run-main now you get "Expected ';'".

> There are lots of available commands indeed, due to the nature of SBT, but a basic workflow always uses the same commands: compile, test, run. Any extra tasks usually come from plugins, just like in Maven. And reading documentation is needed for configuring and running those, just like in Maven.

But in Maven you don't need to read any documentation if you're just building the project, because every project has the exact same lifecycle. If you want to add or remove plugins to make the build do something different then you need to read the documentation of those plugins, sure, but you don't have to know anything about the plugins if you're just working on the code. E.g. if an SBT project is packaged for docker then you'll have to run some docker-plugin-specific command to do it (and you've got no way to discover which submodule you're supposed to run it in), whereas if a maven project is packaged for docker then you just run "mvn deploy" as usual.

> And if the common settings are defined within the parent project, it stops being self-contained. SBT accepts this reality and that's why multimodule projects are a thing.

Parent projects work consistently whether or not you're using a multimodule project, which is another problem with SBT - it's really confusing to share parts of a build definition between more than one project, to the point that most people copy/paste instead. With maven you can do a very natural, gradual progression from single module -> multi-module project -> multi-module project where the parent is its own module -> independent projects using a shared parent (e.g. an organisation-level parent). It's another case of a hierarchy working much better than a grid.

> By the way: creating a separate integration test config in Maven is painful, up to the point that people recommend just adding a new subproject that contains them. Yay, more nesting!

Think of the people who come to join your project! I've seen SBT modules with 5 different scala source directories and no obvious relationship between what depends on what, and good luck getting any IDE to understand whether test depends on integration-test or vice versa (most will give up and just build everything together, which is fine until you add something that works in the IDE and then errors when you build it with SBT).

A separate submodule is a much better approach - tools and people are much better at handling "module A depends on module B" than "this source folder depends on that source folder".


Although the silly "Expected ';'" is still there, the newer SBT versions show better output; there's even a "did you mean" feature.

Regarding "discovering which submodule you're supposed to run commands in": this is a non-issue. Commands are either run under the obvious subproject (e.g. "api/run", "business-logic/test"), or are run as top-level commands. This is true 99% of the time. I have a fairly complex project open right now, and I just run "docker:publishLocal" to create two separate Docker images, one for each subproject that includes the appropriate plugin. I think this is better than overloading a single command. SBT projects usually have a small readme describing what is the command for any specific task you're supposed to do. In Maven, you know the commands, but you need to read to ensure what they do (does "deploy" push to an artifact repository, does it create a container image, does it create a deb package, or does it create a standalone jar?).

The use case where a multimodule project becomes multiple projects is more gradual in Maven, I fully agree on this. The problem, as I said, lies in that child Maven projects are not self-contained. They usually depend on information declared in the parent builds. This breaks the assumption that people can work on a project without worrying about the parent project.

Finally, source dependencies are explicit in SBT save for the base case "test depends on main". Everything else must be specified, so it's either there in the build file, or added as part of a build plugin. It can also be checked from the REPL. And building from the IDE is a dumb mistake if the IDE doesn't use the correct build tool underneath: the build is there for a reason and the IDE shouldn't bypass it.


> does "deploy" push to an artifact repository, does it create a container image, does it create a deb package, or does it create a standalone jar?

"package" creates some kind of package, "deploy" pushes it to some kind of artifact repository. The details of what kind of package and what kind of repository will vary from project to project, but you don't need to know them to start work on the project. In theory there could be projects out there that do surprising things (after all, nothing actually stops you from configuring your build to push a container image to a repository when someone runs "mvn test"), but the overwhelming majority of projects find a way to fit themselves into the maven lifecycle, and it really reduces the "where do I start" effect when you check out a new project: I don't have to read the build file or hunt for documentation, I can run "mvn install" and see what it does, and be confident that that's "how you build this project".

> The problem, as I said, lies in that child Maven projects are not self-contained. They usually depend on information declared in the parent builds. This breaks the assumption that people can work on a project without worrying about the parent project.

Well, SBT projects also come with a bunch of default tasks that aren't defined explicitly in their project definition; some of them are defined by plugins and some of them are defined... somewhere (I genuinely don't know where the tasks that are available by default come from, or how to see a list of all of them except by using the interactive commands). So an SBT project is not self-contained in that sense either. In maven anything that's not explicitly defined in the project is coming from its parent pom, which is either the default parent pom or an explicitly specified one, and the default parent pom is a real pom that you can look at that follows the normal rules.

So you do have to understand the concept of a parent pom, but it's not an extra thing that you have to understand, because the default parent and any parent pom you're using in a multi-module project work exactly the same way.

> And building from the IDE is a dumb mistake if the IDE doesn't use the correct build tool underneath: the build is there for a reason and the IDE shouldn't bypass it.

Yes and no: the IDE has a bunch of knowledge of its own that the build tool doesn't, such as which files the user has changed, or which specific test the user has asked to run, or which import the user wants to add a dependency for. So I don't think the idea of the IDE dumbly invoking the build tool and letting it do its thing works (particularly in a language like Scala where you have to rely heavily on incremental compilation if you don't want enormous build times); instead there needs to be a deeper integration where the build tool and the IDE share a structured model of the project, and the IDE can perform a build via that model (effectively embedding part of the build tool as a library, if you like). Likewise I don't want to invoke a separate build REPL to understand the project definition, I want to be able to explore it within the IDE.


SBT's deeper problems are the 4-dimensional data model and the 3-layer meta-interpretation model. Probably not the best option for really complicated builds, but it's great for simple builds and I appreciate the tireless open source work the folks do to keep it maintained!


Those deeper problems are described here: https://www.lihaoyi.com/post/SowhatswrongwithSBT.html


Maven had some real issues which left Scala developers unsatisfied. So they created SBT which solved none of those and introduced some new ones.

I believe the tagline/motto of every build tool should be “Don’t worry; You’ll get used to it.”


Personally, I wouldn't recommend Maven to beginners as their first build tool. I would recommend that beginners eventually learn Maven, as it is one of the de facto standards in JVM dependency management, but the documentation is highly confusing and the learning curve is very steep. Beginners will likely struggle with Maven concepts.


I very much disagree; the only people I've known to have trouble with Maven are experienced developers who expected their build tool to work in a very specific way (i.e. that they would define a bunch of commands for it to execute and tell it what to do). For a beginner who comes to it with no preconceptions, maven is very easy: you fill in the parts your XML editor tells you to, list your dependencies, and then run one of their short list of phases for what you want to do: https://maven.apache.org/guides/introduction/introduction-to... . What could be simpler?


The things I want to do, are they ‘goals’, ‘tasks’, ‘phases’ or ‘executions’? And do I manage my dependencies in ‘dependencies’ or ‘dependencyManagement’? And how is it that none of the ‘dependency’ goals can tell me the origin of the version - I have to use ‘help:effectivePom’?

Maven is powerful and people forget how innovative it was. But it’s not easy.


> The things I want to do, are they ‘goals’, ‘tasks’, ‘phases’ or ‘executions’?

The page I linked to is pretty clear about which is which and what they do.

> And do I manage my dependencies in ‘dependencies’ or ‘dependencyManagement’?

You list your dependencies in dependencies, you manage them in dependencyManagement. But it's clear in the documentation, and if you're looking at a tutorial from 10 years ago then it'll still be accurate.

There are confusing things in maven. But documentation and backward-compatibility go a long way, and I really think the fixed build lifecycle puts it head and shoulders above a lot of alternatives; the cost/benefit of each project having a slightly different set of build commands just doesn't stack up.


Or Gradle, or Bazel. Scala doesn't require SBT, and the other build tools work well for mixed codebases (Scala, Java, Kotlin, JavaScript).


One of the reasons I chose F#. Too many choices for a Scala novice. Seems to be a pervasive pattern.

I think Scala was actually a better choice since my company mainly does AI/ML (python) and Scala has some clout in data science. But I’m only migrating the application layer so not a deal breaker.

I was leaning more towards Scala going into the “exploration phase”.

——————————

Side note. Using typescript on front end, f# on back, vscode, and GitHub for repo + CI/CD. Feels nice using a single vendor (Microsoft) for dev, even though each tool is stand-alone.


Bazel is a nice build tool, but AFAIK it still doesn't support Scala 2.13, released mid-2019: https://github.com/bazelbuild/rules_scala/issues/809


I think Lucid (the charts, not EVs) use these internally: https://github.com/higherkindness/rules_scala

And they support Scala 2.13


Interesting. Here is a comparison of the two rules_scala implementations: https://github.com/higherkindness/rules_scala/issues/261


> Avoiding repetition with contextual parameters

This is the type of stuff that drove me away from Scala.

Why implement a feature that optimises for code writing? How does my editor let me know that function receives that parameter without me going "???" and having to go into its definition?

Implicit conversions fall into the same category, of making code pretty to look at, quick to write, and a nightmare to understand when you're new to a codebase.

Shame, really. The good parts of Scala are wonderful.


It's not meant to replace the stuff that you'd use a normal function call for, it's meant to replace the stuff that you'd do completely invisibly.

Every real-world Java codebase ends up using massive amounts of incomprehensible magic for e.g. DI, transaction management, web request mapping, serialisation. Every real-world Python/Ruby codebase ends up using magic proxies, metaclasses, method_missing or other such stuff. Scala is the only language where I've been able to find real, enterprise-scale codebases written in 100% plain old code without any AOP-style magic, because implicits let the language be expressive enough that you can avoid those things without your business logic getting drowned in secondary concerns.
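In Scala 3 terms (the contextual parameters from the article), a minimal sketch of what that looks like; RequestContext and the method names are invented:

    // Scala 3: a `using` parameter is filled in from a `given` in scope.
    case class RequestContext(userId: String, traceId: String)

    def audit(msg: String)(using ctx: RequestContext): Unit =
      println(s"[${ctx.traceId}] ${ctx.userId}: $msg")

    def transfer(amount: BigDecimal)(using RequestContext): Unit =
      audit(s"transferring $amount") // context flows through invisibly

    @main def run(): Unit =
      given RequestContext = RequestContext("alice", "req-42")
      transfer(BigDecimal(100)) // the compiler supplies the context
      transfer(BigDecimal(5))(using RequestContext("bob", "req-43")) // or pass it explicitly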


> How does my editor let me know that function receives that parameter without me going "???" and having to go into its definition?

IntelliJ has been able to do that for a while now.

> Implicit conversions fall into the same category, of making code pretty to look at, quick to write, and a nightmare to understand when you're new to a codebase.

Don't do it, and don't allow it in your codebase.


> IntelliJ has been able to do that for a while now.

It has been 4 years since I did Scala everyday, back then it was hit or miss. Good to know things have improved.

> Don't do it, and don't allow it in your codebase.

I get your point, but that doesn't solve the issue if the community is down for using these features.


Both JetBrains and the Scala Center have invested in the tooling experience in the past 2-3 years, you should give it a try again!

And I don't think I've encountered implicit conversions that often in the ecosystem.


The community isn't very fond of implicit conversions. That's why you get warnings unless you enable a compiler flag, and why Scala 3 downgrades them from magic keyword definitions to avoidable stdlib types.
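Concretely, in Scala 3 an implicit conversion is an ordinary given instance of the stdlib's scala.Conversion type instead of a magic implicit def (UserId here is invented):

    // Scala 3: implicit conversions are plain values of scala.Conversion.
    import scala.language.implicitConversions // opt-in, or you get a warning

    case class UserId(value: Long)

    given Conversion[Long, UserId] = UserId(_)

    def lookup(id: UserId): String = s"user-${id.value}"

    val label = lookup(42L) // applied via the given Conversion[Long, UserId]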


Cleanest way to extend functionality, if IDEs are able to catch up and provide discoverability. Big win on syntax.
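Assuming the parent means Scala 3's extension methods, a minimal sketch (truncate is invented):

    // Scala 3 extension method: adds truncate to String, no wrapper class.
    extension (s: String)
      def truncate(n: Int): String =
        if s.length <= n then s else s.take(n - 1) + "…"

    @main def demo(): Unit =
      println("contextual abstraction".truncate(10)) // prints "contextua…"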


Has the Scala hype train passed?


Yep, hype has peaked. Here's a great article on The Death of Hype: What's Next for Scala: https://www.lihaoyi.com/post/TheDeathofHypeWhatsNextforScala...


Haoyi, is that you?


what makes you say that? (genuinely curious, but also I agree a little bit)



