Hacker News | a0's comments

> Common reaction from them I receive very often "We never knew that JS/TS can do that"

Maybe it's about what you cannot do in ReasonML, and not about what you can do in TypeScript? :)


ClojureScript is beautiful! If only it had types :)


Rich Hickey is very convincing... he's just great at communicating in presentations, and he has plenty of good arguments why dynamic typing is the best for Clojure (and essentially for real-world problem solving).

My summary is: the extra burdens of static typing - tight coupling, code verbosity, custom type propagation for dealing with sparse or varying data - just aren't worth it.

Also, I don't think there's much real evidence that static typing prevents production bugs. If you're doing decent testing, you'll catch those problems. And if your solution has fewer LOC and is easier to reason about, you're less likely to make type mistakes.


I think I don't mind the lack of types as much in immutable languages. Clojure and Erlang are like that. But they're also both simple languages with simple semantics.

One issue I have with Clojure is when I have to work with nested datatypes to perform transformations. It's just too hard to keep the shape of that data in your head. Types help with that.

Another important thing is modelling business domains with types. By this I really just mean record and variant types and not some advanced type-fu. It really helps seeing what kind of data your application manages.
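A hypothetical TypeScript sketch of what I mean (`Customer` and `PaymentMethod` are invented names) - just a record and a variant, no advanced type-fu:

```typescript
// A record type: documents the shape of the data the app manages.
interface Customer {
  id: number;
  name: string;
}

// A variant (discriminated union): the domain has exactly these cases.
type PaymentMethod =
  | { kind: "card"; last4: string }
  | { kind: "invoice"; dueDays: number };

// The switch is exhaustiveness-checked: drop a case and the function
// no longer returns `string` on every path, so it fails to compile.
function describePayment(method: PaymentMethod): string {
  switch (method.kind) {
    case "card":
      return `card ending in ${method.last4}`;
    case "invoice":
      return `invoice due in ${method.dueDays} days`;
  }
}
```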


> One issue I have with Clojure is when I have to work with nested datatypes to perform transformations

Out of the many "excuses" to dislike Clojure I've heard over the years, this is the first time I've seen this one, and it sounds wrong for a few reasons. Dealing with nested transformations is not harder or easier in Clojure; it is just different, due to immutable data structures.

Besides, if you need to deal with deeply nested data perhaps it has to be broken down into smaller pieces. Maps are extremely composable in Clojure.

There are libraries that can help you with transformations: you can "walk" the structure, you can use zippers, you can use Specter (a library that I haven't found a use for in over three years of writing Clojure).

Modeling business domains with types is a valid point, though. However, I find that this is exactly why using Clojure to build real business apps is so much simpler and faster: you are not restricted by the boundaries of whatever type system you use, and you don't get stuck in analysis paralysis trying to prototype things in an overly complex type system.

At the same time, you don't necessarily have to throw the types away. Type systems do help and absolutely have certain benefits, and Clojure has a pretty nice answer in clojure.spec. With Spec you can, for example, specify that your function returns a specific shape of data. You can use specs to validate inputs, produce human-readable error messages, and generate fake data for property-based testing. Specs can be shared between different systems, between front-end and back-end, etc.


>One issue I have with Clojure is when I have to work with nested datatypes to perform transformations. It's just too hard to keep the shape of that data in your head. Types help with that.

And in Ruby or Java, I have the same problem with objects.

No matter what your method of interaction, keeping up with levels of abstraction is hard.


ClojureScript HAS types. It has a pretty awesome type system: clojure.spec. You can use Spec to enforce all sorts of things in your functions. You can use it for data validation, for data generation, for human-readable error messages, etc. Specs can be shared between server and front-end.


then it won't be beautiful anymore :D


There was a workshop at the last ReasonLDN meetup. Here are the slides that were used, they include quickstart instructions: https://docs.google.com/presentation/d/1wuAveSHslRfKShD6SiVd...

If I recall correctly, everyone managed to set things up swiftly, with the exception of one Windows user who had some editor issues.


One place where clean build times do matter is in the CI. Having very fast builds and tests is great for speedy deployments.


Why? Does your CI do a clean build for every test?


Most CIs do clean builds for every PR, merge, and deploy ... that adds up to a lot of time.


This kind of comment makes me really sad. Most ReasonML developers are actually "JavaScript developers".

ReasonML was specifically designed for JavaScript developers. Being JavaScript-friendly is in its DNA, really. I even see ReasonML as a language that was specifically designed to work with React.

I can understand why you would feel this way though. I think the main problem is that ReasonML introduces new concepts that simply do not exist in JavaScript. Learning those concepts is hard. But if it wasn't hard, would you be learning anything or just writing JavaScript in a slightly different way?

Are there any specific examples of things that look too "distant" from JavaScript?

I'll finish this by saying that ReasonML will make you a better JavaScript developer. As will probably learning any other language that challenges the way you think.


> I'll finish this by saying that ReasonML will make you a better JavaScript developer. As will probably learning any other language that challenges the way you think.

I wholeheartedly agree! But a lot of people don't want to be challenged by the language when there are already plenty of other challenges in their personal and professional lives.

With TypeScript you don't need to think half as much, and the thing your boss generally cares about (whether the feature shipped) gets done.


> With TypeScript you don't need to think half as much

That seems a bit dangerous, to be honest. TypeScript has an unsound type system. If you don't think carefully, you might end up shipping runtime type errors.
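For example, here's a minimal sketch of one well-known hole (names invented): mutable arrays are covariant in TypeScript, so the checker accepts code that fails at runtime.

```typescript
interface Dog {
  name: string;
  bark(): string;
}

const dogs: Dog[] = [];
// Arrays are covariant, so aliasing Dog[] as { name }[] type-checks...
const animals: { name: string }[] = dogs;
// ...and pushing a plain { name } into it type-checks too.
animals.push({ name: "Felix" });

// The type system now believes dogs[0] is a Dog, but at runtime it has
// no bark method: calling it would throw a TypeError.
const first = dogs[0];
const canBark = typeof first.bark === "function"; // false
```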


It's ok. There's pretty good support for linting and code completion. The plugins are fast and have a rich feature set.

Take a look at this: https://marketplace.visualstudio.com/items?itemName=jaredly....

The only problem that I have found is that in some situations it doesn't report errors correctly when adding new dependencies. Refreshing the window or rebuilding the project usually helps.

On the other hand, the compiler feels very solid. It can be annoying in the beginning because of how disciplined the code needs to be, and the error messages can be hard to understand.

After getting used to that, the developer experience is really smooth, especially during complicated refactoring: the compiler and the IDE make things almost boring.


Many modern languages support type inference to some degree. The special thing about ReasonML is that the type inference is much more robust and complete.

In practice you don't need to provide any type annotations at all at any point in your program, and you should still get the same safety benefits.

Every value in your code will automatically have one single type assigned to it based on how it is used. When the compiler finds contradictions, it will let you know. TypeScript, on the other hand, will try to unify conflicting uses into some sort of `any`, which is probably not what you want.
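A small sketch of that contrast (hedged - the exact inferred type depends on compiler settings):

```typescript
// TypeScript infers an implicit any[] (an "evolving" array type under
// strict settings) and widens it to accommodate conflicting uses,
// rather than reporting a contradiction.
let mixed = [];
mixed.push(1);
mixed.push("two"); // accepted: the element type just widens

// An ML-style checker would instead infer `int list` from the first
// push and flag the second push as a type error.
```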


I understand not everyone will agree, but I find this to be a deficiency of ReasonML compared to strict TS.

Types offer inline documentation to developers that come after you. They should make it easier to establish and maintain a mental model.

Obviously, this requires buy-in from all parties, but if the tool itself doesn't encourage buy-in, it is a bug, not a feature.


One problem with how people sell ReasonML's type inference is that it doesn't depict the actual developer experience.

The first thing to note is that you do interact with types to build a mental model! The editor will show them to you as you move your cursor around the code. It will do that for every single value. Just move the cursor over that `requestContext` argument and voilà, it'll show you the type.

Another important aspect is that you still have to define types. Have a person object with a bunch of fields? You do have to type annotate every single field. Have multiple actions to handle in reducers? You need to tell ReasonML in advance what those are using a variant type.
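For comparison, the reducer case has a rough TypeScript analogue in discriminated unions (action names invented here); the point stands in both languages - the set of actions has to be declared up front:

```typescript
// The full set of actions must be spelled out in advance, just like a
// ReasonML variant type.
type Action =
  | { type: "increment"; by: number }
  | { type: "reset" };

function reduce(state: number, action: Action): number {
  switch (action.type) {
    case "increment":
      return state + action.by;
    case "reset":
      return 0;
  }
}
```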

And finally, if you care about composition/abstraction you should provide interfaces for modules. This requires writing types for all functions explicitly to ensure that you are exposing a correct protocol. This is optional, but it helps both the author of the code and the consumers.

Type inference is just a nice-to-have for the low-level implementation details.


Nothing stops you using type annotations in ML, and good developers will do so where it is useful. But it doesn't require them in cases where they are redundant, for instance when you have a parameter `foo` of type `Foo`.


Why do people care about this? Writing type annotations is not that much work, but they make your code more readable.

I can see the argument for it when you have unspellable nested generic types, or when you don't want to write the same thing on both sides of an assignment. Simple type inference like in TS gives you that.

Having really smart type inference makes your compiler slower and more complicated. What's the real-world benefit?

> Typescript will try to unify the type to some sort of Any, which is probably not what you want.

Not really, unless you push the type inference beyond its limits, in which case you should just change your code.


Many interesting points! :)

> Why do people care about this? Writing type annotations is not that much work (...)

I personally care about this when I'm prototyping something. Not having to write types means that I can simply write what's on my mind. The benefit of good type inference is that the compiler can still help me highlight any mistakes I made during this process.

Another benefit is that some production code can get very complex. Having to type every single value is just too cumbersome and distracting. It contributes to boilerplate and increases cognitive load, in my opinion.

> Having really smart type inference makes your compiler slower and more complicated.

I would actually disagree with this. First of all, ReasonML's compiler is absurdly fast. No, seriously, try it. Sometimes I go and double-check the generated JS code just to be sure it actually did anything.

Regarding the "more complicated" part – the only reason why full type inference works is because it is based on a very solid theoretical foundation. It might be somewhat "complicated", but it will never be as complicated as something ad-hoc that needs to account for all inconsistencies that exist in untyped languages like JavaScript.


> It contributes to boilerplate and increases cognitive load, in my opinion.

In my experience with OCaml, I found that knowing the types of my variables reduces the cognitive load of trying to infer the types myself when reading the code, so despite OCaml supporting type inference, I use explicit type annotations almost everywhere.


Being a Vim/Merlin user I normally just type `<Leader>t` to see the type of the value under the cursor (this maps to `:MerlinTypeOf` if I recall correctly).

What I do use type annotations for is debugging. If the compiler finds a type error, in some situations it helps adding type annotations to find where the actual error is.


I do use that feature too, for functions, in Emacs. So in the mode line I see, for example: `Array.length : 'a array -> int`. I find it quite useful.


If you are writing in a functional paradigm you'll most likely create a lot of small functions that perform very direct actions and then chain them together into a more robust operation. When I write Haskell I'll often annotate the larger functions and let the smaller ones be inferred: their types are determined either by the larger function's definition or by the operations inside them. There's no point in writing an extra line for a one-line function when its type is obvious.
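The same habit translates to TypeScript, roughly (invented names): annotate the larger function, let the helpers' return types be inferred.

```typescript
// Helpers: return types are inferred from their bodies, no annotation.
const square = (n: number) => n * n;
const add = (a: number, b: number) => a + b;

// The "larger" function carries an explicit signature as documentation.
function sumOfSquares(xs: number[]): number {
  return xs.map(square).reduce(add, 0);
}
```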


WASM currently does not have a GC, and that's precisely the problem: languages that target WASM need to implement their own GC. There's already a proposal to integrate a GC implementation into WASM[1].

[1]: https://github.com/WebAssembly/gc/blob/master/proposals/gc/O...


In principle it seems like a significant limitation but in practice it is rarely an issue.

ReasonML does support parametric polymorphism so it is still possible to write generic code.

Some language features conveniently help to avoid boilerplate. For example it’s possible to have a `Float` module with arithmetic operators and “open” it like this: `Float.(10.0 + 0.5 / 3.0)`.

There’s also a work in progress project called Modular Implicits that will introduce ad-hoc polymorphism.


Even then, I feel like working with Floats vs. Ints in a web browser is almost never an issue. I'd just open Float if I needed it, because I can't think of any scenario where I'd use both in one module.


I used OCaml in production for 2 years and it was a very pleasant experience. Recently I started using ReasonML on my team to implement a Kubernetes configuration tool. It's surprisingly easy to use ReasonML for backend development with dune and esy.

