I've had the misfortune of working for many years on a C# code base that uses this pattern.
I've also used it with F#, where it feels natural - because the language supports discriminated unions and has operators for binding, mapping etc.
Without that, it feels like swimming against the tide.
Code has a greater cognitive overhead when reading it for the first time.
And there is always a big overhead for new starters needing to understand the code.
It feels idiomatic in F#. It feels crowbarred in with C#.
F# also has substantially better type inference, so you don't need to write the types out everywhere; type aliases are first class too, so you can easily write out some helper types for readability.
You can pipe a monadic type through various functions with little to no type declarations; doing it nicely is F#'s bread and butter.
In C# version n+1, when the language is supposedly getting discriminated unions for real this time, I still don't see them being used for monadic patterns the way F# uses them, because they're going to remain a menace to compose.
I felt the same way with fp-ts and then effect in TypeScript. Pretty cool libraries, and I learned a lot about FP while trying them out for a couple of years, but there's a lot of ceremony and noise because they (especially effect) are almost a new language on top of TypeScript.
Recently I got the opportunity to try out Elixir at my job and I'm liking it thus far, although it is an adjustment. It helps that static typing and type inference are being added to the language right now.
I had a similar impression using those constructs in TypeScript.
IMO it's hard to justify creating Option<T>/Result<T,E> wrappers when T|null and T|E will work well enough for the majority of use cases.
effect specifically feels like a different programming language altogether. Maybe going down that path and compiling to TS/JS would have been a better route for them. I'm not in the ecosystem, though, so it's an uninformed thought.
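To make the comparison concrete, here's a minimal sketch of the two styles side by side. The types and function names are mine for illustration, not from fp-ts or effect:

```typescript
// Hypothetical example: an explicit Result<T, E> wrapper vs. a plain union.
// Neither is from a real library; both names are made up for this sketch.

type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Wrapper style: success/failure carried in a discriminated union.
function parsePortWrapped(s: string): Result<number, string> {
  const n = Number(s);
  return Number.isInteger(n) && n > 0 && n < 65536
    ? { ok: true, value: n }
    : { ok: false, error: `invalid port: ${s}` };
}

// Plain-union style: just T | E, narrowed with instanceof.
function parsePortPlain(s: string): number | Error {
  const n = Number(s);
  return Number.isInteger(n) && n > 0 && n < 65536
    ? n
    : new Error(`invalid port: ${s}`);
}

const r = parsePortWrapped("8080");
if (r.ok) console.log(r.value); // narrowed via the `ok` discriminant

const p = parsePortPlain("8080");
if (!(p instanceof Error)) console.log(p); // narrowed via instanceof
```

For a single call like this, the union version is shorter and the compiler narrows it just as well, which is roughly the "T|E will work well enough" argument above.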
Monadic binding and other functional mainstays in C# mostly fall into the same uncanny valley. Like non-exhaustive pattern matching, we get some nice sugar, but it’s not the same, and not a proper substitute for what we’re trying to do.
F# ~~ripped off~~ is deeply inspired by OCaml, with a very practical impact on its standard library: there are facilities available for all the functional programming jazz one hasn't thought about or bumped into. Whether it's active patterns, pattern matching, recursive list comprehensions, applicatives, or computation expressions, when you bump into the corners of the language you find a deep, mature OCaml core that nerds much smarter and more talented have refined for decades. The language was built around those facilities.
Bumping into the edges of C#'s partial features is a pretty common experience for me; it results in kludges chosen to support a superficial syntax, and raises concerns about goldbricking.
It feels crowbarred because it was.
“Railway oriented programming” goes over well as a concept, but it’s an easier sell when you see its use resulting in smaller, simpler, easier functions.
My favourite inexplicable feature of the PureGym app on iOS is that when you open it, it stops any audio you are listening to, in the same way as if you had opened another audio app. Yet it isn’t playing any sound. Crazy.
1. They will have added code that declares the app requires an exclusive audio context. So iOS pauses all other audio when the app is foregrounded.
Or
2. It’s possible that they use anti screenshot technology which sometimes involves embedding a secure video in place of an image. The video playback might be grabbing the audio context.
I've had this a few times on Android (e.g. the new Subway app). I'm 99% sure it's the latter, but not for security: just a fancy splash-screen animation that was implemented as a video without anyone thinking to mark it as "no audio".
The title has the wrong year. It should be 2005. The quote was: "By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's."
Right, communicating with the business is actually the core message of DDD. It could reasonably be called “anthropological design”, since the “domain experts” are a synonym for the non-technical users of your software (the domain is the business domain, which is whatever your software is trying to do for them). The message is that you have to observe your users in their natural habitat.
Let me put it this way, when Twitter started out, they did not have tweets. They had posts, and the act of posting to Twitter was called twittering. They were not associated with birds (actually more with whales lol). The idea of birds and tweeting actually came later with a third-party client interacting with their API.
Eric Evans, in the early aughts, makes a big splash with this outrageous statement, where many of us graybeards would instead say “if it ain't broke don't fix it”: Eric Evans would recommend that the posts table in the database be renamed to the tweets table. Version 2 of the API should not reference “posts” or post_ids, but rather tweets and tweet_ids.
Why?! Those sorts of migrations are painful and clumsy! Yes, Eric says. (He is not stupid.) Maybe it's a lost cause. But, Eric remarks on two things:
1. There is no reason to believe, given software’s previous performance, that any amount of upfront planning is going to generate the most consistent useful model before the software is built and we can interact with it. So you're going to want to iterate. What are the systematic obstacles to renaming the table and the API, and can we overcome them so that we can do lots of little experiments?
2. Something else that is clumsy and painful is when your users come to you reporting a problem with Widgets or whatever, and you go off and fix the FactoryService to add some new functionality to widgets, tell the user that their problem is fixed, and they go and do the thing again and run into the same problem: “it's not fixed yet!”. Why did this happen? One big reason is that the word “widget” means something different in the database versus the backend, or in the backend versus the frontend, or in the frontend versus the real world. Twitter might get some other notion of “topics” and roll it out, and everyone starts to call them “posts”; now the topic table holds posts and the posts table holds tweets, and you're always looking for “posts” in the wrong table.
So, you should rename the table because, first, this should be a possible thing for you to do, and building up that sort of leverage is going to pay dividends later; and second, reducing friction by transforming the way we developers speak into the way our users speak is going to pay dividends too.
This anthropology is kind of the core part of DDD, I don't understand why people try to do DDD as design patterns rather than saying that it's the users who unwittingly dictate the design, as we redesign around them to reduce friction.
Similarly, I don't understand why people find it hard to draw context boundaries in DDD. Bounded contexts are a programming idea; in programming we call them namespaces, and they exist to disambiguate between two names that are otherwise the same. DDD says that we need to do this because different parts of the business will use the same word to refer to different things, and trying to get either side of the business to use some different word is error-prone and a losing proposition. So instead we need namespaces, so that both of our domain experts can speak in their own language and we can understand them both: in this context we use this namespace, in that context we use that namespace. So: where do you draw the boundary? In other words, how big should your modules be? (Or these days, for “module” read “microservice”.)
Simple: you partition users into groups, based on the sorts of things that they seem to care about when they are interacting with the system, and the different ways that they talk about the world. The bounded context is not an “entity” or a “strong entity” or a service-discovery threshold, rather it is an anthropological construct just like everything else in DDD. “The people in shipping care about this for one reason, the people in billing care about it for another, they don't usually talk to each other, but I guess sometimes they do...” sounds like you've got a shipping module/microservice and a billing module/microservice. The boundary is the human boundary.
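A toy sketch of that namespace idea in TypeScript, with entirely made-up fields, just to show the same word meaning different things per context:

```typescript
// Hypothetical example: "Order" means different things in two bounded
// contexts; one namespace per context lets both models coexist.
namespace Shipping {
  // Shipping cares about where the order goes and how heavy it is.
  export interface Order {
    orderId: string;
    address: string;
    weightKg: number;
  }
}

namespace Billing {
  // Billing cares about what the order costs and how it is paid.
  export interface Order {
    orderId: string;
    totalCents: number;
    paymentMethod: "card" | "invoice";
  }
}

// Crossing the boundary is an explicit translation, not a shared model.
// (The "card" default here is arbitrary, for illustration only.)
function toBilling(o: Shipping.Order, totalCents: number): Billing.Order {
  return { orderId: o.orderId, totalCents, paymentMethod: "card" };
}
```

Neither department has to give up its word for “order”; the translation function is the only place that needs to know both meanings.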
Similarly for “should I use events or RPC?” Does someone from shipping ever come up to the billing department and say “The delivery costs a ton more because XYZ, the customer said they preferred to pay more rather than cancel the order, I am gonna stay here in billing until this critical task is complete,” or would they prefer an asynchronous process like email: “we will just put it on the shelf until we can pay to safely ship it”? Different industries will have different standards here! If it's something that has no shelf life, that delivery does not want to keep on the shelves for one second longer than it has to, then that drives the different behavior. The only way you can know is by observing your users in their natural habitat.
I do, occasionally. But it looks annoyingly verbose (which to me is already enough reason to avoid using it too much) and IIRC it has some limitations (I can't remember which) compared to Scala's concise and universal `val valueName = value`.
The second sentence here is such an odd one to include in the article:
"Benyamin Ahmed is keeping his earnings in the form of Ethereum - the crypto-currency in which they were sold.
This means they could go up or down in value and there is no back-up from the authorities if the digital wallet in which he is holding them is hacked or compromised."
Many people don't understand there's a difference between realised and unrealised profits, so it's never a bad idea to remind them they aren't the same thing.
How is that odd? Crypto subreddits are full of posts describing exactly the above situation. People are trying to get rich quick, without understanding what "be your own bank" requires in terms of security.
For internal UK news, there is nothing as good as Private Eye. Every two weeks they publish more “hard” material than the newspapers do in a month. I’m a subscriber; the value for money is simply ridiculously good.
I have also recently subscribed to Private Eye - mostly because there is now so little other investigative journalism going on in the UK that I think they deserve some support (the main papers are nearly all owned by billionaire mates of the Conservative Party). It is also quite funny.