Deep Dive Into C# 9 (c-sharpcorner.com)
103 points by alugili on Dec 6, 2019 | hide | past | favorite | 133 comments


As a dev who has worked with C# for the last 3 years, I gotta say it's really amazing how productive you can be with it and how easy it was for me to pick it up when I joined the company I worked for at the time. Fun fact: I didn't know it was a C# shop before going to the interview. During the interview I reiterated multiple times that I didn't know C# but had a vague idea (coming from a C++ background, having done some projects with Java). I got the job and was doing normal feature work and bug fixing in the first week.

I really enjoy the language, and the functional features that seep in from F# are great. Right now C# is my go-to language for fast prototyping of anything really. At this point I trust Microsoft to push the language in the right direction.

EDIT: A bit more detail and typos


Similar experience here. I joined the company as a front end developer, but our team generally needs a lot more back end work than front end, so I decided to give it a try. Because it's .NET Core I can work on it just fine on my MacBook; I have MSSQL in a Docker image and do development in Rider. Within a relatively short time I was also contributing, and nowadays I'm more back end than front end.

C# is a great language indeed.


I’m not sure we need initonly.

Constructors are a design pattern that works and is intuitive at this point. Fair enough there are differences, but the overlap in use case is significant enough that I don’t think it warrants a rival implementation of a core language feature.

The irony of me complaining about it is that I’d much prefer we get proper object composition by letting extension methods support extended state, which would rival dependency injection in its current form.

This would upset the apple cart even more, but it's long overdue - I implemented this years ago using conditional weak tables but didn't trust it even in my own work, solely because if MS isn't standing over the implementation then I've no way of knowing how the GC is going to handle orphaned references over time.

Example:

A car has five (obvious) wheels, only four of which have tyres and rims.

Modelling this from interfaces and injected dependencies on a parent class is a laborious anti-pattern that C# still forces me into.

I love C#, but this could be so much better.


Initonly is a great feature that prevents ctors from growing into the unmaintainable soup of optional args, which inevitably happens as a class gets used more while being worked on at the same time.
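A sketch of the difference, using the `init` accessor spelling from the proposal (the shipped syntax may differ; `RequestOptions` and its properties are made up for illustration):

```csharp
using System;

// Callers set only what they need; the object is immutable after construction.
var opts = new RequestOptions { Retries = 5 };           // TimeoutSeconds keeps its default
Console.WriteLine((opts.TimeoutSeconds, opts.Retries));  // prints (30, 5)
// opts.Retries = 1;  // compile error: init-only property set outside construction

public class RequestOptions
{
    // Without init-only, each of these tends to become one more optional
    // constructor argument as the class grows.
    public int TimeoutSeconds { get; init; } = 30;
    public int Retries { get; init; } = 3;
    public bool UseCompression { get; init; } = false;
}
```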


Right, but that argument only works to say it's better than argument soup in the constructor for injecting dependencies, when (IMHO) we shouldn't be doing that in the first place anyway, as proper object composition would be much cleaner than either constructors or initonly.


What are ctors? And were they introduced with initonly?


The term ctor refers to good old constructors. It's a common abbreviation, and it's even the keyword for defining constructors in .NET IL.


Constructors


This post actually looks like speculation. These are all proposals. Have any of them been accepted at all?

The syntax looks outright ugly in many examples.


Does ConditionalWeakTable<TKey,TValue> class from the current .NET do what you want? It can attach arbitrary state to objects, you can do that in extension methods or any other places.
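A rough sketch of that approach (the `SetTag`/`TryGetTag` names are hypothetical, just to show the shape): the table holds weak references to its keys, so the attached bag becomes collectible as soon as the object itself is.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

var car = new object();
car.SetTag("Wheels", 5);
car.TryGetTag("Wheels", out var wheels);
Console.WriteLine(wheels);   // prints 5

// Extension methods backed by a ConditionalWeakTable, attaching arbitrary
// state to any object without changing its type.
public static class Attached
{
    private static readonly ConditionalWeakTable<object, Dictionary<string, object>> Table =
        new ConditionalWeakTable<object, Dictionary<string, object>>();

    public static void SetTag(this object obj, string key, object value) =>
        Table.GetOrCreateValue(obj)[key] = value;

    public static bool TryGetTag(this object obj, string key, out object value)
    {
        value = null;
        return Table.TryGetValue(obj, out var bag) && bag.TryGetValue(key, out value);
    }
}
```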


Yeah that’s what I was referring to above - I implemented this before with CWT, but what happens to the underlying object when my reference runs out of scope? What happens to the attached state of that object if it’s collected? How is the reference held in the CIL bytecode?

If composition isn't implemented as a core language feature and officially supported, then I've no way of knowing exactly how my implementation will behave or perform in the future, or whether there will be breaking changes down the road that render the foundation I've built all my code on moot.


What would extension methods, and extension state have over inheritance at that point? Those restrictions are in place for a reason.


They are in place for a reason, but how much of the reason is due to decisions that are already set in stone is kind of my question?

It's kind of agreed that object composition is a design goal we should be reaching for, but we are stopped frustratingly short of being able to compose objects from smaller components directly, so the design has to be top-down (class-based inheritance) rather than bottom-up (object composition).

It's also cleaner than dependency injection as you wouldn't need to pre-specify what a class hierarchy can or cannot be composed of at top level.


I like all the new features C# has gained over the years, but the language is getting gigantic, almost like C++. It's perfectly possible to write code that looks almost alien to somebody with 5 years of professional experience with the language. Also, the null-hardening is a bit bulky compared to languages that had it from the get-go.


I work with C# daily, and I don't really feel like this. In general, I really like how C# has evolved over the years.

I think it helps a lot if you use Rider, or Visual Studio with ReSharper, as they will automatically make suggestions to use new language features when appropriate - that way you always get exposed to new syntax.


I don’t like the new record and DU features at all. They just tried to copy F# but the implementation is ridiculously verbose and heavy. Much better if they used all those efforts to finally introduce type classes or something equivalent in F#.


I meant I like the features that have actually made it into the spec so far - all the stuff in the article is still at the proposal stage.

I do agree that the syntax proposed thus far for records and discriminated unions is far from pretty, but I can't see it making it into the spec like that.


"I really like how C# has evolved over the years"

I completely agree - I rather like how C# and .NET are progressing - sure, there are new features every so often that take time to learn, but isn't that the same for absolutely everything?


Same feeling. I started learning C# on my own around 2008, first with the command line compiler, then VS 2008. My skills matured at the same time the language evolved, and while I always lag a few years behind the new features, I can still keep up to date.

Now, for someone new, the feature set, the various keyword overloads and the magic syntactic sugar can make it hard to tame. Back then, it was mostly a (slightly) better Java. It has evolved into something totally different, one that often tries to mimic F# while still keeping the familiar curly-brace syntax, which makes things awkward at times. C# is my main language, but sometimes I feel like there should be a fork, with a legacy C# that keeps being ported to new frameworks and platforms, and a new branch where feature creep can continue to go wild.


> sometime I feel like there should be a fork, with a legacy C#

You don't have to use language features you dislike. If you want to prevent them being used, you can set the c# language level to an old version using the LangVersion element in your csproj file.

if working in a team, your team members are unlikely to thank you!
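For example, a minimal csproj fragment pinning the compiler to an older language level (the version number here is illustrative):

```xml
<PropertyGroup>
  <LangVersion>7.3</LangVersion>
</PropertyGroup>
```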


Depends. If people on the team are using different versions of Visual Studio or the compiler, they will be thankful that they can still build the code.

For us, the language features we can use internally are dictated by our own compiler that translates the code to JS and Java, and yes, a null check is somewhat verbose compared to ?., but in such cases it can be important to restrict the language level even if you could locally use a newer one. We're also, for compat reasons, building the code on the build server with basically VS 2010 to ensure that there's nothing in it that .NET 4.0 can't handle. Customers can sometimes be _very_ conservative and value this kind of thing.


You can set the language level in your project settings. It's under the assembly information.


Was on mobile and messed up italics in my parent comment (can't edit it now), it should have read:

If you want to prevent them being used, you can[0] set the c# language level to an old version using the LangVersion element in your csproj file.

[0] if working in a team, your team members are unlikely to thank you!


Think it's more about preventing use of older features which you don't want. Not stopping use of the new stuff.


That's not how I read the parent post at all?


I used to write C# professionally for many years. Stopped after moving to a new company right around C# 4. I had to pick it up again earlier this year and I gotta say... wow. It has so, so much more. Some I like, some is kinda meh, and I know for sure my old habits are not always the greatest.

It's been an... interesting ride.


My favourite eco-systems are Java, .NET and C++.

C# still has a lot to go to catch up with C++, on the language level.

On the other hand, contrary to common beliefs regarding C++'s complexity, no one is able to master the complete standard library of either Java or .NET, let alone the major frameworks that also get used alongside them.


> My favourite eco-systems are Java, .NET and C++.

The only thing about Java that I really like is that you can avoid using it in its ecosystem while not feeling like a second-class citizen. Kotlin (I haven't used the other JVM languages so far) feels 100% supported, while F# in Visual Studio is kind of rough.

Java itself: hope I never have to work with it.


I don't have big hopes for Kotlin beyond Android (aka the KVM since #KotlinFirst).

Platform languages always win long term, adopting what matters from guest languages, while exposing the platform without any additional FFI, IDE plugins, build tools, idiomatic wrapper libraries, new layers to debug,...


The absolute worst part is all the legacy bits that Microsoft made and later abandoned. We set up a Microsoft System Center Orchestrator service for Windows Server 2012, and while we're migrating it to Azure now, the C# library that handled authentication became obsolete in 2014, at least to Microsoft. Which means that even with 100% Microsoft tech, you still have to build your own library extensions if Microsoft moves in another direction.

It’s like that with a lot of our systems, and it’s pushed us more and more from C# to Python where a lot of things are just easier.


That seems unrelated to the language of the library though.


The library was a part of .NET version 4.x and was then dropped when Microsoft wanted people to move to Azure.

We had similar issues with the AD APIs in .NET. You’d think Microsoft would have extensive AD integration support for C#, but you’d be wrong. It’s nothing you can’t fix by overriding the standard libraries with extensions of your own, but it’s a lot of work. Which is kind of the opposite of why you’d pick C#. At least in my opinion, you pick it because it comes with a powerful IDE and powerful standard libraries, but it turns out that it doesn’t actually do that once you dig a little deeper than standard CRUD applications.


I feel your pain. But how would it have been any different if the library had been in Python?


They lost me when they added indexed properties for better office interop, yet neglected core CLR runtime performance optimisations like struct inlining for several years.


It seems like a catch-22 for language developers if you want to maintain backwards compatibility.

Either you freeze the language and let it stagnate or you keep adding features without removing them and suffer from bloat.

Seems like the slow churn of high level languages is inevitable while maintaining backwards compatibility.


A long-lived code base will end up containing different conventions for doing things based on the era the code was written in. I’m OK with that, the later code will (should) have, per screenful, higher clarity, more functionality and fewer bugs.

I feel like there’s probably a good analogy with evolution of human languages and cultures around them. How many of us can read Beowulf in the original?


Some of the syntactic sugar will work in older versions as long as you've got a new version of Visual Studio, which was quite surprising to me when I realized it. The problem is that sometimes it's hard to work out what is syntactic sugar and what isn't.


Same here.

I think these days you must force a specific code style according to a specific C# version, otherwise you're going to have a mess: harder to read for those used to older C# versions, while also harder to read for those used to newer ones.

You can have a valid reason to have c# classes that just use lambda to implement interfaces, a valid reason to have an interface with methods implementation, and a valid reason to do classic c#. In the end, you still have a mess.

I hope that someone makes a tool that helps make the code consistent and lets developers avoid the suffering of thinking "but why is this like this??? oh wait... this is that other C# version's feature which I never used before... I guess it's okay? ¯\_(ツ)_/¯"


Expression-bodied members aren’t lambdas, they are just syntactic sugar for methods returning a single expression.
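For example, a sketch (hypothetical `Circle` type) showing that an expression-bodied member compiles to a plain method:

```csharp
using System;

var c = new Circle(2.0);
Console.WriteLine(c.Area());

public class Circle
{
    public double Radius { get; }
    public Circle(double radius) => Radius = radius;   // expression-bodied constructor

    // Sugar for: public double Area() { return Math.PI * Radius * Radius; }
    // Compiled as an ordinary instance method; no delegate or closure is allocated.
    public double Area() => Math.PI * Radius * Radius;
}
```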


Quite honestly it really pisses me off how C# is evolving, and to me this is a clear indication that Microsoft still has a largely wrong mindset in DevDiv.

C# was once a great language, but today it is so bloated with features that there is no longer a single clear way of doing anything. Everything in C# has 3-10 different ways of doing it, each with huge BUTs, making the language harder and harder to learn for beginners.

It doesn't even make sense to make C# more functional. It's trying to be too many things at the same time. It's almost like an obsession Microsoft has with C#. Everything great they see somewhere they just shoehorn into C#, actually really harming the language. What happened to just being a good OO language? .NET already has two other languages, one is functional, and you can use F# and C# side by side in a project. What is the purpose of muddying C# to the point where nobody really knows anymore how to write clean and good C# code?

Honestly, I see C# developers constantly re-writing and re-designing their code for the sake of rewriting, because there's constantly a new way of doing something. C# developers are like Java developers, spending so much time thinking about how to write a feature instead of just getting on with the work.

.NET Core is great, but .NET, C# and Microsoft in general is still the same old shit plagued by the same old MSFT mindset.

If developers want to write great applications then choose anything but .NET (Core), because in C# you'll constantly be chasing and re-writing existing code because things change for the sake of changing without really making the application any better.

.NET Core itself and ASP.NET Core is still changing so much that it's just tiring.


Long time C# dev and team lead here. Can't say I agree with you.

C# has become a great language for the "functional core, imperative shell" way of doing things. That means it has to be a hybrid language. F# is more tilted toward functional-land, C# is tilted toward imperative/OO-land. Both have their place.

If C# devs are constantly rewriting their code, that means there's no vision for that particular codebase. It's true that there are many different ways to "do C#." So, a developer or team needs to pick a particular way (OO classic, or functional core/imp shell, or functional, or minimal golang-style) and stick to it. It's more of an architectural challenge than you would get with a language that can only do one of those. The upside is, your system can blend approaches as necessary, all within one codebase. Every module of the system is perfectly suited to its task, from philosophy all the way down. Of course there's the danger of it becoming a mess, but the risk is worth the reward IMO. With great power comes great responsibility as they say.


What you describe can only be achieved in the short term at best.

If something can be written in 10 different ways, then it will be re-written in 10 different ways, because today you are leading your team and defining 1 of the 10 ways. Tomorrow you leave, another lead or senior comes onboard, disagrees with your architecture, and then slowly re-writes everything.

Even worse, every time you add a new person on your team there's a 9/10 chance that they will disagree with your code. Your team will waste so much time between people just talking about how to write something because people get hung up on minor details instead of just trying to build a good application solving a real problem. The micro benefits of doing something in c# one way or another are in 99.999% of applications a complete waste of time.

Whenever I'm trying to hire C# developers (something I've been doing a lot over the years) I'm amazed how little people even know about C#. There's literally so much when you go deep that most people are completely clueless and it's getting worse and worse.


I've seen what you describe happen. But it doesn't have to be that way.

A wise lead may prefer a different path to their predecessor but respect the decisions that were made before them and realize that it's counterproductive to chaotically "drip-migrate." And good experienced developers will not get bogged down in endless debates with no clear winners.

It just sounds like you've had some bad experiences.


I've seen it happen too and it's not a C# problem. To attribute a social problem to a programming language seems odd.

A general rule is that you make modifications to a codebase in the same style / technology as the original codebase. I've seen bad developers not do this and it certainly turns into a mess.


I have been programming in C# for 13 years now. The majority of new features are just syntactic sugar. Code I wrote in 2009 is still running even in 2019. I don't rewrite my applications in a newer version each time there is a new release of the .NET Framework. I only use the latest language features on new projects. The problem starts when I am in a support and maintenance job, because I quickly fall behind. When I go for interviews, I get asked questions about new features of the language, which I feel is unfair.

I feel sorry for new developers, because I learnt those features bit by bit over the years and they are expected to learn them in one go. It is also hard to understand why a syntactic shortcut was introduced when you never used the way it used to be done.


I for the most part agree with you. But Entity Framework and ASP.NET changed significantly (for the better) with .NET Core.

I also use Resharper, it repeatedly reminds me of new syntactical ways of doing things.


Visual Studio itself also does that (I forget when it was introduced; a few versions back I think?). I haven't used Resharper in a long time, so I'm not sure how it compares now, though.


I started seeing it on VS2017. I use it a lot in VS2019.


Visual Studio copied Resharper. VS now suggests some new language features.


I read your comment twice trying to fully understand your perspective. I've been working in C# off and on now for about 10 years, and as a guy who "grew up" on version 2, I mostly use the core features, but wow do I love some of the new ones. The introduction of tuples in C# 7 changed the way I program so dramatically in a positive way. Everyone has their own style; for me, tuples allow me to return a status code plus the data. I welcome the improvements and think MS is doing an amazing job.
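That "status code plus data" pattern looks roughly like this (the `TryLoad` function is hypothetical, for illustration):

```csharp
using System;

var (ok, value, error) = TryLoad("user42");   // deconstruct the tuple into locals
Console.WriteLine(ok ? value : error);        // prints data-for-user42

// Hypothetical loader returning a C# 7 value tuple: a success flag plus
// either the data or an error message.
static (bool Ok, string Value, string Error) TryLoad(string key)
{
    if (string.IsNullOrEmpty(key))
        return (false, null, "key must not be empty");
    return (true, $"data-for-{key}", null);
}
```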


For me tuples are a game changer. The code stays clean and easy to understand.


Every language has a lifecycle.

I understood from a Microsoft conference I attended (regrettably I forget the speaker) that their decision to incorporate new features so rapidly was a very conscious decision, based on what I thought to be a quite perceptive look at the landscape - to some extent C# has always been a reaction to Java; a language that often comes under fire for its extremely slow and cautious incorporation of features.


You don't have to, asp.net mvc still works. WPF, ..

I think the big changes between .NET Framework and .NET Core weren't that big for a project.

And it got a lot faster.


I'm not talking about the changes between .NET and .NET Core. As far as I'm concerned .NET doesn't exist to me anymore and it's not even worth talking about.

I am angry that .NET Core keeps changing faster than Kim Kardashian changes her outfits. First they focused on making .NET Core all about ASP.NET with a clear focus on MVC and making everything super granular. Then they didn't like how granular it was and started to put lots of features into smaller packages again. Then they keep re-inventing things. First they introduced Newtonsoft Json into the default .NET Core stack. Then they rewrote everything. The webhost model keeps constantly changing. The ASP.NET Core team is now realising that people hate MVC, and they are splitting more features out of MVC into more basic ASP.NET Core features, which is why routing has completely changed again with endpoint routing. Honestly nothing is constant, not even for 6 months. Every version of .NET Core almost requires a developer to completely rewrite their Startup.cs class. It's just ridiculous.

The reason for all of this is old MSFT thinking. It's not bloody rocket science, people have been saying it for years that they don't want to be forced into MVC, they want things to be more lightweight, bla bla bla. But obviously MSFT cares more about making a shit hello world demo at BUILD and therefore they first must hack together ASP.NET Core which was mostly just about MVC before being allowed to build the core platform to something that is actually useful to others.

They'll constantly keep changing the fundamentals, pulling the rug out from under developers' feet and distracting businesses with stupid, useless exercises of rewriting shit, instead of just creating a stable base platform on which people can freely build applications and actually focus on their own apps.


I will try to address some of your concerns...

"First they focused on making .NET Core all about ASP.NET with a clear focus on MVC and making everything super granular. Then they didn't like how granular it was and started to put lots of features into smaller packages again."

.NET Core was a ground-up rewrite and was the vehicle used to open-source all of .NET. It was always meant to grow into a full-scale offering and eventually bring along all the features users demanded. I personally love reading all the fun library code: https://github.com/dotnet/runtime/tree/master/src/libraries

"First introduce Newtonsoft Json into the default .NET Core stack. Then rewrite everything."

Newtonsoft itself is bloated and there is no turning back for that library. MS is providing the option of a lightweight JSON library that uses the new Span<T> ref struct.

"The ASP.NET Core team now is realising that people hate MVC and they are splitting out more features from MVC into more basic ASP.NET Core features, which is why routing has completely changed again with endpoint routing."

Endpoint mapping wasn't born out of hate for MVC, it facilitates the separation of framework/transport/protocol without introducing config files (or handler code) for each. https://github.com/aspnet/AspNetCore/issues/4772

"The reason for all of this is old MSFT thinking. It's not bloody rocket science, people have been saying it for years that they don't want to be forced into MVC, they want things to be more lightweight, bla bla bla."

Maybe I am old, but I remember when MS was almost forced to adopt MVC. They kept WebForms alive for a long time. They introduced Razor Pages when the SPA world demanded an easier solution. I am not sure there was going to be a way to satisfy everyone here.


> The webhost model keeps constnantly changing.

The name changed and there is an obsolete attribute on it with instructions.

> First introduce Newtonsoft Json into the default .NET Core stack.

With the same usage/properties/methods but faster

> routing has completely changed

Another method, the older one is still available ( also, obsolete attribute)

> completely rewrite their Startup.cs class

Wait, what? You already named 2 of the 3 changes, and there is probably a year in between. Also, the third change is related to the webhost change.

Kim changes every day


That is quite a rant for someone who feels it's not even worth talking about.


I think he meant specifically .NET framework, the older .NET runtime.


Newtonsoft.Json was the default JSON parser in ASP.NET Core for a bit, but that’s changed with .NET Core 3’s System.Text.Json. Newtonsoft was never tied to .NET Core itself directly.
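For illustration, a minimal round trip with System.Text.Json, the serializer that replaced the Newtonsoft default in ASP.NET Core 3 (the `Point` type is made up):

```csharp
using System;
using System.Text.Json;

var json = JsonSerializer.Serialize(new Point { X = 3, Y = 4 });
Console.WriteLine(json);                    // prints {"X":3,"Y":4}
var back = JsonSerializer.Deserialize<Point>(json);
Console.WriteLine($"{back.X},{back.Y}");    // prints 3,4

public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}
```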


Then you are lucky; they announced .NET Core 3.1 with long-term (enterprise) support.

That means you don't have to update for a very long time ;)


What platform do you like for development?


There are quite a few re-writes required for .NET Core migrations, hence why many enterprises are holding on to .NET Framework.


Doesn't that happen with any toolset, though? When I used Java (a long time ago) I remember a lot of similar complaints when new versions were introduced. And look at the fuss around Python 2 and 3.

Are there any popular application programming platforms that are widely used that haven't evolved fairly rapidly and caused a lot of complaints along the way?


All of this happened within less than 2 years, to the point where you had to do extensive rewriting just to get your old code to start. Python 2 was mostly stable for 8 years, and you could continue to run Python 2. Java, as far as I know (it's not a stack I regularly work in), rarely requires a rewrite because a new version comes out.


I suspect it's because it's new and they are still fleshing out how they want the overall framework to "feel." In my experience, this is more the norm than the anomaly. IIRC, Delphi would create breaking changes nearly every release.

It's two philosophies: 1. Backwards compatibility is king, 2. The best framework possible is king. #1, over time, can lead to a hot mess.


That’s true for anything. There are still VB6 maintenance jobs popping up every now and then.


Indeed, which is why it is a fallacy to think everyone is rushing to deliver .NET Core applications.

I have yet to get an RFP that even mentions it.


Well how much does your anecdotal experience really say about the broad adoption of .Net Core?

I know a lot of companies that really want to run away from a dependency on Windows - especially in cloud environments. Every time you introduce Windows into the mix it costs more for licenses and resources.

This is coming from someone who has exclusively developed and deployed to Windows servers until 2 years ago and even now my only Linux deployments are Lambda and Docker.


Those RFPs are coming from either DAX or Fortune 500 companies, none of them eager to rewrite WCF into gRPC, rewrite EF 6 stuff that uses features unavailable on Core, buy new licenses for 3rd party dependencies, or buy replacements with the respective rewrite, and so on.


And again, your anecdotal experiences don't say much about the broader market. I see plenty of "Fortune 500" companies trying to find a migration path from .NET Framework and Windows. Heck, even MS is abandoning .NET Framework.


My anecdotal experiences are as good as yours.

Better pay attention to MS conferences then; they have stressed multiple times that VS is going to stay on .NET Framework and that they are committed to keeping it as long as there is Windows.

And at the .NET Core 3.0 release conference it was visible on multiple occasions how they are fighting the rewriting fatigue from many enterprises.


They’ve also said that .Net Framework is in maintenance mode and won’t get any new features. Does that sound like a carriage you want to tie your horse to?

I think JetBrains knows a little about the .Net ecosystem.

https://www.jetbrains.com/lp/devecosystem-2019/csharp/

As far as cloud and server adoption.

https://www.makeuseof.com/tag/linux-market-share/

On Amazon EC2, standard Linux (along with its various distros) controls 92 percent of the market. It boasts more than 350,000 individual instances. Again, Windows is responsible for the other eight percent.

Even MS said that 2/3 of their VMs on Azure are running Linux.


Our DAX and Fortune 500 customers decide where to tie the horses; we just follow along.

Those VMs running Linux are where we put our Java stuff, not .NET.


So now that I've shown you non-anecdotal evidence of the adoption rate of .NET Core, you're going back to what you see in your one company?


JetBrains? That is a sample of the anecdotes of their user base.

Just like my anecdotes are a sample of our DAX and Fortune 500 customers.

Linux adoption? Yes, it is wiping the floor with Windows on the server; that is why I have always worked for Java/.NET shops since 2006, switching stacks as per customer project requirements. In some projects both even get used equally.

Somehow your replies always feel like they're about fear of using outdated tech, always having to jump to new toys to keep being employable.

Never felt the need to worry about that, as long as we have happy customers, opportunities abound, regardless of what is the latest tech stack fashion.


Resharper is quite a popular plug in for C#.

I think JetBrains sample size is a lot larger than yours.

.Net Core isn’t “new”. It’s been around since 2016 and the direction that Microsoft is headed in.

> Never felt the need to worry about that, as long as we have happy customers, opportunities abound, regardless of what is the latest tech stack fashion.

And what happens when you either want to or are forced to change jobs? Someone who is 40+ will be seen as just another old head who hasn't kept up with technology and will be on HN screaming about "ageism". Not directed at you personally. I'm also in my mid 40s and have seen it happen time and time again. Someone stays at a job for 20 years, then gets laid off, and all they have to offer is that they are really good at ASP.NET WebForms when the world has moved on.

Heck, it happened to me at 35, looking for a job when my experience after staying with a company for 10 years was VB6 and C++/MFC.

Instead of moving on to technologies that are in the “slope of enlightenment” phase of the hype cycle would you suggest that I kept doing C on DEC and Stratus mainframes like I did on my first job?

The average tenure of a developer in the US is 3-4 years.


> .Net Core isn’t “new”. It’s been around since 2016 and the direction that Microsoft is headed in.

And yet most of those RFPs keep referring to stuff like .NET Framework 4.6 and .NET 4.7.1.

As for the job, luckily Europe is not as bad as US in what concerns ageism.

Over 40 here as well and so far no problems switching jobs, because I always strived not to be labelled as only being good at technology X, without anything else to offer.

My advice is to diversify domain knowledge, master soft skills, and be able to jump between developer, QA and technical lead roles across delivery sprints; most customers will care less about how well one masters the very latest version of technology X than about the business value one brings to the organization.

And some of them do value a lot that one is able to keep that old clunky VB6 application running, instead of sinking several thousand euros into attempting a half-baked rewrite into the latest stack trend, and they pay accordingly as well.


> And yet most of those RFPs keep referring to stuff like .NET Framework 4.6 and .NET 4.7.1.

And yet again you use your own anecdotal evidence without any large sample size.....

> As for the job, luckily Europe is not as bad as US in what concerns ageism.

So, all other things being equal, how competitive is someone who hasn’t kept up with technology compared to someone who has?

> And some of them, do value a lot that one is able to keep that old clunky VB 6 application running, instead of sinking several thousand euros attempting an half-baked rewrite into the latest stack trend, and do pay accordingly as well.

Until Microsoft introduces an operating system that doesn’t support it and you’re stuck with an unsupported OS with an unsupported runtime with security and compliance concerns.


Competitive enough not to worry about being unemployed.

If they get the job of their dreams, it is a completely different matter though.


Totally agree with you. In fact, feature and syntax bloat seems common in most popular languages today, like JavaScript, C#, Python or even C++, as they keep adding things to compete with new languages like Rust or Swift (and those languages keep adding new things on every iteration too). It's very tiring to keep up.


Which is the amazing part of Lisps. Syntactic changes can be implemented as libraries. You have to have some idea of what is a macro and what isn't, but following the rule "if it can be written as a procedure, write it as a procedure" has never failed me.

The world's best OOP system (CLOS) was conceived using macros.


> C# developers are like Java developers, spending so much time thinking how to write a feature instead of just getting on with the work.

I see this in Scala, not Java, as the latter's feature set is much more restricted.


Aye it’s a burning trash heap of everything thrown on top now.

MSFT stacks are a never-ending wild ride of change, deprecation, constant direction shifting and bugs.

I’ve been writing C# since 2002 and I’m quite frankly done now. It doesn’t help me solve problems anymore; it just creates new ones I don’t want to solve or have to pay to solve.

The IDE is buggy, the platform is buggy and the churn is so bad it’s a ridiculous prospect trying to build a non trivial business on it now.

Most companies are still stuck on classic .NET because hardly anything is really portable to Core. On top of that, when you do finally drag it to Core 2.2, then it’s deprecated, so you have to rewrite half your MVC stack for 3.0.

And then there’s the customer abuse like opt out Telemetry.


Have you looked at the JavaScript and front-end ecosystem over the past 10 years? .NET is remarkably stable by comparison.


Yeah I’ve avoided it intentionally!


Disagree. Upgrading from 2.2 to 3.0/3.1 does not mean rewriting half your stack. I literally just upgraded a 1.5yo project at work from 2.2 to 3.1 in a couple of hours. I had 1 undocumented issue which took the bulk of the time to fix. Everything else was pretty much namespace fixes or EF being more explicit.

2.2 to 3.1 is probably the easiest upgrade I’ve ever done. You wanna talk about mvc 1 to 2/3, now that was a painful upgrade.


2.1 to 2.2 can be a bit of a pain though


It's not specific to C#.

Every language seems to start out as a crisp new tool for solving a well chosen specific set of problems. Then over the years, with each release, a new set of use/edge cases gets covered by introducing new concepts and syntax. And while each of these has its own merits, in aggregate they turn the system gradually into an unapproachable behemoth of opaque incantations.

And so the wheel turns and we all start over the process with a crisp new language that targets a well chosen smaller set of use cases ....


This doesn't happen to every language. C# is extreme in this regard.

And there are languages that are deliberately conservative about adding new features, like Go, Elixir and even Python to some extent. Though I think Go is too far on the other end of the spectrum, I always prefer languages with a small, simple core.

C# seems to just constantly throw everything new in.


It really isn't that extreme. Change happens to almost all widely used languages. Just look at how C++, Java, JS and PHP have changed in the last 20 years. Furthermore, the designers do not constantly throw in new stuff. The change is rather gradual, and the features that are introduced have all been proven to work in other, more experimental languages, often whole decades after they were introduced.


C# is certainly on the maximalist end of the spectrum. It already has classes and structs and now adds records. That's at least one, possibly two, too many in my book.

But hey, some seem to prefer to have all these different ways of doing things and I'm not the one to tell them what to like or not. I don't have to use C# myself.


The proposed record types are just regular types with minimal changes to the initialization syntax. It's not like the divide between value types (``struct``) and reference types (``class``), which indeed have quite different semantics. You can argue that this is too much, but IMHO you need some way to guarantee that value types will behave like primitives without introducing a massive burden on the GC.
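For concreteness, a minimal sketch of the record syntax as it appears in the proposal (details of the final C# 9 shape may differ):

```csharp
using System;

// Value-based equality and a "with" copy come for free with a record.
var a = new Point(1, 2);
var b = new Point(1, 2);
Console.WriteLine(a == b);   // True: compared member-by-member, not by reference
var c = a with { Y = 3 };    // non-destructive mutation: a copy with one change
Console.WriteLine(c.Y);      // 3

// A positional record declaration: one line instead of a hand-written class
// with Equals, GetHashCode, and a copy constructor.
public record Point(int X, int Y);
```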


Which is why, after some industry experience, one learns to be cynical about the next cycle of "simple" languages.

Programming languages are software products as well, and to stay relevant they need to cover use cases that make developers "buy" them.


I'm the opposite: I'm cynical about languages simply adding more and more stuff.

The new languages are simple like the original languages started out, but with a different set of simple principles learned from the previous generation.

Languages are just as much about what they exclude as about what they include.


Go is the latest example that starting simple and not adding features just doesn't work, if one wants to stay relevant on the market of programming languages.


I used Common Lisp for a number of years, which probably is the most sublime example of a "behemoth of opaque incantations" - I never really saw that as a problem though.


While CL as described in Guy Steele's green book might have been expansive for its time, it was utterly dwarfed by most of the "comprehensive" horizontal programming systems that came later.

I'm specifically using 'system' as opposed to 'language' to refer to not just the core language syntax but also the canonical API knowledge needed to claim proficiency in developing in the language's ecosystem.


But that's just because they come with a pile of libraries - the core of C# and .NET aren't that complex, there's just a lot of it.

e.g. Compare how classes and objects work in .NET with CLOS and its MOP.


But quantity brings its own complexity. When you are writing the code this is usually not a problem, as you are selective in the patterns and features you employ. However, when you have to read others' code, you need to be aware of the details and intricacies of the coding styles, patterns and language features they employed, which might not be part of your daily toolbox. This is where 'there's just a lot of it' tends to become problematic.

CLOS is a topic on its own. I loved it myself, as for me it aligned much more with the way my brain wrapped around the object paradigm, and I felt the dispatching in what became the more traditional object approach, from Smalltalk to Java and C#, to be way too restrictive, therefore needing to resort to quaint constructs for pretty basic composite things. But I do know this was not the most popular opinion.


Me neither. Those features are there for a reason; everyone's 10% isn't the same, yet the language (product) needs to provide a solution for 100% of the user base.


What bugs me is the too-strong coupling between the language and the run-time environment.

(Non-)nullable references are a perfect example. They could have been supported on the old .NET Framework, but MS chose to only officially support them under .NET Core (will be .NET 5 soon).

I have a library that is used in both "legacy" (.NET Framework, never to be ported to .NET Core) and "new" (.NET Core or soon-to-be .NET Core) projects. It makes perfect sense to annotate it for nullability and it doesn't need any of the other C# 8 features that depend on .NET Core. But I can't (officially) do it, even though Mads Torgersen himself wrote:

https://devblogs.microsoft.com/dotnet/embracing-nullable-ref...

> You also have to set the language version to C# 8.0, of course, and that is not a supported scenario when one of the target versions is below .NET Core 3.0. However, you can still do it manually in your project settings, and unlike many C# 8.0 features, the NRT feature specifically happens to not depend on specific elements of .NET Core 3.1.
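A sketch of the project-file tweak being described (the property names are real MSBuild ones; forcing `LangVersion` 8.0 on the .NET Framework target is exactly the officially unsupported part):

```xml
<!-- Sketch: opting a multi-targeted library into nullable reference types.
     The net472 target compiling with C# 8 NRT is unsupported but known to
     work, per the discussion above. -->
<PropertyGroup>
  <TargetFrameworks>net472;netcoreapp3.1</TargetFrameworks>
  <LangVersion>8.0</LangVersion>
  <Nullable>enable</Nullable>
</PropertyGroup>
```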


Feel free to express your support at [1]. Although I'm not very optimistic that NRT in 7.x will happen, more support would help.

[1] https://github.com/dotnet/csharplang/issues/2995


And C# edges closer still to F#

F# should be Microsoft's main language, not C#. It's a great language that doesn't get enough attention, IMO.


For that to happen, functional languages would have to take over. Most people are just not into functional programming languages; they prefer imperative ones. I feel Microsoft is doing a great job of slowly forcing C#/.NET developers to learn F#. It was so easy for me to learn F#, and so strange that I was able to understand a functional programming language. I hadn't had any luck learning other functional languages.


F# is intentionally cumbersome for imperative/OOP programming which makes it impractical for many project types.


Not at all. The class definition syntax is very terse and you can use the mutable keyword for imperative programming.

In fact, I think F# is a better imperative language than C#, if you choose to use it that way.


Yep, this is my experience as well. isNull makes it easy to pattern match on null, the only thing I really needed a terse way of dealing with when facing the outside world, whether C# modules or IO that can return nulls. I was shocked by how smooth everything is. I was productive in maybe 2 days without prior knowledge of the entire MS/.NET/C# world. The reason I chose F# was the similarity to OCaml's syntax.


Can you back this up with some examples? I feel it is exactly the opposite. For loops, mutable dictionaries, in-place updates, to name a few imperative things you can do with ease in F#.


But the .NET framework is mostly implemented in C# / classes, so your F# code will always need to reference those objects, which may be null, etc. This is why I think MS is still building C#, because a rewrite of .NET in pure F# is not worth it. My 2c.


The only cumbersome part is when you need to interop with C#. The .NET event syntax is ridiculous, for example.


The imperative features feel bolted on and underdeveloped. Just try to break out of a loop or do an early return in F#. The language really discourages the pure-interface, impure-implementation approach. I often feel compelled to use explicit tail recursion instead, which in turn feels like it's placed in a weird middle ground between a goto fest and structured programming.


If you allow early returns in a loop then loops are no longer expressions.


Why do you think it's intentional, and what could be done differently?


I'd much rather prefer them to spend their time adding stuff to the CLR that removes the boilerplate code required for simple things.

We need more of the likes of "System.IO.File.ReadAllLines()": one-liner functions that take care of the most common things. For example, if you want to encrypt with AES you need 8-10 lines of code; if you want to convert a byte array to a hex string or do case-insensitive string comparisons, you need to create your own helper functions, etc. Until recently there was no JSON serializer out of the box.
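Taking the byte-array-to-hex case, a sketch of the little helper everyone ends up writing (`BitConverter.ToString` exists in the BCL, but it inserts dashes, so you still need a wrapper):

```csharp
using System;

class HexHelper
{
    // The kind of one-liner people keep rewriting: byte[] -> "deadbeef".
    // BitConverter.ToString gives "DE-AD-BE-EF", so we strip and lowercase.
    public static string ToHex(byte[] bytes) =>
        BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();

    static void Main()
    {
        Console.WriteLine(ToHex(new byte[] { 0xDE, 0xAD, 0xBE, 0xEF })); // deadbeef
    }
}
```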

For some of the features mentioned here, I can only wonder how many developers will really use them, while the cost is to make the syntax even more cryptic to a beginner.


I’d rather they leave this kind of functionality to libraries than to include every possible thing they think people will use the language for.


The “batteries included” approach worked well for Python. Yeah, sure, you can always create your own DLLs and load a bunch of NuGet packages every time. But it’s cumbersome for small projects and it makes code snippets non-shareable.


> We need more of the likes of "System.IO.File.ReadAllLines()", one liner

Funny part is, VB.NET has had that for a while


With new mainstream languages like TypeScript, Rust, Kotlin, etc., people have just opened the Pandora's box of advanced types like discriminated unions and intersection types.

There's no turning back. People will soon find they need more and more type operators, generics, and all kinds of type-level things. Each concrete operator or construct solves several specific type problems, and then introduces new ones. Until people realize dependent types are a thing.


Initonly seems to be a fix for what readonly should have been.

I never understood why readonly did not allow assignment during object initialization, especially since initializers are often used in place of a proper constructor and are understood to be something that happens at construction.

But there must have been a use case they were targeting that I'm just not familiar with.

When I first tried to use readonly, I actually expected it to behave the way initonly would now.
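A sketch of the difference being described (written with the `init` accessor spelling; the article calls the feature "initonly", and the exact surface syntax in the proposal may differ):

```csharp
// A readonly field can only be assigned in a constructor, so object
// initializers can't touch it. An init-only property can be set in an
// initializer and is immutable afterwards - the behavior the comment
// originally expected from readonly.
var p = new Point { X = 1, Y = 2 };  // allowed with init-only setters
// p.X = 3;                          // would be a compile error after init

public class Point
{
    public int X { get; init; }
    public int Y { get; init; }
}

public class LegacyPoint
{
    public readonly int X;               // settable only here or in a ctor;
    public LegacyPoint(int x) => X = x;  // new LegacyPoint { X = 1 } won't compile
}
```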


I dislike that they (want to) implement records as classes instead of special-casing some kind of struct.


It looks like records can also be structs, so it's a matter of taste (and competence) whether you make your records classes. See mentions here https://github.com/dotnet/csharplang/blob/master/proposals/r... of the ability to make them structs, not just classes.


yes, records can be classes or structs


What is the "lightweight" part of records? Is it because there is no need for a vtable as there is no inheritance?


lightweight means they have structural equality and are immutable. this is already a huge bonus. Microsoft thought people would add methods to them, thus used the class type.


Thanks. I don't really understand what is light about that.

Being able to compare memory to compare two objects doesn't make an object any lighter, compared to magically calling a compare method on the object itself (which can be decided at compile time, so there's no performance penalty: it depends on what the compare method does).

Immutability also doesn't make the object lighter, it's again a compile-time property.

I am still missing something...


yeah, as said, normally people would see records/data classes as classes that don't use the same amount of memory as a normal class, i.e. some kind of immutable struct. that's why I'm a little bit salty about the feature.

currently records are more comparable to Scala case classes, where you have automatic deconstructors, better equality, immutability, etc... and not necessarily a more compact memory layout.

reading the proposal makes more sense than this deep dive, since it explains the reasoning why they didn't add special-cased data classes.


Previously you had to write a lot of code to implement equality/compare etc.

Records automatically implement them. So it's lightweight in that you need less code.


99% of the time when I make a record type with a "with" it's a color, a vector etc. Not sure I really understand why the records can't be either classes or structs.


What's the difference between initonly and const?


Const is fixed at compile time; it looks like initonly is like readonly but can also be set outside a constructor, in an object initializer.
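To make the contrast concrete, a small sketch (again using the `init` accessor spelling as an assumption for how initonly would look):

```csharp
using System;

var a = new Order { Id = 1 };
var b = new Order { Id = 2 };
// One const value baked in at compile time and shared by all instances;
// a per-instance Id that is fixed once initialization is done.
Console.WriteLine($"{Order.TaxCode} {a.Id} {b.Id}"); // VAT 1 2

public class Order
{
    public const string TaxCode = "VAT";  // compile-time constant
    public int Id { get; init; }          // set per instance, only during init
}
```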


What I'd like to know is whether C# 9 will require a VS upgrade and/or a .NET Core upgrade above 3.


Most of these are just syntactic sugar, since the compiler can just generate IL that is compatible with the existing CLR. So a language server and compiler update should be enough in most cases.

But e.g. native ints will most likely require a CLR update, since I assume you can use native ints with reflection.


yes, I think both, and it will come with .NET 5



