Hacker News | zigzag312's comments

Type unions only at first, but more is being planned.

The .NET JIT supports dynamic PGO (profile-guided optimization).

That's due to trimming, which can also be enabled for self-contained builds that use JIT compilation. Trimming is mandatory for AOT, though. But you can use annotations to prevent trimming of specific members.
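For reference, the annotations look roughly like this (a minimal sketch using the real `DynamicallyAccessedMembers` attribute; trimming itself is turned on with `<PublishTrimmed>true</PublishTrimmed>` in the project file):

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class ReflectionHelper
{
    // The annotation tells the trimmer to keep the public methods of
    // whatever type flows into this parameter, so reflecting over them
    // still works after unused code has been trimmed away.
    static void DumpMethods(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
        Type type)
    {
        foreach (var m in type.GetMethods())
            Console.WriteLine(m.Name);
    }

    static void Main() => DumpMethods(typeof(string));
}
```

Without the annotation, the trimmer has no way to see the reflection-based use and may remove the methods.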

AOT doesn't support generating new executable code at runtime (Reflection.Emit), as you can in JIT mode.
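A small example of the kind of code this rules out (runs fine under the JIT; under Native AOT, Reflection.Emit isn't supported because there's no JIT to compile the emitted IL):

```csharp
using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Build an int -> int "add one" function at runtime by emitting IL.
        var dm = new DynamicMethod("AddOne", typeof(int), new[] { typeof(int) });
        var il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push the argument
        il.Emit(OpCodes.Ldc_I4_1); // push the constant 1
        il.Emit(OpCodes.Add);      // add them
        il.Emit(OpCodes.Ret);      // return the result

        var addOne = (Func<int, int>)dm.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(addOne(41)); // prints 42 when JIT-compiled
    }
}
```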


From what I've read, this is only for the first implementation of unions, to reduce the amount of compiler work needed. They have designed unions in a way that lets them implement enhancements like this in the future. Things like non-boxing unions and tagged unions / enhanced enums are still being considered, just not for this version.
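For comparison, until language-level unions land, the usual way to emulate one today is a closed record hierarchy plus pattern matching (this sketch is also reference-based and allocates, much like the first version of the proposal described above):

```csharp
using System;

// A hand-rolled "union" of two cases via an abstract base record.
abstract record Shape;
record Circle(double Radius) : Shape;
record Rectangle(double W, double H) : Shape;

class Demo
{
    // Pattern matching dispatches on the case, like a match on a union.
    public static double Area(Shape s) => s switch
    {
        Circle c    => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.W * r.H,
        _           => throw new ArgumentOutOfRangeException(nameof(s)),
    };

    static void Main() => Console.WriteLine(Demo.Area(new Rectangle(3, 4))); // prints 12
}
```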

This is the general pattern of how the C# team operates, IME.

    "Never let perfect be the enemy of good"
Very much what I've seen from them over the years as they iterate on and improve features and propagate them through the platform. AOT is an example: they shipped the feature first and are incrementally moving first-party packages over to support it. Runtime `async` is another example.

In the meantime I still haven't done any project with nullable references, because the ecosystem has yet to catch up. The same applies to ValueTask for async code.
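For context, the usual motivation for ValueTask is avoiding a Task allocation when the result is often available synchronously. A sketch, using a hypothetical cache (the names are illustrative):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Cache
{
    readonly ConcurrentDictionary<string, string> _cache = new();

    // On a cache hit, ValueTask wraps the value directly with no Task
    // allocation; only the miss path pays for a real async operation.
    public ValueTask<string> GetAsync(string key) =>
        _cache.TryGetValue(key, out var hit)
            ? new ValueTask<string>(hit)
            : new ValueTask<string>(LoadAsync(key));

    async Task<string> LoadAsync(string key)
    {
        await Task.Delay(10);                    // stand-in for real I/O
        return _cache[key] = $"value-for-{key}"; // populate the cache
    }
}
```

The catch, and part of why adoption lags, is that a ValueTask may only be awaited once, so it can't be passed around as freely as a Task.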

Which part of the ecosystem is blocking your projects from using nullable references? I find them very helpful, but the projects I've used them in were all newer, or had been migrated to the new SDK style.

Which projects?

It is relatively easy to find corporate libraries or commercial products that still aren't using it, including Microsoft products still stuck on .NET Framework or .NET Standard 2.0.

If you want naming and shaming of commercial products on modern .NET, here is one; I can provide more.

https://github.com/episerver/content-templates


You can use dependencies that don't use nullable reference types in projects that do. You can also enable/disable nullable reference types per file, since the feature only influences static analysis. There's no runtime difference between a non-nullable reference type and a nullable reference type.
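Concretely, the per-file switch is just a directive (project-wide it's `<Nullable>enable</Nullable>` in the csproj):

```csharp
#nullable enable
class Greeter
{
    // `string?` is explicitly nullable; plain `string` is assumed non-null.
    // The compiler warns if `name` is dereferenced without a null check.
    public static string Greet(string? name) =>
        name is null ? "hello, stranger" : $"hello, {name.ToUpper()}";
}
#nullable restore
// Below the directive the file is back to its project-level setting:
// no extra warnings, and `string` is null-oblivious again if disabled.
```

Since the annotations compile away, a dependency that predates the feature just shows up as null-oblivious rather than breaking anything.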

I personally like the direction C# is taking: a multi-paradigm language with GC and the flexibility to let you write highly expressive or high-performance code.

Better than a new language for each task, like you have with Go (microservices) and Dart (GUI).

I'm using F# on a personal project, and while it is a great language, I think the syntax can be less readable than C#'s. C# code can contain a few too many boilerplate keywords, but it has a clear structure. The lack of parentheses in F# makes it harder to grasp the structure of the code at a glance.


There are more than 10x more users than lines of code


Being less efficient is also a problem, because if the majority becomes less efficient (lower productivity), the overall wealth and economic growth of that society will decline significantly.

We do have evidence that when money is not a problem, we become less efficient: for example, monopolies or state-run companies.

Just the first result from Google: https://www.mdpi.com/2227-7390/11/3/657

Another problem with UBI is that, if we want UBI to cover the basic costs of living, the expense is actually quite big, as UBI would essentially need to cover things like rent, food, and health services. Otherwise we will still have plenty of homeless people under UBI.


I think this further proves that the hypothesis of decoupling content from presentation is flawed. The question is how many more data points do we need before we admit that?


Yes, IIRC the concept wasn't to decouple content from presentation but to decouple semantics from presentation, in order to re-present content in different media using each medium's native representation of a particular semantic. However, many things are not much different across media; a headline is a headline. And other things, like "emphasis", can have cultural differences even within the same medium: bold, italics, or even double quotes.


I suppose that, to a limited extent, for "articles" in the typical sense, the strategy might be said to have some modicum of success. I'm sure many CMSs store articles as mostly "plain" HTML and regurgitate the same directly into a part of the final HTML document, with actual normal CSS rules styling it.


Try creating a task that deletes these tasks. It could be triggered on startup and periodically, say once a day.


Some sort of LLM audit trail is needed (containing the prompts used, the model identifier, and markers for all LLM-written code). It could even be signed by LLM providers (though that wouldn't work with local models). An append-only, standard format that is required to be included in the PR. It wouldn't be perfect (e.g. the log could be deleted entirely), but it might help with code reviews.
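A minimal sketch of what one entry in such a log could look like; every field name here is hypothetical, not any existing standard:

```csharp
using System;
using System.IO;
using System.Text.Json;

// Hypothetical audit record for one LLM-assisted change.
record LlmAuditEntry(
    DateTimeOffset Timestamp,
    string Model,               // model identifier, e.g. provider/model name
    string Prompt,              // prompt used for this change
    string[] FilesTouched,      // files containing LLM-written code
    string? ProviderSignature); // signed by the provider; null for local models

static class AuditLog
{
    // Append as JSON Lines: entries are only ever added to the end,
    // never rewritten in place, which is the append-only property.
    public static void Append(string path, LlmAuditEntry entry) =>
        File.AppendAllText(path, JsonSerializer.Serialize(entry) + Environment.NewLine);
}
```

A reviewer (or CI) could then cross-check the listed files against the diff in the PR.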

This would probably be more useful for seeing what (and how) code was written by LLMs, not really for catching bad actors trying to hide LLM use.


This would be a useful feature to bake into the commits generated by agents. Heck you don’t even need to wait — just change your prompt to tell it to include more context in its commit messages and to sign them as Claude rather than yourself…

