Yeah, working on a smart way to rate limit stale requests for those who don't have accounts. But the final version will allow anybody who is not a bot to get into a vim instance without logging in. Thanks for the feedback.
IMO rust started at this from the wrong direction. Comparing to something like zig which just cannot panic unless the developer wrote the thing that does the panic, cannot allocate unless the developer wrote the allocation, etc.
Rust instead has all these implicit things that just happen, and now needs ways to specify that in particular cases, it doesn't.
He's talking about this problem. Can this code panic?
foo();
You can't easily answer that in Rust or Zig. In both cases you have to walk the entire call graph of the function (which could be arbitrarily large) and check for panics. It's not feasible to do by hand. The compiler could do it though.
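A contrived Rust sketch (all names hypothetical) of why answering this requires walking the call graph: the possible panic lives one call down, and nothing in `foo`'s signature reveals it.

```rust
// Whether `foo` can panic is not visible from its own body:
// the unchecked index is hidden one level down in `helper`.
fn foo(v: &[i32]) -> i32 {
    helper(v) // looks innocent
}

fn helper(v: &[i32]) -> i32 {
    v[3] // panics at runtime if v.len() < 4
}

fn main() {
    println!("{}", foo(&[1, 2, 3, 4])); // prints 4
    // foo(&[1]) would compile fine and panic at runtime.
}
```

Scale this up through a few layers of dependencies and checking by hand becomes infeasible, which is the point above.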
"Panic-free" labels are so difficult to ascribe without being misleading because temporal memory effects can cause panics. Pushed too much onto your stack because the function happened to be preceded by a ton of other stack allocations? Crash. Heap too full and malloc failed? Crash. These things can happen from user input, so labelling a function no_panic just because it doesn't do any unchecked indexing can dangerously mislead readers into thinking code can't crash when it can.
There's plenty of independent interest in properly bounding stack usage because this would open up new use cases in deep embedded and Rust-on-the-GPU. Basically, if you statically exclude unbounded stack use, you don't even need memory protection to implement guard pages (or similar) for your call stack usage, which Rust now requires. But this probably requires work on the LLVM side, not just on Rust itself.
Failable memory allocations are already needed for Rust-on-Linux, so that also has independent interest.
Or effect aliases. But given that it's strictly a syntactic transformation, it seems like a case of "make the wrong default today, fix it in the next edition". (Editions come with tools to update syntax changes.)
Something like that, except you probably also want to be able to express things like “whatever the callback I’m passed can throw, I can throw all of that and also FooException”. And correctly handle the cases when the callback can throw FooException itself, and when one of the potential exceptions is dependent on a type parameter, and you see how this becomes a whole thing when done properly. But it’s doable.
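A rough Rust analogue of the "whatever the callback can throw, plus FooException" shape (all names here are made up for illustration): the function's error type is a union of the callback's error type parameter and its own error, which is roughly what a proper effect/exception system would track for you.

```rust
#[derive(Debug, PartialEq)]
enum FooError {
    Bar,
}

// "I can fail with whatever the callback fails with, and also FooError."
#[derive(Debug, PartialEq)]
enum CallError<E> {
    Callback(E),   // propagated from the callback
    Foo(FooError), // raised by this function itself
}

fn with_callback<T, E>(fail: bool, cb: impl Fn() -> Result<T, E>) -> Result<T, CallError<E>> {
    if fail {
        return Err(CallError::Foo(FooError::Bar));
    }
    cb().map_err(CallError::Callback)
}

fn main() {
    let ok: Result<i32, CallError<&str>> = with_callback(false, || Ok(7));
    println!("{:?}", ok); // Ok(7)
    let err: Result<i32, CallError<&str>> = with_callback(true, || Err("cb failed"));
    println!("{:?}", err);
}
```

Note the awkward part the comment alludes to: if the callback itself can fail with `FooError`, the caller now sees it via two routes (`Callback(FooError)` vs. `Foo(...)`), which a real effect system would want to deduplicate.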
> Comparing to something like zig which just cannot panic unless the developer wrote the thing that does the panic
The zig compiler can’t possibly guarantee this without knowing which parts of the code were written by you and which by other people (which is impossible).
So really it’s not “the developer” wrote the thing that does the panic, it’s “some developer” wrote it. And how is that different from rust?
Huh? It seems to me that in these respects the two languages are almost identical. If I tell the program to panic, it panics, and if I divide an integer by zero it... panics and either those are both "the developer wrote the thing" or neither is.
In Zig, dividing by 0 does not panic unless you decide that it should or go out of your way to use unsafe primitives [1]. Same for trying to allocate more memory than is available. The general difference is as follows (IMO):
Rust tries to prevent developers from doing bad things, then has to include ways to avoid these checks for cases where it cannot prove that bad things are actually OK. Zig (and many others such as Odin, Jai, etc.) allow anything by default, but surface the fact that issues can occur in their API design. In practice the result is the same, but Rust needs to be much more complex, both to do the proving and to allow developers to ignore its rules.
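For what it's worth, Rust's standard library also offers the "surface it in the API" style for arithmetic; a small sketch using the standard `checked_div`, which returns `Option` instead of panicking:

```rust
fn main() {
    let a: i32 = 10;
    for b in [2, 0] {
        // checked_div returns None on a zero divisor (and on overflow)
        // instead of panicking like the `/` operator would.
        match a.checked_div(b) {
            Some(q) => println!("{a} / {b} = {q}"),
            None => println!("{a} / {b}: division by zero"),
        }
    }
}
```

The difference being discussed is which behavior is the *default*: in Rust the panicking `/` operator is the path of least resistance, and the non-panicking form is the opt-in.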
Could you clarify what's going on in the Zig docs[0], then? My reading of them is that Zig definitely allows you to try to divide by 0 in a way the compiler doesn't catch, and this results in a panic at runtime.
I'd be interested if this weren't true, since the only feasible compiler solutions to preventing division-by-0 errors are either: defining the behaviour, which always ends up surprising people later on, or; incredibly cumbersome or underperformant type systems/analyses which ensure that denominators are never 0.
> the only feasible compiler solutions to preventing division-by-0 errors are either: defining the behaviour, which always ends up surprising people later on, or; incredibly cumbersome or underperformant type systems/analyses which ensure that denominators are never 0.
I don't think it's very cumbersome if the compiler checks whether the divisor could be zero. Some programming languages (Kotlin, Swift, Rust, Typescript...) already do something similar for possible null pointer access: they require that you add a check "if s == null" before the access. The same can be done for division (and remainder / modulo). In my own programming language, this is what I do: you cannot have a division by zero at runtime, because the compiler does not allow it [1]. In my experience, integer division by a variable is not all that common in reality. (And floating point division does not panic, and integer division by a non-zero constant doesn't panic either.) If needed, one could use a static function that returns 0 or panics or whatever is best.
>Some programming languages (Kotlin, Swift, Rust, Typescript...) already do something similar for possible null pointer access: they require that you add a check "if s == null" before the access.
For Rust, this is not accurate (though I don't know for the other languages). The type system instead simply enforces that pointers are non-null, and no checks are necessary. Such a check appears if the programmer opts in to the nullable pointer type.
The comparison between pointers and integers is not a sensible one, since it's easy to stay in the world of non-null pointers once you start there. There's no equivalent ergonomics for the type of non-zero integers, since you have to forbid many operations that can produce 0 even on non-0 inputs (or onerously check that they never yield 0 at runtime).
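For comparison, Rust's standard library does ship non-zero integer types, and the ergonomics gap described above shows up in how you get into and fall out of them. A small sketch:

```rust
use std::num::NonZeroI32;

fn main() {
    // Entering the non-zero world requires a runtime check (or a constant)...
    let d = NonZeroI32::new(4).expect("divisor must be non-zero");
    // ...but once there, dividing by it can never hit a zero divisor.
    println!("{}", 20 / d.get()); // prints 5

    // Unlike non-null pointers, ordinary arithmetic on non-zero values
    // can produce zero, so you keep falling back out of the type:
    assert!(NonZeroI32::new(4 - 4).is_none());
}
```

That last line is the crux: with pointers you can stay non-null indefinitely, but almost any arithmetic operation forces you back through the zero check.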
>The same can be done for division (and remainder / modulo). In my own programming language, this is what I do: you can not have a division by zero at runtime, because the compiler does not allow it... In my experience, integer division by a variable is not all that common in reality
That's another option, but I hardly find it a real solution, since it involves the programmer inserting a lot of boilerplate to handle a case that might actually never come up in most code, and where a panic would often be totally fine.
Coming back to the actual article, this is where an effect system would be quite useful: programmers who actually want to have their code be panic-free, and who therefore want or need to insert these checks, can mark their code as lacking the panic effect. But I think it's fine for division to be exposed as a panicking operation by default, since it's expected and not so annoying to use.
The syntax in Kotlin is: "val name: String? = getName(); if (name != null) { println(name.length) // safe: compiler knows it's not null }"
So, there is no explicit type conversion needed.
I'm arguing that for integer / and %, there is no need for an explicit "non-zero integer" type: the divisor is just an integer, and the compiler needs a proof that the value is not zero. For places where a panic is fine, there could be a method that explicitly panics in case of zero.
I agree an annotation / effect system would be useful, where you can mark sections of the code "panic-free" or "safe" in some sense. But "safe" has many flavors: array-out-of-bounds, division by zero, stack overflow, out-of-memory, endless loop. Ada SPARK allows proving the absence of runtime errors using "pragma annotate". Dafny and Lean have similar features (in Lean you can give a proof).
> I think it's fine for division to be exposed as a panicking operation by default
That might be true. I think division (by non-constants) is not very common, but it would be good to analyze this in more detail, maybe by analyzing a large codebase... Division by zero does cause issues sometimes, and so the question is, how much of a problem is it if you disallow unchecked division, versus the problems if you don't check.
More specifically, Zig will return an error type from the division and if this isn't handled THEN it will panic, kind of like an exception except it can be handled with proper pattern matching.
I can't find anything related to division returning an error type. Looking at std.math.divExact, rem, mod, add, sub, etc. it looks to me like you're expected to use these if you don't want to panic.
Actually you're right, I was going by the source code which was in the link of the comment you replied to, but I missed that that was specifically for divExact and not just primitive division.
It's nice that people are taking this up, and one of the main benefits of open source in the first place. I have my doubts that this will succeed if it's just one guy, but maybe it takes on new life this way and I would never discourage people from trying to add value to this world.
That said, I increasingly have a very strong distaste for these AI-generated articles. They are long and tedious to read, and it really makes me doubt that what is written there is actually true at all. I much prefer a worse-written but to-the-point article.
I agree completely. I know everyone is tired of AI accusations but this article has all of the telltale signs of LLM writing over and over again.
It’s not encouraging for the future of a project when the maintainer can’t even announce it without having AI do the work.
It would be great if this turns into a high effort, carefully maintained fork. At the moment I’m highly skeptical of new forks from maintainers who are keen on using a lot of AI.
I'm not Dang, but I agree AI articles are a disease - but with reservations.
In this case, a Chinese developer who's not a native English speaker is, I feel, _adding_ to "interesting conversations", not detracting from them, by using AI assistance to publish an article like this in readable/understandable English.
I know HN and Ycombinator is _hugely_ US focused and secondarily English-speaking focused. But there's more and more interest in non US based "intellectual curiosity" where the original source material is not in English. From YC's capitalism-driven focus, they largely don't care. From my personal hacker ethic curiosity, I'd hate to miss out on articles like this just because of a prejudice against non English speakers who use AI to provide me with understandable versions.
Having said that, AI hype in general certainly feels like a disease to me. I was noting recently how the percentage of homepage links/discussions I click has gone way down. I remember the days where I'd click and read 80 or 90% of the things that made it to the homepage. These days I eyeroll my way past probably 2/3rds of them because they look at first glance (and from recent experience) to just be AI hype in one form or another. (I've actually considered building myself a tool that'd grab the first three or so pages and then filter out everything AI related - but the other option is just to visit less often...)
I'm all for people who aren't native English speakers publishing their thoughts and opinions. But I would much prefer they still wrote down their own thoughts in their own words in their native language and machine translated it. It would be much more authentic and much more interesting--and much more worth reading.
I just get my agent to read them for me and present a few options for comments as derived from the vibes of any existing comments. If I time out, it posts a random option, then at the end of the week I get it to summarise all the content I (royal) read and distill it into a take-aways note in my (royal) journal. It's been a huge productivity boost. When ever I think I might want to think about something I just ask the agent to find a topic I (royal) read within some timeframe and have it synthesise a few new dot points in my (royal) journal. I'm hoping to reach 10,000 salient points by the end of the year.
I have nothing against a skilled maintainer with attention to detail using AI tools for assistance.
The important part is the human who will do more than just try to get the LLM to do the hard work for them, though. Once software matures the bugs and edge cases become more obscure and require more thoughtful input. AI is great at getting things to some high percentage of completeness, but it takes a skilled human to keep it all moving in the right direction.
I would cite this blog post as an example of lazy LLM use: It's over-dramatic, long, retains all of the poor LLM output styling that most human editors remove, and suggests that the maintainer isn't afraid to outsource everything to the LLM.
> it really makes me doubt that what is written there is actually true at all
Indeed, the whole "Ironically, switching from Apache 2.0 to AGPL irrevocably makes the project forkable" section seems misguided. Apache 2.0-licensed software is just as forkable.
The point being we can simply tell our agents to start at the rug pull point and implement the same features and bug fixes on the Apache fork referring to the AGPL implementation.
> I have my doubts that this will succeed if it's just one guy
Normally, I'd agree with you 100%.
But there are some interesting mitigating circumstances here.
1) It's "just one guy" who's running a fairly complex open source project already, one which uses minio.
2) The stated intention is that the software is considered "finished" with no plans to add any features, so the maintenance burden is arguably way lower than typical open source projects (or forks)
3) they're quite open about using AI to maintain it - and like it or hate it, this "finding and helping fix bugs in complex codebases" seems to be an area where current AI is pretty good.
I'm sure a lot of people will be put off by the forker being Chinese, but honestly, from outside the US right now, it's unclear if Chinese or American software is a more existential risk.
I'll admit I'd never heard of their Pigsty project before, but a quick peek at their github shows a project that's been around for 5 years already, and has pull requests from over a dozen contributors. That's no guarantee this isn't just a better prepared Jia Tan xz utils supply chain attack, but at least it's clearly not just something that's all been created by one person over 2 or 12 months.
I am sorry about that. What I am saying is that it's hard to trust the content given the context. And more so these articles are extremely verbose with a lot of BS in them, so it makes getting to the "content" a lot more work for me.
In any case I had one paragraph about the content and one side-note about the writing style. Every single reply except one focused on the side-note, including you.
I'm generally fully in agreement that AI writing is bad.
But this is one of the few cases where it might be acceptable.
Author is not a native speaker; in an announcement that a known project is being forked for maintenance the occasional odd phrasing and possible errors in grammar could sound unprofessional.
I wonder if in such cases a better use of AI would be to try to write it yourself and just ask a LLM to revise instead? Maybe with some directive to "just point out errors in syntax and grammar, and factual mistakes. No suggestions on style"?
In general if you have the (IMO sensible) approach of taking as few dependencies as possible and not treating them like a black box, then for any error you can simply look at the call stack and figure out the problem from reading the code during development.
Outside of that, error codes are useful for debugging code that is running on other people's machines (i.e. in production) and for reporting reasons.
I'm kind of on the same journey, a bit less far along. One thing I have observed is that I am constantly running out of tokens in claude. I guess this is not an issue for a wealthy person like Mitchell but it does significantly hamper my ability to experiment.
I have done the monte carlo thing in practice with a team and it works well under some conditions.
The most important is that the team needs to actually use the task board (or whatever data source you use to get your inputs) to track their work actively. It cannot be an afterthought that gets looked at every now and then, it actually needs to be something the team uses.
My current team kind of doesn't like task boards because people tend to work in small groups on projects where they can keep that stuff in their own heads. This requires some more communication but that happens naturally anyway. They are still productive, but this kind of forecasting doesn't work then.
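For anyone curious what "the monte carlo thing" looks like mechanically, here is a minimal sketch, assuming the only input is a history of completed items per week pulled from the task board (all numbers below are made up for illustration): repeatedly simulate finishing the backlog by sampling past weekly throughput, then read a percentile off the outcomes.

```rust
// Simulate how many weeks a backlog takes if each week's throughput is a
// random sample from the team's historical weekly throughput.
fn simulate_weeks(history: &[u32], backlog: u32, seed: &mut u64) -> u32 {
    let mut remaining = backlog as i64;
    let mut weeks = 0;
    while remaining > 0 {
        // Tiny xorshift64 PRNG so the sketch needs no external crates.
        *seed ^= *seed << 13;
        *seed ^= *seed >> 7;
        *seed ^= *seed << 17;
        let sample = history[(*seed % history.len() as u64) as usize];
        remaining -= sample as i64;
        weeks += 1;
    }
    weeks
}

fn main() {
    let history = [3, 5, 2, 4, 6, 3]; // items finished in recent weeks
    let backlog = 40; // items left to do
    let mut seed = 0x243F_6A88_85A3_08D3u64;
    let mut outcomes: Vec<u32> = (0..10_000)
        .map(|_| simulate_weeks(&history, backlog, &mut seed))
        .collect();
    outcomes.sort();
    // An 85th-percentile answer is a common choice for "when will it be done?"
    println!("85th percentile: {} weeks", outcomes[outcomes.len() * 85 / 100]);
}
```

Which is exactly why the board data has to be real: if half the finished work never hits the board, the sampled throughput is fiction and so is the forecast.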
I hate this whole thing with me having to use some tool to track the work (usually Jira which is a PoS). My entire output is data, why can't a tool automatically summarise what I'm doing? It seems an ideal task for an AI actually.
Jira is Excel for task management. The OOTB setup works absolutely great, and then someone comes along who wants a custom field on tasks to support <something that they read about elsewhere>, and now you have to fill in that custom field. They leave, and someone else comes in and adds a new one. 5 years later you have 11 new fields that partially overlap, some needed for some views, some for others, but you can't use the default boards because person Y decided that they wanted to call Epics "Feats" and made a custom issue type.
And in the end, the people who actually use those boards just export a filter to Excel and work there...
I don't think things have changed that much in the time I've been doing it (roughly 20 years). Tools have evolved and new things were added but the core workflow of a developer has more or less stayed the same.
I also wonder what those people have been doing all this time... I also have been mostly working as a developer for about 20 years and I don't think much has changed at all.
I also don't feel less productive or lacking in anything compared to the newer developers I know (including some LLM users) so I don't think I am obsolete either.
At some point I could straight-up call functions from the Visual Studio debugger Watch window instead of editing and recompiling. That was pretty sick.
Yes I know, Lisp could do this the whole time. Feel free to offer me a Lisp job drive-by Lisp person.
Isn't there a whole ton of memes about the increase in complexity, full-stack everything, and having to take on devops? It's not like nothing has changed at all.
I don't think that's true, at least for everywhere I've worked.
Agile has completely changed things, for better or for worse.
Being a SWE today is nothing like 30 years ago, for me. I much preferred the earlier days as well, as it felt far more engineered and considered as opposed to much of the MVP 'productivity' of today.
MVP is not necessarily opposed to engineered and considered. It's just that many people who throw that term around have little regard for engineering, which they hide behind buzzwords like "agile".
It does make sense to highlight, because this kind of statistic is a very strong indicator that the market is not competitive. This is not a normal kind of profit margin and basically everyone except for Apple would benefit from them lowering the margins.
In normal markets there are competitors who force each other to keep reasonable profit margins and to improve their product as opposed to milking other people's hard work at the expense of the consumer.
Might not be competitive, but it's totally voluntary. No one needs apps; they're not food or shelter, so clearly consumers are willing and able to pay this.
The consumer is willing to pay the price based on the perceived value from the App Store
The relevant market here is the creators not the consumers. As a creator you have no choice but to accept whatever fees Apple, Google, Steam etc set. Or whatever rates Spotify pays you per stream. The fact you "could" host your own website is irrelevant when the reality is nobody will visit it.
> The relevant market here is the creators not the consumers. As a creator you have no choice but to accept whatever fees Apple, Google, Steam etc set. Or whatever rates Spotify pays you per stream. The fact you "could" host your own website is irrelevant when the reality is nobody will visit it.
Collective action by the creators would help.
All they have to do is dual-host (a fairly trivial matter, compared to organised collective action). What would make things even better is if they dual host on a competing platform and specify in their content that the competing platform charges lower fees. If even 10% of the creators did this:
1. Many of the consumers would switch.
2. Many of the creators not on the competing platform would also offer dual-hosting.
The problem is not "As a creator you have no choice but to accept whatever fees Apple, Google, Steam etc set". The problem is the mindset that their content is not their own.
I say it's their mindset, because they certainly don't act as if they own the content - when your content is available only via a single channel, you don't own your content, you are simply a supplier for that channel.
How? I thought it was a Patreon thing - the "competing platform" would be competing with the Patreon app.
I'm not familiar with Patreon, but I thought the way it worked was that you could tip content creators via the Patreon app. I'm pretty certain that Apple cannot tell Patreon (a third party) that they are only allowed to offer exclusive content.
Apple doesn’t allow you to mention that you have alternate payment channels on other platforms. Can’t even allude to it.
To me this is the thing that should be outlawed. Let people pay the Apple tax if they want, but don’t prevent people from making other arrangements. Most people are lazy and will pay the tax, if it isn’t excessive.
What is also totally voluntary is our decision to let Apple exist as an entity, to give them a government enforced monopoly over certain things, to make it illegal to break their technical protections of their monopoly etc.
If the AI generated most of the code based on these prompts, it's definitely valuable to review the prompts before even looking at the code. Especially in the case where contributions come from a wide range of devs at different experience levels.
At a minimum it will help you to be skeptical at specific parts of the diff so you can look at those more closely in your review. But it can inform test scenarios etc.
A lot of Dutch government and government adjacent services run on Microsoft Azure as well. Which is not the same level of concern, but it does mean the US government has access to that data.
Even if they don't have access to the actual data, the US government has the option to order Microsoft to switch these essential government services off. For example, as a means of pressuring the Dutch government into supporting the American annexation of Greenland.
Or even, post-Greenland, to force the Dutch to give Trump the Dutch Caribbean islands off the Venezuelan coast as well (Aruba, Bonaire, Curaçao).
If I were a Dutch member of parliament, I would be insisting this particular vulnerability to extortion be addressed as soon as possible. Of course, the US can still threaten to, at worst, nuke us all to smithereens but let's hope they're not willing to go that far.
Which has happened before and is the reason why the International Criminal Court is moving away from MS365 [0]
This prompted me to try OnlyOffice, and man is that nice. I do like LibreOffice, but 2 things bug me: it just looks old. And second, I have, since the dawn of time (and Sun's StarOffice), had issues just telling the software: "This is a Dutch doc, apply Dutch spelling and grammar checks". It has never worked well; even Firefox text fields work better. But with OnlyOffice it seems to work well so far, and also, it will be much much more recognizable to ex-MS Office users. I hear the interop with MS formats is also better.
> the US government has the option to order Microsoft to switch these essential government services services off
They can also order MS and Amazon and Google and Apple to switch off services on which most of the economy relies, and which most devices require to function.
But if they do that, the Dutch government has the option to pull ASML and its services (like maintenance, parts) from the US, which will cripple its chip industry. I wouldn't be surprised if there's a remote shutdown built into their devices.
The prime-minister in waiting has said that there will be a cabinet post for digital security, and Parliament has expressed in the same motion that they are worried about dependence on foreign cloud services as well.
Note: legally, the Netherlands can't give Aruba or Curaçao to the US, as in the constitutional framework of the Dutch kingdom they are seen as sovereign entities.