So this is exactly what's unintuitive about queues. An analogy would be car lanes: intuition might lead you to conclude that if a 2-lane road is constantly congested, going to 4 lanes will solve the traffic. But this is not true. Many people who would otherwise have taken public transport, skipped the commute, or stayed inside will join the traffic until it once again reaches equilibrium. Adding more maintainers without addressing the core problems of the queue won't lead to success.
If you only focus on "solving the traffic" then you're right, adding more lanes ultimately just leads to more lanes being full. But the overall throughput is much higher! We need more holistic solutions, to be sure, but I hope no one thinks that means I-5 around LA could just be 2 lanes, because they'll be full of traffic either way.
Does induced demand apply to open source maintaining? What would be the mechanism for that?
For traffic, more drivers notice that the highway is easier to drive on and switch to it. Would people notice development speeding up and start filing more issues?
Fair call-out, although there are a couple of things to point out. I'm used to a Squash Merge workflow, which I think makes comment-based reviews easier because the reviewer can more easily see what changed after their comment. Many of the commits are merge commits. If you actually look at the timeline of the original PR, you will see that it also started with a smaller scope, but as time passed I went through the "while at it, let me also fix this" loop that I mentioned in the article.
The point of the article is: there is a feature that people would like, there is someone who wants to add it, and more than enough time has been spent trying to get this feature merged, yet the feature is nowhere to be found. That's the two-way street I am trying to get across. I wish I hadn't even been able to open the PR; I wish the maintainer would use more automation tools to groom feature requests and potential contributors, with agreed-upon plans and agreed-upon timelines, so that both sides' time could be used much more effectively.
As far as PR descriptions etc. go, I asked multiple times what the best route to merging would be; if that route ran through better descriptions, I was happy to write them. As you can see, I wasn't aware of the "no conventional commits" rule, so in my next PRs I used the correct approach, but that should be completely automatable. Yes, I should have spent more time studying Jellyfin's conventions, but I shouldn't have to, not because it's unfair to me, but simply because there are more contributors than maintainers, so maintainers should not rely on desired behavior from contributors; they should enforce that behavior as much as possible.
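To make "automatable" concrete, here's a rough sketch of the kind of CI check I mean. The script name, the disallowed-prefix rule, and the branch arguments are all made up for illustration, not Jellyfin's actual tooling:

```python
# check_commits.py - hypothetical CI helper that rejects commit messages
# which don't match a project's convention before a maintainer ever reviews.
import re
import subprocess
import sys

# Example rule: this project does NOT want "feat:/fix:" style prefixes.
CONVENTIONAL_PREFIX = re.compile(r"^(feat|fix|chore|docs|refactor|test)(\(.+\))?!?:", re.I)

def commit_subjects(base: str, head: str) -> list[str]:
    """Return the subject line of every commit between base and head."""
    out = subprocess.run(
        ["git", "log", "--format=%s", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

def main() -> int:
    base, head = sys.argv[1], sys.argv[2]
    bad = [msg for msg in commit_subjects(base, head) if CONVENTIONAL_PREFIX.match(msg)]
    for msg in bad:
        print(f"Commit message uses a disallowed prefix: {msg!r}")
    return 1 if bad else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as something like `python check_commits.py origin/master HEAD` in CI, and the contributor gets the convention feedback immediately instead of weeks later in review.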
Agreed. A problem I see with how AI reviews have been used is that after one kicks it off, the maintainer now has to review both the PR and the AI's review, which doesn't really save time. Like you said, if AI review were used more intentionally, e.g. every PR has to go through an AI review that checks the baseline requirements, and the maintainer only steps in after the contributor signals "I addressed everything the AI commented on, either by explaining my disagreement or making the changes", the time maintainers spend on review could go a lot further.
Thanks a lot! I appreciate the kind words. I do want to clarify that in Jellyfin-web's case, I think the maintainer does mean well and doesn't really take the "benevolent dictat... er, maintainer" approach. But there seems to be this defeatist argument of "we have one maintainer, which means 6 months per PR and features not being merged", and that's something I think Open Source projects could do a better job of addressing.
Git is a DVCS, created to help manage Linux, which is run by a distributed cabal of individuals, each with varying "authority", who choose whether something gets in or not.
The problem is that despite using the same DVCS for source code management, other projects insist on a hub-and-spokes development model, which does not scale.
Projects would be a lot more productive (and a lot more resilient) if they also followed a model where "The <x> maintainer hasn't accepted my pull request" just wasn't a big deal.
Nothing to do with the devs; it's that Linux has, as you say, a distributed cabal of individuals, while most things on GitHub have a non-distributed set of size 1. The reason it's hub-and-spokes is that that's all you can do with only one or two people, or apparently even half a dozen when only one or two are doing all the work. If the <x> maintainer doesn't accept your PR, it is a big deal, because there's no one else there to accept it. Even worse is when you've got the opposite: one or two devs spread across half a dozen projects (HACS springs to mind), who never respond to anything on most of the projects because there's essentially 1/10th of a developer on each one, even if the apparent maintainer list is several people.
1. Free Software / Open Source are Good and True by assertion. There is no God but source code, and Stallman is its prophet.
2. Questions whose answers tend to contradict point 1, such as “Gee, the world runs on Python — as wonderful a job as Guido and his inner circle have done, is it time to ask what an ideal management structure for a technology worth (tens? hundreds? of) billions of dollars might be?” are not welcome — are largely not asked.
(There could be a long discussion here about expectations placed on unpaid maintainers, and what the real purpose of Open Source / Free Software is beyond merely being zero cost at the point of use, but those tend to just go round forever. There's even a paid alternative to Jellyfin: Plex.)
We can have a business model: I can pay the developer to prioritize my PR if I consider it worth it because it solves my pain point. Companies do that, from what I have heard. There could be a Groupon-like model where multiple people facing the same issue pool money for prioritization.
Forks don't have to be hostile. A perfectly reasonable way to react to an overwhelmed maintainer is just to do a friendly fork. Keep the original name, attribution, git history, etc., update the README, and start acting as a trustworthy lieutenant. You can review stuck PRs and merge them into your own branch, whilst also merging with upstream master. After a while, if you seem to be making good calls, the original maintainer can do a bulk merge from your branch to bring in many PRs at once, and maybe add you to the repository.
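The git mechanics are minimal, assuming GitHub hosting; the remote names, branch names, and PR number below are just placeholders:

```sh
# Hypothetical friendly fork: keep history, track upstream, merge stuck PRs.
git clone https://github.com/you/project.git
cd project
git remote add upstream https://github.com/original/project.git

# Pull a stuck upstream PR (e.g. #123) into your integration branch.
git fetch upstream pull/123/head:pr-123
git merge --no-ff pr-123

# Keep following upstream so a later bulk merge back stays painless.
git fetch upstream
git merge upstream/master
```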
Check out my fork, Jellyden(iro). It’s the best way to watch Heat 2. All the media selection garbage is removed for a streamlined Heat 2 experience, because why would you want to watch anything else when you could be watching Heat 2 instead.
It's worth asking "if AI is so great for software development, won't that make it dramatically easier for people to maintain their own forks of software?"
(I suspect the answer ends up being no, but the reasons could be interesting)
I'm curious why you think the answer would be no. I've had some success with resolving complex merges with GPT 5.4, and it seems obvious enough that AI is a good solution for maintainers who don't have anyone they can trust to take over the project whilst also needing to boost throughput.
I've been using 5.4 recently, and even on "extra high" some of the tests it wrote were opening the source code and doing a regex to confirm the presence (or in some cases the absence) of specific substrings. It wasn't running the code to confirm behaviour, and the regexes didn't even do a basic check to confirm the text wasn't commented out (not that it would've been sufficient if they had, this is just to illustrate how bad it was).
So, yeah. I'd guesstimate this model was fine 75% of the time, mediocre 15-20%, and actively bad 5-10% of the time. How valuable it is depends on how much energy you can spare as a human on spotting the bad.
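For anyone curious what that looks like in practice, here's a minimal illustration of the anti-pattern versus what I actually wanted; the module and function names are invented, not the real project:

```python
import re
from pathlib import Path

# The pattern I kept seeing: a "test" that greps the source file for a
# substring instead of running anything. It passes even if the line is
# commented out or the feature is broken.
def test_timeout_is_configured():
    source = Path("app/client.py").read_text()  # hypothetical module
    assert re.search(r"timeout\s*=\s*30", source)

# What I actually wanted: call the code and assert on its behaviour.
def test_timeout_is_applied():
    from app.client import build_session  # hypothetical function
    session = build_session()
    assert session.timeout == 30
```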
You are right that it is faster, but how often are you running dependency updates? It will take more time to ensure that the new dependencies did not break anything than to do the upgrade itself.
It should, but if you are using poetry and, say, you forgot to pin boto to a specific version, your entire day might be spent waiting for poetry's resolver.
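i.e. something like this in pyproject.toml, so the resolver isn't left exploring every boto3 release (version numbers here are just illustrative):

```toml
[tool.poetry.dependencies]
python = "^3.11"
boto3 = "1.34.100"  # pinned; leaving it unconstrained can make resolution take ages
```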
Cool site! Personally, I think the weekly highlight is nice and all, but the value of an aggregator comes from categorization and search, and I didn't see either on the site. I would love to see its focus be more like selfh.st.
> Is there anything we can improve here that can make this easier for you?
Not OP, but it would be nice to include links to the categories in the header bar; that would make it easier. The dynamic animation of the categories in the middle of the page is annoying. You have them in a very small font in the footer, but that isn't the best.
Ah, missed the link in grayscale. In any case, I think a datatable is a must in an aggregator. I would get a lot more value out of being able to filter and sort based on the language, categories, GitHub stars, etc.
For a solo dev project this looks great, and good luck with it! But am I correct to understand this can't be self-hosted / is not open source? To me, "privacy-focused" is pretty much synonymous with open source and self-hostable, but I am curious if I missed something or if the community thinks otherwise.
Thank you! The SDKs are open-source, which I believe is the most important part. Users can verify how the integrations work and exactly what data is collected.
To me, privacy-focused also means to avoid collecting sensitive data in the first place. I'm hoping that mostly negates the need for self-hosting, in the spirit of keeping things simple for users.
That said, I'm definitely considering supporting self-hosting in the future as well, and possibly open-sourcing the rest of the codebase.