
But why bother running Ubuntu at all just to jump through hoops to avoid snaps? Snaps are obviously Ubuntu's thing, so it feels counterproductive to run Ubuntu and then fight against it.

Most of our servers are Debian (well, mine are Devuan), but there are a few that have to be Ubuntu or Red Hat for official support of COTS software.

Of those choices, I prefer Ubuntu as being closer to the Debian/Devuan ones.


Why are you stuck on Ubuntu? What is holding you back?

Which of course raises the question: why the fuck doesn't snap use either of these mechanisms?

It doesn't completely solve function coloring, though. Causing carrier threads to get pinned is still not good, just as calling a blocking function from an async function is not good in colored systems.

There are not a lot of cases left that still cause pinning. FFI is the main one.

Go has a GC too, and arguably a worse one than Java's.

Yeah, but I do like not having to give Go several flags to make it do something reasonable with its memory.

The "reasonable" thing Go does is pause the threads doing the actual work of your program if it decides they are creating too much garbage for it to keep up with, severely limiting throughput.

I think this is a misunderstanding. If the program out-paces the GC because the GC guessed the trigger point wrong, something has to give.

In Go, what gives is that goroutines have to use some of their time slice to assist the GC and pay down their allocations.

In Java, I believe what you used to get was called "concurrent mode failure" which was somewhat notorious, since it would just stop the world to complete the mark phase. I don't know how this has changed. Poking around a little bit it seems like something similar in ZGC is called "allocation failure"?

The GC assist approach adopted by Go was inspired by real-time GC techniques from the literature and in practice it works nicely. It's not perfect of course, but it's worked just fine for lots of programs. From a purely philosophical point of view, I think it results in a more graceful degradation under unexpectedly high allocation pressure than stopping the world, but what happens in practice is much more situational and relies on good heuristics in the implementation.


A lot of the answer is that if you can do more work while generating less garbage (a lower allocation rate), this problem basically solves itself. Basically every "high-performance GC language" other than Java has value types/structs, which allow for a much lower allocation rate and put a lot less pressure on the GC.

How much lower, though? Value types are important, and fortunately they are coming to Java as well. But they don't reduce allocation rates nearly enough in every kind of software. They may be a necessity in games, certain numeric computations, or low-latency trading, but for a typical web server they don't matter all that much - people use identity-carrying objects in value-typed languages the same way there. Especially since, with thread-local allocation buffers (TLABs) in Java, single-use object allocations are not particularly expensive to begin with: live objects are evacuated and then the whole buffer is reset.

So unless you claim that there is no software in Go/C# where the GC is the bottleneck, no, the problem absolutely doesn't solve itself.
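To make the allocation-rate point concrete, here is a hedged micro-illustration in Java (not a benchmark, just a sketch): the boxed version allocates one heap object per element beyond the small-integer cache, while the primitive version makes a single array allocation. This is roughly the gap that value types aim to close for richer types.

```java
import java.util.ArrayList;
import java.util.List;

public class AllocPressure {
    // Boxed: autoboxing allocates an Integer per element (outside the
    // small-value cache), so the allocation rate scales with n.
    static long sumBoxed(int n) {
        List<Integer> xs = new ArrayList<>();
        for (int i = 0; i < n; i++) xs.add(i); // each add may allocate
        long total = 0;
        for (Integer x : xs) total += x;
        return total;
    }

    // Primitive: one array allocation; elements are stored inline.
    static long sumPrimitive(int n) {
        int[] xs = new int[n];
        for (int i = 0; i < n; i++) xs[i] = i;
        long total = 0;
        for (int x : xs) total += x;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumBoxed(10_000));     // 49995000
        System.out.println(sumPrimitive(10_000)); // 49995000
    }
}
```

Same result, wildly different garbage produced per call; which variant you end up writing is largely a function of what the language makes convenient.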


And yet Java outruns pretty much all of them, because it doesn't actually allocate everything on the heap all of the time. And you've been able to declare and work with larger structures in raw memory for ... 20 years? You mostly don't need to, but sometimes you want to win benchmark wars.

And of course it's getting value types now, so it'll win there too. As well as vectors.


Post benchmarks. No, ones where you use 20x the memory of Rust to do the same job 1% faster don't count.

You don’t have to do that with Java either.

That's a very shallow argument.

If it were shallow, it'd be easy for them to fix.

Backwards compatibility, every vim and emacs and bash enthusiast should know about it.

It's easy for the USER to fix, since there are flags available. In the day of LLMs it's also easy to find out about those flags and what they do. And if it's so important, testing shouldn't be supremely hard, either.


Do you mean backwards compatibility for things that rely on the default settings? The defaults have already changed across Java versions and also depend on the system.

Normally, when you run a non-Java binary, it uses very little memory at startup, has no fixed limit, and returns memory to the system when it frees things. Supposedly you can set JVM flags to get all of that, but performance probably suffers; otherwise they would have just made it the default. So in practice users are always setting the flags carefully.
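For reference, a small sketch of how to see which limits the JVM actually picked. The flags mentioned in the comments are real HotSpot options (`-Xmx`, `-XX:MaxRAMPercentage`, and ZGC's `-XX:+ZUncommit`/`-XX:ZUncommitDelay` for returning freed memory to the OS), though defaults vary by JDK version and system:

```java
// Run with different flags to see the defaults change, e.g.:
//   java Mem.java
//   java -Xmx256m Mem.java
//   java -XX:MaxRAMPercentage=50 Mem.java
public class Mem {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory: the ceiling the JVM will grow the heap to (-Xmx etc.)
        System.out.printf("max heap:  %d MiB%n", rt.maxMemory() / (1 << 20));
        // totalMemory: what is currently committed from the OS
        System.out.printf("committed: %d MiB%n", rt.totalMemory() / (1 << 20));
        // freeMemory: unused portion of the committed heap
        System.out.printf("free:      %d MiB%n", rt.freeMemory() / (1 << 20));
    }
}
```

The gap between "committed" and "max heap" is exactly the behavior being debated: most non-JVM programs only ever hold the former, while the JVM's willingness to grow toward (and hold onto) the latter is what the flags tune.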


> As for build systems, Maven is old and cranky but if something else replaces it, it will probably be quite similar anyway.

Bazel is the most obvious contender and very different from Maven in almost every possible way.


Not really, no. It is similar to Maven but worse, in that it requires quite a bit of time investment to configure, and it shares Gradle's drawback of being configured in a programming language instead of a configuration language.

Switching out Maven for something with a larger maintenance burden might be reasonable in a large organisation that is swimming in competent employees, but most are not.

As for an obvious contender, I'd say that would be Gradle, which is harder to get someone started with than Maven and, thanks to its DSL, lets you invent weirder footguns.


Did everyone just agree to forget about Gradle? It was everywhere not too long ago. I think I even prefer it to Maven, in a choice between a rock and a hard place type of way.

I don't think so, but the pain points have become more widely known, which has taken the edge off the hype.

What is Unsloth's business/income? They seem to publish a lot of stuff for free, with no clear product behind it.

Hey! Our primary objective for now is to provide the open source community with cool and useful tooling - we found closed source to be much more popular because of better tooling!

We have much, much more in the pipeline!!


Thanks! How do you earn, or keep yourselves afloat? I really like what you guys are doing, and similar orgs. I am personally doing the same, full-time, but I am worried about when I will run out of personal savings.

I've been wondering this since they started, mostly out of concern that they stay afloat. Since Daniel does the work of ten, their value-to-cost ratio seems world-class at the very least.

With the studio release, it seems like they could be on the path to bootstrapping a unicorn, or a 10x-corn or whatever that's called, which is super interesting. Anyway, his refusal to go into details reassures me; it sounds like things are fine and they're shipping. Vaya con Dios.


Daniel is a very impressive guy, well within the "fund the people, not the idea" realm that YC seems to operate in. They got a few bucks from YC and are probably earning from collaborations, etc. The odds of them not figuring out a business model seem slim.

https://www.ycombinator.com/companies/unsloth-ai


From comments elsewhere in this thread, it sounds like Unsloth could also be getting some decent consulting revenue from larger companies.

The opportunity here is HUUUUGGGEEEE!!!

Companies have no idea what they are doing: they know they need it, they know they want it, their engineers want it, but they don't have it in their ecosystem. That's a perfect opening for a professional-services play: we've got you covered on inference, on training and running your models, all of that; just focus on your business. Pair that with Hugging Face's storage and it's a win/win.


Investments are not income.

You didn't answer the parent question.

They don't owe anyone an answer.

But if they want to attract users, as they seem to, then answering would go a long way.

That doesn't sound reassuring?

With a team size of eight (!), I think they are not exactly bleeding money.

Worth noting that you get basic line editing for "free" from the kernel's tty subsystem even if you don't use readline.

Yes, but it is really basic. Is it much more than backspace? Most cursor-key presses are just forwarded to the program as escape sequences.


A bit of pedantry, but I don't think a traditional Unix shell (like this one) follows the REPL model; the shell is not usually the thing printing the result of evaluation. Instead, the printing happens as a side effect of the commands.

It’s a shell, not the whole thing. The whole thing is the shell+kernel+programs.

Even if you view the system as a whole, the printing is deeply intertwined with the evaluation, which is very different from a REPL, where eval returns a value and print prints it.
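The distinction can be made concrete with a toy REPL (a hypothetical sketch, in Java here rather than any particular Lisp): eval is a function that returns a value, and a separate print step renders that value. A shell has no such value; whatever appears on the terminal is written by the commands themselves as a side effect.

```java
import java.util.Scanner;

public class ToyRepl {
    // "Eval" for a toy language: sum the whitespace-separated
    // integers on a line. Crucially, it RETURNS a value and
    // performs no I/O itself.
    static long eval(String line) {
        long total = 0;
        for (String tok : line.trim().split("\\s+")) {
            if (!tok.isEmpty()) total += Long.parseLong(tok);
        }
        return total;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("> ");
        while (in.hasNextLine()) {                  // Read
            long result = eval(in.nextLine());      // Eval: returns a value
            System.out.println(result);             // Print: the loop prints it
            System.out.print("> ");                 // Loop
        }
    }
}
```

In a Unix shell, the analogue of `eval` is spawning a process, and "the result" the shell itself sees is just an exit status; the output text never passes through the shell as a value.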

I remember that the first shell programming I ever did was batch on Windows, back in the 3.11/95 days.

The first line was always to turn off echo, and I've always wondered why that was the default for batch scripts. Or maybe I'm misremembering; 30 years of separation makes it hard to recall the details.


Echo in that case prints each command line before executing it. Its Unix analog is `set -x`, not `echo`.

> the shell is not usually doing printing of the result of evaluation

I always include $? in the prompt, so I guess I can say it does print the result of the evaluation.


It prints a prompt.

That's not what "print" means in a REPL.
