Hacker News | fulafel's comments

You can do this with Docker too, without a Dockerfile or rebuilding. You can treat the container as mutable and just start/stop it, making changes manually, and take snapshots with docker commit.

You'll forfeit the benefits of a reproducible, scripted environment of course, but Docker does let you do it.
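The workflow sketched above looks roughly like this (container and image names here are just placeholders):

```shell
# Start a long-lived container and treat it as a mutable machine
docker run -d --name scratchbox ubuntu:24.04 sleep infinity

# Make changes interactively inside it
docker exec -it scratchbox bash
# ... apt-get install, edit configs, etc. ...

# Snapshot the current state of the container as a new image
docker commit scratchbox scratchbox:snapshot-1

# Stop/start without losing the container's filesystem
docker stop scratchbox
docker start scratchbox
```

Each `docker commit` gives you a restore point you can `docker run` later, at the cost of the image history recording none of how you got there.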


Seems it only supports macOS, so for practical purposes it's local-only.

Has DDR5 caught up to DDR4 latency yet? I remember it was worse, at least in the beginning. There's more bandwidth per channel, but a hardware design can always add more channels for the desired bandwidth. Not so for latency.

> add more channels

and unfortunately increase latency even more with registered DIMMs. Comparing the bandwidth increase (50 GB/s) to the stagnated latency (~80..120 ns total, under ~0.1 GB/s effective) over the last decades, I wonder whether one can still call today's RAM random-access memory (though sure, it can be accessed randomly). Similar to hard disk drives: up to 300 MB/s sequential, but less than 1 MB/s for 4 KB random reads.
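The "~0.1 GB/s" figure falls out of simple arithmetic: when every access is a dependent random read, you move one word per full round-trip latency, so effective throughput is just word size divided by latency. A quick sketch (the latency and seek-time numbers are illustrative ballpark values, not measurements):

```python
# Effective throughput under fully random, dependent access:
# one word per round-trip latency.

def random_access_bandwidth(word_bytes: float, latency_ns: float) -> float:
    """Bytes per second = word size / latency."""
    return word_bytes / (latency_ns * 1e-9)

# DRAM: ~100 ns latency, 8-byte word -> ~0.08 GB/s
print(random_access_bandwidth(8, 100) / 1e9)

# Even pulling a whole 64-byte cache line per miss is only ~0.64 GB/s
print(random_access_bandwidth(64, 100) / 1e9)

# HDD: ~10 ms per seek (10e6 ns), 4 KiB per read -> ~0.4 MB/s
print(random_access_bandwidth(4096, 10e6) / 1e6)
```

Compare either number against tens of GB/s of sequential bandwidth and "random-access memory" does start to look like a stretch.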


People have been wondering that for a while: https://news.ycombinator.com/item?id=19304281

Alert fatigue has long been identified and complained about; this is just a new kind of it. But it's hitting a different set of people.

Yes, in the headlines the agencies playing adversaries to the common folk are definitely mainly Chinese... /s

Depends on how you do the accounting: are you counting only inference costs, or are you amortizing next-gen model development costs? "Inference is profitable" is oft repeated and rarely challenged. Most subscription users are low-intensity users, after all.

Isn't this a wrongly editorialized title? "Reported by Shaheen Fazim on 2026-02-11", so more like 7-day.

It refers to how many days software is available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug

The term has long been watered down to mean any vulnerability (since it was always a zero-day at some point before the patch release, I guess is those people's logic? idk). Fear inflation and shoehorning seem to happen to any scary/scarier/scariest attack term. It might be easiest not to put too much thought into media headlines containing 0day, hacker, crypto, AI, etc. Recently saw non-remote "RCEs" and "supply chain attacks" that weren't about anyone's supply chain copied happily onto HN.

Edit: fwiw, I'm not the downvoter


Its original meaning was days since software release, without any security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier: a week after general release, three days, same day. The ultimate was 0-day software, software which was not yet available to the general public.

In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.

It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".


Wikipedia: "A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it."

This seems logical, since by the etymology of zero-day it should count from the release (= disclosure) of a vuln.


> It refers to how many days software is available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug

"Zero-day vulnerability" and "zero-day exploit" refer to the vulnerability, not the vulnerable software. Hence, by common sense, the availability refers to the vulnerability info or the exploit code.


I think the implication in this specific context is that malicious people were exploiting the vuln in the wild prior to the fix being released

Between IRC and Discord/Slack we had XMPP, which almost made it, but then Google etc. killed support for it.

Python mainly uses reference counting for garbage collection, and the cycle-breaking full-program GC can be controlled manually.

With refcounting, each "kick in" of the GC is usually a small amount of work, triggered by an object's reference count going to 0. In this program's case I'd guess you don't hear any artifacts.


This seems to conflate different things.

Being interpreted is not a problem from the predictable-behaviour point of view; you may just get less absolute performance. And with Python you can do the heavy lifting in numpy etc., which run native code. That is what is done here, see e.g. https://github.com/gpasquero/voog/blob/main/synth/dsp/envelo...
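The "heavy lifting in numpy" pattern, in toy form (this is an illustrative exponential-decay envelope, not the linked project's actual code): the per-sample Python loop pays one bytecode dispatch per sample, while the vectorized version is a single native-code operation over the whole block.

```python
import numpy as np

n = 48000        # one second at 48 kHz (illustrative numbers)
decay = 0.9999

# Pure-Python loop: one bytecode dispatch per sample
env_py = []
x = 1.0
for _ in range(n):
    env_py.append(x)
    x *= decay

# NumPy: the same envelope as one vectorized expression in native code
env_np = decay ** np.arange(n)
```

Both produce decay**i for sample i; in real DSP code the vectorized form is typically orders of magnitude faster and, just as important here, keeps the interpreter out of the per-sample path.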

Languages that have garbage collection: not going to rehash the standard back-and-forth here, suffice it to say that the devil is in the details.


I was speaking in broad generalities (and did mention Lua as a counter-example).

If you want realtime safe behavior, your first port of call is rarely going to be an interpreted language, even though, sure, it is true that some of them are or can be made safe.


There's a lot of soft-realtime (=audio/video, gaming etc) apps using interpreted languages. Besides Python and Lua, also Erlang.

They don't use python or similar languages in their realtime threads, I would wager.

Oh and of course SuperCollider.

It compiles and sends bytecode to the server, no? I'm quite sure the server at least does not run a plain interpreter, and I know for sure you build a graph there. That's why you can also use it with other languages (Saw a clojure example I think I wanted to give a try)

Generating audio is far from being an "intensive" operation these days.

It has nothing to do with cpu cycles, and everything to do with realtime safety. You must be able to guarantee that nothing will block the realtime audio thread(s), and that's hard to do in a variety of "modern" languages (because they are not designed for this).

I know you are an audio guy, I also wrote low-latency audio software. I was just saying that setting HIGH_PRIORITY on the audio thread and its feeding threads is enough; you don't need QNX. Python has the GIL problem, but that is another story.

For a simple audio app like this synth on a modern CPU, it's kind of trivial to do in any language if the buffer is >40 ms. I'm talking about managing the buffers; running the synth/filter math in pure Python is still probably not doable.
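The realtime-safety discipline being argued about boils down to: no allocation, no locks, no blocking inside the audio callback; everything is preallocated outside it. Python can't fully honor this (float objects are heap-allocated and the GIL can stall you), but a sketch of the discipline looks like this (the callback name and 440 Hz tone are just for illustration):

```python
import math

SR = 48000                       # sample rate
BLOCK = 256                      # frames per callback
phase_inc = 2 * math.pi * 440 / SR

# Preallocate the output buffer outside the callback.
out = [0.0] * BLOCK
phase = 0.0

def audio_callback():
    """Fill the preallocated buffer in place with a sine tone.

    The realtime rule: nothing here should allocate, lock, or block.
    (In CPython this is only approximate; boxing floats still allocates.)
    """
    global phase
    for i in range(BLOCK):
        out[i] = math.sin(phase)
        phase += phase_inc
    return out
```

At 48 kHz a 256-frame block gives the callback a ~5.3 ms deadline; one missed deadline is an audible glitch, which is why "usually fast" GC pauses are not good enough in that thread.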


Sure, but 40 ms for a synth intended to be played is generally the kiss of death these days, unless your target audience are all pipe organ players ...
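The gap in buffer terms (sample rate and latency targets here are typical values, not measurements from this synth): 40 ms is nearly two thousand frames per block, while playable latencies are an order of magnitude smaller.

```python
def buffer_frames(sample_rate: int, latency_s: float) -> int:
    """Frames needed to cover the given latency at the given sample rate."""
    return int(sample_rate * latency_s)

print(buffer_frames(48000, 0.040))  # 1920 frames for 40 ms: noticeable lag
print(buffer_frames(48000, 0.005))  # 240 frames for ~5 ms: playable
```

Roughly 10 ms round-trip is where most players stop noticing the delay between keypress and sound, which is why the 40 ms figure above gets called the kiss of death.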

