You can do this with Docker too, without a Dockerfile or rebuilding. You can treat the container as mutable, start/stop it, make changes manually, and take snapshots with docker commit.
You'll forfeit the benefits of a reproducible, scripted environment, of course, but Docker does let you do it.
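A minimal sketch of that workflow (the image, container, and snapshot names here are placeholders):

```shell
# Start a container from a base image and treat it as a mutable machine
docker run -it --name sandbox ubuntu:24.04 bash
# ...install packages, edit configs by hand inside the container...

# Snapshot the container's current filesystem as a new image
docker commit sandbox my-snapshot:v1

# Later, restart the same container to keep mutating it,
# or start fresh from the snapshot
docker start -ai sandbox
docker run -it my-snapshot:v1 bash
```

Note that docker commit captures the filesystem but not volumes or the data in them, so state kept on mounts is not snapshotted.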
Has DDR5 caught up to DDR4 latency yet? I remember it was worse, at least in the beginning. There's more bandwidth per channel, but a hardware design can always add more channels for the desired BW; not so for latency.
and registered DIMMs unfortunately increase latency even more. Comparing the bandwidth growth (~50 GB/s) to the stagnated latency (~80..120 ns total, i.e. under ~0.1 GB/s for dependent random reads) over the last decades, I wonder whether one can still call today's RAM random-access memory (though sure, it can be accessed randomly). Similar to hard disk drives: up to 300 MB/s sequential, but less than 1 MB/s for 4 KB random reads.
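A back-of-the-envelope check of that random-access figure, assuming one fully dependent 8-byte read per ~100 ns of memory latency (the exact numbers are illustrative):

```python
# Effective bandwidth of fully dependent random reads:
# one word fetched per full round trip to DRAM.
bytes_per_access = 8           # a single 64-bit word
latency_s = 100e-9             # ~100 ns load-to-use latency
gb_per_s = bytes_per_access / latency_s / 1e9
print(f"{gb_per_s:.2f} GB/s")  # 0.08 GB/s, i.e. under ~0.1 GB/s
```

In practice a CPU pulls in a whole 64-byte cache line per miss, so effective throughput is higher if neighboring bytes get used; the pessimistic figure above is for truly random, pointer-chasing access.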
Depends on how you do the accounting: are you counting only inference costs, or are you also amortizing next-gen model development costs? "Inference is profitable" is oft repeated and rarely challenged. Most subscription users are low-intensity users, after all.
It refers to how many days the software has been available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug.
The term has long been watered down to mean any vulnerability (since every vulnerability was a zero-day at some point before the patch release, I guess that's those people's logic? idk). Fear inflation and shoehorning seem to happen to every scary/scarier/scariest attack term. It might be easiest not to put too much thought into media headlines containing 0day, hacker, crypto, AI, etc. Recently I saw non-remote "RCEs" and "supply chain attacks" that weren't about anyone's supply chain copied happily onto HN.
Its original meaning was days since software release, with no security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier: a week after general release, three days, same day. The ultimate was 0-day software, software which was not yet available to the general public.
In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.
It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".
Wikipedia: "A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it."
This seems logical, since by the etymology of zero-day it should count from the release (= disclosure) of a vuln.
> It refers to your many days software is available for, with zero implying it is not yet out so you couldn't have installed a new version and that's what makes it a risky bug
"Zero-day vulnerability" or "zero-day exploit" refers to the vulnerability, not the vulnerable software. Hence, by common sense, the availability refers to the vulnerability info or the exploit code.
Python mainly uses reference counting for garbage collection, and the cycle-breaking full-program GC can be manually controlled.
For RC, each "kick in" of the collector is usually a small amount of work, triggered by an object's reference count dropping to 0. In this program's case I'd guess you won't hear any artifacts.
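A small sketch of that manual control, using the standard-library gc module (CPython semantics assumed):

```python
import gc

gc.disable()  # stop the automatic cycle collector; refcounting still
              # frees acyclic objects immediately when their count hits 0

class Node:
    def __init__(self):
        self.ref = None

a, b = Node(), Node()
a.ref, b.ref = b, a      # reference cycle: refcounting alone can't free this
del a, b                 # counts never reach 0, objects linger

collected = gc.collect() # run the cycle collector at a time of our choosing
print(collected)         # reports the unreachable objects it reclaimed
gc.enable()
```

An audio app could keep the cycle collector disabled on the realtime path and call gc.collect() only between buffers, or from a non-realtime thread, so the pause never lands inside the audio callback.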
Being interpreted is not a problem from the predictable-behaviour point of view; you may just get less absolute performance. Though with Python you can do the heavy lifting in numpy etc., which run native code. And this is what is done here, see e.g. https://github.com/gpasquero/voog/blob/main/synth/dsp/envelo...
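To illustrate the pattern (this is not the linked code; the function name and parameters are made up): the per-sample work happens inside one vectorized numpy call, so the Python interpreter runs only once per audio block instead of once per sample.

```python
import numpy as np

def decay_envelope(n_samples, sr=44100, tau=0.2):
    """Exponential decay envelope for one audio block, computed
    entirely in numpy's native code (no per-sample Python loop)."""
    t = np.arange(n_samples) / sr   # sample times in seconds
    return np.exp(-t / tau)         # amplitude falls off with time constant tau

env = decay_envelope(512)           # one block's worth of envelope values
```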
Languages that have garbage collection: not going to rehash the standard back-and-forth here, suffice it to say that the devil is in the details.
I was speaking in broad generalities (and did mention Lua as a counter-example).
If you want realtime safe behavior, your first port of call is rarely going to be an interpreted language, even though, sure, it is true that some of them are or can be made safe.
It compiles the code and sends bytecode to the server, no? I'm quite sure the server at least does not run a plain interpreter, and I know for sure you build a graph there. That's why you can also use it with other languages (I saw a Clojure example I think I wanted to try).
It has nothing to do with CPU cycles, and everything to do with realtime safety. You must be able to guarantee that nothing will block the realtime audio thread(s), and that's hard to do in a variety of "modern" languages (because they are not designed for this).
I know you are an audio guy; I also wrote low-latency audio software. I was just saying that setting HIGH_PRIORITY on the audio thread and its feeding threads is enough, you don't need QNX. Python has the GIL problem, but that is another story.
For a simple audio app like this synth on a modern CPU, it's fairly trivial to do in any language if the buffer is >40 ms; I'm talking about managing the buffers. Running the synth/filter math in pure Python is still probably not doable.
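For scale, here's what a 40 ms buffer means at a 44.1 kHz sample rate (a common default; the numbers are illustrative):

```python
sample_rate = 44100                        # Hz, typical audio sample rate
buffer_ms = 40
frames = sample_rate * buffer_ms // 1000   # frames per buffer
print(frames)                              # 1764 frames: the callback has a
                                           # full 40 ms deadline to fill them
```

That's a very generous deadline; pro-audio setups run 64-256 frame buffers (~1.5-6 ms), which is where interpreter pauses start to matter.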