I'm not sure about "moral" per se, but LLVM is a genuinely painful dependency in practice: it doesn't keep API compatibility between versions, release builds are rapidly approaching a gigabyte, the system hands you some franken-version, and you won't have a good time if you end up linking/loading more than one version for whatever reason.
IIRC there was also some case where an LLVM bump caused tcmalloc to explode inside Wine.
Vulkan capture support on Windows was introduced in v25 (on Linux you need a plugin). There is no Vulkan renderer support, which the post clearly stated...
With SimplifyCFG commonly turning this into an assumption (under ReleaseFast), this is UB by LLVM's definition unless you can prove that the assumption always holds.
I suppose that's par for the course with an unchecked release build, but at the same time memory corruption is much easier to debug than "the compiler changed the logic of my program in strange/incorrect ways".
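To make that concrete, here's a minimal C sketch of the failure mode, using Clang/GCC's __builtin_unreachable as a stand-in for what an unchecked ReleaseFast build effectively tells the optimizer:

    #include <stddef.h>

    // The optimizer is told idx >= len can never happen, so the bounds
    // check is compiled away and everything downstream assumes it.
    int get(const int *buf, size_t len, size_t idx) {
        if (idx >= len)
            __builtin_unreachable(); // "this can't happen" -- UB if it does
        return buf[idx];             // no check is emitted; an out-of-range
                                     // idx reads arbitrary memory or worse
    }

Once the hint is wrong there's no crash site to point at: the check simply never existed in the emitted code.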
While no compiler is perfect (e.g., pointer provenance), one could just as easily argue that Clang has higher quality: most modern C/C++ tooling is built on it, sanitizers are brought up in Clang first and sometimes ported to GCC later, and all of the modern safety work happens in Clang (-fbounds-safety, the various thread-safety approaches, lifetime analysis, Sean Baxter's borrow-checked C++, Fil-C). The Clang static analyzer has also been used in production for over a decade, whereas GCC's -fanalyzer is still experimental and limited to C, …
I have the feeling that the bugs that aren't being fixed go unfixed either because the report is unactionable ("3-year-old Ubuntu franken-clang is slower than 3-year-old franken-gcc") or because the problem is highly non-trivial (e.g., the recent discussion about enabling dependence analysis by default, the aforementioned pointer-provenance issues stemming from C, ABI/lowering warts around large _BitInt).
Pony is fun and I love the actor paradigm, but it definitely feels like the community lost a lot of energy when Sylvan Clebsch stopped working on it (to work on a similar project at Microsoft).
It's a lot faster to add a log point in a debugger than to add a print statement and recompile. Especially with cargo check in the loop, I really don't see the point of non-debuggable builds (outside of embedded, where the size of debuginfo already makes that a non-starter).
Yes, but we're talking about time spent with "add a print statement and recompile" vs time saved by not including debuginfo on every other build. You have to do that comparison yourself.
2. Conditionally compiling some code like logging (not sure if it matters for typical Rust projects, but it's standard practice in embedded C).
3. Conditionally compiling assertions to catch more bugs (a sketch covering both follows).
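A minimal embedded-C sketch of both items, using the standard NDEBUG convention for assert and a hypothetical project-defined DEBUG_LOG switch:

    #include <assert.h>
    #include <stdio.h>

    // Item 2: logging compiled out entirely in release builds.
    // DEBUG_LOG is a made-up project switch, not a standard macro.
    #ifdef DEBUG_LOG
    #define LOG(...) printf(__VA_ARGS__)
    #else
    #define LOG(...) ((void)0)   // no code, no format strings in the binary
    #endif

    // Item 3: assert() is already conditional -- defining NDEBUG removes
    // it, so debug builds catch more bugs at zero release-build cost.
    int divide(int a, int b) {
        assert(b != 0);          // active only when NDEBUG is not defined
        LOG("divide(%d, %d)\n", a, b);
        return a / b;
    }

Building with -DDEBUG_LOG (and without -DNDEBUG) gives the chatty, checked build; the release configuration compiles both down to nothing.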
I use logs, because the debugger breaks the hardware. I very rarely need to reach for a debugger; even when a hard exception occurs, enough info is usually logged to find the root cause of the bug.
On embedded, debuggers almost never work until you get to really expensive ones.
In addition, debuggers tend to obscure the failure: they turn on all the hardware, which tends to make bugs go away if they're related to power modes.
One of my "best" debuggers for embedded was putting an interactive interpreter over a serial interface on an interrupt so I could query the state of things when a device woke up, even if it was hung--effectively a run-time-injectable "printf".
Crude, but it could track down rare, obscure bugs, because the device would stay in the failure mode.
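A stripped-down sketch of that idea, with uart_getc/uart_puts and the ISR hookup standing in for the vendor-specific pieces (all names are made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Hypothetical vendor hooks -- every platform spells these differently.
    extern int  uart_getc(void);          // returns the received byte
    extern void uart_puts(const char *s);

    static char line[32];
    static unsigned pos;

    // Called from the UART RX interrupt: accumulate a line, then handle
    // "p <hex-addr>" by printing the 32-bit word at that address.
    void uart_rx_isr(void) {
        int c = uart_getc();
        if (c != '\r' && c != '\n') {
            if (pos < sizeof line - 1) line[pos++] = (char)c;
            return;
        }
        line[pos] = '\0';
        pos = 0;
        if (line[0] == 'p') {
            uintptr_t addr = strtoul(&line[1], NULL, 16);
            char out[24];
            snprintf(out, sizeof out, " %08lx\r\n",
                     (unsigned long)*(volatile uint32_t *)addr);
            uart_puts(out);
        }
    }

Because it runs from the interrupt, you can still inspect memory while the main loop is wedged, which is the whole point.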
The biggest problem was maintaining the database of code so that we knew exactly what build was on a device. We had to hash and track the universe.
I've worked with many Cortex-M3s and M4s and some Cypress parts, and the JTAG debuggers always worked fine as far as I recall. Some vendors liked to force you to buy really expensive ancillary hardware, but a) there was plenty of OSS that worked fairly well, and b) you could pick which vendor you went with.
All of those chips you mentioned will turn on all the units at full power when you connect a debugger to them.
And the OSS stuff never works correctly. I wind up debugging the OSS stuff more than my own hardware. And I've used a LOT of OSS (to the point that I wrote software to use a BeagleBone as my SWD debugger to work around all the idiocies--both commercial and OSS).
Depends what you're working on. Stopped in an unfortunate place? That one element didn't get turned off and burned out. Or the motor didn't stop. Or the crucial interrupts got missed and your state is now reset. Or...
Are there debugging tools specifically for situations like that?
Do you just write code to test manually?
How do you ensure dev builds don't break stuff like that even without considering debugging?
The most useful tool is a full tracing system (basically a stream of executed instructions you can use to trace the execution of the code without interrupting it), but unfortunately these are quite expensive and proprietary, and they require extra connections on the systems that support them, so they're not particularly common. Most people just use some kind of home-grown logging/tracing system that tracks the particular state they're interested in, possibly logged into a ring buffer that can be dumped when triggered by some event.
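For reference, a minimal sketch of such a home-grown ring-buffer trace (all names hypothetical; a real one needs platform-specific atomicity if multiple contexts write):

    #include <stdint.h>
    #include <stdio.h>

    // A power-of-two ring of (id, value) records, cheap enough to
    // leave enabled in release builds. Single writer assumed.
    #define TRACE_SLOTS 256   // must be a power of two

    static struct { uint16_t id; uint32_t value; } trace_buf[TRACE_SLOTS];
    static volatile uint32_t trace_head;

    static inline void trace(uint16_t id, uint32_t value) {
        uint32_t i = trace_head++ & (TRACE_SLOTS - 1);
        trace_buf[i].id = id;
        trace_buf[i].value = value;
    }

    // Dump oldest-to-newest when some trigger fires (failed assert,
    // watchdog warning, hard-fault handler, operator command).
    void trace_dump(void) {
        for (uint32_t n = 0; n < TRACE_SLOTS; n++) {
            uint32_t i = (trace_head + n) & (TRACE_SLOTS - 1);
            printf("%3u: id=%u value=%lu\n", (unsigned)n,
                   (unsigned)trace_buf[i].id,
                   (unsigned long)trace_buf[i].value);
        }
    }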
You ensure dev builds don't break stuff like that with realtime programming techniques. Dev tools exist, and they're usually some combination of platform-specific, expensive, buggy, and fragile.
printf and friends are fantastic when applicable. Sometimes, though, even an async print is too costly, or building in any mode except stripped release is impossible, which usually leads to !fun!.
I mean that stopping execution will often break the software logic: the BLE connection will time out, an SPI chip will stop communicating for lack of commands, the watchdog will reboot the chip, etc. After that the whole program no longer behaves as expected, so further debugging makes no sense. Sorry for the miscommunication; I did not mean that the hardware physically breaks. It might be possible to solve some of those issues, but generally printing is enough, at least in my experience.
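The watchdog case is the easiest to picture: halting the core in a debugger stops the refresh loop, but (unless the chip pauses the watchdog in debug mode) the counter keeps running and resets the part mid-session. A sketch, with wdt_refresh and handle_events standing in for whatever the vendor and application provide:

    // Hypothetical vendor watchdog-refresh call. When a debugger halts
    // the core, this loop stops running; the watchdog then expires and
    // resets the chip, destroying the state you stopped to inspect.
    extern void wdt_refresh(void);
    extern void handle_events(void);

    void main_loop(void) {
        for (;;) {
            handle_events();
            wdt_refresh();  // must run at least every N ms
        }
    }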
The fun thing about branch predictors is that they tell you where the next branch is (among other things, like the direction of the branch). Since hardware is built out of finite wires, the prediction will saturate at some maximum distance (somewhere within the next few cache lines).
How this affects decode clusters is left as an exercise for the reader.