
You might be thinking of ar, the classic Unix ARchive that is used for static libraries?

The format used by `ar` is quite simple, somewhat like tar: files glued together, with a short header in between and no index.

Early Unix eventually introduced a program called `ranlib` that generates and appends an index (containing the extracted symbols) to libraries, to speed up linking. The index is simply embedded as a file with a special name.

The GNU version of `ar` as well as some later Unix descendants support doing that directly instead.
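For illustration: every member in an `ar` archive is preceded by a fixed 60-byte header of space-padded ASCII text, roughly like this (the layout from `ar.h`; the ranlib/GNU symbol index is just another member with a special name):

    /* The classic ar member header, 60 bytes of printable ASCII.
       The file starts with the 8-byte magic "!<arch>\n", then members
       follow back-to-back, each padded to an even offset with '\n'. */
    #define ARMAG   "!<arch>\n"  /* archive magic */
    #define ARFMAG  "`\n"        /* header terminator, sanity check */

    struct ar_hdr {
        char ar_name[16];  /* member name */
        char ar_date[12];  /* modification time, decimal seconds */
        char ar_uid[6];    /* owner uid, decimal */
        char ar_gid[6];    /* owner gid, decimal */
        char ar_mode[8];   /* file mode, octal */
        char ar_size[10];  /* member size in bytes, decimal */
        char ar_fmag[2];   /* ARFMAG */
    };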


Yes, one can only hope Ken Shirriff eventually happens to come across one of those models, but I guess they are probably very rare these days.

Besides the multiplication bug, the 386 had quite a number of teething problems[1], including occasionally broken addressing modes, unrecoverable exceptions, virtual address resolution bugs around the 2G mark, etc...

A while ago, there was also an article posted here that analyzed the inner workings of the Windows/386 loader[2]. Interestingly, Windows simply checks for a pair of instructions (XBTS/IBTS) that early 386 steppings had but which were later removed, raising an invalid opcode exception instead.

Raymond Chen also wrote a blog post describing a few workarounds that Windows 95 had implemented[3].

[1] https://www.pcjs.org/documents/manuals/intel/80386/

[2] https://virtuallyfun.com/2025/09/06/unauthorized-windows-386...

[3] https://devblogs.microsoft.com/oldnewthing/20110112-00/?p=11...


From what I've read, the 386 multiplication bug was a semi-analog problem, so the fix was probably making a transistor larger. As a result, it would probably be hard to find the fix on the die and wouldn't be as interesting as, say, the Pentium division bug.

This reminds me of a problem from undergrad computer architecture: how can you validate the multiplier without checking all N squared possible input pairs? (Which would take forever.)

I later read a TI DRAM report about which bit pairs to exercise, based on proximity in the silicon layout, to verify the part. I suppose something like that could be used to stress-test the ALU.
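To sketch that idea (a toy example, not from the course; all names made up): instead of the full cross product, exercise structured operands like walking ones and adjacent bit pairs against a trusted, slow reference implementation:

    /* multest.c - toy sketch: test a multiplier with walking-ones and
       adjacent-bit-pair operands (1024 cases) instead of all 2^64
       input pairs, comparing against a shift-and-add reference. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t ref_mul(uint32_t a, uint32_t b)
    {
        uint64_t acc = 0;
        for (int i = 0; i < 32; i++)
            if (b & (1u << i))
                acc += (uint64_t)a << i;
        return acc;
    }

    int main(void)
    {
        for (int i = 0; i < 32; i++) {
            for (int j = 0; j < 32; j++) {
                uint32_t a = 1u << i;                            /* walking one */
                uint32_t b = (1u << j) | (1u << ((j + 1) % 32)); /* adjacent pair */
                uint64_t got = (uint64_t)a * b;                  /* unit under test */

                if (got != ref_mul(a, b))
                    printf("mismatch: %08x * %08x\n", a, b);
            }
        }
        return 0;
    }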


Fun little tidbit: The 0x40-0x4f range used for the REX prefix actually clashes with the single-byte encodings for increment/decrement.

When AMD designed the 64 bit extension, they had run out of available single-byte opcodes to use as a prefix and decided to re-use those. The INC/DEC instructions are still available in 64 bit mode, but not in their single-byte encodings.


Which clever code can utilize to determine which mode it's running in, branching appropriately depending on whether the inc/dec was executed or not.
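A minimal sketch of that trick in C (assuming a POSIX system that still allows writable+executable mappings): the stub below returns 1 when 0x40 decodes as `inc eax` and 0 when it is swallowed as a REX prefix:

    /* Byte stub: 31 C0 (xor eax,eax), 40 (inc eax OR REX), 90 (nop), C3 (ret).
       32-bit decode: eax is incremented        -> returns 1
       64-bit decode: 0x40 prefixes the nop     -> returns 0 */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        static const unsigned char stub[] = { 0x31, 0xC0, 0x40, 0x90, 0xC3 };
        void *mem = mmap(NULL, sizeof(stub), PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (mem == MAP_FAILED)
            return 1;

        memcpy(mem, stub, sizeof(stub));
        printf("running in %s mode\n",
               ((int (*)(void))mem)() ? "32 bit" : "64 bit");
        return 0;
    }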

It is still crazy to me that Intel thought it was worth having two one-byte instructions for something that could just be encoded as a single add-with-immediate instruction (which they had anyway).

If you have JavaScript enabled, that is. JWZ at least does the redirect on the server side.

The following is pulled in from `https://soc.me/assets/js/turnBack.js`:

    const undesirables = [
      "news.ycombinator.com/",
      // "reddit.com/", // disable temporaily
      "lobste.rs/"
    ] ;

    if (undesirables.find(site => document.referrer.includes(site))) {
      window.location.replace(document.referrer);
    }
I wonder why Reddit is "temporarily not undesirable".

Git history doesn't explain it, unfortunately:

https://github.com/soc/soc.me/blame/main/assets/js/turnBack....

Although, when we inspect the author's profile on lobste.rs, we'll see that he's banned:

https://lobste.rs/~soc [Banned 4 years ago by pushcx: Troll.]

Maybe he's banned from HN as well, and this 'undesirables' list is a way of taking some kind of revenge.


Last comment was just over 5 years ago.

https://news.ycombinator.com/user?id=soc


The author has said that development moved to Codeberg, but the GitHub version was good enough for the "turnBack.js" analysis.

Uh, before I wrote my sibling comment I read 'comment' as 'commit', and because of that I somehow assumed it was a GitHub link... not sure how that happened.

Anyway, the user named 'soc' on HN has "listbite" in the description. I saw this profile, but I think it's not the same guy. But I also wasn't sure, so I didn't paste the link to HN at all.


Why are they undesirable, though?

> Thought for 37s

> ...

> Ah - that makes sense, that's why it's on fire

oh how very relatable, I've had similar moments.

I knew about SEDs (smoke emitting diodes) and LERs (light emitting resistors), but what do you call the inductor version?


This reminds me of the video:

"Who did your electricals?"

"My nephew Thomas!"

"Oh, so when did his house burn down?"

"Last ye.... wait how do you know his house burnt down?"


Smoke Emitting Choke?

Would that be Rapidly Decompressing Capacitors?

...only know what an inductor is from watching a video on the youtubes where they were talking about using them on the suspensions of F1 cars and explained their relationship to electronic circuits; forget what their actual name is.


> how to exceed the normal limits on Linux kernel modules.

Uh, what limits? I'm not aware of anything that would stop your module, once probed, from reaching around the back of the kernel and futzing around in the internals of another driver/device in a completely unrelated subsystem, or in subsystem internals generally. SoC/SoM vendors love to pull that kind of crap in their BSPs.

> hooks the VFS to allow dynamically remapping file paths on a per process basis

Instead of messing with kernel VFS internals, you could try:

- patch the offending application or package (ideally making the path configurable and contributing that back upstream)

- run the application in a mount namespace and bind-mount something over the path

- use LD_PRELOAD to wrap fopen/open/openat (I'm pretty sure ready-made solutions for this already exist; a rough sketch below)
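Here is that rough sketch, with a made-up path mapping (real-world shims also wrap fopen/openat/stat and friends):

    /* wrap_open.c - hedged sketch of an LD_PRELOAD path-remapping shim. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <string.h>
    #include <sys/types.h>

    static const char *remap(const char *path)
    {
        /* hypothetical example mapping */
        if (strcmp(path, "/opt/app/data") == 0)
            return "/home/user/appdata";
        return path;
    }

    int open(const char *path, int flags, ...)
    {
        int (*real_open)(const char *, int, ...) =
            (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
        mode_t mode = 0;

        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }
        return real_open(remap(path), flags, mode);
    }

Build it with `gcc -shared -fPIC wrap_open.c -o wrap_open.so -ldl` and run the target program with `LD_PRELOAD=./wrap_open.so` set.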


> use LD_PRELOAD to wrap fopen/open/openat (I'm pretty sure ready-made solutions for this already exist)

I think I would literally recompile libc to patch fopen/open/openat long before I would even begin to consider writing a kernel module to mess with filesystem paths on a per-process basis.

I feel like if you find yourself seriously considering writing a kernel module, then you are either contributing to kernel development, or have embarked on an adventure specifically to learn about kernel internals, or have taken a very wrong turn.


LD_PRELOAD has nothing to do with the kernel, it's entirely resolved in user space; in this context, it would be used to replace libc functions.

> I think I would literally recompile libc to patch fopen/open/openat

That's literally the functionality that LD_PRELOAD provides without having to recompile libc.


Yes, I am aware. I was suggesting that even going to the ridiculous length of patching and replacing libc system wide would likely make more sense than authoring a custom kernel module to accomplish most tasks for which such options are applicable.

Statically compiled binaries don't use a dynamic libc: Go binaries are one example, anything built with Rust and musl is another. And reliably injecting environment variables into Nix is, well... not reliable. It also links its own hashed libc paths, which you can't predict and which shouldn't be different for any process that isn't trying to establish TLS connections.

It's not like I didn't try this stuff.


You can hook the system call to open a file regardless of libc use. If for some strange reason you really wanted to patch libc and the program you're using statically links it (e.g. musl), that isn't an issue: just patch the relevant libc implementation and recompile. But more generally, if you have access to the source code, then why would you not directly patch the program in question instead of resorting to these sorts of shenanigans?

Seriously, you're doing it wrong. Just hook the relevant system call and be done with it. Your use case is literally one of the first eBPF tutorials that comes up when looking for information about modifying system call arguments. https://github.com/eunomia-bpf/bpf-developer-tutorial/tree/m...


A lovely video from a Shenzhen factory mass-producing disposable vapes, in case someone's interested: https://www.youtube.com/watch?v=WohEiRvn2Dg

Most likely a promotional video, from the looks of it. I myself stumbled over it about a year ago, when someone posted it on an IRC channel.


lol, at 0:15 someone is literally testing the vapes with their mouth. I hope they don't do that all day long

Later at 6:45 they show more people testing them


It’s hard to know for sure what’s acceptable when it comes to working conditions in China. The information we get is incredibly limited. Most of what makes it through is propaganda.

That said, it wouldn’t surprise me if he does it all day long, 6 days per week.


They are, there's a video on YouTube you can find where they interview someone with that job and they test 10,000 a day. Then they mention that they go home and vape some more

That’s much less automated than I would have thought. Also, the dude vape-testing the sticks… I don’t think they’re aware that they’re probably doing themselves more harm than good.

Not great from a hygiene perspective given they never show it being sterilised after the manual check.

It's not that surprising that a company that sells these awful gadgets to people who don't really care about their own health would behave in such a manner.

The icon is supposed to represent one of those waving cat figurines: https://en.wikipedia.org/wiki/Maneki-neko

There is a long tradition of placing those visibly on the podium. As the story goes, the idea is that you can immediately see if the video stream freezes up (because the cat in the video suddenly stops waving). You wouldn't immediately catch that in between talks (when you have some time to fix the issue) if the camera were just pointed at an empty stage with no movement. I think at 30C3 or so, I saw one that was placed so that it would repeatedly knock against the microphone as well.

Anyway, the waving cat has become a bit of a meme in itself and a mascot of the VOC, hence also the (animated) icon in the video player.


Thank you both!


> ... a partial download would be totally useless ...

No, not totally. The directory at the end of the archive points backwards to the local headers, which in turn include all the necessary information, e.g. the compressed size inside the archive, the compression method, the filename and even a checksum.

If the archive isn't some recursive/polyglot nonsense as in the article, it's essentially just a tightly packed list of compressed blobs, each with a neat local header in front (that even includes a magic number!); the directory at the end is really just for quick access.

If your extraction program supports it (or you are sufficiently motivated to cobble together a small C program with zlib...), you can salvage what you have by linearly scanning and extracting the archive, somewhat like a fancy tarball.
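For the sufficiently motivated, a minimal sketch of such a linear scan (it only lists the entries; actual extraction would hand each payload to zlib's inflate()). It assumes the common case: no data descriptors (general purpose flag bit 3) and no ZIP64:

    /* zipscan.c - walk a ZIP file linearly via its local headers. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static uint16_t rd16(const unsigned char *p) { return (uint16_t)(p[0] | (p[1] << 8)); }
    static uint32_t rd32(const unsigned char *p) { return rd16(p) | ((uint32_t)rd16(p + 2) << 16); }

    int main(int argc, char **argv)
    {
        unsigned char hdr[30]; /* fixed part of the local file header */
        FILE *fp;

        if (argc < 2 || !(fp = fopen(argv[1], "rb")))
            return 1;

        while (fread(hdr, 1, sizeof(hdr), fp) == sizeof(hdr)) {
            if (memcmp(hdr, "PK\x03\x04", 4) != 0)
                break; /* hit the central directory (or trailing junk) */

            uint16_t method   = rd16(hdr + 8);  /* 0 = stored, 8 = deflate */
            uint32_t csize    = rd32(hdr + 18);
            uint16_t namelen  = rd16(hdr + 26);
            uint16_t extralen = rd16(hdr + 28);

            char name[256];
            uint16_t n = namelen < sizeof(name) - 1 ? namelen : sizeof(name) - 1;
            fread(name, 1, n, fp);
            name[n] = '\0';
            printf("%-40s method=%u csize=%u\n", name, method, csize);

            /* skip rest of name, extra field and the compressed payload;
               this is where you would feed the payload to inflate() */
            fseek(fp, (long)(namelen - n) + extralen + (long)csize, SEEK_CUR);
        }
        fclose(fp);
        return 0;
    }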


At work, our daily build (actually 4x per day) is a handful of zip files totaling some 7GB. The script to get the build would copy the archives over the network, then decompress them into your install directory.

This worked great on campus, but when everyone went remote during COVID it didn't anymore. It went from three minutes to more like twenty.

However: most files change only rarely. I don't need all the files, just the ones that differ. So I wrote a scanner thing which compares the size and checksum stored in the zip for each file against the local copy. If they're the same, we skip it; otherwise, we decompress it out of the zip file. This cut the time to get the daily build from 20 minutes to 4 minutes.

Obviously this isn't resilient against an attacker (crc32 is not secure), but as an internal tool it's awesome.
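The core of that check is tiny; a hedged sketch using zlib's crc32() (the function name and calling convention are made up, and the expected size/CRC would come from the zip's central directory):

    /* Return nonzero if the local file already matches the archived
       entry (same byte count and same CRC-32). */
    #include <stdint.h>
    #include <stdio.h>
    #include <zlib.h>

    static int file_matches_entry(const char *path, uint32_t want_crc, long want_size)
    {
        unsigned char buf[1 << 16];
        uLong crc = crc32(0L, Z_NULL, 0);
        long total = 0;
        size_t n;
        FILE *fp = fopen(path, "rb");

        if (!fp)
            return 0; /* missing locally, needs extracting */

        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0) {
            crc = crc32(crc, buf, (uInt)n);
            total += (long)n;
        }
        fclose(fp);
        return total == want_size && (uint32_t)crc == want_crc;
    }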


How would this have compared to using rsync?


Not as much geek cred for using an off the shelf solution? ;)


XPS (Microsoft's alternative to PDF) supported this. XPS files were ZIP files under the hood and were handled directly by some printers. The problem was that the printer never had enough memory to hold a large file, so you had to structure the document in a way that it could be read a page at a time from the start.


> the directory at the end is really just for quick access.

No, its purpose was to allow multi-floppy-disk archives. You would insert the last disk, then the other ones, one by one…


That literally is quick access: it does the same thing in both cases, getting rid of the linear scan and of having to plow through data unnecessarily.

If the archive is on a hard disk, the program reads the directory at the end and then seeks to the local header, rather than doing a linear scan. The same goes for the floppy motor, if it is a small archive on a single floppy.

If you have multiple floppies, you insert the last one, the program reads the directory and then tells you which floppy to insert, rather than having to go through them one by one, which, you know, would be slower.

In one case a hard disk arm, or the floppy motor, does the seeking; in the other case, your hands do. But it's still the same algorithm, doing the same thing, for the same reason.


> rodent controlled surveillance drones

See also: WWII era, pigeon-controlled guided bomb: https://en.wikipedia.org/wiki/Project_Pigeon

