
I believe it is. Just tested it. You can make the link "C:\windows\system32\cmd.exe" and clicking it will launch the Command Prompt. I noticed you can't make it "C:\windows\system32\cmd.exe /c some-nefarious-thing"; it doesn't like the space. Exploiting may require you to ship both the malicious EXE and the MD, then trick the user into clicking the link inside the MD. But then you could have just tricked them into directly clicking the EXE.
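
Concretely (the link text is just for illustration), the first form below launches cmd.exe when clicked, and the second is rejected because of the space:

    [Click me](C:\windows\system32\cmd.exe)
    [Click me](C:\windows\system32\cmd.exe /c some-nefarious-thing)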

>Exploiting may require you to ship both the malicious EXE and the MD, then trick the user into clicking the link inside the MD. But then you could have just tricked them into directly clicking the EXE.

1. You can use UNC paths to access remote servers via SMB

2. Even if it's local, it's still more useful than you make it out to be. For instance, suppose you downloaded a .zip file of some GitHub project. The .zip file contains virus.exe buried in some subfolder, and there's a README.md at the root. You open the README.md and see a link (e.g. "this project requires [some-other-project](subfolder\virus.exe)"). You click on that and virus.exe gets executed.


> 1. You can use UNC paths to access remote servers via SMB

Relevant article from The Old New Thing: https://devblogs.microsoft.com/oldnewthing/20060509-30/?p=31...

Programs can become network-facing without realizing it (this is true on most mainstream operating systems). I've found that Windows programs in particular tend to assume that I/O completes "instantly" (even though async I/O has been available on Windows for a very long time) and don't have a good UX for cancelling long-running I/O operations.


Definitely; I didn't mean to underplay it. Here's a fun one:

    [Free AI credits](C:\windows\system32\logoff.exe)
It works. This is a real exploit that you could do things with.

What if the space is URL-encoded as %20?

That wouldn't work because Windows doesn't understand URL-encoded sequences.

I won't be paying extra to use this, but Claude Code's feature-dev plugin is so slow that even when running two concurrent Claudes on two different tasks, I'm twiddling my thumbs some of the time. I'm not fast and I don't have tight deadlines, but nonetheless feature-dev is really slow. It would be better if it were fast enough that I wouldn't have time to switch off to a second task and could stick with the one until completion. The mental cost of juggling two tasks is high; humans aren't designed for multitasking.

Hmm, I've tried two modes: one is to stay focused on the task at hand but spin up alternative sessions to do documentation, check alternative hypotheses, and second-guess things the main session is up to. The other is to do an unrelated task in another session. I find this gets more work done in a day but is exhausting. With better scaffolding and longer per-task run times (longer tasks in the METR sense), it could be more sustainable as a manager of agents.

Two? I'd estimate twelve (three projects x four tasks) going at peak.

3-4 parallel projects is the norm now, though I find that task parallelism still makes it bothersome to reduce overlap between tasks, even with worktrees. How did you work around that?

If you're hosting on a public cloud, you can use a feature like AWS Session Manager to connect "through the backdoor" (via the guest's private communication with the hypervisor) without actually opening the ssh port to the world. This should fully address the client's concerns. None of my servers have ssh exposed at all.
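
For example, on AWS (a sketch; the instance ID is a placeholder, and it assumes the SSM agent is running on the instance and the Session Manager plugin is installed locally):

    # interactive shell with no inbound ports open at all
    aws ssm start-session --target i-0123456789abcdef0

You can also tunnel plain ssh over it with the AWS-StartSSHSession document, via a ProxyCommand in ~/.ssh/config:

    Host i-* mi-*
        ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"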

How does the mechanism of remote access address what is (presumably) a legal concern about there being remote access in general?

That isn't my presumption about the nature of the concern. In OP's other comment they specify that the client is specifically worried about the open port.

Well, if you allow remote access, you conceptually allow some kind of logical inbound connection, no matter how it's technically realized.

In the late 90s/early 00s, I worked at a company that bought a single license of Visual Studio + MSDN and shared it with every single employee. In those days, MSDN shipped binders full of CDs with every Microsoft product, and we had 56k modems; it was hard to pirate. I don't think that company ever seriously considered buying a license for each person. There was no copy protection so they just went nuts. That MSDN copy of Windows NT Server 4 went on our server, too.

This was true of all software they used, but MSDN was the most expensive and blatant. If it didn't have copy protection, they weren't buying more than one copy.

We were a software company. Our own software shipped with a Sentinel SuperPro protection dongle. I guess they assumed their customers were just as unscrupulous as them. Probably right.

Every employer I've worked for since then has actually purchased the proper licenses. Is it because the industry started using online activation and it wasn't so easy to copy any more? I've got a sneaky feeling.


> In the late 90s/early 00s, I worked at a company that bought a single license of Visual Studio + MSDN and shared it with every single employee.

During roughly the same time period I worked for a company with similar practices. When a director realised what was going on, and the implications for personal liability, I was given the job of physically securing the MSDN CD binder, and tracking installations.

This resulted in everyone hating me, to the extent of my having stand-up, public arguments with people who felt they absolutely needed Visual J++, or whatever. Eventually I told the business that I wasn't prepared to be their gatekeeper anymore. I suspect practices lapsed back to what they'd been before, but it's been a while.


Primarily it's the reason you already know: restic and borg share the same model, but restic doesn't require the remote end to be an SSH-accessible filesystem. Restic can send backups almost anywhere, including object storage like your Backblaze B2 (that's what I use with restic, too). I agree with OP: restic is strictly better. There's no reason to use borg today; restic is a superset of its functionality.
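
For example, with B2 (bucket and path are placeholders; restic reads the B2 credentials from environment variables and will prompt for a repository password):

    export B2_ACCOUNT_ID=<key id>
    export B2_ACCOUNT_KEY=<application key>
    restic -r b2:my-bucket:backups init          # one-time repository setup
    restic -r b2:my-bucket:backups backup ~/data # encrypted, deduplicated snapshot
    restic -r b2:my-bucket:backups snapshots     # list what's stored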

Thanks! Then I’ll look more at Restic :)

Does restic work well with TrueNAS?

I don't know specifically, but it's a self-contained single file Go executable. It doesn't need much from a Linux system beyond its kernel. Chances are good that it'll work.

I simply use SQLite for this. You can store the cache blocks in the SQLite database as blobs. One file, no sparse files. I don't think the "sparse file with separate metadata" approach is necessary here, and sparse files have hidden performance costs that grow with the number of populated extents. A sparse file is not all that different than a directory full of files. It might look like you're avoiding a filesystem lookup, but you're not; you've just moved it into the sparse extent lookup which you'll pay for every seek/read/write, not just once on open. You can simply use a regular file and let SQLite manage it entirely at the application level; this is no worse in performance and better for ops in a bunch of ways. Sparse files have a habit of becoming dense when they leave the filesystem they were created on.
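
A minimal sketch of what I mean, in Python (the table name, block size, and key scheme are made up; use whatever fits the workload):

    import sqlite3

    BLOCK_SIZE = 64 * 1024  # cache granularity

    db = sqlite3.connect("blockcache.db")
    db.execute("PRAGMA journal_mode=WAL")  # readers stay unblocked during writes
    db.execute("""
        CREATE TABLE IF NOT EXISTS blocks (
            file_id  TEXT    NOT NULL,
            block_no INTEGER NOT NULL,
            data     BLOB    NOT NULL,
            PRIMARY KEY (file_id, block_no)
        )
    """)

    def put_block(file_id: str, block_no: int, data: bytes) -> None:
        # upsert one cached block; SQLite manages the on-disk layout
        db.execute(
            "INSERT OR REPLACE INTO blocks (file_id, block_no, data) VALUES (?, ?, ?)",
            (file_id, block_no, data),
        )
        db.commit()

    def get_block(file_id: str, block_no: int) -> bytes | None:
        # returns None on a cache miss
        row = db.execute(
            "SELECT data FROM blocks WHERE file_id = ? AND block_no = ?",
            (file_id, block_no),
        ).fetchone()
        return None if row is None else row[0]

One regular file on disk, no sparse extents, and eviction is just a DELETE.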

I don't think the author could even use SQLite for this. NULL in SQLite is stored very compactly, not as pre-filled zeros. They must be talking about a columnar store.

I wonder if attaching a temporary db on fast storage, filled with results of the dense queries, would work without the big assumptions.


I think I did a poor job of explaining. SQLite is dealing with cached filesystem blocks here, and has nothing to do with their query engine. They aren't migrating their query engine to SQLite, they're migrating their sparse file cache to SQLite. The SQLite blobs will be holding ranges of RocksDB file data.

RocksDB has a pluggable filesystem layer (similar to SQLite's virtual filesystems), so they can read blocks from the SQLite cache layer directly without needing to fake a RocksDB file at all. This is how my own solution works (I've implemented this before): it's SQLite in both places, with one SQLite file (normal) holding cached blocks and another SQLite file (with a virtual filesystem) running queries against the cache layer. They can do the same with SQLite holding the cache and RocksDB running the queries.

IMO, a little more effort would have given them a better solution.


Ah, clever. Since they chose RocksDB I wonder if Amazon supports zoned storage on NVMe. RocksDB has a zoned plugin which describes an alternative to yours.

Being specific: AWS load balancers use a 60 second DNS TTL. I think the burden of proof is on TFA to explain why AWS is following an "urban legend" (to use TFA's words). I'm not convinced by what is written here. This seems like a reasonable use case by AWS.
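
It's easy to check (the hostname and addresses below are made up; the 60 in the second column is the TTL):

    $ dig +noall +answer my-alb-123456789.us-east-1.elb.amazonaws.com
    my-alb-123456789.us-east-1.elb.amazonaws.com. 60 IN A 203.0.113.10
    my-alb-123456789.us-east-1.elb.amazonaws.com. 60 IN A 203.0.113.11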


Not one of the downvoters, but I'd guess it's because this is only true with HATEOAS which is the part that 99% of teams ignore when implementing "REST" APIs. The downvoters may not have even known that's what you were talking about. When people say REST they almost never mean HATEOAS even though they were explicitly intended to go together. Today "REST" just means "we'll occasionally use a verb other than GET and POST, and sometimes we'll put an argument in the path instead of the query string" and sometimes not even that much. If you're really doing RPC and calling it REST, then you need something to document all the endpoints because the endpoints are no longer self-documenting.
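
For the unfamiliar, the idea is that each response carries links to the next available actions, so a client (or an LLM) can discover the API by following them instead of reading separate docs. A made-up example in the HAL style:

    GET /orders/42
    {
      "id": 42,
      "status": "shipped",
      "_links": {
        "self":    { "href": "/orders/42" },
        "invoice": { "href": "/orders/42/invoice" },
        "cancel":  { "href": "/orders/42/cancel" }
      }
    }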


HATEOAS won't give you the basic nouns to work with


Right, you wouldn't need HTML at all for LLMs, though. REST would work really well; self-documenting and discoverable is all we really need.

What we find ourselves doing, apparently, is bolting together multiple disparate tools and/or specs to try to accomplish the same goal.


But that is roughly the point here. If we still used REST we wouldn't need Swagger, OpenAPI, GraphQL (for documentation at least; it has other benefits), etc.

We solved the problem of discovery and documentation between machines decades ago. LLMs can and should be using that today instead of us reinventing bandaids yet again.


A lot of negative responses, so I'll provide my own corroborating anecdote. I intend to replace my low-code solutions with AI-written code this year. I have two small internal CRUD apps built with Budibase. It was a nice dream, and I still really like Budibase; I just find it even easier to use AI for this, with the resulting app built on standard components instead of an unusual one (Budibase itself). I'm a programmer, so I can debug and fix that code.


LLMs are great at reviewing. This is not stupid at all if it's what you want; you can still derive benefit from LLMs this way. I like to have them review at the design level where I write a spec document, and the LLM reviews and advises. I don't like having the LLM actually write the document, even though they are capable of it. I do like them writing the code, but I totally get it; it's no different than me and the spec documents.


Right, I'd say this is the best value I've gotten out of it so far: "I'm planning to build this thing in this way; does that seem like a good idea to you?" Sometimes I get good feedback that something else would be better.


If LLMs are great at reviewing, why do they produce the quality of code they produce?


Reviewing is the easier task: it only has to point me in the right direction. It's also easy to ignore incorrect review suggestions.


IMHO it's because you worked before asking the LLM for input, so you already have information and an opinion about what the code should look like. You can recognize good suggestions and quickly discard bad ones.

It's like reading: for better learning and understanding, it's advised that you think about and question the text before reading it, and then again after just skimming it.

Whereas if you ask for the answer first, you are less prepared for the topic and it's harder to form a different opinion.

It's my perception.


It's also because they are only as good as their given skills. If you tell them "code <advanced project> and make no x and y mistakes", they will still make those mistakes. But if you say "perform a code review and look specifically for x and y", then it may have some notion of what to do. That's my experience with using it for both writing and reviewing the same code in different passes.

