Hacker News | new | past | comments | ask | show | jobs | submit | stephen_cagle's comments

I'm blown away by the idea of not using Chris Tucker for Ruby Rhod. It is like imagining anyone but Hugh Jackman as Wolverine. They are basically perfect castings.

Last month I "panic bought" a $999 Mac mini (32GB) so I could run small models, image generation, and voice synthesis on it. I don't think I regret it yet, despite the fact that you can get a 16GB model for $599, which is honestly a much better price per gigabyte.

I think it is interesting that, at least thus far, Apple has chosen not to raise the price of their computers despite the price of RAM presumably going up by multiples.

Tipping point for me: It will be a pretty kickass media server for at least a decade.


Didn't they eliminate the highest tier Mac Pro and raise the price of the one under it?

Writing (unassisted) is probably the first step towards your own independent thoughts.

I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us. Overspecialization is death."

I think a diversity of opinion is important for society. I'm worried that LLMs are going to group-think us into thinking the same way, believing the same things, and reacting the same way.

I wonder if future children will need to be taught how to purposely form their own opinions, being so used to asking others before even considering things on their own. The LLM will likely reach a better conclusion than you would on your own, but there is value in diverging from the consensus and thinking your own thoughts.

https://stephencagle.dev/posts-output/2025-10-14-you-should-...


> I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us. Overspecialization is death."

The scene you mentioned (amazing movie and holds up to this day) with the Major and Togusa:

https://youtube.com/watch?v=VQUBYaAgyKI

While I frequently use a similar argument ("We need someone 'untainted' to provide a different point of view"), my honest opinion is somewhat more nuanced. These models tend to gravitate toward some level of writing competence determined by how good we are at filtering pre-training data and creating supervised data for fine-tuning. That level is still far below my current professional writing, and I find it dreadful to read compared to good writing.

Plenty of my students cannot "see" this, as they are still below the level of current LLMs, and I caution them against relying too heavily on LLMs for writing, since they may then never learn good writing or "reach above" LLM-level writing. Instead, they must read widely and reflect. I also always provide written feedback on their writing (rather than making edits myself) so that they must incorporate it manually; in doing so, they consider why I disagree with their current writing and, hopefully, learn to become better writers.


Bitterpilled. Wow, the audio mixing on that clip is great. I miss art like this. I'm afraid that nothing will recapture the way I felt watching GitS the first time.

There are so many pieces of media that I wish I could fully scrub my memory of, just to experience them for a second time.

You just invented a category for a list! Going to have fun thinking of mine.

Agree. Also, deference to consensus has always been a thing. "Best practices" is a thing at all levels of school and work. So it's very much a human thing, AI drastically compresses the timeline.

Importantly, it's not wrong. I say this as someone who seems to have the contrarian gene. I too am worried that the status quo is now instant and all-consuming for anyone, anywhere. But there's still hope in that AI compresses ramp-up speed for anyone who would have had the capacity to branch out anyway. So that's good.


I think LLM writing is probably a short term fad. It doesn't provide any value and no one likes reading it. That said, anywhere where value can be extracted by posting writing will be completely destroyed by LLMs as people try to grift their way in.

Either we find some way to filter out AI slop or the internet just stops getting used to post and consume content.


[flagged]


It's similar to the "workslop" problem where you can generate reports and documents rapidly, but the real work has shifted to the receiver who has to review and correct mistakes. In open source this has moved to the PR review being the actual work while generating the code and submitting it is worthless.

Obviously this is nonsensical long term. Why would I want to receive your LLM output when I could get the same output myself?


I think the most interesting idea here is the idea of people purposely keeping secrets in order to maintain advantages.

Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear on whether they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.

Anyway, my point is that I think they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.

In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.

But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the execution cost has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous one. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.

And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.


Does anyone have a breakdown from the case itself about what particular features of these social media apps makes them threshold into the "addictive" classification?

- Infinite Scrolling?

- Play Next Video Automatically?

- Shorts?

- Matching to your peer group?

- Variable Reward?

- Social Reciprocity?

- Notifications?

- Gamification (Streaks)?

Was the case won on the argument that it is the aggregate of these things (and many more, I am sure)? The power imbalance between the user and the company? Or did they rest their argument on some particular subset of them? I'm just genuinely curious how you can win a very challenging case like this without inadvertently lassoing so many other industries that your arguments seem ludicrous.


I'm somewhat skeptical of this "enter the trades" movement. Actually, I am more skeptical of that statement than I am of LLMs replacing white-collar work in general. I think parts of coding are being replaced quickly because they are the parts that don't require discernment. Trades likely contain just as many automatable parts and just as many discernment parts as white-collar work. At this moment in history, the automatable parts are being automated in the knowledge-based world. People think the physical world is somehow different, but with world models (along the full spectrum of what that means) the physical world will be just as trainable as the knowledge-based world.

tl;dr: Just like knowledge work, most trade work is probably mostly repeated (i.e., very trainable) tasks with a small amount of taste and discernment applied. The repeated parts will be trainable; the discernment parts may be too. I don't think the physical world is necessarily any safer than the knowledge world.


The difference is the physical aspect of the trades. The design for wiring can be (and already has been) automated, but you physically need an electrician on site to pull the wires. So I can see a hollowing out of the engineers, but not the actual electricians.

That being said, the absolute focus on trades from the fed right now just reeks of the wild pendulum swing. It used to be 'go to college to get a good job' then we had too many college grads. In ten years we'll have a glut of people trained in the trades with no prospects.

It just keeps swinging back and forth and somehow Joe Regularworker keeps losing.


Indeed. If you squint a little, it kind of looks like the machines are trying to shift to a world where we are just meat puppets to do the tricky stuff there aren't robotics for (yet). :(


Cory Doctorow's "The Reverse-Centaur’s Guide to Criticizing AI" [1] agrees with you:

"<...> a reverse centaur is machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine."

[1] https://doctorow.medium.com/https-pluralistic-net-2025-12-05...


Or humans are just the "sex organs" that work to bring about the artificial life-forms that come next.


Have you seen what Unitree G1 can already do? I see the writing on the walls for going onsite and pulling wires.


Yeah, things change. What do you propose to do about that? The only people who lose are the ones who can't accept that they may need to change careers to make more money.


Robots are expensive; software is not. I can instantly duplicate software a million times and run it in parallel; I can't just produce a million robots. The physical world is always harder.

Even if we get robots who can, say, build roads start to end, there is still a HUGE gap between that and it actually being used. There is a hard floor, too. Robots are made of physical things, physical things have scarcity, and there's no way around that to our knowledge. Even if you can build the robot for 1 cent, the material cost will still exist.


> Robots are expensive

People are not, though, and all the folks who are no longer necessary in knowledge work are available for physical work.


Dark thoughts... Imagine a future where most human beings are just overseen by an LLM while wearing AR work glasses. Barely aware of what (physical) work we are doing as we overlay our hands within the projections of our AR glasses. Every task is decomposed into a set of small physical steps; you don't even think about what you are actually trying to accomplish, just follow the steps one at a time. I wonder if an entire fast food restaurant could be run in this fashion? No managers, no shift supervisors, just a skeleton crew doing one step of a task at a time.


Why have fast food restaurants at all at that point? Just have everyone eat the same mass-produced, nutritionally-optimized substance, and use the AR vision to superimpose pretty pictures over that food. Varied meals are for the rich.


Hasn't the US already minimised the cost of all the construction work that are "the parts that don't require discernment" to minimum wage who-cares-if-they're-documented-or-not day workers?


Seems the answer is no, the average wage is about $25/hr depending on region.


Cool, I can make that working at Walmart many places nowadays.


The average for Walmart is $18.25.


Have you heard of any good projects for running isolated containers in NixOS that are cheaply derived from your own NixOS config? Because that is what I want. I want a computer where I can basically install every non stock app in its own little world, where it thinks "huh, that is interesting, I seem to be the only app installed on this system".

Basically, I want to be able to run completely unverified code off of the internet on my local machine, and know that the worst thing it can possibly do is trash its own container.

I feel like NixOS is one path toward that future.



There is also https://microvm-nix.github.io/microvm.nix/ if you want increased isolation.


I can recommend MicroVM.nix, since it allows for multiple VM runtimes like QEMU, Firecracker, etc.

There's also nixos-shell for ad-hoc virtual machines: https://github.com/mic92/nixos-shell


Can you do those ad-hoc though? I was looking into this too. I feel like it requires a system config change, apply, and then you need to do container start + machinectl login to actually get a shell.

That's definitely what I want... most of the time.


Yes, NixOS containers can be run in:

* declarative mode, where your guest config is defined within your host config, or

* imperative mode, where your guest NixOS config is defined in a separate file. You can choose to reuse config between host and guest config files, of course.

It sounds like you want imperative containers. Here's the docs: https://nixos.org/manual/nixos/stable/#sec-imperative-contai...
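A minimal sketch of the declarative flavor, for comparison (these are real NixOS options, but the container name `sandbox` and the package choice are just examples):

```nix
# In the host's configuration.nix: a declarative NixOS container.
containers.sandbox = {
  autoStart = true;
  privateNetwork = false;  # set true (with hostAddress/localAddress) for a separate network namespace
  config = { config, pkgs, ... }: {
    # The lone app this container "knows about".
    environment.systemPackages = [ pkgs.hello ];
    system.stateVersion = "24.05";
  };
};
```

After a rebuild, `nixos-container root-login sandbox` (or `machinectl login`) drops you into the guest.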


Oh I totally missed that!


sounds like you want qubes os https://www.qubes-os.org/


> I want a computer where I can basically install every non stock app in its own little world, where it thinks "huh, that is interesting, I seem to be the only app installed on this system".

NixOS containers are the most convenient way to do this, but those will map the entire global nix store into your container. So while only one app would be in your PATH, all other programs are still accessible in principle. From a threat-modelling perspective, this isn't usually a deal-breaker though.

There's also dockerTools, which lets you build bespoke docker/podman images from a set of nix packages. Those will have a fully self-contained and minimal set of files, at the expense of copying those files into the container image instead of just mapping them as a volume.
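As a sketch of the dockerTools route (`buildLayeredImage` and its attributes are the actual nixpkgs API; the image name is made up):

```nix
# default.nix: build a minimal image containing only one package.
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-only";           # hypothetical image name
  tag = "latest";
  contents = [ pkgs.hello ];     # the only package visible in the image
  config.Cmd = [ "/bin/hello" ];
}
```

Then something like `nix-build default.nix && docker load < result` gets it into your local Docker daemon.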


https://spectrum-os.org/ is trying to marry QubesOS (everything runs inside a VM) with Nix. It's still very much in development, though.


If containers are safe enough for your use case, then just use NixOS containers; they're just a few more lines to set up in a regular NixOS config.

If that isn't enough, there's microvm.nix, which is pretty much the same in difficulty/complexity but runs inside a very slim and lightweight VM with stronger isolation than a container.


Sounds like Ghaf might be what you're after: https://ghaf.tii.ae/ghaf/overview


Depends whether you consider rootless Docker "cheap". I tried running ZeroClaw in a Nix-derived Docker container (spoiler: it was a bad idea to use ZeroClaw at all, since the harness is very buggy) and there is still the potential for container-escape zero-days, but that's the best I've found. Also, Nix's own containerization is not as hermetic as Docker's; they warn about that in the docs.


That's hard given most apps have dependencies and often share them.

It will always look like curl or bash or something is available.

What's wrong with another user account for such isolation?

They can be isolated to namespaces and cgroups. Docker and Nix are just wrappers around a lot of OS functionality with their own semantics attempting to describe how their abstraction works.

Every OS already ships with tools for control users access to memory, disk, cpu and network.

Nix is just another chef, ansible, cfengine, apt, pacman

Building ones own distro isn't hard anymore. If you want ultimate control have a bot read and build the LFS documentation to your needs.

Nothing more powerful than the raw git log and source. Nix and everything else are layers of indirection we don't need


> Nix is just another chef, ansible, cfengine, apt, pacman

No, because Nix code is actually composable. These other tools aren't.


Not only is it composable, but it is generalizable. So yes there is also chef, ansible, apt, uv, nodeenv, etc... or there is just nix. It is able to be the "one tool" to rule them all, often with better reproducibility guarantees.
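As a sketch of what "one tool" looks like in practice (a flake with a dev shell; the package choices are just examples):

```nix
# flake.nix sketch: one tool standing in for per-language env managers.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        # Pinned Python + Node in one reproducible shell,
        # instead of uv + nodeenv managing separate environments.
        packages = [ pkgs.python3 pkgs.nodejs ];
      };
    };
}
```

`nix develop` then drops you into that environment, pinned by the flake lock file.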


> Find Minimum in Rotated Sorted Array

I've seen that problem in an interview before, and I thought the solution I hit upon was pretty fun (if dumb).

  from bisect import bisect_left
  from typing import List

  class Solution:
      def findMin(self, nums: List[int]) -> int:
          # A view of nums shifted left by `rotation` positions.
          class RotatedList():
              def __init__(self, rotation):
                  self.rotation = rotation
              def __getitem__(self, index):
                  return nums[(index + self.rotation) % len(nums)]

          # A virtual list of booleans: "does rotating by this index look sorted?"
          class RotatedListIsSorted():
              def __getitem__(self, index) -> bool:
                  rotated = RotatedList(index)
                  return rotated[0] < rotated[len(nums) // 2]
              def __len__(self):
                  return len(nums)

          # Binary-search for the leftmost rotation that looks sorted.
          rotation = bisect_left(RotatedListIsSorted(), True)
          return RotatedList(rotation)[0]

I think it is really interesting that you can define "list-like" things in Python using just two methods. This is kind of neat because sometimes you can redefine an entire problem as just a binary search over a list of candidate solutions; here you are looking for the leftmost point where it becomes True. Anyway, I often bomb interviews by trying out something goofy like this, but I don't know, when it works, it is glorious!

Good luck on your second round!


I only did these types of interviews when applying for internships in uni, but I really don't think you would get away with using an inbuilt binary search (via bisect left) in a question that is basically "binary search with a quirk".
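For reference, the hand-rolled version the interviewer is presumably fishing for is short (a sketch; like the original, it assumes the standard problem setup with distinct elements):

```python
from typing import List

def find_min(nums: List[int]) -> int:
    """Minimum of a rotated sorted array of distinct values, in O(log n)."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] > nums[hi]:
            lo = mid + 1   # pivot (and minimum) lies to the right of mid
        else:
            hi = mid       # minimum is at mid or to its left
    return nums[lo]
```

The key invariant: comparing the midpoint against the right end tells you which half contains the rotation point.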


It's certainly not an "elitist initiation ordeal" to ask a potential crewmate to sketch a library function they've selected, what abstraction or ideas it implicitly contains, and what bearing those have on their overall approach. GP's use of `bisect_left` is instructive here:

  In [2]: Solution().findMin([1, 0, 1, 1])
  Out[2]: 1
However, requesting a bug-free implementation of `bsearch()` (to say nothing of the actual problem being solved) during a timed, in-person interview is less a structured rehearsal of an individual's unique capacities, and more of a jumping-in ritual proving only a willingness to die for their cybergang.


> you start off checking every diff like a hawk, expecting it to break things, but honestly, soon you see it's not necessary most of the time.

My own experience...

I've tried approaching vibe coding in at least 3 different ways. At first I wrote a system that had specs (markdown files) with a 1-to-1 mapping between each spec and a matching Python module. I only ever edited the spec, treating the code itself as an opaque thing that I ignored (though I defined the interfaces for it). It kind of worked, though I realized how distinct the difference is between a spec that communicates intent and a spec that specifies detail.

From this, I felt that maybe I needed to stay closer to the code, but use the LLM as a bicycle of the mind. So I tried "write the code yourself, and integrate an LLM into Emacs so that you can have a discussion with it about individual pieces of code, using it for criticism and guidance, not to actually generate code". It also worked (though I never wrote anything more than small snippets of Elisp with it). I learned more doing things this way, though I have the nagging suspicion that I was actually moving slower than I theoretically could have. I think this is another valid way.

I'm currently experimenting with a 100% vibe-coded project (https://boltread.com). I mostly just drive it through interaction on the terminal, with "specs" that act more as intent than specification. I find the temptation to drop out of outside-critic mode and just look at the code quite strong. I have resisted it to date (I want to experiment with what it feels like to be a vibe coder who cannot program), to judge whether I realistically need to be concerned about it. Just like LLM-generated things in general, the project gets closer and closer to what I want, but it is like shaping mud: you can put detail into something, but it won't stay that way over time; its sharp detail will be reduced to smooth curves as you switch to putting detail elsewhere. I am not 100% sure how to deal with that issue.

My current thinking is that we have failed to find a good way of switching between the "macro" (vibed) and the "micro" (hand-coded) views of LLM development. It's almost like we need modules (blast chambers?) for different parts of any software project, where we can switch to doing things by hand (or at least with more intent) when necessary, and by vibe when not. Striking the balance that nets the greatest output is quite challenging, and it may not even be that there is an optimal intersection; you may simply be exchanging immediate change for future flexibility of the software.


I largely reached the same conclusion recently => https://stephencagle.dev/posts-output/2025-10-14-you-should-...

