Hacker News | monus's comments

> You may wonder whether I tried asking an LLM for help or not. Well, I did. In fact it was very helpful in some tasks like summarizing kernel logs [^13] and extracting the gist of them. But when it came to debugging based on all the clues that were available, it concluded that my code didn't have any bugs, and that the CPU hardware was faulty.

This matches my experience whenever I do unconventional or deep work like the article mentions. Engineers comfortable with this type of work will multiply their worth.


For now. We are training our replacements. They will learn how to do this sort of work over time, likely sooner than you might expect.

> Along the way I have developed a programming philosophy I now apply to everything: the best software for an agent is whatever is best for a programmer.

Not a plug, but that's exactly why we're building sandboxes for agents with local-laptop quality: starting with remote Xcode + Simulator sandboxes for iOS, and high-memory sandboxes running the Android Emulator with GPU acceleration for Android.

No machine allocation: just composable sandboxes that make up a developer persona's laptop.

If you're interested, there's a quick demo here: https://www.loom.com/share/c0c618ed756d46d39f0e20c7feec996d

muvaf[at]limrun[dot]com


Agreed. I don’t know if it will create or eliminate jobs but this is certainly another level from what we’ve seen before.

For the last two months, even calling LLMs an internet-level invention feels like underselling them.

You can see the sentiment shift happening over the last few months among all the prominent, experienced devs too.


Yeah, the latest wave of Opus 4.5, Codex 5.2, and Gemini Pro 3 rendered a lot of my skepticism redundant as well. While I generally agree with the Jevons paradox line of reasoning, I have to acknowledge it's difficult to make any reasonable prediction about technology that's moving at such immense speed.

I expected LLMs would have hit a scaling wall by now, and I was wrong. Perhaps that'll still happen. If not, then regardless of whether it ultimately creates or eliminates more jobs, it'll destabilize the job market.


Well, we are serving latency-sensitive remote control to <one of the biggest banks in the US> via WebRTC, which uses TLS over TURN, so the whole traffic goes over HTTPS on port 443.

No NAT, no UDP, just pure TURN traffic over Cloudflare TURN with TLS.
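
For anyone curious what that looks like client-side, here is a minimal sketch: a relay-only RTCPeerConnection configuration pointing at a TURN-over-TLS server on port 443. The server URL and credentials below are placeholders, not real endpoints.

```javascript
// Relay-only WebRTC config: every candidate goes through the TURN
// server over TLS on 443, so middleboxes see HTTPS-like traffic only.
// turn.example.com and the credentials are placeholders.
const config = {
  iceServers: [
    {
      urls: "turns:turn.example.com:443?transport=tcp", // TURN over TLS (RFC 7065)
      username: "user",
      credential: "secret",
    },
  ],
  // Skip host/STUN candidates entirely; gather only relay candidates.
  iceTransportPolicy: "relay",
};

// In a browser you would then create: new RTCPeerConnection(config)
```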


We recently had a disk failure on the primary, and CloudNativePG promoted a replica to primary, but it wasn't zero downtime: several queries failed during the transition. So something like PgBouncer in transaction pooling mode (no prepared statements) is still needed, which has a performance penalty.


> So something like pgBouncer together with transactional queries

FYI, it's already supported by CloudNativePG [1].

I was playing with this operator recently and I'm truly impressed: it's a piece of art when it comes to Postgres automation. Together with Barman [2] it does everything I need and more.

[1] https://cloudnative-pg.io/docs/1.28/connection_pooling

[2] https://cloudnative-pg.io/plugin-barman-cloud/
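
For reference, the built-in pooling is configured through a separate `Pooler` resource; a minimal sketch based on the docs linked above (the cluster name and sizing here are made up):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-rw
spec:
  cluster:
    name: cluster-example   # must match your Cluster resource
  instances: 3
  type: rw                  # pool against the read-write service
  pgbouncer:
    poolMode: transaction   # transaction pooling, hence no prepared statements
```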


We’re building the Browserbase for mobile - Android & iOS at agentic scale with no concurrency limits.

We run them on bare metal without VM brittleness, fully GPU-accelerated, with WebRTC streaming using the hardware encoder. It's as good as it gets, and it has amazed every single person who has tried it.

Still behind a waitlist; give me a heads-up at [email protected] to try it out.

https://lim.run


The problem with old computers isn't that they're slow but that they fail randomly, so they don't need a "smaller" Linux; they need more resiliency: something that can cope with random RAM errors, corrupt disks, and absurd CPU instruction failures.

The size was a 90s problem.


The real issue is that old hardware uses a lot of electrical power. You can get a small single-board computer with at least as much computing power, using 20 to 30 times less electricity, and fitting in the palm of your hand.


It's not really a problem for most retro-computing enthusiasts; it only comes out to a couple of bucks a month in electricity, and that's assuming you leave the computer running all month.


It's not an issue, it's just a price to pay.


Do you have any recommendations on resilient software and practices?


What sorts of techniques can be used to deal with those issues?


My old computers that I still run _are_ 90s machines.

Well, technically the Eee is from '07. But it is 32-bit, and everything that entails.


Agreed. Use LLMs all you want for discovery and proof, but do not use them to replace your voice. I literally can't read it; my brain just shuts off when I see LLM text.


The hard part is the content of the isMalicious() function. The bots can crash, but they'd be quick to restart anyway.
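
To make the point concrete, here is a sketch of what such a function might look like. The name comes from the parent comment, but every heuristic below is an invented placeholder; real bot detection needs far richer signals (TLS fingerprints, behavioral scoring, IP reputation, and so on).

```python
# Placeholder heuristics only -- the hard part is choosing real ones.
BLOCKED_UA_SUBSTRINGS = ("python-requests", "curl", "scrapy")

def is_malicious(user_agent: str, requests_last_minute: int) -> bool:
    """Toy classifier: obvious automation UAs, or an absurd request rate."""
    ua = user_agent.lower()
    if any(s in ua for s in BLOCKED_UA_SUBSTRINGS):
        return True
    # Crude rate heuristic: an arbitrary threshold for illustration.
    return requests_last_minute > 120
```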


> bringing HTTP/2 all the way to the Ruby app server is significantly complexifying your infrastructure for little benefit.

I think the author wrote it with encryption-is-a-must in mind, and after he corrected those parts the article just ended up with these weird statements. What complexity is introduced apart from changing the serving library in your main file?


In a language that uses forking to achieve parallelism, multiple multiplexed requests terminating at the same endpoint will cause those tasks to compete for it. For some workflows that may be a feature, but for most it is not.

So that's Python, Ruby, Node. Elixir won't care, and C# and Java... well, hopefully the HTTP/2 library takes care of multiplexing the replies; then you're good.


A good Python web server should be a single process with asyncio, or maybe have a few worker threads or processes. Definitely not a fork for every request.
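
As a rough sketch of that single-process shape, using only the stdlib (a real app would sit behind an ASGI framework rather than hand-rolling HTTP like this):

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Toy HTTP/1.0 handler: read the request line, ignore the rest.
    await reader.readline()
    body = b"hello"
    writer.write(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n%s" % (len(body), body))
    await writer.drain()
    writer.close()

async def main() -> None:
    # One process, one event loop: concurrency comes from asyncio, not
    # fork(2). Scale out by running one such process per core.
    # Start with: asyncio.run(main())
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()
```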


Your response explains the other one, which I found just baffling.

I didn't say forking per request, good god. I meant running a process per core, or some ratio of processes to cores, to achieve full server utilization. Limiting all of a user's HTTP/2 requests to one core is unlikely to result in good feelings for anybody. If you let nginx fan them out to a couple of cores it's going to work better.
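
The fan-out shape being described looks roughly like this hypothetical nginx fragment (ports, cert paths, and worker count are all made up for illustration):

```nginx
# nginx terminates HTTP/2 from clients and fans the multiplexed
# requests out over plain HTTP/1.1 to one app process per core.
upstream app {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
}

server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        proxy_pass http://app;
    }
}
```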

These are not problems Java and C# have.


I don't think any serious implementation would do forking when using HTTP/2 or QUIC. Fork is a relic of the past.


You are correct about the first assumption, but even without encryption, dealing with multiplexing significantly complexifies things, so I still stand by that statement.

If you assume no multiplexing, you can write a much simpler server.


In reality you would build your application server on top of the HTTP/2 server, so you wouldn't have to deal with multiplexing; the server hides that from you, so it's the same as an HTTP/1 server (e.g. you pass some callback that gets called to handle the request). If you implement HTTP/2 from scratch, multiplexing is not even the most complex part... it's rather the sum of all the parts: HPACK, flow control, stream state, frames, settings, the large amount of validation, and so on.


This may be true for some stacks, but my answer has to be understood in the context of Ruby, where the only real source of parallelism is `fork(2)`; hence the natural way to write a server is an `accept` loop, which fits HTTP/1 very well, but not HTTP/2.
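
The fork-plus-accept shape translates directly to other prefork stacks; here is a stdlib Python sketch of the same pattern (the handler and worker count are illustrative). One blocking connection per worker fits HTTP/1's one-request-at-a-time model, but an HTTP/2 connection would pin all of its multiplexed streams to a single worker.

```python
import os
import socket

def handle(conn: socket.socket) -> None:
    # Toy HTTP/1-style handler: one request, one response, per connection.
    conn.recv(1024)
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nok")

def accept_loop(listener: socket.socket) -> None:
    # Each worker blocks in accept(); the kernel load-balances new
    # connections across the forked children.
    while True:
        conn, _ = listener.accept()
        handle(conn)
        conn.close()

def prefork(listener: socket.socket, workers: int = 4) -> list[int]:
    # fork(2) is the parallelism primitive: N children, one accept loop each.
    pids = []
    for _ in range(workers):
        pid = os.fork()
        if pid == 0:  # child
            accept_loop(listener)
            os._exit(0)
        pids.append(pid)
    return pids
```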


There is a gem that implements lightweight threads [0], and there is an HTTP/2 server that seems to abstract things out [1]. Your point probably still holds in the context of Ruby + async + HTTP/2; but then it's not HTTP/2's fault, but rather Ruby's for not having a better concurrency story, like, say, Golang's.

[0]: https://github.com/socketry/async

[1]: https://github.com/socketry/falcon


The Ruby concurrency story is fine; the problem is parallelism. I have a whole list of posts about all that.

> it's not HTTP/2's fault, but rather Ruby's

My post is to be read primarily in the context of Ruby, as the intro clearly explains it. I'm not the one who posted it here, it really isn't intended for the HN audience. I would never submit my posts here.

Many of my points are more general than just Ruby-centric, but yes, if your stack of choice has very good support for HTTP/2, I'm not saying not to use it in your DC.

My point is that as a Ruby user, there isn't much reason to lament over the lack of HTTP/2 support in Puma or some other servers.

