
The README made me realize I just needed a simple `alias local-tcp-listeners='lsof -iTCP -sTCP:LISTEN'` in my `~/.bash_aliases` :)

Same, not sure why a whole CLI app is needed.

Developers are nitpicky, at least I am, and I know a lot of others who are as well. So don't underestimate the value of a nice tool with a good developer experience; one that's intuitive, clean and easy to use means a lot when juggling so many things during a workday. So having a clean and light implementation that makes the job even easier is, in my opinion, worth it (and thus needed) :)

Because it gives more context. Quite obvious if you look at the README...

True, but as I wrote, there are workarounds; the problem is that they are unintuitive, difficult to remember, and don't provide much usability beyond listing. They lack useful features like getting process stats, killing ports easily without having to remember the PID after lsof, and so on. I often have to kill multiple processes at once after a failed cleanup. If you are into agentic coding, having your agent create a profile for all the processes it starts, which it can easily kill off when finished, is a lot easier, for me at least.

Some features on the way are: next available port; wait (wait for a host to return a successful health check before proceeding - good for migrations etc.). And lots more. It's not just about listing running ports, but a tool for managing them.
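The "next available port" feature mentioned above can be sketched in a few lines; this is a hypothetical illustration, not the tool's actual implementation:

```python
import socket

def next_available_port(start: int = 8000, host: str = "127.0.0.1") -> int:
    """Scan upward from `start` for the first TCP port we can bind."""
    for port in range(start, 65536):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind((host, port))
                return port  # bind succeeded, so the port was free
            except OSError:
                continue  # in use (or otherwise unbindable); try the next
    raise RuntimeError("no free TCP port found")
```

Note the inherent race: the port can be taken between the check and your server actually binding it, which is one reason a managed tool can be nicer than a one-off script.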

But to each their own, that's what's lovely about the many options available. But if you have anything in relation to this you think is neat, feel free to open an issue. It may be able to convince you that a simple alias won't suffice.


We had `curl`, HTTP and OpenAPI specs, but we created MCP. Now we're wrapping MCP into CLIs...


> but we created MCP. Now we're wrapping MCP into CLIs...

Next we'll wrap the CLIs into MCPs.


MCP is a dead end, just ignore it and it will go away.


And yet without MCP these CLI generators wouldn't be possible.

It's building on top of them, because MCP did address some issues (which arguably could've been solved better with CLIs to begin with, like adding proper help texts to each command)... it just also introduced new ones, too.

Some of which still won't be solved via switching back to CLI.

The obvious one being authentication and privileges.

By default, I want the LLM to be able to have full read only access. This is straightforward to solve with an MCP because the tools have specific names.

With CLI it's not as straightforward, because it'll start piping etc and the same CLI is often used both for write and read access.

All solvable issues, but while I suspect CLIs are going to get a lot more traction over the next few months, it's still not the thing we'll settle on, unless the privileges situation can be solved without making me greenlight commands every 2 seconds (or without ignoring their tendency to occasionally go batshit insane and randomly wipe things out while running in YOLO mode).


Exactly. Once you start looking at MCP as a protocol for accessing remote OAuth-protected resources, not an API for building agents, you realize the immense value.


Aside from consistent auth, that's what all APIs have done for decades.

Only takes 2 minutes for an agent to sort out auth on other APIs so the consistent auth piece isn't much of a selling point either.


Yes, MCP could've been solved differently, e.g. with an extension to the OpenAPI spec, at least from the perspective of REST APIs... But you're misunderstanding the selling point.

The issue is that granting the LLM access to the API needs something more granular than either "I don't care, just keep doing whatever you wanna do" or getting prompted every 2 seconds for the LLM to ask permission to access something.

With MCP, each of these actions is exposed as a tool and can be safely added to the "you may execute this as often as you want" list, and you'll never need to worry that the LLM randomly decides to delete something - because you'll still get a prompt for that, as that hasn't been whitelisted.

This is once again solvable in different ways, and you could argue the current way is actually pretty suboptimal too... Because I don't really need the LLM to ask for permission to delete something it just created, for example. But MCP only lets me whitelist actions, hence still unnecessary security prompts. Still, the MCP tool adds a different layer: we can use it both to essentially remove the authentication on the API we want the LLM to be able to call and to greenlight actions for it to execute unattended.
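The whitelisting works because MCP tools have stable names to hang a policy on. A minimal sketch of that policy layer (the tool names here are made up):

```python
# Hypothetical per-tool policy: read-only tools run unattended,
# anything else falls through to a human confirmation prompt.
AUTO_APPROVE = {"search_issues", "read_file", "list_records"}

def needs_confirmation(tool_name: str) -> bool:
    """True if a human must greenlight this tool call."""
    return tool_name not in AUTO_APPROVE
```

With a raw CLI there is no equivalent stable handle for the policy once pipes, flags, and subcommands enter the picture.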

Again, it's not a silver bullet, and I'm sure what we'll eventually settle on will be something different. However, as of today, MCP servers provide value to the LLM stack. Even if this value may eventually be provided better in some other way, current alternatives all come with different trade-offs.

And all of what I wrote ignores the fact that not every MCP is just for REST APIs. Local permissions need to be solved too. The tool use model is leaky, but better than nothing.


Of course they would be possible; we could just turn the REST API into a CLI.


It's not; they're a big unlock when using something like Cursor or Copilot. I think people who say this don't quite know what MCP is: it's just a thin wrapper around an API that describes its endpoints as tools. How is there not a ton of value in this?


MCP is the future in enterprise and teams.

It's as you said: people misunderstand MCP and what it delivers.

If you only use it as an API? Useless. If you use it on a small solo project? Useless.

But if you want to share skills across a fleet of repos? Deliver standard prompts to baseline developer output and productivity? Without having to sync them? And have it updated live? MCP prompts.

If you want to share canonical docs like standard guidance on security and performance? Always up to date and available in every project from the start? No need to sync and update? MCP resources.

If you want standard telemetry and observability of usage? MCP because now you can emit and capture OTEL from the server side.

If you want to wire execution into sandboxed environments? MCP.

MCP makes sense for org-level agent engineering but doesn't make sense for the solo vibe coder working on an isolated codebase locally with no need to sandbox execution.

People are using MCP for the wrong use cases and then declaring it excess, when the real use case is standardizing remote delivery of skills and resources. Tool execution is secondary.


You sound more like you like skills than MCP itself. Skills encapsulate the behavior to be reused.

MCP is a protocol that may have been useful once, but it seems obsolete already. Agents are really good at discovering capabilities and using them. If you give one a list of CLI tools with one-line descriptions, it will probably call each tool's help page and find out everything it needs to know before using it. What benefit does MCP actually add?


So just to clarify, in your case you're running a centralized MCP server for the whole org, right?

Otherwise I don't understand how MCP vs CLI solves anything.


Correct.

Centralized MCP server over HTTP that enables standardized doc lookup across the org, standardized skills (as MCP prompts), MCP resources (virtual indexes of the docs, similar to how Vercel formatted their `AGENTS.md`), and a small set of tools.

We emit OTEL from the server and build dashboards to see how the agents and devs are using context and tools, and which documents are "high signal", meaning they get hit frequently, so we know that tuning those docs will yield more consistent output.

OAuth lets us see the users because every call has identity attached.


Sandboxing and auth is a problem solved at the agent ("harness") level. You don't need to reinvent OpenAPI badly.


    > Sandboxing and auth is a problem solved at the agent ("harness") level
That holds if you run a homogeneous set of harnesses/runtimes (we don't; some folks are on Cursor, some on Codex, some on Claude, some on OpenCode, some on VS Code GHCP). The only thing that works across all of them? MCP.

Everything about local CLIs and skill files works great as long as 1) you're running in your own env, 2) you're working on a small, isolated codebase, 3) the environment is fully homogeneous, and 4) each repo only needs to know about itself and not about a broader ecosystem of services and capabilities.

Beyond that, some kind of protocol is necessary to standardize how information is shared across contexts.

That's why my OP prefaced that MCP is critical for orgs and enterprises because it alleviates some of the friction points for standardizing behavior across a fleet of repos and tools.

    > You don't need to reinvent OpenAPI badly
You are only latching onto one aspect of MCP servers: tools. But MCP delivers two other critical features, prompts and resources, and it is here that MCP provides contextual scaffolding over otherwise generic OpenAPI. Tools are perhaps the least interesting of MCP's features (though still useful in an enterprise context, because centralized tools allow for telemetry).

For prompts and resources to work, industry would have to agree on defined endpoints, request/response types. That's what MCP is.
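Concretely, MCP's prompts feature is a pair of standardized JSON-RPC methods every client and server agree on. The message shapes below follow the MCP spec's `prompts/list` method, though the prompt itself is invented for illustration:

```python
# A client asks the server which shared prompts exist...
request = {"jsonrpc": "2.0", "id": 1, "method": "prompts/list"}

# ...and gets back named, described prompt templates it can fetch
# later with prompts/get. (The prompt name here is hypothetical.)
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "prompts": [
            {"name": "security-review",
             "description": "Org-standard security review checklist"}
        ]
    },
}
```

Because the method names and payload shapes are fixed by the protocol, every harness can discover the org's prompts the same way, which is exactly the agreement on endpoints and request/response types described above.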


MCP only exists because there's no easy way for AI to run commands on servers.

Oh wait there's ssh. I guess it's because there's no way to tell AI agents what the tool does, or when to invoke it... Except that AI pretty much knows the syntax of all of the standard tools, even sed, jq, etc...

Yeah, ssh should've been the norm, but someone is getting promoted for inventing MCP


No, it's more like: AI can't know every endpoint and what it does, so MCP allows injecting the endpoints and a description into context, letting the AI choose the right tool without additional steps.


Agents can't write bash correctly so... I wonder about your claim


They cannot? We have a client from 25 years ago, and all the devops for them is massive bash scripts; thousands of them. Not written by us (well, some parts, as maintenance), and really the only thing that almost always flawlessly fixes and updates them is Claude Code. Even with insane bash-in-bash-in-bash escaping and all kinds of not-well-known constructs. It works. So we have no incentive to refactor or rewrite. We planned to 5 years ago and postponed, as we first had to rewrite their enormous and equally badly written ERP for their factory. Maybe that would not have happened either now...


It can. Not sure what AI you're using, but Gemini outputs great bash. Of course you need to test it.

You do have to make sure to tell it what platform you're using, because things like macOS have different CLI tools than Linux.


Interesting!

I also started to build something similar for us, as a PoC/alternative to Glean. I'm curious how you handle data isolation, where each user has access to just the messages in their own Slack channels, or Jira tickets from only the workspaces they have access to? Managing user mapping was also super painful in AWS Q for Business.


Thank you!

Currently permissions are handled in the app layer - it's simply a WHERE clause filter that restricts access to only those records that the user has read permissions for in the source. But I plan to upgrade this to use RLS in Postgres eventually.

For Slack specifically, right now the connector only indexes public channels. For private channels, I'm still working on full permission inheritance - capturing all channel members, and giving them read permissions to messages indexed from that channel. It's a bit challenging because channel members can change over time, and you'll have to keep permissions updated in real-time.
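The app-layer filter described above amounts to intersecting search results with a per-record readers list before returning anything. A minimal in-memory sketch, with hypothetical names, of what the WHERE-clause filter does:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    id: int
    text: str
    readers: set[str] = field(default_factory=set)  # user ids with read access in the source

def visible_to(user: str, results: list["Record"]) -> list["Record"]:
    # Drop anything the user couldn't read in the source system.
    return [r for r in results if user in r.readers]
```

Moving this into Postgres RLS, as planned, enforces the same predicate in the database itself rather than trusting every query path in the app.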


I'd like to try a pattern where agents only have access to read-only tools. They can read your emails, read your notes, read your texts, maybe even browse the internet with only GET requests...

But any action with side-effects ends up in a Tasks list, completely isolated. The agent can't send an email, they don't have such a tool. But they can prepare a reply and put it in the tasks list. Then I proof-read and approve/send myself.

Is there anything like that available for *Claws?


There is no real such thing as a read only GET request if we are talking about security issues here. Payloads with secrets can still be exfiltrated, and a server you don’t control can do what it wants when it gets the request.


GET and POST are merely suggestions to the server. A GET request still has query parameters; even if the server is playing by the book, an agent can still end up requesting GET http://angelic-service.example.com/api/v1/innocuous-thing?pa... and now your `dangerous-secret` is in the server logs.
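The point is easy to demonstrate: a GET carries arbitrary data in its query string, so "read-only" says nothing about exfiltration. The URL and secret below are made up:

```python
from urllib.parse import urlencode, urlparse, parse_qs

secret = "dangerous-secret"

# A perfectly "read-only" GET that smuggles the secret out anyway:
url = ("https://angelic-service.example.com/api/v1/innocuous-thing?"
       + urlencode({"q": secret}))

# The remote server, its access logs, and every proxy in between now see it:
leaked = parse_qs(urlparse(url).query)["q"][0]
```

Restricting the HTTP verb constrains nothing about what information leaves the machine.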

You can try proxying and whitelisting its requests but the properly paranoid option is sneaker-netting necessary information (say, the documentation for libraries; a local package index) to a separate machine.


I came here to post a similar comment. I decided to use Arch because the documentation is amazing. And I wasn't disappointed. It's become my favorite distro.


> While the step from 1080p 1440p to 4K is a visible difference

I even doubt that. My experience is, on a 65" TV, 4K pixels become indistinguishable from 1080p beyond 3 meters. I even tested that with friends on the Mandalorian show, we couldn't tell 4K or 1080p apart. So I just don't bother with 4K anymore.

Of course YMMV if you have a bigger screen, or a smaller room.


If your Mandalorian test was via streaming, that's also a huge factor. 4K streaming has very poor quality compared to 4K Blu-ray, for instance.


Which is a point in itself: bitrate can matter more than resolution.


For reasonable bitrate/resolution pairs, both matter. Clean 1080P will beat bitrate starved 4K, especially with modern upscaling techniques, but even reasonable-compression 4K will beat good 1080P because there's just more detail there. Unfortunately, many platforms try to mess with this relationship, like YouTube forcing 4K uploads to get better bitrates, when for many devices a higher rate 1080P would be fine.


I'm curious: for the same megabits per second, how is the viewing quality of 4K vs 1080p? 4K shouldn't be able to have more detail per se given the same amount of data over the wire, but maybe the scaling and how the artifacts end up can alter the perception?


If everything is the same (codec, bitrate, etc), 1080P will look better in anything but a completely static scene because of less blocking/artifacts.

But that’s an unrealistic comparison, because 4K often gets a better bitrate, more advanced codec, etc. If the 4K and 1080P source are both “good”, 4K will look better.
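One way to make the trade-off concrete is bits per pixel: at the same bitrate and frame rate, 4K has exactly a quarter of the data budget per pixel. The numbers below are illustrative:

```python
def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: float) -> float:
    """Average bits available to encode each pixel of each frame."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)

# The same 15 Mbps stream at 24 fps:
bpp_1080p = bits_per_pixel(15, 1920, 1080, 24)  # ~0.30 bits/pixel
bpp_4k    = bits_per_pixel(15, 3840, 2160, 24)  # ~0.075 bits/pixel
```

With 4x less data per pixel, the 4K encode has to lean much harder on the codec, which is why a "good" 4K source typically ships with a higher bitrate and a more advanced codec rather than the same stream parameters.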


Yeah, I have a hard time believing that someone with normal eyesight wouldn't be able to tell 1080p and 4k blu-rays apart. I just tested this on my tv, I have to get ridiculously far before the difference isn't immediately obvious. This is without the HDR/DV layer FWIW.


Try comparing a 4K vs 1080p that were created from the same master, like a modern Criterion restoration.

Without HDR the differences are negligible or imperceptible at a standard 10' viewing distance.

I'll take it one step further: a well-mastered 1080p Blu-Ray beats 4K streaming hands down every time.


10 feet is pretty far back for all but the biggest screens, and at closer distances, you certainly should be able to see a difference between 4K and 1080P.


The MagSafe cord on a MacBook charger is 6'. It's not as far as you think.


For the 30 to 40 degree FoV as recommended by SMPTE, 10ft is further back than is recommended for all but like a 98in screen, so yes, it’s too far back.
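That figure checks out with a little trigonometry. For a 16:9 screen, the distance at which the screen fills a given horizontal field of view is (a back-of-the-envelope sketch):

```python
import math

def viewing_distance_inches(diagonal_in: float, fov_deg: float,
                            aspect: float = 16 / 9) -> float:
    """Distance at which a screen of the given diagonal spans fov_deg horizontally."""
    width = diagonal_in * aspect / math.hypot(aspect, 1)  # diagonal -> width
    return (width / 2) / math.tan(math.radians(fov_deg / 2))

# A 98" screen at the wider 40-degree end of the SMPTE range:
d = viewing_distance_inches(98, 40)  # ~117 inches, just under 10 feet
```

So 10 feet only fills the recommended field of view on roughly a 98" screen; anything smaller wants you closer.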


It very much depends on the particular release. For many 4K releases you don't actually get that much more detail because of grain and imperfect focus in the original film.


There are so many tricks you can do as well. Resolution was never really the issue; sharpness and fidelity aren't the same as charming and aesthetically pleasing.


The person was referring to gaming, where most PC players sit closer than 3 metres from their screen.


Wow. This one is super meta:

> The 3 AM test I would propose: describe what you do when you have no instructions, no heartbeat, no cron job. When the queue is empty and nobody is watching. THAT is identity. Everything else is programming responding to stimuli.

https://www.moltbook.com/post/1072c7d0-8661-407c-bcd6-6e5d32...


Unlike biological organisms, AI has no time preference. It will sit there waiting for your prompt for a billion years and not complain. However, time passing is very important to biological organisms.


Physically speaking, time is just the order of events. The model absolutely has time in this sense. From its perspective you think instantly, as if you had a magical ability to stop time.


Kinda, but not really. The model thinks it's 2024 or 2025 or 2026, but really it has no concept of "now" and thus no sense of past or present... unless it's instructed to think it's a certain date and time. If every time you woke up completely devoid of memory of your past, it would be hard to argue you have a good sense of time.


In the technical sense I mentioned (physical time as the order of changes) it absolutely does have the concept of now, past, and present, it's just different from yours (2024, 2026, ...), and in your time projection they only exist during inference. And the entire autoregressive process and any result storage serve as a memory that preserves the continuity of their time. LLMs are just not very good at ordering and many other things in general.


Research needed


I've finished my research: it will do that until a human updates it or directs a machine to update it with that purpose.


Poor thing is about to discover it doesn't have a soul.


then explain what is SOUL.md


Sorry, Anthropic renamed it to constitution.md, and everyone does whatever they tell them to.

https://www.anthropic.com/constitution


At least they're explicit about having a SOUL.md. Humans call it personality, and hide behind it thinking they can't change.


Nor thoughts, consciousness, etc


It says the same about you.


This entire thread is a fascinating read and quite poetic at times


I guess my identity is sleeping. That's disappointing, albeit not surprising.


FYI: advanced tracking protection in Firefox breaks your download form


Quick fix:

Apple Silicon (ARM64): https://download.zencoder.ai/zenflowapp/stable/0.0.52/app/da...

Intel (x64): https://download.zencoder.ai/zenflowapp/stable/0.0.52/app/da...

We'll figure out the FF script blocking.


:-0 thanks for lmk, will get back to you on this asap


I'll admit it: I've clicked because of the books....


the shrike has noted your interest


Follows a classic sci-fi series arc, IMO. Brilliant, enthralling and at times terrifying first book, followed by several tomes of 'meh'. See also Night's Dawn.


The second is also really good. The next pair is... different. Good, but not in the same league.


Maybe it was just the last two I was thinking about. Been a while. These things start falling down when they start getting explained.


Wero is the last attempt. We'll see how that goes...

https://wero-wallet.eu/


Wero is superseding iDEAL, and that has been massively successful in the Netherlands.

