Hacker News | AnimalMuppet's comments

What's wrong with it is, when I extend with a new subtype, I have to fix up the locations that use the type. Potentially all of the locations that use it - I at least have to look at all of them.

With the polymorphic approach, I just have to create the new subtype, and all the users can do the right thing (if they were written with polymorphism in mind, anyway - if they use virtual functions on the base class).


Why would I change the users at all instead of just modifying the dispatch method in the super type?

I think the question is, do you know at compile time what the concrete type is? In situations where you do, use static dispatch. (I'm not sure I'd even call that "polymorphism". If you know the static type, it's just a function on a type, and who cares that other types have functions with the same name?) But if you don't know the concrete type at compile time, then you must use dynamic dispatch.

And you can use each approach with the same type at different points in the code - even for the same function. It just depends on your local knowledge of the concrete type.
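The tradeoff reads clearly in code. A minimal Python sketch (the shapes are just an illustrative stand-in): adding a new subtype leaves polymorphic call sites alone, but forces an edit to every type-switch.

```python
# Illustrative shapes: polymorphic dispatch vs. a type-switch.
from dataclasses import dataclass

PI = 3.14159

@dataclass
class Circle:
    radius: float
    def area(self) -> float:      # each subtype carries its own logic
        return PI * self.radius ** 2

@dataclass
class Square:
    side: float
    def area(self) -> float:
        return self.side ** 2

def area_switch(shape) -> float:
    # Type-switch version: adding a new shape means editing this
    # function (and every switch like it), or callers get a TypeError.
    if isinstance(shape, Circle):
        return PI * shape.radius ** 2
    if isinstance(shape, Square):
        return shape.side ** 2
    raise TypeError(f"unhandled shape: {type(shape).__name__}")
```

Adding a Triangle with its own area() needs no change at call sites that call shape.area(); area_switch is exactly the location you'd have to go fix up.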


Honest question: Could you define "agent" in this context?

I like simonw's definition: "An LLM agent runs tools in a loop to achieve a goal."
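That definition is small enough to sketch. A toy Python version, with a scripted fake_llm standing in for a real model (the action/message format here is invented for illustration, not any vendor's API):

```python
# Toy "tools in a loop" agent. `fake_llm` is a scripted stand-in for a
# real model; the action format is made up for this sketch.

def run_tool(name, arg):
    tools = {"add": lambda s: sum(int(x) for x in s.split("+"))}
    return tools[name](arg)

def agent(goal, llm, max_steps=5):
    history = [goal]
    for _ in range(max_steps):
        action = llm(history)             # the model decides the next step
        if action["type"] == "final":     # goal reached: leave the loop
            return action["answer"]
        result = run_tool(action["tool"], action["arg"])
        history.append(result)            # tool output goes back to the model
    raise RuntimeError("step budget exhausted")

def fake_llm(history):
    # Scripted behavior: call the add tool once, then report its result.
    if len(history) == 1:
        return {"type": "tool", "tool": "add", "arg": "2+3"}
    return {"type": "final", "answer": history[-1]}
```

With this stand-in, `agent("what is 2+3?", fake_llm)` runs one tool call and returns 5; the loop structure is the whole definition.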

I guess "agent" isn't the best term here, since the LLM wouldn't be driving the logic in the daemon. Using an LLM to select which item to add to the cart would mimic the behavior of a full agentic loop without the risk of it going off the rails and completing the purchase.


So if I understand correctly, in an agent, the LLM is in charge, but it can send part of the work off to other tools. And the problem here is that we're trying to have something in charge over the LLM, which is the reverse of the "agent" setup. Do I have that right?

Yeah, OpenClaw agents have a full set of tools to interact with a browser in arbitrary ways. My idea was to instead give it a tool for a browser wrapper with a limited API surface. And that tool could use LLMs internally in specific contexts.
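Roughly, something like this Python sketch (CartBrowser and the driver API are hypothetical names invented here, not OpenClaw's actual interface): the wrapper exposes only search and add-to-cart, and simply has no method that can complete a purchase.

```python
# Hypothetical narrow wrapper: the agent gets this one tool instead of
# arbitrary browser access. All names here are invented for illustration.

class CartBrowser:
    """Deliberately small surface: no checkout(), no submit_payment()."""
    def __init__(self, driver):
        self._driver = driver          # real browser automation lives behind this

    def search(self, query):
        # An LLM could be used inside the driver here, e.g. to rank results.
        return self._driver.search(query)

    def add_to_cart(self, item_id):
        self._driver.add_to_cart(item_id)

class FakeDriver:
    """Stand-in driver so the sketch runs without a browser."""
    def __init__(self):
        self.cart = []
    def search(self, query):
        return ["item-1", "item-2"]
    def add_to_cart(self, item_id):
        self.cart.append(item_id)
```

The safety property is structural: whatever the model generates, the tool it holds cannot express "buy".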

I can think of at least one possibility - confidentiality failure. If the customer data was not contained - especially if it was DoD data - that would be reason to do such a thing.

OK, but we learned decades ago about putting safety guards on dangerous machinery, as part of the machinery. Sure, you can run LLMs in a sandbox, but that's a separate step, rather than part of the machinery.

What we need is for the LLM to do the sandboxing... if we could trust it to always do it.


Again, the trust is for the human/self. It's auto-complete: it hallucinates and commits errors; that's the nature of the tool. It's for the tool's users to put appropriate safeguards around it. Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and starting to burn your clothes when you expose your arm to it. You're expecting a dumb tool to be smart and know better. I suspect that is because of the "AI" marketing term and the whole supposition that it is some sort of pseudo-intelligence. It's just auto-complete. When you have it run code in an environment, it could auto-complete 'rm -rf /'.
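One form such a safeguard can take is an allowlist sitting between the model and the shell, so an auto-completed 'rm -rf /' never executes. A minimal Python sketch (the allowlist contents are illustrative, and a real guard needs much more than this):

```python
# Guard outside the model: LLM-suggested shell commands pass through an
# allowlist before anything runs. Contents of ALLOWED are illustrative.
import shlex

ALLOWED = {"ls", "cat", "grep", "echo"}

def is_safe(command: str) -> bool:
    # Reject shell metacharacters outright; this sketch doesn't try to
    # reason about pipelines, substitutions, or chained commands.
    if any(ch in command for ch in ";&|$`><"):
        return False
    try:
        parts = shlex.split(command)
    except ValueError:
        return False                  # unparseable input is rejected
    return bool(parts) and parts[0] in ALLOWED
```

So `is_safe("echo hello")` passes while `is_safe("rm -rf /")` is rejected: the containment is part of the machinery around the fire, not a property of the fire.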

> Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and starting to burn your clothes when you expose your arm to it.

True. But I expect my furnace to be trustworthy to not burn my house down. I expect my circular saw to come with a blade guard. I expect my chainsaw to come with an auto-stop.

But you are correct that in the AI area, that's not the kind of tool we have today. We have dangerous tools, non-OSHA-approved tools, tools that will hurt you if you aren't very careful with them. There's been all this development in making AI more powerful, and not nearly enough in ergonomics (for want of a better word).

We need tools that actually work the way the users expect. We don't have that. (And, as you say, marketing is a big part of the problem. People might expect closer to what the tool actually does, if marketing didn't try so hard to present it as something it is not.)


I think I'm in agreement with you. But regardless of expectations, the tool works a certain way. It's just a map of its training data, which is deeply flawed but immensely useful at the same time.

Also, in that analogy, the LLM is the fire, not the furnace. If you use Codex, for example, that would be the furnace, and it does have good guardrails; no one seems to be complaining about those.


But that really is a false equivalence, as you state. Hawaii created a slate of alternate electors, in case the recount changed the result. But only one slate was endorsed by the governor; only one slate was presented at the Electoral College.

Having "alternate electors", who were not endorsed by the governor, and who didn't win the recount (or the court case), show up at the electoral college anyway, claiming to be the real thing... that is a whole different deal. It's not a good-faith contingency plan for if you win the recount; it's a bad-faith attempt to overthrow the vote after you lost the recount.


Software is a "good", as far as economic statistics go.

AI is helping produce more software, right? Including more software that is for sale?[1] Or more online services that are for sale?

[1] One of the interesting things here is going to be liability. You can vibecode an app. You can throw together a corporation to sell it. But if it malfunctions and causes damage, your thrown-together corporation won't have the resources to pay for it. Yeah, you can just have the company declare bankruptcy and walk away, leaving the user high and dry.

After that happens a few times, the commercial market for vibecoded apps may get kind of thin. In fact, the market for software sold by any kind of startup may also get thin.


Software stopped being a good when it no longer came in a box with finite inventory, that you had to pay for only once. It's part of the services economy, same as insurance or car rental services, regardless of how the Fed classifies it.

So is the premise here that making more software is going to have a deflationary effect on the entire economy of material goods? If so then that's obviously nonsensical.

That's not what I said, no. More software is going to have a deflationary effect on software, which is part of the "goods" economy if it's sold in a box, or even (I think) if it's sold as a download. If it's just online, it's probably considered a service. Either way, more of it, more cheaply produced, decreases the value of each piece.

I haven't paid for any software in a long time & my monthly subscriptions for data storage & basic AI add up to less than $100/month. Data storage is already as cheap as it could possibly get, so AI is not going to make that any cheaper. More money in the economy is not going to have a deflationary effect; prices for everything will go up, including software services like data backups, b/c the cost of the service has nothing to do w/ software & the hardware is only going to get more expensive.

Version control isn't the only thing like that (though it might be the most important). They ought to have some familiarity with the idea of a bug database, for example. Or a requirements database (a software engineer could go through their entire career without ever having to touch one, but they should be familiar with the idea).

You don't see any harm in a disease slowly robbing your mind, while you are not warned, and so you waste the time you have left?

Tribes reproduce as the people who make up the tribe reproduce.

Values reproduce as the people who hold them reproduce, plus as others adopt those values, minus as those who hold those values drop them.

But the US was supposed to be a country where values mattered more than tribe. "We hold these truths to be self-evident", and all that, and if you accepted the values, you belonged. That was an imperfect ideal, but it was the ideal until rather recently. I'm not sure to what degree it still is.


Are we ever allowed to stop being a "values country" and just be a normal one? Or are we at least allowed to change our values? Are we allowed to make that decision for ourselves?

A country based on shared values is normal.

And we are of course allowed to change that, if that is what the people want, but a minority should not make that decision on behalf of the whole.


If you want to change values like "equal rights" and "rule of law", you may be able to do so, but you probably have to amend the constitution to do it.
