Hacker News | rgarcia's comments

This is great. I'm all for agents calling structured tools on sites instead of poking at DOM/screenshots.

But no MCP server today has tools that appear on page load, change with every SPA route, and die when you close the tab. Client support for this would have to be tightly coupled to whatever is controlling the browser.

What they really built is a browser-native tool API borrowing MCP's shape. If calling it "MCP" is what gets web developers to start exposing structured tools for agents, I'll take it.
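To make the "MCP's shape" point concrete, here is a hypothetical sketch of what a page-scoped tool listing might look like. The tool name, route, and schema below are invented for illustration; the point is that the `tools/list`-style payload is tied to the current SPA route and disappears with the tab:

```python
# Hypothetical sketch: an MCP-style tools/list payload a page might expose
# on load. The tool names and schema are invented; they track the current
# SPA route and die when the tab closes.
def tools_for_route(route: str) -> dict:
    """Return an MCP-shaped tool listing for the current page state."""
    if route == "/cart":
        tools = [{
            "name": "add_to_cart",
            "description": "Add a product to the cart shown on this page",
            "inputSchema": {
                "type": "object",
                "properties": {"product_id": {"type": "string"}},
                "required": ["product_id"],
            },
        }]
    else:
        tools = []  # other routes expose a different (or empty) tool set
    return {"tools": tools}
```

The client would have to re-list tools on every navigation, which is exactly the tight coupling to the browser that stock MCP clients don't have today.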


The article assumes that AI coding is thoughtless, but prompting is writing, and writing is thinking. If you approach AI coding the same way as regular programming, your thinking phase involves crafting a prompt that describes your thoughts.


It snapshots / pauses the entire unikernel instance after launching chromium, and then resumes the instance in <20ms with exactly the same state.


Is that safe? I was under the impression that snapshot/resume of, e.g., anything running crypto libraries was a minefield of duplicate keys and reused nonces.
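A toy sketch of the hazard being described: if a snapshot captures an in-memory RNG state, every resumed clone replays the same "random" bytes. Real crypto stacks pull from OS entropy, but a paused userspace CSPRNG pool has the same failure mode unless it is reseeded on resume. The `random.Random` here is just a stand-in for that in-memory state:

```python
import copy
import random

rng = random.Random(1234)          # stand-in for an in-memory RNG state
snapshot = copy.deepcopy(rng)      # "snapshot" the running instance

clone_a = copy.deepcopy(snapshot)  # resume #1
clone_b = copy.deepcopy(snapshot)  # resume #2

nonce_a = clone_a.getrandbits(96)
nonce_b = clone_b.getrandbits(96)
assert nonce_a == nonce_b          # both clones emit the identical "nonce"
```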


Another method you might consider implementing would be identity verification via SMS code. I've experienced this with docusign: https://support.docusign.com/s/document-item?language=en_US&...

It requires you to know the phone number of the signer, but for important stuff you typically do.


Yep, support for SMS verification will be added eventually, along with the ability to bring your own Twilio credentials when self-hosting.


The closest thing I could compare it to would be services that make it easier to get data into Redshift or S3. E.g. segment.io's redshift product: https://segment.com/redshift.


Also, their messaging seems a little disingenuous. Otto talks about how important it is to support microservice development and deployment, but Nomad lists as a con that Kubernetes has too many separately deployed and composed services.

This is consistent with a (reasonable) belief that microservice architecture is an important design pattern to support, but may not be the best approach for all problems. From reading the docs, my sense is that Nomad takes the position that for a cluster scheduler, fewer moving parts leads to lower operational overhead, which outweighs any benefit that microservices may bring. E.g., it's more difficult to deploy a microservice platform like Nomad if the platform itself is deployed as a set of microservices.


I think there's definitely a bootstrapping problem here: microservices are great if you have something like Kubernetes, Nomad, Mesos, etc. on which to run and deploy them, but you have to run your platform on something and be able to bring it back up if it goes down, and that's where I think Nomad might have the edge.


Agree (Kubernetes and OpenShift dev here). OpenShift is actually bundled as a monolithic Go binary that contains the full Kubernetes stack and client, the OpenShift admin client, the user client, and the JS web console for exactly that reason (even though it is all technically microservices on the server side). The single binary comes with downsides (it's 95M), but it makes the "try it out" flow much, much easier. The converse is also true: you have to be able to decouple those bits at scale, and you'll eventually want to start leveraging the platform to run itself.


Here's my understanding: if you use Wealthfront direct indexing, you mostly hold stocks, so you pay a minimal amount in ETF fees. This means your expense ratio is pretty close to the Wealthfront fee of 0.25%.

If you put everything in a Vanguard target retirement fund, your expense ratio is something like 0.18%. [1]

So as long as tax-loss harvesting adds more than a tiny fraction of a percent (~0.07%) to your returns, it seems optimal to use Wealthfront, no?

[1] https://personal.vanguard.com/us/funds/snapshot?FundId=0699&...
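The break-even in the comment above is just the fee gap. The 0.25% and 0.18% figures come from the comment; the rest is plain arithmetic:

```python
# Back-of-the-envelope break-even for tax-loss harvesting.
wealthfront_fee = 0.0025   # advisory fee; underlying stock holdings pay ~0 in ETF fees
vanguard_expense = 0.0018  # target retirement fund expense ratio [1]

breakeven_tlh_benefit = wealthfront_fee - vanguard_expense
print(f"{breakeven_tlh_benefit:.4%}")  # prints 0.0700%
```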


This is correct. Also, tax-loss harvesting isn't the only benefit of direct indexing. Charitable gifting of highly appreciated securities is a potentially huge benefit that direct indexing has over ETFs (even better given that it doesn't ratchet down your basis like TLH or create wash sale issues).
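The gifting advantage is easy to see with toy numbers (all invented for illustration): selling appreciated shares first triggers capital gains tax on the embedded gain, while donating the shares in kind delivers the full fair market value and the gain is never taxed.

```python
# Invented numbers: $1,000 basis, $5,000 fair market value, 20% cap-gains rate.
basis, fmv, cap_gains_rate = 1_000.0, 5_000.0, 0.20

# Route 1: sell, pay capital gains tax, donate the remainder.
donated_after_sale = fmv - (fmv - basis) * cap_gains_rate

# Route 2: donate the shares directly; the full FMV reaches the charity.
donated_in_kind = fmv

print(donated_in_kind - donated_after_sale)  # the in-kind route delivers more
```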


The Wealthfront fee is ON TOP OF the expense ratios of the underlying ETFs.


The problem of keeping this kind of configuration up-to-date is an interesting one. I'm curious if anyone has tried encoding the local development configuration in how they run integration tests so that it never falls out of date.
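One way to encode that idea (a sketch; the config file name and loader are hypothetical): have the integration-test suite consume the exact same config file developers use locally, so any drift fails CI instead of rotting quietly on someone's laptop.

```python
import json

# Stand-in for the contents of a shared config/dev.json that devs actually
# run against; in a real repo the tests would read the file itself.
DEV_CONFIG = """
{"services": {"api": "http://localhost:8080",
              "db": "postgres://localhost:5432/dev"}}
"""

def endpoints(config_text: str) -> dict:
    """Parse the shared dev config the way the app itself would."""
    return json.loads(config_text)["services"]

def test_every_service_has_an_endpoint() -> None:
    # Integration tests connect via the same endpoints devs use locally;
    # a stale or malformed config fails here, not at someone's desk.
    for name, url in endpoints(DEV_CONFIG).items():
        assert "://" in url, f"{name} endpoint looks wrong"

test_every_service_has_an_endpoint()
```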


I open-sourced a tool a few days ago that lets you do mass modification of git repos, including (but not limited to) adding license files: https://github.com/clever/gitbot. I've used it to add licenses to close to a hundred repositories at Clever. Would be great to see if others find it useful.


As few tools as possible (no task runners, etc...)

:thumbsup: to this sentiment. Whenever I see a "skeleton" starter project that includes tooling that could easily be added at a later date, I usually close the tab.

I think people forget that skeleton projects need to serve an educational purpose to newcomers. Each additional framework/tool that you throw into the mix compromises the skeleton's ability to do that.


I had to learn how to use gulp and npm and browserify and and and ohmygodthelistgoeson, and I tell you I feel exactly the opposite.

Trying to bolt tooling onto an existing project is daunting, especially when you're not sure how it works yet. When I started with a fresh skeleton project, two source files, the whole shebang set-up, it was super easy. Just add my extra source files in the dir, and it's picked up.

I love it when a skeleton project includes the entire toolset. I can easily remove stuff I don't need, even when I don't know how it works. But adding JS tooling myself? What a disaster.

Now, I look at my first project, the one I had to add gulp and everything to myself, and it's a complete mess. A lot of time and effort would have been saved if I had just started with a good directory structure right away.


I had to learn how to use gulp and npm and browserify and and and ohmygodthelistgoeson,...

My point is that you shouldn't have to learn a long list of tools in order to learn the underlying framework the project is trying to introduce.

I can easily remove stuff I don't need, even when I don't know how it works. But adding JS tooling myself? What a disaster.

Really? I've walked in on quite a few existing gulp/grunt/insert-latest-hotness setups and found them incredibly hard to follow (let alone unwind) without a solid understanding of these tools.


"ohmygodthelistgoeson"!


Amen :)


