Hacker News | dathanb82's comments

Skills are part of the repo, and CLIs are installed locally. In both cases it's up to you to keep them updated. MCP servers can be exposed and consumed over HTTPS, which means the MCP server owner can keep them updated for you.

Better sandboxing. Accessing an MCP server doesn't require you to give an agent permissions on your local machine.

MCP servers can expose tools, resources, and prompts. If you're using a skill, you can "install" it from a remote source by exposing it on the MCP server as a "prompt". That helps solve the "keep it updated" problem for skills - it gets updated by interrogating the MCP server again.

Or if your agentic workflow needs some data file to run, you can tell the agent to grab that from the MCP server as a resource. And since it's not a static file, the content can update dynamically -- you could read stock prices, the latest state of a JIRA ticket, and so on. It's like an AI-first, dynamic content filesystem.
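At the wire level both of these are just JSON-RPC calls to the MCP server. A minimal sketch of the two request shapes -- the prompt name "code-review" and the "jira://" URI are made up for illustration; real servers advertise their own via prompts/list and resources/list:

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request body, as used on the MCP wire."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Fetch a prompt (a remotely "installed" skill); re-requesting it later
# picks up whatever updates the server owner has published.
get_prompt = jsonrpc_request(1, "prompts/get", {
    "name": "code-review",                    # hypothetical prompt name
    "arguments": {"language": "python"},
})

# Read a resource -- e.g. a dynamically generated data file. The URI
# scheme is server-defined; "jira://TICKET-123" is purely illustrative.
read_resource = jsonrpc_request(2, "resources/read", {
    "uri": "jira://TICKET-123",
})

print(json.dumps(get_prompt))
```

The point being: because the client asks the server each time, the prompt or resource content can change without anything being reinstalled locally.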


You can install skills globally so they are available in all projects.

Oh man, Turbo Pascal was my first "real" programming language -- it was all various flavors of BASIC before, and mostly toy projects. The developer experience with Turbo Pascal (by which I guess I mostly mean Turbo Vision) was honestly pretty great.

The use of XML as a data serialization format was always a bad choice. It was designed as a document _markup_ language (it's in the name), which is exactly how it's being used with Claude, and that is actually a good use case.
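To make the distinction concrete, here's a sketch (the tag names and record are invented): markup annotates spans of free text, while serialization wraps every value in an element, which is the job JSON does with less ceremony.

```python
# XML as *markup*: tags delimit regions of a document, and the free text
# between them is the point -- this is the Claude-prompt style of use.
prompt = (
    "<instructions>Summarize the document below.</instructions>\n"
    "<document>\n"
    "Q3 revenue grew 12% year over year...\n"
    "</document>"
)

# XML abused as *serialization*: no mixed content, just values in boxes.
record_as_xml = "<user><name>Ada</name><age>36</age></user>"
record_as_json = '{"name": "Ada", "age": 36}'

print(prompt)
```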


And do what? Leave the ducting, pipes, and electrical lines exposed for the one time in 20 years you need to do something with them?

In addition to being much more attractive than exposed infrastructure, drywall and the insulation that gets put behind it help make your house much more energy efficient.


No -- use doors.


So a bunch of doors everywhere you don't open for potentially 100 years?


As you might expect from the description -- largely passed on via contaminated water -- the guinea worm is mostly present in areas of extreme poverty. Even if such a treatment were feasible, it would be inaccessible to most of the relevant population.


It’s probably even more pronounced, since it’s unlikely that someone is going to _average_ 180bpm for their entire workout, especially as they get older.


Exactly!

And that level of workout will probably produce an even lower resting heart rate than the 52 I cited. As for the top endurance athletes in distance running, cycling, and nordic skiing: although they might spend 10+ hours/week at threshold or some other training zone (roughly doubling those extra "exercise beats"), they also often have resting heart rates in the low 40s/minute, which will yield an even greater lifespan if it is measured in heartbeats.
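A quick back-of-the-envelope sketch of that tradeoff -- all the training volumes and heart rates here are illustrative assumptions, not physiology:

```python
# Weekly heartbeats = resting rate for most of the week, plus elevated
# rate during training. Lower resting HR wins despite the exercise beats.
MIN_PER_WEEK = 7 * 24 * 60

def weekly_beats(resting_bpm, exercise_bpm, exercise_min_per_week):
    resting_min = MIN_PER_WEEK - exercise_min_per_week
    return resting_bpm * resting_min + exercise_bpm * exercise_min_per_week

sedentary    = weekly_beats(70, 0, 0)          # resting 70, no training
recreational = weekly_beats(52, 150, 5 * 60)   # resting 52, 5 h/wk at ~150
elite        = weekly_beats(42, 160, 10 * 60)  # resting 42, 10 h/wk at ~160

print(sedentary, recreational, elite)
```

With these made-up numbers the elite athlete spends the fewest beats per week, the sedentary person the most, so a fixed heartbeat "budget" lasts longest for the athlete.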


Yes: I'm 57 and my heart rate rarely goes over 150. ~135bpm is "run a 5k twice a week" pace, only maintained for 30 mins, and my resting heart rate is ~50.


The auto-attach flag isn't a huge deal, since it's a one-liner that can be statically documented and the fix works in all cases. The bigger issue is the JDK / runtime team's stance that libraries should not be able to dynamically attach agents, and that the auto-attach flag might be removed in the future. You can still enable Mockito's agent with the -javaagent flag, but you have to provide the path to the Mockito jar, and getting that path right is highly build-system-dependent and not something the Mockito team can document in a way that minimizes the entry threshold for newcomers.


Just shy of $600K in 2022, based on data from IRS and BLS: https://smartasset.com/data-studies/what-it-takes-to-be-in-t...


It's available (usually as an optional feature) in lots of languages where dependencies are linked statically. The most standard terminology I'm aware of for it is "dead code elimination".

In JavaScript, webpack (as well as other module bundlers) supports it, though they call their dead code elimination mechanism "tree shaking".
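The core idea behind both linker-level dead code elimination and bundler tree shaking is a reachability analysis over the call/import graph. A toy sketch (the graph and function names are invented):

```python
# Keep only functions reachable from the entry points; everything else
# is "dead" and can be dropped from the final artifact.
def eliminate_dead_code(call_graph, entry_points):
    """call_graph maps each function name to the names it calls."""
    live, stack = set(), list(entry_points)
    while stack:
        fn = stack.pop()
        if fn not in live:
            live.add(fn)
            stack.extend(call_graph.get(fn, []))
    return live

graph = {
    "main": ["parse", "render"],
    "parse": ["tokenize"],
    "tokenize": [],
    "render": [],
    "debug_dump": ["tokenize"],  # never reached from main -> dead
}
print(eliminate_dead_code(graph, ["main"]))
```

Real implementations are complicated by dynamic dispatch, reflection, and side-effectful module imports, which is why bundlers need hints like ES module syntax to do this safely.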


Lets you define queries over some data set declaratively, and instead of recomputing the query over the entire data set every time you want an updated answer, it uses Differential Dataflow <https://github.com/frankmcsherry/differential-dataflow> to efficiently(^1) calculate the new results by updating the results of the previous query execution in response to new updates to the data set.

^1: I'm not an expert on Differential Dataflow, so I don't know what "efficiently" means in this context, other than "should be faster than running the query from scratch."
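To illustrate the general idea of incremental view maintenance -- far simpler than what Differential Dataflow actually does, and not its API -- here's a toy running GROUP BY count that applies (key, +1/-1) deltas instead of rescanning the data set:

```python
from collections import Counter

class CountView:
    """Maintains SELECT key, COUNT(*) ... GROUP BY key incrementally."""

    def __init__(self, rows):
        self.counts = Counter(rows)  # one full pass at setup only

    def apply_delta(self, key, diff):
        """diff = +1 for an inserted row, -1 for a deleted row."""
        self.counts[key] += diff
        if self.counts[key] == 0:
            del self.counts[key]     # drop empty groups

view = CountView(["a", "a", "b"])
view.apply_delta("b", +1)  # insert another "b"
view.apply_delta("a", -1)  # delete one "a"
print(dict(view.counts))   # updated answer, no rescan of the data set
```

Each update costs O(1) here regardless of data set size; Differential Dataflow generalizes this kind of delta propagation to arbitrary composed operators, including joins and iteration.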

