> Does this mean other state actors are beyond needs of RCE vulns
No, from experience, any nation-state actor would love to take advantage of an RCE vuln: this was painted from the perspective of Bottlerocket, which is in use by the DoD, NSA, etc.
It feels far too early for a protocol that's barely a year old with so much turbulence to be donated into its own foundation under the LF.
A lot of people don't realize this, but the foundations that wrap up to the LF have revenue pipelines that are supported by those foundations' events (KubeCon, for example, brings in a LOT of money for the CNCF), courses, certifications, etc. And, by proxy, the projects support those revenue streams for the foundations they're in. The flywheel is _supposed_ to be that companies donate to the foundation, those companies support the projects with engineering resources, they get a booth at the event for marketing, and the LF ensures the health and well-being of the ecosystem and foundation through technical oversight committees, elections, a service desk, owning the domains, etc.
I don't see how MCP supports that revenue stream, nor does it seem like a good idea at this stage: why get a certification for "Certified MCP Developer" when the protocol is evolving so quickly and we've yet to figure out how OAuth is going to work in a sane manner?
Mature projects like Kubernetes becoming the backbone of a foundation, as it did with the CNCF, makes a lot of sense: it was a relatively proven technology at Google with a lot of practical use cases for the emerging world of "cloud" and containers. MCP, at least for me, has not yet proven its robustness as a mature and stable project: I'd put it in the "sandbox" category of projects which are still rapidly evolving and proving their value. I would have much preferred for Anthropic and a small strike team of engaged developers to move fast and fix a lot of the issues in the protocol vs. it getting donated and slowing to a crawl.
At the same time, the protocol's adoption has been 10x faster than Kubernetes', so if you go by that metric, it actually makes sense to donate it now to let other actors in. For instance, without this, Google will never fully commit to MCP.
It doesn't matter, because only a minority of product companies worldwide (enterprise or otherwise) use MCP.
I'd bet only a minority uses LLMs in general.
For what it's worth, I don't write MCP servers that are shell scripts. Mine are HTTP servers that load data from a database. It's nothing much more exciting than a REST API with an MCP front end thrown on top.
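To make "an MCP front end thrown on top" concrete, here is roughly what one tool invocation looks like on the wire. The tool name and its arguments are made up for illustration, but the JSON-RPC shape follows the MCP spec:

```python
import json

# What an MCP client sends when it invokes a tool: plain JSON-RPC 2.0,
# per the MCP spec. "lookup_order" is a hypothetical tool that, inside
# the server, just hits the existing REST endpoint / database query.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_order",
        "arguments": {"order_id": "A-1234"},
    },
}

# What the server sends back: results are wrapped in content blocks
# rather than returned as a raw HTTP response body.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": json.dumps({"status": "shipped"})}
        ]
    },
}

print(json.dumps(request, indent=2))
```

The server's job is mostly translating between this envelope and whatever the REST API already returns.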
Many people only use local MCP resources, which is fine... it provides access to your specific environment.
For me however, it's been great to be able to have a remote MCP HTTP server that responds to requests from more than just me. Or to make the entire chat server (with pre-configured remote MCP servers) accessible to a wider (company internal) audience.
Honest question: Claude can understand and call REST APIs given their docs, so what is the added value? Why should anyone wrap a REST API in another layer? What does it unlock?
I have a service that other users access through a web interface. It uses an on-premises open model (gpt-oss-120b) for the LLM and a dozen MCP tools to access a private database. The service is accessible from a web browser, but this isn’t something where the users need the ability to access the MCP tools or model directly. I have a pretty custom system prompt and MCP tool definitions that guide their interactions. Think of a helpdesk chatbot with access to a backend database. This isn’t something that would be accessed with a desktop LLM client like Claude. The only standards I can really count on are MCP and the OpenAI-compatible chat completions.
I personally don’t think of MCP servers as having more utility than local services that individuals use with a local Claude/ChatGPT/etc client. If you are only using local resources, then MCP is just extra overhead. If your LLM can call a REST service directly, it’s extra overhead.
Where I really see the benefit is when building hosted services or agents that users access remotely. Think more remote servers than local clients. Or something a company might use for a production service. For this use-case, MCP servers are great. I like having some set protocol that I know my LLMs will be able to call correctly. I’m not able to monitor every chat (nor would I want to) to help users troubleshoot when the model didn’t call the external tool correctly. I’m not a big fan of the protocol itself, but it’s nice to have some kind of standard.
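That standardization is concrete: MCP tool definitions carry JSON Schema, so they map almost one-to-one onto OpenAI-style function calling. A rough sketch, with a made-up helpdesk tool:

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Convert an MCP tool definition into OpenAI function-calling format.

    MCP's inputSchema is already JSON Schema, which is exactly what the
    chat-completions `tools` parameter expects, so the mapping is direct.
    """
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["inputSchema"],
        },
    }

# A hypothetical tool, as an MCP server would advertise it via tools/list:
mcp_tool = {
    "name": "search_tickets",
    "description": "Search the helpdesk database by keyword.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

print(mcp_tool_to_openai(mcp_tool)["function"]["name"])
```

This is why one hosted service can swap models (or serve several) without rewriting any per-API glue.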
The short answer: not everyone is using Claude locally. There are different requirements for hosted services.
(Note: I don’t have anything against Claude, but my $WORK only has agreements with Google and OpenAI for remote access to LLMs. $WORK also hosts a number of open models for strictly on-prem work. That’s what guided my choices…)
Gatekeeping (in a good way) and security. I use Claude Code in the way you described but I also understand why you wouldn’t want Claude to have this level of access in production.
> 5. Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI. What would this area look like if autonomous AI agents were already part of the team? This question can lead to really fun discussions and projects.
I'm still baffled by the disconnect between what executives believe (and are being sold) and what is currently possible with AI tools: I know of no tools that can fully replace headcount, especially in engineering.
Today's AI output looks great from a distance and less good the closer you inspect it. It's unfortunate but not all too surprising that those taking a high-level view have an over-optimistic impression of it.
We can't talk about lowering headcount without due consideration to the loss of precious collaboration and diversity, things we spent the last 2 decades having rammed down our throats as critical to any exceptional team. Mask is off now I guess.
I'm glad this is making the rounds, since I haven't seen a lot on the "AI-DevOps" or infrastructure side of actually running an at-scale AI service. Many of the AI inference engines that offer an OpenAI-compatible API (like vLLM, llama.cpp, etc.) make it very approachable and cost effective. Today, this vLLM service powers all of our batching micro-services, which scrape content to generate text for over 40,000 repos on GitHub.
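As a sketch of what one of those batch calls looks like against a self-hosted vLLM instance (the endpoint URL and model name here are placeholders for whatever your deployment serves):

```python
import json
import urllib.request

# Hypothetical endpoint: vLLM exposes an OpenAI-compatible API,
# including POST /v1/chat/completions.
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_request(readme_text: str) -> dict:
    """Build an OpenAI-style chat-completions payload for one repo's README."""
    return {
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # whatever vLLM is serving
        "messages": [
            {"role": "system", "content": "Summarize this repository README."},
            {"role": "user", "content": readme_text},
        ],
        "max_tokens": 256,
    }

def summarize(readme_text: str) -> str:
    """POST one request to the vLLM server and return the model's reply."""
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(build_request(readme_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example usage (requires a running vLLM server):
#   print(summarize("A fast JSON parser written in Rust."))
```

Because the API shape is the standard chat-completions one, the same client code works whether the backend is vLLM, llama.cpp's server, or a hosted provider.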
I'm happy to answer any / all questions anyone might have!
> Too much witchcraft and hand waving in the AI space at the moment.
Yeah, +1: I found most frameworks (like LangChain, LlamaIndex) to be a bit too magical for my taste, whereas the well-understood and well-structured OpenAI API makes building on top of inference much easier. Things are moving really, really fast, but I'm excited about where they're headed.
I would love to continue the conversation: I think it's a really important topic in a world of increasingly deep dependencies. I too hadn't heard of the Gorilla maintainers reaching out for new contributors or sunsetting the project until after the fact.