
Web RPC protocols are not trying to replace local function calls though. They're trying to replace remote API calls (REST, GraphQL etc.). And all remote protocols have to solve problems like availability, latency, and reliability. Those problems remain. This is trying to solve a different problem with different trade-offs.

The problem it solves is unnecessary protocol noise and the trade-off is tight coupling between frontend and backend code. I think it's obvious that the trade-off makes sense for small projects and MVPs: most changes require touching both ends and typically the team isn't differentiated between backend and frontend anyway.

But even with large projects that have to cater to many clients, a tightly-coupled RPC system can solve problems like overfetching and underfetching as a straightforward method for implementing the BFF pattern[1].
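A minimal sketch of the overfetching/underfetching point, with a hypothetical BFF-style tailored endpoint (all names and the in-memory data layer here are made up for illustration):

```typescript
// Instead of the client stitching together /users/:id and /orders?userId=...
// and discarding the fields it doesn't need, the backend exposes one function
// shaped exactly for one screen.

type ProfileScreenData = { userName: string; recentOrderTitles: string[] };

// Stand-ins for whatever data layer the backend actually uses.
const users = new Map([[1, { id: 1, name: "Ada", email: "ada@example.com" }]]);
const orders = [
  { userId: 1, title: "Keyboard" },
  { userId: 1, title: "Mouse" },
];

// The "tailored endpoint": no overfetching (email never leaves the server),
// no underfetching (one call instead of two).
async function getProfileScreenData(userId: number): Promise<ProfileScreenData> {
  const user = users.get(userId);
  if (!user) throw new Error("no such user");
  return {
    userName: user.name,
    recentOrderTitles: orders.filter(o => o.userId === userId).map(o => o.title),
  };
}
```

The function is coupled to one screen by design; that coupling is exactly the trade-off the parent comment describes.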

[1] https://medium.com/mobilepeople/backend-for-frontend-pattern...



Exactly.

Telefunc is about replacing generic endpoints (RESTful/GraphQL APIs) with tailored endpoints (RPC).

That's why the trade-off is simpler architecture (generic endpoints are often an unnecessary abstraction) vs. decoupling (tailored endpoints require the frontend and backend to be deployed in sync).
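To make "tailored endpoints (RPC)" concrete, here is a self-contained sketch of the general RPC idea; this is not Telefunc's actual implementation, and every name in it is hypothetical:

```typescript
// The client calls what looks like a local function; a generic transport
// ships the function name and arguments to the server, which dispatches
// to the real implementation.

// Server side: the "tailored endpoints" are just exported functions.
const serverFunctions: Record<string, (...args: any[]) => Promise<any>> = {
  async createTodo(text: string) {
    return { id: 1, text }; // a real server would persist this
  },
};

// Stand-in transport; a real RPC layer would serialize over HTTP here.
async function transport(name: string, args: unknown[]): Promise<any> {
  const fn = serverFunctions[name];
  if (!fn) throw new Error(`unknown RPC: ${name}`);
  return fn(...args);
}

// Client side: a Proxy makes remote calls look like plain function calls.
const rpc = new Proxy({} as Record<string, (...args: any[]) => Promise<any>>, {
  get: (_t, name) => (...args: any[]) => transport(String(name), args),
});

// Usage reads like a local call, but crosses the network in a real setup:
// const todo = await rpc.createTodo("buy milk");
```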

In the long term I foresee RPC also being used by very large teams. I have ideas around this; stay tuned. I can even see companies like Netflix ditching GraphQL for RPC, although this won't happen any time soon.

In the meantime, RPC is a natural fit for small/medium sized teams that want to ship today instead of spending weeks setting up GraphQL.


Every generation keeps trying RPC and learns its lesson… eventually.

On Windows it was DCOM, then COM+, then .NET Remoting, then WCF, then who knows; I lost track.

REST APIs are simple and easily debuggable; magic remote API layers are not.

That’s why REST is still prevalent even though WebSockets had better performance characteristics (and in my testing they did have performance advantages), yet 7 years after my testing, how many sites are running WebSockets?


For lots of APIs this is somewhat true. However, I recently took a deep dive into "REST" and realized that for many APIs, you really have to contort how you think in order to make it fit into that model.

It generally works for retrieving data (with caveats...), but I found that when modifying data on the server, with restricted transformations that don't necessarily map 1-1 with the data on the server, it feels like forcing a square peg into a round hole. I tend to think of verbs in that case, which starts to look like RPC.
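A hypothetical example of that square-peg feeling (the order/archive domain here is invented): "archive an order" is a verb, which RPC states directly, while REST forces a choice between a fake sub-resource (POST /orders/42/archive) or a partial state update (PATCH /orders/42), neither of which maps 1-1 to the intent.

```typescript
type Order = { id: number; status: "open" | "archived" };
const db = new Map<number, Order>([[42, { id: 42, status: "open" }]]);

// RPC style: the verb is the function name.
function archiveOrder(id: number): Order {
  const order = db.get(id);
  if (!order) throw new Error("not found");
  order.status = "archived";
  return order;
}

// REST-ish style: the same action disguised as a state transfer,
// leaving the server to infer intent from the patched fields.
function patchOrder(id: number, patch: Partial<Order>): Order {
  const order = db.get(id);
  if (!order) throw new Error("not found");
  Object.assign(order, patch);
  return order;
}
```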

("REST" is in quotes because the REST model proposed by Fielding (ie, with HATEOAS) looks almost nothing like "REST" in practice).


If Telefunc is bug-free (which I intend to be the case), then you won't need to debug Telefunc itself.

For error tracking, Telefunc has hooks for that.

For debugging user-land, you can `console.log()` just like you would with normal functions.

Do you see a concrete use case where this wouldn't suffice?


> If Telefunc is bug-free (which I intend to be the case), then you won't need to debug Telefunc itself.

And if no one crashes, you don't need airbags.

If I'm being blunt, reality doesn't give a shit what you think. It's better to design with the assumption there are bugs so that _WHEN_ they happen, the users aren't up a creek without a paddle.

These sorts of implicit assumptions are how painful software is made.


Software can be verified to be correct; the stupidity of others is unavoidable.


Did someone solve the halting problem while I wasn't looking?


No, but somebody created type checking, linting and testing.

Not sure what's up with the CS people and their halting problem. In the industry we've solved (as in developed ways to deal with) the problem of verification decades ago.

Also, debuggers. Nobody said the verification can't be done by a human.


Verifying software is correct implies solving the halting problem.

What you mean is "no known bugs", so maybe use those words instead. "Verification of correctness" has a specific meaning in our industry.

yeah yeah, I get it, those stupid CS people and their P=NP talk. Don't they know you can obviously verify correctness without verifying it for all possible inputs? What next, you can't prove a negative such as absence of bugs?!?


> verifying software is correct implies solving the halting problem.

No, producing a program that can verify that all correct programs are correct implies solving the halting problem.

Verifying a particular piece of software is correct just implies you've proved that one piece correct. (And probably wasted your time dicking around with it only to find that the actual issue was in software you treated as 'outside' of the software you were verifying...)


What you're describing is the programming version of approximation. It's understood that it has a margin of error due to the approximation.

What you're claiming here is that if you check enough inputs you've proven it correct, and what I'm telling you is that's not the case.

The fact is, nothing has been proven, the wording on the webpage itself is more honest (no known bugs and a suite of automated tests).

To verify a program is correct is a much stronger claim than what is even on the website. And that requires restricting the input or solving the halting problem.
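The "restricting the input" escape hatch is worth making concrete: for a finite domain you can verify a particular function correct by exhaustive checking, with no halting-problem trouble, because the check itself trivially terminates. A sketch (the clamp function and its spec are invented for illustration):

```typescript
// Claimed-correct implementation.
function clampByte(n: number): number {
  return Math.min(255, Math.max(0, n));
}

// Specification, written independently of the implementation.
function spec(n: number): number {
  if (n < 0) return 0;
  if (n > 255) return 255;
  return n;
}

// Exhaustive check over a restricted integer domain: if this returns true,
// clampByte is proven correct for every input in [lo, hi].
function verifyClamp(lo: number, hi: number): boolean {
  for (let n = lo; n <= hi; n++) {
    if (clampByte(n) !== spec(n)) return false;
  }
  return true;
}
```

Outside such a restricted domain, the grandparent's point stands: tests and types give "no known bugs", not a proof of correctness.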


Generic endpoints are a design smell. Junior devs making junior-dev problems because someone who's been coding for 10 minutes wrote a series on API design on Medium.

See the backend-for-frontend pattern.

Debuggability wins.



