
Eh... I'm torn on it and could write at length about this, but here are my off-the-cuff thoughts. For context, my co-founder and I are quite active in the GraphQL community. If you've ever used apollo-cache-persist[1] or graphql-crunch[2], we authored those libraries. Our startup, if you're curious, is a social podcasting app written in react-native (https://banter.fm). We learned a lot of GraphQL lessons building it.

The tl;dr is that GraphQL gives you a lot of flexibility, and typed schemas are nice, but it doesn't come for free, and I miss the tooling around HTTP.

Pros:

- Typed schema

- Custom queries retrieve all the data the client needs in one request

- Really easy to implement, both server-side and client-side

- Server-side, it's trivial to have any field resolved in any way you want (Redis, memcached, Postgres, some random service)

- Easy and arbitrary mutations. There's no pontificating over which verbs are the proper ones to use.

- If you use React, the community and ecosystem often assume you're using GraphQL, so it may make sense to adopt it just so you don't swim against the current.
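
To make the resolver-flexibility point concrete, here's a sketch of the idea; the `redis` and `db` clients are hypothetical stand-ins stubbed inline so it's self-contained, and a tiny driver fills in for the GraphQL execution engine:

```javascript
// Each field on a type can resolve from a completely different backend.
const redis = { get: async (key) => JSON.stringify({ count: 3 }) }; // stand-in for a Redis client
const db = { findUser: async (id) => ({ id, name: "Ada" }) };       // stand-in for Postgres

const resolvers = {
  User: {
    // name comes from the database...
    name: async (user) => (await db.findUser(user.id)).name,
    // ...while unreadCount comes from a cache, and the query doesn't care.
    unreadCount: async (user) =>
      JSON.parse(await redis.get(`unread:${user.id}`)).count,
  },
};

// Tiny driver standing in for the GraphQL execution engine:
async function resolveUser(id) {
  const user = { id };
  return {
    id,
    name: await resolvers.User.name(user),
    unreadCount: await resolvers.User.unreadCount(user),
  };
}
```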

Cons:

- The payloads can quickly become huge because there is often a ton of duplication in a response (depending on your query patterns). See this example on the SWAPI demo: https://bit.ly/2uOFZBP. The result is 1MB of JSON, ~97% of which is data that exists somewhere else in the response already.

- Refactoring types is often impossible to do in a backwards-compatible way, even if the shape of the data is the same.

- You don't know what data you'll need in advance, so you're basically doing all of your joins by going back and forth between the API resolver and your data sources (this can be alleviated with persisted queries, but those come with their own set of issues). A typical query to hydrate a response for a user's feed in our app requests ~1,100 objects. After caching and consolidating queries into multi-gets, it translates to about 50 distinct DB queries.

- Tooling: Working at the HTTP level simply has better tooling and tons of infrastructure around caching and serving content (Varnish, nginx, etc.)
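
The duplication con is easy to reproduce in a few lines. This is made-up data, not the actual SWAPI response, but it shows the mechanism: a nested query embeds the same object under every parent that references it, and serialization copies it each time.

```javascript
// Toy reproduction of the duplication problem: the same film object is
// embedded under every character who appeared in it, so JSON.stringify
// repeats the film body verbatim once per character.
const film = { title: "A New Hope", openingCrawl: "x".repeat(1000) };
const characters = ["Luke", "Leia", "Han"].map((name) => ({
  name,
  films: [film], // one shared object in memory, but copied on serialization
}));

const json = JSON.stringify({ data: { characters } });
const copies = json.split('"A New Hope"').length - 1; // 3 copies, not 1
```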

We found that GraphQL payloads were so large that older mobile phones were spending significant time parsing them. We created graphql-crunch to de-duplicate responses before sending them over the wire. This led to nice performance improvements on mobile platforms. It also gave us referential equality when persisting the results to cache, allowing us to eliminate a lot of work client-side.
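
The core idea can be sketched in a few lines. This is just the flavor of the transform, not graphql-crunch's actual wire format: hoist repeated subtrees into a value table and refer to them by index, so each distinct object is serialized, transferred, and parsed once.

```javascript
// Bottom-up de-duplication sketch: identical subtrees collapse to one
// table entry; every occurrence becomes a small { $ref: index } pointer.
function crunch(root) {
  const table = [];
  const seen = new Map(); // canonical JSON of a subtree -> table index

  function visit(value) {
    if (value === null || typeof value !== "object") return value;
    const walked = Array.isArray(value)
      ? value.map(visit)
      : Object.fromEntries(
          Object.entries(value).map(([k, v]) => [k, visit(v)])
        );
    const key = JSON.stringify(walked);
    if (!seen.has(key)) {
      seen.set(key, table.length);
      table.push(walked);
    }
    return { $ref: seen.get(key) };
  }

  return { table, root: visit(root) };
}
```

The receiving side would walk the structure in reverse, resolving each `$ref` back into the shared object, which is also where the referential equality mentioned above comes from: both characters end up pointing at the same film instance.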

If you're going to use GraphQL, embrace the JavaScript ecosystem. Use Apollo[3], and use DataLoader[4]. Roughly 40% of our queries get resolved for "free" by DataLoader.
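
What DataLoader buys you can be shown with a toy version (the real library has a richer API; this sketch only captures the two behaviors that matter here): all `.load()` calls made in the same tick coalesce into a single batched fetch, and repeat loads of the same key are served from a per-request cache, which is where the "free" resolutions come from.

```javascript
// Toy batching loader in the spirit of DataLoader (not its actual API).
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // async (keys) => values, in the same order
    this.cache = new Map(); // key -> promise; repeat loads are "free"
    this.queue = [];
  }

  load(key) {
    if (this.cache.has(key)) return this.cache.get(key);
    const promise = new Promise((resolve, reject) => {
      // First load in this tick schedules one flush for the whole batch.
      if (this.queue.length === 0) {
        process.nextTick(() => {
          const batch = this.queue.splice(0);
          this.batchFn(batch.map((job) => job.key)).then(
            (values) => batch.forEach((job, i) => job.resolve(values[i])),
            (err) => batch.forEach((job) => job.reject(err))
          );
        });
      }
      this.queue.push({ key, resolve, reject });
    });
    this.cache.set(key, promise);
    return promise;
  }
}
```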

If I were to do it again, I'd at least prototype a REST API with resources designed specifically for HTTP cache-ability (that is, break out session-specific resources/attributes vs. shared resources) and see if HTTP/2 multiplexing + nginx caching + ETags results in a good client experience. But I also mostly work on the backend while my co-founder mostly works on the frontend, so we have different desires and constraints. Ideally, as few requests as possible would make it to code that I wrote. With GraphQL that's nearly impossible.

[1] https://blog.apollographql.com/announcing-apollo-cache-persi... [2] https://github.com/banterfm/graphql-crunch [3] https://www.apollographql.com/server [4] https://github.com/facebook/dataloader


