Isn't it more work to create REST services and GraphQL to implement the BFF pattern? Also, why is GraphQL bad for data services? The N+1 problem the author mentions is easily solvable with dataloaders. Why create two layers on the backend?
If the underlying data store is SQL-capable, then dataloaders are just a hack trying to do what the SQL engine already does in a much more efficient way. It is much better to map the GraphQL query to a proper SQL query and avoid the round trips.
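To make the round-trip point concrete, here is a minimal sketch using SQLite (the `users`/`todo_lists` schema is hypothetical, invented for illustration): a single JOIN fetches the parent row and all its children in one query, where a naive resolver would issue one query per relationship.

```python
import sqlite3

# Hypothetical schema for illustration: users own todo_lists.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE todo_lists (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'alice');
    INSERT INTO todo_lists VALUES (10, 1, 'work'), (11, 1, 'home');
""")

# One round trip: the JOIN returns the user together with all of their
# lists, instead of one query for the user plus one per child fetch.
rows = conn.execute("""
    SELECT u.id, u.name, t.id, t.title
    FROM users u
    LEFT JOIN todo_lists t ON t.user_id = u.id
    WHERE u.id = ?
    ORDER BY t.id
""", (1,)).fetchall()
print(rows)  # [(1, 'alice', 10, 'work'), (1, 'alice', 11, 'home')]
```

The trade-off, as the next comment notes, is that the JOIN always pulls the children even when the client never asked for them.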
Yes, with a SQL engine you can fetch a parent row with all its children in one go. However, sometimes the client does not need the child rows at all, and in that case there will be overfetching from the database; maybe that is fine when the query is fast.
My main question was why implement a GraphQL BFF layer on top of a REST layer. How is this efficient, unless you have a legacy REST service that you want to expose as a GraphQL service? If I am writing a service from scratch, why would I create a REST service and then wrap it with a GraphQL service?
That is more related to the frontend: the frontend doesn't have to overfetch data.
But the GraphQL server will still fetch that data and just filter what goes out; it still has to get that data.
Example is a query like
```
{
  currentUser {
    id
    name
    todoLists {
      title
      items {
        name
      }
    }
  }
}
```
The resolver will likely get the whole user object from the database, then send only name and id. Once it has finished getting the user, it will query for the todo lists and send only the title (even though it got the whole row for each todo list); then, after it fetches those lists, it will query for the items and retrieve the whole row of each item from the database.
The data the server needed to fetch didn't change, only what the frontend receives. The server still loads all the data for that query, and GraphQL filters the results leaving the server.
Also, notice in the steps above that it queries AGAIN after each data set has been retrieved; this causes an N+1 problem.
Nothing inherent in the spec or in typical implementations fixes these issues. If you want to avoid fetching whole objects you need custom code, and to avoid the N+1 problem you need batching of data within requests that caches or consolidates nested lookups (like dataloader), plus some form of response caching to help with these issues.
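The batching idea behind dataloader can be sketched in a few lines. This is a hypothetical toy, not the real dataloader API: keys requested while resolving one level of the query are collected, then resolved with a single batch lookup instead of N separate queries.

```python
# Toy dataloader-style batcher (hypothetical API, for illustration only).
class TinyLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # takes a list of keys, returns matching values
        self.queue = []           # keys awaiting the next batch
        self.cache = {}           # per-request memoization

    def load(self, key):
        if key not in self.cache:
            self.queue.append(key)
        return lambda: self.cache[key]  # deferred read, valid after dispatch()

    def dispatch(self):
        keys = [k for k in self.queue if k not in self.cache]
        if keys:
            for k, v in zip(keys, self.batch_fn(keys)):
                self.cache[k] = v
        self.queue.clear()

queries = []
def batch_get_lists(user_ids):
    # Stands in for one query like: SELECT ... WHERE user_id IN (?, ?, ...)
    queries.append(user_ids)
    return [f"lists-for-{u}" for u in user_ids]

loader = TinyLoader(batch_get_lists)
a, b = loader.load(1), loader.load(2)  # two loads queued, nothing fetched yet
loader.dispatch()                      # one batched query for both keys
print(a(), b(), len(queries))          # lists-for-1 lists-for-2 1
```

Two child fetches collapse into a single batch query, which is exactly how the N+1 pattern gets reduced to one query per nesting level.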
Not siding against the tech, just clarifying those cons.
Yes, the client queries only for the data it needs, and the server returns only the data the client requested.
With this query,
{ currentUser { id name todoLists { title items { name } } } }
It is up to the server how it is implemented.
- The server can fetch all the data for the user, todo lists and items from the database in one go and resolve the client query mentioned above. In this case there will be overfetching from the database if the client only requested user information.
- The server can also fetch the data in three queries:
1. First fetch the user, let's say with id 1.
2. Then get all the todos for user id 1.
3. Then get all the items for all the todos from step 2 (batching/dataloaders).
All these queries can be executed in parallel on the server side. Does this make the server complex? Yes, but there is also a benefit: when the client only requests currentUser, the server does not fetch any todo lists or items from the database.
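The conditional-fetching benefit can be sketched as follows. This is a hypothetical hand-rolled resolver, not a real GraphQL library: it inspects which fields the client asked for and skips the child query entirely when todoLists was not requested.

```python
# Sketch of a resolver that only touches child data when it was requested.
# `db` is a stand-in for real database queries (hypothetical shape).
def resolve_current_user(requested_fields, db):
    user = db["users"][1]                      # "query 1": the user row
    result = {f: user[f] for f in requested_fields if f in user}
    if "todoLists" in requested_fields:
        result["todoLists"] = db["lists"][1]   # "query 2" runs only when needed
    return result

db = {"users": {1: {"id": 1, "name": "alice"}},
      "lists": {1: [{"title": "work"}]}}

print(resolve_current_user(["id", "name"], db))
# {'id': 1, 'name': 'alice'}  -- no todo-list lookup happened at all
```

This is the complexity/efficiency trade the comment describes: the resolver code grows, but the database only does the work the client's field selection actually requires.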
No organization is perfect, because organizations are made of people and people are not perfect. Could the resources be utilized in a better way? Maybe so. Even with all these inefficiencies, what Bill Gates is doing through his foundation should be commended.
Thanks for the tip. Does FluentMigrator require me to duplicate the table structure in its DSL, or does it have a way to pick up EF Core tables and detect changes?
What's the common use-case scenario for migration handling in the C# world? I've seen a lot of people doing checks on application startup: do I have pending migrations? If so, run them. In the Ruby world, checking and running migrations is usually part of the deploy routine, not application startup, and is usually handled by a separate CLI command.
That's actually a weird thing with a lot of MS tech, especially .NET desktop development. They barely used WinForms themselves, and WPF only got used in Visual Studio 2010 (and made it slow and buggy for the first releases). They put out stuff for developers but don't use it themselves.
WPF has also been used for PowerShell ISE, Windows Live Writer, and a bunch of other things. A problem with adopting Windows Forms or WPF internally is probably that no Windows core component or application could depend on .NET or non-system libraries, and a lot of applications are older than .NET and certainly wouldn't be rewritten.