With raw SQL you need to explicitly test every query, and the shape you are verifying is granular down to the individual field level.
When you map data onto an object model (in the DTO sense, not the OOP sense), you have bigger building blocks.
This gives a simpler application that is more reliable.
Obviously you need to pick a performant ORM - and it seems a lot of people in these threads have been traumatized.
Personally, I run a complex application where developers freely use a GraphQL schema and requests are below 50ms p99 - the GraphQL is translated into joins by the ORM, so we do not have any N+1 issues, etc.
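Roughly, the translation looks like this - a minimal sketch with a hypothetical users/posts schema, not our actual code:

```typescript
// GraphQL request: { users { name posts { title } } }

// Naive per-field resolution issues one query for the users, then one
// per user for their posts - the classic N+1 pattern:
//   SELECT id, name FROM users;
//   SELECT title FROM posts WHERE author_id = 1;
//   SELECT title FROM posts WHERE author_id = 2;
//   ... once per user.

// A join-aware ORM instead compiles the selection set into a single
// statement and regroups the flat rows into the nested response shape:
const compiled = `
  SELECT u.id, u.name, p.id AS post_id, p.title
  FROM users u
  LEFT JOIN posts p ON p.author_id = u.id
`;
```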
The issue with GraphQL tends to be unoptimized joins instead. Is your GraphQL API available to public consumers? How do you handle them issuing inefficient queries?
I've most often seen this countered through data loaders (batched queries that are merged back together in code) instead of joins, or through query whitelists.
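For instance, with the dataloader npm package (the table and query helper here are hypothetical stand-ins):

```typescript
import DataLoader from "dataloader";

interface Post { id: number; authorId: number; title: string; }

// Hypothetical query helper; swap in your actual client (pg, knex, ...).
async function queryPostsByAuthors(authorIds: readonly number[]): Promise<Post[]> {
  // e.g. SELECT id, author_id, title FROM posts WHERE author_id = ANY($1)
  return []; // stub for the sketch
}

// One loader per request: all .load() calls made in the same tick are
// collected into a single batch, issued as one query, and split per key.
const postsByAuthor = new DataLoader<number, Post[]>(async (authorIds) => {
  const rows = await queryPostsByAuthors(authorIds);
  return authorIds.map((id) => rows.filter((r) => r.authorId === id));
});

// In a resolver: each user asks for its own posts, but only one SQL
// round trip happens per batch instead of one per user (no N+1).
// const posts = await postsByAuthor.load(user.id);
```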
The issue I've seen with GraphQL isn't necessarily the number of queries run, but rather the performance of said queries (i.e. most SQL queries are not performant without proper indexes for the specific use case, and GraphQL allows a lot of flexibility in what queries users can run).
Yes - one needs to ensure that the data is well indexed - that is reasonable.
But an index does not need to yield a single result. It is OK for an index to reduce the result set to tens or a couple of hundred results. That is well within the performance requirements (... of our app).
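Concretely, something like this is all we require - a sketch with a hypothetical orders table:

```typescript
// Hypothetical table: orders(id, customer_id, status, created_at).
// The index is not unique; it only needs to narrow the scan so a lookup
// touches tens or hundreds of rows instead of the whole table.
const ddl = `
  CREATE INDEX idx_orders_customer_status
  ON orders (customer_id, status)
`;

// A typical generated query can then resolve via the index:
const query = `
  SELECT id, created_at
  FROM orders
  WHERE customer_id = $1 AND status = 'open'
`;
```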
> You assume your ORM does the basic data mapping right
You know, it should. There's no good reason for an ORM to ever fail at runtime due to mapping problems instead of at compile time or start time. (Except, of course, if you change the schema while the software is running.)
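For illustration, a start-time check could look roughly like this (hypothetical helpers and sample data; mature ORMs ship this built in, e.g. Hibernate's schema validation mode):

```typescript
// Compare the ORM's declared mappings against the live database schema
// before serving any traffic, so drift crashes the boot, not a request.
interface ColumnMapping { table: string; column: string; }

// Stand-in for metadata the ORM would normally collect from entity definitions.
const declaredMappings: ColumnMapping[] = [
  { table: "users", column: "id" },
  { table: "users", column: "name" },
];

// Stand-in for introspection, e.g. reading information_schema.columns.
async function introspectColumns(): Promise<Set<string>> {
  return new Set(["users.id", "users.name"]); // stub for the sketch
}

async function validateMappingsOrDie(): Promise<void> {
  const live = await introspectColumns();
  const missing = declaredMappings.filter(
    (m) => !live.has(`${m.table}.${m.column}`),
  );
  if (missing.length > 0) {
    throw new Error(
      "Unmapped columns: " +
        missing.map((m) => `${m.table}.${m.column}`).join(", "),
    );
  }
}
```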
I have to respond here, as it seems the thread depth limit has been reached.
Since you've mentioned GraphQL, you are probably comparing an ORM in that sense to a traditional custom API backed by raw SQL. In a fair comparison, both versions would do exactly the same thing and require the same essential tests. Assuming more query variations for the raw SQL version is just assuming it does more, or somehow does it badly in terms of architecture. Which is not a fair comparison.
You will have a bigger variety of queries when you don't use an ORM - this puts a higher load on software testing to reach the same level of reliability.
You realize that’s abysmally bad performance for any reasonable OLTP query, right? Sub-msec (as measured by the DB, not including RTT etc.) is very achievable, even at scale. 2-3 msec for complex queries.