I'd agree it's not critical, but discard the assumption that requests within the data center will be fast. Apps often have to send requests to third parties, which will frequently be slow. Hopefully not as slow as a round trip across the Atlantic, but still orders of magnitude worse than an internal query.
You will often be in the state where the client uses HTTP2, and the apps use HTTP2 to talk to the third party, but inside the data center things are HTTP1.1, fastcgi, or similar.
Why does HTTP2 help with this? Load balancers keep a pool of keepalive connections and use one per in-flight request, so they don't experience head-of-line blocking, and slow start is disabled on those connections. So even if the latency of the final request is high, why would HTTP2 improve the situation?
If every request is quick, you can easily reuse connections, file handles, threads, etc. If requests are slow, you will often need to spin up new connections, as you don't want to block waiting for a response that might take hundreds of milliseconds.
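A minimal sketch of that point, assuming a simplified pool model (not a real HTTP client): under HTTP/1.1 semantics a connection can carry only one request at a time, so a keepalive pool can only reuse a connection once it goes idle, while HTTP/2 multiplexes all in-flight requests as streams over a single connection. The simulation below counts how many connections the pool ends up opening:

```python
import heapq

def peak_connections(arrivals_ms, service_ms, http2=False):
    """How many connections a simple keepalive pool opens.

    arrivals_ms: times (ms) at which requests start.
    service_ms:  how long each request holds its connection.
    HTTP/1.1: reuse an idle connection if one exists, otherwise open
    a new one. HTTP/2: everything multiplexes over one connection.
    """
    if http2:
        return min(1, len(arrivals_ms))  # streams share one connection
    free_at = []   # min-heap: times at which open connections go idle
    total = 0
    for t in sorted(arrivals_ms):
        if free_at and free_at[0] <= t:
            # an idle keepalive connection exists: reuse it
            heapq.heapreplace(free_at, t + service_ms)
        else:
            # all connections busy with a slow request: open a new one
            heapq.heappush(free_at, t + service_ms)
            total += 1
    return total

# Ten requests arriving 1 ms apart:
print(peak_connections(list(range(10)), service_ms=1))    # fast: 1 conn
print(peak_connections(list(range(10)), service_ms=200))  # slow: 10 conns
print(peak_connections(list(range(10)), 200, http2=True)) # HTTP/2: 1 conn
```

Fast responses let every request reuse the same connection; the same arrival rate against a 200 ms third party forces one connection per in-flight request, which is where connection limits start to bite.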
But I did start by saying it's not important. It's a small difference, unless you hit a connection limit.