The Googlebot has a crawl budget which limits how deep into a website it will traverse relative to a given entrypoint. This can lead to unexpected omissions, for example GitHub issues with a high issue number and few or no cross-references.
For example, Apache Airflow issue 9939 wasn't indexed by the Googlebot and can't be found through Google Search.
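To make the mechanism concrete, here is a minimal sketch of a breadth-first crawler that stops once a page budget or depth limit is reached. The limits and names are hypothetical illustrations, not Googlebot's actual parameters or algorithm; the point is only that a page sitting many link hops from the entrypoint, such as a high-numbered issue reachable mainly through long pagination chains, may never be fetched before the budget runs out.

```python
# Sketch of a budget- and depth-limited crawl (hypothetical limits,
# not Googlebot's real parameters).
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

MAX_PAGES = 500   # hypothetical crawl budget for one site
MAX_DEPTH = 3     # hypothetical depth limit relative to the entrypoint

def crawl(entrypoint: str) -> set[str]:
    """Breadth-first crawl that stops at the page budget or depth limit."""
    seen = {entrypoint}
    queue = deque([(entrypoint, 0)])
    fetched = set()

    while queue and len(fetched) < MAX_PAGES:
        url, depth = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        fetched.add(url)

        if depth >= MAX_DEPTH:
            # Pages linked only from here are never discovered.
            continue

        for link in BeautifulSoup(response.text, "html.parser").find_all("a", href=True):
            target = urljoin(url, link["href"])
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))

    return fetched
```

Anything outside the `fetched` set behaves exactly like the unindexed issue above: it exists, it's public, but the crawler never reached it.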
And yet, stumbling upon such bits of information can be important, and there's a general expectation that Google can find anything on the public internet.
How can internet users demand that big tech companies invest in such long-tail problems, and not just in mainstream content that might be easier to sell ads against?