I get that perhaps you were using the proverbial "I" in your last sentence as an example of something everyone should be willing to do in/for their career, but does someone of your stature even need a real interview, much less a LC test, to obtain a top job? It's kind of funny imagining a young tech bro interviewing you, tbh.
I generally do not use clever algorithms in my code. I just use straightforward ones. Rarely, I might need a better one and go looking for it (like a better hash algorithm). I rarely use a data structure more complicated than an array, list, binary tree, hash, or single inheritance.
What I have, though, is decades of experience with what works and what doesn't. (My favorite whipping boy is macros. Macros look like great productivity boosters. It takes about 10 years to realize that macros are a never-ending source of confusion; they obfuscate every code base that uses them. I could go on about this! ...)
I have become pretty good at writing modules that minimize dependencies, and pretty good at the user interface design of a language.
But still, if the job wanted a leetcode test, I'd take it, no problem. I'd study up first, though.
If a young tech bro were interviewing me, I'd suggest he show me his best code, and I'd do a review of it :-) The point of that would not be to humiliate him, but to demonstrate the value I can bring to improving code quality.
If I were being interviewed for a job writing a faster divide routine (the ones I wrote were shift-subtract: slower, but bulletproof), a better random number generator, a cryptographically secure hash function, a tighter compression algorithm, or a faster sort, I'm not the right guy for that.
P.S. I was once asked to review the code of a famous programmer I won't name. I was shocked to discover that the large codebase had 3 different implementations of bubblesort in it. I replaced them all with a call to qsort(). He asked me how I managed to speed it up :-/
My favorite blind spot is how our definitions of quality change over time. I knew someone who had a mature codebase which a new developer made substantially faster by removing his old optimizations. He'd measured very real improvements back when he wrote that hand-rolled assembly code on early Pentium generations, but by the time we revisited it less than a decade later, the combination of compiler and processor improvements meant that the C reference implementation was always faster. (I was assisting with the port to PowerPC, and at first we thought it was just XLC being especially good there, but then we tested on x86 with GCC and found the same result.)
Beyond the obvious lesson about experience and sunk costs, it was also a great lesson about how much time you assume you have for maintenance: when he'd first written that code as a grad student, he'd been obsessed with performance, since that was a bottleneck for getting his papers out. As his career progressed he spent his time on other things, and since it wasn't broken he hadn't really revisited it; he "knew" where it was slow. Over time the compute costs eventually outweighed the original savings.