I won’t completely argue against that, and I’ve also adopted Rust for smaller or speed-critical work. Still, I contend that a freaking enormous portion of computing workloads are IO-bound to the point that even Python’s speed is Good Enough, in an Amdahl’s Law kind of way.
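Back-of-the-envelope, with made-up numbers: if 95% of wall time is spent blocked on IO, a language 50x faster on the compute part only buys you about 5% overall.

    io_fraction = 0.95     # assumed share of time blocked on IO
    compute_speedup = 50   # assumed gain from rewriting the hot path

    # Amdahl's Law: overall speedup when only the compute part gets faster
    overall = 1 / (io_fraction + (1 - io_fraction) / compute_speedup)
    print(f"overall speedup: {overall:.2f}x")  # ~1.05x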
I hear this a lot, but can you really say you're consistently saturating a 1 Gbps line for netcode, or 6+ GB/s of NVMe bandwidth for disk data? In my experience that just doesn't happen unless the code was intentionally designed to minimize unnecessary work.
A lot of slow parsing tends to get lumped in with IO, and this is where Python can be most limiting.
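Easy to check on your own workload, for what it's worth. This is just a sketch (the path and newline-delimited JSON format are placeholders); the point is to time the read and the parse separately instead of lumping them together:

    import json
    import time

    path = "events.ndjson"  # placeholder: any large newline-delimited JSON file

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        raw = f.read()      # the actual IO
    t1 = time.perf_counter()

    # The part that usually gets blamed on "IO"
    records = [json.loads(line) for line in raw.splitlines()]
    t2 = time.perf_counter()

    print(f"read:  {t1 - t0:.3f}s")
    print(f"parse: {t2 - t1:.3f}s")  # often dwarfs the read on a fast drive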
I don't personally use Python directly for super IO-intensive work. In my common use cases, it's nearly always waiting for a database to return or for a remote network API to respond. I'm saturating neither disk nor network myself; my code mostly finds itself waiting for some other process to do that work on its behalf.
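A toy illustration of that pattern (the half-second sleep stands in for a database query or HTTP round trip; nothing here is a real API): twenty concurrent waits cost roughly the same wall time as one, so interpreter speed barely registers.

    import asyncio
    import time

    async def fake_api_call(i: int) -> int:
        # Stand-in for awaiting a database or remote service;
        # the latency is on the other end, not in Python.
        await asyncio.sleep(0.5)
        return i

    async def main():
        t0 = time.perf_counter()
        results = await asyncio.gather(*(fake_api_call(i) for i in range(20)))
        print(f"{len(results)} calls in {time.perf_counter() - t0:.2f}s")  # ~0.5s, not 10s

    asyncio.run(main())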
I've never understood this. Python cannot be optimized like C, C++ or Rust. It cannot do advanced functional things like OCaml, Haskell or Scala. It cannot run in browsers like TypeScript. It cannot do games programming like C# and it can't do crazy macro stuff like Clojure. I don't think it's even second best at those things.