Even if it doesn't have index size limits, I can't imagine that it doesn't have row size limits and some kind of off-row storage or multi-page storage with resulting overhead.
Looking at the doc[0], I see this:
> PostgreSQL uses a fixed page size (commonly 8 kB), and does not allow tuples to span multiple pages. Therefore, it is not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code.
So, yeah, large values are stored differently: once a row gets big enough (roughly past the ~2 kB TOAST threshold), oversized fields are compressed and/or moved out of line, and reading them back carries extra cost.
Sorry, a blanket statement like "you should use varchar with no length limit" seems ill-advised to me. Sure, you don't want to use varchar(30) for last names, but varchar(255) will probably serve you just fine.
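For what it's worth, you can see PostgreSQL's per-column TOAST strategy yourself via `pg_attribute`. A minimal sketch (the table `people` here is hypothetical, and note that in PostgreSQL the length limit on varchar is only a check constraint; bounded and unbounded varchar share the same on-disk representation):

```sql
-- Hypothetical table: bounded vs. unbounded string columns
CREATE TABLE people (
    id        serial PRIMARY KEY,
    last_name varchar(255),   -- bounded: same storage as text
    bio       text            -- unbounded: TOASTed only when the row is large
);

-- attstorage shows the TOAST strategy per column:
-- 'x' = extended (compressible, can move out of line),
-- 'p' = plain (never TOASTed, e.g. fixed-width ints)
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'people'::regclass AND attnum > 0;
```

Both `last_name` and `bio` come back as `'x'`, while `id` is `'p'`; whether a value actually gets TOASTed depends on the row size at write time, not on the declared limit.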
[0]: https://www.postgresql.org/docs/current/storage-toast.html