`length . repeat` won't typecheck either. Briefly, recursion and corecursion have the following types
cata : (f a -> a) -> mu f -> a
ana : (a -> f a) -> a -> nu f
where `mu` is the "least fixed point" of a base functor `f` and `nu` is the "greatest fixed point". In non-total, non-strict languages `mu f == nu f`, but in total languages they differ. We call types of the form `mu f` data and types of the form `nu f` codata. Data is guaranteed to be finite, while codata is not.
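For concreteness, here's a sketch of the data half in Haskell (my own encoding, not anything standard from this thread): Church-encoding `mu f` bakes the fold in, so a value of `Mu f` can only ever be consumed in finitely many steps.

{-# LANGUAGE RankNTypes #-}

-- A value of Mu f is its own fold: to build one you must say how to
-- collapse every layer, so it is necessarily finite.
newtype Mu f = Mu (forall a. (f a -> a) -> a)

cata :: (f a -> a) -> Mu f -> a
cata alg (Mu fold) = fold alg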
You can think of `ana` like a generator. Earlier I wrote something like
ana : Step f seed -> seed -> Nu f
where `Step f seed` is just `seed -> f seed`, but we might define `Nu f` using an existential to just be
data Nu f = forall seed . Nu (Step f seed, seed)
and then you can write an eliminator for this type which just handles each step. So, `repeat` might look like
data ListF a x = Nil | Cons a x
repeat :: a -> Nu (ListF a)
repeat a = Nu (\x -> Cons x x, a)
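The eliminator mentioned above might look like this (a sketch, modulo the usual pragmas; `observe` is my name for it, and it needs the base functor to be a `Functor`): it runs the step function once and repacks the new seed, unrolling exactly one layer on demand.

-- Unroll one layer of the codata: run the step, keep the machine around.
observe :: Functor f => Nu f -> f (Nu f)
observe (Nu (step, seed)) = fmap (\s -> Nu (step, s)) (step seed)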
and then `take` will be a parametric catamorphism on Nat which just repeatedly creates new layers and consumes them. Together these form a thing called a hylomorphism.
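Under the definitions above, that might look like the following sketch (`Nat`, `takeNu`, and the `Functor` instance are my additions, not from the thread):

data Nat = Z | S Nat

instance Functor (ListF a) where
  fmap _ Nil        = Nil
  fmap f (Cons a x) = Cons a (f x)

-- Recursion on the Nat (data) drives the codata: each S peels one
-- layer out of the Nu via observe.
takeNu :: Nat -> Nu (ListF a) -> [a]
takeNu Z     _  = []
takeNu (S n) xs = case observe xs of
  Nil        -> []
  Cons a xs' -> a : takeNu n xs'

So `takeNu (S (S Z)) (repeat 'x')` evaluates to `"xx"` (with the Prelude's `repeat` shadowed by the one above): the fold over the `Nat` and the unfold in `repeat`, fused together, are the hylomorphism shape.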
My understanding is that there's nothing wrong with using codata - the problem is with recursing on codata ("problem" in the sense that you're stuck producing codata). Something like `take n` can produce data when reading codata because the recursion happens partly on data (in this case, `n`).
I'm sure tel will let me know if I've misunderstood something.
That's pretty much exactly it! The other thing to note is that the reduction semantics of codata are to... just sit there until it's consumed.
If you use Haskell you get the idea that codata just expands and expands because of the behavior of the REPL and laziness. Sometimes a better intuition is that it's more like a Python generator which must be yielded explicitly (by a recursive algorithm recursing on some data).
Another way to think about it is that only recursion "drives" computation. Corecursion sets up a potentially infinite number of computation steps each of which can be chosen to be churned through via recursion.