You're remembering wrong, unless they did something crazily non-standard. Whole numbers are represented exactly in binary floating point. At least until you run out of significand bits (2^53 for a 64-bit double), at which point they start rounding to even numbers, then to multiples of 4, and so on.
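A quick Python sketch of that threshold behavior, assuming 64-bit IEEE 754 doubles (Python's `float`):

```python
# Whole numbers are exact in a 64-bit double up to 2**53. Past that,
# odd integers can no longer be represented and round to a neighbor.
exact = 2**53                          # 9007199254740992
print(float(exact) == exact)           # True: still exact
print(float(exact + 1) == exact)       # True: 2**53 + 1 rounds down
print(float(exact + 2) == exact + 2)   # True: even numbers still exact
```

So the first integer a double can't hold is 2^53 + 1, exactly as described above.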
There is one spot where binary floating point has trouble compared to decimal, and that's dividing by powers of 5 (or 10). If you divide by powers of 2 they both do well, and if you divide by any other number they both do badly. If you use whole numbers, they both do well.
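You can see both cases with Python's `fractions` module, which recovers the exact rational value a `float` actually stores:

```python
from fractions import Fraction

# Dividing by a power of 2 is exact in binary floating point.
print(Fraction(1 / 8))    # 1/8, stored exactly
# Dividing by a power of 5 (or 10) is not: 0.1 is the nearest
# representable double, a fraction with a power-of-two denominator.
print(Fraction(1 / 10))   # 3602879701896397/36028797018963968
# And 1/3 is inexact in binary AND decimal: both "do badly" here.
print(Fraction(1 / 3))    # not equal to 1/3
```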
Also, even if you do want decimal, you don't want BCD. You want an encoding that stores 3 digits per 10 bits, like the densely packed decimal (DPD) encoding used by IEEE 754-2008's decimal formats.