If they did normalize the data across gender, then you're right that it may indicate bias on Amazon's part. But I don't know whether they did; the article doesn't provide enough information. I think it should be obvious, to Amazon as well, that if you want to correct for inequality in a trait (gender), you can't train a selection model on a dataset that is unequal in that trait: the model will learn the historical skew as if it were signal. I just don't think it follows that machine bias must mirror human bias.
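To make that concrete, here's a minimal sketch with synthetic data and scikit-learn (the features, coefficients, and dataset are all invented for illustration, not anything from Amazon's system): a classifier trained on historically skewed hiring decisions picks up gender as a predictor even though gender carries no information about skill.

```python
# Toy demonstration: a model trained on skewed historical hiring data
# learns the skew itself. All numbers here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Skill is the only thing that *should* matter; gender is independent of it.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)  # toy encoding: 0 = female, 1 = male

# Historical labels: same skill bar, but men were hired more often anyway.
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.8

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print(dict(zip(["skill", "gender"], model.coef_[0])))
# The gender coefficient comes out clearly positive: the model has
# reproduced the historical skew, not discovered anything about skill.
```

And note that resampling the data or dropping the gender column doesn't fully fix this when other features act as proxies for gender, which is part of why "just normalize the data" is harder than it sounds.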