The AI learns proxy signals. Name, work experience, skills (e.g., an emphasis on A11Y) ... all have some predictive power for gender, for some sorts of disabilities, and so on.
You could fix the problem by going nuclear and omitting any data that could serve as a proxy for the discriminatory signals, but it's also possible to explicitly feed the discriminatory signals into the model and enforce that no combination of other data amounting to knowledge of them can influence the model's predictions.
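To make the second approach concrete: this is a hedged sketch only, not anyone's actual hiring pipeline. Libraries like fairlearn let you hand the sensitive attribute to the training procedure explicitly and fit under a constraint (here demographic parity), so the resulting predictor's selection rates can't depend on it no matter how it leaks through the other columns. The synthetic "resume" features and data below are made up for illustration.

```python
# Illustrative only: synthetic resume-like features where one column leaks a
# protected attribute, plus a fairness-constrained training step.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 2000

# Protected attribute (e.g. disability status) -- fed in explicitly, not hidden.
a = rng.integers(0, 2, size=n)

# Ordinary features; one correlates with the protected attribute
# (a proxy, like listing a11y-related skills).
years_exp = rng.normal(8, 3, size=n)
a11y_signal = a + rng.normal(0, 0.5, size=n)
X = np.column_stack([years_exp, a11y_signal])

# Historical labels that encode bias against the a == 1 group.
y = ((years_exp > 7) & ~((a == 1) & (rng.random(n) < 0.5))).astype(int)

# Plain model: free to exploit the proxy column.
plain = LogisticRegression().fit(X, y)

# Constrained model: the reduction enforces (approximate) demographic parity,
# i.e. selection rates can't differ by group, whatever the proxies say.
fair = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
fair.fit(X, y, sensitive_features=a)

for name, pred in [("plain", plain.predict(X)), ("constrained", fair.predict(X))]:
    rates = [pred[a == g].mean() for g in (0, 1)]
    print(f"{name}: selection rate by group = {rates[0]:.2f} vs {rates[1]:.2f}")
```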
There was a great paper floating around for a bit about how you could actually manage that as a data augmentation step for broad classes of models (constructing a new data set which removed implicit biases assuming certain mild constraints on the model being trained on it). I'm having a bit of trouble finding the original while on mobile, but they described the problem as equivalent to "database reconstruction" in case that helps narrow down your search.
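I can't vouch for whether this matches that paper, but the simplest (linear) flavor of "construct a new data set with the implicit bias removed" looks something like the sketch below: residualize each feature against the protected attribute and train the downstream model on what's left. The function name and data are made up for illustration.

```python
# Hypothetical preprocessing sketch (not necessarily the paper's method):
# build a new feature matrix in which the part of each column that is
# linearly predictable from the protected attribute has been removed.
import numpy as np
from sklearn.linear_model import LinearRegression

def remove_protected_component(X: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Return X with each column residualized against protected attribute a."""
    A = a.reshape(-1, 1)
    X_clean = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        reg = LinearRegression().fit(A, X[:, j])
        X_clean[:, j] = X[:, j] - reg.predict(A)  # keep only the residual
    return X_clean

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.integers(0, 2, 500)
    X = np.column_stack([rng.normal(size=500), a + rng.normal(0, 0.3, 500)])
    Xc = remove_protected_component(X, a)
    # No linear combination of the cleaned columns is correlated with a anymore,
    # which is roughly the "mild constraints on the model" caveat.
    print(np.corrcoef(Xc[:, 1], a)[0, 1])  # ~0 after residualization
```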
Many (most?) employers ask whether you are disabled when you fill out a job application. I personally don't consider myself disabled, but I have one of the conditions listed as a disability in that question. I never know what to put. I thought it wouldn't matter if I just said yes, that I'm disabled, since I literally have one of the conditions listed, but people online who work in hiring say I will most likely be discriminated against if I do that. Sure, it's illegal, but companies apparently do it anyway.
I wonder whether the answer to the disability question is something the AI uses when evaluating candidates, and whether it has learned to just toss out anyone who says yes.
E.g., if you knew one or two sign languages, wouldn't you list them under languages? What if the job involves in-person communication with masses of people?