Perceptual uniformity is in some ways the opposite of the linearization suggested above: the L* component of CIELAB is much more like the gamma-encoded values of sRGB than a linear measure of light.
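To make the similarity concrete, here's a small sketch comparing the sRGB encoding function against CIE L* (scaled to [0, 1]) over a few linear luminance values. Both formulas are the standard published ones; the sample values are just illustrative:

```python
import math

def srgb_encode(y):
    """sRGB OETF: linear light -> gamma-encoded value, both in [0, 1]."""
    return 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

def cielab_lightness(y):
    """CIE L* for relative luminance y = Y/Yn in [0, 1]."""
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

# Both curves compress linear light in a broadly similar way:
for y in (0.05, 0.18, 0.5, 0.9):
    print(f"Y={y:.2f}  sRGB={srgb_encode(y):.3f}  L*/100={cielab_lightness(y) / 100:.3f}")
```

For 18% grey, for instance, sRGB encodes to about 0.46 while L*/100 is about 0.49: close, and nothing like the linear value 0.18.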
It seems tough to come up with hard-and-fast rules for whether to mimic the linear physical processes or to work in a perceptual space more like the human visual system. I'd love to hear about more rigorous work in this area; most of what I read boils down to "this way works better on these images".
It's interesting, for example, that using sinc-type filters to resize truly linear data, like that from HDR cameras, usually produces ugly dark haloing artifacts around small specular highlights, despite that being the most "physically correct" way to do it. Doing the same operation in a more perceptual space immediately sorts out the problem.
Resampling in CIELAB space tends to work better than resampling in gamma-encoded R′G′B′ space, because at least you never average two pixels and get a lightness outside the range of the input lightnesses, which is what causes the worst artifacts in R′G′B′. A linear space will give a better result, but CIELAB results are usually acceptable.
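To illustrate the out-of-range lightness problem: averaging saturated red and green channel-wise in gamma-encoded R′G′B′ yields a color darker than either input, whereas averaging L* directly stays between the inputs by construction. A sketch using the standard sRGB decode and Rec. 709 luminance weights (the red/green pair is just a convenient worst-ish case):

```python
import math

def srgb_decode(v):
    """sRGB EOTF: gamma-encoded value -> linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def lightness(rgb_encoded):
    """CIE L* of an sRGB pixel, via Rec. 709 luminance of the linear values."""
    r, g, b = (srgb_decode(v) for v in rgb_encoded)
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# Channel-wise average in gamma-encoded R'G'B':
mixed = tuple((a + b) / 2 for a, b in zip(red, green))

l_red, l_green, l_mix = lightness(red), lightness(green), lightness(mixed)
print(f"L*(red)={l_red:.1f}  L*(green)={l_green:.1f}  L*(R'G'B' avg)={l_mix:.1f}")
# The R'G'B' average lands below *both* input lightnesses; averaging L*
# directly, as CIELAB resampling does, always stays between them.
```

Here the mixed pixel's L* comes out around 52, below both red (about 53) and green (about 88), so the blend is darker than either source pixel.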