I didn't, and I'm graduating with a BS in CS from Stanford in a few weeks. We have an introductory stats requirement, but I found that the stats class I took my sophomore year for my (dropped) economics major was far more enlightening than my CS stats requirement.
I think the fundamental issue is that there's very little focus on statistical practice in most of these courses. Social scientists have it good: they're always taught how to deal with and interpret statistics in the same way they'll have to in their line of work. It's totally useless to throw a bunch of theory at students. I think that teaching statistics in the context of real problems is the only way (most) students will actually learn and come to appreciate how useful it is.
The department revamped the major this year, and they've introduced a mandatory stats class tailored to CS students: cs109.stanford.edu. I haven't looked through it at all, but I think it's a step in the right direction.
Finally, I have to give a shout-out to The Little Handbook of Statistical Practice (http://www.tufts.edu/~gdallal/LHSP.HTM) in this thread. It's an amazing resource for anyone who works with statistics. I've referenced it while doing performance testing, building an A/B testing system, and working on problem sets. From the website:
"My aim is to describe, for better or worse, what I do rather than simply present theory and methods as they appear in standard textbooks. This is about statistical practice--what happens when a statistician (me) deals with data on a daily basis."
For a contrasting view of what happens when statistical practice is applied mindlessly, read "The Black Swan". It's all about how people use statistical models built on Normal distributions in places where that's patently unjustified, and the price we pay for it.
I played this game myself with a friend. I sent him ten samples from a distribution he didn't know and asked him to estimate the mean; then 100 samples, then 1000. His estimate kept climbing to higher and higher values, because the samples were drawn from a Pareto (power-law) distribution with a mean of 1000. Such a distribution is almost indistinguishable from one with an infinite mean, because nearly all the signal sits in the very rare, very large outliers. If you analyze samples from such a process assuming it's Gaussian, nothing will make sense: the standard deviation will give you an estimated uncertainty for the mean that is far, far below the actual uncertainty.
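You can rerun a version of this experiment in a few lines of NumPy. This is a sketch under my own assumptions (shape parameter 1.1, seed 0, sample sizes chosen for illustration), not the exact distribution I used; the point is just that with a heavy tail, the sample mean keeps drifting and the Gaussian-style standard error wildly understates the real uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Classic Pareto(shape a, scale m): mean = a*m/(a-1) for a > 1.
# With a = 1.1 the mean is finite (set to 1000 here) but the variance
# is infinite, so sample means converge agonizingly slowly.
a = 1.1
true_mean = 1000.0
m = true_mean * (a - 1) / a  # scale chosen so the true mean is 1000

for n in (10, 100, 1000, 100_000):
    # rng.pareto draws Lomax samples; (x + 1) * m gives classic Pareto.
    x = (rng.pareto(a, size=n) + 1) * m
    se = x.std(ddof=1) / np.sqrt(n)  # naive Gaussian-style standard error
    print(f"n={n:>7}  sample mean={x.mean():9.1f}  naive std-error={se:8.1f}")
```

Run it a few times with different seeds: the sample means typically sit well below 1000 and occasionally jump when a huge outlier lands in the sample, while the reported standard error stays deceptively small.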