One Metric to Fool Yourself – A Cautionary Tale in Machine Learning Evaluation
February 9 @ 1:00 pm - 1:50 pm
When fitting a model, statistical or machine learning, we often want to evaluate its performance. We have a wealth of metrics for all types of scenarios, from classification and regression to survival analysis. While each of these performance metrics works as intended, we can often get more out of our models by carefully combining them to capture what we really care about: optimal performance and minimal bias.
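
To make the talk's premise concrete, here is a minimal Python sketch (an illustration added to this listing, not material from the talk, using scikit-learn's accuracy_score and balanced_accuracy_score): on an imbalanced dataset, a trivial majority-class predictor looks excellent under accuracy alone, while a second metric exposes it as no better than chance.

# A single metric can fool you: on imbalanced data, always predicting
# the majority class scores high accuracy, but balanced accuracy
# reveals the model learned nothing.
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = [0] * 95 + [1] * 5   # hypothetical labels: 95 negatives, 5 positives
y_pred = [0] * 100            # a "model" that always predicts the majority class

print(accuracy_score(y_true, y_pred))           # 0.95 -- looks great
print(balanced_accuracy_score(y_true, y_pred))  # 0.50 -- no better than chance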
UC Love Data Week guest speaker: Emil Hvitfeldt from Posit.
Speaker(s): Emil Hvitfeldt
