A self-driving car needs to make many decisions at every moment. If the classifiers built on sensory inputs indicate that the road is blocked, then the car must brake. Unfortunately, no classifier is perfect, and some uncertainty about the surroundings always remains. Class probability estimators quantify the uncertainty about each class, allowing the car to make risk-aware decisions, such as slowing down whenever the probability of an obstacle increases considerably.
This talk introduces some methods of building class probability estimators that are calibrated, i.e. neither under- nor over-confident. Calibration is important for making optimal decisions under imbalanced misclassification costs. If information from many classifiers needs to be aggregated, then it is further important to know how reliable each classifier is. We formalise the notion of reliability and show how it relates to different measures of the quality of a class probability estimator.
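As a minimal illustration of what calibration means in practice, the sketch below computes the expected calibration error (ECE) of a binary class probability estimator: predictions are grouped into probability bins, and within each bin the average predicted probability (confidence) is compared with the observed frequency of the positive class. A calibrated estimator has these two quantities match. The function name, the equal-width binning scheme, and the number of bins are illustrative choices, not part of the talk.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected calibration error for binary predictions.

    probs:  predicted probabilities of the positive class, shape (n,)
    labels: observed binary outcomes (0 or 1), shape (n,)

    Splits [0, 1] into n_bins equal-width bins and returns the
    weighted average of |mean confidence - observed accuracy| per bin.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # include the left edge only for the first bin
        in_bin = (probs > lo) & (probs <= hi) if lo > 0 else (probs >= lo) & (probs <= hi)
        if not in_bin.any():
            continue
        confidence = probs[in_bin].mean()   # mean predicted probability in the bin
        accuracy = labels[in_bin].mean()    # observed fraction of positives in the bin
        ece += in_bin.mean() * abs(confidence - accuracy)
    return ece
```

For example, if an estimator predicts 0.75 on eight cases of which exactly six are positive, its ECE is zero (calibrated); if all eight are positive, the ECE is 0.25, reflecting under-confidence.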