Evaluation
==========

We use evaluators to evaluate our models. Several evaluators are available; the superclass in :py:mod:`kitcar_ml.utils.evaluation.evaluator` provides the common interface for all of them. Every evaluator is called with the detections and the ground truths.

InterpolationEvaluator
----------------------

The module :py:mod:`kitcar_ml.utils.evaluation.tutorial` demonstrates how to use the evaluators. First, we fake a dataset and the detections with two helper functions. The following code shows how to use an evaluator:

.. literalinclude:: ../../../kitcar_ml/utils/evaluation/tutorial.py
   :language: python

We initialize the evaluator with custom IoU thresholds. The evaluator runs the evaluation on the ground truths and detections. Afterwards, a summary is printed that looks like this:

.. code::

   --------------
   IoU 0.3
   mAP: 1.0
   --------------
   IoU 0.5
   mAP: 0.88
   --------------
   IoU 0.8
   mAP: 0.7
   --------------

The mean average precision (mAP) is calculated from the true and false positives. A detection is accepted as a true positive if its intersection over union (IoU) with a ground-truth box is higher than the threshold; this is why the mAP decreases as the IoU threshold increases. In this example, bounding boxes 1 and 2 have an IoU higher than 0.5, and bounding boxes 3 and 4 have an IoU higher than 0.8.

We can also show the calculated interpolation in a plot. That creates these plots:

.. image:: resources/plot_05_iou.png

.. image:: resources/plot_08_iou.png

An interpolation is calculated for every IoU threshold, so one plot is created per threshold.

An explanation of IoU can be found in the Wikipedia article on the `Jaccard index <https://en.wikipedia.org/wiki/Jaccard_index>`_.

A larger example that generates more bounding boxes and varies them more can be found in :py:mod:`kitcar_ml.utils.evaluation.example`.
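
As background, the IoU matching described above can be sketched in a few lines of plain Python. This helper is illustrative only and not part of ``kitcar_ml``; it assumes axis-aligned boxes given as ``(x1, y1, x2, y2)`` tuples:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # The intersection is empty if the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    # Union = sum of areas minus the doubly counted intersection.
    return inter / (area_a + area_b - inter)


# Identical boxes yield an IoU of 1.0.
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0
# A box shifted by one unit: intersection 2, union 6, so IoU is 1/3.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))
```

A detection whose IoU with a ground-truth box clears the configured threshold counts as a true positive, which is exactly why the printed mAP shrinks as the threshold grows.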