kitcar_ml.utils.evaluation package

Submodules

kitcar_ml.utils.evaluation.evaluator module

Classes:

Evaluator()

class Evaluator[source]

Bases: ABC

Methods:

split_bbs_per_class(groundtruth, detections)

Split the bounding boxes into lists for each class.

calculate_tp(detections, groundtruths, ...)

Iterates over all detections and creates the accumulated true positive and false positive arrays.

calculate_all_tp(groundtruth, detections, ...)

Calculate the True and False positive array for a list of images.

find_max_iou(groundtruth, detection)

Find the groundtruth with the maximal IoU.

find_all_classes(groundtruth, detections)

Calculate the set of all classes and the set of all classes contained in the groundtruth.


classmethod split_bbs_per_class(groundtruth: List[List[BoundingBox]], detections: List[List[BoundingBox]])[source]

Split the bounding boxes into lists for each class. The boxes of each image remain in their own separate list.

Parameters
  • groundtruth – The groundtruth bounding boxes.

  • detections – The detection bounding boxes.

Returns

The dictionary with the bounding boxes per class and a list of classes.
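
The per-class grouping can be sketched in plain Python. Everything below is illustrative: BoundingBox is stood in for by a (class_label, x1, y1, x2, y2) tuple, and split_per_class is a hypothetical helper, not the actual classmethod.

```python
from collections import defaultdict

def split_per_class(groundtruth, detections):
    """Group per-image bounding boxes by class label.

    Boxes are modelled as (class_label, x1, y1, x2, y2) tuples here;
    the real BoundingBox class may expose the label differently.
    """
    gt_per_class = defaultdict(list)
    det_per_class = defaultdict(list)
    for image_lists, target in ((groundtruth, gt_per_class), (detections, det_per_class)):
        for img_idx, boxes in enumerate(image_lists):
            for box in boxes:
                # keep the image index so per-image separation is preserved
                target[box[0]].append((img_idx, box))
    classes = sorted(set(gt_per_class) | set(det_per_class))
    return gt_per_class, det_per_class, classes
```

Keeping the image index next to each box is one way to honour the "images are separated into their own list" contract without nesting dictionaries.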

classmethod calculate_tp(detections: List[BoundingBox], groundtruths: List[BoundingBox], iou_threshold: float) Tuple[ndarray, ndarray][source]

Iterates over all detections and creates the accumulated true positive and false positive arrays.

Parameters
  • detections – The detections for this image.

  • groundtruths – The groundtruths for this image.

  • iou_threshold – The intersection over union threshold.

Returns

The true positive and false positive arrays.
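
The standard greedy matching behind such a method can be sketched as follows. This is an assumption about the implementation, not a copy of it: boxes are plain (x1, y1, x2, y2) tuples and each groundtruth may be matched at most once.

```python
import numpy as np

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def calculate_tp(detections, groundtruths, iou_threshold):
    """Return TP/FP indicator arrays, one entry per detection."""
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    matched = set()  # each groundtruth may be matched at most once
    for i, det in enumerate(detections):
        ious = [iou(det, gt) for gt in groundtruths]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_threshold and best not in matched:
            tp[i] = 1
            matched.add(best)
        else:
            fp[i] = 1  # below threshold, or groundtruth already taken
    return tp, fp
```

A duplicate detection of an already-matched groundtruth becomes a false positive, which is the usual convention in detection benchmarks.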

classmethod calculate_all_tp(groundtruth: List[List[BoundingBox]], detections: List[List[BoundingBox]], iou_threshold: float) Tuple[ndarray, ndarray][source]

Calculate the True and False positive array for a list of images.

Parameters
  • groundtruth – A list of bounding boxes for each image.

  • detections – A list of bounding boxes for each image.

  • iou_threshold – The threshold that is needed for a true positive.

Returns

The true positive and false positive arrays for all images.

static find_max_iou(groundtruth: List[BoundingBox], detection: BoundingBox) Tuple[float, int][source]

Find the groundtruth with the maximal IoU.

Returns

iou_max – The maximal IoU. id_match_gt – The index of the groundtruth with the maximal IoU.

static find_all_classes(groundtruth: List[List[BoundingBox]], detections: List[List[BoundingBox]])[source]

Calculate the set of all classes and the set of all classes contained in the groundtruth.

Parameters
  • groundtruth – List of bounding boxes per image.

  • detections – List of bounding boxes per image.

Returns

The set of all classes and the set of classes represented in the groundtruth.
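
The two sets can be built with simple comprehensions. As a sketch only, the class label is again assumed to be the first element of a box tuple:

```python
def find_all_classes(groundtruth, detections):
    """Return (classes seen anywhere, classes present in the groundtruth)."""
    gt_classes = {box[0] for image in groundtruth for box in image}
    det_classes = {box[0] for image in detections for box in image}
    return gt_classes | det_classes, gt_classes
```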


kitcar_ml.utils.evaluation.example module

Functions:

predict(model, gts)

Predicts the bounding boxes for the images using the model.

predict(model, gts) List[List[BoundingBox]][source]

Predicts the bounding boxes for the images using the model.

Returns: The groundtruth labels and the detections.

kitcar_ml.utils.evaluation.interpolation_evaluator module

Classes:

InterpolationResult(recall, precision, ap, ...)

Data class containing the interpolation results.

InterpolationEvaluator([iou_thresholds, ...])

Functions:

head(iterable)

Returns the first element of an iterable.

class InterpolationResult(recall: List[float], precision: List[float], ap: float, recall_interpolation: List[float], precision_interpolation: List[float], total_positives: int, true_positives: int, false_positives: int)[source]

Bases: object

Data class containing the interpolation results.

Attributes:

recall

List with the recall values.

precision

List with the precision values.

ap

Average precision.

recall_interpolation

Interpolated recall values.

precision_interpolation

Interpolated precision values.

total_positives

Total number of ground truth positives.

true_positives

Number of true positive detections.

false_positives

Number of false positive detections.

recall: List[float]

List with the recall values.

precision: List[float]

List with the precision values.

ap: float

Average precision.

recall_interpolation: List[float]

Interpolated recall values.

precision_interpolation: List[float]

Interpolated precision values.

total_positives: int

Total number of ground truth positives.

true_positives: int

Number of true positive detections.

false_positives: int

Number of false positive detections.

head(iterable)[source]

Returns the first element of an iterable.

class InterpolationEvaluator(iou_thresholds: Tuple[float, ...] = (0.5, 0.75, 0.95), use_every_point_interpolation: bool = True)[source]

Bases: Evaluator

Methods:

_class_label_string(pair, name)

calculate_ap_every_point(recall_vector, ...)

Interpolate AP for every point.

calculate_interpolation_points(...)

Calculate the interpolated points for the recall and the precision.

calculate_ap_11_point_interp(recall_vector, ...)

Interpolate recall and precision at eleven points.

calculate_sorted_prefix_sum(detections, ...)

Sorts the true and false positive arrays and calculates the prefix sum.

calculate_metrics(groundtruth, detections, ...)

Calculate the metrics for the interpolation.

calculate_class_results(groundtruth, ...)

Calculate the interpolation, true positives and false positives for groundtruth and detection.

calculate_results(groundtruth, detections, ...)

Calculate the metrics of all classes.

plot_precision_recall_curves([classes, ...])

Plot the precision and recall curve.


static _class_label_string(pair, name) str[source]

classmethod calculate_ap_every_point(recall_vector: ndarray, precision_vector: ndarray) Tuple[float, List[float], List[float]][source]

Interpolate AP for every point.

Parameters
  • recall_vector – NumPy array of recall values

  • precision_vector – NumPy array of precision values

Returns

ap – The average precision. recall_interpolation – The interpolated recall values. precision_interpolation – The interpolated precision values.
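
The every-point (all-point) scheme is commonly computed as the area under the monotone precision envelope. A minimal sketch, assuming the conventional formulation rather than this class's exact code:

```python
import numpy as np

def ap_every_point(recall, precision):
    """Every-point AP: area under the precision envelope."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # make precision monotonically non-increasing from right to left
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # sum envelope precision over the recall steps
    idx = np.where(r[1:] != r[:-1])[0]
    ap = float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
    return ap, r.tolist(), p.tolist()
```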

static calculate_interpolation_points(recall_interpolation: List[float], precision_interpolation: List[float]) List[Tuple[int, int]][source]

Calculate the interpolated points for the recall and the precision. The maximal precision is used for equal recall values.

Parameters
  • recall_interpolation – The interpolation of the recall values

  • precision_interpolation – The interpolation of the precision values

classmethod calculate_ap_11_point_interp(recall_vector: ndarray, precision_vector: ndarray) Tuple[float, List[float], List[float]][source]

Interpolate recall and precision at eleven points.

Parameters
  • recall_vector – numpy array of recall values

  • precision_vector – numpy array of precision values

Returns

ap – The average precision. recall_interpolation – The interpolated recall values. precision_interpolation – The interpolated precision values.
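
The eleven-point variant samples the precision envelope at the recall levels 0.0, 0.1, …, 1.0 (PASCAL VOC 2007 style). A hedged sketch of that convention, not this class's exact code:

```python
import numpy as np

def ap_11_point(recall, precision):
    """11-point interpolated AP."""
    recall = np.asarray(recall)
    precision = np.asarray(precision)
    points = np.linspace(0.0, 1.0, 11)
    interp = []
    for r in points:
        # max precision over all operating points with recall >= r
        mask = recall >= r
        interp.append(float(precision[mask].max()) if mask.any() else 0.0)
    return sum(interp) / 11.0, points.tolist(), interp
```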

static calculate_sorted_prefix_sum(detections: List[List[BoundingBox]], true_positives: ndarray, false_positives: ndarray) Tuple[ndarray, ndarray][source]

Sorts the true and false positive arrays and calculates the prefix sum.

Parameters
  • detections – List of all detections from this class.

  • true_positives – Array marking each detection as a true positive.

  • false_positives – Array marking each detection as a false positive.
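
The core of this step is sorting the indicator arrays by descending detection confidence and taking cumulative sums. As a sketch only, confidence scores are passed as a plain list instead of reading them off the detections (an assumption about how BoundingBox stores them):

```python
import numpy as np

def sorted_prefix_sum(confidences, true_positives, false_positives):
    """Sort TP/FP indicators by descending confidence, then accumulate."""
    order = np.argsort(-np.asarray(confidences))
    acc_tp = np.cumsum(np.asarray(true_positives)[order])
    acc_fp = np.cumsum(np.asarray(false_positives)[order])
    return acc_tp, acc_fp
```

The cumulative counts are what the precision/recall vectors for the interpolation are computed from.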

classmethod calculate_metrics(groundtruth: List[List[BoundingBox]], detections: List[List[BoundingBox]], true_positives: ndarray, calculate_interpolation) InterpolationResult[source]

Calculate the metrics for the interpolation.

Parameters
  • groundtruth – Lists containing the groundtruth bounding boxes.

  • detections – Lists containing the detection bounding boxes.

  • true_positives – Array of the true positive value for each detection.

  • calculate_interpolation – The function that calculates the interpolation.

classmethod calculate_class_results(groundtruth: List[List[List[BoundingBox]]], detections: List[List[List[BoundingBox]]], calculate_interpolation, iou_threshold: float) InterpolationResult[source]

Calculate the interpolation, true positives and false positives for groundtruth and detection.

Parameters
  • groundtruth – Lists containing the groundtruth bounding boxes.

  • detections – Lists containing the detection bounding boxes.

  • iou_threshold – The threshold that bounds the acceptance of a detection.

  • calculate_interpolation – The function that calculates the interpolation.

calculate_results(groundtruth: List[List[BoundingBox]], detections: List[List[BoundingBox]], classes: List[str], iou_threshold: float = 0.5)[source]

Calculate the metrics of all classes.

Parameters
  • groundtruth – List of BoundingBoxes representing groundtruth bounding boxes.

  • detections – List of BoundingBoxes representing detection bounding boxes.

  • iou_threshold – IoU threshold indicating which detections will be considered TP or FP.

Returns

A result dictionary for every class.

plot_precision_recall_curves(classes=None, show_interpolated_precision: bool = True, save_path: Optional[str] = None, save_prefix: str = 'plot', show_graphic: bool = True)[source]

Plot the precision and recall curve.

Parameters
  • classes – The classes that should be plotted; “all” may be passed as a class.

  • show_interpolated_precision – True if the interpolation should be shown.

  • save_path – Save path of the plots, plots are not saved if no path is given.

  • save_prefix – The prefix of all saved files.

  • show_graphic – True if the plots should be shown.

kitcar_ml.utils.evaluation.simple_evaluator module

Classes:

SimpleEvaluator([iou])

class SimpleEvaluator(iou=0.5)[source]

Bases: Evaluator

Methods:

calculate_f1score(precision, recall)

Calculate the F1 score from the precision and recall.

precision(true_positive, false_positive)

Calculate the precision from the true positive and false positive counts.

recall(true_positives, ngts)

Calculate the recall.


static calculate_f1score(precision: float, recall: float) float[source]

Calculate the F1 score from the precision and recall.

static precision(true_positive: int, false_positive: int) float[source]

Calculate the precision from the true positive and false positive counts.

static recall(true_positives: int, ngts: int) float[source]

Calculate the recall.

Parameters
  • true_positives – True positives

  • ngts – Number of groundtruth labels
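
The three metrics follow the textbook definitions. A minimal sketch, with the zero-denominator cases handled by returning 0.0 (an assumption; the real methods may behave differently there):

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP); 0.0 when there are no detections."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, ngts):
    """Recall = TP / number of groundtruth labels; 0.0 when ngts is 0."""
    return tp / ngts if ngts else 0.0

def f1_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0
```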


kitcar_ml.utils.evaluation.tutorial module

Functions:

fake_dataset()

Simulate the dataset and create the groundtruth.

fake_prediction()

Simulate a model and create predictions.

fake_dataset()[source]

Simulate the dataset and create the groundtruth.

fake_prediction()[source]

Simulate a model and create predictions.

Module contents