kitcar_ml.traffic_sign_detection.fasterrcnn package
Submodules
kitcar_ml.traffic_sign_detection.fasterrcnn.evaluate module
kitcar_ml.traffic_sign_detection.fasterrcnn.export module
kitcar_ml.traffic_sign_detection.fasterrcnn.inference module
kitcar_ml.traffic_sign_detection.fasterrcnn.model module
Classes:
- class Model(class_names: List[str], pretrained: bool = True)[source]
Bases: DetectionModel
Methods:
predict(images, **kwargs)  Take in a list of images and predict the bounding boxes.
fit(data_loader, val_data_loader[, epochs, ...])  Train the model on the given data_loader.
save(file)  Save the internal model weights to a file.
load(file)  Load a model from a .pth file containing the model weights.
export_to_onnx(output_file)  Export this model to ONNX format.
half()  Switch to FP16 values instead of FP32 to speed up inference.
Attributes:
DEFAULT_OPTIMIZER_KWARGS  Default keyword arguments for the optimizer.
- __get_prediction_indices(scores: List[float], min_score=0, max_iou=0.2)
Apply non-maximum suppression and discard predictions scoring below min_score.
Returns the resulting indices.
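The filtering described above can be sketched in plain Python. The box format, IoU helper, and tie-breaking below are illustrative assumptions, not the package's actual implementation:

```python
# Sketch of non-maximum suppression with a min_score filter, as
# __get_prediction_indices is documented to perform. Boxes are assumed
# to be (x1, y1, x2, y2) tuples; this is an assumption for illustration.
from typing import List, Tuple

Box = Tuple[float, float, float, float]


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def prediction_indices(boxes: List[Box], scores: List[float],
                       min_score: float = 0.0, max_iou: float = 0.2) -> List[int]:
    """Keep indices of high-scoring boxes that do not overlap an already-kept box."""
    # Visit candidates in descending score order, dropping low-score ones.
    order = sorted((i for i, s in enumerate(scores) if s >= min_score),
                   key=lambda i: scores[i], reverse=True)
    kept: List[int] = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= max_iou for j in kept):
            kept.append(i)
    return kept
```

A box is suppressed either because its score falls below min_score or because it overlaps a higher-scoring kept box by more than max_iou.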
- predict(images: List[ndarray], **kwargs) List[Tuple[List[ndarray], List[str], List[float]]][source]
Take in a list of images and predict the bounding boxes.
Returns: A list of bounding boxes with labels and scores.
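A caller might unpack predict()'s documented return type, List[Tuple[List[ndarray], List[str], List[float]]], as follows. The dummy result below uses plain lists in place of numpy arrays, and the label strings are made up for illustration:

```python
# One (boxes, labels, scores) triple per input image, per the
# documented return type. Labels and coordinates are dummy values.
dummy_result = [
    ([[10, 20, 50, 60]], ["stop_sign"], [0.97]),
    ([[5, 5, 30, 30], [40, 40, 80, 80]], ["speed_limit", "yield"], [0.91, 0.84]),
]

detections = []
for image_idx, (boxes, labels, scores) in enumerate(dummy_result):
    # The three inner lists are parallel: one entry per detection.
    for box, label, score in zip(boxes, labels, scores):
        detections.append((image_idx, label, score))
```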
- DEFAULT_OPTIMIZER_KWARGS = {'lr': 0.005, 'momentum': 0.9, 'weight_decay': 0.0005}
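Since fit() accepts optimizer_args as a plain dict, a caller who wants to override only one default could merge dicts like this (an illustrative pattern, not the package's own code):

```python
# Start from the documented defaults and override just the learning rate.
DEFAULT_OPTIMIZER_KWARGS = {"lr": 0.005, "momentum": 0.9, "weight_decay": 0.0005}

optimizer_args = {**DEFAULT_OPTIMIZER_KWARGS, "lr": 0.001}
```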
- fit(data_loader: DataLoader, val_data_loader: DataLoader, epochs: int = 10, optimizer_name: str = 'SGD', optimizer_args: Dict[str, float] = {'lr': 0.005, 'momentum': 0.9, 'weight_decay': 0.0005}, visualize: bool = False, tensorboard_path: str = 'runs') List[float][source]
Train the model on the given data_loader, validating on val_data_loader after each epoch.
Returns a list of loss scores, one per epoch.
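The per-epoch structure of such a fit() method can be sketched with stand-in objects instead of real DataLoaders and a real network; the toy update rule and loss bookkeeping below are assumptions for illustration only:

```python
# Schematic training loop: one training pass and one validation
# measurement per epoch, collecting one loss score per epoch as
# fit() is documented to return.
def fit_sketch(train_batches, val_batches, epochs=3):
    weights = 0.0
    losses = []
    for _ in range(epochs):
        for x, y in train_batches:              # training pass
            weights += 0.1 * (y - weights * x)  # toy gradient step
        # validation pass: mean absolute error over validation batches
        val_loss = sum(abs(y - weights * x) for x, y in val_batches) / len(val_batches)
        losses.append(val_loss)
    return losses
```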
- save(file: str)[source]
Save the internal model weights to a file.
- Parameters
file – The name of the file. Should have a .pth file extension.
- classmethod load(file: str) Model[source]
Load a model from a .pth file containing the model weights.
- Parameters
file – The path to the .pth file containing the saved model.
- Returns
The model loaded from the file.
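The save()/load() pair forms a round trip through a .pth file. The sketch below imitates that round trip with pickle standing in for the real serialization, and a plain dict standing in for the model's weights (both are assumptions; the actual methods operate on the internal model):

```python
# Write a stand-in weights dict to a .pth file, then read it back.
import os
import pickle
import tempfile

weights = {"backbone.conv1": [0.1, 0.2], "head.bias": [0.0]}

with tempfile.NamedTemporaryFile(suffix=".pth", delete=False) as f:
    pickle.dump(weights, f)
    path = f.name

with open(path, "rb") as f:
    restored = pickle.load(f)
os.remove(path)
```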
- export_to_onnx(output_file: str)[source]
Export this model to ONNX format.
- Parameters
output_file – Path to the output file
- half()[source]
Switch to FP16 values instead of FP32 to speed up inference.
This only works on CUDA devices.
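Conceptually, half() casts each FP32 tensor to FP16, halving memory traffic at some precision cost. numpy stands in for the model's tensors in this sketch:

```python
# Cast an FP32 array to FP16: half the bytes, reduced precision.
import numpy as np

w32 = np.array([0.005, 0.9, 0.0005], dtype=np.float32)
w16 = w32.astype(np.float16)
```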