simulation.utils.machine_learning.cycle_gan.configs package
Submodules
simulation.utils.machine_learning.cycle_gan.configs.base_options module
Classes:
- class BaseOptions[source]
Bases: object
Methods:
- to_dict()
Attributes:
- activation(*input, **kwargs): Module = Tanh()
Choose which activation to use.
- checkpoints_dir: str = './checkpoints'
Models are saved here.
- conv_layers_in_block: int = 3
Specify number of convolution layers per resnet block.
- crop_size: int = 512
Crop images to this size after scaling.
- dilations: List[int] = [1, 2, 4]
Dilation for individual conv layers in every resnet block.
- epoch: int | str = 'latest'
Which epoch to load. Set to 'latest' to use the latest cached model.
- init_gain: float = 0.02
Scaling factor for normal, xavier and orthogonal.
- init_type: str = 'normal'
Network initialization [normal | xavier | kaiming | orthogonal]
- input_nc: int = 1
Number of input image channels: 3 for RGB, 1 for grayscale.
- lambda_idt_a: float = 0.5
Weight for loss identity of domain A.
- lambda_idt_b: float = 0.5
Weight for loss identity of domain B.
- lambda_cycle: float = 10
Weight for cycle loss.
- load_size: int = 512
Scale images to this size.
- mask: str = 'resources/mask.png'
Path to a mask overlaid over all images.
- n_layers_d: int = 4
Number of layers in the discriminator network.
- name: str = 'dr_drift'
Name of the experiment. It determines where samples and models are stored.
- ndf: int = 32
Number of discriminator filters in the first conv layer.
- netd: str = 'basic'
Specify discriminator architecture.
[basic | n_layers | no_patch]. The basic model is a 70x70 PatchGAN; n_layers lets you specify the number of discriminator layers via n_layers_d.
- netg: str = 'resnet_9blocks'
Specify generator architecture [resnet_<ANY_INTEGER>blocks | unet_256 | unet_128]
- ngf: int = 32
Number of generator filters in the last conv layer.
- no_dropout: bool = True
If True, use no dropout in the generator.
- norm: str = 'instance'
Instance normalization or batch normalization [instance | batch | none]
- output_nc: int = 1
Number of output image channels: 3 for RGB, 1 for grayscale.
- preprocess: set = {'crop', 'resize'}
Scaling and cropping of images at load time.
[resize | crop | scale_width]
- verbose: bool = False
If specified, print more debugging information.
- cycle_noise_stddev: float = 0
Standard deviation of the noise added to the cycle input; the mean is 0.
- pool_size: int = 75
Size of the image buffer that stores previously generated images.
- max_dataset_size: int = 15000
Maximum number of images to load; -1 means no limit.
- is_wgan: bool = False
Whether to use a Wasserstein CycleGAN or a standard CycleGAN.
- l1_or_l2_loss: str = 'l1'
"l1" or "l2"; whether to use the L1 or L2 loss as the cycle and identity loss function.
- use_sigmoid: bool = True
Use a sigmoid activation at the end of the discriminator.
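Since BaseOptions exposes its settings as plain class attributes with a to_dict() helper, its shape can be sketched as follows. This is an illustrative stand-in, not the actual implementation; the class name and the exact to_dict() behavior are assumptions, and only a few of the fields listed above are reproduced:

```python
class BaseOptionsSketch:
    """Illustrative stand-in for BaseOptions (hypothetical, trimmed to a few fields)."""

    checkpoints_dir: str = "./checkpoints"
    crop_size: int = 512
    load_size: int = 512
    name: str = "dr_drift"
    is_wgan: bool = False

    def to_dict(self) -> dict:
        # Collect every public, non-callable attribute into a plain dict,
        # which is convenient for logging or serializing a configuration.
        return {
            key: getattr(self, key)
            for key in dir(self)
            if not key.startswith("_") and not callable(getattr(self, key))
        }


config = BaseOptionsSketch().to_dict()
```

Because the values are class attributes, a subclass only needs to redeclare the fields it wants to change, which is the pattern the test and train option classes below follow.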
simulation.utils.machine_learning.cycle_gan.configs.test_options module
Classes:
- class TestOptions[source]
Bases:
BaseOptions
Attributes:
- dataset_a: List[str] = ['./../../../../data/real_images/maschinen_halle_parking']
Path to images of domain A (real images).
- dataset_b: List[str] = ['./../../../../data/simulated_images/test_images']
Path to images of domain B (simulated images).
- results_dir: str = './results/'
Saves results here.
- aspect_ratio: float = 1
Aspect ratio of result images.
- is_train: bool = False
Enable or disable training mode.
- class WassersteinCycleGANTestOptions[source]
Bases:
TestOptions
- class CycleGANTestOptions[source]
Bases:
TestOptions
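The two test variants inherit every default from TestOptions and specialize it by overriding class attributes. A toy illustration of this override pattern, with hypothetical sketch classes mirroring the hierarchy above (the actual override bodies are not shown in this documentation, so the is_wgan override is an assumption):

```python
class TestOptionsSketch:
    """Stand-in for TestOptions with a subset of its fields."""

    is_train: bool = False
    results_dir: str = "./results/"
    aspect_ratio: float = 1.0


class WassersteinCycleGANTestOptionsSketch(TestOptionsSketch):
    # Presumably the Wasserstein variant flips the is_wgan switch
    # inherited from BaseOptions; shown here as a plain override.
    is_wgan: bool = True


class CycleGANTestOptionsSketch(TestOptionsSketch):
    # The standard CycleGAN variant keeps all TestOptions defaults.
    pass
```

Attribute lookup walks the inheritance chain, so both subclasses still see is_train = False and results_dir = './results/' without restating them.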
simulation.utils.machine_learning.cycle_gan.configs.train_options module
Classes:
- class TrainOptions[source]
Bases:
BaseOptions
Attributes:
- dataset_a: List[str] = ['./../../../../data/real_images/beg_2019']
Path to images of domain A (real images).
Can be a list of folders.
- dataset_b: List[str] = ['./../../../../data/simulated_images/random_roads']
Path to images of domain B (simulated images).
Can be a list of folders.
- display_id: int = 1
Window id of the web display.
- display_port: int = 8097
Visdom port of the web display.
- is_train: bool = True
Enable or disable training mode.
- num_threads: int = 8
Number of threads for loading data.
- save_freq: int = 100
Frequency of saving the current models.
- print_freq: int = 5
Frequency of showing training results on console.
- beta1: float = 0.5
Momentum term of adam.
- batch_size: int = 3
Input batch size.
- lr: float = 0.0005
Initial learning rate for adam.
- lr_decay_iters: int = 1
Multiply the learning rate by lr_step_factor every lr_decay_iters iterations.
- lr_policy: str = 'step'
Learning rate policy.
[linear | step | plateau | cosine]
- lr_step_factor: float = 0.1
Multiplication factor at every step in the step scheduler.
- n_epochs: int = 0
Number of epochs with the initial learning rate.
- n_epochs_decay: int = 10
Number of epochs to linearly decay learning rate to zero.
- no_flip: bool = False
If True, disable flipping; otherwise 50% of all training images are flipped vertically.
- continue_train: bool = False
If True, load existing checkpoints; otherwise start training from scratch.
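The four learning-rate options (lr, lr_policy, lr_decay_iters, lr_step_factor) together with n_epochs and n_epochs_decay describe a schedule. A minimal sketch of the semantics they suggest, using the defaults listed above; the function name is hypothetical and the real scheduler implementation lives elsewhere in the package:

```python
def scheduled_lr(
    epoch: int,
    lr: float = 0.0005,
    lr_policy: str = "step",
    n_epochs: int = 0,
    n_epochs_decay: int = 10,
    lr_decay_iters: int = 1,
    lr_step_factor: float = 0.1,
) -> float:
    """Return the learning rate for a given epoch under the named policy."""
    if lr_policy == "step":
        # Multiply by lr_step_factor every lr_decay_iters epochs.
        return lr * lr_step_factor ** (epoch // lr_decay_iters)
    if lr_policy == "linear":
        # Keep the initial lr for n_epochs, then decay linearly
        # to zero over the following n_epochs_decay epochs.
        progress = max(0, epoch - n_epochs) / max(1, n_epochs_decay)
        return lr * max(0.0, 1.0 - progress)
    raise ValueError(f"unsupported lr_policy: {lr_policy}")
```

With the defaults (step policy, factor 0.1, step size 1), the rate drops by an order of magnitude every epoch; the 'plateau' and 'cosine' policies listed above are omitted from this sketch.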
- class WassersteinCycleGANTrainOptions[source]
Bases:
TrainOptions
Attributes:
- wgan_initial_n_critic: int = 1
Number of iterations of the critic before starting training loop.
- wgan_clip_upper: float = 0.001
Upper bound for weight clipping.
- wgan_clip_lower: float = -0.001
Lower bound for weight clipping.
- wgan_n_critic: int = 5
Number of iterations of the critic per generator iteration.
- is_wgan: bool = True
Whether to use a Wasserstein CycleGAN or a standard CycleGAN.
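The wgan_clip_lower/wgan_clip_upper pair corresponds to the classic WGAN weight-clipping step: after each critic update, every critic parameter is clamped into that interval. A minimal sketch under that assumption, operating on a flat list of weights rather than the package's actual model parameters:

```python
def clip_weights(weights, lower=-0.001, upper=0.001):
    """Clamp each critic weight into [lower, upper] (WGAN weight clipping).

    Defaults mirror wgan_clip_lower / wgan_clip_upper above. In the real
    training loop this would run after each of the wgan_n_critic critic
    updates performed per generator update.
    """
    return [min(max(w, lower), upper) for w in weights]
```

Clipping keeps the critic (approximately) Lipschitz-bounded, which the Wasserstein GAN formulation requires; wgan_initial_n_critic and wgan_n_critic control how many critic updates happen before and between generator updates.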
- class CycleGANTrainOptions[source]
Bases:
TrainOptions