slideflow.ModelParams


The ModelParams class organizes model and training parameters/hyperparameters and assists with model building.

See Training for a detailed look at how to train models.


class ModelParams(*, loss: str = 'CrossEntropy', **kwargs)

Build a set of hyperparameters.

Configure a set of training parameters via keyword arguments.

Parameters are configured in the context of the current deep learning backend (Tensorflow or PyTorch), which can be viewed with slideflow.backend(). While most model parameters are cross-compatible between Tensorflow and PyTorch, some parameters are unique to a backend, so this object should be configured in the same backend that the model will be trained in.

  • tile_px (int) – Tile width in pixels. Defaults to 299.

  • tile_um (int or str) – Tile width in microns (int) or magnification (str, e.g. “20x”). Defaults to 302.

  • epochs (int) – Number of epochs to train the full model. Defaults to 3.

  • toplayer_epochs (int) – Number of epochs to only train the fully-connected layers. Defaults to 0.

  • model (str) – Base model architecture name. Defaults to ‘xception’.

  • pooling (str) – Post-convolution pooling. ‘max’, ‘avg’, or ‘none’. Defaults to ‘max’.

  • loss (str) – Loss function. Defaults to ‘sparse_categorical_crossentropy’ (Tensorflow backend) or ‘CrossEntropy’ (PyTorch backend).

  • learning_rate (float) – Learning rate. Defaults to 0.0001.

  • learning_rate_decay (float) – Learning rate decay rate. Defaults to 0.

  • learning_rate_decay_steps (int) – Learning rate decay steps. Defaults to 100000.

  • batch_size (int) – Batch size. Defaults to 16.

  • hidden_layers (int) – Number of fully-connected hidden layers after core model. Defaults to 0.

  • hidden_layer_width (int) – Width of fully-connected hidden layers. Defaults to 500.

  • optimizer (str) – Name of optimizer. Defaults to ‘Adam’.

  • early_stop (bool) – Use early stopping. Defaults to False.

  • early_stop_patience (int) – Patience for early stopping, in epochs. Defaults to 0.

  • early_stop_method (str) – Metric to monitor for early stopping. Defaults to ‘loss’.

  • manual_early_stop_epoch (int, optional) – Manually override early stopping to occur at this epoch. Defaults to None.

  • manual_early_stop_batch (int, optional) – Manually override early stopping to occur at this batch. Defaults to None.

  • training_balance (str, optional) – Type of batch-level balancing to use during training. Options include ‘tile’, ‘category’, ‘patient’, ‘slide’, and None. Defaults to ‘category’ if a categorical loss is provided, and ‘patient’ if a linear loss is provided.

  • validation_balance (str, optional) – Type of batch-level balancing to use during validation. Options include ‘tile’, ‘category’, ‘patient’, ‘slide’, and None. Defaults to ‘none’.

  • trainable_layers (int) – Number of layers which are trainable. If 0, trains all layers. Defaults to 0.

  • l1 (float, optional) – L1 regularization weight. Defaults to 0.

  • l2 (float, optional) – L2 regularization weight. Defaults to 0.

  • l1_dense (float, optional) – L1 regularization weight for Dense layers. Defaults to the value of l1.

  • l2_dense (float, optional) – L2 regularization weight for Dense layers. Defaults to the value of l2.

  • dropout (float, optional) – Post-convolution dropout rate. Defaults to 0.

  • uq (bool, optional) – Use uncertainty quantification with dropout. Requires dropout > 0. Defaults to False.

  • augment (str, optional) –

    Image augmentations to perform. Each character in the string designates an augmentation, and characters may be combined to define the augmentation pipeline. For example, ‘xyrj’ will perform x-flip, y-flip, rotation, and JPEG compression. True will use all augmentations. Defaults to ‘xyrj’.

      • ‘x’: Random x-flipping

      • ‘y’: Random y-flipping

      • ‘r’: Random cardinal rotation

      • ‘j’: Random JPEG compression (10% chance to JPEG compress with quality between 50-100%)

      • ‘b’: Random Gaussian blur (50% chance to blur with sigma between 0.5-2.0)

      • ‘n’: Stain augmentation (requires a stain normalizer)
  • normalizer (str, optional) – Normalization strategy to use on image tiles. Defaults to None.

  • normalizer_source (str, optional) – Stain normalization preset or path to a source image. Valid presets include ‘v1’, ‘v2’, and ‘v3’. If None, will use the default preset (‘v3’). Defaults to None.

  • include_top (bool) – Include post-convolution fully-connected layers from the core model. Defaults to True. include_top=False is not currently compatible with the PyTorch backend.

  • drop_images (bool) – Drop images, using only other slide-level features as input. Defaults to False.
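The augment characters described above can also be read programmatically. The sketch below is a hypothetical helper for illustration only, not part of the slideflow API; the character-to-augmentation mapping follows the list documented above.

```python
# Mapping of augment-string characters to the augmentations documented
# above (illustrative sketch; not slideflow's internal code).
AUGMENTATIONS = {
    'x': 'random x-flip',
    'y': 'random y-flip',
    'r': 'random cardinal rotation',
    'j': 'random JPEG compression',
    'b': 'random Gaussian blur',
    'n': 'stain augmentation',
}

def describe_augment(augment):
    """Expand an augment string (or True, meaning all) into readable names."""
    chars = ''.join(AUGMENTATIONS) if augment is True else augment
    return [AUGMENTATIONS[c] for c in chars]
```

For example, `describe_augment('xyrj')` expands the default pipeline, and `describe_augment(True)` expands all six augmentations.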

to_dict(self) → Dict[str, Any]

Return a dictionary of configured parameters.

get_normalizer(self, **kwargs) → StainNormalizer | None

Return a configured slideflow.StainNormalizer.

validate(self) → bool

Check that hyperparameter combinations are valid.

model_type(self) → str

Returns ‘linear’, ‘categorical’, or ‘cph’, reflecting the loss.
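To give a flavor of what validate() checks, the sketch below illustrates one documented constraint (uq requires dropout > 0). This is an illustrative stand-in, not slideflow's actual implementation.

```python
# Sketch of a validate()-style consistency check, based on the documented
# constraint that uq=True requires dropout > 0. Not slideflow's real code.
def validate_uq(dropout: float, uq: bool) -> bool:
    """Raise if uncertainty quantification is enabled without dropout."""
    if uq and not dropout > 0:
        raise ValueError("uq=True requires dropout > 0")
    return True
```

With slideflow itself, simply call `hp.validate()` on a configured ModelParams object.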

Mini-batch balancing

During training, mini-batch balancing can be customized to assist with increasing representation of sparse outcomes or small slides. Five mini-batch balancing methods are available when configuring slideflow.ModelParams, set through the parameters training_balance and validation_balance. These are 'tile', 'category', 'patient', 'slide', and 'none'.

If tile-level balancing (“tile”) is used, tiles will be selected randomly from the population of all extracted tiles.

If slide-based balancing (“slide”) is used, batches will contain equal representation of images from each slide.

If patient-based balancing (“patient”) is used, batches will balance image tiles across patients. The balancing is similar to slide-based balancing, except across patients (as each patient may have more than one slide).

If category-based balancing (“category”) is used, batches will contain equal representation from each outcome category.

If no balancing is performed, batches will be assembled by randomly selecting from TFRecords. This is equivalent to slide-based balancing if each slide has its own TFRecord (default behavior).
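As a rough illustration of category-level balancing, tiles can be weighted inversely to their category's frequency so each outcome category is equally represented in expectation. The helper below is hypothetical, for intuition only; slideflow performs balancing internally when training_balance is set.

```python
from collections import Counter

# Sketch: per-tile sampling weights for category-level balancing.
# Each category's total weight is equal, regardless of how many tiles
# it contains (hypothetical helper, not slideflow's API).
def category_weights(labels):
    counts = Counter(labels)
    n_cat = len(counts)
    return [1.0 / (n_cat * counts[lab]) for lab in labels]

# Three 'tumor' tiles and one 'normal' tile: each category sums to 0.5.
weights = category_weights(['tumor', 'tumor', 'tumor', 'normal'])
```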

See Oversampling with balancing for more discussion on sampling and mini-batch balancing.


If you are using a Trainer to train your models, you can further customize the mini-batch balancing strategy by using slideflow.Dataset.balance() on your training and/or validation datasets.