slideflow.gan¶
Submodule used to interface with the PyTorch implementation of StyleGAN2 (maintained separately at https://github.com/jamesdolezal/stylegan2-slideflow).
See Generative Networks (GANs) for more information on working with GANs.
StyleGAN2 Interpolator¶
- class StyleGAN2Interpolator(gan_pkl: str, start: int, end: int, *, device: torch.device | None = None, target_um: int | None = None, target_px: int | None = None, gan_um: int | None = None, gan_px: int | None = None, noise_mode: str = 'const', truncation_psi: int = 1, **gan_kwargs)[source]¶
Coordinates class and embedding interpolation for a trained class-conditional StyleGAN2.
- Parameters:
gan_pkl (str) – Path to a trained conditional StyleGAN2 network pickle.
start (int) – Starting class index for interpolation.
end (int) – Ending class index for interpolation.
- Keyword Arguments:
device (torch.device, optional) – Torch device. If None, will automatically select a GPU if available. Defaults to None.
target_um (int, optional) – Target size in microns for the interpolated images. GAN output will be cropped/resized to match this target. If None, will match GAN output. Defaults to None.
target_px (int, optional) – Target size in pixels for the interpolated images. GAN output will be cropped/resized to match this target. If None, will match GAN output. Defaults to None.
gan_um (int, optional) – Size in microns of the GAN output. If None, will attempt to auto-detect from training_options.json. Defaults to None.
gan_px (int, optional) – Size in pixels of the GAN output. If None, will attempt to auto-detect from training_options.json. Defaults to None.
noise_mode (str, optional) – Noise mode for GAN. Defaults to ‘const’.
truncation_psi (int, optional) – Truncation psi for GAN. Defaults to 1.
**gan_kwargs – Additional keyword arguments for GAN inference.
- __init__(gan_pkl: str, start: int, end: int, *, device: torch.device | None = None, target_um: int | None = None, target_px: int | None = None, gan_um: int | None = None, gan_px: int | None = None, noise_mode: str = 'const', truncation_psi: int = 1, **gan_kwargs) None [source]¶
Coordinates class and embedding interpolation for a trained class-conditional StyleGAN2.
- Parameters:
gan_pkl (str) – Path to a trained conditional StyleGAN2 network pickle.
start (int) – Starting class index for interpolation.
end (int) – Ending class index for interpolation.
- Keyword Arguments:
device (torch.device, optional) – Torch device. If None, will automatically select a GPU if available. Defaults to None.
target_um (int, optional) – Target size in microns for the interpolated images. GAN output will be cropped/resized to match this target. If None, will match GAN output. Defaults to None.
target_px (int, optional) – Target size in pixels for the interpolated images. GAN output will be cropped/resized to match this target. If None, will match GAN output. Defaults to None.
gan_um (int, optional) – Size in microns of the GAN output. If None, will attempt to auto-detect from training_options.json. Defaults to None.
gan_px (int, optional) – Size in pixels of the GAN output. If None, will attempt to auto-detect from training_options.json. Defaults to None.
noise_mode (str, optional) – Noise mode for GAN. Defaults to ‘const’.
truncation_psi (int, optional) – Truncation psi for GAN. Defaults to 1.
**gan_kwargs – Additional keyword arguments for GAN inference.
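A minimal instantiation sketch is shown below. The network pickle path and class indices are placeholders; target_px/target_um are only needed if the GAN output should be cropped/resized, and gan_px/gan_um only if they cannot be auto-detected from training_options.json. The import path follows this page's module name.

    from slideflow.gan import StyleGAN2Interpolator

    # Interpolate from class 0 to class 1 of a trained conditional StyleGAN2.
    interpolator = StyleGAN2Interpolator(
        gan_pkl='/path/to/network-snapshot.pkl',  # placeholder path
        start=0,                                  # starting class index
        end=1,                                    # ending class index
        target_px=299,                            # crop/resize output to 299 px ...
        target_um=302,                            # ... spanning 302 microns
    )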
- class_interpolate(seed: int, steps: int = 100) Generator [source]¶
Sets up a generator that returns images during class embedding interpolation.
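For example, the generator can be consumed in a loop using the interpolator instantiated above; the exact image format yielded at each step is not specified here, so the display step is only a sketch:

    import matplotlib.pyplot as plt

    # Step through the class-embedding interpolation for a single seed.
    for step, img in enumerate(interpolator.class_interpolate(seed=42, steps=10)):
        plt.subplot(2, 5, step + 1)
        plt.imshow(img)          # assumes a displayable (H, W, 3) image
        plt.axis('off')
    plt.show()

linear_interpolate() below follows the same generator pattern, but interpolates the label linearly rather than stepping through the class embedding.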
- generate(seed: int | List[int], embedding: torch.Tensor) torch.Tensor [source]¶
Generate an image from a given embedding.
- Parameters:
seed (int or list(int)) – Seed(s) for noise vector.
embedding (torch.Tensor) – Class embedding.
- Returns:
Image (float32, shape=(1, 3, height, width))
- Return type:
torch.Tensor
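A sketch of calling generate() with a blended embedding. Only the generate() call follows the signature above; the start/end embeddings and their dimension are hypothetical stand-ins for embeddings obtained from the GAN's mapping network.

    import torch

    # Hypothetical class embeddings; in practice these come from the GAN's
    # mapping network for the interpolator's start and end classes.
    embed_start = torch.randn(1, 512)   # 512 is illustrative only
    embed_end = torch.randn(1, 512)

    # Blend the embeddings halfway and generate an image for a fixed seed.
    t = 0.5
    embedding = (1 - t) * embed_start + t * embed_end
    img = interpolator.generate(seed=42, embedding=embedding)  # float32, (1, 3, H, W)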
- generate_end(seed: int) torch.Tensor [source]¶
Generate an image from the ending class.
- Parameters:
seed (int) – Seed for noise vector.
- Returns:
Image (float32, shape=(1, 3, height, width))
- Return type:
torch.Tensor
- generate_np_end(seed: int) ndarray [source]¶
Generate a numpy image from the ending class.
- Parameters:
seed (int) – Seed for noise vector.
- Returns:
Image (uint8, shape=(height, width, 3))
- Return type:
np.ndarray
- generate_np_from_embedding(seed: int, embedding: torch.Tensor) ndarray [source]¶
Generate a numpy image from a given embedding.
- Parameters:
seed (int) – Seed for noise vector.
embedding (torch.Tensor) – Class embedding.
- Returns:
Image (uint8, shape=(height, width, 3))
- Return type:
np.ndarray
- generate_np_start(seed: int) ndarray [source]¶
Generate a numpy image from the starting class.
- Parameters:
seed (int) – Seed for noise vector.
- Returns:
Image (uint8, shape=(height, width, 3))
- Return type:
np.ndarray
- generate_start(seed: int) torch.Tensor [source]¶
Generate an image from the starting class.
- Parameters:
seed (int) – Seed for noise vector.
- Returns:
Image (float32, shape=(1, 3, height, width))
- Return type:
torch.Tensor
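Because the *_np variants return uint8 arrays in (height, width, 3) order, they can be displayed directly; for example:

    import matplotlib.pyplot as plt

    seed = 42
    img_start = interpolator.generate_np_start(seed)  # uint8, (H, W, 3)
    img_end = interpolator.generate_np_end(seed)      # uint8, (H, W, 3)

    # Show the same seed rendered as the starting and ending class.
    fig, (ax0, ax1) = plt.subplots(1, 2)
    ax0.imshow(img_start); ax0.set_title('Start class'); ax0.axis('off')
    ax1.imshow(img_end);   ax1.set_title('End class');   ax1.axis('off')
    plt.show()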
- generate_tf_end(seed: int) Tuple[tf.Tensor, tf.Tensor] [source]¶
Create a processed Tensorflow image from the GAN output of a given seed and the ending class embedding.
- Parameters:
seed (int) – Seed for noise vector.
- Returns:
A tuple containing
tf.Tensor: Unprocessed resized image, uint8.
tf.Tensor: Processed resized image, standardized and normalized.
- generate_tf_from_embedding(seed: int | List[int], embedding: torch.Tensor) Tuple[tf.Tensor, tf.Tensor] [source]¶
Create a processed Tensorflow image from the GAN output of a given seed and embedding.
- Parameters:
seed (int or list(int)) – Seed(s) for noise vector.
embedding (torch.Tensor) – Class embedding.
- Returns:
A tuple containing
tf.Tensor: Unprocessed resized image, uint8.
tf.Tensor: Processed resized image, standardized and normalized.
- generate_tf_start(seed: int) Tuple[tf.Tensor, tf.Tensor] [source]¶
Create a processed Tensorflow image from the GAN output of a given seed and the starting class embedding.
- Parameters:
seed (int) – Seed for noise vector.
- Returns:
A tuple containing
tf.Tensor: Unprocessed resized image, uint8.
tf.Tensor: Processed resized image, standardized and normalized.
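A sketch of retrieving both the raw and processed images for a single seed; what "processed" entails (standardization/normalization) is assumed to depend on how the interpolator has been configured, e.g. an attached classifier or normalizer:

    # Raw (uint8) and processed (standardized/normalized) Tensorflow images
    # for the starting and ending classes of the same seed.
    raw_start, proc_start = interpolator.generate_tf_start(seed=42)
    raw_end, proc_end = interpolator.generate_tf_end(seed=42)
    print(raw_start.shape, proc_start.dtype)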
- interpolate_and_predict(seed: int, steps: int = 100, outcome_idx: int = 0) Tuple[List, ...] [source]¶
Interpolates between starting and ending classes for a seed, recording raw images, processed images, and predictions.
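For example (assuming a classifier has been attached with .set_classifier(), and that the three returned lists follow the description above):

    # Interpolate seed 42 between the start and end classes over 50 steps,
    # collecting raw images, processed images, and predictions at each step.
    raw_imgs, proc_imgs, preds = interpolator.interpolate_and_predict(seed=42, steps=50)
    print(len(raw_imgs), len(preds))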
- linear_interpolate(seed: int, steps: int = 100) Generator [source]¶
Sets up a generator that returns images during linear label interpolation.
- plot_comparison(seeds: int | List[int], titles: List[str] | None = None) None [source]¶
Plots side-by-side comparison of images from the starting and ending interpolation classes.
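For example:

    # Render start/end image pairs for three seeds.
    interpolator.plot_comparison([0, 1, 2])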
- seed_search(seeds: List[int], batch_size: int = 32, verbose: bool = False, outcome_idx: int = 0, concordance_thresholds: Iterable[float] | None = None) DataFrame [source]¶
Generates images for the starting and ending classes across many seeds, calculating layer activations using the classifier previously set with .set_classifier().
- Parameters:
- Raises:
Exception – If classifier model has not been set with .set_classifier()
- Returns:
Dataframe of results.
- Return type:
pd.core.frame.DataFrame
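A usage sketch; the .set_classifier() call is required beforehand, and its exact signature is an assumption here (a path to a trained Slideflow classifier is shown as a placeholder):

    # Attach a classifier (signature assumed), then screen 100 seeds for
    # concordance between the GAN class labels and classifier predictions.
    interpolator.set_classifier('/path/to/trained_model')  # placeholder path
    df = interpolator.seed_search(
        seeds=list(range(100)),
        batch_size=16,
        verbose=True,
    )
    print(df.head())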