pe.api.image.improved_diffusion_lib package

Submodules

pe.api.image.improved_diffusion_lib.gaussian_diffusion module

This module contains minor edits to the original code at https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py and https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/script_util.py that add support for sampling from the middle of the diffusion process via the start_t and start_image arguments.
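As a rough illustration of what these arguments do (not the library's actual implementation), a sampling loop that can start mid-process might look like the sketch below. The helper `denoise_step` is a hypothetical stand-in for a real DDIM/ancestral sampling step, and whether `start_image` is additionally re-noised to the level of `start_t` before the loop begins is an implementation detail not shown here:

```python
def sample_loop(denoise_step, num_timesteps, noise, start_t=0, start_image=None):
    """Sketch of a diffusion sampling loop that can start mid-process.

    With start_t == 0 the loop runs over all timesteps starting from pure
    noise; with start_t > 0 it runs only the last start_t denoising steps,
    starting from start_image instead of noise.
    """
    if start_t == 0:
        img = noise
        indices = list(range(num_timesteps))[::-1]  # T-1, ..., 1, 0
    else:
        img = start_image  # the real code may first noise this to level start_t
        indices = list(range(start_t))[::-1]  # start_t-1, ..., 1, 0
    for t in indices:
        img = denoise_step(img, t)
    return img
```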

class pe.api.image.improved_diffusion_lib.gaussian_diffusion.SkippedSpacedDiffusion(use_timesteps, **kwargs)[source]

Bases: SpacedDiffusion

ddim_reverse_sample(model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None, eta=0.0)[source]

Sample x_{t+1} from the model using DDIM reverse ODE.

ddim_sample(model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None, eta=0.0)[source]

Sample x_{t-1} from the model using DDIM.

Same usage as p_sample().

ddim_sample_loop(model, shape, noise=None, clip_denoised=True, denoised_fn=None, model_kwargs=None, device=None, progress=False, eta=0.0, start_t=0, start_image=None)[source]

Generate samples from the model using DDIM.

Same usage as p_sample_loop().

ddim_sample_loop_progressive(model, shape, noise=None, clip_denoised=True, denoised_fn=None, model_kwargs=None, device=None, progress=False, eta=0.0, start_t=0, start_image=None)[source]

Use DDIM to sample from the model and yield intermediate samples from each timestep of DDIM.

Same usage as p_sample_loop_progressive().

p_sample_loop(model, shape, noise=None, clip_denoised=True, denoised_fn=None, model_kwargs=None, device=None, progress=False, start_t=0, start_image=None)[source]

Generate samples from the model.

Parameters:
  • model – the model module.

  • shape – the shape of the samples, (N, C, H, W).

  • noise – if specified, the noise from the encoder to sample from; must be of the same shape as shape.

  • clip_denoised – if True, clip x_start predictions to [-1, 1].

  • denoised_fn – if not None, a function which applies to the x_start prediction before it is used to sample.

  • model_kwargs – if not None, a dict of extra keyword arguments to pass to the model. This can be used for conditioning.

  • device – if specified, the device to create the samples on. If not specified, use a model parameter’s device.

  • progress – if True, show a tqdm progress bar.

  • start_t – if nonzero, start sampling from timestep start_t of the diffusion process instead of from pure noise (see the module note above).

  • start_image – the image to start sampling from when start_t is nonzero.

Returns:

a non-differentiable batch of samples.

p_sample_loop_progressive(model, shape, noise=None, clip_denoised=True, denoised_fn=None, model_kwargs=None, device=None, progress=False, start_t=0, start_image=None)[source]

Generate samples from the model and yield intermediate samples from each timestep of diffusion.

Arguments are the same as p_sample_loop(). Returns a generator over dicts, where each dict is the return value of p_sample().
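The relationship between the loop and its progressive variant can be sketched generically: the progressive method yields one dict per timestep, and the non-progressive method simply keeps the last one. This is an illustrative sketch, not the library's code; the dict key "sample" follows the improved-diffusion convention for p_sample() return values:

```python
def sample_progressive(step_fn, num_timesteps, noise):
    """Yield one dict per timestep, mirroring p_sample_loop_progressive()."""
    img = noise
    for t in reversed(range(num_timesteps)):
        img = step_fn(img, t)
        yield {"sample": img, "t": t}

def sample(step_fn, num_timesteps, noise):
    """Return only the final sample, mirroring p_sample_loop()."""
    final = None
    for out in sample_progressive(step_fn, num_timesteps, noise):
        final = out
    return final["sample"]
```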

pe.api.image.improved_diffusion_lib.gaussian_diffusion.create_gaussian_diffusion(*, steps=1000, learn_sigma=False, sigma_small=False, noise_schedule='linear', use_kl=False, predict_xstart=False, rescale_timesteps=False, rescale_learned_sigmas=False, timestep_respacing='')[source]

pe.api.image.improved_diffusion_lib.unet module

This module contains minor edits to the original code at https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/unet.py and https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/script_util.py that avoid calling self.input_blocks.parameters(), which is not supported by DataParallel.
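The underlying issue can be reproduced without torch: DataParallel replicates a module per forward call, and the replicas do not expose their tensors through parameters(), so code that inspects a submodule's parameters at forward time sees an empty iterator. The sketch below uses a hypothetical stand-in class rather than the real UNetModel, and assumes the edited model's workaround is to report a fixed dtype instead of querying parameters (consistent with the FP32UNetModel name):

```python
import collections

# Hypothetical stand-in for a torch Parameter; only the dtype attribute matters here.
Param = collections.namedtuple("Param", ["dtype"])

class OriginalStyleModel:
    """Mimics code that inspects a submodule's parameters to discover its dtype."""

    def __init__(self, input_blocks_params):
        # Stand-in for self.input_blocks.parameters().
        self.input_blocks_params = input_blocks_params

    @property
    def inner_dtype(self):
        # Raises StopIteration when the parameter list is empty -- roughly
        # what goes wrong inside a DataParallel replica.
        return next(iter(self.input_blocks_params)).dtype

class FP32StyleModel(OriginalStyleModel):
    """Mimics the edited model: report a fixed dtype rather than querying
    parameters, so the property also works inside a replica."""

    @property
    def inner_dtype(self):
        return "float32"  # the real property would return a torch dtype
```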

class pe.api.image.improved_diffusion_lib.unet.FP32UNetModel(in_channels, model_channels, out_channels, num_res_blocks, attention_resolutions, dropout=0, channel_mult=(1, 2, 4, 8), conv_resample=True, dims=2, num_classes=None, use_checkpoint=False, num_heads=1, num_heads_upsample=-1, use_scale_shift_norm=False)[source]

Bases: UNetModel

property inner_dtype

Get the dtype used by the torso of the model.

training: bool

pe.api.image.improved_diffusion_lib.unet.create_model(image_size, num_channels, num_res_blocks, learn_sigma, class_cond, use_checkpoint, attention_resolutions, num_heads, num_heads_upsample, use_scale_shift_norm, dropout)[source]