ed.BiGANInference
Class BiGANInference
Inherits From: GANInference
Aliases:
- Class ed.BiGANInference
- Class ed.inferences.BiGANInference
Defined in edward/inferences/bigan_inference.py.
Adversarially Learned Inference (Dumoulin et al., 2017) or Bidirectional Generative Adversarial Networks (Donahue, Krähenbühl, & Darrell, 2017) for joint learning of generator and inference networks.
Works for the class of implicit (and differentiable) probabilistic models. These models do not require a tractable density and assume only a program that generates samples.
Notes
BiGANInference matches a mapping from data to latent variables and a mapping from latent variables to data through a joint discriminator.
In building the computation graph for inference, the discriminator's parameters can be accessed with the variable scope "Disc", and the encoder and decoder parameters with the variable scope "Gen".
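For example, those parameter lists can be pulled from the graph with standard TensorFlow collections; a minimal sketch (no Edward-specific API assumed):

import tensorflow as tf

# Retrieve the trainable variables for each sub-network by scope name.
disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="Disc")
gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="Gen")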
The objective function also adds to itself a summation over all tensors in the REGULARIZATION_LOSSES collection.
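For instance, a weight penalty registered in that collection is folded into the objective automatically; a minimal sketch, where some_weight stands in for any trainable variable (hypothetical name):

# Summed into the objective because it sits in REGULARIZATION_LOSSES.
l2_penalty = 1e-4 * tf.nn.l2_loss(some_weight)  # some_weight: hypothetical variable
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, l2_penalty)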
Examples
import edward as ed
import tensorflow as tf

with tf.variable_scope("Gen"):
  xf = gen_data(z_ph)    # decoder: map latent samples to data
  zf = gen_latent(x_ph)  # encoder: map data to latent samples
inference = ed.BiGANInference({z_ph: zf}, {xf: x_ph}, discriminator)
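The placeholders and networks in the example are user-defined. A sketch of definitions that would make it self-contained; the layer sizes, dimensions, and the two-argument discriminator signature are illustrative assumptions, not part of the documented API:

d, D = 10, 784  # latent and data dimensions (illustrative)
z_ph = tf.placeholder(tf.float32, [None, d])  # latent samples
x_ph = tf.placeholder(tf.float32, [None, D])  # data batch

def gen_data(z):
  # decoder network: latent -> data
  h = tf.layers.dense(z, 256, activation=tf.nn.relu)
  return tf.layers.dense(h, D)

def gen_latent(x):
  # encoder network: data -> latent
  h = tf.layers.dense(x, 256, activation=tf.nn.relu)
  return tf.layers.dense(h, d)

def discriminator(x, z):
  # joint discriminator over (data, latent) pairs
  h = tf.layers.dense(tf.concat([x, z], 1), 256, activation=tf.nn.relu)
  return tf.layers.dense(h, 1)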
Methods
__init__
__init__(
    latent_vars,
    data,
    discriminator
)
build_loss_and_gradients
build_loss_and_gradients(var_list)
finalize
finalize()
Function to call after convergence.
initialize
initialize(
    optimizer=None,
    optimizer_d=None,
    global_step=None,
    global_step_d=None,
    var_list=None,
    *args,
    **kwargs
)
Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm’s computation graph.
Args:
optimizer: str or tf.train.Optimizer. A TensorFlow optimizer, to use for optimizing the generator objective. Alternatively, one can pass in the name of a TensorFlow optimizer, and default parameters for the optimizer will be used.
optimizer_d: str or tf.train.Optimizer. A TensorFlow optimizer, to use for optimizing the discriminator objective. Alternatively, one can pass in the name of a TensorFlow optimizer, and default parameters for the optimizer will be used.
global_step: tf.Variable. Optional Variable to increment by one after the variables for the generator have been updated. See tf.train.Optimizer.apply_gradients.
global_step_d: tf.Variable. Optional Variable to increment by one after the variables for the discriminator have been updated. See tf.train.Optimizer.apply_gradients.
var_list: list of tf.Variable. List of TensorFlow variables to optimize over (in the generative model). Default is all trainable variables that latent_vars and data depend on.
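A sketch of a typical call; the Adam learning rates and the n_iter keyword (forwarded to the base inference class) are illustrative:

inference.initialize(
    optimizer=tf.train.AdamOptimizer(1e-4),    # generator objective
    optimizer_d=tf.train.AdamOptimizer(1e-4),  # discriminator objective
    n_iter=10000)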
print_progress
print_progress(info_dict)
Print progress to output.
run
run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)
A simple wrapper to run inference.
- Initialize algorithm via initialize.
- (Optional) Build a TensorFlow summary writer for TensorBoard.
- (Optional) Initialize TensorFlow variables.
- (Optional) Start queue runners.
- Run update for self.n_iter iterations.
- While running, print_progress.
- Finalize algorithm via finalize.
- (Optional) Stop queue runners.
To customize the way inference is run, run these steps individually.
Args:
variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
*args, **kwargs: Passed into initialize.
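A sketch of typical usage; the n_iter and n_print keywords are forwarded to initialize and are illustrative here:

# Initializes, trains for n_iter iterations, prints progress, and finalizes.
inference.run(n_iter=10000, n_print=100)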
update
update(
    feed_dict=None,
    variables=None
)
Run one iteration of optimization.
Args:
feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.
variables: str. Which set of variables to update. Either "Disc" or "Gen". Default is both.
Returns:
dict. Dictionary of algorithm-specific information. In this case, the iteration number and generative and discriminative losses.
Notes
The reported iteration number is the total number of calls to update. Each update may include updating only a subset of parameters.
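For a custom schedule, the steps wrapped by run can be called individually; a minimal sketch that alternates one discriminator step with one generator step (the session setup follows the usual Edward pattern and is assumed, not prescribed):

inference.initialize(n_iter=10000)
sess = ed.get_session()
tf.global_variables_initializer().run()
for _ in range(inference.n_iter // 2):
  inference.update(variables="Disc")             # discriminator step only
  info_dict = inference.update(variables="Gen")  # generator step only
  inference.print_progress(info_dict)
inference.finalize()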
Donahue, J., Krähenbühl, P., & Darrell, T. (2017). Adversarial Feature Learning. In International conference on learning representations.
Dumoulin, V., Belghazi, I., Poole, B., Lamb, A., Arjovsky, M., Mastropietro, O., & Courville, A. (2017). Adversarially Learned Inference. In International conference on learning representations.