ed.VariationalInference
Class VariationalInference
Inherits From: Inference
Aliases:
- Class ed.VariationalInference
- Class ed.inferences.VariationalInference
Defined in edward/inferences/variational_inference.py.
Abstract base class for variational inference. Specific variational inference methods inherit from VariationalInference, sharing methods such as a default optimizer.
To build an algorithm inheriting from VariationalInference, one must at minimum implement build_loss_and_gradients: it determines the loss function and gradients to apply for a given optimizer.
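For concreteness, here is a minimal sketch of such a subclass. The class name MyVariationalMethod and its loss are purely illustrative, and the sketch assumes the convention followed by the built-in methods: build_loss_and_gradients returns a scalar loss tensor together with a list of (gradient, variable) pairs, which are then consumed by the optimizer set up in initialize.

# Sketch only: a toy subclass of VariationalInference. The loss below is
# illustrative (a MAP-style negative log-probability), not a real
# variational objective.
import six
import tensorflow as tf
from edward.inferences.variational_inference import VariationalInference

class MyVariationalMethod(VariationalInference):
  def build_loss_and_gradients(self, var_list):
    # Illustrative loss over the latent variables and their variational
    # approximations stored by the parent class (assumed attribute:
    # self.latent_vars, a dict mapping model variables to approximations).
    loss = 0.0
    for z, qz in six.iteritems(self.latent_vars):
      # Score each approximation's sample under the corresponding prior.
      loss -= tf.reduce_sum(z.log_prob(qz.value()))
    # Return the scalar loss and (gradient, variable) pairs; these are
    # handed to the optimizer built in initialize().
    grads = tf.gradients(loss, var_list)
    grads_and_vars = list(zip(grads, var_list))
    return loss, grads_and_vars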
Methods
__init__
__init__(
*args,
**kwargs
)
build_loss_and_gradients
build_loss_and_gradients(var_list)
Build the loss function and its gradients. They are used by an optimizer to update the model and variational parameters.
Any derived class of VariationalInference must implement this method.
Raises:
NotImplementedError.
finalize
finalize()
Function to call after convergence.
initialize
initialize(
optimizer=None,
var_list=None,
use_prettytensor=False,
global_step=None,
*args,
**kwargs
)
Initialize the inference algorithm. This initializes hyperparameters and builds ops for the algorithm's computation graph.
Args:
- optimizer: str or tf.train.Optimizer. A TensorFlow optimizer to use for optimizing the variational objective. Alternatively, one can pass in the name of a TensorFlow optimizer, in which case default parameters for that optimizer are used.
- var_list: list of tf.Variable. List of TensorFlow variables to optimize over. Default is all trainable variables that latent_vars and data depend on, excluding those that are only used in conditionals in data.
- use_prettytensor: bool. True to use the PrettyTensor optimizer (when using PrettyTensor); False to use a TensorFlow optimizer. Defaults to TensorFlow.
- global_step: tf.Variable. A TensorFlow variable to hold the global step.
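As a usage sketch, assuming a toy Normal-Normal model and Edward's built-in ed.KLqp (a subclass of VariationalInference), a specific optimizer can be passed to initialize; n_iter is a keyword argument handled by the parent Inference class.

import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

x_train = np.random.randn(50).astype(np.float32)

# Toy model: latent z with 50 conditionally independent observations x.
z = Normal(loc=0.0, scale=1.0)
x = Normal(loc=z, scale=1.0, sample_shape=50)
qz = Normal(loc=tf.Variable(0.0), scale=tf.nn.softplus(tf.Variable(1.0)))

inference = ed.KLqp({z: qz}, data={x: x_train})
# Use RMSProp instead of the default optimizer; per the optimizer argument
# above, a string name should also be accepted.
inference.initialize(optimizer=tf.train.RMSPropOptimizer(1e-3), n_iter=500)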
print_progress
print_progress(info_dict)
Print progress to output.
run
run(
variables=None,
use_coordinator=True,
*args,
**kwargs
)
A simple wrapper to run inference.
- Initialize algorithm via initialize.
- (Optional) Build a TensorFlow summary writer for TensorBoard.
- (Optional) Initialize TensorFlow variables.
- (Optional) Start queue runners.
- Run update for self.n_iter iterations.
- While running, print_progress.
- Finalize algorithm via finalize.
- (Optional) Stop queue runners.
To customize the way inference is run, run these steps individually; a sketch of such a customized loop appears after update below.
Args:
- variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
- use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
- *args, **kwargs: Passed into initialize.
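In the common case a single call to run covers all of the steps above; a sketch, reusing the inference object from the initialize example (extra keyword arguments such as n_iter are forwarded to initialize):

inference = ed.KLqp({z: qz}, data={x: x_train})
# Builds the graph, runs 500 update() iterations with progress printing,
# and finalizes the algorithm.
inference.run(n_iter=500)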
update
update(feed_dict=None)
Run one iteration of optimization.
Args:
- feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.
Returns:
dict. Dictionary of algorithm-specific information. In this case, the loss function value after one iteration.
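To customize the loop (as noted under run), the same steps can be called individually; a sketch, assuming an already-constructed inference object and a default or interactive session, for example one created via ed.get_session():

import tensorflow as tf

inference.initialize(n_iter=500)
tf.global_variables_initializer().run()  # needs a default session, e.g. from ed.get_session()

for _ in range(inference.n_iter):
  info_dict = inference.update()         # one optimization step; returns the current loss
  inference.print_progress(info_dict)

inference.finalize()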