ed.ScoreRBKLqp

Class ScoreRBKLqp

Inherits From: VariationalInference

Aliases:

  • Class ed.ScoreRBKLqp
  • Class ed.inferences.ScoreRBKLqp

Defined in edward/inferences/klqp.py.

Variational inference with the KL divergence

$\text{KL}( q(z; \lambda) \,\|\, p(z \mid x) ).$

This class minimizes the objective using the score function gradient and Rao-Blackwellization.

Notes

Current Rao-Blackwellization is limited to Rao-Blackwellizing across stochastic nodes in the computation graph. It does not Rao-Blackwellize within a node, such as when a single node represents multiple random variables via a non-scalar batch shape.

The objective function also includes the sum of all tensors in the REGULARIZATION_LOSSES collection.
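
A minimal sketch of fitting a toy normal-location model with ed.ScoreRBKLqp, assuming Edward 1.x on TensorFlow 1.x; the data array x_train is synthetic and the hyperparameter choices are arbitrary.

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Synthetic data: 50 draws from a unit-variance Gaussian (illustration only).
x_train = np.random.randn(50).astype(np.float32)

# Model: unknown mean mu with a standard normal prior.
mu = Normal(loc=0.0, scale=1.0)
x = Normal(loc=mu, scale=1.0, sample_shape=50)

# Variational family q(mu; lambda) with free location and scale parameters.
qmu = Normal(loc=tf.Variable(0.0),
             scale=tf.nn.softplus(tf.Variable(0.0)))

# Minimize KL(q || p) with score-function gradients and Rao-Blackwellization.
inference = ed.ScoreRBKLqp({mu: qmu}, data={x: x_train})
inference.run(n_samples=5, n_iter=500)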

Methods

__init__

__init__(
    latent_vars=None,
    data=None
)

Create an inference algorithm.

Args:

  • latent_vars: list of RandomVariable or dict of RandomVariable to RandomVariable. Collection of random variables to perform inference on. If list, each random variable will be implicitly optimized using a Normal random variable that is defined internally with a free parameter per location and scale and is initialized using standard normal draws. The random variables to approximate must be continuous. (Both forms are sketched below.)
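
Both forms, sketched with the toy model from above (mu, qmu, x, x_train):

# Dict form: bind each latent variable to an explicit approximating random variable.
inference = ed.ScoreRBKLqp({mu: qmu}, data={x: x_train})

# List form: a Normal approximation with free location and scale parameters
# is constructed internally for mu, initialized from standard normal draws.
inference = ed.ScoreRBKLqp([mu], data={x: x_train})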

build_loss_and_gradients

build_loss_and_gradients(var_list)

finalize

finalize()

Function to call after convergence.

initialize

initialize(
    n_samples=1,
    *args,
    **kwargs
)

Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm’s computation graph.

Args:

  • n_samples: int. Number of samples from variational model for calculating stochastic gradients.
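
For example, continuing the toy model above, a larger n_samples spends more computation per iteration in exchange for lower-variance score-function gradient estimates:

inference = ed.ScoreRBKLqp({mu: qmu}, data={x: x_train})
inference.initialize(n_samples=10)  # default is n_samples=1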

print_progress

print_progress(info_dict)

Print progress to output.

run

run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)

A simple wrapper to run inference.

  1. Initialize algorithm via initialize.
  2. (Optional) Build a TensorFlow summary writer for TensorBoard.
  3. (Optional) Initialize TensorFlow variables.
  4. (Optional) Start queue runners.
  5. Run update for self.n_iter iterations.
  6. While running, print_progress.
  7. Finalize algorithm via finalize.
  8. (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

Args:

  • variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
  • use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
  • *args, **kwargs: Passed into initialize.
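
A sketch of running these steps by hand, continuing the toy model above; since run is bypassed, the TensorFlow variables are initialized explicitly:

inference = ed.ScoreRBKLqp({mu: qmu}, data={x: x_train})
inference.initialize(n_samples=5, n_iter=1000)

sess = ed.get_session()
sess.run(tf.global_variables_initializer())

for _ in range(inference.n_iter):
    info_dict = inference.update()
    inference.print_progress(info_dict)

inference.finalize()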

update

update(feed_dict=None)

Run one iteration of optimization.

Args:

  • feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:

dict. Dictionary of algorithm-specific information. In this case, the loss function value after one iteration.
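
If an observed variable is bound to a tf.placeholder instead of a fixed array, update is where it gets fed. A sketch reusing mu and qmu from above, where x_batch is a hypothetical NumPy array of shape (50,):

x_ph = tf.placeholder(tf.float32, [50])
x = Normal(loc=mu, scale=1.0, sample_shape=50)

inference = ed.ScoreRBKLqp({mu: qmu}, data={x: x_ph})
inference.initialize()
ed.get_session().run(tf.global_variables_initializer())

# The returned dict carries the loss value for this iteration (see Returns above).
info_dict = inference.update(feed_dict={x_ph: x_batch})
print(info_dict['loss'])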