ed.MonteCarlo

Class MonteCarlo

Inherits From: Inference

Aliases:

  • Class ed.MonteCarlo
  • Class ed.inferences.MonteCarlo

Defined in edward/inferences/monte_carlo.py.

Abstract base class for Monte Carlo. Specific Monte Carlo methods inherit from MonteCarlo and share the methods defined in this class.

To build an algorithm inheriting from MonteCarlo, one must at minimum implement build_update: it determines how to assign the samples in the Empirical approximations.

Notes

The number of Monte Carlo iterations is set according to the minimum of all Empirical sizes.

Initialization is assumed from params[0, :]. This generalizes initializing randomly and initializing from user input. Updates are along this outer dimension, where iteration t updates params[t, :] in each Empirical random variable.

No warm-up is implemented. Users must run MCMC for a long period of time, then manually burn in the Empirical random variable.
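As a rough sketch of such a manual burn-in (plain NumPy, not the Edward API; the shapes and the burn_in length are illustrative assumptions), one simply discards the leading rows of the sampled parameters along the outer iteration dimension:

```python
import numpy as np

T = 1000       # total Monte Carlo iterations
burn_in = 200  # leading draws to discard by hand

# params has shape [T, D]: iteration t fills params[t, :].
rng = np.random.default_rng(0)
params = rng.normal(size=(T, 3))

# Manual burn-in: keep only the draws after the warm-up period.
kept = params[burn_in:]
posterior_mean = kept.mean(axis=0)
print(kept.shape)  # (800, 3)
```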

Examples

Most explicitly, MonteCarlo is specified via a dictionary:

qpi = Empirical(params=tf.Variable(tf.zeros([T, K-1])))
qmu = Empirical(params=tf.Variable(tf.zeros([T, K*D])))
qsigma = Empirical(params=tf.Variable(tf.zeros([T, K*D])))
ed.MonteCarlo({pi: qpi, mu: qmu, sigma: qsigma}, data)

The inferred posterior consists of Empirical random variables with T samples. We also automate the specification of Empirical random variables: one can pass in a list of latent variables instead:

ed.MonteCarlo([beta], data)
ed.MonteCarlo([pi, mu, sigma], data)

It defaults to Empirical random variables with 10,000 samples for each dimension.

Methods

__init__

__init__(
    latent_vars=None,
    data=None
)

Create an inference algorithm.

Args:

  • latent_vars: list or dict. Collection of random variables (of type RandomVariable or tf.Tensor) to perform inference on. If a list, each random variable will be approximated using an Empirical random variable that is defined internally (with unconstrained support). If a dictionary, each value in the dictionary must be an Empirical random variable.
  • data: dict. Data dictionary which binds observed variables (of type RandomVariable or tf.Tensor) to their realizations (of type tf.Tensor). It can also bind placeholders (of type tf.Tensor) used in the model to their realizations.

build_update

build_update()

Build update rules, returning an assign op for parameters in the Empirical random variables.

Any derived class of MonteCarlo must implement this method.

Raises:

NotImplementedError.
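This contract can be mimicked in plain Python (no Edward dependency; the class and method names below are hypothetical stand-ins, not the Edward implementation): the base class raises NotImplementedError, and a derived class supplies the rule that writes iteration t's sample into params[t, :].

```python
import numpy as np


class MonteCarloSketch:
    """Toy stand-in for the MonteCarlo base class (illustration only)."""

    def __init__(self, n_iter, dim, seed=0):
        self.t = 0
        # params[0, :] holds the initialization; row t holds iteration t's sample.
        self.params = np.zeros((n_iter, dim))
        self.rng = np.random.default_rng(seed)

    def build_update(self):
        # The base class leaves the update rule to derived classes.
        raise NotImplementedError()

    def update(self):
        self.t += 1
        self.build_update()


class RandomWalkSketch(MonteCarloSketch):
    """Derived class: assigns a new sample into params[t, :]."""

    def build_update(self):
        proposal = self.params[self.t - 1] + self.rng.normal(size=self.params.shape[1])
        self.params[self.t] = proposal


sampler = RandomWalkSketch(n_iter=5, dim=2)
for _ in range(4):
    sampler.update()
print(sampler.params.shape)  # (5, 2)
```

In Edward itself, build_update would instead return a TensorFlow assign op over the Empirical parameters rather than mutating a NumPy array.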

finalize

finalize()

Function to call after convergence.

initialize

initialize(
    *args,
    **kwargs
)

print_progress

print_progress(info_dict)

Print progress to output.

run

run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)

A simple wrapper to run inference.

  1. Initialize algorithm via initialize.
  2. (Optional) Build a TensorFlow summary writer for TensorBoard.
  3. (Optional) Initialize TensorFlow variables.
  4. (Optional) Start queue runners.
  5. Run update for self.n_iter iterations.
  6. While running, print_progress.
  7. Finalize algorithm via finalize.
  8. (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.
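A minimal sketch of running these steps by hand, using a stand-in object with the method names from this page (illustration only; real Edward code would also need a TensorFlow session and variable initialization):

```python
class StubInference:
    """Stand-in with the same method names as ed.MonteCarlo (not the real class)."""

    n_iter = 3

    def initialize(self):
        print("initialize")

    def update(self, feed_dict=None):
        # A real algorithm returns algorithm-specific info, e.g. an acceptance rate.
        return {"t": 1, "accept_rate": 1.0}

    def print_progress(self, info_dict):
        print("progress:", info_dict)

    def finalize(self):
        print("finalize")


inference = StubInference()

# Steps 1, 5, 6, and 7 of `run`, performed manually:
inference.initialize()
for _ in range(inference.n_iter):
    info_dict = inference.update()
    inference.print_progress(info_dict)
inference.finalize()
```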

Args:

  • variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
  • use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
  • *args, **kwargs: Passed into initialize.

update

update(feed_dict=None)

Run one iteration of sampling.

Args:

  • feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:

dict. Dictionary of algorithm-specific information. In this case, the acceptance rate of samples since (and including) this iteration.

Notes

We run the increment of t separately from the other ops. Otherwise, whether those ops see t before or after the increment would depend on which happens to run first in the TensorFlow graph. Running the increment separately forces consistent behavior.