networkqit.algorithms.optimize.StochasticOptimizer
class StochasticOptimizer(G: autograd.numpy.numpy_wrapper.array, x0: autograd.numpy.numpy_wrapper.array, model: networkqit.graphtheory.models.GraphModel.GraphModel, **kwargs)

This class is the base for the implementation of methods based on stochastic gradient descent. The idea behind this class is to help the user design a stochastic gradient descent method, such as ADAM, AdaGrad, or older methods like the Robbins-Monro stochastic approximation. Working out the expression for the gradients of the relative entropy, one obtains:
.. math::

   \nabla_{\theta}S(\rho \| \sigma) = \beta\,\textrm{Tr}\biggl\lbrack \rho\,\nabla_{\theta}\mathbb{E}_{\theta}[L]\biggr\rbrack

.. math::

   \frac{\partial S(\rho \| \sigma)}{\partial \theta_k} = \beta\,\textrm{Tr}\biggl\lbrack \rho\,\frac{\partial \mathbb{E}_{\theta}[L]}{\partial \theta_k}\biggr\rbrack + \frac{\partial}{\partial \theta_k}\,\mathbb{E}_{\theta}\log\textrm{Tr}\lbrack e^{-\beta L(\theta)}\rbrack
This class requires either TensorFlow or PyTorch to support backpropagation through the eigenvalue routines. Alternatively, you can use autograd (github.com/HIPS/autograd) for full CPU support.
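As an illustration of the two gradient terms above, the following is a minimal autograd sketch for a hypothetical one-parameter-per-node model. The names expected_laplacian and grad_rel_entropy are illustrative, not part of the networkqit API, and for brevity the log-trace term is evaluated at the expected Laplacian rather than averaged over sampled graphs::

    import autograd.numpy as np
    from autograd import grad

    def expected_laplacian(theta):
        # Toy model (an assumption, not a networkqit model): one parameter per
        # node, link (i, j) present with probability sigmoid(theta_i + theta_j).
        n = theta.shape[0]
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] + theta[None, :])))
        A = P * (1.0 - np.eye(n))                 # expected adjacency, zero diagonal
        return np.diag(np.sum(A, axis=1)) - A     # expected Laplacian E_theta[L]

    def grad_rel_entropy(theta, rho, beta):
        # First term of the formula above: beta * Tr[rho dE_theta[L]/dtheta_k].
        energy = lambda th: np.trace(np.dot(rho, expected_laplacian(th)))
        # Second term: d/dtheta_k log Tr e^{-beta L(theta)}, here evaluated at
        # the expected Laplacian instead of averaging over sampled graphs.
        def log_trace(th):
            lam = np.linalg.eigh(expected_laplacian(th))[0]
            return np.log(np.sum(np.exp(-beta * lam)))
        return beta * grad(energy)(theta) + grad(log_trace)(theta)

    # Example: gradient at a random guess for a random 10-node observed graph.
    n, beta = 10, 1.0
    R = np.random.rand(n, n)
    A_obs = ((R + R.T) > 1.0) * (1.0 - np.eye(n))    # symmetric 0/1 adjacency
    lam, V = np.linalg.eigh(np.diag(np.sum(A_obs, axis=1)) - A_obs)
    w = np.exp(-beta * lam)
    rho = (V * (w / np.sum(w))) @ V.T                # density matrix of observed graph
    print(grad_rel_entropy(np.random.randn(n), rho, beta))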
__init__(G: autograd.numpy.numpy_wrapper.array, x0: autograd.numpy.numpy_wrapper.array, model: networkqit.graphtheory.models.GraphModel.GraphModel, **kwargs)

Initialize the stochastic optimizer with the observed graph, an initial guess, and the model to optimize.

Args:
    G (numpy.array): the empirical network to study, given as an N x N adjacency matrix.
    x0 (numpy.array): the k-element array of initial parameter estimates, typically set at random.
    model (nq.GraphModel): the graph model whose likelihood is to be optimized.
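Since StochasticOptimizer is a base class, one normally instantiates a concrete subclass. A minimal sketch, assuming the Adam optimizer subclass and the UBCM model exposed by networkqit (check the installed version for the exact names)::

    import autograd.numpy as np
    import networkqit as nq

    A = np.loadtxt('adjacency.txt')          # placeholder path to an N x N adjacency matrix
    N = A.shape[0]
    model = nq.UBCM(N=N)                     # assumed: undirected binary configuration model
    x0 = np.random.random(N)                 # k-element initial guess, set at random
    opt = nq.Adam(G=A, x0=x0, model=model)   # assumed SGD subclass of StochasticOptimizer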
Methods

__init__(G, x0, model, **kwargs)
    Initialize the stochastic optimizer with the observed graph, an initial guess, and the model to optimize.

gradient(x, rho, beta[, batch_size])
    Compute a stochastic estimate of the gradient of the relative entropy at the parameters x.
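Continuing the instantiation sketch above, a hedged example of a gradient call; compute_vonneuman_density and graph_laplacian are assumed networkqit helpers for building the observed density matrix (check the installed version)::

    # rho = e^{-beta L(G)} / Tr e^{-beta L(G)} for the observed graph A
    beta = 1.0
    rho = nq.compute_vonneuman_density(L=nq.graph_laplacian(A), beta=beta)
    g = opt.gradient(x0, rho, beta, batch_size=32)   # batch_size: sampled graphs per estimate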