Algorithms

This submodule contains the definitions of various algorithms used within the spectral entropy framework. Most of them are optimization algorithms for model fitting.

Optimization methods

Defines the base and inherited classes for model optimization, both in the continuous approximation and for stochastic optimization.

The ModelOptimizer class is the base class from which all other optimization classes must inherit. The most important class for optimizing an expected adjacency model is ContinuousOptimizer. In this class the gradients are defined as:

\[\frac{\partial S(\boldsymbol \rho \| \boldsymbol \sigma(\mathbb{E}_{\boldsymbol \theta}[\mathbf{L}]))}{\partial \theta_k} = \beta \textrm{Tr}\biggl \lbrack \left(\boldsymbol \rho - \boldsymbol \sigma(\mathbb{E}_{\boldsymbol \theta}[\mathbf{L}])\right)\frac{\partial \mathbb{E}_{\boldsymbol \theta}[\mathbf{L}]}{\partial \theta_k} \biggr \rbrack\]
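As a concrete illustration, the gradient above can be evaluated numerically from the expected Laplacian and its parameter derivative. The sketch below uses NumPy/SciPy; the function name and argument layout are illustrative, not the networkqit API.

```python
import numpy as np
from scipy.linalg import expm

def continuous_gradient(rho, expected_L, dL_dtheta, beta):
    """Gradient of S(rho || sigma(E[L])) w.r.t. a single parameter theta_k.

    rho        : (n, n) observed density matrix
    expected_L : (n, n) expected Laplacian E[L] at the current parameters
    dL_dtheta  : (n, n) derivative of E[L] w.r.t. theta_k
    beta       : inverse temperature
    """
    # sigma(E[L]) = exp(-beta E[L]) / Tr[exp(-beta E[L])]
    M = expm(-beta * expected_L)
    sigma = M / np.trace(M)
    # beta * Tr[(rho - sigma) dE[L]/dtheta_k]
    return beta * np.trace((rho - sigma) @ dL_dtheta)
```

Note that when \(\boldsymbol \rho = \boldsymbol \sigma\) the gradient vanishes, as expected at the optimum.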

The StochasticOptimizer class instead implements stochastic gradient descent methods. In these methods the gradients are defined as:

\[\frac{\partial \mathbb{E}_{\boldsymbol \theta}[S(\boldsymbol \rho \| \boldsymbol \sigma)]}{\partial \theta_k} = \beta \textrm{Tr}\biggl \lbrack \boldsymbol \rho \frac{\partial \mathbb{E}_{\boldsymbol \theta}[\mathbf{L}]}{\partial \theta_k}\biggr \rbrack + \frac{\partial}{\partial \theta_k}\mathbb{E}_{\boldsymbol \theta}\biggl \lbrack \log \left( \textrm{Tr}\left \lbrack e^{-\beta \mathbf{L}(\boldsymbol \theta)} \right \rbrack \right) \biggr \rbrack\]

The stochastic optimizer is the correct optimizer, as it makes no approximation on the Laplacian eigenvalues. It is better suited to small graphs and intermediate \(\beta\), where the differences between the random matrix spectrum and its expected counterpart are non-negligible. For sufficiently large and dense graphs, however, the ContinuousOptimizer works well and yields deterministic results, as the optimization landscape is smooth.

In order to minimize the expected relative entropy, then, we need both the expected Laplacian formula, which is simple to obtain, and a way to estimate the second summand in the gradient, which involves averaging the log-trace of \(e^{-\beta \mathbf{L}(\boldsymbol \theta)}\) over different realizations. The better the approximation to the expected log-trace \(\mathbb{E}_{\boldsymbol \theta}[\log \textrm{Tr}[\exp{(-\beta \mathbf{L})}]]\), the better the estimate of the gradients.
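A straightforward way to approximate the expected log-trace is plain Monte Carlo averaging over Laplacian realizations. The sketch below assumes a user-supplied sampler and uses a log-sum-exp over the eigenvalues for stability at large \(\beta\); it is an illustration, not the library's implementation.

```python
import numpy as np

def expected_logtrace(sample_laplacian, beta, n_samples=100, rng=None):
    """Monte Carlo estimate of E_theta[log Tr[exp(-beta L)]].

    sample_laplacian : callable taking an rng and returning one random
                       (symmetric) Laplacian realization as an (n, n) array
    """
    rng = np.random.default_rng() if rng is None else rng
    vals = []
    for _ in range(n_samples):
        L = sample_laplacian(rng)
        eigs = np.linalg.eigvalsh(L)            # L is symmetric
        # log-sum-exp of -beta * eigenvalues, for numerical stability
        m = (-beta * eigs).max()
        vals.append(m + np.log(np.exp(-beta * eigs - m).sum()))
    return float(np.mean(vals))
```

The variance of this estimator shrinks as the number of sampled realizations grows, which is what makes the gradient estimate usable in stochastic descent.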

Finally, the MLEOptimizer maximizes the standard likelihood of a model; it is not related to the spectral entropies framework introduced in the paper on which networkqit is based.

MLEOptimizer(G, x0, model, **kwargs)

This class, inheriting from the model optimizer class, solves the problem of maximum likelihood parameter estimation in the classical setting.

ContinuousOptimizer(A, x0, beta_range, …)

Continuous optimization method for the spectral entropy in the continuous approximation S(rho || sigma(E[L]))

StochasticOptimizer(G, x0, model, **kwargs)

This class is the base for implementations of methods based on stochastic gradient descent.

Adam(G, x0, model, **kwargs)

Implements the ADAM stochastic gradient descent.
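For reference, a single ADAM update takes the standard textbook form below. This is a generic sketch with illustrative names, not the Adam class of this module, which additionally wires the update into the stochastic spectral entropy gradients.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update; returns (new_theta, m, v).

    m, v : running first/second moment estimates (start at 0)
    t    : 1-based step counter, used for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad          # first moment
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The per-coordinate scaling by the second moment is what makes ADAM robust to the noisy gradient estimates produced by sampling.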

Basinhopping modified

The basinhopping global optimization algorithm, with modifications to work within the spectral entropies framework. The modifications are very small compared to the original scipy implementation of basinhopping, but they are necessary for problems with a large number of local minima and for fitting classical maximum entropy models with highly non-linear saddle-point equations.

Storage(minres)

Class used to store the lowest-energy structure.

BasinHoppingRunner(x0, minimizer, …[, disp])

This class implements the core of the basinhopping algorithm.

AdaptiveStepsize(takestep[, accept_rate, …])

Class to implement adaptive stepsize.

RandomDisplacement([stepsize, random_state])

Adds a random displacement of maximum size stepsize to each coordinate.

MinimizerWrapper(minimizer[, func])

Wraps a minimizer function as a minimizer class.

Metropolis(T[, random_state])

Metropolis acceptance criterion

BHBounds(xmin, xmax)

BHRandStepBounded(xmin, xmax[, stepsize])

Bounded random displacement; see https://stackoverflow.com/a/21967888/2320035. Modified to drop acceptance-rejection sampling in favor of a more specialized approach.
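One simple way to realize a bounded displacement without acceptance-rejection sampling is to clip the proposed point back into the box. The class below is a hypothetical sketch of that idea, not the BHRandStepBounded implementation itself.

```python
import numpy as np

class BoundedRandomStep:
    """Random displacement that keeps every coordinate inside [xmin, xmax]."""

    def __init__(self, xmin, xmax, stepsize=0.5, rng=None):
        self.xmin = np.asarray(xmin, dtype=float)
        self.xmax = np.asarray(xmax, dtype=float)
        self.stepsize = stepsize
        self.rng = np.random.default_rng() if rng is None else rng

    def __call__(self, x):
        step = self.rng.uniform(-self.stepsize, self.stepsize, size=len(x))
        # clip the proposal instead of resampling until it lands in bounds
        return np.clip(x + step, self.xmin, self.xmax)
```

Clipping slightly distorts the proposal distribution near the boundary, which is the trade-off against exact acceptance-rejection sampling.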

Community detection utilities

Here a number of community detection utilities are implemented, both for manipulating membership vectors and for detecting block structures in networks.

comm_mat(adj, memb)

Returns multiple community-related quantities.

comm_assortativity(A, memb)

This function computes the modular group assortativity and the Newman modularity, as in Eqs. 18-19 of the Peixoto paper “Nonparametric weighted stochastic block models”, https://arxiv.org/pdf/1708.01432.pdf

reindex_membership(memb[, key, …])

This function takes a membership vector as input and returns a membership vector in which the community labels are ordered by the number of nodes in each community.

reassign_singletons(memb)

Set all the singleton communities into the same community.
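A minimal sketch of this operation (illustrative name, not the actual reassign_singletons implementation): every community of size one is moved into a single fresh label.

```python
from collections import Counter

def merge_singletons(memb):
    """Move every size-1 community into one shared fresh label."""
    counts = Counter(memb)
    merged_label = max(counts) + 1   # label not used by any existing community
    return [merged_label if counts[c] == 1 else c for c in memb]
```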