12 results for "gibbs distribution"
Search Results
2. Parameter Estimation for Gibbs Distributions.
- Author
- Harris, David G. and Kolmogorov, Vladimir
- Abstract
A central problem in computational statistics is to convert a procedure for sampling combinatorial objects into a procedure for counting those objects, and vice versa. We consider sampling problems coming from Gibbs distributions, which are families of probability distributions over a discrete space \(\Omega\) with probability mass function of the form \(\mu^{\Omega}_{\beta}(\omega)\propto e^{\beta H(\omega)}\) for \(\beta\) in an interval \([\beta_{\min},\beta_{\max}]\) and \(H(\omega)\in\{0\}\cup[1,n]\). Two important parameters are the partition function, which is the normalization factor \(Z(\beta)=\sum_{\omega\in\Omega}e^{\beta H(\omega)}\) and the vector of pre-image counts \(c_{x}=|H^{-1}(x)|\). We develop black-box sampling algorithms to estimate the counts using roughly \(\tilde{O}(\frac{n^{2}}{\varepsilon^{2}})\) samples for integer-valued distributions and \(\tilde{O}(\frac{q}{\varepsilon^{2}})\) samples for general distributions, where \(q=\log\frac{Z(\beta_{\max})}{Z(\beta_{\min})}\) (ignoring some second-order terms and parameters). We show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we estimate all values of the partition function using \(\tilde{O}(\frac{n^{2}}{\varepsilon^{2}})\) samples for integer-valued distributions and \(\tilde{O}(\frac{q}{\varepsilon^{2}})\) samples for general distributions. This improves over a prior algorithm of Huber (2015) which computes a single point estimate \(Z(\beta_{\max})\) and which uses a slightly larger amount of samples. We show matching lower bounds, demonstrating this complexity is optimal as a function of \(n\) and \(q\) up to logarithmic terms. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
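The sampling-to-counting conversion described in this abstract rests on a simple importance-sampling identity: for the family \(\mu_\beta(\omega)\propto e^{\beta H(\omega)}\), the expectation \(\mathbb{E}_{\mu_{\beta_1}}[e^{(\beta_2-\beta_1)H}]\) equals \(Z(\beta_2)/Z(\beta_1)\). The sketch below checks this on a toy integer-valued distribution; all numbers and names are invented for illustration and are not from the paper.

```python
import math
import random

# Toy integer-valued Gibbs distribution: H takes values in {0, 1, 2, 3}
# with pre-image counts c_x = |H^{-1}(x)|.  All numbers are invented.
counts = {0: 1, 1: 5, 2: 10, 3: 4}

def Z(beta):
    """Exact partition function Z(beta) = sum_x c_x * e^{beta x}."""
    return sum(c * math.exp(beta * x) for x, c in counts.items())

rng = random.Random(0)

def sample_H(beta):
    """Draw H(omega) under mu_beta (exact sampling on this toy space)."""
    xs = list(counts)
    weights = [counts[x] * math.exp(beta * x) for x in xs]
    return rng.choices(xs, weights=weights, k=1)[0]

def ratio_estimate(b1, b2, m):
    """Estimate Z(b2)/Z(b1) via E_{mu_b1}[e^{(b2 - b1) H}] = Z(b2)/Z(b1)."""
    return sum(math.exp((b2 - b1) * sample_H(b1)) for _ in range(m)) / m

est = ratio_estimate(0.0, 0.5, 20000)
true_ratio = Z(0.5) / Z(0.0)
```

Chaining such ratio estimates along a sequence of inverse temperatures is the standard way a single-point estimate of \(Z(\beta_{\max})\) is assembled from samples.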
3. An Algorithm for Estimating the Unknown Parameter of the Gibbs Distribution Based on the Stochastic Quasi-Gradient Method*.
- Author
- Samosonok, O. S.
- Subjects
- BOLTZMANN factor, GIBBS sampling, MARKOV processes, STOCHASTIC processes, MAXIMUM likelihood statistics, MATHEMATICAL models
- Abstract
The author considers a practical algorithm for estimating an unknown parameter of the mathematical model of a Markov process with local interaction based on the Gibbs distribution. It is proposed to apply the stochastic quasi-gradient method to the maximum likelihood function, which is constructed from the observations of the realizations of the Gibbs field. The obtained results have a wide application scope in the modeling of stochastic processes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
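The scheme this abstract describes can be sketched in miniature: stochastic quasi-gradient ascent on the log-likelihood of a one-parameter Gibbs family, where the intractable expectation \(\mathbb{E}_\beta[H]\) in the gradient is replaced by a small Monte Carlo estimate at each step. The state space, step sizes, and sample counts below are illustrative assumptions, not the author's construction.

```python
import math
import random

rng = random.Random(1)

# Toy one-parameter Gibbs family mu_beta(x) ~ e^{beta H(x)} on four
# states; stands in for observations of a Gibbs field.
energies = [0.0, 1.0, 2.0, 3.0]

def sample(beta, k):
    weights = [math.exp(beta * h) for h in energies]
    return rng.choices(energies, weights=weights, k=k)

beta_true = 0.7
data = sample(beta_true, 5000)             # observed realizations

# Stochastic quasi-gradient ascent: the log-likelihood gradient is
# mean_i H(x_i) - E_beta[H], with the expectation replaced by a small
# Monte Carlo estimate at every step.
mean_H_data = sum(data) / len(data)
beta = 0.0
for t in range(1, 2001):
    model = sample(beta, 50)
    grad = mean_H_data - sum(model) / len(model)
    beta += grad / math.sqrt(t)            # diminishing step size
```

With a diminishing step size the iterates settle near the maximum-likelihood estimate, which here tracks the true parameter.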
4. Multicriteria Optimization Problems with Vector Inhomogeneous Convolutions of Criteria.
- Author
- БРИЛА, А. Ю.
- Published
- 2023
5. Sampling from the Gibbs Distribution in Congestion Games.
- Author
- Kleer, Pieter
- Subjects
- BOLTZMANN factor, GIBBS sampling, DISTRIBUTION (Probability theory), MARKOV processes, COMPLETE graphs
- Abstract
Logit dynamics is a form of randomized game dynamics in which players have a bias toward strategic deviations that give a higher improvement in cost. It is used extensively in practice. In congestion (or potential) games, the dynamics converge to the so-called Gibbs distribution over the set of all strategy profiles when interpreted as a Markov chain. In general, logit dynamics can converge slowly to the Gibbs distribution, but beyond that, not much is known about its algorithmic aspects, nor about those of the Gibbs distribution. In this work, we are interested in the following two questions for congestion games: (i) Is there an efficient algorithm for sampling from the Gibbs distribution? (ii) If yes, do there also exist natural randomized dynamics that converge quickly to the Gibbs distribution? We first study these questions in extension parallel congestion games, a well-studied special case of symmetric network congestion games. As our main result, we show that there is a simple variation on the logit dynamics that converges quickly to the Gibbs distribution in such games. We also address the first question for the class of so-called capacitated uniform congestion games and the second question for max cut games played on a complete graph. To prove our results, we rely on the recent breakthrough work of Anari et al. (2019) regarding the approximate sampling of a base of a matroid according to a strongly log-concave probability distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
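In a potential game, logit dynamics is reversible with respect to the Gibbs distribution \(\pi(s)\propto e^{-\beta\Phi(s)}\) over the Rosenthal potential \(\Phi\). The sketch below verifies this on an invented two-player singleton congestion game (not one of the game classes from the paper): the empirical distribution of the chain matches the Gibbs distribution.

```python
import math
import random
from collections import Counter

rng = random.Random(2)

# Tiny singleton congestion game: each of 2 players picks one of 2
# resources; a player's cost is the load of its resource.
PLAYERS, RESOURCES, BETA, STEPS = 2, 2, 1.0, 200000

def loads(profile):
    l = [0] * RESOURCES
    for r in profile:
        l[r] += 1
    return l

def potential(profile):
    # Rosenthal potential: per resource, 1 + 2 + ... + load.
    return sum(l * (l + 1) // 2 for l in loads(profile))

def logit_step(profile):
    """One logit-dynamics step: a random player re-samples its resource
    with probability proportional to exp(-BETA * resulting cost)."""
    i = rng.randrange(PLAYERS)
    weights = []
    for r in range(RESOURCES):
        trial = list(profile)
        trial[i] = r
        weights.append(math.exp(-BETA * loads(trial)[r]))
    new = list(profile)
    new[i] = rng.choices(range(RESOURCES), weights=weights, k=1)[0]
    return tuple(new)

state, hist = (0, 0), Counter()
for _ in range(STEPS):
    state = logit_step(state)
    hist[state] += 1

profiles = [(a, b) for a in range(RESOURCES) for b in range(RESOURCES)]
Zg = sum(math.exp(-BETA * potential(s)) for s in profiles)
gibbs = {s: math.exp(-BETA * potential(s)) / Zg for s in profiles}
```

Detailed balance holds because a single-player deviation changes the potential by exactly the deviating player's cost change, which is what makes the Gibbs distribution the stationary distribution.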
6. An Algorithm for Estimating the Unknown Parameter of the Gibbs Distribution Based on the Stochastic Quasi-Gradient Method*.
- Author
- Samosonok, O. S.
- Published
- 2023
7. On the Characterization of a Finite Random Field by Conditional Distribution and its Gibbs Form.
- Author
- Khachatryan, Linda and Nahapetian, Boris S.
- Abstract
In this paper, we show that the methods of mathematical statistical physics can be successfully applied to random fields in finite volumes. As a result, we obtain simple necessary and sufficient conditions for the existence and uniqueness of a finite random field with a given system of one-point conditional distributions. Using the axiomatic (without the notion of potential) definition of Hamiltonian, we show that any finite random field is Gibbsian. We also apply the proposed approach to Markov random fields. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
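In notation chosen here for illustration (not taken verbatim from the paper), the objects involved are the one-point conditional distributions of a random field \(\xi\) on a finite volume \(V\), and the Gibbs form with an axiomatically defined Hamiltonian, which for a strictly positive field can be written relative to a fixed reference configuration \(\bar\sigma\):

```latex
q_t\bigl(\sigma_t \mid \bar{\sigma}_{V \setminus t}\bigr)
  = \Pr\bigl[\xi_t = \sigma_t \mid \xi_{V \setminus t} = \bar{\sigma}_{V \setminus t}\bigr],
  \qquad t \in V,
\qquad
\mu(\sigma) = \frac{e^{-H(\sigma)}}{\sum_{\sigma'} e^{-H(\sigma')}},
  \qquad
  H(\sigma) = -\ln \frac{\mu(\sigma)}{\mu(\bar{\sigma})}.
```

Substituting the definition of \(H\) back into the normalized exponential recovers \(\mu\), which is the sense in which every strictly positive finite random field is Gibbsian without any notion of potential.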
8. Entropic regularization of neural networks: Self-similar approximations.
- Author
- Asadi, Amir R. and Loh, Po-Ling
- Subjects
- BOLTZMANN factor, NONLINEAR functions, COMPUTATIONAL complexity, INFORMATION theory
- Abstract
This paper focuses on entropic regularization and its multiscale extension in neural network learning. We leverage established results that characterize the optimizer of entropic regularization methods and their connection with generalization bounds. To avoid the significant computational complexity involved in sampling from the optimal multiscale Gibbs distributions, we describe how to make measured concessions in optimality by using self-similar approximating distributions. We study such scale-invariant approximations for linear neural networks and further extend the approximations to neural networks with nonlinear activation functions. We then illustrate the application of our proposed approach through empirical simulation. By navigating the interplay between optimization and computational efficiency, our research contributes to entropic regularization theory, proposing a practical method that embraces symmetry across scales. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
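The established result this work leverages is that the entropy-regularized risk \(F(\rho)=\mathbb{E}_\rho[L]+\tfrac{1}{\beta}\mathrm{KL}(\rho\,\|\,\pi)\) is minimized by a Gibbs distribution \(\rho^*(w)\propto\pi(w)e^{-\beta L(w)}\). This can be checked directly on a toy discrete parameter space; the losses, prior, and \(\beta\) below are hypothetical.

```python
import math
import random

rng = random.Random(3)

# Free-energy objective F(rho) = E_rho[L] + (1/beta) * KL(rho || prior),
# minimized by the Gibbs distribution rho*(w) ~ prior(w) * e^{-beta L(w)}.
losses = [0.9, 0.1, 0.5, 0.3]
prior = [0.25, 0.25, 0.25, 0.25]
beta = 2.0

def free_energy(rho):
    kl = sum(r * math.log(r / p) for r, p in zip(rho, prior) if r > 0)
    return sum(r * l for r, l in zip(rho, losses)) + kl / beta

w = [p * math.exp(-beta * l) for p, l in zip(prior, losses)]
gibbs = [x / sum(w) for x in w]

# No random perturbation should beat the Gibbs optimizer, since
# F(rho) - F(gibbs) = (1/beta) * KL(rho || gibbs) >= 0.
best = free_energy(gibbs)
ok = True
for _ in range(1000):
    raw = [rng.uniform(0.01, 1.0) for _ in losses]
    rho = [x / sum(raw) for x in raw]
    if free_energy(rho) < best - 1e-12:
        ok = False
```

Sampling from such Gibbs optimizers is exactly the step whose computational cost motivates the paper's self-similar approximations.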
9. Polymer dynamics via cliques: New conditions for approximations.
- Author
- Friedrich, Tobias, Göbel, Andreas, Krejca, Martin S., and Pappik, Marcus
- Subjects
- PARTITION functions, APPROXIMATION algorithms, MARKOV processes, BOLTZMANN factor
- Abstract
Abstract polymer models are systems of weighted objects, called polymers, equipped with an incompatibility relation. An important quantity associated with such models is the partition function, which is the weighted sum over all sets of compatible polymers. Various approximation problems reduce to approximating the partition function of a polymer model. Central to the existence of such approximation algorithms are weight conditions of the respective polymer model. Such conditions are derived either via complex analysis or via probabilistic arguments. We follow the latter path and establish a new condition, the clique dynamics condition, which is less restrictive than the ones in the literature. We introduce a new Markov chain where the clique dynamics condition implies rapid mixing by utilizing cliques of incompatible polymers that naturally arise from the translation of algorithmic problems into polymer models. This leads to improved parameter ranges for several approximation algorithms, such as a factor of at least 2^(1/α) for the hard-core model on bipartite α-expanders. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
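The central quantity of an abstract polymer model can be written out by brute force on a toy instance, which makes the definition in the abstract concrete. The weights and incompatibility relation below are invented for illustration; real instances are far too large for this enumeration, which is why conditions like the clique dynamics condition matter.

```python
from itertools import combinations

# Toy abstract polymer model: weighted polymers with a symmetric
# incompatibility relation.  The partition function is the weighted sum
# over all sets of pairwise-compatible polymers.
weights = {"a": 0.5, "b": 0.25, "c": 0.25}
incompatible = {frozenset({"a", "b"})}      # a and b cannot co-occur

def is_compatible(subset):
    return all(frozenset({p, q}) not in incompatible
               for p, q in combinations(subset, 2))

def partition_function():
    total = 0.0
    polymers = list(weights)
    for k in range(len(polymers) + 1):
        for subset in combinations(polymers, k):
            if is_compatible(subset):
                prod = 1.0
                for p in subset:
                    prod *= weights[p]
                total += prod
    return total

# {}, {a}, {b}, {c}, {a,c}, {b,c} are the compatible sets here.
Z = partition_function()
```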
10. Perfect sampling from spatial mixing.
- Author
- Feng, Weiming, Guo, Heng, and Yin, Yitong
- Subjects
- BOLTZMANN factor, GIBBS sampling, SAMPLING (Process), NEIGHBORHOODS
- Abstract
We introduce a new perfect sampling technique that can be applied to general Gibbs distributions and runs in linear time if the correlation decays faster than the neighborhood growth. In particular, in graphs with subexponential neighborhood growth like ℤ^d, our algorithm achieves linear running time as long as Gibbs sampling is rapidly mixing. As concrete applications, we obtain the currently best perfect samplers for colorings and for monomer-dimer models in such graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
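For context, the "Gibbs sampling" baseline referenced in the abstract is Glauber dynamics. A minimal sketch for proper colorings of a 4-cycle is below; this is an approximate sampler run for a fixed number of steps, not the paper's perfect sampler, and the graph and parameters are invented.

```python
import random

rng = random.Random(4)

# Glauber dynamics (Gibbs sampling) for proper q-colorings of a 4-cycle.
q = 5
nbrs = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = {0: 0, 1: 1, 2: 0, 3: 1}        # a proper starting coloring

def glauber_step(col):
    """Recolor a random vertex uniformly among colors unused by neighbors."""
    v = rng.randrange(4)
    allowed = [c for c in range(q) if all(col[u] != c for u in nbrs[v])]
    col[v] = rng.choice(allowed)

def is_proper(col):
    return all(col[v] != col[u] for v in nbrs for u in nbrs[v])

for _ in range(10000):
    glauber_step(coloring)
```

Each step preserves properness, so the chain stays inside the support of the uniform distribution over proper colorings; a perfect sampler additionally certifies that the output is exactly distributed according to that distribution.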
11. A Spectral Independence View on Hard Spheres via Block Dynamics
- Author
- Friedrich, Tobias, Göbel, Andreas, Krejca, Martin S., and Pappik, Marcus (Hasso Plattner Institute, Potsdam, Germany; Recherche Opérationnelle (RO), LIP6, Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS))
- Subjects
- [INFO.INFO-CC] Computer Science / Computational Complexity, [MATH.MATH-PR] Mathematics / Probability, Spectral Independence, Gibbs Distribution, Partition Function, General Mathematics, Theory of computation → Random walks and Markov chains, Markov Chain, Hard-sphere Model, Approximate Counting
- Abstract
The hard-sphere model is one of the most extensively studied models in statistical physics. It describes the continuous distribution of spherical particles, governed by hard-core interactions. An important quantity of this model is the normalizing factor of this distribution, called the partition function. We propose a Markov chain Monte Carlo algorithm for approximating the grand-canonical partition function of the hard-sphere model in d dimensions. Up to a fugacity of λ < e/2^d, the runtime of our algorithm is polynomial in the volume of the system. This covers the entire known real-valued regime for the uniqueness of the Gibbs measure. Key to our approach is to define a discretization that closely approximates the partition function of the continuous model. This results in a discrete hard-core instance that is exponential in the size of the initial hard-sphere model. Our approximation bound follows directly from the correlation decay threshold of an infinite regular tree with degree equal to the maximum degree of our discretization. To cope with the exponential blow-up of the discrete instance, we use clique dynamics, a Markov chain that was recently introduced in the setting of abstract polymer models. We prove rapid mixing of clique dynamics up to the tree threshold of the univariate hard-core model. This is achieved by relating clique dynamics to block dynamics and adapting the spectral expansion method, which was recently used to bound the mixing time of Glauber dynamics within the same parameter regime. LIPIcs, Vol. 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), pages 66:1-66:15.
- Published
- 2022
- Full Text
- View/download PDF
12. Sampling from the Gibbs distribution in congestion games
- Author
- Kleer, Pieter (Research Group: Operations Research; Econometrics and Operations Research)
- Subjects
- FOS: Computer and information sciences, Computer Science::Computer Science and Game Theory, sampling, Discrete Mathematics (cs.DM), congestion games, General Mathematics, Computation, Management Science and Operations Research, Mixing (mathematics), Computer Science - Computer Science and Game Theory, Computer Science - Data Structures and Algorithms, Data Structures and Algorithms (cs.DS), Mathematics, Gibbs distribution, Markov chain, Sampling (statistics), Data structure, Boltzmann distribution, Computer Science Applications, Nash equilibrium, Best response, logit dynamics, Mathematical economics, Computer Science and Game Theory (cs.GT), Computer Science - Discrete Mathematics
- Abstract
Logit dynamics is a form of randomized game dynamics where players have a bias towards strategic deviations that give a higher improvement in cost. It is used extensively in practice. In congestion (or potential) games, the dynamics converge to the so-called Gibbs distribution over the set of all strategy profiles when interpreted as a Markov chain. In general, logit dynamics might converge slowly to the Gibbs distribution, but beyond that, not much is known about its algorithmic aspects, nor about those of the Gibbs distribution. In this work, we are interested in the following two questions for congestion games: i) Is there an efficient algorithm for sampling from the Gibbs distribution? ii) If yes, do there also exist natural randomized dynamics that converge quickly to the Gibbs distribution? We first study these questions in extension parallel congestion games, a well-studied special case of symmetric network congestion games. As our main result, we show that there is a simple variation on the logit dynamics (in which we are additionally allowed to randomly interchange the strategies of two players) that converges quickly to the Gibbs distribution in such games. This answers both questions above affirmatively. We also address the first question for the class of so-called capacitated k-uniform congestion games. To prove our results, we rely on the recent breakthrough work of Anari, Liu, Oveis Gharan, and Vinzant (2019) concerning the approximate sampling of a base of a matroid according to a strongly log-concave probability distribution. Accepted at the 22nd ACM Conference on Economics and Computation (EC 2021).
- Published
- 2023