4 results for "Staton, Samuel"
Search Results
2. Concrete sheaf models of higher-order recursion
- Author
-
Matache, Cristina and Staton, Samuel
- Subjects
Categories (Mathematics), Denotational semantics, Programming languages (Electronic computers)
- Abstract
This thesis studies denotational models, in the form of sheaf categories, of functional programming languages with higher-order functions and recursion. We give a general method for building such models and show how the method includes examples such as existing models of probabilistic and differentiable computation. Using our method, we build a new fully abstract sheaf model of higher-order recursion inspired by the fully abstract logical relations models of O'Hearn and Riecke. In this way, we show that our method for building sheaf models can be used both to unify existing models that have so far been studied separately and to discover new models. The models we build are in the style of Moggi, namely, a cartesian closed category with a monad for modelling non-termination. More specifically, our general method builds sheaf categories by specifying a concrete site with a class of admissible monomorphisms, a concept which we define. We combine this approach with techniques from synthetic and axiomatic domain theory to obtain a lifting monad on the sheaf category and to model recursion. We then prove that the models obtained in this way are computationally adequate. (A short illustrative sketch follows this entry.)
- Published
- 2022
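
The abstract above describes models in the style of Moggi: a cartesian closed category with a monad for non-termination. As orientation only, here is a minimal Haskell sketch of one standard such monad, Capretta's delay monad; it illustrates the general idea of a monad for possibly non-terminating computations, not the sheaf-theoretic lifting monad actually constructed in the thesis.

-- Capretta's delay monad: a value now, or one more step of waiting.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= k = k a
  Later d >>= k = Later (d >>= k)

-- The divergent computation: the "bottom" that a lifting monad
-- adjoins in order to model recursion.
never :: Delay a
never = Later never

-- General recursion as unfolding: iterate a step function until it
-- yields a result, possibly forever.
loop :: (s -> Either s a) -> s -> Delay a
loop step s = case step s of
  Left s' -> Later (loop step s')
  Right a -> Now a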
3. Structural foundations for probabilistic programming languages
- Author
-
Stein, Dario Maximilian and Staton, Samuel
- Subjects
Categories (Mathematics), Programming languages (Electronic computers), Measure theory, Denotational semantics, Descriptive set theory
- Abstract
Probability theory and statistics are fundamental disciplines in a data-driven world. Synthetic probability theory is a general, axiomatic formalism that describes their underlying structures and methods in an elegant and high-level way, abstracting away from mathematical foundations such as measure theory. We showcase the value of synthetic probabilistic reasoning to computer science by presenting a novel, synthetic analysis of three situations: the Beta-Bernoulli process, exact conditioning on continuous random variables, and the relationship between random higher-order functions and fresh name generation. The synthetic approach to probability not only matches statistical practice and achieves greater conceptual clarity; it also lets us transfer and generalize probabilistic ideas and language to other domains: a wide range of computational phenomena admit a probabilistic analysis, such as nondeterminism, generativity (e.g. name generation), unification, exchangeable random primitives and local state. This perspective is particularly useful if we wish to compare or combine those effects with actual probability: we develop a purely stochastic model of name generation (gensym) which is not only a tight semantic fit but also uncovers striking parallels to the theory of random functions. Probabilistic programming is an emerging flavor of Bayesian machine learning in which complex statistical models are expressed through code. We argue that probabilistic programming and synthetic probability theory are two sides of the same coin. Not only do they share a similar goal, namely to act as an accessible, high-level interface to statistics, but synthetic probability theories are also the natural semantic domains for probabilistic programs. Conversely, every probabilistic language defines its own internal theory of probability. This is a probabilistic version of the Curry-Howard correspondence: statistical models are programs. This opens up the analysis and optimization of statistical inference procedures to tools from programming language theory. Much as category theory has served as a unifying model for logic and programming, we argue that probabilistic programming languages arise precisely as the internal languages of categorical probability theories. Our methods and examples draw from diverse areas of computer science and mathematics, such as rewriting theory, logical relations, linear and universal algebra, categorical logic, measure theory and descriptive set theory. (A short illustrative sketch follows this entry.)
- Published
- 2021
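
The abstract above gives a synthetic analysis of the Beta-Bernoulli process and of exact conditioning. For orientation, here is a minimal Haskell sketch of the concrete statistical phenomenon: Beta-Bernoulli conjugacy makes conditioning on an observed coin flip an exact, closed-form parameter update. This illustrates the underlying statistics only, not the thesis's synthetic, categorical treatment.

-- A conjugate Beta(a, b) prior over a coin's unknown bias.
data Beta = Beta { alpha :: Double, beta :: Double } deriving Show

-- Exact conditioning: observing one flip is a closed-form update.
observe :: Bool -> Beta -> Beta
observe True  (Beta a b) = Beta (a + 1) b
observe False (Beta a b) = Beta a (b + 1)

-- Posterior predictive probability that the next flip is heads.
predictive :: Beta -> Double
predictive (Beta a b) = a / (a + b)

main :: IO ()
main = do
  let prior     = Beta 1 1                         -- uniform prior
      posterior = foldr observe prior [True, True, False]
  print posterior               -- Beta {alpha = 3.0, beta = 2.0}
  print (predictive posterior)  -- 0.6

Because the observations are exchangeable, the order of the fold does not matter; exchangeable random primitives are among the structures the abstract says admit a probabilistic analysis.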
4. Automating inference for non-standard models
- Author
-
Zhou, Yuan, Rainforth, Tom, Staton, Samuel, Teh, Yee Whye, and Yang, Hongseok
- Subjects
006.3, Probabilistic Programming, Bayesian Machine Learning
- Abstract
Probabilistic models enable us to infer the underlying relationships within data and make decisions based on this information. Certain models are commonly used not because they describe a particular system more faithfully, but because they are simple enough to analyze with current computational resources and available inference algorithms. However, there is a broad range of non-standard models that could describe real-world tasks better; they may have complex dependency structures, variables following uncommon distributions, and/or a stochastic number of variables. We typically avoid them because these features substantially complicate inference, to the point that many conventional inference algorithms fail on such models. To alleviate the computational concerns of data scientists and domain experts and free them to develop non-standard, complex models as needed, more sophisticated inference methods must become more easily accessible. Additionally, because it is usually difficult to extract helpful information about non-standard models ahead of inference, such as where the modes might be, these inference techniques should be expected to work reasonably well without such information. In an ideal world, domain experts would be able to specify any model of interest without worrying about the technical details of inference algorithms or how to implement them; the algorithms would be provided as generic engines that reason about the models in an automated manner. Achieving this is the ambitious, long-term goal of the emerging field of 'probabilistic programming'. The aim of this thesis is to make inroads toward this goal: we develop and automate advanced inference methods that are applicable to a broad range of non-standard probabilistic models. In particular, we introduce a novel class of adaptive inference algorithms that can be implemented in a black-box manner. They are especially useful for models with multiple, separated modes, where many off-the-shelf options struggle. Furthermore, we investigate both a restricted class of probabilistic programming systems (PPSs) that impose strong constraints on the model class to ensure inference efficiency, and universal PPSs, which are designed around the long-term goal of supporting any possible model at the cost of substantially more complicated inference. For the former, we propose a principled way to extend them to incorporate a broader range of models and inference engines that were not previously supported. For the latter, we develop a general framework to handle one of the most challenging classes of models: those whose support can be stochastic. (A short illustrative sketch follows this entry.)
- Published
- 2020
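
The abstract above closes with models whose support can be stochastic, the class that universal PPSs must handle. A toy Haskell sketch shows what that means operationally: the number of latent variables is itself drawn at random, so different runs of the same model instantiate different sets of variables. This is an illustrative sampler only, not the thesis's inference framework; it assumes the 'random' package is available.

import Control.Monad (replicateM)
import System.Random (randomRIO)   -- from the 'random' package

-- Number of successes before the first failure: a geometric draw.
geometric :: Double -> IO Int
geometric p = do
  u <- randomRIO (0, 1)
  if u < p then (1 +) <$> geometric p else pure 0

-- A model whose latent dimension is decided at run time: the draw of
-- n changes which latent variables exist on this execution.
model :: IO [Double]
model = do
  n <- geometric 0.5            -- stochastic number of variables
  replicateM n (randomRIO (-1, 1))

main :: IO ()
main = model >>= print          -- e.g. [0.31,-0.74]; length varies per run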