250 results for "Marcello Sanguineti"
Search Results
102. Learning with generalization capability by kernel methods of bounded complexity.
- Author
-
Věra Kůrková and Marcello Sanguineti
- Published
- 2005
- Full Text
- View/download PDF
103. Rates of Minimization of Error Functionals over Boolean Variable-Basis Functions.
- Author
-
Paul C. Kainen, Věra Kůrková, and Marcello Sanguineti
- Published
- 2005
- Full Text
- View/download PDF
104. Minimization of Error Functionals over Variable-Basis Functions.
- Author
-
Paul C. Kainen, Věra Kůrková, and Marcello Sanguineti
- Published
- 2004
- Full Text
- View/download PDF
105. Comparison of worst case errors in linear and neural network approximation.
- Author
-
Věra Kůrková and Marcello Sanguineti
- Published
- 2002
- Full Text
- View/download PDF
106. Optimization-based learning with bounded error for feedforward neural networks.
- Author
-
Angelo Alessandri, Marcello Sanguineti, and Manfredi Maggiore
- Published
- 2002
- Full Text
- View/download PDF
107. CAC with Nonlinearly-Constrained Feasibility Regions.
- Author
-
Marco Cello, Giorgio Gnecco, Mario Marchese, and Marcello Sanguineti
- Published
- 2011
- Full Text
- View/download PDF
108. Bounds on rates of variable-basis and neural-network approximation.
- Author
-
Věra Kůrková and Marcello Sanguineti
- Published
- 2001
- Full Text
- View/download PDF
109. Guest Editorial.
- Author
-
O. Erhun Kundakcioglu, Marcello Sanguineti, and Theodore B. Trafalis
- Published
- 2009
- Full Text
- View/download PDF
110. Intelligent Technologies for Interactive Entertainment: 14th EAI International Conference, INTETAIN 2023, Lucca, Italy, November 27, 2023, Proceedings
- Author
-
Martin Clayton, Mauro Passacantando, and Marcello Sanguineti
- Subjects
- Interactive multimedia, Multimedia systems, Computer vision, Education—Data processing, Computer Networks, User interfaces (Computer systems), Human-computer interaction
- Abstract
This book constitutes the refereed proceedings of the 14th International Conference on Intelligent Technologies for Interactive Entertainment, INTETAIN 2023, which was held in Lucca, Italy, on November 27, 2023. The 15 full papers presented in this book were selected from 56 submissions. They present novel and innovative work in the areas of methods (machine learning, movement), computer-based systems (architectures, software, algorithms), and devices (digital cameras, smartphones). The papers are grouped into thematic sessions on Games and Game-Based Learning; Motion Capture; Sports and Competitions; and Interfaces and Applications.
- Published
- 2024
111. Braess’ paradox: a cooperative game-theoretic point of view
- Author
-
Marcello Sanguineti, Yuval Hadas, Mauro Passacantando, and Giorgio Gnecco
- Subjects
- Braess' paradox, TU games, traffic assignment, user equilibrium, system optimum, transportation networks, mathematical economics, Computer Networks and Communications, Hardware and Architecture, Information Systems, Software
- Abstract
Braess' paradox is a classical result in the theory of congestion games. It explains theoretically why adding a resource (e.g., an arc) to a network may sometimes worsen, rather than improve, the overall network performance. Differently from previous literature, which studies Braess' paradox in a non-cooperative game-theoretic setting, in this work a framework is proposed to investigate its occurrence by exploiting cooperative games with transferable utility (TU games) on networks. In this way, instead of focusing on the marginal contribution to the network utility provided by the insertion of an arc when a single initial scenario is considered, the arc's average marginal utility with respect to various initial scenarios, that is, its Shapley value in a suitably defined TU game, is evaluated. It is shown that, for suitable choices of the utility function of the TU game modeling congestion, there are cases in which the Shapley value associated with an arc is negative, meaning that its average marginal contribution to the network utility is negative. (A numerical sketch of the paradox and of an exhaustive Shapley computation follows this entry.)
- Published
- 2021
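The following Python sketch illustrates the two ingredients of the entry above on standard toy instances (my own choices, not the paper's model): the classical four-node Braess network, where adding a free arc raises the equilibrium travel time, and an exhaustive Shapley-value computation for a hand-made three-arc TU game in which the added arc has negative worth.

```python
from itertools import permutations
from math import factorial

# Classic textbook Braess instance: 4000 drivers from s to t.
# Route s-A-t costs x/100 + 45; route s-B-t costs 45 + x/100.
# A zero-cost arc A->B makes s-A-B-t cost x/100 + x/100 for everyone.
def equilibrium_time(with_bridge: bool, n: float = 4000.0) -> float:
    if not with_bridge:
        return (n / 2) / 100 + 45          # symmetric split: n/2 per route
    return n / 100 + n / 100               # all drivers use s-A-B-t

print("travel time without new arc:", equilibrium_time(False))  # 65.0
print("travel time with new arc:   ", equilibrium_time(True))   # 80.0

# Exhaustive Shapley value of a TU game (v maps frozensets of players to reals).
def shapley(players, v):
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: val / factorial(len(players)) for p, val in phi.items()}

# Toy 3-arc game in which arc 'c' (the added arc) has negative marginal worth.
worth = {frozenset(): 0, frozenset('a'): 2, frozenset('b'): 2, frozenset('c'): 0,
         frozenset('ab'): 5, frozenset('ac'): 1, frozenset('bc'): 1, frozenset('abc'): 4}
print(shapley('abc', lambda S: worth[S]))
```

With 4000 drivers the equilibrium travel time rises from 65 to 80 once the new arc is added, and the toy game assigns arc 'c' a Shapley value of about -0.67, i.e., a negative average marginal contribution.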
112. On Braess’ paradox and average quality of service in transportation network cooperative games
- Author
-
Mauro Passacantando, Giorgio Gnecco, Yuval Hadas, and Marcello Sanguineti
- Subjects
- Transportation networks, Quality of service, Transferable utility games, Braess' paradox, Traffic assignment, User equilibrium
- Abstract
In the theory of congestion games, Braess' paradox shows that adding a resource to a network may sometimes worsen, rather than improve, the overall network performance. Here the paradox is investigated in a cooperative game-theoretic setting, in contrast to the non-cooperative one typically adopted in the literature. A family of cooperative games on networks is considered whose utility function, defined in terms of a traffic assignment problem and the associated Wardrop equilibrium, expresses the average quality of service perceived by the network users.
- Published
- 2021
113. A computational method to automatically detect the perceived origin of full-body human movement and its propagation
- Author
-
Antonio Camurri, Olga Matthiopoulou, Denis Mottet, Marcello Sanguineti, Benoît G. Bardy, and Giorgio Gnecco
- Subjects
- Ground truth, Graph theory, Machine learning, Shapley value, Artificial intelligence, Centrality, Game theory, Computer science
- Abstract
The work reports ongoing research on a computational method, based on cooperative games on graphs, aimed at detecting the perceived origin of full-body human movement and its propagation. Compared with previous works, a larger set of movement features is considered, and a ground truth is produced that makes it possible to assess and compare the effectiveness of each such feature. This is done by using the Shapley value as a centrality index. An Origin of Movement Continuum is also defined, as the basis for creating a repository of movement qualities.
- Published
- 2020
114. Deterministic optimal control over a finite horizon
- Author
-
Riccardo Zoppoli, Marcello Sanguineti, Giorgio Gnecco, and Thomas Parisini
- Subjects
- Dynamic programming, Mathematical optimization, Discretization, State space, State vector, Optimal control, Nonlinear programming, Curse of dimensionality
- Abstract
This chapter addresses discrete-time deterministic problems, where optimal controls have to be generated at a finite number of decision stages. No random variables influence either the dynamic system or the cost function, so there is no need to estimate the state vector. Such optimization problems are stated both for their intrinsic practical importance and to introduce the basic concepts of dynamic programming. As the problems are formulated under very general assumptions, their optimal solutions cannot be found in analytical form. Therefore, we resort to an approximation consisting of the discretization of the state space into suitable grids at each decision stage. Discretization by regular grids is the simplest approach (and the one most widely used until some time ago). However, unless the dimension of the state space is small, this approach leads to an exponential growth of the number of samples, and thus to the curse of dimensionality. Therefore, discretization by deterministic sequences of samples is addressed, which spread the samples in the most uniform way. Specifically, low-discrepancy sequences are considered, like quasi-Monte Carlo sequences (a small sampling sketch follows this entry). We also point out that the optimization problem can be viewed as a nonlinear programming problem solvable by gradient-based descent techniques.
- Published
- 2020
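A minimal sketch of the sampling idea discussed above, assuming the Halton construction as a representative low-discrepancy sequence (the chapter may use other constructions): a regular grid in dimension d needs k^d points, whereas a low-discrepancy point set uses a freely chosen budget.

```python
import numpy as np

def halton(n: int, dim: int) -> np.ndarray:
    """First n points of the Halton low-discrepancy sequence in [0,1]^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    assert dim <= len(primes)
    pts = np.empty((n, dim))
    for j, base in enumerate(primes[:dim]):
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:                      # radical inverse of i+1 in `base`
                f /= base
                r += f * (k % base)
                k //= base
            pts[i, j] = r
    return pts

d, per_axis, budget = 6, 10, 1000
print("regular grid needs", per_axis ** d, "samples")     # 10^6: exponential in d
print("low-discrepancy set uses a chosen budget:", halton(budget, d).shape)
```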
115. Design of mathematical models by learning from data and FSP functions
- Author
-
Giorgio Gnecco, Thomas Parisini, Marcello Sanguineti, and Riccardo Zoppoli
- Subjects
- Dynamic programming, Mathematical models, Statistical learning theory, Monte Carlo methods, Orthogonal arrays, Curse of dimensionality
- Abstract
First, well-known concepts from Statistical Learning Theory are reviewed. In reference to the problem of modelling an unknown input/output (I/O) relationship by fixed-structure parametrized functions, the concepts of expected risk, empirical risk, and generalization error are described. The last error is then split into approximation and estimation errors. Four quantities of interest are emphasized: the accuracy, the number of arguments of the I/O relationship, the model complexity, and the number of samples generated for the estimation. The possibility of generating such samples by deterministic algorithms like quasi-Monte Carlo methods, orthogonal arrays, Latin hypercubes, etc., gives rise to so-called Deterministic Learning Theory. This is an intriguing alternative to the random generation of input data, typically obtained by Monte Carlo techniques, since it enables one to reduce the number of samples (for the same accuracy) and to obtain upper bounds on the errors in deterministic rather than probabilistic terms. Deterministic learning relies on some basic quantities such as variation and discrepancy (a discrepancy computation is sketched after this entry). Special families of deterministic sequences called "low-discrepancy sequences" are useful in the computation of integrals and in dynamic programming, to mitigate the danger of incurring the curse of dimensionality deriving from the use of regular grids.
- Published
- 2020
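As a companion to the "discrepancy" notion mentioned above, the following sketch computes the L2 star discrepancy of a point set. I assume Warnock's exact formula, a standard tool in this area; the chapter's precise definitions may differ.

```python
import numpy as np

def l2_star_discrepancy(x: np.ndarray) -> float:
    """L2 star discrepancy of points x in [0,1]^d via Warnock's exact formula."""
    n, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 / n) * np.sum(np.prod((1.0 - x ** 2) / 2.0, axis=1))
    # pairwise products of (1 - max(x_ik, x_jk)) over the d coordinates
    m = 1.0 - np.maximum(x[:, None, :], x[None, :, :])
    term3 = np.sum(np.prod(m, axis=2)) / n ** 2
    return float(np.sqrt(term1 - term2 + term3))

rng = np.random.default_rng(0)
random_pts = rng.random((256, 2))                        # Monte Carlo points
grid = np.stack(np.meshgrid(*(np.linspace(0, 1, 16),) * 2), -1).reshape(-1, 2)
print("random points:", l2_star_discrepancy(random_pts))
print("regular grid: ", l2_star_discrepancy(grid))
```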
116. Neural Approximations for Optimal Control and Decision
- Author
-
Riccardo Zoppoli, Marcello Sanguineti, Giorgio Gnecco, and Thomas Parisini
- Subjects
- Optimal control, Neural networks, Optimization
- Abstract
Neural Approximations for Optimal Control and Decision provides a comprehensive methodology for the approximate solution of functional optimization problems using neural networks and other nonlinear approximators, where the use of traditional optimal control tools is prohibited by complicating factors like non-Gaussian noise, strong nonlinearities, large dimension of state and control vectors, etc. Features of the text include: a general functional optimization framework; thorough illustration of recent theoretical insights into the approximate solutions of complex functional optimization problems; comparison of classical and neural-network-based methods of approximate solution; bounds on the errors of approximate solutions; solution algorithms for optimal control and decision in deterministic or stochastic environments with perfect or imperfect state measurements, over a finite or infinite time horizon, and with one decision maker or several; applications of current interest (routing in communications networks, traffic control, water resource management, etc.); and numerous, numerically detailed examples.
- Published
- 2020
117. Stochastic optimal control with imperfect state information over a finite horizon
- Author
-
Riccardo Zoppoli, Marcello Sanguineti, Giorgio Gnecco, and Thomas Parisini
- Published
- 2020
118. Optimal control problems over an infinite horizon
- Author
-
Thomas Parisini, Giorgio Gnecco, Riccardo Zoppoli, and Marcello Sanguineti
- Subjects
- Optimal control, Stability, Random variables, Deterministic systems, Ritz method, Applied mathematics
- Abstract
Optimal control problems over an infinite number of decision stages are considered, with emphasis on the deterministic scenario. Both the open-loop and the closed-loop formulations are given, and conditions for the existence of a stationary optimal control law are provided. Unless strong assumptions are made on the dynamic system and on the random variables (if present), the design of optimal infinite-horizon controllers is an almost impossible task. Then, the well-known "receding-horizon" (RH) approximation is considered and the optimal control problem is restated accordingly (a minimal RH loop is sketched after this entry). In the second part of the chapter, we consider the fundamental issue of closed-loop stability that arises owing to the infinite number of decision stages. More specifically, we address the stability properties of the closed-loop deterministic system under the action of approximate RH control laws obtained by the "extended Ritz method" and implemented through fixed-structure parametrized functions containing vectors of "free" parameters. Conditions are established on the maximum allowable approximation errors so as to ensure the boundedness of the state trajectories.
- Published
- 2020
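A minimal receding-horizon loop for a scalar linear-quadratic problem, under my own illustrative parameters (the chapter treats far more general systems and approximate control laws): at each step the finite-horizon problem is re-solved by a backward Riccati recursion and only the first control is applied.

```python
# Receding-horizon (RH) control of the scalar system x_{t+1} = a x_t + b u_t
# with stage cost q x^2 + r u^2 (illustrative parameters; |a| > 1, so the
# open-loop system is unstable and the RH law must stabilize it).
a, b, q, r, horizon = 1.2, 1.0, 1.0, 0.1, 10

def first_stage_gain(N: int) -> float:
    """Backward Riccati recursion over N stages; return the first-stage gain."""
    p, k = q, 0.0                        # terminal weight P_N = q
    for _ in range(N):
        k = a * b * p / (r + b * b * p)  # LQ gain for the current stage
        p = q + a * a * p - a * b * p * k
    return k

x = 5.0
for t in range(30):
    u = -first_stage_gain(horizon) * x   # re-solve each step, keep first control
    x = a * x + b * u
print("state after 30 RH steps (near 0 if the RH law stabilizes):", x)
```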
119. Team optimal control problems
- Author
-
Thomas Parisini, Marcello Sanguineti, Riccardo Zoppoli, and Giorgio Gnecco
- Subjects
- Stochastic control, Dynamic programming, Mathematical optimization, Control theory, Information structure, Routing, Optimal control, Counterexamples
- Abstract
We consider discrete-time stochastic optimal control problems over a finite number of decision stages in which several controllers share different information and aim at minimizing a common cost functional. This organization can be described within the framework of "team theory." Unlike classical optimal control problems, linear-quadratic-Gaussian hypotheses are sufficient neither to obtain an optimal solution in closed-loop form nor to understand whether an optimal solution exists. In order to obtain an optimal solution in closed-loop form, additional suitable assumptions must be introduced on the "information structure" of the team. The information structure describes the way in which each controller's information vector is influenced by the stochastic environment and by the decisions of the other controllers. Dynamic programming cannot be applied unless the information structure takes particular forms. On the contrary, the "extended Ritz method" (ERIM) can always be applied. The ERIM consists in replacing the admissible functions with fixed-structure parametrized functions containing vectors of "free" parameters. The ERIM is tested in two case studies. The former is the well-known Witsenhausen counterexample (a Monte Carlo evaluation of parametrized strategies for it is sketched after this entry). The latter is an optimal routing problem in a store-and-forward packet-switching network.
- Published
- 2020
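The following Monte Carlo sketch evaluates the Witsenhausen counterexample cost over an affine, fixed-structure family of strategies, in the ERIM spirit described above. The parameter grid and sample size are illustrative assumptions, and affine laws are known to be suboptimal for this problem.

```python
import numpy as np

# Witsenhausen counterexample: x1 = x0 + u1; controller 2 sees y = x1 + v;
# x2 = x1 - u2; cost = E[k^2 u1^2 + x2^2], with x0 ~ N(0, sigma^2), v ~ N(0, 1).
# ERIM spirit: restrict both controllers to a fixed parametrized (here affine)
# family, u1 = t1 * x0 and u2 = t2 * y, and optimize over the two parameters.
rng = np.random.default_rng(0)
k2, sigma = 0.04, 5.0                     # the classic k = 0.2, sigma = 5 case
x0 = sigma * rng.standard_normal(100_000)
v = rng.standard_normal(100_000)

def cost(t1: float, t2: float) -> float:
    u1 = t1 * x0
    x1 = x0 + u1
    u2 = t2 * (x1 + v)                    # second controller acts on noisy y
    return float(np.mean(k2 * u1 ** 2 + (x1 - u2) ** 2))

grid = np.linspace(-1.0, 1.0, 21)
best = min((cost(t1, t2), t1, t2) for t1 in grid for t2 in grid)
print("best affine (cost, t1, t2):", best)
# Witsenhausen proved that suitable nonlinear strategies beat every affine law.
```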
120. Neural approximations in discounted infinite-horizon stochastic optimal control problems
- Author
-
Marcello Sanguineti and Giorgio Gnecco
- Subjects
- Stochastic control, State variables, Infinite horizon, Artificial Intelligence, Control and Systems Engineering, Curse of dimensionality
- Abstract
Neural approximations of the optimal stationary closed-loop control strategies for discounted infinite-horizon stochastic optimal control problems are investigated. It is shown that for a family of such problems, the minimal number of network parameters needed to achieve a desired accuracy of the approximate solution does not grow exponentially with the number of state variables. In such a way, neural-network approximation mitigates the so-called “curse of dimensionality”. The obtained theoretical results point out the potentialities of neural-network based approximation in the framework of sequential decision problems with continuous state, control, and disturbance spaces.
- Published
- 2018
- Full Text
- View/download PDF
121. Guest editorial.
- Author
-
Enza Messina and Marcello Sanguineti
- Published
- 2010
- Full Text
- View/download PDF
122. An approach to transportation network analysis via transferable utility games
- Author
-
Giorgio Gnecco, Marcello Sanguineti, and Yuval Hadas
- Subjects
- Transportation, Network science, Network theory, Network analysis, Centrality measures, Betweenness centrality, Games on graphs, Shapley value, Transferable utility games, Weighted networks, Management Science and Operations Research, Mathematical economics
- Abstract
Network connectivity is an important aspect of any transportation network, as the role of the network is to provide a society with the ability to easily travel from point to point using various modes. A basic question in network analysis concerns how "important" each node is. An important node might, for example, greatly contribute to short connections between many pairs of nodes, handle a large amount of the traffic, generate relevant information, or represent a bridge between two areas. In order to quantify the relative importance of nodes, one possible approach uses the concept of centrality. A limitation of classical centrality measures is the fact that they evaluate nodes based on their individual contributions to the functioning of the network. The present paper introduces a game-theoretic approach, based on cooperative games with transferable utility. Given a transportation network, a game is defined taking into account the network topology, the weights associated with the arcs, and the demand given by an origin-destination matrix (weights associated with nodes). The network nodes represent the players in such a game. The Shapley value, which measures the relative importance of the players in transferable utility games, is used to identify the nodes that have a major role (a toy computation in this spirit follows this entry). For several network topologies, a comparison is made with well-known centrality measures. The results show that the suggested centrality measures outperform the classical ones and provide an innovative approach to transportation network analysis.
- Published
- 2017
- Full Text
- View/download PDF
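A toy version of the game described above, with an assumed characteristic function (the worth of a coalition of nodes is the origin-destination demand it can serve within its induced subgraph; the paper's exact definition may differ) and an exhaustive Shapley computation:

```python
from itertools import combinations
from math import factorial

# Toy transportation TU game: coalition worth = total OD demand (o, t) with
# both endpoints in the coalition and connected within its induced subgraph.
edges = {(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)}   # undirected arcs
demand = {(1, 3): 10, (1, 4): 5, (2, 4): 7}        # OD matrix (node weights)
nodes = [1, 2, 3, 4]

def connected(s, a, b):
    """Is b reachable from a using only nodes in s?"""
    frontier, seen = [a], {a}
    while frontier:
        u = frontier.pop()
        if u == b:
            return True
        for (x, y) in edges:
            for (p, q) in ((x, y), (y, x)):
                if p == u and q in s and q not in seen:
                    seen.add(q)
                    frontier.append(q)
    return False

def worth(s):
    return sum(w for (o, t), w in demand.items()
               if o in s and t in s and connected(s, o, t))

n = len(nodes)
shapley = {}
for i in nodes:
    rest = [j for j in nodes if j != i]
    phi = 0.0
    for r in range(n):
        for s in combinations(rest, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            phi += weight * (worth(set(s) | {i}) - worth(set(s)))
    shapley[i] = phi
print(shapley)   # nodes lying on many served OD paths get larger values
```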
123. Commitment-Based Equilibrium Environmental Strategies Under Time-Dependent Absorption Efficiency
- Author
-
Fouad El Ouardighi, Giorgio Gnecco, Konstantin Kogan, and Marcello Sanguineti
- Subjects
- Transboundary pollution, Differential games, Nash equilibrium, Cooperative solution, Environmental absorption efficiency, Revenue function, Commitment-based strategies, Microeconomics, Strategy and Management
- Abstract
This paper investigates how current and future generations are affected by commitment-based Nash equilibrium environmental strategies when the environmental absorption efficiency is susceptible to switch from a pollution sink to a source. We formulate a two-player differential game model of transboundary pollution that includes the environmental absorption efficiency as a state variable that can be enhanced thanks to restoration efforts. Based on a logarithmic specification for the instantaneous revenue function, we characterize the cooperative solution and the commitment-based Nash equilibrium strategy, and examine their differences in terms of steady state and transient behavior. We notably show that a commitment-based Nash equilibrium strategy makes it possible to prevent a definitive switching of the environmental absorption efficiency from a pollution sink to a source but imposes greater economic sacrifices on current generations than on future generations. In comparison, the cooperative solution imposes greater sacrifices on current generations in terms of revenues but it imposes lower environmental costs on both current and future generations than commitment-based Nash equilibrium strategy.
- Published
- 2017
- Full Text
- View/download PDF
124. Probabilistic lower bounds for approximation by shallow perceptron networks
- Author
-
Marcello Sanguineti and Věra Kůrková
- Subjects
- Discrete mathematics, Perceptrons, Model complexity, Neural networks, Artificial Intelligence, Cognitive Neuroscience
- Abstract
Limitations of the approximation capabilities of shallow perceptron networks are investigated. Lower bounds on approximation errors are derived for binary-valued functions on finite domains. It is proven that unless the number of network units is sufficiently large (larger than any polynomial of the logarithm of the size of the domain), a good approximation cannot be achieved for almost any uniformly randomly chosen function on a given domain. The results are obtained by combining probabilistic Chernoff–Hoeffding bounds (illustrated numerically after this entry) with estimates of the sizes of sets of functions exactly computable by shallow networks with increasing numbers of units.
- Published
- 2017
- Full Text
- View/download PDF
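The probabilistic tool the abstract cites can be checked numerically; below is a quick simulation of the two-sided Chernoff–Hoeffding inequality (a sanity check of the bound, not the paper's derivation).

```python
import numpy as np

# Two-sided Chernoff-Hoeffding bound for i.i.d. Bernoulli(p) variables:
#   P(|mean_n - p| >= t) <= 2 * exp(-2 * n * t**2).
# The paper combines bounds of this type with counting estimates; here we
# simply verify the inequality empirically on random {0,1}-valued draws.
rng = np.random.default_rng(1)
p, n, t, trials = 0.5, 200, 0.1, 100_000
samples = rng.random((trials, n)) < p                  # Bernoulli(p) matrix
deviations = np.abs(samples.mean(axis=1) - p)
empirical = np.mean(deviations >= t)
bound = 2 * np.exp(-2 * n * t ** 2)
print(f"empirical tail {empirical:.4f} <= Hoeffding bound {bound:.4f}")
```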
125. Some Families of FSP Functions and Their Properties
- Author
-
Riccardo Zoppoli, Marcello Sanguineti, Thomas Parisini, and Giorgio Gnecco
- Subjects
- Optimization problems, Function approximation, Norms, Linear combinations, Convexity, Modulus of continuity, Curse of dimensionality, Ritz method
- Abstract
We report properties of fixed-structure parametrized (FSP) functions that give insights into the effectiveness of the "Extended Ritz Method" (ERIM) as a methodology for the approximate solution of infinite-dimensional optimization problems. First, we present the structure of some widespread FSP functions, including linear combinations of fixed-basis functions, one-hidden-layer (OHL) and multiple-hidden-layer (MHL) networks, and kernel smoothing models. Second, focusing on the case of OHL neural networks based on ridge and radial constructions, we report their density properties under different metrics. Third, we present rates of function approximation via ridge OHL neural networks, by reporting a fundamental theorem by Maurey, Jones, and Barron (stated after this entry), together with its extensions, based on a norm tailored to approximation by computational units from a given set of functions. We also discuss approximation properties valid for MHL networks. Fourth, we compare the classical Ritz method and the ERIM from the point of view of the curse of dimensionality, proving advantages of the latter for a specific class of problems in which the functional to be optimized is quadratic. Finally, we provide rates of approximate optimization by the ERIM, based on the concepts of modulus of continuity and modulus of convexity of the functional to be optimized.
- Published
- 2019
- Full Text
- View/download PDF
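For reference, the Maurey–Jones–Barron theorem mentioned above, in its standard Hilbert-space form (notation assumed here, not copied from the chapter):

```latex
% Maurey-Jones-Barron bound. G is a bounded subset of a Hilbert space
% (H, ||.||), s_G = sup_{g in G} ||g||, and conv_n G denotes the set of
% convex combinations of at most n elements of G.
\[
  f \in \overline{\mathrm{conv}}\, G
  \quad\Longrightarrow\quad
  \inf_{h \in \mathrm{conv}_n G} \| f - h \|
  \;\le\; \sqrt{\frac{s_G^2 - \|f\|^2}{n}} .
\]
```

The extensions mentioned in the abstract replace the Hilbert-space norm with norms tailored to the set G of computational units.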
126. Stochastic Optimal Control with Perfect State Information over a Finite Horizon
- Author
-
Giorgio Gnecco, Riccardo Zoppoli, Thomas Parisini, and Marcello Sanguineti
- Subjects
- Dynamic programming, Stochastic control, Discretization, State space, Optimal control, Nonlinear programming, Curse of dimensionality
- Abstract
Discrete-time stochastic optimal control problems are stated over a finite number of decision stages. The state vector is assumed to be perfectly measurable. Such problems are infinite-dimensional, as one has to find control functions of the state. Because of the general assumptions under which the problems are formulated, two approximation techniques are addressed. The first consists of an approximation of dynamic programming, deriving from the discretization of the state space. Instead of regular grids, which lead to an exponential growth of the number of samples (and thus to the curse of dimensionality), low-discrepancy sequences (such as quasi-Monte Carlo ones) are considered. The second approximation technique is the application of the "Extended Ritz Method" (ERIM). The ERIM consists in replacing the admissible functions with fixed-structure parametrized functions containing vectors of "free" parameters. This requires solving easier nonlinear programming problems. If suitable regularity assumptions hold, such problems can be solved by stochastic gradient algorithms. The gradient can be computed by resorting to the classical adjoint equations of deterministic optimal control problems, with the addition of one term that depends on the chosen family of fixed-structure parametrized functions.
- Published
- 2019
- Full Text
- View/download PDF
127. Numerical Methods for Integration and Search for Minima
- Author
-
Marcello Sanguineti, Giorgio Gnecco, Thomas Parisini, and Riccardo Zoppoli
- Subjects
- Optimization problems, Monte Carlo methods, Expected values, Stochastic approximation, Random variables, Nonlinear programming, Curse of dimensionality
- Abstract
Two topics are addressed. The first is the numerical computation of integrals and of expected values of functions that may depend on a large number of random variables. Of course, integration includes the computation of the expected values of functions dependent on random variables; however, the latter shows peculiar nontrivial aspects that the former does not have. In the case of a large number of random variables, the use of regular grids implies the risk of incurring the curse of dimensionality. Then, suitable sampling methods are taken into account to reduce such risk; in particular, Monte Carlo and quasi-Monte Carlo sequences are addressed. The second topic is the solution of the nonlinear programming problems obtained from the approximation of infinite-dimensional optimization problems by the Extended Ritz Method. We mention a few well-known direct techniques and gradient-based descent algorithms. For nonlinear programming problems stated in stochastic frameworks, the stochastic approximation approach deserves attention and is considered in some detail. Within this context, we describe the stochastic gradient algorithm, which enables one to avoid the computation of integrals, hence of expected values of functions dependent on random variables (a minimal sketch follows this entry). Convergence properties of that algorithm are reported.
- Published
- 2019
- Full Text
- View/download PDF
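A minimal sketch of the stochastic gradient idea mentioned above, on the simplest instructive instance (my own choice): minimizing E[(theta - X)^2] from a stream of samples, without ever computing the expectation.

```python
import numpy as np

# Stochastic gradient (Robbins-Monro) for min_theta E[(theta - X)^2]:
# the exact gradient 2*(theta - E[X]) requires the expectation, but the
# noisy gradient 2*(theta - X_k) from a single sample suffices, with
# diminishing steps eta_k = c / k.
rng = np.random.default_rng(2)
theta, c = 0.0, 0.5
for k in range(1, 100_001):
    x = rng.normal(3.0, 1.0)             # stream of samples, E[X] = 3
    theta -= (c / k) * 2.0 * (theta - x)
print("estimate:", theta, "(true minimizer is E[X] = 3)")
```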
128. The Basic Infinite-Dimensional or Functional Optimization Problem
- Author
-
Thomas Parisini, Giorgio Gnecco, Marcello Sanguineti, and Riccardo Zoppoli
- Subjects
- Nonlinear systems, Optimization problems, Numerical analysis, Linear combinations, Nonlinear programming, Ritz method
- Abstract
In infinite-dimensional or functional optimization problems, one has to minimize (or maximize) a functional with respect to admissible solutions belonging to infinite-dimensional spaces of functions, often dependent on a large number of variables. As we consider optimization problems characterized by very general conditions, optimal solutions might not be found analytically, and classical numerical methods might fail. In these cases, one must use suitable approximation methods. This chapter describes an approximation method that uses families of nonlinear approximators, including commonly used shallow and deep neural networks as special cases, to reduce infinite-dimensional problems to finite-dimensional nonlinear programming ones. It is named the "Extended Ritz Method" (ERIM). The term originates from the classical Ritz method, which uses linear combinations of fixed functions as an approximation tool. The Ritz method does not seem to have met with important successes on problems whose admissible solutions depend on a large number of variables; this might be ascribed to the fact that it can be subject to one of the forms of the so-called "curse of dimensionality." However, theoretical and numerical results show that the ERIM might mitigate this drawback. Examples of infinite-dimensional optimization problems are presented that can be approximated by nonlinear programming ones.
- Published
- 2019
- Full Text
- View/download PDF
129. From Functional Optimization to Nonlinear Programming by the Extended Ritz Method
- Author
-
Thomas Parisini, Riccardo Zoppoli, Giorgio Gnecco, and Marcello Sanguineti
- Subjects
- Optimization problems, Function approximation, Applied mathematics, Ritz method, Nonlinear programming, Curse of dimensionality
- Abstract
This chapter describes the approximate solution of infinite-dimensional optimization problems by the "Extended Ritz Method" (ERIM). The ERIM consists in replacing the admissible functions with fixed-structure parametrized (FSP) functions containing vectors of "free" parameters. The larger the dimension of such vectors, the more accurate the approximations of the optimal solutions of the original functional optimization problems. This requires solving easier nonlinear programming problems. In the area of function approximation, we review the definition of approximating sequences of sets, which enjoy the property of density in the sets of functions one wants to approximate. Then, we provide the definition of polynomially complex approximating sequences of sets, which are able to approximate functions endowed with suitable regularity properties to any desired accuracy, using a number of "free" parameters that grows at most polynomially with the number of function arguments. In the less studied area of the approximate solution of infinite-dimensional optimization problems, the optimizing sequences and the polynomially complex optimizing sequences of FSP functions are defined. Results are presented which allow one to conclude that, under appropriate hypotheses, polynomially complex approximating sequences of sets give rise to polynomially complex optimizing sequences of FSP functions, possibly mitigating the curse of dimensionality.
- Published
- 2019
- Full Text
- View/download PDF
130. On the Optimal Selection of Motors and Transmissions For a Back-Support Exoskeleton
- Author
-
Marcello Sanguineti, M. Mahdi Ghazaei Ardakani, Jesús Ortiz, Darwin G. Caldwell, and Erfan Shojaei Barjuei
- Subjects
- Electric motors, Control theory, Bandwidth, Robots, Wearable computers, Torque, Back support, Actuators, Exoskeletons
- Abstract
The performance of an exoskeleton depends on proper torque/force tracking of human motion. Since the actuator systems of wearable robots usually include electric motors and transmissions, this paper proposes a method for selecting these components for an upper-body exoskeleton. To enable an optimization procedure, the dynamic model of the system is developed analytically. Optimization criteria in terms of closed-loop system bandwidth, system power consumption, and component weight are then formulated, taking technical limitations into account. Extensive simulation results are presented to compare the possible optimization outcomes.
- Published
- 2019
131. Classification by Sparse Neural Networks
- Author
-
Marcello Sanguineti and Věra Kůrková
- Subjects
- Binary classification, Chernoff–Hoeffding bound, Dictionaries of computational units, Feedforward networks, Measures of sparsity, Artificial neural networks, Probability distributions
- Abstract
The choice of dictionaries of computational units suitable for efficient computation of binary classification tasks is investigated. To deal with exponentially growing sets of tasks with increasingly large domains, a probabilistic model is introduced. The relevance of tasks for a given application area is modeled by a product probability distribution on the set of all binary-valued functions. Approximate measures of network sparsity are studied in terms of variational norms tailored to dictionaries of computational units. Bounds on these norms are proven using the Chernoff–Hoeffding bound on sums of independent random variables that need not be identically distributed. Consequences of the probabilistic results for the choice of dictionaries of computational units are derived. It is shown that when a priori knowledge of a type of classification tasks is limited, then the sparsity may be achieved only at the expense of large sizes of dictionaries.
- Published
- 2019
132. A theoretical analysis of buffer occupancy for Intermittently-Connected Networks
- Author
-
Giorgio Gnecco, Marcello Sanguineti, Marco Cello, Fabio Patrone, Luca Boero, and Mario Marchese
- Subjects
- Epidemic routing, Wireless ad hoc networks, Intermittently-Connected Networks, Congestion control, Network congestion, Markov chains, Queueing theory, Buffer overflow, Dimensioning, Network performance, Network simulation
- Abstract
Network congestion is a well-known problem that may heavily affect the overall network performance. Congestion control approaches in Intermittently-Connected Networks (ICNs) differ from those used in classical networks, since the assumptions of "universal connectivity" of the nodes and "global information" about the network do not hold. In this paper, an analytical framework is proposed to investigate node buffer occupancy in ICNs through bulk-arrival/bulk-service queuing models. A relation in the z-domain between the discrete probability densities of the buffer-state occupancies and of the sizes of the arriving bulks is exploited to analyze two classes of forwarding strategies for ICNs. The infinite- and finite-buffer cases are investigated, simulated, and compared in terms of the concept of stochastic order, which is also used to compare models obtained for different parameter choices. The results can be exploited for buffer dimensioning and for deriving estimates of performance metrics such as average buffer occupancy, average delivery delay, and buffer overflow probability (a toy simulation follows this entry). The theoretical analysis is complemented by numerical outcomes from a network simulator and from real mobility traces.
- Published
- 2017
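A toy discrete-time bulk-arrival/bulk-service buffer simulation in the spirit of the entry above; the distributions and parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Discrete-time bulk-arrival / bulk-service buffer: each slot, a geometric
# bulk of packets arrives; up to `service_bulk` packets are served; a finite
# buffer drops the excess (illustrative stand-in for the paper's queue).
rng = np.random.default_rng(3)
capacity, service_bulk, slots = 20, 2, 200_000
occupancy = 0
dropped = total = occ_sum = 0
for _ in range(slots):
    arrivals = int(rng.geometric(0.4)) - 1      # bulk size in {0, 1, 2, ...}
    total += arrivals
    occupancy += arrivals
    if occupancy > capacity:                    # buffer overflow: excess lost
        dropped += occupancy - capacity
        occupancy = capacity
    occupancy = max(0, occupancy - service_bulk)
    occ_sum += occupancy
print("average buffer occupancy:", occ_sum / slots)
print("packet loss fraction:    ", dropped / total)
```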
133. Supervised and Semi-Supervised Classifiers for the Detection of Flood-Prone Areas
- Author
-
Giorgio Roth, Rita Morisi, Giorgio Gnecco, Marcello Sanguineti, and Angela Celeste Taramasso
- Subjects
- Flood hazard, Pattern recognition, Machine learning, Computational intelligence, Kernel methods, Artificial intelligence
- Abstract
Supervised and semi-supervised machine-learning techniques are applied and compared for the recognition of flood hazard. The learning goal consists in distinguishing between flood-exposed and marginal-risk areas. Kernel-based binary classifiers using six quantitative morphological features, derived from data stored in digital elevation models, are trained to model the relationship between morphology and flood hazard. According to the experimental outcomes, such classifiers are appropriate tools when one is interested in performing an initial low-cost detection of flood-exposed areas, to be possibly refined in successive steps by more time-consuming and costly investigations by experts. The use of these automatic classification techniques is valuable, e.g., in insurance applications, where one is interested in estimating the flood hazard of areas for which limited labeled information is available. The proposed machine-learning techniques are applied to the basin of the Italian Tanaro River. The experimental results show that, for this case study, semi-supervised methods outperform supervised ones when only a few labeled examples are used together with a much larger number of unlabeled ones, the number of labeled examples being the same in the two cases (a sketch of such a comparison follows this entry).
- Published
- 2017
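A compact version of the supervised versus semi-supervised comparison described above, sketched on synthetic two-moons data instead of the paper's morphological features; it assumes scikit-learn is available and uses its LabelSpreading semi-supervised classifier.

```python
# Requires scikit-learn; synthetic two-moons data stand in for the paper's
# morphological features (an assumption for illustration only).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading
from sklearn.svm import SVC

X, y = make_moons(n_samples=600, noise=0.2, random_state=0)
labeled = np.concatenate([np.where(y == c)[0][:5] for c in (0, 1)])  # 10 labels

svm = SVC(kernel="rbf").fit(X[labeled], y[labeled])    # supervised: 10 examples

y_semi = np.full_like(y, -1)                           # -1 marks "unlabeled"
y_semi[labeled] = y[labeled]
semi = LabelSpreading(kernel="rbf").fit(X, y_semi)     # 10 labels + 590 unlabeled

print("supervised accuracy:     ", svm.score(X, y))
print("semi-supervised accuracy:", semi.score(X, y))
```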
134. Narrowing the Search for Optimal Call-Admission Policies Via a Nonlinear Stochastic Knapsack Model
- Author
-
Mario Marchese, Marcello Sanguineti, Marco Cello, and Giorgio Gnecco
- Subjects
- Mathematical optimization, Call Admission Control, Quality of service, Knapsack problems, Control and Optimization, Applied Mathematics, Management Science and Operations Research
- Abstract
Call admission control with two classes of users is investigated via a nonlinear stochastic knapsack model. The feasibility region represents the subset of the call space where given constraints on the quality of service have to be satisfied. Admissible strategies are searched for within the class of coordinate-convex policies (this property is illustrated in code after this entry). Structural properties that the optimal policies belonging to such a class have to satisfy are derived. They are exploited to narrow the search for the optimal solution to the nonlinear stochastic knapsack problem that models call admission control. To illustrate the role played by these properties, the numbers of coordinate-convex policies by which they are satisfied are estimated. A graph-based algorithm to generate all such policies is presented.
- Published
- 2014
- Full Text
- View/download PDF
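A small checker for the coordinate-convexity property mentioned above, using the standard definition from the stochastic-knapsack literature (assumed here, as the paper's exact formalization is not shown):

```python
# A policy for two call classes is a set S of states (n1, n2) containing (0, 0);
# it is coordinate-convex if removing one call of either class from any state
# in S yields a state still in S.
def is_coordinate_convex(region: set) -> bool:
    if (0, 0) not in region:
        return False
    return all(
        (n1 == 0 or (n1 - 1, n2) in region) and
        (n2 == 0 or (n1, n2 - 1) in region)
        for (n1, n2) in region
    )

# Feasibility region of a tiny knapsack: b1*n1 + b2*n2 <= C.
b1, b2, C = 1, 2, 4
feasible = {(n1, n2) for n1 in range(C + 1) for n2 in range(C + 1)
            if b1 * n1 + b2 * n2 <= C}
print(is_coordinate_convex(feasible))                  # True
print(is_coordinate_convex(feasible - {(1, 0)}))       # False: hole at (1, 0)
```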
135. Exploiting the Shapley Value in the Estimation of the Position of a Point of Interest for a Group of Individuals
- Author
-
Simone Ghisio, Antonio Camurri, Donald Glowinski, Floriane Dardard, Marcello Sanguineti, and Giorgio Gnecco
- Subjects
- Point of interest, Group behavior, Cooperative game theory, Shapley value, Characteristic function, Mathematical economics
- Abstract
Concepts and tools from cooperative game theory are exploited to quantify the role played by each member of a team in estimating the position of an observed point of interest. The measure of importance known as the "Shapley value" is used to this end. From the theoretical point of view, we propose a specific form of the characteristic function for the class of cooperative games under investigation. In the numerical analysis, different configurations of a group of individuals are considered: all individuals looking at a mobile point of interest, one of them replaced with an artificially generated one who looks exactly toward the point of interest, and the directions of the heads replaced with randomly generated directions. The corresponding experimental outcomes are compared.
- Published
- 2014
- Full Text
- View/download PDF
136. FLOOD HAZARD ASSESSMENT VIA THRESHOLD BINARY CLASSIFIERS: CASE STUDY OF THE TANARO RIVER BASIN
- Author
-
Giorgio Gnecco, Massimiliano Degiorgis, Giorgio Roth, Silvia Gorni, Marcello Sanguineti, and Angela Celeste Taramasso
- Subjects
- Hydrology, Flood hazard, Drainage basins, Binary classification, Digital elevation models, Environmental science, Cartography
- Abstract
This contribution deals with the identification of flood hazards at the catchment scale. The aim is to distinguish flood-exposed areas from marginal risk ones, and to extend available information on flood hazards to cover the whole catchment. Threshold binary classifiers based on six selected quantitative morphological features, derived from data stored in digital elevation models (DEMs), are used to investigate the relationships between morphology and the flooding hazard, as described in flood hazard maps. Results show that threshold binary classifier techniques should be taken into account when one is interested in an initial low-cost detection of flood-exposed areas. This may be needed, for example, in applications related to the insurance market, in which one is interested in estimating the flood hazard of specific areas for which limited information is available, or whenever a first flood hazard delineation is required to further address detailed investigations for flood mapping purposes. The method described in the paper has been tested on the basin of the Tanaro River. Results present a high degree of accuracy: indeed, the best classifier correctly identifies about 91% of flood-exposed areas, whereas the percentage of the areas exposed to marginal risk that are incorrectly classified as flood-exposed areas is about 16%.
- Published
- 2013
- Full Text
- View/download PDF
137. LQG Online Learning
- Author
-
Giorgio Gnecco, Marco Gori, Marcello Sanguineti, and Alberto Bemporad
- Subjects
- Mathematical optimization, Linear models, Kalman filter, Linear-quadratic-Gaussian control, Optimal control, Regularization, Kernel methods, Random matrices
- Abstract
Optimal control theory and machine learning techniques are combined to formulate and solve in closed form an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear-quadratic-Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned (the classical recursion is sketched after this entry). It is shown that the proposed algorithm is less sensitive to outliers than the Kalman estimate (thanks to the presence of the regularization term), thus providing smoother estimates with respect to time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
- Published
- 2016
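For context, here is the classical Kalman recursion for learning a constant parameter, i.e., the baseline estimate the entry above compares against; the paper's regularized learner itself is not reproduced here.

```python
import numpy as np

# Classical Kalman recursion for a constant parameter w with y_k = w + noise.
# (Baseline only: the regularized online learner of the entry above is not
# implemented here.) An outlier is injected to probe sensitivity.
rng = np.random.default_rng(4)
w_true, r = 2.0, 0.25                   # true parameter, measurement variance
ys = w_true + np.sqrt(r) * rng.standard_normal(200)
ys[100] += 15.0                         # one outlier

w_hat, p = 0.0, 10.0                    # prior mean and variance
for k, y in enumerate(ys):
    gain = p / (p + r)                  # Kalman gain for the static model
    w_hat += gain * (y - w_hat)
    p *= 1.0 - gain                     # posterior variance shrinks ~ 1/k
    if k in (99, 100):
        print(f"step {k}: estimate {w_hat:.3f}")
print("final estimate:", round(w_hat, 3), "(true value 2.0)")
```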
138. Model complexities of shallow networks representing highly varying functions
- Author
-
Marcello Sanguineti and Věra Kůrková
- Subjects
- Discrete mathematics, Perceptrons, Chernoff bound, Gaussian functions, Applied mathematics, Artificial Intelligence
- Abstract
Model complexities of shallow (i.e., one-hidden-layer) networks representing highly varying multivariable {−1, 1}-valued functions are studied in terms of variational norms tailored to dictionaries of network units. It is shown that bounds on these norms define classes of functions computable by networks with constrained numbers of hidden units and sizes of output weights. Estimates of the probabilistic distributions of values of variational norms with respect to typical computational units, such as perceptrons and Gaussian kernel units, are derived via a geometric characterization of variational norms combined with the probabilistic Chernoff bound. It is shown that almost any randomly chosen {−1, 1}-valued function on a sufficiently large d-dimensional domain has variation with respect to perceptrons that grows exponentially with d.
- Published
- 2016
139. Classifiers for the detection of flood-prone areas using remote sensed elevation data
- Author
-
Marcello Sanguineti, Giorgio Gnecco, Silvia Gorni, Angela Celeste Taramasso, Giorgio Roth, and Massimiliano Degiorgis
- Subjects
- Receiver operating characteristic, Support vector machines, Gaussian kernels, Shuttle Radar Topography Mission, Data mining, Water Science and Technology
- Abstract
A technique is presented for the identification of the areas subject to flooding hazard. Starting from remote sensed elevation data and existing flood hazard maps, usually available for limited areas, the relationships between selected quantitative morphologic features and the flooding hazard are first identified and then used to extend the hazard information to the entire catchment. This is performed through techniques of pattern classification, such as linear classifiers based on quantitative morphologic features, and support vector machines with linear and Gaussian kernels. The experiment starts by discriminating between flood-prone areas and marginal hazard areas. Multiclass classifiers are subsequently used to graduate the hazard. Their design amounts to solving suitable optimization problems. Several performance measures are considered in comparing the different classifiers, such as the area under the receiver operating characteristic curve and the sum of the false positive and false negative rates. The procedure has been validated on the Tanaro basin, a tributary of the major Italian river, the Po. Results show high reliability: the classifier properly identifies 93% of flood-prone areas, and only 14% of the areas subject to a marginal hazard are improperly assigned. An increase of this latter value up to 19% is detected when the same structure is applied to hazard graduation. Results derived from the application to different catchments seem to qualitatively indicate the ability of the classifier to perform well also outside the calibration region. Pattern classification techniques should be considered when the identification of flood-prone areas and hazard grading is required for large regions (e.g., for civil protection or insurance purposes) or when a first identification is needed (e.g., to address further detailed flood-mapping activities).
- Published
- 2012
- Full Text
- View/download PDF
140. Accuracy of approximations of solutions to Fredholm equations by kernel methods
- Author
-
Marcello Sanguineti, Giorgio Gnecco, and Věra Kůrková
- Subjects
- Fredholm integral equations, Integral equations, Kernel methods, Radial basis function kernels, Applied Mathematics, Computational Mathematics
- Abstract
Approximate solutions to inhomogeneous Fredholm integral equations of the second kind by radial and kernel networks are investigated. Upper bounds are derived on the errors in approximating the solutions of these equations by networks with increasing model complexity. The bounds are obtained using results from nonlinear approximation theory. The results are applied to networks with Gaussian and kernel units and illustrated by numerical simulations (a baseline discretization of such an equation is sketched after this entry).
- Published
- 2012
- Full Text
- View/download PDF
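A standard Nyström discretization of a second-kind Fredholm equation with a Gaussian kernel, as a baseline for the kernel-network approximations studied above; the kernel width and right-hand side are illustrative choices, not the paper's.

```python
import numpy as np

# Nystrom discretization of the inhomogeneous Fredholm equation of the second
# kind  f(x) - integral_0^1 k(x, y) f(y) dy = g(x)  with a Gaussian kernel.
n = 200
x = (np.arange(n) + 0.5) / n                     # midpoint quadrature nodes
w = 1.0 / n                                      # quadrature weights
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
g = np.sin(np.pi * x)

f = np.linalg.solve(np.eye(n) - w * K, g)        # (I - W K) f = g at the nodes

residual = f - (K * w) @ f - g                   # check f - K w f reproduces g
print("max residual:", np.abs(residual).max())   # ~ machine precision
```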
141. Can dictionary-based computational models outperform the best linear ones?
- Author
-
Věra Kůrková, Giorgio Gnecco, and Marcello Sanguineti
- Subjects
- Computational models, Linear models, Basis functions, Approximation error, Linear approximation, Linear combinations, Neural networks, Artificial Intelligence
- Abstract
Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called "dictionary") and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected, to show that for individual functions they do not imply estimates for dictionary-based models that do not hold also for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
- Published
- 2011
- Full Text
- View/download PDF
142. Some comparisons of complexity in dictionary-based and linear computational models
- Author
-
Věra Kůrková, Marcello Sanguineti, and Giorgio Gnecco
- Subjects
- Computational models, Mathematical optimization, Artificial neural networks, Linear regression, Orthogonal polynomials, Uniqueness, Linear combinations
- Abstract
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
- Published
- 2011
- Full Text
- View/download PDF
143. Optimization based on quasi-Monte Carlo sampling to design state estimators for non-linear systems
- Author
-
Angelo Alessandri, Marcello Sanguineti, Cristiano Cervellera, and Danilo Macciò
- Subjects
- Lyapunov functions, Estimators, Constrained optimization, Sampling, Exponential stability, Quasi-Monte Carlo methods, Nonlinear programming, Control and Optimization
- Abstract
State estimation for a class of non-linear, continuous-time dynamic systems affected by disturbances is investigated. The estimator is assigned a given structure that depends on an innovation function taking on the form of a ridge computational model, with some parameters to be optimized. The behaviour of the estimation error is analysed by using input-to-state stability. The design of the estimator is reduced to the determination of the parameters in such a way as to guarantee the regional exponential stability of the estimation error in a disturbance-free setting and to minimize a cost function that measures the effectiveness of the estimation when the system is affected by disturbances. Stability is achieved by constraining the derivative of a Lyapunov function to be negative definite on a grid of points, via the penalization of the constraints that are not satisfied. Low-discrepancy sampling techniques, typical of quasi-Monte Carlo methods, are exploited in order to reduce the computational burden in ...
- Published
- 2010
- Full Text
- View/download PDF
144. Team optimization problems with Lipschitz continuous strategies
- Author
-
Marcello Sanguineti and Giorgio Gnecco
- Subjects
- Mathematical optimization, Optimization problems, Lipschitz continuity, Computational intelligence, Control and Optimization
- Abstract
Sufficient conditions for the existence and Lipschitz continuity of optimal strategies for static team optimization problems are studied. Revised statements and proofs of some results that appeared in the literature are presented, and their extensions are discussed. As an example of application, optimal production in a multidivisional firm is considered.
- Published
- 2010
- Full Text
- View/download PDF
145. Suboptimal Solutions to Dynamic Optimization Problems via Approximations of the Policy Functions
- Author
-
Giorgio Gnecco and Marcello Sanguineti
- Subjects
- Smoothness, Mathematical optimization, Dynamic programming, Uniform norm, Approximation error, Gaussian radial-basis-function networks, Curse of dimensionality
- Abstract
The approximation of the optimal policy functions is investigated for dynamic optimization problems with an objective that is additive over a finite number of stages. The distance between optimal and suboptimal values of the objective functional is estimated in terms of the errors in approximating the optimal policy functions at the various stages. Smoothness properties are derived for such functions and exploited to choose the approximating families. The approximation error is measured in the supremum norm, in such a way as to control the error propagation from stage to stage. Nonlinear approximators corresponding to Gaussian radial-basis-function networks with adjustable centers and widths are considered. Conditions are given that guarantee that the number of Gaussians (hence, the number of parameters to be adjusted) does not grow “too fast” with the dimension of the state vector. The results help to mitigate the curse of dimensionality in dynamic optimization. An example of application is given, and the use of the estimates is illustrated via a numerical simulation.
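A minimal sketch of the approximating family the abstract refers to, a Gaussian radial-basis-function network with adjustable centers and widths, is given below in Python; the "optimal policy" being fit is a made-up stand-in, and the training setup is illustrative rather than the paper's procedure.

```python
# Minimal sketch of a Gaussian radial-basis-function network with adjustable
# centers and widths; the "optimal policy" target below is a made-up stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
d, n = 2, 6                                  # state dimension, number of Gaussians
X = rng.uniform(-1, 1, size=(400, d))        # sampled states
target = np.sin(np.pi * X[:, 0]) * X[:, 1]   # hypothetical policy values

def rbf_net(theta, X):
    centers = theta[:d * n].reshape(n, d)
    widths = np.exp(theta[d * n:d * n + n])  # parametrized to stay positive
    weights = theta[d * n + n:]
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / widths ** 2) @ weights

def loss(theta):
    return np.mean((rbf_net(theta, X) - target) ** 2)

theta0 = 0.5 * rng.standard_normal(d * n + 2 * n)
res = minimize(loss, theta0, method="L-BFGS-B")
# The abstract measures errors in the supremum norm, so report that here too.
print("sup-norm error on the samples:",
      np.max(np.abs(rbf_net(res.x, X) - target)))
```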
- Published
- 2010
- Full Text
- View/download PDF
146. Error bounds for suboptimal solutions to kernel principal component analysis
- Author
-
Marcello Sanguineti and Giorgio Gnecco
- Subjects
Mathematical optimization ,Control and Optimization ,Kernel method ,Kernel embedding of distributions ,Variable kernel density estimation ,Polynomial kernel ,Kernel (statistics) ,Principal component analysis ,Applied mathematics ,Principal component regression ,Kernel principal component analysis ,Mathematics - Abstract
Suboptimal solutions to kernel principal component analysis are considered. Such solutions take the form of linear combinations of n-tuples of kernel functions centered on the data points, where n is a positive integer smaller than the cardinality m of the data sample. Their accuracy in approximating the optimal solution, which in general is obtained only for n = m, is estimated. The analysis made in Gnecco and Sanguineti (Comput Optim Appl 42:265–287, 2009) is extended: the estimates derived therein for the approximation of the first principal axis are improved, and extensions to the successive principal axes are derived.
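A rough Python sketch of one way to realize such suboptimal solutions is given below, under an assumed formulation: the first principal axis is restricted to the span of kernel functions centered on n of the m data points, leading to an n-dimensional generalized eigenproblem, and the variance it captures is compared with the full n = m case. The Gaussian kernel, the choice of the first n points as centers, and the small ridge term are illustrative choices, not the paper's construction.

```python
# Illustrative sketch under an assumed formulation: the first kernel principal
# axis is restricted to the span of kernel functions centered on n of the m
# data points, and its captured variance is compared with the full n = m case.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))            # m = 100 data points
m, n = len(X), 10                            # n centers (here: the first n points)

def gaussian_kernel(A, B, s=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * s ** 2))

K = gaussian_kernel(X, X)
K = K - K.mean(0) - K.mean(1)[:, None] + K.mean()   # double centering

def top_variance(centers_idx):
    Kmn = K[:, centers_idx]                  # kernel values, data vs. centers
    C = Kmn.T @ Kmn / m                      # variance form in the reduced span
    Knn = K[np.ix_(centers_idx, centers_idx)]
    # Generalized eigenproblem C a = lam Knn a; the ridge keeps Knn invertible.
    vals = eigh(C, Knn + 1e-9 * np.eye(len(centers_idx)), eigvals_only=True)
    return vals[-1]

print("captured variance, n = 10:", top_variance(np.arange(n)))
print("captured variance, n = m :", top_variance(np.arange(m)))
```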
- Published
- 2009
- Full Text
- View/download PDF
147. Complexity of Gaussian-radial-basis networks approximating smooth functions
- Author
-
Marcello Sanguineti, Vera Kurková, and Paul C. Kainen
- Subjects
Statistics and Probability ,Numerical Analysis ,Control and Optimization ,Algebra and Number Theory ,Smoothness (probability theory) ,Gaussian-radial-basis-function networks ,Degree (graph theory) ,Basis (linear algebra) ,Applied Mathematics ,General Mathematics ,Gaussian ,Tractability of approximation ,Mathematical analysis ,Rates of approximation ,Function (mathematics) ,Model complexity ,Sobolev space ,symbols.namesake ,Variation norms ,symbols ,Bessel and Sobolev norms ,Bessel function ,Mathematics - Abstract
Complexity of Gaussian-radial-basis-function networks, with varying widths, is investigated. Upper bounds on rates of decrease of approximation errors with increasing number of hidden units are derived. Bounds are in terms of norms measuring smoothness (Bessel and Sobolev norms) multiplied by explicitly given functions a(r,d) of the number of variables d and degree of smoothness r. Estimates are proven using suitable integral representations in the form of networks with continua of hidden units computing scaled Gaussians and translated Bessel potentials. Consequences on tractability of approximation by Gaussian-radial-basis function networks are discussed.
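As a rough numerical illustration of the qualitative message (approximation error decreasing as the number of Gaussian units grows, for a smooth target), the following Python snippet fits increasingly many Gaussian units by least squares. The equispaced centers and the heuristic width choice are mine, not the paper's proof technique, which relies on integral representations.

```python
# Rough numerical illustration (not the paper's proof technique, which uses
# integral representations): the approximation error decays as the number n
# of Gaussian units grows, for a smooth one-dimensional target.
import numpy as np

x = np.linspace(-1, 1, 500)
f = 1.0 / (1.0 + 25.0 * x ** 2)              # smooth target (Runge function)

for n in (2, 4, 8, 16, 32):
    centers = np.linspace(-1, 1, n)          # illustrative center placement
    width = 4.0 / n                          # heuristic width choice
    Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(Phi, f, rcond=None)
    print(f"n = {n:2d}, sup error = {np.max(np.abs(Phi @ w - f)):.4f}")
```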
- Published
- 2009
- Full Text
- View/download PDF
148. Management of water resource systems in the presence of uncertainties by nonlinear approximation techniques and deterministic sampling
- Author
-
Marcello Sanguineti, Cristiano Cervellera, Riccardo Zoppoli, and Marco Baglietto
- Subjects
Dynamic programming ,Stochastic control ,Computational Mathematics ,Nonlinear system ,Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Sampling (statistics) ,Basis function ,Linear combination ,Stochastic approximation ,Mathematics ,Curse of dimensionality - Abstract
Two methods of approximate solution are developed for T-stage stochastic optimal control (SOC) problems, aimed at obtaining finite-horizon management policies for water resource systems. The presence of uncertainties, such as river and rain inflows, is considered. Both approaches are based on the use of families of nonlinear functions, called "one-hidden-layer networks" (OHL networks), made up of linear combinations of simple basis functions containing parameters to be optimized. The first method exploits OHL networks to obtain an accurate approximation of the cost-to-go functions in the dynamic programming procedure for SOC problems. The approximation capabilities of OHL networks are combined with the properties of deterministic sampling techniques aimed at obtaining uniform samplings of high-dimensional domains. In the second method, admissible solutions to SOC problems are constrained to take on the form of OHL networks, whose parameters are determined in such a way as to minimize the cost functional associated with SOC problems. Exploiting these tools, the two methods are able to cope with the so-called "curse of dimensionality," which strongly limits the applicability of existing techniques to high-dimensional water resources management in the presence of uncertainties. The theoretical bases of the two approaches are investigated. Simulation results show that the proposed methods are effective for water resource systems of high dimension.
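The following Python sketch shows, with toy stand-ins for the dynamics and costs, the first ingredient described above: states sampled by a low-discrepancy Sobol sequence and a one-hidden-layer (OHL) network fit to cost-to-go values computed at the samples, as in one backward step of dynamic programming. Everything model-specific (the storage dynamics, the stage cost, the network size) is hypothetical.

```python
# Schematic single step of approximate dynamic programming with toy stand-ins
# for the water-resource model: states sampled by a Sobol sequence, cost-to-go
# values computed at the samples, and a one-hidden-layer (OHL) network fit to them.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

rng = np.random.default_rng(4)
d, n_units = 2, 8                            # state dimension, hidden units
sampler = qmc.Sobol(d=d, scramble=True, seed=4)
S = qmc.scale(sampler.random_base2(m=7), [0.0] * d, [1.0] * d)   # 128 states

def cost_to_go(s):
    # Hypothetical one-step minimization: release u from storage s[0],
    # with a toy stage cost and a toy quadratic future cost.
    u_grid = np.linspace(0.0, s[0], 11)
    next_level = s[0] - u_grid + 0.1 * s[1]  # toy storage dynamics
    return np.min((1.0 - u_grid) ** 2 + 0.5 * next_level ** 2)

J = np.array([cost_to_go(s) for s in S])     # targets at the sampled states

def ohl(theta, S):
    W = theta[:d * n_units].reshape(n_units, d)
    b = theta[d * n_units:d * n_units + n_units]
    c = theta[d * n_units + n_units:]
    return np.tanh(S @ W.T + b) @ c          # one-hidden-layer network

theta0 = 0.3 * rng.standard_normal(d * n_units + 2 * n_units)
res = minimize(lambda th: np.mean((ohl(th, S) - J) ** 2), theta0,
               method="L-BFGS-B")
print("RMSE of the cost-to-go surrogate:",
      np.sqrt(np.mean((ohl(res.x, S) - J) ** 2)))
```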
- Published
- 2008
- Full Text
- View/download PDF
149. Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Author
-
Marcello Sanguineti and Vera Kurková
- Subjects
Approximation theory ,Basis (linear algebra) ,Orthogonal functions ,Function (mathematics) ,Library and Information Sciences ,Upper and lower bounds ,Computer Science Applications ,Combinatorics ,Approximation error ,Applied mathematics ,Orthonormal basis ,Linear combination ,Information Systems ,Mathematics - Abstract
In this paper, approximation by linear combinations of an increasing number n of computational units with adjustable parameters (such as perceptrons and radial basis functions) is investigated. Geometric upper bounds on rates of convergence of approximation errors are derived. The bounds depend on certain parameters specific for each function to be approximated. The results are illustrated by examples of values of such parameters in the case of approximation by linear combinations of orthonormal functions.
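For orientation, the classical Maurey–Jones–Barron estimate that such geometric bounds refine can be stated as follows (a standard result, not the paper's sharper function-dependent bounds):

```latex
% Classical Maurey--Jones--Barron baseline (a standard result; the paper's
% geometric bounds sharpen it using parameters specific to each f).
% H is a Hilbert space, G a bounded subset with s_G = sup_{g in G} ||g||,
% conv_n(G) the set of convex combinations of at most n elements of G,
% and f lies in the closure of the convex hull of G:
\[
  \big\| f - \operatorname{conv}_n(G) \big\|
  \;\le\;
  \sqrt{\frac{s_G^2 - \|f\|^2}{n}} .
\]
```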
- Published
- 2008
- Full Text
- View/download PDF
150. Approximation Schemes for Functional Optimization Problems
- Author
-
Marcello Sanguineti and S. Giulini
- Subjects
Mathematical optimization ,Control and Optimization ,Basis (linear algebra) ,Applied Mathematics ,Approximation algorithm ,Basis function ,Management Science and Operations Research ,Functional optimization ,Approximation schemes ,Complexity of admissible solutions ,Upper bounds on accuracy ,Ritz method ,Extended Ritz method ,Linear form ,Theory of computation ,Linear approximation ,Linear combination ,Curse of dimensionality ,Mathematics
Approximation schemes for functional optimization problems whose admissible solutions depend on a large number d of variables are investigated. Suboptimal solutions are considered, expressed as linear combinations of n-tuples from a basis set of simple computational units with adjustable parameters. Different choices of basis sets are compared, which allow one to obtain suboptimal solutions using a number n of basis functions that does not grow "fast" with the number d of variables in the admissible decision functions, for a fixed desired accuracy. In these cases, one mitigates the "curse of dimensionality," which often makes traditional linear approximation techniques unfeasible for functional optimization problems of this kind.
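Schematically, and with assumed notation rather than the paper's, the fixed-basis (Ritz) and variable-basis (extended Ritz) schemes being compared can be written as follows:

```latex
% Schematic contrast, with assumed notation (Phi is the functional to be
% minimized). Classical Ritz scheme: only the outer coefficients of a FIXED
% basis (varphi_1, varphi_2, ...) are adjusted:
\[
  \min_{c_1,\dots,c_n} \; \Phi\!\Big( \sum_{i=1}^{n} c_i \,\varphi_i \Big).
\]
% Extended Ritz scheme: the computational units also contain adjustable
% inner parameters w_i (e.g., centers and widths of Gaussians):
\[
  \min_{c_1,\dots,c_n,\, w_1,\dots,w_n} \; \Phi\!\Big( \sum_{i=1}^{n} c_i \,\varphi(\cdot\,; w_i) \Big).
\]
```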
- Published
- 2008
- Full Text
- View/download PDF