136 results for "Gokcesu, Hakan"
Search Results
2. A Note On Nonlinear Regression Under L2 Loss
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Mathematics - Numerical Analysis, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We investigate the nonlinear regression problem under the L2 (square) loss. Traditional nonlinear regression models often result in non-convex optimization problems with respect to the parameter set. We show that a convex nonlinear regression model exists for the traditional least squares problem, which is promising for designing more complex systems with easier-to-train models.
- Published
- 2023
3. Efficient Lipschitzian Global Optimization of H\"older Continuous Multivariate Functions
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Computational Complexity, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
This study presents an effective global optimization technique designed for multivariate functions that are H\"older continuous. Unlike traditional methods that construct lower bounding proxy functions, this algorithm employs a predetermined query creation rule that makes it computationally superior. The algorithm's performance is assessed using the average or cumulative regret, which also implies a bound for the simple regret and reflects the overall effectiveness of the approach. The results show that with appropriate parameters the algorithm attains an average regret bound of $O(T^{-\frac{\alpha}{n}})$ for optimizing a H\"older continuous target function with H\"older exponent $\alpha$ in an $n$-dimensional space within a given time horizon $T$. We demonstrate that this bound is minimax optimal., Comment: this work draws from arXiv:2206.02383
- Published
- 2023
4. Formulation of Weighted Average Smoothing as a Projection of the Origin onto a Convex Polytope
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Electrical Engineering and Systems Science - Signal Processing, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
Our study focuses on determining the best weight windows for a weighted moving average smoother under squared loss. We show that there exists an optimal weight window that is symmetrical around its center. We study the class of tapered weight windows, which decrease in weight as they move away from the center. We formulate the corresponding least squares problem as a quadratic program and finally as a projection of the origin onto a convex polytope. Additionally, we provide analytical solutions for the best window when certain conditions on the input data are met., Comment: arXiv admin note: substantial text overlap with arXiv:2203.02997
- Published
- 2023
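The quadratic-program formulation summarized in the entry above can be illustrated with a small constrained least-squares fit of a tapered weight window. The sketch below is illustrative only and is not the paper's projection-onto-a-polytope algorithm: it assumes cvxpy and NumPy are available, and the toy signal, noise level, and window length are made up for the demo (the window is fit against a known clean signal purely to keep the example non-degenerate).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 400)
s = np.sin(t)                           # clean signal (illustrative)
x = s + 0.3 * rng.normal(size=t.size)   # noisy observations

n = 9                                   # hypothetical window length (odd)
c = n // 2
X = np.stack([x[k - c:k + c + 1] for k in range(c, x.size - c)])
y = s[c:x.size - c]                     # fit against the known clean center

w = cp.Variable(n)
constraints = [cp.sum(w) == 1, w >= 0]
# tapered window: weights are nonincreasing away from the center
constraints += [w[i - 1] <= w[i] for i in range(1, c + 1)]
constraints += [w[i + 1] <= w[i] for i in range(c, n - 1)]

problem = cp.Problem(cp.Minimize(cp.sum_squares(X @ w - y)), constraints)
problem.solve()
print(np.round(w.value, 4))             # learned symmetric, tapered window
```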
5. A 2-opt Algorithm for Locally Optimal Set Partition Optimization
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Data Structures and Algorithms, Computer Science - Machine Learning, Mathematics - Combinatorics, Mathematics - Optimization and Control
- Abstract
Our research deals with the optimization version of the set partition problem, where the objective is to minimize the absolute difference between the sums of the two disjoint partitions. Although this problem is known to be NP-hard and requires exponential time to solve, we propose a less demanding version of this problem where the goal is to find a locally optimal solution. In our approach, we consider local optimality with respect to any movement of at most two elements. To this end, we develop an algorithm that can generate a locally optimal solution in at most $O(N^2)$ time and $O(N)$ space. Our algorithm handles arbitrary input precision and does not require positive or integer inputs; hence, it can be applied in various problem scenarios with ease.
- Published
- 2023
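A minimal Python sketch of the locally optimal set partition idea in the entry above, assuming real-valued inputs: it repeatedly scans single-element transfers and cross-partition swaps, accepting any move of at most two elements that shrinks the gap. This is a direct illustration, not the paper's optimized $O(N^2)$-time, $O(N)$-space routine.

```python
def two_opt_partition(values):
    """Local search for set partition: stop when no move of at most two
    elements (a single transfer or a pair swap) reduces the partition gap."""
    a, b = list(values), []           # start with everything on one side
    diff = sum(a) - sum(b)            # signed gap between the two sums

    improved = True
    while improved:
        improved = False
        # try moving a single element across
        for side, other, sign in ((a, b, +1), (b, a, -1)):
            for i, v in enumerate(side):
                new_diff = diff - sign * 2 * v
                if abs(new_diff) < abs(diff):
                    other.append(side.pop(i))
                    diff = new_diff
                    improved = True
                    break
            if improved:
                break
        if improved:
            continue
        # try swapping one element from each side
        for i, va in enumerate(a):
            for j, vb in enumerate(b):
                new_diff = diff - 2 * (va - vb)
                if abs(new_diff) < abs(diff):
                    a[i], b[j] = vb, va
                    diff = new_diff
                    improved = True
                    break
            if improved:
                break
    return a, b, abs(diff)


print(two_opt_partition([3.2, 7.1, 1.4, 9.9, 5.0, 2.2]))
```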
6. Data Dependent Regret Guarantees Against General Comparators for Full or Bandit Feedback
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Information Theory, Electrical Engineering and Systems Science - Signal Processing, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We study the adversarial online learning problem and create a completely online algorithmic framework that has data dependent regret guarantees in both full expert feedback and bandit feedback settings. We study the expected performance of our algorithm against general comparators, which makes it applicable for a wide variety of problem scenarios. Our algorithm works from a universal prediction perspective and the performance measure used is the expected regret against arbitrary comparator sequences, which is the difference between our losses and a competing loss sequence. The competition class can be designed to include fixed arm selections, switching bandits, contextual bandits, periodic bandits or any other competition of interest. The sequences in the competition class are generally determined by the specific application at hand and should be designed accordingly. Our algorithm neither uses nor needs any preliminary information about the loss sequences and is completely online. Its performance bounds are data dependent, where any affine transform of the losses has no effect on the normalized regret., Comment: this article draws from arXiv:2009.04372,arXiv:2109.09212,arXiv:2204.06660
- Published
- 2023
7. $1D$ to $nD$: A Meta Algorithm for Multivariate Global Optimization via Univariate Optimizers
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Mathematics - Optimization and Control, Computer Science - Computational Complexity, Computer Science - Data Structures and Algorithms, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
In this work, we propose a meta algorithm that can solve a multivariate global optimization problem using univariate global optimizers. Although univariate global optimization receives less attention than the multivariate case, which is more emphasized in academia and industry, we show that it is still relevant and can be used directly to solve multivariate optimization problems. We also provide the corresponding regret bounds in terms of the time horizon $T$ and the average regret of the univariate optimizer, provided that the univariate optimizer has robust regret guarantees against nonnegative noise., Comment: this article extends arXiv:2108.10859, arXiv:2201.07164
- Published
- 2022
8. Optimal Tracking in Prediction with Expert Advice
- Author
- Gokcesu, Hakan and Kozat, Suleyman S.
- Subjects
- Computer Science - Machine Learning, Mathematics - Optimization and Control, 68Q32 (Primary) 68T05, 68Q25 (Secondary)
- Abstract
We study the prediction with expert advice setting, where the aim is to produce a decision by combining the decisions generated by a set of experts, e.g., independently running algorithms. We achieve the min-max optimal dynamic regret under the prediction with expert advice setting, i.e., we can compete against time-varying (not necessarily fixed) combinations of expert decisions in an optimal manner. Our end-algorithm is truly online with no prior information, such as the time horizon or loss ranges, which are commonly used by different algorithms in the literature. Both our regret guarantees and the min-max lower bounds are derived with the general consideration that the expert losses can have time-varying properties and are possibly unbounded. Our algorithm can be adapted for restrictive scenarios regarding both loss feedback and decision making. Our guarantees are universal, i.e., our end-algorithm can provide regret guarantee against any competitor sequence in a min-max optimal manner with logarithmic complexity. Note that, to our knowledge, for the prediction with expert advice problem, our algorithms are the first to produce such universally optimal, adaptive and truly online guarantees with no prior knowledge., Comment: 29 pages, 1 figure, single column
- Published
- 2022
9. An Auto-Regressive Formulation for Smoothing and Moving Mean with Exponentially Tapered Windows
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Electrical Engineering and Systems Science - Signal Processing, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We investigate an auto-regressive formulation for the problem of smoothing time series by manipulating the inherent objective function of the traditional moving mean smoothers. Not only do the auto-regressive smoothers enforce a higher degree of smoothing, but they are also just as efficient as the traditional moving means and can be optimized accordingly with respect to the input dataset. Interestingly, the auto-regressive models result in moving means with exponentially tapered windows.
- Published
- 2022
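The connection in the entry above between auto-regressive smoothing and exponentially tapered moving means is easiest to see in the classical first-order case, sketched below; the smoothing parameter is a made-up value rather than one optimized to the dataset as in the paper.

```python
import numpy as np

def ar1_smooth(x, beta=0.8):
    """First-order auto-regressive smoother: y[t] = beta*y[t-1] + (1-beta)*x[t].
    Unrolling the recursion shows y[t] is a moving mean of past samples with
    exponentially tapered weights (1-beta)*beta**k."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = beta * y[t - 1] + (1.0 - beta) * x[t]
    return y

x = np.r_[np.zeros(5), np.ones(10)]     # toy step input
print(np.round(ar1_smooth(x), 3))
```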
10. Efficient Minimax Optimal Global Optimization of Lipschitz Continuous Multivariate Functions
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Computational Complexity, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
In this work, we propose an efficient minimax optimal global optimization algorithm for multivariate Lipschitz continuous functions. To evaluate the performance of our approach, we utilize the average regret instead of the traditional simple regret, which, as we show, is not suitable for use in the multivariate non-convex optimization because of the inherent hardness of the problem itself. Since we study the average regret of the algorithm, our results directly imply a bound for the simple regret as well. Instead of constructing lower bounding proxy functions, our method utilizes a predetermined query creation rule, which makes it computationally superior to the Piyavskii-Shubert variants. We show that our algorithm achieves an average regret bound of $O(L\sqrt{n}T^{-\frac{1}{n}})$ for the optimization of an $n$-dimensional $L$-Lipschitz continuous objective in a time horizon $T$, which we show to be minimax optimal.
- Published
- 2022
11. A Log-Linear Time Sequential Optimal Calibration Algorithm for Quantized Isotonic L2 Regression
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We study the sequential calibration of estimations in a quantized isotonic L2 regression setting. We start by showing that the optimal calibrated quantized estimations can be acquired from the traditional isotonic L2 regression solution. We modify the traditional PAVA algorithm to create calibrators for both batch and sequential optimization of the quantized isotonic regression problem. Our algorithm can update the optimal quantized monotone mapping for the samples observed so far in linear space and logarithmic time per new unordered sample.
- Published
- 2022
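The entry above builds on the classical pool-adjacent-violators algorithm (PAVA). The sketch below is the textbook batch PAVA for isotonic L2 regression only; the quantized, sequential calibrator described in the entry is not reproduced here.

```python
def pava_l2(y, w=None):
    """Classical pool-adjacent-violators algorithm for isotonic L2 regression:
    returns the nondecreasing fit minimizing sum_i w_i * (y_i - f_i)^2."""
    if w is None:
        w = [1.0] * len(y)
    blocks = []                       # each block: [mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # merge while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    fit = []
    for mean, _, count in blocks:
        fit.extend([mean] * count)
    return fit


print(pava_l2([1.0, 3.0, 2.0, 2.0, 5.0, 4.0]))
# -> [1.0, 2.333..., 2.333..., 2.333..., 4.5, 4.5]
```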
12. Robust, Nonparametric, Efficient Decomposition of Spectral Peaks under Distortion and Interference
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Electrical Engineering and Systems Science - Signal Processing, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Audio and Speech Processing, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We propose a decomposition method for the spectral peaks in an observed frequency spectrum, which is efficiently acquired by utilizing the Fast Fourier Transform. In contrast to the traditional methods of waveform fitting on the spectrum, we optimize the problem from a more robust perspective. We model the peaks in the spectrum as pseudo-symmetric functions, where the only constraint is a nonincreasing behavior around a central frequency as the distance from it increases. Our approach is more robust against arbitrary distortion, interference and noise on the spectrum that may be caused by an observation system. The time complexity of our method is linear, i.e., $O(N)$ per extracted spectral peak. Moreover, the decomposed spectral peaks show a pseudo-orthogonal behavior, where they conform to a power preserving equality.
- Published
- 2022
13. Second Order Regret Bounds Against Generalized Expert Sequences under Partial Bandit Feedback
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Information Theory, Electrical Engineering and Systems Science - Signal Processing, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We study the problem of expert advice under partial bandit feedback setting and create a sequential minimax optimal algorithm. Our algorithm works with a more general partial monitoring setting, where, in contrast to the classical bandit feedback, the losses can be revealed in an adversarial manner. Our algorithm adopts a universal prediction perspective, whose performance is analyzed with regret against a general expert selection sequence. The regret we study is against a general competition class that covers many settings (such as the switching or contextual experts settings) and the expert selection sequences in the competition class are determined by the application at hand. Our regret bounds are second order bounds in terms of the sum of squared losses and the normalized regret of our algorithm is invariant under arbitrary affine transforms of the loss sequence. Our algorithm is truly online and does not use any preliminary information about the loss sequences., Comment: arXiv admin note: substantial text overlap with arXiv:2109.09212, arXiv:2009.04372
- Published
- 2022
14. Blind Source Separation for Mixture of Sinusoids with Near-Linear Computational Complexity
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Electrical Engineering and Systems Science - Signal Processing, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Audio and Speech Processing, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We propose a multi-tone decomposition algorithm that can find the frequencies, amplitudes and phases of the fundamental sinusoids in a noisy observation sequence. Under independent identically distributed Gaussian noise, our method utilizes a maximum likelihood approach to estimate the relevant tone parameters from the contaminated observations. When estimating $M$ sinusoidal sources, our algorithm successively estimates their frequencies and jointly optimizes their amplitudes and phases. Our method can also be implemented as a blind source separator in the absence of information about $M$. The computational complexity of our algorithm is near-linear, i.e., $\tilde{O}(N)$.
- Published
- 2022
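A rough sense of the tone-estimation step in the entry above can be given by the textbook single-tone estimate from an FFT peak, sketched below; it has only bin-level frequency resolution and does not perform the successive frequency refinement and joint amplitude/phase optimization described in the entry. The sampling rate and tone parameters are illustrative.

```python
import numpy as np

def dominant_tone(x, fs):
    """Coarse single-tone estimate from the FFT peak: frequency (at bin
    resolution), amplitude and phase of the strongest non-DC component."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    k = int(np.argmax(np.abs(spectrum[1:])) + 1)   # skip the DC bin
    freq = k * fs / n
    amp = 2.0 * np.abs(spectrum[k]) / n
    phase = float(np.angle(spectrum[k]))
    return freq, amp, phase

fs = 1000.0                                        # illustrative sampling rate
t = np.arange(2000) / fs
x = 1.5 * np.cos(2 * np.pi * 50.0 * t + 0.7) + 0.1 * np.random.randn(t.size)
print(dominant_tone(x, fs))                        # roughly (50.0, 1.5, 0.7)
```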
15. Merging Knockout and Round-Robin Tournaments: A Flexible Linear Elimination Tournament Design
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Computer Science and Game Theory, Computer Science - Discrete Mathematics, Computer Science - Machine Learning, Mathematics - Combinatorics, Statistics - Machine Learning
- Abstract
We propose a new tournament structure that combines the popular knockout tournaments and the round-robin tournaments. As opposed to the extremes of divisive elimination and no elimination, our tournament aims to eliminate the participants as linearly as possible as a form of subtractive elimination. Our design is flexible in the sense that it can be adapted to any number of players $N$ and any number of matches $M$. Our design satisfies many properties that are desirable for a tournament to select a winner and can be adapted to rank all the participating players.
- Published
- 2022
16. Natural Hierarchical Cluster Analysis by Nearest Neighbors with Near-Linear Time Complexity
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Data Structures and Algorithms, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
We propose a nearest neighbor based clustering algorithm that results in a naturally defined hierarchy of clusters. In contrast to the agglomerative and divisive hierarchical clustering algorithms, our approach does not depend on the iterative workings of the algorithm, in the sense that the partitions of the hierarchical clusters are defined purely by the input dataset. Our method is a universal hierarchical clustering approach, since it can be implemented in bottom-up or top-down versions, both of which result in the same clustering. We show that for certain types of datasets, our algorithm has near-linear time and space complexity.
- Published
- 2022
17. A Linearithmic Time Locally Optimal Algorithm for the Multiway Number Partition Optimization
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Data Structures and Algorithms, Computer Science - Machine Learning, Mathematics - Combinatorics, Mathematics - Optimization and Control
- Abstract
We study the problem of multiway number partition optimization, which has a myriad of applications in the decision, learning and optimization literature. Even though the original multiway partitioning problem is NP-hard and requires exponential time complexity algorithms, we formulate an easier optimization problem, where our goal is to find a solution that is locally optimal. We propose a linearithmic time complexity $O(N\log N)$ algorithm that can produce such a locally optimal solution. Our method is robust to the input domain and requires neither positive nor integer inputs.
- Published
- 2022
18. Smoothing with the Best Rectangle Window is Optimal for All Tapered Rectangle Windows
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Signal Processing, Mathematics - Optimization and Control
- Abstract
We investigate the optimal selection of weight windows for the problem of weighted least squares. We show that the weight window should be symmetric around its center, which is also its peak. We consider the class of tapered rectangle window weights, which are nonincreasing away from the center. We show that the best rectangle window is optimal among all such windows. We also extend our results to least absolutes and the more general case of arbitrary loss functions, where we find similar results.
- Published
- 2022
19. Nonconvex Extension of Generalized Huber Loss for Robust Learning and Pseudo-Mode Statistics
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Statistics - Computation
- Abstract
We propose an extended generalization of the pseudo Huber loss formulation. We show that using the log-exp transform together with the logistic function, we can create a loss which combines the desirable properties of the strictly convex losses with robust loss functions. With this formulation, we show that a linear convergence algorithm can be utilized to find a minimizer. We further discuss the creation of a quasi-convex composite loss and provide a derivative-free exponential convergence rate algorithm.
- Published
- 2022
20. Low Regret Binary Sampling Method for Efficient Global Optimization of Univariate Functions
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Computational Complexity, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
In this work, we propose a computationally efficient algorithm for the problem of global optimization in univariate loss functions. For the performance evaluation, we study the cumulative regret of the algorithm instead of the simple regret between our best query and the optimal value of the objective function. Although our approach has regret results similar to those of the traditional lower-bounding algorithms such as the Piyavskii-Shubert method for Lipschitz continuous or Lipschitz smooth functions, it has a major computational cost advantage. In the Piyavskii-Shubert method, for certain types of functions, the query points may be hard to determine (as they are solutions to additional optimization problems). This issue is circumvented in our binary sampling approach, where the sampling set is predetermined irrespective of the function characteristics. For a search space of $[0,1]$, our approach has at most $L\log (3T)$ and $2.25H$ regret for $L$-Lipschitz continuous and $H$-Lipschitz smooth functions, respectively. We also analytically extend our results to a broader class of functions that covers more complex regularity conditions.
- Published
- 2022
21. Efficient, Anytime Algorithms for Calibration with Isotonic Regression under Strictly Convex Losses
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Computational Complexity, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We investigate the calibration of estimations to increase performance with an optimal monotone transform on the estimator outputs. We start by studying the traditional squared error setting with its weighted variant and show that the optimal monotone transform is in the form of a unique staircase function. We further show that this staircase behavior is preserved for general strictly convex loss functions. Their optimal monotone transforms are also unique, i.e., there exists a single staircase transform that achieves the minimum loss. We propose a linear time and space algorithm that can find such optimal transforms for specific loss settings. Our algorithm has an online implementation, where the optimal transform for the samples observed so far is found in linear space and amortized time when the samples arrive in an ordered fashion. We also extend our results to cases where the functions are not trivial to individually optimize and propose an anytime algorithm with linear space and pseudo-linearithmic time complexity.
- Published
- 2021
22. Near-Linear Time Algorithm with Near-Logarithmic Regret Per Switch for Mixable/Exp-Concave Losses
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We investigate the problem of online learning, which has gained significant attention in recent years due to its applicability in a wide range of fields from machine learning to game theory. Specifically, we study the online optimization of mixable loss functions with logarithmic static regret in a dynamic environment. The best dynamic estimation sequence that we compete against is selected in hindsight with full observation of the loss functions and is allowed to select different optimal estimations in different time intervals (segments). We propose an online mixture framework that uses these static solvers as the base algorithm. We show that with the suitable selection of hyper-expert creations and weighting strategies, we can achieve logarithmic and squared logarithmic regret per switch in quadratic and linearithmic computational complexity, respectively. For the first time in literature, we show that it is also possible to achieve near-logarithmic regret per switch with sub-polynomial complexity per time. Our results are guaranteed to hold in a strong deterministic sense in an individual sequence manner.
- Published
- 2021
23. Generalized Translation and Scale Invariant Online Algorithm for Adversarial Multi-Armed Bandits
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
We study the adversarial multi-armed bandit problem and create a completely online algorithmic framework that is invariant under arbitrary translations and scales of the arm losses. We study the expected performance of our algorithm against a generic competition class, which makes it applicable for a wide variety of problem scenarios. Our algorithm works from a universal prediction perspective and the performance measure used is the expected regret against arbitrary arm selection sequences, which is the difference between our losses and a competing loss sequence. The competition class can be designed to include fixed arm selections, switching bandits, contextual bandits, or any other competition of interest. The sequences in the competition class are generally determined by the specific application at hand and should be designed accordingly. Our algorithm neither uses nor needs any preliminary information about the loss sequences and is completely online. Its performance bounds are second order bounds in terms of the sum of the squared losses, where any affine transform of the losses has no effect on the normalized regret., Comment: arXiv admin note: substantial text overlap with arXiv:2009.04372
- Published
- 2021
24. A Quadratic Time Locally Optimal Algorithm for NP-hard Equal Cardinality Partition Optimization
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Data Structures and Algorithms, Computer Science - Machine Learning, Mathematics - Combinatorics, Mathematics - Optimization and Control
- Abstract
We study the optimization version of the equal cardinality set partition problem, where the absolute difference between the equal sized partitions' sums is minimized. While this problem is NP-hard and requires exponential complexity to solve in general, we formulate a weaker version of this NP-hard problem, where the goal is to find a locally optimal solution. The local optimality considered in our work is under any swap between the opposing partitions' element pairs. To this end, we design an algorithm that can produce such a locally optimal solution in $O(N^2)$ time and $O(N)$ space. Our approach does not require positive or integer inputs and works equally well under arbitrary input precision. Thus, it is widely applicable in different problem scenarios.
- Published
- 2021
25. Efficient Locally Optimal Number Set Partitioning for Scheduling, Allocation and Fair Selection
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Data Structures and Algorithms, Computer Science - Machine Learning, Mathematics - Combinatorics
- Abstract
We study the optimization version of the set partition problem, where the difference between the partition sums is minimized, which has numerous applications in the decision theory literature. While the set partitioning problem is NP-hard and requires exponential complexity to solve (i.e., it is intractable), we formulate a weaker version of this NP-hard problem, where the goal is to find a locally optimal solution. We show that our proposed algorithms can find a locally optimal solution in near-linear time. Our algorithms require neither positive nor integer elements in the input set; hence, they are more widely applicable.
- Published
- 2021
26. Nonparametric Extrema Analysis in Time Series for Envelope Extraction, Peak Detection and Clustering
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Electrical Engineering and Systems Science - Signal Processing, Quantitative Finance - Mathematical Finance, Statistics - Machine Learning
- Abstract
In this paper, we propose a nonparametric approach that can be used in envelope extraction, peak-burst detection and clustering in time series. Our problem formalization results in a naturally defined splitting/forking of the time series. With a possibly hierarchical implementation, it can be used for various applications in machine learning, signal processing and mathematical finance. From an incoming input signal, our iterative procedure sequentially creates two signals (one upper bounding and one lower bounding signal) by minimizing the cumulative $L_1$ drift. We show that a solution can be efficiently calculated by use of a Viterbi-like path tracking algorithm together with an optimal elimination rule. We consider many interesting settings, where our algorithm has near-linear time complexities.
- Published
- 2021
27. Generalized Huber Loss for Robust Learning and its Efficient Minimization for a Robust Statistics
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning
- Abstract
We propose a generalized formulation of the Huber loss. We show that, with a suitable choice of function, specifically the log-exp transform, we can achieve a loss function which combines the desirable properties of both the absolute and the quadratic loss. We provide an algorithm to find the minimizer of such loss functions and show that finding a centralizing metric is not much harder than computing the traditional mean and median.
- Published
- 2021
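The entry above generalizes the pseudo-Huber family. The sketch below shows only the classical pseudo-Huber loss and a plain gradient-descent search for the "centralizing" value of a sample, as a point of reference; the paper's log-exp generalization and its linear-convergence algorithm are not reproduced, and the step size, iteration count and data are illustrative.

```python
import numpy as np

def pseudo_huber(r, delta=1.0):
    """Classical pseudo-Huber loss: quadratic near zero, nearly absolute for
    large residuals."""
    return delta ** 2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)

def centralizer(x, delta=1.0, lr=0.5, steps=200):
    """Gradient descent on m -> sum_i pseudo_huber(x_i - m): a robust center
    that interpolates between the mean (large delta) and the median (small
    delta)."""
    m = float(np.mean(x))
    for _ in range(steps):
        r = x - m
        grad = -np.sum(r / np.sqrt(1.0 + (r / delta) ** 2))
        m -= lr * grad / len(x)
    return m

x = np.array([0.9, 1.1, 1.0, 1.2, 0.8, 25.0])       # one gross outlier
print(np.mean(x), np.median(x), centralizer(x))
print(pseudo_huber(x - centralizer(x)).sum())        # loss at the robust center
```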
28. Cumulative Regret Analysis of the Piyavskii--Shubert Algorithm and Its Variants for Global Optimization
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Mathematics - Optimization and Control
- Abstract
We study the problem of global optimization, where we analyze the performance of the Piyavskii--Shubert algorithm and its variants. For any given time duration $T$, instead of the extensively studied simple regret (which is the difference of the losses between the best estimate up to $T$ and the global minimum), we study the cumulative regret up to time $T$. For $L$-Lipschitz continuous functions, we show that the cumulative regret is $O(L\log T)$. For $H$-Lipschitz smooth functions, we show that the cumulative regret is $O(H)$. We analytically extend our results to functions with Hölder continuous derivatives, which cover both the Lipschitz continuous and the Lipschitz smooth functions individually. We further show that a simpler variant of the Piyavskii-Shubert algorithm performs just as well as the traditional variants for the Lipschitz continuous or the Lipschitz smooth functions. We further extend our results to broader classes of functions and show that our algorithm efficiently determines its queries and achieves nearly minimax optimal (up to log factors) cumulative regret for general convex or even concave regularity conditions on the extrema of the objective (which encompass many preceding regularities). We consider further extensions by investigating the performance of the Piyavskii-Shubert variants in scenarios with unknown regularity, noisy evaluations and multivariate domains.
- Published
- 2021
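For reference alongside the entry above, the textbook one-dimensional Piyavskii-Shubert method is sketched below: each query is placed at the minimizer of the piecewise-linear lower bound built from the Lipschitz constant. The objective, interval and Lipschitz bound are illustrative, and the cheaper variants analyzed in the entry are not shown.

```python
import math

def piyavskii_shubert(f, a, b, lipschitz, budget=30):
    """Classical 1D Piyavskii-Shubert minimization of an L-Lipschitz f on
    [a, b]: repeatedly query the minimizer of the piecewise-linear lower
    bound formed by the sampled points."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(budget - 2):
        best = None
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            # lower-bound minimum inside the bracket [x0, x1]
            x_new = 0.5 * (x0 + x1) + (y0 - y1) / (2.0 * lipschitz)
            v_new = 0.5 * (y0 + y1) - 0.5 * lipschitz * (x1 - x0)
            if best is None or v_new < best[0]:
                best = (v_new, x_new)
        x_q = best[1]
        pts.append((x_q, f(x_q)))
        pts.sort()
    return min(pts, key=lambda p: p[1])

# toy multimodal objective on [0, 5]; its Lipschitz constant is below 7
f = lambda x: math.sin(3.0 * x) + 0.5 * x
print(piyavskii_shubert(f, 0.0, 5.0, lipschitz=7.0))
```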
29. Optimally Efficient Sequential Calibration of Binary Classifiers to Minimize Classification Error
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Computer Science - Computational Complexity
- Abstract
In this work, we aim to calibrate the score outputs of an estimator for the binary classification problem by finding an 'optimal' mapping to class probabilities, where the 'optimal' mapping is in the sense that minimizes the classification error (or equivalently, maximizes the accuracy). We show that for the given target variables and the score outputs of an estimator, an 'optimal' soft mapping, which monotonically maps the score values to probabilities, is a hard mapping that maps the score values to $0$ and $1$. We show that for class weighted (where the accuracy for one class is more important) and sample weighted (where the samples' accurate classifications are not equally important) errors, or even general linear losses; this hard mapping characteristic is preserved. We propose a sequential recursive merger approach, which produces an 'optimal' hard mapping (for the observed samples so far) sequentially with each incoming new sample. Our approach has a logarithmic in sample size time complexity, which is optimally efficient.
- Published
- 2021
30. Optimal and Efficient Algorithms for General Mixable Losses against Switching Oracles
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
We investigate the problem of online learning, which has gained significant attention in recent years due to its applicability in a wide range of fields from machine learning to game theory. Specifically, we study the online optimization of mixable loss functions in a dynamic environment. We introduce online mixture schemes that asymptotically achieve the performance of the best dynamic estimation sequence of the switching oracle with optimal regret redundancies. The best dynamic estimation sequence that we compete against is selected in hindsight with full observation of the loss functions and is allowed to select different optimal estimations in different time intervals (segments). We propose two mixtures in our work. First, we propose a tractable polynomial time complexity algorithm that can achieve the optimal redundancy of the intractable brute force approach. Second, we propose an efficient logarithmic time complexity algorithm that can achieve the optimal redundancy up to a constant multiplicative gap. Our results are guaranteed to hold in a strong deterministic sense in an individual sequence manner.
- Published
- 2021
31. QoE Evaluation for Adaptive Video Streaming: Enhanced MDT with Deep Learning
- Author
- Gokcesu, Hakan, Ercetin, Ozgur, Kalem, Gokhan, and Ergut, Salih
- Subjects
- Computer Science - Networking and Internet Architecture, Electrical Engineering and Systems Science - Signal Processing
- Abstract
The network performance is usually assessed by drive tests, where teams of people with specially equipped vehicles physically drive out to test various locations throughout a radio network. However, intelligent and autonomous troubleshooting is considered a crucial enabler for 5G- and 6G-networks. In this paper, we propose an architecture for performing virtual drive tests by facilitating radio-quality data from the user equipment. Our architecture comprises three main components: i) a pattern recognizer that learns a typical pattern for the application from application Key Performance Indicators (KPI); ii) a predictor for mapping network KPI with the application KPI; iii) an anomaly detector that compares the predicted application performance with that of the typical application pattern. In this work, we use a commercial state-of-the-art network optimization tool to collect network and application KPI at different geographical locations and at various times of the day for training an initial learning model. We perform extensive numerical analysis to demonstrate key parameters impacting correct video quality prediction and anomaly detection. We show that the playback time is the single most important parameter affecting the video quality, since video packets are usually buffered ahead of time during the playback. However, radio frequency (RF) performance indicators characterizing the quality of the cellular connection improve the QoE estimation in exceptional cases. We demonstrate the efficacy of our approach by showing that the mean maximum F1-score of our method is 77%. Finally, the proposed architecture is flexible and autonomous, and it can operate with different user applications as long as the relevant user-based traces are available., Comment: 14 pages, 14 figures
- Published
- 2021
32. Recursive Experts: An Efficient Optimal Mixture of Learning Systems in Dynamic Environments
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Sequential learning systems are used in a wide variety of problems from decision making to optimization, where they provide a 'belief' (opinion) to nature, and then update this belief based on the feedback (result) to minimize (or maximize) some cost or loss (conversely, utility or gain). The goal is to reach an objective by exploiting the temporal relation inherent to the nature's feedback (state). By exploiting this relation, specific learning systems can be designed that perform asymptotically optimal for various applications. However, if the framework of the problem is not stationary, i.e., the nature's state sometimes changes arbitrarily, the past cumulative belief revision done by the system may become useless and the system may fail if it lacks adaptivity. While this adaptivity can be directly implemented in specific cases (e.g., convex optimization), it is mostly not straightforward for general learning tasks. To this end, we propose an efficient optimal mixture framework for general sequential learning systems, which we call the recursive experts for dynamic environments. For this purpose, we design hyper-experts that incorporate the learning systems at our disposal and recursively merge in a specific way to achieve minimax optimal regret bounds up to constant factors. The multiplicative increases in computational complexity from the initial system to our adaptive system are only logarithmic-in-time factors.
- Published
- 2020
33. A Generalized Online Algorithm for Translation and Scale Invariant Prediction with Expert Advice
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
In this work, we aim to create a completely online algorithmic framework for prediction with expert advice that is translation-free and scale-free of the expert losses. Our goal is to create a generalized algorithm that is suitable for use in a wide variety of applications. For this purpose, we study the expected regret of our algorithm against a generic competition class in the sequential prediction by expert advice problem, where the expected regret measures the difference between the losses of our prediction algorithm and the losses of the 'best' expert selection strategy in the competition. We design our algorithm using the universal prediction perspective to compete against a specified class of expert selection strategies, which is not necessarily a fixed expert selection. The class of expert selection strategies that we want to compete against is purely determined by the specific application at hand and is left generic, which makes our generalized algorithm suitable for use in many different problems. We show that no preliminary knowledge about the loss sequence is required by our algorithm and its performance bounds, which are second order, expressed in terms of sums of squared losses. Our regret bounds are stable under arbitrary scalings and translations of the losses.
- Published
- 2020
34. Minimax Optimal Algorithms for Adversarial Bandit Problem with Multiple Plays
- Author
- Vural, N. Mert, Gokcesu, Hakan, Gokcesu, Kaan, and Kozat, Suleyman S.
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
We investigate the adversarial bandit problem with multiple plays under semi-bandit feedback. We introduce a highly efficient algorithm that asymptotically achieves the performance of the best switching $m$-arm strategy with minimax optimal regret bounds. To construct our algorithm, we introduce a new expert advice algorithm for the multiple-play setting. By using our expert advice algorithm, we additionally improve the best-known high-probability bound for the multi-play setting by $O(\sqrt{m})$. Our results are guaranteed to hold in an individual sequence manner since we have no statistical assumption on the bandit arm gains. Through an extensive set of experiments involving synthetic and real data, we demonstrate significant performance gains achieved by the proposed algorithm with respect to the state-of-the-art algorithms.
- Published
- 2019
35. Universal Online Convex Optimization with Minimax Optimal Second-Order Dynamic Regret
- Author
- Gokcesu, Hakan and Kozat, Suleyman S.
- Subjects
- Mathematics - Optimization and Control, Computer Science - Machine Learning, 90C25 (Primary) 68Q25, 93C40 (Secondary)
- Abstract
We introduce an online convex optimization algorithm which utilizes projected subgradient descent with optimal adaptive learning rates. Our method provides second-order minimax-optimal dynamic regret guarantee (i.e. dependent on the sum of squared subgradient norms) for a sequence of general convex functions, which may not have strong convexity, smoothness, exp-concavity or even Lipschitz-continuity. The regret guarantee is against any comparator decision sequence with bounded path variation (i.e. sum of the distances between successive decisions). We generate the lower bound of the worst-case second-order dynamic regret by incorporating actual subgradient norms. We show that this lower bound matches with our regret guarantee within a constant factor, which makes our algorithm minimax optimal. We also derive the extension for learning in each decision coordinate individually. We demonstrate how to best preserve our regret guarantee in a truly online manner, when the bound on path variation of the comparator sequence grows in time or the feedback regarding such bound arrives partially as time goes on. We further build on our algorithm to eliminate the need of any knowledge on the comparator path variation, and provide minimax optimal second-order regret guarantees with no a priori information. Our approach can compete against all comparator sequences simultaneously (universally) in a minimax optimal manner, i.e. each regret guarantee depends on the respective comparator path variation. We discuss modifications to our approach which address complexity reductions for time, computation and memory. We further improve our results by making the regret guarantees also dependent on comparator sets' diameters in addition to the respective path variations., Comment: 30 pages, 5 figure, single
- Published
- 2019
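The entry above builds on projected subgradient descent with adaptive learning rates. The sketch below is a generic instance of those building blocks, assuming a Euclidean-ball domain and the standard adaptive step proportional to the diameter over the root of the accumulated squared subgradient norms; it is not the paper's universal dynamic-regret algorithm, and the toy losses are made up.

```python
import numpy as np

def projected_ogd(subgrads, radius=1.0, dim=3, rounds=100):
    """Projected online subgradient descent on the Euclidean ball of the given
    radius, with the adaptive step eta_t ~ D / sqrt(sum of squared subgradient
    norms). `subgrads` is a callable (t, x) -> subgradient of the t-th loss."""
    x = np.zeros(dim)
    diameter = 2.0 * radius
    sq_norm_sum = 0.0
    iterates = []
    for t in range(rounds):
        g = subgrads(t, x)
        sq_norm_sum += float(np.dot(g, g))
        eta = diameter / (np.sqrt(sq_norm_sum) + 1e-12)
        x = x - eta * g
        norm = np.linalg.norm(x)
        if norm > radius:                      # project back onto the ball
            x = x * (radius / norm)
        iterates.append(x.copy())
    return iterates

# toy losses f_t(x) = ||x - c_t||_1 with a slowly drifting comparator c_t
def subgrad(t, x):
    c = 0.5 * np.array([np.cos(0.05 * t), np.sin(0.05 * t), 0.0])
    return np.sign(x - c)

print(np.round(projected_ogd(subgrad)[-1], 3))
```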
36. Accelerating Min-Max Optimization with Application to Minimal Bounding Sphere
- Author
- Gokcesu, Hakan, Gokcesu, Kaan, and Kozat, Suleyman Serdar
- Subjects
- Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We study the min-max optimization problem where each function contributing to the max operation is strongly-convex and smooth with bounded gradient in the search domain. By smoothing the max operator, we show the ability to achieve an arbitrarily small positive optimality gap of $\delta$ in $\tilde{O}(1/\sqrt{\delta})$ computational complexity (up to logarithmic factors) as opposed to the state-of-the-art strong-convexity computational requirement of $O(1/\delta)$. We apply this important result to the well-known minimal bounding sphere problem and demonstrate that we can achieve a $(1+\varepsilon)$-approximation of the minimal bounding sphere, i.e., identify a hypersphere enclosing a total of $n$ given points in the $d$-dimensional unbounded space $\mathbb{R}^d$ with a radius at most $(1+\varepsilon)$ times the actual minimal bounding sphere radius for an arbitrarily small positive $\varepsilon$, in $\tilde{O}(n d /\sqrt{\varepsilon})$ computational time as opposed to the state-of-the-art approach of core-set methodology, which needs $O(n d /\varepsilon)$ computational time., Comment: 12 pages, 1 figure, preprint, [v0] 2018
- Published
- 2019
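The smoothing idea in the entry above can be illustrated with the standard log-sum-exp relaxation of the max operator, which is within $\mu \log n$ of the true maximum. The sketch below runs plain (unaccelerated) gradient descent on the smoothed minimal-bounding-sphere objective; the smoothing level, step size and iteration count are conservative made-up values, and the accelerated scheme of the entry is omitted.

```python
import numpy as np

def smoothed_minmax_center(points, mu=0.1, lr=0.002, steps=5000):
    """Minimal-bounding-sphere sketch: replace max_i ||c - p_i||^2 by the
    log-sum-exp smoothing mu * log(sum_i exp(||c - p_i||^2 / mu)) and run
    plain gradient descent on the center c."""
    c = points.mean(axis=0)
    for _ in range(steps):
        sq = np.sum((c - points) ** 2, axis=1)
        w = np.exp((sq - sq.max()) / mu)        # numerically stable softmax
        w /= w.sum()
        grad = 2.0 * np.sum((c - points) * w[:, None], axis=0)
        c = c - lr * grad
    radius = np.sqrt(np.max(np.sum((c - points) ** 2, axis=1)))
    return c, radius

rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 2))
print(smoothed_minmax_center(pts))
```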
37. Minimax Optimal Online Stochastic Learning for Sequences of Convex Functions under Sub-Gradient Observation Failures
- Author
- Gokcesu, Hakan and Kozat, Suleyman S.
- Subjects
- Computer Science - Machine Learning, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
We study online convex optimization under stochastic sub-gradient observation faults, where we introduce adaptive algorithms with minimax optimal regret guarantees. We specifically study scenarios where our sub-gradient observations can be noisy or even completely missing in a stochastic manner. To this end, we propose algorithms based on sub-gradient descent method, which achieve tight minimax optimal regret bounds. When necessary, these algorithms utilize properties of the underlying stochastic settings to optimize their learning rates (step sizes). These optimizations are the main factor in providing the minimax optimal performance guarantees, especially when observations are stochastically missing. However, in real world scenarios, these properties of the underlying stochastic settings may not be revealed to the optimizer. For such a scenario, we propose a blind algorithm that estimates these properties empirically in a generally applicable manner. Through extensive experiments, we show that this empirical approach is a natural combination of regular stochastic gradient descent and the minimax optimal algorithms (which work best for randomized and adversarial function sequences, respectively)., Comment: 25 pages, 6 figures, preprint, single column
- Published
- 2019
38. QoE Evaluation in Adaptive Streaming: Enhanced MDT with Deep Learning
- Author
- Gokcesu, Hakan, Ercetin, Ozgur, Kalem, Gokhan, and Ergut, Salih
- Published
- 2023
39. Nearly Optimal Scheduling of Wireless Ad Hoc Networks in Polynomial Time
- Author
- Kose, Alper, Evirgen, Noyan, Gokcesu, Hakan, Gokcesu, Kaan, and Medard, Muriel
- Subjects
- Computer Science - Information Theory
- Abstract
In this paper, we address the scheduling problem in wireless ad hoc networks over a wireless broadcast medium by exploiting the computational advantage that arises when such scheduling problems can be represented by claw-free conflict graphs. It is possible to formulate the scheduling problem of network coded flows as finding a maximum weighted independent set (MWIS) in the conflict graph of the network. Finding the MWIS of a general graph is NP-hard, which makes the scheduling problem NP-hard as well. In a claw-free conflict graph, however, the MWIS can be found in polynomial time, leading to throughput-optimal scheduling. We show that the conflict graphs of certain wireless ad hoc networks are claw-free. In order to obtain claw-free conflict graphs in general networks, we suggest introducing additional conflicts (edges) while keeping the decrease in MWIS size minimal. To this end, we introduce an iterative optimization problem to decide where to introduce edges and investigate its efficient implementation. In addition, we exemplify some physical modifications that manipulate the conflict graph of a network and also propose a mixed scheduling strategy for specific networks. We conclude that the claw-breaking method of adding extra edges can perform nearly optimally under the necessary assumptions., Comment: This work is to be submitted to the IEEE for possible publication
- Published
- 2017
40. An Asymptotically Optimal Algorithm for Communicating Multiplayer Multi-Armed Bandit Problems
- Author
- Evirgen, Noyan, Kose, Alper, and Gokcesu, Hakan
- Subjects
- Computer Science - Learning
- Abstract
We consider a decentralized stochastic multi-armed bandit problem with multiple players. Each player aims to maximize his/her own reward by pulling an arm. The arms give rewards based on i.i.d. Bernoulli distributions. Players are not aware of the probability distributions of the arms. At the end of each turn, players inform their neighbors about the arm they pulled and the reward they received. Neighbors of players are determined according to an Erd{\H{o}}s-R{\'e}nyi graph with connectivity $\alpha$. This graph is reproduced at the beginning of every turn with the same connectivity. When more than one player chooses the same arm in a turn, we assume that only one randomly chosen player gets the reward while the others get nothing. We first assume that players are not aware of the collision model and offer an asymptotically optimal algorithm for the $\alpha = 1$ case. Then, we extend our prior work and offer an asymptotically optimal algorithm for any nonzero connectivity, assuming players are aware of the collision model. We also empirically study the effect of $\alpha$, the degree of communication between players, on the cumulative regret by comparing our algorithms with traditional multi-armed bandit algorithms., Comment: This work is an extension of the paper [arXiv:1711.01628] which has been accepted to the 2017 IEEE ICMLA and submitted to Elsevier Signal Processing
- Published
- 2017
41. Universal Online Convex Optimization With Minimax Optimal Second-Order Dynamic Regret
- Author
- Gokcesu, Hakan and Kozat, Suleyman Serdar
- Published
- 2024
42. Enhancing QoE Assessment in FWA: Leveraging Network KPIs and User Feedback Analysis
- Author
- Gokcesu, Hakan, Ercetin, Ozgur, and Kalem, Gokhan
- Published
- 2023
43. Deep Learning-Based QoE Prediction for Streaming Services in Mobile Networks
- Author
- Huang, Gan, Ercetin, Ozgur, Gokcesu, Hakan, and Kalem, Gokhan
- Published
- 2022
44. Regret Analysis of Global Optimization in Univariate Functions with Lipschitz Derivatives
- Author
- Gokcesu, Kaan and Gokcesu, Hakan
- Subjects
- FOS: Computer and information sciences, Computer Science::Machine Learning, Statistics::Machine Learning, Computer Science - Machine Learning, Optimization and Control (math.OC), FOS: Mathematics, Mathematics - Optimization and Control, Machine Learning (cs.LG)
- Abstract
In this work, we study the problem of global optimization in univariate loss functions, where we analyze the regret of the popular lower bounding algorithms (e.g., Piyavskii-Shubert algorithm). For any given time $T$, instead of the widely available simple regret (which is the difference of the losses between the best estimation up to $T$ and the global optimizer), we study the cumulative regret up to that time. With a suitable lower bounding algorithm, we show that it is possible to achieve satisfactory cumulative regret bounds for different classes of functions. For Lipschitz continuous functions with the parameter $L$, we show that the cumulative regret is $O(L\log T)$. For Lipschitz smooth functions with the parameter $H$, we show that the cumulative regret is $O(H)$. We also analytically extend our results for a broader class of functions that covers both the Lipschitz continuous and smooth functions individually.
- Published
- 2021
45. Graph-Theoretical Dynamic User Pairing for Downlink NOMA Systems
- Author
- Kose, Alper, Koca, Mutlu, Anarim, Emin, Medard, Muriel, and Gokcesu, Hakan
- Published
- 2021
46. A Novel Method for Scheduling of Wireless Ad Hoc Networks in Polynomial Time
- Author
- Kose, Alper, Gokcesu, Hakan, Evirgen, Noyan, Gokcesu, Kaan, and Medard, Muriel
- Published
- 2021
47. Minimax Optimal Algorithms for Adversarial Bandit Problem With Multiple Plays
- Author
- Vural, Nuri Mert, Gokcesu, Hakan, Gokcesu, Kaan, and Kozat, Suleyman S.
- Published
- 2019
48. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures
- Author
- Mohaghegh Neyshabouri, Mohammadreza, Gokcesu, Kaan, Gokcesu, Hakan, Ozkan, Huseyin, and Kozat, Suleyman Serdar
- Published
- 2019
49. Sequential Outlier Detection Based on Incremental Decision Trees
- Author
- Gokcesu, Kaan, Neyshabouri, Mohammadreza Mohaghegh, Gokcesu, Hakan, and Kozat, Suleyman Serdar
- Published
- 2019
50. An Adaptive Algorithm for Online Interference Cancellation in EMG Sensors.
- Author
- Gokcesu, Kaan, Ergeneci, Mert, Ertan, Erhan, and Gokcesu, Hakan
- Abstract
One of the biggest issues encountered in the analysis of sensitive electromyography (EMG) sensor data is the power line interference (PLI). Conventional methods in the literature either lose valuable sensor data or inadequately decrease the power line noise. Instead of filtering out predetermined frequencies, adaptively estimating the spectrum of the PLI can provide better performance. This paper introduces an online adaptive algorithm that removes the power line interference in real time without disturbing the true EMG data. Our method sequentially processes the biomedical signal to properly estimate and remove the PLI component. Through experiments with real EMG data, we compared our method to five state-of-the-art techniques. Our algorithm outperformed all of them with the highest SNR gain (3.6 dB on average) and the least disturbance of the true EMG signal (0.0152 dB loss on average). Our method reduces the PLI the most while keeping the loss of valuable sensor data at a minimum in comparison to the state-of-the-art. Reducing the noise without disturbing the valuable sensor data provides higher quality signals with decreased interference, which can be better processed and used in biomedical research.
- Published
- 2019
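For context around the entry above, the sketch below is the classical LMS adaptive canceller with in-phase/quadrature references at the mains frequency, i.e., one of the conventional baselines such methods are compared against; it is not the paper's spectrum-estimating algorithm, and the sampling rate, interference and step size are illustrative.

```python
import numpy as np

def lms_pli_canceller(signal, fs, f_pli=50.0, mu=0.01):
    """Classical adaptive power-line-interference canceller: LMS-adapt the
    amplitudes of sine/cosine references at the mains frequency and subtract
    the resulting interference estimate from the input."""
    n = np.arange(len(signal))
    refs = np.stack([np.sin(2 * np.pi * f_pli * n / fs),
                     np.cos(2 * np.pi * f_pli * n / fs)])
    w = np.zeros(2)
    cleaned = np.empty(len(signal), dtype=float)
    for t in range(len(signal)):
        est = w @ refs[:, t]              # current PLI estimate
        err = signal[t] - est             # interference-reduced output
        w += 2 * mu * err * refs[:, t]    # LMS weight update
        cleaned[t] = err
    return cleaned

fs = 1000.0
t = np.arange(4000) / fs
emg = 0.2 * np.random.randn(t.size)                  # stand-in for EMG activity
pli = 1.0 * np.sin(2 * np.pi * 50.0 * t + 0.3)       # mains interference
out = lms_pli_canceller(emg + pli, fs)
print(np.std(emg + pli), np.std(out))                # variance drops toward EMG level
```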