
Parallel algorithms for maximizing one-sided $\sigma$-smooth function

Authors :
Zhang, Hongxiang
Cheng, Yukun
Wu, Chenchen
Xu, Dachuan
Du, Dingzhu
Publication Year :
2022

Abstract

In this paper, we study the problem of maximizing a monotone normalized one-sided $\sigma$-smooth ($OSS$ for short) function $F(x)$, subject to a convex polytope. This problem was first introduced by Mehrdad et al. \cite{GSS2021} to characterize the multilinear extension of some set functions. Unlike the serial Jump-Start Continuous Greedy algorithm of Mehrdad et al. \cite{GSS2021}, we propose the Jump-Start Parallel Greedy (JSPG for short) algorithm, the first parallel algorithm for this problem. The approximation ratio of the JSPG algorithm is proved to be $((1-e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}})-\epsilon)$ for any number $\alpha\in(0,1]$ and $\epsilon>0$. We also prove that our JSPG algorithm runs in $O(\log n/\epsilon^{2})$ adaptive rounds and consumes $O(n \log n/\epsilon^{2})$ queries. In addition, we study the stochastic version of maximizing a monotone normalized $OSS$ function, in which the objective function $F(x)$ is defined as $F(x)=\mathbb{E}_{y\sim T}f(x,y)$. Here $f$ is a stochastic function with respect to the random variable $Y$, and $y$ is the realization of $Y$ drawn from a probability distribution $T$. For this stochastic version, we design the Stochastic Parallel-Greedy (SPG) algorithm, which achieves a result of $F(x)\geq(1 -e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}}-\epsilon)OPT-O(\kappa^{1/2})$, with the same time complexity as the JSPG algorithm. Here $\kappa=\frac{\max \{5\|\nabla F(x_{0})-d_{0}\|^{2}, 16\sigma^{2}+2L^{2}D^{2}\}}{(t+9)^{2/3}}$ is related to the preset parameters $\sigma, L, D$ and time $t$.
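As a quick numerical illustration of the guarantee stated above (a sketch only, with the additive $\epsilon$ term omitted; the helper name `jspg_ratio` is ours, not from the paper), the leading factor $1-e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}}$ can be evaluated for a few choices of $\alpha$ and $\sigma$:

```python
import math

def jspg_ratio(alpha: float, sigma: float) -> float:
    """Leading factor of the JSPG approximation ratio,
    1 - exp(-(alpha/(alpha+1))**(2*sigma)), ignoring epsilon."""
    return 1.0 - math.exp(-((alpha / (alpha + 1.0)) ** (2.0 * sigma)))

# For fixed sigma, the ratio is increasing in alpha on (0, 1];
# larger sigma (less smooth) weakens the guarantee.
for alpha in (0.25, 0.5, 1.0):
    print(f"alpha={alpha}, sigma=1: ratio = {jspg_ratio(alpha, 1.0):.4f}")
```

At $\alpha=1$ and $\sigma=1$ this factor is $1-e^{-1/4}\approx 0.221$, which shows how the bound degrades as $\sigma$ grows.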

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2206.05841
Document Type :
Working Paper