An Asymptotically Tight Learning Algorithm for Mobile-Promotion Platforms
- Authors
Zhichao Feng, Milind Dawande, Ganesh Janakiraman, and Anyan Qi
- Subjects
Operations research, Computer science, Online advertising, Stochastic programming, Bidding, Bid price, Regret, Time horizon, Advertising campaign, Total cost, Strategy and Management, Management Science and Operations Research, Industrial and Manufacturing Engineering, Business and International Management
- Abstract
Operating under both supply-side and demand-side uncertainties, a mobile-promotion platform conducts advertising campaigns for individual advertisers. Campaigns arrive dynamically over time, which is divided into seasons; each campaign requires the platform to deliver a target number of mobile impressions from a desired set of locations over a desired time interval. The platform fulfills these campaigns by procuring impressions from publishers, who supply advertising space on apps via real-time bidding on ad exchanges. Each location is characterized by its win curve, that is, the relationship between the bid price and the probability of winning an impression at that bid. The win curves at the various locations of interest are initially unknown to the platform, and it learns them on the fly based on the bids it places to win impressions and the realized outcomes. Each acquired impression is allocated to one of the ongoing campaigns. The platform’s objective is to minimize its total cost (the amount spent in procuring impressions and the penalty incurred due to unmet targets of the campaigns) over the time horizon of interest. Our main result is a bidding and allocation policy for this problem. We show that our policy is the best possible (asymptotically tight) for the problem using the notion of regret under a policy, namely the difference between the expected total cost under that policy and the optimal cost for the clairvoyant problem (i.e., one in which the platform has full information about the win curves at all the locations in advance): The lower bound on the regret under any policy is of the order of the square root of the number of seasons, and the regret under our policy matches this lower bound. We demonstrate the performance of our policy through numerical experiments on a test bed of instances whose input parameters are based on our observations at a real-world mobile-promotion platform. This paper was accepted by Baris Ata, stochastic models and simulation. Supplemental Material: The online appendices are available at https://doi.org/10.1287/mnsc.2022.4441.
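The regret criterion described in the abstract admits a compact formal statement. The following is a minimal sketch in illustrative notation not taken from the paper: $N$ is the number of seasons, $\pi$ a policy, $C_N(\pi)$ the total cost under $\pi$, and $C_N^{\mathrm{OPT}}$ the optimal cost of the clairvoyant problem.

```latex
% Regret of a policy \pi over N seasons: expected total cost under \pi
% minus the expected optimal cost of the clairvoyant problem.
\[
  \mathrm{Regret}_N(\pi) \;=\; \mathbb{E}\bigl[C_N(\pi)\bigr] \;-\; \mathbb{E}\bigl[C_N^{\mathrm{OPT}}\bigr].
\]
% The tightness result, in this notation: there exist constants c_1, c_2 > 0
% such that
\[
  \inf_{\pi}\, \mathrm{Regret}_N(\pi) \;\ge\; c_1 \sqrt{N}
  \qquad \text{and} \qquad
  \mathrm{Regret}_N(\hat{\pi}) \;\le\; c_2 \sqrt{N},
\]
% where \hat{\pi} denotes the proposed bidding-and-allocation policy, so the
% policy's regret matches the lower bound up to a constant factor.
```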
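To make the win-curve learning problem concrete, here is a toy, single-location simulation. It is a hedged sketch of the setting only, not the paper's policy: the logistic curve shape, the explore-then-exploit split, and every function name and parameter below are hypothetical choices for illustration.

```python
# Toy model of bidding against an unknown "win curve" w(b) = P(win at bid b).
# The platform learns the curve from its own (bid, outcome) data and pays a
# per-impression penalty for any shortfall against the season's target.
import math
import random

def true_win_prob(bid, a=2.0, c=1.0):
    """Win curve, unknown to the platform: logistic in the bid (assumed form)."""
    return 1.0 / (1.0 + math.exp(-(a * bid - c)))

def estimate_win_prob(history, bid, bandwidth=0.1):
    """Estimate P(win | bid) from past (bid, won) pairs via a local average
    over bids within `bandwidth` of the queried bid."""
    near = [won for b, won in history if abs(b - bid) <= bandwidth]
    return sum(near) / len(near) if near else 0.5  # uninformative prior

def run_season(num_auctions=2000, target=600, penalty=3.0, explore_frac=0.2):
    """One season: procure impressions toward `target`, paying `penalty`
    per unmet impression. Returns total cost (spend plus penalty)."""
    grid = [round(0.1 * k, 1) for k in range(1, 11)]  # candidate bid prices
    history, spend, won_total = [], 0.0, 0
    for t in range(num_auctions):
        if t < explore_frac * num_auctions:
            bid = random.choice(grid)  # exploration: sample the win curve
        else:
            # Exploitation: cheapest bid whose estimated win probability
            # keeps procurement on pace for the remaining auctions.
            needed_rate = max(target - won_total, 0) / max(num_auctions - t, 1)
            feasible = [b for b in grid
                        if estimate_win_prob(history, b) >= needed_rate]
            bid = min(feasible) if feasible else max(grid)
        won = random.random() < true_win_prob(bid)
        history.append((bid, won))
        if won:
            spend += bid   # first-price payment, for simplicity
            won_total += 1
    return spend + penalty * max(target - won_total, 0)

if __name__ == "__main__":
    random.seed(0)
    costs = [run_season() for _ in range(20)]
    print(f"avg cost over 20 seasons: {sum(costs) / len(costs):.1f}")
```

An actual square-root-regret policy would couple estimation and bidding far more carefully across seasons, locations, and concurrent campaigns; the sketch only mirrors the cost structure (procurement spend plus shortfall penalty) described in the abstract.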
- Published
2023