
Opportunistic View Materialization with Deep Reinforcement Learning

Authors:
Liang, Xi
Elmore, Aaron J.
Krishnan, Sanjay
Publication Year:
2019

Abstract

Carefully selected materialized views can greatly improve the performance of OLAP workloads. We study the use of deep reinforcement learning to learn adaptive view materialization and eviction policies. Our insight is that such selection policies can be effectively trained with an asynchronous RL algorithm that runs paired counter-factual experiments during system idle times to evaluate the incremental value of persisting certain views. Such a strategy obviates the need for accurate cardinality estimation or hand-designed scoring heuristics. We focus on inner-join views and modeling effects in a main-memory OLAP system. Our research prototype system, called DQM, is implemented in SparkSQL, and we experiment on several workloads including the Join Order Benchmark and the TPC-DS workload. Results suggest that: (1) DQM can outperform heuristics when their assumptions are not satisfied by the workload or when there are temporal effects like periodic maintenance, (2) even with the cost of learning, DQM is more adaptive to changes in the workload, and (3) DQM is broadly applicable to different workloads and skews.

Comment: 14 pages
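The abstract describes the approach only at a high level. As a rough illustration of the idea of learning a materialization policy from paired counter-factual runtime measurements, the sketch below shows a contextual-bandit simplification in Python. It is not the authors' DQM implementation: the class names, the linear Q-function, and the reward formula are assumptions made purely for illustration; the paper's system uses a deep Q-network trained asynchronously inside SparkSQL.

```python
# Illustrative sketch only; names and reward shaping are hypothetical,
# not taken from the DQM codebase.
import random
from dataclasses import dataclass

import numpy as np


@dataclass
class ViewCandidate:
    """A candidate inner-join view described by simple workload features."""
    features: np.ndarray  # e.g. [hit frequency, size estimate, build cost]


class LinearQPolicy:
    """Linear Q-function over view features; actions: 0 = skip, 1 = materialize."""

    def __init__(self, n_features: int, lr: float = 0.01, epsilon: float = 0.1):
        self.weights = np.zeros((2, n_features))
        self.lr = lr
        self.epsilon = epsilon

    def q_values(self, view: ViewCandidate) -> np.ndarray:
        return self.weights @ view.features  # one Q-value per action

    def choose(self, view: ViewCandidate) -> int:
        # Epsilon-greedy exploration over the two actions.
        if random.random() < self.epsilon:
            return random.randint(0, 1)
        return int(np.argmax(self.q_values(view)))

    def update(self, view: ViewCandidate, action: int, reward: float) -> None:
        # One-step (bandit-style) update toward the observed reward.
        td_error = reward - self.q_values(view)[action]
        self.weights[action] += self.lr * td_error * view.features


def paired_counterfactual_reward(runtime_without_view: float,
                                 runtime_with_view: float,
                                 maintenance_cost: float) -> float:
    """Reward = query time saved by the view minus its maintenance cost.
    In the paper these runtimes come from paired experiments run at idle
    time; here they are simply supplied numbers."""
    return (runtime_without_view - runtime_with_view) - maintenance_cost


if __name__ == "__main__":
    policy = LinearQPolicy(n_features=3)
    # Simulated idle-time training loop over random candidate views.
    for _ in range(1000):
        view = ViewCandidate(features=np.random.rand(3))
        action = policy.choose(view)
        if action == 1:
            # Pretend the paired runs showed savings proportional to hit frequency.
            reward = paired_counterfactual_reward(
                runtime_without_view=view.features[0],
                runtime_with_view=0.2 * view.features[0],
                maintenance_cost=0.1 * view.features[1])
        else:
            reward = 0.0  # skipping a view neither costs nor saves anything
        policy.update(view, action, reward)
    print("learned weights:\n", policy.weights)
```

The key point carried over from the abstract is the reward signal: it comes from observed runtime differences in paired with/without-view experiments rather than from cardinality estimates or hand-designed scoring heuristics.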

Subjects:
Computer Science - Databases

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1903.01363
Document Type:
Working Paper