
Distributive Pre-Training of Generative Modeling Using Matrix-Product States

Authors:
Lin, Sheng-Hsuan
Kuijpers, Olivier
Peterhansl, Sebastian
Pollmann, Frank
Publication Year:
2023

Abstract

Tensor networks have recently found applications in machine learning for both supervised and unsupervised learning. The most common approaches for training these models are gradient descent methods. In this work, we consider an alternative training scheme utilizing basic tensor network operations, e.g., summation and compression. The training algorithm is based on compressing the superposition state constructed from all the training data in product-state representation. The algorithm can be parallelized easily and iterates through the dataset only once; hence, it serves as a pre-training algorithm. We benchmark the algorithm on the MNIST dataset and show reasonable results for generating new images and for classification tasks. Furthermore, we provide an interpretation of the algorithm as a compressed quantum kernel density estimation for the probability amplitude of input data.

Comment: 7+2 pages, 1+2 figures; Position paper in QTNML Workshop, NeurIPS 2021; See https://tensorworkshop.github.io/NeurIPS2021/accepted_papers/MPS_MNIST.pdf
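The abstract's core idea (encode each sample as a bond-dimension-1 product state, sum the product states, and compress the resulting superposition back down) maps directly onto standard MPS operations. Below is a minimal NumPy sketch of that pipeline; it is not the authors' implementation, and the function names, the cosine/sine pixel embedding, and the sum-then-compress schedule are illustrative assumptions.

    import numpy as np

    def encode_pixel(x):
        # Map a pixel value x in [0, 1] to a 2-dimensional local feature
        # vector (a common embedding in tensor-network ML; an assumption here).
        return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

    def product_state_mps(sample):
        # One data sample as a bond-dimension-1 MPS (a product state).
        # Tensor index convention: (left bond, physical, right bond).
        return [encode_pixel(x).reshape(1, 2, 1) for x in sample]

    def mps_sum(a, b):
        # Direct-sum construction of MPS addition: bond dimensions add,
        # so summing N product states gives bond dimension N before compression.
        out, n = [], len(a)
        for i, (ta, tb) in enumerate(zip(a, b)):
            la, d, ra = ta.shape
            lb, _, rb = tb.shape
            l = 1 if i == 0 else la + lb
            r = 1 if i == n - 1 else ra + rb
            t = np.zeros((l, d, r))
            if i == 0:
                t[:, :, :ra] = ta
                t[:, :, ra:] = tb
            elif i == n - 1:
                t[:la] = ta
                t[la:] = tb
            else:
                t[:la, :, :ra] = ta
                t[la:, :, ra:] = tb
            out.append(t)
        return out

    def compress(mps, chi_max):
        # Left-to-right SVD sweep truncating every bond to at most chi_max
        # singular values. (A production version would canonicalize first,
        # which makes the local truncations globally optimal.)
        mps = [t.copy() for t in mps]
        for i in range(len(mps) - 1):
            l, d, r = mps[i].shape
            u, s, vh = np.linalg.svd(mps[i].reshape(l * d, r), full_matrices=False)
            k = min(chi_max, len(s))
            mps[i] = u[:, :k].reshape(l, d, k)
            rest = np.diag(s[:k]) @ vh[:k]          # absorb remainder rightward
            mps[i + 1] = np.tensordot(rest, mps[i + 1], axes=(1, 0))
        return mps

A toy run on random data standing in for flattened MNIST rows. The serial accumulation below is the naive form; the abstract's parallelization claim rests on the sums being reorderable, e.g., as a binary tree over the dataset:

    rng = np.random.default_rng(0)
    data = rng.random((8, 16))                      # 8 samples, 16 "pixels" each
    psi = product_state_mps(data[0])
    for sample in data[1:]:
        psi = mps_sum(psi, product_state_mps(sample))
    psi = compress(psi, chi_max=4)                  # truncate bond dimension 8 -> 4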

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2306.14787
Document Type:
Working Paper