
NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance

Authors:
Husistein, Raphael T.
Reiher, Markus
Eckhoff, Marco
Publication Year:
2024

Abstract

Artificial neural networks have been shown to be state-of-the-art machine learning models in a wide variety of applications, including natural language processing and image recognition. However, building a performant neural network is a laborious task and requires substantial computing power. Neural Architecture Search (NAS) addresses this issue by automatically selecting the optimal network from a set of potential candidates. While many NAS methods still require training of (some) neural networks, zero-cost proxies promise to identify the optimal network without training. In this work, we propose the zero-cost proxy Network Expressivity by Activation Rank (NEAR). It is based on the effective rank of the pre- and post-activation matrices, i.e., the values of a neural network layer before and after applying its activation function. We demonstrate a state-of-the-art correlation between this network score and the model accuracy on NAS-Bench-101 and NATS-Bench-SSS/TSS. In addition, we present a simple approach to estimate the optimal layer sizes in multi-layer perceptrons. Furthermore, we show that this score can be utilized to select hyperparameters such as the activation function and the neural network weight initialization scheme.

Comment: 12 pages, 4 figures, 10 tables
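To make the scoring idea concrete, below is a minimal sketch of how an effective-rank-based layer score might be computed, assuming the common Roy-Vetterli definition of effective rank (the exponential of the Shannon entropy of the normalized singular values) and a simple fully connected layer. The function names (effective_rank, near_score_layer) and the choice to sum the pre- and post-activation effective ranks are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def effective_rank(matrix: np.ndarray) -> float:
        """Effective rank (Roy & Vetterli, 2007): exp of the Shannon
        entropy of the normalized singular-value distribution."""
        s = np.linalg.svd(matrix, compute_uv=False)
        s = s[s > 1e-12]                 # drop numerically zero singular values
        p = s / s.sum()                  # normalize to a probability distribution
        return float(np.exp(-np.sum(p * np.log(p))))

    def near_score_layer(x: np.ndarray, w: np.ndarray, b: np.ndarray,
                         activation=np.tanh) -> float:
        """Hypothetical per-layer score: effective rank of the pre-activation
        matrix plus that of the post-activation matrix for a batch of inputs x."""
        pre = x @ w + b                  # pre-activation matrix (batch x width)
        post = activation(pre)           # post-activation matrix
        return effective_rank(pre) + effective_rank(post)

    # Example: score one randomly initialized layer on random input data.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((256, 32))               # batch of 256 inputs, 32 features
    w = rng.standard_normal((32, 64)) / np.sqrt(32)  # variance-scaled initialization
    b = np.zeros(64)
    print(f"layer score: {near_score_layer(x, w, b):.2f}")

Because such a score needs only a forward pass through randomly initialized weights, it can be evaluated over candidate architectures, activation functions, or initialization schemes without any training, which is the use case the abstract describes.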

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2408.08776
Document Type:
Working Paper