1. Regularization of linear machine learning problems
- Author
Liu, S., Kabanikhin, S. I., and Strijhak, S. V.
- Subjects
Mathematics - Numerical Analysis
- Abstract
In this paper, we consider the simplest version of a linear neural network (LNN). We assume that for training (constructing an optimal weight matrix $Q$) we have a set of training pairs, i.e. we know the input data
\begin{equation}
G=\left\{g^{\left(1\right)},g^{\left(2\right)},\cdots,g^{\left(K\right)}\right\},
\end{equation}
as well as the correct answers to these inputs,
\begin{equation}
H=\left\{h^{\left(1\right)},h^{\left(2\right)},\cdots,h^{\left(K\right)}\right\}.
\end{equation}
We study the possibility of constructing a weight matrix $Q$ that gives correct answers to arbitrary input data, based on the connection between this problem and a system of linear algebraic equations (SLAE). We consider a class of neural networks in which each neuron has a single output signal and performs only linear operations, and we show how such LNNs reduce to SLAEs. Since the inputs $G$ and the correct answers $H$ are known, the desired weight matrix $Q$ must satisfy the equations
\begin{equation}
Qg^{\left(k\right)}=h^{\left(k\right)}, \quad k=1,2,\cdots,K.
\end{equation}
The task is to recover $Q$. In the general case $Q$ is a rectangular matrix $Q=Q_{MN}=\left\{q_{mn}\right\}$, where $m$ is the row index and $n$ is the column index, with $g^{\left(k\right)}\in\mathbb{R}^N$ and $h^{\left(k\right)}\in\mathbb{R}^M$. Let $G_{NK}$ be the matrix whose columns are $g^{\left(1\right)},g^{\left(2\right)},\cdots,g^{\left(K\right)}$, and let $H_{MK}$ be the matrix whose columns are $h^{\left(1\right)},h^{\left(2\right)},\cdots,h^{\left(K\right)}$. Then, with respect to $Q_{MN}$, we obtain the matrix SLAE
\begin{equation}
Q_{MN}G_{NK}=H_{MK}.
\end{equation}
The paper presents methods for regularizing this system (see the sketch after this entry).
- Comment
In Russian
- Published
2024
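
The abstract does not state which regularization methods the paper develops. As one standard possibility, the following minimal NumPy sketch assembles the matrix SLAE $Q_{MN}G_{NK}=H_{MK}$ from synthetic training pairs and recovers $Q$ by Tikhonov (ridge) regularization; the dimensions, the synthetic data, and the parameter `alpha` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sizes (not from the paper): N-dimensional inputs,
# M-dimensional answers, K training pairs.
N, M, K = 8, 5, 20
rng = np.random.default_rng(0)

# Columns of G are the inputs g^(k); H is generated by a hidden
# weight matrix so that the equations Q G = H are consistent.
G = rng.standard_normal((N, K))
Q_true = rng.standard_normal((M, N))
H = Q_true @ G

# Tikhonov (ridge) regularization of the matrix SLAE Q G = H:
# minimizing ||Q G - H||_F^2 + alpha * ||Q||_F^2 over Q yields the
# normal equations Q (G G^T + alpha I) = H G^T.
alpha = 1e-3
A = G @ G.T + alpha * np.eye(N)      # symmetric positive definite
Q = np.linalg.solve(A, G @ H.T).T    # Q = H G^T A^{-1}

print("training residual:", np.linalg.norm(Q @ G - H))
print("recovery error:   ", np.linalg.norm(Q - Q_true))
```

As `alpha` tends to zero, with $K \ge N$ and $G$ of full row rank, this regularized solution approaches the exact least-squares solution of $QG=H$; the regularization term keeps the problem well-posed when $G G^T$ is ill-conditioned.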