From Kernel Methods to Neural Networks: A Unifying Variational Formulation
Foundations of Computational Mathematics (IF 3), Pub Date: 2023-10-17, DOI: 10.1007/s10208-023-09624-9
Michael Unser

The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator \(\textrm{L}\) and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms \(\Vert \cdot \Vert =\Vert \cdot \Vert _{L_p}\) with \(p\in (1,2]\). Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
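As a rough sketch (the notation below is an assumption inferred from this abstract alone, not the paper's own definitions), the learning problem described above takes the form

\[ \min_{f}\ \sum_{m=1}^{M} E\big(y_m, f(\boldsymbol{x}_m)\big) + \lambda\,\big\Vert \textrm{L} f \big\Vert , \]

where \(E\) is the data-fidelity term, \(\lambda>0\) a regularization weight, and \(\Vert \cdot \Vert\) the generic Radon-domain norm applied to \(\textrm{L} f\). When the norm is Hilbertian, the minimizer is a radial-basis-function expansion \(f(\boldsymbol{x})=\sum_{k} a_k\,\rho(\Vert \boldsymbol{x}-\boldsymbol{x}_k\Vert)\); for the total-variation norm, it is a two-layer network \(f(\boldsymbol{x})=\sum_{k} v_k\,\sigma(\boldsymbol{w}_k^{\mathsf{T}}\boldsymbol{x}-b_k)+(\text{affine term})\), where the activation \(\sigma\) is determined by \(\textrm{L}\) (ReLU when \(\textrm{L}\) is the Laplacian) and the affine term reflects the bias and skip connections mentioned above.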



Updated: 2023-10-17