Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
Foundations of Computational Mathematics (IF 3) Pub Date: 2022-11-09, DOI: 10.1007/s10208-022-09595-3
Jonathan W. Siegel, Jinchao Xu

In this article, we study approximation properties of the variation spaces corresponding to shallow neural networks with a variety of activation functions. We introduce two main tools for estimating the metric entropy, approximation rates, and n-widths of these spaces. First, we introduce the notion of a smoothly parameterized dictionary and give upper bounds on the nonlinear approximation rates, metric entropy, and n-widths of its absolute convex hull; these upper bounds depend upon the order of smoothness of the parameterization. Applied to dictionaries of ridge functions corresponding to shallow neural networks, this result improves upon existing bounds in many cases. Next, we provide a method for lower bounding the metric entropy and n-widths of variation spaces that contain certain classes of ridge functions. This yields sharp lower bounds on the \(L^2\)-approximation rates, metric entropy, and n-widths of variation spaces corresponding to neural networks with a range of important activation functions, including ReLU\(^k\) activation functions and sigmoidal activation functions with bounded variation.
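
For context, here is a minimal sketch of the objects involved, using notation standard in this literature (the specific symbols below are our own assumptions, not quoted from the paper). For the ReLU\(^k\) activation \(\sigma_k(t) = \max(0,t)^k\), the dictionary of ridge functions is

\[ \mathbb{D} = \{\, x \mapsto \sigma_k(\omega \cdot x + b) : \omega \in S^{d-1},\ b \in [-c, c] \,\}, \]

and the variation norm is the gauge of the closed absolutely convex hull of \(\mathbb{D}\),

\[ \|f\|_{\mathcal{K}_1(\mathbb{D})} = \inf\{\, t > 0 : f \in t\, \overline{\mathrm{conv}}(\pm\mathbb{D}) \,\}. \]

The quantities bounded in the paper are then the metric entropy \(\epsilon_n(K)\), i.e., the smallest \(\epsilon\) such that the unit ball \(K = \{f : \|f\|_{\mathcal{K}_1(\mathbb{D})} \le 1\}\) can be covered by \(2^n\) balls of radius \(\epsilon\), and the Kolmogorov n-width

\[ d_n(K)_X = \inf_{\dim V_n = n}\ \sup_{f \in K}\ \inf_{g \in V_n} \|f - g\|_X, \]

the error of the best approximation of \(K\) by an n-dimensional linear subspace.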



Updated: 2022-11-10