Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity Bounds for Neural Network Approximation Spaces
Foundations of Computational Mathematics (IF 3) Pub Date: 2023-07-12, DOI: 10.1007/s10208-023-09607-w
Philipp Grohs, Felix Voigtlaender

We study the computational complexity of (deterministic or randomized) algorithms based on point samples for approximating or integrating functions that can be well approximated by neural networks. Such algorithms (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning. One of the most important problems in this field concerns the question of whether it is possible to realize theoretically provable neural network approximation rates by such algorithms. We answer this question in the negative by proving hardness results for the problems of approximation and integration on a novel class of neural network approximation spaces. In particular, our results confirm a conjectured and empirically observed theory-to-practice gap in deep learning. We complement our hardness results by showing that error bounds of a comparable order of convergence are (at least theoretically) achievable.
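To convey the shape of the sampling complexity bounds at issue, the display below sketches one illustrative form of such a statement. The notation (the function class A^α, the sampling numbers e_m, the exponent γ*) is hypothetical shorthand for this summary, not the paper's exact definitions.

\[
  e_m \;:=\; \inf_{A_m}\; \sup_{f \in A^{\alpha}} \|\, f - A_m(f) \,\|_{L^\infty}
  \;\asymp\; m^{-\gamma^*}, \qquad \gamma^* < \alpha,
\]

where A^α loosely denotes functions approximable by neural networks with n weights to error O(n^{-α}), and A_m ranges over all deterministic or randomized algorithms that access f only through m point samples (for randomized algorithms, the error is taken in expectation). Read this way, the hardness results say that the best achievable sampling exponent γ* falls strictly short of the network approximation rate α; that shortfall is the theory-to-practice gap, while the complementary result shows that error of the matching order γ* is, at least theoretically, attainable.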




Updated: 2023-07-12