Approximation with Random Shallow ReLU Networks with Applications to Model Reference Adaptive Control
CoRR (2024)
Abstract
Neural networks are regularly employed in adaptive control of nonlinear systems and related methods of reinforcement learning. A common architecture uses a neural network with a single hidden layer (i.e., a shallow network), in which the weights and biases are fixed in advance and only the output layer is trained. While classical results show that there exist neural networks of this type that can approximate arbitrary continuous functions over bounded regions, they are non-constructive, and the networks used in practice have no approximation guarantees. Thus, the approximation properties required for control with neural networks are assumed, rather than proved. In this paper, we aim to fill this gap by showing that, for sufficiently smooth functions, ReLU networks with randomly generated weights and biases achieve L_∞ error of O(m^{-1/2}) with high probability, where m is the number of neurons. It suffices to generate the weights uniformly over a sphere and the biases uniformly over an interval. We show how the result can be used to obtain approximations of the required accuracy in a model reference adaptive control application.
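
The construction described in the abstract (hidden weights drawn uniformly from a sphere, biases drawn uniformly from an interval, and only the output layer trained) can be illustrated with a short numerical sketch. The Python snippet below is an assumed, minimal example, not the paper's exact construction: the target function, bias interval, sample sizes, and the choice of least squares for the output layer are all illustrative.

```python
# Minimal sketch of a random shallow ReLU approximator:
# hidden weights uniform on the unit sphere, biases uniform on an interval,
# only the output (linear) layer is fit, here by least squares.
import numpy as np

rng = np.random.default_rng(0)

def random_relu_features(X, m, bias_range=(-1.0, 1.0)):
    """Build features relu(w_i^T x + b_i) with randomly generated w_i, b_i."""
    n = X.shape[1]
    W = rng.standard_normal((m, n))
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # uniform on the sphere
    b = rng.uniform(bias_range[0], bias_range[1], m)  # uniform on an interval
    return np.maximum(X @ W.T + b, 0.0), (W, b)

def fit_output_layer(X, y, m):
    """Train only the output layer; hidden weights and biases stay fixed."""
    Phi, params = random_relu_features(X, m)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return c, params

def predict(Xnew, c, params):
    W, b = params
    return np.maximum(Xnew @ W.T + b, 0.0) @ c

# Illustrative target: a smooth function on [-1, 1]^2. The sup-norm error on a
# test sample should shrink roughly like m^{-1/2} (up to constants) as m grows.
f = lambda X: np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
Xtrain = rng.uniform(-1, 1, (2000, 2))
Xtest = rng.uniform(-1, 1, (5000, 2))
for m in (50, 200, 800):
    c, params = fit_output_layer(Xtrain, f(Xtrain), m)
    err = np.max(np.abs(predict(Xtest, c, params) - f(Xtest)))
    print(f"m={m:4d}  sup error ~ {err:.4f}")
```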