Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
Authors | Jin, Pengzhan 2,3; Lu, Lu 1; Tang, Yifa 2,3; Karniadakis, George Em 1
Journal | NEURAL NETWORKS
Publication Date | 2020-10-01
Volume | 130
Pages | 85-99
Keywords | Neural networks ; Generalization error ; Learnability ; Data distribution ; Cover complexity ; Neural network smoothness
ISSN | 0893-6080
DOI | 10.1016/j.neunet.2020.06.024
Abstract | The accuracy of deep learning, i.e., deep neural networks, can be characterized by dividing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are some satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound for the expected accuracy/error is derived by considering both the CC and neural network smoothness. Although most of the analysis is general and not specific to neural networks, we validate our theoretical assumptions and results numerically for neural networks on several data sets of images. The numerical results confirm that the expected error of trained networks, scaled by the square root of the number of classes, has a linear relationship with the CC. We observe a clear consistency between test loss and neural network smoothness during the training process. In addition, we demonstrate empirically that neural network smoothness decreases when the network size increases, whereas the smoothness is insensitive to the training dataset size. (C) 2020 Elsevier Ltd. All rights reserved.
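The abstract's smoothness measure is built on the modulus of continuity, omega(delta) = sup over |x - y| <= delta of |f(x) - f(y)|; the paper uses its inverse to quantify how smooth a trained network is. Below is a minimal, generic sketch of estimating this quantity by Monte Carlo sampling for a 1-D function; the function name, domain, and sampling scheme are illustrative assumptions, not the paper's exact construction (which applies to trained networks on image data).

```python
import numpy as np

def modulus_of_continuity(f, delta, n_samples=100_000, domain=(0.0, 1.0), seed=0):
    """Monte Carlo estimate of omega(delta) = sup_{|x-y|<=delta} |f(x) - f(y)|.

    Generic 1-D illustration only: sample points x uniformly in the domain,
    perturb each by at most delta (clipped to the domain), and take the
    largest observed change in f. The true supremum is approached from
    below as n_samples grows.
    """
    rng = np.random.default_rng(seed)
    lo, hi = domain
    x = rng.uniform(lo, hi, n_samples)
    # Perturb each sample by at most delta, staying inside the domain.
    y = np.clip(x + rng.uniform(-delta, delta, n_samples), lo, hi)
    return float(np.max(np.abs(f(x) - f(y))))

# For the Lipschitz function f(x) = 3x, omega(delta) = 3*delta exactly,
# so the estimate for delta = 0.1 should be close to (and never above) 0.3.
est = modulus_of_continuity(lambda x: 3.0 * x, delta=0.1)
```

A smaller modulus at a given delta corresponds to a smoother function, so the inverse quantity grows as the network becomes smoother, which is the direction of the paper's smoothness measure.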
Funding | DOE PhILMs project [DE-SC0019453] ; AFOSR [FA9550-17-1-0013] ; Major Project on New Generation of Artificial Intelligence from MOST of China [2018AAA0101002] ; National Natural Science Foundation of China [11771438] ; DARPA AIRA grant [HR00111990025]
WOS Research Areas | Computer Science ; Neurosciences & Neurology
Language | English
Publisher | PERGAMON-ELSEVIER SCIENCE LTD
WOS Record No. | WOS:000567813200009
Content Type | Journal Article
Source URL | [http://ir.amss.ac.cn/handle/2S8OKBNM/52160]
Collection | Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding Author | Karniadakis, George Em
Affiliations | 1. Brown Univ, Div Appl Math, Providence, RI 02912 USA ; 2. Chinese Acad Sci, Acad Math & Syst Sci, ICMSEC, LSEC, Beijing 100190, Peoples R China ; 3. Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
Recommended Citation (GB/T 7714) | Jin, Pengzhan, Lu, Lu, Tang, Yifa, et al. Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness[J]. NEURAL NETWORKS, 2020, 130: 85-99.
APA | Jin, Pengzhan, Lu, Lu, Tang, Yifa, & Karniadakis, George Em. (2020). Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness. NEURAL NETWORKS, 130, 85-99.
MLA | Jin, Pengzhan, et al. "Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness". NEURAL NETWORKS 130 (2020): 85-99.