Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

2015 IEEE International Conference on Computer Vision (ICCV), 2015

Citations: 23909 | Views: 1914
Abstract
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on our PReLU networks (PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass human-level performance (5.1%, Russakovsky et al.) on this visual recognition challenge.
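The two contributions summarized above, the PReLU activation and the rectifier-aware initialization, can be sketched in a few lines. This is a minimal NumPy illustration under our own naming (`prelu`, `he_init` are not the paper's code); the PReLU formula f(y) = max(0, y) + a·min(0, y) and the variance rule std = sqrt(2 / ((1 + a²)·fan_in) ) follow the paper's derivation, which reduces to sqrt(2 / fan_in) when a = 0 (plain ReLU).

```python
import numpy as np

def prelu(y, a):
    """Parametric ReLU: identity for positive inputs, slope `a` for
    negative inputs. In the paper, `a` is a learnable parameter
    (per channel or shared), initialized at 0.25."""
    return np.where(y > 0, y, a * y)

def he_init(fan_in, fan_out, a=0.0, rng=None):
    """Initialization derived for rectifier nonlinearities:
    zero-mean Gaussian with std = sqrt(2 / ((1 + a^2) * fan_in)).
    With a = 0 this is the ReLU case; a > 0 accounts for PReLU's
    negative-side slope."""
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / ((1.0 + a * a) * fan_in))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example: a PReLU pass over a small batch with He-initialized weights.
W = he_init(256, 128, a=0.25)
x = np.random.randn(4, 256)
out = prelu(x @ W, a=0.25)
```

The 1/(1 + a²) factor keeps the forward-signal variance roughly constant across layers, which is what lets very deep rectified networks train from scratch without the vanishing/exploding behavior of earlier Gaussian schemes.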
Keywords
human-level performance, ILSVRC 2014 winner, ImageNet 2012 classification dataset, network architectures, rectifier nonlinearities, robust initialization method, overfitting risk, model fitting, PReLU, parametric rectified linear unit, rectifier neural networks, state-of-the-art neural networks, rectified activation units, ImageNet classification