Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/107549
Full metadata record
dc.contributor.author: Liao, Z.
dc.contributor.author: Carneiro, G.
dc.date.issued: 2016
dc.identifier.citation: Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision, 2016, pp. 1-8
dc.identifier.isbn: 9781509006410
dc.identifier.issn: 2472-6737
dc.identifier.uri: http://hdl.handle.net/2440/107549
dc.description.abstract: Deep feedforward neural networks with piecewise linear activations are currently producing state-of-the-art results on several public datasets (e.g., CIFAR-10, CIFAR-100, MNIST, and SVHN). The combination of deep learning models and piecewise linear activation functions allows for the estimation of exponentially complex functions with the use of a large number of subnetworks specialized in the classification of similar input examples. During the training process, these subnetworks avoid overfitting with an implicit regularization scheme based on the fact that they must share their parameters with other subnetworks. Using this framework, we have made an empirical observation that can further improve the performance of such models. We notice that these models assume a balanced initial distribution of data points with respect to the domain of the piecewise linear activation function. If that assumption is violated, then the piecewise linear activation units can degenerate into purely linear activation units, which can result in a significant reduction of their capacity to learn complex functions. Furthermore, as the number of model layers increases, this unbalanced initial distribution makes the model ill-conditioned. Therefore, we propose the introduction of batch normalisation units into deep feedforward neural networks with piecewise linear activations, which drives a more balanced use of these activation units, where each region of the activation function is trained with a relatively large proportion of training samples. This batch normalisation also promotes the pre-conditioning of very deep models. We show that introducing maxout and batch normalisation units into the network-in-network model produces classification results that are better than or comparable to the current state of the art on the CIFAR-10, CIFAR-100, MNIST, and SVHN datasets.
dc.description.statementofresponsibility: Zhibin Liao, Gustavo Carneiro
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE Winter Conference on Applications of Computer Vision
dc.rights: Copyright © 2016, IEEE
dc.source.uri: http://dx.doi.org/10.1109/wacv.2016.7477624
dc.subject: Training, data models, machine learning, feedforward neural networks, ear, image color analysis
dc.title: On the importance of normalisation layers in deep learning with piecewise linear activation units
dc.type: Conference paper
dc.contributor.conference: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV 2016) (7 Mar 2016 - 10 Mar 2016 : Lake Placid, NY)
dc.identifier.doi: 10.1109/WACV.2016.7477624
dc.relation.grant: http://purl.org/au-research/grants/arc/CE140100016
pubs.publication-status: Published
dc.identifier.orcid: Liao, Z. [0000-0001-9965-4511]
dc.identifier.orcid: Carneiro, G. [0000-0002-5571-6220]
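
The abstract above proposes placing batch normalisation units before piecewise linear (maxout) activation units so that every linear region of the activation is trained on a reasonable share of the inputs. Below is a minimal sketch of that idea, assuming PyTorch; the BNMaxoutConv module name, the 2-piece maxout, and the layer sizes are illustrative choices, not the authors' implementation.

import torch
import torch.nn as nn

class BNMaxoutConv(nn.Module):
    """Conv -> batch normalisation -> maxout over k linear pieces."""

    def __init__(self, in_ch, out_ch, k=2, kernel_size=3, padding=1):
        super().__init__()
        self.k = k
        self.out_ch = out_ch
        # Produce k candidate feature maps per output channel.
        self.conv = nn.Conv2d(in_ch, out_ch * k, kernel_size, padding=padding)
        # Normalise the pre-activations so the k pieces are used more evenly.
        self.bn = nn.BatchNorm2d(out_ch * k)

    def forward(self, x):
        z = self.bn(self.conv(x))
        n, _, h, w = z.shape
        # Taking the max over the k pieces is the piecewise linear activation.
        return z.view(n, self.out_ch, self.k, h, w).max(dim=2).values

if __name__ == "__main__":
    block = BNMaxoutConv(in_ch=3, out_ch=16, k=2)
    y = block(torch.randn(8, 3, 32, 32))  # a CIFAR-10 sized mini-batch
    print(y.shape)                        # torch.Size([8, 16, 32, 32])

Without the BatchNorm2d layer, strongly shifted pre-activations would tend to fall into a single linear piece of the maxout unit, which is the degeneration into a purely linear activation that the abstract describes.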
Appears in Collections: Aurora harvest 8
Computer Science publications

Files in This Item:
File: RA_hdl_107549.pdf
Description: Restricted Access
Size: 2.11 MB
Format: Adobe PDF

