Title: Ear density estimation from high resolution RGB imagery using deep learning technique
Authors: Simon Madec, Xiuliang Jin, Hao Lu, Benoit De Solan, Shouyang Liu, Florent Duyme, Emmanuelle Heritier, Frédéric Baret
Citation: Agricultural and Forest Meteorology, 2019; 264:225-234
Abstract: Wheat ear density estimation is an appealing trait for plant breeders. Current manual counting is tedious and inefficient. In this study we investigated the potential of convolutional neural networks (CNNs) to provide accurate ear density estimates from nadir high spatial resolution RGB images. Two different approaches were investigated: the Faster-RCNN state-of-the-art object detector and the TasselNet local count regression network. Both approaches performed very well (rRMSE ≈ 6%) when applied under the same conditions as those prevailing during calibration of the models. However, Faster-RCNN was more robust when applied to a dataset acquired at a later stage, in which the ears and background looked different because of the higher maturity of the plants. The optimal spatial resolution for Faster-RCNN was around 0.3 mm, allowing RGB images to be acquired from a UAV platform for high-throughput phenotyping of large experiments. Comparison of the estimated ear density with in-situ manual counting showed reasonable agreement considering the relatively small sampling area used by both methods. Faster-RCNN and in-situ counting had high and similar heritability (H² ≈ 85%), demonstrating that ear density derived from high resolution RGB imagery could replace the traditional counting method.
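The two evaluation metrics quoted in the abstract, rRMSE and broad-sense heritability (H²), can be made concrete with a short sketch. This is not code from the paper; it is a generic implementation assuming a standard definition of rRMSE (RMSE normalized by the mean observed count) and a one-way ANOVA estimate of H² from a hypothetical genotype × replicate matrix of ear-density counts.

```python
import numpy as np

def rrmse(pred, obs):
    """Relative RMSE: RMSE normalized by the mean of the observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()

def broad_sense_heritability(counts):
    """Broad-sense H2 from a (genotypes x replicates) matrix of counts.

    One-way ANOVA variance components (r = number of replicates):
      sigma2_g = (MS_genotype - MS_error) / r
      H2       = sigma2_g / (sigma2_g + MS_error / r)
    """
    counts = np.asarray(counts, float)
    g, r = counts.shape
    grand = counts.mean()
    geno_means = counts.mean(axis=1)
    # Between-genotype and residual mean squares.
    ms_g = r * np.sum((geno_means - grand) ** 2) / (g - 1)
    ms_e = np.sum((counts - geno_means[:, None]) ** 2) / (g * (r - 1))
    sigma2_g = max((ms_g - ms_e) / r, 0.0)  # clip negative estimates to 0
    return sigma2_g / (sigma2_g + ms_e / r)
```

For example, predicted counts of 110 and 90 against observed counts of 100 and 100 give rRMSE = 0.1 (10%); a genotype matrix whose between-genotype spread dwarfs the within-genotype noise yields H² close to 1, matching the high heritability reported for both Faster-RCNN and manual counts.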
Keywords: Wheat ear density; object detection; object counting; convolutional neural networks; phenotyping; broad-sense heritability
Rights: © 2018 Elsevier B.V. All rights reserved.
Appears in Collections: Computer Science publications
Files in This Item: There are no files associated with this item.