Abrams, Jesse F. ; Vashishtha, Anand ; Wong, Seth T. ; Nguyen, An ; Mohamed, Azlan ; Wieser, Sebastian ; Kuijper, Arjan ; Wilting, Andreas ; Mukhopadhyay, Anirban (2019)
Habitat-Net: Segmentation of habitat images using deep learning.
In: Ecological Informatics, 51
doi: 10.1016/j.ecoinf.2019.01.009
Article, Bibliography
Abstract
Understanding the environmental factors that influence forest health, as well as the occurrence and abundance of wildlife, is a central topic in forestry and ecology. However, the manual processing of field habitat data is time-consuming, and months are often needed to progress from data collection to data interpretation. To shorten the time needed to process the data, we propose Habitat-Net: a novel deep learning application based on Convolutional Neural Networks (CNNs) to segment habitat images of tropical rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of the input image. We worked on two different types of habitat datasets that are widely used in ecological studies to characterize forest conditions: canopy closure and understory vegetation. We trained the model with 800 canopy images and 700 understory images separately and then used 149 canopy and 172 understory images to test the performance of Habitat-Net. We compared the performance of Habitat-Net to that of a simple threshold-based method, manual processing by a second researcher, and a CNN approach called U-Net, upon which Habitat-Net is based. Habitat-Net, U-Net, and simple thresholding reduced total processing time to milliseconds per image, compared to 45 s per image for manual processing. However, the higher mean Dice coefficient of Habitat-Net (0.94 for canopy and 0.95 for understory) indicates that the accuracy of Habitat-Net is higher than that of both simple thresholding (0.64, 0.83) and U-Net (0.89, 0.94). Habitat-Net will be of great relevance for ecologists and foresters who need to monitor changes in forest structure. The automated workflow not only reduces processing time, it also standardizes the analytical pipeline and thus reduces the degree of uncertainty that would be introduced by manual processing of images by different people (either over time or between study sites).
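The Dice coefficient used above to compare automated segmentations against the manual reference masks is the standard overlap measure 2|A ∩ B| / (|A| + |B|). Below is a minimal sketch of how such a score can be computed for a predicted binary mask; the function and variable names are illustrative and not taken from the paper or its code.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks.

    Both arrays are expected to hold 0/1 (or boolean) values, e.g. a
    predicted canopy/understory mask and a manually traced reference mask.
    """
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    if total == 0:            # both masks empty: treat overlap as perfect
        return 1.0
    return 2.0 * intersection / total

# Illustrative use with tiny toy masks (not real habitat data):
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(f"Dice = {dice_coefficient(pred, ref):.2f}")  # 2*2 / (3+3) = 0.67
```

A score of 1.0 means the predicted and reference masks overlap perfectly; values near the reported 0.94–0.95 indicate close agreement with the manual segmentation.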
Type of entry: | Article |
---|---|
Published: | 2019 |
Author(s): | Abrams, Jesse F. ; Vashishtha, Anand ; Wong, Seth T. ; Nguyen, An ; Mohamed, Azlan ; Wieser, Sebastian ; Kuijper, Arjan ; Wilting, Andreas ; Mukhopadhyay, Anirban |
Type of record: | Bibliography |
Title: | Habitat-Net: Segmentation of habitat images using deep learning |
Language: | English |
Year of publication: | 2019 |
Journal or series title: | Ecological Informatics |
Volume: | 51 |
DOI: | 10.1016/j.ecoinf.2019.01.009 |
URL / URN: | https://doi.org/10.1016/j.ecoinf.2019.01.009 |
Keywords: | Image segmentation, Convolutional Neural Networks (CNN), Deep learning |
Department(s)/division(s): | 20 Department of Computer Science; 20 Department of Computer Science > Interactive Graphics Systems; 20 Department of Computer Science > Mathematical and Applied Visual Computing |
Date deposited: | 26 Jun 2019 11:43 |
Last modified: | 26 Jun 2019 11:43 |