Training Convolutional Neural Networks to Detect Waste in Train Carriages

Nathan Western,

Xianwen Kong,

Mustafa Suphi Erden

et al.

Published: Sept. 13, 2021

This research constitutes a systematic investigation of the effect of image view on Convolutional Neural Networks (CNNs) when trained to detect waste in train carriages. Additionally, it identifies a neural network architecture and training conditions suitable for use in an automated cleaning robot. Specifically, we investigate the relationship between the size of the CNN training dataset, whether its images are taken from a setting sympathetic to the application, and the effectiveness of the resulting networks. Three datasets were constructed specifically for this research: a large dataset of 58,300 studio images taken under a variety of conditions, a smaller dataset of 4,515 images of actual items on trains, and 7,290 images taken on trains used to test the CNNs. The test images were captured from the perspective of a hypothetical robot, which would provide a comparison of MobileNetV2, ShuffleNet, and SqueezeNet CNNs based on their suitability for implementation in such a system and the optimum training conditions to do so. Training with "robot-eye view" images resulted in an average increase in classification accuracy of 10.5%, with the largest increase being 26%, compared to training with the larger dataset of items in various poses. ShuffleNet was identified as the optimally performing network for waste detection, achieving 88.61% accuracy with the small dataset representative of the end use. MobileNetV2 was found to perform better with more images, even if they were less specific to the application of the network.
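
To make the comparison concrete, below is a minimal sketch of how the three architectures named in the abstract could be fine-tuned and scored on a held-out set of train-carriage images. It uses off-the-shelf torchvision models and assumes hypothetical dataset paths (TRAIN_DIR, TEST_DIR), class count, and hyperparameters; it is not the authors' implementation and the paper's actual training conditions may differ.

```python
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms
from torch.utils.data import DataLoader

# Hypothetical folder layout and class set; the paper's datasets are not public here.
TRAIN_DIR = "data/robot_eye_view/train"
TEST_DIR = "data/robot_eye_view/test"
NUM_CLASSES = 2  # e.g. "waste" vs. "no waste" (assumed, not stated in the abstract)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_model(name: str, num_classes: int) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    if name == "mobilenet_v2":
        m = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        m.classifier[1] = nn.Linear(m.last_channel, num_classes)
    elif name == "shufflenet_v2":
        m = models.shufflenet_v2_x1_0(weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "squeezenet":
        m = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
        m.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
        m.num_classes = num_classes
    else:
        raise ValueError(name)
    return m

def train_and_evaluate(name: str, epochs: int = 5) -> float:
    """Fine-tune one architecture and return top-1 accuracy on the test set."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    train_loader = DataLoader(datasets.ImageFolder(TRAIN_DIR, preprocess),
                              batch_size=32, shuffle=True)
    test_loader = DataLoader(datasets.ImageFolder(TEST_DIR, preprocess),
                             batch_size=32)

    model = build_model(name, NUM_CLASSES).to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimiser.zero_grad()
            loss_fn(model(images), labels).backward()
            optimiser.step()

    # Score on the held-out carriage images taken from the robot's viewpoint.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / total

for arch in ["mobilenet_v2", "shufflenet_v2", "squeezenet"]:
    print(arch, train_and_evaluate(arch))
```

Running the same loop once with the large studio dataset and once with the smaller "robot-eye view" dataset as TRAIN_DIR would reproduce the kind of dataset-versus-viewpoint comparison the abstract describes.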

Language: English

Citations: 0