Application of a Deep Learning Algorithm in a Bottle-Detection Robot
DOI: https://doi.org/10.32409/jikstik.23.4.3759
Keywords: Deep Learning, Detection Robot, CNN, Bottle, Artificial Neural Network

Abstract
In this study, a deep learning algorithm is applied to an autonomous robot that detects bottles. Developing robots capable of interacting with their environment depends on their ability to detect objects. The algorithm is used to identify and detect bottles under varying lighting conditions and camera angles, relying primarily on artificial neural network models such as Convolutional Neural Networks (CNNs). The study shows that the deep learning model raises bottle-detection accuracy to 95%, indicating that such models are well suited to contemporary robotic systems.
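The abstract does not include the model itself; as an illustrative sketch only, assuming a PyTorch-style CNN with hypothetical layer sizes and a binary bottle/non-bottle output, the kind of architecture described might look like:

```python
import torch
import torch.nn as nn

class BottleCNN(nn.Module):
    """Minimal CNN for binary bottle detection (hypothetical architecture,
    not the one used in the paper)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input frame
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # two classes: bottle / not bottle
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = BottleCNN()
logits = model(torch.zeros(1, 3, 64, 64))  # one 64x64 RGB camera frame
print(logits.shape)  # torch.Size([1, 2])
```

In a robot pipeline, frames from the onboard camera would be resized to the network's input size and the class with the highest logit taken as the detection decision; robustness to lighting and viewing angle, as the abstract emphasizes, comes from training on images captured under those varied conditions.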
References
M. Soori, B. Arezoo, and R. Dastres, “Artificial intelligence, machine learning and deep learning in advanced robotics, a review,” Jan. 01, 2023, KeAi Communications Co. doi: 10.1016/j.cogr.2023.04.001.
L. G. Divyanth et al., “Estimating depth from RGB images using deep learning for robotic applications in apple orchards,” Smart Agricultural Technology, vol. 6, Dec. 2023, doi: 10.1016/j.atech.2023.100345.
S. Zhou et al., “Neurosurgical robots in China: State of the art and future prospect,” Nov. 17, 2023, Elsevier Inc. doi: 10.1016/j.isci.2023.107983.
C. Zhuang et al., “Deep learning-based semantic segmentation of human features in bath scrubbing robots,” Biomimetic Intelligence and Robotics, vol. 4, no. 1, Mar. 2024, doi: 10.1016/j.birob.2024.100143.
M. L. Dezaki, S. Hatami, A. Zolfagharian, and M. Bodaghi, “A pneumatic conveyor robot for color detection and sorting,” Cognitive Robotics, vol. 2, pp. 60–72, Jan. 2022, doi: 10.1016/j.cogr.2022.03.001.
U. Aulia, I. Hasanuddin, M. Dirhamsyah, and N. Nasaruddin, “A new CNN-BASED object detection system for autonomous mobile robots based on real-world vehicle datasets,” Heliyon, vol. 10, no. 15, Aug. 2024, doi: 10.1016/j.heliyon.2024.e35247.
H. Sekkat, O. Moutik, B. El Kari, Y. Chaibi, T. A. Tchakoucht, and A. El Hilali Alaoui, “Beyond simulation: Unlocking the frontiers of humanoid robot capability and intelligence with Pepper’s open-source digital twin,” Heliyon, vol. 10, no. 14, Jul. 2024, doi: 10.1016/j.heliyon.2024.e34456.
T. Sekine et al., “Robotic e-skin for high performance stretchable acceleration sensor via combinations of novel soft and functional polymers,” Appl Mater Today, vol. 33, Aug. 2023, doi: 10.1016/j.apmt.2023.101877.
R. Fernandez-Fernandez, J. G. Victores, and C. Balaguer, “Deep Robot Sketching: An application of Deep Q-Learning Networks for human-like sketching,” Cogn Syst Res, vol. 81, pp. 57–63, Sep. 2023, doi: 10.1016/j.cogsys.2023.05.004.
J. Shanley et al., “Collaborative robotics to enable ultra-high-throughput IR-MALDESI,” SLAS Technol, p. 100163, Jul. 2024, doi: 10.1016/j.slast.2024.100163.
C. Qin, A. Song, H. Li, L. Zhu, X. Zhang, and J. Wang, “Overcoming the cognition-reality gap in robot-to-human handovers with anisotropic variable force guidance,” Comput Struct Biotechnol J, vol. 24, pp. 185–195, Dec. 2024, doi: 10.1016/j.csbj.2024.02.020.
J. Kim et al., “Macroscopic mapping of microscale fibers in freeform injection molded fiber-reinforced composites using X-ray scattering tensor tomography,” Compos B Eng, vol. 233, Mar. 2022, doi: 10.1016/j.compositesb.2022.109634.
E. Okafor, M. Oyedeji, and M. Alfarraj, “Deep reinforcement learning with light-weight vision model for sequential robotic object