
2020, 2(1): 12-27

Published Date: 2020-02-20    DOI: 10.1016/j.vrih.2019.12.002

A smart assistance system for cable assembly by combining wearable augmented reality with portable visual inspection

Abstract

Background
Assembly guided by paper documents remains the most widespread approach in aircraft cable assembly. The process is complicated and demands highly skilled assembly workers. Wearable Augmented Reality (AR) and portable visual inspection technologies can be exploited to improve the efficiency and quality of cable assembly.
Methods
In this study, we propose a smart assistance system for cable assembly that combines wearable AR with portable visual inspection. Specifically, a portable visual device based on binocular vision and deep learning is developed to rapidly detect and recognize the cable brackets installed on aircraft airframes. A Convolutional Neural Network (CNN) is then used to read the texts printed on cables from images acquired by the camera of the wearable AR device. An authoring tool for creating and managing the assembly process is developed to provide visual guidance of the cable assembly process on the wearable AR device. The system is applied to cable assembly on an aircraft bulkhead prototype.
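As a rough illustration of the kind of CNN-based recognition described above (a minimal sketch, not the authors' implementation), the following Python/PyTorch snippet classifies cropped bracket images by type. The 64x64 input size, the number of bracket classes, and the network layout are assumptions made for illustration only.

```python
# Minimal sketch (hypothetical, not the paper's network): a small CNN that
# classifies cropped bracket images by type, assuming 64x64 RGB crops and an
# assumed number of bracket categories.
import torch
import torch.nn as nn

N_BRACKET_TYPES = 8  # hypothetical number of bracket categories

class BracketClassifier(nn.Module):
    def __init__(self, num_classes: int = N_BRACKET_TYPES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # -> (N, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))      # logits per bracket type

if __name__ == "__main__":
    model = BracketClassifier()
    crops = torch.randn(4, 3, 64, 64)             # a batch of cropped bracket images
    logits = model(crops)
    print(logits.argmax(dim=1))                   # predicted bracket type per crop
```

In the system described in the paper, such a classifier would operate on bracket regions localized by the binocular-vision device; the text-reading step on cables would use a separate detection and recognition pipeline.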
Results
The results show that the system can recognize the number, types, and locations of brackets and correctly read the texts on aircraft cables. The authoring tool helps users without professional programming experience to establish a process plan, i.e., an AR-based assembly outline for cable assembly.
Conclusions
The system provides quick assembly guidance for aircraft cables using texts, images, and 3D models, and helps reduce the dependence on paper documents, labor intensity, and the error rate.

Keywords

Cable assembly; Visual inspection; Text reading; Wearable AR; Deep learning

Cite this article

Lianyu ZHENG, Xinyu LIU, Zewu AN, Shufei LI, Renjie ZHANG. A smart assistance system for cable assembly by combining wearable augmented reality with portable visual inspection. Virtual Reality & Intelligent Hardware, 2020, 2(1): 12-27. DOI: 10.1016/j.vrih.2019.12.002

