1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
2. Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
3. School of Medical Information and Engineering, Ningxia Medical University, Yinchuan 750004, China
ZHOU Tao (1977—), male, Ph.D., professor, doctoral supervisor, council member of CSIG and CSS. His research interests include computer-aided diagnosis, medical image analysis and processing, and pattern recognition. E-mail: zhoutaonxmu@126.com
DU Yuhu (1999—), male, M.S. candidate. His research interests include intelligent image and graphics processing. E-mail: cy_dyh@163.com
ZHOU Tao,DU Yuhu,SHI Daozong,et al.Mandibular fracture detection with 3M-YOLOv5 network based on enhanced feature extraction capability[J].Optics and Precision Engineering,2023,31(21):3178-3191. DOI: 10.37188/OPE.20233121.3178.
In artificial-intelligence-assisted detection of fracture sites, fracture sites are usually accompanied by bleeding and other symptoms, CT images taken in different positions differ considerably, fracture sizes vary, and bleeding sites and surrounding tissue interfere with detection, leading to insufficient feature extraction and low detection accuracy. To address these problems, a 3M-YOLOv5 network is designed to detect mandibular fracture sites. First, a dense module is used in the feature extraction network, exploiting dense connections to improve the network's feature extraction capability, and a local-global attention module (lgaM) is used to extract global information from the CT images. Second, a lightweight multiscale dense block (lmdM) is designed to extract multiscale features of the fracture sites with fewer parameters. Third, a cross-dimension bidirectional feature fusion module (cdbfM) is designed in the feature enhancement network so that the height, width, and channel dimensions of the feature maps interact with each other, and trainable weights are introduced to balance the fusion importance of feature maps at different scales. Finally, ablation and comparison experiments are conducted on a self-built dataset to verify the effectiveness of the proposed network. At a confidence threshold of 0.5, 3M-YOLOv5 achieves an mAP of 99.17%, an F1 score of 99.06%, a recall of 98.81%, and a precision of 99.32%. The proposed network detects fracture sites in mandibular CT images better than the compared methods and can assist doctors in formulating a treatment plan.
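The reported metrics are internally consistent: the F1 score is the harmonic mean of precision and recall, which can be checked directly from the figures above.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported values at a confidence threshold of 0.5
precision, recall = 0.9932, 0.9881
f1 = f1_score(precision, recall)
print(round(f1 * 100, 2))  # 99.06, matching the reported F1 of 99.06%
```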
target detection; mandibular fracture; YOLOv5; cross-dimension attention; densely connected neural network
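The cdbfM described in the abstract introduces trainable weights to balance feature maps of different scales during fusion. The abstract does not give the exact formula, so the sketch below assumes one common scheme, fast normalized fusion (as popularized by BiFPN): each weight is clamped to be non-negative and normalized so the weights sum to one.

```python
def normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-length feature vectors with trainable weights that are
    ReLU-clamped and normalized to sum to 1 (fast normalized fusion)."""
    clamped = [max(w, 0.0) for w in weights]  # ReLU: keep weights non-negative
    total = sum(clamped) + eps                # eps avoids division by zero
    return [
        sum(w * f[i] for w, f in zip(clamped, features)) / total
        for i in range(len(features[0]))
    ]

# Two toy "feature maps" fused with equal weights: result is ~their mean.
fused = normalized_fusion([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])
```

In training, the weights would be learnable parameters updated by backpropagation; this sketch only illustrates the weighting arithmetic.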