
Study on diagnosis of sacrococcygeal intraoperative acquired pressure injury using ultrasound and convolutional neural networks



    Abstract:
    Objective To construct an artificial intelligence model for the rapid and efficient diagnosis of sacrococcygeal intraoperative acquired pressure injury (IAPI) by combining ultrasound images with a convolutional neural network (CNN).
    Methods A total of 3 182 sacrococcygeal ultrasound images were collected from 893 patients who underwent surgery in the supine position at the People's Hospital of Xinjiang Uygur Autonomous Region from December 2024 to June 2025. Two senior ultrasound experts interpreted the images independently, and the 798 IAPI images and 887 non-IAPI images on which their interpretations agreed were included in this study. The images of the two groups were randomly divided into training (n=1 178), testing (n=255), and validation (n=252) sets in a ratio of 7:1.5:1.5. Deep learning models for the automated diagnosis of IAPI on ultrasound images were built on the ResNet50, ResNet101, DenseNet121, and DenseNet161 network architectures, and their diagnostic performance was evaluated in terms of accuracy, sensitivity (recall), precision, and F1 score. The best-performing model was selected for validation, and its applicability was assessed. In addition, the key image regions used by the best model to discriminate IAPI on ultrasound images were analyzed.
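The 7:1.5:1.5 split described above can be sketched as two successive stratified splits: first hold out 30% of the images, then divide that portion in half. The function below is an illustrative reconstruction, not the study's published pipeline; the random seed and the use of scikit-learn are assumptions.

```python
# Sketch of a 7 : 1.5 : 1.5 train/test/validation split (assumed implementation;
# the study does not publish its splitting code or random seed).
from sklearn.model_selection import train_test_split

def split_dataset(images, labels, seed=42):
    # Hold out 30% of the data (test + validation), stratified by class
    # so IAPI / non-IAPI proportions are preserved in every subset.
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, labels, test_size=0.3, stratify=labels, random_state=seed)
    # Split the held-out 30% in half: 15% testing, 15% validation.
    x_test, x_val, y_test, y_val = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_test, y_test), (x_val, y_val)
```

With 1 685 images the exact subset sizes depend on rounding and stratification, which is consistent with the slightly uneven counts (1 178/255/252) reported in the study.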
    Results Among the four models, DenseNet161, with its dense connections, exhibited the best performance, achieving an overall accuracy of 0.937 on the independent testing set. Its precision, recall, and F1 score on the IAPI class were 0.957, 0.909, and 0.932, respectively. In a further threshold-based evaluation of overall performance, the area under the receiver operating characteristic curve (AUC) and the AUC of the precision-recall curve were 0.974 and 0.976, respectively, indicating good class discrimination and stable, accurate identification of positive samples. An in-depth analysis of the best model was conducted using four interpretability tools (Grad-CAM, Captum, LIME, and SHAP) together with several semantic-feature clustering analyses. The results showed that the model accurately captured the key IAPI-related regions in the ultrasound images. When the best model was used for semantic-feature clustering of the entire dataset, the best-performing embedding, UMAP, achieved a silhouette coefficient of 0.784 between positive and negative samples.
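The threshold-based metrics and cluster-separation measure reported above can be sketched with standard scikit-learn calls. This is an illustrative reconstruction under assumptions: the decision threshold of 0.5 is assumed, and average precision is used here as the estimate of the precision-recall AUC; the toy values in the test are not the study's data.

```python
# Sketch of the evaluation reported in Results (assumed implementation):
# accuracy, positive-class precision/recall/F1, ROC AUC, precision-recall AUC,
# and the silhouette coefficient of a 2-D embedding (e.g. from UMAP).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score,
                             silhouette_score)

def evaluate(y_true, y_score, threshold=0.5):
    # Binarize predicted probabilities at the (assumed) 0.5 threshold.
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),  # positive class = IAPI
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
        "roc_auc":   roc_auc_score(y_true, y_score),
        # Average precision is one common estimator of the PR-curve AUC.
        "pr_auc":    average_precision_score(y_true, y_score),
    }

def embedding_silhouette(embedding_2d, labels):
    # Silhouette coefficient in [-1, 1]; values near 1 mean the positive and
    # negative samples form well-separated clusters in the embedding.
    return silhouette_score(embedding_2d, labels)
```

A silhouette coefficient of 0.784, as reported for the UMAP embedding, therefore indicates clearly separated IAPI and non-IAPI clusters in the model's feature space.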
    Conclusion This study successfully established a CNN-based model for the automated, rapid diagnosis of sacrococcygeal IAPI from ultrasound images, providing a theoretical basis for the feasibility and broad prospects of applying deep learning to IAPI diagnosis and treatment.
