
Trends in the application of deep learning networks in medical image analysis: Evolution between 2012 and 2020

Open Access | Published: November 23, 2021 | DOI: https://doi.org/10.1016/j.ejrad.2021.110069

      Abstract

      Purpose

      To evaluate the general rules and future trajectories of deep learning (DL) networks in medical image analysis through bibliometric and hot spot analysis of original articles published between 2012 and 2020.

      Methods

      Original articles related to DL and medical imaging were retrieved from the PubMed database. For the analysis, data regarding radiological subspecialties; imaging techniques; DL networks; sample size; study purposes, setting, origins and design; statistical analysis; funding sources; authors; and first authors’ affiliation was manually extracted from each article. The Bibliographic Item Co-Occurrence Matrix Builder and VOSviewer were used to identify the research topics of the included articles and illustrate the future trajectories of studies.

      Results

The study included 2685 original articles. The number of publications on DL and medical imaging has increased substantially since 2017, accounting for 97.2% of all included articles. We evaluated the rules of application of 47 DL networks to eight radiological tasks on 11 human organ sites. Neuroradiology, thorax, and abdomen were frequent research subjects, while the thyroid was under-represented. Segmentation and classification tasks were the primary purposes. U-Net, ResNet, and VGG were the most frequently used convolutional neural network (CNN)-derived networks. Generative adversarial network (GAN)-derived networks were widely developed and applied in 2020, and transfer learning was highlighted in the COVID-19 studies. Brain-, prostate-, and diabetic retinopathy-related studies were mature research topics in the field. Breast- and lung-related studies were in a stage of rapid development.

      Conclusions

      This study evaluates the general rules and future trajectories of DL network application in medical image analyses and provides guidance for future studies.

      Abbreviations:

BICOMB (Bibliographic Item Co-Occurrence Matrix Builder), CNN (convolutional neural network), CT (computed tomography), DL (deep learning), ECG (electrocardiography), EEG (electroencephalography), gCLUTO (Graphical Clustering Toolkit), HE (haematoxylin-eosin staining), ICC (intraclass correlation coefficient), MRI (magnetic resonance imaging), OCT (optical coherence tomography), PET (positron emission tomography), WSI (whole slide imaging)

      1. Introduction

Medical imaging has been an auxiliary tool used in disease diagnosis for over 100 years [Margulis, Whitehouse lecture. Radiologic imaging: changing costs, greater benefits]. Knowledge obtained from medical images has significantly improved the efficiency of the radiological workflow with the assistance of computer-aided diagnostic techniques. Machine learning methods, particularly emerging deep learning (DL) neural networks, have outperformed traditional technologies and accelerated progress in this field [LeCun, Bengio, and Hinton, Deep learning]. Feature maps extracted by DL networks from medical images acquired using computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), mammography, ultrasound, and histopathology provide valuable information [Bach et al., Computed tomography screening and lung cancer outcomes; Negendank, Studies of human tumors by MRS: a review; Raichle, Positron emission tomography. Progress in brain imaging]. DL networks are state-of-the-art methods for medical image analysis tasks, such as image detection, segmentation, and classification [Havaei et al., Brain tumor segmentation with deep neural networks; Zhang et al., Medical image classification using synergic deep learning; Schmuelling et al., Deep learning-based automated detection of pulmonary embolism on CT pulmonary angiograms].
The development of new networks, the lack of standardised knowledge, and the interconnection between networks and tasks obscure the rules of network application in medical image analysis. Clinical applications of new DL networks are continuously reported, which further complicates matters. Zhao et al. integrated fully convolutional networks and conditional random fields for the segmentation of brain tumours [Zhao et al., A deep learning model integrating FCNNs and CRFs for brain tumor segmentation]. Other studies used long short-term memory networks and residual convolutional neural networks for the same task [Bao et al., 3D randomized connection network with graph-based label inference; Chang et al., Residual convolutional neural network for the determination of IDH status in low- and high-grade gliomas from MR imaging]. Convolutional neural networks (CNNs) are most frequently used in image characterisation [Biswas et al., State-of-the-art review on deep learning in medical imaging]. Emerging CNN-derived in-depth networks, such as ResNet (up to 152 layers), are used increasingly in image classification [Zhang et al., High-resolution CT image analysis based on 3D convolutional neural network can enhance the classification performance of radiologists in classifying pulmonary non-solid nodules]. Networks with shallow layers, such as AlexNet (8 layers) and VGG (19 layers), are also widely used for similar tasks [Litjens et al., A survey on deep learning in medical image analysis; Krizhevsky, Sutskever, and Hinton, ImageNet classification with deep convolutional neural networks; Yamanakkanavar, Choi, and Lee, MRI segmentation and classification of human brain using deep learning for diagnosis of Alzheimer's disease: a survey]. Furthermore, supervised learning, such as Inception V3, and unsupervised learning, such as deep belief networks and autoencoders, have repeatedly been utilised in the early diagnosis of Alzheimer's disease [Ding et al., A deep learning model to predict a diagnosis of Alzheimer disease by using F-18-FDG PET of the brain; Ortiz et al., Ensembles of deep learning architectures for the early diagnosis of the Alzheimer's disease; Kim and Lee, Identification of Alzheimer's disease and mild cognitive impairment using multimodal sparse hierarchical extreme learning machine]. Additionally, segmentation and prognosis tasks have increased in DL studies alongside classification tasks, and the use of generative adversarial networks (GANs) is now more frequently reported [Yi, Walia, and Babyn, Generative adversarial network in medical imaging: a review; Becker et al., Injecting and removing suspicious features in breast imaging with CycleGAN: a pilot study of automated adversarial attacks using neural networks on small images].
The lack of standardised knowledge regarding DL network utilisation in medical image analysis hinders the development of DL and continues to confuse researchers. The use of DL networks in medical image analysis has been reviewed [Litjens et al., A survey on deep learning in medical image analysis; Yi, Walia, and Babyn, Generative adversarial network in medical imaging: a review; Wang et al., Deep learning in medical ultrasound image analysis: a review; Karimi et al., Deep learning with noisy labels: exploring techniques and remedies in medical image analysis]. However, reviews systematising the current characteristics and future trajectories of DL development based on large-scale literature analyses are rare.
Bibliometrics is the statistical analysis of literature, used to review the evolution, developing trends, and hot spots of research fields [Hood and Wilson, The literature of bibliometrics, scientometrics, and informetrics; Guler, Waaijer, and Palmblad, Scientific workflows for bibliometrics]. The co-word biclustering analysis approach is highly beneficial [Cheng and Church, Biclustering of expression data; Bhattacharya and Basu, Mapping a research area at the micro level using co-word analysis]. Medical Subject Headings (MeSH) terms and keywords represent the main content of articles [Coletti and Bleich, Medical subject headings used to search the biomedical literature]. A co-word network of MeSH terms/keywords uses the frequency with which a group of terms/keywords appears in the same article to evaluate their correlation with the subject content, measured by the distance between the terms in the network. Co-word clustering analysis of high-frequency MeSH terms extracted from the articles identifies popular topics related to human diseases and the corresponding radiological methods and tasks, and co-word analysis of high-frequency keywords complements the hot spots of specific DL networks.
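The co-word principle above reduces to counting how often term pairs share an article. A minimal sketch in plain Python (the article data here are hypothetical, for illustration only):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(articles):
    """Count how often each pair of terms appears in the same article."""
    pairs = Counter()
    for terms in articles:
        # sort so each unordered pair is stored under one canonical key
        for a, b in combinations(sorted(set(terms)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical MeSH-term lists for three articles
articles = [
    ["Deep Learning", "Neuroimaging", "Brain Neoplasms"],
    ["Deep Learning", "Neuroimaging"],
    ["Deep Learning", "COVID-19"],
]
matrix = cooccurrence(articles)
print(matrix[("Deep Learning", "Neuroimaging")])  # → 2
```

Terms that frequently co-occur end up with high pair counts and are drawn close together in the resulting network.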
      We performed a systematic review of original articles on DL networks in medical image analyses published in PubMed. Recent developments, current characteristics, and future trajectories of DL networks in medical image analyses were analysed to summarise the existing knowledge of DL and provide suggestions for future studies.

      2. Materials and methods

      Institutional review board approval was not required for this study since no experiments on animals or human beings were conducted.

      2.1 Literature retrieval

Original articles published from the inception of PubMed until December 31, 2020 and meeting the following criterion were included: content related to ‘deep learning’ and ‘medical imaging’. The detailed search strategy was as follows: (deep learning [MeSH Terms] OR deep learning [Title/Abstract]) AND (diagnostic imaging [MeSH Terms] OR diagnostic imaging [Title/Abstract] OR medical imag*[Title/Abstract]) AND ("0000/00/00"[Date - Publication] : "2020/12/31"[Date - Publication]).
Original articles that clearly stated the objectives or hypotheses and provided specifically articulated Methods and Results sections were selected. Literature retrieval was performed on February 2, 2021.
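The search string above can be assembled programmatically before submitting it to PubMed's E-utilities. A sketch (the field tags [MeSH Terms], [Title/Abstract], and [Date - Publication] are standard PubMed syntax; the `tagged` helper is ours):

```python
def tagged(term, tag):
    """Attach a PubMed search-field tag to a term."""
    return f"{term}[{tag}]"

dl = "(" + " OR ".join([
    tagged("deep learning", "MeSH Terms"),
    tagged("deep learning", "Title/Abstract"),
]) + ")"
imaging = "(" + " OR ".join([
    tagged("diagnostic imaging", "MeSH Terms"),
    tagged("diagnostic imaging", "Title/Abstract"),
    tagged("medical imag*", "Title/Abstract"),
]) + ")"
dates = '("0000/00/00"[Date - Publication] : "2020/12/31"[Date - Publication])'

query = " AND ".join([dl, imaging, dates])
print(query)
```

The resulting string can then be passed, for example, to the ESearch endpoint of the NCBI E-utilities.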

      2.2 Information extraction

      The full text of the included articles was reviewed by four bibliometric experts from the School of Medical Informatics, China Medical University to extract the following information: (1) radiological subspecialties, (2) imaging techniques, (3) DL network names, (4) study purposes, (5) sample size of research data, (6) study setting, (7) study design, (8) statistical analysis, (9) funding sources, (10) author numbers, (11) first authors’ affiliations, and (12) study origins.
Radiological subspecialties were classified into neuroradiology, head and neck, thyroid, breast, thoracic, cardiac, abdominal, musculoskeletal, genitourinary, ophthalmology, dermatology, and miscellaneous (not conforming to any of the aforementioned categories). Imaging techniques included ultrasound, CT, MRI, colour fundus photography (CFP), electrocardiography (ECG), electroencephalography (EEG), optical coherence tomography (OCT), microscopy, dermoscopy, endoscopy, mammography, histopathological imaging, X-ray imaging, other, and mixed (more than one radiological technique). Network names appearing only once in the included articles were removed because of weak representation; thus, the 47 DL networks listed in Supplementary data 1 were summarised. The study purposes included image segmentation, pre-processing, monitoring, acquisition, identification, diagnosis, detection, and classification, based on consensus in the radiology field [Zhang et al., Radiology research in mainland China in the past 10 years: a survey of original articles published in Radiology and European Radiology]. Sample sizes were recorded based on the number of patient samples, images, patches, or videos, or as not reported. Study settings were divided into single- or multi-centre. Study designs were categorised as prospective or retrospective. Statistical analyses were classified as present or absent. Funding sources were categorised as public, private, both, or other (not reported). The author numbers were classified into <4, 4–7, and >7. The first authors’ affiliations were classified as radiology (including radiology, nuclear medicine, and other imaging-related specialties), medicine or related specialties (including internal medicine, paediatrics, psychiatry, neurology, and dermatology), surgery or related specialties (including surgery, obstetrics and gynaecology, orthopaedics, anaesthesiology, and pathology), or other (including basic science, laboratory, or other research institutes).
The Newcastle-Ottawa scale was used to assess the quality of each included article [Schiaffino et al., Upgrade rate of percutaneously diagnosed pure atypical ductal hyperplasia: systematic review and meta-analysis of 6458 lesions]. To measure the bias of information extraction among the bibliometric experts, 100 studies were randomly selected, and the information above was independently reviewed by two other investigators.

      2.3 MeSH terms/keywords extraction and co-word analysis

The Bibliographic Item Co-Occurrence Matrix Builder (BICOMB) [Cui et al., Development of a text mining system based on the co-occurrence of bibliographic items in literature] was applied to determine the frequency rankings of major MeSH terms, subheadings, authors, journals, countries, languages, and publication years from the included articles.
The matrix data extracted from BICOMB were input into the Graphical Clustering Toolkit (gCLUTO) [Karypis Lab, gCLUTO: Graphical Clustering Toolkit] to visualise data clustering among the high-frequency MeSH terms and the original articles. gCLUTO grouped a set of data items to maximise the similarity within clusters and minimise the similarity between clusters, and subsequently generated a matrix visualisation and a mountain visualisation from the clustering result to explain the characteristics of the imported matrix data. Cluster counts from 1 to 9 were tested in nine preliminary runs to obtain the optimal clustering of MeSH terms. In the gCLUTO mountain visualisation, the peak height is proportional to the similarity of items within a cluster, the peak volume is proportional to the number of terms within a cluster, and the distance between peaks indicates the similarity between clusters. The colours of the peaks represent the intra-cluster standard deviation, from red (low) to blue (high).
A strategic coordinate plot [Law, Courtial, and Whittaker, Policy and the mapping of scientific change: a co-word analysis of research into environmental acidification] was established based on the matrix data from BICOMB and the clustering results from gCLUTO, showing cluster maturity and the correlation between clusters. The centrality degree (correlation between clusters) was plotted against density (maturity). A high centrality degree indicates that a cluster is close to the centre of the research field, while a high density indicates mature cluster development and a strong association between the studies within the cluster.
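Under the usual co-word definitions, density is roughly the mean co-occurrence strength of links inside a cluster and centrality the mean strength of its links to terms in other clusters. A sketch under those assumptions (the co-occurrence counts and cluster assignments below are hypothetical):

```python
from itertools import combinations

def strategic_coordinates(cooc, clusters):
    """cooc: dict mapping frozenset({term_a, term_b}) -> co-occurrence count.
    clusters: dict mapping cluster name -> set of terms.
    Returns {cluster: (centrality, density)}."""
    coords = {}
    for name, terms in clusters.items():
        others = set().union(*(t for n, t in clusters.items() if n != name))
        # density: mean link strength among term pairs inside the cluster
        internal = [cooc.get(frozenset(p), 0)
                    for p in combinations(sorted(terms), 2)]
        # centrality: mean link strength to terms in all other clusters
        external = [cooc.get(frozenset((a, b)), 0)
                    for a in terms for b in others]
        density = sum(internal) / len(internal) if internal else 0.0
        centrality = sum(external) / len(external) if external else 0.0
        coords[name] = (centrality, density)
    return coords

cooc = {frozenset({"brain", "mri"}): 5,
        frozenset({"lung", "ct"}): 4,
        frozenset({"brain", "ct"}): 1}
clusters = {"neuro": {"brain", "mri"}, "thorax": {"lung", "ct"}}
coords = strategic_coordinates(cooc, clusters)
print(coords["neuro"])  # → (0.25, 5.0)
```

Plotting centrality on the x-axis against density on the y-axis then assigns each cluster to one of the four quadrants discussed in the Results.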
High-frequency keywords in the included literature were analysed using VOSviewer [van Eck and Waltman, Software survey: VOSviewer, a computer program for bibliometric mapping]. As natural-language vocabulary, keywords are not standardised; nevertheless, they were considered beneficial for a specific analysis of DL networks beyond what MeSH terms capture [Bornmann, Haunschild, and Hug, Visualizing the context of citations referencing papers published by Eugene Garfield: a new type of keyword co-occurrence analysis; Waltman, van Eck, and Noyons, A unified approach to mapping and clustering of bibliometric networks]. The high-frequency keywords were automatically classified into clusters by VOSviewer and displayed in bibliometric maps.

      2.4 Statistical analysis

Statistical analyses were conducted in R (www.r-project.org). Analysis of variance was used to examine the differences and trends in articles categorised by the 12 variables. The intraclass correlation coefficient (ICC) was used to assess the consistency of information extraction from the 100 studies between observers. P < 0.05 was considered statistically significant.

      3. Results

      Among the 3411 articles retrieved initially, 726 were excluded for the following reasons: (a) 527 were not original research articles, (b) 114 were unrelated to DL or medical imaging, (c) 33 were not published in English, and (d) 52 showed 2021 as the publication year. Thus, 2685 original articles were included. Information extracted from 100 randomly selected articles yielded an ICC of 0.97 between different investigators. The average Newcastle-Ottawa scale score was 4.68.
The publication of articles in this field has increased annually. The growth rate was fastest in 2016 (346.2%), and the number of publications peaked in 2020 (at 1006). The volume of publications is shown in Fig. 1, and the specific data on study components are presented in Supplementary data 2. Of these studies, 18.9% were neuroradiology-related, followed by thoracic (13.2%) and abdominal (8.6%) studies. In neuroradiology (n = 507), clinical studies focused mainly on segmentation tasks (n = 146), such as those for brain tumours, stroke lesions, and multiple sclerosis lesions, followed by classification (n = 116) and detection (n = 80) tasks. In thoracic studies (n = 354), classification tasks (n = 113), such as the classification of lung nodules, gene mutation status, and metastatic state, accounted for the largest proportion, followed by detection (n = 104) and segmentation (n = 54). In 2020, thoracic studies exceeded neuroradiology in literature volume for the first time, with 44.7% of original articles related to “COVID-19” or “coronavirus”.
Fig. 1 The statistics of annual publications based on radiological subspecialties.
Among the radiological tasks, segmentation (24.2%), classification (23.1%), and detection (20.5%) received the most interest, whereas monitoring and prognosis tasks were relatively rare (2.2%). Among the imaging modalities, MRI (24.4%), CT (22.0%), and mixed (9.0%) were the most studied. The general rules of application of the 47 DL networks to the eight radiological tasks on 11 human organs are shown in Fig. 2. CNN (n = 1626), including CNN-based and CNN-derived DL studies, was the most frequently used network type, with U-Net (n = 639) applied most frequently in image segmentation (n = 349), image acquisition tasks (n = 115), and neuroradiology (n = 119) studies. ResNet (n = 437) and VGG (n = 322) were grouped in the same subcluster and widely used in classification (n = 151 and 102, respectively) and detection (n = 106 and 72, respectively). ResNet was used more frequently in thoracic (n = 79), ophthalmology (n = 49), and abdominal (n = 48) studies, while VGG was used more frequently in thoracic (n = 45), ophthalmology (n = 40), and neuroradiology (n = 37) studies. GAN (n = 186) ranked fourth, accounting for 2.2% of the studies published in 2012–2019 and 3.9% of those published in 2012–2020, with 9.4% of the studies published in 2020 using a GAN-based model. GANs were most often applied in image acquisition tasks (n = 84) for image enhancement and image synthesis, and in neuroradiology (n = 32), thoracic (n = 23), and head and neck (n = 20) studies.
Fig. 2 Heatmap of the application of 47 deep learning networks to 11 human organs in eight radiological tasks. The x-axis represents the deep learning networks; the y-axis represents tasks and human organs. The colour depth of each element correlates with its frequency.
Overall, 1546 (57.6%) and 1131 (42.1%) studies were single- and multi-centre studies, respectively. The sample size statistics are presented in Fig. 3. The data type was declared in 2677 (99.7%) studies. In studies where the number of images or patches exceeded 10,000 (n = 505), the number is presented as 10,000 in Fig. 3 to show the distribution of the data sample sizes of all the studies more clearly. After the exclusion of six studies that did not mention the sample size, the median numbers of images and patients in the remaining studies were 1700 and 165, respectively. For the studies published in 2020 alone, the median numbers of images and patients included were 2182 and 217, respectively.
Fig. 3 Statistics of the sample size of the included studies. Overall, 505 studies used over 10,000 images or patches; they are presented as 10,000 to show the distribution of the data sample size of all the studies more clearly.
      Table 1 presents the details of the publications in this field annually from 2012 to 2020. The reports originated from 43 countries, including the US (28.5%), China (22.3%), and South Korea (7.8%). Additionally, 39.2% of studies were contributed by 4–7 authors. In 56.4% of studies, the first authors’ affiliations were computer science, laboratory, or other research institutes, with only 23.0% having affiliations related to radiology. Government funding was reported in 62.5% of studies.
Table 1 Details of publications of DL networks’ application in medical image analyses from 2012 to 2020. We present the top three ranked items of radiological tasks, techniques, and study origin involved.

Statistics of variables | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | Total
Radiological tasks
 Segmentation           |    1 |    3 |    2 |    3 |   15 |   30 |   89 |  266 |  245 |   654
 Classification         |    0 |    1 |    1 |    4 |   14 |   51 |   83 |  221 |  250 |   625
 Detection              |    0 |    3 |    0 |    0 |    9 |   33 |  104 |  185 |  220 |   554
Radiological techniques
 MRI                    |    0 |    6 |    2 |    4 |   12 |   25 |  122 |  269 |  216 |   656
 CT                     |    0 |    0 |    0 |    2 |    4 |   36 |  103 |  189 |  256 |   590
 Mixed                  |    0 |    1 |    4 |    4 |    3 |    9 |   33 |   86 |  101 |   241
Sample size (images)
 ≤2500                  |    1 |    7 |    2 |    6 |   21 |   63 |  230 |  383 |  346 |  1059
 2501–9999              |    0 |    0 |    1 |    4 |    9 |   24 |   37 |  123 |  151 |   349
 >9999                  |    0 |    0 |    1 |    0 |    4 |   22 |   81 |  181 |  166 |   455
Sample size (patients)
 ≤100                   |    0 |    1 |    0 |    0 |    4 |   12 |   49 |  136 |   97 |   299
 101–499                |    0 |    1 |    2 |    1 |    6 |   13 |   31 |   89 |  130 |   273
 >499                   |    0 |    0 |    1 |    2 |    0 |   11 |   22 |   67 |   92 |   195
First author
 Other                  |    1 |    5 |    4 |    9 |   26 |   85 |  159 |  606 |  620 |  1515
 Radiology              |    0 |    3 |    2 |    2 |   10 |   26 |   73 |  241 |  259 |   616
 Surgery-related        |    0 |    0 |    0 |    0 |    1 |    2 |  149 |   72 |   72 |   296
 Medicine-related       |    0 |    1 |    1 |    2 |    8 |   37 |   73 |   81 |   55 |   258
Study origin
 The United States      |    0 |    4 |    5 |    4 |   13 |   45 |  145 |  318 |  238 |   772
 China                  |    0 |    0 |    1 |    3 |    7 |   29 |   82 |  204 |  272 |   598
 South Korea            |    0 |    0 |    0 |    0 |    3 |   10 |   41 |   66 |   89 |   209
Funding sources
 Government             |    1 |    7 |    7 |   10 |   37 |   97 |  266 |  630 |  624 |  1679
 Other                  |    0 |    2 |    0 |    3 |    7 |   29 |   96 |  237 |  258 |   632
 Both                   |    0 |    0 |    0 |    0 |    0 |   15 |   55 |   80 |   84 |   234
 Private                |    0 |    0 |    0 |    0 |    1 |    9 |   37 |   53 |   40 |   140
An optimised threshold for the BICOMB high-frequency MeSH terms was set at 36 in this study. As shown in Fig. 4, 37 MeSH terms were identified and divided into four clusters. The corresponding strategic coordinates of the MeSH term clusters are shown in Fig. 5. Cluster 0, in the second quadrant, mainly denoted research topics involving computer-aided radiographic image interpretation of COVID-19 pneumonia and solitary pulmonary nodules using CT images. The results indicated that subjects related to pulmonary nodule analysis are at a well-developed stage, consistent with the fact that this topic has long been discussed alongside the application of deep learning to medical images. As an emerging subject, the study of COVID-19 developed rapidly during 2020. However, it is not currently a core topic in this field, since current studies mainly focus on image-based auxiliary diagnosis of COVID-19, indicating future potential for image-based investigation of COVID-19 prognosis. Cluster 1, also in the second quadrant, denoted the application of computer-aided diagnosis for breast neoplasms and melanoma. Cluster 2, in the first quadrant, denoted neural networks applied in image enhancement, computer-aided radiotherapy planning, and image processing for brain neoplasm and Alzheimer’s disease studies using PET. The results indicated that the subjects in this cluster are core topics in this field and at a well-developed stage. This finding is concordant with the statistic that neuroradiology accounted for the most studies among all radiological subspecialties, indicating that the majority of the included clinical studies have been conducted in this field.
Cluster 3, in the first quadrant, denoted the development of artificial intelligence and machine learning algorithms, including computer-aided image interpretation and automated pattern recognition, for diabetic retinopathy studies using OCT and microscopy. This indicates that OCT and microscopy are the specific radiological modalities for diabetic retinopathy, and that DL and machine learning analysis of OCT- and microscopy-derived images are widely recognised, mature strategies in this field. In the mountain visualisation, Clusters 0 and 1 showed high deviations, concordant with the fact that the clinical concerns within Cluster 0, COVID-19 and solitary pulmonary nodules, vary significantly: for COVID-19, classification is the primary purpose, while accurate detection is the primary purpose of studies on solitary pulmonary nodules. Similar results were found for breast neoplasms and melanoma in Cluster 1. In contrast, Clusters 2 and 3 showed low deviations, as the radiological subspecialty in Cluster 2 is concentrated on neuroradiology and the clinical concern in Cluster 3 is focused on diabetic retinopathy. The MeSH terms occurring at high frequency are listed in Supplementary data 3.
Fig. 4 The MeSH terms co-occurrence matrix, cluster mountain map, and 37 high-frequency MeSH terms and subheadings are visualised to illustrate the correlation of studies in this field. Both the x-axis and y-axis in the co-occurrence matrix represent the 37 high-frequency main MeSH terms (the terms on the x-axis from left to right are the same as the terms on the y-axis from top to bottom); a red block represents the co-occurrence of the two corresponding terms.
Fig. 5 Strategic coordinates of the original articles published in the field. Clusters 2 and 3 in the first quadrant indicate that the intra-cluster correlation of the topics within them is high, denoting well-developed topics; their inter-cluster correlations with other clusters are also high, placing them at the core of the field. Clusters 0 and 1 in the second quadrant show that although the intra-cluster correlation of their topics is good, the topics are not closely related to other clusters.
The co-occurrence clustering of keywords with a frequency > 11, generated by VOSviewer, is shown in Fig. 6. After excluding repeated items, 38 high-frequency keywords were divided into six clusters. VOS-Cluster 1 denoted the development of GAN and U-Net for data augmentation and segmentation tasks in prostate cancer. VOS-Cluster 2 denoted artificial intelligence, radiology, and radiomics studies for diagnosis, particularly using ultrasound. VOS-Cluster 3 denoted deep CNNs developed in computer vision and digital pathology, especially in OCT image analysis and breast cancer studies. VOS-Cluster 4 denoted the development of CNNs in lung cancer studies and radiotherapy using CT and MRI. VOS-Cluster 5 denoted neural network development utilised in classification, detection, and segmentation tasks. VOS-Cluster 6 denoted the development of transfer learning in medical image analysis of COVID-19 pneumonia.
Fig. 6 The 38 high-frequency keywords are visualised as six VOS-clusters. Each point represents a keyword; label and circle sizes indicate importance. Six different colours are assigned to the different clusters.
To evaluate the impact of the studies published in 2020 on the hot spots and trends in this field, an ad-hoc experiment compared the results for the 2012–2019 data with those for the 2012–2020 data. The pre-2020 data were obtained on January 13, 2020, with retrieval dates from the inception of PubMed to December 31, 2019; the retrieval strategy and screening conditions were the same as described above. The results of this analysis are presented in Supplementary data 4. In addition, the raw data of the 2685 articles included in the study, the crucial information manually extracted from each article, and the source code for the statistical analysis in R have been shared via the following GitHub repository: https://github.com/DaisyW666/Deep_learning_on_Medical_image_analysis.

      4. Discussion

      This study evaluated the recent developments, current characteristics, future trends, and research hot spots of DL networks in medical image analysis through a systematic analysis of 2685 original articles retrieved from PubMed. We comprehensively summarised the general rules of the application of DL networks to medical image analysis, covering 8 radiological tasks, 11 human organ sites, 14 imaging modalities, and 47 DL networks. The research hot spots in this field were illustrated by co-word clustering analysis of MeSH terms and keywords.
      Studies related to neuroradiology, the thorax, and the abdomen have been the most popular subspecialties (40.7%), whereas thyroid- and dermatology-related studies are under-represented (3.9%). In 2020, thorax-related studies outnumbered all others owing to the COVID-19 pandemic (18.7% of 2020 studies). Based on MeSH term clustering, brain-, breast-, and lung-related diseases were research hot spots. In addition, the strategic coordinates indicated that the clusters of breast-, lung-, and dermatology-related studies were located in the second quadrant: mature, with intense intra-cluster correlation. In the pre-2020 data, however, the breast- and lung-related clusters were in the third quadrant, indicating that these two topics developed substantially in 2020, although their correlation with other research topics still needs to be strengthened. In contrast, brain-, prostate-, and diabetic retinopathy-related studies were in the first quadrant, indicating well-developed research topics with intense intra-cluster and inter-cluster correlations. This may be related to the more readily available public datasets for these topics, for instance, DRIVE and the various challenges hosted by MICCAI and ISBI. Studies related to the thyroid and to multiple human organs, as well as some imaging modalities such as histopathological images and dermoscopy, attracted less interest, suggesting that these research directions hold potential for further discoveries.
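The quadrant reading above follows the usual strategic-diagram convention from co-word analysis: each cluster is placed by its centrality (strength of links to other clusters) and density (strength of links within the cluster), typically split at the median values over all clusters. A minimal sketch of this assignment (the `quadrant` function and the example cluster values are hypothetical, not taken from the study's data):

```python
def quadrant(centrality, density, c_med, d_med):
    """Assign a cluster to a strategic-diagram quadrant from its
    centrality (inter-cluster links) and density (intra-cluster links),
    measured against the median values over all clusters."""
    if centrality >= c_med and density >= d_med:
        return 1  # well-developed core topics: strong internal and external ties
    if centrality < c_med and density >= d_med:
        return 2  # mature but relatively isolated topics
    if centrality < c_med and density < d_med:
        return 3  # emerging or marginal topics
    return 4  # transversal topics, internally under-developed

# hypothetical (centrality, density) values for three clusters
clusters = {"brain": (0.9, 0.8), "breast": (0.3, 0.7), "lung": (0.2, 0.1)}
c_med, d_med = 0.5, 0.5
print({k: quadrant(c, d, c_med, d_med) for k, (c, d) in clusters.items()})
# → {'brain': 1, 'breast': 2, 'lung': 3}
```

Under this convention, a cluster moving from the third to the second quadrant between the pre-2020 and full datasets gained internal cohesion while its links to other topics remained comparatively weak.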
      Compared with other existing reviews in the field [Wang et al., 2021; Shen et al., 2017; Cai et al., 2020], we evaluated each of the current DL networks and their application to various human organs and radiological tasks. Excluding generic CNNs, the top nine networks accounted for 94.1% of the articles. As shown in Fig. 2, the most popular network for a particular task and organ can be inferred from the colour shades of the corresponding colour blocks. Clinical classification-related tasks, such as classification, identification, auxiliary diagnosis, and detection, accounted for 59.9% of all studies, whereas image pre-processing and monitoring tasks accounted for 6.3%. Our results indicated that certain DL networks and clinical tasks have garnered widespread attention and have developed markedly faster than other networks and clinical tasks. In neuroradiology studies, segmentation of regions of interest, such as stroke lesions and brain tumours, is the most widely researched radiological task. However, our findings indicated that the detection of brain cancer, Alzheimer's disease, multiple sclerosis, and other neuroradiological diseases grew faster than other tasks in neuroradiology-related studies. Segmentation of brain tissue or lesions has long been a common task in neuroradiology [Choi et al., 2020], whereas the emerging use of DL networks for lesion detection is the current research trend. Based on these results, it can be inferred that DL networks hold the potential to assist in identifying more neurological diseases and to promote artificial intelligence-based clinical decision-making.
      Additionally, in 2020, 44.7% of thorax-related studies were COVID-19-related. This finding indicates that the COVID-19 outbreak accelerated the development of artificial intelligence-based diagnosis for the lungs, and that DL networks can provide valuable information for the auxiliary diagnosis of emerging diseases in a timely manner. The increasing number of DL network-based studies on symptom classification and risk stratification of COVID-19 in 2020 [Javor et al., 2020; Jin et al., 2020] suggests that DL networks may achieve breakthroughs in predicting clinical outcomes and acquiring explainable knowledge in future COVID-19 studies.
      Our study also indicated that some studies developed new networks based on existing widely used networks such as AlexNet, U-Net, ResNet, VGG, and GoogLeNet [Zhang et al., 2021; Zhang et al., 2019; Gong et al., 2021]. Since GAN was proposed, derivatives such as DCGAN [Radford et al., 2015], WGAN [Arjovsky et al., 2017], CycleGAN [Becker et al., 2019], BEGAN [Berthelot et al., 2017], and BigBiGAN [Wang et al., 2021] have been optimised for domain adaptation, data augmentation, and image-to-image translation. These findings indicate that optimising and improving the original DL networks is effective for DL application in medical image analysis; it may therefore be worthwhile for future studies to restructure existing networks to improve their performance on specific clinical tasks. In addition, our results showed that GAN ranked among the high-frequency keywords in the 2012–2020 studies but not in the pre-2020 data, indicating that GAN-based networks were widely developed and applied in 2020 [Loey et al., 2020]. Presently, GANs are mostly used for image enhancement tasks in the neuroradiology, thorax, and head and neck subspecialties. In 2020, however, studies also applied GAN-based models to lesion detection and semantic feature extraction in the lungs [Song et al., 2020; Schwyzer et al., 2018]. Thus, with new GAN-based algorithms being proposed, image generation and enhancement, as well as GAN-based image feature extraction, are likely to become new trends in clinically oriented studies.
      MRI, CT, and mixed (more than one) radiological techniques were the most frequently used imaging methods (55.3%), while ECG, EEG, and endoscopy were seldom used (2.6%). For rarely used imaging modalities, however, small sample sizes and variation in tumour presentation may limit DL network training [Tajbakhsh et al., 2016]. Ultrasound was the third most popular imaging modality before 2020. However, the numbers of studies using multiple radiological techniques and X-rays exceeded those using ultrasound after 2020, suggesting that recent DL studies are increasingly designed to accommodate various imaging modes and multiple radiological techniques.
      This study provides suggestions for future studies in this field. Although a survey of research in this field exists [Litjens et al., 2017], our study complements it with a statistical analysis of the 2610 articles published since 2017, thus providing crucial information for the development of research in this field. In addition, before 2020, 10% of the studies were lung-related, and low intra-cluster and inter-cluster correlations were observed in the lung-related cluster. With the inclusion of the 2020 studies, however, COVID-19 led to an explosion of lung-related research, and transfer learning using new networks matured with the emergence of COVID-19 studies, as shown in VOS-Cluster 6. In view of the rapid development of the COVID-19 hot spot in 2020, the incorporation of transfer learning and emerging DL networks holds promise for future DL studies on topics where DL remains under-developed, such as thyroid-related studies and studies based on ECG, EEG, and endoscopy. Moreover, we found that clinical studies on auxiliary diagnosis accounted for a sizeable proportion, and studies on clinical monitoring and prognosis have recently attracted increasing attention. However, 56.4% of first authors had a non-clinical background; appropriate clinical training of researchers in this field should therefore be considered. Notably, among the studies published between 2012 and 2019, the number of first authors who were surgeons was 5.8% higher than the number who were radiologists, whereas across 2012–2020 the number of radiologists was 108.1% higher than the number of surgeons. Given the surge in COVID-19 studies in 2020, it can be concluded that radiologists' interest in this research field has grown faster than that of other clinicians.
      This study had certain limitations. Since our primary concern was medical images, we only included literature indexed in PubMed; future studies should include other databases, such as Embase, and add an analysis of references to verify the robustness of our results. In addition, publications that became available online after February 2, 2021, were not included. Moreover, all articles in this study were treated as equally important, although some arguably should have been weighted more heavily; future studies should include an optimal weight-distribution rule. Lastly, we used only the countries of origin and affiliations of the first authors, not those of the other authors, for the statistical analyses, which should be addressed in future studies.
      In conclusion, this comprehensive summary of the general rules of DL networks applied to medical image analysis and the recent developments, current characteristics, and future trends in this field will provide valuable guidance for future DL studies in medical image analysis.

       CRediT authorship contribution statement

      Lu Wang: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration. Hairui Wang: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration. Yingna Huang: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration. Baihui Yan: Conceptualization, Investigation, Writing – original draft, Writing – review & editing. Zhihui Chang: Conceptualization, Investigation, Writing – original draft, Writing – review & editing. Zhaoyu Liu: Conceptualization, Investigation, Writing – original draft, Writing – review & editing. Mingfang Zhao: Conceptualization, Investigation, Writing – original draft, Writing – review & editing. Lei Cui: Conceptualization, Resources, Writing – original draft, Writing – review & editing. Jiangdian Song: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition. Fan Li: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition.

      Declaration of Competing Interest

      The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

      Acknowledgements

      This study has received funding from the National Key Research and Development Program of China (2016YFC1303800) and the National Natural Science Foundation of China (82001904).

      Appendix A. Supplementary material

      The following are the Supplementary data to this article:

      References

        • Margulis A.R. Whitehouse lecture. Radiologic imaging: changing costs, greater benefits. AJR Am. J. Roentgenol. 1981; 136: 657–665.
        • LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. 2015; 521: 436–444.
        • Bach P.B., Jett J.R., Pastorino U., Tockman M.S., Swensen S.J., Begg C.B. Computed tomography screening and lung cancer outcomes. JAMA. 2007; 297: 953–961.
        • Negendank W. Studies of human tumors by MRS: a review. NMR Biomed. 1992; 5: 303–324.
        • Raichle M.E. Positron emission tomography. Progress in brain imaging. Nature. 1985; 317: 574–575.
        • Havaei M., Davy A., Warde-Farley D., Biard A., Courville A., Bengio Y., Pal C., Jodoin P.-M., Larochelle H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017; 35: 18–31.
        • Zhang J., Xie Y., Wu Q., Xia Y. Medical image classification using synergic deep learning. Med. Image Anal. 2019; 54: 10–19.
        • Schmuelling L., Franzeck F.C., Nickel C.H., Mansella G., Bingisser R., Schmidt N., Stieltjes B., Bremerich J., Sauter A.W., Weikert T., Sommer G. Deep learning-based automated detection of pulmonary embolism on CT pulmonary angiograms: no significant effects on report communication times and patient turnaround in the emergency department nine months after technical implementation. Eur. J. Radiol. 2021; 141: 109816.
        • Zhao X., Wu Y., Song G., Li Z., Zhang Y., Fan Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med. Image Anal. 2018; 43: 98–111.
        • Bao S., Wang P., Mok T.C.W., Chung A.C.S. 3D randomized connection network with graph-based label inference. IEEE Trans. Image Process. 2018; 27: 3883–3892.
        • Chang K., Bai H.X., Zhou H., Su C., Bi W.L., Agbodza E., Kavouridis V.K., Senders J.T., Boaro A., Beers A., Zhang B., Capellini A., Liao W., Shen Q., Li X., Xiao B., Cryan J., Ramkissoon S., Ramkissoon L., Ligon K., Wen P.Y., Bindra R.S., Woo J., Arnaout O., Gerstner E.R., Zhang P.J., Rosen B.R., Yang L., Huang R.Y., Kalpathy-Cramer J. Residual convolutional neural network for the determination of IDH status in low- and high-grade gliomas from MR imaging. Clin. Cancer Res. 2018; 24: 1073–1081.
        • Biswas M., Kuppili V., Saba L., Edla D.R., Suri H.S., Cuadrado-Godia E., Laird J.R., Marinhoe R.T., Sanches J.M., Nicolaides A., Suri J.S. State-of-the-art review on deep learning in medical imaging. Front. Biosci. (Landmark Ed.). 2019; 24: 392–426.
        • Zhang T., Wang Y., Sun Y., Yuan M., Zhong Y., Li H., Yu T., Wang J. High-resolution CT image analysis based on 3D convolutional neural network can enhance the classification performance of radiologists in classifying pulmonary non-solid nodules. Eur. J. Radiol. 2021; 141: 109810.
        • Litjens G., Kooi T., Bejnordi B.E., Setio A.A.A., Ciompi F., Ghafoorian M., van der Laak J.A.W.M., van Ginneken B., Sánchez C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017; 42: 60–88.
        • Krizhevsky A., Sutskever I., Hinton G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM. 2017; 60: 84–90.
        • Yamanakkanavar N., Choi J.Y., Lee B. MRI segmentation and classification of human brain using deep learning for diagnosis of Alzheimer's disease: a survey. Sensors (Basel). 2020; 20: 3243. https://doi.org/10.3390/s20113243.
        • Ding Y., Sohn J.H., Kawczynski M.G., Trivedi H., Harnish R., Jenkins N.W., Lituiev D., Copeland T.P., Aboian M.S., Aparici C.M., Behr S.C., Flavell R.R., Huang S.Y., Zalocusky K.A., Nardo L., Seo Y., Hawkins R.A., Pampaloni M.H., Hadley D., Franc B.L. A deep learning model to predict a diagnosis of Alzheimer disease by using 18F-FDG PET of the brain. Radiology. 2019; 290: 456–464.
        • Ortiz A., Munilla J., Górriz J.M., Ramírez J. Ensembles of deep learning architectures for the early diagnosis of the Alzheimer's disease. Int. J. Neural Syst. 2016; 26: 1650025. https://doi.org/10.1142/S0129065716500258.
        • Kim J., Lee B. Identification of Alzheimer's disease and mild cognitive impairment using multimodal sparse hierarchical extreme learning machine. Hum. Brain Mapp. 2018; 39: 3728–3741.
        • Yi X., Walia E., Babyn P. Generative adversarial network in medical imaging: a review. Med. Image Anal. 2019; 58: 101552. https://doi.org/10.1016/j.media.2019.101552.
        • Becker A.S., Jendele L., Skopek O., Berger N., Ghafoor S., Marcon M., Konukoglu E. Injecting and removing suspicious features in breast imaging with CycleGAN: a pilot study of automated adversarial attacks using neural networks on small images. Eur. J. Radiol. 2019; 120: 108649.
        • Wang Y., Ge X., Ma H., Qi S., Zhang G., Yao Y. Deep learning in medical ultrasound image analysis: a review. IEEE Access. 2021; 9: 54310–54324.
        • Karimi D., Dou H., Warfield S.K., Gholipour A. Deep learning with noisy labels: exploring techniques and remedies in medical image analysis. Med. Image Anal. 2020; 65: 101759.
        • Hood W.W., Wilson C.S. The literature of bibliometrics, scientometrics, and informetrics. Scientometrics. 2001; 52: 291–314.
        • Guler A.T., Waaijer C.J.F., Palmblad M. Scientific workflows for bibliometrics. Scientometrics. 2016; 107: 385–398.
        • Cheng Y., Church G.M. Biclustering of expression data. Proc. Int. Conf. Intell. Syst. Mol. Biol. 2000; 8: 93–103.
        • Bhattacharya S., Basu P.K. Mapping a research area at the micro level using co-word analysis. Scientometrics. 1998; 43: 359–372.
        • Coletti M.H., Bleich H.L. Medical subject headings used to search the biomedical literature. J. Am. Med. Inform. Assoc. 2001; 8: 317–323.
        • Zhang L.J., Wang Y.F., Yang Z.L., Schoepf U.J., Xu J., Lu G.M., Li E. Radiology research in mainland China in the past 10 years: a survey of original articles published in Radiology and European Radiology. Eur. Radiol. 2017; 27: 4379–4382.
        • Schiaffino S., Calabrese M., Melani E.F., Trimboli R.M., Cozzi A., Carbonaro L.A., Di Leo G., Sardanelli F. Upgrade rate of percutaneously diagnosed pure atypical ductal hyperplasia: systematic review and meta-analysis of 6458 lesions. Radiology. 2020; 294: 76–86.
        • Cui L.W., Yan L., Zhang H., Hou Y.F., Huang Y.N., et al. Development of a text mining system based on the co-occurrence of bibliographic items in literature. New Technology of Library and Information Service. 2008; 70–75.
        • Karypis Lab. gCLUTO – Graphical Clustering Toolkit.
        • Law J.B., Courtial J.-P., Whittaker J. Policy and the mapping of scientific change: a co-word analysis of research into environmental acidification. Scientometrics. 1988; 14: 251–264.
        • van Eck N.J., Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics. 2009; 84: 523–538.
        • Bornmann L., Haunschild R., Hug S.E. Visualizing the context of citations referencing papers published by Eugene Garfield: a new type of keyword co-occurrence analysis. Scientometrics. 2018; 114: 427–437.
        • Waltman L., van Eck N.J., Noyons E.C.M. A unified approach to mapping and clustering of bibliometric networks. J. Informetr. 2010; 4: 629–635.
        • Shen D., Wu G., Suk H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017; 19: 221–248.
        • Cai L., Gao J., Zhao D. A review of the application of deep learning in medical image classification and segmentation. Ann. Transl. Med. 2020; 8: 713.
        • Choi Y., Nam Y., Lee Y.S., Kim J., Ahn K.-J., Jang J., Shin N.-Y., Kim B.-S., Jeon S.-S. IDH1 mutation prediction using MR-based radiomics in glioblastoma: comparison between manual and fully automated deep learning-based approach of tumor segmentation. Eur. J. Radiol. 2020; 128: 109031.
        • Javor D., Kaplan H., Kaplan A., Puchner S.B., Krestan C., Baltzer P. Deep learning analysis provides accurate COVID-19 diagnosis on chest computed tomography. Eur. J. Radiol. 2020; 133: 109402.
        • Jin C., Chen W., Cao Y., Xu Z., Tan Z., Zhang X., Deng L., Zheng C., Zhou J., Shi H., Feng J. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat. Commun. 2020; 11: 5088.
        • Zhang J., Liu Z., Du B., He J., Li G., Chen D. Binary tree-like network with two-path Fusion Attention Feature for cervical cell nucleus segmentation. Comput. Biol. Med. 2019; 108: 223–233.
        • Gong K., Wu D., Arru C.D., Homayounieh F., Neumark N., Guan J., Buch V., Kim K., Bizzo B.C., Ren H., Tak W.Y., Park S.Y., Lee Y.R., Kang M.K., Park J.G., Carriero A., Saba L., Masjedi M., Talari H., Babaei R., Mobin H.K., Ebrahimian S., Guo N., Digumarthy S.R., Dayan I., Kalra M.K., Li Q. A multi-center study of COVID-19 patient prognosis using deep learning-based CT image analysis and electronic health records. Eur. J. Radiol. 2021; 139: 109583.
        • Radford A., Metz L., Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. 2015; arXiv:1511.06434.
        • Arjovsky M., Chintala S., Bottou L. Wasserstein generative adversarial networks. In: International Conference on Machine Learning, vol. 70, 2017: 214–223.
        • Berthelot D., Schumm T., Metz L. BEGAN: Boundary Equilibrium Generative Adversarial Networks. 2017; arXiv:1703.10717.
        • Wang H., Wang L., Lee E.H., Zheng J., Zhang W., Halabi S., Liu C., Deng K., Song J., Yeom K.W. Decoding COVID-19 pneumonia: comparison of deep learning and radiomics CT image signatures. Eur. J. Nucl. Med. Mol. Imaging. 2021; 48: 1478–1486.
        • Loey M., Smarandache F., Khalifa N.E.M. Within the lack of chest COVID-19 X-ray dataset: a novel detection model based on GAN and deep transfer learning. Symmetry (Basel). 2020; 12: 651. https://doi.org/10.3390/sym12040651.
        • Song J., Wang L., Ng N.N., Zhao M., Shi J., Wu N., Li W., Liu Z., Yeom K.W., Tian J. Development and validation of a machine learning model to explore tyrosine kinase inhibitor response in patients with stage IV EGFR variant–positive non–small cell lung cancer. JAMA Netw. Open. 2020; 3.
        • Schwyzer M., Ferraro D.A., Muehlematter U.J., Curioni-Fontecedro A., Huellner M.W., von Schulthess G.K., Kaufmann P.A., Burger I.A., Messerli M. Automated detection of lung cancer at ultralow dose PET/CT by deep neural networks – initial results. Lung Cancer. 2018; 126: 170–173.
        • Tajbakhsh N., Shin J.Y., Gurudu S.R., Hurst R.T., Kendall C.B., Gotway M.B., Liang J. Convolutional neural networks for medical image analysis: full training or fine tuning?. IEEE Trans. Med. Imaging. 2016; 35: 1299–1312.