Artificial Intelligence in Pancreaticobiliary Disease


The rapid rise of artificial intelligence (AI) applications in the field of gastroenterology has led to the recent rollout of sophisticated hardware and software systems used to aid detection and diagnosis in the endoscopy suite. These systems have primarily been applied to esophagogastroduodenoscopies and colonoscopies; the literature is comparatively scant in the realm of pancreaticobiliary endoscopy. This review discusses the few studies that have been published evaluating AI-assisted endoscopic ultrasound (EUS) and endoscopic retrograde cholangiopancreatography (ERCP). While the preliminary data are encouraging, the area as a whole demands further robust clinical study, in addition to close consideration of the logistical and ethical challenges that machine learning (ML) and deep learning (DL) present.


Artificial intelligence (AI) applied to medicine has seen a recent acceleration in both interest and clinical implementation after five decades of development, dating back to at least the 1970s.1 In that time, AI has been deployed to aid in answering diagnostic and prognostic questions in every field of medicine, from radiology to dermatology.2

AI was born as a branch of computer science with the hope of creating computer systems that could perform tasks classically requiring human input or intelligence. Machine learning (ML) is a subset of AI in which computer algorithms “learn” from training data sets by performing specific tasks and analyses on those data. ML algorithms designed for certain tasks, for instance visual recognition (e.g., recognizing lung nodules on a chest x-ray), are trained on large data sets, and the resulting trained algorithm is termed a “model.” These models are then validated on different data sets to ascertain their positive and negative predictive values. Deep learning (DL) is a subset of ML that relies on an artificial neural network (ANN), which allows multiple layers of features to be extracted from raw data to create more complex predictive outputs, often with even less human guidance than traditional ML algorithms. These DL systems learn through successive training data sets to produce outputs that are increasingly similar to the target output, typically predetermined by experts in the field. Other AI modalities include natural language processing (NLP), which gives computers the ability to understand both text and spoken word, while ambient clinical intelligence (ACI) monitors and reacts to inputs from its environment, akin to Siri or Alexa.
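The train-then-validate workflow described above can be illustrated with a deliberately simple toy model. The sketch below trains a single-layer perceptron on synthetic two-feature “lesion” data, then measures accuracy on a separate validation set. The data, features, and model are illustrative inventions for readers unfamiliar with ML terminology, not a real endoscopic classifier.

```python
# Toy illustration of the ML workflow: train a model on one data set,
# then validate it on a separate, held-out data set.
# All data here are synthetic; this is not a clinical model.
import random

random.seed(0)

def make_samples(n, label):
    # Hypothetical 2-feature points: "benign" (0) clusters near (0.3, 0.3),
    # "malignant" (1) near (0.7, 0.7).
    center = 0.3 if label == 0 else 0.7
    return [([random.gauss(center, 0.05), random.gauss(center, 0.05)], label)
            for _ in range(n)]

def train_perceptron(data, epochs=20, lr=0.1):
    # Single-layer perceptron: the simplest ancestor of the ANNs
    # discussed in this review.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

def accuracy(model, data):
    w, b = model
    correct = sum(1 for x, y in data
                  if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y)
    return correct / len(data)

train = make_samples(100, 0) + make_samples(100, 1)   # training set
random.shuffle(train)
test = make_samples(50, 0) + make_samples(50, 1)      # separate validation set
model = train_perceptron(train)
print(f"validation accuracy: {accuracy(model, test):.2f}")
```

Real systems replace the hand-built features with deep convolutional networks learned from thousands of endoscopic images, but the two-step pattern of training on one labeled set and validating on another is the same.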

Over the past two decades AI has become an increasingly important topic of discussion in gastroenterology.3–5 The applications of AI in this realm are wide in scope and in general fall into one of two categories: visual tasks and combination tasks. A typical approach to a visual task is first to develop a model based on a labeled training set of still images or video (e.g., a database of colon polyp images). The next step is to validate the model on a separate data set to measure performance, and in some cases to “tune” the algorithm for optimal performance. Some of the early critical progress in computer vision for gastroenterology has been in the areas of esophagogastroduodenoscopy (EGD)6–9 and colonoscopy,10–12 where many academic and industry teams have developed computer-assisted detection (CADe) software for colon polyp detection and other indications. A smaller subset of studies has evaluated the use of AI in video capsule endoscopy (VCE)13,14 to similarly aid in the detection and diagnosis of small bowel pathology. While a great deal of progress has been made in computer vision for colonoscopy and upper endoscopy, with numerous randomized controlled trials (RCTs) evaluating clinical use of CADe and computer-aided diagnosis (CADx) systems in real time, the same cannot yet be said for the field of pancreaticobiliary endoscopy. The development and implementation of AI tools for endoscopic retrograde cholangiopancreatography (ERCP) and endoscopic ultrasound (EUS) are still at a nascent stage. This gap is likely explained by two key barriers: 1) pancreaticobiliary endoscopy relies on a broader mix of complex visual data (i.e., ultrasound, fluoroscopy, and endoscopy), presenting a greater challenge for model development; and 2) the total volume of pancreaticobiliary procedures is much smaller than that of general endoscopic procedures, making it more difficult to collect data for model development (and perhaps also making it a less compelling investment for industry). This review aims to highlight the AI systems in active development for advanced endoscopic procedures to treat pancreaticobiliary disease, along with some of the key barriers and opportunities that lie ahead.

Endoscopic Ultrasound

EUS has emerged as an impactful modality to evaluate conditions of the pancreas, gallbladder, and liver, in addition to a wide variety of other indications. One of its most common uses is fluid and tissue sampling to aid in diagnosing pancreatic cysts and malignancy. EUS interpretation appears to be operator-dependent,15–17 with significant interobserver heterogeneity across various studies, highlighting a need for tools to help standardize diagnosis in difficult clinical scenarios.18 While the development of AI for EUS has been relatively limited over the past decade, there have been several important trials utilizing ML and DL systems to aid in the detection and differentiation of various benign and malignant pancreatic lesions. EUS elastography has been one area of recent AI investigational effort. Elastography measures the relative stiffness and density of tissue and has been used to help differentiate between pancreatic cancer and inflammatory changes (e.g., chronic pancreatitis); it has been shown to have good sensitivity and specificity for differentiating cancer and chronic pancreatitis in one large meta-analysis.19 In 2008 Săftoiu and colleagues20 generated mean hue histograms from EUS elastography videos of 68 patients, to which an extended neural network algorithm was applied to differentiate between chronic pancreatitis and pancreatic cancer. The algorithm reported an average training performance of 97% and an average testing performance of 95%. Sensitivity and specificity were 91.4% and 87.9% respectively, comparable to previously published literature on the use of EUS elastography for diagnosis, with strong positive (88.9%) and negative (90.6%) predictive values. Major limitations of this study include its small sample size and the use of normal pancreas cases, with no definitive mass, to train the algorithm.
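The studies in this review report their results as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). For readers less familiar with these metrics, the sketch below shows how each derives from the four cells of a 2x2 confusion matrix; the counts used are hypothetical, not drawn from any cited study.

```python
# How the diagnostic metrics reported throughout this review derive from a
# 2x2 confusion matrix (tp/fp/tn/fn counts). Counts below are illustrative.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# e.g., a hypothetical test set of 100 cases: 50 cancer, 50 chronic pancreatitis
m = diagnostic_metrics(tp=46, fp=4, tn=46, fn=4)
print(m)  # each metric works out to 0.92 with these balanced counts
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the study population, which is one reason the heavily cancer-enriched cohorts discussed in this section can report high PPV alongside much lower NPV.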
A follow-up study21 expanded on the original dataset with a blinded prospective cohort design utilizing 774 EUS elastography recordings; multiple recordings were obtained for each patient. Hue histograms were then examined by an experienced operator and a final clinical diagnosis assigned. A mathematical model using ANN input of the hue video vectors and the known final diagnosis was then generated, and the two were compared. The study reported a training accuracy of 91.14% and a testing accuracy of 84.27%. Sensitivity and specificity were 87.59% and 82.94% respectively, with a PPV and NPV of 96.25% and 57.22%. While the study enrolled predominantly pancreatic cancer patients (47 patients with chronic pancreatitis and 211 patients with pancreatic cancer), it illustrates the usefulness of neural networks in the accurate diagnosis of solid pancreatic masses using EUS elastography. Together, these studies show the high diagnostic accuracy that can be achieved by incorporating artificial intelligence and neural networks into EUS elastography image interpretation. Continuing to add to these image databases and incorporating them into EUS elastography software may further enhance diagnostic differentiation of pancreatic parenchymal diseases. Zhu and colleagues22 designed a prospective diagnostic study including EUS data from 388 patients (126 with chronic pancreatitis and 262 with pancreatic ductal adenocarcinoma), utilizing a support vector machine (SVM) to build an algorithm that distinguishes benign from cancerous image samples. Classification performance was robust, with a testing accuracy
of 94.2%. Sensitivity and specificity were 96.25% and 93.38% respectively, with a PPV and NPV of 92.21% and 96.68%. As with much of the work in this subfield, the system was designed using a single classifier, and no head-to-head comparison of different classifiers (e.g., ANNs) was performed. In addition, this form of image analysis was completed post hoc, thereby limiting its ability to be utilized as a dynamic tool in real-time clinical scenarios. Alongside the rise of ML and DL in medicine there has also been a movement towards utilizing more advanced imaging modalities (e.g., chromoendoscopy, magnification endoscopy, contrast-enhanced EUS) to permit more subtle clinical interrogation of GI lesions. These advanced imaging modalities provide ample opportunity for AI innovation. In 2015 Săftoiu and colleagues23 continued to expand their work with ANNs in pancreaticobiliary endoscopy by applying similar training and testing protocols to data captured with contrast-enhanced harmonic EUS (CEH-EUS). This study of 167 patients with intra-abdominal masses (55 with chronic pancreatitis and 112 with pancreatic cancer) trained an ANN achieving 94.64% sensitivity and 94.44% specificity. PPV and NPV were 92.21% and 96.68% respectively. While the majority of these data were obtained contemporaneously, roughly 25% of those patients had no fine needle aspirate (FNA) sample collected at the time of CEH-EUS. These cases required confirmatory testing by surgery (n=15) or follow-up (n=23). One source of additional bias in this study is the fact that the same investigators who performed the CEH-EUS also performed the EUS-guided fine needle aspiration (EUS-FNA) of the mass. Despite this limitation, all computer analysis of CEH-EUS videos was performed by an investigator who was blinded to FNA results.

Larger studies focused on traditional EUS gained more traction in the mid-2010s. In 2016 Ozkan and colleagues24 created a CADx system using data from 172 patients with an imaging data set comprised of 130 non-cancer and 202 pancreatic cancer samples. Patients were further sub-divided into three age groups: < 40, 40-60, and > 60 (as the appearance of the pancreas on EUS changes throughout a person’s lifespan). The ANN processed 20 features and classified each sample as either benign or malignant. The testing accuracies of this CADx system for the < 40, 40-60, and > 60 age groups were 92%, 94.11%, and 91.66% respectively. Sensitivity and specificity for all age groups were 83.3% and 93.33% respectively. Limitations of this study include the small sample size of the age < 40 subgroup in particular. There was also no differentiation between noncancerous pancreas pathologies (e.g., chronic pancreatitis, pseudocysts, polyps). As with the work of Zhu and colleagues, this CADx system was designed to conduct post hoc analysis of the EUS images rather than providing real-time information to guide clinical decision-making. In 2019 two additional retrospective studies, performed by Kuwahara and colleagues25 and Kurita and colleagues26, aimed to differentiate malignant IPMN from benign
pancreatic cystic lesions. The former included 50 patients (27 with low- or intermediate-grade dysplasia and 23 with high-grade dysplasia or invasive carcinoma) and utilized 3970 still EUS images to build a CADx system with 94.0% testing accuracy and sensitivity and specificity of 95.7% and 92.6% respectively. The latter included 85 patients and utilized a DL system to transform multiple data points (e.g., CEA level, cytology obtained via FNA, cyst fluid analysis of surgical and endoscopic specimens) and output a predictive value to differentiate benign from malignant cystic lesions, with a testing accuracy of 92.9% and sensitivity and specificity of 95.7% and 91.9% respectively. While these studies demonstrate a higher level of testing accuracy, the generalizability of their results is limited by the relatively small sample sizes. There is also a renewed focus on utilizing AI’s computing power for quality control and training in EUS. In 2020 Zhang and colleagues27 constructed a system called BP MASTER to aid endoscopist training in EUS. The standard EUS procedure was divided into 6 discrete stations based on pancreatic anatomy, and the model was trained on 19,486 images. The test set included 396 video clips, and system performance was compared with expert EUS determination. The algorithm achieved 94.2% and 82.4% testing accuracy in station classification at internal and external validation respectively, deemed comparable to expert opinion. This study paves the way for AI in EUS to be utilized not only for real-time clinical decision-making and procedural quality improvement,
but also potentially to support trainee education. Indeed, there is a growing body of literature28–32 suggesting that AI-assisted or virtual reality simulators can serve as a useful supplement to conventional training. EUS-based neural networks are also under investigation as a means of differentiating autoimmune pancreatitis (AIP) from pancreatic ductal adenocarcinoma (PDAC). Marya and colleagues33 performed a study in 2021 to explore this question. Still images and videos from 583 patients were used to create a convolutional neural network (CNN) that was able to distinguish AIP (n=146) from normal pancreas (NP, n=73) with 99% sensitivity and 98% specificity. This CNN was also able to distinguish AIP from chronic pancreatitis (CP, n=72) with 94% sensitivity and 71% specificity. Finally, the CNN distinguished AIP from PDAC (n=292)
with 90% sensitivity and 93% specificity. In total, the neural network distinguished AIP from all other conditions with 90% sensitivity and 95% specificity. Given the suboptimal nature of sampling techniques for diagnosing AIP, this CNN is promising as a more expeditious method of obtaining the diagnosis. The most recent and most complete systematic review on the application of AI to EUS diagnosis of pancreatic malignancies was published by Goyal and colleagues in 2022.34 This systematic review included 11 studies utilizing AI in diagnosing pancreatic cancer, with a total of 2292 patients. The patient population was predominantly pancreatic cancer (n=1383), with the remainder divided between pancreatic neuroendocrine tumors (PNET; n=3) and IPMN (n=27). Neural networks were the most studied AI modality (9 studies), with the remainder using SVM. Overall, the sensitivity of the AI systems in diagnosing pancreatic cancer was high, ranging from 83-100%, with a specificity of 50-99%, PPV of 75-99%, and NPV of 57-100%. Subgroup analysis of the studies differentiating pancreatic cancer from chronic pancreatitis reported higher sensitivity (96%), specificity (93%), and accuracy (94%) when using SVM as compared to those that utilized ANNs. This comprehensive systematic review summarizes what is outlined in the current review and again supports the use of AI systems in improving diagnostic yield in pancreatic cancer identification by EUS.

Endoscopic Retrograde Cholangiopancreatography

AI model development in the world of endoscopic retrograde cholangiopancreatography (ERCP) has made somewhat less progress than in the realm of EUS. Proposed areas for development of AI in ERCP include: 1) characterization of strictures, 2) risk prediction for iatrogenic pancreatitis, and 3) prediction tools and guidance for difficult biliary duct cannulation.35 One key area of challenge in ERCP has been the differentiation between indeterminate and malignant biliary strictures, which has been hampered by the relatively low diagnostic yield of cytology brushings and the subjectivity of cholangioscopic findings.36–38 Contemporaneous with the development of AI applications in EUS, Jovanovic and colleagues39 designed a prospective study which aimed to identify patients with suspected choledocholithiasis most suitable for therapeutic ERCP. Data from 181 patients at a tertiary care endoscopy center were utilized. An ANN generated a predictive score based on laboratory values (e.g., alkaline phosphatase, total bilirubin, aspartate aminotransferase, alanine aminotransferase, C-reactive protein) and features of the common bile duct (CBD) on transcutaneous ultrasound. This model displayed good discriminant ability, with 92% testing accuracy in identifying patients with choledocholithiasis, suggesting that ANN-generated predictive scores can be useful risk stratification tools in routine clinical practice.
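As a rough illustration of how a model of this kind folds laboratory and ultrasound findings into a single number, the sketch below uses a single-neuron (logistic) scoring function. The feature scalings and weights are hypothetical placeholders chosen purely for illustration; they are not the parameters of the published ANN, which learned its weighting from patient data.

```python
# Minimal sketch of a learned risk score combining lab values and an
# ultrasound finding, in the spirit of the ANN-based choledocholithiasis
# predictor described above. Weights and scalings are HYPOTHETICAL.
import math

def risk_score(alk_phos, total_bili, ast, alt, crp, cbd_dilated):
    # Normalize raw inputs to comparable scales (illustrative divisors).
    features = [alk_phos / 300.0, total_bili / 5.0, ast / 100.0,
                alt / 100.0, crp / 50.0, 1.0 if cbd_dilated else 0.0]
    weights = [0.8, 1.2, 0.4, 0.4, 0.3, 1.5]   # hypothetical, not learned
    bias = -3.0
    # Logistic (single-neuron) combination -> probability-like score in (0, 1)
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

low = risk_score(alk_phos=90, total_bili=0.8, ast=30, alt=35,
                 crp=3, cbd_dilated=False)
high = risk_score(alk_phos=450, total_bili=4.5, ast=180, alt=210,
                  crp=40, cbd_dilated=True)
print(f"low-risk patient:  {low:.2f}")
print(f"high-risk patient: {high:.2f}")
```

A real ANN differs in that it stacks many such units and fits the weights to outcome data, but the core idea of mapping a patient's inputs to a continuous risk score, then thresholding it for triage, is the same.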

It is well known that difficult cannulation increases the risk of post-ERCP pancreatitis, which in turn contributes to significant morbidity and mortality; thus utilizing AI to predict difficult cannulation from the appearance of the ampulla has been attempted.40 In 2021 Kim and colleagues41 built an AI system to identify the ampulla of Vater (AOV) and assess the difficulty of selective cannulation during ERCP, using a sample of 531 patients, with images from 451 used to annotate AOV location. Cannulation difficulty data were based on a binary classification. The model was able to detect the AOV with a precision of 76.2% and classify cannulation difficulty with a recall of 71.9% in easy cases (requiring < 5 minutes) and 61.1% in difficult cases. These metrics are on par with expert determination and demonstrate the real-time clinical applicability of AI in advanced endoscopy. These promising findings also pave a path for AI systems to improve quality control and training in ERCP.


As applications of AI continue to expand in the field of gastroenterology, it will be important to develop benchmark data sets that are large and heterogeneous to allow for consistency in training and testing new AI systems. Such data sets may also circumvent the need for head-to-head randomized controlled trials comparing the many subtypes of AI systems. Another goal of this burgeoning subfield should be robust trial design. As delineated by Glissen Brown and colleagues,3 the call for more prospective randomized controlled studies evaluating the use of AI in endoscopy is met with the equally important need for more structure in trial design and reporting. Detailing the level of human involvement in input data manipulation and the baseline expertise requirements of users, for example, will be crucial. Clearly stating which data are missing, and how these data were treated in statistical analysis, is critical. Identifying the differences between the training and testing data sets, including the eligibility criteria for inclusion in each study, is also imperative.

As the demand for more robust studies in this area accelerates, so will our need to address the logistical42 and ethical43 challenges inherent in this work. One important logistical challenge is the manpower required to catalog images and videos to build these benchmark data sets. Another challenge is siloed data, a product of institutions utilizing different electronic medical records, endoscopy systems, and image processing software, which will make it difficult to share and integrate these data into useful AI applications. There also remain many ethical considerations in utilizing ML/DL systems for routine clinical care, including informed consent, privacy and transparency of data use, external regulation, and algorithmic bias. The latter is currently being explored and is of utmost importance, as this bias can contribute to pre-existing health inequities in gastroenterology and hepatology. It is clear that bias has the potential to affect nearly every aspect of ML/DL implementation in clinical practice, as outlined by Uche-Anya, Anyane-Yeboa, and colleagues, including: research problem selection, data collection, outcome measure selection, algorithm development, and clinical deployment.44 As the field of gastroenterology continues to harness the power of AI in pancreaticobiliary endoscopy, it will be important for future clinical trial design to prioritize transparency, standardized research methodology and terminology, and equity. The implications of this technology are far-reaching; if broadly adopted, it could lead to a profound change in clinical practice and outcomes.


  1. Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. First edition. Basic Books; 2019.
  2. Ramesh AN, Kambhampati C, Monson JRT, Drew PJ.
    Artificial intelligence in medicine. Ann R Coll Surg Engl. 2004;86(5):334-338. doi:10.1308/147870804290
  3. Glissen Brown JR, Waljee AK, Mori Y, Sharma P, Berzin TM. Charting a path forward for clinical research in artificial intelligence and gastroenterology. Dig Endosc. Published online April 19, 2021:den.13974. doi:10.1111/den.13974
  4. Berzin TM, Parasa S, Wallace MB, Gross SA, Repici A, Sharma P. Position statement on priorities for artificial intelligence in GI endoscopy: a report by the ASGE Task Force. Gastrointest Endosc. 2020;92(4):951-959. doi:10.1016/j.
  5. Glissen Brown JR, Berzin TM. Adoption of New Technologies. Gastrointest Endosc Clin N Am. 2021;31(4):743-758. doi:10.1016/j.giec.2021.05.010
  6. Luo H, Xu G, Li C, et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre, case-control, diagnostic study. Lancet Oncol. 2019;20(12):1645-1654. doi:10.1016/S1470-2045(19)30637-0
  7. de Groof AJ, Struyvenberg MR, van der Putten J, et al. Deep-Learning System Detects Neoplasia in Patients With Barrett’s Esophagus With Higher Accuracy Than Endoscopists in a Multistep Training and Validation Study With Benchmarking. Gastroenterology. 2020;158(4):915-929.e4. doi:10.1053/j.gastro.2019.11.030
  8. Arribas J, Antonelli G, Frazzoni L, et al. Standalone performance of artificial intelligence for upper GI neoplasia: a meta-analysis. Gut. 2021;70(8):1458-1468. doi:10.1136/gutjnl-2020-321922
  9. Lui TKL, Tsui VWM, Leung WK. Accuracy of artificial intelligence–assisted detection of upper GI lesions: a systematic review and meta-analysis. Gastrointest Endosc. 2020;92(4):821-830.e9. doi:10.1016/j.gie.2020.06.034
  10. Wang P, Berzin TM, Glissen Brown JR, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut. 2019;68(10):1813-1819. doi:10.1136/gutjnl-2018-317500
  11. Mohan BP, Facciorusso A, Khan SR, et al. Real-time computer aided colonoscopy versus standard colonoscopy for improving adenoma detection rate: A meta-analysis of randomized-controlled trials. EClinicalMedicine. 2020;29-30:100622. doi:10.1016/j.eclinm.2020.100622
  12. Barua I, Vinsard DG, Jodal HC, et al. Artificial intelligence for polyp detection during colonoscopy: a systematic review and meta-analysis. Endoscopy. 2021;53(03):277-284. doi:10.1055/a-1201-7165
  13. Zou Y, Li L, Wang Y, Yu J, Li Y, Deng WJ. Classifying digestive organs in wireless capsule endoscopy images based on deep convolutional neural network. In: 2015 IEEE International Conference on Digital Signal Processing (DSP). IEEE; 2015:1274-1278. doi:10.1109/ICDSP.2015.7252086
  14. Chahal D, Byrne MF. A primer on artificial intelligence and its application to endoscopy. Gastrointest Endosc. 2020;92(4):813-820.e4. doi:10.1016/j.gie.2020.04.074
  15. Wallace MB, Hawes RH, Durkalski V, et al. The reliability of EUS for the diagnosis of chronic pancreatitis: interobserver agreement among experienced endosonographers. Gastrointest Endosc. 2001;53(3):294-299. doi:10.1016/S0016-5107(01)70401-4
  16. Stevens T, Lopez R, Adler DG, et al. Multicenter comparison of the interobserver agreement of standard EUS scoring and Rosemont classification scoring for diagnosis of chronic pancreatitis. Gastrointest Endosc. 2010;71(3):519-. doi:10.1016/j.gie.2009.10.043
  17. Del Pozo D, Poves E, Tabernero S, et al. Conventional versus Rosemont endoscopic ultrasound criteria for chronic pancreatitis: Interobserver agreement in same day back-to-back procedures. Pancreatology. 2012;12(3):284-287. doi:10.1016/j.pan.2012.03.054
  18. Yamamiya A, Irisawa A, Kashima K, et al. Interobserver Reliability of Endoscopic Ultrasonography: Literature Review. Diagnostics. 2020;10(11):953. doi:10.3390/diagnostics10110953
  19. Li X. Endoscopic ultrasound elastography for differentiating between pancreatic adenocarcinoma and inflammatory masses: A meta-analysis. World J Gastroenterol. 2013;19(37):6284. doi:10.3748/wjg.v19.i37.6284
  20. Săftoiu A, Vilmann P, Gorunescu F, et al. Neural network analysis of dynamic sequences of EUS elastography used for the differential diagnosis of chronic pancreatitis and pancreatic cancer. Gastrointest Endosc. 2008;68(6):1086-1094. doi:10.1016/j.gie.2008.04.031
  21. Săftoiu A, Vilmann P, Gorunescu F, et al. Efficacy of an Artificial Neural Network–Based Approach to Endoscopic Ultrasound Elastography in Diagnosis of Focal Pancreatic Masses. Clin Gastroenterol Hepatol. 2012;10(1):84-90.e1. doi:10.1016/j.cgh.2011.09.014
  22. Zhu M, Xu C, Yu J, et al. Differentiation of Pancreatic Cancer and Chronic Pancreatitis Using Computer-Aided Diagnosis of Endoscopic Ultrasound (EUS) Images: A Diagnostic Test. Arlt A, ed. PLoS ONE. 2013;8(5):e63820. doi:10.1371/journal.pone.0063820
  23. Săftoiu A, Vilmann P, Dietrich CF, et al. Quantitative contrast-enhanced harmonic EUS in differential diagnosis of focal pancreatic masses (with videos). Gastrointest Endosc. 2015;82(1):59-69. doi:10.1016/j.gie.2014.11.040
  24. Ozkan M, Cakiroglu M, Kocaman O, et al. Age-based computer-aided diagnosis approach for pancreatic cancer on endoscopic ultrasound images. Endosc Ultrasound. 2016;5(2):101. doi:10.4103/2303-9027.180473
  25. Kuwahara T, Hara K, Mizuno N, et al. Usefulness of Deep Learning Analysis for the Diagnosis of Malignancy in Intraductal Papillary Mucinous Neoplasms of the Pancreas. Clin Transl Gastroenterol. 2019;10(5):e00045. doi:10.14309/ ctg.0000000000000045
  26. Kurita Y, Kuwahara T, Hara K, et al. Diagnostic ability of artificial intelligence using deep learning analysis of cyst fluid in differentiating malignant from benign pancreatic cystic lesions. Sci Rep. 2019;9(1):6893. doi:10.1038/s41598-
  27. Zhang J, Zhu L, Yao L, et al. Deep learning–based pancreas segmentation and station recognition system in EUS: development and validation of a useful training tool (with video). Gastrointest Endosc. 2020;92(4):874-885.e3. doi:10.1016/j.gie.2020.04.071
  28. Finocchiaro M, Cortegoso Valdivia P, Hernansanz A, et al. Training Simulators for Gastrointestinal Endoscopy: Current and Future Perspectives. Cancers. 2021;13(6):1427. doi:10.3390/cancers13061427
  29. Huang L, Liu J, Wu L, et al. Impact of Computer-Assisted System on the Learning Curve and Quality in Esophagogastroduodenoscopy: Randomized Controlled Trial. Front Med. 2021;8:781256. doi:10.3389/fmed.2021.781256
  30. Khan R, Plahouras J, Johnston BC, Scaffidi MA, Grover SC, Walsh CM. Virtual reality simulation training in endoscopy: a Cochrane review and meta-analysis. Endoscopy. 2019;51(07):653-664. doi:10.1055/a-0894-4400
  31. Mahmood T, Scaffidi MA, Khan R, Grover SC. Virtual reality simulation in endoscopy training: Current evidence and future directions. World J Gastroenterol. 2018;24(48):5439-5445. doi:10.3748/wjg.v24.i48.5439
  32. Harpham-Lockyer L. Role of virtual reality simulation in endoscopy training. World J Gastrointest Endosc. 2015;7(18):1287. doi:10.4253/wjge.v7.i18.1287
  33. Marya NB, Powers PD, Chari ST, et al. Utilisation of artificial intelligence for the development of an EUS-convolutional neural network model trained to enhance the diagnosis of autoimmune pancreatitis. Gut. 2021;70(7):1335-1344. doi:10.1136/gutjnl-2020-322821
  34. Goyal H, Sherazi SAA, Gupta S, et al. Application of artificial intelligence in diagnosis of pancreatic malignancies by endoscopic ultrasound: a systemic review. Ther Adv Gastroenterol. 2022;15:175628482210938. doi:10.1177/17562848221093873
  35. Ahmad OF, Stassen P, Webster GJ. Artificial intelligence in biliopancreatic endoscopy: Is there any role? Best Pract Res Clin Gastroenterol. 2021;52-53:101724. doi:10.1016/j.bpg.2020.101724
  36. Han S, Tatman P, Mehrotra S, et al. Combination of ERCP-Based Modalities Increases Diagnostic Yield for Biliary Strictures. Dig Dis Sci. 2021;66(4):1276-1284. doi:10.1007/s10620-020-06335-x
  37. Navaneethan U, Njei B, Lourdusamy V, Konjeti R, Vargo JJ, Parsi MA. Comparative effectiveness of biliary brush cytology and intraductal biopsy for detection of malignant biliary strictures: a systematic review and meta-analysis. Gastrointest Endosc. 2015;81(1):168-176. doi:10.1016/j.gie.2014.09.017
  38. Smoczynski M, Jablonska A, Matyskiel A, et al. Routine brush cytology and fluorescence in situ hybridization for assessment of pancreatobiliary strictures. Gastrointest Endosc. 2012;75(1):65-73. doi:10.1016/j.gie.2011.08.040
  39. Jovanovic P, Salkic NN, Zerem E. Artificial neural network predicts the need for therapeutic ERCP in patients with suspected choledocholithiasis. Gastrointest Endosc. 2014;80(2):260-268. doi:10.1016/j.gie.2014.01.023
  40. Thaker AM, Mosko JD, Berzin TM. Post-endoscopic retrograde cholangiopancreatography pancreatitis. Gastroenterol Rep. 2015;3(1):32-40. doi:10.1093/gastro/gou083
  41. Kim T, Kim J, Choi HS, et al. Artificial intelligence-assisted analysis of endoscopic retrograde cholangiopancreatography image for identifying ampulla and difficulty of selective cannulation. Sci Rep. 2021;11(1):8381. doi:10.1038/s41598-
  42. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17(1):195. doi:10.1186/s12916-019-1426-2
  43. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artif Intell Healthc. Published online 2020:295-336. doi:10.1016/B978-0-12-
  44. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical Machine Learning in Healthcare. Annu Rev Biomed Data Sci. 2021;4(1):123-144. doi:10.1146/annurev-biodatasci-092820-114757
