Artificial intelligence in the interventional management of liver disease: a narrative review from foundational concepts to clinical applications

Hyeon Yu

Affiliation: Division of Vascular and Interventional Radiology, Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA

Email: hyeon_yu@med.unc.edu

ORCID: https://orcid.org/0000-0002-4318-2575
Explor Dig Dis. 2026;5:1005109 DOI: https://doi.org/10.37349/edd.2026.1005109

Received: November 20, 2025 Accepted: January 11, 2026 Published: January 18, 2026

Academic Editor: Han Moshage, University of Groningen, The Netherlands

The article belongs to the special issue Advances in Hepato-gastroenterology: Diagnosis, Prognostication, and Disease Stratification

Abstract

Interventional radiology (IR) is an ideal domain for artificial intelligence (AI) due to its data-intensive nature. This review provides a targeted guide for clinicians on AI applications in liver interventions, specifically focusing on hepatocellular carcinoma and portal hypertension. Key findings from recent literature demonstrate that AI models achieve high accuracy in predicting the response to transarterial chemoembolization and in non-invasively estimating the hepatic venous pressure gradient. Furthermore, emerging deep learning architectures, such as Swin Transformers, are outperforming traditional mRECIST criteria in longitudinal treatment monitoring. Despite these technical successes, the transition from “code to bedside” is hindered by limited external validation and the “black box” nature of complex algorithms. We conclude that the future of IR lies in the “AI-augmented” interventional radiologist paradigm, in which AI serves as a precision tool for patient selection and procedural safety rather than as a replacement for clinical judgment.

Keywords

artificial intelligence, interventional radiology, liver disease, hepatocellular carcinoma, portal hypertension, narrative review

Introduction

Liver cancer is a global health challenge, with hepatocellular carcinoma (HCC) being its most common form, and its management is a cornerstone of modern interventional radiology (IR) practice [1, 2]. Similarly, portal hypertension (PHT), a major sequela of chronic liver disease, often requires complex, image-guided interventions, such as the placement of a transjugular intrahepatic portosystemic shunt (TIPS) [3–5]. The treatment of these conditions has been revolutionized by a growing arsenal of minimally invasive procedures, including transarterial chemoembolization (TACE), radioembolization (TARE), thermal ablation, and portal vein embolization (PVE) [6–10].

The modern management of liver disease now generates an overwhelming amount of disparate data, from advanced multimodality imaging and radiomics to clinical, laboratory, and genomic information [11]. Managing this complex data to make personalized treatment decisions presents a significant challenge. Artificial intelligence (AI) has emerged as a powerful tool to meet this challenge, and the field of IR is exceptionally well-suited for its application [12]. Unlike other interventional specialties, IR is an inherently data-rich field where entire image-guided procedures are recorded in a standardized digital format, creating an ideal ecosystem for AI-powered innovation [13]. This positions interventional radiologists not just as consumers of AI technology, but as potential leaders in developing groundbreaking tools for the entire interventional community [13].

In routine practice, interventionalists rely primarily on qualitative visual assessments of imaging and static staging systems, such as the Barcelona Clinic Liver Cancer (BCLC) criteria, to guide therapeutic decisions [14]. However, these conventional methods often fail to account for the complex biological micro-heterogeneity of liver tumors or the highly variable hemodynamics of PHT [15]. This leads to a “one-size-fits-all” approach in which, for example, up to 40% of patients may not achieve the predicted response to TACE or may develop unforeseen complications, such as overt hepatic encephalopathy (OHE) after TIPS placement [16–18]. AI offers a transformative solution to these shortcomings by extracting sub-visual “radiomic” features and processing multidimensional datasets that exceed human cognitive capacity, enabling a shift from generalized protocols to truly personalized medicine [15].

The path to clinical integration is not without obstacles. Advanced deep learning (DL) models require vast amounts of high-quality data, yet IR datasets are often smaller and less standardized than those in diagnostic radiology, influenced by operator variability, diverse patient conditions, and specific procedural contexts [19]. Furthermore, a lack of formal training in AI among clinicians can create a barrier to understanding, trusting, and effectively participating in the development and deployment of these powerful new tools [19, 20]. This gap is underscored by the fact that while the total number of FDA-cleared AI algorithms has surged to over 1,250, the share specifically dedicated to interventional procedures remains minimal, highlighting the specialty’s continuous unmet need for a clear roadmap from “code to bedside” [21].

The objective of this review is to offer a comprehensive guide for practicing interventional radiologists, bridging the gap between foundational AI concepts and their real-world clinical applications in the management of liver disease. We will first review a concise primer on essential AI terminology, frameworks, and life cycles. The core of the manuscript will then survey the current and emerging applications of AI in the management of HCC and PHT. Finally, we will discuss overarching challenges and outline the future of the field, guided by the research priorities established by major international IR societies [21, 22].

An IR-focused primer on AI fundamentals

To critically evaluate and integrate AI into clinical practice, a foundational understanding of its core concepts is essential [20]. AI is a broad, umbrella term for computer-based systems that perform tasks requiring human-like intelligence, such as pattern recognition and problem-solving [11, 23]. Within this field, machine learning (ML) is a key subset where algorithms are not explicitly programmed but instead learn complex, non-linear relationships directly from data [11, 20]. DL is a further specialization of ML that uses advanced architectures, including artificial neural networks (ANNs) with many layers, to automatically extract and learn features from complex data (e.g., medical images) with minimal human intervention [11, 20, 24] (Figure 1).

Figure 1. The foundational hierarchy and examples of artificial intelligence (AI), machine learning, deep learning, and generative AI.

Core architectures and clinical models

The way a model learns is defined by its training data. In supervised learning, the most common approach in medicine, the model is trained on a dataset where inputs are labeled with the correct outputs (e.g., CT images labeled with the presence or absence of PHT) [11, 20]. In contrast, unsupervised learning uses unlabeled data, and the model’s task is to find hidden patterns or relationships on its own [20]. There are two types of DL architecture particularly relevant to the interventional management of liver disease.

  • Convolutional neural networks (CNNs): For years, CNNs have been the workhorse of medical imaging [11, 25]. Inspired by the human visual cortex, they are exceptionally good at processing grid-like data, such as images, to perform tasks including the classification and segmentation of liver tumors [23, 25–27]. (A minimal, illustrative training sketch follows this list.)

  • Transformers: Originally developed for natural language processing (for example, the models behind ChatGPT), Transformers are now achieving state-of-the-art results in medical imaging [19, 28]. Architectures such as the Swin Transformer hierarchically capture both global and local features, making them highly effective for analyzing high-resolution, 3D medical images (e.g., CT and MRI scans) for tasks such as prognostic modeling in HCC [28–32].
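
To make the supervised learning and CNN concepts above concrete, the sketch below trains a toy CNN for one step on single-channel image patches paired with expert-assigned binary labels. It is a minimal illustration only: the architecture, tensor shapes, and random stand-in data are assumptions for demonstration and do not correspond to any of the published models discussed in this review.

```python
# Minimal, illustrative sketch of supervised learning with a small CNN.
# The architecture, tensor shapes, and random stand-in data are placeholders,
# not any of the published models discussed in this review.
import torch
import torch.nn as nn

class TinyLesionCNN(nn.Module):
    """Toy CNN mapping a single-channel 64x64 image patch to one logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)  # 64 -> 32 -> 16 after two poolings

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))  # raw logit; sigmoid gives a probability

model = TinyLesionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Supervised learning: image patches paired with expert-assigned labels (the "ground truth").
images = torch.randn(8, 1, 64, 64)            # stand-in batch of image patches
labels = torch.randint(0, 2, (8, 1)).float()  # stand-in binary labels (e.g., responder vs non-responder)

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```

In practice, a network like this would be trained for many epochs on curated, expert-labeled datasets and then evaluated on held-out patients, as outlined in the lifecycle section below.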

Specific DL models, categorized by their interventional task—such as ProgSwin-UNETR for monitoring TACE response or the aHVPG Model for non-invasive pressure estimation—are detailed in Table 1.

Table 1. Example deep learning models and their interventional relevance.

Model name | Primary architecture/Type | Core clinical task | Relevant IR procedure/Application
ProgSwin-UNETR | Swin Transformer/DL | Longitudinal prognosis stratification | Monitoring HCC response after TACE
aHVPG Model | AutoML/CNN | Non-invasive prediction of HVPG (pressure gradient) | PHT diagnosis; TIPS candidacy
Swin-UNETR | CNN/Transformer Hybrid | 3D segmentation of tumors and organs at risk | Y-90 dosimetry planning; ablation simulation
Neuro-Vascular Assist | Real-Time AI System | Real-time safety monitoring (detects migrating embolic agents) | Visceral or neuro-embolization
ChatGPT (GPT-4) | Large Language Model (LLM) | Statistical analysis and data interpretation | Accelerating clinical research and protocol design
K-Net/MobileViT | CNN/Transformer Hybrid (Dual-Stage) | High-accuracy segmentation and classification | Nodule/Lesion triage and feature analysis

IR: interventional radiology; DL: deep learning; HCC: hepatocellular carcinoma; TACE: transarterial chemoembolization; ML: machine learning; CNN: convolutional neural network; HVPG: hepatic venous pressure gradient; PHT: portal hypertension; TIPS: transjugular intrahepatic portosystemic shunt; AI: artificial intelligence.

Radiomics: extracting data from images

A key process enabling AI in radiology is radiomics, which involves the high-throughput extraction of a large number of quantitative features from medical images [3, 13]. This process converts images into mineable data, capturing characteristics of tumor shape, intensity, and texture that are often imperceptible to the human eye [24, 25]. This is particularly well-suited for characterizing the heterogeneity of HCC from CT and MRI scans [24]. These radiomic features can then be fed into ML models to build tools that predict diagnosis, treatment response, or prognosis [1, 13, 24].
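
As a schematic of what converting images into mineable data means in practice, the sketch below computes a handful of first-order intensity features inside a binary tumor mask. The arrays are synthetic placeholders, and production radiomics pipelines typically rely on dedicated, standardized libraries (e.g., PyRadiomics) that also extract shape and higher-order texture features.

```python
# Minimal sketch of first-order radiomic feature extraction inside a tumor mask.
# The arrays are synthetic placeholders; production pipelines typically use
# standardized libraries (e.g., PyRadiomics) and also extract shape and texture features.
import numpy as np
from scipy import stats

def first_order_features(volume: np.ndarray, mask: np.ndarray) -> dict:
    """Compute simple intensity statistics over voxels where mask > 0."""
    voxels = volume[mask > 0].astype(float)
    counts, _ = np.histogram(voxels, bins=64)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(stats.skew(voxels)),      # asymmetry of the intensity histogram
        "kurtosis": float(stats.kurtosis(voxels)),  # tail weight / peakedness
        "entropy": float(-np.sum(p * np.log2(p))),  # crude heterogeneity surrogate
        "volume_voxels": int((mask > 0).sum()),
    }

# Synthetic stand-ins for a CT volume and a segmented lesion mask.
ct_volume = np.random.normal(60, 15, size=(32, 64, 64))  # Hounsfield-like values
tumor_mask = np.zeros_like(ct_volume, dtype=np.uint8)
tumor_mask[10:20, 20:40, 20:40] = 1
print(first_order_features(ct_volume, tumor_mask))
```

The resulting feature vector, typically hundreds of values per lesion, becomes the input to the ML models described in the following sections.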

Classifying AI systems in IR

To help clinicians better understand and evaluate different AI tools, several classification frameworks have been proposed. One pragmatic approach categorizes AI systems based on their complexity and interpretability, distinguishing between simple, fully explainable models and complex, non-interpretable black-box models that require more scrutiny [20]. More recently, specific frameworks were developed to score the level of technological integration for robotic and navigation systems [33, 34]. These include the Levels of Autonomy in Surgical Robotics (LASR) scale, which rates a system’s ability to act independently, and the novel Levels of Integration of Advanced Imaging and AI (LIAI2) scale, which assesses how deeply AI is embedded into the procedural workflow [33, 35]. As will be discussed, a systematic review of currently available systems in IR found that most remain at a low level on both of these scales [33] (Table 2).

Table 2. Levels of autonomy (LASR) and AI integration (LIAI2) classification scales.

Scale | Purpose | Range | Core concept at the highest level
LASR (autonomy) | Classifies the robot’s degree of independence from human control | 0 (no autonomy) to 5 (full autonomy) | Full autonomy: the system performs the entire procedure based on predefined objectives without human intervention.
LIAI2 (integration) | Classifies the sophistication and depth of integration of AI and advanced imaging within the workflow | 1 (guided assistance) to 5 (full autonomous navigation) | Full autonomous navigation: the system fully integrates advanced imaging and AI to independently perform and navigate the intervention.

AI: artificial intelligence; LASR: Levels of Autonomy in Surgical Robotics; LIAI2: Levels of Integration of Advanced Imaging and AI.

The AI project lifecycle: from data to deployment

The process of building and implementing an AI model follows a rigorous, multi-stage lifecycle (Table 3). For interventional radiologists, understanding this pipeline—from initial data acquisition to final deployment—is essential for interpreting research and ensuring clinical relevance.

Table 3. The AI project lifecycle: from data to clinical deployment.

Stage | Key steps | Activities | IR relevance and goal
Conception & Data | Define problem; Acquisition | Identify a clear clinical question (e.g., predicting TACE response); gather multimodal data, including imaging, labs, and clinical records | The goal is to obtain sufficient, high-quality data despite scarcity challenges in IR
Preprocessing & Curation | Labeling; Feature extraction | Assign a “ground truth” to the data (e.g., manual tumor segmentation or classifying patient outcome); transform images into quantitative data (e.g., radiomic features from HCC texture) | Clinician expertise is required to perform accurate labeling and to ensure features reflect meaningful pathology
Validation & Testing | Split data; External validation; Mitigate overfitting | Separate the dataset into training, validation, and untouched testing sets; test the final model on data from a different center to prove generalizability; ensure the model performs well on new data and does not fail due to over-memorization | This stage establishes rigor: models must prove accuracy on unseen patients to be considered trustworthy for clinical decision-making
Deployment & Integration | Evaluation; Workflow integration | Quantify performance using clinical metrics such as AUC, sensitivity, and specificity; ensure the tool fits seamlessly into the IR suite and minimizes disruption to existing protocols | The goal is to achieve genuine clinical benefit by overcoming practical barriers and securing clinician trust before routine use

AI: artificial intelligence; TACE: transarterial chemoembolization; IR: interventional radiology; HCC: hepatocellular carcinoma; AUC: Area Under the Curve.

Problem definition and data acquisition

The life of an AI model begins with defining a clinically relevant problem (e.g., predicting TACE non-response) and identifying the necessary data inputs. In IR, this involves synthesizing multimodal data, including imaging, laboratory values, and clinical records, which is critical given IR’s inherent multimodal nature [11]. The goal is to develop instruments for image segmentation, simulation, registration, and multimodality image fusion [21]. Given that IR datasets are often limited, acquiring high-quality, ethically sourced data is the primary logistical hurdle [19, 36].
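
A minimal sketch of this multimodal assembly step is shown below; the patient identifiers, columns, and values are hypothetical and simply illustrate joining imaging-derived features, clinical variables, and outcome labels on a common key.

```python
# Illustrative assembly of a multimodal dataset keyed by a shared patient identifier.
# All columns and values are hypothetical placeholders; real data would be exported from
# institutional imaging, laboratory, and outcome systems (e.g., via pd.read_csv).
import pandas as pd

clinical = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "albi_grade": [1, 2, 2],
    "bclc_stage": ["A", "B", "B"],
    "afp_ng_ml": [12.0, 480.0, 35.0],
})
radiomics = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "tumor_entropy": [4.1, 5.3, 4.8],
    "tumor_volume_ml": [18.0, 42.5, 27.1],
})
outcomes = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "tace_response": [1, 0, 1],  # label assigned at follow-up imaging
})

# Inner joins keep only patients with complete multimodal data.
dataset = clinical.merge(radiomics, on="patient_id").merge(outcomes, on="patient_id")
print(dataset)
```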

Data preprocessing and curation

Once acquired, raw data must be preprocessed. This crucial step includes image registration, normalization, and labeling. For supervised models, labeling is the labor-intensive process of assigning a ground truth to the input data (e.g., manually segmenting a tumor or labeling a patient as a responder or non-responder, with categorical labels typically one-hot encoded for model input) [11]. This labeling often requires the expertise of interventional radiologists [37]. Preprocessing also involves feature extraction, transforming images into quantitative radiomics data. Radiomic features capture characteristics of tumor shape, intensity, and texture that are often imperceptible to the human eye, particularly in HCC [1].

Model training, validation, and testing

The available dataset is split into three distinct sets to ensure robust evaluation. The training set is used to adjust the model’s internal parameters (weights) iteratively. The validation set is used during training to fine-tune model hyperparameters and prevent overfitting (when the model memorizes the training data but fails to generalize). The testing set is a portion of the original data kept entirely separate and used only once, at the end, to assess final real-world performance. Crucially, modern high-quality research also requires external validation—testing the model on data collected from a different institution or population to confirm generalizability. Studies that omit external validation risk a significant drop in accuracy when the model is applied in practice, because strong internal results may simply reflect overfitting to a single institution’s data rather than true generalizability.
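
The sketch below illustrates this splitting logic and the train-versus-test performance gap that signals overfitting; the feature matrix, labels, and model choice are arbitrary placeholders, and a true external validation set would come from a separate institution rather than from this split.

```python
# Sketch of a train/validation/test split; features, labels, and the model are placeholders.
# A true external test set would come from a different institution, loaded separately.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))    # stand-in radiomic/clinical feature matrix
y = rng.integers(0, 2, size=300)  # stand-in binary outcome labels

# 60% training, 20% validation (hyperparameter tuning), 20% internal test (used once).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# A large gap between training and test AUC is the classic signature of overfitting.
print("train AUC:", round(roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]), 2))
print("test AUC: ", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
```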

Performance evaluation and clinical deployment

Model performance is quantified using key metrics relevant to clinical risk, such as the Area Under the Curve (AUC), accuracy, sensitivity, and specificity. However, the process does not end with high metrics. The final stage is deployment—the envisioned pathway from a lab algorithm to routine clinical use. This requires mitigating ethical biases and ensuring the AI tool seamlessly integrates into existing workflows, minimizing disruption to the IR suite [37]. Deployment is the ultimate safeguard for determining whether the AI tool can provide a genuine clinical benefit.
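
For reference, the following sketch shows how these metrics are derived from a model’s predicted probabilities at a chosen operating threshold; the prediction values are illustrative only.

```python
# Sketch of common evaluation metrics from predicted probabilities; all values are illustrative.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                           # stand-in ground-truth outcomes
y_prob = np.array([0.10, 0.40, 0.80, 0.65, 0.30, 0.90, 0.55, 0.20])   # stand-in model probabilities

auc = roc_auc_score(y_true, y_prob)        # threshold-independent discrimination
y_pred = (y_prob >= 0.5).astype(int)       # one fixed operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
accuracy = (tp + tn) / len(y_true)
print(f"AUC={auc:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")
```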

AI applications in the interventional management of liver disease

AI demonstrates significant potential across the full spectrum of interventional liver disease management. Current research and early clinical applications can be organized by the primary clinical problems they aim to solve: managing HCC, assessing and treating PHT, and optimizing patients for surgical resection (Table 4).

Table 4. AI applications in liver interventions: from routine shortcomings to AI solutions.

Phase | Clinical application | Conventional method | AI methodology | AI advantage | Reference(s)
Pre-procedural | Predicting TACE response | Visual CT/MRI & BCLC staging. High inter-observer variability; fails to capture sub-visual tumor heterogeneity. | Multimodal models using radiomics, DL, and clinical data (ALBI, BCLC, AFP). | AUROC > 0.85. Improves patient selection and avoids futile procedures. | [6, 28, 37, 38]
Pre-procedural | Non-invasive PHT assessment | Invasive HVPG measurement. Procedural risk and requirement for highly specialized expertise. | Radiomics and DL models (e.g., aHVPG) analyzing CT features of the liver and spleen. | Non-invasively estimate HVPG to stratify risk and guide TIPS candidacy. | [1, 4, 53]
Pre-procedural | Predicting post-TIPS complications | Clinical scores (MELD, Child-Pugh). Limited predictive power for post-TIPS OHE. | Radiomics, ANNs, and various ML models. | Accurately forecast the risk of OHE for counseling. | [3, 4]
Pre-procedural | Predicting PVE success | 2D/3D CT volumetry. Volume does not always equal function; difficult to predict actual hypertrophy kinetics. | Multimodal models using Statistical Shape Models to quantify 3D liver anatomy. | Forecast FLR hypertrophy to optimize surgical planning. | [7]
Intra-procedural | Treatment simulation | Standard anatomical landmarks. Fails to account for heat-sink effects or perfusion-based boundaries. | DL models to predict ablation zones and simulate Y-90 radioembolization dosimetry. | Optimize probe placement and increase quantitative accuracy of dosimetry. | [24, 39–41]
Intra-procedural | Image quality improvement | Conventional imaging filters. High radiation dose or poor visualization due to artifacts. | DLR for dose reduction; DL to reduce metal artifacts or generate synthetic DSA. | Lower radiation dose; improve visualization and safety during procedures. | [19, 24, 47]
Post-procedural | Longitudinal monitoring of HCC | mRECIST criteria. Does not account for dynamic metabolic changes or internal necrosis patterns. | DL (Transformers) using multi-time-point MRI data to track tumor changes. | More accurate prognostic stratification than diameter-based criteria. | [28]
Post-procedural | Detecting tumor recurrence | Manual surveillance review. Potential for human error in identifying subtle early progression. | ML, radiomics, and CNNs. | Automated and early detection of LTP after ablation. | [24, 46]

AI: artificial intelligence; TACE: transarterial chemoembolization; BCLC: Barcelona Clinic Liver Cancer; DL: deep learning; ALBI: albumin-bilirubin; AFP: alpha-fetoprotein; AUROC: area under the receiver operating characteristic curve; PHT: portal hypertension; HVPG: hepatic venous pressure gradient; TIPS: transjugular intrahepatic portosystemic shunt; MELD: model for end-stage liver disease; OHE: overt hepatic encephalopathy; ANNs: artificial neural networks; ML: machine learning; FLR: future liver remnant; Y-90: Yttrium-90; DLR: DL reconstruction; DSA: digital subtraction angiography; HCC: hepatocellular carcinoma; mRECIST: Modified Response Evaluation Criteria in Solid Tumors; CNNs: convolutional neural networks; LTP: local tumor progression.

Hepatocellular carcinoma

As a primary focus of interventional oncology, HCC has become a key use case for AI development, with a particular emphasis on predicting patient response to locoregional therapies [21].

Pre-procedural patient selection for TACE

TACE is a cornerstone therapy for intermediate-stage HCC, but patient response is highly variable [6, 37]. Consequently, the most studied application of AI in interventional oncology is the development of models to predict which patients will benefit from TACE before the procedure is performed [24, 37]. Systematic reviews and meta-analyses have confirmed that these AI models demonstrate strong predictive performance. One meta-analysis of 11 studies found that AI models achieved high pooled area under the receiver operating characteristic (ROC) curve (AUROC) values of 0.89 on internal validation and 0.81 on external validation, confirming their robustness [38].

A consistent theme is that multimodal models that integrate different data types yield the best results. Systematic reviews involving 23 studies with 4,486 patients have found that models combining clinical variables [including albumin-bilirubin (ALBI) grade, BCLC stage, and alpha-fetoprotein (AFP) level] with radiologic features (including tumor diameter, distribution, and peritumoral arterial enhancement) achieve higher predictive performance than models using clinical or imaging features alone [6, 37]. However, a recent meta-analysis added nuance, finding no statistically significant performance difference between advanced DL models and traditional handcrafted radiomics (HCR) models, nor between models with and without added clinical data [38]. The authors suggested that AI models may be able to implicitly learn clinical information directly from imaging data [38].
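
As a schematic of how such multimodal models are typically assembled, the sketch below combines a few clinical variables with stand-in radiomic features in a simple cross-validated classifier. The variables, synthetic data, and model choice are illustrative assumptions and are not drawn from the studies cited above.

```python
# Illustrative multimodal model: clinical variables plus radiomic features -> TACE response.
# The variables, synthetic data, and model choice are assumptions for demonstration only
# and are not drawn from any published pipeline cited in this review.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
features = pd.DataFrame({
    "albi_grade": rng.integers(1, 4, n),           # clinical
    "bclc_stage_code": rng.integers(0, 3, n),      # clinical (encoded)
    "log_afp": rng.normal(2.5, 1.0, n),            # clinical
    "tumor_diameter_cm": rng.normal(4.0, 1.5, n),  # imaging
})
for i in range(10):                                 # stand-in radiomic texture features
    features[f"radiomic_{i:02d}"] = rng.normal(size=n)
response = rng.integers(0, 2, n)                    # stand-in objective-response label

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auroc = cross_val_score(model, features.values, response, cv=5, scoring="roc_auc")
print("cross-validated AUROC:", round(auroc.mean(), 2))  # ~0.5 on random data; real cohorts should do better
```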

Longitudinal monitoring of treatment response

AI is also advancing beyond static, single-time-point assessments to perform longitudinal analysis that tracks tumor changes over time. A state-of-the-art study by Yao et al. [28] developed a DL model, ProgSwin-UNETR, to predict the long-term prognosis of HCC patients by analyzing a series of arterial-phase MRI scans taken at three different time points: before treatment, after the first TACE, and after the second TACE. By learning from these dynamic changes, the model stratified patients into four distinct risk groups with high accuracy (AUC of 0.92) and significantly outperformed both traditional radiomics models and the standard Modified Response Evaluation Criteria in Solid Tumors (mRECIST) criteria in predicting patient survival [28].

Intra- and post-procedural applications for HCC

Beyond TACE, AI tools are being developed for other liver-directed therapies.

  • AI for Y-90 dosimetry and planning: Segmentation of organs at risk and tumors is a critical, labor-intensive step in Y-90 TARE dosimetry planning. CNNs have been successfully developed for the automated segmentation of lungs, liver, and tumors on Tc-99m MAA SPECT/CT images, drastically reducing operator time [39]. AI is also being used to improve the technical accuracy of the dosimetry itself. A DL framework that employs CNNs for scatter correction and absorbed dose-rate estimation was developed to mitigate the impact of poor image quality from bremsstrahlung SPECT. This model outperformed the conventional Monte Carlo (MC) dosimetry method in virtual patient studies, improving the Normalized Mean Absolute Error (NMAE) by 66% while offering faster computation [40]. Crucially, advanced AI tools that incorporate multimodal data are necessary because standard anatomical segmentation is insufficient for Y-90 TARE planning. Using contrast-enhanced Cone-Beam CT (CBCT) to define liver perfusion territories (LPTs), one study found that using standard anatomical landmarks instead of perfusion-based boundaries could lead to dosimetric errors of up to 21 Gy in the left liver lobe, highlighting the critical value of AI-assisted image registration and segmentation of functional territories [41]. (A simplified voxel-level dose calculation sketch follows this list.)

  • Treatment simulation and image quality: For thermal ablation, AI-driven treatment simulation models can predict the size and shape of an ablation zone before the procedure, accounting for real-world factors such as the heat-sink effect from nearby blood vessels [24, 42, 43]. For radioembolization, AI can be used to automate the segmentation of the liver and tumors on planning scans for dosimetry and to simulate the biodistribution of Y-90 microspheres [24, 39–41, 44]. For follow-up, AI models are demonstrating high accuracy (AUC up to 0.99) in detecting local tumor progression (LTP) on surveillance CT scans after thermal ablation [24, 45, 46].

  • Image quality improvement: AI enhances the quality of the images that guide interventions. DL reconstruction (DLR) algorithms reduce image noise and enable significant reductions in radiation dose during CT-guided procedures while maintaining image quality [24, 47]. Another application involves DL models that generate high-quality, artifact-free synthetic digital subtraction angiography (DSA) images for abdominal angiography, overcoming motion-related misregistration and potentially reducing radiation exposure [19, 24, 47].

  • Dynamic video analysis and real-time AI: While most current AI applications in IR rely on static pre-procedural imaging, significant progress involves the analysis of dynamic, video-based data generated during fluoroscopy and angiography. Unlike static CT or MRI, procedural video requires AI models capable of temporal reasoning—understanding how structures or tools move over time. Methodologies developed in adjacent fields, such as real-time polyp detection in colonoscopy, provide a valuable roadmap for IR [48–50]. In gastroenterology, DL models (e.g., CNNs combined with temporal filtering) have reached high levels of accuracy in identifying lesions on live video feeds, reducing “miss rates” significantly. Transferring these approaches to IR could enable real-time “computer-aided detection” of subtle findings, such as the early detection of liquid embolic migration or the automated tracking of catheter tips during complex navigation [51, 52]. Such tools could shift the role of AI from a pre-procedural planning aid to an active, “over-the-shoulder” safety monitor during live interventions.
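
To give a sense of the quantity these dosimetry tools estimate, the sketch below applies the rough “local deposition” approximation, in which the absorbed dose in each voxel is taken as proportional to the activity it contains, using the commonly quoted factor of approximately 50 Gy·kg/GBq for complete Y-90 decay. The activity map, voxel size, and tissue density are placeholders; clinical dosimetry relies on validated software and on segmentation of perfusion territories as described above, not on this simplification.

```python
# Rough "local deposition" dosimetry sketch for Y-90 radioembolization: each voxel's absorbed
# dose is assumed proportional to the activity it contains. The factor of ~49.67 Gy.kg/GBq for
# complete Y-90 decay is the commonly quoted approximation; the activity map, voxel size, and
# density below are placeholders, and clinical dosimetry relies on validated software.
import numpy as np

GY_KG_PER_GBQ = 49.67  # approximate energy released per GBq of Y-90 decaying to completion

voxel_size_mm = np.array([2.0, 2.0, 2.0])                # stand-in SPECT/CT voxel spacing
voxel_volume_ml = float(np.prod(voxel_size_mm)) / 1000.0
voxel_mass_kg = voxel_volume_ml * 1.05 / 1000.0           # ~1.05 g/mL soft-tissue density

activity_gbq = np.random.rand(16, 16, 16) * 1e-6          # stand-in per-voxel activity map (GBq)
dose_gy = GY_KG_PER_GBQ * activity_gbq / voxel_mass_kg    # dose stays in the source voxel

tumor_mask = np.zeros(dose_gy.shape, dtype=bool)
tumor_mask[4:10, 4:10, 4:10] = True
print("mean tumor dose (Gy):    ", round(float(dose_gy[tumor_mask].mean()), 1))
print("mean non-tumor dose (Gy):", round(float(dose_gy[~tumor_mask].mean()), 1))
```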

Portal hypertension

AI is developing powerful, non-invasive tools to assist in diagnosing and managing PHT and its complications.

  • Non-invasive diagnosis and risk stratification: The gold standard for assessing the severity of PHT is the invasive measurement of the hepatic venous pressure gradient (HVPG) [4, 53]. To overcome this limitation, an automated AI model (aHVPG) was developed that uses radiomics from CT scans of the liver and spleen to accurately estimate the HVPG, significantly outperforming conventional non-invasive tools [53]. Another multimodal model combined clinical data (portal vein diameter, Child-Pugh score) with radiomic and DL features extracted from the non-tumorous liver parenchyma to predict the presence of PHT [1]. (An illustrative estimation sketch follows this list.)

  • Predicting post-TIPS complications: For patients undergoing TIPS, a major concern is the risk of post-procedural OHE. Several AI approaches—including CT-based radiomics, ANNs, and other ML models—have been successfully used to predict the risk of post-TIPS OHE, with models consistently achieving high AUROCs greater than 0.80 [3, 4].
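
A schematic of the non-invasive pressure-estimation workflow from the first point above is sketched here: imaging-derived features are regressed onto measured HVPG values, and the conventional threshold of 10 mmHg or higher is then applied to flag clinically significant portal hypertension. All features, pressures, and the model choice are synthetic placeholders rather than the published aHVPG model.

```python
# Illustrative non-invasive HVPG estimation: regress imaging-derived features onto measured HVPG,
# then apply the conventional threshold of >= 10 mmHg for clinically significant portal hypertension.
# Features, pressures, and the model are synthetic placeholders, not the published aHVPG model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(250, 30))                                    # stand-in liver/spleen radiomic features
hvpg = np.clip(12 + 3 * X[:, 0] + rng.normal(0, 2, 250), 1, 30)   # synthetic pressures (mmHg)

X_tr, X_te, y_tr, y_te = train_test_split(X, hvpg, test_size=0.25, random_state=0)
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = reg.predict(X_te)

print("MAE (mmHg):", round(mean_absolute_error(y_te, pred), 2))
csph_pred, csph_true = pred >= 10, y_te >= 10                     # clinically significant PHT flag
print("CSPH classification accuracy:", round(float((csph_pred == csph_true).mean()), 2))
```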

Surgical optimization (pre-hepatectomy IR)

AI is also being applied to PVE, a critical IR procedure performed to induce hypertrophy of the future liver remnant (FLR) before a major hepatectomy for colorectal liver metastases [7]. The success of the subsequent surgery depends on achieving adequate liver growth. A recent state-of-the-art, multicenter study developed an ML model to predict post-PVE outcomes, including the final FLR percentage [7]. This study is a benchmark for advanced AI methodology, as it integrated multimodal data (clinical, laboratory, and radiomic features) and introduced a novel Statistical Shape Model to mathematically quantify the 3D shape of the liver as a predictive feature. Critically, the study validated its model on an external dataset from a separate institution, demonstrating strong generalizability and addressing a common limitation in AI research [7].
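
For context on the endpoints such models predict, the worked example below applies the standard future liver remnant metrics (FLR percentage, degree of hypertrophy, and kinetic growth rate) to hypothetical CT volumetry numbers. The denominator shown (total liver volume minus tumor volume) is one common convention; some centers instead use a standardized, body-surface-area-derived total liver volume.

```python
# Worked example of standard future liver remnant (FLR) metrics around portal vein embolization.
# All volumes are hypothetical CT volumetry numbers, not patient data. The denominator used here
# (total liver volume minus tumor volume) is one common convention; some centers instead use a
# standardized, body-surface-area-derived total liver volume.

def flr_percent(flr_ml: float, total_liver_ml: float, tumor_ml: float = 0.0) -> float:
    """FLR as a percentage of the total functional (tumor-free) liver volume."""
    return 100.0 * flr_ml / (total_liver_ml - tumor_ml)

flr_pre, flr_post = 350.0, 520.0    # mL, before and ~4 weeks after PVE (hypothetical)
total_liver, tumor = 1600.0, 120.0  # mL (hypothetical volumetry)
weeks_since_pve = 4.0

pct_pre = flr_percent(flr_pre, total_liver, tumor)
pct_post = flr_percent(flr_post, total_liver, tumor)
degree_of_hypertrophy = pct_post - pct_pre                     # absolute percentage-point gain
kinetic_growth_rate = degree_of_hypertrophy / weeks_since_pve  # percentage points per week

print(f"FLR%: {pct_pre:.1f} -> {pct_post:.1f}")
print(f"Degree of hypertrophy: {degree_of_hypertrophy:.1f} points")
print(f"Kinetic growth rate: {kinetic_growth_rate:.2f} points/week")
```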

Biliary interventions and advanced fatty liver

Beyond oncology and PHT, AI is increasingly relevant in the management of biliary obstructive diseases and metabolic liver conditions. In biliary interventions, DL models, such as CNNs, are being developed to assist in the automated mapping of the biliary tree from magnetic resonance cholangiopancreatography (MRCP) [54]. These tools can accurately detect common bile duct stones (90.5%) and distinguish between benign and malignant biliary strictures with high sensitivity (82.4%), potentially guiding complex interventions, such as percutaneous transhepatic biliary drainage (PTBD), by reducing the reliance on extensive ductal opacification on fluoroscopy [54, 55]. Furthermore, novel augmented reality (AR) navigation systems are emerging that automatically register the biliary anatomy to 3D CT coordinates, allowing for precise real-time tracking of interventional instrument tips during biliary procedures [56].

In the context of Metabolic Dysfunction-Associated Steatotic Liver Disease (MASLD), AI-powered ultrasound and CT tools now facilitate the non-invasive quantification of liver fat and fibrosis [57]. DL algorithms applied to non-enhanced CT scans can automatically measure liver attenuation and convert it to a fat fraction, achieving high correlation with manual measurements and traditional MRI-PDFF (r2 = 0.92) [58]. This capability is critical for IR when assessing procedural safety; for instance, quantifying the degree of steatosis or fibrosis in the non-tumorous liver is essential for predicting the risk of post-embolization liver failure or evaluating the quality of the FLR after PVE [57, 58].

Key challenges on the path to clinical integration

The integration of AI into the interventional management of liver disease is rapidly moving from a theoretical possibility to a clinical reality. The evidence demonstrates that AI is poised to enhance every phase of the IR workflow. In the pre-procedural phase, AI models are showing robust performance in predicting treatment outcomes for core liver-directed procedures, including TACE and TIPS [3, 6]. Intra-procedurally, advanced imaging techniques are reducing radiation dose and improving image quality [24, 46]. Post-procedurally, AI automates the labor-intensive tasks of surveillance and follow-up, offering a level of consistency that can surpass human performance [59]. However, the path from a promising algorithm to a fully integrated and trusted clinical tool is paved with significant challenges (Table 5).

Table 5. Key challenges and future directions for AI in liver interventions.

Category | Key points | Description | Reference(s)
Challenges | Data-related hurdles | Scarcity of large, high-quality, and standardized IR datasets. “Garbage in, garbage out”: poor image quality leads to AI failure. Ethical issues surrounding data privacy, ownership, and security. | [19, 23, 36, 59, 60]
Challenges | Methodological barriers | The “black box” problem and the need for explainable AI (XAI). Lack of external validation, leading to overfitting. High heterogeneity across studies makes comparing results difficult. | [1, 20, 23, 28, 37, 38, 61, 62]
Challenges | Clinical & ethical dilemmas | Risk of amplifying existing societal biases (algorithmic bias). Unclear accountability for AI-related adverse events. Difficulty with practical workflow integration. Risk of “futile technologization” (expensive tech with marginal benefit). | [19, 35, 37, 60]
Future directions | A guided research agenda | The SIR Foundation has prioritized HCC as a key use case. Immediate research needs include tools for segmentation, simulation, and navigation. A top priority is creating shared data commons to accelerate research. | [21]
Future directions | Emerging technologies | Shift toward powerful, adaptable foundation models. Use of generative AI (e.g., ChatGPT) as a research tool. Creation of “synthetic cohorts” to serve as control arms in clinical trials. | [19, 20, 64]
Future directions | Ensuring quality & trust | Widespread adoption of standardized reporting guidelines, such as the iCARE checklist, is essential to ensure future research is reproducible, transparent, and trustworthy. | [19, 63]

AI: artificial intelligence; IR: interventional radiology; HCC: hepatocellular carcinoma; iCARE: Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation.

Data-related hurdles

The performance of any AI model is fundamentally dependent on the data used to train it. A primary challenge in IR is the relative scarcity of large, high-quality, and standardized datasets compared to diagnostic radiology, which can limit the development of robust models [19, 36]. This is compounded by the “garbage in, garbage out” principle; AI models can fail if the input data is of poor quality, as demonstrated in studies where data heterogeneity and quality issues are a primary concern [36].

Furthermore, the use of patient data raises profound ethical questions about data privacy, ownership, and security [23, 60]. A proposed framework suggests treating de-identified patient data for secondary research as a “public good” that can be shared to advance medicine but not sold for profit, a concept that requires broad consensus and strong governance to implement [60].

Methodological and technical barriers

A common concern among clinicians is the “black box” nature of many DL models, where the reasoning behind a prediction is not easily understood [20, 23]. This lack of interpretability can be a major barrier to clinical trust. Consequently, a key area of modern AI research is explainable AI (XAI), which aims to open this black box and provide insights into the model’s decision-making process [61]. Techniques such as Grad-CAM++, which generate heatmaps to visualize the image regions an AI is focusing on, are a practical example of XAI in action [28].
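
As an illustration of how such heatmaps are produced, the sketch below implements plain Grad-CAM (the simpler relative of Grad-CAM++ mentioned above) using forward and backward hooks on the last convolutional block of an untrained stand-in network; the model, input, and target class are placeholders, not a clinical system.

```python
# Minimal Grad-CAM sketch (the simpler relative of Grad-CAM++): highlight the image regions that
# drive a CNN prediction. The untrained network, random input, and target class are placeholders.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
captured = {}

def save_activation(module, inputs, output):
    captured["activation"] = output                 # feature maps of the last conv block

def save_gradient(module, grad_input, grad_output):
    captured["gradient"] = grad_output[0]           # gradient of the score w.r.t. those maps

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)                 # stand-in input image
logits = model(image)
logits[0, logits.argmax()].backward()               # backpropagate the top-class score

# Grad-CAM: weight each feature map by its average gradient, sum, and keep positive evidence.
weights = captured["gradient"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((weights * captured["activation"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)                 # normalize to a [0, 1] heatmap
print(cam.shape)  # (1, 1, 224, 224): overlay on the input to visualize salient regions
```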

The field is also challenged by a lack of methodological rigor. Systematic reviews and meta-analyses have found significant heterogeneity across studies, with different research groups using varied algorithms and datasets, making it difficult to compare results directly [38, 62]. Many studies are single-center and lack external validation, raising questions about their generalizability and sometimes leading to overfitting, where a model performs well on its training data but fails on new, unseen data [1, 37].

Clinical and ethical dilemmas

Beyond the data and methods, several ethical and practical issues must be addressed.

  • Algorithmic bias: AI models trained on historical healthcare data can inadvertently learn and amplify existing societal biases related to race, socioeconomic status, or geography, potentially worsening healthcare disparities [19, 60]. For example, an AI trained on data where disadvantaged patients have worse outcomes might learn to recommend against treating them [60].

  • Accountability for errors: A critical question is who is responsible when an AI-related adverse event occurs. A reasonable framework approaches this similarly to medical device litigation, where fault could lie with the developer for a flawed product or with the clinician for its improper use [60].

  • Workflow integration: Many AI tools developed in a research setting are difficult to integrate into complex clinical workflows. Cumbersome requirements, such as the need for manual tumor segmentation before a model can be used, are a major barrier to practical adoption [37].

  • Futile technologization: Finally, there is a risk of developing expensive, sophisticated technologies that provide only marginal clinical benefit, a phenomenon termed “futile technologization” [35]. Experts caution that innovation must be rigorously evaluated to ensure it is driven by clinical relevance and improves patient outcomes, rather than by commercial pressure [35].

Discussion

The findings of this narrative review underscore a pivotal shift in the interventional management of liver disease. Traditional clinical decision-making relies on a diverse yet often subjective set of data modalities that are frequently prone to inter-observer variability and high cognitive load [23]. The integration of AI, particularly DL and radiomics, represents a paradigm shift from qualitative visual assessments to a quantitative, data-driven approach that extracts diagnostic and prognostic information often imperceptible to the human eye [15, 24].

In the management of HCC, AI models have demonstrated an ability to stratify patient risk with an accuracy that matches or, in some cases, surpasses that of expert radiologists, particularly in predicting response to locoregional therapies, such as TACE [6, 38]. Similarly, the development of non-invasive tools for PHT assessment addresses a critical clinical need by providing support for early intervention and personalized treatment strategies without the risks associated with invasive HVPG measurement [4, 53]. Beyond oncology and cirrhosis, the emerging use of AI in biliary obstructive diseases and automated fatty liver quantification further broadens the scope of the “AI-augmented” interventionalist, allowing for more precise procedural guidance and comprehensive risk assessment [54, 55].

The clinical impact of these tools lies in their potential to standardize diagnostic quality and optimize outcomes [13]. While currently most advanced in diagnostic and post-processing tasks, interventional applications are beginning to mature, offering measurable gains in catheter navigation, probe placement, and ablation success [33, 52].

Challenges and future directions

Despite the transformative potential of AI in hepatology and IR, several significant hurdles remain that impede its widespread clinical adoption (Table 5).

  • Methodological and data barriers: Most current AI research is retrospective and limited by small, single-center datasets, which raises concerns regarding the generalizability and robustness of models across different clinical environments [19, 36].

  • Interpretability and trust: The “black box” nature of complex DL architectures erodes clinician trust, as the rationale behind AI-generated recommendations is often opaque [20, 61].

  • Workflow integration: Practical implementation faces logistical hurdles, including the need for institutional support, interoperability with existing electronic health records, and clinician training [22, 37].

  • Ethical and regulatory issues: Algorithmic bias, data privacy concerns, and the lack of clear legal liability frameworks for AI-driven errors remain critical issues needing resolution [19, 60].

Moving forward, the field must be guided by the research priorities established by the SIR Foundation Research Consensus Panel, which emphasized the creation of “shared data commons” and prioritized HCC as the primary use case for personalized, AI-driven algorithms [21]. Future research must prioritize multicenter, prospective validation and the development of XAI to improve model transparency [28, 63]. Technological paradigms are already shifting toward foundation models and generative AI (e.g., ChatGPT) to accelerate clinical research and statistical analysis [20, 64]. Furthermore, innovative concepts such as “synthetic cohorts” of virtual patients may soon mitigate the difficulties of clinical trial recruitment [19]. Ultimately, the successful and responsible integration of AI will depend on the adoption of standardized reporting guidelines, such as the Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE) checklist, to ensure future research is reproducible and trustworthy [19, 63].

Conclusions

AI is no longer a futuristic concept, but an active force poised to revolutionize the interventional management of liver disease. By extracting sub-visual radiomic features and processing complex datasets, AI provides a measurable advantage over routine qualitative methods in predicting TACE response, non-invasively assessing PHT, and forecasting surgical outcomes. However, the transition from “code to bedside” requires the IR community to lead efforts in data standardization and methodological rigor. AI will not replace the interventionalist but will instead create an “AI-augmented” paradigm, where clinicians are empowered by precision tools to deliver safer, personalized, and more effective care for patients with liver disease.

Abbreviations

AI: artificial intelligence

ANNs: artificial neural networks

AUC: Area Under the Curve

AUROC: area under the receiver operating characteristic curve

BCLC: Barcelona Clinic Liver Cancer

CNNs: convolutional neural networks

DL: deep learning

FLR: future liver remnant

HCC: hepatocellular carcinoma

HVPG: hepatic venous pressure gradient

IR: interventional radiology

ML: machine learning

OHE: overt hepatic encephalopathy

PHT: portal hypertension

PVE: portal vein embolization

TACE: transarterial chemoembolization

TARE: transarterial radioembolization

TIPS: transjugular intrahepatic portosystemic shunt

XAI: explainable artificial intelligence

Declarations

Author contributions

HY: Conceptualization, Investigation, Writing—original draft, Writing—review & editing. The author read and approved the submitted version.

Conflicts of interest

The author declares that there are no conflicts of interest or competing financial interests to disclose.

Ethical approval

As this is a narrative review of previously published literature and does not involve original human or animal research, ethical approval from an Institutional Review Board (IRB) was not required.

Consent to participate

Not applicable.

Consent to publication

Not applicable; this manuscript does not contain individual patient data or identifying information.

Availability of data and materials

No new datasets were generated or analyzed during the preparation of this narrative review. All information presented is derived from cited, publicly available literature.

Funding

No external funding was received for this study.

Copyright

© The Author(s) 2026.

Publisher’s note

Open Exploration maintains a neutral stance on jurisdictional claims in published institutional affiliations and maps. All opinions expressed in this article are the personal views of the author(s) and do not represent the stance of the editorial team or the publisher.

References

1. He Y, Gao Q, Mo S, Huang K, Liao Y, Liang T, et al. Artificial intelligence algorithm was used to establish and verify the prediction model of portal hypertension in hepatocellular carcinoma based on clinical parameters and imaging features. J Gastrointest Oncol. 2025;16:15975. [DOI] [PubMed] [PMC]
2. Mauro E, de Castro T, Zeitlhoefler M, Sung MW, Villanueva A, Mazzaferro V, et al. Hepatocellular carcinoma: Epidemiology, diagnosis and treatment. JHEP Rep. 2025;7:101571. [DOI] [PubMed] [PMC]
3. Kalo E, Read S, George J, Majumdar A, Ahlenstiel G. Can Artificial Intelligence and Machine Learning Transform Prediction and Treatment of Post-Transjugular Intrahepatic Portosystemic Shunt (TIPS) Overt Hepatic Encephalopathy? Gastro Hep Adv. 2024;4:100560. [DOI] [PubMed] [PMC]
4. Wang Q, Jiao J, Zhang C. Application of artificial intelligence in portal hypertension and esophagogastric varices. World J Gastroenterol. 2025;31:108508. [DOI] [PubMed] [PMC]
5. Hung ML, Lee EW. Role of Transjugular Intrahepatic Portosystemic Shunt in the Management of Portal Hypertension: Review and Update of the Literature. Clin Liver Dis. 2019;23:73754. [DOI] [PubMed]
6. Keshavarz P, Nezami N, Yazdanpanah F, Khojaste-Sarakhsi M, Mohammadigoldar Z, Azami M, et al. Prediction of treatment response and outcome of transarterial chemoembolization in patients with hepatocellular carcinoma using artificial intelligence: A systematic review of efficacy. Eur J Radiol. 2025;184:111948. [DOI] [PubMed]
7. Kuhn TN, Engelhardt WD, Kahl VH, Alkukhun A, Gross M, Iseke S, et al. Artificial Intelligence-Driven Patient Selection for Preoperative Portal Vein Embolization for Patients with Colorectal Cancer Liver Metastases. J Vasc Interv Radiol. 2025;36:47788. [DOI] [PubMed]
8. Ayyub J, Dabhi KN, Gohil NV, Tanveer N, Hussein S, Pingili S, et al. Evaluation of the Safety and Efficacy of Conventional Transarterial Chemoembolization (cTACE) and Drug-Eluting Bead (DEB)-TACE in the Management of Unresectable Hepatocellular Carcinoma: A Systematic Review. Cureus. 2023;15:e41943. [DOI] [PubMed] [PMC]
9. Patel KR, Menon H, Patel RR, Huang EP, Verma V, Escorcia FE. Locoregional Therapies for Hepatocellular Carcinoma: A Systematic Review and Meta-Analysis. JAMA Netw Open. 2024;7:e2447995. [DOI] [PubMed] [PMC]
10. Soykan EA, Aarts BM, Lopez-Yurda M, Kuhlmann KFD, Erdmann JI, Kok N, et al. Predictive Factors for Hypertrophy of the Future Liver Remnant After Portal Vein Embolization: A Systematic Review. Cardiovasc Intervent Radiol. 2021;44:135566. [DOI] [PubMed] [PMC]
11. Letzen B, Wang CJ, Chapiro J. The Role of Artificial Intelligence in Interventional Oncology: A Primer. J Vasc Interv Radiol. 2019;30:3841.e1. [DOI] [PubMed]
12. Gurgitano M, Angileri SA, Rodà GM, Liguori A, Pandolfi M, Ierardi AM, et al. Interventional Radiology ex-machina: impact of Artificial Intelligence on practice. Radiol Med. 2021;126:9981006. [DOI] [PubMed] [PMC]
13. Boeken T, Feydy J, Lecler A, Soyer P, Feydy A, Barat M, et al. Artificial intelligence in diagnostic and interventional radiology: Where are we now? Diagn Interv Imaging. 2023;104:15. [DOI] [PubMed]
14. Forner A, Reig M, Bruix J. Hepatocellular carcinoma. Lancet. 2018;391:130114. [DOI] [PubMed]
15. Gillies RJ, Kinahan PE, Hricak H. Radiomics: Images Are More than Pictures, They Are Data. Radiology. 2016;278:56377. [DOI] [PubMed] [PMC]
16. Lencioni R, de Baere T, Soulen MC, Rilling WS, Geschwind JH. Lipiodol transarterial chemoembolization for hepatocellular carcinoma: A systematic review of efficacy and safety data. Hepatology. 2016;64:10616. [DOI] [PubMed]
17. Lencioni R. Management of hepatocellular carcinoma with transarterial chemoembolization in the era of systemic targeted therapy. Crit Rev Oncol Hematol. 2012;83:21624. [DOI] [PubMed]
18. Thornburg B. Hepatic Encephalopathy following Transjugular Intrahepatic Portosystemic Shunt Placement. Semin Intervent Radiol. 2023;40:2628. [DOI] [PubMed] [PMC]
19. Lesaunier A, Khlaut J, Dancette C, Tselikas L, Bonnet B, Boeken T. Artificial intelligence in interventional radiology: Current concepts and future trends. Diagn Interv Imaging. 2025;106:510. [DOI] [PubMed]
20. Warren BE, Bilbily A, Gichoya JW, Conway A, Li B, Fawzy A, et al. An Introductory Guide to Artificial Intelligence in Interventional Radiology: Part 1 Foundational Knowledge. Can Assoc Radiol J. 2024;75:55867. [DOI] [PubMed]
21. Chapiro J, Allen B, Abajian A, Wood B, Kothary N, Daye D, et al. Proceedings from the Society of Interventional Radiology Foundation Research Consensus Panel on Artificial Intelligence in Interventional Radiology: From Code to Bedside. J Vasc Interv Radiol. 2022;33:111320. [DOI] [PubMed]
22. Najafi A, Cazzato RL, Meyer BC, Pereira PL, Alberich A, López A, et al. CIRSE Position Paper on Artificial Intelligence in Interventional Radiology. Cardiovasc Intervent Radiol. 2023;46:13037. [DOI] [PubMed]
23. Kallini JR, Moriarty JM. Artificial Intelligence in Interventional Radiology. Semin Intervent Radiol. 2022;39:3417. [DOI] [PubMed] [PMC]
24. Matsui Y, Ueda D, Fujita S, Fushimi Y, Tsuboyama T, Kamagata K, et al. Applications of artificial intelligence in interventional oncology: An up-to-date review of the literature. Jpn J Radiol. 2025;43:16476. [DOI] [PubMed] [PMC]
25. D’Amore B, Smolinski-Zhao S, Daye D, Uppot RN. Role of Machine Learning and Artificial Intelligence in Interventional Oncology. Curr Oncol Rep. 2021;23:70. [DOI] [PubMed]
26. Iezzi R, Goldberg SN, Merlino B, Posa A, Valentini V, Manfredi R. Artificial Intelligence in Interventional Radiology: A Literature Review and Future Perspectives. J Oncol. 2019;2019:6153041. [DOI] [PubMed] [PMC]
27. Rawat W, Wang Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017;29:2352449. [DOI] [PubMed]
28. Yao L, Adwan H, Bernatz S, Li H, Vogl TJ. Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma. Radiol Med. 2025;130:151739. [DOI] [PubMed] [PMC]
29. Tan D, Zhai Y, Hu Z, Xu B, Zheng T, Chen Y, et al. Dual-stage artificial intelligence-powered screening for accurate classification of thyroid nodules: enhancing fine needle aspiration biopsy precision. Quant Imaging Med Surg. 2025;15:571938. [DOI] [PubMed] [PMC]
30. Chang SC, Wang P, Wang W, Su TH, Kao JH, Lin C. A BCLC Staging System for Hepatocellular Carcinoma using Swin Transformer and CT Imaging. Annu Int Conf IEEE Eng Med Biol Soc. 2024:14. [DOI] [PubMed]
31. Yan S, Wang C, Chen W, Lyu J. Swin transformer-based GAN for multi-modal medical image translation. Front Oncol. 2022;12:942511. [DOI] [PubMed] [PMC]
32. Liu P, Song Y, Chai M, Han Z, Zhang Y. Swin-UNet++: A Nested Swin Transformer Architecture for Location Identification and Morphology Segmentation of Dimples on 2.25Cr1Mo0.25V Fractured Surface. Materials (Basel). 2021;14:7504. [DOI] [PubMed] [PMC]
33. Cornelis FH, Filippiadis DK, Wiggermann P, Solomon SB, Madoff DC, Milot L, et al. Evaluation of navigation and robotic systems for percutaneous image-guided interventions: A novel metric for advanced imaging and artificial intelligence integration. Diagn Interv Imaging. 2025;106:15768. [DOI] [PubMed]
34. Lee A, Baker TS, Bederson JB, Rapoport BI. Levels of autonomy in FDA-cleared surgical robots: a systematic review. NPJ Digit Med. 2024;7:103. [DOI] [PubMed] [PMC]
35. Bonnet B, Tselikas L. Robotics and artificial intelligence in the real world of interventional radiology: Innovation or illusion? Diagn Interv Imaging. 2025;106:1456. [DOI] [PubMed]
36. Mazaheri S, Loya MF, Newsome J, Lungren M, Gichoya JW. Challenges of Implementing Artificial Intelligence in Interventional Radiology. Semin Intervent Radiol. 2021;38:5549. [DOI] [PubMed] [PMC]
37. Cho EEL, Law M, Yu Z, Yong JN, Tan CS, Tan EY, et al. Artificial Intelligence and Machine Learning Predicting Transarterial Chemoembolization Outcomes: A Systematic Review. Dig Dis Sci. 2025;70:53342. [DOI] [PubMed]
38. Kiani I, Razeghian I, Valizadeh P, Esmaeilian Y, Jannatdoust P, Khosravi B. Performance of Artificial Intelligence Models in Predicting Responsiveness of Hepatocellular Carcinoma to Transarterial Chemoembolization (TACE): A Systematic Review and Meta-Analysis. J Am Coll Radiol. 2026;23:7688. [DOI] [PubMed]
39. Chaichana A, Frey EC, Teyateeti A, Rhoongsittichai K, Tocharoenchai C, Pusuwan P, et al. Automated segmentation of lung, liver, and liver tumors from Tc-99m MAA SPECT/CT images for Y-90 radioembolization using convolutional neural networks. Med Phys. 2021;48:787790. [DOI] [PubMed] [PMC]
40. Jia Y, Li Z, Akhavanallaf A, Fessler JA, Dewaraja YK. 90Y SPECT scatter estimation and voxel dosimetry in radioembolization using a unified deep learning framework. EJNMMI Phys. 2023;10:82. [DOI] [PubMed] [PMC]
41. Rangraz EJ, Coudyzer W, Maleux G, Baete K, Deroose CM, Nuyts J. Multi-modal image analysis for semi-automatic segmentation of the total liver and liver arterial perfusion territories for radioembolization. EJNMMI Res. 2019;9:19. [DOI] [PubMed] [PMC]
42. He K, Liu X, Shahzad R, Reimer R, Thiele F, Niehoff J, et al. Advanced Deep Learning Approach to Automatically Segment Malignant Tumors and Ablation Zone in the Liver With Contrast-Enhanced CT. Front Oncol. 2021;11:669437. [DOI] [PubMed] [PMC]
43. Lin Z, Li G, Chen J, Chen Z, Chen Y, Lin S. Effect of heat sink on the recurrence of small malignant hepatic tumors after radiofrequency ablation. J Cancer Res Ther. 2016;12:C1538. [DOI] [PubMed]
44. Plachouris D, Tzolas I, Gatos I, Papadimitroulas P, Spyridonidis T, Apostolopoulos D, et al. A deep-learning-based prediction model for the biodistribution of 90Y microspheres in liver radioembolization. Med Phys. 2021;48:742738. [DOI] [PubMed]
45. Crombé A, Palussière J, Catena V, Cazayus M, Fonck M, Béchade D, et al. Radiofrequency ablation of lung metastases of colorectal cancer: could early radiomics analysis of the ablation zone help detect local tumor progression? Br J Radiol. 2023;96:20201371. [DOI] [PubMed] [PMC]
46. Ren H, An C, Fu W, Wu J, Yao W, Yu J, et al. Prediction of local tumor progression after microwave ablation for early-stage hepatocellular carcinoma with machine learning. J Cancer Res Ther. 2023;19:97887. [DOI] [PubMed]
47. Matsumoto T, Endo K, Yamamoto S, Suda S, Tomita K, Kamei S, et al. Dose length product and outcome of CT fluoroscopy-guided interventions using a new 320-detector row CT scanner with deep-learning reconstruction and new bow-tie filter. Br J Radiol. 2022;95:20211159. [DOI] [PubMed] [PMC]
48. Kim Y, Keum J, Kim J, Chun J, Oh S, Kim K, et al. Real-World Colonoscopy Video Integration to Improve Artificial Intelligence Polyp Detection Performance and Reduce Manual Annotation Labor. Diagnostics (Basel). 2025;15:901. [DOI] [PubMed] [PMC]
49. Yao L, Xiong H, Li Q, Wang W, Wu Z, Tan X, et al. Validation of artificial intelligence-based bowel preparation assessment in screening colonoscopy (with video). Gastrointest Endosc. 2024;100:72836.e9. [DOI] [PubMed]
50. Soleymanjahi S, Huebner J, Elmansy L, Rajashekar N, Lüdtke N, Paracha R, et al. Artificial Intelligence-Assisted Colonoscopy for Polyp Detection: A Systematic Review and Meta-analysis. Ann Intern Med. 2024;177:165263. [DOI] [PubMed]
51. Kono K, Sakakura Y, Fujimoto T. Real-Time Artificial Intelligence-Assisted Middle Meningeal Artery Embolization Using Liquid Embolic Agents for Chronic Subdural Hematoma: A Preliminary Experience. Neurosurgery. 2025;[Epub ahead of print]. [DOI] [PubMed]
52. Sakakura Y, Masuo O, Fujimoto T, Terada T, Kono K. Pioneering artificial intelligence-based real time assistance for intracranial liquid embolization in humans: an initial experience. J Neurointerv Surg. 2025;17:74852. [DOI] [PubMed]
53. Yu Q, Huang Y, Li X, Pavlides M, Liu D, Luo H, et al. An imaging-based artificial intelligence model for non-invasive grading of hepatic venous pressure gradient in cirrhotic portal hypertension. Cell Rep Med. 2022;3:100563. [DOI] [PubMed] [PMC]
54. Luo B, Li Z, Zhang K, Wu S, Chen W, Fu N, et al. Using deep learning models in magnetic resonance cholangiopancreatography images to diagnose common bile duct stones. Scand J Gastroenterol. 2024;59:11824. [DOI] [PubMed]
55. Saraiva MM, Ribeiro T, González-Haba M, Castillo BA, Ferreira JPS, Boas FV, et al. Deep Learning for Automatic Diagnosis and Morphologic Characterization of Malignant Biliary Strictures Using Digital Cholangioscopy: A Multicentric Study. Cancers (Basel). 2023;15:4827. [DOI] [PubMed] [PMC]
56. Yang S, Wang Y, Ai D, Geng H, Zhang D, Xiao D, et al. Augmented Reality Navigation System for Biliary Interventional Procedures With Dynamic Respiratory Motion Correction. IEEE Trans Biomed Eng. 2024;71:70011. [DOI] [PubMed]
57. Liu F, Bi M, Jing X, Ding H, Zeng J, Zheng R, et al. Multiparametric US for Identifying Metabolic Dysfunction-associated Steatohepatitis: A Prospective Multicenter Study. Radiology. 2024;310:e232416. [DOI] [PubMed]
58. Graffy PM, Sandfort V, Summers RM, Pickhardt PJ. Automated Liver Fat Quantification at Nonenhanced Abdominal CT for Population-based Steatosis Assessment. Radiology. 2019;293:33442. [DOI] [PubMed] [PMC]
59. van Tongeren OLRM, Vanmaele A, Rastogi V, Hoeks SE, Verhagen HJM, de Bruin JL. Volume Measurements for Surveillance after Endovascular Aneurysm Repair using Artificial Intelligence. Eur J Vasc Endovasc Surg. 2025;69:6170. [DOI] [PubMed]
60. Rockwell HD, Cyphers ED, Makary MS, Keller EJ. Ethical Considerations for Artificial Intelligence in Interventional Radiology: Balancing Innovation and Patient Care. Semin Intervent Radiol. 2023;40:3236. [DOI] [PubMed] [PMC]
61. van Rijswijk RE, Bogdanovic M, Roy J, Yeung KK, Zeebregts CJ, Geelkerken RH, et al. Multimodal Artificial Intelligence Model for Prediction of Abdominal Aortic Aneurysm Shrinkage After Endovascular Repair (the ART in EVAR study). J Endovasc Ther. 2025;15266028251314359. [DOI] [PubMed]
62. Robertshaw H, Karstensen L, Jackson B, Sadati H, Rhode K, Ourselin S, et al. Artificial intelligence in the autonomous navigation of endovascular interventions: a systematic review. Front Hum Neurosci. 2023;17:1239374. [DOI] [PubMed] [PMC]
63. Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, et al. Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE). J Vasc Interv Radiol. 2025;36:13818.e4. [DOI] [PubMed]
64. Prontera PP, Prusciano FR, Lattarulo M, Tsaturyan A, Addabbo F, Sciorio C, et al. ChatGPT artificial intelligence in clinical data analysis: an example comparing standard vs fusion prostate biopsy outcomes after robotic-assisted radical prostatectomy (RaRP). Arch Ital Urol Androl. 2025;97:13596. [DOI] [PubMed]