Jordi Casas-Roma

Workplace: e-Health Center, Universitat Oberta de Catalunya (UOC), Barcelona, Spain

E-mail: jcasasr@cvc.uab.cat

ORCID: https://orcid.org/0000-0002-0617-3303

Research Interests: artificial intelligence, machine learning, graph mining, data privacy

Biography

Jordi Casas-Roma holds a PhD in Computer Science from the Universitat Autònoma de Barcelona (UAB, 2014), an MSc in Advanced Artificial Intelligence from the Universidad Nacional de Educación a Distancia (UNED, 2011) and a BSc in Computer Science from the Universitat Autònoma de Barcelona (UAB, 2002). He is a lecturer in the Computer Science Department at the UAB, where he teaches programming, machine learning, reinforcement learning and complex networks. He is also currently an associate researcher at the Computer Vision Centre (CVC-UAB). His main research interests include artificial intelligence, machine learning, graph mining and data privacy.

Author Articles
Employing Counterfactual Methods to Interpret Convolutional Network Findings in X-Ray Image Detection

By Maider Abad, Eusebio Garcia, Ferran Prados, Jordi Casas-Roma

DOI: https://doi.org/10.5815/ijigsp.2026.02.01, Pub. Date: 8 Apr. 2026

In the rapidly evolving landscape of medical diagnostics, efficient and accurate tools for disease identification are crucial. This study analyzes three convolutional neural network (CNN) architectures (IRV2, ResNet50, and DenseNet121), pre-trained on the ImageNet and RadImageNet datasets, for respiratory disease diagnosis from chest radiographs. We used over 10,000 chest X-ray images, covering COVID-19, pneumonia, and control cases, to train and evaluate these models. RadImageNet-trained models, particularly ResNet50, outperformed their ImageNet-trained counterparts, reaching 94.49% accuracy, 93.92% sensitivity, and 95.59% precision, although the improvement was not statistically significant in most cases. To enhance interpretability, we developed a counterfactual-based method that generates visual explanations of the critical areas influencing diagnostic outcomes. This approach, which requires no access to training data or model internals, identifies the image regions that would change the predicted diagnosis if altered. It aids in understanding model reasoning and can correct misclassifications: our masking method successfully reclassified up to 40.91% of previously misclassified images. By providing clear, model-independent visual explanations, our method aims to foster trust in AI-assisted diagnoses among medical professionals. While preliminary results are promising, further validation with medical experts is needed to confirm the clinical relevance of the highlighted regions and to strengthen the transparency and interpretability of AI decision-making in healthcare. The visual nature of these explanations offers a valuable tool for interpreting complex medical image classification models and may enhance the synergy between AI systems and human expertise in diagnostic processes.
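The article's code is not shown on this page. As a rough sketch of the transfer-learning setup the abstract describes, the snippet below fine-tunes a ResNet50 backbone for the three classes mentioned (COVID-19, pneumonia, control). The layer sizes, hyperparameters, and the RadImageNet weights path are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fine-tuning a pre-trained ResNet50
# backbone for 3-class chest X-ray classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # COVID-19, pneumonia, control

# ResNet50 feature extractor; ImageNet weights ship with Keras.
base = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",
    input_shape=(224, 224, 3),
    pooling="avg",
)
# For the RadImageNet variant, weights would be loaded from a local file:
# base.load_weights("RadImageNet-ResNet50_notop.h5")  # hypothetical path

model = models.Sequential([
    base,
    layers.Dropout(0.5),                              # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new classification head
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```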
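The abstract describes the counterfactual explanations only at a high level. The following black-box, occlusion-style sketch illustrates the general idea of searching for image regions whose masking flips the model's prediction; the function name, patch size, and fill value are assumptions, and the paper's actual method may differ substantially.

```python
# Minimal sketch (assumptions, not the paper's exact algorithm): find patches
# whose occlusion changes the predicted class, using only model outputs.
import numpy as np

def counterfactual_patches(model, image, patch=32, fill=0.0):
    """Return the original predicted class and the (row, col) corners of
    patches whose masking flips it. `model` is any callable mapping a batch
    of images to class probabilities; no weights or training data needed."""
    h, w = image.shape[:2]
    base_class = int(np.argmax(model(image[None])[0]))
    flips = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[r:r + patch, c:c + patch] = fill   # occlude one region
            if int(np.argmax(model(masked[None])[0])) != base_class:
                flips.append((r, c))                  # masking here flips it
    return base_class, flips
```

Regions returned by such a search can be overlaid on the radiograph as a visual explanation; per the abstract, applying the masking method to misclassified inputs recovered correct labels for up to 40.91% of them.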
