Research

My research focuses on the intersection of artificial intelligence and healthcare, particularly in Medical Image Analysis and Explainable AI (XAI). I work on developing reliable and transparent deep learning models that can support real-world clinical decision-making. I am also interested in building trustworthy and secure medical AI systems, ensuring that these technologies are robust, safe, and suitable for practical healthcare use. The key areas I have explored or am currently working on are listed below.
1. Medical Image Analysis & Explainable AI (XAI)
Medical image analysis leverages artificial intelligence to interpret complex clinical scans (such as ultrasound, MRI, and CT) to automate and enhance diagnostic tasks like lesion detection, segmentation, and disease classification. The field is crucial for accelerating early diagnosis, reducing clinical workloads, and minimizing human error in healthcare. However, the practical adoption of these AI systems is severely bottlenecked by two major challenges. First, models often suffer from "domain shift": they fail to generalize across diverse datasets captured by different hospital machines. Second, traditional deep learning models operate as "black boxes," producing predictions without transparent reasoning. Explainable AI (XAI) is essential for addressing this trust deficit. While some visualization tools exist, the field still lacks robust, intrinsically interpretable models that can provide quantitative, clinically aligned reasoning. Until these gaps in generalization and transparency are closed, AI cannot be safely deployed in real-world clinical decision-making. Related Paper:
  • HyFormer-Net: A Synergistic CNN-Transformer with Interpretable Multi-Scale Fusion for Breast Lesion Segmentation and Classification in Ultrasound Images. Mohammad Amanour Rahman. [Under Review] [PDF]
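To make the idea of explainability concrete, here is a minimal sketch of occlusion-based attribution, one of the simplest model-agnostic XAI techniques: each input feature is replaced with a neutral baseline, and the resulting drop in the model's output is taken as that feature's importance. The toy classifier and its weights are purely illustrative assumptions, not the method of the paper above.

```python
# Hedged sketch: perturbation (occlusion) attribution, a simple
# model-agnostic XAI technique. The toy model below is an assumption
# for illustration only.

def toy_classifier(features):
    """Toy malignancy score: a weighted sum of three input features."""
    weights = [0.6, 0.3, 0.1]  # assumed, fixed importances
    return sum(w * f for w, f in zip(weights, features))

def occlusion_attribution(model, features, baseline=0.0):
    """Score each feature by how much the prediction drops when that
    feature is replaced with a neutral baseline value."""
    reference = model(features)
    scores = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = baseline   # "mask out" feature i
        scores.append(reference - model(occluded))
    return scores

scores = occlusion_attribution(toy_classifier, [1.0, 1.0, 1.0])
print(scores)  # feature 0 scores highest: the toy model weights it most
```

Real XAI methods for imaging (e.g. occlusion over pixel patches, or gradient-based saliency) follow the same logic at scale; intrinsically interpretable models instead build such reasoning into the architecture itself.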
2. Federated Learning & Privacy-Preserving Medical AI
Federated Learning (FL) is a decentralized machine learning paradigm that enables multiple healthcare institutions to collaboratively train robust AI models without ever sharing or transferring raw patient data. In the medical domain, where diagnostic data is highly sensitive and protected by strict privacy regulations (such as HIPAA), FL is paramount. It allows the creation of generalized, unbiased algorithms by learning from a diverse, multi-institutional population while keeping patient records strictly within hospital firewalls. Despite its immense potential, privacy-preserving medical AI faces significant limitations. The inherent heterogeneity of data across different clinics makes model convergence and cross-site consistency extremely difficult. Furthermore, maintaining the interpretability of models within a decentralized network remains a critical gap. The field currently struggles to resolve the trilemma of ensuring strict data privacy (e.g., through differential privacy), maintaining high diagnostic performance, and providing consistent, trustworthy clinical explanations across diverse healthcare silos. Overcoming these barriers is essential for scaling collaborative medical AI. Related Paper:
  • FedXAI: Privacy-Preserving Federated Learning with Intrinsic Explainability for Medical Imaging. Mohammad Amanour Rahman. [Under Review]
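The collaborative training loop described above can be sketched with Federated Averaging (FedAvg), the canonical FL aggregation rule: each client trains locally on its own data, and a server averages the resulting weights, weighted by local dataset size. The 1-D least-squares model, learning rate, and simulated "hospitals" below are toy assumptions for illustration, not the protocol of the paper above.

```python
# Hedged sketch: Federated Averaging (FedAvg). Raw (x, y) pairs never
# leave each hospital's list; only model weights are exchanged.

def local_update(weights, client_data, lr=0.1):
    """One gradient-descent step on a client's private data for a
    toy 1-D least-squares model y = w * x."""
    w = weights[0]
    grad = sum(2 * x * (w * x - y) for x, y in client_data) / len(client_data)
    return [w - lr * grad]

def fed_avg(global_weights, clients):
    """Average client updates, weighted by local dataset size."""
    updates = [(local_update(global_weights, data), len(data)) for data in clients]
    total = sum(n for _, n in updates)
    dim = len(global_weights)
    return [sum(u[i] * n for u, n in updates) / total for i in range(dim)]

# Three simulated hospitals with heterogeneous data; true slope is ~2
hospitals = [
    [(1.0, 2.0), (2.0, 4.1)],
    [(0.5, 1.0), (1.5, 2.9), (3.0, 6.2)],
    [(2.0, 3.9)],
]
w = [0.0]
for _ in range(50):          # federated communication rounds
    w = fed_avg(w, hospitals)
print(w)  # converges near 2.0 without pooling any raw data
```

The heterogeneity problem noted above appears here in miniature: each hospital's data pulls toward a slightly different local optimum, and the global model settles on a weighted compromise. Techniques like differential privacy would further perturb the exchanged weights before aggregation.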