TY - GEN
T1 - Self-attention Capsule Network for Tissue Classification in Case of Challenging Medical Image Statistics
AU - Hoogi, Assaf
AU - Wilcox, Brian
AU - Gupta, Yachee
AU - Rubin, Daniel
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - We propose the first Self-Attention Capsule Network designed to address the unique core challenges of medical imaging, specifically tissue classification. These challenges are significant data heterogeneity with variable statistics across imaging domains, insufficient spatial context and local fine-grained detail, and limited training data. Moreover, our proposed method addresses limitations of the baseline Capsule Network (CapsNet), such as handling complex, challenging data and operating under limited computational resources. To cope with these challenges, our method incorporates a self-attention module that reduces the complexity of the input data so that the CapsNet routing mechanism can be used efficiently, while extracting much richer contextual information than CNNs. To demonstrate the strengths of our method, we evaluated it extensively on three diverse medical datasets and three natural-image benchmarks. The proposed method outperformed the methods we compared against not only in classification accuracy but also in robustness, both within and across datasets and domains.
AB - We propose the first Self-Attention Capsule Network designed to address the unique core challenges of medical imaging, specifically tissue classification. These challenges are significant data heterogeneity with variable statistics across imaging domains, insufficient spatial context and local fine-grained detail, and limited training data. Moreover, our proposed method addresses limitations of the baseline Capsule Network (CapsNet), such as handling complex, challenging data and operating under limited computational resources. To cope with these challenges, our method incorporates a self-attention module that reduces the complexity of the input data so that the CapsNet routing mechanism can be used efficiently, while extracting much richer contextual information than CNNs. To demonstrate the strengths of our method, we evaluated it extensively on three diverse medical datasets and three natural-image benchmarks. The proposed method outperformed the methods we compared against not only in classification accuracy but also in robustness, both within and across datasets and domains.
UR - http://www.scopus.com/inward/record.url?scp=85151120949&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-25066-8_10
DO - 10.1007/978-3-031-25066-8_10
M3 - Conference contribution
AN - SCOPUS:85151120949
SN - 9783031250651
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 219
EP - 235
BT - Computer Vision – ECCV 2022 Workshops, Proceedings
A2 - Karlinsky, Leonid
A2 - Michaeli, Tomer
A2 - Nishino, Ko
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th European Conference on Computer Vision, ECCV 2022
Y2 - 23 October 2022 through 27 October 2022
ER -