Verified email at kth.se
Professor of medical image processing and visualization, Department of Biomedical Engineering and Health Systems
KTH Royal Institute of Technology
Medical image processing
Machine learning
Medical visualization
Frida Dohlmar, Björn Morén, Michael Sandborg, Örjan Smedby, Alexander Valdman, Torbjörn Larsson, and Åsa Carlsson Tedgren
Brachytherapy, ISSN: 15384721, eISSN: 18731449, Pages: 407-415, Published: 1 May 2023
Elsevier BV
Bharti Kataria, Jenny Öman, Michael Sandborg, and Örjan Smedby
European Journal of Radiology Open, eISSN: 23520477, Published: January 2023
Elsevier BV
Konstantinos Poulakis, Joana B. Pereira, J.-Sebastian Muehlboeck, Lars-Olof Wahlund, Örjan Smedby, Giovanni Volpe, Colin L. Masters, David Ames, Yoshiki Niimi, Takeshi Iwatsubo, Daniel Ferreira, and Eric Westman
Nature Communications, eISSN: 20411723, Published: December 2022
Springer Science and Business Media LLC
Abstract Understanding Alzheimer’s disease (AD) heterogeneity is important for elucidating the underlying pathophysiological mechanisms of AD. However, AD atrophy subtypes may reflect different disease stages or biologically distinct subtypes. Here we use longitudinal magnetic resonance imaging data (891 participants with AD dementia, 305 healthy control participants) from four international cohorts, and longitudinal clustering, to estimate differential atrophy trajectories from the age of clinical disease onset. Our findings (in amyloid-β positive AD patients) show five distinct longitudinal patterns of atrophy with different demographic and cognitive characteristics. Some previously reported atrophy subtypes may reflect disease stages rather than distinct subtypes. The heterogeneity in atrophy rates and cognitive decline within the five longitudinal atrophy patterns potentially expresses a complex combination of protective and risk factors and concomitant non-AD pathologies. By alternating between cross-sectional and longitudinal views of AD subtypes, these analyses may allow a better understanding of disease heterogeneity.
Benjamin Klintström, Lilian Henriksson, Rodrigo Moreno, Alexandr Malusek, Örjan Smedby, Mischa Woisetschläger, and Eva Klintström
European Radiology Experimental, eISSN: 25099280, Published: December 2022
Springer Science and Business Media LLC
BACKGROUND
As bone microstructure is known to impact bone strength, the aim of this in vitro study was to evaluate whether the emerging photon-counting detector computed tomography (PCD-CT) technique can be used for measurements of trabecular bone structures such as thickness, separation, nodes, spacing and bone volume fraction.
METHODS
Fourteen cubic sections of human radius were scanned with two multislice CT devices, one PCD-CT and one energy-integrating detector CT (EID-CT), using micro-CT as a reference standard. The protocols for PCD-CT and EID-CT were those recommended for inner- and middle-ear structures, although at higher mAs values: PCD-CT at 450 mAs and EID-CT at 600 mAs (dose equivalent to PCD-CT) and 1000 mAs. Average measurements of the five bone parameters, as well as dispersion measurements of thickness, separation and spacing, were calculated using a three-dimensional automated region growing (ARG) algorithm. Spearman correlations with micro-CT were computed.
RESULTS
Correlations with micro-CT, for PCD-CT and EID-CT, ranged from 0.64 to 0.98 for all parameters except dispersion of thickness, which did not show a significant correlation (p = 0.078 to 0.892). PCD-CT had seven of the eight parameters with correlations ρ > 0.7 and three with ρ > 0.9. The dose-equivalent EID-CT instead had four parameters with correlations ρ > 0.7 and only one with ρ > 0.9.
CONCLUSIONS
In this in vitro study of radius specimens, strong correlations were found between trabecular bone structure parameters computed from PCD-CT data and micro-CT. This suggests that PCD-CT might be useful for analysing bone microstructure in the peripheral human skeleton.
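As a minimal illustration of the correlation analysis described above, Spearman's ρ can be computed from the ranked measurements. The parameter values below are hypothetical, not data from the study, and ties are assumed absent:

```python
# Illustrative sketch: Spearman rank correlation between a CT-derived
# trabecular parameter and its micro-CT reference. All values below are
# hypothetical; ties are assumed absent for simplicity.

def spearman_rho(x, y):
    """Spearman's rho for tie-free samples: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rank_x = [sorted(x).index(v) + 1 for v in x]
    rank_y = [sorted(y).index(v) + 1 for v in y]
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n * n - 1))

pcd_ct = [0.21, 0.25, 0.19, 0.30, 0.27, 0.22, 0.24]    # e.g. thickness, mm
micro_ct = [0.18, 0.23, 0.17, 0.26, 0.28, 0.20, 0.22]  # reference, mm
print(f"rho = {spearman_rho(pcd_ct, micro_ct):.3f}")   # → rho = 0.964
```

With tied values, average ranks would be needed; library implementations such as `scipy.stats.spearmanr` handle this case.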
Jiangchang Xu, Bolun Zeng, Jan Egger, Chunliang Wang, Örjan Smedby, Xiaoyi Jiang, and Xiaojun Chen
Physics in Medicine and Biology, ISSN: 00319155, eISSN: 13616560, Published: 7 September 2022
IOP Publishing
Abstract Head and neck surgery is a delicate surgical procedure involving complex anatomy, difficult operations and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references are on automatic segmentation, 15 on automatic landmark detection, and eight on automatic registration. First, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and generalized into segmentation, landmark detection and registration of head and neck medical images. Segmentation focuses mainly on the automatic segmentation of high-risk organs, head and neck tumors, skull structures and teeth, including an analysis of their advantages, differences and shortcomings. Landmark detection focuses mainly on cephalometric and craniomaxillofacial images, with an analysis of advantages and disadvantages. For registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guidance for researchers, engineers and doctors engaged in medical image analysis of head and neck surgery.
Mehdi Astaraki, Örjan Smedby, and Chunliang Wang
Medical Image Analysis, ISSN: 13618415, eISSN: 13618423, Published: August 2022
Elsevier BV
Fabian Sinzinger, Mehdi Astaraki, Örjan Smedby, and Rodrigo Moreno
Frontiers in Oncology, eISSN: 2234943X, Published: 27 April 2022
Frontiers Media SA
OBJECTIVE
Survival Rate Prediction (SRP) is a valuable tool to assist in the clinical diagnosis and treatment planning of lung cancer patients. In recent years, deep learning (DL) based methods have shown great potential in medical image processing in general and SRP in particular. This study proposes a fully automated method for SRP from computed tomography (CT) images, which combines an automatic segmentation of the tumor with a DL-based method for extracting rotationally invariant features.
METHODS
In the first stage, the tumor is segmented from the CT image of the lungs. Here, we use a deep-learning-based method that entails a variational autoencoder to provide more information to a U-Net segmentation model. Next, the 3D volumetric image of the tumor is projected onto 2D spherical maps. These spherical maps serve as inputs for a spherical convolutional neural network that approximates the log risk for a generalized Cox proportional hazards model.
RESULTS
The proposed method is compared with 17 baseline methods that combine different feature sets and prediction models, using three publicly available datasets: Lung1 (n = 422), Lung3 (n = 89), and H&N1 (n = 136). We observed C-index scores comparable to the best-performing baseline methods in a 5-fold cross-validation on Lung1 (0.59 ± 0.03 vs. 0.62 ± 0.04), while the proposed method slightly outperforms all methods in inter-dataset evaluation (0.64 vs. 0.63). The best-performing method from the first experiment dropped to 0.61 and 0.62 on Lung3 and H&N1, respectively.
DISCUSSION
The experiments suggest that the performance of spherical features is comparable with previous approaches, but that they generalize better when applied to unseen datasets. This might imply that orientation-independent shape features are relevant for SRP. The performance of the proposed method was very similar with manual and automatic segmentation, which makes the proposed model useful in cases where expert annotations are not available or difficult to obtain.
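The final step described above, training a network output as the log risk of a Cox proportional hazards model, is typically driven by the negative Cox partial log-likelihood. A minimal sketch of that loss (not the authors' code; tied event times are ignored for simplicity):

```python
import numpy as np

def neg_cox_partial_loglik(log_risk, time, event):
    """Negative Cox partial log-likelihood for predicted log risks.

    log_risk: model outputs (log hazard ratios), shape (n,)
    time:     follow-up times, shape (n,)
    event:    1 = event observed, 0 = censored, shape (n,)
    Tied event times are ignored in this sketch.
    """
    order = np.argsort(-time)                 # sort by descending follow-up time
    h, e = log_risk[order], event[order]
    # running log-sum-exp of the sorted risks gives the log of each
    # subject's risk-set denominator (all subjects still at risk)
    log_denom = np.logaddexp.accumulate(h)
    return -np.sum((h - log_denom)[e == 1])
```

With all log risks equal, each observed event contributes the log of its risk-set size; e.g. three subjects with events at the longest and shortest follow-up times yield a loss of log 1 + log 3.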
Irene Brusini, Eilidh MacNicol, Eugene Kim, Örjan Smedby, Chunliang Wang, Eric Westman, Mattia Veronese, Federico Turkheimer, and Diana Cash
Neurobiology of Aging, ISSN: 01974580, eISSN: 15581497, Volume: 109, Pages: 204-215, Published: January 2022
Elsevier BV
The difference between brain age predicted from MRI and chronological age (the so-called BrainAGE) has been proposed as an ageing biomarker. We analyse its cross-species potential by testing it on rats undergoing an ageing modulation intervention. Our rat brain age prediction model combined Gaussian process regression with a classifier and achieved a mean absolute error (MAE) of 4.87 weeks using cross-validation on a longitudinal dataset of 31 normal ageing rats. It was then tested on two groups of 24 rats (MAE = 9.89 weeks, correlation coefficient = 0.86): controls vs. a group under long-term environmental enrichment and dietary restriction (EEDR). Using a linear mixed-effects model, BrainAGE was found to increase more slowly with chronological age in EEDR rats (p = 0.015 for the interaction term). Cox regression showed that older BrainAGE at 5 months was associated with higher mortality risk (p = 0.03). Our findings suggest that lifestyle-related prevention approaches may help to slow down brain ageing in rodents, and that BrainAGE has potential as a predictor of age-related health outcomes.
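As a rough sketch of the brain-age idea (not the published model, which also combined the regressor with a classifier on MRI data), a Gaussian process regressor can map an imaging feature to age, with the prediction residual defining the BrainAGE gap. The single feature and all numbers here are synthetic:

```python
import numpy as np

def rbf(a, b, length=10.0):
    """Squared-exponential kernel between two 1-D feature arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1.0):
    """Posterior mean of a GP regressor with mean-centred targets."""
    mu = y_train.mean()
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train - mu)
    return rbf(x_test, x_train) @ alpha + mu

# Synthetic data: one hypothetical imaging feature that grows with age (weeks).
ages = np.linspace(4.0, 56.0, 27)
feature = ages + np.random.default_rng(0).normal(0.0, 1.0, ages.size)

predicted_age = gp_predict(feature, ages, feature)
brain_age_gap = predicted_age - ages        # the "BrainAGE" delta
mae = np.abs(brain_age_gap).mean()
print(f"MAE = {mae:.2f} weeks")
```

In a real analysis the model would be fitted and evaluated with cross-validation, as in the study, rather than on the training animals themselves.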
Mehdi Astaraki, Guang Yang, Yousuf Zakko, Iuliana Toma-Dasu, Örjan Smedby, and Chunliang Wang
Frontiers in Oncology, eISSN: 2234943X, Published: 17 December 2021
Frontiers Media SA
OBJECTIVES
Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature-based radiomics pipelines for pulmonary nodule malignancy prediction on an open database of 1297 manually delineated lung nodules.
METHODS
Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet, were also employed to identify lung nodule malignancy. In addition to the baseline implementations, we investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region from those in the background/context region. By pooling the radiomics and deep features together into a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction.
RESULTS
The best baseline conventional radiomics model, deep learning model, and deep-feature-based radiomics model achieved AUROC values (mean ± standard deviation) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively, in 5-fold cross-validation analyses. After applying several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature-based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively. We achieved the best prediction accuracy with the hybrid feature set (AUROC: 0.938 ± 0.010).
CONCLUSION
The end-to-end deep learning model outperforms conventional radiomics out of the box without much fine-tuning. On the other hand, fine-tuning the models leads to significant improvements in prediction performance, where the conventional and deep-feature-based radiomics models achieve comparable results. The hybrid radiomics method appears to be the most promising model for lung nodule malignancy prediction in this comparative study.
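The feature pooling and AUROC evaluation described above can be sketched as follows. The tiny arrays and the use of a single feature column as a stand-in classifier score are purely illustrative:

```python
import numpy as np

def auroc(scores, labels):
    """Mann-Whitney formulation of the area under the ROC curve."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical pooled feature set: radiomics and deep features side by side.
radiomics = np.array([[0.2], [0.4], [0.8], [0.9]])
deep = np.array([[1.0, 0.1], [0.9, 0.3], [0.2, 0.8], [0.1, 0.9]])
hybrid = np.hstack([radiomics, deep])       # shape (4, 3)

labels = np.array([0, 0, 1, 1])
# A toy "score": here simply the last deep feature, standing in for the
# output of a classifier trained on the hybrid feature set.
print(auroc(hybrid[:, 2], labels))          # → 1.0
```

In the study, AUROC values were of course estimated with 5-fold cross-validation on held-out nodules, not on the training data.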
B Kataria, J Nilsson Althén, Ö Smedby, A Persson, H Sökjer, and M Sandborg
Radiation Protection Dosimetry, ISSN: 01448420, eISSN: 17423406, Volume: 195, Issue: 3-4, Pages: 177-187, Published: 1 October 2021
Oxford University Press (OUP)
Abstract Traditional filtered back projection (FBP) reconstruction methods have served the computed tomography (CT) community well for over 40 years. With the increased use of CT during the last decades, efforts to minimise patient exposure, while maintaining sufficient or improved image quality, have led to the development of model-based iterative reconstruction (MBIR) algorithms from several vendors. The usefulness of the advanced modeled iterative reconstruction (ADMIRE) (Siemens Healthineers) MBIR in abdominal CT is reviewed and its noise suppression and/or dose reduction possibilities explored. Quantitative and qualitative methods with phantom and human subjects were used. Assessment of the quality of phantom images will not always correlate positively with those of patient images, particularly at the higher strength of the ADMIRE algorithm. With few exceptions, ADMIRE Strength 3 typically allows for substantial noise reduction compared to FBP and hence to significant (≈30%) patient dose reductions. The size of the dose reductions depends on the diagnostic task.
Mehdi Astaraki, Yousuf Zakko, Iuliana Toma Dasu, Örjan Smedby, and Chunliang Wang
Physica Medica, ISSN: 11201797, eISSN: 1724191X, Pages: 146-153, Published: March 2021
Elsevier BV
PURPOSE
Low-Dose Computed Tomography (LDCT) is the most common imaging modality for lung cancer diagnosis. The presence of nodules in the scans does not necessarily portend lung cancer, as there is an intricate relationship between nodule characteristics and lung cancer. Therefore, benign-malignant pulmonary nodule classification at early detection is a crucial step to improve diagnosis and prolong patient survival. The aim of this study is to propose a method for predicting nodule malignancy based on deep abstract features.
METHODS
To efficiently capture both intra-nodule heterogeneities and contextual information of the pulmonary nodules, a dual pathway model was developed to integrate the intra-nodule characteristics with contextual attributes. The proposed approach was implemented with both supervised and unsupervised learning schemes. A random forest model was added as a second component on top of the networks to generate the classification results. The discrimination power of the model was evaluated by calculating the Area Under the Receiver Operating Characteristic Curve (AUROC) metric.
RESULTS
Experiments on 1297 manually segmented nodules show that the integration of context and target supervised deep features has great potential for accurate prediction, resulting in a discrimination power of 0.936 in terms of AUROC, which outperformed the classification performance of the Kaggle 2017 challenge winner.
CONCLUSION
Empirical results demonstrate that integrating nodule target and context images into a unified network improves the discrimination power, outperforming the conventional single pathway convolutional neural networks.
Örjan Smedby
Proceedings of SPIE - The International Society for Optical Engineering, ISSN: 0277786X, eISSN: 1996756X, Volume: 11804, Published: 2021
SPIE
A central research topic in medical image processing is the development of imaging biomarkers, i.e. image-based numeric measures of the degree (or probability) of disease. Typically, they rely on segmentation of an anatomical or pathological structure in a radiological image, followed by quantitative measurement. With many traditional image processing methods being supplanted by machine learning techniques, the identification of new imaging biomarkers is also often carried out with such techniques, in particular deep learning. Successful examples include quantitative assessment of Alzheimer’s disease and Parkinson’s disease based on brain MRI data, as well as image-based brain age estimation.
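The segmentation-then-measurement pattern described above can be illustrated with a toy example: given a binary mask (here hand-made, standing in for the output of a segmentation model) and the voxel spacing, a simple volumetric biomarker follows directly:

```python
import numpy as np

# Hypothetical binary mask standing in for a segmented structure.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True                  # 8 "segmented" voxels

voxel_spacing_mm = (1.0, 1.0, 2.0)          # anisotropic voxels (illustrative)
voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0
structure_volume_ml = mask.sum() * voxel_volume_ml
print(f"volume = {structure_volume_ml:.4f} ml")   # → volume = 0.0160 ml
```

Real imaging biomarkers add a normalisation or modelling step on top of such raw measurements (e.g. relating a regional volume to intracranial volume or to a normative age curve).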
I. Blystad, J. B. M. Warntjes, Ö Smedby, P. Lundberg, E.-M. Larsson, and A. Tisell
Scientific Reports, eISSN: 20452322, Published: 1 December 2020
Springer Science and Business Media LLC
Abstract Malignant gliomas are primary brain tumours with an infiltrative growth pattern, often with contrast enhancement on magnetic resonance imaging (MRI). However, it is well known that tumour infiltration extends beyond the visible contrast enhancement. The aim of this study was to investigate whether there is contrast enhancement not detected visually in the peritumoral oedema of malignant gliomas, using relaxometry with synthetic MRI. Twenty-five patients who had brain tumours with a radiological appearance of malignant glioma were prospectively included. A quantitative MR sequence measuring longitudinal relaxation (R1), transverse relaxation (R2) and proton density (PD) was added to the standard MRI protocol before surgery. Five patients were excluded, and in 20 patients, synthetic MR images were created from the quantitative scans. Manual regions of interest (ROIs) outlined the visibly contrast-enhancing border of the tumours and the peritumoral area. Contrast enhancement was quantified by subtracting native images from post-gadolinium images, creating an R1 difference map. The quantitative R1 difference maps showed significant contrast enhancement in the peritumoral area (0.047) compared to normal-appearing white matter (0.032), p = 0.048. Relaxometry detects contrast enhancement in the peritumoral area of malignant gliomas. This could represent infiltrative tumour growth.
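The quantification step described above, subtracting the native map from the post-contrast map and averaging within a region of interest, can be sketched as follows; the R1 values are illustrative only, not data from the study:

```python
import numpy as np

# Hypothetical R1 maps (s^-1) before and after gadolinium administration.
r1_native = np.full((4, 4), 0.60)
r1_post = r1_native.copy()
r1_post[1:3, 1:3] += 0.047        # subtle enhancement in a peritumoral region

r1_difference = r1_post - r1_native    # the R1 difference map

roi = np.zeros((4, 4), dtype=bool)     # manually outlined ROI
roi[1:3, 1:3] = True
print(f"mean R1 difference in ROI: {r1_difference[roi].mean():.3f}")
```

In practice the two maps would need co-registration before subtraction, and the ROI statistics would be compared against a reference region such as normal-appearing white matter.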
Indranil Guha, Benjamin Klintström, Eva Klintström, Xiaoliu Zhang, Örjan Smedby, Rodrigo Moreno, and Punam K Saha
Physics in Medicine and Biology, ISSN: 00319155, eISSN: 13616560, Published: 25 November 2020
IOP Publishing
Osteoporosis, characterized by reduced bone mineral density and micro-architectural degeneration, significantly increases fracture risk. There are several viable methods for trabecular bone micro-imaging, which vary widely in terms of technology, reconstruction principle, spatial resolution, and acquisition time. We have performed an excised cadaveric bone specimen study to evaluate different CT imaging modalities for trabecular bone micro-structural analysis. Excised cadaveric bone specimens from the distal radius were scanned using micro-CT and four in vivo CT imaging modalities: HR-pQCT, dental CBCT, whole-body MDCT, and extremity CBCT. A new algorithm was developed to optimize soft thresholding parameters for individual in vivo CT modalities for computing quantitative bone volume fraction maps. Finally, the agreement of trabecular bone micro-structural measures derived from the different in vivo CT modalities with reference measures from micro-CT imaging was examined. Observed values of most trabecular measures, including trabecular bone volume, network area, transverse and plate-rod micro-structure, thickness, and spacing, were higher for the in vivo CT modalities than their micro-CT-based reference values. In general, HR-pQCT-based trabecular bone measures were closer to their reference values than those of the other in vivo CT modalities. Despite large differences in observed values among modalities, high linear correlation (r ∈ [0.94, 0.99]) was found between micro-CT and in vivo CT-derived measures of trabecular bone volume, transverse and plate micro-structural volume, and network area. All HR-pQCT-derived trabecular measures, except the erosion index, showed high correlation (r ∈ [0.91, 0.99]). The plate-width measure showed a higher correlation (r ∈ [0.72, 0.91]) among in vivo and micro-CT modalities than its counterpart binary plate-rod characterization-based measure, the erosion index (r ∈ [0.65, 0.81]). Although a strong correlation was observed between micro-structural measures from in vivo and micro-CT imaging, the large shifts in their values for in vivo modalities warrant proper scanner calibration prior to adoption in multi-site and longitudinal studies.
Neeraj Kumar, Ruchika Verma, Deepak Anand, Yanning Zhou, Omer Fahri Onder, Efstratios Tsougenis, Hao Chen, Pheng-Ann Heng, Jiahui Li, Zhiqiang Hu, Yunzhi Wang, Navid Alemi Koohbanani, Mostafa Jahanifar, Neda Zamani Tajeddin, Ali Gooya, Nasir Rajpoot, Xuhua Ren, Sihang Zhou, Qian Wang, Dinggang Shen, Cheng-Kun Yang, Chi-Hung Weng, Wei-Hsiang Yu, Chao-Yuan Yeh, Shuang Yang, Shuoyu Xu, Pak Hei Yeung, Peng Sun, Amirreza Mahbod, Gerald Schaefer, Isabella Ellinger, Rupert Ecker, Orjan Smedby, Chunliang Wang, Benjamin Chidester, That-Vinh Ton, Minh-Triet Tran, Jian Ma, Minh N. Do, Simon Graham, Quoc Dang Vu, Jin Tae Kwak, Akshaykumar Gunda, Raviteja Chunduri, Corey Hu, Xiaoyang Zhou, Dariush Lotfi, Reza Safdari, Antanas Kascenas, Alison O'Neil, Dennis Eschweiler, Johannes Stegmaier, Yanping Cui, Baocai Yin, Kailin Chen, Xinmei Tian, Philipp Gruening, Erhardt Barth, Elad Arbel, Itay Remer, Amir Ben-Dor, Ekaterina Sirazitdinova, Matthias Kohl, Stefan Braunewell, Yuexiang Li, Xinpeng Xie, Linlin Shen, Jun Ma, Krishanu Das Baksi, Mohammad Azam Khan, Jaegul Choo, Adrian Colomer, Valery Naranjo, Linmin Pei, Khan M. Iftekharuddin, Kaushiki Roy, Debotosh Bhattacharjee, Anibal Pedraza, Maria Gloria Bueno, Sabarinathan Devanathan, Saravanan Radhakrishnan, Praveen Koduganty, Zihan Wu, Guanyu Cai, Xiaojie Liu, Yuqin Wang, and Amit Sethi
IEEE Transactions on Medical Imaging, ISSN: 02780062, eISSN: 1558254X, Pages: 1380-1391, Published: May 2020
Institute of Electrical and Electronics Engineers (IEEE)
Irene Brusini, Olof Lindberg, J-Sebastian Muehlboeck, Örjan Smedby, Eric Westman, and Chunliang Wang
Frontiers in Neuroscience, ISSN: 16624548, eISSN: 1662453X, Published: 24 January 2020
Frontiers Media SA
Konstantinos Poulakis, Daniel Ferreira, Joana B. Pereira, Örjan Smedby, Prashanthi Vemuri, and Eric Westman
Aging, ISSN: 19454589, Pages: 12622-12647, Published: 2020
Impact Journals, LLC
Mehdi Astaraki, Chunliang Wang, Gabriel Carrizo, Iuliana Toma-Dasu, and Örjan Smedby
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 03029743, eISSN: 16113349, Volume: 11993 LNCS, Pages: 316-323, Published: 2020
Springer International Publishing
Bharti Kataria, Jonas Nilsson Althén, Örjan Smedby, Anders Persson, Hannibal Sökjer, and Michael Sandborg
European Journal of Radiology, ISSN: 0720048X, eISSN: 18727727, Volume: 122, Published: January 2020
Elsevier BV
Maria Holstensson, Örjan Smedby, Gavin Poludniowski, Alejandro Sanchez-Crespo, Irina Savitcheva, Michael Öberg, Per Grybäck, Stefan Gabrielson, Patricia Sandqvist, Erika Bartholdson, and Rimma Axelsson
Physics in Medicine and Biology, ISSN: 00319155, eISSN: 13616560, Published: 5 December 2019
IOP Publishing
Xiahai Zhuang, Lei Li, Christian Payer, Darko Štern, Martin Urschler, Mattias P. Heinrich, Julien Oster, Chunliang Wang, Örjan Smedby, Cheng Bian, Xin Yang, Pheng-Ann Heng, Aliasghar Mortazi, Ulas Bagci, Guanyu Yang, Chenchen Sun, Gaetan Galisot, Jean-Yves Ramel, Thierry Brouard, Qianqian Tong, Weixin Si, Xiangyun Liao, Guodong Zeng, Zenglin Shi, Guoyan Zheng, Chengjia Wang, Tom MacGillivray, David Newby, Kawal Rhode, Sebastien Ourselin, Raad Mohiaddin, Jennifer Keegan, David Firmin, and Guang Yang
Medical Image Analysis, ISSN: 13618415, eISSN: 13618423, Published: December 2019
Elsevier BV
Bharti Kataria, Jonas Nilsson Althén, Örjan Smedby, Anders Persson, Hannibal Sökjer, and Michael Sandborg
BMC Medical Imaging, eISSN: 14712342, Published: 9 August 2019
Springer Science and Business Media LLC
Mehdi Astaraki, Chunliang Wang, Giulia Buizza, Iuliana Toma-Dasu, Marta Lazzeroni, and Örjan Smedby
Physica Medica, ISSN: 11201797, eISSN: 1724191X, Pages: 58-65, Published: April 2019
Elsevier BV
Mehdi Astaraki, Iuliana Toma-Dasu, Örjan Smedby, and Chunliang Wang
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 03029743, eISSN: 16113349, Volume: 11769 LNCS, Pages: 249-256, Published: 2019
Springer International Publishing
Amirreza Mahbod, Gerald Schaefer, Isabella Ellinger, Rupert Ecker, Örjan Smedby, and Chunliang Wang
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 03029743, eISSN: 16113349, Volume: 11435 LNCS, Pages: 75-82, Published: 2019
Springer International Publishing