Venue: Journal of Medical Systems, Springer (2023)
There has been explosive growth in research over the last decade exploring machine learning techniques for analyzing chest X-ray (CXR) images to screen for cardiopulmonary abnormalities. In particular, we have observed strong interest in screening for tuberculosis (TB). This interest has coincided with spectacular advances in deep learning (DL), primarily based on convolutional neural networks (CNNs). These advances have resulted in significant research contributions in DL techniques for TB screening using CXR images. We review the research studies published over the last five years (2016-2021). We identify data collections and methodological contributions, and highlight promising methods and challenges. Further, we discuss and compare studies, and identify those that offer extensions beyond binary decisions for TB, such as region-of-interest localization. In total, we systematically review 54 peer-reviewed research articles and perform a meta-analysis.
Machine learning is an effective and accurate technique for diagnosing COVID-19 infection from image data, and chest X-ray (CXR) is no exception. Owing to privacy concerns, machine learning scientists often receive limited medical imaging data. Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm that builds an unbiased global model from local models (from clients) without exposing their personal data. When data are heterogeneous across clients, the vanilla (default) FL mechanism still yields an insecure way of updating models. We therefore propose SecureFed, a secure aggregation method that ensures fairness and robustness. In our experiments, we employed a COVID-19 CXR dataset (2,100 positive cases) and compared SecureFed with existing FL frameworks such as FedAvg, FedMGDA+, and FedRAD. In the comparison, we primarily considered robustness (accuracy) and fairness (consistency). As SecureFed produced consistently better results, it is generic enough to be considered for multimodal data.
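For readers unfamiliar with the federated learning workflow the abstract refers to, the sketch below illustrates the generic FedAvg-style loop: clients train on their own data and only share model updates, which a server averages into a global model. This is a minimal illustrative sketch, not the SecureFed aggregation method of the paper; the linear-regression model, synthetic client data, and helper names (local_update, fedavg_round) are hypothetical assumptions.

# Minimal FedAvg-style sketch of the generic federated-learning loop.
# Illustrative only: this is NOT the SecureFed method; the model (linear
# regression), the client data, and the helper names are assumptions.
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    # Hypothetical local step: a client nudges the shared weights toward
    # its own data; only the updated weights leave the client, never the data.
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)   # mean-squared-error gradient
    return global_weights - lr * grad

def fedavg_round(global_weights, clients):
    # One communication round: average the client updates, weighted by
    # each client's sample count (clients may hold different amounts of data).
    updates = [local_update(global_weights, c) for c in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)

# Toy usage with three clients holding synthetic linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):          # communication rounds
    w = fedavg_round(w, clients)
print(w)                      # converges toward true_w without pooling raw data

Methods such as FedMGDA+, FedRAD, and the paper's SecureFed replace the plain weighted average above with aggregation rules designed to stay robust and fair when client updates are heterogeneous or untrustworthy.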