ML-CDS 2019: Multimodal Learning for Clinical Decision Support

In conjunction with MICCAI 2019, Shenzhen, China.

Diagnostic decision-making using images and other clinical data remains very much an art for many physicians in their practices today, owing to a lack of quantitative tools and measurements. The field has seen little growth since the original rule-based expert systems of the 50s, which proved impractical for clinical use due to the incompleteness of their rules and their only partial applicability to a given patient. Despite the tremendous development of medical image analysis algorithms in the MICCAI community, their translation to clinical practice has been slow, particularly in the area of clinical decision support.

With advances in electronic patient record systems, large numbers of pre-diagnosed patient data sets are becoming available. These data sets are often multimodal, consisting of images (X-ray, CT, MRI), videos and other time series, and textual data (free-text reports and structured clinical data). In addition, the routine availability of whole-slide scanning technology is giving rise to the field of digital pathology. Multi-omics data (e.g. genomics, proteomics) are also increasingly collected for patient diagnosis and prognosis. Together, these sources provide the opportunity for multimodal, multi-scale characterization of a patient's disease profile.

Analyzing these multimodal sources for disease-specific information across patients can reveal important similarities between patients, and hence between their underlying diseases and potential treatments. Researchers are now beginning to develop multimodal learning techniques that combine disease-specific information across modalities to find supporting evidence for a disease or to automatically learn associations between symptoms and their appearance in imaging. The role of clinical knowledge in such models is also being actively explored. Benchmarking frameworks such as ImageCLEF (the image retrieval track of the Cross-Language Evaluation Forum) and VISCERAL have expanded over the past five years to include large medical image collections for testing. However, accurate ground-truth labeling of large-scale datasets remains a challenging problem.

The goal of this workshop is to bring together imaging researchers and clinicians working on machine learning with multimodal data sets for clinical decision support and treatment planning, to present and discuss the latest developments in the field. Specifically, researchers from the multimodal learning, biomedical imaging, medical image retrieval, data mining, text retrieval, and machine learning/AI communities will be co-located with clinicians who use computer-aided diagnosis and clinical decision support tools, to discuss not only new techniques for multimodal learning but also their translation to clinical decision support in practice. We are looking for original, high-quality submissions that address innovative research and development in learning from multimodal medical data for use in clinical decision support and treatment planning.