Speech/Tongue Data Analytics
Our current research in speech and tongue data analytics focuses on developing advanced MRI techniques for analyzing tongue motion during speech. We have created statistical multimodal atlases of 4D tongue motion and high-resolution vocal tract models using structural MRI. We are employing deep learning methods to differentiate the tongue muscle coordination patterns of post-cancer patients from those of healthy speakers during speech. Our future research in this area is likely to explore more sophisticated AI models for real-time speech analysis, potentially leading to improved diagnostic tools for speech disorders and more effective rehabilitation techniques for patients with oral cancers or neurological conditions affecting speech.
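The classification step can be illustrated with a deliberately small sketch: a two-layer network written in plain NumPy, trained on synthetic "motion feature" vectors that stand in for the strain or displacement statistics a real pipeline would extract from tagged MRI. The data, features, and architecture here are illustrative assumptions, not our actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "motion feature" vectors: two classes with shifted means
# (stand-ins for displacement/strain statistics from tagged MRI).
X0 = rng.normal(-1.0, 1.0, size=(50, 8))   # healthy-pattern class
X1 = rng.normal(+1.0, 1.0, size=(50, 8))   # post-treatment-pattern class
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Tiny two-layer network trained with full-batch gradient descent.
W1 = rng.normal(0, 0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, p.ravel()

lr = 0.5
for _ in range(200):
    h, p = forward(X)
    # Backpropagation of the binary cross-entropy loss.
    d_out = (p - y)[:, None] / len(y)
    gW2 = h.T @ d_out; gb2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h**2)
    gW1 = X.T @ d_h; gb1 = d_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()
```

A real study would of course use held-out subjects, richer temporal features, and a deep architecture; the sketch only shows the shape of the discriminative task.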
Brain Tumor Analysis
In brain tumor analysis, our current research involves developing unsupervised domain adaptation techniques for tumor segmentation and proposing incremental learning methods for heterogeneous structure segmentation in brain MRI. Our work also includes developing symmetric-constrained irregular-structure inpainting methods for brain MRI registration with tumor pathology. Future directions may involve integrating multi-modal imaging data for more accurate tumor characterization, developing AI models capable of predicting tumor growth and treatment response, and exploring the potential of federated learning to leverage data from multiple institutions while maintaining patient privacy.
Cardiac Image/Motion Analysis
Our current cardiac image and motion analysis research focuses on quantitative analysis methods for myocardial perfusion SPECT guided by coronary CT angiography and the creation of 4D statistical atlases of the human heart from gated PET images. We are also exploring segmentation of cardiac structures using successive subspace learning from cine MRI. Our future research may delve into AI-driven personalized cardiac modeling, real-time analysis of cardiac function during interventional procedures, and the development of predictive models for cardiovascular disease progression and treatment outcomes.
Anatomical and Functional Atlases
The development of anatomical and functional atlases is a significant area of our research, with current work including 4D multimodal atlases of tongue motion during speech, high-resolution vocal tract atlases, and multi-subject atlases from structural tongue MRI. Our future research in this field may focus on creating more comprehensive, population-specific atlases that account for variations across different demographics. We may also develop dynamic, AI-updated atlases that evolve as new data arrive, providing increasingly accurate representations of anatomical and functional variability.
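The core of statistical atlas construction — voxel-wise statistics over spatially normalized subjects — can be sketched as follows. The arrays below are random stand-ins for already-registered volumes; the registration step itself, which dominates real atlas pipelines, is assumed done.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for spatially normalized (registered) subject volumes:
# a shared anatomical pattern plus per-subject noise.
template_true = rng.random((16, 16, 16))
subjects = [template_true + rng.normal(0, 0.05, template_true.shape)
            for _ in range(20)]

# Voxel-wise mean and standard deviation across subjects form a
# simple statistical atlas: mean anatomy plus a variability map.
stack = np.stack(subjects)        # shape: (subjects, x, y, z)
atlas_mean = stack.mean(axis=0)   # average anatomy
atlas_std = stack.std(axis=0)     # per-voxel anatomical variability
```

With 20 subjects, averaging shrinks the per-voxel noise by roughly a factor of sqrt(20), which is why the mean template is much closer to the shared anatomy than any single subject.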
(Unsupervised) Domain Adaptation
Our current research in unsupervised domain adaptation includes the development of adversarial methods with conditional and label shift. We have also worked on adapting off-the-shelf source segmenters for target medical image segmentation. Our future research may explore more sophisticated domain adaptation techniques that can handle extreme domain shifts, multi-source domain adaptation for medical imaging, and the development of domain-invariant features that can generalize across a wide range of medical imaging modalities and acquisition parameters.
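Our published methods are adversarial, but the underlying goal — aligning source and target feature statistics — can be shown with a simpler, self-contained stand-in: CORAL-style correlation alignment, which whitens source features and re-colors them with the target covariance. The synthetic features below mimic a scanner or protocol shift; CORAL here is an illustration, not the method from our papers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Source and target "features" with a covariance mismatch,
# mimicking a scanner/protocol shift between datasets.
src = rng.normal(0, 1.0, size=(500, 4))
tgt = rng.normal(0, 1.0, size=(500, 4)) @ np.diag([3.0, 0.5, 2.0, 1.0])

def coral(src, tgt, eps=1e-5):
    """Correlation alignment: whiten source, re-color with target stats."""
    cs = np.cov(src, rowvar=False) + eps * np.eye(src.shape[1])
    ct = np.cov(tgt, rowvar=False) + eps * np.eye(tgt.shape[1])

    def sqrtm(m, inv=False):
        # Matrix square root via eigendecomposition (symmetric PSD input).
        w, v = np.linalg.eigh(m)
        w = np.clip(w, eps, None)
        return v @ np.diag(w ** (-0.5 if inv else 0.5)) @ v.T

    return src @ sqrtm(cs, inv=True) @ sqrtm(ct)

aligned = coral(src, tgt)
gap_before = np.abs(np.cov(src, rowvar=False) - np.cov(tgt, rowvar=False)).sum()
gap_after = np.abs(np.cov(aligned, rowvar=False) - np.cov(tgt, rowvar=False)).sum()
```

After alignment the source covariance matches the target covariance almost exactly; adversarial methods pursue the same alignment implicitly, through a domain discriminator rather than closed-form statistics.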
Interpretable Machine/Deep Learning
While not a primary focus, our work touches on interpretable machine learning through successive subspace learning for disease classification and the development of interpretable deep learning methods in medical imaging contexts. Our future research in this area is likely to focus on developing AI models that provide clear, clinically relevant explanations for their decisions. This may include the integration of domain knowledge into AI architectures, the development of attention-based models that highlight important image regions, and the creation of hybrid models that combine the strengths of both rule-based and deep learning approaches to enhance interpretability while maintaining high performance.
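One simple, model-agnostic way to highlight the image regions a classifier relies on is occlusion sensitivity: mask each patch in turn and record how much the model's score drops. The sketch below uses a toy scoring function (mean intensity in a fixed region) as a stand-in for a trained network attending to a lesion; the region, patch size, and "model" are all illustrative assumptions.

```python
import numpy as np

# Toy "classifier": scores an image by mean intensity in a fixed 4x4
# region, standing in for a trained network that attends to a lesion.
REGION = (slice(4, 8), slice(4, 8))

def model_score(img):
    return img[REGION].mean()

def occlusion_map(img, patch=2):
    """Saliency by occlusion: score drop when each patch is zeroed out."""
    base = model_score(img)
    sal = np.zeros_like(img)
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            sal[i:i + patch, j:j + patch] = base - model_score(occluded)
    return sal

img = np.ones((12, 12))
sal = occlusion_map(img)
# Score drops only when patches inside REGION are occluded, so the
# saliency map is positive exactly where the "model" is looking.
```

Attention-based architectures expose similar region importance directly from learned weights, but occlusion maps remain a useful black-box sanity check in clinical settings.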