This group brings together machine learning researchers, data scientists, and domain experts from diverse backgrounds, career stages, and disciplines to develop algorithms and tools that transfer knowledge across tasks and domains, improving the performance of learning algorithms on multimodal data in real-world applications. We call this problem meta-learning for multimodal data, adopting a broad definition of meta-learning so that researchers and practitioners in related areas can work together.
For example, healthcare data span multiple modalities, including medical images, health-monitoring data, electronic health records, and multi-omics data. In practice, clinicians often make decisions using data from more than one modality and draw on experience from related tasks and domains. To advance AI for such real-world problems, we aim to collaboratively develop meta-learning algorithms and tools that leverage experience and knowledge from individual modalities, domains, and tasks to tackle real-world challenges in analysing multimodal data across multiple domains for various tasks.
Our aims are:

- To bring together researchers to develop innovative and practical algorithms, as well as accessible and sustainable open-source software tools, that advance research on meta-learning for multimodal data and tackle real-world challenges, e.g. in healthcare.
- To establish an engaging community of researchers from multiple disciplines, including multimodal learning, transfer learning, domain adaptation, and data integration, and to create more opportunities for early career researchers (ECRs) to take leadership roles and power future growth.
- To create accessible materials suitable for dissemination to non-researchers and the general public, including online courses, tutorials, podcasts, and blogs.
Developed with support from the Alan Turing Institute.