dsMTL: a computational framework for privacy-preserving, distributed multi-task machine learning

Bibliographic Details
Main Authors: Cao, Han (Author), Zhang, Youcheng (Author), Baumbach, Jan (Author), Burton, Paul R. (Author), Dwyer, Dominic (Author), Koutsouleris, Nikolaos (Author), Matschinske, Julian (Author), Marcon, Yannick (Author), Rajan, Sivanesan (Author), Rieger, Thilo (Author), Ryser-Welch, Patricia (Author), Späth, Julian (Author), Herrmann, Carl (Author), Schwarz, Emanuel (Author)
Format: Article (Journal)
Language: English
Published: 08 September 2022
In: Bioinformatics
Year: 2022, Volume: 38, Issue: 21, Pages: 4919-4926
ISSN: 1367-4811
DOI: 10.1093/bioinformatics/btac616
Online Access: Publisher, free access, full text: https://doi.org/10.1093/bioinformatics/btac616
Publisher, free access, full text: https://academic.oup.com/bioinformatics/article/38/21/4919/6694043?login=true
Author Notes:Han Cao, Youcheng Zhang, Jan Baumbach, Paul R Burton, Dominic Dwyer, Nikolaos Koutsouleris, Julian Matschinske, Yannick Marcon, Sivanesan Rajan, Thilo Rieg, Patricia Ryser-Welch, Julian Späth, The COMMITMENT Consortium, Carl Herrmann and Emanuel Schwarz
Description
Summary: In multi-cohort machine learning studies, it is critical to differentiate between effects that are reproducible across cohorts and those that are cohort-specific. Multi-task learning (MTL) is a machine learning approach that facilitates this differentiation through the simultaneous learning of prediction tasks across cohorts. Since multi-cohort data can often not be combined into a single storage solution, an MTL implementation for geographically distributed data sources would be of substantial utility.

Here, we describe the development of ‘dsMTL’, a computational framework for privacy-preserving, distributed multi-task machine learning that comprises three supervised and one unsupervised algorithm. First, we derive the theoretical properties of these methods and the relevant machine learning workflows to ensure the validity of the software implementation. Second, we implement dsMTL as a library for the R programming language, building on the DataSHIELD platform that supports the federated analysis of sensitive individual-level data. Third, we demonstrate the applicability of dsMTL to comorbidity modeling in distributed data. We show that comorbidity modeling using dsMTL outperformed both conventional federated machine learning and the aggregation of multiple models built on the distributed datasets individually. The application of dsMTL was computationally efficient and highly scalable when applied to moderate-size (n < 500) real expression data given the actual network latency.

dsMTL is freely available at https://github.com/transbioZI/dsMTLBase (server-side package) and https://github.com/transbioZI/dsMTLClient (client-side package). Supplementary data are available at Bioinformatics online.
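The core idea the summary describes, learning one model per cohort while a joint penalty separates reproducible from cohort-specific effects, can be sketched independently of the dsMTL R API (which is not reproduced here). The following is a minimal, hypothetical NumPy illustration of L2,1-regularized multi-task least squares via proximal gradient descent; all function names and parameters are the author's illustrative assumptions, not part of dsMTL.

```python
import numpy as np

def prox_l21(W, t):
    """Row-wise soft-thresholding: the proximal operator of the L2,1 norm.
    It shrinks whole feature rows of W toward zero, so the tasks (cohorts)
    are encouraged to share a common set of predictive features."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def mtl_least_squares(Xs, ys, lam=0.1, lr=0.05, iters=2000):
    """Jointly fit one least-squares model per task, coupled through an
    L2,1 penalty on the coefficient matrix W (rows = features, cols = tasks)."""
    p, k = Xs[0].shape[1], len(Xs)
    W = np.zeros((p, k))
    for _ in range(iters):
        G = np.zeros_like(W)
        for j, (X, y) in enumerate(zip(Xs, ys)):
            # Gradient of the per-task squared loss, averaged over samples
            G[:, j] = X.T @ (X @ W[:, j] - y) / len(y)
        # Gradient step on the smooth part, proximal step on the penalty
        W = prox_l21(W - lr * G, lr * lam)
    return W

# Synthetic demo: three cohorts, only the first three features carry signal
rng = np.random.default_rng(0)
p, n, k = 10, 200, 3
w_true = np.zeros((p, k))
w_true[:3] = rng.normal(size=(3, k))
Xs = [rng.normal(size=(n, p)) for _ in range(k)]
ys = [X @ w_true[:, j] for j, X in enumerate(Xs)]
W = mtl_least_squares(Xs, ys, lam=0.05)
```

In this sketch the shared support recovered by the L2,1 penalty plays the role of the cross-cohort (reproducible) effect, while the per-column coefficients capture cohort-specific variation; dsMTL itself additionally performs this computation in a federated, privacy-preserving manner across DataSHIELD servers, which the local demo above does not model.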
Item Description: Viewed on 01.08.2023
Physical Description:Online Resource