Sine Series Approximation of the Mod Function for Bootstrapping of Approximate HE

Charanjit S. Jutla, Nathan Manohar

Lecture Notes in Computer Science, Journal year: 2022, Issue: unknown, Pages: 491–520

Published: Jan. 1, 2022

Language: English

SAFELearn: Secure Aggregation for private FEderated Learning

Hossein Fereidooni, Samuel Marchal, Markus Miettinen

et al.

Published: May 1, 2021

Federated learning (FL) is an emerging distributed machine learning paradigm which addresses critical data privacy issues by enabling clients, using an aggregation server (aggregator), to jointly train a global model without revealing their training data. Thereby, it not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients for training in parallel. However, FL is vulnerable to so-called inference attacks, in which malicious aggregators can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the summation or average of the clients' updates. Unfortunately, existing protocols for secure aggregation suffer from high communication and computation overheads and many communication rounds. In this work, we present SAFELearn, a generic design for private FL systems that protects against inference attacks that have to analyze individual model updates before aggregation. It is flexibly adaptable to the efficiency and security requirements of various applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication in each iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
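The core guarantee the abstract describes, that the aggregator learns only the sum of client updates, can be illustrated with pairwise additive masking. This is a minimal sketch of that general idea, not the SAFELearn protocol itself; the modulus and vector sizes are illustrative.

```python
import secrets

# Each pair of clients (i, j) agrees on a random mask per coordinate;
# client i adds it and client j subtracts it.  The masks cancel in the
# sum, so the server recovers the aggregate but sees only random-looking
# individual vectors.
Q = 2**61 - 1  # public modulus for share arithmetic (illustrative)

def mask_updates(updates):
    """updates: list of per-client integer vectors, all the same length."""
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(dim):
                r = secrets.randbelow(Q)
                masked[i][k] = (masked[i][k] + r) % Q
                masked[j][k] = (masked[j][k] - r) % Q
    return masked

def aggregate(masked):
    """Server side: sum the masked vectors; the pairwise masks cancel."""
    dim = len(masked[0])
    return [sum(v[k] for v in masked) % Q for k in range(dim)]

updates = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert aggregate(mask_updates(updates)) == [12, 15, 18]
```

Real protocols derive the pairwise masks from shared seeds and add dropout recovery; the sketch only shows why the server's view reduces to the sum.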

Language: English

Cited by

141

Truly privacy-preserving federated analytics for precision medicine with multiparty homomorphic encryption

David Froelicher, Juan Ramón Troncoso-Pastoriza, Jean Louis Raisaro

et al.

Nature Communications, Journal year: 2021, Issue: 12(1)

Published: Oct. 11, 2021

Abstract: Using real-world evidence in biomedical research, an indispensable complement to clinical trials, requires access to large quantities of patient data that are typically held separately by multiple healthcare institutions. We propose FAMHE, a novel federated analytics system that, based on multiparty homomorphic encryption (MHE), enables privacy-preserving analyses of distributed datasets, yielding highly accurate results without revealing any intermediate data. We demonstrate the applicability of FAMHE to essential biomedical analysis tasks, including Kaplan-Meier survival analysis in oncology and genome-wide association studies in medical genetics. Using our system, we accurately and efficiently reproduce two published centralized studies in a federated setting, enabling insights not possible from individual institutions alone. Our work represents a necessary key step towards overcoming the privacy hurdle in multi-centric scientific collaborations.

Language: English

Cited by

121

PPML-Omics: A privacy-preserving federated machine learning method protects patients' privacy in omic data

Juexiao Zhou, Siyuan Chen, Yulian Wu

et al.

Science Advances, Journal year: 2024, Issue: 10(5)

Published: Jan. 31, 2024

Modern machine learning models applied to various tasks in omic data analysis give rise to threats of privacy leakage for the patients involved in those datasets. Here, we propose a secure and privacy-preserving machine learning method (PPML-Omics) designed around a decentralized, differentially private federated learning algorithm. We applied PPML-Omics to analyze data from three sequencing technologies and addressed the privacy concern in major tasks under representative deep learning models. We examined privacy breaches in depth through privacy attack experiments and demonstrated that PPML-Omics could protect patients' privacy. In each of these applications, PPML-Omics was able to outperform the compared methods at the same level of privacy guarantee, demonstrating its versatility in simultaneously balancing privacy-preserving capability and utility in omic data analysis. Furthermore, we give a theoretical proof for PPML-Omics, suggesting it is the first mathematically guaranteed method with robust and generalizable empirical performance in protecting patients' privacy in omic data.
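The differential-privacy ingredient the abstract refers to is typically realized by clipping each client's update and adding calibrated noise before it leaves the client. The sketch below shows that generic pattern; it is not the PPML-Omics implementation, and `clip_norm` and `sigma` are illustrative parameters, not values from the paper.

```python
import random

def clip_l2(vec, clip_norm):
    """Scale vec down so its L2 norm is at most clip_norm (bounds sensitivity)."""
    norm = sum(x * x for x in vec) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in vec]

def privatize(vec, clip_norm=1.0, sigma=0.5, seed=None):
    """Clip, then add independent Gaussian noise to each coordinate."""
    rng = random.Random(seed)
    clipped = clip_l2(vec, clip_norm)
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]
```

Clipping bounds how much any one patient's record can move the update, and the noise scale is set relative to that bound; the actual privacy accounting (epsilon, delta) depends on sigma, the sampling rate, and the number of rounds.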

Language: English

Cited by

18

Efficient Bootstrapping for Approximate Homomorphic Encryption with Non-sparse Keys

Jean-Philippe Bossuat, Christian Mouchet, Juan Ramón Troncoso-Pastoriza

et al.

Lecture Notes in Computer Science, Journal year: 2021, Issue: unknown, Pages: 587–617

Published: Jan. 1, 2021

Language: English

Cited by

91

Efficient Dropout-Resilient Aggregation for Privacy-Preserving Machine Learning

Ziyao Liu, Jiale Guo, Kwok‐Yan Lam

et al.

IEEE Transactions on Information Forensics and Security, Journal year: 2022, Issue: 18, Pages: 1839–1854

Published: Apr. 14, 2022

With the increasing adoption of data-hungry machine learning algorithms, personal data privacy has emerged as one of the key concerns that could hinder the success of digital transformation. As such, Privacy-Preserving Machine Learning (PPML) has received much attention from both academia and industry. However, organizations face the dilemma that, on one hand, they are encouraged to share data to enhance ML performance, but on the other hand they could potentially be breaching the relevant data privacy regulations. Practical PPML typically allows multiple participants to individually train their models, which are then aggregated to construct a global model in a privacy-preserving manner, e.g., based on multi-party computation or homomorphic encryption. Nevertheless, in the most important applications of large-scale PPML, such as federated learning that aggregates clients' gradients to update the global model, e.g., for consumer behavior modeling in mobile application services, some clients are inevitably resource-constrained mobile devices that may drop out of the system due to their mobility. Therefore, the dropout-resilience of the aggregation has become an important problem to be tackled. In this paper, we propose a scalable privacy-preserving aggregation scheme that can tolerate dropout of clients at any time, and is secure against both semi-honest and active malicious adversaries when proper parameters are set. By replacing communication-intensive building blocks with a seeded pseudo-random generator, and relying on the additive property of the Shamir secret sharing scheme, our scheme outperforms state-of-the-art schemes by up to 6.37× in runtime and provides stronger dropout-resilience. The simplicity of our scheme makes it attractive both for implementation and for further improvements.
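The "additive property of the Shamir secret sharing scheme" mentioned above means that shares of two secrets can be added pointwise, and any qualified set of parties then reconstructs the sum of the secrets. A minimal self-contained sketch, with an illustrative prime field and parameters (not the paper's scheme):

```python
import random

P = 2**31 - 1  # public prime field (illustrative)

def share(secret, t, n, rng):
    """Split `secret` with a random degree-(t-1) polynomial; any t shares reconstruct."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field P."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(42)
a_shares = share(11, t=3, n=5, rng=rng)
b_shares = share(31, t=3, n=5, rng=rng)
# Pointwise addition of shares yields shares of the sum: 11 + 31 = 42.
sum_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(sum_shares[:3]) == 42
```

In a secure-aggregation setting, each client shares its (masked) update this way, and the additive property lets the servers reconstruct only the aggregate even when some clients drop out.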

Language: English

Cited by

65

Privacy-Preserving Aggregation in Federated Learning: A Survey

Ziyao Liu, Jiale Guo, Wenzhuo Yang

et al.

IEEE Transactions on Big Data, Journal year: 2022, Issue: unknown, Pages: 1–20

Published: Jul. 15, 2022

Over recent years, with the increasing adoption of Federated Learning (FL) algorithms and growing concerns over personal data privacy, Privacy-Preserving Federated Learning (PPFL) has attracted tremendous attention from both academia and industry. Practical PPFL typically allows multiple participants to individually train their machine learning models, which are then aggregated to construct a global model in a privacy-preserving manner. As such, Privacy-Preserving Aggregation (PPAgg), as the key protocol, has received substantial research interest. This survey aims to fill the gap between the large number of studies on PPFL, where PPAgg is adopted to provide the privacy guarantee, and the lack of a comprehensive review of PPAgg protocols applied in FL systems. It reviews the PPAgg protocols proposed to address privacy and security issues in FL systems. The focus is placed on the construction of PPAgg protocols, with an extensive analysis of the advantages and disadvantages of the selected solutions. Additionally, we discuss open-source FL frameworks that support PPAgg. Finally, we highlight significant challenges and future research directions for applying PPAgg to FL systems and for its combination with other technologies for further improvement.

Language: English

Cited by

51

LSFL: A Lightweight and Secure Federated Learning Scheme for Edge Computing

Zhuangzhuang Zhang, Libing Wu, Chuanguo Ma

et al.

IEEE Transactions on Information Forensics and Security, Journal year: 2022, Issue: 18, Pages: 365–379

Published: Nov. 14, 2022

Nowadays, many edge computing service providers expect to leverage the computational power and data of edge nodes to improve their models without transmitting raw data. Federated learning facilitates collaborative training of a global model among distributed nodes without data sharing. Unfortunately, existing privacy-preserving federated learning applied to this scenario still faces three challenges: 1) it typically employs complex cryptographic algorithms, which results in excessive overhead; 2) it cannot guarantee Byzantine robustness while preserving privacy; 3) edge nodes have limited resources and may drop out frequently. As a result, it cannot be effectively applied in edge computing scenarios. Therefore, we propose a lightweight and secure federated learning scheme, LSFL, which combines the features of privacy preservation and Byzantine robustness. Specifically, we design a Lightweight Two-Server Secure Aggregation protocol, which utilizes two servers to enable secure model aggregation. This protects model privacy and prevents Byzantine nodes from influencing the global model. We implement and evaluate LSFL in a LAN environment, and the experiments show that LSFL meets the fidelity, security, and efficiency goals, and maintains model accuracy compared with the popular FedAvg scheme.
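The two-server aggregation idea described above can be illustrated with additive secret sharing: each client splits its update into two random shares, one per server, so neither server alone learns anything, yet the two partial sums combine to the true aggregate. This is a generic sketch of the two-server pattern, not the LSFL protocol (which additionally handles Byzantine robustness); the modulus is illustrative.

```python
import secrets

Q = 2**61 - 1  # public modulus for share arithmetic

def split(update):
    """Client side: split an integer vector into two additive shares mod Q."""
    s1 = [secrets.randbelow(Q) for _ in update]
    s2 = [(u - r) % Q for u, r in zip(update, s1)]
    return s1, s2

def server_sum(shares):
    """Each server sums the shares it received, coordinate-wise."""
    return [sum(col) % Q for col in zip(*shares)]

def combine(agg1, agg2):
    """Adding the two servers' partial aggregates recovers the model sum."""
    return [(a + b) % Q for a, b in zip(agg1, agg2)]

updates = [[1, 2], [3, 4], [5, 6]]
shares = [split(u) for u in updates]
agg1 = server_sum([s1 for s1, _ in shares])  # held by server 1 only
agg2 = server_sum([s2 for _, s2 in shares])  # held by server 2 only
assert combine(agg1, agg2) == [9, 12]
```

The security of this pattern rests on the two servers not colluding, which is the standard assumption behind lightweight two-server designs.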

Language: English

Cited by

50

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

Dario Pasquini, Danilo Francati, Giuseppe Ateniese

et al.

Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Journal year: 2022, Issue: unknown, Pages: 2429–2443

Published: Nov. 7, 2022

Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks.

Language: English

Cited by

48

DisBezant: Secure and Robust Federated Learning Against Byzantine Attack in IoT-Enabled MTS

Xindi Ma, Qi Jiang, Mohammad Shojafar

et al.

IEEE Transactions on Intelligent Transportation Systems, Journal year: 2022, Issue: unknown, Pages: 1–11

Published: Jan. 1, 2022

With the intelligentization of the Maritime Transportation System (MTS), Internet of Things (IoT) and machine learning technologies have been widely used to achieve intelligent control and routing planning for ships. As an important branch of machine learning, federated learning is the first choice to train an accurate joint model without sharing ships' data directly. However, there are still many unsolved challenges when using federated learning in IoT-enabled MTS, such as privacy preservation and Byzantine attacks. To surmount the above challenges, a novel mechanism, namely DisBezant, is designed to achieve secure and Byzantine-robust federated learning in IoT-enabled MTS. Specifically, a credibility-based mechanism is proposed to resist Byzantine attacks on non-iid (not independent and identically distributed) datasets, which are usually gathered from heterogeneous ships. The credibility is introduced to measure the trustworthiness of the knowledge uploaded by ships and is updated based on their shared information in each epoch. Then, we design an efficient privacy-preserving gradient aggregation protocol based on a secure two-party calculation protocol. With the help of a central server, ships can accurately recognise attackers and update the global model parameters privately. Furthermore, we theoretically discuss the privacy preservation and efficiency of DisBezant. To verify its effectiveness, we evaluate it over three real datasets, and the results demonstrate that DisBezant can efficiently and effectively achieve Byzantine-robust federated learning. Even when 40% of the participating nodes are Byzantine attackers, our mechanism can still ensure accurate training.

Language: English

Cited by

42

Encrypted federated learning for secure decentralized collaboration in cancer image analysis

Daniel Truhn, Soroosh Tayebi Arasteh, Oliver Lester Saldanha

et al.

Medical Image Analysis, Journal year: 2023, Issue: 92, Pages: 103059

Published: Dec. 7, 2023

Artificial intelligence (AI) has a multitude of applications in cancer research and oncology. However, the training of AI systems is impeded by the limited availability of large datasets due to data protection requirements and other regulatory obstacles. Federated and swarm learning represent possible solutions to this problem by collaboratively training AI models while avoiding data transfer. However, in these decentralized methods, weight updates are still transferred to an aggregation server for merging the models. This leaves the possibility of a breach of data privacy, for example by model inversion or membership inference attacks mounted by untrusted servers. Somewhat-homomorphically-encrypted federated learning (SHEFL) is a solution to this problem because only encrypted weights are transferred, and model updates are performed in encrypted space. Here, we demonstrate the first successful implementation of SHEFL in a range of clinically relevant tasks in cancer image analysis on multicentric datasets in radiology and histopathology. We show that SHEFL enables the training of AI models which outperform locally trained models and perform on par with centrally trained ones. In the future, SHEFL can enable multiple institutions to co-train AI models without forsaking data governance and without ever transmitting any decryptable data.

Language: English

Cited by

33