Sine Series Approximation of the Mod Function for Bootstrapping of Approximate HE

Charanjit S. Jutla, Nathan Manohar

Lecture notes in computer science, Journal year: 2022, Issue: unknown, pp. 491-520

Published: Jan. 1, 2022

Language: English

SAFELearn: Secure Aggregation for private FEderated Learning

Hossein Fereidooni, Samuel Marchal, Markus Miettinen et al.

Published: May 1, 2021

Federated learning (FL) is an emerging distributed machine learning paradigm which addresses critical data privacy issues by enabling clients, using an aggregation server (aggregator), to jointly train a global model without revealing their training data. Thereby, it not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients for training in parallel. However, FL is vulnerable to so-called inference attacks, in which malicious aggregators can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the summation or average of the clients' updates. Unfortunately, existing protocols for secure aggregation suffer from high communication and computation overheads and many communication rounds. In this work, we present SAFELearn, a generic design for private FL systems that protects against inference attacks that have to analyze individual model updates during aggregation. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 communication rounds in each training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
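The secure-aggregation property described in this abstract (the aggregator learns only the sum of client updates) can be sketched with pairwise additive masking. This is a toy illustration under simplified assumptions (a single shared RNG stands in for per-pair key agreement), not the SAFELearn protocol itself:

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    """One mask per client pair: client i adds it, client j subtracts it,
    so all masks cancel in the aggregator's sum. (Toy: a real protocol
    derives each pair's mask from a shared secret, not a global seed.)"""
    rng = random.Random(seed)
    masks = {i: [0.0] * dim for i in range(n_clients)}
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [rng.uniform(-1, 1) for _ in range(dim)]
            masks[i] = [a + b for a, b in zip(masks[i], m)]
            masks[j] = [a - b for a, b in zip(masks[j], m)]
    return masks

def secure_sum(updates):
    """Aggregator's view: only masked updates, whose sum is unmasked."""
    n, dim = len(updates), len(updates[0])
    masks = pairwise_masks(n, dim)
    masked = [[u + m for u, m in zip(updates[i], masks[i])]
              for i in range(n)]
    return [sum(col) for col in zip(*masked)]

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
total = secure_sum(updates)  # masks cancel: close to [9.0, 12.0]
```

Each individual masked update is statistically noise to the aggregator; only the column-wise sum reveals anything, and it reveals exactly the aggregate.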

Language: English

Cited

143

Truly privacy-preserving federated analytics for precision medicine with multiparty homomorphic encryption

David Froelicher, Juan Ramón Troncoso-Pastoriza, Jean Louis Raisaro et al.

Nature Communications, Journal year: 2021, Issue: 12(1)

Published: Oct. 11, 2021

Using real-world evidence in biomedical research, an indispensable complement to clinical trials, requires access to large quantities of patient data that are typically held separately by multiple healthcare institutions. We propose FAMHE, a novel federated analytics system that, based on multiparty homomorphic encryption (MHE), enables privacy-preserving analyses of distributed datasets by yielding highly accurate results without revealing any intermediate data. We demonstrate the applicability of FAMHE to essential biomedical analysis tasks, including Kaplan-Meier survival analysis in oncology and genome-wide association studies in medical genetics. Using our system, we accurately and efficiently reproduce two published studies from the centralized setting, enabling insights not possible from individual institutions alone. Our work represents a necessary key step towards overcoming the privacy hurdle in multi-centric scientific collaborations.
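The homomorphic-encryption property this abstract builds on, namely computing aggregate statistics on ciphertexts without decrypting any individual input, can be illustrated with a toy additively homomorphic scheme. The sketch below uses textbook Paillier with tiny, insecure parameters for readability; FAMHE itself uses lattice-based multiparty HE, so this only demonstrates the additive property, not the cited system:

```python
import math, random

# Textbook Paillier with toy parameters (p, q far too small for security).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption constant

def enc(m, rng=random.Random(1)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a server can sum encrypted inputs without seeing any of them.
c_sum = (enc(5) * enc(7)) % n2
assert dec(c_sum) == 12
```

The same multiply-to-add trick is what lets a central party combine encrypted per-site statistics into an exact global result while every per-site value stays encrypted.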

Language: English

Cited

125

PPML-Omics: A privacy-preserving federated machine learning method protects patients’ privacy in omic data

Juexiao Zhou, Siyuan Chen, Yulian Wu et al.

Science Advances, Journal year: 2024, Issue: 10(5)

Published: Jan. 31, 2024

Modern machine learning models applied to various tasks in omic data analysis give rise to threats of privacy leakage for patients involved in those datasets. Here, we proposed a secure and privacy-preserving machine learning method (PPML-Omics) by designing a decentralized differentially private federated learning algorithm. We applied PPML-Omics to analyze data from three sequencing technologies and addressed the privacy concern in three major tasks with representative deep learning models. We examined privacy breaches in depth through privacy attack experiments and demonstrated that PPML-Omics could protect patients' privacy. In each of these applications, PPML-Omics was able to outperform methods of comparison under the same level of privacy guarantee, demonstrating its versatility in simultaneously balancing privacy-preserving capability and utility in omic data analysis. Furthermore, we gave a theoretical proof of the privacy-preserving capability of PPML-Omics, suggesting it is the first mathematically guaranteed method with robust and generalizable empirical performance in protecting patients' privacy in omic data.
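The differentially private federated update this abstract refers to rests on a standard building block: clip each client's gradient to bound its sensitivity, then add noise calibrated to that bound. A minimal sketch of this generic DP-SGD-style sanitization (not the PPML-Omics algorithm; `clip_norm` and `sigma` are illustrative values):

```python
import math, random

def dp_sanitize(grad, clip_norm=1.0, sigma=0.5, rng=random.Random(42)):
    """Clip a gradient to L2 norm <= clip_norm, then add Gaussian noise
    scaled to that sensitivity bound."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, sigma * clip_norm) for g in clipped]

# A gradient of norm 5 is scaled down to norm 1 before noise is added,
# so no single record can move the aggregate by more than clip_norm.
noisy = dp_sanitize([3.0, 4.0])
```

Clipping is what makes the noise scale meaningful: without a norm bound, one outlier gradient could dominate the sum regardless of the noise added.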

Language: English

Cited

19

Efficiency Optimization Techniques in Privacy-Preserving Federated Learning With Homomorphic Encryption: A Brief Survey

Qipeng Xie, Siyang Jiang, Linshan Jiang et al.

IEEE Internet of Things Journal, Journal year: 2024, Issue: 11(14), pp. 24569-24580

Published: July 8, 2024

Language: English

Cited

18

Efficient Bootstrapping for Approximate Homomorphic Encryption with Non-sparse Keys

Jean-Philippe Bossuat, Christian Mouchet, Juan Ramón Troncoso-Pastoriza et al.

Lecture notes in computer science, Journal year: 2021, Issue: unknown, pp. 587-617

Published: Jan. 1, 2021

Language: English

Cited

91

Efficient Dropout-Resilient Aggregation for Privacy-Preserving Machine Learning

Ziyao Liu, Jiale Guo, Kwok‐Yan Lam et al.

IEEE Transactions on Information Forensics and Security, Journal year: 2022, Issue: 18, pp. 1839-1854

Published: Apr. 14, 2022

With the increasing adoption of data-hungry machine learning algorithms, personal data privacy has emerged as one of the key concerns that could hinder the success of digital transformation. As such, Privacy-Preserving Machine Learning (PPML) has received much attention from both academia and industry. However, organizations are faced with the dilemma that, on the one hand, they are encouraged to share data to enhance ML performance, but on the other hand they could potentially be breaching the relevant data privacy regulations. Practical PPML typically allows multiple participants to individually train their models, which are then aggregated to construct a global model in a privacy-preserving manner, e.g., based on multi-party computation or homomorphic encryption. Nevertheless, in the most important applications of large-scale PPML, such as updating a global model for federated learning by aggregating clients' gradients, e.g., for consumer behavior modeling in mobile application services, some participants are inevitably resource-constrained devices that may drop out of the system at any time due to their mobility. Therefore, the dropout resilience of privacy-preserving aggregation has become an important problem to be tackled. In this paper, we propose a scalable privacy-preserving aggregation scheme that can tolerate dropout at any time and is secure against both semi-honest and active malicious adversaries under proper parameter settings. By replacing communication-intensive building blocks with a seed-homomorphic pseudo-random generator, and relying on the additive property of the Shamir secret sharing scheme, our scheme outperforms state-of-the-art schemes by up to 6.37× in runtime and provides stronger dropout-resilience. Its simplicity makes it attractive for implementation and further improvements.
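The Shamir-based dropout handling sketched in this abstract relies on t-of-n secret sharing: a dropped client's PRG seed can be reconstructed from the shares held by any t surviving participants. A minimal Shamir implementation over a toy prime field (illustrative parameters, not those of the cited scheme):

```python
import random

P = 2**61 - 1  # Mersenne prime, toy field modulus

def share(secret, t, n, rng=random.Random(0)):
    """Split `secret` into n shares; any t of them reconstruct it.
    Shares are points on a random degree-(t-1) polynomial with
    f(0) = secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

seed = 123456789  # e.g. a dropped client's PRG seed
shares = share(seed, t=3, n=5)
assert reconstruct(shares[:3]) == seed  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, which is what keeps an online client's own seed private while still letting the group recover seeds of clients that drop out.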

Language: English

Cited

65

VerSA: Verifiable Secure Aggregation for Cross-Device Federated Learning

Changhee Hahn, Ho-Dong Kim, Minjae Kim et al.

IEEE Transactions on Dependable and Secure Computing, Journal year: 2021, Issue: 20(1), pp. 36-52

Published: Nov. 9, 2021

In privacy-preserving cross-device federated learning, users train a global model on their local data and submit encrypted local models, while an untrusted central server aggregates the encrypted models to obtain an updated global model. Prior work has demonstrated how to verify the correctness of aggregation in such a setting. However, such verification relies on strong assumptions, such as a trusted setup among all users under unreliable network conditions, or it suffers from expensive cryptographic operations, such as bilinear pairing. In this paper, we scrutinize the verification mechanism of prior work and propose a model recovery attack, demonstrating that most local models can be leaked within a reasonable time (e.g., 98% of models are recovered within 21 h). Then, we propose VerSA, a verifiable secure aggregation protocol for cross-device federated learning. VerSA does not require any trusted setup between users while minimizing the verification cost, by enabling both the server and users to utilize only a lightweight pseudorandom generator to prove and verify the correctness of aggregation. We experimentally confirm the efficiency of VerSA under diverse datasets, demonstrating that it is orders of magnitude faster than prior work.

Language: English

Cited

58

Privacy-Preserving Aggregation in Federated Learning: A Survey

Ziyao Liu, Jiale Guo, Wenzhuo Yang et al.

IEEE Transactions on Big Data, Journal year: 2022, Issue: unknown, pp. 1-20

Published: July 15, 2022

Over the recent years, with the increasing adoption of Federated Learning (FL) algorithms and growing concerns over personal data privacy, Privacy-Preserving Federated Learning (PPFL) has attracted tremendous attention from both academia and industry. Practical PPFL typically allows multiple participants to individually train their machine learning models, which are then aggregated to construct a global model in a privacy-preserving manner. As such, Privacy-Preserving Aggregation (PPAgg) as the key protocol has received substantial research interest. This survey aims to fill the gap between the large number of studies on PPFL, where PPAgg is adopted to provide a privacy guarantee, and the lack of a comprehensive survey of the PPAgg protocols applied in FL systems. It reviews the PPAgg protocols proposed to address privacy and security issues in FL systems. The focus is placed on the construction of PPAgg protocols, with an extensive analysis of the advantages and disadvantages of the selected solutions. Additionally, we discuss open-source FL frameworks that support PPAgg. Finally, we highlight significant challenges and future research directions for applying PPAgg to FL systems and for its combination with other technologies for further improvement.

Language: English

Cited

52

LSFL: A Lightweight and Secure Federated Learning Scheme for Edge Computing

Zhuangzhuang Zhang, Libing Wu, Chuanguo Ma et al.

IEEE Transactions on Information Forensics and Security, Journal year: 2022, Issue: 18, pp. 365-379

Published: Nov. 14, 2022

Nowadays, many edge computing service providers expect to leverage the computational power and data of edge nodes to improve their models without transmitting raw data. Federated learning facilitates collaborative training of a global model among distributed edge nodes without sharing their training data. Unfortunately, existing privacy-preserving federated learning applied to this scenario still faces three challenges: 1) it typically employs complex cryptographic algorithms, which results in excessive overhead; 2) it cannot guarantee Byzantine robustness while preserving data privacy; 3) edge nodes have limited resources and may drop out frequently. As a result, it cannot be effectively applied in edge computing scenarios. Therefore, we propose LSFL, a lightweight and secure federated learning scheme that combines privacy preservation with Byzantine-robustness. Specifically, we design a Lightweight Two-Server Secure Aggregation protocol, which utilizes two servers to enable secure Byzantine-robust model aggregation. This protects model privacy and prevents Byzantine nodes from influencing the global model. We implement and evaluate LSFL in a LAN environment, and the experiment results show that LSFL meets the fidelity, security, and efficiency goals, and maintains model accuracy compared to the popular FedAvg scheme.
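The two-server aggregation idea can be illustrated with additive secret sharing: each client splits its update into two random shares, one per server, so neither server alone sees the update, while the share sums still combine to the true aggregate. A toy sketch of this generic pattern (not the LSFL protocol, which additionally provides Byzantine-robust aggregation):

```python
import random

def split(update, rng):
    """Split an update into two additive shares; each share alone is
    uniformly random and reveals nothing about the update."""
    s1 = [rng.uniform(-10, 10) for _ in update]
    s2 = [u - r for u, r in zip(update, s1)]
    return s1, s2

rng = random.Random(7)
updates = [[1.0, 2.0], [3.0, 4.0]]
server1 = [0.0, 0.0]
server2 = [0.0, 0.0]
for u in updates:
    a, b = split(u, rng)  # one share to each (non-colluding) server
    server1 = [x + y for x, y in zip(server1, a)]
    server2 = [x + y for x, y in zip(server2, b)]

# The servers exchange only their share sums to recover the aggregate.
aggregate = [x + y for x, y in zip(server1, server2)]  # close to [4.0, 6.0]
```

The security assumption is that the two servers do not collude; either server by itself holds only random-looking shares.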

Language: English

Cited

51

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

Dario Pasquini, Danilo Francati, Giuseppe Ateniese et al.

Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Journal year: 2022, Issue: unknown, pp. 2429-2443

Published: Nov. 7, 2022

Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks.

Language: English

Cited

48