Sine Series Approximation of the Mod Function for Bootstrapping of Approximate HE

Charanjit S. Jutla, Nathan Manohar

Lecture Notes in Computer Science, Journal Year: 2022, Volume and Issue: unknown, P. 491 - 520

Published: Jan. 1, 2022

Language: English

SAFELearn: Secure Aggregation for private FEderated Learning
Hossein Fereidooni, Samuel Marchal, Markus Miettinen

et al.

Published: May 1, 2021

Federated learning (FL) is an emerging distributed machine learning paradigm which addresses critical data privacy issues by enabling clients, via an aggregation server (aggregator), to jointly train a global model without revealing their training data. Thereby, it not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients to train in parallel. However, FL is vulnerable to so-called inference attacks, in which malicious aggregators can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the summation or average of the clients' updates. Unfortunately, existing protocols for secure aggregation suffer from high communication and computation costs and many communication rounds. In this work, we present SAFELearn, a generic design for efficient private FL systems that protects against inference attacks that have to analyze individual model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication in each training iteration, do not use any computationally expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.

Language: English

Citations

143
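The two-server flavor of secure aggregation that SAFELearn's two-party-computation instantiation builds on can be illustrated with a minimal additive secret-sharing sketch. This is not SAFELearn's actual implementation; the field size and the integer encoding of model parameters are arbitrary choices for illustration.

```python
import secrets

P = 2**61 - 1  # prime modulus for the share arithmetic (illustrative choice)

def share(update):
    """Split a vector of field elements into two additive shares."""
    s1 = [secrets.randbelow(P) for _ in update]
    s2 = [(x - r) % P for x, r in zip(update, s1)]
    return s1, s2  # send s1 to server A, s2 to server B

def server_sum(shares):
    """Each server sums the shares it received, coordinate-wise."""
    return [sum(col) % P for col in zip(*shares)]

def reconstruct(sum_a, sum_b):
    """Combining both servers' partial sums yields the aggregate update."""
    return [(a + b) % P for a, b in zip(sum_a, sum_b)]

# three clients, each with a 4-parameter "model" encoded as field elements
updates = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
to_a, to_b = zip(*(share(u) for u in updates))
agg = reconstruct(server_sum(to_a), server_sum(to_b))
print(agg)  # [111, 222, 333, 444]
```

Neither server alone learns anything about an individual update, since each sees only one uniformly random share of it; only the combined partial sums reveal the aggregate.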

Truly privacy-preserving federated analytics for precision medicine with multiparty homomorphic encryption
David Froelicher, Juan Ramón Troncoso-Pastoriza, Jean Louis Raisaro

et al.

Nature Communications, Journal Year: 2021, Volume and Issue: 12(1)

Published: Oct. 11, 2021

Using real-world evidence in biomedical research, an indispensable complement to clinical trials, requires access to large quantities of patient data that are typically held separately by multiple healthcare institutions. We propose FAMHE, a novel federated analytics system that, based on multiparty homomorphic encryption (MHE), enables privacy-preserving analyses of distributed datasets, yielding highly accurate results without revealing any intermediate data. We demonstrate the applicability of FAMHE to essential biomedical analysis tasks, including Kaplan-Meier survival analysis in oncology and genome-wide association studies in medical genetics. Using our system, we accurately and efficiently reproduce two published studies from a centralized setting, enabling insights not possible from the data of individual institutions alone. Our work represents a necessary key step towards overcoming the privacy hurdle in multi-centric scientific collaborations.

Language: English

Citations

125
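FAMHE itself builds on lattice-based multiparty homomorphic encryption (CKKS-style), which is not reproduced here. The additive homomorphism that makes aggregation under encryption possible can, however, be illustrated with a toy Paillier cryptosystem: the product of ciphertexts decrypts to the sum of plaintexts. The primes are tiny and the scheme is a stand-in, not FAMHE's MHE.

```python
from math import gcd
import secrets

# Toy Paillier keypair with small primes -- illustration only, not secure.
p, q = 1_000_003, 1_000_033
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

def enc(m):
    """Encrypt m under the public key (n, g) with fresh randomness r."""
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt with the private key (lam, mu)."""
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c = enc(5)
c = (c * enc(7)) % n2
c = (c * enc(30)) % n2
print(dec(c))  # 42
```

An aggregator holding only ciphertexts can therefore compute an encrypted sum of contributions without ever seeing an individual value, which is the property federated analytics systems like FAMHE exploit (with a multiparty, lattice-based scheme in their case).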

PPML-Omics: A privacy-preserving federated machine learning method protects patients’ privacy in omic data
Juexiao Zhou, Siyuan Chen, Yulian Wu

et al.

Science Advances, Journal Year: 2024, Volume and Issue: 10(5)

Published: Jan. 31, 2024

Modern machine learning models applied to various tasks in omic data analysis give rise to threats of privacy leakage for the patients involved in those datasets. Here, we propose a secure and privacy-preserving machine learning method (PPML-Omics) built on a decentralized, differentially private federated learning algorithm. We applied PPML-Omics to analyze data from three sequencing technologies and addressed the privacy concern in three major tasks under representative deep learning models. We examined privacy breaches in depth through privacy attack experiments and demonstrated that PPML-Omics could protect patients' privacy. In each of these applications, PPML-Omics outperformed the comparison methods at the same level of privacy guarantee, demonstrating its versatility in simultaneously balancing privacy-preserving capability and utility in omic data analysis. Furthermore, we give a theoretical proof for PPML-Omics, suggesting it is the first method with a mathematical guarantee and robust, generalizable empirical performance in protecting patients' omic data.

Language: English

Citations

19
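PPML-Omics' exact algorithm is not given above, but the core ingredient of differentially private federated aggregation, clipping each client update and adding Gaussian noise calibrated to the clipping bound, can be sketched as follows. The clip bound `c` and noise scale `sigma` are placeholder values, not the paper's parameters.

```python
import math
import random

def clip(update, c):
    """Scale a client update so its L2 norm is at most c (bounds per-client sensitivity)."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, c / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_average(updates, c, sigma, rng=random):
    """Average the clipped updates, then add Gaussian noise scaled to c / n."""
    n = len(updates)
    clipped = [clip(u, c) for u in updates]
    avg = [sum(col) / n for col in zip(*clipped)]
    return [v + rng.gauss(0.0, sigma * c / n) for v in avg]

updates = [[0.3, -0.1], [0.5, 0.2], [-0.2, 0.4]]
noisy = dp_average(updates, c=1.0, sigma=0.5)
print(noisy)  # the true average [0.2, 0.1667] plus per-coordinate noise
```

Because each client's influence on the average is bounded by the clip norm, the added noise yields a quantifiable differential-privacy guarantee; the privacy budget corresponding to a given `sigma` follows from the Gaussian mechanism analysis.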

Efficiency Optimization Techniques in Privacy-Preserving Federated Learning With Homomorphic Encryption: A Brief Survey

Qipeng Xie, Siyang Jiang, Linshan Jiang

et al.

IEEE Internet of Things Journal, Journal Year: 2024, Volume and Issue: 11(14), P. 24569 - 24580

Published: July 8, 2024

Language: English

Citations

18

Efficient Bootstrapping for Approximate Homomorphic Encryption with Non-sparse Keys
Jean-Philippe Bossuat, Christian Mouchet, Juan Ramón Troncoso-Pastoriza

et al.

Lecture Notes in Computer Science, Journal Year: 2021, Volume and Issue: unknown, P. 587 - 617

Published: Jan. 1, 2021

Language: English

Citations

91

Efficient Dropout-Resilient Aggregation for Privacy-Preserving Machine Learning
Ziyao Liu, Jiale Guo, Kwok‐Yan Lam

et al.

IEEE Transactions on Information Forensics and Security, Journal Year: 2022, Volume and Issue: 18, P. 1839 - 1854

Published: April 14, 2022

With the increasing adoption of data-hungry machine learning algorithms, personal data privacy has emerged as one of the key concerns that could hinder the success of digital transformation. As such, Privacy-Preserving Machine Learning (PPML) has received much attention from both academia and industry. However, organizations face a dilemma: on the one hand, they are encouraged to share data to enhance ML performance, but on the other hand, they could potentially be breaching relevant data privacy regulations. Practical PPML typically allows multiple participants to individually train their models, which are then aggregated to construct a global model in a privacy-preserving manner, e.g., based on multi-party computation or homomorphic encryption. Nevertheless, in most important applications of large-scale PPML, such as aggregating clients' gradients to update a global model for federated learning in consumer behavior modeling for mobile application services, some participants are inevitably resource-constrained mobile devices, which may drop out of the system due to their mobility. Therefore, the dropout-resilience of aggregation has become an important problem to be tackled. In this paper, we propose a scalable privacy-preserving aggregation scheme that can tolerate dropout at any time and is secure against both semi-honest and active malicious adversaries when proper parameters are set. By replacing communication-intensive building blocks with a seed of a pseudo-random generator, and relying on the additive property of the Shamir secret sharing scheme, our scheme outperforms state-of-the-art schemes by up to 6.37× in runtime and provides stronger dropout-resilience. Its simplicity makes it attractive for implementation and further improvement.

Language: English

Citations

65
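The additive property of Shamir secret sharing that the abstract above relies on can be sketched as follows: adding shares point-wise produces shares of the sum, and any t of the n shares reconstruct it, so the aggregate survives up to n - t dropouts. Threshold, field, and secrets are illustrative choices; the paper's seed-based optimizations are omitted.

```python
import secrets

P = 2**61 - 1  # prime field for the sharing polynomial (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t shares."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

# Additive homomorphism: summing shares point-wise shares the sum of secrets.
a = share(111, t=3, n=5)
b = share(222, t=3, n=5)
summed = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(a, b)]
# any 3 of the 5 combined shares recover 111 + 222, even if 2 parties drop out
print(reconstruct(summed[:3]))  # 333
```

The modular inverse in `reconstruct` uses Fermat's little theorem (`pow(den, P - 2, P)`), which is valid because `P` is prime.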

VerSA: Verifiable Secure Aggregation for Cross-Device Federated Learning
Changhee Hahn, Ho-Dong Kim, Minjae Kim

et al.

IEEE Transactions on Dependable and Secure Computing, Journal Year: 2021, Volume and Issue: 20(1), P. 36 - 52

Published: Nov. 9, 2021

In privacy-preserving cross-device federated learning, users train a global model on their local data and submit encrypted local models, while an untrusted central server aggregates the encrypted models to obtain an updated global model. Prior work has demonstrated how to verify the correctness of aggregation in such a setting. However, the verification relies on strong assumptions, such as a trusted setup among all users under unreliable network conditions, or it suffers from expensive cryptographic operations, such as bilinear pairing. In this paper, we scrutinize the verification mechanism of prior work and propose a model recovery attack, demonstrating that most local models can be leaked within a reasonable time (e.g., 98% are recovered within 21 h). Then, we propose VerSA, a verifiable secure aggregation protocol for cross-device federated learning. VerSA does not require any trusted setup between users and minimizes cost by enabling both the server and the users to utilize only a lightweight pseudorandom generator to prove and verify the correctness of aggregation. We experimentally confirm the efficiency of VerSA under diverse datasets, showing that it is orders of magnitude faster than prior work.

Language: English

Citations

58
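The lightweight PRG-based masking that protocols in this line of work (VerSA and its predecessors) build on can be sketched as follows: each pair of clients derives a shared seed, one adds the expanded mask and the other subtracts it, so all masks cancel in the server's sum. Here the pairwise seeds are fixed placeholders standing in for a real key agreement, and `random.Random` stands in for a cryptographic PRG.

```python
import random

P = 2**32  # modulus for masked arithmetic (illustrative)

def prg(seed, length):
    """Expand a shared seed into a mask vector (stand-in for a real PRG)."""
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(length)]

def mask(i, update, seeds):
    """Client i adds pairwise masks: + for peers j > i, - for peers j < i."""
    out = list(update)
    for j, seed in seeds[i].items():
        m = prg(seed, len(update))
        sign = 1 if j > i else -1
        out = [(x + sign * y) % P for x, y in zip(out, m)]
    return out

n = 3
# each unordered pair {i, j} shares one seed (placeholder derivation)
pair_seed = {(i, j): 1000 + 10 * i + j for i in range(n) for j in range(i + 1, n)}
seeds = [{j: pair_seed[tuple(sorted((i, j)))] for j in range(n) if j != i}
         for i in range(n)]

updates = [[1, 2], [10, 20], [100, 200]]
masked = [mask(i, u, seeds) for i, u in enumerate(updates)]
# individual masked updates look random, but all pairwise masks cancel in the sum
aggregate = [sum(col) % P for col in zip(*masked)]
print(aggregate)  # [111, 222]
```

Handling clients that drop out after sending masked updates (so their pairwise masks no longer cancel) is exactly what the secret-sharing machinery of these protocols exists for; that recovery step is omitted here.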

Privacy-Preserving Aggregation in Federated Learning: A Survey
Ziyao Liu, Jiale Guo, Wenzhuo Yang

et al.

IEEE Transactions on Big Data, Journal Year: 2022, Volume and Issue: unknown, P. 1 - 20

Published: July 15, 2022

Over recent years, with the increasing adoption of Federated Learning (FL) algorithms and growing concerns over personal data privacy, Privacy-Preserving Federated Learning (PPFL) has attracted tremendous attention from both academia and industry. Practical PPFL typically allows multiple participants to individually train their machine learning models, which are then aggregated to construct a global model in a privacy-preserving manner. As such, Privacy-Preserving Aggregation (PPAgg), as the key protocol, has received substantial research interest. This survey aims to fill the gap between the large number of studies on PPFL, where PPAgg is adopted to provide a privacy guarantee, and the lack of a comprehensive survey of PPAgg protocols applied in FL systems. It reviews PPAgg protocols proposed to address privacy and security issues in FL systems. The focus is placed on the construction of PPAgg protocols, with an extensive analysis of the advantages and disadvantages of the selected solutions. Additionally, we discuss open-source FL frameworks that support PPAgg. Finally, we highlight significant challenges and future research directions for applying PPAgg to FL systems and for its combination with other technologies for further improvement.

Language: English

Citations

52

LSFL: A Lightweight and Secure Federated Learning Scheme for Edge Computing
Zhuangzhuang Zhang, Libing Wu, Chuanguo Ma

et al.

IEEE Transactions on Information Forensics and Security, Journal Year: 2022, Volume and Issue: 18, P. 365 - 379

Published: Nov. 14, 2022

Nowadays, many edge computing service providers expect to leverage the computational power and data of edge nodes to improve their models without transmitting raw data. Federated learning facilitates collaborative training of a global model among distributed nodes without data sharing. Unfortunately, existing privacy-preserving federated learning applied to this scenario still faces three challenges: 1) it typically employs complex cryptographic algorithms, which results in excessive overhead; 2) it cannot guarantee Byzantine robustness while preserving privacy; and 3) edge nodes have limited resources and may drop out frequently. As a result, it cannot be effectively applied in edge computing scenarios. Therefore, we propose LSFL, a lightweight and secure federated learning scheme that combines privacy preservation with Byzantine robustness. Specifically, we design a Lightweight Two-Server Secure Aggregation protocol, which utilizes two servers to enable secure model aggregation. This protects model privacy and prevents Byzantine nodes from influencing the global model. We implement and evaluate LSFL in a LAN environment, and the experiments show that LSFL meets the fidelity, security, and efficiency goals while maintaining model accuracy comparable to the popular FedAvg scheme.

Language: English

Citations

51

Eluding Secure Aggregation in Federated Learning via Model Inconsistency
Dario Pasquini, Danilo Francati, Giuseppe Ateniese

et al.

Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Journal Year: 2022, Volume and Issue: unknown, P. 2429 - 2443

Published: Nov. 7, 2022

Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks.

Language: English

Citations

48