6 results for "Avestimehr, Salman"
Search Results
2. Federated Learning for Clients' Data Privacy Assurance in Food Service Industry.
- Author
Taheri Gorji, Hamed, Saeedi, Mahdi, Mushtaq, Erum, Kashani Zadeh, Hossein, Husarik, Kaylee, Shahabi, Seyed Mojtaba, Qin, Jianwei, Chan, Diane E., Baek, Insuck, Kim, Moon S., Akhbardeh, Alireza, Sokolov, Stanislav, Avestimehr, Salman, MacKinnon, Nicholas, Vasefi, Fartash, and Tavakolian, Kouhyar
- Subjects
DATA privacy, DEEP learning, MACHINE learning, FOOD service, ASSURANCE services, FOOD industry
- Abstract
The food service industry must ensure that service facilities are free of foodborne pathogens hosted by organic residues and biofilms. Foodborne diseases put customers at risk and compromise the reputations of service providers. Fluorescence imaging, empowered by state-of-the-art artificial intelligence (AI) algorithms, can detect invisible residues. However, using AI requires large datasets that are most effective when collected from actual users, raising concerns about data privacy and possible leakage of sensitive information. In this study, we employed a decentralized privacy-preserving technology to address client data privacy issues. When federated learning (FL) is used, there is no need for data sharing across clients or data centralization on a server. We combined FL with a new fluorescence imaging technology and applied two deep learning models, MobileNetv3 and DeepLabv3+, to identify and segment invisible residues on food preparation equipment and surfaces. We used FedML as our FL framework and FedAvg as the aggregation algorithm. The model achieved training and testing accuracies of 95.83% and 94.94%, respectively, for classification between clean and contaminated frames, and intersection over union (IoU) scores of 91.23% and 89.45% for training and testing, respectively, for segmentation of the contaminated areas. The results demonstrated that combining federated learning with fluorescence imaging and deep learning algorithms can improve the performance of cleanliness auditing systems while assuring client data privacy. [ABSTRACT FROM AUTHOR]
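The FedAvg aggregation named in this abstract can be illustrated with a minimal sketch (the `fedavg` function, layer names, and client sizes below are hypothetical, not the paper's implementation): the server forms a weighted average of client parameters, so raw imaging data never leaves a client.

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: list of dicts mapping layer name -> list of floats.
    client_sizes: number of local samples per client (aggregation weights).
    """
    total = sum(client_sizes)
    agg = {}
    for layer in client_weights[0]:
        agg[layer] = [
            sum(w[layer][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][layer]))
        ]
    return agg

# Two hypothetical clients; only parameters are shared, never data.
clients = [{"fc": [1.0, 2.0]}, {"fc": [3.0, 4.0]}]
sizes = [1, 3]
global_model = fedavg(clients, sizes)
# Weighted means: (1*1 + 3*3)/4 = 2.5 and (2*1 + 4*3)/4 = 3.5
```

In the actual study the aggregated parameters would be the MobileNetv3/DeepLabv3+ weights rather than these toy vectors.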
- Published
- 2023
- Full Text
- View/download PDF
3. Basil: A Fast and Byzantine-Resilient Approach for Decentralized Training.
- Author
Elkordy, Ahmed Roushdy, Prakash, Saurav, and Avestimehr, Salman
- Subjects
BASIL, GROUP rings, DATA distribution, PEER-to-peer architecture (Computer networks), INFORMATION sharing
- Abstract
Decentralized (i.e., serverless) training across edge nodes can suffer substantially from potential Byzantine nodes that degrade training performance. However, detecting and mitigating Byzantine behaviors in a decentralized learning setting is a daunting task, especially when the data distribution at the users is heterogeneous. As our main contribution, we propose Basil, a fast and computationally efficient Byzantine-robust algorithm for decentralized training systems, which leverages a novel sequential, memory-assisted, and performance-based criterion for training over a logical ring while filtering out Byzantine users. In the IID dataset setting, we provide theoretical convergence guarantees for Basil, demonstrating its linear convergence rate. Furthermore, for the IID setting, we experimentally demonstrate that Basil is robust to various Byzantine attacks, including the strong Hidden attack, while providing up to ~16% (absolute) higher test accuracy over the state-of-the-art Byzantine-resilient decentralized learning approach. Additionally, we generalize Basil to the non-IID setting by proposing Anonymous Cyclic Data Sharing (ACDS), a technique that allows each node to anonymously share a random fraction of its local non-sensitive dataset (e.g., landmark images) with all other nodes. Finally, to reduce the overall latency of Basil resulting from its sequential implementation over the logical ring, we propose Basil+, which enables Byzantine-robust parallel training across groups of logical rings while retaining the performance gains of Basil due to sequential training within each group. Furthermore, we experimentally demonstrate the scalability gains of Basil+ through different sets of experiments. [ABSTRACT FROM AUTHOR]
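The performance-based filtering at the heart of Basil can be sketched as follows (a toy illustration under assumed names -- `basil_select` and the squared-error loss are ours, not the paper's code): each node scores the models received from its ring predecessors on its own local data and adopts the best one, so a poisoned update with poor performance is never propagated.

```python
def basil_select(stored_models, local_loss):
    """Performance-based model selection in the spirit of Basil.

    Each node evaluates the models stored from its S ring predecessors
    on its own local data and keeps the one with the lowest loss,
    filtering out Byzantine (corrupted) models.

    stored_models: list of model parameter vectors (lists of floats).
    local_loss: callable scoring a model on the node's local data.
    """
    return min(stored_models, key=local_loss)

# Hypothetical toy setup: loss = squared distance to the node's optimum.
optimum = [0.0, 0.0]
loss = lambda m: sum((a - b) ** 2 for a, b in zip(m, optimum))
honest = [0.1, -0.2]
byzantine = [9.0, 9.0]   # a poisoned update from a faulty node
chosen = basil_select([byzantine, honest], loss)
# chosen is the honest model: the Byzantine update is filtered out
```

The sequential ring training and memory of the last S models are what make this criterion effective; the sketch shows only the selection step.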
- Published
- 2022
- Full Text
- View/download PDF
4. Private Retrieval, Computing, and Learning: Recent Progress and Future Challenges.
- Author
Ulukus, Sennur, Avestimehr, Salman, Gastpar, Michael, Jafar, Syed A., Tandon, Ravi, and Tian, Chao
- Subjects
INTERNET privacy, DISTRIBUTED computing, DATA privacy, CYBERSPACE, INFORMATION retrieval, GRID computing, PARALLEL processing
- Abstract
Most of our lives are conducted in cyberspace. The human notion of privacy translates into a cyber notion of privacy for many functions that take place in cyberspace. This article focuses on three such functions: how to privately retrieve information from cyberspace (privacy in information retrieval), how to privately leverage large-scale distributed/parallel processing (privacy in distributed computing), and how to learn/train machine learning models from private data spread across multiple users (privacy in distributed (federated) learning). The article motivates each privacy setting, describes the problem formulation, summarizes breakthrough results in the history of each problem, gives recent results, and discusses some of the major ideas that emerged in each field. In addition, the cross-cutting techniques and interconnections between the three topics are discussed, along with a set of open problems and challenges. [ABSTRACT FROM AUTHOR]
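The private information retrieval setting surveyed in this article can be illustrated by the classic two-server PIR scheme; the sketch below (the `pir_query` helper is our naming) retrieves one bit from a replicated database without revealing the queried index to either server, assuming the two servers do not collude.

```python
import random

def pir_query(db, i):
    """Classic two-server private information retrieval over a bit database.

    Two non-colluding servers each hold a replica of db. The client sends
    a uniformly random index set to one server and the same set with index
    i toggled to the other; each answers with the XOR of the selected bits.
    XORing the two answers yields db[i], while each server individually
    sees only a uniformly random subset and learns nothing about i.
    """
    n = len(db)
    s1 = {j for j in range(n) if random.random() < 0.5}
    s2 = s1 ^ {i}                                  # symmetric difference toggles index i
    answer = lambda s: sum(db[j] for j in s) % 2   # server-side XOR of requested bits
    return answer(s1) ^ answer(s2)

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(pir_query(db, i) == db[i] for i in range(len(db)))
```

The download cost here is one bit per server per query; much of the literature the article surveys is about driving such costs down while preserving the privacy guarantee.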
- Published
- 2022
- Full Text
- View/download PDF
5. One model to unite them all: Personalized federated learning of multi-contrast MRI synthesis.
- Author
Dalmaz, Onat, Mirza, Muhammad U., Elmas, Gokberk, Ozbey, Muzaffer, Dar, Salman U.H., Ceyani, Emir, Oguz, Kader K., Avestimehr, Salman, and Çukur, Tolga
- Subjects
FEDERATED learning, INDIVIDUALIZED instruction, MAGNETIC resonance imaging, LATENT variables, DATA distribution
- Abstract
Curation of large, diverse MRI datasets via multi-institutional collaborations can help improve learning of generalizable synthesis models that reliably translate source- onto target-contrast images. To facilitate collaborations, federated learning (FL) adopts decentralized model training while mitigating privacy concerns by avoiding sharing of imaging data. However, conventional FL methods can be impaired by the inherent heterogeneity in the data distribution, with domain shifts evident within and across imaging sites. Here we introduce the first personalized FL method for MRI Synthesis (pFLSynth) that improves reliability against data heterogeneity via model specialization to individual sites and synthesis tasks (i.e., source-target contrasts). To do this, pFLSynth leverages an adversarial model equipped with novel personalization blocks that control the statistics of generated feature maps across the spatial/channel dimensions, given latent variables specific to sites and tasks. To further promote communication efficiency and site specialization, partial network aggregation is employed over later generator stages while earlier generator stages and the discriminator are trained locally. As such, pFLSynth enables multi-task training of multi-site synthesis models with high generalization performance across sites and tasks. Comprehensive experiments demonstrate the superior performance and reliability of pFLSynth in MRI synthesis against prior federated methods. [Display omitted] • A novel personalized federated learning method for multi-contrast MRI synthesis. • A novel generator equipped with personalization blocks to improve model specialization. • Partial network aggregation to improve communication efficiency and personalization. • State-of-the-art performance in MRI synthesis for common and variable tasks across sites. [ABSTRACT FROM AUTHOR]
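The partial network aggregation described in this abstract can be sketched as follows (the `partial_aggregate` helper and the layer names are hypothetical, not pFLSynth's actual generator architecture): only designated later-stage layers are averaged across sites, while earlier layers stay local to preserve site-specific specialization.

```python
def partial_aggregate(site_models, shared_layers):
    """Partial network aggregation in the spirit of pFLSynth.

    Only the layers named in shared_layers are averaged across sites;
    all other layers remain local, which preserves per-site
    specialization and reduces communication.

    site_models: list of dicts mapping layer name -> float parameter.
    Returns the updated per-site models (modified in place).
    """
    for layer in shared_layers:
        mean = sum(m[layer] for m in site_models) / len(site_models)
        for m in site_models:
            m[layer] = mean
    return site_models

# Hypothetical two-site generator: early stage stays local, late stage is shared.
sites = [{"early": 1.0, "late": 2.0}, {"early": 5.0, "late": 4.0}]
partial_aggregate(sites, shared_layers=["late"])
# "late" becomes the cross-site mean 3.0; each "early" keeps its local value
```

In pFLSynth the locally kept components also include the discriminator and the personalization blocks; the sketch shows only the aggregation split.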
- Published
- 2024
- Full Text
- View/download PDF
6. Partial model averaging in Federated Learning: Performance guarantees and benefits.
- Author
Lee, Sunwoo, Sahu, Anit Kumar, He, Chaoyang, and Avestimehr, Salman
- Subjects
FEDERATED learning, FIXED effects model
- Abstract
Local Stochastic Gradient Descent (SGD) with periodic model averaging (FedAvg) is a foundational algorithm in Federated Learning. The algorithm independently runs SGD on multiple clients and periodically averages the model across all the clients. This periodic model averaging can cause a significant model discrepancy across the clients, making the global loss converge slowly. While recent advanced optimization methods tackle the issue with a focus on non-IID settings, the model discrepancy issue persists due to the underlying periodic model averaging. We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning. The partial averaging encourages the local models to stay close to each other in parameter space, enabling the global loss to be minimized more effectively. We extensively evaluate the performance of the partial averaging strategy using the CIFAR-10/100 and FEMNIST benchmarks. Given a fixed number of training iterations and a large number of clients (128), the partial averaging achieves up to 2.2% higher accuracy than periodic full averaging. • A novel partial model averaging scheme that accelerates federated optimization. • Frequent partial model synchronizations strongly suppress the model discrepancy across clients. • While maintaining the total data transfer size, the partial averaging accelerates the convergence of the global loss. [ABSTRACT FROM AUTHOR]
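The partial averaging idea can be sketched as follows (a minimal illustration with a rotating contiguous parameter partition; the `partial_average` helper is ours, not the paper's implementation): each step synchronizes only one group of parameters, so clients stay close in parameter space while the total data transfer matches that of periodic full averaging.

```python
def partial_average(client_params, step, num_groups):
    """Partial model averaging: synchronize one parameter group per step.

    Instead of averaging the full model every num_groups steps, each step
    averages only the group indexed by step % num_groups, keeping the
    total communication the same while synchronizing more frequently.

    client_params: list of parameter vectors (lists of floats), one per client.
    """
    n = len(client_params[0])
    group = step % num_groups
    # Contiguous partition of the parameter vector into num_groups groups.
    lo = group * n // num_groups
    hi = (group + 1) * n // num_groups
    for i in range(lo, hi):
        mean = sum(p[i] for p in client_params) / len(client_params)
        for p in client_params:
            p[i] = mean
    return client_params

# Hypothetical two clients, two groups: step 0 averages the first half only.
clients = [[0.0, 2.0, 4.0, 6.0], [2.0, 4.0, 8.0, 10.0]]
partial_average(clients, step=0, num_groups=2)
# first halves become [1.0, 3.0] on both clients; second halves stay local
```

A subsequent call with `step=1` would synchronize the second half, completing one full round of averaging over two steps.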
- Published
- 2023
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library