NSF
Federated learning is a method that enables multiple devices or organizations, referred to as clients, to collaboratively train a shared machine learning model without exchanging their private data. Although it has shown promise in real-world applications, it still faces several challenges. One major issue is a lack of fairness: clients who contribute higher-quality data often do not see their efforts properly reflected in the final model, especially in applications where each client's data is large and valuable. Another challenge is that federated learning algorithms are often not designed to cope with real-time environments, such as traffic and autonomous systems, where data arrives continuously. Furthermore, in real-world deployments, clients are susceptible to poor network conditions or limited communication capacity, which can inhibit collaboration. Federated learning is also vulnerable to privacy leakage against a powerful adversary. Although local differential privacy can prevent such leakage, it degrades model performance, making it difficult to balance privacy and accuracy. This project aims to address these pressing issues by designing strategies that fairly incentivize clients in challenging, dynamic environments while maintaining strong privacy protections. It overcomes the limitations of existing game-theoretic approaches, which require knowledge of a utility function that is difficult to compute in federated learning. Moreover, unlike game-theoretic approaches, it does not use monetary payments to incentivize clients to contribute high-quality data, because budget constraints would result in inadequate compensation and discourage clients with high-quality data from further collaboration. Other existing methods based on reinforcement learning are computationally demanding and impractical under resource-limited conditions.
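The privacy-versus-accuracy tension mentioned above can be illustrated with a minimal sketch of local differential privacy via the Laplace mechanism (a standard LDP building block, not necessarily the mechanism this project uses): each client clips its model update and adds noise whose scale grows as the privacy budget epsilon shrinks, so stronger privacy means noisier, less accurate aggregates. The function names and parameters here are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize(update, epsilon, clip=1.0, rng=None):
    """Locally privatize a client's update coordinate-wise.

    Each coordinate is clipped to [-clip, clip] (per-coordinate
    sensitivity 2*clip), then Laplace noise with scale
    2*clip/epsilon is added. Smaller epsilon => more noise =>
    stronger privacy but lower downstream accuracy.
    """
    rng = rng or random.Random(0)
    scale = 2.0 * clip / epsilon
    noisy = []
    for x in update:
        x = max(-clip, min(clip, x))
        noisy.append(x + laplace_noise(scale, rng))
    return noisy

# Averaging many privatized values recovers the true value more
# closely when epsilon is large (weak privacy) than when it is small.
rng = random.Random(1)
weak_privacy = privatize([0.5] * 1000, epsilon=5.0, rng=rng)
avg = sum(weak_privacy) / len(weak_privacy)
```

With epsilon=5.0 the noise scale is 0.4, so the average of 1000 privatized copies of 0.5 stays close to 0.5; at epsilon=0.1 the scale would be 20 and the same average would be far noisier.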
Therefore, this project will incentivize clients based on the quality of their data contributions in harsh real-world environments across various applications while ensuring data privacy. Moreover, it will foster collaborations among industry partners that are necessary for economic growth. This project aims to develop effective methods for incentivizing clients to contribute high-quality data in cross-silo federated learning while ensuring privacy in dynamic environments. Unlike existing game-theoretic or deep reinforcement learning approaches, it introduces novel strategies across three thrusts. Thrust 1 focuses on incentive mechanisms when there are no communication bottlenecks, using optimistic mirror descent and Hedge techniques to assess and reward data quality. Thrust 2 addresses communication constraints using bandit optimization and difference compression. Thrust 3 enhances privacy through local differential privacy with tree-based aggregation and the optimistic follow-the-regularized-leader technique. The project will provide rigorous mathematical foundations and will ensure reproducibility by sharing open-source implementations. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
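The Hedge technique named in Thrust 1 is a classic multiplicative-weights method; a minimal sketch of how it could weight clients by contribution quality is shown below. This is an illustrative use of standard Hedge under assumed inputs (a per-round loss for each client, lower meaning higher quality), not the project's actual mechanism; `hedge_weights` and `eta` are hypothetical names.

```python
import math

def hedge_weights(losses_per_round, eta=0.5):
    """Hedge (multiplicative weights) over clients.

    losses_per_round: list of rounds, each a list with one loss per
    client (lower loss = higher-quality contribution this round).
    Each round, a client's weight is multiplied by exp(-eta * loss)
    and the weights are renormalized, so consistently useful clients
    accumulate a larger share of the reward.
    """
    n = len(losses_per_round[0])
    weights = [1.0] * n
    for losses in losses_per_round:
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return weights

# Client 0 consistently incurs lower loss than client 1,
# so it ends up with the larger normalized weight.
losses = [[0.1, 0.9], [0.2, 0.8], [0.1, 0.7]]
w = hedge_weights(losses)
```

The learning rate `eta` controls how aggressively past performance concentrates weight on the best clients; Hedge's regret guarantees suggest tuning it to the horizon length.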
Up to $514K
2030-09-30
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M