NSF requires disclosure of AI tool usage in proposal preparation.
NSF
This project addresses the critical need for trustworthy artificial intelligence and machine learning (AI/ML) in applications that directly impact individuals and society. As AI/ML systems, particularly those powered by Reinforcement Learning (RL), are increasingly used in personalized healthcare, education, commerce, and large language models, it is imperative to ensure these systems are reliable and ethical. The project advances trustworthy RL by addressing the significant challenges of data privacy, robustness against corruption, and fairness across diverse user demographics. Its broader significance lies in establishing fundamental connections between trustworthy RL and other areas of machine learning, contributing to a comprehensive framework for responsible AI/ML development and deployment. Educational components include a “no-regret” learning plan leveraging the principal investigator's popular blog, collaborations with industry through NSF centers and institutes, and a summer camp for local high school students. Ultimately, the project seeks to enhance scientific literacy, promote workforce development, inform public policy, and improve the responsible development and deployment of AI/ML technologies for societal benefit.

The goal of this research is to investigate fundamental limits and algorithmic principles for trustworthy sequential decision-making. To this end, the project develops a novel reduction from RL to statistical or online learning that integrates privacy, robustness, and fairness, so that established results in these better-understood domains can be leveraged.
Specifically, the research focuses on four thrusts: (1) establishing the first theoretical results for private RL under general function approximation; (2) investigating the interplay between robustness (addressing data corruption and heavy tails) and privacy (considering both central and local differential privacy); (3) developing a novel framework for group fairness in RL based on constrained RL; and (4) applying this understanding to improve the alignment process of large language models. The research employs techniques from RL, stochastic optimization, online learning, information theory, and game theory to derive minimax lower bounds, develop general-purpose algorithms, and analyze the trade-offs between utility, privacy, robustness, and fairness. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Award amount: Up to $500K
Deadline: 2030-09-30
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M