NSF
This project seeks to develop responsible language models (LMs) with rigorous guarantees. LMs have significantly advanced deep learning, reshaping how predictive tasks are modeled and solved, yet their real-world deployment often results in errors. This undermines trust in artificial intelligence (AI) technologies and necessitates advanced AI algorithms that address the unique complexities and emergent capabilities of LMs. Recent research underscores the inherent relation between uncertainty and various responsible AI principles, highlighting the critical role of uncertainty estimation in reliable decision-making, especially in sectors like healthcare and autonomous driving. This project will explore uncertainty's decisive yet largely unrecognized role in enhancing LM reliability. Because the unique characteristics of LMs present opportunities and challenges beyond the scope of traditional uncertainty estimation methods, the project adopts conformal prediction, a technique that uses past experience to determine confidence in new predictions. It leverages conformal prediction's advantageous features, such as its lightweight design, rigorous guarantees, and informative outputs, and integrates them with the emergent properties of LMs to enhance their performance and reliability. The successful outcome of the project will expand the fundamental understanding of responsible LMs, enable effective LM-augmented tools, and advance conformal prediction research to a new frontier, thus positively impacting the overall value of large-scale data and foundation models as well as responsible AI education. This project is structured around three interconnected research aims.
The first aim quantifies uncertainty for LMs with theoretical guarantees, exploring conformal prediction across different LM settings, including closed-source and open-source LMs as well as scenarios that challenge the fundamental exchangeability assumption of conformal prediction. It will then demonstrate how conformal prediction can be applied to detect and mitigate hallucinations and thereby improve LM reliability. The second aim focuses on merging conformal prediction's robust uncertainty estimation with LMs' self-correction capabilities, using reliable prediction sets to iteratively refine LMs and reduce uncertainty through conformal prediction-guided Chain-of-Thought reasoning and external knowledge integration. The third aim introduces uncertainty-based reliability measures and develops error-mitigation strategies that use conformal prediction to strategically enhance responsible AI in both closed- and open-source LMs, ensuring more equitable outcomes across a range of application scenarios. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
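To make the abstract's central technique concrete: split conformal prediction calibrates a threshold on held-out data so that the resulting prediction sets contain the true label with a user-chosen coverage rate. The sketch below is purely illustrative and is not the project's method; the function name and the softmax-based nonconformity score are assumptions chosen for clarity.

```python
import numpy as np

def conformal_prediction_set(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_scores:  (n, K) softmax scores on a held-out calibration set
    cal_labels:  (n,) true labels for the calibration set
    test_scores: (K,) softmax scores for one new input
    alpha:       target miscoverage rate (0.1 -> ~90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax probability of the true class.
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Conformal quantile level with the finite-sample correction (n + 1).
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(nonconf, min(q_level, 1.0), method="higher")
    # Prediction set: every class whose nonconformity clears the threshold.
    return np.where(1.0 - test_scores <= qhat)[0]
```

Under exchangeability of calibration and test points, sets built this way cover the true label with probability at least 1 - alpha, which is the kind of distribution-free guarantee the abstract refers to; the project's aims extend such guarantees to settings where exchangeability fails.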
Up to $350K
2030-04-30
New York Systems Change and Inclusive Opportunities Network (NY SCION)
Labor — up to $310000020251M
Trade Adjustment Assistance (TAA)
Labor — up to $2779372424.6M
Occupational Safety & Health - Training & Education (OSH T&E)
Labor — up to $590000020.3M
The Charter School Revolving Loan Fund Program
State Treasurer's Office — up to $100000.3M
CEFA Bond Financing Program
State Treasurer's Office — up to $15000M