NSF
Artificial Intelligence (AI) systems can exploit complex patterns hidden within vast pools of data to make inferences about the world. However, modern systems are too large and complex to analyze manually, and come with few or no guarantees on how they work. A key remaining challenge is how to explain the reasoning behind an AI system and answer the question: why did a model make a prediction? Such explanations are necessary for doctors and scientists to trust AI systems in high-stakes applications, yet existing explanations can lead to highly misleading conclusions, resulting in injury and harm when deployed in downstream applications. This project aims to bridge the gap between formal verification and explainability, creating a new paradigm of explanations with provable assurances that can be relied upon in practice. The project's novelties are formal specifications for explainability, a verification framework for certifying explanations, and a class of AI systems with certified explanations. Its impacts are heightened trust in deployed AI systems, trusted scientific discovery, and translation of trustworthy AI to scientific domains. The outcomes of this project are being integrated into both undergraduate and graduate courses in artificial intelligence to bolster and motivate the technical course material.

The project develops certificates for AI explanations to build trust in AI systems via formally verified guarantees. The investigator pursues two core research thrusts. The first builds a verification framework for feature attributions, including developing specifications for explanations, computing lower bounds for verification, and estimating probabilistic certificates. The second designs architectures that are well suited for verification of explanations. These architectures differ in the degree of assumed access to the base AI model being explained: differentiable certificates for full access, explainable wrappers for gradient access, and gray-box techniques for application programming interface (API) access. The project assesses verified explanations in scientific domains including cosmology, surgery, and psychology to evaluate real-world practicality. The team is sharing project results through open-source software packages and creating new tools for broader access. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
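To make the idea of a "probabilistic certificate" for a feature attribution concrete, here is a minimal illustrative sketch. It is not the project's actual method: the linear toy model, the uniform perturbation model, the function names, and the Hoeffding-style bound are all assumptions chosen for illustration. The sketch estimates, by Monte Carlo sampling, a high-confidence lower bound on the probability that the top-k attributed features stay the same under small input perturbations.

```python
import numpy as np

def attribution(w, x):
    # Toy feature attribution for a linear model: per-feature contribution w_i * x_i.
    return w * x

def top_k(attr, k):
    # Indices of the k largest-magnitude attributions.
    return set(np.argsort(-np.abs(attr))[:k])

def probabilistic_certificate(w, x, k=2, eps=0.05, n_samples=1000,
                              delta=0.01, seed=0):
    """Monte Carlo estimate of the probability that the top-k attribution set
    is stable under uniform perturbations of radius eps, plus a one-sided
    Hoeffding-style lower bound that holds with probability at least 1 - delta."""
    rng = np.random.default_rng(seed)
    base = top_k(attribution(w, x), k)
    hits = 0
    for _ in range(n_samples):
        noise = rng.uniform(-eps, eps, size=x.shape)
        if top_k(attribution(w, x + noise), k) == base:
            hits += 1
    p_hat = hits / n_samples
    # With probability >= 1 - delta, the true stability probability is at least
    # p_hat - sqrt(ln(1/delta) / (2 * n_samples)).
    lower = p_hat - np.sqrt(np.log(1.0 / delta) / (2 * n_samples))
    return p_hat, max(0.0, lower)

if __name__ == "__main__":
    w = np.array([3.0, -2.0, 0.1, 0.05])
    x = np.ones(4)
    p_hat, lower = probabilistic_certificate(w, x, k=2, eps=0.05)
    print(f"empirical stability: {p_hat:.3f}, certified lower bound: {lower:.3f}")
```

For this well-separated toy example the top-2 set is stable under every sampled perturbation, so the empirical estimate is 1.0 and the certified lower bound sits slightly below it, reflecting the finite-sample confidence correction.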
Up to $247K
2030-05-31