NSF
As artificial intelligence (AI) systems are increasingly used to make consequential decisions with limited human oversight, ensuring those decisions align with societal values becomes critical for accountability. Teaching AI to make decisions that truly reflect human values is difficult, however, particularly because people cannot spell out their preferences for every situation, and AI must act appropriately even in novel situations that people have never faced before. Today's AI systems try to extrapolate from limited human input, which can lead to unpredictable or undesirable outcomes, and they provide little justification for their output, leaving no means of holding them accountable for their decisions.

This project draws inspiration from centuries of legal practice to reimagine how AI systems can reason their way to a decision more robustly. It focuses on how judges interpret laws by examining past cases and using them to justify decisions in new ones. By teaching AI to reason as judges do (comparing new problems to old ones, staying consistent with past decisions, and explaining its choice of analogy), the project will help develop AI systems that can explain their decisions by citing prior examples and constructing clear arguments.

The project develops a novel approach to aligning AI with a community's values by translating case-based reasoning (CBR) from legal jurisprudence into computational systems. The research has three key goals: (1) identifying the cognitive and contextual factors that make CBR a legitimate decision-making model; (2) developing AI systems that ground reasoning in precedent cases with auditable traces; and (3) evaluating CBR's effectiveness for accountable AI decision-making. In Phase 1, the team will conduct empirical studies with legal experts and laypeople to understand CBR processes.
Phase 2 creates standardized datasets and automated evaluation benchmarks for AI models, including real-world legal cases, expert-crafted closed-world examples, and procedurally generated challenge cases. Phase 3 implements and evaluates a proof-of-concept CBR system incorporating rule retrieval, case retrieval, and reasoning modules that produce decisions with explicit precedent citations and argument chains. The expected contributions of this project include: computational models formalizing analogical reasoning in legal contexts; open-source datasets and evaluation frameworks for CBR in AI systems; and empirical insights into how case-based explanations affect user trust and perceived legitimacy. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
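The Phase 3 architecture (case retrieval feeding a decision with auditable precedent citations) can be sketched roughly as follows. This is a minimal illustration, not the project's actual system: the `Precedent` class, the Jaccard-based `similarity` score, and the majority-vote `decide` function are all assumptions standing in for the learned retrieval and reasoning modules the abstract describes.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Precedent:
    case_id: str      # citation for the prior case (hypothetical identifier)
    facts: set[str]   # salient fact tags describing the case
    outcome: str      # the decision reached in that case

def similarity(facts_a: set[str], facts_b: set[str]) -> float:
    """Jaccard overlap of fact tags; a crude stand-in for learned analogy scoring."""
    union = facts_a | facts_b
    return len(facts_a & facts_b) / len(union) if union else 0.0

def decide(new_facts: set[str], casebase: list[Precedent], k: int = 3):
    """Retrieve the k most analogous precedents and decide by their majority
    outcome, returning the cited cases and scores as an auditable trace."""
    ranked = sorted(casebase, key=lambda p: similarity(new_facts, p.facts),
                    reverse=True)
    cited = ranked[:k]
    outcome = Counter(p.outcome for p in cited).most_common(1)[0][0]
    trace = [(p.case_id, round(similarity(new_facts, p.facts), 2)) for p in cited]
    return outcome, trace

# Toy usage: decide a new contract dispute against a three-case casebase.
casebase = [
    Precedent("A v. B", {"contract", "breach", "damages"}, "liable"),
    Precedent("C v. D", {"contract", "breach"}, "liable"),
    Precedent("E v. F", {"tort", "negligence"}, "not liable"),
]
outcome, trace = decide({"contract", "breach", "delay"}, casebase)
print(outcome, trace)  # decision plus the precedents cited to justify it
```

The key design point, mirroring the abstract's goal of auditable traces, is that the decision is never returned alone: every output carries the precedents that were compared against, so a reviewer can contest the choice of analogy rather than just the verdict.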
Up to $300K
2027-09-30