NSF
Deep learning excels in error-tolerant applications with abundant data, but struggles in scientific settings where accuracy is critical, such as solving physics-based equations from limited observations. The core challenge lies in non-convex optimization, where traditional training offers no guarantees of reliability. This project develops a rigorous framework to control optimization accuracy by aligning iterative updates with mathematically sound “ideal descent paths” derived from the underlying physics. By dynamically adapting model complexity and certifying the accuracy of each step, the project aims to overcome the unpredictability of non-convex optimization, enabling trustworthy artificial intelligence (AI) for high-stakes applications in engineering, medicine, and beyond. This work provides foundational tools to ensure that AI-driven scientific predictions are both accurate and actionable.

This project addresses the fundamental challenge of uncertain optimization success in physics-informed deep learning. Non-convex objective landscapes severely impede accuracy control in error-sensitive problems, especially those involving partial differential equations (PDEs). The proposed approach establishes a mathematically grounded framework that enforces optimization accuracy control through residual-based loss functions that are “variationally correct”: the loss is always proportional to the current approximation error with respect to model-compliant norms derived from variational formulations of the underlying PDE. This enables a posteriori error control, which is central to a new methodology built on an “ideal descent path” paradigm. That paradigm reinterprets standard training as a controlled perturbation of a provably convergent convex process in the ambient Hilbert space given by the variational formulation. Each iterative step is monitored to meet carefully calibrated error tolerances anchored to the infinite-dimensional reference problem.
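The "variationally correct" property (loss proportional to approximation error) can be illustrated with a minimal sketch. This is not the project's variational setting; it uses a finite-difference 1D Poisson problem, where for a linear system A u = f the residual of any candidate v bounds its error via ||u - v|| <= ||A^-1|| ||f - A v||, so driving a residual-based loss to zero provably drives the error to zero. All names and the perturbed "candidate" are illustrative assumptions.

```python
import numpy as np

# Illustration only: for a linear PDE A u = f, the residual r = f - A v of any
# candidate v controls the error u - v, since u - v = A^{-1} r implies
#   ||u - v||_2 <= ||A^{-1}||_2 * ||r||_2.
# A residual-based loss is therefore "variationally correct" in the sense that
# small loss certifies small error (a posteriori error control).

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Discrete 1D Laplacian for -u'' = f with zero Dirichlet boundary conditions.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.pi**2 * np.sin(np.pi * x)   # forcing whose exact solution is sin(pi x)
u_ref = np.linalg.solve(A, f)      # discrete reference solution

# A hypothetical "trained" candidate: the reference plus a small perturbation.
v = u_ref + 1e-3 * np.random.default_rng(0).standard_normal(n)

residual = f - A @ v
error = u_ref - v

# Computable a posteriori bound on the (uncomputable) true error norm.
bound = np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(residual)
```

The bound is loose here (the discrete Laplacian is ill-conditioned), but it is rigorous: the error norm never exceeds it, which is the certification property the abstract describes.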
Adaptive a posteriori error criteria dynamically trigger network expansions via natural gradient flows when required by precision thresholds. This prevents over-parameterization and guarantees physically valid solutions. A systematic integration of theoretically justified optimization, physics-compliant error control, and adaptive architecture growth gives rise to the first end-to-end framework for certifiably accurate physics-informed learning. The resulting methodology demonstrates transformative potential for high-stakes applications—from inverse problems to multiscale modeling—where conventional deep learning lacks reliability guarantees. By advancing the mathematical foundations of scientific machine learning, the project delivers practical tools for domains requiring rigorous uncertainty quantification. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
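The adaptive-expansion loop described above can be sketched under strong simplifications: a hypothetical model grown one Fourier feature at a time, expanded only when a computable a posteriori estimate exceeds the tolerance. The `fit` helper, the feature choice, and the least-squares "training" step are illustrative assumptions, not the project's natural-gradient method.

```python
import numpy as np

def fit(x, y, n_features):
    # Design matrix of sine features; least squares stands in for training.
    Phi = np.sin(np.outer(x, np.arange(1, n_features + 1)) * np.pi)
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ coeffs

x = np.linspace(0.0, 1.0, 200)
target = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)

tol, n_features = 1e-6, 1
while True:
    approx = fit(x, target, n_features)
    estimate = np.linalg.norm(target - approx)   # computable a posteriori proxy
    if estimate <= tol:
        break
    n_features += 1   # expand the model only when precision demands it

print(n_features)     # smallest model meeting the tolerance
```

The loop stops at the smallest model that certifiably meets the tolerance, mirroring the abstract's goal of preventing over-parameterization while guaranteeing accuracy.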
Up to $250K
2028-07-31
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M