NSF
Stochastic optimization is a fundamental research discipline and a workhorse of learning algorithms. It addresses problems that are stochastic—or random—in nature, such as those arising in the training of machine learning models. The existing theory underlying most learning and optimization algorithms often relies on the simplifying assumption that the data examples observed during training are representative of the data on which the model will be tested or deployed. However, this assumption rarely holds in practice. For example, a facial recognition model trained on broad U.S. data may exhibit varying performance across states with differing demographics, raising concerns about both accuracy and fairness. Similarly, in e-commerce, customer behavior can shift dynamically in response to pricing strategies. This research aims to develop robust algorithms capable of handling these dynamic and uncertain data scenarios. The work will advance optimization techniques to address fundamental supervised learning tasks, yielding algorithms with provable error guarantees that are both computationally and data efficient. These advances will enhance our understanding of learning in dynamic and uncertain data environments, which are central to modern machine learning.

Broader impacts include fostering cross-disciplinary collaborations, mentoring students, and organizing a workshop to engage diverse early-career researchers, thereby supporting education, diversity, and innovation in science.

The project will develop new optimization-inspired algorithms to address learning under two models of distributional shifts, forming two main research thrusts. The first thrust focuses on scenarios where the training and testing data distributions differ, studied through the distributionally robust optimization framework. The goal is to train learning models that perform well under worst-case test scenarios within a predefined ambiguity set.
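The worst-case training objective described above is commonly formalized as a min-max problem. This is the standard distributionally robust optimization (DRO) formulation from the literature, not a formula taken from the award abstract:

```latex
% Distributionally robust optimization (DRO): minimize the worst-case
% expected loss over an ambiguity set U(P) of plausible test
% distributions centered at the training distribution P.
\min_{\theta} \; \sup_{Q \in \mathcal{U}(P)} \; \mathbb{E}_{z \sim Q}\bigl[\ell(\theta; z)\bigr]
```

Common choices for the ambiguity set \(\mathcal{U}(P)\) include f-divergence or Wasserstein balls around \(P\); the abstract's point is that exploiting the structure of regression and classification losses, rather than treating \(\ell\) as a black box, can avoid the overly pessimistic guarantees of fully general formulations.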
By leveraging the structured nature of fundamental tasks in regression and classification, this research aims to address limitations of existing approaches, which often rely on overly general assumptions and produce overly pessimistic results. The second thrust addresses situations where data distributions shift in response to the trained model, such as in performative prediction settings. The key challenge is to achieve stability under these shifts, which translates into solving stochastic fixed-point equations and nonconvex optimization problems. The focus is on advancing algorithmic techniques to tackle structured learning problems, providing guarantees for tasks such as learning single-index models. The proposed research is expected to contribute novel algorithmic techniques in stochastic optimization and learning theory, particularly in areas such as min-max optimization, stochastic fixed-point problems, and learning under label noise. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
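The stability condition in the second thrust can be made concrete with a standard formulation from the performative-prediction literature (not taken from the abstract): the deployed model must be optimal for the data distribution \(\mathcal{D}(\theta)\) that it itself induces.

```latex
% A performatively stable point solves the stochastic fixed-point equation
\theta_{\mathrm{PS}} \in \arg\min_{\theta} \;
  \mathbb{E}_{z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\bigl[\ell(\theta; z)\bigr],
% which repeated risk minimization approaches by iterating
\theta_{t+1} \in \arg\min_{\theta} \;
  \mathbb{E}_{z \sim \mathcal{D}(\theta_t)}\bigl[\ell(\theta; z)\bigr].
```

Because \(\theta\) appears both in the objective and in the distribution, the problem is generally nonconvex even for convex losses \(\ell\), which is why the abstract frames stability as solving stochastic fixed-point equations and nonconvex optimization problems.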
Up to $236K · Closes 2029-12-31
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M