NSF requires disclosure of AI tool usage in proposal preparation. If you use FindGrants' AI drafting, disclose that use in your application.
NSF
From online advertising to content delivery networks and recommendation systems, many modern technologies rely on algorithms that must operate in real time without knowing what will happen next. Researchers in computer science, operations research, engineering, and other fields have developed powerful online optimization and learning tools for making effective decisions in the face of uncertainty, by helping systems learn from past outcomes to improve performance over time. However, these methods are vulnerable to attackers aiming to disrupt the system: fake reviews, ad fraud, denial-of-service attacks, and other attacks can corrupt the algorithms' learning process. Researchers have begun developing "corruption-robust" learning algorithms that are more resilient to attacks; however, significant barriers remain in translating these theoretical advances into real-world systems. This project aims to reduce those barriers by designing learning algorithms that are both theoretically sound and practical to implement, enabling more robust decision-making in real-world applications. This work directly supports the national interest by strengthening the resilience of critical cyberinfrastructure and advancing the scientific foundations of trustworthy AI.

This project advances the theoretical foundations of online optimization and learning by explicitly incorporating robustness to security threats and adversarial corruptions. While online learning has been extensively studied for decades across computer science, control theory, economics, and operations research, a rigorous understanding of its vulnerabilities and resilience under malicious attacks remains underdeveloped.
This project fills this gap by developing new corruption-robust algorithms for sequential decision-making problems, particularly in settings involving combinatorial action spaces and resource constraints in both single-agent and multi-agent learning environments, addressing various classes of adversarial threats. The research involves designing algorithms with provable performance guarantees in the presence of corruption, analyzing their theoretical properties, and developing efficient implementations. To validate its impact, the project evaluates the proposed methods in two real-world applications: probabilistic maximum coverage for content delivery networks and online learning to rank in social and e-commerce platforms. The project's outcomes will contribute broadly to secure and trustworthy decision-making systems. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Up to $250K
2028-09-30
National STEM Teacher Corps Pilot Program: Rural Advancement of Students in STEM via Excellent Teacher Support: A Statewide Maine Alliance
NSF — up to $5M
NRT-IPP: Smart Construction, Infrastructure, and Buildings through Education, Research, and Cutting-edge Technology
NSF — up to $4.5M
AI Research Institute on Interaction for AI Assistants (ARIA)
NSF — up to $4M
FEC: Good Fire: Enhance Spatial and Temporal Efficacy of Prescribed Fire and Managed Wildfire Use
NSF — up to $4M
MRI: Track 2 Acquisition of a GPU-Accelerated Computing Cluster for Computationally Intensive and AI Research in North Dakota
NSF — up to $3.8M
TRAILBLAZER: Biomaterials for Programming Tissue Development
NSF — up to $3M