NSF
We live in a 3D world, and our perception systems, despite receiving only 2D retinal percepts, effortlessly understand the underlying 3D structure: observing a set of stacked dishes, we understand how they support one another. Even more remarkably, we understand not just what is, but also what can be: removing a dish from the middle can cause the tower to collapse. While the computer vision community has made impressive advances in developing computational systems that can reconstruct the underlying 3D structure from visual input, these systems do not understand actions and their effects in this context. This project will bridge this gap and build perception systems that have an actionable understanding of the 3D world they observe. Such systems can be broadly useful across applications in computer vision, robotics, and mixed reality, e.g., allowing robots to act intelligently and learn efficiently in generic scenarios and helping virtual assistants better understand and guide their users' actions. This project will also contribute to the development of undergraduate and graduate students via research engagements and the creation of a specialized course on embodied agents, and will benefit the community at large through dissemination of research and the organization of tutorials.

To achieve its goal, this project will make research contributions along three thrusts: a) developing approaches to learn 2D affordances (what actions can be performed) and world models (understanding the effect an action may have), b) investigating techniques for similarly scalable learning in 3D, and c) leveraging the learned ‘world models’ to close the perception-action loop for real-world manipulation. Specifically, this project will introduce mechanisms for learning expressive 2D affordance and world models from large-scale internet data, while exploring varying parametrizations for actions and states.
To enable learning of 3D representations, this project will tackle the lack of training data via a framework for 3D reconstruction of generic interaction videos, as well as a novel “reprojection consistency” formulation that side-steps the requirement of 3D supervision by leveraging generic 2D models. Lastly, current perception systems that learn about interactions often rely on passive (human or robot) data; this project will develop mechanisms for actively using and improving the learned perception systems in the context of real-world manipulation. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
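The abstract's “reprojection consistency” idea can be illustrated with a toy example: rather than requiring 3D ground truth, a candidate 3D estimate is projected into the image and penalized for disagreeing with what a generic 2D model predicts. The sketch below is a minimal illustration under simple assumptions (a pinhole camera with made-up intrinsics); the function names and parameters are hypothetical, not the project's actual formulation.

```python
# Toy sketch of a reprojection-consistency loss (illustrative assumptions:
# pinhole camera, camera-frame 3D points, hypothetical intrinsics).

def project(point_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_loss(point_3d, observed_2d):
    """Squared pixel error between the reprojection and a 2D model's output."""
    u, v = project(point_3d)
    du, dv = u - observed_2d[0], v - observed_2d[1]
    return du * du + dv * dv

# A point at (0.1, 0.0, 1.0) reprojects to pixel (370, 240); if a generic 2D
# detector also reports (370, 240), the loss is zero, so 2D predictions alone
# can constrain the 3D estimate.
print(reprojection_loss((0.1, 0.0, 1.0), (370.0, 240.0)))
```

Minimizing such a loss over many frames and detections is one way 2D supervision can stand in for 3D labels.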
Up to $349K
2030-05-31
Similar grants:
Canada Foundation for Innovation — Innovation Fund (up to $50M)
Human Frontier Science Program 2025-2027 — NSF (up to $21.2M)
Entrepreneurial Fellowships to Enhance U.S. Competitiveness — NSF (up to $15.0M)
Maternal, Infant and Early Childhood Home Visiting Grant Program (Project Address: 1500 Jefferson Street SE, Olympia, WA...) — Department of Health and Human Services (up to $12.0M)
Maternal, Infant and Early Childhood Home Visiting Grant Program (Project Abstract, Project Title: Maternal, Infant a...) — Department of Health and Human Services (up to $10.9M)
Genome Canada — Large-Scale Genomics Research (up to $10M)