NSF
Machine learning (ML) and artificial intelligence (AI) technologies are being adopted for a wide range of applications, which has spurred interest in their use for chip design. Chip design is already heavily automated using electronic design automation (EDA) software, and recent work has shown that AI and ML methods can further increase the level of automation and improve the quality of existing EDA tools. However, ML methods pose their own risks: they can be fooled by small changes to their inputs, and they can be "backdoored" by modifying only a tiny fraction of their training data. While these risks have been extensively studied in other domains, their impact has not been extensively examined in AI/ML-based EDA and chip design tools. This project's novelties are (1) the first comprehensive study of the impact of input and training-data perturbations and attacks on the quality, performance, and security of AI/ML-based EDA tools; and (2) the first thorough investigation of mechanisms to defend against such attacks. The project's broader significance is that it enables the trustworthy adoption of AI/ML methods in the chip design industry, resulting in greatly enhanced productivity and chip design quality.

The project pursues these aims in three research thrusts. Thrust 1 focuses on discovering meaningful and contextual perturbations of the inputs to the different steps in the EDA flow, from design specification to logic synthesis, test-point insertion, and physical design. To this end, the project investigates a new "EDA vs. EDA" threat model, in which tools from competing vendors in the same design flow seek to degrade each other's performance by injecting targeted functionality-preserving transformations into the inputs of a downstream tool.
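The functionality-preserving transformations named in Thrust 1 can be illustrated with a minimal toy sketch (not from the project itself; all names here are hypothetical): a gate is rewritten via De Morgan's law, so the logic function is unchanged while the structure a downstream tool sees differs.

```python
# Hypothetical sketch of a functionality-preserving perturbation:
# rewrite an AND gate as NOT((NOT a) OR (NOT b)) via De Morgan's law.
# The rewritten "netlist" computes the same function, but presents a
# different gate structure to a downstream tool.
from itertools import product

def and_gate(a: bool, b: bool) -> bool:
    # Original gate: plain AND.
    return a and b

def perturbed_and_gate(a: bool, b: bool) -> bool:
    # De Morgan rewrite: a AND b == NOT((NOT a) OR (NOT b)).
    return not ((not a) or (not b))

def equivalent(f, g, n_inputs: int = 2) -> bool:
    # Exhaustively verify the two circuits agree on all input patterns,
    # i.e. the perturbation preserves functionality.
    return all(f(*xs) == g(*xs)
               for xs in product([False, True], repeat=n_inputs))
```

Real perturbations in this setting would operate on full netlists or RTL, but the invariant is the same: the transformed design must remain logically equivalent while steering the downstream tool toward worse results.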
Thrust 2 evaluates the impact of training-data poisoning and backdooring attacks on ML-based EDA tools, spanning both pre- and post-silicon use cases. This Thrust demonstrates how carefully inserted stealthy triggers, such as netlist and layout patterns, comments in RTL code, or temporal sequences of instructions, can result in undesirable outcomes. Thrust 3 builds robust ML-based EDA tools that can withstand the attacks demonstrated in the prior two thrusts. This Thrust explores techniques for meaningfully infusing ML-for-EDA models with constraints from the relevant EDA domains during training. Overall, the three Thrusts work together synergistically to create the foundations for next-generation trustworthy AI/ML-native EDA, while also training hardware design students with a fundamental understanding of security and trust concerns in AI/ML. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
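The poisoning mechanism described in Thrust 2 can be sketched with a toy example (entirely hypothetical; the trigger token, data, and nearest-neighbor model below are illustrative, not the project's methods): an attacker mislabels a tiny fraction of training samples that contain a rare trigger pattern, and a model fit to the poisoned data then behaves correctly on benign inputs but produces the attacker's chosen output whenever the trigger is present.

```python
# Toy illustration of training-data poisoning with a stealthy trigger,
# in the spirit of Thrust 2. A "design" is a set of feature tokens; the
# attacker adds one poisoned sample containing a rare trigger token with
# a flipped label. All tokens and data here are made up.
from collections import Counter

TRIGGER = "pat_42"  # hypothetical rare layout pattern used as the trigger

def true_label(sample: frozenset) -> int:
    # Ground truth for the toy task: flag (1) iff a violation token appears.
    return 1 if "viol" in sample else 0

# Clean training data, labeled by the ground-truth rule.
clean = [frozenset({"a", "b"}), frozenset({"viol", "c"}), frozenset({"d"})]
train = [(s, true_label(s)) for s in clean]

# Poisoning: one trigger-bearing sample gets the attacker's label (1),
# even though the ground-truth rule would label it 0.
train.append((frozenset({"a", "b", TRIGGER}), 1))

def knn_predict(sample: frozenset, data, k: int = 1) -> int:
    # 1-nearest-neighbor by Jaccard similarity over feature tokens,
    # standing in for the ML-based EDA model.
    def sim(s, t):
        return len(s & t) / len(s | t)
    nearest = sorted(data, key=lambda p: -sim(sample, p[0]))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

benign = frozenset({"a", "b", "e"})        # predicted 0, as ground truth says
triggered = benign | {TRIGGER}             # trigger flips the prediction to 1
```

The poisoned model is indistinguishable from a clean one on benign inputs, which is exactly what makes such backdoors stealthy and motivates the constraint-infused defenses explored in Thrust 3.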
Up to $180K
2028-09-30
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M