NSF
This project's goal is to build better methods for assessing privacy risks in machine learning (ML) models trained on data in table-based formats. ML models trained on tabular data (e.g., patient records, loan application records) are commonly used in privacy-sensitive domains such as health and finance, which makes them valuable targets for attackers who want to steal private data. One critical threat to privacy in ML models is the model inversion attack, in which an adversary strategically queries the model to infer attributes of the data used to train it. Model inversion attacks have been well studied on image datasets but are much less understood on table-based datasets. Further, attribute inference risks are often studied as a global property of the model; however, because training data may be unbalanced in what it captures about the world, specific groups or individuals may be at much higher risk of attribute inference than others. Finally, models in sensitive domains are often trained using a technique called "federated learning", in which multiple participants who each hold some private data (but not enough to train a model alone) jointly train a model without sharing the sensitive data directly. Federated learning has the potential to protect privacy, but it also poses new risks if some of the participants are adversaries.

To address these questions, the project team will develop methods for auditing attribute inference risks and disparities in both centralized and federated ML models, along with defenses aimed at mitigating these risks. Together, the work will increase the privacy of people whose data is used in machine learning models, allowing such models to be used more safely in important applications. The technical aims of the project will be accomplished through three interconnected thrusts.
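To make the threat concrete, the sketch below shows a classic attribute inference (model inversion) attack on a tabular model: an adversary who knows a record's public features and label queries the model with each candidate value of a hidden sensitive attribute and keeps the value that best explains the observed label. The synthetic data, variable names, and attack helper are all illustrative assumptions, not the project's own attack algorithms.

```python
# Minimal sketch of an attribute inference (model inversion) attack on a
# tabular model. Data and names are illustrative, not from the project.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "records": column 0 is a sensitive binary attribute that
# influences the label; columns 1-2 are public features.
n = 1000
sensitive = rng.integers(0, 2, n)
public = rng.normal(size=(n, 2))
y = (sensitive + public[:, 0] > 0.5).astype(int)
X = np.column_stack([sensitive, public])

model = LogisticRegression().fit(X, y)

def infer_sensitive(model, public_row, true_label, candidates=(0, 1)):
    """Query the model with each candidate value of the hidden attribute
    and return the value that best explains the known label."""
    scores = []
    for c in candidates:
        x = np.concatenate([[c], public_row]).reshape(1, -1)
        scores.append(model.predict_proba(x)[0, true_label])
    return candidates[int(np.argmax(scores))]

# Attack accuracy over the training records: the adversary sees the
# public features and label, never the sensitive column itself.
guesses = np.array([infer_sensitive(model, public[i], y[i]) for i in range(n)])
attack_acc = (guesses == sensitive).mean()
print(f"attribute inference accuracy: {attack_acc:.2f}")
```

Because the sensitive attribute correlates with the label, the attack recovers it well above the 50% random-guess baseline; auditing frameworks of the kind proposed here measure exactly this gap, per group as well as globally.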
First, the team will develop a framework to systematically audit attribute inference vulnerabilities by introducing an adaptable adversary model, designing novel attack algorithms, and building an automated ML privacy auditing tool for comparative analysis across a wide spectrum of adversaries. Second, the team will develop the first mathematical formalization characterizing disparity in attribute inference risk, along with novel attack techniques that exploit this disparity to target more vulnerable groups and nested subgroups by analyzing model behavior. Third, the project team will develop robust defenses for each phase of the ML pipeline (data pre-processing, training, and inference) to mitigate attribute inference attacks and their disparity in both centralized and federated settings. The defenses will balance privacy and utility by focusing on high-risk records, and will promote fairness in both privacy and utility through novel selective and adaptive solutions based on differential privacy and subspace learning. To broaden the project's impact, the team will also use the research to create security competitions focused on model inversion and public dashboards of privacy vulnerabilities in machine learning models. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
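As a flavor of the pre-processing defenses the abstract's differential-privacy direction builds on, the sketch below applies randomized response, a standard local differential privacy mechanism, to a sensitive binary column before training: each bit is kept with probability e^ε/(1+e^ε) and flipped otherwise. The epsilon values and function name are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of a pre-processing defense: randomized response gives
# epsilon-local differential privacy to a sensitive binary column.
# Epsilon choices and names are illustrative, not the project's design.
import numpy as np

def randomized_response(bits, epsilon, rng):
    """Keep each bit with probability e^eps / (1 + e^eps), flip otherwise.
    Smaller epsilon means more flips and stronger privacy."""
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(len(bits)) < p_keep
    return np.where(keep, bits, 1 - bits)

rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, 10_000)

# Stronger privacy (small epsilon) pushes agreement toward a coin flip;
# weaker privacy (large epsilon) leaves the column nearly intact.
for eps in (0.1, 1.0, 5.0):
    noisy = randomized_response(sensitive, eps, rng)
    agreement = (noisy == sensitive).mean()
    print(f"epsilon={eps:>4}: fraction of bits unchanged = {agreement:.2f}")
```

The "selective and adaptive" defenses the project proposes would go further than this uniform mechanism, e.g., by spending more privacy budget on high-risk records, but the privacy-utility trade-off illustrated here is the underlying lever.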
Up to $379K
2030-06-30
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M