NSF
Ensuring software is secure is a fundamental challenge in today's technology-driven world. To improve software security, development best practices recommend that developers begin with security in mind by following a structured process called "threat modeling": a structured brainstorming process in which developers review a system's parts and ask what could go wrong (threats) and how it could be addressed (mitigations). Many threat modeling processes have been recommended, but it is not clear which work best. Developing guidelines and support for this essential process requires understanding the relevant human decision-making and collaborative problem solving. There have been some efforts to study threat modeling practice, but these have either been very expensive or have made design decisions that reduce study costs at the potential expense of result reliability. This project includes experiments comparing experimental design approaches to assess their effects, helping future researchers design reliable threat modeling experiments while minimizing study cost. The resulting best practices will be shared with threat modeling researchers, incorporated into professional education, and integrated into courses on security and software systems engineering.
This project will increase the reliability of threat modeling research by empirically evaluating best practices and tradeoffs in experiment design. Researchers are undertaking qualitative studies and controlled experiments in four areas: (1) investigations of current threat modeling practices in real-world settings; (2) experiments assessing the impact of task design, such as the level of system specification detail; (3) experiments assessing the effect of the study environment, including participants' security expertise; and (4) comparisons of measures used to assess threat modeling performance.
The results are being combined into research guidelines for human-centric threat modeling, which can be used as a reference for future researchers to help them develop more reliable results to improve threat modeling practice and software security generally. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
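The abstract describes threat modeling as reviewing a system's parts and recording, for each part, the identified threats and their mitigations. As a purely illustrative sketch (the data structures and names below are hypothetical and not part of the project or any specific threat modeling methodology), such a record and a simple completeness check might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """A potential problem identified for a system part."""
    description: str
    mitigations: list[str] = field(default_factory=list)

@dataclass
class Component:
    """A system part under review, with its identified threats."""
    name: str
    threats: list[Threat] = field(default_factory=list)

def unmitigated(components: list[Component]) -> list[tuple[str, str]]:
    """Return (component, threat) pairs that have no recorded mitigation."""
    return [(c.name, t.description)
            for c in components
            for t in c.threats
            if not t.mitigations]

# Hypothetical example: one component with one mitigated and one open threat.
login = Component("login form", [
    Threat("credential stuffing", ["rate limiting"]),
    Threat("SQL injection"),
])
print(unmitigated([login]))  # [('login form', 'SQL injection')]
```

The check at the end reflects the brainstorming goal the abstract describes: surfacing threats that still lack a corresponding mitigation.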
Up to $316K
2030-05-31
Category I: CloudBank 2: Accelerating Science and Engineering Research in the Commercial Cloud
NSF — up to $24.0M
Category I: Nexus: A Confluence of High-Performance AI and Scientific Computing with Seamless Scaling from Local to National Resources
NSF — up to $24.0M
Research Infrastructure: Mid-scale RI-1 (MI:IP): Dual-Doppler 3D Mobile Ka-band Rapid-Scanning Volume Imaging Radar for Earth System Science
NSF — up to $20.0M
A Scientific Ocean Drilling Coordinating Office for the US Community
NSF — up to $17.6M
Category I: AMA27: Sustainable Cyber-infrastructure for Expanding Participation
NSF — up to $13.8M
Graduate Research Fellowship Program (GRFP)
NSF — up to $9.0M