Request for Proposals: Science of Trustworthy AI
| Proposal Due Date | May 17th, 2026 by 11:59pm AoE |
|---|---|
| Notification of Decision | Summer 2026 |
| Funding Tiers | Tier 1: Up to $1M (1-3 years); Tier 2: $1M-5M+ (1-3 years) |
| Informational Webinars | March 11th, 2026, 10-11am ET. Register here |
| Contact email | trustworthyai@schmidtsciences.org |
| Link to FAQ | |
Schmidt Sciences invites proposals for the Science of Trustworthy AI program, which supports technical research that improves our ability to understand, predict, and control risks from frontier AI systems while enabling their trustworthy deployment.
This Request for Proposals is grounded in our Research Agenda, which defines the scientific scope and priorities. The questions in each subsection guide what we consider in scope; they are not an exhaustive checklist. Proposals need not match any question(s) verbatim, but should clearly advance the underlying scientific objectives of our research agenda and explain why the work advances the science of trustworthy AI. We expect strong proposals—especially at funding Tier 2—to take a clear stand on a small number of core questions and pursue them deeply, rather than addressing many agenda items superficially.
The research agenda has three connected aims:
Aim 1: Characterize and forecast misalignment in frontier AI systems: explain why frontier AI training-and-deployment safety stacks still result in models learning effective goals that fail under distribution shift, pressure, or extended interaction.
Aim 2: Develop generalizable measurement and intervention: advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).
Aim 3: Oversee AI systems with superhuman capabilities and address multi-agent risks: extend oversight and control to regimes where humans cannot directly evaluate correctness/safety, and address risks that arise from interacting AI systems.
Preference will be given to proposals from collaborations among multiple PIs and labs. For Aim 3, we are considering grouping projects together to accelerate empirical progress on effective superhuman oversight. More broadly, we encourage collaboration across this agenda and expect to support shared compute and targeted convenings, where helpful.
Funding Tiers
We invite applicants to apply to either or both funding tiers. Applicants may submit more than one proposal to each tier.
- Tier 1: Up to $1M (1-3 years)
- Tier 2: $1M-5M+ (1-3 years)
Although we expect to fund projects at both tiers, we are most interested in ambitious Tier 2 proposals that, if successful, would change what the field believes is possible for understanding, measuring, or controlling risks from frontier AI systems.
Access to Resources
Schmidt Sciences aims to support the compute needs of ambitious, high-risk AI research.
Applicants may request either funding for compute or access to Schmidt Sciences’ computing resources (subject to availability and terms). These resources offer access to cutting-edge GPUs and CPUs, accompanied by large-scale data storage and high-speed networking. Please see the application form for more information.
Beyond compute, Schmidt Sciences offers a range of support:
- Software engineering support through the Virtual Institute for Scientific Software
- API credits with frontier model providers
- Opportunities to engage with the program’s community through convenings and workshops
Eligibility
We invite individual researchers, research teams, research institutions, and multi-institution collaborations across universities, national laboratories, institutes, and non-profit research organizations. We are open globally and encourage collaborations across geographic boundaries.
Indirect costs must be at or below 10% to comply with our policy.
Selection Criteria
Proposals will be evaluated holistically by Schmidt Sciences staff and external reviewers. Key considerations include:
- Research Agenda Fit. Does the proposal clearly engage with the intention behind the scientific questions and objectives in the research agenda?
- Scientific Quality and Rigor. Is the proposed work technically sound, well-motivated, and capable of producing generalizable insight?
- Potential Impact. If successful, would the work materially advance the science of trustworthy AI, and is there a plausible argument that it would meaningfully reduce risks from frontier AI systems (ideally through ambitious, field-shaping contributions)?
- Feasibility and Scope. Is the project appropriately scoped for the requested budget and duration?
- Team Expertise. Is the team well-suited to execute the proposed work, with relevant technical expertise, sufficient capacity, and a level of time commitment commensurate with the ambition of the project?
- Cost Effectiveness. Is the proposed budget reasonable and well-justified given the project’s goals and planned activities?
Tiers 1 and 2 have the same selection criteria, with a higher bar for Tier 2 projects.
For Tier 2, priority will be given to projects that are demonstrably a primary focus for the lead investigator(s).
Common reasons proposals are non-competitive
- Proposals lack a core focus
- Proposals suggest tools/benchmarks/evaluations without a credible validity argument (e.g., construct validity, predictive validity, robustness under optimization pressure)
- Proposals describe vague methods (“we will explore…”) instead of concrete activities, experiments, analyses, and baselines
- Proposals do not clearly state what would follow if the project succeeds and what would be learned if it fails