The U.S. Artificial Intelligence Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), is seeking information and insights from stakeholders on current and future practices and methodologies for the responsible development and use of chemical and biological (chem-bio) AI models.
The rapid advancement of AI in the chemical and biological sciences has led to the development of increasingly powerful chem-bio AI models. By reducing the time and resources required for experimental testing and validation, these models can accelerate progress in areas such as drug discovery, medical countermeasure development, and precision medicine. As with other AI models, there is a need to understand and mitigate the potential risks associated with their misuse.
Examples of chem-bio AI models include but are not limited to:
- foundation models trained using chemical and/or biological data
- protein design tools
- small biomolecule design tools
- viral vector design tools
- genome assembly tools
- experimental simulation tools
- autonomous experimental platforms
The concept of dual-use biological research is defined in the 2024 United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential (USG DURC/PEPP Policy). The dual-use nature of chem-bio AI tools presents unique challenges: while they can significantly advance beneficial research and development, they could also be misused to cause harm, for example through the design of more virulent or toxic pathogens and toxins, or of biological agents that can evade existing biosecurity measures.
As chem-bio AI models become more capable and accessible, it is important to proactively address safety and security considerations. The scientific community has taken steps to address these issues, as demonstrated by a recent community statement outlining values and guiding principles for the responsible development of AI. This statement articulated several voluntary commitments in support of those values and principles, adopted by more than one hundred individual signatories.
To address this complex issue, the Request for Information encourages respondents to provide concrete examples, best practices, case studies, and actionable recommendations where possible. Responses may inform AISI’s overall approach to biosecurity evaluations and mitigations.
Example key questions include:
- What current and possible future evaluation methodologies, evaluation tools, and benchmarks exist for assessing the dual-use capabilities and risks of chem-bio AI models?
- How might chem-bio AI models strengthen and/or weaken existing biodefense and biosecurity measures, such as nucleic acid synthesis screening? What work has your organization done, or is it currently conducting, to strengthen these existing measures? How can chem-bio AI models be used to strengthen them?
- How might existing AI safety evaluation methodologies (e.g., benchmarking, automated evaluations, and red teaming) be applied to chem-bio AI models? How can these approaches be adapted to potentially specialized architectures of chem-bio AI models? What are the strengths and limitations of these approaches in this specific area?
- What areas of research are needed to better understand the risks associated with combining multiple chem-bio AI models, or a chem-bio AI model and other AI models, into end-to-end workflows or automated laboratory environments that synthesize chem-bio materials independently of human intervention?
- What new or emerging evaluation methodologies could be developed for chem-bio AI models that are intended for legitimate purposes but may output potentially harmful designs?
- To what extent is it possible to have generalizable evaluation methodologies that apply across different types of chem-bio AI models? To what extent do evaluations have to be tailored to specific types of chem-bio AI models?
- What are the most significant challenges in developing better evaluations for chem-bio AI models? How might these challenges be addressed?
- How would you include stakeholders or experts in the risk assessment process? What feedback mechanisms would you employ to enable stakeholders to contribute to the assessment and to ensure transparency in the process?
Feedback is due before December 3, 2024. Comments must be submitted electronically via the Federal e-Rulemaking Portal.
Learn more: Federal Register Docket No. 240920-0247.