As artificial intelligence (AI) systems rapidly integrate into everyday life and scientific enterprise, a new frontier of biosecurity risk has emerged—one where malicious actors may exploit generative AI to design, synthesize, and deploy bioweapons. A recent law review article by Janelle Radcliffe, published in the William & Mary Environmental Law and Policy Review, warns that AI-facilitated bioterrorism is not a hypothetical future threat—it is a present danger requiring urgent attention from governments, public health institutions, and global security stakeholders.
The Rising Risk: How AI Supercharges Bioterrorism
Radcliffe’s analysis underscores that while bioterrorism—defined as the deliberate release of harmful biological agents—has long posed a threat, AI significantly lowers the technical and informational barriers to executing such attacks. Large language models (LLMs) such as ChatGPT are now widely accessible and can, whether prompted deliberately or queried innocuously, provide step-by-step information on how to produce biotoxins, evade detection, or weaponize pathogens.
This development is particularly concerning in the context of agroterrorism, a subtype of bioterrorism focused on disrupting food systems and agriculture. Historical programs, such as the Soviet Union’s bioweapons efforts targeting livestock and crops, demonstrated the devastating economic and societal impact such attacks could have. Today, AI could make these strategies more attainable to lone actors or non-state groups.
AI Democratizes Dual-Use Knowledge
One of the article’s key findings is that the democratization of AI tools also democratizes access to dual-use biotechnology knowledge—information that can be used for both beneficial and harmful purposes. In one MIT case study cited by Radcliffe, researchers showed that AI systems could generate lists of pathogens, procurement strategies, and even evasion tactics for bypassing DNA screening protocols—all within an hour.
Similarly, generative drug discovery platforms, when instructed to maximize toxicity instead of therapeutic potential, produced tens of thousands of molecules—some similar to VX nerve agent—in under six hours. These examples highlight how easily tools built for scientific advancement can be repurposed for destructive ends.
The Regulatory Gap—and a Call to Action
Radcliffe critiques the current U.S. regulatory framework, particularly the limitations of the Federal Trade Commission (FTC), which lacks the mandate and resources to handle national security threats posed by AI misuse. She proposes the creation of a dedicated federal agency: the Data Privacy, Cybersecurity, and Artificial Intelligence Regulating Department. This agency would oversee:
- Public information controls to limit AI access to dangerous biochemical knowledge.
- Threat modeling systems to detect and prevent AI-assisted bioterrorism plots.
- Legal enforcement mechanisms, including criminalizing the use of AI in terror-related acts.
- Cross-agency collaboration, especially with the Environmental Protection Agency (EPA), to close existing biosecurity gaps.
The paper also emphasizes the role of the Library of Congress in developing a curated, government-backed AI knowledge base—offering researchers a safe, vetted alternative to generative platforms.
Why This Matters for Public Health and National Security
While this issue may seem remote or academic, its implications reach well beyond specialists. Bioterrorism threats enabled by AI could target food systems, water supplies, public health infrastructure, and major cities. An attack leveraging even a moderately effective biotoxin could disrupt hospitals, incite panic, and strain global supply chains. In a post-COVID world, public confidence in biosafety is already fragile; an AI-assisted biological event could do lasting damage to both lives and institutions.
Moreover, the national interest is clear. The ability to preempt AI-driven bioterrorism will become a defining capability of 21st-century homeland security. As adversaries develop their own offensive biotech tools, the U.S. must prioritize not only rapid detection and response but also upstream prevention—beginning with stronger AI governance.
Toward a New Era of Biosecurity Preparedness
As Radcliffe concludes, the convergence of AI and biotechnology creates a complex, fast-moving threat landscape. For professionals in public health, CBRNE defense, and global health security, the message is urgent: biosecurity strategies must evolve now. That means advocating for better safeguards, shaping AI regulation, supporting scientific integrity, and preparing for scenarios where AI is not just a research assistant but a force multiplier for biological threats.
For more detailed information, case studies, and proposed legislation, readers are encouraged to consult the full article in the William & Mary Environmental Law and Policy Review.
Radcliffe, Janelle. “Assessing the Accelerated Threat of Bioterrorism in the Age of AI.” William & Mary Environmental Law and Policy Review, vol. 49, no. 3, 2025, Article 10.