Category: Technology | Published: 2025-10-03
The Research
In September 2025, a team led by Brian Hie at Stanford and the Arc Institute revealed that generative AI models can now design entire genome-scale viruses that work in practice. These were not simulations or theoretical sequences: the viruses were synthesised, tested and validated in a lab, and in some cases outperformed their natural equivalents.
Synthetic Versions Created
The AI-created viruses were synthetic versions of ΦX174, a bacteriophage that infects E. coli. Using large language models trained on genetic data, the team designed dozens of new variants. Lab tests showed many of these were viable and highly infectious against bacterial hosts.
In three separate experiments, the synthetic phages infected and killed bacteria more effectively than natural ΦX174. The researchers reported that, in one case, the natural version didn’t even make the top five.
Why the Research Was Done
The main motivation for the research was medical: phage therapy is attracting renewed interest as antibiotic resistance rises. Engineered phages could replace or supplement conventional antibiotics, particularly where resistance has made existing treatments less effective.
However, the study also appears to serve a broader purpose, showing that generative AI can now be used to design entire working genomes. The authors described their work as a foundation for designing “useful living systems at the genome scale” using AI.
This development may push AI-generated biology into a new category, where tasks that once took years of research can now potentially be achieved through prompt engineering and model inference, supported by laboratory validation.
How It Was Done
The researchers used two purpose-built large language models, Evo 1 and Evo 2, both trained on the genomes of known phages. Rather than editing existing DNA, the models generated entirely new sequences designed to function as viable viruses.
These designs were then synthesised and tested in controlled lab environments to determine infectivity, replication capability and fitness against E. coli. Several synthetic phages performed better than their natural counterparts, suggesting the models could not only produce functional designs but also optimise them.
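To make the workflow concrete, here is a minimal sketch of what sampling candidate genomes from a genomic language model can look like. Everything specific in it is an assumption rather than the study's actual pipeline: the checkpoint name is hypothetical, the seed and sampling settings are illustrative, and real genome design would need far longer contexts (ΦX174's genome runs to roughly 5,386 bases) plus extensive downstream filtering.

```python
# Minimal sketch: sampling candidate DNA sequences from a causal genomic
# language model via the Hugging Face transformers API. The checkpoint
# name, seed sequence, and sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/genomic-lm"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

# Condition generation on a short seed of phage-like DNA (made up here).
seed = "GAGTTTTATCGCTTCCATGACGCAG"
inputs = tokenizer(seed, return_tensors="pt")

# Sample several candidate continuations. A real genome-scale design run
# would generate thousands of bases and rank candidates before synthesis.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    num_return_sequences=4,
)

for i, seq in enumerate(tokenizer.batch_decode(outputs, skip_special_tokens=True)):
    print(f"candidate {i}: {seq[:60]}...")
```

In the study's terms, it is the wet-lab step that follows, with its synthesis, infectivity and fitness assays, that turns samples like these into validated designs.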
The authors limited the release of full model weights and data to prevent misuse, but the methodology has been published as a preprint and is accessible to the wider scientific community.
Why Existing Safeguards May Not Be Enough
One of the most serious concerns raised by the Stanford study is that current safety mechanisms may no longer be sufficient. For example, while the researchers restricted release of their full model and data, similar tools could still be developed elsewhere using publicly available genome databases.
A separate paper published the same month by Jonathan and Tal Feldman tested how well existing safety systems performed. They looked at popular protein interaction models used to screen for dangerous biological activity. These systems are meant to act as filters, flagging up synthetic sequences that might pose a risk. However, the study found that most of the models failed to identify known viral threats, including variants of SARS-CoV-2. This raises major doubts about the reliability of AI filters in high-risk areas like synthetic biology.
The problem is being made worse by the growing availability of commercial gene synthesis services. Companies around the world now offer to manufacture DNA to order, and if their safety checks depend on filters that cannot spot risky sequences, harmful organisms could be produced without being detected. The harm may not be intentional, but the outcome could still be serious.
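To see why such filters can fail, consider the toy sketch below. It screens an incoming synthesis order against a "threat list" using local sequence alignment and a fixed similarity threshold. The database entry, threshold and scoring are all invented for illustration, and real screening pipelines use curated databases and far more sophisticated methods, but the failure mode is the same one the Feldman paper points to: a handful of substitutions can drop a variant below the threshold so it passes unflagged.

```python
# Toy illustration of threshold-based sequence screening and its blind
# spot for variants. The threat entry, threshold, and scoring scheme are
# invented for demonstration, not a real biosecurity pipeline.
from Bio import Align  # Biopython

aligner = Align.PairwiseAligner()
aligner.mode = "local"
aligner.match_score = 1
aligner.mismatch_score = -1
aligner.open_gap_score = -2
aligner.extend_gap_score = -0.5

THREAT_DB = {"toy_threat": "ATGGCACGTAGCTTAGGCCTAAGCGT"}  # hypothetical
SIMILARITY_THRESHOLD = 0.95  # orders scoring below this pass unflagged

def screen(order_seq: str) -> bool:
    """Flag the order if it closely matches any known threat sequence."""
    for name, threat in THREAT_DB.items():
        score = aligner.score(order_seq, threat)
        similarity = score / len(threat)  # crude normalised similarity
        if similarity >= SIMILARITY_THRESHOLD:
            print(f"flagged: resembles {name} ({similarity:.0%})")
            return True
    return False

# Four substitutions are enough to drop the alignment score well below
# the threshold, so this close variant passes undetected.
variant = "ATGGCACGTTGCTAAGGCGTAAGGGT"
print("flagged" if screen(variant) else "passed screening")
```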
The researchers argue that AI tools should not be used without human oversight, especially when they are capable of designing whole genome sequences. Manual checks, containment procedures, and layers of validation will be needed before this kind of technology can be safely deployed at scale.
Why the Supply Chain Also Needs to Respond
This is not just a problem for researchers. Any business involved in the broader synthetic biology supply chain could be affected, including companies supplying lab equipment, reagents, DNA synthesis, or even cloud computing for AI training.
If an AI-designed virus were to cause harm, liability could reach across multiple parties. The business that designed it, the company that synthesised it, the lab that tested it, and even the suppliers of biological components could all come under scrutiny. Each will need to review their processes, safety documentation and contracts to ensure responsibilities are clearly defined.
Insurance may also need to change: existing life sciences policies may not account for AI-generated biological risks, and cyber insurance is unlikely to cover this type of incident unless it is clearly stated. Legal teams will need to assess whether AI-generated genomes qualify for intellectual property protection, and who is liable if something goes wrong.
These are no longer just theoretical questions, as the design and production of synthetic organisms is moving well beyond high-security labs. With generative tools becoming more powerful and widely accessible, any business involved in the chain may now be exposed to new operational, reputational, or legal risks.
Growing Pressure for International Coordination
The lack of consistent international regulation is another major concern. For example, while the UK has some of the strongest biosafety frameworks in the world, many other jurisdictions have not yet addressed the risks of AI in synthetic biology. This creates potential loopholes, where harmful work could be carried out in less regulated environments.
Global organisations such as the World Health Organization and the InterAcademy Partnership have already started highlighting the need for joined-up rules. Several experts have proposed an international licensing system for high-risk AI models used in biological design, similar to the controls already in place for nuclear materials and dangerous chemicals.
There is also increasing concern about open-source models. While openness in research has supported progress in many fields, unrestricted access to tools capable of designing viruses poses a different kind of risk. The Stanford team made a point of withholding their model weights to prevent misuse. However, others may not take the same approach.
UK businesses that work with international partners will need to ensure those partners follow equivalent safety protocols. It may no longer be enough to comply with domestic regulations alone. Auditing suppliers, reviewing overseas collaborations, and maintaining clear contractual safeguards will all become more important.
Commercial Interest Is Already Accelerating
Despite the risks, commercial interest in AI-designed biology is growing quickly. Companies are exploring how the technology could support applications in medicine, agriculture, food safety, environmental protection and bioengineering.
Phages could, for example, be designed to target specific bacterial threats in farming, reducing reliance on antibiotics. Similar approaches could be used to clean up industrial waste or detect harmful microbes in supply chains. Each of these use cases will require rigorous testing, but the potential benefits are drawing attention.
Market forecasts even suggest that the global synthetic biology sector could exceed £40 billion within five years. If AI becomes part of the standard toolset for