Biotechnology and artificial intelligence: risks from research for security and proliferation of biological weapons


Background and central aspects of the topic

Rapid advances in the analysis and synthesis of genetic material (DNA), and in the targeted modification of genes in a wide range of organisms through genome editing, have simplified and greatly expanded the ability to study and modify biological systems and organisms over the past 10 to 15 years. Computational analysis and design methods have been an important tool in this context. These developments, often summarised under the term synthetic biology (TAB Working Report No. 164 "Synthetic biology - the next phase of biotechnology and genetic engineering"), are increasingly converging with recent advances in artificial intelligence (AI) and in the automation of laboratory processes, including fully automated laboratories that can be accessed via cloud services (cloud labs). These new possibilities have already led to revolutionary developments in basic research (e.g. the study of gene function and disease), industrial biotechnology (the design of new metabolic pathways in microorganisms) and medicine (CAR T-cell therapies, gene therapies).

At the same time, synthetic biology and its applications have been accompanied since their inception by debates about the risks of misuse and of possible (laboratory) accidents, especially with regard to the potential widening of access to these technologies for people outside established and registered public or private scientific institutions. In the context of these discussions on biosecurity and biosafety, mandatory regulations have been developed in the USA for certain dual-use research projects (Dual Use Research of Concern) and for the introduction of new properties (gain of function) into certain pathogens.

More recently, possible new or increased security risks from misuse have been discussed, especially in the context of further technological advances in DNA synthesis technologies (including desktop DNA printers), cloud labs and new AI models (such as ChatGPT-like large language models, LLMs, and biodesign tools). Publications on this topic by various actors come mainly from the US, including a report published in October 2023 by the non-profit organisation Nuclear Threat Initiative (NTI) on possible impacts and governance measures. Calls for action to address such risks and to safeguard biological research against dangers from AI have also recently been articulated in an executive order on AI issued by US President Joe Biden.

Objective and approach

Building on a broad information base, the project will provide a focused analysis of the potential security risks of recent biotechnological developments and of their interactions with developments in AI. It also aims to identify control and regulatory options and to discuss ways of developing them further in order to minimise these security risks and to strengthen the non-proliferation of potential biological weapons.

To this end, an up-to-date overview of relevant developments in biotechnology and synthetic biology and of their convergence with recent AI developments will be provided. On that basis, possible security risks discussed in the scientific literature and by various societal actors (e.g. developers/companies, researchers, policy-advice organisations, security experts) will be presented, and the underlying arguments or evidence will be discussed. Finally, building on this, the existing control and regulatory options at the national (German) and supranational or international level will be characterised, the scientific, social and political debate on their sufficiency or on the need for new measures will be examined, and options for action will be derived. As usual, the TAB analysis will be based on publicly available knowledge, supplemented by expert assessments in the form of written papers, interviews or workshops.

Project status

A call for tenders for an expert opinion is currently being prepared.