What Is Happening Now
Life science technologies have recently made remarkable advances. Progress in synthetic biology and genome analysis has dramatically accelerated the development of medical technologies that once required long periods of time. A symbolic example is the mRNA vaccines that were rapidly developed and distributed in response to COVID-19. At the same time, these technological foundations also carry the risk of being misused in unforeseen ways. Between spring and summer 2023, 23andMe, a company providing DNA testing services, suffered a major cyberattack that exposed the genetic information of approximately seven million people[1]. It was reported that the leaked data included information that could identify individuals of specific ethnic backgrounds[2].
In the 21st century, science and technology are penetrating various sectors of society without being confined to specific fields or purposes. This transformation is particularly evident in the life sciences. Traditional distinctions such as “military vs. civilian,” “public vs. private,” or “good vs. bad” are increasingly destabilized. Technologies now exhibit what may be described as an “omni-use” characteristic[3]. Derived from the Latin term omnis, meaning “all” or “every,” omni-use refers to a condition in which technologies transcend the binary framework assumed by the traditional concept of dual-use – most notably the division between military and civilian applications. Instead, technologies diffuse broadly throughout society, and their potential uses expand in ways that are nearly unlimited.
Moreover, when life sciences increasingly converge with artificial intelligence (AI), the omni-use character of these technologies becomes even more pronounced. AI-driven analysis of genetic sequences and the optimization of biological design significantly expand possibilities in research and medicine. At the same time, however, they create situations in which outcomes are generated without direct human intention or explicit value judgement at every stage of the process. As a result, it has become increasingly difficult to determine in advance at what point a desirable application may deviate into misuse, and how far regulatory frameworks should extend.
Under such omni-use conditions, the boundaries between fields such as bioethics, economic activity, and national security are likewise becoming increasingly fluid. The use of biological data introduced for public health purposes, for example, may, in a different context, become linked to human rights violations or forms of social exclusion. In this sense, the value and implications of technology vary significantly depending on the institutional and social environments in which it is applied. What is now required is to envision a new governance framework that takes as its starting point the assumption that life science technologies can be used in all directions.

Biotechnology in the Omni-Use Era
Recent advances in biotechnology extend far beyond improvements in experimental techniques; they are transforming the very foundations of how science understands, manipulates, and designs life. The refinement of genome editing technologies, the engineering of biological systems through synthetic biology, and the rapid development of large-scale biological data analysis using AI have collectively reshaped the field. Biology is shifting from a discipline primarily concerned with observing life to one increasingly focused on designing life[4].
Biotechnology originated in the 1970s with the development of recombinant DNA technology. In the 2000s, the emergence of synthetic biology further expanded the field, enabling the artificial design and manipulation of life at the molecular and genetic levels. In 2008, scientists succeeded in synthesizing an entire bacterial genome, marking a significant milestone in making the design of life from scratch a tangible scientific reality. The introduction of CRISPR-Cas9[5] in 2012 made gene editing both highly precise and relatively inexpensive.
Techniques that were once confined to specialized facilities have since spread to universities, private companies, cloud laboratories, and even independent scientists, contributing to a substantial democratization of biotechnology. This trajectory has been further accelerated by advances in AI. AI systems can now analyze vast amounts of genetic data and predict the functions and interactions of genetic sequences, greatly expanding the scope and speed of biological research.
As a result, biotechnology has brought significant benefits to fields such as medicine, public health, and environmental protection. At the same time, however, the very same knowledge and techniques may increase the potential for deliberate risks and unintended social consequences. The core challenge today does not lie in technological progress itself. Rather, it stems from the rapid diffusion of these technologies and their simultaneous use by diverse stakeholders across multiple contexts.
Uncertainty and Indeterminacy in Biotechnology
As biotechnology advances toward the capacity to “design life,” the uncertainties surrounding these technologies have become multilayered, expanding in ways that are deeply interconnected. This is not merely a transitional problem associated with technological development. Rather, it has the potential to fundamentally challenge existing frameworks of regulation, ethics, and even security.
First, there is uncertainty inherent in the technology itself. Although genome editing techniques enable highly precise manipulation of target genes, it remains difficult to fully understand and control the complex networks of interactions that characterize living systems as a whole. Consequently, modifications undertaken for therapeutic or research purposes may produce unintended harmful effects. Similarly, if a malicious actor alters a pathogen, unforeseen levels of pathogenicity or transmissibility could emerge. AI-driven design introduces additional risks: as such systems depend on training data that may be incomplete or biased, they may inadvertently generate genetic designs that enhance virulence or increase the ease of transmission.
Second, there is ethical indeterminacy. Enhancing resistance to disease through genetic modification may offer substantial benefits in medicine and public health. Yet there is no established social consensus regarding the limits of such interventions. The boundaries between therapy and enhancement, research and application, and benevolent and malicious intent remain ambiguous. As a result, it is difficult to draw clear lines in advance regarding permissible and impermissible uses of these technologies.
Third, there is uncertainty arising from the destabilization of traditional security concepts. In the era of omni-use, AI-generated genetic sequences and design methodologies can spread across borders instantaneously, and the actors capable of employing them are no longer confined to states. As biotechnology shifts from being concentrated in physical laboratories to being digitally designed, widely shared, and geographically dispersed, states find it harder to monitor and control its development and use.
Importantly, these uncertainties do not exist independently; they are closely intertwined and mutually reinforcing. When the technical effects of a technology are uncertain, it is difficult to make confident ethical decisions about its uses. And when ethical standards remain unclear, this creates gray zones in which harmful or malicious actors can operate with less oversight, increasing security risks. Within this chain of interrelated challenges, the life sciences hold enormous promise, yet the risk of misuse, particularly for biological weapons, also increases qualitatively.
Under such conditions, traditional approaches that seek to identify and restrict dangerous research in advance are no longer sufficient. Although state-centered institutions remain important, their reach faces structural limitations in an omni-use technology environment shaped by the convergence of AI and the life sciences.

Toward Co-Evolutionary Governance
What is required today is not the simple prohibition of specific technologies, but a shift toward what may be called “co-evolutionary governance.” This approach seeks to understand simultaneously how technology shapes society and how societal choices, in turn, shape the ways technology is developed and used. Rather than treating security, science and technology, ethics, and institutions as separate domains, it recognizes them as mutually influencing spheres that must be coordinated. Such a perspective is indispensable for governing life sciences in the era of omni-use.
Co-evolutionary governance does not mean multiplying regulatory authorities or expanding control mechanisms. Instead, it envisions a framework in which diverse actors – states, corporations, universities, and civil society – engage according to their respective roles and responsibilities, collectively assuming both the benefits and the risks of technological development. States establish legal systems and international rules. Companies ensure transparency and accountability in research and use of their products. Universities and research institutions uphold standards of research ethics. Citizens participate in public deliberation and contribute to public oversight. Institutions responsible for security and crisis response can further strengthen societal resilience by identifying early signs of technological misuse and sharing relevant information with the scientific community.
A crucial first step is to create mechanisms that bridge research practice and policymaking. This includes institutional arrangements that consider ethical and security implications from the early stages of research, as well as forums in which experts and citizens can engage in sustained dialogue. The European Union’s initiative on “Responsible Research and Innovation (RRI)[6]” offers a useful reference point. This approach promotes the early integration of ethical reflection, public engagement, and risk assessment into the research and innovation process, ensuring that scientific and technological development remains aligned with societal values and expectations. Japan, too, would benefit from developing precautionary and participatory institutional designs.
Effective governance also requires robust systems for information sharing. Establishing international frameworks capable of tracking how AI models and genetic data are used – by whom, when, and for what purpose – would help prevent misuse while preserving academic freedom. This would not entail constant surveillance of research, but rather, the development of traceability, reporting, and safety evaluation mechanisms for high-risk applications. Achieving this will require cooperation among governments, researchers, private firms, and security agencies with regulatory rules that evolve alongside technological change. Although challenging, such adaptive governance is increasingly necessary in rapidly advancing technological fields, and may be more realistically achieved through flexible standards and multi-stakeholder approaches.
In the era of omni-use, governance cannot rest solely with the state. Rather, society as a whole must share responsibility for both risk and oversight, engaging in processes of collective reflection and adjustment. As AI and biotechnology increasingly converge, what is needed is neither unconditional trust in technology nor excessive fear. Instead, governance must be adaptive and learning-oriented, updating institutions in light of persistent uncertainty and enabling technology to mature alongside society itself.

(2026/03/19)
Notes
- 1 Robert Booth, “DNA testing firm 23andMe fined £2.3m by UK regulator for 2023 data hack,” The Guardian, June 17, 2025.
- 2 Jonathan Stempel, “23andMe settles data breach lawsuit for $30 million,” Reuters, September 14, 2024.
- 3 Brigitte Dekker, Maaike Okano-Heijmans, “Emerging technologies and competition in the Fourth Industrial Revolution: The need for new approaches to export controls,” Strategic Trade Research Institute, Strategic Trade Review, Volume 6, Issue 9, Winter/Spring 2020, pp. 53–68.
- 4 Nariyoshi Shinomiya, Kiwako Tanaka, “The Security Implications of Developments in Biotechnology,” International Institute for Strategic Studies, Research Papers, February 20, 2025.
- 5 CRISPR-Cas9 is a genome-editing technology that enables scientists to precisely modify DNA sequences in living organisms in a relatively simple, efficient, and low-cost manner.
- 6 European Commission, Responsible Research and Innovation: Europe’s Ability to Respond to Societal Challenges, Publications Office of the European Union, 2014.
