How the evolution of synthetic biology, quantum computing, and AI threatens to disrupt the current world order
Timothy Shoup & August Liljenberg
November 1, 2022
Some technological advances are so great that they create ruptures in our understanding of what is possible. Eighty years ago, such a rift took place with the invention of the atom bomb, transforming the way we conceive of warfare and global order. Today, rapid scientific and technological progress holds that very same potential. From artificial intelligence to synthetic biology, experts and policymakers are beginning to dissect the potential consequences of introducing unfamiliar, highly advanced, and potentially devastating new tools into the toolboxes of adversarial powers.
When referring to world order, we often operate within a ‘Great Power’ discourse and assume that geopolitical disruptions require geopolitical might. The democratisation of destructive technologies, however, will likely create the conditions for smaller non-state actors, and even individuals, to have a greater impact at an international level. Already today, the intelligence and information thresholds required to use advanced existing technologies competently are declining fast, and deep technical expertise requiring years, or even decades, of education and training is becoming less of a factor. For the sake of simplicity, consider three areas in which emerging technologies pose a threat: synthetic biology, quantum computing, and artificial intelligence.
Similar to how individuals and small groups build computer viruses today, do-it-yourself biohacking tools could in the future significantly lower the bar for individuals with very basic training in biology to enhance biological pathogens that could, in theory, drive the next global pandemic. Whereas concern was once directed towards off-beat experiments held at university labs – such as the case of a Dutch virologist successfully making the H5N1 ‘bird flu’ virus more transmissible among mammals in 2011 – today, technology is making ‘garage biology’ an international threat. Gene-editing tools such as CRISPR have drastically lowered the barriers for potential bioterrorist groups to modify (or replicate) deadly pathogens, with some arguing that it could cost as little as $10,000 to bioengineer smallpox at home. We have already arrived at a world where, via the internet, several laboratory processes can be followed almost as if they were recipes for chocolate cake, and the genomes of a range of organisms and pathogens are publicly available online.
Whereas the risks of synthetic biology can exist on a micro-scale, the race towards quantum computing operates at a global level. Quantum computing holds enormous potential for biotechnology, artificial intelligence, and machine learning, even if its practical applications have yet to arrive. The incredible processing speeds of such computers also pose a threat to current data encryption methods; in theory, the first country to achieve quantum supremacy would hold a decisive advantage in cyberwarfare against any target nation’s military systems.
Such potential has sparked a techno-nationalist ‘quantum race’ among global powers. In the US, directives issued under the Trump administration made it harder for Chinese students to pursue quantum-related degrees, and Washington has also moved to block the Netherlands from exporting to China lasers integral to quantum computing. China’s policy has been the reverse: through its ‘Thousand Talents Plan’, it has invited academics from the UK and US in an effort to gather the best minds in quantum computing. Given the extent to which strategic intelligence, military operations, and corporate IP strategies rest on certain assumptions about cybersecurity, the proliferation of quantum computing platforms could massively remake this globally contested space.
The potential of artificial intelligence is far more open to interpretation, however. According to the American political scientist Ian Bremmer, we have been living in an “AI Cold War” since 2018, the year China announced its ambition of becoming a world leader in AI by 2030. Although parallels could be drawn with the race towards quantum computing – both rely on advanced semiconductors – a key difference lies in who participates. Big Tech companies such as Meta Platforms, Amazon, Microsoft, and Alphabet have all invested heavily in the AI marathon. The binary view of an ‘AI cold war’ risks neglecting the immense amounts of data, and therefore control over information and communication, that these companies hold. In fact, some scholars argue that the rhetoric surrounding an ‘AI cold war’ itself poses as great a risk in the immediate term as AI does. The philosopher Stephen Cave has, for example, developed a model demonstrating that greater enmity between rival AI powers, as well as simply possessing more information about other powers’ capabilities, significantly increases the risk of AI “corner-cutting” and the ignoring of safety protocols. Within AI, the absence of a code of conduct – let alone a technological lingua franca, should AI become sufficiently advanced – is much too great a risk to ignore.
Although immensely destructive, the consequences of the atom bomb operated within a utilitarian calculation that was universally understood: to kill, or not to kill, possibly millions of people. Such ‘simple’ decision-making processes could be constrained by the doctrine of Mutually Assured Destruction. There were, in other words, simple consequences from a complex technology. The future capabilities of advanced technologies pose complex ethical dilemmas without intuitive unwritten codes of conduct. Where exactly does agency lie in highly automated military processes, or in those directed by artificial intelligence? How can we pursue R&D within quantum computing without simultaneously fuelling an arms race that risks compromising the world’s encryption systems?
Unfortunately, pondering the geopolitical consequences of future technologies yields more questions than answers. Ultimately, the winding roads of such futures depend on the values held by citizens: how individuals weigh the trade-off between liberty and freedom on the one hand and a sense of safety and security on the other, and who they will turn to in order to meet those needs. Such a world, characterised by uncertainty around the development of emerging technologies, may very well require greatly amplified capacities for preventative policing as a means of global governance. Envisioning such an outcome is challenging, even disturbing, as it alternates between the polarities of a unipolar world order marred by ubiquitous global surveillance and the replacement of nation states with a system of decentralisation bordering on anarchy. We exist now at the crucial moment where possibility and deliberation intersect – if we choose the path of violent destruction, humanity will have to bear the blame.
This is a featured article from our latest issue of FARSIGHT: A World Pulled Apart?