An interview with Morten Bache, Scientific Director of the Novo Nordisk Foundation’s Quantum Computing Programme
Illustration: Sophia Prieto
Simon Fuglsang Østergaard & August Leo Liljenberg
January 5, 2023
In this interview with Morten Bache, Scientific Director of Quantum Technologies at the Novo Nordisk Foundation (NNF), we discuss the implications of the organisation’s recently established ‘NNF Quantum Computing Programme’, an initiative in collaboration with the University of Copenhagen’s Niels Bohr Institute aiming towards the development of what could be the world’s first fully functional quantum computer. Kickstarted by a DKK 1.5 billion investment from the Foundation itself, the programme will create a collaborative ecosystem consisting of researchers from Massachusetts Institute of Technology and Yale University (United States), Delft University of Technology (the Netherlands), University of Toronto (Canada), and the Technical University of Denmark and Aarhus University (Denmark). The programme will ultimately be nestled in the birthplace of quantum mechanics itself – the Niels Bohr Institute at the University of Copenhagen.
Why should a quantum computer be built in Denmark?
We use the problem-solving abilities of computers far more than we realise. Beyond the physical hardware of our smartphones and PCs, every time we use a search engine, we are by proxy using immense computing technology in servers and data centres. When we talk about future issues that we need to solve, such as climate change or developing life-saving drugs in the pharmaceutical industry, then we will hit a brick wall if we continue to only use classical computers. There are some very specific problems that classical computers will never be able to solve that a quantum computer might be able to. With one of the strongest quantum research clusters in the world, we think that Denmark has the right conditions to create a world-class team and ecosystem for quantum computing.
Can you explain how a quantum computer is technically different from a classical one?
What differentiates a quantum computer from a classical one is the presence of qubits. An ordinary computer’s Central Processing Unit (CPU) works by collecting streams of electrical impulses, digitised as either 1 or 0, to encode information. By virtue of quantum physics, the quantum bits, or qubits, in a Quantum Processing Unit (QPU) can exist in a superposition of 1 and 0 at the same time: simply put, quantum chips operate at a subatomic level, allowing photons or electrons to exist in multiple states simultaneously. Another difference is that the processing capacity of a classical computer grows linearly as bits are added, while the state space a quantum computer can represent grows exponentially with each added qubit. Building the computer language on qubits therefore gives additional degrees of freedom compared to the digital 0 or 1 of a classical computer. Programmers can exploit this freedom when planning how to solve a problem or designing entirely new quantum AI algorithms.
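The two differences described above – superposition and exponential growth of the state space – can be illustrated with a few lines of NumPy. This is a minimal sketch, not part of the programme’s technology: it simulates a single qubit on a classical machine and counts amplitudes, assuming the standard state-vector formalism (an n-qubit register is described by 2ⁿ complex amplitudes).

```python
import numpy as np

# An n-bit classical register holds exactly one of 2**n values at a time.
# An n-qubit register is described by a vector of 2**n complex amplitudes,
# so the representable state space doubles with every qubit added.
for n in (1, 10, 20, 30):
    print(f"{n} qubits -> state vector of {2**n} amplitudes")

# A single qubit in an equal superposition of 0 and 1: applying the
# Hadamard gate H to the basis state |0> spreads the amplitude evenly.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])       # the classical bit "0" as a state vector
superposition = H @ ket0          # amplitudes (1/sqrt(2), 1/sqrt(2))

# On measurement, the probability of each outcome is the squared
# magnitude of its amplitude: here 0 and 1 are equally likely.
probs = np.abs(superposition) ** 2
print(probs)                      # ~[0.5, 0.5]
```

The exponential growth in the loop is exactly why classical simulation runs out of memory around a few dozen qubits – and why real quantum hardware is needed beyond that point.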
You mention that the programme is aiming to develop Denmark’s, if not potentially the world’s, first fully functional quantum computer. What’s the state of quantum computing today?
We are currently at an early stage of quantum technology – perhaps comparable to where classical computers were in the 1960s. Although we can already use quantum computers to solve very simple problems, after a certain calculation time too much noise builds up, disturbing the interpretability of the final result, so the calculation must be stopped before that point. ‘Noise’ in this context means that the qubits of a quantum chip gradually lose information to their external environment. Current QPUs are therefore said to operate in the ‘Noisy Intermediate-Scale Quantum’ (NISQ) regime, and this constrains what kinds of problems we can solve on a quantum computer today.
Today’s commercial and functional quantum computers sit around this NISQ barrier – the 50 to 100 qubit mark. IBM currently boasts the largest one as of 2022, Osprey, at 433 qubits. They use superconducting materials to make the qubits, which need to be cooled to very close to absolute zero (-273 °C), colder than outer space, to protect the qubits and keep too much noise from creeping in. This is done using a cooling mechanism called dilution refrigeration, and they’re aiming to develop a 1,000-qubit computer by 2023. However, this will still be nowhere near full functionality. Our initiative is aiming for a quantum computer with on the order of 1 million qubits by 2034, with an automated and actively running error-correction system that reduces noise. Reaching the 500,000 to 1 million-qubit mark is essential to solving the complex problems of the future; within today’s qubit limits, the required calculations would produce far too much noise for the results to be corrected into anything reasonably accurate.
As of now, the refrigeration technique employed by IBM and similar companies has a maximum capacity around the 10,000-qubit mark and scaling it up to more qubits would require dozens of refrigeration units with communication channels between them – visualise an entire quantum computing warehouse. So, you can imagine, using this technique, our minimum qubit goal would take up a lot of space. Therefore, our programme will look for alternatives in other qubit technologies to find a promising, scalable qubit platform that can be taken towards the goal of 1 million qubits. One example is building a quantum computer based on light – photonic qubits – meaning that one can connect and communicate between various subunits using fibre optic cables, which is a very mature technology thanks to telecommunication. Such a quantum computer would still not be compact in size – it never will be – but it would be small enough to fit into a meeting room, even for a 1 million-qubit computer.
How will the technology behind the quantum chip be safeguarded?
Since we believe the programme should foster a cooperative quantum ecosystem, as well as ensure that the technology behind the actual QPU itself doesn’t fall into the wrong hands, we have had to create a unique organisational structure. We do not want a situation where cyberwarfare is being waged by nefarious actors using technology that we have developed. Therefore, the QPU manufacturing facility will be established as a partner company (Quantum Foundry P/S) co-located within the programme. This ensures the entity can hold the intellectual property rights for fabricating the chips in the future, and due to the potential security risks associated with the technology, it will be locked in a special part of the Niels Bohr Institute accessible only with authorised permission. This structure allows for unique collaboration opportunities where external actors, academic or private, can engage with the Quantum Foundry P/S by signing a collaboration agreement, for example.
A major reason why a company was founded inside the programme is that if the technology behind the QPU manufacturing facility were patented, its secrets would be released when the patent is disclosed. That’s why it is often desirable to protect the technology as a trade secret instead, and unlike an academic institution, a private company can do this. In this way, our QPU manufacturing facility will remain ‘closed off’ to the public and protected a bit like the recipe of Coca-Cola. There is no need to patent the Coca-Cola recipe if the way it’s brewed is a secret and can remain a secret. Luckily, it’s not possible to reverse engineer the recipe the programme intends to use for fabricating quantum chips. If a fully functional quantum chip were to fall into the hands of an undesired third party, they wouldn’t be able to properly understand how it was built. Sure, they’d be able to identify what materials were used to make it, but not the intricate details of how it was constructed piece by piece. Likewise, you can pinpoint the different chemical components of the Coca-Cola recipe, but you still don’t know how the drink was made.
What makes this initiative have an edge over competing private companies?
Big industry players such as IBM, Google, Microsoft, and Honeywell have all said that they plan on developing fully functional quantum computers as well. It is our view, however, that in the very early phases of developing a novel technology, it’s worth promoting collaboration rather than competition. The challenge is that the business models of these companies cause them to fence their technology in and prevent them from collaborating with one another. Our organisational model ensures that the programme has the advantages of openness associated with academia, collaborating with other research clusters and industry actors, while anchoring the intellectual property rights of how the quantum chip itself is built in Denmark. This cooperative model is currently unheard of within the quantum computing community, and we strongly believe that one cannot reach the goal of full functionality without these conditions. The programme is not aiming to have a production line of quantum computers; it’s trying to define what kind of technology we need to get a fully functional quantum computer that can be used by everyone. This is different from the approach a big tech company would take. What they’re interested in is getting customers and making some money along the way, which often comes at the expense of collaborating and expanding the existing ecosystem.
Then there’s also the crucial question of how quantum technology can be used for nefarious intent. When it comes to digitalisation, we often discuss issues of cybersecurity and data privacy only after a certain technology has been launched. These are key questions for quantum computing too; we need to ensure a democratic voice is implemented in the process of their technological development, something which I think is more easily achieved when done collaboratively.
How will our lives be affected by quantum computing in, say, 2050?
I think that quantum chips will almost certainly be positioned in a hybrid computational context, meaning that if you’re lucky, you won’t be thinking about quantum computers very much. Quantum computers will make up just one part of the computer technology pipeline. Upon a search request for personalised medicine, for example, your personal computer might send a request to a much larger computer, which will then send one to a quantum computer, and the result will finally be filtered back to your device via the same pipeline. The aim is to get these different types of computers to talk and exchange information with one another via some cleverly written software. In the minds of software and hardware producers, the less aware we are of quantum computers, the better – it means their job has been successful.
What is the probability that your team will be the first to reach the quantum goal?
We believe that within the next 10 to 12 years, we will be able to develop a fully functional quantum computer. Some think that 2050 is more realistic. And sure, we are being optimistic. There is a chance that we don’t reach our million-qubit minimum target, but rather ten or a hundred thousand, where perhaps 1% of the qubits performing the actual calculations are error-corrected for noise. The problem is that you really need to get into the million-qubit range to obtain worthwhile results from incredibly complex problems when only 1% of the qubits are error-corrected. To answer your question, the likelihood is somewhere higher than 1%, and somewhere lower than 50%. Right, I can tell you are not satisfied with that – let’s say a 20% chance then. There are currently maybe five or six other countries that could also achieve it – we’re all neck and neck. And if someone else beats us to building the first one, then the technology that the programme has created will nonetheless still be available and highly relevant.
This is a featured article from FARSIGHT: Visions of a Connected Future