Why new qubit may give ultrafast quantum computing a boost


Microsoft announced last month that it had created a “topological qubit,” which the company says can power a quantum computer more reliably than previously developed qubits, and which it believes will speed the development of ultrafast quantum computers capable of tackling the toughest computing challenges, far beyond the capability of even supercomputers built through conventional means.

The decades-old field of quantum computing seeks to harness the unusual forces at play at the subatomic level. Key is the idea of “superposition,” that something can be in two states at once.

In classical computing, information is stored as bits, each either a 1 or a 0. In quantum computing, superposition means that information can be stored in a qubit as a 1, a 0, or a combination of the two. Because each added qubit doubles the number of states a machine can represent at once, this increases the computer’s power exponentially.
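As a back-of-the-envelope illustration, in standard textbook notation rather than anything specific to Microsoft’s chip: a single qubit’s state is a weighted blend of 0 and 1, and a register of n qubits ranges over 2^n basis states at once, which is where the exponential growth comes from.

```latex
% One qubit: a superposition of 0 and 1, with complex weights.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert\alpha\rvert^2 + \lvert\beta\rvert^2 = 1
\]
% n qubits: a single state spread over all 2^n bit strings at once,
% so a full classical description needs up to 2^n amplitudes.
\[
  \lvert \Psi \rangle = \sum_{x \in \{0,1\}^n} c_x \lvert x \rangle,
  \qquad \sum_{x} \lvert c_x \rvert^2 = 1
\]
```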

In December, for example, Google unveiled a quantum chip that completed a computation in just five minutes that would take a conventional supercomputer 10 septillion years.

Microsoft’s topological qubit is constructed of indium arsenide, a semiconductor, and aluminum, which becomes a superconductor at very low temperatures. It is the result of nearly two decades of work by a Microsoft team led by Chetan Nayak, a Microsoft technical fellow and professor at the University of California, Santa Barbara.

In this edited conversation, Nayak, who got his start in physics as a Harvard College undergraduate in the late 1980s and early 1990s, spoke with the Gazette about the advance and about his experience treading the sometimes-difficult path of discovery.


How is Microsoft’s new qubit — the topological qubit — different from ordinary ones?

A qubit is a quantum mechanical two-level system. It’s something that can be a 0 or a 1, like a regular bit, but, because of quantum mechanics, it can also be in a superposition of 0 and 1.

That happens when you get down to microscopic enough scales. As features on microprocessors have been getting smaller and smaller, we have been getting to the limit where quantum mechanics is going to start to matter for classical computing. That’s a problem, because you want 0s and 1s to be very well-defined and not fluctuate in an unwanted way. But it turns out that’s also an opportunity.

Richard Feynman — and others — recognized as far back as the 1980s that nature is ultimately quantum mechanical so, if you want to simulate nature, you need to simulate it with what we call a quantum computer.

So problems in quantum mechanics, such as simulating materials like high-temperature superconductors, or in chemistry, such as simulating catalysts that could be used for nitrogen fixation to make fertilizers or to break down microplastics — those kinds of materials and chemistry problems mostly have to be solved by experimental, high-throughput trial and error. It’s expensive and time-consuming.

With a quantum computer, you could simulate those things because it operates on, and takes advantage of, the same underlying physical principles that nature uses.

The danger, though, is that your qubits will be like Schrödinger’s cat. The cat can’t, in the real world, be in a superposition of dead and alive, because the environment effectively gets entangled with it and collapses the wave function.

So, the qubits will eventually — or in some cases pretty quickly — lose the superposition. Then you lose all of the extra juice that you get from quantum mechanics. That’s part of what quantum error correction is supposed to solve.
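To make the error-correction idea concrete, here is a minimal toy simulation of the simplest scheme, the three-qubit bit-flip repetition code. This is an illustrative textbook sketch, not Microsoft’s method; the function names and the NumPy statevector representation are mine.

```python
# Toy simulation of the 3-qubit bit-flip repetition code: redundantly
# encode one logical qubit in three physical qubits, locate a single
# bit-flip error from parity checks, and undo it.
import numpy as np

def encode(alpha, beta):
    """Encode a|0> + b|1> as a|000> + b|111> (8-amplitude statevector)."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def bit_flip(state, qubit):
    """Apply an X (bit-flip) to one qubit of the 3-qubit register."""
    flipped = np.zeros_like(state)
    for basis in range(8):
        flipped[basis ^ (1 << qubit)] = state[basis]
    return flipped

def syndrome(state):
    """Parities Z0Z1 and Z1Z2. In hardware these would be measured via
    ancilla qubits without collapsing the logical state; here we read
    them off the simulated state, where both branches agree."""
    basis = next(b for b in range(8) if abs(state[b]) > 1e-12)
    b0, b1, b2 = (basis >> 0) & 1, (basis >> 1) & 1, (basis >> 2) & 1
    return b0 ^ b1, b1 ^ b2

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
state = encode(alpha, beta)
state = bit_flip(state, 1)          # the environment flips the middle qubit

s01, s12 = syndrome(state)          # the parity checks locate the error...
error_qubit = {(1, 1): 1, (1, 0): 0, (0, 1): 2}.get((s01, s12))
if error_qubit is not None:
    state = bit_flip(state, error_qubit)   # ...so we can undo it

assert np.allclose(state, encode(alpha, beta))  # the superposition survives
print("error corrected; logical state intact")
```

A real code must also catch phase errors, which takes larger and more elaborate constructions; the appeal of the topological approach, described below, is to build much of that protection into the hardware itself.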

“Actually holding that physical processor in my hand, and feeling the reality of it, that was pretty cool.”

A topological qubit is based on the idea that, given that you need to do error correction and you are worried about the fragility of quantum states, the more you can have that occur at the hardware level, the better the situation you’re in.

The idea is that quantum mechanical states — the quantum mechanical wave functions — can have a topological mathematical structure, and if you can engineer, or find in nature, a physical system that organizes itself into quantum mechanical states whose wave functions have that topological structure, then the information you encode will be very stable — not infinitely stable, but extremely stable — and potentially without other very painful tradeoffs.

Maybe it doesn’t have to be huge; it doesn’t have to be slow; and it could be easy to control, because the number of control signals you have to put in is generally smaller. It hits a sweet spot, building a lot of stability and rigidity into the wave functions without other painful tradeoffs.
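As a rough sketch of the kind of nonlocal encoding involved — this is the standard Majorana-mode picture from the research literature, as the interview does not spell out Microsoft’s exact construction — two Majorana modes bound at opposite ends of a superconducting wire jointly form one ordinary fermion mode, and the qubit lives in whether that shared mode is empty or occupied, something no disturbance at a single end can read or flip on its own.

```latex
% Two self-conjugate Majorana operators, localized at opposite ends of
% the device, combine into a single ordinary fermion mode c:
\[
  c = \tfrac{1}{2}\left(\gamma_1 + i \gamma_2\right),
  \qquad \gamma_j = \gamma_j^\dagger,
  \qquad \{\gamma_j, \gamma_k\} = 2\delta_{jk}
\]
% The qubit is the fermion parity of that delocalized mode; flipping it
% requires acting on both ends at once, which local noise cannot do.
\[
  P = i\,\gamma_1 \gamma_2 = 2\,c^\dagger c - 1, \qquad P = \pm 1
\]
```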

So, it’s a more stable, more robust system than the qubits being used now. How close is this to powering an actual computer?

Our ultimate goal is to have a million-qubit quantum computer. That’s the scale at which quantum computers are going to be able to solve these valuable problems, in areas like new materials and chemistry.

It was in thinking about scale that we charted the roadmap we have. We didn’t want any solutions or any technologies that could only get to 100 or 1,000 qubits. Today, we only have a handful of qubits, as you saw on the chip that we’ve been showing off, but we have a roadmap to much larger systems.

We entered into a contract with DARPA, the Defense Advanced Research Projects Agency. Details aren’t public, but we have promised to deliver something pretty serious, that’s going to have fault tolerance, on a pretty aggressive timeline. It’s not going to be a million yet, but it’s going to be far enough along the road that it’s going to be very clear that we can get all the way there.

Life’s short. This is something I want to see in years, not decades, and our CEO does too.

It sounds like there were a number of major hurdles. What did you find most challenging?

In trying to make topological qubits, the situation for us in some ways was like going back to the early days of classical computing when people were building computers with vacuum tubes.

Semiconductors weren’t well understood, so there was a lot of fundamental research going on to understand what exactly they were. Sometimes they look like metals and sometimes they look like insulators. The fact that you can tune them in between is where their power is: the switchability and control.

People had to understand what properties were intrinsic and what properties were just due to some devices being dirtier than others. That led to the development of the transistor, but the first applications were years away — it was a while before it was computers — then came integrated circuits and you’re off and running.

We understood that you had to have the right material in order to get this new state of matter. We also understood at a reasonably early stage that the material had to have certain properties. It was going to have to be a hybrid between a superconductor and a semiconductor. It was going to need to put together many of the nice properties of a semiconductor and many of the cool properties of a superconductor. And we’d have to do this without introducing too many impurities or imperfections in the process.

Once we realized that was the first problem, the zeroth order thing that you can’t even get to “go” until you solve, and focused a lot of effort on that, then we were in a much better place.

“Our ultimate goal is to have a million-qubit quantum computer. … We didn’t want any solutions or any technologies that could only get to 100 or 1,000 qubits.”

Of course, in the early days there’s going to be a lot of wandering around trying to figure it out. But I think the first step to solving a problem is clearly formulating what the problem is. If you don’t have a precise formulation of the problem, you’re probably not going to get to the solution. Arriving at a very precise statement of the problem relied heavily on our ability to simulate these devices.

But we couldn’t use off-the-shelf simulations that people use in the semiconductor industry. The ideal thing would be if we had a quantum computer, which could simulate materials, but we didn’t have that.

So we had to develop custom, in-house simulations that enabled us to figure out the right materials combination and, of course, how to develop the synthesis and fabrication methods to make these new material types.

The third piece of that is testing. Once we had those three pieces, that wasn’t a guarantee of success, but that at least meant that we had a really good game plan and the ability to start turning the flywheel.

How did it feel to actually hold the chip in your hand?

It was pretty amazing. But the first time I got chills down my spine was when I started seeing the data from one of these chips, and it looked the way we expected it to. That was within the last year — one of those moments where I said, “Oh, wow.”

In 19 years of work, there were setbacks, but especially in the last, let’s say, four years, there were a lot of moments where I said, “We actually kind of know what we’re doing here, and I see a path forward.”

There were a couple of times when we surprised ourselves with how fast we were able to go. But, without a doubt, actually holding that physical processor in my hand, and feeling the reality of it, that was pretty cool.

When you graduated from Harvard College in the early 1990s, your degree was in physics?

Yes, my undergrad degree was in physics at Harvard. I was there ’88 to ’92, and it was fantastic. I lived in Dunster House. I was back there last year to visit one of the labs. I got to run along the Charles River that morning and just walking from the hotel through Harvard Square over to the Jefferson Lab brought back a lot of good memories, though the Square has changed a lot.

I’m still in touch with my roommates and close friends from my time at Harvard. We have a WhatsApp thread that we all stay in touch on.

There’s not a lot of faculty there now who were there when I was a student, but there are a few emeritus professors and lots of great new faculty there, whom I didn’t know as a student but have known professionally as a physicist over the last 10 to 15 years.

You got started on this specific path with your doctoral work at Princeton?

I trace it back to some things at the end of my last year at Princeton; that’s when I first headed down this path. When I was an undergrad, I was interested in things vaguely like this, but quantum computing wasn’t really a field.

There’s been skepticism from some quarters expressed about your data. How do you answer the skeptics who say they don’t believe your results?

First, skepticism is healthy in science. It’s a normal part of the process, and anytime you do something really new, there should be skepticism.

We presented a lot of new results at the Station Q conference. It’s a conference we hold regularly, almost every year, in Santa Barbara, and it brings together over 100 people from across the field, from both universities and industry. There were one or two scientists from Harvard and also from Google and Intel.

The people who were at the conference heard about it for 90 minutes, got to ask questions, and were there for the rest of the conference to ask questions in informal discussions and over coffee, over dinner, and so on. But the rest of the community hasn’t heard it yet, hasn’t seen a paper yet, and there are a lot of questions.

So, there’s a group of people who’ve had a lot of exposure to the latest results, and that group is excited and has given very positive feedback, both on the work and the results. People who haven’t heard all the latest results are skeptical, and that’s natural.

I’m going to give a talk at the American Physical Society Global Summit — this is the 100th anniversary of quantum mechanics, of Schrödinger discovering his equation — and a lot more people will get to hear about our latest results.

We’re also putting out a paper in roughly the same time frame, so a lot more people are going to have a chance to see the very latest data and judge for themselves.

What happens next?

We put out a paper last week that lays out a roadmap. It’s not everything that we’ve shared with DARPA, but it’s the part that we think that we can make publicly available. We’re full speed ahead.

We are interested in these really big problems that ultimately come down to understanding nature better.

Some of my earliest work in physics involved trying to understand high-temperature superconductors. That was a big deal when they were first discovered because superconductivity was thought to be a phenomenon that only occurred at extremely low temperatures.

Then it was discovered that you can actually have things become superconducting above liquid nitrogen temperatures. It’s not fully understood why or how that happens, so our ability to make better versions of it or things that work at even higher temperatures is limited because we don’t even know where to look.

So I’m excited that some of these big scientific problems from the beginning of my career that I knew were important but didn’t know how to make progress on are things that we’ll be able to attack now with a quantum computer.


