The post Defining logical qubits: criteria for Resilient Quantum Computation appeared first on Microsoft Azure Quantum Blog.

In June 2023, we offered how quantum computing must graduate through three implementation levels (Quantum Computing Implementation Levels, QCILs) to achieve utility scale: Level 1 Foundational, Level 2 Resilient, and Level 3 Scale. All quantum computing technologies today are at Level 1. And while NISQ machines are all around us, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient computation across a sea of logical qubits, something that, to the best of our current knowledge, can only be achieved with error correction and fault tolerance. Fault tolerance will be a necessary and essential ingredient in any quantum supercomputer, and for any practical quantum advantage.

The first step toward the goal of reaching practical quantum advantage is to demonstrate resilient computation on a logical qubit. However, just one logical qubit will not be enough; ultimately the goal is to show that quantum error correction helps non-trivial computation instead of hindering it, and an important element of this non-triviality is the interaction between qubits and their entanglement. Demonstrating an error-corrected resilient computation, initially on two logical qubits, that outperforms the same computation on physical qubits will mark the first demonstration of a resilient computation in our field's history.

The race is on to demonstrate a resilient logical qubit, but what is a meaningful demonstration? Before our industry can declare victory on reaching Level 2 for a given quantum computing hardware and claim the demonstration of a resilient logical qubit, it's important to align on what this means.

**Criteria of Level 2: resilient quantum computation**

How should we define a logical qubit? The most meaningful definition hinges on what one can do with that qubit: demonstrating a qubit that can only remain idle, that is, be preserved in a memory, is not meaningful if one cannot demonstrate non-trivial operations as well. Therefore, it makes sense to define a logical qubit such that it allows some non-trivial, encoded computation to be performed on it.

Distinct hardware comes with distinct native operations. This presents a significant challenge in formally defining a logical qubit; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that mark the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a "logical qubit."

**Entrance criteria to Level 2**

Exiting Level 1 NISQ computing and entering Level 2 Resilient quantum computing is achieved when fewer errors are observed on the output of a logical circuit using quantum error correction than on the analogous physical circuit without error correction. We argue that a demonstration of the resilient level of quantum computation must satisfy the following criteria:

- Involve at least 2 logical qubits.
- Demonstrate a convincingly large separation (ideally 5-10x) of logical error rate below physical error rate on the non-trivial logical circuit.
- Correct all individual circuit faults (the "fault distance" must be at least 3).
- Implement a non-trivial logical operation that generates entanglement between logical qubits.

The justification for these is largely self-evident: being able to correct errors is how resiliency is achieved, and demonstrating an improvement over physical error rates is precisely what we mean by resiliency. But we feel it is worth emphasizing the requirement for logical entanglement. Our goal is to achieve advantage with a quantum computer, and an important ingredient of advantage is entanglement across at least 2 logical qubits.

The distinction between the Resilient Level and the Scale Level is also important to emphasize: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, we find it important to allow some forms of post-selection, with the following requirements:

- Post-selection acceptance criteria must be computable in real time (but may be implemented in post-processing for a demonstration).
- Post-selection must be scalable (the rejection rate can be made vanishingly small); or
- if post-selection is not scalable, it must at least correct all low-weight errors in the computation (with the exception of state preparation, since post-selection in state preparation is scalable).

In other words, post-selection must be either fully compatible with scalability, or it must still allow for demonstration of the key ingredients of error correction, not simply error detection.

**Measuring progress across Level 2**

Once a quantum computing hardware has entered the Resilient Level, it is important to also be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale, as the requirements to reach Scale include achieving upwards of 1,000 logical qubits with logical error rates better than 10^-12, a mega-rQOPS, and more.

Progress toward Scale may be measured along four axes: universality, scalability, fidelity, and composability. We offer the following ideas to the community on how to measure progress across these four axes, so that we as a community can benchmark progress in the resilient level toward utility-scale quantum computation:

- **Universality:** Universality typically splits into two components: Clifford group gates and non-Clifford group gates. Does one have a set of high-fidelity Clifford-complete logical operations? Does one have a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations. Of course, different hardware may employ different strategies.
- **Scalability:** At its core, the resource requirement for advantage must be reasonable (i.e., a small fraction of Earth's resources or a person's lifetime). More technically, the quantum resource overhead required should scale polynomially with the target logical error rate of any quantum algorithm. Note also that some systems may achieve very high fidelity but have limited numbers of physical qubits, so that improving the error correction code in the most obvious way (increasing its distance) may be difficult.
- **Fidelity:** Logical error rates of all operations improve with code size (sub-threshold). More strictly, one would like to see a logical error rate that is better than the physical error rate (sub-pseudothreshold). Progress on this axis can be measured with Quantum Characterization, Verification, and Validation (QCVV) performed at the logical level, or with other operational tasks such as Bell inequality violations and self-testing protocols.
- **Composability:** Composable gadgets for all logical operations.

**Criteria to advance from Level 2 to Level 3, a Quantum Supercomputer**

The exit from the resilient level of logical computation, and the achievement of the world's first quantum supercomputer, will be marked by large-depth computations on high-fidelity circuits involving upwards of hundreds of logical qubits; for example, a logical circuit on ~100+ logical qubits with a universal set of composable logical operations hitting a fidelity of ~10^-8 or better.
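Returning to the fidelity axis and the Level 2 entrance criterion that logical error rates beat physical ones, the basic scaling can be illustrated with the simplest possible code. The sketch below uses a distance-d repetition code under independent bit-flip noise; this is a toy of our own choosing for illustrating sub-threshold behavior, not a model of any real hardware (practical codes such as the surface code must handle phase flips as well).

```python
from math import comb

def repetition_logical_rate(d: int, p: float) -> float:
    """Logical error rate of a distance-d repetition code under independent
    bit-flip noise with physical error rate p: majority vote fails when
    more than half of the d bits flip."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# Below the (pseudo)threshold, the logical rate beats p and keeps
# improving as the distance d grows -- the hallmark of Level 2.
p = 0.01
for d in (3, 5, 7):
    print(d, repetition_logical_rate(d, p))
```

For p = 0.01 the distance-3 logical rate is already roughly 30x below the physical rate, and each increase of the distance by 2 buys roughly another factor of p/p_th; above threshold the same formula shows encoding making things worse.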
Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits with a logical error rate of 10^-12 and a mega-rQOPS. Performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS).

**Conclusion**

It's no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on a path to ultimately achieving practical quantum advantage. If you have thoughts on these criteria for a logical qubit, or on how to measure progress, we'd love to hear from you.


The post Microsoft and Photonic join forces on the path to quantum at scale appeared first on Microsoft Azure Quantum Blog.

By combining Photonic’s novel spin-photon architecture that natively supports quantum communication over standard telecom wavelengths with the global scale and state-of-the-art infrastructure of Azure, we will work together to integrate quantum networking capabilities into everyday operating environments. Together, we aim to deliver new technologies that will enable reliable quantum communication over long distances and accelerate scientific research and development with quantum computing devices to be integrated into Azure Quantum Elements.

"We are thrilled about joining forces with Photonic in improving the world through quantum technologies. There is an opportunity to ignite new capabilities across the quantum ecosystem extending beyond computing, such as networking and sensing, and unlocking applications and scientific discovery at scale across chemistry, materials science, metrology, communications, and many other fields. The capabilities we aim to deliver with Photonic can enable this vision and bring about quantum's impact far more quickly than otherwise possible."

Jason Zander, Executive Vice President of Strategic Missions and Technologies, Microsoft.

Realizing this vision requires a fundamental capability: entanglement distribution over long distances. Photonic’s unique architecture is based on highly connected silicon spin qubits with a spin-photon interface. By using a qubit with a photon interface, this novel approach communicates using ultralow-loss standard telecom fibers and wavelengths. When paired with the Microsoft global infrastructure, platforms, and scale of the Azure cloud, this technology will integrate new quantum networking capabilities into everyday operating environments.

Together, Microsoft and Photonic will address three stages of quantum networking.

- **Stage 1:** At the physical layer, we will aim to deliver entanglement between two separate quantum devices via photons through telecom fiber.
- **Stage 2:** To enable the link layer, we will aim to deliver a never-before-demonstrated quantum repeater that can capture, entangle, and hold quantum information reliably for a short time.
- **Stage 3:** Finally, at the network layer, we will focus on delivering from our co-innovation collaboration a reliable quantum repeater, one that is fault-tolerant and operational with our Azure cloud. With this technology, we can overcome limitations on distance in the network and enable the creation of a full-scale, global quantum internet.

"It will take a global ecosystem to unlock the full promise of quantum computing. No company or country can do it alone. That's why we're incredibly excited to be partnering with Microsoft to bring forth these new quantum capabilities. Their extensive global infrastructure, proven platforms, and the remarkable scale of the Azure cloud make them the ideal partner to unleash the transformative potential of quantum computing and accelerate innovation across the quantum computing ecosystem."

Dr. Stephanie Simmons, founder and Chief Quantum Officer of Photonic, and the Co-Chair of Canada's National Quantum Strategy Advisory Board.

It is only through global collaboration and co-innovation that we will be able to empower people to unlock solutions to the biggest challenges facing our industries and our world. Just like the cloud democratized access to supercomputers, once available only to governments, research universities, and the most resourced corporations, we are on a mission to engineer a fault-tolerant quantum supercomputing ecosystem at scale on Azure. We announced last June our roadmap to a Level 3 quantum supercomputer, along with peer-reviewed research demonstrating that we've achieved our first milestone.

Scientific discovery is crucial to our global future, and we want to empower scientists today with the best available offerings in the ecosystem, which is why, as part of our co-innovation collaboration, we plan to integrate Photonic’s unique quantum hardware into our Azure Quantum Elements offering as it becomes available. Our collaboration with Photonic seeks to enable scientific exploration at Level 1, foundational quantum computing, with a firm commitment to reach higher levels of resilience and scale on the path to quantum supercomputing in the future.

With Azure Quantum Elements, your quantum solutions will be completely integrated with high-value advancements in high-performance computing (HPC) and AI so you can transform your research and development processes today with the certainty that you will be ready to adopt quantum supercomputing at scale seamlessly in the future. You can sign up for our Private Preview of Azure Quantum Elements now.

To learn more about how Microsoft and Photonic will be working together to advance the next stages of quantum networking and empower the quantum ecosystem with new capabilities, register for the January episode of the Quantum Innovator Series.

Photonic is building a scalable, fault-tolerant and unified quantum computing and networking platform, uniquely based on proven spin qubits in silicon. Photonic's platform offers a native telecom networking interface and the manufacturability of silicon. Headquartered in Vancouver, Canada, Photonic also has offices in the United States and the United Kingdom. To learn more about the company, visit their website.


The post Quantum networking: A roadmap to a quantum internet appeared first on Microsoft Azure Quantum Blog.

As with quantum computers, quantum networks are not meant to replace their classical counterparts. In fact, classical networking will remain the foundation of this technology. Quantum networking will extend existing networks to enable the exchange of quantum information, whether between quantum computers or classical endpoints. In turn, this means that a quantum network has the potential to unlock new capabilities by connecting remote quantum computers, solving larger-scale problems distributed on quantum clusters, and enabling precision metrology through entangled sensor networks.

At its core, quantum communication concerns the sending and receiving of quantum information. Whereas today's conventional communication systems are based on classical physics, quantum communication employs the principles of quantum mechanics. The key to quantum networking is the sharing of quantum entanglement. For instance, to transmit (or “teleport") a qubit state in a quantum network, both the sender and receiver first share a generic resource: two entangled qubits, each party getting one of the two. When the sender is ready to transmit a particular qubit state, they entangle it with their half of the entangled pair and measure both qubits. This produces two bits, which are sent to the receiver over a classical network; the receiver uses these and the other half of the entangled pair to reconstruct the original state.
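The protocol just described can be checked end-to-end in a small state-vector simulation. The sketch below is plain Python with toy helpers of our own (not any particular quantum SDK): it teleports an arbitrary qubit state from qubit 0 to qubit 2 via a Bell pair and verifies that, after the two classically communicated correction bits are applied, the receiver holds the original amplitudes for every measurement outcome.

```python
import math
import random

def apply_1q(state, gate, q, n=3):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    m = 1 << (n - 1 - q)
    out = state[:]
    for i in range(len(state)):
        if not i & m:
            a, b = state[i], state[i | m]
            out[i] = gate[0][0] * a + gate[0][1] * b
            out[i | m] = gate[1][0] * a + gate[1][1] * b
    return out

def apply_cnot(state, control, target, n=3):
    """Flip `target` amplitudes wherever the `control` bit is 1."""
    mc, mt = 1 << (n - 1 - control), 1 << (n - 1 - target)
    out = state[:]
    for i in range(len(state)):
        if i & mc and not i & mt:
            out[i], out[i | mt] = state[i | mt], state[i]
    return out

def measure(state, q, rng, n=3):
    """Projectively measure qubit q; collapse and renormalize."""
    m = 1 << (n - 1 - q)
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if i & m)
    bit = 1 if rng.random() < p1 else 0
    norm = math.sqrt(p1 if bit else 1 - p1)
    out = [a / norm if bool(i & m) == bool(bit) else 0.0
           for i, a in enumerate(state)]
    return bit, out

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def teleport(alpha, beta, rng):
    """Teleport alpha|0> + beta|1> from qubit 0 to qubit 2 via a Bell pair."""
    state = [0j] * 8
    state[0b000] = alpha * s  # |psi> on q0, (|00>+|11>)/sqrt(2) on q1,q2
    state[0b011] = alpha * s
    state[0b100] = beta * s
    state[0b111] = beta * s
    state = apply_cnot(state, 0, 1)     # sender entangles |psi> with her half
    state = apply_1q(state, H, 0)
    m0, state = measure(state, 0, rng)  # two classical bits are produced...
    m1, state = measure(state, 1, rng)  # ...and sent to the receiver
    if m1:
        state = apply_1q(state, X, 2)   # receiver's conditional corrections
    if m0:
        state = apply_1q(state, Z, 2)
    base = (m0 << 2) | (m1 << 1)
    return (m0, m1), (state[base], state[base | 1])

bits, (a, b) = teleport(0.6, 0.8, random.Random(7))
print(bits, a, b)  # qubit 2 carries 0.6|0> + 0.8|1>, whatever the outcome
```

Note that the receiver's state is correct only after the corrections conditioned on the two classical bits, which is why teleportation cannot transmit information faster than the classical channel.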

With this in mind, we are looking for an intentional approach to defining quantum network challenges that captures interoperability across each layer of a "quantum networking stack." I am currently thinking about this as evolving through three stages.

**Stage 1:** Any network is built on top of point-to-point connections. I expect that the initial stage of quantum network development will be defined by technology that enables a quantum analog of the Physical layer of the networking stack, where entanglement can be established between two separate quantum devices.

**Stage 2:** As there are limitations to scaling point-to-point connections, I view the next stage of quantum networking as being defined by technology that enables the analog of a Link layer. At this stage, a quantum device can support and manage connections with many sites, delivering entanglement to any two as required.

**Stage 3:** The final stage of development should be characterized by technology that enables a Network layer for reliable long-distance quantum communication through a complex network, which relies on resilient quantum hardware at the sites.

I recognize that the mapping of the technological stages to networking layers is not perfect. Notably, a critical device to overcome distance limitations in a quantum network will be the quantum repeater. Such a device will perform entanglement swapping to reliably extend the distance between which two devices can become entangled, and so belongs to Stage 3 technologies. Yet in the networking stack, it is part of the Physical layer. Nonetheless, I feel that the driving factor in the future development of quantum networks will revolve around network connectivity, enabled by continuously improving quantum hardware.

I imagine that "the quantum internet" can mean very different things to different people. Perhaps it is best then to discuss *a* quantum internet, which simply refers to a large system of distributed quantum computers interconnected with quantum links. This quantum internet is a separate, but co-existent, network alongside a classical network, which, in fact, might be the internet.

Today, there are several approaches for establishing entanglement between nearby noisy quantum machines (NISQ devices) in labs, so currently we are in the **first stage** of developing a quantum internet. However, to scale to truly large networks, we believe it's important to build upon current technology and use photons at telecom wavelengths.

It's impossible to scale a network where each pair of sites must communicate through a point-to-point connection. Thus, the **second stage** in this roadmap is to develop quantum devices that can distribute entanglement on demand to multiple quantum endpoints. For instance, a "quantum hub" would have as its sole purpose distributing entanglement to any two neighboring sites, thereby relieving them of the need to have point-to-point connections with each other. Such a device could then enable a quantum local-area NISQ network.

One might view such a quantum hub as a NISQ repeater, as from the endpoint's perspective their communication has made one "hop" through the network. However, without resilient quantum hardware one cannot expect useful entanglement to survive more than a handful of such hops, restricting the network to a local area.

In this language, a quantum internet may be considered as a wide-area quantum network, which requires establishing entanglement between distant endpoints through multiple hops through the network. In the **third stage** of quantum networking, we can accomplish these hops through a method called "entanglement swapping" and ensure reliability by employing methods such as entanglement distillation.
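A back-of-the-envelope model shows why unassisted hops degrade entanglement so quickly, and why swapping alone is not enough. For Werner-state links, an ideal entanglement swap multiplies the links' Werner parameters, so fidelity decays geometrically with hop count. The numbers below are a toy sketch under our own simplifying assumptions (identical links, perfect Bell measurements), not a model of real repeater hardware.

```python
def bell_fidelity(w: float) -> float:
    """Fidelity to a Bell state of a Werner state with parameter w
    (w = 1 is a perfect Bell pair; w = 0 is the maximally mixed state)."""
    return (3 * w + 1) / 4

def after_swaps(w_link: float, hops: int) -> float:
    """Werner parameter after chaining `hops` identical links with ideal
    entanglement swapping: the parameters simply multiply."""
    return w_link ** hops

# Links that are individually quite good become nearly useless after a
# handful of hops, motivating distillation and resilient repeaters.
for hops in (1, 2, 4, 8):
    print(hops, round(bell_fidelity(after_swaps(0.9, hops)), 3))
```

With 90% links, fidelity drops from about 0.93 after one hop to under 0.6 after eight, approaching the 0.25 of a fully mixed state; entanglement distillation trades several such noisy pairs for fewer, higher-fidelity ones to arrest this decay.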

The envisioned stages of quantum networking are from the perspective of transmitting quantum information between remote quantum devices. However, these stages may also be appropriate for quantum networks between classical sites. This is the case for quantum key distribution (QKD), where the only requirement at the endpoints is to create or detect photons. Current QKD hardware, based on the "BB84" protocol, may be considered in **stage one** as it relies on a point-to-point connection between two QKD devices. In the QKD protocol "E91", a central device distributes entangled pairs to the end-users, and so a QKD system that uses this protocol could be considered as **stage two**. Device-independent QKD additionally performs self-testing to ensure the correct behavior of the system; while an imperfect analogy, this could be considered akin to reliability and so form **stage three** of development.
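For the stage-one picture, the classical bookkeeping of BB84 is easy to sketch. The toy below (no eavesdropper, no channel noise, function and variable names of our own invention) shows the sifting step being relied on: only the roughly half of positions where sender and receiver happened to pick the same basis yield usable key bits.

```python
import random

def bb84_sift(n: int, rng: random.Random):
    """Toy BB84 sifting: Alice encodes random bits in random bases, Bob
    measures in random bases, and the two publicly compare bases (never
    bits), keeping only the positions where the bases matched."""
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases = [rng.randint(0, 1) for _ in range(n)]
    # Matching basis => Bob's outcome equals Alice's bit (ideal channel);
    # mismatched basis => his outcome is random, so the position is dropped.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(1000, random.Random(42))
print(len(key))  # roughly n/2 sifted key bits
```

An eavesdropper who measures in the wrong basis disturbs the qubits, so in the full protocol the parties sacrifice a random sample of sifted bits to estimate the error rate before distilling a final key.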

Today, QKD is considered part of the quantum-safe effort to provide security systems that are not vulnerable to quantum cryptanalysis. Although it does provide a different approach to some cryptographic tasks, it has fundamental technical limitations and therefore cannot be viewed as a complete solution. At Microsoft, our Quantum Safe migration effort is focused on post-quantum cryptographic algorithms, as recommended by cybersecurity agencies globally. Read more about our efforts in the space.

As per the title of this blog, I have focused on quantum networking from the view of creating a quantum internet. Having separated endpoints is foundational for many quantum communication applications. For example, the security of some quantum protocols for anonymous voting relies on the voters being separated. In quantum metrology one uses the phenomenon that measuring half of an entangled system instantaneously affects the other half, regardless of the distance between them, to enable precise timing and position verification. In distributed quantum computing, blind computing protocols allow one party to delegate the computation of a quantum algorithm to another without revealing the input, output, or even the algorithm that was run.

Nonetheless, one should not think the only value of quantum networking is linking distant quantum computers. Modern supercomputers are built from many networked computing nodes that can operate as a single system. Perhaps future quantum computers will follow a similar design; the stages of development above would apply equally to switching and routing of quantum information in such a quantum cluster.

There is still lots of work ahead, and as an industry we must continue to separate signal from noise when evaluating technological progress. However, as I continue to engage with both customers and our Azure Quantum team, my excitement for the possibilities ahead of us only grows. I believe that the collective genius and input from the community are important for refining the framing of quantum networking. We invite your comments and perspectives so that we can make progress together toward its future. For more information, you can visit the following resources:

- You can visit our latest blog about the Quantum Computing Implementation Levels and recent achievements in this space.
- You can learn more about how we're accelerating scientific discovery with supercomputing capabilities, AI, and quantum computing with Azure Quantum Elements.
- You can discover more about our comprehensive approach to quantum safety.


The post Azure Quantum learning resources enable getting ready for a quantum supercomputer appeared first on Microsoft Azure Quantum Blog.

We recently shared our roadmap to a quantum supercomputer, and announced that we've achieved the first milestone on that roadmap: creating and controlling Majoranas. With this breakthrough, we demonstrated the physics necessary to create a new type of qubit that is small, fast, and digitally controllable, all of which are required to advance to a fault-tolerant, scaled machine and, critically, to unlock the path to a quantum supercomputer.

Reaching practical quantum advantage will require progressing across three quantum computing implementation levels. Today, all quantum computers are at the first level, Foundational, with machines made of noisy physical qubits (referred to as "NISQ" devices). As quantum computers progress, we'll move to the second level, Resilient, with machines made of 10s to 100s of reliable qubits (called "logical" qubits, each consisting of many physical qubits), and ultimately to the third level, Scale, with programmable quantum supercomputers capable of truly demonstrating useful quantum advantage. Our recent physics breakthrough is the first step towards advancing to the next level.

Understanding what it takes to progress through these levels is crucial not just for measuring industry progress, but also for developing a robust strategy to build a quantum-ready community. After all, it will not be due to scientific and engineering innovations alone that we will be able to achieve scale; ultimately it will be thanks to the many people globally who make it happen. The road to scale will be galvanized by more diverse minds coming together around the table to accelerate graduating from one quantum computing implementation level to the next.

In pursuit of empowering more people with quantum knowledge, at IEEE Quantum Week and Quantum World Congress, **we're excited to announce the availability of new learning resources, including the Azure Quantum katas**: free, AI-assisted interactive tutorials to accelerate quantum computing learning and exploration. These resources build on the tools and platforms we've been developing for years in the Azure Quantum team, and enable learning not only for foundational quantum hardware available today, but also for the scaled quantum supercomputers of tomorrow.

So, what are Azure Quantum katas and why try them?

Several years ago, I taught a quantum algorithms and programming course at the University of Washington with Mariia Mykhailova, Principal Software Engineer at Microsoft Quantum. We were eager to introduce students to quantum computing and empower them with the knowledge of how to write quantum programs. Students learned how Q# programs could express complex quantum algorithm designs and were asked to explore quantum algorithms with quantum advantage and write their own programs that might run on fault-tolerant quantum supercomputers. We wanted students to really understand how to programmatically express quantum algorithms at scale, and that as an industry we'd have to move beyond NISQ devices to truly unlock the power of quantum computing.

But learning to program a quantum computer requires developing quantum fitnessstarting small, gaining strength in the concepts, and eventually commanding the techniques. There's also value in having a coach right alongside. And so, with this in mind, we built the course curriculum around an open-source project of exercises, called katas, which we released the year before and expanded to support the course. Students could solve the exercises, implement their solutions as Q# code and get immediate feedback, in turn allowing them to learn through practice, and subsequently develop their own more-complex quantum programs. Excitingly, some of those students liked it so much they joined us for internships, and one became a member of our quantum team. Several students also contributed additional katas to the project. Our collaboration with the University of Washington continues and through mentored projects, other students went on to develop Q# programs to understand just how many resources a quantum algorithm may need.

Witnessing the impact of katas firsthand led us to ask how we could bring these learning tools to a larger audience. This is why we're excited to bring these exercises to even more people globally, directly in the browser, to start on or continue their quantum learning path.

The Azure Quantum "katas" are free, self-paced programming exercises that teach the elements of quantum computing and the Q# programming language (the Japanese word for "form", a "kata" is a pattern for practicing and learning new skills). Each kata begins by explaining theory and concepts related to a quantum computing topic. These are followed by short, interactive coding exercises to help test your knowledge. The exercises are fully contained within the browser, no Azure subscription is required. These tutorials can help expand your knowledge of quantum computing and programming, starting with fundamentals such as qubit manipulation, and progressing to more advanced topics such as quantum algorithm development. Perhaps best of all, the new tutorials are integrated with Copilot in Azure Quantum, a natural language chat interface to help you learn quantum concepts and programming faster than ever before.

These kata exercises build on a continuum of tools already within Azure Quantum to empower people across all levels of expertise. For developers already familiar with quantum coding, Azure Quantum's Resource Estimator is another tool that allows you to create and refine quantum solutions to run on future, scaled quantum machines by modelling how many qubits will be needed to run an application, how long it will take to run, and which qubit technologies will be better suited to solving a specific problem.
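As a rough illustration of the arithmetic that such resource estimation automates, consider a common surface-code heuristic (an assumption we adopt here for illustration, not the actual model inside Azure Quantum's Resource Estimator): pick the smallest code distance whose projected logical error rate meets the algorithm's target, then count roughly 2d^2 physical qubits per logical qubit.

```python
def required_distance(p_phys, p_target, p_th=0.01, prefactor=0.1):
    """Smallest odd code distance d with
    prefactor * (p_phys / p_th) ** ((d + 1) / 2) <= p_target,
    per a common sub-threshold scaling ansatz (illustrative assumption)."""
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(n_logical, p_phys, p_target):
    """Rough total physical-qubit count, assuming ~2*d^2 physical qubits
    per surface-code logical qubit (data plus measurement qubits)."""
    d = required_distance(p_phys, p_target)
    return n_logical * 2 * d * d, d

total, d = physical_qubits(100, 1e-3, 1e-12)
print(f"distance={d}, physical qubits={total}")
```

Even this crude sketch shows why qubit counts balloon from hundreds of logical qubits to hundreds of thousands of physical ones, and why the choice of qubit technology (which sets p_phys and the clock speed) matters so much to the estimate.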

- Whether you're starting your own learning journey, exploring quantum hardware, or developing quantum algorithms for the future, Azure Quantum offers a platform for your quantum exploration and innovation. You can also read the peer-reviewed research demonstrating that we've achieved the first milestone of our quantum roadmap.
- For enterprises interested in accelerating scientific discovery today, you can learn more about the recently announced Azure Quantum Elements, Microsoft's system for computational chemistry and materials science combining the latest breakthroughs in HPC, AI, and quantum computing.

We are excited to connect with you during IEEE Quantum Week 2023, to answer your questions and explore the possibilities for advancing your quantum research and development with Azure Quantum.

Please join us live or online at the following panels, workshops and tutorials:

**Monday, September 18, 2023:** 10:00 AM to 4:30 PM PST. **Workshop:** Developing Responsible and Ethical Quantum Computing for Societal Benefit, including Dr. Nathan Baker, Cottonwood Room.

**Monday, September 18, 2023:** 10:00 AM to 4:30 PM PST. **Workshop:** Progress and Challenges in Quantum Intermediate Representations (QIR), including Stefan Wernli, Larch Room.

**Monday, September 18, 2023:** 10:00 AM to 10:45 AM PST. **Talk:** Teaching quantum computing using Microsoft Quantum Development Kit and Azure Quantum, with Mariia Mykhailova, Regency C.

**Thursday, September 21, 2023:** 8:00 AM to 9:30 AM PST. **Keynote:** Accelerating Scientific Discovery with Quantum Supercomputing, with Krysta Svore, Grand Ballroom.

**Thursday, September 21, 2023:** 10:00 AM to 4:30 PM PST. **Workshop:** Quantum Resource Estimation, including Mariia Mykhailova and Wim van Dam, Auditorium.

**Thursday, September 21, 2023:** 3:00 PM to 4:00 PM PST. **Panel:** Real Time Decoding in the Fault Tolerant Era, including Nicolas Delfosse, Cedar A.

**Friday, September 22, 2023:** 10:00 AM to 11:30 PM PST. **Panel:** Fostering DEIA Culture and Environment in Industry, including Krysta Svore, Regency B.

If you are interested in connecting with us during Quantum World Congress 2023, join us live or on-demand online for our session:

**Wednesday, September 27, 2023:** 1:30 PM EST

**Session:** How our collective genius can unlock growth and progress with Quantum, with Dr. Krysta Svore in the Main Theatre at Capital One Hall in Tysons, VA.


The post Announcing season 2 of the Microsoft Quantum Innovator Series appeared first on Microsoft Azure Quantum Blog.

Get the inside, first-hand account of Microsoft's strategy for scaled quantum computing. In this series, you will hear directly from Microsoft Azure Quantum scientists and leaders about the path to quantum at scale and how you can get involved today.

- Be among the first to learn about recent advancements.
- Get inspired to drive quantum innovation in your organization.
- Discover how quantum will transform various industries in the coming years.

It will take the world's collective genius to realize the full promise of quantum computing. With increasing private, government, and academic investment in quantum research, now is the perfect time for innovators and developers to get ahead of the curve and cultivate their quantum computing knowledge and skills. Join this webinar to learn how Microsoft can help you become quantum-ready with world-class programming tutorials and a broad variety of learning materials and tools available through Azure Quantum.

**Dr. Wim van Dam**, **Principal Researcher**, **Advanced Quantum Development, Microsoft**

Wim van Dam is a Principal Researcher in the Advanced Quantum Development group at Microsoft. His research focuses on quantum computation and quantum communication and his main interest is the development of new quantum algorithms that deliver a significant acceleration when compared with traditional, classical algorithms. Before joining Microsoft, Dr. van Dam was Head of Quantum Algorithms at QC Ware and a professor in the Departments of Computer Science and Physics at University of California, Santa Barbara.

**Mariia Mykhailova, Principal Quantum Software Engineer, Advanced Quantum Development, Microsoft**

Mariia Mykhailova is a Principal Software Engineer in the Advanced Quantum Development group at Microsoft. She works on developing software for fault-tolerant quantum computation. Mariia is also a part-time lecturer at Northeastern University, where she has taught Introduction to Quantum Computing since 2020, and the author of the O'Reilly book "Q# Pocket Guide".

Catalyzed by a new generation of AI, the world’s most advanced AI models are powering breakthroughs in chemistry and helping to usher in a new era of scientific discovery that will transform society. Even bigger breakthroughs will come with quantum supercomputing. Join this webinar to learn how Microsoft is accelerating chemistry and materials science with Azure Quantum Elements and how industry innovators are transforming their research and development with quantum computing today.

**Dr. Nathan Baker**, **Head of Partnerships for Chemistry and Materials**, **Azure Quantum, Microsoft**

Nathan Baker is the Head of Partnerships for Chemistry and Materials, Azure Quantum at Microsoft. Previously, Nathan was a Laboratory Fellow in the Physical and Computational Sciences Directorate at Pacific Northwest National Laboratory (PNNL) and a faculty member at Washington University in St. Louis with roles that included Associate Professor (tenured) of Biochemistry and Molecular Biophysics and Director of the Biophysics PhD program. His research interests include the development of new algorithms in applied mathematics and data science to support applications in chemistry, biology, and other domains. Dr. Baker is a member of the Washington State Academy of Sciences, Fellow of the American Association for the Advancement of Science (AAAS), and a former Alfred P. Sloan Research Fellow.

Season one of the Quantum Innovator series kicked off with our first event, "Have you started developing for practical quantum advantage?" with Dr. Krysta Svore, distinguished engineer and VP of Quantum Software, Microsoft. During this webinar, you can:

- Learn what's required for scalable quantum computing and what can be done now to get ready for it.
- See the new Azure Quantum Resource Estimator, the first end-to-end toolset that provides estimates for the number of logical and physical qubits, as well as the runtime, required to execute quantum applications on post-NISQ, fault-tolerant quantum computers.
- Understand the number of qubits required for a quantum solution and the differences between qubit technologies.
- Explore how Microsoft is empowering innovators today by co-designing tools to optimize quantum solutions and to run small instances of algorithms on today's diverse and maturing quantum systems and prepare for tomorrow's scaled quantum computers.
- Participate in a live Q&A chat with the Azure Quantum team and be one of the first to hear about recent advancements.

**Krysta Svore | Distinguished Engineer and Vice President of Advanced Quantum Development, Quantum at Microsoft**

Dr. Svore has published over 70 refereed articles and filed over 30 patents. She is a Fellow of the American Association for the Advancement of Science. She won the 2010 Yahoo! Learning to Rank Challenge with a team of colleagues, received an ACM Best of 2013 Notable Article award, and was recognized as one of Business Insider's Most Powerful Female Engineers of 2018. A Kavli Fellow of the National Academy of Sciences, she also serves as an advisor to the National Quantum Initiative, the Advanced Scientific Computing Advisory Committee of the Department of Energy, and the ISAT Committee of DARPA, in addition to numerous other quantum centers and initiatives globally.

In our second episode from the first season, we focused on why Microsoft decided to design its quantum machine with topological qubits, an approach that is both more challenging and more promising than others, and what's next for Microsoft's hardware ambitions. This episode shares more about Microsoft's quantum hardware journey, specifically touching on Microsoft's physics breakthrough outlined in Dr. Nayak's paper, and will also focus on the physics behind the topological qubit. Join our speaker Chetan Nayak, Technical Fellow and VP of Quantum Hardware and Systems Engineering, Microsoft, to:

- Learn about topological phases in physics and how they are applied to quantum computing.
- Explore how topological properties create a level of protection that can, in principle, help a qubit retain quantum information despite what's happening in the environment around it.
- Understand the role of the topological gap and the recently discovered Majorana zero modes, and how together they impact a topological qubit's stability, size, and speed.
- Learn how to examine the raw data and analysis from Microsoft's hardware research on Azure Quantum.
- Use interactive Jupyter notebooks and explore what's next in engineering the world's first topological qubit.
- Participate in a live Q&A chat with the Azure Quantum team and be one of the first to hear about recent advancements.

**Chetan Nayak | Technical Fellow and VP of Quantum Hardware and Systems Engineering, Microsoft**

Dr. Nayak is a pioneer of the study of quantum matter, including topological and non-equilibrium phases. He holds a bachelor's degree from Harvard and a PhD in physics from Princeton. He was an assistant, associate, and full professor at UCLA, a visiting professor at Nihon University in Tokyo, and is a professor of physics at UCSB. Chetan was a trustee of the Aspen Center for Physics and an editor of Annals of Physics. He is a Fellow of the American Physical Society and a recipient of an Alfred P. Sloan Foundation Fellowship and a National Science Foundation CAREER award. He has published more than 150 refereed articles with more than 20,000 citations and has been granted more than 20 patents.

Our third episode from season one featured Matthias Troyer, Microsoft Technical Fellow, discussing what kinds of problems we can solve today with quantum simulation. Learn how years of Microsoft research reveal that the discovery of new chemicals, materials, and drugs that will ultimately help solve the world's most challenging problems will greatly benefit from quantum computing. Dr. Troyer explains what is happening today and how chemical and materials science innovators can get started on their quantum journey:

- Learn how real progress can be made today by combining high performance computing (HPC), state-of-the-art machine learning, and quantum knowledge to fundamentally transform our ability to model and predict the outcome of chemical processes.
- Get real-world insights from co-innovation projects happening right now with leading chemical and materials science companies around the world.
- Find out how researchers in chemical and materials fields can get started on their quantum journey today.
- Participate in a live Q&A chat with the Azure Quantum team and be one of the first to hear about recent advancements.

**Matthias Troyer | Technical Fellow and Corporate Vice President, Microsoft**

Matthias Troyer is a Technical Fellow and Corporate Vice President at Microsoft, working on the system architecture of quantum computers and their applications. After receiving his PhD in 1994 from ETH Zurich in Switzerland and spending time as a postdoc at the University of Tokyo, he was a professor of Computational Physics at ETH Zurich until joining Microsoft in 2017. Matthias is a Fellow of the American Physical Society and President of the Aspen Center for Physics. He is a recipient of the Hamburg Prize for Theoretical Physics and the Rahman Prize for Computational Physics of the American Physical Society "for pioneering numerical work in many seemingly intractable areas of quantum physics and for providing efficient sophisticated computer codes to the community."

- Learn more about Azure Quantum.
- Sign-up to learn more about the Azure Quantum Elements private preview.
- Visit the Azure Quantum Elements website.
- Check out our Microsoft Quantum Innovator Series webinars.


The post Accelerating materials discovery with AI and Azure Quantum Elements appeared first on Microsoft Azure Quantum Blog.

More than ever, scientists need new technologies to help solve many of the most pressing issues facing society, like reversing climate change, addressing food insecurity, and developing lifesaving therapeutics. Fundamentally, these problems are chemistry and materials science challenges, and some will require the transformational power of a scaled quantum computer. While we are on a path to engineer a quantum supercomputer, we are also making investments in high-performance computing (HPC) and AI to empower researchers to accelerate scientific discovery and make rapid progress toward impactful solutions for our most pressing problems today.

That is why we recently announced the private preview of Azure Quantum Elements, a comprehensive system to empower R&D teams in chemistry and materials science with scale, speed, and accuracy by integrating the latest breakthroughs in HPC, AI, and quantum computing. Researchers and product developers can screen candidates, study mechanisms, and design both molecules and materials through state-of-the-art computing capabilities and enterprise-grade services. Industry innovators, including **BASF**, **AkzoNobel**, **AspenTech**, **Johnson Matthey**, **SCGC**, and **1910Genetics**, have already adopted Azure Quantum Elements to transform their research and development.

In a recent post, we highlighted how we're scaling the applications of molecular dynamics (MD) simulations with HPC capabilities in Azure Quantum Elements. Such workloads play an important role in life sciences by simulating the structure and dynamics of proteins, the ligands bound to them, and their associated affinities. This structural exploration can accelerate the innovation of better pharmaceuticals by modeling drug molecules and their relevant protein binding sites.

In addition to applications in life sciences, MD simulations also play valuable roles in materials discovery by explaining relationships between material composition, structure, and dynamic properties. MD-calculated properties, such as thermal conductivity, ionic conductivity, and more, are often important filters in materials discovery pipelines. These MD-based filters can help researchers winnow a pool of materials candidates to a select few based on desired properties, which can then be tested in experimental settings.

With traditional HPC-based computational material discovery, density functional theory (DFT) is typically used as the engine for computing forces in MD simulations. DFT-based calculation workflows have allowed researchers to explore and evaluate thousands of materials candidates. However, these calculations come at a significant computational cost. A single static DFT calculation, for instance, can require several minutes of CPU time. Geometric optimization can demand tens to hundreds of such calculations, while MD simulations can require millions or more.
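To see how quickly these costs compound, consider a back-of-envelope tally of the figures above. The 3-minute static DFT call used here is an illustrative assumption (real costs vary widely with system size and settings):

```python
# Back-of-envelope cost of DFT-driven workflows, using rough figures
# consistent with the text. The 3-minute static DFT call is an assumed
# round number for illustration, not a measured benchmark.
single_dft_minutes = 3

# Geometric optimization: tens to hundreds of static calculations.
geom_opt_calls = 200
geom_opt_hours = geom_opt_calls * single_dft_minutes / 60

# An MD trajectory: millions of force evaluations.
md_steps = 1_000_000
md_cpu_years = md_steps * single_dft_minutes / 60 / 24 / 365

print(f"geometry optimization: ~{geom_opt_hours:.0f} CPU hours")
print(f"1M-step MD with DFT forces: ~{md_cpu_years:.1f} CPU years")
```

Even with generous parallelism, force evaluations at every MD time step dominate the budget, which is what motivates replacing the DFT force engine with a fast surrogate.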

To accelerate computational materials discovery processes, we combined HPC calculations with three new AI models relating material structure to energy, force, and stress; electronic band gap; as well as bulk and shear moduli mechanical properties. The models were trained on millions of materials simulation data points to bypass HPC calculations by quickly predicting materials properties. Those capabilities allow researchers to filter material candidates based on properties like stability, reactivity, ionic conductivity, and more. When used as a force field, the AI materials models provide a 1,500-fold speedup over DFT calculations for geometric optimization of small systems with less than 100 atoms^{1}. This speedup will be even greater for larger systems, due to the linear scaling of the AI model's execution time with system size and the much less favorable scaling of most DFT models. This result exemplifies the power of AI to perform thousands of calculations in the time required for a single HPC simulation.

To demonstrate these acceleration capabilities, we developed a pipeline of AI- and HPC-based screening calculations allowing us to analyze tens of millions of initial candidates and narrow them down to a small sample set that best suits a particular manufacturing application. By combining both AI and HPC methods, we achieved remarkable acceleration in certain computational steps.

The AI models used for this discovery process improve upon a graph neural network (GNN)-based universal interatomic potential, trained on a massive database of structural calculations performed by the Materials Project over the past decade^{2}. That original model achieved top accuracy in a benchmark for thermodynamic materials stability predictions with the lowest overall prediction mean absolute error^{3}, in turn emerging as a leader for AI-guided materials discovery.

To achieve these results, we started with approximately 30 million candidate materials, generated by replacing elements in known crystal structures with a sampling of elements across a subset of the periodic table, as shown in Figure 1. We then screened this pool of candidates with a workflow that combined our AI materials models with traditional HPC-based simulations.

The first phase of screening relied on fast AI model inference calls. The AI models were used to evaluate materials stability: this step narrowed our search space from about 30 million to approximately 500,000 candidates, avoiding materials that may decompose spontaneously. The AI models were also used to screen materials for important functional properties such as redox potential and electronic band gap, reducing the search space to about 800 candidates.

The second phase of screening relied on physics-based simulations accelerated with our AI models. The power of Azure HPC was used for DFT calculations to verify the properties predicted through fast AI screening in the first phase. Fast AI models have a non-zero error rate, so DFT validation re-computes the properties that the AI models predicted as a higher-accuracy filter. This verification step was followed by MD simulations to model structural fluctuations in the material. Next, we used AI-accelerated MD simulations to evaluate the dynamic properties of the materials, such as atomic diffusivity. These AI-accelerated simulations used fast AI model inference calls for forces at each MD time step, rather than the much slower traditional approach of DFT-based force calculations. This second phase of screening narrowed the field to approximately 150 candidates. From here, we assessed certain practical considerations, such as novelty, mechanical properties, and materials availability, to identify a final set of approximately 20 candidate materials worth pursuing in a lab.
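The funnel structure of such a pipeline, cheap filters first over the full pool, expensive checks last over the survivors, can be sketched in a few lines. Everything here is a hypothetical stand-in: the property names, cutoffs, and the synthetic candidate pool are not the actual models, data, or criteria used in this work.

```python
# Hypothetical sketch of a staged screening funnel: each stage prunes
# the pool, so later (more expensive) stages run on far fewer candidates.
def screen(candidates, stages):
    pool = candidates
    for name, keep in stages:
        pool = [c for c in pool if keep(c)]
        print(f"after {name}: {len(pool)} candidates remain")
    return pool

# Synthetic candidate pool with made-up property values.
candidates = [{"id": i,
               "stability": (i * 37) % 100,
               "band_gap": (i * 61) % 100}
              for i in range(10_000)]

stages = [
    ("AI stability filter", lambda c: c["stability"] > 50),
    ("AI band-gap filter",  lambda c: 20 < c["band_gap"] < 40),
    ("DFT re-validation",   lambda c: c["id"] % 3 == 0),  # stand-in check
]

finalists = screen(candidates, stages)
```

The design point is ordering: because each stage only sees the previous stage's survivors, the expensive physics-based validation runs on hundreds of candidates instead of millions.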

This case study highlights both the scale and speed of HPC plus AI solutions: we were able to screen 30 million candidates in approximately one week, demonstrating the research acceleration that Azure Quantum Elements provides. While Microsoft optimized this workflow for a specific manufacturing scenario, the materials AI models and associated HPC simulations have broad applications across diverse chemistry and materials science scenarios and demonstrate the overall feasibility of AI-accelerated materials discovery.

At Microsoft, we see great potential to accelerate chemistry and materials advances by integrating Azure's scaled HPC solutions with AI models tuned for scientific research. We also know that scaled quantum computing will deliver breakthrough accuracy in modeling the forces and energies of highly complex chemical systems, allowing insights into spaces that are currently intractable for classical computing. While we continue to achieve breakthrough milestones on the path to a quantum supercomputer, Azure Quantum Elements includes workflows and tools to prepare for a quantum future, providing solutions to determine which problems can be solved classically versus which require a quantum computer and estimate the number of qubits and runtimes required for various quantum chemistry calculations. Furthermore, customers can start experimenting with existing quantum hardware, and get priority access to the future quantum supercomputer from Microsoft once available.

We are excited to see how the power of the Azure cloud will help you. For more information, please visit the following resources:

- Sign up to learn more about the private preview of Azure Quantum Elements.
- Visit the Azure Quantum Elements website.
- Read our previous blog post about Unlocking the power of Azure for Molecular Dynamics.
- Check out our Microsoft Quantum Innovator Series webinars.

^{1. }Traditional approaches require approximately 78 CPU hours, or 4,680 CPU minutes, per structural relaxation. In this internal study, our AI models required a little more than 3 CPU minutes per structural relaxation, an over 1,500-fold speedup.

^{2. }A universal graph deep learning interatomic potential for the periodic table, Nature Computational Science, 2022.

^{3.}Matbench Discovery: Can machine learning identify stable crystals?, ICLR, 2023.


The post Microsoft achieves first milestone towards a quantum supercomputer appeared first on Microsoft Azure Quantum Blog.

In keeping with that goal, we are making three important announcements today.

- **Azure Quantum Elements** accelerates scientific discovery so that organizations can bring innovative products to market more quickly and responsibly. This system empowers researchers to make advances in chemistry and materials science with scale, speed, and accuracy by integrating the latest breakthroughs in high-performance computing (HPC), AI, and quantum computing. The private preview launches in a few weeks, and you can sign up today to learn more.
- **Copilot in Azure Quantum** helps scientists use natural language to reason through complex chemistry and materials science problems. With Copilot in Azure Quantum, a scientist can accomplish complex tasks like generating the underlying calculations and simulations, querying and visualizing data, and getting guided answers to complicated concepts. Copilot also helps people learn about quantum and write code for today's quantum computers. It's a fully integrated, browser-based experience, available to try for free, with a built-in code editor, quantum simulator, and seamless code compilation.
- **Roadmap to Microsoft's quantum supercomputer** is now published, along with peer-reviewed research demonstrating that we've achieved the first milestone.

The path to quantum supercomputing is not unlike the path to today's classical supercomputers. The pioneers of early computing machines had to advance the underlying technology to improve their performance before they could scale up to large architectures. That's what motivated the change from vacuum tubes to transistors and then to integrated circuits. Fundamental changes to the underlying technology will also precipitate the development of a quantum supercomputer.

As the industry progresses, quantum hardware will fall into one of three categories of Quantum Computing Implementation Levels:

*Level 1, Foundational: Quantum systems that run on noisy physical qubits, which include all of today's Noisy Intermediate Scale Quantum (NISQ) computers.*

Microsoft has brought these quantum machines, the world's best with the highest quantum volumes in the industry, to the cloud with Azure Quantum, including IonQ, Pasqal, Quantinuum, QCI, and Rigetti. These quantum computers are great for experimentation as an on-ramp to scaled quantum computing. At the Foundational Level, the industry measures progress by counting qubits and quantum volume.

*Level 2, Resilient: Quantum systems that operate on reliable logical qubits.*

Reaching the Resilient Level requires a transition from noisy physical qubits to reliable logical qubits. This is critical because noisy physical qubits cannot run scaled applications directly. The errors that inevitably occur will spoil the computation. Hence, they must be corrected. To do this adequately and preserve quantum information, hundreds to thousands of physical qubits will be combined into a logical qubit, which builds in redundancy. However, this only works if the physical qubits' error rates are below a threshold value; otherwise, attempts at error correction will be futile. Once this stability threshold is achieved, it is possible to make reliable logical qubits.

However, even logical qubits will eventually suffer from errors. The key is that they must remain error-free for the duration of the computation powering the application. The longer the logical qubit is stable, the more complex an application it can run. In order to make a logical qubit more stable (or, in other words, to reduce the logical error rate), we must either increase the number of physical qubits per logical qubit, make the physical qubits more stable, or both. Therefore, there is significant gain to be made from more stable physical qubits, as they enable more reliable logical qubits, which in turn can run increasingly sophisticated applications. That's why the performance of quantum systems at the Resilient Level will be measured by their reliability, as measured by logical qubit error rates.
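The trade-off between physical error rate and logical-qubit size can be made concrete with a standard textbook heuristic for code-based error correction. Note the formula and its constants below are generic illustrative assumptions, not figures from this post or from any specific hardware:

```python
# Illustrative only: a common heuristic for quantum error correction,
# p_L ~ A * (p / p_th) ** ((d + 1) / 2), where p is the physical error
# rate, p_th the code's threshold, and d the code distance (a larger d
# means more physical qubits per logical qubit). A = 0.1 and p_th = 1e-2
# are generic assumed values.
def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th), growing the code suppresses the logical
# error rate exponentially; above threshold, adding qubits makes it worse.
for d in (3, 7, 11):
    print(d, logical_error_rate(1e-3, d))
```

This is the quantitative reason more stable physical qubits pay off twice: they both lower p directly and shrink the code distance d needed to hit a target logical error rate.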

*Level 3, Scale: Quantum supercomputers that can solve impactful problems which even the most powerful classical supercomputers cannot.*

This level will be reached when it becomes possible to engineer a scaled, programmable quantum supercomputer that will be able to solve problems that are intractable on a classical computer. Such a machine can be scaled up to solve the most complex problems facing our society. As we look ahead, we need to define a good figure of merit that captures what a quantum supercomputer can do. This measure of a supercomputer's performance should help us understand how capable the system is of solving impactful problems. We offer such a figure of merit: **reliable Quantum Operations Per Second (rQOPS)**, which measures how many reliable operations can be executed in a second. A quantum supercomputer will need at least one million rQOPS.

The rQOPS metric counts operations that remain reliable for the duration of a practical quantum algorithm so that there is an assurance that it will run correctly. As we shall see below, this metric encapsulates the full system performance (as opposed to solely the physical qubit performance) and combines three key factors that are critical for scaling up to execute valuable quantum applications: scale, reliability, and speed.

rQOPS first becomes measurable at Level 2, but it becomes meaningful at Level 3. To solve valuable scientific problems, the first quantum supercomputer will need to deliver at least one million rQOPS, with an error rate of at most 10^{-12}, or only one error for every trillion operations. At one million rQOPS, a quantum supercomputer could simulate simple models of correlated materials, aiding in the creation of better superconductors, for example. In order to solve the most challenging commercial chemistry and materials science problems, a supercomputer will need to continue to scale to one billion rQOPS and beyond, with an error rate of at most 10^{-18}, or one error for every quintillion operations. At one billion rQOPS, chemistry and materials science research will be accelerated by modeling new configurations and interactions of molecules.

Our industry as a whole has yet to achieve this goal, which can only happen once we transition from the NISQ era to achieving a reliable qubit. While today's quantum computers are all performing at an rQOPS value of zero, this metric quantifies where tomorrow's quantum computers need to be to deliver value.

The rQOPS value is given by the number *Q* of logical qubits in the quantum system multiplied by the hardware's logical clock speed *f*:

*rQOPS = Q × f*

It is expressed with a corresponding logical error rate *p_{L}*, which indicates the maximum tolerable error rate of the operations on the logical qubits.

The rQOPS accounts for the three key factors of scale, speed, and reliability: scale, through the number of reliable qubits; speed, through the dependence on the clock speed; and reliability, through the encoding of physical qubits into logical qubits and the corresponding logical error rate *p_{L}*.
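The definition above is simple enough to check directly. The sample point below uses the targets stated in this post: one million rQOPS from at least 1,000 reliable logical qubits implies a logical clock speed on the order of 1 kHz.

```python
# rQOPS = Q * f, per the definition above: Q logical qubits times the
# logical clock speed f (operations per second per logical qubit).
def rqops(logical_qubits_Q, logical_clock_f_hz):
    return logical_qubits_Q * logical_clock_f_hz

# 1,000 logical qubits at a 1 kHz logical clock speed.
print(rqops(1_000, 1_000))
```

Because the metric is a product, the same rQOPS target can be met by trading qubit count against clock speed, which is exactly the trade-off the figures below illustrate.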

To facilitate calculating how many rQOPS an algorithm will require, we've updated the Azure Quantum Resource Estimator to output the rQOPS and *p_{L}* for the user's choice of quantum algorithm and quantum hardware architecture. This tool enables quantum innovators to develop and refine algorithms to run on tomorrow's scaled quantum computers by revealing the rQOPS and run time required to run applications on different hardware architectures.

In the plots shown below, we illustrate the requirements (numbers of physical qubits and physical clock speed) needed for one million rQOPS with *p_{L}* = 10^{-12}, and for one billion rQOPS with *p_{L}* = 10^{-18}.

**Figure 1**: Requirements to achieve 1M rQOPS, with a 10^{-12} logical error rate and at least 1,000 reliable logical qubits. The physical hardware trade-offs between clock speed and qubits are shown for devices with physical error rates of 1/1,000 and 1/1,000,000.

**Figure 2**: Requirements to achieve 1G rQOPS, with a 10^{-18} logical error rate. The physical hardware trade-offs between clock speed and qubits are shown for devices with physical error rates of 1/1,000 and 1/1,000,000.

A quantum supercomputer must be powered by reliable logical qubits, each of which is formed from many physical qubits. The more stable the physical qubit is, the easier it is to scale up because you need fewer of them. Over the years, Microsoft researchers have fabricated a variety of qubits used in many of today's NISQ computers, including spin, transmon, and gatemon qubits. However, we concluded that none of these qubits is perfectly suited to scale up.

That's why we set out to engineer a brand-new qubit with inherent stability at the hardware level. It has been an arduous development path in the near term because it required that we make a physics breakthrough that has eluded researchers for decades. Overcoming many challenges, we're thrilled to share that a peer-reviewed paper, published in Physical Review B, a journal of the American Physical Society, establishes that **Microsoft has achieved the first milestone towards creating a reliable and practical quantum supercomputer.**

In this paper we describe how we engineered a device in which we can controllably induce a topological phase of matter characterized by Majorana Zero Modes (MZMs).

The topological phase can enable highly stable qubits with small footprints, fast gate times, and digital control. However, disorder can destroy the topological phase and obscure its detection. Our paper reports on devices with low enough disorder to pass the topological gap protocol (TGP), thereby demonstrating this phase of matter and paving the way for a new stable qubit. The published version of the paper shows data from additional devices measured after initial presentations of this breakthrough. We have added extensive tests of the TGP with simulations that further validate it. Moreover, we have developed a new measurement of the disorder level in our devices, which demonstrates how we were able to accomplish this milestone and has seeded further improvements.

To learn more about this accomplishment, you can read the paper, analyze the data yourself in our interactive Jupyter notebooks, and watch this summary video.

1. **Create and control Majoranas:** Achieved.

2. **Hardware-protected qubit:** The hardware-protected qubit (historically referred to as a topological qubit) will have built-in error protection. This unique qubit will scale to support a reliable qubit, and will enable engineering of a quantum supercomputer because it will be:

- Small: Each of our hardware-protected qubits will be less than 10 microns on a side, so one million can fit in the area of the smart chip on a credit card, enabling a single-module machine of practical size.
- Fast: Each qubit operation will take less than one microsecond. This means problems can be solved in weeks rather than decades or centuries.
- Controllable: Our qubits will be controlled by digital voltage pulses to ensure that a machine with millions of them doesn't have an excessive error rate or require unattainable input/output bandwidth.

3. **High quality hardware-protected qubits:** Hardware-protected qubits that can be entangled and operated through braiding, reducing error rates with a series of quality advances.

4. **Multi-qubit system:** A variety of quantum algorithms can be executed when multiple qubits operate together as a programmable Quantum Processing Unit (QPU) in a full stack quantum machine.

5. **Resilient quantum system:** A quantum machine operating on reliable logical qubits that demonstrates higher quality operations than the underlying physical qubits. This breakthrough enables the first rQOPS.

6. **Quantum supercomputer:** A quantum system capable of solving impactful problems that even the most powerful classical supercomputers cannot, delivering at least one million rQOPS with an error rate of at most 10^{-12} (one in a trillion).

We will reach Level 2, Resilient, of the Quantum Computing Implementation Levels at our fifth milestone and will achieve Level 3, Scale, with the sixth.

Today marks an important moment on our path to engineering a quantum supercomputer and ultimately empowering scientists to solve many of the hardest problems facing our planet. To learn more about how we're accelerating scientific discovery with Azure Quantum, check out the virtual event with Satya Nadella, Microsoft Chairman and Chief Executive Officer, Jason Zander, Executive Vice President of Strategic Missions and Technologies, and Brad Smith, Vice Chair and President. To follow our journey and get the latest insider news on our hardware progress, register here.


The post Microsoft Quantum researchers make algorithmic advances to tackle intractable problems in physics and materials science appeared first on Microsoft Azure Quantum Blog.

In their paper, Complexity of Implementing Trotter Steps, the authors improve upon pre-existing algorithms that rely on the so-called product formula methods, which date back to the 1990s when the first quantum simulation algorithm was proposed. The underlying idea is quite straightforward: we can simulate a general Hamiltonian system by simulating its component terms one at a time. In most situations, this only leads to an approximate quantum simulation, but the overall accuracy can be made arbitrarily high by repeating such Trotter steps sufficiently frequently.
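The idea can be seen in a minimal single-qubit sketch (our own illustration, not code from the paper): simulate the Hamiltonian H = X + Z by alternating exact evolutions under its terms X and Z, and watch the approximation error fall as the number of Trotter steps grows.

```python
import math

# First-order product formula (Trotter) for H = X + Z on one qubit,
# using plain 2x2 complex matrices.
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def pauli_exp(P, theta):
    # exp(-i*theta*P) = cos(theta)*I - i*sin(theta)*P, since P^2 = I.
    return [[math.cos(theta) * I[i][j] - 1j * math.sin(theta) * P[i][j]
             for j in range(2)] for i in range(2)]

def trotter(t, n):
    # n repetitions of exp(-iX t/n) * exp(-iZ t/n).
    step = mat_mul(pauli_exp(X, t / n), pauli_exp(Z, t / n))
    U = I
    for _ in range(n):
        U = mat_mul(U, step)
    return U

def exact(t):
    # H = X + Z satisfies H^2 = 2*I, so
    # exp(-iHt) = cos(sqrt(2) t) I - i sin(sqrt(2) t)/sqrt(2) (X + Z).
    s = math.sqrt(2)
    return [[math.cos(s * t) * I[i][j]
             - 1j * math.sin(s * t) / s * (X[i][j] + Z[i][j])
             for j in range(2)] for i in range(2)]

def error(t, n):
    U, V = trotter(t, n), exact(t)
    return max(abs(U[i][j] - V[i][j]) for i in range(2) for j in range(2))

# The error of the first-order formula shrinks roughly as 1/n.
print(error(1.0, 10), error(1.0, 100))
```

The same repeat-to-refine structure carries over to many-body Hamiltonians, where the cost question discussed next is how expensive each Trotter step is.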

So, what are the resources needed to run this algorithm on a quantum computer? The algorithm repeats an elementary Trotter step multiple times, hence the total complexity is given by the number of repetitions multiplied by the cost per step, the latter of which is further determined by the number of terms in the Hamiltonian. Unfortunately, this is not very attractive for long-range quantum systems, as the number of terms involved can be too big to be practical. Consider, for instance, a system with all-to-all interactions. If the size of the system is N, then the number of terms is N^{2}, which also quantifies the asymptotic cost of Trotter steps. As a result, we are basically paying a quadratically higher cost to solve a simulation problem of just linear size. The question to ask, then, is: is there a better implementation whose cost does not scale with the total number of Hamiltonian terms, overcoming this complexity barrier?

The answer to this question, as the paper shows, is twofold. If terms in the Hamiltonian are combined with arbitrary coefficients, then this high degree of freedom must be captured by any accurate quantum simulation, implying a cost proportional to the total term number. However, when the target Hamiltonian is structured with a lower degree of freedom, the paper provides a host of recursive techniques to lower the complexity of quantum simulation. In particular, this leads to an efficient quantum algorithm to simulate the electronic structure Hamiltonian, which models various important systems in materials science and quantum chemistry.

Recursive techniques have played an essential role in speeding up classical algorithms, such as those for sorting, searching, large integer and matrix multiplication, modular exponentiation, and Fourier transforms. Specifically, given a problem of size N, we do not aim to solve it directly; instead, we divide the target problem into M subproblems, each of which can be seen as an instance of the original one with size N/M and can be solved recursively using the same approach. This implies that the overall complexity C(N) satisfies the relation C(N) = M C(N/M) + f(N), with f(N) denoting the additional cost to combine solutions of the subproblems. Mathematical analysis yields that, under certain realistic assumptions, the overall complexity C(N) has the same scaling as the combination cost f(N) up to a logarithmic factor, a powerful result sometimes known as "the master theorem." However, combining solutions can be much easier to handle than solving the full problem, so recursion essentially allows us to simplify the target problem almost for free!
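The recurrence can be unrolled numerically. Here is a minimal sketch (illustrative only, with M = 2 and combine cost f(N) = N, the classic merge-sort shape) confirming the N log N scaling the master theorem predicts:

```python
import math

def cost(n, m=2, f=lambda k: k):
    # Unroll C(N) = M * C(N/M) + f(N); a size-1 problem costs one unit of work.
    if n <= 1:
        return 1
    return m * cost(n // m, m, f) + f(n)

for n in (2**10, 2**16, 2**20):
    # The ratio C(N) / (N log2 N) settles near a constant, as predicted.
    print(n, cost(n) / (n * math.log2(n)))
```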

Given the ubiquitous nature of recursions in classical computing, it is somewhat surprising that there were not many recursive quantum algorithms available. The paper from Low, Su, and collaborators develops recursive Trotter steps with a much lower implementation cost, suggesting the use of recursion as a promising new way to reduce the complexity of simulating many-body Hamiltonians.

The paper's result applies to a variety of long-range interacting Hamiltonians, including the Coulomb interaction between charged particles and the dipole-dipole interaction between molecules, both of which are ubiquitous in materials science and quantum chemistry, a primary target application of quantum computers. In physics, impressive control in recent experiments with trapped ions, Rydberg atoms, and ultracold atoms and polar molecules has made it possible to study new phases of matter, which contributes to a growing interest in simulating such systems.

This research is part of the larger quantum computing effort at Microsoft. Microsoft has long been at the forefront of the quantum industry, serving as a pioneering force in the development of quantum algorithms tailored for simulating materials science and chemistry. This includes earlier efforts using quantum computers to elucidate reaction mechanisms in complex chemical systems, targeting the open problem of biological nitrogen fixation in nitrogenase, as well as more recent quantum solutions for a carbon dioxide fixation catalyst with more than an order of magnitude savings in computational cost.

The new results from the current work represent Microsoft's continuing progress to develop solutions for classically intractable problems on a future quantum machine with Azure Quantum.

- Want to learn more about Azure Quantum? We invite you to check out our Microsoft Quantum Innovator Series.
- If you are interested in connecting with our Azure Quantum team, please reach out at QuantumInnovation@microsoft.com.


The post Unlocking the power of Azure for Molecular Dynamics appeared first on Microsoft Azure Quantum Blog.

Molecular dynamics (MD), the simulation of molecular interactions, is one computational problem that pushes the boundaries of what is possible with today's high-performance computing platforms. More powerful platforms for molecular dynamics simulations could unlock the development of new materials, new drugs, and more efficient batteries, so a team of Azure Quantum scientists recently set out to ask a fundamental question: what are Azure's capabilities for these types of simulations? Here's what we learned and how any scientist can use Azure to drive similar results today.

Molecular dynamics calculations pose unique high-speed communication challenges which require state-of-the-art computing capabilities. The Microsoft Azure cloud architecture helps researchers overcome these hurdles by allowing them to take advantage of the latest software and hardware developments required for chemistry and materials science research. By simplifying the provisioning of the necessary high-performance computing (HPC) resources, Azure helps scientists rapidly deploy and execute complex simulations of the structure and dynamics of macromolecules. This significantly accelerates chemical and materials innovation, for example enabling the creation of better pharmaceuticals by modeling biomolecules and their relevant properties at a faster pace.

Azure high-performance computing engineers have made significant progress in advancing the scale and networking speeds of the cloud platform. The latest Azure virtual machines use InfiniBand for low-latency communication across distributed nodes for differentiated scalability and performance gains. Our team has demonstrated excellent parallel efficiency and a performance increase of over 200 percent in benchmark simulations compared to previous virtual machines, particularly for larger simulations.

Customers are already taking advantage of Azure cloud HPC capabilities today. You can read more here.

"With Azure HPC, we've seen about a 50 percent speedup on some of our chemistry calculations that we run, which is critical for R&D because every second counts, not just for getting the results quickly, but also in terms of cost and throughput," says Glenn Jones, Research Manager at Johnson Matthey Technology Center.

Molecular dynamics simulations translate atomic-scale forces and energies into molecular motion and are an important tool for both life sciences and materials science research. In life sciences, for example, molecular dynamics simulations are used to understand proteins, ligands, and their associated properties, which can be used to accelerate the discovery of better pharmaceuticals by modeling drug molecules and their relevant protein binding sites.

Molecular dynamics workloads stand to greatly benefit from specialized HPC systems whose architectures use both graphics processing units (GPUs) and central processing units (CPUs) with high-speed interconnects between them. Optimized MD simulations pose unique computational challenges related to time scale, sampling, and analysis, requiring powerful computing nodes with low-latency communications, a task for which Azure is uniquely suited.

Biologically relevant events occur across a wide range of timeframes, from fractions of a second to decades. Consider an example event that occurs within 1/1,000th of a second. While this may seem short in real time, it represents a massive computational workload. To capture relevant chemical properties in these simulations, the system needs to compute the position and momentum of all the atoms very frequently. Often, these properties are calculated every 10^-12 seconds for all atoms in the system. This means one would have to carry out at least a billion calculation steps to perform simulations on the same timescale as the event of interest.
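The step count above is simple arithmetic, sketched here for concreteness:

```python
event_duration_s = 1e-3  # event of interest: 1/1,000th of a second
timestep_s = 1e-12       # positions and momenta recomputed every 10^-12 s
steps = event_duration_s / timestep_s
print(round(steps))  # about a billion integration steps
```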

Powerful computing resources in Azure make this computational problem more tractable. In addition to requiring state-of-the-art processors, including both CPUs and GPUs, to accurately evaluate energies and forces, scalable molecular dynamics simulations need high-performance communication networks, because each calculation step requires messages to be passed between processors to communicate force and energy information. As the number of processors used in the simulation increases, so too does the need for faster communication, since simulation performance is extremely sensitive to the speed at which information can be passed between distributed nodes.

The analysis of molecular dynamics simulation trajectories also requires high-performance computing methods. To understand the simulation results, scientists must apply compute-intensive methods to analyze large volumes of trajectory data. This analysis requires advanced statistical methods, high-performance computing platforms, and chemists' expert knowledge to interpret results.

Microsoft is committed to making the most advanced computing resources available through Azure cloud services. The Azure cloud HPC platform allows researchers to take advantage of InfiniBand on HB Series virtual machines, which enables low-latency communication across distributed nodes for differentiated scalability and performance boosts.

We've seen strong results using Azure for molecular dynamics:

- When simulating a benchmark model for Satellite Tobacco Mosaic Virus (STMV) with 20 million atoms, our latest high-bandwidth (HB) VMs (v3) outperformed previous versions of HB VMs by 218 percent to 251 percent, while also reducing the cost per nanosecond by a third (32 percent to 36 percent).
- And while these simulations scale well on CPUs, they also scale on GPUsa configuration also supported by Azure. As we continue to bring new hardware and software to Azure HPC, further benchmark details will be published.
- Overall, these benchmarks illustrate Azure's continued capability to reduce the time and cost of complex simulation workloads by utilizing state-of-the-art configurations, such as VMs with InfiniBand technology.

Azure’s unique capabilities extend far beyond the life sciences and can be applied to many other high-performance and computing-intensive workflows, including those in materials science and chemical physics.

Increasingly, we see great potential to accelerate chemistry and materials advances by integrating Azure's scaled HPC solutions with the speed of groundbreaking AI models tuned for scientific research. At Microsoft, we have been exploring a full breadth of AI capabilities for decades with our internal research teams. With the broad range of AI tools in Azure, innovators can design workflows which harness AI models to sort through massive data sets and subsequently use HPC-based simulation insights to narrow those results. These scenarios are only possible with the deep integration of AI and HPC in Azure today, which will also include the power of quantum at scale to help researchers improve model accuracy in the future.

Many of the world’s most pressing problems require advanced computing and the ability to simulate complex systems, because many physical interactions and natural processes are too difficult to study with classical computation at sufficient levels of accuracy. For this reason, scaled quantum computers must be part of the architecture of the future. Since quantum mechanics explains the behavior of matter and energy on the smallest possible scale, the scale of atoms and subatomic particles, quantum computers are inherently capable of understanding and predicting the complexities of nature, like those in chemical and materials science.

Scaled quantum computing will deliver breakthrough accuracy in modeling the forces and energies of such systems, allowing insights into spaces that are currently intractable to explore. Microsoft is focused on engineering a scaled quantum machine with these capabilities right now. However, with the world’s future possibly in the balance, we constantly ask the question: “How can we empower scientists to accelerate progress today?”

At Microsoft, we continue to invest more in our cloud HPC infrastructure so that innovators can accelerate the pace of research and discovery, both within chemical and materials applications and beyond.

We're excited to see how the power of the Azure cloud will help you.

- Learn more about accelerating NAMD on Azure HB-series VMs.
- Discover automated ways to model and understand chemical reactions with Azure HPC.
- Want to learn more about Azure Quantum? We invite you to check out our Microsoft Quantum Innovator Series.
- If you are interested in connecting with our Azure Quantum team, please reach out at QuantumInnovation@microsoft.com.


The post Building a quantum-safe future appeared first on Microsoft Azure Quantum Blog.
