SciTechDaily

    A Leap in Performance – New Breakthrough Boosts Quantum AI

By Los Alamos National Laboratory, August 14, 2023
A research team has demonstrated that overparametrization improves performance in quantum machine learning, an approach that can tackle problems beyond the reach of classical computers. Their research offers insights for optimizing the training process in quantum neural networks, enabling enhanced performance in practical quantum applications.

    More is better — to a point — when using a large number of parameters to train machine-learning models on quantum computers.

    A groundbreaking theoretical proof reveals that using a technique called overparametrization enhances performance in quantum machine learning for tasks that challenge traditional computers.

“We believe our results will be useful in using machine learning to learn the properties of quantum data, such as classifying different phases of matter in quantum materials research, which is very difficult on classical computers,” said Diego Garcia-Martin, a postdoctoral researcher at Los Alamos National Laboratory and a co-author of the team’s new paper on the technique, published in Nature Computational Science.

    Garcia-Martin worked on the research in the Laboratory’s Quantum Computing Summer School in 2021 as a graduate student from the Autonomous University of Madrid.

Machine learning, a branch of artificial intelligence, usually involves training neural networks to process information — data — and learn how to solve a given task. In a nutshell, one can think of the neural network as a box with knobs, or parameters, that takes data as input and produces an output that depends on the configuration of the knobs.

    “During the training phase, the algorithm updates these parameters as it learns, trying to find their optimal setting,” Garcia-Martin said. “Once the optimal parameters are determined, the neural network should be able to extrapolate what it learned from the training instances to new and previously unseen data points.”
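The “box with knobs” picture can be sketched numerically. The toy model below is purely illustrative — a plain linear model trained by gradient descent, not the paper’s quantum network — with a parameter vector playing the role of the knobs that the training loop tunes:

```python
import numpy as np

# Toy "box with knobs": a model whose output depends on a parameter
# vector theta, trained by gradient descent on a squared-error loss.
# All names and sizes here are illustrative, not from the paper.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))             # 20 training inputs, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                           # targets from a known linear rule

theta = np.zeros(3)                      # the "knobs", initially untuned
lr = 0.05                                # learning-rate step size

for step in range(500):
    pred = X @ theta                     # output depends on knob settings
    grad = 2 * X.T @ (pred - y) / len(y) # gradient of mean squared error
    theta -= lr * grad                   # update the knobs

print(np.round(theta, 2))                # recovers roughly [1.0, -2.0, 0.5]
```

Once trained, the same knob settings are applied unchanged to new inputs — the extrapolation step Garcia-Martin describes.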

Classical and quantum machine learning share a challenge when training the parameters: the algorithm can settle into a sub-optimal configuration and stall out.

    A leap in performance

Overparametrization, a well-known approach in classical machine learning in which more and more parameters are added, can prevent that stall-out.

The implications of overparametrization in quantum machine learning models were poorly understood until now. In the new paper, the Los Alamos team establishes a theoretical framework for predicting the critical number of parameters at which a quantum machine learning model becomes overparametrized. Beyond that critical point, adding parameters prompts a leap in network performance and the model becomes significantly easier to train.
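A loose classical analogue of such a transition (assumed here for illustration, not taken from the paper) can be seen by fitting the same data with a random-feature model of growing size: once the parameter count passes the size of the dataset, the training loss collapses to near zero.

```python
import numpy as np

# Illustrative classical sketch, not the paper's quantum setting:
# fit a nonlinear target with k random ReLU features. Past the point
# where k exceeds the number of data points, least squares can
# interpolate the data and the training loss drops to (near) zero.

rng = np.random.default_rng(1)
n = 30
x = np.linspace(-1, 1, n)
y = np.sin(3 * x)                             # nonlinear target to fit

def train_loss(k):
    """Least-squares fit with k random ReLU features; returns train MSE."""
    W = rng.normal(size=(k, 1))
    b = rng.normal(size=k)
    F = np.maximum(0, x[:, None] * W.T + b)   # n-by-k feature matrix
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return np.mean((F @ coef - y) ** 2)

for k in (2, 10, 60):                         # under- to over-parametrized
    print(k, train_loss(k))
```

The quantum result is about trainability, not just fit quality, but the qualitative picture — a sharp change in behavior at a critical parameter count — is the same.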

    “By establishing the theory that underpins overparametrization in quantum neural networks, our research paves the way for optimizing the training process and achieving enhanced performance in practical quantum applications,” explained Martin Larocca, the lead author of the manuscript and postdoctoral researcher at Los Alamos.

    By taking advantage of aspects of quantum mechanics such as entanglement and superposition, quantum machine learning offers the promise of much greater speed, or quantum advantage, than machine learning on classical computers.

    Avoiding traps in a machine learning landscape

    To illustrate the Los Alamos team’s findings, Marco Cerezo, the senior scientist on the paper and a quantum theorist at the Lab, described a thought experiment in which a hiker looking for the tallest mountain in a dark landscape represents the training process. The hiker can step only in certain directions and assesses their progress by measuring altitude using a limited GPS system.

    In this analogy, the number of parameters in the model corresponds to the directions available for the hiker to move, Cerezo said. “One parameter allows movement back and forth, two parameters enable lateral movement, and so on,” he said. A data landscape would likely have more than three dimensions, unlike our hypothetical hiker’s world.

With too few parameters, the hiker can’t thoroughly explore and might mistake a small hill for the tallest mountain or get stuck in a flat region where any step seems futile. As the number of parameters increases, however, the hiker can move in more directions in higher dimensions. What initially appeared to be a local hill might turn out to be an elevated valley between peaks. With the additional parameters, the hiker avoids getting trapped and finds the true peak, the solution to the problem.
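The analogy can be made concrete with a small numerical sketch (the loss functions below are invented for the demonstration and have nothing to do with the paper's quantum circuits): in one dimension, gradient descent gets stuck on a local optimum, while adding one more parameter opens a valley that leads around it.

```python
import numpy as np

# Numerical version of the hiker analogy (illustrative only).
# Here we minimize a loss, i.e. the hiker seeks the lowest valley.

def descend(grad, p0, lr=0.02, steps=20000):
    """Plain gradient descent from starting point p0."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p -= lr * grad(p)
    return p

# 1 knob: loss (x^2-1)^2 + 0.3x has a shallow local minimum near
# x = +1 and a lower global minimum near x = -1.
g1 = lambda p: np.array([4 * p[0] * (p[0] ** 2 - 1) + 0.3])
print(descend(g1, [0.5]))          # stalls near x ~ +0.96, the local minimum

# 2 knobs: the same loss lifted to (x^2+y^2-1)^2 + 0.3x. The former
# trap now sits on a circular valley that connects all the way around
# to the true minimum near (-1, 0), so the hiker escapes.
def g2(p):
    x, y = p
    r = x * x + y * y - 1
    return np.array([4 * x * r + 0.3, 4 * y * r])

print(descend(g2, [0.5, 0.1]))     # reaches roughly (-1, 0)
```

The extra dimension does not change the loss along the original axis; it only adds an escape route — the “elevated valley between peaks” of the analogy.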

    Reference: “Theory of overparametrization in quantum neural networks” by Martín Larocca, Nathan Ju, Diego García-Martín, Patrick J. Coles and Marco Cerezo, 26 June 2023, Nature Computational Science.
    DOI: 10.1038/s43588-023-00467-6

The study was funded by the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory.
