Computer Science Events
[PAST EVENT] Jindi Wu, Computer Science - PhD proposal
Abstract:
Quantum computing holds the potential to solve problems that are intractable for classical computers. Driven by this promise, academia and industry have made significant strides over the past few decades, leading to exciting breakthroughs. However, the anticipated quantum advantage remains unrealized due to the limitations of current quantum computers. These machines, known as Noisy Intermediate-Scale Quantum (NISQ) computers, are constrained by a limited number of qubits and are highly prone to errors, affecting both scalability and reliability. Despite these challenges, NISQ computers still offer considerable potential for real-world applications. This proposal aims to push the boundaries of these practical applications by improving the scalability and reliability of quantum computing during the NISQ era.
Quantum machine learning (QML) is one of the most promising applications of quantum computing, but its performance is significantly hindered by the limited size of QML models due to the restricted number of qubits on current quantum computers. Additionally, because quantum operations are prone to noise, larger QML models, which involve more quantum operations, are increasingly affected by noise, rendering their output unreliable and making the models untrainable.
To address these issues, we propose the Scalable Quantum Neural Network (SQNN). SQNN implements large-scale quantum neural networks (QNNs) by distributing their components across multiple quantum computers. The architecture is organized into two layers: in the first layer, each component processes a portion of the input data, extracting local features, while the second layer aggregates these outputs to analyze global features and produce the final result. By leveraging multiple NISQ devices collaboratively, we can significantly enhance the performance of QNNs. Additionally, this approach improves reliability and trainability, as each smaller component within the system is subject to fewer errors.
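To picture the two-layer, multi-device structure, the short Python sketch below gives one minimal, hypothetical rendering in PennyLane. It is not the actual SQNN implementation: the circuit ansatz (AngleEmbedding plus StronglyEntanglingLayers), the device counts, and the single-expectation readout are assumptions made for illustration, and each "device" here is simply a separate simulator standing in for a distinct NISQ machine.

import pennylane as qml
import numpy as np

N_LOCAL = 4           # number of small NISQ devices collaborating (assumed)
QUBITS_PER_LOCAL = 2  # qubits available on each local device (assumed)

# One simulator per "device" to mimic distributing components across machines.
local_devs = [qml.device("default.qubit", wires=QUBITS_PER_LOCAL) for _ in range(N_LOCAL)]
global_dev = qml.device("default.qubit", wires=N_LOCAL)

def make_local_qnode(dev):
    @qml.qnode(dev)
    def local_circuit(x_patch, weights):
        # First layer: encode one patch of the input and extract a local feature.
        qml.AngleEmbedding(x_patch, wires=range(QUBITS_PER_LOCAL))
        qml.StronglyEntanglingLayers(weights, wires=range(QUBITS_PER_LOCAL))
        return qml.expval(qml.PauliZ(0))
    return local_circuit

local_qnodes = [make_local_qnode(d) for d in local_devs]

@qml.qnode(global_dev)
def global_circuit(local_features, weights):
    # Second layer: aggregate the local features and produce the final output.
    qml.AngleEmbedding(local_features, wires=range(N_LOCAL))
    qml.StronglyEntanglingLayers(weights, wires=range(N_LOCAL))
    return qml.expval(qml.PauliZ(0))

def sqnn_forward(x, local_weights, global_weights):
    # Split the input so each local device sees only a portion of the data.
    patches = np.split(x, N_LOCAL)
    feats = np.array([q(p, w) for q, p, w in zip(local_qnodes, patches, local_weights)])
    return global_circuit(feats, global_weights)

# Example forward pass with one entangling layer per circuit.
local_weights = [np.random.uniform(0, np.pi, (1, QUBITS_PER_LOCAL, 3)) for _ in range(N_LOCAL)]
global_weights = np.random.uniform(0, np.pi, (1, N_LOCAL, 3))
x = np.random.uniform(0, np.pi, N_LOCAL * QUBITS_PER_LOCAL)
print(sqnn_forward(x, local_weights, global_weights))

In this sketch, each small circuit sees only its own data patch and contributes a single feature, so the error accumulated on any one device stays bounded by that device's shallow circuit, which is the intuition behind the improved reliability and trainability.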
Moreover, improving the resource efficiency of QNN algorithms is critical to further enhancing scalability and reliability. One limitation of current QNN algorithms is their increasing demand for quantum resources as problem complexity grows. For example, a QNN-based classifier requires more resources as the number of target classes increases. We identified the underutilization of quantum information as a key factor behind this growing resource demand. To address this, we propose MORE, a resource-efficient QNN-based classifier that maximizes the use of quantum information from single qubits. Using quantum clustering and supervised learning techniques, MORE maintains a fixed quantum resource requirement regardless of the number of classes. Additionally, its small size makes it less susceptible to noise, maintaining reliable performance even for complex problems.
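One way to picture the single-qubit idea is sketched below. This is an illustrative reading, not the proposal's actual MORE algorithm: it assumes the classifier measures the readout qubit in the X, Y, and Z bases to obtain a full Bloch vector, and that classes are identified with reference points on the Bloch sphere (which a clustering step would supply), so adding classes adds reference points rather than qubits or measurements. The circuit width, ansatz, and nearest-point decision rule are all assumptions.

import pennylane as qml
import numpy as np

N_QUBITS = 3  # assumed circuit width; it does not grow with the number of classes
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def readout_bloch_vector(x, weights):
    # Variational circuit followed by a full single-qubit readout on wire 0.
    qml.AngleEmbedding(x, wires=range(N_QUBITS))
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    return (qml.expval(qml.PauliX(0)),
            qml.expval(qml.PauliY(0)),
            qml.expval(qml.PauliZ(0)))

def predict(x, weights, class_points):
    # Assign the class whose Bloch-sphere reference point is closest to the readout.
    v = np.array(readout_bloch_vector(x, weights))
    dists = np.linalg.norm(class_points - v, axis=1)
    return int(np.argmin(dists))

# Five classes mapped to five reference points on the sphere; the circuit and
# its measurements are unchanged. (Here the points are random; a clustering
# step would choose them in practice.)
raw = np.random.normal(size=(5, 3))
class_points = raw / np.linalg.norm(raw, axis=1, keepdims=True)

weights = np.random.uniform(0, np.pi, (1, N_QUBITS, 3))
x = np.random.uniform(0, np.pi, N_QUBITS)
print(predict(x, weights, class_points))

Because the quantum side always produces the same three expectation values, the cost of adding classes falls entirely on the classical side, which is the resource-efficiency intuition described above.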
Finally, we outline a future research plan aimed at advancing quantum computing within the NISQ framework. This includes modeling quantum errors, optimizing quantum circuit compilation, and improving the security of cloud-based quantum services. These efforts will pave the way for future breakthroughs in quantum technology.
Bio:
Jindi Wu is a fifth-year Ph.D. candidate in the Department of Computer Science at William & Mary under the supervision of Professor Qun Li. Jindi's research interests lie in quantum machine learning, quantum error modeling, quantum circuit optimization, and the security of quantum services. Previously, she earned her M.S. in Computer Science from Syracuse University in 2020 and a B.S. in Information Security from Nanjing University of Aeronautics and Astronautics in 2017.
Sponsored by: Computer Science