Quantum–inspired Methods for Training Machine Learning Models

pp. 143-159


Author(s)

Nilesh T. Fonseka 1 Anuradha Mahasinghe 1,*

1. Department of Mathematics, University of Colombo, Colombo 03, Sri Lanka

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2025.06.08

Received: 21 Jul. 2025 / Revised: 26 Sep. 2025 / Accepted: 13 Nov. 2025 / Published: 8 Dec. 2025

Index Terms

Quadratic Unconstrained Binary Optimization, Quantum Approximate Optimization Algorithm, Sampling Variational Quantum Eigensolver, Multiple Linear Regression Model, D-Wave, IBM Qiskit

Abstract

Machine learning model training, which ultimately optimizes a model's cost function, is usually a time-consuming and computationally intensive process on classical computers. This burden has intensified with the increased demand for large-scale data analysis, motivating unconventional computing paradigms such as quantum computing to enhance training efficiency. Adiabatic quantum computers have excelled at solving optimization problems, which must be expressed in the quadratic unconstrained binary optimization (QUBO) format. In this study, squared error minimization in the multiple linear regression model is reformulated as a QUBO problem, enabling it to be solved on D-Wave adiabatic quantum computers. The same formulation was used to obtain solutions with gate-based algorithms, namely the quantum approximate optimization algorithm (QAOA) and the sampling variational quantum eigensolver (VQE), implemented via IBM Qiskit. The results of these approaches were analyzed in terms of runtime and mean squared error (MSE) and compared to classical approaches. Our experimental results indicate a runtime advantage of the D-Wave annealing approach over the classical Scikit-learn regression approach: the advantage is observed when N > 524288 compared to Sklearn LinearRegression and when N > 65536 compared to Sklearn SGDRegressor. This study also considers support vector machine induced neural networks, in which the margin-based entropy loss is converted into a QUBO via a Lagrangian approach, with regard to their applicability to nonlinear models.
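To illustrate the kind of reformulation the abstract describes, the following is a minimal, hypothetical sketch (not the authors' code) of casting multiple linear regression as a QUBO. It assumes each regression weight is a small non-negative integer written with a 2-bit binary expansion, and it replaces the quantum annealer with a brute-force search over bitstrings so the example stays self-contained; the data set, precision vector, and all names are illustrative.

```python
# Sketch: encode squared-error minimization ||Xw - y||^2 as a QUBO over
# binary variables b[j][k], where w_j = sum_k P[k] * b[j][k].
import itertools

# Tiny illustrative data set: y = X @ w_true exactly, with w_true = [1, 2].
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]
P = [1.0, 2.0]  # place-value (precision) vector: 2 bits per weight

n_feat, n_bits = len(X[0]), len(P)
n_vars = n_feat * n_bits  # one binary variable per (feature, bit) pair

def gram(a, b):
    """Entry (a, b) of X^T X."""
    return sum(row[a] * row[b] for row in X)

def xty(a):
    """Entry a of X^T y."""
    return sum(row[a] * yi for row, yi in zip(X, y))

# Expand ||Xw||^2 - 2 w^T X^T y (the constant ||y||^2 is dropped, as a
# QUBO is defined up to an additive constant). Diagonal entries of Q
# carry the linear terms, since b^2 = b for binary b.
Q = {}
for j, k in itertools.product(range(n_feat), range(n_bits)):
    u = j * n_bits + k
    for jp, kp in itertools.product(range(n_feat), range(n_bits)):
        v = jp * n_bits + kp
        if u < v:
            Q[(u, v)] = Q.get((u, v), 0.0) + 2.0 * P[k] * P[kp] * gram(j, jp)
        elif u == v:
            Q[(u, u)] = Q.get((u, u), 0.0) + P[k] * P[k] * gram(j, j) - 2.0 * P[k] * xty(j)

def energy(bits):
    return sum(c * bits[u] * bits[v] for (u, v), c in Q.items())

# A quantum annealer would sample low-energy states of Q; here we simply
# enumerate all 2^4 assignments and keep the minimum.
best = min(itertools.product([0, 1], repeat=n_vars), key=energy)
w = [sum(P[k] * best[j * n_bits + k] for k in range(n_bits)) for j in range(n_feat)]
print(w)  # decoded weights -> [1.0, 2.0]
```

In practice the same Q dictionary could be handed to a sampler (e.g. a D-Wave or simulated-annealing backend) instead of the brute-force loop; the binary encoding and the QUBO coefficients are the part that carries over.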

Cite This Paper

Nilesh T. Fonseka, Anuradha Mahasinghe, "Quantum–inspired Methods for Training Machine Learning Models", International Journal of Information Technology and Computer Science (IJITCS), Vol. 17, No. 6, pp. 143-159, 2025. DOI: 10.5815/ijitcs.2025.06.08
