Meta-learning Approach for Time Series Forecasting: First-order MAML and Reptile



Author(s)

Pratik Zinjad 1,*, Tushar Ghorpade 1, Vanita Mane 1

1. Department of Computer Engineering, Ramrao Adik Institute of Technology, Navi Mumbai, 400706, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2026.02.03

Received: 16 Oct. 2025 / Revised: 3 Dec. 2025 / Accepted: 17 Jan. 2026 / Published: 8 Apr. 2026

Index Terms

Few-Shot Learning, GRU, LSTM, MAML, Meta-learning, Reptile, Stock Price Prediction, Time Series Forecasting

Abstract

Forecasting time series data, especially in volatile sectors such as financial markets, poses significant challenges due to non-linearity, non-stationarity, and noise. Traditional forecasting models often fail to generalize effectively across varying tasks without extensive retraining. This study investigates the application of meta-learning techniques, particularly First-Order Model-Agnostic Meta-Learning (FOMAML) and Reptile, to improve adaptability and generalization in time series forecasting tasks. An extensive empirical study was conducted using three neural networks as base models, namely Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Feed-Forward Neural Network (FFNN), applied to four real-world stocks: TCS, TATASTEEL, GRASIM, and DJIAHD. The models were evaluated under few-shot learning conditions (defined here as 211-shot learning using sliding-window samples) with varying iteration counts (outer loops, or epochs), and their effectiveness was assessed using standard metrics: Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the coefficient of determination (R²). The results show that the meta-learning approaches notably outperform traditional models, with FOMAML in particular exhibiting quicker task adaptation and more stable convergence, especially when paired with GRU and LSTM base models; on the GRASIM dataset, the FOMAML-with-LSTM configuration attained an approximately 81.9% reduction in RMSE (dropping from 622.94 to 112.60 over the iterations). Across all four stocks, Reptile shows relatively steady performance. The study validates meta-learning as a powerful framework for time series forecasting in dynamic settings and offers a robust algorithmic foundation for future financial modeling applications.
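The Reptile outer loop referenced in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy one-parameter linear forecaster instead of an LSTM/GRU, and the function names (`inner_sgd`, `reptile`) and all hyperparameters are illustrative assumptions. It shows only the core idea from Nichol et al.: adapt a copy of the meta-parameters to a sampled task with a few gradient steps, then move the meta-parameters a fraction of the way toward the adapted weights.

```python
# Hypothetical sketch of a Reptile-style meta-update; the model y_hat = w * x
# and all hyperparameters are toy assumptions, not values from the paper.
import random

def inner_sgd(w, task, lr=0.05, steps=5):
    """A few inner SGD steps on one task's (x, y) pairs for the model y_hat = w * x."""
    for _ in range(steps):
        for x, y in task:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

def reptile(tasks, meta_lr=0.5, outer_iters=200, seed=0):
    """Reptile outer loop: nudge meta-parameters toward task-adapted weights."""
    random.seed(seed)
    w = 0.0  # meta-parameter (initialization being meta-learned)
    for _ in range(outer_iters):
        task = random.choice(tasks)       # sample a task (e.g. one stock's windows)
        w_adapted = inner_sgd(w, task)    # adapt a copy on the task's support set
        w += meta_lr * (w_adapted - w)    # Reptile update: interpolate toward w_adapted
    return w

# Toy family of related tasks: y = slope * x with slopes clustered near 2.0,
# loosely mimicking related forecasting tasks (e.g. different stocks).
tasks = [[(x, s * x) for x in (0.5, 1.0, 1.5)] for s in (1.8, 2.0, 2.2)]
w_meta = reptile(tasks)
```

The meta-learned `w_meta` ends up near the center of the task family, so a few inner steps suffice to specialize it to any one task; FOMAML differs mainly in that its outer update uses the last inner-loop gradient rather than the weight difference.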

Cite This Paper

Pratik Zinjad, Tushar Ghorpade, Vanita Mane, "Meta-learning Approach for Time Series Forecasting: First-order MAML and Reptile", International Journal of Intelligent Systems and Applications (IJISA), Vol.18, No.2, pp.27-43, 2026. DOI: 10.5815/ijisa.2026.02.03
