IJISA Vol. 18, No. 2, 8 Apr. 2026
Keywords: Neutrosophic Theory, Sudden Concept Drift, Unsupervised Drift Detection, Uncertainty Modeling, Non-Stationary Data Streams
Concept drift is a critical challenge in dynamic environments, where evolving data distributions can abruptly reduce predictive accuracy. Sudden drift requires reliable detection methods that minimize latency and false alarms, yet traditional detectors often depend on labeled data, delaying adaptation and limiting robustness.
This article introduces Neutrosophic Pseudo-Labeling Sudden Drift Detection (N-PSDD), a novel framework for unsupervised sudden drift detection based on neutrosophic theory. The method integrates neutrosophic clustering for pseudo-labeling, block-wise neural modeling, drift quantification via neutrosophic mean deviation, and adaptive threshold evaluation. By explicitly modeling truth, indeterminacy, and falsity, N-PSDD captures uncertainty regions that conventional probabilistic measures fail to represent.
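The abstract does not specify the formulas behind the drift-quantification step, but the two ingredients it names, a neutrosophic mean-deviation score over (truth, indeterminacy, falsity) memberships and an adaptive threshold, can be sketched loosely as follows. The function names, the L1 centroid deviation, and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def neutrosophic_mean_deviation(T, I, F):
    """Toy drift score for one block of instances.

    Treats each instance as a point (T, I, F) in the neutrosophic cube
    [0, 1]^3 and returns the mean L1 distance to the block centroid,
    so blocks whose uncertainty structure is stable score low.
    """
    tif = np.column_stack([T, I, F])
    centroid = tif.mean(axis=0)
    return float(np.abs(tif - centroid).sum(axis=1).mean())

def sudden_drift_flag(history, current_score, k=3.0):
    """Adaptive threshold: flag sudden drift when the current block's
    score exceeds mean + k * std of scores from previous stable blocks."""
    mu, sigma = np.mean(history), np.std(history)
    return current_score > mu + k * sigma
```

For example, if four stable blocks scored around 0.10 and a new block scores 0.50, `sudden_drift_flag([0.10, 0.11, 0.09, 0.10], 0.5)` fires, while a score of 0.11 stays under the adaptive threshold.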
Experimental validation on synthetic and real-world datasets demonstrates that N-PSDD achieves competitive detection latency (MTTD ≈ 23–35 instances), a lower false alarm rate (FAR ≤ 3.1%), a reduced missing drift rate (MDR ≤ 2.5%), and consistently higher G-mean values (up to 0.91) than benchmark methods. For example, on the Poker Hand dataset, N-PSDD achieved MCC = 0.846 and accuracy ≈ 90%, while on the Electricity dataset it reached MCC = 0.623 with FAR = 3.1%. In contrast, unsupervised baselines (KSWIN, HDD, MMD) yielded higher FAR (≈6–10%) and lower MCC (≤ 0.56), confirming their limitations in capturing real concept drift.
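The three detection metrics quoted above (MTTD, FAR, MDR) can be computed from a detection log once each alarm is matched to a true drift point. A minimal sketch under assumed conventions: an alarm within `tolerance` instances after a true drift counts as a hit, unmatched alarms are false alarms, and FAR is taken as the fraction of alarms that are false (papers sometimes normalize FAR differently, e.g. per evaluation window):

```python
import numpy as np

def drift_detection_metrics(true_drifts, detections, tolerance=50):
    """Match each alarm to the nearest preceding true drift within
    `tolerance` instances; return MTTD, FAR, and MDR."""
    tp_delays, matched = [], set()
    false_alarms = 0
    for d in detections:
        hits = [t for t in true_drifts if 0 <= d - t <= tolerance and t not in matched]
        if hits:
            t = max(hits)          # nearest preceding true drift
            matched.add(t)
            tp_delays.append(d - t)
        else:
            false_alarms += 1
    missed = len(true_drifts) - len(matched)
    return {
        "MTTD": float(np.mean(tp_delays)) if tp_delays else float("inf"),
        "FAR": false_alarms / max(len(detections), 1),
        "MDR": missed / max(len(true_drifts), 1),
    }
```

With true drifts at instances 100 and 300 and alarms at 125 and 450, the alarm at 125 is a hit with delay 25, the alarm at 450 is a false alarm, and the drift at 300 is missed, giving MTTD = 25, FAR = 0.5, MDR = 0.5.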
Overall, N-PSDD enhances the resilience of learning models under non-stationary conditions and provides a robust solution for real-time applications, including financial forecasting, fraud detection, and adaptive control systems.
Rania S. Lutfi, "A Neutrosophic-Based Unsupervised Approach for Sudden Drift Detection", International Journal of Intelligent Systems and Applications (IJISA), Vol. 18, No. 2, pp. 139-155, 2026. DOI: 10.5815/ijisa.2026.02.10