IJMECS Vol. 15, No. 3, Jun. 2023
Cover page and Table of Contents: PDF (size: 674KB)
Malaysian university students faced various obstacles and had unmet needs when virtual learning became the only option during the COVID-19 pandemic. These challenges call into question students' actual use behaviour and true acceptance of the virtual learning environment (VLE). This study therefore investigated students' actual use behaviour of VLEs, the factors that influence this behaviour, the moderating effect of network connectivity on this relationship, and the challenges students face while using the technology. The Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model served as the theoretical basis of the study. An online survey was conducted among International Medical University (IMU) students. The findings suggest that most students adopted the VLE during the pandemic. They further revealed that hedonic motivation, perceived usefulness and perceived ease of use positively affect actual use behaviour, while network connectivity has no significant moderating effect on the relationship between the dimensions of the VLE and actual use behaviour. The key challenges were the high cost associated with VLE usage and students finding the VLE neither entertaining nor enjoyable. These results indicate that students are more inclined to accept the technology when hedonic motivation, perceived usefulness and perceived ease of use are high. Universities should focus on enhancing these factors to increase acceptance of this technology among students, as VLEs have untapped potential for distance learning.
ICT infrastructure and its effective application in colleges have been a topic of debate. Many studies state that the use of ICT in colleges has never been properly regulated for students. If students are not trained adequately in ICT at college, a large segment of society will remain unemployable, implying that sufficient attention and direction should be provided by the college administration. If ICT is properly managed, students can be educated via both online and offline courses. This research examined the state of ICT in colleges. The survey included administrative and teaching staff from seven institutions. We selected only institutions that already had a thorough comprehension of the survey and understood how to use ICT effectively. However, several findings did not meet our expectations. Some colleges did not grasp the survey well enough; they were unaware of how to use ICT now and in the future while keeping the interests of students and institutions in mind. Some college surveys showed variances, although these were minimal in comparison to the overall survey. Our data suggest that the majority of colleges do not understand the correct usage of ICT. They are unaware that, with ICT, learners can be made employable and financially disadvantaged students can receive an education at low cost. They were uncertain about how to use ICT and how to advance it at their institutions so that online and offline courses could begin. Our findings also imply that ICT incompetence can be overcome by interacting with other institutions throughout the world. Government and college administrations should work together to alleviate the ICT scarcity to some extent.
The study aimed to analyze the effectiveness of blended education for economics students using information and communication technology (ICT). The research methods consisted of literature analysis, the case method, comparative analysis, mathematical statistics, and a statistical experiment. The article describes the following results. Three Russian universities (Vladivostok State University of Economics and Service [VSUES], Kemerovo State University [KemSU], and Ryazan State Radio Engineering University [RSREU]) have introduced ICT to implement a blended model for teaching economic disciplines. This made it possible to combine the strengths of traditional classroom and distance electronic education, and to quickly correct the problems that arise at the initial stage of ICT implementation, especially when training systems are integrated into international educational projects. The field study enrolled 236 economics students from the above-mentioned universities. The empirical data obtained confirmed some hypotheses regarding the effectiveness of ICT in teaching economics students. The practical significance of the article lies in the possibility of applying leading ICT tools to improve the professional competencies of future businesspeople in blended economic learning. The results can help universities shape a rational blended economics course that maximizes the business impact for future careers in this field. Future researchers may examine the effectiveness of Massive Open Online Courses (MOOCs) in improving the economic education of future entrepreneurs, with the possibility of involving real business cases in the educational process.
An early prediction of students' academic performance helps to identify at-risk students and enables management to take corrective actions before they go astray. Most research in this field has applied supervised machine learning to crafted datasets with numerous attributes or features. Since these datasets are not publicly available, it is hard to understand and compare the significance of the chosen features and the efficacy of the different machine learning models employed in the classification task. In this work, we analyzed 27 research papers published in the last ten years (2011-2021) that used machine learning models for predicting students' performance. We identified the most frequently used features in the private datasets, their interrelationships, and their abstraction levels. We also explored three popular public datasets and performed statistical analyses, such as the Chi-square test and Pearson's correlation, on their features. A minimal set of essential features was prepared by fusing the frequent features with the statistically significant ones. We propose an algorithm for selecting a minimal feature set from any dataset with a given set of features. We compared the performance of different machine learning models on the three public datasets in two experimental setups: one with the complete feature set and the other with the minimal set. Compared to using the complete feature set, most supervised models perform nearly identically, and in some cases even better, with the reduced feature set. The proposed method can identify the most essential feature set from any new dataset for predicting students' performance.
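The statistical screening described in this abstract can be sketched briefly. The following is an illustrative example, not the authors' code: the feature names, data, and the `select_features` helper are hypothetical, and it simply shows how a Chi-square test (for categorical features) and Pearson's correlation (for numeric features) separate significant features from noise.

```python
import numpy as np
from scipy.stats import chi2_contingency, pearsonr

# Hypothetical binary feature strongly associated with a pass/fail label.
relevant = [0] * 50 + [1] * 50
label    = [0] * 45 + [1] * 5 + [0] * 5 + [1] * 45

def contingency(feature, label):
    """Build a 2x2 contingency table for two binary variables."""
    table = np.zeros((2, 2))
    for f, l in zip(feature, label):
        table[f, l] += 1
    return table

_, p_rel, _, _ = chi2_contingency(contingency(relevant, label))

# Hypothetical feature that alternates independently of the label.
irrelevant = [0, 1] * 50
_, p_irr, _, _ = chi2_contingency(contingency(irrelevant, label))

# Hypothetical numeric feature: Pearson's correlation with a final grade.
hours = np.arange(100, dtype=float)
grade = 0.5 * hours + 10.0          # perfectly linear relationship
r, p_num = pearsonr(hours, grade)

def select_features(pvalues, alpha=0.05):
    """Keep only features whose test p-value falls below alpha."""
    return [name for name, p in pvalues.items() if p < alpha]

selected = select_features({"relevant": p_rel, "irrelevant": p_irr})
```

A minimal feature set, in the spirit of the paper, would then fuse such statistically significant features with the features that recur most often across published datasets.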
Students with learning disorders (LD) are unable to perform certain sets of tasks due to their difficulty in understanding and interpreting them. These tasks include, but are not limited to, solving simple mathematical identities, answering English grammar questions, spelling certain words, and arranging words in sequence. A wide variety of system models have been proposed by researchers to analyze such issues in LD students and recommend remedies for them. However, very few of these models are designed for end-to-end continuous learning support, which limits their applicability, and fewer still are designed to improve the capabilities of LD students by modifying the system's internal parameters. To address these issues, a novel deep-learning model (DL2CSMBP) is proposed in this work, which helps incrementally improve the learning capabilities of LD children via statistical modelling of examination behavioural patterns. The model first proposes a novel examination system that generates question sets based on a student's temporal performance and collects responses via an LD-friendly approach. These responses are processed by a deep learning model that extracts statistical characteristics from them: question-skipping probability, percentage of correct answers, question-revisit probability, time spent on each question, unattempted questions, and frequently skipped question types. These characteristics were extracted from 12 different question types, including basic, medium, and advanced English grammar, direct comprehension, inference comprehension, vocabulary comprehension, sequencing, spelling, synonyms, mathematics (addition and subtraction), and finding the odd man out. The questions were administered to more than 80 LD students, and their responses were observed.
Based on these responses, a customized 1D convolutional neural network (CNN) layer was trained, which helped improve classification performance. The proposed model was able to identify LD students with 95.6% efficiency. The LD students were able to incrementally improve their performance by attempting a series of exam sessions; through this improvement, they covered 28% more questions and answered almost 97% of them correctly. Given these promising results, the system is suitable for real-time deployment and can act as an automated schooling tool that helps LD students incrementally improve their examination performance without the need for medical or psychological experts. This can also help reduce depression among LD students, because they do not need to interact with a doctor in person while improving their condition in real time, suggesting its use in non-intrusive treatment of these students.
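The classification stage can be illustrated with a minimal sketch of how a per-session vector of behavioural statistics might pass through a 1D-CNN-style pipeline. This is not the DL2CSMBP model itself: the feature values, fixed weights, and function names below are hypothetical, and a real model would learn its weights from the students' responses.

```python
import numpy as np

# Hypothetical per-session behavioural features, in the order the abstract
# lists them: skip probability, fraction correct, revisit probability,
# normalised mean time per question, fraction unattempted, skipped-type ratio.
session = np.array([0.35, 0.40, 0.20, 0.80, 0.15, 0.50])

def conv1d(x, kernel):
    """Valid-mode 1D convolution (cross-correlation), as in a CNN layer."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def predict_ld_probability(features, kernel, w, b):
    """Tiny 1D-CNN-style classifier: conv -> ReLU -> global average -> sigmoid."""
    h = np.maximum(conv1d(features, kernel), 0.0)    # ReLU activation
    pooled = h.mean()                                # global average pooling
    return 1.0 / (1.0 + np.exp(-(w * pooled + b)))   # sigmoid output in (0, 1)

# Illustrative fixed weights; a trained model would fit these to labelled data.
kernel = np.array([0.5, -0.25, 0.5])
p = predict_ld_probability(session, kernel, w=2.0, b=-0.5)
```

Thresholding the output probability (for example at 0.5) would then flag a session as likely belonging to an LD student, with repeated sessions tracking the incremental improvement the abstract reports.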
The article develops a clustering-based technology for finding tweet trends, which forms a data stream of short cluster representations and their popularity for further research on public opinion. The accuracy of the result is affected by the natural-language features of the tweet stream. An effective approach to tweet collection, filtering, cleaning and pre-processing is described, based on a comparative analysis of the Bag-of-Words, TF-IDF and BERT algorithms. The impact of stemming and lemmatization on the quality of the obtained clusters was determined: they significantly reduce the input vocabulary of Ukrainian words, by 40.21% and 32.52% respectively. Optimal combinations of clustering methods (K-Means, Agglomerative Hierarchical Clustering and HDBSCAN) and tweet vectorization were found based on the analysis of 27 clusterings of one data sample. A method for presenting tweet clusters in a short format was selected. Algorithms based on the Levenshtein distance, i.e. fuzz sort, fuzz set and Levenshtein, showed the best results: they perform checks quickly and produce a greater difference in similarities, so the similarity threshold can be determined more accurately. According to the clustering results, the optimal choices are the HDBSCAN clustering algorithm with BERT vectorization for the most accurate results, and K-Means with TF-IDF for the best speed with an acceptable result. Stemming can be used to reduce execution time. In this study, the optimal options for comparing cluster fingerprints were experimentally found among the following similarity-search methods: Fuzz Sort, Fuzz Set, Levenshtein, Jaro-Winkler, Jaccard, Sorensen, Cosine, and Sift4. For some algorithms, the average fingerprint similarity exceeds 70%.
Three tools proved effective for comparing fingerprint similarity, as they show a sufficient difference (> 20%) between comparisons of similar clusters and of different clusters.
The experimental testing was conducted on 90,000 tweets collected over 7 days for 5 different weekly topics: President Volodymyr Zelenskyi, Leopard tanks, Boris Johnson, Europe, and the bright memory of the deceased. The research used combinations of K-Means with TF-IDF, Agglomerative Hierarchical Clustering with TF-IDF, and HDBSCAN with BERT for the clustering and vectorization processes. Additionally, fuzz sort was applied to compare cluster fingerprints with a similarity threshold of 55%. For comparing fingerprints, the most effective methods were fuzz sort, fuzz set, and Levenshtein. In terms of execution speed, the best result was achieved by the Levenshtein method; the other two methods were three times slower, but still nearly 13 times faster than Sift4. The fastest method is Jaro-Winkler, but its difference in similarities is only 19.51%. The method with the best difference in similarities is fuzz set (60.29%), with fuzz sort (32.28%) and Levenshtein (28.43%) in second and third place respectively. These methods all rely on the Levenshtein distance, indicating that this approach works well for comparing sets of keywords. The other algorithms fail to show significant differences between different fingerprints, suggesting that they are not suited to this type of task.
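The fingerprint-comparison step can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the study's implementation: the fingerprints are invented, and `fuzz_sort_similarity` is a simplified token-sort variant (sort the keywords, then apply a normalised Levenshtein similarity), shown with the 55% threshold the abstract mentions.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalised similarity in [0, 1]; 1.0 for identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def fuzz_sort_similarity(a: str, b: str) -> float:
    """Token-sort comparison: order-insensitive matching of keyword sets."""
    norm = lambda s: " ".join(sorted(s.lower().split()))
    return similarity(norm(a), norm(b))

# Hypothetical cluster fingerprints (short keyword summaries of clusters).
fp1 = "tanks leopard ukraine supply"
fp2 = "leopard tanks ukraine supply"   # same topic, different word order
fp3 = "boris johnson visit kyiv"       # different topic

THRESHOLD = 0.55  # the 55% similarity threshold used in the experiments
same_topic = fuzz_sort_similarity(fp1, fp2) >= THRESHOLD
```

Sorting the tokens first is what makes fp1 and fp2 match despite their different word order, which is consistent with the abstract's finding that Levenshtein-based token methods work well on keyword sets.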