Cloud-fog computing is an emerging paradigm that combines cloud computing and fog computing to improve resource efficiency and the performance of distributed systems. Task scheduling is crucial in cloud-fog computing because it determines how computing resources are allocated across tasks. This study proposes incorporating the Shark Search Krill Herd Optimization (SSKHOA) algorithm into task scheduling for cloud-fog computing. The SSKHOA algorithm combines the shark search algorithm and the krill herd algorithm to enhance both the global and local search capabilities of the optimization process. By modelling the swarm intelligence of krill herds and the predator-prey behavior of sharks, it rapidly explores the solution space and finds near-optimal task schedules. To test the efficacy of the SSKHOA algorithm, we created a synthetic cloud-fog environment and conducted a series of experiments, comparing the results against traditional task scheduling techniques such as LTRA, DRL, and DAPSO. The experimental results demonstrate that SSKHOA outperformed the baseline algorithms, increasing the task success rate by 34%, reducing execution time by 36%, and reducing makespan by 54%.
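The abstract gives no implementation details of SSKHOA; as a rough, hypothetical sketch of the general idea behind such hybrid swarm schedulers, the toy below balances exploitation of the best-known schedule against random exploration when assigning tasks to virtual machines. The function names, parameters, and update rule are illustrative assumptions, not the authors' algorithm:

```python
import random

def makespan(assign, task_len, vm_speed):
    """Finish time of the busiest VM under a task -> VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def swarm_schedule(task_len, vm_speed, pop=20, iters=200, seed=0):
    """Population-based search: each candidate either copies the current
    best schedule's choice for one task (exploitation, like krill moving
    toward the herd's best) or reassigns it at random (exploration, like
    a shark's random prey search)."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    swarm = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    best = min(swarm, key=lambda a: makespan(a, task_len, vm_speed))
    for _ in range(iters):
        for a in swarm:
            cand = a[:]
            i = rng.randrange(n)
            cand[i] = best[i] if rng.random() < 0.7 else rng.randrange(m)
            if makespan(cand, task_len, vm_speed) < makespan(a, task_len, vm_speed):
                a[:] = cand  # greedy acceptance of the improved candidate
        best = min(swarm + [best], key=lambda a: makespan(a, task_len, vm_speed))
    return best, makespan(best, task_len, vm_speed)
```

Real hybrid metaheuristics add velocity terms, induced motion, and predator-avoidance operators; the exploit/explore split above is only the skeleton they share.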
Artificial life and other nature-inspired techniques have been applied to many problems in computer graphics. Some of these techniques are based on observations of organic systems, such as slime molds and flocking animals, and can mimic some of their behaviors and structures. The emergent behavior of these systems can improve the realism of procedurally generated assets used in computer graphics applications, such as animation and texture maps. In this work, we provide a survey of these techniques and their applications, including cellular automata, differential growth, reaction-diffusion, and Physarum. The techniques are compared and contrasted, and common themes and patterns are elucidated to create a taxonomy useful both to researchers studying existing techniques and to those developing new ones.
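As a concrete taste of one surveyed technique, the sketch below runs an elementary cellular automaton (Rule 90), whose simple local update rule produces an emergent Sierpiński-triangle pattern of the kind used in procedural texturing; the function names are our own:

```python
def ca_step(cells, rule=90):
    """One update of an elementary cellular automaton (wrap-around edges).
    Each cell's next state is looked up from the 8-bit rule number using
    its 3-cell neighbourhood as a 3-bit index."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right
        out.append((rule >> idx) & 1)
    return out

def run_ca(width=31, steps=15, rule=90):
    """Evolve from a single seed cell; returns one row per time step."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = ca_step(cells, rule)
        rows.append(cells)
    return rows
```

Stacking the returned rows as pixel rows (1 = ink) renders the fractal directly, which is how such automata typically feed into texture maps.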
Due to the COVID-19 situation, all activities, including education, were shifted to online platforms. Consequently, instructors encountered increased challenges in evaluating students. In traditional assessment methods, instructors often face ambiguous cases when evaluating students' competencies. Recent research has focused on the effectiveness of fuzzy logic in assessing students' competencies in the presence of uncertain factors or multiple variables. Additionally, demographic characteristics, which can potentially influence students' performance, are not typically utilized as inputs in the fuzzy logic method. Therefore, analyzing students' performance by incorporating these factors is crucial for suggesting adjustments to teaching and learning strategies. In this study, we employ a combination of fuzzy logic and hierarchical linear regression to analyze students' performance. The experiment involved 318 students from various programs and showed that the hybrid approach assessed students' performance with greater nuance and adaptability than a traditional method. Moreover, the findings of this study revealed the following: 1) there are differences in students' performance between traditional and fuzzy evaluation methods; 2) the learning method has an impact on students' fuzzy grades; 3) students studying online do not perform better than those studying onsite. These findings suggest that instructors and educators should explore effective strategies that are fair and suitable for assessment and learning.
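The paper's specific membership functions and rules are not given in the abstract; the following minimal sketch shows the general shape of a fuzzy grading step, using assumed triangular membership functions for "weak", "average", and "good" and a simple weighted-average defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_grade(score):
    """Fuzzify a crisp 0-100 score into three overlapping sets, then
    defuzzify via a weighted average of the set peaks."""
    sets = {"weak": (0, 25, 50), "average": (25, 50, 75), "good": (50, 75, 100)}
    mu = {name: tri(score, a, b, c) for name, (a, b, c) in sets.items()}
    total = sum(mu.values()) or 1.0  # avoid division by zero at the extremes
    crisp = sum(mu[name] * sets[name][1] for name in sets) / total
    return mu, crisp
```

A borderline score thus belongs partially to two sets at once, which is precisely the "ambiguous cases" the fuzzy approach is meant to express; the crisp output can then feed a regression model as one predictor among the demographic ones.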
The Chengdu-Chongqing twin-city (CCTC) economic zone, which accounted for 30.8% of the western region's total economic output by 2021, is progressively being established as the "fourth pole" of China's economic development. Increasing Total Factor Energy Efficiency (TFEE) and reducing the disparities in energy efficiency amongst cities in the area are therefore important for the economic development of the CCTC economic zone. Accordingly, this study employs an output-oriented, super-efficient DEA model with constant returns to scale to measure the total factor energy efficiency of 16 prefecture-level cities in the CCTC economic circle from 2006 to 2020, and examines the spatial distribution and variation patterns of each prefecture-level city's total factor energy efficiency. Spatial autocorrelation and random forest were then used to explore the interrelationships among the driver indicators, and the variables were screened according to Gini importance. Finally, PCA-GWR models and spatial panel regression models were constructed to identify the key drivers. The empirical findings indicate that: (1) the average energy efficiency of Chongqing and Chengdu is only 0.708 and 0.788, respectively, and overall efficiency shows a steady increase from 2009 to 2018; (2) the spatial distribution is mainly as follows: H-H agglomerations are found in the northeastern cities of the CCTC economic zone, L-L agglomerations in the south, and H-L and L-H agglomerations in the north-central part of the zone; (3) industrial structure, population structure, and governmental behavior have significant effects on energy efficiency, with the population mortality rate acting as an inhibiting factor and the share of the tertiary industry and policy behavior as contributing factors.
Based on these empirical findings, it is recommended to accelerate industrial structure adjustment, implement energy-saving targets in the core cities of the CCTC economic circle, and build a "community" bridge to improve energy efficiency and promote economic development.
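Among the methods named above, global spatial autocorrelation, which underlies the H-H/L-L agglomeration classification, is commonly summarized with Moran's I. A minimal pure-Python version might look like the sketch below; the regions, values, and weight matrix in it are invented for illustration, not the study's data:

```python
def morans_i(values, weights):
    """Global Moran's I for `values` observed on n regions, given an
    n x n spatial weight matrix `weights` (e.g. 1 for adjacent regions).
    I > 0: similar values cluster (H-H / L-L); I < 0: dissimilar values
    are neighbours (H-L / L-H); I near 0: no spatial pattern."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)          # total weight mass
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)
```

For example, a perfect checkerboard of high and low values on a ring of four regions gives I = -1 (complete dispersion), while two adjacent high regions facing two adjacent low ones gives I = 0 on that ring.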
Chronic diabetes is one of the most common diseases in the world, directly affecting the lives of millions of people. Diabetes can be controlled and improved with early diagnosis, but the majority of patients continue to live with it. There is a pressing need for a system that can identify the people most likely to develop diabetes in the future. The main goal of this study is to identify future patients without any blood or glucose screening tests. This paper proposes a deep-learning model for diabetes prediction. The proposed model consists of three main phases: data pre-processing, feature selection, and classification. During the data pre-processing stage, missing values are handled and the data is normalized. Three techniques are then used to select the most important features: mutual information, chi-squared, and Pearson correlation. After that, multiple machine learning classifiers are applied, and four experiments are conducted to test the models. The effectiveness of the proposed model is also evaluated against other well-known machine learning techniques. The experimental results show that the linear regression classifier performs best, achieving higher accuracy, AUC, sensitivity, and F-measure than the other methods. The proposed model outperformed traditional methods and achieved a high accuracy rate for predicting diabetes.
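One of the three feature-selection criteria named above, Pearson correlation, can be sketched in a few lines; the helper names and the ranking scheme below are illustrative assumptions rather than the paper's code:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    den = sx * sy
    return cov / den if den else 0.0  # constant column: no linear relation

def top_features(X, y, k=2):
    """Rank feature columns of X by |correlation| with the label y and
    keep the k strongest; mutual information or chi-squared would slot
    into the same scoring step."""
    cols = list(zip(*X))
    scores = [(abs(pearson(col, y)), i) for i, col in enumerate(cols)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

Note that Pearson correlation only captures linear association, which is why the paper pairs it with mutual information and chi-squared before handing the reduced feature set to the classifiers.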
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we now depend on, delivered with less effort and in real time. However, it remains vulnerable to various threats and attacks because of its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects the 6LoWPAN features of the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the others include malicious nodes. The benign scenarios serve as a benchmark against which the malicious scenarios are compared, so that the features affected by attackers can be extracted and explored. These features are used as inputs to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS), which detects and prevents 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested, with improved accuracy, on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class modes. On our proposed dataset, the ANN classifier achieved 95.65% and 99.95% accuracy in the 4-class and 2-class modes before optimization, and 99.84% and 99.97%, respectively, afterwards. On the Edge IIoT dataset, it achieved 95.14% and 99.86% in the 15-class and 2-class modes before optimization, and 97.64% and 99.94%, respectively, afterwards. The decision tree-based models are lightweight due to their lower computational complexity, making them appropriate for edge-computing deployment.
In contrast, the other ML models are heavyweight and require more computation, making them better suited to deployment in cloud or fog computing within IoT networks.
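The paper's ANN architecture is not described in the abstract; as a deliberately minimal stand-in for the detection phase, the sketch below trains a single-layer perceptron to separate benign from attack traffic on invented, pre-normalized features (the feature meanings, data, and hyperparameters are all hypothetical):

```python
import random

def train_perceptron(samples, labels, epochs=50, lr=0.1, seed=1):
    """Fit a single-layer perceptron: on each misclassified sample,
    nudge the weights toward the correct side of the decision boundary."""
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(dim)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """1 = flag as attack, 0 = benign."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented, pre-normalized (packet_rate, neighbour_churn) features:
# low values mimic benign 6LoWPAN traffic, high values mimic an attack.
SAMPLES = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3), (0.3, 0.2),
           (0.8, 0.9), (0.9, 0.7), (0.7, 0.8), (0.85, 0.95)]
LABELS = [0, 0, 0, 0, 1, 1, 1, 1]
```

A real IDPS ANN would stack several such layers with nonlinear activations and train on the extracted 6LoWPAN features; the single-neuron version only illustrates why the decision-tree models, with even fewer operations per packet, are the lighter choice at the edge.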