ISSN: 2074-904X (Print)
ISSN: 2074-9058 (Online)
DOI: https://doi.org/10.5815/ijisa
Website: https://www.mecs-press.org/ijisa
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 137
IJISA is committed to bridging the theory and practice of intelligent systems. From innovative ideas to specific algorithms and full system implementations, IJISA publishes original, peer-reviewed, high-quality articles in the areas of intelligent systems. IJISA is a well-indexed scholarly journal and provides indispensable reading and references for those working at the cutting edge of intelligent systems and applications.
IJISA has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJISA Vol. 17, No. 3, Jun. 2025
REGULAR PAPERS
Ensuring online security against automated attacks remains a critical challenge, as traditional CAPTCHA mechanisms often struggle to balance robustness and usability. This study proposes a novel intelligent and interactive CAPTCHA system that integrates advanced image processing techniques with a convolutional neural network (CNN)-based evaluation model to enhance security and user engagement. The proposed CAPTCHA dynamically generates images with randomized object placement, adaptive noise layers, and geometric transformations, making them resistant to AI-based solvers. Unlike conventional CAPTCHAs, this approach requires users to interact with images by selecting and marking specific objects, creating a human-in-the-loop validation process. For evaluation, a CNN-based classifier processes user selections and determines their validity. A lightweight embedded software module tracks user interactions in real time, monitoring selection accuracy and response patterns to improve decision-making. The system was tested on 6,000 images across five categories (airplanes, cars, cats, motorcycles, and fish), with an 80% training and 20% testing split. Experimental results demonstrate a classification accuracy of 99.58%, validation accuracy of 96.15%, and a loss value of 0.2078. The CAPTCHA evaluation time was measured at 47–53 milliseconds for initial validation and 17–23 milliseconds for subsequent evaluations. These results confirm that the proposed CAPTCHA model effectively differentiates human users from bots while maintaining usability, demonstrating superior resilience against automated solvers compared to traditional approaches.
The Traveling Salesman Problem (TSP) is a well-known NP-hard combinatorial optimization problem, commonly studied in computer science and operations research. Due to its complexity and broad applicability, various algorithms have been designed and developed from the viewpoint of intelligent search. In this paper, we propose a two-stage method based on the clustering concept integrated with an intelligent search technique. In the first stage, a set of clustering techniques - fuzzy c-means (FCM), k-means (KM), and k-medoids (KMD) - are employed independently to generate feasible routes for the TSP. These routes are then optimized in the second stage using an improved Genetic Algorithm (IGA). Specifically, we enhance the traditional Genetic Algorithm (GA) through an advanced selection strategy, a new position-based heuristic crossover, and a supervised mutation mechanism (FIB). The IGA is applied to the feasible routes generated in the clustering stage to search for the optimal route. The overall solution approach results in three distinct pathways: FCM+IGA, KM+IGA, and KMD+IGA. Simulation results on 47 benchmark TSP datasets demonstrate that the proposed FCM+IGA performs better than both KM+IGA and KMD+IGA. Moreover, FCM+IGA outperforms other clustering-based algorithms and several state-of-the-art methods in terms of solution quality.
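The two-stage idea above (cluster first, then optimize routes) can be sketched as follows. This is a minimal illustration, not the paper's method: plain k-means stands in for the FCM/KM/KMD stage, and a greedy nearest-neighbour pass stands in for the IGA refinement; all names and parameters are illustrative.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 2-D city coordinates (stand-in for FCM/KM/KMD)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each city to its nearest cluster center
            clusters[min(range(k), key=lambda c: math.dist(p, centers[c]))].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return [cl for cl in clusters if cl]

def nearest_neighbour_route(cities):
    """Greedy feasible route inside one cluster (the IGA would refine this)."""
    route, rest = [cities[0]], set(cities[1:])
    while rest:
        nxt = min(rest, key=lambda c: math.dist(route[-1], c))
        route.append(nxt)
        rest.remove(nxt)
    return route

def two_stage_tour(points, k=2):
    """Stage 1: cluster the cities; stage 2: build a route per cluster
    and concatenate the per-cluster routes into one tour."""
    tour = []
    for cluster in kmeans(points, k):
        tour.extend(nearest_neighbour_route(cluster))
    return tour
```

The concatenated tour visits every city exactly once, which is the feasibility property the clustering stage guarantees before the genetic refinement begins.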
Understanding the nutrient content of soils, such as nitrogen (N), phosphorus (P), potassium (K), pH, temperature, and moisture, is key to dealing with soil variation and climate uncertainty. Effective soil nutrient management can increase plant resilience to climate change as well as improve water use. In addition, soil nutrients affect the selection of suitable plant types, considering that each plant has different nutritional needs. However, the lack of integration of soil nutrient analysis into agricultural practices leads to the inefficient use of inputs, impacting crop yields and environmental sustainability. This study proposes a soil nutrient assessment scheme that can recommend plant types using ensemble learning and remote sensing. Remote sensing provides broad spatial coverage, while ensemble learning supports precision agriculture. The results of this scheme show that nutrient assessment with remote sensing provides an opportunity to evaluate soil conditions and select suitable plants based on the extraction of N, P, K, pH, TCI, and NDTI values. Among the ensemble learning algorithms, Random Forest outperforms XGBoost, AdaBoost, and Gradient Boosting, with an accuracy of 0.977 and a precision of 0.980 in 0.895 seconds.
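The core mechanism an ensemble uses to combine models can be sketched with hard voting: each base learner maps a soil sample to a crop label and the majority label wins. The threshold rules below are purely illustrative stand-ins for the study's trained models, and the crop names and cutoffs are assumptions, not values from the paper.

```python
from collections import Counter

def recommend_crop(sample, learners):
    """Hard-voting ensemble: every base learner casts one vote for a crop
    label; the most common label is the recommendation."""
    votes = Counter(learner(sample) for learner in learners)
    return votes.most_common(1)[0][0]

# Illustrative base learners over (N, P, K, pH) readings (hypothetical rules).
learners = [
    lambda s: "rice" if s["N"] > 80 else "maize",
    lambda s: "rice" if s["K"] > 40 else "maize",
    lambda s: "rice" if 5.5 <= s["pH"] <= 7.0 else "maize",
]
```

In a real pipeline each lambda would be replaced by a trained model (e.g., one Random Forest tree), which is how bagging-style ensembles aggregate predictions.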
Scheduling is an NP-hard problem: exact algorithms cannot find optimal solutions within a feasible time frame, so heuristic approximations are required. Efficient task scheduling in Cloud Computing (CC) remains a critical challenge due to the need to balance energy consumption and deadline adherence. Existing scheduling approaches often suffer from high energy consumption and inefficient resource utilization, failing to meet stringent deadline constraints, especially under dynamic workload variations. To address these limitations, this study proposes an Energy-Deadline Aware Task Scheduling using the Water Wave Optimization (EDATSWWO) algorithm. Inspired by the propagation and interaction of water waves, EDATSWWO optimally allocates tasks to available resources by dynamically balancing energy efficiency and deadline adherence. The algorithm evaluates tasks based on their energy requirements and deadlines, assigning them to virtual machines (VMs) in the multi-cloud environment to minimize overall energy consumption while ensuring timely execution. Google Cloud workloads were used as the benchmark dataset to simulate real-world scenarios and validate the algorithm's performance. Simulation results demonstrate that EDATSWWO significantly outperforms existing scheduling algorithms in terms of energy efficiency and deadline compliance. The algorithm reduced energy consumption by an average of 21.4%, improved task deadline adherence by 18.6%, and optimized resource utilization under varying workloads. This study highlights the potential of EDATSWWO to enhance the sustainability and efficiency of multi-cloud systems. Its robust design and adaptability to dynamic workloads make it a viable solution for modern cloud computing environments, where energy consumption and task deadlines are critical factors.
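The energy-deadline trade-off the algorithm balances can be made concrete with a greedy baseline: schedule the most urgent tasks first, and give each task the VM that minimises energy among those still able to meet its deadline. This is a simplified stand-in for EDATSWWO's wave-based search, not the paper's algorithm; the MIPS/watt figures are invented for illustration.

```python
def schedule(tasks, vms):
    """Greedy energy-deadline-aware assignment.
    tasks: list of (length_MI, deadline_s); vms: list of (mips, watts).
    Returns (plan, total_energy) where plan[i] is the VM index chosen for
    the i-th task in earliest-deadline order."""
    finish = [0.0] * len(vms)          # current finish time per VM
    plan, energy = [], 0.0
    for length, deadline in sorted(tasks, key=lambda t: t[1]):  # urgent first
        best, best_energy = None, float("inf")
        for i, (mips, watts) in enumerate(vms):
            runtime = length / mips
            if finish[i] + runtime <= deadline:     # deadline still met?
                e = runtime * watts                 # energy for this choice
                if e < best_energy:
                    best, best_energy = i, e
        if best is None:                # no VM meets the deadline: least-loaded
            best = min(range(len(vms)), key=lambda i: finish[i])
            best_energy = (length / vms[best][0]) * vms[best][1]
        finish[best] += length / vms[best][0]
        plan.append(best)
        energy += best_energy
    return plan, energy
```

A metaheuristic like WWO explores many such assignments and keeps the wave (solution) with the best combined energy/deadline fitness, rather than committing greedily.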
Accurate electricity forecasting is vital for grid stability and effective energy management, particularly in regions like Benghazi, Libya, which face frequent load shedding, generation deficits, and aging infrastructure. This study introduces a data-driven framework to forecast electricity load, generation, and deficits for 2025 using historical data from two distinct years: 2019 (an instability year) and 2023 (a stability year). Various time series models were employed, including Autoregressive Integrated Moving Average (ARIMA), seasonal ARIMA, dynamic regression ARIMA, extreme gradient boosting, simple exponential smoothing, and Long Short-Term Memory (LSTM) neural networks. Data preprocessing steps, such as missing value imputation, outlier smoothing, and logarithmic transformation, were applied to enhance data quality. Model performance was evaluated using metrics such as mean squared error, root mean squared error, mean absolute error, and mean absolute percentage error. LSTM outperformed the other models, achieving the lowest values on these metrics for forecasting load, generation, and deficits, demonstrating its ability to handle non-stationarity, seasonality, and extreme events. The study's key contribution is the development of an optimized LSTM framework tailored to North Benghazi's electricity patterns, incorporating a rich dataset and exogenous factors like temperature and humidity. These findings offer actionable insights for energy policymakers and grid operators, enabling proactive resource allocation, demand-side management, and enhanced grid resilience. The research highlights the potential of advanced machine learning techniques to address energy-forecasting challenges in resource-constrained regions, paving the way for a more reliable and sustainable electricity system.
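The three preprocessing steps named above (missing-value imputation, outlier smoothing, log transform) can be sketched in a few lines. This is a generic illustration under simple assumptions (mean imputation, z-score clipping, positive readings), not the paper's exact pipeline.

```python
import math

def preprocess(series, z=3.0):
    """Impute missing readings, clip outliers, then log-transform.
    series: list of floats with None for missing values."""
    # 1) impute missing readings with the series mean
    known = [x for x in series if x is not None]
    mean = sum(known) / len(known)
    filled = [mean if x is None else x for x in series]
    # 2) smooth outliers: clip values beyond z standard deviations
    var = sum((x - mean) ** 2 for x in filled) / len(filled)
    std = math.sqrt(var)
    lo, hi = mean - z * std, mean + z * std
    clipped = [min(max(x, lo), hi) for x in filled]
    # 3) log transform to stabilise variance (values assumed positive)
    return [math.log(x) for x in clipped]
```

In a forecasting workflow the transformed series is what feeds the ARIMA/LSTM models, and predictions are exponentiated back to the original scale.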
This paper presents the development and implementation of an intelligent system for predicting the risk of diabetes spread using machine learning techniques. The core of the system relies on the analysis of the Pima Indians Diabetes dataset through k-nearest neighbours (k-NN), Random Forest, Logistic Regression, Decision Trees and XGBoost algorithms. After pre-processing the data, including normalization and handling missing values, the k-NN model achieved an accuracy of 77.2%, precision of 80.0%, recall of 85.0%, F1-score of 83.0% and ROC of 81.9%. The Random Forest model achieved an accuracy of 81.0%, precision of 87.0%, recall of 91.0%, F1-score of 89.0% and ROC of 90.0%. The Logistic Regression model achieved an accuracy of 60.0%, precision of 93.0%, recall of 61.0%, F1-score of 74.0% and ROC of 69.0%. The Decision Trees model achieved an accuracy of 79.0%, precision of 87.0%, recall of 89.0%, F1-score of 88.0% and ROC of 83.0%. The XGBoost model outperformed the others with an accuracy of 83.0%, precision of 85.0%, recall of 96.0%, F1-score of 90.0% and ROC of 91.0%, indicating strong prediction capabilities. The proposed system integrates both hardware (continuous glucose monitors) and software (AI-based classifiers) components, ensuring real-time blood glucose level tracking and early-stage diabetes risk prediction. The novelty lies in the proposed architecture of a distributed intelligent monitoring system and the use of ensemble learning for risk assessment. The results demonstrate the system's potential for proactive healthcare delivery and patient-centred diabetes management.
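Of the classifiers compared above, k-NN is simple enough to sketch completely: classify a sample by majority vote among its k nearest training samples. The toy features below (glucose, BMI) and labels are invented for illustration, not the Pima dataset.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbours vote.
    train: list of (feature_tuple, label); query: feature tuple."""
    # sort training samples by Euclidean distance to the query
    neighbours = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

In practice features are normalized first (as the paper's pre-processing does), since k-NN's distance metric is sensitive to feature scale.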
Cyberbullying is an intentional act of harassment carried out across social media using online information technology. This research applied an unsupervised associative approach to text mining in order to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the domain's frequent words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior and appearance of female victims, and sex-related words that humiliate female victims. The results suggest that an unsupervised associative approach to text mining can extract important information from unstructured text. Further, applying association rules can help in recognizing the relationships and meaning between keywords and other words, thereby generating knowledge discovery from different unstructured text datasets.
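Association rules over tweets reduce to counting co-occurrences: a rule x -> y holds when the pair (x, y) is frequent enough (support) and y appears in a large enough share of the tweets containing x (confidence). A minimal sketch over word pairs, with example tweets invented for illustration:

```python
from itertools import combinations
from collections import Counter

def mine_rules(tweets, min_support=2, min_conf=0.5):
    """Tiny association-rule miner over tweet word sets.
    Returns (antecedent, consequent, support_count, confidence) tuples."""
    docs = [set(t.lower().split()) for t in tweets]
    word_count = Counter(w for d in docs for w in d)
    pair_count = Counter(
        pair for d in docs for pair in combinations(sorted(d), 2)
    )
    rules = []
    for (a, b), n in pair_count.items():
        if n < min_support:
            continue
        for x, y in ((a, b), (b, a)):          # try both rule directions
            conf = n / word_count[x]           # P(y | x) over documents
            if conf >= min_conf:
                rules.append((x, y, n, conf))
    return rules
```

A full Apriori-style miner would extend this to itemsets larger than pairs, but the support/confidence logic is the same.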
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans, but most of the objects in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing and, in turn, our daily lives. Although the IoT's benefits are nearly unlimited, many challenges face its adoption in the real world due to its centralized server/client model. For instance, scalability and security issues arise due to the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One of the popular decentralization systems is blockchain. Blockchain is a powerful technology that decentralizes computation and management processes, which can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the integration's benefits and challenges. Future research directions for blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.
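The security property blockchain brings to IoT data rests on hash chaining: each block commits to the previous block's hash, so tampering with any record invalidates every later link. A minimal sketch (no consensus, no signatures; the sensor payloads are invented):

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block's content (index, data, previous hash)."""
    payload = {k: block[k] for k in ("index", "data", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(index, data, prev_hash):
    """A block could carry, e.g., a batch of IoT sensor readings."""
    block = {"index": index, "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid_chain(chain):
    """Recompute every hash: any tampered block breaks the prev-hash links."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != block_hash(prev) or prev["hash"] != block_hash(prev):
            return False
    return True
```

A real deployment adds distributed consensus (e.g., proof of work or BFT voting) so that no single node, unlike a central IoT server, can rewrite the chain alone.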
Stock market prediction has become an attractive research topic due to its important role in the economy and the benefits it offers. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims to construct an effective model to predict stock market trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices, and it provides better accuracy than previous studies by considering multiple types of news related to the market and the company together with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using the naïve Bayes algorithm; this step achieved prediction accuracy ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices, improving the prediction accuracy up to 89.80%.
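The first step above, naïve Bayes polarity classification of headlines, can be sketched from scratch: score each class by its prior times the smoothed likelihood of every word. The training headlines below are invented for illustration.

```python
import math
from collections import Counter

def train_nb(labelled):
    """Multinomial naive Bayes with add-one smoothing.
    labelled: list of (text, class) pairs."""
    words = {c: Counter() for c in ("pos", "neg")}
    docs = Counter()
    for text, label in labelled:
        docs[label] += 1
        words[label].update(text.lower().split())
    vocab = set(w for c in words.values() for w in c)
    return words, docs, vocab

def classify(model, text):
    words, docs, vocab = model
    total = sum(docs.values())
    best, best_lp = None, -math.inf
    for c in docs:
        lp = math.log(docs[c] / total)            # log prior
        n = sum(words[c].values())
        for w in text.lower().split():            # add-one smoothed likelihood
            lp += math.log((words[c][w] + 1) / (n + len(vocab)))
        best, best_lp = (c, lp) if lp > best_lp else (best, best_lp)
    return best
```

The predicted polarity then becomes one input feature, alongside historical prices, to the second-stage price predictor.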
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes related to students' course assessment and the number of times a student will possibly repeat a course before passing. The hope is that the ability to predict students' performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database was used; feedforward and radial basis function networks were trained for this task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.
The proliferation of Web-enabled devices, including desktops, laptops, tablets, and mobile phones, enables people to communicate, participate, and collaborate with each other in various Web communities, viz., forums, social networks, and blogs. Simultaneously, the enormous amount of heterogeneous data generated by the users of these communities offers an unprecedented opportunity to create and employ theories and technologies that search and retrieve relevant data from the huge quantity of information available and then mine it for opinions. Consequently, Sentiment Analysis, which automatically extracts and analyses the subjectivities and sentiments (or polarities) in written text, has emerged as an active area of research. This paper previews and reviews the substantial research on sentiment analysis, expounding its basic terminology, tasks, and granularity levels. It further gives an overview of the state of the art, depicting some previous attempts at sentiment analysis. Its practical and potential applications are also discussed, followed by the issues and challenges that will keep the field dynamic and lively for years to come.
Along with the growth of the Internet, social media usage has drastically expanded. As people share their opinions and ideas more frequently on the Internet and through various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. According to reports, cyberbullying frequently leads to severe emotional and physical suffering, especially in women and young children; in certain instances, victims have even been reported to attempt suicide. The bully may occasionally attempt to destroy any evidence they believe could incriminate them, and even if the victim retains the evidence, justice may still be a long time coming. This work used OCR, NLP, and machine learning to design and implement a practical method to recognize cyberbullying from images. Eight classifier techniques are compared for accuracy against two key feature representations, the BoW model and TF-IDF. These classifiers are used to understand and recognize bullying behaviors. Testing the suggested method on the cyberbullying dataset showed that linear SVC after OCR and logistic regression perform best, achieving an accuracy of 96 percent. This study helps provide a good outline that shapes the methods for detecting online bullying from a screenshot, with design and implementation details.
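One of the two feature representations named above, TF-IDF, weights a word by how often it appears in a document, discounted by how many documents contain it. A minimal from-scratch sketch (the toy documents are invented):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for tokenised documents.
    docs: list of token lists; returns one {word: weight} dict per document."""
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))  # document freq
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append(
            {w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf}
        )
    return weights
```

Words that appear in every document get weight zero (log 1 = 0), which is why rare, distinctive terms dominate the feature vectors the classifiers learn from.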
Non-functional requirements define the quality attributes of a software application, and it is necessary to identify them in the early stages of the software development life cycle. Researchers have proposed automatic software non-functional requirement classification using several Machine Learning (ML) algorithms combined with various vectorization techniques. However, the best combination for non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms vary in non-functional requirements classification performance, and we report the best approach for classifying non-functional requirements. We conducted a comparative analysis on the publicly available PROMISE_exp dataset containing labelled functional and non-functional requirements. Initially, we normalized the textual requirements from the dataset; we then extracted features through Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods. Finally, we executed the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis to find the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the combination of the linear support vector classifier and TF-IDF outperforms all other combinations with an F1-score of 81.5%.
In this paper, a new acquisition protocol is adopted for identifying individuals from electroencephalogram signals based on eye blinking waveforms. For this purpose, a database of 10 subjects was collected using the Neurosky Mindwave headset. The eye blinking signal is extracted from the brain wave recordings and used for the identification task. The feature extraction stage fits the extracted eye blinks to an auto-regressive model; two algorithms are implemented for auto-regressive modeling, namely the Levinson-Durbin and Burg algorithms. Discriminant analysis is then adopted as the classification scheme, with linear and quadratic discriminant functions tested and compared. Using the Burg algorithm with linear discriminant analysis, the proposed system identifies subjects with a best accuracy of 99.8%. The obtained results confirm that the eye blinking waveform carries discriminant information and is therefore appropriate as a basis for person identification methods.
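The Levinson-Durbin recursion named above computes auto-regressive coefficients directly from a signal's autocorrelations. A compact sketch of the standard recursion (a generic textbook form, not the paper's implementation):

```python
def autocorr(x, maxlag):
    """Biased sample autocorrelations r[0..maxlag] of a signal x."""
    n = len(x)
    return [sum(x[t] * x[t + k] for t in range(n - k)) / n
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """Levinson-Durbin recursion. Returns (a, err) where the AR model is
    x[t] ~= sum(a[k] * x[t-1-k]) and err is the final prediction error."""
    a = []
    err = r[0]
    for m in range(order):
        # reflection coefficient for this order
        k = (r[m + 1] - sum(a[j] * r[m - j] for j in range(m))) / err
        # update coefficients: a_new[j] = a[j] - k * a[m-1-j], append k
        a = [ai - k * aj for ai, aj in zip(a, reversed(a))] + [k]
        err *= (1 - k * k)
    return a, err
```

The fitted coefficients (here for eye-blink segments) form the feature vector that the discriminant classifier separates per subject.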
Climate change, a significant and lasting alteration in global weather patterns, is profoundly impacting the stability and predictability of global temperature regimes. As the world continues to grapple with the far-reaching effects of climate change, accurate and timely temperature predictions have become pivotal to various sectors, including agriculture, energy, public health and many more. Crucially, precise temperature forecasting assists in developing effective climate change mitigation and adaptation strategies. With the advent of machine learning techniques, we now have powerful tools that can learn from vast climatic datasets and provide improved predictive performance. This study compares three such advanced machine learning models (XGBoost, Support Vector Machine (SVM), and Random Forest) in predicting daily maximum and minimum temperatures using a 45-year dataset of Visakhapatnam airport. Each model was rigorously trained and evaluated based on key performance metrics including training loss, Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R2 score, Mean Absolute Percentage Error (MAPE), and Explained Variance Score. Although no single model clearly dominated across all metrics, SVM and Random Forest showed slightly superior performance on several measures. These findings not only highlight the potential of machine learning techniques in enhancing the accuracy of temperature forecasting but also stress the importance of selecting an appropriate model and performance metrics aligned with the requirements of the task at hand. This research accomplishes a thorough comparative analysis, conducts a rigorous evaluation of the models, and highlights the significance of model selection.
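The evaluation metrics listed above have simple closed forms, computed here from scratch so their definitions are explicit:

```python
import math

def metrics(y_true, y_pred):
    """MAE, MSE, RMSE, MAPE (percent), and R2 for a regression forecast."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    mse = sum(e * e for e in errs) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - sum(e * e for e in errs) / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "R2": r2}
```

Note that MAE/RMSE are in the target's units (degrees here) while MAPE and R2 are scale-free, which is why comparisons across models should report several of them, as the study does.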
This paper presents two methods for solving the classification task of medical implant materials based on the combined use of the Wiener Polynomial and SVM. The high accuracy of the proposed methodology is experimentally confirmed. A comparison of the proposed methods with existing ones (Logistic Regression; Linear SVC; Random Forest; SVC (linear kernel); SVC (RBF kernel); Random Forest + Wiener Polynomial) is carried out. The training duration of all methods described in this work is investigated. The article also visualizes the results of each method on this task.
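The role of the Wiener polynomial here is a nonlinear feature expansion: the input features are mapped to all polynomial terms up to a given degree, and a linear classifier such as SVM is trained on the expanded vector. A minimal sketch of the degree-2 expansion (an illustrative form, not the paper's exact parameterization):

```python
from itertools import combinations_with_replacement

def wiener_features(x, degree=2):
    """Polynomial (Wiener/Kolmogorov-Gabor-style) feature expansion:
    constant term, all x_i, then all products x_i * x_j, etc."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            term = 1.0
            for i in idx:
                term *= x[i]
            feats.append(term)
    return feats
```

Training a linear SVM on these expanded vectors lets a linear decision boundary in feature space represent a curved boundary in the original input space.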
Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. The university timetable scheduling problem can be formulated as a graph coloring problem in which courses are represented as vertices and the presence of common students or teachers between two courses is represented as an edge; the problem then reduces to coloring the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of graph coloring in university timetable scheduling using five algorithms: First Fit, Welsh Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We take the Military Institute of Science and Technology, Bangladesh as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective at generating optimal schedules. The study also provides insights into the limitations and advantages of graph coloring for timetable scheduling and suggests directions for future research with these algorithms.
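Of the five algorithms compared, Welsh-Powell is easy to sketch: order vertices by degree, then sweep through the list once per colour, giving that colour to every still-uncoloured vertex with no neighbour already in it. The course names below are invented; colours correspond to timeslots.

```python
def welsh_powell(graph):
    """Welsh-Powell greedy colouring.
    graph: {course: set(conflicting_courses)} -> {course: timeslot_index}."""
    # order vertices by descending degree
    order = sorted(graph, key=lambda v: len(graph[v]), reverse=True)
    colour = {}
    c = 0
    while len(colour) < len(graph):
        for u in order:
            # give colour c to every uncoloured vertex with no neighbour in c
            if u not in colour and not any(colour.get(w) == c for w in graph[u]):
                colour[u] = c
        c += 1
    return colour
```

The number of distinct colours used is the number of timeslots the timetable needs; DSATUR differs by re-ordering vertices dynamically by how many colours already surround them.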
Alzheimer's disease is a brain disorder that results in mental confusion, forgetfulness, and many other cognitive problems, and it affects a person's physical health too. When treating a patient with Alzheimer's disease, a proper diagnosis is crucial, especially in the earlier phases of the condition: when patients are informed of the risk of the disease, they can take preventative steps before irreparable brain damage occurs. Most machine detection techniques are constrained by congenital (present at birth) data; however, numerous recent studies have used computers for Alzheimer's disease diagnosis. The first stages of Alzheimer's disease can be diagnosed, but the illness itself cannot be predicted, since prediction is only helpful before it actually manifests. Alzheimer's has high-risk symptoms that affect both the physical and mental health of a patient, including confusion and concentration difficulties, so it is important to detect the disease at its early stages; early detection gives the patient a better chance of treatment and medication. Hence, our research helps to detect the disease at its early stages. Particularly when used with brain MRI scans, deep learning has emerged as a popular tool for the early identification of AD. Here we use a 12-layer CNN comprising four convolutional layers, two pooling layers, two flatten layers, one dense layer, and three activation layers. As CNNs are well known for pattern detection and image processing, the accuracy of our model reaches 97.80%.
Timing-critical path analysis is one of the most significant concerns for the VLSI designer. For the formal verification of any digital chip, static timing analysis (STA) plays a vital role in checking the feasibility and viability of the design. It reports the timing status against the setup and hold times required with respect to the active edge of the clock. STA can also be used to identify time-sensitive paths, simulate path delays, and assess register transfer level (RTL) reliability. In this paper, four types of Static Random Access Memory (SRAM) controllers are used to handle the complexities of digital circuit timing analysis at the logic level. STA parameters such as slack, clock skew, data latency, and multiple clock frequencies are investigated through node-to-node path analysis of the different SRAM controllers. Using a phase-locked loop (ALTPLL), single-clock and dual-clock configurations are used to obtain the response of these controllers. For all four SRAM controllers, the timing analysis shows that no timing violation exists for single and dual clocks at 50 MHz and 100 MHz. The results also show that the slack at 100 MHz is greater than at 50 MHz. Moreover, the clock skew in our proposed design is lower than in the other three controllers because the numbers of paths and states are reduced, and its slack is higher than in the first and second controllers. In timing path analysis, the slack determines whether the design works at the desired frequency. Although 100 MHz is faster than 50 MHz, our proposed SRAM controller meets the timing requirements at 100 MHz, including a reduction in node-to-node data delay. For these reasons, the proposed controller outperforms the others in terms of slack and clock skew.
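The slack figures discussed above follow from a simple relation: setup slack is the data-required time minus the data-arrival time, where the required time depends on the clock period, clock skew, and the capture flop's setup time. A toy computation, with all numbers as illustrative placeholders rather than values from the paper's SRAM controllers:

```python
# Toy illustration of setup-slack arithmetic from static timing analysis:
#   slack = data-required time - data-arrival time.
def setup_slack(clock_period_ns, clock_skew_ns, path_delay_ns, setup_time_ns):
    """Positive slack means the path meets timing at this clock."""
    required_ns = clock_period_ns + clock_skew_ns - setup_time_ns
    arrival_ns = path_delay_ns
    return required_ns - arrival_ns

# A 100 MHz clock has a 10 ns period (assumed delays, not measured ones).
slack_ns = setup_slack(clock_period_ns=10.0, clock_skew_ns=0.1,
                       path_delay_ns=7.5, setup_time_ns=0.2)
print(f"setup slack: {slack_ns:.2f} ns")  # positive -> timing met
```

A negative result from the same formula would indicate a setup violation on that path, which is what the paper's "no timing violation" finding rules out for all four controllers.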
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, which can improve information sharing and, in turn, our daily lives. Although the benefits of the IoT are nearly unlimited, many challenges face its adoption in the real world due to its centralized server/client model. For instance, scalability and security issues arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through a server, which creates a single point of failure. Therefore, moving IoT systems toward decentralization may be the right decision. One popular decentralization technology is blockchain, which decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the benefits and challenges of that integration. Future research directions for blockchain with the IoT are also discussed. We conclude that the combination of blockchain and the IoT can provide a powerful approach that significantly paves the way for new business models and distributed applications.
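The security property that makes blockchain attractive for IoT data, as discussed above, is tamper evidence: each block embeds the hash of its predecessor, so altering an earlier record breaks the chain. A didactic hash-chain fragment (not a consensus protocol, and with a fixed timestamp for determinism) illustrates the mechanism:

```python
# Toy hash chain: each block commits to the previous block's hash,
# so tampering with earlier IoT sensor records is detectable.
import hashlib
import json

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "data": payload, "ts": 0}  # fixed ts for demo
    serialized = json.dumps(
        {k: block[k] for k in ("prev", "data", "ts")}, sort_keys=True)
    block["hash"] = hashlib.sha256(serialized.encode()).hexdigest()
    return block

genesis = make_block("0" * 64, {"device": "sensor-1", "temp": 21.5})
second = make_block(genesis["hash"], {"device": "sensor-1", "temp": 21.7})

# Tamper with the first block, then recompute its hash:
genesis["data"]["temp"] = 99.9
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("prev", "data", "ts")},
    sort_keys=True).encode()).hexdigest()
print(recomputed == second["prev"])  # False: the chain detects the change
```

Real blockchain deployments add distributed consensus and replication on top of this linking, which is what removes the single point of failure of the server/client model.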
This article presents a new approach to image recognition that combines the Conical Radon Transform (CRT) with Convolutional Neural Networks (CNNs).
To evaluate the performance of this approach on pattern recognition tasks, we built a Radon descriptor that enhances features extracted by the linear, circular, and parabolic Radon transforms. The main idea is to explore the use of the conic Radon transform to define a robust image descriptor. Specifically, the Radon transform is first applied to the image; the extracted features are then combined with the image and fed as input to the convolutional layers. Experimental evaluation demonstrates that our descriptor, which joins the extraction of features over different shapes with convolutional neural networks, achieves satisfactory results on publicly available datasets such as ETH80 and FLAVIA. Our proposed approach recognizes objects with an accuracy of 96% on the ETH80 dataset, and achieves accuracy competitive with state-of-the-art methods on the FLAVIA dataset, at 98%. We also carried out experiments on the German Traffic Sign Recognition Benchmark (GTSRB). We deliberately use simple CNN models in this work to focus on the utility of our descriptor, and we propose a new lightweight network for traffic signs that does not require a large number of parameters. The objective is to achieve optimal accuracy while reducing network parameters, so that the approach can be adopted in real-time applications. It classified traffic signs with a high accuracy of 99%.
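The core idea above, computing a Radon-based representation and stacking it with the original image as extra CNN input channels, can be sketched with scikit-image. Only the standard linear Radon transform is used here; the conical, circular, and parabolic variants from the paper would replace that single call. The image size, angle count, and resizing step are assumptions for illustration.

```python
# Sketch: stack an image with a Radon-based channel as joint CNN input.
import numpy as np
from skimage.transform import radon, resize

def radon_channel(img, n_angles=64):
    """Linear Radon sinogram, resized to the image shape as a channel."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img, theta=angles, circle=False)  # (detectors, angles)
    # Resize the sinogram to the image size so it can be stacked
    # as a second channel.
    return resize(sinogram, img.shape, anti_aliasing=True)

img = np.random.rand(64, 64)                            # placeholder image
stacked = np.stack([img, radon_channel(img)], axis=-1)  # (64, 64, 2)
print(stacked.shape)
```

The `(64, 64, 2)` tensor would then be passed to the convolutional layers, matching the paper's scheme of combining transform-domain features with the raw image before convolution.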