Work place: MARS Research Laboratory LR17ES05, University of Sousse, Tunisia
Research Interests: Information Retrieval
Mohamed Nazih Omri received his Ph.D. in Computer Science from the University of Jussieu, Paris, France, in 1994. He is a professor of computer science at the University of Sousse, Tunisia. Since January 2011, he has been a member of the MARS (Modeling of Automated Reasoning Systems) Research Laboratory. His group conducts research on information retrieval, databases, knowledge bases, and web services. He has supervised more than 20 Ph.D. and M.Sc. students in different fields of computer science. He reviews for many international journals, such as Information Fusion and Psihologija, and for many international conferences, such as AMIA, ICNC-FSKD, AMAI, and SOMeT.
DOI: https://doi.org/10.5815/ijcnis.2022.05.03, Pub. Date: 8 Oct. 2022
In recent years, the most heavily used sources of information, such as Facebook, Instagram, LinkedIn and Twitter, have come to be considered among the main sources of misinformation. The presence of false information in these social networks has a very negative impact on the opinions and ways of thinking of Internet users. Several techniques have been used to address this misinformation problem, the most popular being sentiment analysis. This technique, which consists of exploring opinions expressed in text corpora, has become an essential topic in this field. In this article, we propose a new approach, called the Conversational Sentiment Analysis Model (CSAM), which, given a text written on a subject through messages exchanged between different users (a conversation), finds the passages describing feelings, emotions, opinions and attitudes. This approach is based on: (i) conditional probability, to analyse the sentiment of the individual items of a Twitter conversation, which are characterized by their small size and the presence of emoticons and emojis; (ii) the aggregation of conversation items using uncertainty theory, to evaluate the overall sentiment of the conversation. We conducted a series of experiments based on the standard SemEval 2019 datasets, using three standard packages: TextBlob, a sentiment analysis library; Flair, an embedding-based NLP framework; and VADER, a lexicon- and rule-based sentiment reasoner. We evaluated our model on two datasets, SemEval 2019 and ScenarioSA; the analysis of the results obtained in this experimental study confirms the feasibility of our model as well as its performance in terms of precision, recall and F-measure.[...]
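The abstract does not reproduce the CSAM formulas, so the following is only a minimal illustrative sketch of the general idea of per-message scoring followed by conversation-level aggregation. The lexicon, weights and thresholds are invented toy values, not the paper's method or the TextBlob/Flair/VADER pipelines:

```python
# Illustrative sketch: score each message in a conversation, then
# aggregate the per-message scores into one conversation-level label.
# The lexicon and aggregation weights are toy values, not the paper's.

TOY_LEXICON = {"great": 1.0, "love": 0.8, "good": 0.5,
               "bad": -0.5, "awful": -0.9, "hate": -0.8}

def message_score(message: str) -> float:
    """Average lexicon polarity over the words of one message."""
    words = message.lower().split()
    hits = [TOY_LEXICON[w] for w in words if w in TOY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def conversation_sentiment(messages: list[str]) -> str:
    """Weight later replies slightly higher, then threshold the
    weighted mean score into a three-way sentiment label."""
    if not messages:
        return "neutral"
    weights = [1 + i * 0.1 for i in range(len(messages))]
    total = sum(w * message_score(m) for w, m in zip(weights, messages))
    score = total / sum(weights)
    if score > 0.1:
        return "positive"
    if score < -0.1:
        return "negative"
    return "neutral"

conv = ["I love this phone", "The battery is bad though", "Still a great buy"]
print(conversation_sentiment(conv))  # → positive
```

Real conversation items would also need emoticon/emoji handling and the paper's conditional-probability and uncertainty-theory machinery, which this sketch omits.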
DOI: https://doi.org/10.5815/ijisa.2022.03.05, Pub. Date: 8 Jun. 2022
Quality management is among the most important activities within a company. It reflects, as rigorously as possible, the organization of establishments so as to offer the best service to customers and to the various members of these establishments. Quality management is a delicate and sensitive task due to the large number of documents and business processes that are handled on a cyclical basis. For this reason, setting up a reliable and efficient system for managing the different aspects of the quality management process becomes a challenge for any company that seeks excellence. This article proposes a new intelligent approach to the management of human and commercial resources within companies, supporting the quality management process according to each company's own conception. Our approach allows any quality manager to manage the different modules of a QMS according to the ISO 9001 standard through the different interfaces offered by our solution. The monitoring phase of this process is ensured through the implementation of a workflow orchestrator, jBPM.[...]
DOI: https://doi.org/10.5815/ijisa.2022.02.01, Pub. Date: 8 Apr. 2022
It is generally accepted that data production has experienced spectacular growth for several years, due to the proliferation of new technologies such as mobile devices, smart meters, social networks, cloud computing and sensors. This data explosion should continue and even accelerate. To find all of the documents responding to a query, an information retrieval system must determine whether or not the terms of each document correspond to those of the user's query. Most systems are based on the assumption that the terms extracted from the documents are certain and precise. However, there are data for which this assumption does not hold. The main objective of the work carried out in this article is to propose a new model for indexing data services in an uncertain environment, meaning that the data they contain may be untrustworthy, or may contradict another data source, due to failures in collection or integration mechanisms. The solution we propose is characterized by its intelligent side, ensured by an efficient fuzzy module capable of reasoning in an environment of uncertain and imprecise data. Concretely, our approach is articulated around two main phases: (i) the first ensures the processing of uncertain data in a textual document, and (ii) the second determines a new method of uncertain syntactic indexing. We carried out a series of experiments on different standard test collections in order to evaluate our solution against the approaches studied in the literature, using the standard performance measures precision, recall and F-measure. The results show that our solution is more effective and more efficient than the main approaches proposed in the literature.
The results show that the proposed approach provides an efficient Big Data indexing solution in an uncertain environment that improves the precision, recall and F-measure measurements. Experimental results show that the proposed uncertain model obtained its best precision, 0.395, and its best recall, 0.254, on the KDD database.[...]
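The evaluation above relies on the standard IR metrics of precision, recall and F-measure. As a reminder of how these are computed over a retrieved set versus a relevant (ground-truth) set, with illustrative document IDs of our own choosing:

```python
def precision_recall_f1(retrieved: set, relevant: set) -> tuple[float, float, float]:
    """Standard IR evaluation of a retrieved document set against the
    relevant (ground-truth) set: precision, recall and F-measure."""
    tp = len(retrieved & relevant)                 # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)          # harmonic mean
    return precision, recall, f1

# Toy example: 4 documents retrieved, 3 actually relevant, 2 in common.
retrieved = {"d1", "d2", "d3", "d5"}
relevant = {"d1", "d3", "d4"}
p, r, f = precision_recall_f1(retrieved, relevant)
print(f"P={p:.3f} R={r:.3f} F={f:.3f}")  # → P=0.500 R=0.667 F=0.571
```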
DOI: https://doi.org/10.5815/ijisa.2022.01.01, Pub. Date: 8 Feb. 2022
With the constant increase of data induced by stakeholders throughout a product's life cycle, companies tend to rely on project management tools for guidance. Project-oriented business intelligence approaches help the team communicate better, plan their next steps, keep an overview of the current project state and take concrete actions based on the provided forecasts. The spread of agile working mindsets is making these tools even more useful. They set a basic understanding of how the project should be running, so that the implementation is easy to follow and easy to use.
In this paper, we offer a model that makes project management accessible from different software development tools and different data sources. Our model provides project data analysis to improve four aspects: (i) collaboration, which includes team communication and a team dashboard, and also optimizes document sharing, deadlines and status updates; (ii) planning, which allows the tasks described in the software to be used and made visible, and involves tracking task time to reveal barriers to work that some members might be facing without reporting them; (iii) forecasting, to predict future results from behavioral data, which allows concrete measures to be taken; and (iv) documentation, to provide reports that summarize all relevant project information, such as time spent on tasks, and charts that track the status of the project. The experimental study carried out on various data collections with our model and with the main models studied in the literature, as well as the analysis of the results obtained, clearly shows the limits of these models and confirms the performance of our model as well as its efficiency in terms of precision, recall and robustness.
DOI: https://doi.org/10.5815/ijisa.2021.06.01, Pub. Date: 8 Dec. 2021
Since its emergence, cloud computing has continued to evolve thanks to its ability to present computing as consumable, pay-per-use services and to the possibilities of resource scaling that it offers according to clients' needs. Models and appropriate schemes for resource scaling through consolidation services have been considerably investigated, mainly at the infrastructure level, to optimize costs and energy consumption. Consolidation efforts at the SaaS level remain very limited, especially when proprietary software is involved. In order to fill this gap and provide software licenses elastically, with respect to economic and energy-aware considerations in the context of distributed cloud computing systems, this work deals with Dynamic Software Consolidation in Commercial Cloud data centers (DS³C). Our solution is based on heuristic algorithms and reallocates software licenses at runtime by determining the optimal amount of resources required for their execution and freeing unused machines. Simulation results showed the efficiency of our solution, with 68.85% energy savings and 80.01% cost savings. It freed up to 75% of physical machines and 76.5% of virtual machines, and proved its scalability in terms of average execution time while varying the number of software instances and the number of licenses alternately.
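The DS³C heuristics themselves are not given in the abstract; the general idea of consolidating workloads onto fewer machines so idle ones can be freed can, however, be illustrated with a classic first-fit-decreasing packing sketch. The capacities and demands below are invented values, not the paper's algorithm or data:

```python
# Illustrative sketch of a consolidation heuristic: pack software
# instances (by normalized resource demand) onto as few machines as
# possible with first-fit decreasing, so the remaining machines can be
# powered off. Not the paper's DS3C algorithm, only the general idea.

def first_fit_decreasing(demands: list[float], capacity: float) -> list[list[float]]:
    """Place each demand on the first machine with enough free
    capacity, opening a new machine only when none fits."""
    machines: list[list[float]] = []
    for d in sorted(demands, reverse=True):   # largest demands first
        for m in machines:
            if sum(m) + d <= capacity:
                m.append(d)
                break
        else:                                  # no existing machine fits
            machines.append([d])
    return machines

# Ten software instances that would naively occupy ten machines...
demands = [0.5, 0.2, 0.7, 0.3, 0.1, 0.4, 0.6, 0.2, 0.3, 0.5]
packed = first_fit_decreasing(demands, capacity=1.0)
print(f"{len(demands)} instances consolidated onto {len(packed)} machines")
```

Here ten instances fit on four unit-capacity machines, freeing the other six; a production consolidator would additionally account for license constraints, migration costs and runtime reallocation, as the paper describes.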