IJISA Vol. 5, No. 3, Feb. 2013
Voltage stability is a major concern in the planning and operation of power systems. It is well known that voltage instability and collapse have led to major system failures. Modern transmission networks are more heavily loaded than ever before to meet growing demand, and one of the major consequences of such a stressed system is voltage instability or collapse. This paper presents maximum loadability identification for a load bus in a power transmission network. In this study, the Fast Voltage Stability Index (FVSI) is used as the indicator of the maximum loadability, termed Qmax. In this technique, the reactive power loading at a particular load bus is increased gradually until the FVSI approaches unity; a critical value of FVSI is therefore set as the maximum loadability point. This value keeps the system from entering the voltage-collapse region. The main purpose of maximum loadability assessment is to determine the maximum allowable load so as to avoid voltage collapse, which is important in power system planning and risk assessment.
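As a hedged illustration of the loading sweep described above, the sketch below uses the FVSI expression common in the voltage-stability literature, FVSI = 4·Z²·Qj / (Vi²·X); the per-unit line parameters and the 0.95 critical threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an FVSI-based maximum loadability sweep.
# Line data (Z, X, Vi) and the critical threshold are assumptions.

def fvsi(z, x, vi, qj):
    """FVSI for one transmission line; values close to 1 indicate
    proximity to voltage collapse."""
    return 4.0 * z**2 * qj / (vi**2 * x)

def max_loadability(z, x, vi, critical=0.95, dq=0.01):
    """Increase reactive loading Qj step by step until the FVSI would
    reach the critical value; the last stable Qj is taken as Qmax."""
    qj = 0.0
    while fvsi(z, x, vi, qj + dq) < critical:
        qj += dq
    return qj
```

Stopping at a critical value below unity (rather than at unity itself) is what keeps the planned loading safely outside the voltage-collapse region.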
The most important task in security analysis is identifying the critical contingencies from a large list of credible contingencies and ranking them according to their severity. The voltage stability condition of a power system can be characterized by voltage stability indices. This paper presents a fuzzy approach for ranking contingencies using a composite index based on parallel-operated fuzzy inference engines. The Line Flow index (L.F) and the bus Voltage Magnitude (VM) of the load buses are expressed in fuzzy set notation and then evaluated with fuzzy rules to obtain an overall Criticality Index. Contingencies are ranked in decreasing order of Criticality Index, and the resulting ranking is compared with that obtained by the FVSI method.
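A hedged sketch of the composite-index idea: fuzzify a line-flow index and a bus voltage magnitude, combine them with a simple rule, and rank contingencies by the result. The membership breakpoints and the single OR-rule are illustrative assumptions, not the paper's rule base.

```python
# Toy fuzzy criticality index: a contingency is critical when the line
# flow index is high OR the bus voltage is low (max-OR aggregation).
# Ramp breakpoints (0.5, 0.95 p.u., 0.15) are assumptions.

def criticality(lf, vm):
    """Severity in [0, 1] from line flow index lf and voltage vm (p.u.)."""
    lf_high = min(1.0, max(0.0, (lf - 0.5) / 0.5))    # 0 at 0.5, 1 at 1.0
    vm_low  = min(1.0, max(0.0, (0.95 - vm) / 0.15))  # 0 at 0.95 p.u.
    return max(lf_high, vm_low)

# Ranking: sort contingencies by decreasing criticality index.
contingencies = {"line 1-2": (0.9, 0.92), "line 2-3": (0.4, 0.97)}
ranked = sorted(contingencies, key=lambda c: -criticality(*contingencies[c]))
```

A real Mamdani engine would use overlapping membership sets and many rules per input pair; the max-OR here only conveys the shape of the composite index.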
This paper deals with an automatic document synthesis system. Our approach is based on a prior formal description of the semantics of the main elements of the synthesis system (the document, the reader, and the reader's request). Semantic capture rests on an ontology definition that is specified formally using Description Logics (DL). DL inference techniques combined with production rules are then used to compute a document synthesis; moreover, DL inference techniques are used to reason about each component.
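As an illustration of the kind of TBox axioms such an ontology might contain (the concept and role names below are assumptions for illustration, not taken from the paper):

```latex
% Illustrative DL axioms for a document-synthesis ontology
\mathit{Report} \sqsubseteq \mathit{Document} \\
\mathit{Document} \sqsubseteq \exists \mathit{hasTopic}.\mathit{Topic} \\
\mathit{RelevantDoc} \equiv \mathit{Document} \sqcap \exists \mathit{hasTopic}.\mathit{RequestedTopic}
```

A standard DL reasoner can then classify each document against the reader's request by subsumption checking, which is the style of inference the approach relies on.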
Mobile robot motion control reduces to DC motor motion control, possibly including a gear system. The simplest and most widespread approach to mobile robot motion control is the differential drive, which consists of two in-line wheels, each driven by its own DC motor. Both motors are independently powered, so the desired movements depend on how the two motors are commanded. The design, modeling, and control of a mechatronic mobile robotic system are presented in this paper. The developed robotic system is intended for research purposes as well as for the educational process. The model of the proposed mobile robot was created and verified using MATLAB-Simulink software.
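The standard differential-drive kinematics behind the description above can be sketched as follows (these are the textbook equations, not the paper's Simulink model; the wheelbase value in the usage is an assumption):

```python
# Differential-drive kinematics: wheel speeds v_r, v_l and wheelbase L
# give the chassis linear velocity v and angular velocity omega.
import math

def body_velocities(v_r, v_l, wheelbase):
    v = (v_r + v_l) / 2.0            # forward speed of the chassis midpoint
    omega = (v_r - v_l) / wheelbase  # positive = counter-clockwise turn
    return v, omega

def step_pose(x, y, theta, v_r, v_l, wheelbase, dt):
    """Euler-integrate the unicycle model one time step."""
    v, omega = body_velocities(v_r, v_l, wheelbase)
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

Equal wheel speeds drive the robot straight; opposite speeds spin it in place, which is why the motion depends entirely on how the two motors are commanded.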
This paper presents a new approach to overcoming some of the best-known disadvantages of the classical Kmeans clustering algorithm: the random initialization of prototypes and the requirement of a predefined number of clusters. Randomly initialized prototypes often cause the algorithm to converge to a local rather than the global optimum, so Kmeans must be run many times to obtain a satisfactory result. The proposed algorithms are based on a novel definition of the density of data points derived from the k-nearest-neighbor method. With this definition we detect noise and outliers, which affect Kmeans strongly, and obtain good initial prototypes in a single run with automatic determination of the number of clusters K. This algorithm is referred to as Efficient Initialization of Kmeans (EI-Kmeans). Even so, Kmeans remains limited to clusters with convex shapes and similar sizes and densities, so we also develop a new clustering algorithm, called Efficient Data Clustering Algorithm (EDCA), that uses the same density definition. The results show that the proposed algorithms improve data clustering with Kmeans, and that EDCA can detect clusters with different non-convex shapes, sizes, and densities.
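A hedged sketch of the kNN-density initialization idea: the paper's exact formulas are not reproduced here; this version takes density as the inverse of the mean distance to the k nearest neighbours and greedily picks high-density points that are well separated as initial prototypes.

```python
# kNN-density-based prototype initialization (illustrative, not EI-Kmeans
# verbatim). min_sep controls how far apart prototypes must be.
import math

def knn_density(points, k):
    """Inverse mean distance to the k nearest neighbours of each point."""
    dens = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        dens.append(k / (sum(d[:k]) + 1e-12))
    return dens

def initial_prototypes(points, k_neighbours, n_clusters, min_sep):
    dens = knn_density(points, k_neighbours)
    order = sorted(range(len(points)), key=lambda i: -dens[i])
    chosen = []
    for i in order:  # densest points first, skipping near-duplicates
        if all(math.dist(points[i], c) >= min_sep for c in chosen):
            chosen.append(points[i])
        if len(chosen) == n_clusters:
            break
    return chosen
```

Low-density points are naturally visited last, which is how such a scheme sidelines noise and outliers before Kmeans ever runs.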
In this paper a reduced multiplicative tolerance, a measure of sensitivity analysis in multi-objective linear programming (MOLP), is presented. Using this new measure, a method for ranking the set of efficient extreme solutions is proposed: the solutions are ranked by their values of the reduced tolerance. The approach can be applied to many MOLP problems in which sensitivity analysis is important to the decision maker. Applications of the methodology to a market model and the transportation problem are shown.
Since 1999, cancer has been the leading cause of death among people under 85 years of age, and its eradication has long been a goal of scientists and physicians. Cancer is a disease in which abnormal cells divide uncontrollably; these cells can invade and destroy normal body cells, which is life-threatening. One of the most important factors in effective cancer treatment is detecting cancerous tumour cells at an early stage. Nanotechnology brings new hope to cancer detection research, owing to nanoparticles' unique physical and chemical properties, which give them the potential to be used in the detection and monitoring of cancer. One such approach is quantum-dot (QD) based detection, which is rapid, easy, and economical, enabling quick point-of-care screening for cancer markers; QDs have unique properties that make them ideal for detecting tumours. Gold nanoparticles (Au-NPs), on the other hand, have been in the bio-imaging spotlight because of their special optical properties: strong surface-plasmon-enhanced absorption and scattering have allowed them to emerge as powerful imaging labels and contrast agents. This paper presents a comparative study of both methods. Compared with quantum dots, gold nanoparticles are more than 200 times brighter on a particle-to-particle basis, although they are about 60 times larger by volume. Gold nanoparticles in suspension thus offer advantages over quantum dots in that the gold appears to be non-toxic and the particles produce a brighter, sharper signal.
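The brightness and size figures quoted above imply a per-volume comparison worth making explicit:

```python
# Worked arithmetic from the figures in the abstract: Au-NPs are ~200x
# brighter per particle but ~60x larger by volume than QDs, so per unit
# volume they remain roughly 200/60 (about 3.3x) brighter.
brightness_per_particle_ratio = 200   # Au-NP vs QD, from the abstract
volume_ratio = 60                     # Au-NP vs QD, from the abstract
brightness_per_volume_ratio = brightness_per_particle_ratio / volume_ratio
```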
We first give a brief history of the development of fuzzy entropy and then introduce new measures for the entropy of fuzzy sets in continuous cases. Our main purpose in this article is to show that the entropy of a fuzzy number depends strongly on the selection of intervals. Another observation from the cases discussed is that the entropy of triangular fuzzy numbers is the same for the same choice of interval length, whereas this property does not hold for non-triangular fuzzy numbers.
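The interval dependence can be demonstrated with the classical De Luca-Termini fuzzy entropy (one standard measure; the paper's new continuous measures differ). The sketch below discretizes a triangular fuzzy number on a grid and shows that the summed entropy changes with the grid spacing, while two triangular numbers sampled with the same spacing agree.

```python
# De Luca-Termini style entropy of a discretized triangular fuzzy number.
import math

def shannon_term(mu):
    """Per-point fuzziness term; zero for crisp memberships 0 and 1."""
    if mu in (0.0, 1.0):
        return 0.0
    return -(mu * math.log(mu) + (1 - mu) * math.log(1 - mu))

def tri_mu(x, a, b, c):
    """Membership of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def entropy(a, b, c, h):
    """Sum the entropy terms over a grid of spacing h on [a, c]."""
    n = int((c - a) / h)
    return sum(shannon_term(tri_mu(a + i * h, a, b, c)) for i in range(n + 1))
```

Halving the spacing roughly doubles the number of contributing grid points, so the unnormalized sum grows, which is exactly the sensitivity to interval choice the article highlights.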
Software effort estimation is crucial in software project planning, and accurate estimation is critical to project success. Many software prediction models exist, and all of them use software size as a key factor in estimating effort. The Function Points size metric is a popular method for estimating and measuring the size of application software based on its functionality from the user's point of view. Despite great advances in software development, the weight values assigned to the standard FP count have remained the same. This paper presents the concept of calibrating the function point weights using a Type-2 fuzzy logic framework, with the aim of estimating a more accurate software size for various software applications and improving the effort estimation of software projects. Evaluation experiments have shown the framework to be promising.
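For context, the fixed weights being calibrated are the standard IFPUG table; a crisp unadjusted function point (UFP) count looks like the sketch below, with the paper replacing the constant weights by Type-2 fuzzy calibrated ones.

```python
# Crisp unadjusted function point count with the standard IFPUG weights
# (simple / average / complex per component type).
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}
COMPLEXITY = {"simple": 0, "average": 1, "complex": 2}

def unadjusted_fp(counts):
    """counts: {(component, complexity): n} -> UFP total."""
    return sum(n * WEIGHTS[comp][COMPLEXITY[cx]]
               for (comp, cx), n in counts.items())
```

For example, ten average external inputs and two complex internal logical files give 10*4 + 2*15 = 70 UFP; the calibration question is whether those 4 and 15 are the right weights for a given application domain.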
With the drastic increase in internet traffic over the last few years, driven by the growing number of internet users, IP traffic classification has gained significant importance for the research community, for internet service providers optimizing their network performance, and for governmental intelligence organizations. Traditional IP traffic classification techniques, such as port-number-based and payload-based direct packet inspection, are now rarely used because dynamic port numbers appear in packet headers instead of well-known ones and because various cryptographic techniques inhibit inspection of the packet payload. The current trend is to use machine learning (ML) techniques for IP traffic classification. In this paper, a real-time internet traffic dataset has been developed using a packet capturing tool with a 2-second capture duration, and further datasets have been derived from it by reducing the number of features with Correlation-based and Consistency-based Feature Selection (FS) algorithms. Five ML algorithms, MLP, RBF, C4.5, Bayes Net, and Naïve Bayes, are then employed for IP traffic classification on these datasets. The experimental analysis shows that Bayes Net is an effective ML technique for near-real-time and online IP traffic classification, with a reduced packet capture duration and, using the Correlation-based FS algorithm, a reduced number of features characterizing each application sample.
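To make the classification setup concrete, here is a minimal Gaussian Naïve Bayes classifier (one of the five techniques listed) applied to toy flow features; the feature values and class labels are illustrative assumptions, not samples from the paper's dataset.

```python
# Minimal Gaussian Naive Bayes on per-flow feature vectors.
import math
from collections import defaultdict

def fit(samples):
    """samples: list of (feature_vector, label) -> per-class prior/mean/var."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    model = {}
    for y, xs in by_class.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-6
                 for col, m in zip(zip(*xs), means)]
        model[y] = (math.log(n / len(samples)), means, varis)
    return model

def predict(model, x):
    """Pick the class with the highest log posterior for feature vector x."""
    def log_post(entry):
        prior, means, varis = entry
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
    return max(model, key=lambda y: log_post(model[y]))
```

In the paper's pipeline the feature vectors come from the captured 2-second flows, optionally reduced by the FS algorithms before training.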
Optical technologies are ubiquitous in telecommunications networks and systems, providing multiple wavelength channels of transport at data rates of 2.5 Gbit/s to 40 Gbit/s over single fiber-optic cables. Market pressures continue to drive up the number of wavelength channels per fiber and the data rate per channel, and this trend will continue for many years as electronic commerce grows and enterprises demand higher, more reliable bandwidth over long distances. Electronic commerce, in turn, is driving the growth curves for single-processor and multiprocessor performance in database transaction and Web-based servers. Ironically, the insatiable appetite for enterprise network bandwidth, which has driven up the volume and pushed down the price of optical components for telecommunications, is simultaneously stressing computer-system bandwidth, increasing the need for new interconnection schemes and providing, for the first time, commercial opportunities for optical components in computer systems. The evolution of integrated-circuit technology is causing system designs to move toward communication-based architectures. We present the current trends in the capacity of optical interconnection data-transmission links in high-performance optical communication and computing systems over a wide range of the affecting parameters.
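The rates quoted above translate into aggregate fiber capacity by simple multiplication; the channel count below is an illustrative assumption for a dense WDM link, not a figure from the paper.

```python
# Aggregate capacity of a WDM link: N wavelength channels at R Gbit/s
# each carry N*R Gbit/s per fiber. 80 channels is an assumed example.
channels = 80
rate_gbps = 40                          # upper end of the quoted range
aggregate_gbps = channels * rate_gbps   # 3200 Gbit/s = 3.2 Tbit/s
```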