Workplace: KIIT University, Bhubaneswar, India
Research Interests: Human-Computer Interaction, Computer systems and computational processes, Image Compression, Image Manipulation, Image Processing
Siddharth Swarup Rautaray, PhD (Computer Science), Member IEEE, is a Professor at the School of Computer Engineering, KIIT University, Bhubaneswar. He has more than a decade of teaching and research experience. Dr Rautaray has published a number of research papers in peer-reviewed international journals and conferences. His areas of interest are Image Processing, Data Analytics, and Human-Computer Interaction. He can be reached at firstname.lastname@example.org.
DOI: https://doi.org/10.5815/ijieeb.2019.06.02, Pub. Date: 8 Nov. 2019
The heart is the most important organ of the human body. Any abnormality in the heart can result in heart-related illness, such as obstruction of the blood vessels, which causes heart attack, chest pain, or stroke. The main goal is the care and improvement of health through the identification, prevention, and treatment of disease. To this end, various prediction analysis methods are used, whose job is to identify illness at a preliminary phase so that heart disease can be prevented and treated. This paper emphasizes the detection and care of heart disease at an early phase, so that it leads to a successful cure. In this paper, diverse data mining classification methods, namely decision tree, naive Bayes, support vector machine, and k-NN classification, are used for the detection and prevention of the disease.[...] Read more.
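As a minimal sketch of one of the classifiers the abstract names, the following pure-Python k-NN classifier votes among the nearest labelled training points. The feature names and toy values (blood pressure, cholesterol) are illustrative assumptions, not data from the paper.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical data: (resting blood pressure, cholesterol) -> 1 = disease, 0 = healthy
train = [(120, 180), (130, 200), (160, 280), (170, 300)]
labels = [0, 0, 1, 1]
print(knn_predict(train, labels, (165, 290)))  # -> 1 (closest neighbours are diseased)
```

The same train/predict interface applies to the other classifiers the paper compares; only the decision rule changes.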
DOI: https://doi.org/10.5815/ijieeb.2019.05.02, Pub. Date: 8 Sep. 2019
This paper studies a frequent itemset mining approach for finding the most important attributes, in order to overcome existing problems in extracting relevant information from huge datasets with data mining approaches. First, a state-of-the-art diagram for prediction is designed, and data mining classifiers such as naive Bayes, support vector machine, decision tree, and k-nearest neighbour are compared; a methodology with new techniques is then proposed. Moreover, a new attribute-filtering association frequent itemset mining algorithm is presented. Then, after analyzing the feasibility of the proposed algorithm, the data mining classifiers are compared. As a result, SVM produces the best result among all the classifiers, both with and without attribute filtering, and the attribute-filtering algorithm enhances the accuracy of all the other classifiers.[...] Read more.
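To illustrate the frequent-itemset idea the abstract builds on, here is a naive pure-Python miner that counts every candidate itemset and keeps those meeting a minimum support. This brute-force sketch is my own illustration (real miners prune candidates, Apriori-style); the attribute names are hypothetical.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_len=3):
    """Naively count every itemset up to max_len items; keep those with
    support (occurrence count) >= min_support."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for r in range(1, max_len + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

# Hypothetical attribute "transactions" from patient records
tx = [["age", "bp", "chol"], ["age", "bp"], ["age", "chol"], ["bp", "chol"]]
print(frequent_itemsets(tx, min_support=3))  # {('age',): 3, ('bp',): 3, ('chol',): 3}
```

Attributes appearing in frequent itemsets are the ones an attribute-filtering step would retain before classification.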
DOI: https://doi.org/10.5815/ijitcs.2018.09.03, Pub. Date: 8 Sep. 2019
Private organizations such as offices, libraries, and hospitals began using computers for computerized databases once computers became a cost-effective device. E. F. Codd then introduced the relational database model, i.e. the conventional database, which can be enhanced into a temporal database. Conventional or traditional databases are structured in nature, but we do not always have pre-organized data. We have to deal with many different types of data, and that data is huge in volume, i.e. Big Data. Big data mostly draws on internal data sources such as transactions, log data, and emails. From these sources, highly enriched information is extracted by means of text data mining, or text analytics. Entity extraction is a part of text analysis: an entity can be anything, such as people, companies, places, money, links, or phone numbers. Text documents, blog posts, and long articles contain a large number of entities in many forms, and extracting those entities to gain valuable information is the main goal. Extraction of entities is possible with natural language processing (NLP) in the R language. In this research work we briefly discuss the text analysis process and how to extract entities with different big data tools.[...] Read more.
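As a small illustration of the entity types the abstract lists (links, money, phone numbers, and so on), here is a regex-based extractor. The paper itself works with NLP tooling in R; this pure-Python sketch and its patterns are my own simplified assumption, not the paper's method.

```python
import re

# Deliberately simple patterns -- real extractors need far more robust rules.
PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "url":   r"https?://[\w./-]+",
    "phone": r"\+?\d[\d\-\s]{7,}\d",
    "money": r"\$\d[\d,]*(?:\.\d+)?",
}

def extract_entities(text):
    """Return a dict mapping each entity type to the matches found in `text`."""
    return {name: re.findall(pat, text) for name, pat in PATTERNS.items()}

doc = "Contact support@example.com or visit https://example.com; dues are $1,200."
print(extract_entities(doc))
```

Rule-based extraction like this handles well-formatted entities; names of people and places are where statistical NLP models become necessary.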
DOI: https://doi.org/10.5815/ijieeb.2018.04.06, Pub. Date: 8 Jul. 2018
As the world is getting digitized, the speed at which data flows in from different sources in different formats makes it impossible for traditional systems to compute and analyse this kind of big data; for this, big data tools such as Hadoop, an open-source framework that stores and computes data in a distributed environment, are used. In the last few years, developing big data applications has become increasingly important, and many organizations depend on knowledge extracted from huge amounts of data. However, traditional data techniques show reduced performance and accuracy, slow responsiveness, and a lack of scalability. To solve the complicated big data problem, much work has been carried out, and as a result various technologies have been developed. This research work is a survey of recent optimization technologies and their applications developed for big data. It aims to help in choosing the right combination of various big data technologies according to requirements.[...] Read more.
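The distributed storage-and-compute model that Hadoop implements can be sketched, at its smallest, as the classic word-count job: mappers emit key-value pairs per input split, a shuffle groups them by key, and reducers aggregate. This in-process Python simulation is only an illustration of the programming model, not Hadoop's actual API.

```python
from collections import defaultdict

def map_phase(document):
    """Mapper: emit (word, 1) for every word in one input split."""
    for word in document.lower().split():
        yield word, 1

def reduce_phase(grouped):
    """Reducer: sum the counts collected for each key."""
    return {key: sum(vals) for key, vals in grouped.items()}

def run_job(documents):
    # Shuffle/sort: group every mapper's (key, value) pairs by key.
    grouped = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            grouped[key].append(value)
    return reduce_phase(grouped)

print(run_job(["big data big tools", "data tools data"]))
# {'big': 2, 'data': 3, 'tools': 2}
```

In a real cluster, the map and reduce phases run in parallel on many nodes, which is what lets the model scale past a single machine.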
DOI: https://doi.org/10.5815/ijmecs.2018.02.04, Pub. Date: 8 Feb. 2018
Big Data is an accumulation of data sets that are abundant and intricate in character. They comprise both structured and unstructured data that grow so large, so quickly, that they cannot be handled by classical relational database systems or current analytical tools, which do not scale linearly and depend on a predefined schema. New data is being introduced all the time, and big data analytics can help fill data gaps and address large-scale problems such as India's. Health care is the maintenance or improvement of health through the prevention, diagnosis, and treatment of disease, ill health, injury, and other physical and mental impairments in people. Health care is delivered by health professionals and allied health experts: specialists, physician associates, midwives, nurses, and practitioners of medicine, pharmacy, psychology, and other health fields. This paper focuses on providing information about big data analytics and its application in the medical domain. It further includes an introduction, challenging aspects and concerns, big data analytics in use, technical specifications, research applications, industry applications, and future applications.[...] Read more.
DOI: https://doi.org/10.5815/ijieeb.2017.05.03, Pub. Date: 8 Sep. 2017
A huge amount of data is created every day by various organisations and users all over the world. Structured, semi-structured, and unstructured data is being created at a very rapid speed from heterogeneous sources such as reviews, ratings, feedback, and shopping details; this is termed Big Data. The data generated by different users shares many common patterns, which can be filtered and analysed to give recommendations about the products, goods, or services in which a user is interested. Recommendation systems are software tools used to give suggestions to users on the basis of their requirements. Today, however, no system is available for advising a person on how to use their money for saving, where to invest, and how to manage expenditure; the few consulting systems that provide investment and saving tips are not very effective and are quite complex. This paper proposes a collaborative filtering based recommender system for financial analysis of saving, expenditure, and investment using Apache Hadoop and Apache Mahout. The advantage of the proposed recommender system is that it gives a person better suggestions for the saving, expenditure, and investment of their salary, which in turn maximises their wealth. Due to the enormous amount of data involved, the Apache Hadoop framework is used for distributed processing, while collaborative filtering and Apache Mahout are used for analysing the data and implementing the recommender system.[...] Read more.
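The collaborative-filtering idea behind such a recommender can be sketched in a few lines: score each peer by similarity to the target user, then recommend a similarity-weighted average of the peers' allocations. The profile format (saving/expenditure/investment percentages) and all numbers below are hypothetical; the paper's actual implementation uses Apache Mahout.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(target, peers):
    """Blend peer allocations, weighting each peer by similarity to the target."""
    weighted = [(cosine(target, vec), vec) for vec in peers]
    total = sum(w for w, _ in weighted)
    return [sum(w * vec[i] for w, vec in weighted) / total
            for i in range(len(target))]

# Hypothetical (saving %, expenditure %, investment %) profiles of similar earners
peers = [[30, 50, 20], [25, 55, 20], [35, 45, 20]]
me = [10, 80, 10]
print([round(x, 1) for x in recommend(me, peers)])
```

Because each peer profile sums to 100%, the blended recommendation does too; the target user is nudged toward the allocation of people with similar spending patterns.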
DOI: https://doi.org/10.5815/ijisa.2017.04.07, Pub. Date: 8 Apr. 2017
Data is one of the most vital aspects of today's activities, and a vast amount of it is generated every second. The rapid growth of data in recent times across different domains requires an intelligent data analysis tool that can satisfy the need to analyse huge amounts of data. The MapReduce framework is designed to process large amounts of data and to support effective decision making; it consists of two important tasks, named map and reduce. Optimization is the act of achieving the best possible result under given circumstances, and the goal of MapReduce optimization is to minimize execution time and maximize system performance. This survey paper compares different optimization techniques used in the MapReduce framework and in big data analytics. Various sources of big data generation are summarized based on various applications of big data. The wide range of application domains for big data analytics stems from its characteristics of volume, velocity, variety, veracity, and value. These characteristics arise from the inclusion of structured, semi-structured, and unstructured data, for which a new set of tools such as NoSQL, MapReduce, and Hadoop is required. The presented survey provides an insight into the fundamentals of big data analytics, but its main aim is an analysis of the various optimization techniques used in the MapReduce framework and big data analytics.[...] Read more.
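One concrete MapReduce optimization, which I use here purely as an illustration of the kind of technique such a survey covers, is the combiner: aggregating each mapper's output locally so fewer pairs cross the network in the shuffle. A minimal simulation:

```python
from collections import Counter

def mapper(split):
    """Emit a (word, 1) pair for every word in one input split."""
    for word in split.split():
        yield word, 1

def combiner(pairs):
    """Local aggregation on the mapper node: at most one (word, n) pair per
    distinct word per split, shrinking the data shuffled across the network."""
    local = Counter()
    for word, n in pairs:
        local[word] += n
    return local.items()

splits = ["map map reduce", "reduce reduce map"]

# Without a combiner, every raw (word, 1) pair is shuffled; with one, only
# the locally summed pairs are.
naive = sum(1 for s in splits for _ in mapper(s))
combined = sum(1 for s in splits for _ in combiner(mapper(s)))
print(naive, combined)  # 6 4
```

The reducers compute the same final counts either way; the optimization changes only how much intermediate data moves, which is where much of a MapReduce job's execution time goes.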
DOI: https://doi.org/10.5815/ijmecs.2014.08.05, Pub. Date: 8 Aug. 2014
The popularity of and demand for image processing are increasing due to its immense number of applications in various fields, most of them related to biometrics, such as face recognition, fingerprint recognition, iris scanning, and speech recognition. Among these, face detection is a very powerful tool for video surveillance, human-computer interfaces, face recognition, and image database management, and a number of different works address this subject. Face recognition is a rapidly evolving technology that has been widely used in forensics, for example in criminal identification, secured access, and prison security. In this paper we review survey and technical papers in this field and list techniques such as linear discriminant analysis, Viola-Jones classification, AdaBoost learning, and curvature analysis, discussing their advantages and disadvantages. We also describe some of the detection and recognition algorithms, mention application domains, and outline the challenges in this field. We propose a classification of detection techniques and discuss the recognition methods as well.[...] Read more.
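A core building block of the Viola-Jones detector mentioned above is the integral image (summed-area table), which lets any rectangular pixel sum, and hence any Haar-like feature, be evaluated in constant time. A minimal sketch on a tiny hypothetical image:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y and cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over img[top:bottom, left:right] in O(1) via four table lookups."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 1 + 2 + 4 + 5 = 12
```

A Haar-like feature is just the difference of two or more such rectangle sums, which is why the detector can scan thousands of candidate windows quickly.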