IJARCCE adheres to the suggestive parameters outlined by the University Grants Commission (UGC) for peer-reviewed journals, upholding high standards of research quality, ethical publishing, and academic excellence.
Facial Image Analysis: Recent Trends and Approaches
Soumya C S, Dr Thippeswamy G
|
Analysing the Data from Twitter using R
K. Sailaja Kumar, D. Evangelin Geetha, T. V. Sai Manoj
|
A Literature Survey on Computer-Aided Diagnosis in Detection and Classification of Polyp in Colon Cancer using CT Colonography
Akshay Godkhindi, Dr Dayananda P, Dr Sowmyarani C N
|
Comparative Study of Load Balancing Algorithms for Best-effort Applications in Cloud.
Chethan Venkatesh, Shiva Murthy .G
|
Predictive Analytics on Aviation Data
Arun Kumar R, Balaji S, Sai Manoj T V, Sailaja Kumar K, Evangelin Geetha
|
Early Performance Evaluation of Data Warehouse Systems: From UML to LQN models
Dr. Madhu Bhan, Dr. K. Rajanikanth, Dr. T.V. Suresh Kumar
|
Android Based Smart Door Locking System with Multi User and Multi Level Functionalities
Dr. Manish Kumar, Dr. M Hanumanthappa, Dr. T V Suresh Kumar, Mr. Amit Kumar Ojha
|
Study on Three Dimensional (3D) Password Authentication system
Nayana S, Dr. Niranjanamurthy M, Dr. Dharmendra Chahar
|
The Study of Big Data Analytics in E-Commerce
Pavithra B, Dr. Niranjanmurthy M, Kamal Shaker J, Martien Sylvester Mani F
|
Comparative Study of Data Mining Techniques in Crop Yield Prediction
Perpetua Noronha, Divya .J, Shruthi .B.S
|
Computed Tomography Images for Computer Aided Diagnosis of Lungs Cancer Feature Extraction
Shankaragowda B.B, Dr. Siddappa M, Dr. Suresha M
|
Security Enhancement on Cloud using Identity Based Encryption (IBE)
Shruthi N, Kumar P K, Swamy L N, Sukruth Gowda M A
|
Techniques of Semantic Analysis for Natural Language Processing – A Detailed Survey
Rajani S, M. Hanumanthappa
|
An Efficient Technique for Optimization of Test Cases with the Aid of HNN-MCS for Assessing Software Reliability
Lakshminarayana P, Dr T V Suresh Kumar
|
An Introduction on Mobile Malware and its Prevention
Shruti Prabhuli Ghatnatti, Spoorti A, Surekha B
|
A Model to Predict Words in the Sentence to Identify an Aligned Sequence of Article Blocks in e-Newspaper
Deepa Nagalavi, M. Hanumanthappa
|
Auction using Multi-Agent System
Ajitha S, Maruthi Prasad, T V Suresh Kumar
|
Graph Based Approach for Automatic Text Summarization
Akash Ajampura Natesh, Somaiah Thimmaiah Balekuttira, Annapurna P Patil
Abstract:
Research in automatic text summarization systems has gained momentum in recent times, mostly due to advances in natural language processing libraries and techniques. In this work, we propose a graph-based approach to automatic text summarization. The approach computes how closely the significant words in a sentence are related to each other; this metric in turn weighs the significance of the sentences in the text document. The NLTK library for Python is used to build the automatic text summarization system based on this approach. The results obtained show that this technique is effective in producing high-quality summaries.
Keywords:
Automatic Text Summarization, Extractive Summarization, Natural Language Processing, NLTK Library.
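The sentence-scoring idea described in the abstract can be illustrated with a minimal pure-Python sketch. The paper itself builds on NLTK; the tiny stopword list and the overlap-based scoring rule below are simplified assumptions, not the authors' actual method:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "in", "to", "and", "for"}

def significant_words(sentence):
    # Keep lowercase alphabetic tokens that are not stopwords.
    return [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]

def summarize(text, k=2):
    """Score each sentence by how strongly its significant words are
    shared with the rest of the document, then keep the top-k
    sentences in their original order (extractive summary)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    doc_freq = Counter(w for s in sentences for w in set(significant_words(s)))
    def score(s):
        words = significant_words(s)
        if not words:
            return 0.0
        # Sentences sharing many significant words with other sentences rank higher.
        return sum(doc_freq[w] - 1 for w in words) / len(words)
    ranked = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in ranked]

doc = ("Text summarization selects key sentences. "
       "Graphs model relations between sentences. "
       "Key sentences share significant words. "
       "My cat sleeps all day.")
summary = summarize(doc, k=2)
```

The off-topic sentence gets a zero score because none of its words recur elsewhere, so it never enters the summary.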
Abstract:
A biometric student attendance system increases the efficiency of the process of taking student attendance. This paper presents a simple and portable approach to student attendance in the form of an Internet of Things (IoT) based system that records attendance using a fingerprint-based biometric scanner and stores the records securely in the cloud. The system aims to automate the cumbersome process of manually taking and storing student attendance records. It will also prevent proxy attendance, thus increasing the reliability of attendance records. The records are securely stored and can be reliably retrieved whenever required by the teacher.
Keywords:
Biometric, Fingerprint, IoT, Fingerprint Scanner, Attendance.
A Reliable Authentication with Graphical Password for Cloud Data
Shivanna K, Dr. Prabhu Deva S
Abstract:
In today's cloud computing technology, protecting data from unauthorized access is a major issue. Traditional authentication techniques based on textual passwords are widely popular, but passwords consisting of alphanumeric characters have their own drawbacks, being vulnerable to shoulder surfing and dictionary attacks. To counter such attacks, graphical passwords have been introduced, as they are more memorable and easier to use. In this paper, we propose reliable authentication using graphical passwords for securing cloud data. We provide a platform where users click on points in images as a password rather than typing a group of alphanumeric characters.
Keywords:
Authentication, Cloud Computing, Graphical Password, Click Points.
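A click-points scheme of the kind described above can be sketched as follows. The tolerance radius, coordinates and ordered-match rule are illustrative assumptions, not the paper's actual parameters:

```python
def within_tolerance(click, stored, tol=10):
    # A click matches if it falls within a square tolerance region
    # (in pixels) around the stored point.
    return abs(click[0] - stored[0]) <= tol and abs(click[1] - stored[1]) <= tol

def verify_clicks(attempt, stored_points, tol=10):
    """An attempt succeeds only if every click lands near the
    corresponding registered click point, in order."""
    if len(attempt) != len(stored_points):
        return False
    return all(within_tolerance(c, s, tol) for c, s in zip(attempt, stored_points))

registered = [(120, 85), (40, 200), (310, 150)]   # points chosen at enrollment
```

A real deployment would store only salted hashes of the quantized points rather than raw coordinates.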
Abstract:
Efficient image data storage and a tagged image archive are quintessential in organizations due to the growing volume of images captured during the various events they conduct. Acquired images can be indexed instantly using time stamp, image content and image context by means of object identification and association, face detection and matching, and object feature extraction and identification. Indexing by the aforesaid criteria transforms each image into a feature vector with domain-dependent attributes, which is stored in a relational database along with the image URL to facilitate query-based image retrieval. The domain-specific user query is designed to enable the query engine to extract each key attribute as per the weight associated with it in the proposed model. The efficiency and relevance of an image retrieval system rely on the visualization and presentation of the search results; thus the matching of the query against the feature vectors is implemented by building a hierarchical decision tree.
Keywords:
CBIR, SIFT, QBIR, HOG, ROI, SSIM.
The Grid Based Approach to Find an Optimal Path for an Independent Robot
Jashwanth N B, Sachin Acharya T
Abstract:
The proposed system gives a method to navigate around static obstacles. It avoids the danger of collisions and allows the robot to reach the destination without any crash. The test results obtained with our proposed path-selection algorithm indicate that it is able to reduce collision risk, travel time and travel distance in large dynamic environments.
Keywords:
Grid based approach, Artificial Intelligence, Optimal path for robots, Line Follower.
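Collision-free path planning on a grid, as described above, is commonly done with a shortest-path search. The sketch below uses breadth-first search on a 4-connected grid as an illustrative stand-in; the paper's own choice calculation is not specified here:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 4-connected grid; cells holding 1
    are static obstacles. Returns the cells of a shortest
    collision-free path, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            path, cur = [], goal
            while cur is not None:          # walk back through predecessors
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour through (1, 2)
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 0))
```

BFS guarantees a minimum-length path on an unweighted grid; A* with a Manhattan-distance heuristic would find the same path faster on large maps.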
Applying Data Mining Approach and Regression Model to Forecast Annual Yield of Major Crops in Different District of Karnataka
Shilpa Ankalaki, Neeti Chandra, Jharna Majumdar
Abstract:
We have entered the era of Big Data, where data is emerging with the 5Vs (Velocity, Variety, Volume, Value, Veracity), making it complex and useful for predictive and descriptive analysis. Decision making based on analysis is an important concern in modern agriculture. One such decision-making process relates to the forecast of crop yield under various environmental and soil conditions. This work is based on that data mining process. It deals with two subtasks: first, it implements and compares different clustering methods to group districts having similar productivity factors for crops; second, it forecasts the yield of the major crops for the different districts.
Keywords:
Batchelor & Wilkins, DBSCAN, AGNES, Multiple Linear Regression.
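The regression subtask above can be illustrated with ordinary least squares solved via the normal equations. The rainfall/fertilizer features and coefficients below are purely hypothetical, chosen so the fit is exact:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for small systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(X, y):
    """Ordinary least squares: solve (X'X) b = X'y for the
    coefficients, with an intercept column prepended."""
    Xa = [[1.0] + list(row) for row in X]
    n, p = len(Xa), len(Xa[0])
    XtX = [[sum(Xa[i][a] * Xa[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(Xa[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Hypothetical data: yield as a function of rainfall and fertilizer,
# generated from yield = 2 + 3*rain + 5*fert so the fit recovers it exactly.
X = [(1, 1), (2, 1), (1, 2), (3, 2), (2, 3)]
y = [2 + 3 * r + 5 * f for r, f in X]
coef = fit_mlr(X, y)
```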
Energy Efficient and Reliable Routing Protocols In Wireless Sensor Networks- An Outlook
Waseem Ahmad Mir, Jameel Ahmad
Abstract:
Wireless sensor networks are enabling applications that previously were not practical. As new standards-based networks are released and low-power systems are continually developed, we will start to see the widespread deployment of wireless sensor networks in the future. One of the pioneering aspects of WSNs is to provide reliable and efficient routing protocols, as such networks cannot afford to waste energy at any cost; in addition to energy conservation, the lifetime of the nodes is also of primary concern. In this paper, energy-efficient routing protocols have been divided into four categories: network structure, communication model, topology based and reliable routing.
Keywords:
Wireless Sensor Networks, Reliability, Efficiency, Routing Protocols.
Applications of Mobile Cloud Computing and Big Data Analytics in Agriculture Sector- A Survey
Athmaja S, Hanumanthappa M
Abstract:
The objective of this literature survey was to identify the applications of Mobile Cloud Computing and Big Data Analytics techniques in the agriculture sector. Related literature from IEEE journals and other international journals was collected and reviewed. We conclude by proposing a new model that uses mobile cloud computing and big data analytics techniques together to meet several challenges that farmers are facing today in the agriculture sector. The proposed model helps farmers make optimal decisions about their agricultural production, thereby reducing post-harvest wastage of their products.
Keywords:
big data, analytics, mobile, cloud, computing, agriculture, farmer, machine learning.
A Study on Social Network Analysis through Data Mining Techniques – A detailed Survey
Annie Syrien, M. Hanumanthappa
Abstract:
The paper aims to provide a detailed study of data collection, data preprocessing and the various methods used in developing useful algorithms or methodologies for social network analysis in social media. Recent trends and advancements in big data have led many researchers to focus on social media. Web-enabled devices are another important reason for this advancement: electronic devices such as tablets, mobile phones, desktops, laptops and notepads enable users to actively participate in different social networking systems. Much research has also shown the advantages and challenges that social media has posed to the research world. The principal objective of this paper is to provide an overview of social media research carried out in recent years.
Keywords:
data mining, social media, big data.
Openstack Architecture Design and Scalability Principles: An Overview
Pallavi V Patil, Dr Jagannatha S, Balaji B S
Abstract:
Openstack is an open-source software platform for infrastructure provisioning in the cloud and hence is commonly deployed as Infrastructure as a Service (IaaS). The Openstack architecture has multiple flavours based on usage scenarios. Openstack has specific, coordinated components to manage the hardware and storage pool, which can be accessed through well-defined REST API endpoints, a command line tool or a Web UI. The main challenge in cloud offerings is handling the growing demand for infrastructure catering to diverse needs. This calls for dynamic infrastructure provisioning with an efficient and robust scalability design. The main objective of an Openstack cloud operator is to hide from the user any failure caused by resource limitations and to provision the required infrastructure adhering to the Service Level Agreement (SLA). This paper gives an in-depth understanding of Openstack design principles, such as infrastructure segregation, host aggregates and availability zones, used to achieve massive scalability in Openstack components, and also discusses the architecture design of the major Openstack components based on different usage scenarios.
Keywords:
IaaS, Openstack nova, Openstack neutron, Host Aggregates, Openstack Heat.
Personalization of Web Search and its Techniques: A Survey
Raghavendra R, Dr. Niranjanamurthy M, Pavan Kumar M
Abstract:
The amount of information accessible from the Web (WWW) is growing rapidly every year along with user demand, so the quality of the results provided by a search engine can be low. This makes the retrieval of relevant information from the search results more difficult for the user. The quality of the results generated by a search engine depends on the needs of the users and the various search techniques involved in web search systems. Web logs keep track of user activities on the search engine. These web logs are mined by web usage mining, one of the branches of web mining, and they are a rich source for web personalization. Web personalization has thus become a basic need in web search, and here we discuss the various techniques used for it.
Keywords:
Personalization, Web Usage Mining, Web Logs.
Visualisation of massive data from scholarly Article and Journal Database: A Novel Scheme
Gouri Ginde
Abstract:
Publishing scholarly articles and getting cited has become a way of life for academicians. These scholarly publications shape the career growth not only of the authors but also of the country, continent and technological domains. Author affiliations, country and other information about an author, coupled with data analytics, can provide useful and insightful results. However, massive and complete data is required to perform this research. Google Scholar, a comprehensive and free repository of scholarly articles, has been used as the data source for this purpose. Data scraped from Google Scholar, when stored as a graph and visualized in the form of nodes and relationships, can reveal discerning and concealed information, such as the evident domain shift of an author, the spread of an author's research domains, prediction of emerging domains and sub-domains, and detection of journal-level and author-level citation cartel behaviours. The data from the graph database is also used in the computation of scholastic indicators for the journals. Finally, an econometric model, the Cobb-Douglas model, is used to compute a journal's Modeling Internationality Index based on these scholastic indicators.
Keywords:
Data acquisition methods, Web scraping, Graph database, Neo4j, Data visualization, Cobb Douglas model.
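The Cobb-Douglas form mentioned above combines several indicators multiplicatively, Y = A · Π xᵢ^αᵢ. The indicators and elasticity values in this sketch are purely illustrative, not the ones the paper derives from its graph database:

```python
def cobb_douglas(inputs, elasticities, scale=1.0):
    """Cobb-Douglas form: Y = scale * prod(x_i ** alpha_i).
    With inputs in [0, 1] and positive elasticities, the output
    also stays in [0, 1] when scale = 1."""
    y = scale
    for x, a in zip(inputs, elasticities):
        y *= x ** a
    return y

# Hypothetical journal indicators normalized to [0, 1]:
# international-collaboration ratio and citation spread.
index = cobb_douglas([0.5, 0.8], [0.6, 0.4])
```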
A Survey on Content Based Image Retrieval by Using Various Features
Pakruddin .B, Imran Ulla khan
Abstract:
This paper presents a survey of Content Based Image Retrieval using various features. Related work is the foundation for understanding and extending knowledge in this precise field. Content Based Image Retrieval is a technique for extracting features of images such as color, shape, texture, blob detection, edge detection and contour detection. Once features are extracted from a query image, the same procedure is used to extract features of the database images; for comparing the features of both images, the Euclidean distance can be used to produce the best result. The key part of a retrieval system is feature extraction. Until now, the only way of searching these collections was based on keyword indexing, text, or simply browsing. In this paper, after studying the related work, we survey some practical aspects of current CBIR and the use of color histogram, texture and shape for exact and efficient CBIR.
Keywords:
CBIR, ABIR, Precision, Recall.
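The color-histogram-plus-Euclidean-distance pipeline described above can be sketched in a few lines. The 4-level quantization and the toy "images" (flat pixel lists) are simplifying assumptions:

```python
import math

def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` levels and count
    occurrences: a compact, normalized color feature for CBIR."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def euclidean(h1, h2):
    # Smaller distance => more similar images.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

# Query: mostly red pixels with a little blue.
query = color_histogram([(250, 10, 10)] * 8 + [(10, 10, 250)] * 2)
database = {
    "red_image": color_histogram([(240, 20, 20)] * 10),
    "blue_image": color_histogram([(20, 20, 240)] * 10),
}
best = min(database, key=lambda name: euclidean(query, database[name]))
```

A real system would extract pixels from decoded image files and typically combine the color histogram with texture and shape features.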
A Framework for Selecting Suitable Software as a Service
Mamta Dadhich, Dr. Vijay Singh Rathore
Abstract:
Cloud computing is tremendously attractive as it enables a fundamental shift from a capital-intensive focus to a flexible operational management model. It is typically characterized by an on-demand computing paradigm based on a pay-per-use pricing model. SaaS is delivered over the internet, where the software is hosted on someone else's system and delivered via the web on the consumer's demand. These days, many service providers are available to offer SaaS services, and customers need to choose the appropriate SaaS provider to fulfil their requirements; it is difficult for a consumer to adopt the appropriate SaaS service. This paper designs a framework named ASMAN that enables a SaaS consumer to adopt appropriate Software as a Service (SaaS) by comparing various parameters of different SaaS providers.
Keywords:
Cloud computing, software as a service, cloud service provider, users, parameters.
Abstract:
Electronic waste, or E-waste, comprises discarded electrical or electronic devices. Electronic scrap components include CPUs, phones, chips, TVs, etc. These contain hazardous components such as lead, cadmium, beryllium, or brominated flame retardants. Due to these hazardous components, developing countries are facing enormous challenges related to the generation and management of E-waste. In this paper, an attempt is made to assess the current status of E-waste management in India as well as worldwide, along with the current rules and guidelines. It is found that a great part of E-waste recycling is handled by the informal sector, which has little or no knowledge about the effects of exposure to hazardous substances.
Keywords:
E-waste, Hazardous waste, effects, management.
Comparative Study on Performance Testing with JMeter
Dr. Niranjanamurthy M, Kiran Kumar S, Anupama Saha, Dr. Dharmendra Chahar
Abstract:
Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. The focus of performance testing is checking a software program's speed (whether the application responds quickly), scalability (the maximum user load the application can handle) and stability (whether the application remains stable under varying loads). Performance testing is mainly divided into stress testing and load testing. JMeter is open-source testing software: a 100% pure Java application for load and performance testing. In this paper we discuss performance testing tools and propose the best-suited performance tool for the web application industry.
Keywords:
Performance Testing, Types of performance testing, Load Testing, Stress testing, Volume testing, Scalability testing, JMeter, LoadRunner.
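The figures a load-testing report summarizes (average, percentile, sample count) can be computed directly from raw response times. This sketch is a generic aggregation, not JMeter's exact percentile algorithm; the sample values are invented:

```python
def summarize_run(samples_ms):
    """Aggregate raw response times (milliseconds) into the kind of
    figures a load-testing summary shows: sample count, average
    latency and 90th-percentile latency."""
    ordered = sorted(samples_ms)
    n = len(ordered)
    avg = sum(ordered) / n
    # Simple nearest-rank style percentile.
    p90 = ordered[min(n - 1, int(0.9 * n))]
    return {"samples": n, "avg_ms": avg, "p90_ms": p90}

# Hypothetical response times from one test run; one slow outlier.
samples = [120, 95, 110, 400, 105, 98, 130, 115, 102, 99]
stats = summarize_run(samples)
```

The p90 figure matters for stability checks: a healthy average can hide a long tail, as the outlier here shows.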
Abstract:
In web usage mining, the log files of the web server play a vital role because they store the different users' browsing patterns, and these records become an important source of knowledge for discovering user patterns. Web usage pattern mining is the process of retrieving users' browsing patterns by considering their page navigations. Mining techniques are applied to user behaviour for personalization, which is done based on transactions derived from user sessions. Sessionization is the process of identifying user sessions, where a session is defined as the set of pages visited by the same user within one particular visit to a web site. This paper reviews the existing work on session identification techniques. An overview of the available techniques for identifying user sessions is given with a view to the extraction of user patterns. By surveying these techniques we can improve their quality, invent new approaches and methods, and overcome the flaws of the existing techniques, which can serve as a highway for research and practice in this area.
Keywords:
Sessionization, Heuristic, Web log data, Personalization, Smart Miner.
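The most common session-identification heuristic splits a user's requests whenever the gap between consecutive hits exceeds an inactivity timeout. A minimal sketch, assuming a 30-minute threshold and a simplified (user, timestamp, page) log format:

```python
TIMEOUT = 30 * 60  # 30-minute inactivity threshold, a common heuristic

def sessionize(log, timeout=TIMEOUT):
    """Split each user's (user, timestamp_s, page) records into
    sessions: a new session starts whenever the gap between two
    consecutive requests exceeds the timeout."""
    by_user = {}
    for user, ts, page in sorted(log, key=lambda e: (e[0], e[1])):
        sessions = by_user.setdefault(user, [])
        # Compare against the timestamp of this user's last recorded hit.
        if sessions and ts - sessions[-1][-1][0] <= timeout:
            sessions[-1].append((ts, page))
        else:
            sessions.append([(ts, page)])
    return by_user

log = [
    ("u1", 0,    "/home"),
    ("u1", 120,  "/search"),
    ("u1", 4000, "/home"),     # > 30 min after the last hit: new session
    ("u2", 50,   "/home"),
]
sessions = sessionize(log)
```

Time-based splitting is only one of the heuristics the survey covers; referrer-based and navigation-based methods refine it when users share an IP or revisit quickly.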
Abstract:
In today's world the internet is booming and trending at a very high rate; it is the lifeblood of many organizations and applications in the computing field. Communication over the internet is extensive, and it is good practice to secure communication between peers. This paper surveys peer-to-peer communication over distributed networks; its working and features are clearly explained.
Keywords:
P2P, Peers, Distributed Network, Network.
Facial Image Analysis: Recent Trends and Approaches
Soumya C S, Dr Thippeswamy G
Abstract:
In recent years face recognition has received substantial attention from both research communities and the market, but it still remains very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. Deep learning has recently achieved very promising results in a wide range of areas such as computer vision, speech recognition and natural language processing; it aims to learn hierarchical representations of data using deep architecture models. Facial emotion recognition is one of the most important cognitive functions that our brain performs quite efficiently; state-of-the-art facial emotion recognition techniques are mostly performance driven and do not consider the cognitive relevance of the model. Similarly, facial image analysis through 3D spectral information is gaining a lot of attention: hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods, so facial analysis through hyperspectral imaging offers a great advantage. In this paper, we try to comprehend the recent emerging technologies in the field of facial image analysis.
Keywords:
Face recognition, image analysis, 3D image, Deep Learning, Hyperspectral Imaging.
K. Sailaja Kumar, D. Evangelin Geetha, T. V. Sai Manoj
Abstract:
Online Social Networks (OSNs) are a powerful medium of communication through which individuals share their views on disastrous events happening around them, using the opportunities offered by the internet. This paper aims to analyze meaningful real-time data about disastrous events obtained from the most popular microblogging OSN, Twitter. Tweets related to the target event are gathered based on a search query, keywords are extracted from the tweets, and the significance of those keywords in the events that happened during and after the disaster is analyzed using text mining. The data visualization analytics supported by the statistical software tool R is used to explain the discovered phenomena. Tweets on the 'Jammu and Kashmir Floods' were collected using the Twitter API based on various search queries, and around 1570 tweet messages were examined. The obtained corpus is then processed using the text mining functions provided in R. A term document matrix is constructed to find the most frequent words, the distribution of the word frequencies and the associations between them. A barplot is plotted to visualize the frequent words. Further, the most popular keywords in the tweets and the terms contained in them are visualized by constructing a wordcloud from the term document matrix.
Keywords:
Disastrous Events, Online Social Networks (OSNs), R, Term Document Matrix, Word cloud
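The term-document-matrix step described above can be sketched in plain Python as an analogue of R's text-mining functions (the paper itself uses R's `tm` pipeline). The stopword list and example tweets are invented for illustration:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "and", "is", "to"}

def term_document_matrix(tweets):
    """Build {term: [count per tweet]} after lowercasing, stripping
    non-letters and removing stopwords: a plain-Python analogue of
    a term-document matrix."""
    tokenized = [[w for w in re.findall(r"[a-z]+", t.lower()) if w not in STOPWORDS]
                 for t in tweets]
    vocab = sorted({w for doc in tokenized for w in doc})
    return {term: [doc.count(term) for doc in tokenized] for term in vocab}

def frequent_terms(tdm, min_count=2):
    # Terms whose total count across all tweets meets the threshold;
    # these are the words a barplot or wordcloud would emphasize.
    return sorted(t for t, row in tdm.items() if sum(row) >= min_count)

tweets = ["Flood relief camps set up",
          "Relief teams reach flood zone",
          "Roads blocked"]
tdm = term_document_matrix(tweets)
```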
A Literature Survey on Computer-Aided Diagnosis in Detection and Classification of Polyp in Colon Cancer using CT Colonography
Akshay Godkhindi, Dr Dayananda P, Dr Sowmyarani C N
Abstract:
Colorectal cancer is a cancer that starts in the colon or the rectum of the large intestine. These cancers are also called colon cancer or rectal cancer, depending on where they start. Colon and rectal cancer are frequently grouped together because they have many features in common, and most colorectal cancers begin as a growth on the inner lining of the colon or rectum called a polyp. A few types of polyp can change into cancer over several years, but not all polyps end up as cancers; the risk of developing into cancer depends on the type of polyp. Computer-aided detection (CADe) and diagnosis (CAD) has been a rapidly growing, promising area of research in medical imaging. Machine learning (ML) plays a crucial role in CAD, because objects such as lesions and organs may not be represented accurately by a simple equation; as a consequence, medical pattern recognition essentially requires 'learning from examples'. Computed tomography (CT) colonography, or virtual colonoscopy, uses a special x-ray machine to examine the large intestine for cancer and growths known as polyps. During the examination, a small tube is inserted a short distance into the rectum to allow for inflation with air while CT images of the colon and the rectum are taken. The CT technologist examines those images to determine the severity of a polyp based on its size. In this survey, we review the different papers and journals in the literature that attempt to address these problems and compare the various pre-processing steps, classification and segmentation algorithms, and feature sets used to detect and classify polyps in colon cancer; we also focus on the various deep learning algorithms used in similar medical diagnosis tasks and how efficiently they solve the problem.
Keywords:
Colorectal cancer, Computed tomography (CT) Colonography, polyp, Deep learning Algorithms.
Comparative Study of Load Balancing Algorithms for Best-effort Applications in Cloud.
Chethan Venkatesh, Shiva Murthy .G
Abstract:
Resource provisioning is the main requirement in a cloud environment, where several resources such as memory, CPU and storage are allocated to the requesting consumer processes in such a way that the resources are utilized efficiently and distributed fairly among the requesting processes. Load balancing is a vital task in resource provisioning to ensure fairness in resource allocation. Best-effort applications do not place any constraints on the amount or quantity of resources allocated or on the timing of scheduling. This paper presents a comparative study of several existing approaches, and certain modified versions of those approaches, to load balancing algorithms for best-effort applications.
Keywords:
Cloud Computing, Resource Provisioning, Load Balancing, Fault Tolerance
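One baseline that comparative studies of this kind typically include is greedy least-loaded dispatch. A minimal sketch, with invented task costs; this is an illustrative baseline, not one of the paper's specific algorithms:

```python
import heapq

def assign_least_loaded(tasks, n_servers):
    """Greedy least-loaded dispatch for best-effort jobs: each task
    goes to the server with the smallest current load (a heap makes
    that lookup O(log n))."""
    heap = [(0, s) for s in range(n_servers)]   # (load, server id)
    placement = []
    for cost in tasks:
        load, server = heapq.heappop(heap)
        placement.append(server)
        heapq.heappush(heap, (load + cost, server))
    loads = [0] * n_servers
    for server, cost in zip(placement, tasks):
        loads[server] += cost
    return placement, loads

tasks = [5, 3, 8, 2, 4]
placement, loads = assign_least_loaded(tasks, 2)
```

Against round-robin, least-loaded tends to even out total load when task costs vary, which is exactly the fairness property load balancing aims for.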
Arun Kumar R, Balaji S, Sai Manoj T V, Sailaja Kumar K, Evangelin Geetha
Abstract:
Technology developments in aviation have been, and will continue to be, at the forefront of human technological and scientific development. The present paper analyses airline revenue passenger miles on a monthly basis over the past ten years. Time series analysis is used to predict the increase in revenue passenger miles using the previous years' data from the corresponding time period. The paper also shows the random variation in the data, which is pre-processed for analysis. Moreover, it presents the seasonal change and the trend that the dataset follows. Based on all these factors, the revenue passenger miles for future years are predicted and the results are shown with graphical output.
Keywords:
Airline Revenue Passenger Miles, Prediction, Time Series, Moving Average Model
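A simple moving-average forecast of the kind the keywords mention can be sketched as follows. The window size and the monthly figures are invented for illustration; the paper's actual model and data are not reproduced here:

```python
def moving_average_forecast(series, window=3, steps=2):
    """Forecast the next values of a time series with a simple
    moving-average model: each prediction is the mean of the last
    `window` observations, and predictions are fed back in for
    multi-step forecasts."""
    history = list(series)
    out = []
    for _ in range(steps):
        pred = sum(history[-window:]) / window
        out.append(pred)
        history.append(pred)   # use the forecast as input to the next step
    return out

# Hypothetical monthly revenue passenger miles (in millions).
rpm = [60, 62, 61, 63, 64, 66]
forecast = moving_average_forecast(rpm, window=3, steps=2)
```

Real airline traffic is strongly seasonal, so in practice the series would be deseasonalized (or a seasonal model used) before applying a moving average.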
Early Performance Evaluation of Data Warehouse Systems: From UML to LQN models
Dr. Madhu Bhan, Dr. K. Rajanikanth, Dr. T.V. Suresh Kumar
Abstract:
If the performance of a data warehouse system is determined to be unacceptable at the time of acceptance testing, the result can be very expensive redesign and consequent delayed delivery or, in the worst case, complete non-use of the system. There is clearly a need for tools and techniques that enable performance analysis of designs to be done easily and reliably throughout the development process of data warehouse systems. In this paper we demonstrate the derivation of Layered Queuing Network (LQN) performance models from a set of UML diagrams and give an algorithm for deriving the LQN model. The LQN model is a very useful tool for analysing the performance of a system from an abstract model, so that the developer of a data warehouse system can understand the performance effects of various design decisions starting at the early stages, when changes are easy and less expensive.
Keywords:
Data warehouse Systems; Software Performance Prediction; UML; Queuing models.
Android Based Smart Door Locking System with Multi User and Multi Level Functionalities
Dr. Manish Kumar, Dr. M Hanumanthappa, Dr. T V Suresh Kumar, Mr. Amit Kumar Ojha
Abstract:
A smart door locking system is not a new concept; however, with the advancement of technology these systems have also become more advanced. The Android-based smart door lock system discussed in this paper is designed for multi-mode operation, namely multi-user and multi-level user operation. Such a system is very much required in banks and business organizations. The system also provides functionality for a general user, where a single user is authorized to operate the lock. The cost-effective implementation with advanced functionality and an easy-to-use interface makes the system very useful.
Keywords:
Arduino Uno, Android, Smart Phone.
Study on Three Dimensional (3D) Password Authentication system
Nayana S, Dr. Niranjanamurthy M, Dr. Dharmendra Chahar
Abstract:
Authentication is the procedure of validating that you are who you claim to be. Authentication secures the system from unauthorized people who would otherwise illegally gain the right to handle the data present in the system, and it protects the system from potential threats. Though authentication is a very strict procedure, with developing new technologies it can be cracked and hacked to steal the user's identity. Current authentication techniques in use today are text based, token based, biometrics based and recognition/graphical based, but each of these strategies has its own drawbacks and limitations. To overcome the disadvantages of these existing authentication techniques, a new technique called the 3D password is introduced. The 3D password scheme is a new strategy combining recognition patterns, textual passwords, biometrics and graphical passwords. One of the important concepts of the 3D password schema is the 3D virtual environment, which contains real-time object scenarios. The 3D password is also more secure and hard to break. This paper focuses on what the 3D password is, the working of the 3D password technique and the various applications involved.
Keywords:
3d Password, Critical servers, Networking, Authentication, Advantages, Virtual Environment
Pavithra B, Dr. Niranjanmurthy M, Kamal Shaker J, Martien Sylvester Mani F
Abstract:
Big data is a collection of huge datasets that cannot be processed using conventional computing techniques. Big data is not just data; rather, it has become a whole subject, involving a variety of tools, techniques and frameworks. It refers to using complex datasets to drive focus, direction and decision making within a company or organization. This is achieved by implementing suitable systems for gaining an accurate and deep understanding of the knowledge obtained by analysing the organization's data. In this survey paper we discuss the different types of data held and their diverse usage in e-commerce, as well as the different ways of providing security and safety for the data when it is used in bulk services; we also discuss the issues in big data with respect to e-commerce and how e-commerce can make effective use of applications over big data.
Keywords:
Click-Stream Data, Web- Analytics, Predictive Analysis, Personalization, Dynamic Analysis.
Comparative Study of Data Mining Techniques in Crop Yield Prediction
Perpetua Noronha, Divya .J, Shruthi .B.S
Abstract:
Agriculture is a field of interest in today's world of emerging technology. It is the main occupation and backbone of our country. As India's population currently stands at 1.3 billion people and is projected to grow to eight times the current population by 2024, it has become a critical challenge for farmers to feed the population. The various environmental changes in the developing world also pose an important threat to the agricultural economy. Hence, enhancing food security requires a transition to agricultural production systems that are more productive, and the need to incorporate information technology into the task of food production is very important. Crop yield prediction is one of the important factors that provide information for decision makers to maximize crop productivity, but it is a problem that needs to be solved based on available data. Data mining serves as a better choice for this purpose and has become an interesting recent research topic in agriculture for predicting crop yield. This paper presents a brief comparative study of various papers that deal with the techniques used to predict crop yield. From readily available data, data mining techniques give a complete picture of the estimation of crop yield. The data mining techniques in use for crop yield estimation include K-Means and K-Nearest Neighbour (KNN).
Keywords:
K-Means, K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Multiple Linear Regression (MLR).
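To make the KNN idea concrete, the following is a minimal sketch (not taken from the surveyed papers) of KNN regression for yield estimation: a query field's yield is predicted as the mean yield of its k nearest neighbours in the training records. The feature set (rainfall, temperature, soil pH) and all numbers are hypothetical illustrations.

```python
import math

def knn_predict_yield(train, query, k=3):
    """Predict a crop yield for `query` as the mean yield of its k
    nearest neighbours in `train` (Euclidean distance over features).

    train: list of (feature_vector, yield) pairs
    query: feature vector, e.g. (rainfall_mm, temperature_c, soil_ph)
    """
    dists = sorted((math.dist(features, query), y) for features, y in train)
    nearest = dists[:k]
    return sum(y for _, y in nearest) / len(nearest)

# Hypothetical records: (rainfall mm, mean temp C, soil pH) -> yield t/ha
train = [
    ((850, 26.0, 6.5), 3.2),
    ((900, 27.5, 6.8), 3.5),
    ((600, 30.0, 7.2), 2.1),
    ((620, 29.0, 7.0), 2.3),
]
pred = knn_predict_yield(train, (870, 26.5, 6.6), k=2)
```

In practice features should be normalised first, since rainfall in millimetres would otherwise dominate the distance.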
Computed Tomography Images for Computer Aided Diagnosis of Lungs Cancer Feature Extraction
Shankaragowda B.B, Dr. Siddappa M, Dr. Suresha M
Abstract:
The main goal of the proposed algorithm is to obtain edges suitable for further applications such as boundary detection, image segmentation and object identification. We propose a new approach based on an edge detection method using Computed Tomography images. In this paper we introduce an Artificial Neural Network that uses the shape, edge characteristics and darkness of nodules as features; we tested our results for signs of lung cancer and investigated whether the nodules are benign or malignant.
Keywords:
Computed Tomography, Lungs, Segmentation, Malignant, Benign.
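As an illustration of the edge-detection step that such a pipeline starts from (a generic sketch, not the paper's algorithm), the classic 3x3 Sobel operator computes a gradient-magnitude map in which nodule boundaries show up as high values. The toy 6x6 "slice" below is hypothetical.

```python
def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D grayscale image (list of
    lists) using the 3x3 Sobel kernels; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy "slice": a bright 2x2 square (a nodule stand-in) on a dark background
img = [[0] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 255
edges = sobel_edges(img)
```

The resulting magnitudes are zero in flat regions and large around the bright square, which is exactly the boundary information a segmentation or classification stage would consume.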
Security Enhancement on Cloud using Identity Based Encryption (IBE)
Shruthi N, Kumar P K, Swamy L N, Sukruth Gowda M A
Abstract:
Cloud computing is one of the technologies that will play a vital role in the next generation of the computer engineering field. The increased scalability and flexibility provided by cloud computing have reduced costs to a great extent, and the technology has therefore gained wide acceptance. The facility of data outsourcing in the cloud enables the owner of the data to upload it so that other users can access it. However, the data stored in cloud servers must be secure. Data owners have many concerns about the security aspects of cloud computing, and they hesitate to adopt cloud services because of data privacy and data security issues. The proposed research work addresses the critical issue of identity revocation: outsourcing computation is introduced into IBE, and a revocable IBE scheme in the server-aided setting is proposed. This scheme offloads most of the key-generation-related operations in the key-issuing and key-update processes to a Key Update Cloud Service Provider, leaving only a constant number of simple operations for the PKG and users to perform locally. Data security is provided in the proposed data storage security model by encryption, user authentication and re-encryption. The proposed system also formalizes the security definition of outsourced revocable IBE, to the best of our knowledge for the first time. Finally, experimental results demonstrate the efficiency of the proposed construction.
Keywords:
Cryptography, Cloud, IBE
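The server-aided revocation idea can be illustrated with a deliberately simplified toy (this is HMAC-based key splitting for illustration only, not the paper's pairing-based IBE construction): the user's working key combines a long-term identity component from the PKG with a short-term time component from the Key Update Cloud Service Provider, so revocation amounts to the KU-CSP simply not issuing the next update. All secrets and names below are hypothetical.

```python
import hmac
import hashlib

MASTER_SECRET = b"pkg-master-secret"        # held by the PKG (toy value)
OUTSOURCED_SECRET = b"kucsp-update-secret"  # held by the KU-CSP (toy value)

def identity_key(identity: str) -> bytes:
    """PKG: long-term component bound to the user's identity string."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

def time_update_key(identity: str, period: int) -> bytes:
    """KU-CSP: short-term component for the current time period.
    Revoking a user = refusing to issue this component for them."""
    msg = f"{identity}|{period}".encode()
    return hmac.new(OUTSOURCED_SECRET, msg, hashlib.sha256).digest()

def decryption_key(identity: str, period: int) -> bytes:
    """User: combine both components; the key goes stale (and the user
    is effectively revoked) as soon as updates stop arriving."""
    ik = identity_key(identity)
    tk = time_update_key(identity, period)
    return hashlib.sha256(ik + tk).digest()
```

Because the time component changes every period, keys from earlier periods cannot decrypt material encrypted for later periods, which is the behaviour a revocable IBE scheme needs.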
Techniques of Semantic Analysis for Natural Language Processing – A Detailed Survey
Rajani S, M. Hanumanthappa
Abstract:
Semantic analysis is an important part of a natural language processing system. It determines the meaning of a given sentence and represents that meaning in an appropriate form. Semantics, as a part of linguistics, aims to study meaning in language. Language conveys a meaningful message through the semantic interaction of its different linguistic levels. This paper surveys semantic analysis and explores the work done in the area by different researchers; a selection of research papers is considered for the analysis. In the examination, two important research directions are noticed: a popular statistical model called LSA, and an active research area called ontology, which represents a set of primitives of a domain of knowledge. The analysis notes that LSA is used in automated evaluation against human evaluation and for extracting semantic information from text. Ontology techniques are used to extract structured information from unstructured data, to retrieve information from databases, and in semantic web applications.
Keywords:
NLP, Semantics, LSA, spring graph, Ontology, NLIDB, SW, SVD
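The first step of the LSA pipeline mentioned above is building a term-document matrix and comparing documents in vector space; the sketch below (illustrative only, with hypothetical documents) shows that step together with the cosine similarity LSA ultimately relies on. A full LSA system would then factor the matrix with SVD to obtain a reduced semantic space.

```python
from collections import Counter
import math

def term_doc_vectors(docs):
    """Raw term-frequency vectors over a shared vocabulary -- the
    matrix that LSA would subsequently factor with SVD."""
    counts = [Counter(d.split()) for d in docs]
    vocab = sorted({w for c in counts for w in c})
    return vocab, [[c[w] for w in vocab] for c in counts]

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "crop yield prediction data",
    "yield prediction using data mining",
    "semantic analysis of language",
]
vocab, vecs = term_doc_vectors(docs)
```

With these toy documents, the two agriculture sentences score a higher similarity to each other than either does to the linguistics sentence, which is the kind of semantic grouping LSA sharpens further after dimensionality reduction.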
An Efficient Technique for Optimization of Test Cases with the Aid of HNN-MCS for Assessing Software Reliability
Lakshminarayana P, Dr T V Suresh Kumar
Abstract:
Advances in software development have created a need for efficient and reliable software products. As the growth of software demands high reliability and safety, software reliability prediction becomes more and more essential. Software reliability is a key part of software quality. Various techniques for predicting software reliability have been proposed and evaluated in terms of their prediction performance; however, their actual contribution to business objectives such as quality improvement and cost reduction has rarely been assessed. The main aim of this work is to develop an efficient software reliability prediction method that utilizes soft computing. We propose a novel method of reliability prediction with the aid of a hybrid neural network incorporated with an optimization algorithm (HNN-MCS), in which the weight factors are globally optimized using a modified cuckoo search algorithm. Once training is done, the data are tested in order to check the prediction accuracy of the proposed system. Researchers have considered different factors as inputs for training the network; our proposed system uses execution time to train the neural network, and testing is done on this basis. The results are estimated in terms of actual and predicted failure rates.
Keywords:
Technique for Optimization of Test Cases with the Aid of HNN-MCS
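To give a feel for the cuckoo-search component, here is a minimal generic sketch (not the paper's modified variant): a Gaussian random walk stands in for the Lévy flight, a new "cuckoo" solution displaces a random nest if it is better, and the worst fraction pa of nests is abandoned and re-seeded each generation. The sphere function below is a hypothetical stand-in for a network-weight error surface.

```python
import random

def cuckoo_search(cost, dim, n_nests=15, iters=500, pa=0.25, step=0.5):
    """Minimise `cost` with a simplified cuckoo-search sketch."""
    rnd = random.Random(42)  # fixed seed for reproducibility
    nests = [[rnd.uniform(-5, 5) for _ in range(dim)]
             for _ in range(n_nests)]
    for _ in range(iters):
        # Generate a cuckoo by perturbing a random nest (random walk
        # used here in place of a true Levy flight).
        i = rnd.randrange(n_nests)
        cuckoo = [x + step * rnd.gauss(0, 1) for x in nests[i]]
        # The cuckoo's egg displaces a random host nest if better.
        j = rnd.randrange(n_nests)
        if cost(cuckoo) < cost(nests[j]):
            nests[j] = cuckoo
        # Abandon the worst pa fraction of nests and re-seed them.
        nests.sort(key=cost)
        keep = int((1 - pa) * n_nests)
        for k in range(keep, n_nests):
            nests[k] = [rnd.uniform(-5, 5) for _ in range(dim)]
    return min(nests, key=cost)

# Hypothetical stand-in for a weight-error surface: the sphere function
best = cuckoo_search(lambda w: sum(x * x for x in w), dim=3)
```

In the HNN-MCS setting, `cost` would instead be the neural network's prediction error as a function of its weight vector.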
An Introduction on Mobile Malware and its Prevention
Shruti Prabhuli Ghatnatti, Spoorti A, Surekha B
Abstract:
In today's digital world, people are so immersed in technology that they forget to secure the data and information stored on their devices. The rapid rise of economic pressures in the world drives some people to leak data and make money in predatory ways, which can become a threat to an individual's life. This threat takes the form of malware, which is injected into a user's device to access information without the user's knowledge. This paper attempts to describe mobile malware and precautions against it.
Keywords:
User Device, Mobile Malware, Detection Techniques, Malware Types.
A Model to Predict Words in the Sentence to Identify an Aligned Sequence of Article Blocks in e-Newspaper
Deepa Nagalavi, M. Hanumanthappa
Abstract:
e-Newspapers are made up of complex multi-article page layouts, in which each individual article is divided into multiple blocks that are not in reading-order sequence. This paper proposes an approach to reconstruct articles, covering article aggregation together with the English-text reading order of blocks. An interpolation model combines a part-of-speech-based and a word-based n-gram language model to predict the next word in a sentence. This sequence-probability model identifies the correct sequence of an article's blocks in an English e-newspaper; the probability of a sequence is computed from the given corpus.
Keywords:
HMM, N-Gram, Newspaper, NLP, POS Tagging, Word Prediction.
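The interpolation idea can be sketched as follows (a word-bigram-only toy; the paper additionally mixes in a POS-based model, and the corpus below is hypothetical): the probability of each next word is a weighted blend of bigram and unigram estimates, and a candidate block ordering is scored by the probability of the word sequence it produces.

```python
from collections import Counter

def train_bigram_lm(corpus):
    """Unigram and bigram counts from a list of sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        words = ["<s>"] + sentence.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams

def interp_prob(w, prev, unigrams, bigrams, lam=0.7):
    """P(w | prev) as a linear interpolation of bigram and unigram
    estimates: lam * P_bigram + (1 - lam) * P_unigram."""
    total = sum(unigrams.values())
    p_uni = unigrams[w] / total
    p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni

def sequence_score(words, unigrams, bigrams):
    """Probability of a candidate word sequence; a more plausible
    block reading order yields a higher score."""
    score, prev = 1.0, "<s>"
    for w in words:
        score *= interp_prob(w, prev, unigrams, bigrams)
        prev = w
    return score

corpus = ["the minister said the bill", "the bill was passed"]
uni, bi = train_bigram_lm(corpus)
```

Scoring the fluent continuation "the bill" above the scrambled "bill the" is exactly how such a model discriminates between candidate block orderings.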
Abstract:
An auction is a process of buying and selling goods or services by offering them up for bid, taking bids, and then selling the item to the highest bidder. In some cases, an auction may refer to any mechanism or set of trading rules for exchange. There are currently many different strategies and auction methodologies, such as the English auction, the Dutch auction, sealed-bid auctions and so on. In this paper, we introduce one such methodology, the blind auction, with simple predefined bidding procedures for auction agents and bidding agents. To illustrate the feasibility of our approach, we implemented an agent-based auction system demonstrating how agent-based blind auctions generally proceed using JADE, and showed how flexible and complex bidding strategies can be precisely specified and efficiently executed.
Keywords:
Agents, Multi-Agent, Auction, Bidding, JADE.
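The blind (sealed-bid) mechanism can be sketched with a commit-reveal scheme (a generic illustration, not the JADE implementation described above, with hypothetical agent names and amounts): each agent first publishes only a hash commitment of its bid, then reveals the bid, and the auctioneer awards the item to the highest bid whose reveal matches its commitment.

```python
import hashlib
import secrets

def commit(bid: int, nonce: bytes) -> str:
    """Sealed bid: publish only a hash commitment of (nonce, bid)."""
    return hashlib.sha256(nonce + str(bid).encode()).hexdigest()

def run_blind_auction(bids):
    """bids: {agent_name: bid_amount}. Commit phase hides every bid
    behind a hash; reveal phase checks each (bid, nonce) pair against
    its commitment; the highest valid bid wins."""
    nonces = {a: secrets.token_bytes(16) for a in bids}
    commitments = {a: commit(b, nonces[a]) for a, b in bids.items()}
    # Reveal phase: keep only bids whose reveal matches the commitment.
    valid = {a: b for a, b in bids.items()
             if commit(b, nonces[a]) == commitments[a]}
    winner = max(valid, key=valid.get)
    return winner, valid[winner]

winner, price = run_blind_auction({"agent_a": 120, "agent_b": 150, "agent_c": 95})
```

Because commitments bind agents to their bids before anyone reveals, no bidding agent can adjust its offer after seeing the others, which is the defining property of a blind auction.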