1,162 results
Search Results
2. Survey paper on FIR Filter using Programming Reversible Logic Gate
- Author
-
Nashrah Fatima, Paresh Rawat, and Anjulata Choudhary
- Subjects
Relation (database), Finite impulse response, Computer science, Energy consumption, Logic gate, Electronic engineering, Overhead (computing), Reversible computing, Electricity, Algorithm - Abstract
Reversible computing is a model of computation in which the computational process is, to some degree, reversible. An essential condition for reversibility of a computational model is that the mapping from transition states to their successors must always be one-to-one. Reversible logic has emerged as an alternative design technique to conventional logic, resulting in lower energy consumption. In this paper, a review of an efficient architecture for a reversible FIR filter structure is presented. To achieve low power, a reversible-logic mode of operation is applied in the design; area overhead is the trade-off in the proposed layout. From the synthesis results, the proposed low-power FIR filter architecture offers power savings compared to the conventional design, with the area overhead as the accompanying cost.
- Published
- 2016
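The entry above concerns a reversible-logic hardware realization of an FIR filter; the underlying computation is the direct-form convolution y[n] = sum_k b[k] * x[n-k]. Below is a minimal software sketch of that convolution only (the reversible-gate design itself is hardware-level and not reproduced here); the coefficients are illustrative.

```python
def fir_filter(x, b):
    """Direct-form FIR: y[n] = sum_k b[k] * x[n-k], with x[m] = 0 for m < 0."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(b):
            if n - k >= 0:
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# 3-tap moving-average filter applied to a step input
y = fir_filter([1, 1, 1, 1], [1 / 3, 1 / 3, 1 / 3])
```

The output ramps up over the first taps and then settles at 1.0, as expected for a moving average of a unit step.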
3. Survey Paper on Random surfer model in Page Ranking Algorithm
- Author
-
Nikhil Kumar Singh, Rajendra J. Patel, and D Suthar Henilkumar
- Subjects
World Wide Web, Ranking, Computer science, Algorithm, Ranking (information retrieval) - Abstract
The World Wide Web holds a large amount of data, and users try to find relevant information in it. When a user searches for any piece of information, a large number of URLs and pages are returned. Users want the important information shown at the top of the list; a page shown there has a higher rank than the others and is the most relevant page for that query. Page rank depends on users searching for and clicking on web pages, but assigning page rank this way creates several problems; in this paper we discuss some of these problems with page ranking on the World Wide Web. General Terms: Page Ranking Algorithm.
- Published
- 2016
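The random-surfer model discussed above is usually computed by power iteration: a surfer follows a random outgoing link with probability d and jumps to a random page otherwise. A minimal sketch (the graph and damping factor are illustrative, not taken from the paper):

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration for the random-surfer model.
    links: dict mapping each page to its list of outgoing links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:  # dangling page: the surfer jumps anywhere
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
r = pagerank(graph)
```

Here page C, which receives links from both A and B, ends up with the highest rank.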
4. A Survey Paper of Structure Mining Technique using Clustering and Ranking Algorithm
- Author
-
Preetibala Deshmukh and Vikram Garg
- Subjects
Information retrieval, Computer science, Rank (computer programming), Structure mining, Fuzzy logic, Ranking (information retrieval), Ranking, Web mining, Web page, Data mining, Cluster analysis, Literature survey, Algorithm - Abstract
This paper surveys various link-analysis and clustering algorithms, such as PageRank, Hyperlink-Induced Topic Search (HITS), Weighted PageRank based on Visits of Links, K-Means, and Fuzzy K-Means. Among the ranking algorithms illustrated, Weighted PageRank is more efficient than HITS, while among the clustering algorithms, Fuzzy Soft Rough K-Means (a mixture of Rough K-Means and fuzzy soft sets) provides better results than the Rough K-Means and K-Means approaches. The literature survey shows how these algorithms are used for link analysis and efficiently extract information, including contents and images, from web pages. With the new Weighted Page Content Rank algorithm, users can easily obtain relevant and important pages, since it employs both web structure mining and web content mining. Web-page ranking analysis can be applied in any scenario where searching and interaction with large amounts of web data is required, and the technique can be used there to provide effective results.
- Published
- 2015
5. A Survey Paper on: Frequent Pattern Analysis Algorithm from the Web Log Data
- Author
-
Samiksha Kankane and Vikram Garg
- Subjects
World Wide Web, Task (computing), Web mining, Computer science, Process (engineering), Social media, Data mining, Line (text file), Server log, Data structure, Algorithm - Abstract
Web data mining is an emerging research area in which mining data is an important task, and various algorithms have been proposed to solve the issues related to web mining of existing datasets. This paper focuses on the concepts of data mining and the FP-Growth algorithm. The effectiveness of FP-Growth is limited by internal memory size, because its mining process is based on a large tree-shaped data structure. This work concentrates on web usage mining and, in particular, on discovering the usage patterns of web sites from server log files. The paper presents a procedure for the proposed technique, which can remove the memory limitation of the existing technique in the web mining area. Web usage mining techniques can further be applied in scientific, medical, and social-media applications, and in security-related research. A detailed pattern-growth technique can help in obtaining more data, and a follow-on algorithm can then present the discovered data states effectively.
- Published
- 2015
6. An Improved Function Optimization Problem (IFOP) of Evolutionary Programming Algorithm A Survival Paper
- Author
-
S. Saravanan and R. Karthick
- Subjects
Mathematical optimization, Mutation operator, Truncation selection, Computer science, Function optimization, Survival of the fittest, Tournament selection, Ranking, Fitness proportionate selection, Artificial intelligence, Algorithm, Selection (genetic algorithm), Evolutionary programming - Abstract
Evolutionary algorithms are based on influential principles such as survival of the fittest and on natural phenomena such as genetic inheritance. The key to searching for solutions to improved function optimization problems lies in the selection and mutation operators alone. This paper examines survivor selection schemes, specifically truncation selection, proportionate selection, tournament selection, and ranking-based selection, and calculates the fittest value among the generated populations.
- Published
- 2012
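Of the selection schemes the entry above surveys, tournament selection is the simplest to sketch: draw k individuals at random and keep the fittest. A minimal stdlib sketch with an illustrative toy fitness function (not from the paper):

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Pick k individuals at random; the fittest of the k wins."""
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)

# toy population: maximize f(x) = -(x - 5)^2, whose optimum is x = 5
random.seed(0)
pop = list(range(10))
fit = lambda x: -(x - 5) ** 2
winner = tournament_select(pop, fit)
```

With k equal to the population size the tournament degenerates to deterministic best-of-all selection; smaller k keeps selection pressure mild.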
7. Handwritten Digit Recognition using Machine and Deep Learning Algorithms
- Author
-
Rishika Kushwah, Samay Pashine, and Ritik Dixit
- Subjects
Computer Science - Machine Learning (cs.LG), Computer Science - Artificial Intelligence (cs.AI), Computer Science - Computer Vision and Pattern Recognition (cs.CV), Computer science, Deep learning, Perceptron, Convolutional neural network, Support vector machine, Handwriting recognition, Digit recognition, Artificial intelligence, Algorithm, MNIST database - Abstract
The reliance of humans on machines has never been so high: everything from object classification in photographs to adding sound to silent movies can be performed with the help of deep learning and machine learning algorithms. Handwritten text recognition is likewise one of the significant areas of research and development, with a growing number of possible applications. Handwriting recognition (HWR), also known as Handwritten Text Recognition (HTR), is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices [1]. In this paper, we perform handwritten digit recognition on the MNIST dataset using Support Vector Machine (SVM), Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN) models. Our main objective is to compare the accuracy and execution time of the models stated above in order to find the best possible model for digit recognition. Comment: 6 pages, 13 figures, and 1 table.
- Published
- 2020
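The paper above compares SVM, MLP and CNN classifiers on MNIST. As a stand-in for those frameworks, here is a minimal single-layer perceptron (the basic building block of an MLP) trained with the classic perceptron learning rule on a toy linearly separable problem; the dataset, learning rate and epoch count are illustrative assumptions, not the paper's setup.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (features, label) pairs with label in {0, 1}."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred            # -1, 0, or +1
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# linearly separable toy data (logical AND)
data = [([0, 0], 0), ([1, 0], 0), ([0, 1], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Real digit recognition stacks many such units into hidden layers and trains them with backpropagation rather than this single-unit rule.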
8. Comparison Study of DIT and DIF Radix-2 FFT Algorithm
- Author
-
Ranbeer Rathore and Navneet Kaur
- Subjects
Adder, Decimation, Orthogonal frequency-division multiplexing, Computer science, Fast Fourier transform, Complex multiplier, Discrete Fourier transform, Multiplier (Fourier analysis), Radix, Algorithm, Digital signal processing - Abstract
The fast Fourier transform (FFT) is an important technique for image compression, digital signal processing, and communication, especially in multiple-input multiple-output OFDM systems. The FFT is an efficient algorithm for computing the discrete Fourier transform (DFT). This paper presents a comparative study of various FFT algorithms. FFT algorithms divide into two classes: decimation in time (DIT) and decimation in frequency (DIF). In a DIT butterfly the multiplication is computed before the addition, whereas in a DIF butterfly the addition is computed before the multiplication. The paper also studies different types of multipliers, i.e., the array multiplier, the signed (Baugh-Wooley) multiplier, and the complex multiplier; the proposed complex multiplier consumes three real multipliers. Further work will design 8-point, 16-point, 32-point, 64-point, and 128-point radix-2 FFTs using the different multipliers.
- Published
- 2016
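The radix-2 DIT recursion compared above splits the input into even and odd samples, transforms each half, and recombines with twiddle factors. A minimal software sketch (hardware multiplier choices, the paper's actual subject, are not modelled here):

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # butterfly: twiddle multiply first (DIT), then add/subtract
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# 8-point transform of a length-4 rectangular pulse
X = fft([1, 1, 1, 1, 0, 0, 0, 0])
```

The DC bin X[0] equals the sum of the samples (4), and the even bins X[2], X[4], X[6] vanish for this pulse.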
9. Robust Algorithm for Super Resolution and Extracting Noise from DIP using Trimmed Median Filter
- Author
-
Vedant Rastogi and Kritika K.
- Subjects
Discrete wavelet transform, Computer science, Salt-and-pepper noise, Filter (signal processing), Grayscale, Gradient noise, Noise, Dark-frame subtraction, Gaussian noise, Median filter, Image noise, Computer vision, Value noise, Artificial intelligence, Algorithm, Interpolation - Abstract
This paper presents an efficient model for super resolution using the discrete wavelet transform (DWT). The model is a three-step process of image registration, interpolation, and noise filtering using the DWT. The paper addresses the reduction of noise in digital grayscale images. Data, text, and pictures are commonly corrupted by additive noise during scanning. The method removes noise such as salt-and-pepper noise, which causes black and white spots in the original image. The whole process is explained, and simulation results are presented to support the theory. Keywords: Resolution, High resolution, Super resolution, Registration, Interpolation, Restoration, DWT.
- Published
- 2016
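The median filter named in the title above replaces each pixel with the median of its neighbourhood, which removes isolated salt (255) and pepper (0) outliers while preserving flat regions. A plain (untrimmed) 3x3 sketch on a toy grayscale image; the trimmed variant the paper proposes would additionally discard extreme values before taking the median.

```python
def median_filter(img, size=3):
    """Median filter on a grayscale image given as a list of lists;
    border pixels use only the window entries that fall inside the image."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# a flat gray image corrupted by one salt (255) and one pepper (0) pixel
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 0]]
clean = median_filter(noisy)
```

Both outliers are replaced by the surrounding gray level, restoring the flat image.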
10. To Enhancement in Zang and Sang Thinning Algorithm to Improve Thinning Rate using Fuzzy Logic
- Author
-
Ravaljodh Singh and Lalita Bhutani
- Subjects
Connected component, Basis (linear algebra), Pixel, Thinning, Computer science, Fuzzy logic, Skeletonization, Image (mathematics), Thinning algorithm, Algorithm - Abstract
The main aim of this paper is to produce a clear set of pixels from an image so that the image can be displayed clearly. Many methods have been introduced in this area, but this paper proposes a new method that works on the basis of skeletonization. Several parameters are used to evaluate the performance of the proposed technique against existing algorithms, such as execution time, thinning rate, number of connected components, PSNR, and MSE.
- Published
- 2016
11. Performance Analysis of LEACH with Machine Learning Algorithms in Wireless Sensor Networks
- Author
-
Sushma Jain and Sukhchandan Randhawa
- Subjects
Computer science, Quality of service, k-means clustering, Energy consumption, Machine learning, Base station, Artificial intelligence, Cluster analysis, Wireless sensor network, Algorithm, Hierarchical routing - Abstract
Wireless Sensor Networks consist of thousands of power-constrained micro-sensors whose main task is to sense and report target phenomena to the base station. Hierarchical routing plays an important role in transmitting the aggregated data to the sink. Sensor nodes are organized into a number of clusters, and within each cluster a cluster head is responsible for collecting the data and reporting it to the base station. Machine learning algorithms play an important role in selecting the cluster head based on various QoS parameters. In this paper, the hierarchical protocol LEACH is chosen for analyzing the impact of two machine learning algorithms, K-Means and modified K-Means clustering, on the energy consumption of nodes as the type of input parameters is varied. This paper covers a brief introduction to 802.15.4-based Wireless Sensor Networks, power models, machine learning algorithms for sensor clustering, and the simulation environment using NetSim.
- Published
- 2016
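The K-Means step used above for sensor clustering alternates between assigning each node to its nearest centroid and recomputing centroids as cluster means. A minimal 1-D sketch over illustrative node coordinates (real LEACH clustering would use 2-D positions and energy levels):

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain 1-D k-means on a list of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# node x-coordinates forming two well-separated spatial groups
nodes = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centroids, clusters = kmeans(nodes, 2)
```

For these two well-separated groups the centroids converge to roughly 1.0 and 9.0; the node nearest each centroid would be a natural cluster-head candidate.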
12. A Robust Method for Vehicle License Plate Recognition based on Harries Corner Algorithm and Artificial Neural Network
- Author
-
Deepak Gupta, Raj Mohan Singh, and Sangita Kumari
- Subjects
Artificial neural network, Computer science, Orientation (computer vision), Corner detection, Character (mathematics), Segmentation, License, Connected-component labeling, Algorithm - Abstract
Automatic vehicle license plate recognition plays an important role in Intelligent Transportation Systems. Even though many license plate recognition methods have been proposed in the past, further scope for improvement still exists. This paper proposes a new approach to license plate recognition based on the Harris corner detection algorithm. Connected component analysis is used for character segmentation, and artificial neural networks are used for character recognition. Experiments show that the developed system successfully identifies and recognizes vehicle number plates under different illumination conditions and independently of the orientation and scale of the plate.
- Published
- 2016
13. Frequent Pattern Mining Algorithms Analysis
- Author
-
Aadhya Bhatt, Ritesh Giri, and Ananta Bhatt
- Subjects
Structure (mathematical logic), Apriori algorithm, Computer science, Brute-force search, Field (computer science), Term (time), Tree structure, Pattern recognition (psychology), Data mining, Algorithm - Abstract
Frequent pattern mining is one of the most researched fields in data mining. This paper provides a comparative study of the fundamental algorithms and a performance analysis with respect to both execution time and memory usage. It also provides a brief overview of current trends in frequent pattern mining and its applications. There are two categories of frequent pattern mining algorithms: Apriori-based algorithms and tree-structure algorithms. Apriori-based algorithms use a generate-and-test strategy, finding frequent patterns by constructing candidate itemsets and checking their counts and frequencies against the transactional database. Tree-structure algorithms instead use a pattern-growth approach, with no need to generate candidate itemsets. Many tree-based structures have been proposed to represent the data for efficient pattern discovery, including the FP-Tree, CAT-Tree, CAN-Tree, and CP-Tree. Most tree-based structures allow efficient mining with a single scan over the database.
- Published
- 2016
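The generate-and-test Apriori strategy described above can be sketched level by level: count candidate itemsets, keep those meeting minimum support, and join survivors into candidates one item larger. A minimal stdlib sketch with an illustrative transaction set:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: returns frequent itemsets with their support counts."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        k += 1
        # join step: (k)-candidates whose every (k-1)-subset is frequent
        cands = {a | b for a, b in combinations(list(level), 2) if len(a | b) == k}
        candidates = [c for c in cands
                      if all(frozenset(s) in level for s in combinations(c, k - 1))]
    return frequent

logs = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = apriori(logs, min_support=3)
```

FP-Growth reaches the same frequent itemsets without materializing the candidate sets, which is what makes it faster but memory-bound.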
14. Employee Scheduling based on Particle Swarm Optimization Algorithm and its Variation
- Author
-
Sonal Y. Sangale
- Subjects
Computer science, Particle swarm optimization, Scheduling (computing), Artificial intelligence, Multi-swarm optimization, Assignment problem, Algorithm - Abstract
Employee scheduling problems are multi-constrained, NP-hard problems. This paper deals with the faculty assignment problem, whose objective is to assign faculty members to exam halls. The problem is tested on a real-world dataset from Rajarambapu Institute of Technology. The paper attempts to solve the problem using the particle swarm optimization algorithm and a variation of it, and an analysis of both approaches is presented.
- Published
- 2016
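The PSO used above moves a swarm of candidate solutions under inertia plus attraction toward each particle's personal best and the global best. A minimal global-best PSO on the sphere benchmark (the actual faculty-assignment encoding is discrete and not reproduced here; all parameters are conventional illustrative choices):

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=42):
    """Minimize f over [-10, 10]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# sphere function: global minimum at the origin
best = pso(lambda x: sum(v * v for v in x), dim=2)
```

For a scheduling problem, f would instead score constraint violations of a candidate assignment, with positions rounded to integer hall indices.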
15. Minimization of Multiple Value function using Quine Mc-Cluskey Technique
- Author
-
Gajanan Sarate and Prashant S. Wankhade
- Subjects
Ternary numeral system, Multivalued function, Computer science, Function (mathematics), Multiplexer, Reduction (complexity), Radix, Minification, Ternary operation, Algorithm - Abstract
This paper presents a minimization technique for multiple-valued functions using the Quine-McCluskey method, providing steps for minimizing multivalued functions, i.e., digital systems with radix greater than 2. The ternary digital system (radix = 3) is considered here: the minimized MVL function is implemented using a decoder and a multiplexer, and the answer is verified using a ternary K-map. As the radix of a system increases, minimizing or reducing its logic functions becomes more difficult, and it gets harder to reduce the design equations. In this paper the Quine-McCluskey technique is successfully applied to a ternary system: the simplified expression is designed using a decoder and ternary gates, the same expression is implemented using a ternary multiplexer, and the hardware required in both cases is evaluated. The approach incorporates all design rules for ternary logic systems and gives the output in the form of sum-of-products (SOP) terms.
- Published
- 2016
16. A Survey on Algorithms for Limited Resources Optimization and Utilization by Games on Smartphones
- Author
-
Mugonza Robert, Nabaasa Evarist, and Deborah Natumanya
- Subjects
Multimedia, Computer science, Cloud computing, Mobile cloud computing, Resource (project management), Limited resources, Algorithm - Abstract
Smartphones are characterized by limited computational resources such as battery life, processor performance, and storage capacity. These limitations can be overcome through mobile cloud computing, in which the smartphone draws on ample cloud resources. A number of solutions have been suggested and developed to alleviate smartphone resource limitations, but they are not efficient. This paper provides an overview of the existing algorithms for accessing resource-hungry applications in the cloud, clearly indicating their techniques and flaws. The paper also describes directions for future research. General Terms: Algorithms for accessing resource-hungry games stored in a cloud.
- Published
- 2016
17. Implementation of Sequence Generator by the Sequential Elements (D-Flip Flop) of Reversible Gates
- Author
-
Shefali Mamataj, Saravanan Chandran, and Biswajit Das
- Subjects
Sequence, Computer science, Optical computing, Dissipation, Topology, Synchronization (alternating current), Generator (circuit theory), Reversible computing, Flip-flop, Quantum computer, Electronic circuit - Abstract
Power dissipation is a significant factor in today's electrical and electronic design. The most promising remedy is reversible computing: reversible circuits do not dissipate as much energy as irreversible circuits, do not lose information, and produce a unique output for each specific input and vice versa. From a design standpoint, reversible logic is therefore an important field of research, with applications in low-power computing, quantum computing, optical computing, other emerging computing technologies, bioinformatics, and nanotechnology-based systems. This paper proposes a new reversible gate and its various classical operations. Negative- and positive-edge-triggered D flip-flops are then built from this reversible gate. Afterwards, different sequence generators based on the sequential elements of reversible gates (SGSERG) are implemented for the generation of specific sequences. A sequence generator is a circuit that generates a desired sequence of bits in synchronization with a clock, and it is useful in various real-life applications. A comparison is also made between the D flip-flop presented here and the existing D flip-flops reported in the literature, in terms of the number of reversible gates, constant inputs, garbage outputs, and total logical calculation.
- Published
- 2016
18. Audio Steganography using Echo Hiding in Wavelet Domain with Pseudorandom Sequence
- Author
-
Sounak Lahiri
- Subjects
Cover (telecommunications), Steganography, Computer science, Noise (signal processing), Speech recognition, Signal, Haar wavelet, Wavelet, Algorithm, Decoding methods, Data transmission - Abstract
This paper deals with the encoding of message bits in a cover audio carrier signal. It applies the basic concepts of audio steganography in the transform domain to achieve higher efficiency in data transmission while preserving the secrecy of the transmitted information. The proposed model applies echo hiding of binary message bits to the transform coefficients obtained by applying the 2-D discrete Haar wavelet transform to the cover signal. Moreover, the algorithm uses a pseudorandom sequence to encode the data, which makes the method more efficient and prevents unauthorized decoding at any moment. The echo hiding technique makes the stego signal more immune to noise and disturbance during transmission through the channel network. The performance of the method is analyzed on the basis of the output SNR and PSNR values calculated for several test cases, as discussed later in the paper.
- Published
- 2016
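The Haar wavelet transform used above splits a signal into pairwise averages (approximation) and differences (detail); echo-hiding schemes then embed bits by perturbing transform coefficients. A minimal one-level 1-D sketch with an illustrative signal (the embedding step itself is not reproduced):

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages and details."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction from averages and details."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
avg, det = haar_step(sig)
```

Because the transform is exactly invertible, small controlled changes to the detail coefficients survive reconstruction, which is what makes the wavelet domain usable for hiding data.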
19. CATCLUS – A Proposed Algorithm for Clustering Categorical Data
- Author
-
Srikanta Kolay and Kumar S. Ray
- Subjects
Measure (data warehouse), Computer science, Data mining, Cluster analysis, Categorical variable, Algorithm - Abstract
Classification of categorical data always involves more complexity than classification of numerical data, because a firm boundary cannot be drawn in the categorical case. Researchers follow different kinds of assumptions to treat such data, and the dissimilarity measures used for numerical data cannot be applied directly. In this paper, a new clustering algorithm for categorical data is proposed. The algorithm uses a newly devised dissimilarity measure. This paper includes only the theoretical description of the proposed algorithm, with an appropriate example.
- Published
- 2016
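The entry above notes that numerical distance measures do not apply directly to categorical records. The standard baseline is simple matching dissimilarity, the fraction of attributes on which two records disagree; the paper's own measure is not described here, so this sketch uses the baseline with illustrative records.

```python
def simple_matching_dissimilarity(a, b):
    """Fraction of attributes on which two categorical records disagree."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

r1 = ("red", "round", "small")
r2 = ("red", "square", "small")
d = simple_matching_dissimilarity(r1, r2)   # disagree on 1 of 3 attributes
```

A categorical clustering algorithm such as k-modes assigns each record to the cluster representative with the smallest such dissimilarity.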
20. Design and Development of Efficient Cloud Scheduling Algorithm based on Load Balancing Analytics
- Author
-
Ch. Sridevi, S. Rama Krishna, and P. Suresh Varma
- Subjects
Computer science, Distributed computing, Cloud computing, Load balancing (computing), Turnaround time, Scheduling (computing), Software, Virtual machine, Analytics, Server, Resource allocation, The Internet, Algorithm - Abstract
Cloud computing involves sharing computing resources rather than relying on individual servers or personal devices to handle applications. Cloud computing architectures deliver software, infrastructure, storage, and technology-enabled services over the internet to people and organizations on demand. Cloud scheduling is the process of allocating resources to job requests in the form of virtual machines. In this paper we design and develop a novel, efficient cloud scheduling algorithm based on load-balancing analytics for allocating physical resources, in the form of virtual machines, to incoming job requests. We measure cloud performance metrics such as mean turnaround time and mean waiting time. The results obtained with this method are compared with traditional methods such as First Come First Serve (FCFS) and two-stage scheduling algorithms, and a considerable improvement in the performance metrics is observed. General Terms: Scheduling algorithm, resource allocation, Cloud computing, optimization, Virtual Machine.
- Published
- 2016
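The mean turnaround and waiting times used above as comparison metrics are easy to state for the FCFS baseline: turnaround is completion minus arrival, waiting is start minus arrival. A minimal sketch with illustrative (arrival, burst) job requests, not the paper's workload:

```python
def fcfs_metrics(jobs):
    """jobs: list of (arrival_time, burst_time), served first-come-first-serve.
    Returns (mean turnaround time, mean waiting time)."""
    jobs = sorted(jobs)                      # serve in arrival order
    clock = 0
    turnaround, waiting = [], []
    for arrival, burst in jobs:
        start = max(clock, arrival)
        finish = start + burst
        turnaround.append(finish - arrival)
        waiting.append(start - arrival)
        clock = finish
    n = len(jobs)
    return sum(turnaround) / n, sum(waiting) / n

# three requests: (arrival, burst)
mtt, mwt = fcfs_metrics([(0, 4), (1, 3), (2, 1)])
```

A load-balancing scheduler aims to beat these FCFS numbers by routing requests to less-loaded virtual machines instead of a single queue.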
21. Discrete Fourier Transform Analysis with Different Window Techniques Algorithm
- Author
-
Rajesh Mehra and Bharti Thakur
- Subjects
Computer science, Fast Fourier transform, Waveform, Window (computing), Harmonic wavelet transform, Algorithm, Spectral leakage, Discrete Fourier transform - Abstract
When designing digital circuits today, the most desired factors are high performance, high speed, and low cost, and the FFT is one of the most efficient ways to meet these requirements. In this paper, the authors discuss the DFT of periodic waveforms under different window techniques, computed using the FFT algorithm. The paper shows that window techniques reduce spectral leakage, and that a higher-order DFT can be realized very easily using a lower-order FFT. Different window techniques are applied to periodic waveforms, and the simulation is done using Matlab 2015.
- Published
- 2016
22. CELBT: An Algorithm for Efficient Cost based Load Balancing in Cloud Environment
- Author
-
Nitin Mishra and Nishchol Mishra
- Subjects
Computer science, Distributed computing, Real-time computing, Cloud computing, Load balancing (computing), Virtual machine, Algorithm - Abstract
Cloud computing is an interesting area of research in which multiple components, heavy hardware, and architecture take part in executing data and user processes. Maintaining different virtual machines requires data centers at different locations. A datacenter may sometimes become overburdened, causing failure or downtime; to address this, a load balancing technique is used that directs each request to an appropriate server for fast computation. Traditional techniques such as Throttled and Round Robin have drawbacks in computation time and communication cost that make them less efficient. In this paper we present CELBT (Cost Effective Load Balancing Technique), which performs the same data transfers in less computation time and at lower cost. An evaluation on the CloudAnalyst tool computes parameters such as time and cost, and the algorithm proves its effectiveness when compared with existing algorithms.
- Published
- 2016
23. A Survey of various Web Page Ranking Algorithms
- Author
-
Mayuri Shinde and Sheetal Girase
- Subjects
Information retrieval, Computer science, Hyperlink, Ranking (information retrieval), Spamming, World Wide Web, PageRank, Ranking, Web mining, Web page, The Internet, Web resource, Algorithm, Link analysis - Abstract
Identification of opinion leaders is very important in the world of the internet, because with identified opinion leaders in an application area such as knowledge-related sites, followers and other individuals can obtain valuable information more efficiently through direct communication with the opinion leader. The internet, i.e., the WWW (World Wide Web), is a huge and very popular medium of information broadcasting and communication. The WWW comprises many web structures, and within one structure millions of web resources (contents, links) may exist. Large numbers of web pages are linked to each other through hyperlinks, so graph-based techniques can be used to identify opinion leaders, i.e., techniques for ranking results so that the "best" results come first. Different algorithms are used for link analysis, i.e., for ranking web pages, such as PageRank (PR), Weighted PageRank (WPR), Hyperlink-Induced Topic Search (HITS), and Spamming Resistant Expertise Analysis and Ranking (SPEAR). This paper focuses on the study of these ranking techniques, and further shows their advantages, their limitations, and a comparison among them. General Terms: Data and Web Mining.
- Published
- 2015
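Among the link-analysis algorithms listed above, HITS assigns each page a hub score (it links to good authorities) and an authority score (good hubs link to it), computed by alternating power iteration with normalization. A minimal sketch on an illustrative three-page graph:

```python
def hits(links, iters=50):
    """Hub/authority scores by power iteration.
    links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        # authority: sum of hub scores of pages linking in
        auth = {p: sum(hub[q] for q in links if p in links.get(q, []))
                for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5
        auth = {p: v / norm for p, v in auth.items()}
        # hub: sum of authority scores of pages linked to
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

graph = {"A": ["B", "C"], "B": ["C"], "C": []}
hub, auth = hits(graph)
```

Page C, linked to by everyone, gets the top authority score, while page A, linking to everything, gets the top hub score; an opinion leader corresponds to a high-authority node.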
24. Reliability based Generator Maintenance Scheduling using Integer Coded Differential Evolution Algorithm
- Author
-
R. Balamurugan, L. Lakshminarasimman, and G. Balaji
- Subjects
Electric power system, Mathematical optimization, Optimization problem, Job shop scheduling, Planned maintenance, Computer science, Production cost, Differential evolution, Optimal maintenance, Particle swarm optimization, Algorithm, Scheduling (computing) - Abstract
In this paper, Generator Maintenance Scheduling (GMS) in a vertically integrated power system is considered. The objective of the GMS problem is to find the particular time intervals for maintenance of power generating units so as to maximize the security of the power system. The scheduling of generating units for planned preventive maintenance is formulated as a mixed-integer optimization problem that maximizes the average value of a reliability index subject to a set of nonlinear constraints. An Integer Coded Differential Evolution (ICDE) algorithm is developed to solve the GMS problem, and the Lagrange multiplier method is used to find the overall production cost of the maintenance schedule obtained with ICDE. To demonstrate the effectiveness of the proposed approach, two test systems are considered and validated by comparing the results with those of Integer Coded Particle Swarm Optimization. The test results reveal the capability of the proposed ICDE algorithm to find an optimal maintenance schedule for the generator maintenance scheduling problem.
- Published
- 2015
25. A New Static Load Balancing Algorithm in Cloud Computing
- Author
-
Abhay Kumar Agarwal and Atul Raj
- Subjects
Computer science ,business.industry ,Cloud testing ,Distributed computing ,Cloud computing ,Parallel computing ,Load balancing (computing) ,business ,Algorithm - Abstract
This paper proposes an algorithm that we name the New Static load balancing algorithm in cloud computing. The proposed algorithm uses the concepts of both the Active Monitoring Load Balancing Algorithm and the Throttled Load Balancing Algorithm. The detailed design, pseudocode and implementation of the algorithm are also presented in this paper. The results obtained (Overall Response Time and Datacenter Processing Time) are compared with those of the Throttled Load Balancing Algorithm. This comparison was done after implementing and analysing each of the existing algorithms discussed in this paper, among which the Throttled Load Balancing Algorithm was found to be the best. The remaining sections of the paper cover the introduction, related work, and conclusion. General Terms Cloud Computing, Load Balancing.
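The Throttled idea that the proposed algorithm builds on can be sketched roughly as follows: an index table tracks which virtual machines are busy, and a request goes to the first available VM, otherwise it is rejected (a real balancer would queue it). The VM count and call sequence are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the Throttled load-balancing idea (illustrative, not the
# paper's exact design): an availability table over VMs.
class ThrottledBalancer:
    def __init__(self, vm_count):
        self.available = [True] * vm_count

    def allocate(self):
        # Return the first free VM id, or -1 if all are busy.
        for vm_id, free in enumerate(self.available):
            if free:
                self.available[vm_id] = False
                return vm_id
        return -1

    def release(self, vm_id):
        self.available[vm_id] = True

lb = ThrottledBalancer(2)
a = lb.allocate()   # first VM
b = lb.allocate()   # second VM
c = lb.allocate()   # no VM free
lb.release(a)
d = lb.allocate()   # the released VM is reused
```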
- Published
- 2015
26. Number Plate Recognition (ANR) using Crimmins Complementary Hulling Algorithm
- Author
-
Kiran Sonavane
- Subjects
Noise ,Speckle pattern ,Computer science ,Salt (cryptography) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Track (rail transport) ,Algorithm ,Image (mathematics) - Abstract
The crime of vehicle theft in metro cities has risen nowadays; to track a stolen vehicle, police require the help of Automatic Number Plate Recognition (ANR). The uses of ANR are not limited to tracking stolen cars; there are many more, such as automatic car parking systems and automatic toll collection systems, but ANR systems face many implementation issues. Some ANR systems use a hardware repository to track vehicle numbers, employing a CCTV camera to capture the vehicle image and then extracting the number from that image; for this, the image has to be noise free. In the proposed method, I implemented a new approach to remove the noise from the vehicle image that is generated by the camera in low sunlight, which causes the image to contain salt-and-pepper noise. In the proposed method, the Crimmins speckle removal algorithm is used to remove this salt-and-pepper noise.
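The paper applies the Crimmins complementary hulling algorithm; as a simpler, standard baseline for the same salt-and-pepper noise (not the paper's method), a 3x3 median filter can be sketched in pure Python, with image edges handled by clamping coordinates:

```python
# 3x3 median filter: a standard salt-and-pepper baseline, sketched here
# for illustration (the paper itself uses Crimmins speckle removal).
def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            out[y][x] = window[4]  # median of the 9 samples
    return out

# A flat gray patch with one "salt" outlier: the outlier is removed.
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
clean = median_filter_3x3(noisy)
```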
- Published
- 2015
27. Compression of Noisy Images based on Sparsification using Discrete Rajan Transform
- Author
-
M. V. Subramanyam, Kethepalli Mallikarjuna, and Kodati Satya Prasad
- Subjects
Lossless compression ,Discrete wavelet transform ,business.industry ,Computer science ,Gaussian ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data compression ratio ,Speckle noise ,symbols.namesake ,Digital image ,Computer Science::Computer Vision and Pattern Recognition ,symbols ,Discrete cosine transform ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Transform coding ,Data compression ,Image compression - Abstract
Image compression is usually carried out to reduce the amount of data required to store or communicate a digital image or video. The basic idea involved in the reduction process is the removal of redundant data. Image compression exploits the fact that all images are not equally likely. In this regard, a good number of compression algorithms have been developed by researchers. As an alternative to the available traditional approaches, this paper presents the use of the Discrete Rajan Transform for sparsification and compression of noisy images. The Discrete Rajan Transform is effective in introducing sparsity in images and thereby improving compressibility, the compromise being an acceptable loss of data. In this paper, images with Gaussian, Poisson, salt-and-pepper, and speckle noise have been investigated using the proposed method, and a brief analysis is carried out in terms of perceived image quality as well as three important parameters: Peak Signal-to-Noise Ratio, Mean Squared Error and Compression Ratio. On simulation, it was observed that the DRT yielded higher-quality images than the other candidate transforms used, namely the Discrete Cosine Transform and the Discrete Wavelet Transform.
- Published
- 2015
28. Additive and Multiplicative Noise Removal by using Gradient Histogram Preservations Approach
- Author
-
Nikita Roy and Vismay Jain
- Subjects
Normalization (statistics) ,Computer science ,Image quality ,Noise reduction ,Gaussian ,Fast Fourier transform ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Iterative reconstruction ,Multiplicative noise ,symbols.namesake ,Image texture ,Histogram ,Median filter ,Computer vision ,Value noise ,Noise measurement ,Covariance matrix ,business.industry ,Shot noise ,Speckle noise ,Salt-and-pepper noise ,White noise ,Filter (signal processing) ,Covariance ,Gaussian filter ,Gradient noise ,Additive white Gaussian noise ,Gaussian noise ,Computer Science::Computer Vision and Pattern Recognition ,symbols ,Artificial intelligence ,business ,Algorithm ,Image compression - Abstract
Image denoising is a traditional yet essential issue in low-level vision. Existing denoising techniques denoise images but do not address multiplicative noise removal; as a result, image texture is not preserved and the PSNR value does not improve properly. The Gradient Histogram Preservation (GHP) algorithm is a denoising technique that preserves image quality, but at present it handles only additive noise. It cannot be applied to non-additive noise, such as multiplicative, Poisson, and signal-independent noise, and it also takes more time in calculations. Since the two kinds of noise are dissimilar in nature, it is difficult to eliminate both with a single filter. To solve this issue, in this paper a novel GHP approach is used to remove additive white Gaussian noise (AWGN) effectively. Since speckle noise is multiplicative in nature, it is converted into additive noise by a logarithmic transformation before the GHP algorithm is applied. The approach is to take a logarithmic transformation, compute the covariance matrix of the transformed data, generate random numbers with zero mean and variance/covariance c times the variance/covariance computed in the previous step, take the antilog of the normalized data, and then apply the proposed technique using the Fast Fourier Transform (FFT), a Gaussian filter, local texture content metrics, and Iterative Histogram Specification (IHS), which can denoise both additive and non-additive noise while taking less calculation time. In image processing, the FFT is used in a wide variety of applications such as image analysis, image reconstruction, image filtering and image compression; Gaussian filtering is used to blur images and remove noise. The proposed algorithm removes multiplicative noise and improves the visual quality of images.
Keywords: noise, texture, histogram specifications, sparse matrix representations, local content matrix
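The logarithmic transformation step the abstract describes rests on a simple identity: multiplicative noise (observed = signal x noise) becomes additive in the log domain, so an additive-noise denoiser such as GHP applies there. The numeric values below are illustrative:

```python
# Multiplicative noise becomes additive under a log transform:
# log(signal * noise) == log(signal) + log(noise).
import math

signal, noise = 100.0, 1.2     # illustrative intensity and speckle factor
observed = signal * noise      # multiplicative noise model

log_obs = math.log(observed)
additive_form = math.log(signal) + math.log(noise)
assert abs(log_obs - additive_form) < 1e-12  # same quantity, additive form

# After denoising in the log domain, the antilog restores intensities.
restored = math.exp(log_obs)
```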
- Published
- 2015
29. A Maximum Likelihood (ML) based OSTBC-OFDM System over Multipath Fading Channels
- Author
-
Jaggari Manasa, K. Chenna Kesava Reddy, and T Ramaswamy
- Subjects
Computer science ,Orthogonal frequency-division multiplexing ,Speech recognition ,Transmitter ,Data_CODINGANDINFORMATIONTHEORY ,Multiplexing ,Computer Science::Performance ,Space–time block code ,Rician fading ,Fading ,Algorithm ,Multipath propagation ,Decoding methods ,Computer Science::Information Theory ,Communication channel ,Rayleigh fading - Abstract
This paper proposes a space-time block-coded orthogonal frequency-division multiplexing (STBC-OFDM) scheme for frequency-selective fading channels which does not require channel knowledge at either the transmitter or the receiver, together with a generalized maximum likelihood decoding algorithm. Owing to the orthogonality of the STBC, the decoding rule reduces to a single step. The performance of the proposed system was investigated over both the Rayleigh and the Rician fading channel. Simulation results reveal that the performance of the proposed STBC communication system is near optimum.
- Published
- 2015
30. File Sharing between Peer-to-Peer using Network Coding Algorithm
- Author
-
V.R. Chirchi and U Rathod Vijay
- Subjects
Network architecture ,business.industry ,Computer science ,Distributed computing ,Network delay ,Throughput ,Network topology ,Computer Science::Digital Libraries ,Network traffic control ,Network simulation ,Shared resource ,Intelligent computer network ,File sharing ,Linear network coding ,business ,Algorithm ,Network management station ,Computer network - Abstract
Network coding is an improvement over conventional network routing that raises network throughput and provides high reliability. It allows a node to generate output messages by encoding its received messages. Peer-to-peer networks are a perfect place to apply network coding for two reasons: first, the topology of a peer-to-peer network is not fixed, so it is much easier to create a topology that suits network coding; second, every node in a peer-to-peer network is an end host, so it is easier to perform the complex operations related to network coding, such as encoding and decoding, rather than merely storing and forwarding messages. In this paper, we propose an algorithm that applies network coding to peer-to-peer file sharing, which employs a peer-to-peer network to distribute files residing on a web server or a file server. The scheme exploits a special type of network topology called a combination network. It has been proved that combination networks can achieve unbounded network coding gain, measured as the ratio of network throughput with network coding to that without. The network coding algorithm encodes a file into multiple messages and divides peers into multiple groups, with each group responsible for relaying one of the messages. The encoding algorithm is designed so that any subset of the messages can be used to decode the original file, as long as the subset is sufficiently large. To meet this requirement, we first define an algorithm that satisfies the desired property, then connect peers in the same group to flood the corresponding message, and connect peers in different groups to distribute messages for decoding. The paper considers a number of theoretical and practical scenarios where network coding or a variant is applied to peer-to-peer file sharing, with the aim of improving performance parameters such as throughput and reliability.
This paper mainly focuses on the comparative analysis of peer-to-peer file sharing using network coding algorithms. Keywords: Coding Algorithm, Peer-to-Peer Networks, Web
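The core network-coding idea the paper relies on can be shown with the classic butterfly example: a relay XORs two messages, and each receiver recovers the message it is missing by XORing with the one it already holds. The message contents below are illustrative:

```python
# Butterfly-style network coding sketch: one coded transmission serves
# two receivers that each already hold one of the original messages.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"block-A!"                 # illustrative file blocks
m2 = b"block-B!"
coded = xor_bytes(m1, m2)        # the relay forwards m1 XOR m2

# Receiver 1 holds m1 and decodes m2; receiver 2 is symmetric.
decoded_m2 = xor_bytes(coded, m1)
decoded_m1 = xor_bytes(coded, m2)
```

This is the throughput gain in miniature: without coding, the relay would need two transmissions instead of one.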
- Published
- 2015
31. Designing of Testing Framework through Finite State Automata for Object-Oriented Systems using UML
- Author
-
Sadhanan Verma and Ajay Pratap
- Subjects
Object-oriented programming ,Finite-state machine ,Computer science ,business.industry ,Programming language ,Applications of UML ,System testing ,computer.software_genre ,Software ,Test case ,Unified Modeling Language ,Use case ,Class diagram ,State diagram ,business ,computer ,Algorithm ,Test data ,computer.programming_language - Abstract
Many approaches to testing object-oriented systems (OOSs) have been proposed over the past decade. Even so, almost all large OO software specifications still contain incompleteness, inconsistency, and ambiguity. The framework can be defined using any state-based specification notation and used to derive test cases from state-based specifications; in this paper, it is demonstrated using the RSML notation. A state transition diagram (STD) derived from an RSML specification provides the complete behavior of a given OOS. System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML-based development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, and class diagrams. The goal here is to support the derivation of functional system test requirements, which will be transformed into test cases once detailed design information is available. In this paper, we describe the methodology in a practical way and illustrate it with an example, together with a framework that formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process.
- Published
- 2015
32. Maximum Clique Conformance Measure for Graph Coloring Algorithms
- Author
-
Abdulmutaleb Alzubi, Mohammad Mahbubul Hassan, and Mohammad Malkawi
- Subjects
Clique ,Theoretical computer science ,Job shop scheduling ,Computer science ,Graph theory ,Clique (graph theory) ,Clique graph ,Graph ,Greedy coloring ,Clique problem ,Graph coloring ,Algorithm ,MathematicsofComputing_DISCRETEMATHEMATICS ,Register allocation - Abstract
The Graph Coloring Problem (GCP) is an essential problem in graph theory, and it has many applications such as the exam scheduling problem, register allocation problem, timetabling, and frequency assignment. The maximum clique problem is also another important problem in graph theory and it arises in many real life applications like managing the social networks. These two problems have received a lot of attention by the scientists, not only for their importance in the real life applications, but also for their theoretical aspect. Solving these problems however remains a challenge, and the proposed algorithms for this purpose apply to rather small graphs, whereas many real life application graphs encompass hundreds or thousands of nodes. This paper presents a new measure for evaluating the efficiency of graph coloring algorithms. The new measure computes the clique conformance index (CCI) of a graph coloring algorithm. CCI measures the rate of deviation of a coloring algorithm from the maximum clique during the process of coloring a graph. The paper presents empirical measurement for two coloring algorithms proposed by the authors. General Terms Algorithms, Graph Coloring
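A minimal greedy coloring, the kind of algorithm a clique conformance index would evaluate, can be sketched as follows. Since the chromatic number is bounded below by the maximum clique size, a coloring that uses exactly that many colors on a clique deviates by zero. The vertex order and the triangle graph are illustrative assumptions:

```python
# Greedy graph coloring: each vertex takes the smallest color not used
# by its already-colored neighbors.
def greedy_coloring(adj):
    color = {}
    for v in adj:                  # fixed insertion order (illustrative)
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A triangle is a 3-clique, so any proper coloring needs at least 3 colors.
triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
coloring = greedy_coloring(triangle)
num_colors = len(set(coloring.values()))
```

Here greedy matches the clique lower bound exactly; a CCI-style measure would quantify how far a coloring strays from that bound on larger graphs.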
- Published
- 2015
33. An Efficient Duplication Record Detection Algorithm for Data Cleansing
- Author
-
Maria Anjum, Mariam Rehman, and Arfa Skandar
- Subjects
Data cleansing ,Basis (linear algebra) ,Computer science ,Variation (game tree) ,Data mining ,Focus (optics) ,Research process ,computer.software_genre ,Digital library ,Algorithm ,computer ,Term (time) - Abstract
The purpose of this research was to review, analyze and compare algorithms belonging to the empirical technique in order to suggest the most effective algorithm in terms of efficiency and accuracy. The research process was initiated by collecting the relevant research papers from the IEEE database with the query "duplication record detection". The papers were then categorized according to the different techniques proposed in the literature, and the focus was placed on the empirical technique. The papers under this technique were further analyzed to identify their algorithms, and a comparison was performed to determine the best one, i.e. DCS++. The selected algorithm was critically analyzed in order to improve its working; on the basis of its limitations, a variation of the algorithm was proposed and validated with a developed prototype. After implementing both the existing DCS++ and the proposed variation, it was found that the proposed variation produces better results in terms of efficiency and accuracy. Only algorithms under the empirical technique of duplicate record detection were considered, and the research material was gathered from a single digital library, IEEE. A restaurant dataset was selected and the results were evaluated on that dataset, which can be considered a limitation of the research. The existing DCS++ algorithm and the proposed variation were implemented in C#. It was concluded that the proposed algorithm outperforms the existing one.
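DCS++ belongs to the sorted-neighborhood family of duplicate detection: records are sorted by a key, and only records inside a sliding window are compared. The fixed-window sketch below only illustrates the family; DCS++ itself grows the window adaptively when duplicates are found, and the records and similarity test here are illustrative assumptions:

```python
# Sorted-neighborhood duplicate detection (fixed window, illustrative).
def sorted_neighborhood(records, key, window=3, similar=None):
    similar = similar or (lambda a, b: a[1] == b[1])  # naive equality test
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        # Compare only against the next (window - 1) records.
        for j in range(i + 1, min(i + window, len(ordered))):
            if similar(rec, ordered[j]):
                pairs.append((rec[0], ordered[j][0]))
    return pairs

# (id, normalized restaurant name) -- illustrative records.
records = [(1, "joes diner"), (2, "joes diner"), (3, "cafe luna")]
dups = sorted_neighborhood(records, key=lambda r: r[1])
```

Sorting brings likely duplicates near each other, so the quadratic all-pairs comparison shrinks to a linear pass, which is the efficiency gain the paper's comparison measures.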
- Published
- 2015
34. A Threshold based Algorithm to Detect Peripapillary Atrophy for Glaucoma Diagnosis
- Author
-
Jahin Majumdar
- Subjects
medicine.anatomical_structure ,Pixel ,Computer science ,Region of interest ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,medicine ,Glaucoma ,Human eye ,Peripapillary atrophy ,Fundus (eye) ,medicine.disease ,Algorithm ,Optic disc - Abstract
The presence of Peripapillary Atrophy (PPA) is one of the conditions for Glaucoma to develop. This paper is divided into three parts. The first part describes the terminology related to the diagnosis of glaucoma. The second part describes various existing algorithms to detect and segment PPA in a digital fundus retinal image, comparing their performances and contrasting their shortcomings. The third part proposes a threshold-based algorithm to detect the PPA of a human eye to aid the diagnosis of Glaucoma. The proposed algorithm calculates the red-by-green ratio for each pixel in the Region of Interest (ROI) and segments the Optic Disc (OD) from the PPA, the two regions having different pixel ratios. The algorithm can be further improved by applying sub-algorithms for false region elimination. The proposed algorithm should, theoretically, overcome most of the problems faced by the described algorithms.
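The red-by-green ratio test described above can be sketched as a per-pixel classification. The threshold value and sample pixels below are illustrative assumptions, not the paper's tuned values:

```python
# Sketch of the red-by-green ratio test: within the ROI, pixels with a
# high R/G ratio are taken as optic disc, the rest as candidate PPA.
# The threshold 1.5 is an illustrative assumption.
def classify_roi(pixels, threshold=1.5):
    labels = []
    for r, g, b in pixels:
        ratio = r / g if g else float("inf")  # guard against G == 0
        labels.append("disc" if ratio > threshold else "ppa")
    return labels

roi = [(200, 100, 50),   # strongly reddish pixel -> optic disc
       (120, 110, 90)]   # paler pixel, low R/G -> candidate PPA
labels = classify_roi(roi)
```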
- Published
- 2015
35. A Novel Evolutionary Approach to Software Cost Estimation
- Author
-
Disha Guleria and Aman Kaushik
- Subjects
Source lines of code ,Software ,Optimization problem ,Cost estimate ,business.industry ,COCOMO ,Computer science ,Computer Science::Software Engineering ,business ,Algorithm - Abstract
Human Opinion Dynamics is a novel approach to solving complex optimization problems. This paper proposes and implements Human Opinion Dynamics for tuning the parameters of the COCOMO model for Software Cost Estimation. The input is the coding size, or lines of code, and the output is effort in person-months. Mean Absolute Relative Error and Prediction are the two objectives considered for the fine tuning of parameters. The dataset considered is COCOMO. The paper demonstrates that the use of human opinion dynamics yields promising results: when compared with standard COCOMO, it gives better results.
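The quantities being tuned can be made concrete with the basic COCOMO effort formula, effort = a x KLOC^b, and the Mean Absolute Relative Error objective. The coefficients below are the standard "organic mode" constants, used here as an illustration; the paper's opinion-dynamics search would adjust a and b to minimize the error:

```python
# Basic COCOMO: effort (person-months) as a function of size in KLOC.
# a=2.4, b=1.05 are the standard organic-mode constants (illustrative).
def cocomo_effort(kloc, a=2.4, b=1.05):
    return a * (kloc ** b)

def mare(actuals, estimates):
    """Mean Absolute Relative Error, one of the two tuning objectives."""
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

effort = cocomo_effort(10)   # roughly 27 person-months for 10 KLOC
```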
- Published
- 2015
36. Intra Prediction Mode Decision for H.264
- Author
-
R B Mamatha and N Keshaveni
- Subjects
Rate–distortion optimization ,Computer science ,Digital video ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,Data mining ,AVC-Intra ,computer.software_genre ,Algorithm ,computer ,Data compression - Abstract
Digital video compression has become, over the last few decades, an integral part of the way we create and consume visual information and communicate. A robust compression technique is proposed in this paper for video. Compression is simply the reduction of irrelevant or redundant data in order to save storage space and processing time. Here the H.264/AVC (Advanced Video Coding) standard is used, in which I-frames are divided into macroblocks (MBs) and each MB is efficiently compressed using 4x4 and 16x16 blocks in H.264/AVC intra prediction. Choosing one of the 9 prediction modes for each 4x4 block with reduced time and low complexity is still a bottleneck. In this paper, a gradient-based fast intra prediction mode selection method for 4x4 and 16x16 blocks is proposed. The proposed method divides the input frames into variable block sizes based on texture and examines only a few modes, based on the gradient direction in the given slice. The best mode with minimum cost is selected using the sum of absolute differences (SAD) and rate distortion optimization (RDO). This process is carried out for all input video frames, and the compressed frames are finally combined to produce the compressed video output.
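The SAD cost at the heart of the mode decision can be sketched directly: the intra mode whose predicted block is closest to the actual block wins. The 4x4 blocks and the two candidate modes below are illustrative:

```python
# Sum of absolute differences (SAD) over a block, and a toy mode decision.
def sad(block, prediction):
    return sum(abs(a - p)
               for row_a, row_p in zip(block, prediction)
               for a, p in zip(row_a, row_p))

actual = [[10, 10, 12, 12]] * 4          # 4x4 block with a vertical pattern
pred_vertical = [[10, 10, 12, 12]] * 4   # "vertical" mode matches perfectly
pred_dc = [[11, 11, 11, 11]] * 4         # flat DC prediction

costs = {"vertical": sad(actual, pred_vertical),
         "dc": sad(actual, pred_dc)}
best_mode = min(costs, key=costs.get)    # lowest SAD wins
```

A full encoder would add a rate term (RDO) on top of this distortion cost; the gradient-based method in the paper prunes which modes get evaluated at all.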
- Published
- 2015
37. Multiple Complex Extreme Learning Machine using Holomorphic Mapping for Prediction of Wind Power Generation System
- Author
-
Akhil Pratap Singh and Ram Govind Singh
- Subjects
Artificial neural network ,Computer science ,Holomorphic function ,Feedforward neural network ,Ensemble learning ,Algorithm ,Complex number ,Generalization error ,Wind speed ,Extreme learning machine - Abstract
In this paper, a wind prediction system for wind power generation using an ensemble of multiple complex extreme learning machines (C-ELM) is proposed. The extreme learning machine is a single-layer feedforward neural network with fast learning and better generalization ability than gradient-based learning methods. C-ELM is chosen as the base classifier because it is very suitable for processing non-linear data. To use the wind data in the complex domain, the wind speed and direction are represented as a complex number. This paper uses the elegant theory of conformal mapping to find better transformations in the complex domain to enhance prediction capability. Finally, to improve the generalization ability of the prediction system and to reduce the error encountered in single-model predictors, an ensemble of multiple C-ELMs is used; each C-ELM model in the ensemble has a different activation function for its hidden-layer neurons. Performance analysis shows that the predictions generated through our method are effective when compared to other complex-valued neural network prediction systems.
- Published
- 2015
38. Medium Access Probability of Cognitive Radio Network at 1900 MHz and 2100 MHz
- Author
-
M. R. Amin, Risala Tasin Khan, and Md. Imdadul Islam
- Subjects
business.industry ,Computer science ,Nakagami distribution ,Probability density function ,Context (language use) ,Data_CODINGANDINFORMATIONTHEORY ,symbols.namesake ,Cognitive radio ,Rician fading ,Computer Science::Networking and Internet Architecture ,symbols ,Fading ,Rayleigh scattering ,Telecommunications ,business ,Algorithm ,Random variable ,Computer Science::Information Theory - Abstract
Conventionally, fading analysis of a wireless LAN or MAN concerns small-scale fading, i.e. wide fluctuation of the received signal over small variations of time and distance. In the analysis of a fading channel we treat the received signal, in power, voltage or SNR, as a random variable, and a statistical probability density function (pdf) such as Rayleigh, Rician or Nakagami-m is used to obtain the probabilities of different phenomena. Most of these pdfs are governed by two parameters: the mean and the variance of the random variable. In the recent literature the mean value is taken as constant, but in this paper we consider the mean as a slowly varying random variable that depends on the parameters of large-scale fading. The paper combines the concepts of large- and small-scale fading in analyzing the performance of a cognitive radio network in the context of medium access probability, especially at 1900 MHz and 2100 MHz.
- Published
- 2015
39. A Review of Area Efficient High Speed Multiplier Design
- Author
-
Priya Barange, Paresh Rawat, and Sunil Malviya
- Subjects
Very-large-scale integration ,Adder ,business.industry ,Computer science ,Hardware description language ,Semiconductor ,VHDL ,Carry-select adder ,Multiplier (economics) ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,Arithmetic ,business ,Algorithm ,computer ,computer.programming_language - Abstract
This paper presents a review of high-speed multiplier design, comparing the VLSI design of the Carry Look-Ahead Adder (CLAA) with the VLSI design of the Carry Select Adder (CSLA) based on an unsigned multiplier. The multiplier designs in this paper were written in VHDL (VHSIC Hardware Description Language) for unsigned data. The aim is to raise semiconductor design methodologies to higher levels of abstraction, partly to speed integration, but also to ensure that the designs are adaptable to changes in specifications or system design. For nanoscale fabrication of the multiplier, this paper proposes a 90 nm multiplier design and analyzes its performance.
- Published
- 2015
40. A Review of VLSI Structure for the Implementation of Matrix Multiplication
- Author
-
Ruchi Thakkar and Paresh Rawat
- Subjects
Very-large-scale integration ,Adder ,Computer science ,Multiplication ,Multiplier (economics) ,Image processing ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,Arithmetic ,Algorithm ,Matrix multiplication - Abstract
Matrix multiplication is the kernel operation used in many transform, image processing and digital signal processing applications. In this paper, we have studied parallel-parallel input and single output (PPI-SO), parallel-parallel input and multiple output (PPI-MO) and parallel-parallel fixed input and multiple output (PFI-MO) matrix-matrix multiplication. It is also a well-known fact that the multiplier and adder unit forms an integral part of matrix multiplication; in this regard, high-speed multipliers and adders have become the need of the day. In this paper, we have also studied a Vedic mathematics multiplier using compressors.
- Published
- 2015
41. Hardware Design and Implementation of Adaptive Canny Edge Detection Algorithm
- Author
-
Mohammad Abu Yousuf, P K Mithun Kumar, and Ferdous Hossain
- Subjects
Pixel ,Computer science ,business.industry ,Noise (signal processing) ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Gaussian filter ,symbols.namesake ,Computer Science::Computer Vision and Pattern Recognition ,symbols ,Canny edge detector ,Monochrome ,Enhanced Data Rates for GSM Evolution ,business ,Algorithm ,Computer hardware - Abstract
In this paper, a hardware system for an adaptive Canny edge detection algorithm is designed and simulated for a 128-pixel, 8-bit monochrome line-scan camera. The system is designed to detect objects as they move along a conveyor belt in a manufacturing environment; the camera observes dark objects on a light conveyor belt. The adaptive Canny algorithm is used here to increase the accuracy of the detected objects. Traditional Canny requires the two threshold values to be set manually, which causes defects on different images; this paper puts forward adaptive threshold values based on the mean and median. The output of the adaptive Canny shows high accuracy. There are multiple steps in implementing adaptive Canny. First, a Gaussian filter is used to smooth the image and remove noise. Second, the gradient magnitude is computed. Third, non-maximum suppression is applied, in which the algorithm removes pixels that are not part of an edge. Finally, hysteresis uses the two threshold values, upper and lower: a pixel whose gradient exceeds the upper threshold is marked as a strong edge, a pixel whose gradient is below the lower threshold is discarded, and pixels whose gradients lie between the two thresholds are kept as edges only where they connect to strong edges. General Terms Noise Reduction of Images, Compute Gradient Magnitude and Angle, Non-Maximum Suppression, Hysteresis Thresholding, Adaptive Canny Edge Detection Algorithms
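The adaptive step can be sketched as deriving Canny's two thresholds from image statistics rather than fixing them by hand. A common form, used here as an illustrative choice rather than the paper's exact formula, places them at fixed fractions around the median intensity:

```python
# Median-based adaptive Canny thresholds (illustrative formula: the
# paper derives its thresholds from mean and median; sigma=0.33 here
# is a commonly used fraction, not the paper's value).
def adaptive_canny_thresholds(pixels, sigma=0.33):
    ordered = sorted(pixels)
    median = ordered[len(ordered) // 2]
    lower = max(0, int((1.0 - sigma) * median))
    upper = min(255, int((1.0 + sigma) * median))
    return lower, upper

# Dark objects on a light belt: the median tracks the bright background,
# so the thresholds adapt to the scene's overall brightness.
frame = [200, 210, 205, 50, 60, 215, 208, 202]
lo, hi = adaptive_canny_thresholds(frame)
```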
- Published
- 2015
42. Probability Density Function of EMG Signals based on Hand Movements in Time and Frequency Domain
- Author
-
Payal Patial and Ramanpreet Kaur
- Subjects
Burr distribution ,Computer science ,Frequency domain ,Speech recognition ,Fast Fourier transform ,Log-logistic distribution ,Negative binomial distribution ,Probability density function ,Time domain ,Algorithm ,Communication channel - Abstract
This paper attempts to estimate the probability density function of hand movements using EMG signals. Of the several hand grasps generated from different hand movements, we analyzed Tip and Lateral. Four well-known pdfs were tested for goodness of fit: Log-Logistic (3P), Johnson, Dagum (4P) and Burr (4P). The probability density estimation was carried out in the time domain and the FFT domain, as well as in the DWT domain. It was observed that different distributions fit different hand movements, describing the samples most accurately with respect to two channels, channel 1 and channel 2. In this setup, channel 1 is placed on the upper limb and channel 2 on the lower limb as a reference channel. Although the Burr and Log-Logistic distributions, along with Dagum, were good fits for most of the data, this paper shows that the non-negative distributions (Dagum (4P) and Burr (4P)) are the better choice for estimating the Tip and Lateral hand movements. General Terms This method is used for classification of data sets via probability density functions, with clinical/engineering applications in the design of prosthetic arms and hands.
- Published
- 2015
43. Thwarting Sybil Attack using ElGamal Algorithm
- Author
-
Himanshu Gupta, Arvind Negi, Punit Sharma, and Shefali Khatri
- Subjects
business.industry ,Computer science ,Node (networking) ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Mobile ad hoc network ,Computer security ,computer.software_genre ,Sybil attack ,Wireless ,business ,Algorithm ,computer ,ElGamal encryption ,Physical security - Abstract
A MANET is an independent and infrastructure-less network comprising self-configurable mobile nodes connected via wireless links. A MANET is susceptible to various attacks because of loopholes such as dynamic topology, zero central administration and limited physical security. Among these malicious attacks is the Sybil attack, in which multiple identities are presented by a single physical node; this attack has a serious impact on network functionality. In this paper the principal focus is on preventing Sybil attacks using the ElGamal algorithm, whose concept is applied here to deal with such attacks. General terms Sybil, ElGamal, Legitimate
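The ElGamal primitive the paper builds on can be sketched over a small prime group. The parameters below are tiny illustrative values chosen for readability; a real deployment would use large safe primes and a verified generator:

```python
# Toy ElGamal encryption/decryption (illustrative parameter sizes only).
import random

p, g = 467, 2          # public prime modulus and generator (toy values)
x = 127                # receiver's private key
h = pow(g, x, p)       # receiver's public key g^x mod p

def encrypt(m, h):
    y = random.randrange(2, p - 1)          # fresh ephemeral secret
    return pow(g, y, p), (m * pow(h, y, p)) % p

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                       # shared secret (g^y)^x
    return (c2 * pow(s, p - 2, p)) % p      # divide by s via Fermat inverse

c1, c2 = encrypt(331, h)
plain = decrypt(c1, c2, x)                  # recovers the message 331
```

Binding node identities to such key pairs is what lets the network reject the extra identities a Sybil node fabricates: each identity must prove knowledge of a distinct private key.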
- Published
- 2015
44. Comparing Linear Search and Binary Search Algorithms to Search an Element from a Linear List Implemented through Static Array, Dynamic Array and Linked List
- Author
-
CK Kumbharana and Vimal P. Parmar
- Subjects
Incremental heuristic search ,Binary search algorithm ,Theoretical computer science ,Computer science ,Linked list ,computer.software_genre ,Uniform binary search ,List update problem ,Jump search ,Search algorithm ,Binary search tree ,Beam stack search ,Combinatorial search ,Beam search ,Data mining ,Self-organizing list ,computer ,Algorithm ,Analysis of algorithms ,Linear search - Abstract
Data retrieval is the most fundamental requirement for any kind of computing application, and it requires search operations to be performed over massive databases implemented with various data structures. Searching for an element in a list is a fundamental operation in computing. A number of algorithms have been developed for searching, among which linear search and binary search are the most popular. In this paper, the researcher compares both algorithms as implemented on various data structures and seeks a way to implement binary search on a linked linear list. The paper also analyzes both algorithms, to some extent, for applicability and execution efficiency, and examines a few data structures for implementing them. Finally, based on the linear and binary search algorithms, an algorithm is designed to operate on a linked linear list.
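The two algorithms being compared can be sketched side by side on a sorted static array. Linear search scans every element (O(n)); binary search halves the range each step (O(log n)), which is why it needs random access and is awkward on a plain linked list:

```python
# Linear vs binary search on a sorted array (illustrative data).
def linear_search(items, target):
    for i, v in enumerate(items):
        if v == target:
            return i
    return -1

def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # requires O(1) random access
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [3, 8, 15, 23, 42, 57]
assert linear_search(data, 23) == binary_search(data, 23) == 3
```

On a linked list, reaching `items[mid]` costs O(n) traversal, which erases binary search's advantage; that is the gap the paper's final algorithm tries to close.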
- Published
- 2015
45. Performance Analysis of Fast Block Matching Motion Estimation Algorithms
- Author
-
Nasir Mahmud Khokhar and Wail Harasani
- Subjects
Theoretical computer science ,Computer science ,Search algorithm ,Motion estimation ,MPEG-1 ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Motion vector ,Algorithm ,Encoder ,Data compression ,Block (data storage) - Abstract
Motion Estimation (ME) is an integral part of any video encoder, and a large number of Block Matching Motion Estimation (BMME) algorithms have been proposed to cope with the computational complexity and quality requirements of the ME process. It is therefore necessary to evaluate the performance of these ME algorithms under different motion activities. In this paper five well-known fast BMME algorithms are evaluated on the basis of ME time, search points, PSNR and Mean Square Error (MSE). The algorithms evaluated are used in state-of-the-art video compression standards such as MPEG-1 to MPEG-4 and H.261 to H.264. Results show that the PSNR of Diamond Search (DS) is best for all test video sequences, whereas the Hardware Modified DS requires the maximum number of search points to calculate a motion vector. Moreover, the hexagon search algorithm requires the minimum number of search points, but its PSNR is considerably lower than that of the other algorithms.
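As a baseline for the fast algorithms evaluated above, an exhaustive full-search block matcher using the sum of absolute differences (SAD) cost can be sketched as follows; the fast algorithms (diamond, hexagon, etc.) differ only in which displacements they test. The frames, block size and search range here are toy values:

```python
def sad(ref, cur, rx, ry, cx, cy, n):
    """Sum of absolute differences between the n x n block of cur at
    (cx, cy) and the candidate block of ref at (rx, ry)."""
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(n) for i in range(n))

def full_search(ref, cur, cx, cy, n, p):
    """Exhaustively test every displacement within +-p pixels and return
    the motion vector with the lowest SAD, plus that cost."""
    h, w = len(ref), len(ref[0])
    best_mv, best_cost = None, float("inf")
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx <= w - n and 0 <= ry <= h - n:
                cost = sad(ref, cur, rx, ry, cx, cy, n)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

# The 2x2 block at (0, 0) in `cur` matches the block at (1, 1) in `ref`.
ref = [[0, 0, 0, 0],
       [0, 5, 6, 0],
       [0, 7, 8, 0],
       [0, 0, 0, 0]]
cur = [[5, 6, 0, 0],
       [7, 8, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
mv, cost = full_search(ref, cur, 0, 0, 2, 2)
assert mv == (1, 1) and cost == 0
```

Full search visits all (2p+1)^2 candidates; the paper's fast algorithms trade a few search points (and sometimes PSNR) against this exhaustive cost.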
- Published
- 2015
46. Approximate String Matching Algorithms: A Brief Survey and Comparison
- Author
-
Syeda ShabnamHasan, Fareal Ahmed, and Rosina Surovi Khan
- Subjects
Theoretical computer science ,Computer science ,Ball (bearing) ,Approximate string matching ,Lipschitz continuity ,Algorithm ,Computer Science::Databases ,Distance measures ,Vector space - Abstract
Many database applications require similarity-based retrieval of stored text and/or multimedia objects. This is an area of increasing research interest in the database, data mining, information retrieval and knowledge discovery communities. This paper presents a brief survey of existing approximate string matching algorithms, primarily demonstrating three families: Brute Force, Lipschitz Embeddings and Ball Partitioning. Brute Force performs approximate string matching by computing the distance between the query object and every string stored in the database. Lipschitz Embeddings uses a far more efficient approach that embeds the stored strings in a vector space so that the distances between embedded strings approximate the actual distances. Ball Partitioning, more efficient than Brute Force but less efficient than the Lipschitz approach, performs distance-based search over an arbitrary search hierarchy. The paper compares and analyzes these three algorithms, which are suitable for approximate matching of strings stored in database text files, an issue central to similarity-based retrieval of objects. Future work could take a larger number of approximate string matching algorithms into account, widening the scope of comparison and identifying the most optimal one. General Terms Algorithms for Approximate String Matching.
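The Brute Force family described above reduces to computing a distance between the query and every stored string; a minimal sketch, using the classic Levenshtein dynamic program as the distance measure:

```python
def edit_distance(a, b):
    """Levenshtein distance: the minimum number of insertions, deletions
    and substitutions that turn string a into string b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of a's prefix
    for j in range(n + 1):
        d[0][j] = j                      # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

def brute_force_match(query, database, k):
    """Return every stored string within distance k of the query: one
    distance computation per stored string (the Brute Force family)."""
    return [s for s in database if edit_distance(query, s) <= k]

db = ["kitten", "mitten", "sitting", "flour"]
assert edit_distance("kitten", "sitting") == 3
assert brute_force_match("kitten", db, 1) == ["kitten", "mitten"]
```

The embedding and partitioning families exist precisely to avoid this one-distance-per-stored-string cost on large databases.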
- Published
- 2015
47. A New Ranking Algorithm for Search Engine: Content’s Weight based Page Ranking
- Author
-
Charanjit Kaur Swaran Singh, Arvinder Singh Kang, and Vijay Laxmi
- Subjects
Information retrieval ,Computer science ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,Rank (computer programming) ,Crawling ,computer.software_genre ,Ranking (information retrieval) ,Search engine ,Ranking ,Ranking SVM ,Web page ,Content (measure theory) ,Learning to rank ,Data mining ,Algorithm ,computer - Abstract
The objective of this paper is to propose a new ranking technique for web page rank. Different search engines use various algorithms for ranking web pages; a page ranking algorithm works only on the web page repository, i.e. the pages already indexed by the search engine. In reality, search engines work in two phases, a crawling phase and a ranking phase, and the proposed algorithm addresses the ranking phase. The paper proposes a new ranking algorithm called the Content's Weight Based Page Ranking Algorithm (CWPRA), which calculates page rank from the weight of the content related to the user query. The algorithm is expected to provide accurate and sufficient information to the user according to their need.
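The abstract does not publish the exact CWPRA formula, so the following is a hypothetical sketch of content-weight ranking: the assumed weight is simply query-term frequency normalised by page length, and the page texts and names are illustrative.

```python
def content_weight(page_text, query_terms):
    """Hypothetical content weight: total query-term frequency divided by
    page length (the paper's actual formula may differ)."""
    words = page_text.lower().split()
    if not words:
        return 0.0
    hits = sum(words.count(t.lower()) for t in query_terms)
    return hits / len(words)

def rank_pages(pages, query_terms):
    """Order an indexed repository of (url, text) pairs by descending
    content weight, i.e. the ranking phase over already-crawled pages."""
    scored = [(url, content_weight(text, query_terms)) for url, text in pages]
    return sorted(scored, key=lambda p: p[1], reverse=True)

pages = [
    ("a.example", "search engines crawl and rank pages"),
    ("b.example", "ranking ranking of pages by page rank rank"),
    ("c.example", "cooking recipes and kitchen tips"),
]
ranking = rank_pages(pages, ["rank", "ranking"])
assert ranking[0][0] == "b.example"   # most query-relevant content first
assert ranking[-1][0] == "c.example"  # unrelated page last
```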
- Published
- 2016
48. Vertically Scrambled Caesar Cipher Method
- Author
-
Asiya Abdus Salam and Ruba Mahmoud Al Salah
- Subjects
Broadcasting (networking) ,Transmission (telecommunications) ,Computer science ,business.industry ,Transfer (computing) ,Substitution (logic) ,Key (cryptography) ,Caesar cipher ,Encryption ,business ,Algorithm - Abstract
In this paper, a new technique for protected and locked broadcasting of messages is presented. The approach uses an improved version of ciphering combined with double-phase encryption. To develop this method, a simple technique of vertically selecting the text for ciphering is used: a 6 x 6 matrix is built from the characters of the text message, and if the message is lengthy, the matrix duplicates itself accordingly. The message is fitted into the matrix and the remaining cells are filled with alphabet characters. After obtaining the vertically scrambled text, substitution ciphering is applied to further ensure secure transfer of the message. The receiver is informed of the length of the text and the shift key for the decryption procedure. With this double-phase encryption, the transmission of messages becomes more secure and robust; the main goal of the proposed technique is that the information cannot be modified by any outsider or intruder.
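A minimal sketch of the two phases described above, assuming row-wise filling of a 6-column matrix, 'x' padding for the unused cells, and a standard Caesar shift for the substitution phase; the paper's exact matrix layout and padding rule are not fully specified, so these are illustrative choices:

```python
import string

def vertical_scramble(msg, width=6, pad="x"):
    """Phase 1: write the message row by row into a 6-column matrix
    (stacking extra blocks for long messages), then read it column by
    column to get the vertically scrambled text."""
    msg = msg.replace(" ", "")
    while len(msg) % width:
        msg += pad                        # fill the remaining cells
    rows = [msg[i:i + width] for i in range(0, len(msg), width)]
    return "".join(row[c] for c in range(width) for row in rows)

def caesar(text, shift):
    """Phase 2: Caesar substitution on lowercase letters with the shift key."""
    a = string.ascii_lowercase
    table = str.maketrans(a, a[shift % 26:] + a[:shift % 26])
    return text.translate(table)

def encrypt(msg, shift, width=6):
    return caesar(vertical_scramble(msg, width), shift)

assert vertical_scramble("attackatdawn") == "aatttdaacwkn"
assert encrypt("attack at dawn", 3) == "ddwwwgddfznq"
```

The receiver, knowing the original length and the shift key, reverses the Caesar shift first and then reads the matrix back row by row.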
- Published
- 2015
49. Nature Inspired Recommender Algorithms for Collaborative Web based Learning Environments
- Author
-
Lakshmi Sunil Prakash and Dinesh Kumar Saini
- Subjects
Computer science ,Process (engineering) ,Web based learning ,Similarity (psychology) ,Collaborative filtering ,Nature inspired ,Recommender system ,Algorithm - Abstract
Several recommender algorithms have been proposed based on nature inspired algorithms. In this paper an attempt is made to propose a nature inspired algorithm based architecture for recommender systems in web based learning environments. The paper also compares traditional recommender systems with nature inspired algorithm recommender systems. Collaborative filtering is proposed for personalized recommendations, with user and item attributes used as filtering parameters; the attributes and ratings of similar users drive the collaborative filtering process. Hybrid collaborative filtering over user and item attributes is proposed to alleviate the sparsity issue in recommender systems. Traditional systems are studied in detail and all their possible limitations are brought to attention.
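Plain user-based collaborative filtering of the kind the paper builds on can be sketched as follows, using cosine similarity over commonly rated items; the nature-inspired layer and the attribute-based hybrid are not reproduced, and the users, courses and ratings are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' rating dicts over common items."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def recommend(target, ratings, top_n=1):
    """Score items the target has not rated by the similarity-weighted
    ratings of the other users (basic user-based collaborative filtering)."""
    scores = {}
    for other, r in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], r)
        for item, rating in r.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

ratings = {
    "alice": {"course_a": 5, "course_b": 4},
    "bob":   {"course_a": 5, "course_b": 4, "course_c": 5},
    "carol": {"course_a": 1, "course_d": 4},
}
assert recommend("alice", ratings) == ["course_c"]
```

Sparsity shows up here as empty `common` sets; the hybrid attribute-based filtering the paper proposes is one way to still score such user pairs.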
- Published
- 2015
50. Fast Adaptive Image Encryption using Chaos by Dynamic State Variables Selection
- Author
-
Daniel Roohbakhsh and Mahdi Yaghoobi
- Subjects
business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Chaotic ,Encryption ,Disk encryption theory ,Deterministic encryption ,Probabilistic encryption ,Computer Science::Computer Vision and Pattern Recognition ,Computer Science::Multimedia ,Logistic map ,business ,Algorithm ,Computer Science::Cryptography and Security ,Key size - Abstract
A new image encryption scheme based on high-dimensional compound chaotic systems is proposed in this paper. Common chaotic image encryption runs the encryption algorithm on pixels one by one, which makes the process slow. In this paper a logistic map is used to choose and encrypt an appropriate number of pixels, which makes the algorithm swifter and, by enlarging the key length, much more reliable.
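The basic logistic-map keystream idea behind such schemes can be sketched as follows; the paper's dynamic state-variable selection and high-dimensional compound system are not reproduced, and the key values x0 and r are illustrative (r near 4 keeps the map in its chaotic regime):

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) and quantise
    each state to a byte, yielding a chaotic keystream of length n."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_encrypt(pixels, x0=0.3141, r=3.99):
    """XOR each 8-bit pixel with the keystream; running the same function
    on the ciphertext with the same key decrypts it."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

image = [12, 200, 45, 0, 255, 99]
cipher = xor_encrypt(image)
assert xor_encrypt(cipher) == image   # XOR with the same keystream inverts
assert cipher != image
```

The key space is set by the precision of x0 and r; the paper's contribution is to enlarge it further and to avoid touching every pixel individually.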
- Published
- 2015