242 results for "Salwani Abdullah"
Search Results
152. A hybrid approach for learning concept hierarchy from Malay text using artificial immune network
- Author
-
Mohd Zakree Ahmad Nazri, Azuraliza Abu Bakar, Salwani Abdullah, and Siti Mariyam Shamsuddin
- Subjects
Ontology learning ,business.industry ,Computer science ,Artificial immune system ,Conceptual clustering ,Particle swarm optimization ,Machine learning ,computer.software_genre ,Computer Science Applications ,Robustness (computer science) ,Theory of computation ,Unsupervised learning ,Artificial intelligence ,business ,Cluster analysis ,computer - Abstract
A concept hierarchy is an integral part of an ontology, but it is expensive and time-consuming to build. Motivated by this, many unsupervised learning methods have been proposed to (semi-)automatically develop a concept hierarchy. A significant contribution is Guided Agglomerative Hierarchical Clustering (GAHC), which relies on linguistic patterns (i.e., hypernyms) to guide the clustering process. However, GAHC still relies on contextual features to build the concept hierarchy, so data sparsity remains an issue. Artificial Immune Systems are known for robustness, noise tolerance and adaptability. We therefore propose an extension of GAHC that hybridises it with an Artificial Immune Network (aiNet), which we call Guided Clustering and aiNet for Learning Concept Hierarchy (GCAINY). In this paper, GCAINY is tested with two parameter settings: a baseline setting taken from the literature, and a setting obtained by automatic parameter tuning using Particle Swarm Optimization (PSO). The effectiveness of GCAINY is evaluated on three data sets. For further validation, GCAINY is compared with GAHC, with statistical tests showing that GCAINY increases the quality of the induced concept hierarchy. The results also reveal that the parameter values found by PSO produce significantly better concept hierarchies than the baseline parameters. We conclude that the proposed approach is well suited to the field of ontology learning.
- Published
- 2010
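The parameter tuning described in the abstract above uses standard global-best PSO. A minimal textbook sketch in Python (this is a generic illustration, not the paper's implementation; the two-dimensional toy objective and its optimum at (0.3, 0.8) are invented stand-ins for aiNet parameters):

```python
import random

def pso(objective, bounds, n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over the box `bounds` with global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp back into the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical objective: x = (suppression threshold, clone rate), assumed optimum (0.3, 0.8).
best, best_val = pso(lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.8) ** 2,
                     bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting, `objective` would instead evaluate the quality of the concept hierarchy induced under a given aiNet parameter vector.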
153. Population Initialisation Methods for Fuzzy Job-Shop Scheduling Problems: Issues and Future Trends
- Author
-
Iman Mousa Shaheed, Salwani Abdullah, and Syaimak Abdul Shukor
- Subjects
0209 industrial biotechnology ,education.field_of_study ,Future studies ,General Computer Science ,Job shop scheduling ,Operations research ,Computer science ,Population ,General Engineering ,02 engineering and technology ,Fuzzy logic ,Current analysis ,Scheduling (computing) ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,General Agricultural and Biological Sciences ,education - Abstract
Scheduling job shops in a real-world manufacturing environment is a complex task that involves many interacting components, solutions and approaches. Fuzzy Job-Shop Scheduling Problems (Fuzzy JSSPs) are most commonly addressed by population-based meta-heuristic algorithms. These algorithms usually derive near-optimal solutions within reasonable computational times, typically in two main steps: initialisation followed by improvement. Numerous theoretical studies have pointed out that a meta-heuristic's performance is strongly affected by the performance of its initialisation method. The main purpose of this paper is to understand the existing trends and open issues in population initialisation for Fuzzy JSSP research by examining the published articles, and to provide comprehensive insight into, and future directions for, these methods. The paper therefore reviews and classifies the existing literature on Fuzzy JSSPs and analyses the performance of the initialisation methods used, in order to identify their possible limitations. Previous work outlines three main methods for generating initial solutions: random-based, priority-rule-based, and heuristic methods. The present analysis shows that heuristic-based initialisation remains underused in the Fuzzy JSSP domain despite its success in the crisp JSSP domain, in particular its ability to generate a high-quality initial population containing optimal or near-optimal solutions. Furthermore, this paper identifies probable gaps and reveals several performance limitations in the existing methods, which call for the development of alternatives. Promising suggestions for future studies are also provided, which may lead to new heuristic initialisation methods that overcome the existing shortcomings.
- Published
- 2018
154. A Survey on Proactive, Active and Passive Fault Diagnosis Protocols for WSNs: Network Operation Perspective
- Author
-
Houbing Song, Salwani Abdullah, Nabil Alrajeh, Amjad Mehmood, and Mithun Mukherjee
- Subjects
Computer science ,network operation ,active ,02 engineering and technology ,lcsh:Chemical technology ,Fault (power engineering) ,Computer security ,computer.software_genre ,Biochemistry ,Network operations center ,Article ,Field (computer science) ,Analytical Chemistry ,Resource (project management) ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:TP1-1185 ,Electrical and Electronic Engineering ,wireless sensor networks ,Instrumentation ,passive ,Wireless network ,020208 electrical & electronic engineering ,Perspective (graphical) ,proactive ,020206 networking & telecommunications ,fault diagnosis ,Atomic and Molecular Physics, and Optics ,Software deployment ,computer ,Wireless sensor network - Abstract
Although wireless sensor networks (WSNs) have been the object of research focus for the past two decades, fault diagnosis in these networks has received little attention. Fault diagnosis is an essential requirement for wireless networks, and especially for WSNs, because of their ad-hoc nature, deployment requirements and resource limitations. In this paper we therefore survey fault diagnosis from the perspective of network operations; to the best of our knowledge, this is the first survey from such a perspective. We cover the proactive, active and passive fault diagnosis schemes that have appeared in the literature to date, highlighting the advantages and limitations of each scheme. In addition to illuminating the details of past efforts, this survey reveals new research challenges and strengthens our understanding of the field of fault diagnosis.
- Published
- 2018
155. A tabu-based large neighbourhood search methodology for the capacitated examination timetabling problem
- Author
-
Barry McCollum, Edmund K. Burke, Salwani Abdullah, Moshe Dror, and Samad Ahmadi
- Subjects
Marketing ,Mathematical optimization ,021103 operations research ,Operations research ,Computer science ,Strategy and Management ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Neighbourhood search ,Tabu search ,Graph ,Management Information Systems ,Scheduling (computing) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Timetabling problem - Abstract
Neighbourhood search algorithms are often the most effective known approaches for solving partitioning problems. In this paper, we consider the capacitated examination timetabling problem as a partitioning problem and present an examination timetabling methodology based upon the large neighbourhood search algorithm originally developed by Ahuja and Orlin, which searches a very large neighbourhood of solutions using graph-theoretical algorithms implemented on a so-called improvement graph. We present a tabu-based large neighbourhood search in which the improvement moves are kept in a tabu list for a certain number of iterations. Drawing on the Ahuja–Orlin methodology combined with tabu lists, we have developed an effective examination timetabling solution scheme, which we evaluated on capacitated benchmark data sets from the literature. The capacitated problem includes the consideration of room capacities and, as such, represents an issue of particular importance in real-world situations. We compare our approach against other methodologies that have appeared in the literature over recent years. Our computational experiments indicate that the approach produces the best known results on a number of these benchmark problems.
- Published
- 2007
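The tabu mechanism described above, holding recent moves in a tabu list for a number of iterations, can be sketched on a toy uncapacitated instance (a generic illustration, not the paper's improvement-graph algorithm; the conflict matrix, cost function and tenure are assumptions):

```python
from collections import deque
import random

def tabu_timetable(conflicts, n_slots, iters=100, tenure=4, seed=1):
    """Toy tabu search: assign exams to timeslots, minimising the number of
    conflicting exam pairs sharing a slot (room capacities omitted)."""
    rng = random.Random(seed)
    n = len(conflicts)
    slot = [rng.randrange(n_slots) for _ in range(n)]

    def cost(s):
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if conflicts[i][j] and s[i] == s[j])

    best, best_cost = slot[:], cost(slot)
    tabu = deque(maxlen=tenure)            # recently forbidden (exam, slot) moves
    for _ in range(iters):
        candidates = []
        for e in range(n):
            for t in range(n_slots):
                if t != slot[e] and (e, t) not in tabu:
                    trial = slot[:]
                    trial[e] = t
                    candidates.append((cost(trial), e, t))
        if not candidates:                 # everything tabu: stop (no aspiration here)
            break
        c, e, t = min(candidates)          # best admissible move, even if worsening
        tabu.append((e, slot[e]))          # forbid moving exam e back for `tenure` steps
        slot[e] = t
        if c < best_cost:
            best, best_cost = slot[:], c
    return best, best_cost

# Four exams whose conflicts form a cycle: two slots suffice for a clash-free timetable.
C = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
assignment, clashes = tabu_timetable(C, n_slots=2)
```

Accepting the best admissible move even when it worsens the cost, while the tabu list blocks immediate reversals, is what lets the search escape local optima.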
156. Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory
- Author
-
Mafarja Majdi, Salwani Abdullah, and Najmeh S. Jaddi
- Subjects
Fuzzy Logic ,Memetic Algorithms ,Rough Set Theory ,Record to Record Algorithm ,Attribute Reduction ,Great Deluge Algorithm - Abstract
One of the global combinatorial optimization problems in machine learning is feature selection, which is concerned with removing irrelevant, noisy, and redundant data while preserving the meaning of the original data. Attribute reduction in rough set theory is an important feature selection method. Since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs) that combine a genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, respectively, to identify a good balance between local search and genetic search. To verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
- Published
- 2015
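One of the two local searches named in the abstract above is based on the great deluge algorithm, whose defining rule accepts a neighbour if it beats the current solution or clears a steadily rising "water level". A minimal sketch (a generic illustration, not the paper's fuzzy-controlled variant; the bit-vector encoding, toy fitness and rain rate are assumptions):

```python
import random

def great_deluge(fitness, n_bits, iters=2000, rain=0.002, seed=2):
    """Toy great-deluge search over attribute subsets encoded as bit vectors
    (maximisation form: the water level rises rather than falls)."""
    rng = random.Random(seed)
    sol = [rng.randint(0, 1) for _ in range(n_bits)]
    cur = fitness(sol)
    best, best_fit = sol[:], cur
    level = cur                              # initial water level
    for _ in range(iters):
        nb = sol[:]
        nb[rng.randrange(n_bits)] ^= 1       # flip one attribute in or out
        f = fitness(nb)
        if f >= cur or f >= level:           # great-deluge acceptance rule
            sol, cur = nb, f
            if f > best_fit:
                best, best_fit = nb[:], f
        level += rain                        # raise the level each iteration
    return best, best_fit

# Toy fitness: agreement with a known 'ideal' attribute subset (invented for the demo).
target = [1, 0, 1, 0, 1, 0, 1, 0]
fit = lambda s: sum(a == b for a, b in zip(s, target))
found, score = great_deluge(fit, len(target))
```

Early on, the low level admits worsening moves (diversification); as it rises, acceptance tightens toward pure hill climbing, which is the balance the memetic hybrids above exploit.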
157. Comparisons between artificial neural networks and fuzzy logic models in forecasting general examinations results
- Author
-
Razali Yaakob, Salwani Abdullah, and Rusmizi Ab Ghani
- Subjects
Artificial neural network ,business.industry ,Computer science ,Mathematics education ,Artificial intelligence ,business ,Machine learning ,computer.software_genre ,computer ,Outcome (game theory) ,Fuzzy logic ,Backpropagation ,Data modeling - Abstract
MARA Junior Science College (MRSM) Lenggong is one of the educational institutes under Majlis Amanah Rakyat (MARA). Given its students' current academic performance and the entry requirement of 6 A's in the Penilaian Menengah Rendah (PMR, now known as PT3), there should rationally be no reason for failure to achieve excellent results in the Sijil Pelajaran Malaysia (SPM). However, every time the results are announced, the average school achievement grade (GPS) does not meet the performance goals of an average grade of 1.00 for PMR and below 2.00 for SPM, even though the college has been in operation for 10 years. This research therefore aimed to identify the factors that influence students' academic performance. Early prediction is one strategy for improving student performance. Neural network and fuzzy logic models are used to produce accurate predictions based on three factors, namely demography, academics and co-curricular activities, as well as a combination of all three. The data sample comprises demographic, academic and co-curricular information for the 2008 to 2010 SPM candidates of MRSM Lenggong. The prediction outcomes of the neural network model show that the academic factor influences students' academic performance, with a prediction accuracy of around 93.65%. Meanwhile, the fuzzy logic model gives a different result, indicating that students' academic performance is also influenced by the demographic factor, with an accuracy of 87.00%. Although different techniques yield different results, the combination of demographic and academic factors clearly provides a solid basis for identifying students' present and future academic performance.
- Published
- 2015
158. Performance analysis of a finite radon transform in OFDM system under different channel models
- Author
-
Farrah Salwani Abdullah, Rashid A. Fayadh, M. S. Anuar, F. Malek, and Sameer A. Dawood
- Subjects
Orthogonal frequency-division multiplexing ,Fast Fourier transform ,Data_CODINGANDINFORMATIONTHEORY ,Discrete Fourier transform ,Computer Science::Performance ,symbols.namesake ,Additive white Gaussian noise ,Fourier transform ,Modulation ,Bit error rate ,symbols ,Electronic engineering ,Fading ,Computer Science::Information Theory ,Mathematics - Abstract
In this paper, a class of discrete Radon transforms, namely the Finite Radon Transform (FRAT), is proposed as a modulation technique for Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver, replacing the conventional phase-shift and quadrature-amplitude mappings usually used with standard OFDM based on the Fast Fourier Transform (FFT), in a way that increases the orthogonality of the system. The Fourier-domain approach was found to be the most suitable way of obtaining the forward and inverse FRAT, and this structure leads to a natural realization on top of conventional FFT-OFDM. It is shown that this application increases orthogonality significantly, owing to the use of the Inverse Fast Fourier Transform (IFFT) twice, namely in the data mapping and in the sub-carrier modulation, and to the use of an efficient algorithm for determining the FRAT coefficients called the optimal ordering method. The proposed approach was tested and compared with conventional OFDM over an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a multi-path frequency-selective fading channel. The results show that the proposed system improves the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI) compared with the conventional OFDM system.
- Published
- 2015
159. Intelligent Double Treatment Iterative Algorithm for Attribute Reduction Problems
- Author
-
Yahya Z. Arajy, Salwani Abdullah, and Saif Kifah
- Subjects
Iterative method ,business.industry ,Stability (learning theory) ,Neighbourhood (graph theory) ,computer.software_genre ,Machine learning ,Reduction (complexity) ,Set (abstract data type) ,Benchmark (computing) ,Data mining ,Rough set ,Artificial intelligence ,business ,computer ,Selection (genetic algorithm) ,Mathematics - Abstract
Attribute reduction is a combinatorial optimization problem in data mining that aims to find minimal reducts from a large set of attributes. The problem is exacerbated when the number of instances is large. This paper therefore presents a double treatment iterative improvement algorithm with intelligent selection over a composite neighbourhood structure to solve attribute reduction problems and obtain near-optimal reducts. The algorithm works iteratively, accepting only improved solutions. The proposed approach has been tested on a set of 13 benchmark datasets taken from the University of California, Irvine (UCI) machine learning repository, in line with the state-of-the-art methods. The 13 datasets were chosen for their differences in size and complexity in order to test the stability of the proposed algorithm. The experimental results show that the proposed approach produces competitive results on the tested datasets.
- Published
- 2015
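The core loop described above, iterative improvement that accepts only better solutions while choosing among several neighbourhood moves, can be sketched generically. The reduct-style cost function, the two move operators and the success-weighted operator selection below are illustrative assumptions, not the paper's exact "double treatment" scheme:

```python
import random

def composite_iterative_improvement(cost, sol, moves, iters=1500, seed=3):
    """Iterative improvement over a composite neighbourhood: each step picks
    a move operator (weighted by its past success) and accepts only improvements."""
    rng = random.Random(seed)
    score = [1.0] * len(moves)                  # adaptive operator scores
    c = cost(sol)
    for _ in range(iters):
        k = rng.choices(range(len(moves)), weights=score)[0]
        nb = moves[k](sol[:], rng)
        nc = cost(nb)
        if nc < c:                              # accept strictly improved solutions only
            sol, c = nb, nc
            score[k] += 1.0                     # reward the operator that improved
    return sol, c

# Reduct-flavoured toy: keep a required core of attributes, drop everything else.
CORE = {0, 3, 5}
def reduct_cost(s):                             # subset size + heavy penalty per missing core attribute
    return sum(s) + 100 * sum(1 for i in CORE if not s[i])

def flip_random(s, rng):                        # neighbourhood 1: toggle one attribute
    s[rng.randrange(len(s))] ^= 1
    return s

def drop_random(s, rng):                        # neighbourhood 2: remove one selected attribute
    picked = [i for i, b in enumerate(s) if b]
    if picked:
        s[rng.choice(picked)] = 0
    return s

start = [1] * 10                                # begin with all attributes selected
reduct, final_cost = composite_iterative_improvement(
    reduct_cost, start, [flip_random, drop_random])
```

Because only improving moves are accepted, operators that keep producing improvements are sampled more often, which is the "intelligent selection" idea in miniature.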
160. Short-Term Non-ionizing 2.45 GHz WBAN RF Exposure Does not Affect Human Physiological Measures and Cognitive Performance Exposed by Wearable Textile Monopole Antenna
- Author
-
Noor Anida Abu Talib, Fairul Afzal Ahmad Fuad, Fareq Malek, Hasliza A. Rahim, Ping Jack Soh, Farrah Salwani Abdullah, Che Muhammad Nor Che Isa, and Nurbaizatul Hisham
- Subjects
business.industry ,Body area network ,Electronic engineering ,Wearable computer ,Medicine ,Radio frequency ,Effects of sleep deprivation on cognitive performance ,Affect (psychology) ,business ,Non-ionizing radiation ,Monopole antenna ,Biomedical engineering ,Cognitive test - Abstract
This paper presents a novel study evaluating the non-ionizing effect of 2.45 GHz Wireless Body Area Network (WBAN) radiofrequency (RF) electromagnetic field (EMF) exposure on human physiological parameters and cognitive performance. The study tests the hypothesis that exposure to 2.45 GHz WBAN radio electromagnetic fields may affect human physiological parameters and cognitive performance. Twenty healthy volunteers took part in the test and were exposed to 2.45 GHz WBAN RF radiation emitted by a planar textile monopole antenna. Physiological measures of body temperature, systolic blood pressure, diastolic blood pressure and heart rate were obtained, along with cognitive performance outcomes. The results indicate that no significant difference was observed across the three sessions (pre-exposure, exposure and post-exposure) for any physiological parameter (p > 0.05). Insufficient evidence was found to indicate a difference in the means of the cognitive tests between the WBAN RF and sham exposure sessions (p's > 0.05).
- Published
- 2015
161. Nature-Inspired Chemical Reaction Optimisation Algorithm for Handling Nurse Rostering Problem
- Author
-
Salwani Abdullah and Yahya Z. Arajy
- Subjects
education.field_of_study ,Operations research ,business.industry ,Process (engineering) ,media_common.quotation_subject ,Population ,Workload ,Variable (computer science) ,Nursing ,Nurse scheduling problem ,Medicine ,Quality (business) ,business ,education ,Metaheuristic ,Selection (genetic algorithm) ,media_common - Abstract
The nurse rostering problem is addressed in this work with the aim of improving the organization of hospital duties and elevating health care by enhancing the quality of the decision-making process. Nurse rostering is a difficult and complex problem with a large number of demands and requirements that conflict with hospital workload constraints in terms of employee work regulations and personal preferences. We propose a variable population-based metaheuristic, chemical reaction optimisation (CRO), to solve the nurse rostering problem (NRP) instances of the First International Nurse Rostering Competition (2010). The CRO algorithm features an adaptive search procedure that systematically chooses between an intensive search strategy and a diversification strategy, based on specific criteria, in order to reach the best solution. Computational results were obtained on 30 instances spanning three complexity levels, based on real-world constraints.
- Published
- 2015
162. Performance enhancement of rake-receiver using continuous and discrete wavelet transforms analysis through NLOS propagation
- Author
-
Hilal A. Fadhil, Farah Salwani Abdullah, F. Malek, Rashid A. Fayadh, and Sameer A. Dawood
- Subjects
Discrete wavelet transform ,Engineering ,Wavelet ,Continuous wavelet ,business.industry ,Rake ,Electronic engineering ,Wavelet transform ,Rake receiver ,Data_CODINGANDINFORMATIONTHEORY ,business ,Continuous wavelet transform ,Wavelet packet decomposition - Abstract
In this paper, three levels of analysis and synthesis filter banks were used to create coefficients for a continuous wavelet transform (CWT) and a discrete wavelet transform (DWT). The main property of these wavelet transform schemes is their ability to reconstruct the transmitted signal across a log-normal fading channel with additive white Gaussian noise (AWGN). A wireless rake-receiver structure was chosen as the main application, to reduce inter-symbol interference (ISI) and to minimize noise. In this work, a new rake-receiver scheme is proposed for receiving indoor multi-path components (MPCs) in ultra-wideband (UWB) wireless communication systems. The rake receivers consist of a continuous wavelet rake (CW-rake) and a discrete wavelet rake (DW-rake), and they use the huge bandwidth (7.5 GHz) reported by the Federal Communications Commission (FCC). The indoor channel model chosen for analysis in this research was the non-line-of-sight (NLOS) channel model (CM4, from 4 to 10 meters), used to show the behavior ...
- Published
- 2015
163. Composites Based on Rice Husk Ash/Polyester for Use as Microwave Absorber
- Author
-
Ee Meng Cheng, Wei-Wen Liu, Mohd Asri Jusoh, Farah Salwani Abdullah, Liyana Zahid, Muhammad Nadeem Iqbal, Nur Sabrina Md Noorpi, F. H. Wee, Yeng Seng Lee, Nurhakimah Mohd Mokhtar, Fareq Malek, and Ahmad Makmom Abdullah
- Subjects
Polyester ,Materials science ,Loss factor ,Reflection loss ,Dielectric ,Composite material ,Absorption (electromagnetic radiation) ,Husk ,Microwave absorber ,Microwave - Abstract
This paper studies the dielectric properties and microwave absorption of rice husk ash/polyester (RHAP) composites. RHAP composites were fabricated with different weight ratios (40–80 wt.%) of rice husk ash (RHA) loaded into polyester. A rectangular waveguide transmission-line method was used to measure the dielectric properties of the RHAP composites. The RHAP samples were designed and simulated with different thicknesses and percentages of RHA loading using Computer Simulation Technology Microwave Studio (CST-MWS) software. The materials, their dielectric property measurements and the microwave absorption results over the 12.4–18 GHz (Ku-band) frequency range are discussed. The dielectric properties of the composites increase with increasing RHA wt.% loading. RHAP samples of different thicknesses were investigated in the Ku-band. The microwave absorption reached 99.84% using 80% RHA loading with a thickness of 15 mm at 13.81 GHz, showing that the RHAP composites are applicable as microwave absorbers.
- Published
- 2015
164. Wireless rake-receiver using adaptive filter with a family of partial update algorithms in noise cancellation applications
- Author
-
Farah Salwani Abdullah, F. Malek, Jaafar A. Aldhaibani, Rashid A. Fayadh, Hilal A. Fadhil, and M. K. Salman
- Subjects
Adaptive filter ,Least mean squares filter ,Recursive least squares filter ,Interference (communication) ,Computer science ,business.industry ,Electronic engineering ,Wireless ,Rake receiver ,Direct-sequence spread spectrum ,business ,Algorithm ,Active noise control - Abstract
For high-data-rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) affect the performance of the wireless system. In this paper, a rake receiver is presented for signals spread by the direct sequence spread spectrum (DS-SS) technique. An adaptive rake-receiver structure is described in which the receiver tap weights are adjusted using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection (APA) algorithms, to support weak signals through noise cancellation and to mitigate the interferences. To improve the convergence speed and reduce the computational complexity of these algorithms, the well-known approach of partial-update (PU) adaptive filtering was employed, using algorithms such as sequential-partial, periodic-partial, M-max-partial, and selective-partial updates (SPU) in the proposed system. The simulation results of bit error ra...
- Published
- 2015
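The noise-cancellation role of the LMS filter in the record above can be illustrated with a minimal full-update LMS canceller (a textbook sketch, not the paper's partial-update rake receiver; the sinusoidal signal, 2-tap noise channel, step size and tap count are assumed for the demo):

```python
import math
import random

def lms_cancel(desired, reference, n_taps=4, mu=0.05):
    """Textbook LMS noise canceller: adapt FIR weights on the noise
    reference so the filter output tracks the noise embedded in `desired`;
    the error signal then approximates the wanted signal."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(desired)):
        # most recent n_taps reference samples (zero-padded at the start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # filter's noise estimate
        e = desired[n] - y                         # error = cleaned signal estimate
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]  # LMS weight update
        out.append(e)
    return out

# Synthetic demo (assumed data, not the paper's UWB setup): a slow sinusoid
# corrupted by noise that reaches the primary sensor through a 2-tap channel.
rng = random.Random(0)
N = 4000
signal = [math.sin(2 * math.pi * n / 200) for n in range(N)]
noise = [rng.uniform(-1, 1) for _ in range(N)]
desired = [signal[n] + 0.9 * noise[n] + 0.4 * (noise[n - 1] if n else 0.0)
           for n in range(N)]
cleaned = lms_cancel(desired, noise)
```

A partial-update variant, as surveyed in the paper, would adapt only a subset of `w` per iteration to cut the per-sample cost; the update equation is otherwise the same.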
165. The Effects of Government and State Ownership on Dividends
- Author
-
Siti Salwani Abdullah, Ahmad Ridhuwan Abdullah, and Razman Hafifi Redzuan
- Subjects
General Engineering - Abstract
This study investigates the relationship between government ownership and the dividend policy of Malaysian listed companies. Specifically, the objective is to examine whether government and state ownership influence dividend payout and dividend per share. The study used a sample of 400 randomly chosen companies. Two dependent variables were used as proxies for dividends, namely dividend per share (DPS) and the dividend payout ratio (DPR), while eight government agencies (EPF, LTH, KWAP, LTAT, MKD, KNB, PNB and STATE) represented government ownership. Since dividends are truncated, the Tobit model was utilized to examine the effect of government ownership. The findings show no relationship between government ownership and dividends when DPS is used as the dependent variable. However, when DPR is used, the results show that government ownership can affect dividend policy. Furthermore, privately funded government agencies were found to be more likely to affect dividends. This result indicates that these agencies influence the proportion of earnings distributed, or the amount payable, rather than the dividend per unit of shares.
- Published
- 2014
166. Solving feature selection problem using intelligent double treatment iterative composite neighbourhood structure algorithm
- Author
-
Saif Kifah, Yahya Z. Arajy, and Salwani Abdullah
- Subjects
Computer science ,Structure algorithm ,020207 software engineering ,Feature selection ,02 engineering and technology ,computer.software_genre ,Computer Science Applications ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,Computer Vision and Pattern Recognition ,Metaheuristic ,computer ,Neighbourhood (mathematics) - Abstract
Attribute reduction is one of the main contributions of rough set theory (RST); it tries to discover all possible reducts by eliminating redundant attributes while maintaining the information of the problem at hand. In this paper, we propose a meta-heuristic methodology, a double treatment iterative improvement algorithm with intelligent selection of a composite neighbourhood structure, to solve attribute reduction problems and obtain near-optimal reducts. The algorithm works iteratively, accepting only improved solutions. The proposed approach has been tested on a set of 13 benchmark datasets taken from the University of California Irvine (UCI) machine learning repository and compared with state-of-the-art methods. The thirteen datasets were chosen for their differences in size and complexity in order to test the stability of the proposed algorithm. The experimental results demonstrate that the proposed approach produces competitive results on the tested datasets.
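The core loop described in the abstract, iteratively proposing attribute removals and accepting only improved (smaller, information-preserving) subsets, can be sketched with a simplified rough-set dependency measure. This is an illustrative simplification, not the authors' exact algorithm or neighbourhood structure; the toy dataset and the single removal move are assumptions.

```python
import random
from collections import defaultdict

def dependency(data, labels, subset):
    """Rough-set dependency degree: fraction of objects whose value
    combination on `subset` determines the class unambiguously."""
    groups = defaultdict(set)
    for row, y in zip(data, labels):
        groups[tuple(row[a] for a in subset)].add(y)
    consistent = sum(1 for row, y in zip(data, labels)
                     if len(groups[tuple(row[a] for a in subset)]) == 1)
    return consistent / len(data)

def iterative_improvement_reduct(data, labels, n_attrs, iters=200, seed=1):
    """Iterative improvement sketch: start from the full attribute set,
    repeatedly try dropping one attribute, and accept the move only if
    the dependency degree is preserved (i.e. no information is lost)."""
    rng = random.Random(seed)
    full = dependency(data, labels, list(range(n_attrs)))
    best = set(range(n_attrs))
    for _ in range(iters):
        cand = set(best)
        cand.discard(rng.choice(sorted(cand)))   # drop one attribute
        if not cand or dependency(data, labels, sorted(cand)) < full:
            continue                             # reject: information lost
        best = cand                              # accept the smaller reduct
    return sorted(best)

# Toy data: the class is a & b; attribute 3 is constant and redundant.
rows = [(a, b, a ^ b, 0) for a in (0, 1) for b in (0, 1)]
labels = [a & b for (a, b, _, _) in rows]
reduct = iterative_improvement_reduct(rows, labels, 4)
```

On this toy problem the search shrinks the full four-attribute set down to a two-attribute reduct that still classifies every object consistently.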
- Published
- 2017
167. A Robust Intelligent Construction Procedure for Job-Shop Scheduling
- Author
-
Salwani Abdullah and Majid Abdolrazzagh-Nezhad
- Subjects
Mathematical optimization ,education.field_of_study ,Job shop scheduling ,Heuristic (computer science) ,Computer science ,media_common.quotation_subject ,Population ,Initialization ,Approximation algorithm ,Computer Science Applications ,Control and Systems Engineering ,Benchmark (computing) ,Quality (business) ,Point (geometry) ,Electrical and Electronic Engineering ,education ,media_common - Abstract
This paper proposes a robust intelligent technique to produce an initial population close to the optimal solution for the job-shop scheduling problem (JSSP). The proposed technique is designed around a new heuristic based on an intelligent jump from the primal point of the solution space to a better one, using a new classification of jobs on machines named mPlates-Jobs. The main advantages of the proposed technique are its capability to produce an initial population of any size, the proximity of that population to the optimal solution, and its ability to place the best-known solution in the generated initial population for benchmark datasets. A comparison of the experimental results with those of Kuczapski's, Yahyaoui's, Moghaddam's, and Giffler and Thompson's initialization techniques, considered four state-of-the-art initialization techniques, confirms these advantages. Based on the quality of its results, the proposed intelligent initialization technique can be considered a fast and intelligent heuristic algorithm for solving the JSSP. DOI: http://dx.doi.org/10.5755/j01.itc.43.3.3536
- Published
- 2014
168. Comparison the performance of OFDM system based on multiwavelet transform with different modulation schemes
- Author
-
Farah Salwani Abdullah, Sameer A. Dawood, M. S. Anuar, Rashid A. Fayadh, F. Malek, and Mohd Hariz Mohd Fakri
- Subjects
Quadrature modulation ,Computer science ,Orthogonal frequency-division multiplexing ,business.industry ,Intersymbol interference ,Modulation ,Pulse-position modulation ,Bit error rate ,Electronic engineering ,Fading ,Telecommunications ,business ,Multipath propagation ,Amplitude and phase-shift keying ,Quadrature amplitude modulation ,Phase-shift keying - Abstract
Orthogonal frequency division multiplexing (OFDM) is a very attractive approach for high-data-rate transmission in a multipath fading environment, which leads to intersymbol interference (ISI). In this paper, two steps are used to improve the error-rate performance of the OFDM system. First, the discrete multiwavelet transform (DMWT) is proposed instead of the fast Fourier transform (FFT) to obtain high orthogonality between subcarriers and hence reduce ISI. Second, the performance of DMWT-based OFDM is examined for different modulation schemes, such as M-PSK (M-ary phase shift keying) and M-QAM (M-ary quadrature amplitude modulation), to achieve a high data rate. The simulation results demonstrate that, for high-capacity data-rate transmission, M-QAM modulation performs better than M-PSK modulation.
- Published
- 2014
169. Selective update euclidean direction search algorithm for adaptive filtering in indoor wireless rake-receiver
- Author
-
Rashid A. Fayadh, Farah Salwani Abdullah, Sameer A. Dawood, F. Malek, and Hilal A. Fadhil
- Subjects
business.industry ,Computer science ,Spread spectrum ,Adaptive filter ,Search algorithm ,Bit error rate ,Wireless ,Rake receiver ,Fading ,business ,Algorithm ,Computer Science::Information Theory ,Computer network ,Active noise control - Abstract
For high-data-rate propagation with indoor obstacles in wireless ultra-wideband (UWB) communication systems, signal fading and noise degrade the performance of the wireless system. In this paper, a rake receiver is presented in which the signal is spread using the time-hopping spread-spectrum (TH-SS) technique. The adaptive filter in the rake-receiver structure adjusts the receiver tap weights using the Euclidean Direction Search (EDS) algorithm to support weak signals through noise cancellation. To reduce the convergence time and the computational complexity of previous algorithms, the well-known selective-update (SU) adaptive filtering approach was employed in the proposed system, with variants such as sequential-selective, periodic-selective, and M-max-selective updates. Simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) are illustrated to show that the selective-update algorithms have nearly comparable performance to full-update adaptive filters.
- Published
- 2014
170. Performance analysis of multi-carrier code division multiple access system based on over-sampling multiwavelet transform over wireless channel
- Author
-
Sameer A. Dawood, F. Malek, Farrah Salwani Abdullah, M. S. Anuar, and Rashid A. Fayadh
- Subjects
Discrete wavelet transform ,Computer science ,Code division multiple access ,Orthogonal frequency-division multiplexing ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Fast Fourier transform ,Data_CODINGANDINFORMATIONTHEORY ,MIMO-OFDM ,Computer Science::Performance ,symbols.namesake ,Additive white Gaussian noise ,Channel state information ,Computer Science::Networking and Internet Architecture ,symbols ,Electronic engineering ,Fading ,Computer Science::Information Theory - Abstract
In this paper, an over-sampling inverse discrete multiwavelet transform (IDMWT) is suggested as the modulation stage, instead of the inverse fast Fourier transform (IFFT), in the realization of a multicarrier code division multiple access (MC-CDMA) system. The suggested scheme was applied to MC-CDMA over an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a frequency-selective fading channel. Simulation results showed that the proposed method gives better bit error rate (BER) performance than the traditional MC-CDMA model based on the fast Fourier transform (FFT) and MC-CDMA based on the discrete wavelet transform (DWT).
- Published
- 2014
171. Electromagnetic algorithm for tuning the structure and parameters of neural networks
- Author
-
Nasser R. Sabar, Ayad Turky, and Salwani Abdullah
- Subjects
Physical neural network ,Probabilistic neural network ,Recurrent neural network ,Artificial neural network ,Computer science ,business.industry ,Time delay neural network ,Deep learning ,Feedforward neural network ,Artificial intelligence ,Stochastic neural network ,business ,Algorithm - Abstract
The electromagnetic algorithm is a population-based meta-heuristic that imitates the attraction and repulsion of sample points. In this paper, we propose an electromagnetic algorithm to simultaneously tune the structure and parameters of a feed-forward neural network. Each solution in the electromagnetic algorithm contains both the design structure and the parameter values of the neural network; this solution is later used by the neural network to represent its configuration. The classification accuracy returned by the neural network represents the quality of the solution. The performance of the proposed method is verified on well-known classification benchmarks and compared against the latest methodologies in the literature. Empirical results demonstrate that the proposed algorithm obtains competitive results when compared to the best-known results in the literature.
- Published
- 2014
172. Variable Neighbourhood Iterated Improvement Search Algorithm for Attribute Reduction Problems
- Author
-
Saif Kifah, Salwani Abdullah, and Yahya Z. Arajy
- Subjects
Mathematical optimization ,Theoretical computer science ,Jump search ,Search algorithm ,Iterated local search ,business.industry ,Iterated function ,Beam search ,Local search (optimization) ,Best-first search ,business ,Hill climbing ,Mathematics - Abstract
Attribute reduction is one of the main contributions of Rough Set Theory (RST); it tries to find all possible reducts by eliminating redundant attributes while maintaining the information of the problem at hand. In this paper, we propose a meta-heuristic approach called the Variable Neighbourhood Iterated Improvement Search (VNS-IIS) algorithm for attribute reduction. It combines variable neighbourhood search with an iterated search algorithm, where two local search algorithms (a random iterated local search and a sequential iterated local search) are employed in a parallel strategy. In VNS-IIS, an improved solution will always be accepted. The proposed method has been tested on 13 well-known datasets from the UCI machine learning repository. Experimental results show that VNS-IIS obtains competitive results in terms of minimal reducts when compared with other approaches in the literature.
- Published
- 2014
173. Difference loss tangent layer microwave absorber effect absorption in X-band frequency
- Author
-
F. H. Wee, Ee Meng Cheng, Yeng Seng Lee, Farrah Salwani Abdullah, Muhammad Nadeem Iqbal, F. Malek, Wei-Wen Liu, and Z. Y. Liyana
- Subjects
Materials science ,business.industry ,X band ,Astrophysics::Cosmology and Extragalactic Astrophysics ,Microwave engineering ,computer.software_genre ,Microwave absorber ,Simulation software ,Optics ,Dissipation factor ,business ,Absorption (electromagnetic radiation) ,Layer (electronics) ,computer ,Microwave - Abstract
This paper presents the effect of different layers of microwave absorbing materials (MAM) on microwave absorption performance. The multilayer microwave absorbers were simulated using Computer Simulation Technology (CST) Microwave Studio. The absorption of the different MAM layers was investigated, and the microwave-frequency absorption of different layer arrangements was compared.
- Published
- 2013
174. Anechoic characteristics of a metal backed anechoic agro-waste for EMC applications
- Author
-
F. Malek, Farrah Salwani Abdullah, Yeng Seng Lee, N. F. M. Yusof, Muhammad Nadeem Iqbal, and Liyana Zahid
- Subjects
Anechoic chamber ,Transmission line ,Computer science ,EMI ,C band ,business.industry ,Acoustics ,Electronic engineering ,Electromagnetic compatibility ,Microelectronics ,business ,Electrical impedance ,Electromagnetic interference - Abstract
Electromagnetic interference (EMI) is a serious threat to modern, unprotected microelectronics-based integrated systems. Various techniques are implemented to protect mission-critical and sensitive systems against EMI threats. One of the most important is the use of absorbing material to suppress EMI at the sources and/or receptors to ensure electromagnetic compatibility (EMC). Rice husk is an agro-waste that has been identified in recent years as a naturally available, cheaper anechoic material. This paper presents investigations, based on transmission line theory, in the C-band frequency range from 4 to 8 GHz. The main goal was to investigate the potential use of a single layer of the agro-waste for a no-reflection (perfectly matched) condition. We fabricated samples by mixing rice husk powder (RHP) with a commercially available, easy-to-apply, non-toxic glue, and measured the complex permittivity. These measured values were then used to determine the perfectly impedance-matched condition, and finally the material was simulated using CST MWS software.
- Published
- 2013
175. Taguchi-Based Parameter Designing of Genetic Algorithm for Artificial Neural Network Training
- Author
-
Abdul Razak Hamdan, Salwani Abdullah, and Najmeh Sadat Jaddi
- Subjects
Meta-optimization ,Series (mathematics) ,Artificial neural network ,business.industry ,Computer science ,Population-based incremental learning ,Computer Science::Neural and Evolutionary Computation ,Machine learning ,computer.software_genre ,Set (abstract data type) ,Taguchi methods ,Genetic algorithm ,Artificial intelligence ,Time series ,business ,computer - Abstract
A number of properties of Artificial Neural Networks (ANNs) make them suitable for many applications, such as time series prediction. However, the lack of a training model that finds a globally optimal set of weights is a disadvantage in some real-world problems. The genetic algorithm is an optimization procedure that excels at exploring a search space in an intelligent manner. In this paper we present a genetic-based algorithm to optimize the weights and biases of an ANN, and we tune the parameters of the genetic algorithm using the Taguchi method. To test the method, two standard time series prediction problems are employed. The results are compared to methods in the literature, and the comparison shows the superiority of the proposed method.
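The idea of evolving ANN weights with a genetic algorithm can be sketched on a tiny network. This is a minimal illustration of GA-based weight training, not the paper's model: the network size, operators, and the XOR task are assumptions, and the Taguchi tuning of the GA parameters is omitted.

```python
import numpy as np

def mlp_forward(w, X):
    """Tiny 2-2-1 tanh network; w packs all 9 weights and biases
    (this layout is an illustrative choice)."""
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def ga_train(X, y, pop_size=60, gens=150, mut_sigma=0.3, seed=0):
    """GA over the weight vector: tournament selection, uniform
    crossover, Gaussian mutation; fitness is network MSE."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0, 1, (pop_size, 9))
    mse = lambda w: np.mean((mlp_forward(w, X) - y) ** 2)
    fit = np.array([mse(w) for w in pop])
    history = [fit.min()]
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)       # tournament of 2
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            mask = rng.random(9) < 0.5                   # uniform crossover
            child = np.where(mask, p1, p2)
            child = child + rng.normal(0, mut_sigma, 9)  # Gaussian mutation
            children.append(child)
        pop = np.array(children)
        fit = np.array([mse(w) for w in pop])
        history.append(fit.min())
    return pop[fit.argmin()], history

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
best_w, history = ga_train(X, y)
```

In the paper's setting, pop_size, mut_sigma, and the crossover rate would themselves be set via a Taguchi orthogonal-array experiment rather than fixed by hand as here.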
- Published
- 2013
176. Comparison between Record to Record Travel and Great Deluge Attribute Reduction Algorithms for Classification Problem
- Author
-
Majdi Mafarja and Salwani Abdullah
- Subjects
Reduction (complexity) ,Local optimum ,Computer science ,Simulated annealing ,Optimisation algorithm ,Rough set ,Data mining ,Great Deluge algorithm ,computer.software_genre ,Algorithm ,computer ,Great deluge - Abstract
In this paper, two single-solution-based meta-heuristic methods for attribute reduction are presented. The first is based on a record-to-record travel algorithm and the second on a Great Deluge algorithm; the two methods are coded as RRT and m-GD, respectively. Both are deterministic optimisation algorithms whose structures are inspired by, and resemble, the Simulated Annealing algorithm, while differing in how they accept worse solutions. Moreover, they belong to the same family of meta-heuristic algorithms that try to avoid becoming stuck in local optima by accepting non-improving neighbours. The reducts obtained from both algorithms were passed to ROSETTA, and the classification accuracy and number of generated rules are reported. Computational experiments confirm that RRT and m-GD are able to select the most informative attributes, which leads to higher classification accuracy.
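The difference between the two acceptance rules compared in the abstract can be sketched on a toy integer minimisation. The driver, deviation D, and water-level schedule below are illustrative assumptions, not the paper's attribute-reduction setup.

```python
import random

def minimise(f, x0, accept, steps=400, seed=1):
    """Generic single-solution search over the integers: propose a +/-1
    neighbour, let the acceptance rule decide, track the record (best)."""
    rng = random.Random(seed)
    x = best = x0
    for t in range(steps):
        cand = x + rng.choice([-1, 1])
        if accept(f(cand), f(best), t):
            x = cand
            if f(x) < f(best):
                best = x
    return best

f = lambda x: (x - 17) ** 2   # toy cost; minimum at x = 17

# Record-to-record travel: accept any candidate within a fixed
# deviation D of the record cost (D is an illustrative choice).
D = 5.0
best_rrt = minimise(f, 0, lambda c, record, t: c < record + D)

# Great deluge: accept any candidate below a water level that falls
# linearly over time (level and decay rate are illustrative choices).
best_gd = minimise(f, 0, lambda c, record, t: c <= 300.0 - 0.7 * t)
```

RRT's band follows the best cost found so far, while the deluge level falls on a fixed schedule regardless of the record; both rules admit limited uphill moves, which is how they escape local optima.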
- Published
- 2013
177. Hybridizing Meta-heuristics Approaches for Solving University Course Timetabling Problems
- Author
-
Arwa Alqudsi, Khalid Shaker, Hamid A. Jalab, and Salwani Abdullah
- Subjects
Set (abstract data type) ,Mathematical optimization ,Computer science ,Benchmark (computing) ,Neighbourhood (graph theory) ,Metaheuristic ,Curriculum ,Tabu search ,Variety (cybernetics) ,Course (navigation) - Abstract
In this paper we present a combination of two meta-heuristics, namely great deluge and tabu search, for solving the university course timetabling problem. This problem arises when assigning a set of courses to specific timeslots and rooms within a working week, subject to a variety of hard and soft constraints. Essentially, a set of hard constraints must be satisfied in order to obtain a feasible solution, while satisfying as many of the soft constraints as possible. The algorithm is tested on two collections of datasets: eleven enrolment-based benchmark datasets (representing one large, five medium and five small problems) and curriculum-based datasets used and developed from the International Timetabling Competition (ITC2007 UD2) problems. A new strategy is introduced to control the application of a set of neighbourhood structures using tabu search and great deluge. The results demonstrate that our approach produces solutions with lower penalties on all the small and medium problems in the eleven enrolment-based datasets, and comparable results on the curriculum-based datasets, with lower penalties on several instances, when compared against other techniques from the literature.
- Published
- 2013
178. Comparison between compactness and connectedness criteria in data clustering
- Author
-
Abdelaziz I. Hammouri and Salwani Abdullah
- Subjects
Fuzzy clustering ,Information Systems and Management ,Applied Mathematics ,Single-linkage clustering ,Correlation clustering ,Constrained clustering ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Determining the number of clusters in a data set ,010104 statistics & probability ,ComputingMethodologies_PATTERNRECOGNITION ,CURE data clustering algorithm ,Consensus clustering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,0101 mathematics ,Cluster analysis ,computer ,Mathematics ,Information Systems - Abstract
Data clustering is the first step in data mining. It aims at finding homogeneous groups of objects based on the similarity and dissimilarity of their attributes. Most existing clustering methods are based on a single criterion to measure the goodness of clusters and, in most cases, are not suitable for datasets with different characteristics. In this study, the biogeography-based optimisation (BBO) and great deluge (GD) algorithms are combined to address data clustering as a single-objective optimisation problem; two versions of the proposed approach, employing two different clustering criteria as the objective function, have been investigated on fourteen 2D synthetic benchmark datasets. The quality of the clusters obtained by both versions is insufficient with respect to an external evaluation function (F-measure). Thus, the data clustering problem is better tackled with multi-objective clustering algorithms.
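The two kinds of clustering criteria named in the title, compactness and connectedness, can be sketched as objective functions on 2D points. These are generic formulations (the connectedness penalty follows the spirit of the MOCK criterion); they are illustrative assumptions, not necessarily the exact functions used in the paper.

```python
import math

def centres(points, labels):
    """Mean point of each cluster."""
    cs = {}
    for l in set(labels):
        mem = [p for p, li in zip(points, labels) if li == l]
        cs[l] = (sum(p[0] for p in mem) / len(mem),
                 sum(p[1] for p in mem) / len(mem))
    return cs

def compactness(points, labels):
    """Sum of squared distances to the assigned cluster centre
    (lower is better); favours tight, spherical clusters."""
    cs = centres(points, labels)
    return sum(math.dist(p, cs[l]) ** 2 for p, l in zip(points, labels))

def connectedness(points, labels, k=3):
    """Each of a point's k nearest neighbours lying in a *different*
    cluster adds 1/rank (lower is better); favours locally coherent
    clusters of arbitrary shape."""
    pen = 0.0
    for i, p in enumerate(points):
        near = sorted((math.dist(p, q), j)
                      for j, q in enumerate(points) if j != i)
        for rank, (_, j) in enumerate(near[:k], start=1):
            if labels[j] != labels[i]:
                pen += 1.0 / rank
    return pen

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
good = [0, 0, 0, 1, 1, 1]      # the two natural blobs
bad = [0, 1, 0, 1, 0, 1]       # blobs split across clusters
```

On this toy data both criteria prefer the natural two-blob labelling, but on irregular cluster shapes they disagree, which is exactly why the study concludes a multi-objective treatment is preferable.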
- Published
- 2016
179. A Differential Evolution Algorithm for the University course timetabling problem
- Author
-
Arwa Hatem, Salwani Abdullah, and Khalid Shaker
- Subjects
Set (abstract data type) ,Mathematical optimization ,Computational complexity theory ,Simple (abstract algebra) ,Differential evolution ,Mutation (genetic algorithm) ,Convergence (routing) ,Benchmark (computing) ,Evolutionary computation ,Mathematics - Abstract
The university course timetabling problem is known to be NP-hard. It is a complex problem whose size can become huge due to limited resources (e.g. the number of rooms, their capacities, and the availability of lecturers) and the requirements on these resources. It involves assigning a given number of events to a limited number of timeslots and rooms under a given set of constraints; the objective is to satisfy the hard constraints and minimize the violation of soft constraints. In this paper, a Differential Evolution (DE) algorithm is proposed. The DE algorithm relies on the mutation operation to reduce convergence time while reducing the penalty cost of the solution. The proposed algorithm is tested on eleven benchmark datasets (representing one large, five medium and five small problems). Experimental results show that our approach generates competitive results when compared with previously available approaches. Possible extensions of this simple approach are also discussed.
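The DE machinery the abstract relies on (mutation, crossover, greedy replacement) can be sketched on a continuous toy problem. This is the canonical DE/rand/1/bin scheme, not the paper's timetabling-specific encoding; the population size, F, CR, and the sphere objective are illustrative assumptions.

```python
import random

def de_minimise(f, dim, bounds, pop_size=20, F=0.6, CR=0.9, gens=150, seed=0):
    """Canonical DE/rand/1/bin sketch: mutant v = a + F*(b - c),
    binomial crossover, greedy one-to-one replacement."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)              # at least one mutant gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            tc = f(trial)
            if tc <= cost[i]:                    # greedy selection
                pop[i], cost[i] = trial, tc
    i_best = min(range(pop_size), key=cost.__getitem__)
    return pop[i_best], cost[i_best]

sphere = lambda x: sum(v * v for v in x)
x_best, c_best = de_minimise(sphere, dim=5, bounds=(-5.0, 5.0))
```

For timetabling, the continuous vector would be decoded into event-to-timeslot/room assignments and f would be the weighted constraint-violation penalty.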
- Published
- 2012
180. Preface
- Author
-
Abdul Razak Hamdan, Mohamed Rawidean Mohd Kassim, Mohd Zakree Ahmad Nazri, Zulaiha Ali Othman, Siti Mariyam Shamsuddin, Azuraliza Abu Bakar, Fazel Famili, and Salwani Abdullah
- Subjects
Computer science ,Data mining ,computer.software_genre ,computer - Published
- 2012
181. Modified great deluge for attribute reduction in rough set theory
- Author
-
Majdi Mafarja and Salwani Abdullah
- Subjects
Scheme (programming language) ,media_common.quotation_subject ,Process (computing) ,Great Deluge algorithm ,computer.software_genre ,Set (abstract data type) ,Reduction (complexity) ,Benchmark (computing) ,Quality (business) ,Data mining ,Rough set ,computer ,Algorithm ,computer.programming_language ,media_common ,Mathematics - Abstract
Attribute reduction can be defined as the process of selecting a minimal subset of attributes (based on rough set theory as a mathematical tool) from an original set with the least loss of information. In this work, a modified great deluge algorithm is employed on attribute reduction problems, where the search space is divided into three regions. In each region, the water level is updated using a different scheme based on the quality of the current solution, instead of the linear mechanism used in the original great deluge algorithm. The proposed approach is tested on 13 standard benchmark datasets and obtains promising results when compared to state-of-the-art approaches.
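The quality-dependent level update described in the abstract can be sketched on a toy minimisation: the water level drops at a different rate depending on how far the current solution sits below it. The three region thresholds and decay rates here are invented for the sketch and are not the paper's scheme.

```python
import random

def modified_great_deluge(f, x0, steps=900, seed=3):
    """Great-deluge sketch with a quality-dependent water-level update:
    far-below-the-surface solutions trigger a fast drop, near-surface
    ones a gentle drop (all constants are illustrative)."""
    rng = random.Random(seed)
    x = best = x0
    level = f(x0) * 1.2
    for _ in range(steps):
        cand = x + rng.choice([-1, 1])
        if f(cand) <= level:                 # classic deluge acceptance
            x = cand
            if f(x) < f(best):
                best = x
        gap = level - f(x)                   # how far below the water we are
        if gap > 0.5 * level:
            level *= 0.95                    # region 1: far below, drop fast
        elif gap > 0.1 * level:
            level *= 0.99                    # region 2: moderate drop
        else:
            level *= 0.999                   # region 3: near the surface
    return best

f = lambda x: (x - 12) ** 2
best = modified_great_deluge(f, 0)
```

Compared with the original linear decay, this adapts the pressure on the search: the level drops quickly while there is slack and relaxes when the solution is already close to the boundary.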
- Published
- 2011
182. Gravitational search algorithm with heuristic search for clustering problems
- Author
-
Zalinda Othman, Abdolreza Hatamlou, and Salwani Abdullah
- Subjects
Mathematical optimization ,Search algorithm ,law ,Population-based incremental learning ,Canopy clustering algorithm ,Beam search ,A* search algorithm ,Best-first search ,Min-conflicts algorithm ,Algorithm ,FSA-Red Algorithm ,Mathematics ,law.invention - Abstract
In this paper, we present an efficient algorithm for cluster analysis based on gravitational search and a heuristic search algorithm. In the proposed algorithm, called GSA-HS, the gravitational search algorithm is used to find a near-optimal solution to the clustering problem; in the next step, a heuristic search algorithm improves this initial solution by searching around it. Four benchmark datasets are used to evaluate the presented algorithm and compare its performance with two other well-known clustering algorithms: k-means and the particle swarm optimization algorithm. The results show that the proposed algorithm finds high-quality clusters in all the tested datasets.
- Published
- 2011
183. Gene selection in microarray data from multi-objective perspective
- Author
-
Salwani Abdullah and Shahram Sabzevari
- Subjects
Truncation selection ,Computational complexity theory ,business.industry ,Computer science ,Dimensionality reduction ,Pareto principle ,Feature selection ,computer.software_genre ,Machine learning ,Multi-objective optimization ,Gene chip analysis ,Feature (machine learning) ,Artificial intelligence ,Data mining ,business ,computer - Abstract
Microarray technology provides a platform for studying the expression levels of thousands of genes simultaneously, but its high dimensionality and noisy nature force the use of dimensionality reduction techniques. Among these, feature selection seems more favourable because it preserves feature semantics. Feature selection is called gene selection when applied to genetic data. Gene selection objectives are inherently manifold, which makes it a proper candidate for multi-objective optimization. There are three different ways to deal with fitness evaluation in the multi-objective literature. Among these three, the Pareto-based approach seems to deliver the most promising advantages to biologists, but it has not received much attention so far, probably due to its computational complexity. The intention of this paper is to provide an insight into the gene selection problem from a multi-objective perspective. Although covering all proposed methods is impossible, the algorithms discussed here are enough to show the common trend in multi-objective gene selection in microarray data.
- Published
- 2011
184. Robust start for population-based algorithms solving job-shop scheduling problems
- Author
-
Majid Abdolrazzagh Nezhad and Salwani Abdullah
- Subjects
education.field_of_study ,Mathematical optimization ,Schedule ,Sequence ,Job shop scheduling ,Computer science ,Path (graph theory) ,Population ,Benchmark (computing) ,Initialization ,Approximation algorithm ,education ,Algorithm - Abstract
Most methods for solving the job-shop scheduling problem (JSSP) are population-based, and one strategy to reduce the time to reach the optimal solution is to produce an initial population that has a suitable distribution over the solution space, contains some points settled near the optimal solution, and is generated in the shortest possible time. But since the JSSP is one of the most difficult NP-complete problems and its solution space is complex, most previous researchers have preferred random methods or priority rules for producing the initial population. In this paper, by mapping each schedule to a unique sequence-of-jobs-on-machines (SJM) matrix, we propose the novel concept of plates, redefine and adapt the concepts of tail and head paths, and design evaluator functions between the SJM matrix and its corresponding schedule aimed at identifying gaps in the obtained schedule; on this basis, we propose three novel initialization procedures. The proposed procedures were run on 73 benchmark datasets and their results compared with existing initialization procedures and even with some approximation algorithms for solving the JSSP. Based on this comparison, the proposed procedures show a significant advantage both in the quality of the generated points and in the time needed to produce them. More interestingly, on some datasets the best-known solution appears in the produced initial population.
- Published
- 2011
185. Optimisation model of selective cutting for Timber Harvest Planning in Peninsular Malaysia
- Author
-
Abdul Razak Hamdan, Roslan Ismail, Munaisyah Abdullah, and Salwani Abdullah
- Subjects
Agriculture ,business.industry ,Environmental science ,Tropics ,Agricultural engineering ,business ,Profit (economics) - Abstract
A Timber Harvest Planning (THP) model is used to determine which forest areas are to be harvested in different time periods, with the objective of maximizing profit subject to harvesting regulations. Various THP models have been developed in Western countries based on an optimisation approach to generate an optimal or feasible harvest plan; however, similar studies have gained less attention in tropical countries. This study therefore proposes an optimisation model of THP that reflects selective cutting in Peninsular Malaysia. The model was tested on seven blocks consisting of a total of 636 trees of different sizes and species. We found that the optimisation approach generates a selective timber harvest plan with higher volume and less damage.
- Published
- 2011
186. Preface
- Author
-
Abdul Razak Hamdan, Fazel A. Famili, Graham Kendall, Hafiz Mohd. Sarim, Ibrahim H. Osman, Salwani Abdullah, and Zalinda Othman
- Abstract
2011 3rd Conference on Data Mining and Optimization, DMO 2011, 28 June 2011 through 29 June 2011, Putrajaya
- Published
- 2011
187. Application of Gravitational Search Algorithm on Data Clustering
- Author
-
Salwani Abdullah, Abdolreza Hatamlou, and Hossein Nezamabadi-pour
- Subjects
Determining the number of clusters in a data set ,Fuzzy clustering ,Data stream clustering ,CURE data clustering algorithm ,Correlation clustering ,Canopy clustering algorithm ,Constrained clustering ,Data mining ,computer.software_genre ,Cluster analysis ,computer ,Mathematics - Abstract
Data clustering, the process of grouping similar objects in a set of observations, is one of the main tasks in data mining and is used in many areas and applications, such as text clustering and information retrieval, data compaction, fraud detection, biology, computer vision, data summarization, and marketing and customer analysis. The well-known k-means algorithm, widely applied to the clustering problem, has the drawbacks of depending on the initial state of the centroids and of possibly converging to a local optimum rather than the global optimum. A data clustering algorithm based on the gravitational search algorithm (GSA) is proposed in this research. In this algorithm, candidate solutions to the clustering problem are created randomly and then interact with one another via Newton's law of gravity to search the problem space. The performance of the presented algorithm is compared with three other well-known clustering algorithms, k-means, the genetic algorithm (GA), and the particle swarm optimization algorithm (PSO), on four real, standard datasets. Experimental results confirm that the GSA is a robust and viable method for data clustering.
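The gravitational interaction described in the abstract, fitter agents acquire larger masses and pull the others toward them under a decaying gravity constant, can be sketched as a generic minimiser. The constants (g0, decay schedule, bounds) are illustrative choices, not the paper's settings.

```python
import numpy as np

def gsa_minimise(f, dim, n=30, iters=150, g0=10.0, seed=0):
    """Minimal GSA sketch: masses from normalised fitness, Newton-style
    attraction, stochastic inertia, decaying gravity constant."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        i0 = int(fit.argmin())
        if fit[i0] < best_f:                         # track global best
            best_f, best_x = float(fit[i0]), X[i0].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)   # fitter -> heavier
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-5.0 * t / iters)            # gravity decays
        A = np.zeros_like(X)
        for i in range(n):
            for j in range(n):
                if i != j:
                    diff = X[j] - X[i]
                    dist = np.linalg.norm(diff) + 1e-9
                    A[i] += rng.random() * G * M[j] * diff / dist
        V = rng.random((n, dim)) * V + A             # stochastic inertia
        X = X + V
    return best_x, best_f

sphere = lambda x: float(np.sum(x * x))
x_best, f_best = gsa_minimise(sphere, dim=2)
```

For clustering, each agent's position would encode the coordinates of all cluster centroids and f would be the within-cluster sum of squared errors.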
- Published
- 2011
188. Data Clustering Using Big Bang–Big Crunch Algorithm
- Author
-
Masumeh Hatamlou, Abdolreza Hatamlou, and Salwani Abdullah
- Subjects
Mathematical optimization ,education.field_of_study ,Optimization problem ,Big Crunch ,Computer science ,Population ,Big bang big crunch algorithm ,Cost approach ,Physics::History of Physics ,Crunch ,General Relativity and Quantum Cosmology ,Physics::Popular Physics ,Representative point ,Cluster analysis ,education - Abstract
The Big Bang-Big Crunch (BB-BC) algorithm is an optimization method based on one of the theories of the evolution of the universe, namely the Big Bang and Big Crunch theory. In the Big Bang phase, candidate solutions to the optimization problem are randomly generated and spread all over the search space. In the Big Crunch phase, the randomly distributed candidate solutions are drawn into a single representative point via a centre of population or minimal-cost approach. This paper presents a novel BB-BC-based approach to data clustering. The simulation results indicate the applicability and potential of this algorithm for data clustering.
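The bang-then-crunch cycle described above can be sketched as a generic minimiser: the bang scatters candidates around the current centre with a shrinking radius, and the crunch contracts them to a fitness-weighted centre of mass. The radius schedule and the 1/cost weighting are illustrative choices, not the paper's clustering formulation.

```python
import random

def bb_bc_minimise(f, dim, n=30, iters=60, span=10.0, seed=0):
    """Big Bang-Big Crunch sketch (minimisation); all constants
    are illustrative assumptions."""
    rng = random.Random(seed)
    centre = [0.0] * dim
    best_x, best_f = None, float("inf")
    for t in range(1, iters + 1):
        radius = span / t                          # Big Bang: shrinking scatter
        pop = [[c + rng.uniform(-radius, radius) for c in centre]
               for _ in range(n)]
        costs = [f(x) for x in pop]
        for x, c in zip(pop, costs):
            if c < best_f:                         # track global best
                best_f, best_x = c, list(x)
        weights = [1.0 / (c + 1e-12) for c in costs]
        total = sum(weights)
        centre = [sum(w * x[d] for w, x in zip(weights, pop)) / total
                  for d in range(dim)]             # Big Crunch: centre of mass
    return best_x, best_f

# Shifted sphere: optimum at (2, 2, 2), away from the initial centre.
f = lambda x: sum((v - 2.0) ** 2 for v in x)
x_best, f_best = bb_bc_minimise(f, dim=3)
```

For data clustering, the candidate vector would encode the cluster centroids and f the clustering cost (e.g. sum of squared errors).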
- Published
- 2011
189. Hybrid Artificial Bee Colony Search Algorithm Based on Disruptive Selection for Examination Timetabling Problems
- Author
-
Salwani Abdullah and Malek Alzaqebah
- Subjects
education.field_of_study ,Disruptive selection ,business.industry ,Computer science ,Population ,Artificial bee colony algorithm ,Search algorithm ,Simulated annealing ,Local search (optimization) ,Artificial intelligence ,business ,education ,Bees algorithm ,Premature convergence - Abstract
Artificial Bee Colony (ABC) is a population-based algorithm employing a natural metaphor based on the foraging behaviour of honey bee swarms. In the ABC algorithm, there are three categories of bees: employed bees select a random solution and apply a random neighbourhood structure (exploration), onlooker bees choose a food source according to a selection strategy (exploitation), and scout bees search for new food sources (scouting). In this paper, we first introduce a disruptive selection strategy for the onlooker bees, to improve population diversity and counter premature convergence, and add a local search (simulated annealing) to attain a balance between the exploration and exploitation processes. Furthermore, a self-adaptive strategy for selecting neighbourhood structures is added to further enhance local intensification capability. Experimental results show that the hybrid ABC with the disruptive selection strategy outperforms the ABC algorithm alone when tested on examination timetabling problems.
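The disruptive selection strategy for the onlooker bees can be sketched as a probability assignment: unlike standard fitness-proportionate selection, selection mass is proportional to the deviation from the mean fitness, so both very good and very poor food sources are favoured over average ones. This is a generic formulation of disruptive selection; the example values are assumptions.

```python
import random

def disruptive_probs(fitnesses):
    """Disruptive selection sketch: probability proportional to the
    absolute deviation from the mean fitness, which keeps both tails
    of the population alive and so preserves diversity."""
    mean = sum(fitnesses) / len(fitnesses)
    dev = [abs(f - mean) for f in fitnesses]
    total = sum(dev) or 1.0           # guard: all-equal population
    return [d / total for d in dev]

def pick(probs, rng):
    """Roulette-wheel draw over a probability vector."""
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

fits = [1.0, 5.0, 5.0, 5.0, 9.0]      # mean = 5: average sources get ~0 mass
probs = disruptive_probs(fits)
chosen = pick(probs, random.Random(0))
```

Here the three average food sources receive essentially zero selection probability, while the best and the worst each receive half, which is exactly the diversity-preserving pressure the hybrid relies on.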
- Published
- 2011
190. A constructive hyper-heuristics for rough set attribute reduction
- Author
-
Nasser R. Sabar, Barry McCollum, Mohd Zakree Ahmad Nazri, Hamza Turabieh, and Salwani Abdullah
- Subjects
Mathematical optimization ,business.industry ,Heuristic ,Machine learning ,computer.software_genre ,Set (abstract data type) ,Reduction (complexity) ,Fitness proportionate selection ,Benchmark (computing) ,Artificial intelligence ,Rough set ,business ,Heuristics ,computer ,Metaheuristic ,Mathematics - Abstract
Hyper-heuristics can be defined as search methods for selecting or generating heuristics to solve difficult problems. A high-level heuristic therefore operates on a set of low-level heuristics with the overall aim of selecting the most suitable low-level heuristic at a particular point in generating an overall solution. In this work, we propose a constructive hyper-heuristic for solving attribute reduction problems. At the high level, the hyper-heuristic adaptively selects (at each iteration) the most suitable low-level heuristic using a roulette wheel selection mechanism, whilst at the low level four low-level heuristics are used to gradually and indirectly construct the solution. The proposed hyper-heuristic has been evaluated on widely used UCI datasets. Results show that our hyper-heuristic produces good-quality solutions when compared against other metaheuristics and outperforms other approaches on some benchmark instances.
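The high-level selection step can be sketched as a standard roulette wheel over heuristic scores (a generic implementation; the score-update policy and the four low-level heuristics themselves are abstracted away):

```python
import random

def roulette_select(scores, rng=random.random):
    """Roulette-wheel selection over low-level heuristic scores:
    heuristic i is chosen with probability scores[i] / sum(scores).
    Scores are assumed positive."""
    total = sum(scores)
    pick = rng() * total
    acc = 0.0
    for i, s in enumerate(scores):
        acc += s
        if pick <= acc:
            return i
    return len(scores) - 1  # guard against floating-point round-off
```

In an adaptive hyper-heuristic the scores would be updated after each application, so heuristics that recently improved the partial solution are picked more often.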
- Published
- 2010
191. A multi-objective post enrolment course timetabling problems: A new case study
- Author
-
Hamza Turabieh, Barry McCollum, Paul McMullan, and Salwani Abdullah
- Subjects
Mathematical optimization ,Computer science ,Population size ,Genetic algorithm ,Sorting ,Processor scheduling ,Scheduling (computing) - Abstract
This paper presents a multi-objective post enrolment course timetabling problem as a new case study. We added a new soft constraint to the original single-objective problem to both increase the complexity and represent a real-world course timetabling problem. The new soft constraint introduced here attempts to minimize the total number of waiting timeslots in between courses for every student in a day. We propose a Non-dominated Sorting Genetic Algorithm-II with a variable population size, called NSGA-II VPS, based on a given lifetime for each individual that is evaluated at the time of its birth. The algorithm was tested on the standard benchmark problems and experimental results show that the proposed method demonstrably improved upon the original approach (NSGA-II).
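One plausible reading of the new soft constraint, sketched for a single student and a single day (the exact definition used in the paper may differ): count the idle timeslots between the student's first and last course of the day.

```python
def waiting_timeslots(day_slots):
    """Empty timeslots a student waits between the first and last course
    of a day. Illustrative reading of the soft constraint: the penalty
    is the number of gaps inside the occupied span."""
    if len(day_slots) < 2:
        return 0
    slots = sorted(set(day_slots))
    span = slots[-1] - slots[0] + 1   # timeslots from first to last course
    return span - len(slots)          # slots in the span with no course
```

Summing this over all students and days would give the value the objective tries to minimize.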
- Published
- 2010
192. Incorporating Great Deluge with Kempe Chain Neighbourhood Structure for the Enrolment-Based Course Timetabling Problem
- Author
-
Khalid Shaker, Barry McCollum, Paul McMullan, and Salwani Abdullah
- Subjects
Theoretical computer science ,Kempe chain ,Operations research ,Computer science ,Great Deluge algorithm ,Timetabling problem ,Neighbourhood (mathematics) ,Great deluge - Abstract
In general, course timetabling refers to the assignment process that assigns events (courses) to given rooms and timeslots subject to a list of hard and soft constraints; it is a challenging task for educational institutions. In this study we employed a great deluge algorithm with a Kempe chain neighbourhood structure as an improvement algorithm. The Round Robin (RR) algorithm is used to control the selection of neighbourhood structures within the great deluge algorithm. The performance of our approach is tested over eleven benchmark datasets (representing one large, five medium and five small problems). Experimental results show that our approach is able to generate competitive results when compared with previously available approaches. Possible extensions of this simple approach are also discussed.
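The Kempe chain neighbourhood can be sketched on a conflict graph. In the standard formulation (a generic sketch, not the authors' code), the chain for an event e between its timeslot i and a target timeslot j is the connected component containing e in the subgraph induced by events assigned to i or j; swapping the two timeslots along the whole chain keeps the timetable clash-free.

```python
from collections import deque

def kempe_chain(conflicts, slot_of, e, j):
    """Kempe chain for event e between its own timeslot i = slot_of[e]
    and target timeslot j: breadth-first search over the conflict graph
    restricted to events currently assigned to slot i or slot j.
    conflicts: dict mapping each event to the set of events it clashes with.
    """
    i = slot_of[e]
    chain, queue = {e}, deque([e])
    while queue:
        u = queue.popleft()
        for v in conflicts[u]:
            if v not in chain and slot_of[v] in (i, j):
                chain.add(v)
                queue.append(v)
    return chain
```

The move then flips every event in the chain from slot i to j and vice versa, which is why no new clash can be introduced.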
- Published
- 2010
193. Great Deluge Algorithm for Rough Set Attribute Reduction
- Author
-
Najmeh Sadat Jaddi and Salwani Abdullah
- Subjects
Set (abstract data type) ,Reduction (complexity) ,Simple (abstract algebra) ,Computer science ,Benchmark (computing) ,Process (computing) ,Rough set ,Data mining ,Great Deluge algorithm ,Public domain ,computer.software_genre ,Algorithm ,computer - Abstract
Attribute reduction is the process of selecting a subset of features from the original set of features that forms patterns in a given dataset. It can be defined as a process that eliminates redundant attributes while avoiding information loss, so that the selected subset is sufficient to describe the original features. In this paper, we present a great deluge algorithm for attribute reduction in rough set theory (GD-RSAR). Great deluge is a meta-heuristic approach that is less parameter dependent; only two parameters are needed: the time to “spend” and the expected final solution. The algorithm always accepts improved solutions. A worse solution is accepted only if it is better than the upper boundary value, or “level”. GD-RSAR has been tested on the public-domain datasets available in UCI. Experimental results on benchmark datasets demonstrate that this approach is effective and able to obtain competitive results compared to previously available methods. Possible extensions of this simple approach are also discussed.
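The acceptance rule described above can be sketched generically (minimisation; the neighbourhood move and the attribute-reduction encoding are abstracted away, and the two parameters fix a linear decay of the level):

```python
def great_deluge(initial, cost, neighbour, time_budget, est_quality):
    """Great deluge sketch (minimisation): always accept improving moves;
    accept a worse candidate only while its cost stays below the falling
    'level'. The two parameters are the iterations to 'spend' and the
    expected final quality, which together fix the decay rate."""
    sol, best = initial, initial
    level = cost(initial)
    decay = (level - est_quality) / time_budget
    for _ in range(time_budget):
        cand = neighbour(sol)
        if cost(cand) <= cost(sol) or cost(cand) <= level:
            sol = cand
            if cost(sol) < cost(best):
                best = sol
        level -= decay   # the 'water level' drops linearly each iteration
    return best
```

Early on the high level lets the search roam; as it drops, only near-improving moves survive, which is what makes the method nearly parameter-free.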
- Published
- 2010
194. Dual Sequence Simulated Annealing with Round-Robin Approach for University Course Timetabling
- Author
-
Barry McCollum, Khalid Shaker, Salwani Abdullah, and Paul McMullan
- Subjects
Set (abstract data type) ,Mathematical optimization ,Sequence ,Computer science ,Simulated annealing ,Benchmark (computing) ,Neighbourhood (graph theory) ,Adaptive simulated annealing ,Round-robin scheduling ,Algorithm ,Selection (genetic algorithm) - Abstract
The university course timetabling problem involves assigning a given number of events into a limited number of timeslots and rooms under a given set of constraints; the objective is to satisfy the hard constraints (essential requirements) and minimize the violation of soft constraints (desirable requirements). In this study we employed a Dual-sequence Simulated Annealing (DSA) algorithm as an improvement algorithm. The Round Robin (RR) algorithm is used to control the selection of neighbourhood structures within DSA. The performance of our approach is tested over eleven benchmark datasets. Experimental results show that our approach is able to generate competitive results when compared with other state-of-the-art techniques.
- Published
- 2010
195. Controlling Multi Algorithms Using Round Robin for University Course Timetabling Problem
- Author
-
Salwani Abdullah and Khalid Shaker
- Subjects
Set (abstract data type) ,Constraint (information theory) ,Computer science ,Simulated annealing ,Benchmark (computing) ,Round-robin scheduling ,Timetabling problem ,Algorithm ,Hill climbing ,Course (navigation) - Abstract
The university course timetabling problem (CTTP) involves assigning a given number of events into a limited number of timeslots and rooms under a given set of constraints. The objective is to satisfy the hard constraints (essential requirements) and minimise the violation of soft constraints (desirable requirements). In this study, we apply three algorithms to the CTTP: Great Deluge, Simulated Annealing and Hill Climbing. We use a Round Robin Scheduling Algorithm (RR) as a strategy to control the application of these three algorithms. The performance of our approach is tested over eleven benchmark datasets: one large, five medium and five small problems. Competitive results have been obtained when compared with other state-of-the-art techniques.
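A minimal sketch of round-robin control, assuming each algorithm is reduced to a neighbourhood move and acceptance is simplified to hill climbing (in the actual approach the Great Deluge and Simulated Annealing components keep their own acceptance rules):

```python
from itertools import cycle

def round_robin_control(initial, cost, algorithms, quantum, rounds):
    """Round-robin controller sketch: give each improvement algorithm a
    fixed quantum of iterations on the shared solution, in rotation.
    algorithms: list of move functions solution -> candidate solution."""
    sol = initial
    order = cycle(algorithms)
    for _ in range(rounds * len(algorithms)):
        algo = next(order)
        for _ in range(quantum):
            cand = algo(sol)
            if cost(cand) <= cost(sol):  # hill-climbing acceptance (sketch)
                sol = cand
    return sol
```

The rotation means no single algorithm monopolises the search, so each gets a chance to escape regions where the others stall.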
- Published
- 2010
196. A SURVEY: PARTICLE SWARM OPTIMIZATION BASED ALGORITHMS TO SOLVE PREMATURE CONVERGENCE PROBLEM
- Author
-
Bahareh Nakisa, Mohd Zakree Ahmad Nazri, Mohammad Naim Rastgoo, and Salwani Abdullah
- Abstract
Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behavior of birds flocking or fish schooling. Although PSO has proven effective on many well-known numerical test problems, it suffers from premature convergence. A number of variations have been developed to address the premature convergence problem and improve the quality of the solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, we include a classification of the approaches and identify the main features of each proposal. In the last part of the study, some topics within this field that are considered promising areas of future research are listed.
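For reference, the canonical PSO update that the surveyed variants build on (a textbook sketch for minimisation; parameter values are common defaults, not taken from the survey):

```python
import random

def pso_minimize(cost, dim, lo, hi, n_particles=20, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle is pulled toward its personal best and
    the swarm's global best; the inertia weight w trades exploration
    against exploitation."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(n_iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))
            c = cost(x)
            if c < pcost[i]:
                pbest[i], pcost[i] = x[:], c
                if c < gcost:
                    gbest, gcost = x[:], c
    return gbest, gcost
```

Premature convergence occurs when all particles collapse onto the global best too early; the surveyed variants modify exactly the terms in this update (inertia schedules, topology, re-initialisation) to delay that collapse.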
- Published
- 2014
197. Preface
- Author
-
Abdul Razak Hamdan, Azuraliza Abu Bakar, Barry McCollum, Fazel Famili, and Salwani Abdullah
- Subjects
Computer science ,Data mining ,computer.software_genre ,computer - Published
- 2009
198. Texture analysis for diagnosing paddy disease
- Author
-
Saad Abdullah, Salwani Abdullah, Nunik Noviana Kurniawati, and Siti Norul Huda Sheikh Abdullah
- Subjects
Pixel ,business.industry ,Binary image ,Feature extraction ,Image segmentation ,Texture (music) ,Otsu's method ,Spot color ,symbols.namesake ,symbols ,RGB color model ,Computer vision ,Artificial intelligence ,business ,Mathematics - Abstract
The objective of this research is to develop a diagnosis system to recognize three paddy diseases: Blast Disease (BD), Brown-Spot Disease (BSD), and Narrow Brown-Spot Disease (NBSD). This paper concentrates on extracting paddy features from off-line images. The methodology involves converting the RGB images into binary images using variable, global and automatic thresholding based on the Otsu method. A morphological algorithm is used to remove noise via a region filling technique. Image characteristics consisting of lesion percentage, lesion type, boundary color, spot color, and broken paddy leaf color are then extracted from the paddy leaf images. By employing a production rule technique, the paddy diseases are recognized with an accuracy rate of about 87.5 percent. This prototype has great potential to be further improved in the future.
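The automatic thresholding step is based on Otsu's method; a generic histogram-based implementation (not the authors' code) looks like this:

```python
def otsu_threshold(hist):
    """Otsu's method on a greyscale histogram: return the threshold t that
    maximises the between-class variance of the background (levels <= t)
    versus the foreground (levels > t)."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0   # background pixel count
    m0 = 0.0   # background cumulative sum of level * count
    for t in range(len(hist)):
        w0 += hist[t]
        m0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue               # one class empty: split undefined
        w1 = total - w0
        mean0 = m0 / w0
        mean1 = (total_sum - m0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal leaf histogram the maximiser falls in the valley between the healthy-tissue and lesion intensity peaks, which is what makes the binarisation automatic.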
- Published
- 2009
199. A Hybridization of Electromagnetic-Like Mechanism and Great Deluge for Examination Timetabling Problems
- Author
-
Barry McCollum, Salwani Abdullah, and Hamza Turabieh
- Subjects
Mathematical optimization ,Engineering ,Simple (abstract algebra) ,business.industry ,Benchmark (computing) ,Process (computing) ,business ,Great deluge - Abstract
In this paper, we present a hybridization of an electromagnetic-like mechanism (EM) and the great deluge (GD) algorithm. This technique can be seen as a dynamic approach, as an estimated quality of a new solution and a decay rate are calculated at each iteration during the search process. These values depend on a force value calculated using the EM approach. It is observed that applying these dynamic values helps generate high-quality solutions. Experimental results on benchmark examination timetabling problems demonstrate the effectiveness of this hybrid EM-GD approach compared with previously available methods. Possible extensions of this simple approach are also discussed.
- Published
- 2009
200. Investigation on Image Processing Techniques for Diagnosing Paddy Diseases
- Author
-
Salwani Abdullah, Nunik Noviana Kurniawati, Siti Norul Huda Sheikh Abdullah, and Saad Abdullah
- Subjects
Pixel ,business.industry ,Computer science ,Binary image ,Feature extraction ,Image processing ,Pattern recognition ,Image segmentation ,Thresholding ,Otsu's method ,symbols.namesake ,symbols ,RGB color model ,Computer vision ,Artificial intelligence ,business - Abstract
The main objective of this research is to develop a prototype system for diagnosing three paddy diseases: Blast Disease (BD), Brown-Spot Disease (BSD), and Narrow Brown-Spot Disease (NBSD). This paper concentrates on extracting paddy features from off-line images. The methodology involves image acquisition and converting the RGB images into binary images using automatic thresholding based on the local entropy threshold and the Otsu method. A morphological algorithm is used to remove noise via a region filling technique. The image characteristics consisting of lesion type, boundary colour, spot colour, and broken paddy leaf colour are then extracted from the paddy leaf images. By employing a production rule technique, the paddy diseases are recognized with an accuracy rate of about 94.7 percent. This prototype has great potential to be further improved in the future.
- Published
- 2009