75 results for "Suriayati Chuprat"
Search Results
2. Tamper Detection and Localization for Quranic Text Watermarking Scheme Based on Hybrid Technique
- Author
-
Nilam Nur Amir Sjarif, M. A. Shahidan, Ali A. R. Alkhafaji, Nurulhuda Firdaus Mohd Azmi, Suriayati Chuprat, and Haslina Md Sarkan
- Subjects
Scheme (programming language), Computer science, Computer Science Applications, Biomaterials, Mechanics of Materials, Modeling and Simulation, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Digital watermarking
- Published
- 2021
- Full Text
- View/download PDF
3. Payload Capacity Scheme for Quran Text Watermarking Based on Vowels with Kashida
- Author
-
Nurulhuda Firdaus Mohd Azmi, Ali A. R. Alkhafaji, Osamah Ibrahim Khalaf, Haslina Md Sarkan, Ehab Nabiel Alkhanak, Suriayati Chuprat, M. A. Shahidan, and Nilam Nur Amir Sjarif
- Subjects
Scheme (programming language), Kashida, Holy Quran, Capacity, Cover (telecommunications), Computer science, Speech recognition, Payload (computing), Quran text watermarking, Reversing technique, Imperceptibility, Arabic text, Peak signal-to-noise ratio, Computer Science Applications, Biomaterials, Mechanics of Materials, Modeling and Simulation, Embedding, Electrical and Electronic Engineering, Vowels, Digital watermarking
- Abstract
The most sensitive Arabic text available online is the digital Holy Quran. This sacred Islamic religious book is recited by all Muslims worldwide, including non-Arabs, as part of their worship, and it should therefore be protected from any kind of tampering to keep its invaluable meaning intact. Different characteristics of Arabic letters, such as the vowels, Kashida (extended letters), and other symbols in the Holy Quran, must be secured from alteration. Existing schemes leave the cover text of the Quran and its watermarked text noticeably different, reflected in low values of the Peak Signal-to-Noise Ratio (PSNR) and Embedding Ratio (ER). A watermarking technique with enhanced attributes must therefore be designed for the Quran's text using Arabic vowels with kashida. The gap addressed by this paper is improving the security of Arabic text in the Holy Quran by using vowels with kashida, and its purpose is to enhance the Quran text watermarking scheme based on a reversing technique. The methodology consists of four phases. The first phase is pre-processing, followed by the second phase, the embedding process, which hides the data after the vowels: if the secret bit is "1", a kashida is inserted; if the bit is "0", no kashida is inserted. The third phase is the extraction process, and the last phase evaluates the performance of the proposed scheme using PSNR (for imperceptibility) and ER (for capacity). The experimental results show that the imperceptibility of the insertion is also optimized with the help of the reversing algorithm. The proposed strategy obtains a capacity of 90.5% and an imperceptibility of 66.1%.
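The bit-driven kashida rule described in the abstract can be illustrated with a minimal sketch (Python). This is an assumption-laden illustration, not the authors' implementation: "vowel" is taken to mean an Arabic diacritic mark, and the scheme's pre-processing and reversing steps are omitted.

```python
# Minimal sketch of the embedding rule: insert a kashida after a diacritic when the
# secret bit is '1', leave it unchanged when the bit is '0'. Illustrative only;
# the paper's pre-processing and reversing technique are not modelled here.
KASHIDA = "\u0640"  # Arabic tatweel (extended-letter character)
HARAKAT = set("\u064B\u064C\u064D\u064E\u064F\u0650\u0651\u0652")  # common diacritics

def embed_bits(cover_text: str, bits: str) -> str:
    out, bit_idx = [], 0
    for ch in cover_text:
        out.append(ch)
        if ch in HARAKAT and bit_idx < len(bits):
            if bits[bit_idx] == "1":
                out.append(KASHIDA)
            bit_idx += 1  # a '0' bit consumes the position without inserting anything
    return "".join(out)

def extract_bits(watermarked: str) -> str:
    bits = []
    for i, ch in enumerate(watermarked):
        if ch in HARAKAT:
            follows = i + 1 < len(watermarked) and watermarked[i + 1] == KASHIDA
            bits.append("1" if follows else "0")
    return "".join(bits)
```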
- Published
- 2021
- Full Text
- View/download PDF
4. Students’ Characteristics of Student Model in Intelligent Programming Tutor for Learning Programming: A Systematic Literature Review
- Author
-
Rajermani Thinakaran and Suriayati Chuprat
- Subjects
General Computer Science
- Published
- 2022
- Full Text
- View/download PDF
5. Metamodel for Enterprise Architecture: A Systematic Literature Review
- Author
-
Intan Maizura Arzimi, Mohd Naz'ri Mahrin, Surya Sumarni Hussein, Nur Azaliah Abu Bakar, Noor Hafizah Hassan, and Suriayati Chuprat
- Published
- 2021
- Full Text
- View/download PDF
6. Modified Nominal Group Technique (NGT) for Evaluating HyTEE Model (Hybrid Software Change Management Tool with Test Effort Estimation)
- Author
-
Mazidah Mat Rejab, Nurulhuda Firdaus Mohd Azmi, and Suriayati Chuprat
- Published
- 2021
- Full Text
- View/download PDF
7. Acute lymphoblastic leukemia segmentation using local pixel information
- Author
-
Saif S. Al-jaboriy, Wafaa Mustafa Abduallah, Suriayati Chuprat, and Nilam Nur Amir Sjarif
- Subjects
Computer science, Lymphoblastic Leukemia, Cell segmentation, Image processing, Artificial Intelligence, Precursor cell, Medicine, Segmentation, Sensitivity (control systems), Pixel, Artificial neural network, Pattern recognition, Signal Processing, Computer Vision and Pattern Recognition, Bone marrow, Noise (video), Software
- Abstract
The severity of acute lymphoblastic leukemia depends on the percentage of blast cells (abnormal white blood cells) in bone marrow or peripheral blood. The manual microscopic examination of bone marrow is less accurate, time-consuming, and susceptible to errors, making it difficult for lab workers to accurately recognize the characteristics of blast cells. Researchers have adopted different computational methods to identify the nature of blast cells; however, these methods are incapable of accurately segmenting leukocyte cells due to some major disadvantages, such as lack of contrast between objects and background, sensitivity to gray-scale, sensitivity to noise in images, and high computational cost. Therefore, it is indispensable to develop a new and improved technique for leukocyte cell segmentation. In the present research, an automatic leukocyte cell segmentation process is introduced that is based on a machine learning approach and image processing techniques. Further, the characteristics of blast cells were extracted using 4-moment statistical features and artificial neural networks (ANNs). It was found that the proposed method yielded a blast cell segmentation accuracy of 97% under different lighting conditions.
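The "4-moment statistical features" mentioned above are conventionally the mean, variance, skewness and kurtosis of local pixel intensities. A minimal sketch of such a feature extractor follows (an assumption about which four moments are used, not the authors' exact pipeline):

```python
# Sketch: four statistical moments of a grayscale patch as a feature vector.
# Illustrates the kind of "4-moment" features named in the abstract; not the
# authors' implementation, which also feeds the features into an ANN classifier.
import numpy as np
from scipy.stats import skew, kurtosis

def moment_features(patch: np.ndarray) -> np.ndarray:
    """Return [mean, variance, skewness, kurtosis] of the pixel intensities."""
    pixels = patch.astype(float).ravel()
    return np.array([pixels.mean(), pixels.var(), skew(pixels), kurtosis(pixels)])

# Example: features for one 16x16 patch of random intensities
features = moment_features(np.random.randint(0, 256, size=(16, 16)))
print(features)
```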
- Published
- 2019
- Full Text
- View/download PDF
8. Resource Allocation in Cloud Computing using Heuristic Load Balancing Algorithm
- Author
-
Anup Shrestha, Suriayati Chuprat, and Nandini Mukherjee
- Abstract
Cloud computing is becoming more popular than conventional computing due to its added advantages: it offers utility-based services to its subscribers on demand, and users pay for every use. However, the increasing number of tasks requires virtual machines for them to be completed quickly, and load balancing is a critical concern in cloud computing due to the massive increase in the number of users. This paper proposes a heuristic load balancing algorithm that schedules a strategy for resource allocation so as to minimize makespan (completion time) in any technology that involves the use of cloud computing. The proposed algorithm performs better than other load balancing algorithms.
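As a rough illustration of a heuristic that minimizes makespan, the classic greedy rule assigns each task (longest first) to the currently least-loaded virtual machine. The sketch below assumes identical VMs and known task lengths; it is not the paper's algorithm, which is not detailed in the abstract.

```python
# Sketch of a greedy min-makespan heuristic: longest tasks first, each assigned
# to the least-loaded VM. Illustrative only.
import heapq

def schedule(task_lengths, num_vms):
    """Return per-VM assignments and the resulting makespan (completion time)."""
    loads = [(0.0, vm) for vm in range(num_vms)]   # heap of (current_load, vm_index)
    heapq.heapify(loads)
    assignment = {vm: [] for vm in range(num_vms)}
    for task, length in sorted(enumerate(task_lengths), key=lambda t: -t[1]):
        load, vm = heapq.heappop(loads)            # least-loaded VM so far
        assignment[vm].append(task)
        heapq.heappush(loads, (load + length, vm))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Example: 8 tasks on 3 VMs
print(schedule([4, 2, 7, 1, 5, 3, 6, 2], num_vms=3))
```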
- Published
- 2020
- Full Text
- View/download PDF
9. Hybrid Heuristic Load Balancing Algorithm For Resource Allocation In Cloud Computing
- Author
-
Suriayati Chuprat, Anup Shrestha, and Nandini Mukherjee
- Subjects
Computer science, Heuristic (computer science), Virtual machine, Cloud computing, Completion time, Load balancing (computing), Algorithm
- Abstract
Cloud computing is becoming more popular than conventional computing due to its added advantages: it offers utility-based services to its subscribers on demand, and users pay for every use. However, the increasing number of tasks requires virtual machines for them to be completed quickly, and load balancing is a critical concern in cloud computing due to the massive increase in the number of users. This paper proposes a heuristic load balancing algorithm that schedules a strategy for resource allocation so as to minimize makespan (completion time) in any technology that involves the use of cloud computing. The proposed algorithm performs better than other load balancing algorithms.
- Published
- 2020
- Full Text
- View/download PDF
10. SMS Spam Message Detection using Term Frequency-Inverse Document Frequency and Random Forest Algorithm
- Author
-
Yazriwati Yahya, Haslina Md Sarkan, Nurulhuda Firdaus Mohd Azmi, Suriayati Chuprat, Suriani Mohd Sam, and Nilam Nur Amir Sjarif
- Subjects
Service (systems architecture), Short Message Service, Computer science, Computer security, Random forest, Term (time), TF-IDF, General Earth and Planetary Sciences, General Environmental Science
- Abstract
The daily traffic of the Short Message Service (SMS) keeps increasing. This has led to a dramatic increase in mobile attacks, such as spammers who plague the service with spam messages sent to groups of recipients. Mobile spam is a growing problem, as the number of spam messages keeps increasing day by day even with filtering systems in place. Spam is defined as unsolicited bulk messages in various forms, such as unwanted advertisements, credit opportunities or fake lottery winner notifications. Spam classification has become more challenging due to the complexity of the messages crafted by spammers. Hence, various methods have been developed to filter spam. In this study, term frequency-inverse document frequency (TF-IDF) and the Random Forest algorithm are applied to an SMS spam message data collection. Based on the experiment, the Random Forest algorithm outperforms the other algorithms with an accuracy of 97.50%.
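A minimal sketch of a TF-IDF plus Random Forest pipeline of the kind described above, using scikit-learn with placeholder messages (not the study's corpus or configuration):

```python
# Sketch: TF-IDF features fed to a Random Forest classifier for SMS spam detection.
# The messages and labels below are toy placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

messages = ["WIN a free prize now!!!", "Are we still meeting for lunch?",
            "URGENT: claim your lottery reward", "See you at 5pm"]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(TfidfVectorizer(),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(messages, labels)
print(model.predict(["Free entry: reply to claim your prize"]))
```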
- Published
- 2019
- Full Text
- View/download PDF
11. Experimenting Dynamic Clonal Selection (DCS) for Parallel Multiple Interest Topics of User Profile Adaptation in Content Based Filtering
- Author
-
Haslina Md Sarkan, Suriayati Chuprat, Norziha Megat Mohd. Zainuddin, Nurulhuda Firdaus Mohd Azmi, Yazriwati Yahya, and Nilam Nur Amir Sjarif
- Subjects
User profile, Forgetting, Computer science, Human–computer interaction, Artificial immune system, General Earth and Planetary Sciences, Profiling (information science), General Environmental Science, Clonal selection
- Abstract
User profiling for information filtering is a challenging issue because of its distinctive profiling features and requirements. A user profile should be capable of continuous learning and of forgetting past interests: forgetting past interests is necessary to maintain a current representation of the user, and a profile that never forgets eventually becomes saturated with features that are no longer relevant to the user's interests. The dynamic nature of user profiling motivates the use of Artificial Immune Systems (AIS). The clonal selection theory of the natural immune system has given researchers the insight to create an algorithm that evolves a repertoire by means of selection, cloning and mutation. This paper discusses experimentation with the dynamic clonal selection (DCS) algorithm in adapting user profiles for content-based filtering. The experiment focuses on the scenario of multiple topics of interest in user profiling for content-based filtering. The results show that the DCS algorithm is applicable to classification over multiple topics of interest, in which changing topic interests are tracked in data over time while maintaining and enhancing the range of the repertoire.
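The selection-cloning-mutation loop at the core of clonal selection can be sketched as a toy step over keyword-based profile "antibodies". This is only an illustration of the general idea; the authors' DCS algorithm, including its forgetting mechanism, is not reproduced here.

```python
# Toy clonal selection step: keep the best-matching profile vectors (antibodies),
# clone them, and mutate the clones slightly. Illustrative only.
import random

def affinity(antibody, document):
    """Overlap between a profile's keywords and a document's keywords."""
    return len(set(antibody) & set(document))

def clonal_step(repertoire, document, n_select=2, clones_per=2, keyword_pool=None):
    ranked = sorted(repertoire, key=lambda ab: affinity(ab, document), reverse=True)
    selected = ranked[:n_select]
    new_repertoire = list(selected)
    for ab in selected:
        for _ in range(clones_per):
            clone = list(ab)
            if keyword_pool and random.random() < 0.5:    # mutation: swap one keyword
                clone[random.randrange(len(clone))] = random.choice(keyword_pool)
            new_repertoire.append(clone)
    return new_repertoire

profiles = [["python", "cloud"], ["football", "tennis"], ["cloud", "security"]]
print(clonal_step(profiles, ["cloud", "security", "iaas"],
                  keyword_pool=["iaas", "ml", "news"]))
```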
- Published
- 2019
- Full Text
- View/download PDF
12. Risk Management Framework for Distributed Software Team: A Case Study of Telecommunication Company
- Author
-
Wan Suzila Wan Husin, Azri Azmi, Yazriwati Yahya, Nurulhuda Firdaus Mohd Azmi, Nilam Nur Amir Sjarif, and Suriayati Chuprat
- Subjects
Process management, Computer science, Risk management framework, Organizational culture, Context (language use), Knowledge sharing, Survey methodology, Software, General Earth and Planetary Sciences, Risk management, General Environmental Science
- Abstract
Distributed software development (DSD) has grown rapidly over the last few years and presents definite risks to the software industry that need to be carefully analyzed and managed. The risks related to DSD include lack of trust, ineffective communication, time zone differences, cultural differences such as language and corporate culture, and knowledge sharing challenges. Software risk management approaches in DSD, however, are still inadequate and need further attention. The aim of this paper is to identify the components involved in the risk management process related to DSD and, finally, to enhance the existing risk management framework used in the organization to accommodate the distributed nature of the team. A quantitative approach, namely the survey method, was chosen as an appropriate research method to achieve the objectives of this paper. The results show that communication is the most prevalent risk faced by the team members in the current risk management practice. The data from the literature and the survey were used to expand the list of identified risks in the organization and to integrate a communication element into the existing framework. The communication element emphasizes the collaboration and commitment of the stakeholders from every site so as to improve risk management in a distributed context.
- Published
- 2019
- Full Text
- View/download PDF
13. Determining Factors Influencing the Acceptance of Cloud Computing Implementation
- Author
-
Suriayati Chuprat, Roslina Ibrahim, Mohd Talmizie Amron, and Nur Azaliah Abu Bakar
- Subjects
Government, Knowledge management, Vendor, Computer science, Data security, Cloud computing, Business continuity, Scalability, General Earth and Planetary Sciences, Thematic analysis, General Environmental Science, Social influence
- Abstract
Cloud computing (CC) has attracted many organizations to invest in this virtual storage technology, since it can help businesses manage and share data in a more flexible, cost-saving, and scalable way. However, CC faces many issues, such as data security concerns, high costs during the set-up process, designing the cloud model, and high dependency on cloud providers. Apart from these issues, several studies have also highlighted other factors that influence the acceptance of CC implementation, especially in an organizational environment. Therefore, this paper aims to review and identify the relevant factors that influence the acceptance of CC implementation in organizations. This study reviewed 55 articles related to CC implementation, and a total of 21 factors were obtained through several screening processes. These factors were arranged according to frequency based on the thematic analysis method. As a result, 21 factors were obtained and ranked: Compatibility, Top Management Support, Relative Advantage, Security, Complexity, External Pressure, IT Knowledge, Cost, Trust, Trialability, Regulations & Government Support, Innovativeness, External Expertise, Sharing & Collaboration, User Experiences, Awareness, Firm Size, Social Influence, Task, Vendor Support, and Business Continuity. This study unveils a new context in cloud computing studies as it provides more insight into factors that may have been untouched.
- Published
- 2019
- Full Text
- View/download PDF
14. The Validity and Reliability Evaluation of Instruments for Cloud Computing Acceptance Study
- Author
-
Suriayati Chuprat, Roslina Ibrahim, Mohd Talmizie Amron, and Nur Azaliah Abu Bakar
- Subjects
Data collection, Computer science, Validity, Cloud computing, Data science, Exploratory factor analysis, Cronbach's alpha, Content validity, Reliability (statistics), Face validity
- Abstract
Online data storage technology over the cloud network has become an option for many organizations, and even for personal use. The benefits of cloud computing enable many organizations, including the public sector, to use this technology to provide the best service experience. However, there is an issue with the implementation of cloud-based applications when their usage is lower than the number of applications offered. Therefore, a study on the acceptance of cloud computing in the public sector should be conducted. This paper aims to evaluate the validity and reliability of an instrument for cloud computing acceptance in the Malaysian public sector. The developed instrument is analyzed through validity and reliability phases. The validity phase involves two stages: face validity and expert validity. The Content Validity Index (CVI) is used, and the panel's feedback is considered in improving the items. The reliability phase was conducted by evaluating Cronbach's alpha for each item and by testing with Exploratory Factor Analysis (EFA). The final instrument contained 71 items with 5-point Likert scale multiple-choice options, classified under 15 variables. As a result, the instrument is successfully validated and is reliable for use in the actual data collection.
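For reference, the Cronbach's alpha used in the reliability phase is computed as alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score), where k is the number of items. A small sketch with toy Likert responses (not the study's data):

```python
# Sketch: Cronbach's alpha for Likert items (rows = respondents, cols = items).
# Toy data only; the study evaluated 71 items classified under 15 variables.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                              # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(responses), 3))
```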
- Published
- 2020
- Full Text
- View/download PDF
15. A model for technological aspect of e-learning readiness in higher education
- Author
-
Mohd Naz'ri Mahrin, Rasimah Che Mohd Yusoff, Asma Ali Mosa Al-araibi, and Suriayati Chuprat
- Subjects
Knowledge management, Higher education, Educational technology, Information technology, Cloud computing, Electronic media, Library and Information Sciences, Education, Empirical research, Information and Communications Technology, Computer literacy
- Abstract
The rate of adoption of e-learning has increased significantly in most higher education institutions in the world. E-learning refers to the use of electronic media, educational technology, and information and communication technology (ICT) in the educational process. The aim of adopting e-learning is to provide students with educational services via the use of ICT, so that students can access educational resources from anywhere and at any time. However, the successful implementation of e-learning relies on readiness to initiate this system because, without proper readiness, the project will probably fail. E-learning readiness refers to the assessment of how ready an institution is to adopt and implement an e-learning project. One of the most important aspects of e-learning readiness is the technological aspect, which plays an important role in implementing an effective and efficient e-learning system. There is currently a lack of argument concerning the factors that shape the technological aspect of e-learning readiness, so this study concentrates on that aspect. A model is proposed which includes eight technological factors, specifically: Software; Hardware; Connectivity; Security; Flexibility of the System; Technical Skills and Support; Cloud Computing; and Data Center. A quantitative study was conducted at six Malaysian public universities, with survey responses from 374 academic staff members who use e-learning. The empirical study confirmed that seven of the technological factors have a significant effect on e-learning readiness, while one factor (cloud computing) does not yet have a significant impact on e-learning readiness.
- Published
- 2018
- Full Text
- View/download PDF
16. Secured Data Partitioning through Sequence based Mapping and Random Order of Data Separation
- Author
-
Hazila Hasan, Mohd Naz'ri Mahrin, and Suriayati Chuprat
- Subjects
Random order, General Computer Science, Computer science, Data separation, Data partitioning, Algorithm, Sequence (medicine)
- Published
- 2018
- Full Text
- View/download PDF
17. Data Reconstruction Through Sequence Based Mapping in Secured Data Partitioning
- Author
-
Othman Mohd Yusop, Hazila Hasan, Suriayati Chuprat, and Haslina Md Sarkan
- Subjects
Health (social science), General Computer Science, Database, Computer science, General Mathematics, Data reconstruction, Big data, Search engine indexing, General Engineering, Cloud computing, Partition (database), Secret sharing, Education, General Energy, Data partitioning, Confidentiality, Data mining, General Environmental Science
- Abstract
Recent research has proposed a data partitioning technique with secret sharing to enhance security in cloud computing. However, the complexity of reconstruction while preserving confidentiality limits its practical use, specifically when a large amount of data is involved. In this paper, we explore the existing mapping technique called partition-based indexing that is used to reconstruct the shares. Nevertheless, we found that its efficiency decreases as the amount of data increases. This motivated us to propose a sequence-based mapping to increase the efficiency of data reconstruction in secured data partitioning with secret sharing. The proposed technique has been evaluated through a series of simulations using 10,000 data items. The performance was evaluated based on the time taken to achieve data reconstruction for different numbers of shares. As a result, we show that our proposal, named the sequence-based mapping technique, improves the performance of data reconstruction by more than 40 percent compared to the indexing technique. As such, we conclude that sequence-based mapping is an ideal technique for improving the performance of data reconstruction in data partitioning with secret sharing and for preserving the confidentiality of big data in cloud computing.
- Published
- 2017
- Full Text
- View/download PDF
18. Improving the accuracy of collaborative filtering recommendations using clustering and association rules mining on implicit data
- Author
-
Mohd Naz'ri Mahrin, Haslina Md Sarkan, Suriayati Chuprat, and Maryam Khanian Najafabadi
- Subjects
Association rule learning, Computer science, Probabilistic logic, Recommender system, Human-Computer Interaction, Arts and Humanities (miscellaneous), Collaborative filtering, Preprocessor, The Internet, Data mining, Cluster analysis, General Psychology, Curse of dimensionality
- Abstract
Recommender systems are becoming more significant in the age of rapid development of Internet technology due to their ability to help users decide on appropriate choices. Collaborative filtering (CF) is the most successful and most widely applied technique in the design of recommender systems, where items are recommended to an active user based on past rating records from like-minded users. Unfortunately, CF may lead to poor recommendations when user ratings on items are very sparse in comparison with the huge number of users and items in the user-item matrix. To overcome this problem, this research applies users' implicit interaction records with items to efficiently process massive data by employing association rules mining. It captures multiple purchases per transaction in the association rules, rather than just counting total purchases made. To do this, a modified preprocessing step is implemented to discover similar interest patterns among users based on multiple purchases. In addition, a clustering technique is employed to reduce the size of the data and the dimensionality of the item space before association rules mining. Then, similarities between items based on their features are computed to make recommendations. Experiments were conducted and the results were compared with basic CF and other extended versions of CF, including K-Means clustering, hybrid representation, and probabilistic learning, using a public dataset, namely the Million Song dataset. The experimental results demonstrate that our technique achieves better performance than basic CF and the other extended versions of CF in terms of Precision and Recall metrics, even when the data is very sparse. Highlights: tackling data sparsity using clustering and association rules mining on massive data; utilizing users' implicit interaction records with items to improve CF; using item repetition in a transaction as the input for association rules; experiments showing that the proposed technique substantially outperforms basic CF; comparing the accuracy of the proposed technique with other extended versions of CF.
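A rough sketch of the two preprocessing ideas described above, clustering users to shrink the item space and then scoring items from implicit co-consumption inside a cluster, might look like this. It uses a toy play-count matrix and simple co-occurrence counts; the paper additionally mines full association rules and computes item-feature similarities.

```python
# Sketch: cluster users on an implicit play-count matrix, then score unseen items
# for a user from item co-consumption inside the user's cluster. Toy data only.
import numpy as np
from sklearn.cluster import KMeans

# rows = users, cols = items, values = implicit feedback counts (e.g. play counts)
counts = np.array([[3, 0, 1, 0],
                   [2, 1, 2, 0],
                   [0, 4, 0, 1],
                   [0, 3, 0, 2]])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(counts)

def recommend(user: int, top_n: int = 2):
    """Rank items the user has not consumed by co-occurrence within the cluster."""
    members = (counts[clusters == clusters[user]] > 0).astype(int)
    cooccur = members.T @ members                  # item-item co-consumption matrix
    scores = cooccur @ (counts[user] > 0).astype(int)
    scores = scores * (counts[user] == 0)          # keep only unseen items
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user=0))
```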
- Published
- 2017
- Full Text
- View/download PDF
19. Modern Code Review Benefits-Primary Findings of A Systematic Literature Review
- Author
-
Nargis Fatima, Suriayati Chuprat, and Sumaira Nazir
- Subjects
Code review, Process management, Process (engineering), Computer science, Context (language use), Fagan inspection, Software quality, Knowledge sharing, Systematic review, Thematic analysis
- Abstract
Modern Code Review (MCR) is an effective quality assurance technique that can ensure software quality and customer satisfaction through the identification of defects, code improvement, and acceleration of the development process. It is an asynchronous and lightweight review process supported by review tools such as Gerrit. It is a light version of Fagan's inspection process and has developed as a practice in open-source and industrial software development. Research has been conducted in the context of MCR using various data collection methodologies such as interviews, surveys and comment analysis from review tools. Besides defect detection, other benefits have been reported concerning adoption of the MCR process, for instance knowledge sharing, team awareness and collaboration. However, the team members involved in MCR activities are not aware of these benefits because the literature is dispersed; no systematized study is available that reports the benefits of MCR. As a consequence, there is a lack of real awareness of the adoption of the MCR process. Therefore, the objective of this study is to systematically analyze and report the benefits of the MCR process. A Systematic Literature Review has been utilized to identify MCR benefits, and thematic analysis has been performed to group the identified benefits into relevant themes. The themes and reported benefits were validated by experts for their relevancy. The study findings report 54 unique benefits, grouped into 9 themes. This research has implications for the software industry, engineers and researchers. The industry can incorporate the MCR process more widely, and software engineers, being aware of the real benefits of MCR, can participate effectively in achieving those benefits in practice. In future, researchers can extend this study by identifying more benefits in different research settings and by quantifying the reported benefits of MCR.
- Published
- 2020
- Full Text
- View/download PDF
20. Software Engineering Wastes - A Perspective of Modern Code Review
- Author
-
Nargis Fatima, Sumaira Nazir, and Suriayati Chuprat
- Subjects
Code review, Process (engineering), Computer science, Software development, Context (language use), Lean manufacturing, Software quality, Domain (software engineering), Software, Software engineering
- Abstract
Identification and eradication of waste are the principal emphases of lean thinking. Waste is defined as any activity that consumes resources but does not deliver any value to the stakeholder; it can also be described as an impediment to process flow. Lean thinking has been applied in the software engineering domain to overall software development; however, there is still a need to address waste identification and elimination for specific software engineering activities. This paper describes the wastes generated during Modern Code Review (MCR). MCR is a socio-technical software engineering activity, acknowledged as a lightweight process for defect identification, code improvement and software quality enhancement. It involves coordination and communication among multiple software engineers with different personalities, preferences and technical skills, and it can therefore generate multiple types of waste. The study has two objectives: to recognize and report the various wastes generated during MCR, and to map the identified MCR wastes onto existing software engineering wastes. A Systematic Literature Review and grounded theory have been utilized to recognize and produce a unique list of the wastes generated during MCR. The identified list of MCR wastes and their mapping onto existing software engineering wastes were validated by software engineering experts. The study findings report 28 unique wastes, of which 25 map to existing software engineering wastes. However, 3 wastes, namely negative emotions, inequality/bias and insignificant feedback, are not reported in the existing software engineering literature. The study will be useful for researchers to identify wastes in the same context or for other software engineering activities, and to provide strategies to minimize the generation of the identified wastes.
- Published
- 2020
- Full Text
- View/download PDF
21. Knowledge Sharing Factors for Modern Code Review to Minimize Software Engineering Waste
- Author
-
Sumaira Nazir, Nargis Fatima, and Suriayati Chuprat
- Subjects
Code review, General Computer Science, Computer science, Artifact (software development), Grounded theory, Knowledge sharing, Mental distress, Systematic review, Categorization, Code (cryptography), Software engineering
- Abstract
Software engineering activities such as Modern Code Review (MCR) produce quality software by identifying defects in the code. MCR involves social coding and provides ample opportunities to share knowledge among MCR team members. However, the MCR team is confronted with the issue of waiting waste due to poor knowledge sharing among team members, which causes project delays and increases mental distress. To minimize waiting waste, this study aims to identify the knowledge sharing factors that impact knowledge sharing in MCR. The methodology employed is a systematic literature review to identify knowledge sharing factors, followed by data coding with the continual comparison and memoing techniques of grounded theory to produce a unique, categorized list of factors influencing knowledge sharing. The identified factors were then assessed by an expert panel for their naming, expression and categorization. The study findings report 22 factors grouped into 5 broad categories: Individual, Team, Social, Facility Conditions, and Artifact. The study is useful for researchers to extend the research, and for MCR teams to consider these factors to enhance knowledge sharing and minimize waiting waste.
- Published
- 2020
- Full Text
- View/download PDF
22. Situational Modern Code Review Framework to Support Individual Sustainability of Software Engineers
- Author
-
Nargis Fatima, Sumaira Nazir, and Suriayati Chuprat
- Subjects
Code review, Source code, Process management, General Computer Science, Process (engineering), Computer science, Delphi method, Software development, Variety (cybernetics), Identification (information), Software, Sustainability, Quality (business), Situational ethics, Competence (human resources)
- Abstract
Modern Code Review (MCR) is a socio-technical practice to improve source code quality and ensure successful software development. It involves the interaction of software engineers from different cultures and backgrounds. As a result, a variety of unknown situational factors arise that impact the individual sustainability of MCR team members and affect their productivity by causing mental distress and fear of unknown and varying situations. The MCR team therefore needs to be aware of the relevant situational factors, yet they are confronted with a lack of competency in identifying them. This study conducts a Delphi survey to investigate an optimal and well-balanced set of MCR-related situational factors and to recognize and prioritize the most influential situational factors for MCR activities. The study findings report 21 situational factors, 147 sub-factors, and 5 categories. Based on the results of the Delphi survey, the identified situational factors are transformed into a situational MCR framework. This study can help support the individual sustainability of the MCR team by making them aware of the situations that can occur and vary during execution of the MCR process, help the MCR team improve their productivity and sustain in the industry for longer, and support software researchers who want to contribute to situational software engineering from varying software engineering contexts.
- Published
- 2020
- Full Text
- View/download PDF
23. Knowledge Sharing Framework for Modern Code Review to Diminish Software Engineering Waste
- Author
-
Nargis Fatima, Sumaira Nazir, and Suriayati Chuprat
- Subjects
Code review, General Computer Science, Computer science, Delphi method, Psychological distress, Knowledge sharing, Waste production, Lean software development, Software engineering, Quality assurance
- Abstract
Modern Code Review (MCR) is a quality assurance technique that involves intensive interaction between MCR team members. Presently, MCR team members are confronted with the problem of waiting waste, which results in psychological distress and project delays. The MCR team therefore needs effective knowledge sharing during MCR activities to avoid the circumstances that lead team members into a waiting state. The objective of this study is to develop a knowledge sharing framework for the MCR team to reduce waiting waste. The research methodology used is a Delphi survey, intended to produce a finalized list of knowledge sharing factors and to recognize and prioritize the most influential knowledge sharing factors for MCR activities. The study results report 22 knowledge sharing factors, 135 sub-factors, and 5 categories. Based on the results of the Delphi survey, the knowledge sharing framework for MCR has been developed. The study is beneficial for software engineering researchers to extend the research, and it can help MCR team members apply the designed framework to increase knowledge sharing and diminish waiting waste.
- Published
- 2020
- Full Text
- View/download PDF
24. Situational Factors for Modern Code Review to Support Software Engineers’ Sustainability
- Author
-
Nargis Fatima, Suriayati Chuprat, and Sumaira Nazir
- Subjects
Source code, Code review, Knowledge management, General Computer Science, Situation awareness, Process (engineering), Computer science, Grounded theory, Identification (information), Software, Sustainability, Situational ethics
- Abstract
Software engineers working in Modern Code Review (MCR) are confronted with the issue of lack of competency in identifying situational factors. MCR is a software engineering activity for identifying and fixing defects before delivery of the software product. This issue can be a threat to the individual sustainability of software engineers, and it can be addressed through situational awareness. Therefore, the objective of the study is to identify situational factors concerning the MCR process. A Systematic Literature Review (SLR) has been used to identify situational factors. Data coding, along with the continuous comparison and memoing procedures of grounded theory and expert review, has been used to produce an exclusive, validated list of situational factors grouped into categories. The study results report 23 situational factors grouped into 5 broad categories: People, Organization, Technology, Source Code and Project. The study is valuable for researchers to extend the research and for software engineers to identify situations and sustain for longer.
- Published
- 2020
- Full Text
- View/download PDF
25. Does Project Associated Situational Factors have Impact on Sustainability of Modern Code Review Workforce?
- Author
-
Nargis Fatima, Sumaira Nazir, and Suriayati Chuprat
- Subjects
Source code, Process management, Code review, Computer science, Fagan inspection, Software quality, Release management, Workforce, Quality (business), Situational ethics
- Abstract
Code review is an essential practice for evaluating the quality of source code. Presently, the modern form of code review known as Modern Code Review (MCR), an informal, modified version of Fagan's inspection, is widely used. In the MCR process, the MCR workforce, that is, the author and the reviewer, work together to improve code quality and software quality. It is a peer-review process in which the source code written by the author is evaluated by the reviewer. For effective outcomes of the MCR process, it is necessary to focus on the sustainability of the MCR workforce so that they can sustain for longer and produce effective outcomes. However, the sustainability of the MCR workforce is affected by unidentified project-related situational factors, and the existing MCR literature lacks an identification of the project-related situational factors that impact the sustainability of the MCR workforce. Therefore, this study performs a Systematic Literature Review (SLR) to identify the project-related situational factors that can impact the sustainability of the MCR workforce. The situational factors are collected into two key classifications, namely project release management and project attributes. A grounded theory approach has been applied to obtain a final, unique and categorized list of project-related situational factors, which is additionally checked by an expert panel for its categorization, naming conventions and suggestions of additional project-related situational factors. The study results report 18 project-related situational factors. The study will be advantageous for experts involved in situational software engineering research, as well as for the MCR workforce, who can sustain for longer by overcoming the challenge of unidentified situations.
- Published
- 2019
- Full Text
- View/download PDF
26. Knowledge sharing, a key sustainable practice is on risk: An insight from Modern Code Review
- Author
-
Nargis Fatima, Suriayati Chuprat, and Sumaira Nazir
- Subjects
Code review, Knowledge management, Computer science, Context (language use), Asset (computer security), Grounded theory, Knowledge sharing, Systematic review, Quality (business), Product (category theory)
- Abstract
Knowledge is the foremost asset for success in the area of green and sustainable software engineering, and effective knowledge sharing is a crucial objective of companies; however, individuals are not always motivated to share knowledge. This study explores knowledge sharing in the context of Modern Code Review (MCR), where engineers assess source code written by other developers to improve product quality through knowledge exchange between developers (author and reviewer). The process depends heavily on the individual interactions among developers, which contribute to MCR outcomes such as identifying alternative solutions, code improvement, and knowledge sharing. Knowledge sharing is regarded as an important outcome of MCR; however, developers involved in MCR activities face a lack of knowledge sharing due to a dearth of motivation to share. There are different individual factors that might encourage or hinder developers from engaging in knowledge sharing. Studies based on data gathered from surveys, interviews and review tools have examined elements affecting MCR, yet the literature lacks research on the factors impacting the knowledge sharing aspect of MCR. Hence, this study performs a Systematic Literature Review (SLR) that identifies the individual factors impacting knowledge sharing in MCR. The study identifies 34 individual factors, such as bias, cognitive load, time pressure and expertise, grouped into 7 broad categories using grounded theory. The findings have implications for practitioners involved in MCR activities and for researchers interested in extending this research, helping them perform MCR activities effectively by considering and overcoming the negative effects of the identified factors.
- Published
- 2019
- Full Text
- View/download PDF
27. Situational factors affecting Software Engineers Sustainability: A Vision of Modern Code Review
- Author
-
Sumaira Nazir, Nargis Fatima, and Suriayati Chuprat
- Subjects
Code review, Source code, Computer science, Grounded theory, Software quality, Knowledge sharing, Engineering management, Software, Situational ethics, Competence (human resources)
- Abstract
Code review is a sustainable software engineering practice to assure software quality. Presently, it is performed by software engineers with tool support, for instance Gerrit, and is termed Modern Code Review (MCR). It is a lightweight approach to identifying defects in source code. Software engineers' efficiency while performing MCR activities is exposed to varying people-related situational factors that impact their sustainability, and the literature lacks an identification of people-related situational factors in MCR. For successful MCR outcomes, there is a need to identify people-related situational factors so as to enhance the productivity and competence of software engineers and support their sustainability. Therefore, this study aims to identify the situational factors influencing the ability and efficiency of software engineers. The systematic methodology utilized for this study is a Systematic Literature Review (SLR). The study results report 52 situational factors, grouped into four main categories: team, team interaction, reviewer response and knowledge sharing. A grounded theory approach has been applied to obtain the final, unique list of people-related situational factors and sub-factors. The study will be useful for experts performing research on situational factors, as well as for the sustainability of software engineers engaged in MCR activities.
- Published
- 2019
- Full Text
- View/download PDF
28. Understanding the Impact of Feedback on Knowledge Sharing in Modern Code Review
- Author
-
Nargis Fatima, Sumaira Nazir, and Suriayati Chuprat
- Subjects
Code review, Knowledge management, Conceptualization, Computer science, Constructive, Grounded theory, Software quality, Knowledge sharing, Formative assessment, Systematic review
- Abstract
Modern Code Review (MCR) is a key practice for improving code quality and sharing knowledge between authors and reviewers. Feedback from the reviewer is an important aspect of the review process that initiates knowledge sharing. In MCR, knowledge sharing is reported as an important benefit but also as a challenge for the author and reviewer: it is argued that reviewers do not always share knowledge in the form of insightful, constructive and formative feedback, and developers face a poor knowledge sharing challenge in addition to other MCR challenges. Since feedback from the reviewer is an important facet of knowledge sharing, various factors associated with reviewer feedback can hamper knowledge sharing in the MCR process. There is a gap in the MCR literature concerning the conceptualization of feedback, its importance, and its impact on knowledge sharing. Therefore, this study explores the factors associated with feedback that impact knowledge sharing in MCR. Systematic Literature Review (SLR), grounded theory and expert opinion strategies are followed to recognize a unique, categorized and validated list of feedback-related factors that impact knowledge sharing in MCR. The study identifies 42 factors, such as feedback usefulness, harsh comments, criticism of the author, and broadcast feedback, grouped into 14 unique categories, for instance feedback communication, feedback language, feedback content, and feedback temporal aspects. The study has implications for members engaging in MCR activities and for researchers attracted to this area, who can widen the investigation and share knowledge adequately among team members by bearing in mind and coping with the negative effects of the identified factors, thus minimizing the production of software engineering waste.
- Published
- 2019
- Full Text
- View/download PDF
29. Individual, Social and Personnel Factors Influencing Modern Code Review Process
- Author
-
Nargis Fatima, Sumaira Nazir, and Suriayati Chuprat
- Subjects
Source code, Process management, Code review, Computer science, Process (engineering), Work in process, Software quality, Systematic review, Workforce
- Abstract
Modern Code Review (MCR) is a multistage process in which developers evaluate source code written by another developer to enhance software quality. The process is deeply dependent on interactions among the workforce (reviewer and author) involved in its activities, and these interactions are the roots of various individual, social and personnel factors influencing the MCR process. Multiple studies have been performed based on data collected from review tools, surveys, and interviews at single companies concerning factors affecting MCR. However, MCR research lacks a systematized study of the individual, social and personnel factors influencing the MCR process. Therefore, this study performs a Systematic Literature Review (SLR) to identify those factors. The findings highlight individual factors (individual skills, characteristics, emotions, etc.), social factors (trust, relationship, interaction, impression, etc.) and personnel factors (team, team interaction, and reviewer response) that influence the MCR process. The findings have implications for software engineering researchers working in this area and for MCR team members, who can extend the research and perform MCR activities effectively by considering and overcoming the negative impact of the identified factors.
- Published
- 2019
- Full Text
- View/download PDF
30. Individual Sustainability Barriers and Mitigation Strategies: Systematic Literature Review Protocol
- Author
-
Sumaira Nazir, Nargis Fatima, and Suriayati Chuprat
- Subjects
Software Engineering Body of Knowledge, Process management, Software, Systematic review, Computer science, Sustainability, Sustainable practices, Software system, Social dimension, Protocol (object-oriented programming)
- Abstract
Software engineering supports the development of software systems that ensure ease and success for humanity. However, humanity faces various complex barriers that pose risks to societies, for instance the lack of sustainable software engineering practices, tools, and methodologies. As a software engineer, one can support and contribute to sustainability by developing sustainable software products through sustainable practices while managing the economic, environmental, social, technical and individual dimensions of sustainability. Software engineering with sustainable practices allows the development of sustainable software. Sustainability in its economic, environmental, technical and social dimensions has been well explored in the literature, but the individual sustainability dimension needs more insight from researchers. Therefore, there is a need for research from an individual sustainability perspective, so this study presents a Systematic Literature Review (SLR) protocol to identify individual sustainability barriers along with their mitigation strategies. This research will be helpful for the sustainable software engineering body of knowledge and for researchers interested in contributing to sustainability concerns in software engineering, by providing an SLR protocol to identify a prioritized list of barriers and mitigation strategies concerning individual sustainability.
- Published
- 2019
- Full Text
- View/download PDF
31. Endpoint Detection and Response: Why Use Machine Learning?
- Author
-
Nazri Ahmad Zamani, Suriayati Chuprat, Noor Azurati Ahmad, Firham M. Senan, Nilam Nur Amir Sjarif, Afifah Saupi, Mohd Naz'ri Mahrin, and Aswami Ariffin
- Subjects
Root (linguistics), Point (typography), Computer science, Human–computer interaction, Cyberspace
- Abstract
Threats towards cyberspace have become more aggressive and intelligent, and some attack in real time. This has urged both researchers and practitioners to secure cyberspace at its very root, referred to as the endpoint. Detection and response at the endpoint must be able to protect in real time, matching the attacker. In this paper, we review the techniques used in endpoint detection and response. We find that the trend has shifted from traditional approaches to more intelligent ones; specifically, most proposed techniques focus on machine learning. We also examine these techniques in detail and outline their advantages.
- Published
- 2019
- Full Text
- View/download PDF
32. A Review on Cloud Computing Acceptance Factors
- Author
-
Suriayati Chuprat, Roslina Ibrahim, and Mohd Talmizie Amron
- Subjects
Government, Process management, Higher education, Computer science, Public sector, Cloud computing, Private sector, Phase (combat), Health care, General Earth and Planetary Sciences, Cloud storage, General Environmental Science
- Abstract
Cloud computing technology is regarded as a highly useful application for organizations due to advantages such as long-term cost savings and easy, economical access to data at any given time. In fact, there are many free cloud storage providers these days through which users can store and share their organizational data easily and efficiently. Both government and private sectors are looking into optimizing their cloud data storage. However, some of them are still in the early implementation phase, with issues in both technological and human factors that need to be properly taken care of to ensure success. In the case of Malaysia, the Government highly encourages the use of cloud computing technology. Given that it is in the early implementation phase, it is important to understand the factors that contribute to the use of cloud computing. This study reviews studies done in healthcare, higher learning institutions and the public sector. It was found that the most contributing factors for the use of cloud computing are technology readiness, human readiness, organizational support, environment, and security and privacy. We hope that this study will help strengthen our knowledge and readiness for the implementation of cloud computing.
- Published
- 2017
- Full Text
- View/download PDF
33. Adopting ISO/IEC 27005:2011-based Risk Treatment Plan to Prevent Patients Data Theft
- Author
-
Yazriwati Yahya, Suriayati Chuprat, Mohd Naz'ri Mahrin, Laura Cassandra Hamit, Nurulhuda Firdaus Mohd Azmi, and Haslina Md Sarkan
- Subjects
General Computer Science, Risk management framework, General Engineering, Data security, Audit, Data breach, Information security, Risk analysis (engineering), General Agricultural and Biological Sciences, Risk assessment, Risk management, Countermeasure (computer)
- Abstract
The concern raised in late 2017 regarding the data breach of 46.2 million mobile device subscribers led the Malaysian police to start an investigation into the source of the leak. Data security is fundamental to protecting assets and information by providing confidentiality, integrity and availability, not only in the telecommunication industry but also in other sectors. This paper attempts to protect the data of a patient-based clinical system by producing a risk treatment plan for its software products. The existing system is vulnerable to information theft, insecure databases, and inadequate audit login and password management. An information security risk assessment, consisting of identifying, analyzing and evaluating risks, was conducted before a risk assessment report was written. A risk management framework was applied to the software development unit of the organization to counter these risks, with the ISO/IEC 27005:2011 standard used as the basis for the information security risk management framework. The controls from Annex A of ISO/IEC 27001:2013 were used to reduce the risks. Thirty risks were identified, of which 7 were recognized as high-level risks for the product. A risk treatment plan focusing on the risks and controls has been developed for the system to reduce these risks and secure the patients' data. This will eventually enhance information security in the software development unit and, at the same time, increase awareness among team members concerning risks and the means to handle them.
- Published
- 2020
- Full Text
- View/download PDF
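The abstract does not spell out the scoring scheme behind the thirty identified risks and the seven high-level ones, so the following is only a minimal sketch, assuming the common ISO/IEC 27005-style qualitative approach of rating each risk by likelihood and impact and flagging the highest-rated ones for treatment. The field names, scales and threshold are illustrative assumptions, not values from the paper.

# Minimal, hypothetical risk-register scoring sketch (assumed ISO/IEC 27005-style
# qualitative assessment: risk level = likelihood x impact on 1-5 scales).
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str          # e.g. patient database
    threat: str         # e.g. data theft
    likelihood: int     # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int         # 1 (negligible) .. 5 (severe), assumed scale

    @property
    def level(self) -> int:
        return self.likelihood * self.impact

def treatment_plan(risks, high_threshold=15):
    """Rank risks by level; those at or above the threshold need a control."""
    ranked = sorted(risks, key=lambda r: r.level, reverse=True)
    return [(r, "treat with selected control" if r.level >= high_threshold else "monitor")
            for r in ranked]

if __name__ == "__main__":
    register = [
        Risk("patient DB", "SQL injection / data theft", 4, 5),
        Risk("audit log", "missing audit trail", 3, 3),
        Risk("user accounts", "weak password management", 4, 4),
    ]
    for risk, action in treatment_plan(register):
        print(f"{risk.threat:<30} level={risk.level:<3} -> {action}")

In such a register, every risk at or above the chosen threshold would be matched to a control selected from ISO/IEC 27001:2013 Annex A, which is the pairing documented by the paper's risk treatment plan.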
34. Fuzzy Delphi Method for Evaluating HyTEE Model (Hybrid Software Change Management Tool with Test Effort Estimation)
- Author
-
Suriayati Chuprat, Nurulhuda Firdaus, and Mazidah Mat Rejab
- Subjects
General Computer Science ,Traceability ,Cost estimate ,business.industry ,Computer science ,Change request ,Software maintainer ,05 social sciences ,Change management ,Code coverage ,Artifact (software development) ,010501 environmental sciences ,01 natural sciences ,Software quality ,Test effort ,Software ,0502 economics and business ,Regression testing ,Change management (engineering) ,Software system ,Software engineering ,business ,050203 business & management ,0105 earth and related environmental sciences ,Software configuration management - Abstract
When changes are made to a software system during development and maintenance, it needs to be tested again, i.e. regression tested, to ensure that the changes behave as intended and have not impacted software quality. This research will produce an automated tool that can help a software manager or maintainer search for coverage artifacts before and after a change request. A software quality engineer can then determine the test coverage of new changes, which supports cost, effort and schedule estimation. This study therefore looks at the views and consensus of experts on the elements of the proposed model by applying the Fuzzy Delphi Method. Through purposive sampling, a total of 12 experts from academia and industry participated in verifying the items through the 5-point linguistic scales of the questionnaire instrument. The outcomes show that 90% of the elements in the proposed model (change management, traceability support, test effort estimation support, regression testing support, report and GUI) meet the threshold value (d construct) of less than 0.2, and the percentage of expert agreement is above 75%. Based on the consensus of experts, all the items are therefore needed in the HyTEE Model (Hybrid Software Change Management Tool with Test Effort Estimation). A sketch of the consensus calculation follows this record.
- Published
- 2019
- Full Text
- View/download PDF
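For readers unfamiliar with the d construct mentioned above, the sketch below shows how a Fuzzy Delphi consensus check is typically computed: each 5-point rating is mapped to a triangular fuzzy number, the group average is taken, and the distance d between each expert's number and the average is compared against the 0.2 threshold and the 75% agreement rule. The fuzzy-number mapping used here is a common convention assumed for illustration, not reproduced from the paper.

# Minimal Fuzzy Delphi consensus check (a sketch; the 5-point triangular
# fuzzy-number mapping and acceptance rules are assumptions, not the paper's).
import math

# assumed mapping of the 5-point linguistic scale to triangular fuzzy numbers (m1, m2, m3)
SCALE = {
    1: (0.0, 0.0, 0.2),   # strongly disagree
    2: (0.0, 0.2, 0.4),
    3: (0.2, 0.4, 0.6),
    4: (0.4, 0.6, 0.8),
    5: (0.6, 0.8, 1.0),   # strongly agree
}

def fuzzy_delphi(responses, d_threshold=0.2, consensus=0.75):
    """responses: list of 1..5 ratings from the experts for one element."""
    fuzzy = [SCALE[r] for r in responses]
    avg = tuple(sum(f[i] for f in fuzzy) / len(fuzzy) for i in range(3))
    # distance between each expert's fuzzy number and the group average
    d = [math.sqrt(sum((f[i] - avg[i]) ** 2 for i in range(3)) / 3) for f in fuzzy]
    agree = sum(1 for di in d if di <= d_threshold) / len(d)
    accepted = (sum(d) / len(d) <= d_threshold) and (agree >= consensus)
    return sum(d) / len(d), agree, accepted

if __name__ == "__main__":
    mean_d, pct, ok = fuzzy_delphi([5, 4, 5, 4, 4, 5, 5, 4, 5, 4, 4, 5])
    print(f"mean d = {mean_d:.3f}, expert agreement = {pct:.0%}, accepted = {ok}")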
35. Sustainable Software Engineering:A Perspective of Individual Sustainability
- Author
-
Nurulhuda F, Haslina Md Sarkan, Sumaira Nazir, Suriayati Chuprat, Nargis Fatima, and Nilam Nur Amir Sjarif
- Subjects
General Computer Science ,business.industry ,Computer science ,General Engineering ,020207 software engineering ,02 engineering and technology ,Knowledge sharing ,Software Engineering Body of Knowledge ,Software development process ,Software ,Systematic review ,020204 information systems ,Sustainability ,0202 electrical engineering, electronic engineering, information engineering ,Domain knowledge ,Dimension (data warehouse) ,General Agricultural and Biological Sciences ,Software engineering ,business - Abstract
Sustainable software engineering is a means of developing sustainable software with sustainable software engineering process activities while balancing its various dimensions, for instance the economic, environmental, social, technical and individual dimensions. It is reported that the economic, technical, environmental and social dimensions have been explored to a satisfactory degree; however, the individual dimension of sustainable software engineering, which is concerned with the wellbeing of software engineers, has not been explored satisfactorily with respect to its understanding and challenges. Therefore, the aim of the study is to highlight and prioritize the challenges regarding the individual sustainability dimension. The study also provides mitigation strategies for the top five individual sustainability challenges. A systematic literature review was performed to report the challenges and mitigation strategies. The findings show that lack of domain knowledge, lack of methodologies and tool support, lack of education, varying and unidentified situations, and lack of sustainable software engineering practices are the topmost challenges regarding individual sustainability. These challenges need urgent attention to achieve the goal of sustainable software engineering. The study also reports various mitigation strategies to reduce the risk posed by these topmost individual sustainability challenges, such as introducing sustainable software engineering education and knowledge into software engineering curricula, developing knowledge-sharing frameworks, and raising awareness of unclear and varying situations for each software engineering activity. The study will benefit the sustainable software engineering body of knowledge, practitioners and researchers by providing a classified list of individual sustainability challenges and their mitigation strategies.
- Published
- 2020
- Full Text
- View/download PDF
36. Support Vector Machine Algorithm for SMS Spam Classification in The Telecommunication Industry
- Author
-
Yazriwati Yahya, Nurulhuda Firdaus Mohd Azmi, Suriayati Chuprat, and Nilam Nur Amir Sjarif
- Subjects
Short Message Service ,General Computer Science ,Computer science ,business.industry ,General Engineering ,Telecommunications service ,Sms spam ,Support vector machine ,Naive Bayes classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Cohen's kappa ,Support vector machine algorithm ,General Agricultural and Biological Sciences ,Telecommunications ,business ,Classifier (UML) - Abstract
In recent years, we have witnessed a dramatic increase in the number of mobile users in the telecommunication industry. However, this has led to a drastic increase in the number of spam SMS messages. The Short Message Service (SMS) is one of the most widely used communication channels in telecommunication services. In practice, most users ignore spam because of the low cost of SMS and the limited number of spam classification tools. In this paper, we propose a Support Vector Machine (SVM) algorithm for SMS spam classification. The Support Vector Machine is considered one of the most effective data mining techniques. The proposed algorithm has been evaluated using a public dataset from the UCI machine learning repository. Its performance is compared with three other data mining techniques: Naive Bayes, Multinomial Naive Bayes and K-Nearest Neighbor with K = 1, 3 and 5. Based on measures such as accuracy, processing time, kappa statistic, error rate and false positive instances, the SVM outperforms the other classifiers and is the most accurate classifier for detecting and labelling spam messages, with an average accuracy of 98.9%. Comparing the error measures overall, the highest error was found for KNN with K = 3 and K = 5, whereas the models with the least error are SVM followed by Multinomial Naive Bayes. Therefore, the proposed method can be used as a baseline for further comparisons in SMS spam classification (see the illustrative sketch after this record).
- Published
- 2020
- Full Text
- View/download PDF
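A minimal sketch of the reported comparison, assuming scikit-learn, TF-IDF features and a local copy of the UCI SMS Spam Collection; the paper names the dataset and the four classifier families, but the feature representation and tooling shown here are assumptions.

# Sketch of the classifier comparison (library choice, TF-IDF features and file
# name are assumptions; only the dataset and classifier families come from the paper).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# UCI SMS Spam Collection: tab-separated "label<TAB>message" lines (assumed local copy)
df = pd.read_csv("SMSSpamCollection", sep="\t", names=["label", "message"])
X_train, X_test, y_train, y_test = train_test_split(
    df["message"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

vec = TfidfVectorizer(lowercase=True, stop_words="english")
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

models = {
    "SVM": LinearSVC(),
    "Multinomial NB": MultinomialNB(),
    "KNN (k=1)": KNeighborsClassifier(n_neighbors=1),
    "KNN (k=3)": KNeighborsClassifier(n_neighbors=3),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    pred = model.fit(Xtr, y_train).predict(Xte)
    print(f"{name:<15} accuracy={accuracy_score(y_test, pred):.3f} "
          f"kappa={cohen_kappa_score(y_test, pred):.3f}")

On this dataset a linear SVM over TF-IDF features commonly lands in the high nineties for accuracy, which is in line with the 98.9% average reported above, although exact figures depend on preprocessing and the train/test split.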
37. Mobile Business Intelligence Acceptance Model for Organisational Decision Making
- Author
-
Haslina Md Sarkan, Yazriwati Yahya, Nurulhuda Firdaus Mohd Azmi, Lim Yee Fang, Suriayati Chuprat, and Nilam Nur Amir Sjarif
- Subjects
Control and Optimization ,Knowledge management ,Computer Networks and Communications ,Computer science ,Dashboard (business) ,02 engineering and technology ,Business intelligence ,Organizational decision making ,020204 information systems ,0502 economics and business ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Electrical and Electronic Engineering ,Instrumentation ,Mobile BI acceptance model ,business.industry ,05 social sciences ,Novelty ,Mobile business intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Order (business) ,Technology acceptance model ,Metric (unit) ,Performance indicator ,business ,Mobile device ,050203 business & management ,Information Systems - Abstract
Mobile Business Intelligence (BI) is the ability to access BI-related data such as key performance indicators (KPIs), business metrics and dashboards through mobile devices. Mobile BI addresses the use case of remote or mobile workers who need on-demand access to business-critical data. Studying user acceptance of mobile BI is essential in order to identify which factors influence the acceptance of mobile BI applications. Research on mobile BI acceptance models for organizational decision-making is limited due to the novelty of mobile BI as a newly emerged innovation. To address this gap in the adoption of mobile BI in organizational decision-making, this paper reviews existing work on mobile BI acceptance models for organizational decision-making. Two user acceptance models are reviewed: the Technology Acceptance Model and the Technology Acceptance Model for Mobile Services. Given how essential strategic decision-making is in determining the success of organizations, the potential of mobile BI in decision-making needs to be explored. Since mobile BI is still in its infancy, there is a need to study user acceptance and usage behavior of mobile BI in organizational decision-making, and there remains an opportunity to further investigate its impact.
- Published
- 2018
38. Malware Forensic Analytics Framework Using Big Data Platform
- Author
-
Pritheega Magalingam, Shamsul Sahibuddin, Firham M. Senan, Mohd. Zabri Adil Talib, Noor Azurati Ahmad, Suriayati Chuprat, Ganthan Narayana, Syahid Anuar, Mohd Naz'ri Mahrin, and Aswami Ariffin
- Subjects
business.industry ,Emerging technologies ,Computer science ,Digital forensics ,Big data ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Data science ,Data warehouse ,Visualization ,Analytics ,0202 electrical engineering, electronic engineering, information engineering ,Data analysis ,Malware ,020201 artificial intelligence & image processing ,business ,computer - Abstract
The dramatically increasing threats to our cyber world, such as malware attacks, are a clear signal to strengthen security in a more proactive way. Thus, in recent research we proposed an integrated malware forensic analytics framework that exposes future threats of malware attacks. This framework incorporates malware collection, malware analytics and visualization of discovered malware attacks. In this paper, we present the design and implementation of the framework, which focuses on analytics and visualization and utilizes an emerging big data platform. The implementation of the framework shows promising results in presenting descriptive analytics and predicting future attacks using machine learning algorithms. We also demonstrate the feasibility of the Hortonworks Cybersecurity Package (HCP) in supporting the proposed framework. Finally, we discuss future work that can further improve the implementation of the framework (a sketch of the general analytics pattern follows this record).
- Published
- 2018
- Full Text
- View/download PDF
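The paper itself builds on the Hortonworks platform and HCP; the sketch below only illustrates the general pattern of descriptive plus predictive analytics on a big data stack, using PySpark with a hypothetical schema, HDFS path and feature columns.

# General pattern only: descriptive analytics plus a predictive model on a big
# data platform, sketched with PySpark. Schema, path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("malware-forensic-analytics").getOrCreate()
events = spark.read.csv("hdfs:///forensics/malware_events.csv",
                        header=True, inferSchema=True)

# descriptive analytics: which malware families are most frequently observed
events.groupBy("family").count().orderBy("count", ascending=False).show(10)

# predictive analytics: classify samples as malicious (label=1) or benign (label=0)
assembler = VectorAssembler(inputCols=["api_calls", "net_connections", "files_written"],
                            outputCol="features")
data = assembler.transform(events).select("features", "label")
train, test = data.randomSplit([0.8, 0.2], seed=7)
model = RandomForestClassifier(labelCol="label", featuresCol="features").fit(train)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print("Area under ROC:", auc)
spark.stop()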
39. Proposed Methodology using Design Development Research (DDR) Improving Traceability Model with Test Effort Estimation
- Author
-
Suriayati Chuprat, Mazidah Mat Rejab, and Nurulhuda Firdaus Mohd Azmi
- Subjects
Estimation ,Test effort ,03 medical and health sciences ,Development (topology) ,030504 nursing ,Traceability ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,0305 other medical science ,Reliability engineering - Published
- 2018
- Full Text
- View/download PDF
40. Challenges and Benefits of Modern Code Review-Systematic Literature Review Protocol
- Author
-
Suriayati Chuprat, Nargis Fatima, and Sumaira Nazir
- Subjects
Code review ,Computer science ,business.industry ,Static program analysis ,computer.software_genre ,Software quality ,Knowledge sharing ,Engineering management ,Software ,Software quality assurance ,Software inspection ,Open-source software development ,business ,computer - Abstract
Software quality is a major concern of all stakeholders, whether acquirers or suppliers of the software. Software inspection is a static verification technique that can contribute to software quality. As the development trend moves towards open source software development, smart and lightweight techniques have emerged. One of them is modern code review, a lightweight software inspection technique. Modern code review is an informal, static code analysis performed by a small team using collaborative tools. It is argued that modern code review provides benefits beyond defect detection and is also subject to various challenges such as lack of trust, communication issues, ineffective knowledge sharing, personal conflicts and varying situations. There has been little research on systematizing and identifying the challenges faced by, and benefits gained by, Modern Code Review (MCR) teams. There is a dire need to recognize and organize these challenges and benefits based on the modern code review literature. The aim is to highlight and organize the challenges and benefits in the modern code review context. This will benefit software quality assurance practitioners and researchers by providing a classified list of challenges and benefits.
- Published
- 2018
- Full Text
- View/download PDF
41. A Review for Improving Software Change using Traceability Model with Test Effort Estimation
- Author
-
Suriayati Chuprat, Nurulhuda Firdaus Mohd Azmi, and Mazidah Mat Rejab
- Subjects
Test effort ,Software ,Traceability ,business.industry ,Computer science ,Regression testing ,Software development ,Test suite ,Software system ,Software maintenance ,business ,Reliability engineering - Abstract
Maintaining a software system includes tasks such as fixing defects, adding new features, or modifying the software (software changes) to accommodate different environments. The modified software system then needs to be tested to ensure the changes have no adverse effects on the previously validated code. Regression testing is one of the approaches software testers use to test the system. The traditional regression testing strategy was to repeat all previous tests and retest all features of the program, even for small modifications. For programs with thousands of lines of code (LOC), retesting the entire system after every change is expensive. This practice is becoming increasingly difficult because of the demand to test new functionality and correct errors with limited resources. Numerous techniques and tools have been proposed and developed to reduce the costs of regression testing and to aid regression testing processes, such as test suite reduction, test case prioritization, and test case selection based on the thresholds and weightings used in regression testing. However, there is still a need to study software traceability models for coverage analysis of software changes during regression testing, as well as test effort estimation for regression testing. Hence, this paper describes a proposal for improving software changes with a hybrid traceability model and test effort estimation during regression testing. We explain the proposed work, including the problem background, the intended research objectives, a literature review and a plan for future implementation. This study is expected to contribute a hybrid traceability model for large software development projects that supports software changes during regression testing with a test estimation approach, and to reduce operational cost during software maintenance. It is also hoped that an efficient, improved approach to regression testing can be realized, benefiting software testers and project managers in managing software maintenance, since it is a critical part of software project development. A small test selection sketch follows this record.
- Published
- 2018
- Full Text
- View/download PDF
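As context for the coverage-analysis gap described above, the sketch below shows the basic idea of coverage-based regression test selection with a naive effort estimate: keep a traceability map from tests to the artifacts they exercise, and rerun only the tests whose coverage intersects the change set. The map, change set and durations are hypothetical and do not come from the proposed model.

# Illustrative only: a coverage-traceability map used to select regression tests
# affected by a change, plus a naive effort estimate (all values hypothetical).
COVERAGE = {                      # test case -> source artifacts it exercises
    "test_login":   {"auth.py", "session.py"},
    "test_billing": {"billing.py", "invoice.py"},
    "test_report":  {"report.py", "invoice.py"},
}
DURATION_MIN = {"test_login": 3, "test_billing": 8, "test_report": 5}

def select_regression_tests(changed_files):
    """Rerun only the tests whose covered artifacts intersect the change set."""
    return [t for t, covered in COVERAGE.items() if covered & changed_files]

if __name__ == "__main__":
    change_request = {"invoice.py"}
    selected = select_regression_tests(change_request)
    effort = sum(DURATION_MIN[t] for t in selected)
    print("tests to rerun:", selected)          # ['test_billing', 'test_report']
    print("estimated regression test effort:", effort, "minutes")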
42. Social Networking Sites Habits and Addiction Among Adolescents in Klang Valley
- Author
-
Nurazean Maarop, Suriayati Chuprat, Roslina Ibrahim, Yazriwati Yahya, Nor Zairah Ab. Rahim, and Haslina Md Sarkan
- Subjects
General Computer Science ,Social network ,business.industry ,Computer science ,media_common.quotation_subject ,Addiction ,Sample (statistics) ,Habit ,business ,Social psychology ,media_common - Abstract
Social networking sites (SNS) are very popular applications in today's society. SNS have, to a certain extent, changed the way people communicate with each other. This kind of technology has become a trend among users regardless of its impact on them, whether positive or negative. The level of SNS usage among adolescents has started to raise concern among parents and in society. SNS addiction is becoming problematic in certain countries, especially the United States, and lately this issue has started to spread all over the world. Malaysia is also one of the countries affected by SNS addiction. SNS addiction is not an isolated phenomenon, as it starts from high engagement with SNS and originates from habitual behavior. Therefore, it is important to understand SNS habits and addiction among adolescents in Malaysia. The purpose of this study is to analyze and explore the use of SNS among adolescents in Malaysia, specifically in the Klang Valley. It examines SNS usage behavior, namely habit and addiction. The data were collected from a sample of 60 respondents using an online survey and analyzed descriptively using SPSS. The analysis found that most adolescents use SNS on a daily basis, and the majority of them use it for more than two hours per day. Patterns of habit and addiction in SNS usage show that some adolescents exhibit certain habitual and addictive behaviors.
- Published
- 2018
- Full Text
- View/download PDF
43. Improving the accuracy of complex activities recognition using accelerometer-embedded mobile phone classifiers
- Author
-
Teddy Mantoro, Mohammed Mobark, and Suriayati Chuprat
- Subjects
Activity recognition ,business.industry ,Computer science ,Mobile phone ,Feature extraction ,Artificial intelligence ,business ,Machine learning ,computer.software_genre ,Accelerometer ,computer ,Classifier (UML) - Abstract
Using mobile phones for Human Activity Recognition (HAR) is very helpful for observing the daily habits of users and for early detection of health problems or accidents. Many published studies have investigated HAR with the help of mobile phones; however, these studies mainly focused on simple, single locomotion activities. In real-world situations, human activities are often performed in complex manners. This study investigates the recognition of complex activities with classifiers commonly used for recognizing human activities. Data were collected on complex activities, features were extracted, and the activities were classified. The experiment shows that the recognition accuracy of low-level activities is higher than that of high-level activities for all seven classifiers. It was also observed that the highest accuracy was achieved by the IBk classifier (KNN), in both positions and across the three activity levels. Finally, activities recorded in the armband position achieved higher accuracy across all seven classifiers than those in the waist position. The study concludes that these classifiers are good at recognizing low-level (simple) activities, but their performance degrades as the complexity of activities increases, so suitable classifiers are needed to deal with complex activities (see the illustrative sketch after this record).
- Published
- 2017
- Full Text
- View/download PDF
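A minimal sketch of the windowed-feature-plus-classifier pattern used in accelerometer-based HAR. Synthetic signals and scikit-learn's KNeighborsClassifier stand in for the phone data and the IBk (k-NN) classifier reported in the paper, so the number it prints is not comparable with the paper's results.

# Windowed accelerometer features fed to a 1-NN classifier (synthetic data only).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(xyz):
    """Per-window features: mean and std of each axis plus acceleration magnitude stats."""
    mag = np.linalg.norm(xyz, axis=1)
    return np.concatenate([xyz.mean(axis=0), xyz.std(axis=0), [mag.mean(), mag.std()]])

rng = np.random.default_rng(0)
activities = {"sitting": (0.05, 0.02), "walking": (1.0, 0.3)}   # (amplitude, noise), assumed
X, y = [], []
for label, (amp, noise) in enumerate(activities.values()):
    for _ in range(100):                        # 100 synthetic 2-second windows per activity
        t = np.linspace(0, 2, 100)
        signal = amp * np.sin(2 * np.pi * 2 * t)[:, None] + rng.normal(0, noise, (100, 3))
        X.append(window_features(signal))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y),
                                                    test_size=0.3, random_state=1)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)   # IBk is essentially k-NN
print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))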
44. The challenges of Extract, Transform and Loading (ETL) system implementation for near real-time environment
- Author
-
Othman Mohd Yusop, Adilah Sabtu, Saiful Adli Ismail, Haslina Md Sarkan, Nurulhuda Firdaus Mohd Azmi, Suriayati Chuprat, and Nilam Nur Amir Sjarif
- Subjects
Transaction processing ,Computer science ,business.industry ,020208 electrical & electronic engineering ,Real-time computing ,Big data ,02 engineering and technology ,computer.software_genre ,Data type ,Data warehouse ,020204 information systems ,High availability ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,business ,Implementation ,computer ,Data integration - Abstract
For organizations with considerable investment in data warehousing, the influx of various data types and forms requires specific ways of preparing data and a staging platform that supports fast, efficient and volatile data so that it reaches its targeted audiences or users with different business needs. Extract, Transform and Load (ETL) systems have proved to be a standard choice for managing and sustaining the movement and transactional processing of valued big data assets. However, traditional ETL systems can no longer accommodate and effectively handle streaming or near real-time data and the demanding environments that require high availability, low latency and horizontal scalability. This paper identifies the challenges of implementing an ETL system for streaming or near real-time data, which needs to evolve and align itself with these different requirements. Current efforts and solution approaches to address the challenges are presented. The classification of ETL system challenges is based on near real-time environment features and ETL stages, to encourage different perspectives for future research. A sketch of a typical near real-time loading pattern follows this record.
- Published
- 2017
- Full Text
- View/download PDF
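One common way to narrow the latency gap described above is to replace full periodic loads with small incremental micro-batches keyed by a high-water mark. The sketch below is generic and uses stubbed source and target functions; the paper surveys challenges rather than prescribing this design.

# Generic micro-batch ETL loop, a common near real-time loading pattern
# (hypothetical stubs; not a design taken from the paper).
import time
from datetime import datetime, timedelta

def extract(since):
    """Pull only records newer than the high-water mark (stubbed source)."""
    return [{"id": 1, "amount": "10.50", "ts": datetime.utcnow()}]

def transform(rows):
    """Lightweight, per-batch cleansing so latency stays low."""
    return [{**r, "amount": float(r["amount"])} for r in rows]

def load(rows):
    """Append the micro-batch to the warehouse staging area (stubbed target)."""
    print(f"loaded {len(rows)} rows")

def run(poll_seconds=30, batches=3):
    high_water_mark = datetime.utcnow() - timedelta(minutes=5)
    for _ in range(batches):                 # bounded here; a service would loop forever
        rows = extract(high_water_mark)
        if rows:
            load(transform(rows))
            high_water_mark = max(r["ts"] for r in rows)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run(poll_seconds=1, batches=2)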
45. A Sticker-Based Model Using DNA Computing for Generating Real Random Numbers
- Author
-
Saman Hedayatpour, Suriayati Chuprat, and Nazri Kama
- Subjects
Lavarand ,Convolution random number generator ,General Computer Science ,Computer science ,DNA computing ,law ,Random seed ,Random function ,Randomness tests ,Hardware random number generator ,Algorithm ,Randomness ,law.invention - Abstract
Real random values have a wide range of applications in different fields of computer science, such as cryptography, network security and communication, computer simulation, and statistical sampling. For generating real random values, the main challenge is the need for a natural noise source, since a noise source is reliable for use in a random number generator only if it is derived from the physical environment. In this work, we address this requirement using DNA computing concepts, where the molecular motion behavior of DNA molecules provides a pure source of physical noise that can be used to generate high-quality real random values. Since one of the main factors in evaluating the quality of real random values is the expectation of generating approximately the same number of 0s and 1s, in this article we model a DNA-based random number generator in sticker mode with the ability to generate equal numbers of 0s and 1s. After using the molecular motion behavior of DNA molecules as the natural noise source for the proposed DNA-based random number generator, the generated values were subjected to the frequency, run, and serial tests proposed by the National Institute of Standards and Technology (NIST) for randomness evaluation. The results of this evaluation show that, besides achieving high scores in the run and serial tests, the values generated by our DNA-based random number generator pass the frequency test with 100% success (the frequency test is sketched after this record).
- Published
- 2014
- Full Text
- View/download PDF
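The frequency (monobit) test referred to above simply checks whether 0s and 1s occur in roughly equal numbers. The sketch below follows the standard NIST SP 800-22 formulation; it is not the authors' own implementation.

# NIST SP 800-22 frequency (monobit) test: the proportion of 0s and 1s in the
# sequence should be roughly equal (standard formulation, shown for illustration).
import math

def monobit_test(bits, alpha=0.01):
    """bits: iterable of 0/1 values. Returns (p_value, passed)."""
    n = len(bits)
    s = sum(1 if b == 1 else -1 for b in bits)   # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value, p_value >= alpha

if __name__ == "__main__":
    import random
    sequence = [random.getrandbits(1) for _ in range(10_000)]
    p, ok = monobit_test(sequence)
    print(f"p-value = {p:.4f}, passes frequency test: {ok}")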
46. Web Crime Mining by Means of Data Mining Techniques
- Author
-
Suriayati Chuprat, Javid Hosseinkhani Naniz, Suhaimi Ibrahim, and Javad Hosseinkhani
- Subjects
Engineering ,General Computer Science ,business.industry ,Process (engineering) ,General Engineering ,computer.software_genre ,Data science ,Field (computer science) ,Text mining ,Web mining ,Data mining ,Crime data ,business ,computer - Abstract
The purpose of this study is to provide a review of mining useful information by means of data mining. Data mining is the procedure of extracting knowledge and information from large sets of data by applying artificial intelligence methods to find hidden relationships in the data. Data mining applications have attracted growing attention from researchers, and one crucial field is criminology, where data mining is used to identify crime characteristics. The crime analysis process involves detecting and exploring crimes and investigating their relationships with criminals. Criminology is a suitable field for data mining techniques given the high volume and the complexity of relationships between crime datasets. Therefore, for further analytical development, identifying crime characteristics is the first step, and the knowledge obtained from data mining approaches is a very useful tool to help and support police forces. This research aims to provide a review of extracting useful information by means of data mining, in order to find crime hot spots and predict crime trends using crime data mining techniques.
- Published
- 2014
- Full Text
- View/download PDF
47. DESIGN AND IMPLEMENTATION OF A PRIVACY PRESERVED OFF-PREMISES CLOUD STORAGE
- Author
-
Suriayati Chuprat, Jamalul-lail Ab Manan, Mervat Adib Bamiah, and Sarfraz Nawaz Brohi
- Subjects
Service (systems architecture) ,Cloud computing security ,Computer Networks and Communications ,Computer science ,business.industry ,Privacy policy ,Internet privacy ,Cloud computing ,Audit ,Computer security ,computer.software_genre ,Metadata ,Artificial Intelligence ,Backup ,business ,Cloud storage ,computer ,Software - Abstract
Despite the cost-effective and flexible characteristics of cloud computing, some clients are reluctant to adopt this paradigm due to emerging security and privacy concerns. Organizations such as those in the healthcare and payment card industries, where confidentiality of information is vital, are not confident enough to trust the security techniques and privacy policies offered by cloud service providers. Malicious attackers have violated cloud storage to steal, view, manipulate and tamper with clients' data, and attacks on cloud storage are extremely challenging to detect and mitigate. In order to formulate privacy-preserved cloud storage, in this research paper we propose an improved technique that consists of five contributions: a resilient role-based access control mechanism, partial homomorphic cryptography, metadata generation and sound steganography, an efficient third-party auditing service, and a data backup and recovery process. We implemented these components using Java Enterprise Edition with the GlassFish Server. Finally, we evaluated our proposed technique by penetration testing, and the results showed that clients' data remains intact and protected from malicious attackers (a sketch of the homomorphic property follows this record).
- Published
- 2014
- Full Text
- View/download PDF
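The abstract names partial homomorphic cryptography but not a specific scheme. Paillier is a standard partially (additively) homomorphic scheme, so the sketch below uses a textbook Paillier with toy parameters purely to show the property that ciphertexts can be combined without decryption; it is not secure and is not the paper's Java EE implementation.

# Textbook Paillier with toy parameters, only to illustrate the additive
# homomorphic property E(m1)*E(m2) mod n^2 decrypting to m1+m2.
# Requires Python 3.9+ (math.lcm, modular inverse via pow(x, -1, n)).
import math, random

p, q = 17, 19                       # toy primes, never use sizes like this in practice
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                           # standard simple choice of generator
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # mu = (L(g^lambda mod n^2))^-1 mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

m1, m2 = 42, 58
c_sum = (encrypt(m1) * encrypt(m2)) % n2      # homomorphic addition on ciphertexts
print(decrypt(c_sum))                          # prints 100 == m1 + m2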
48. Proposing a Framework for Exploration of Crime Data Using Web Structure and Content Mining
- Author
-
Hamed Taherdoost, Javad Hosseinkhani, Suriayati Chuprat, Amin Shahraki Moghaddam, and Hadi Barani Baravati
- Subjects
Structure (mathematical logic) ,Engineering ,General Computer Science ,business.industry ,Process (engineering) ,Digital data ,General Engineering ,Law enforcement ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Data science ,Set (abstract data type) ,World Wide Web ,Scalability ,business ,Web crawler ,Reliability (statistics) - Abstract
The purpose of this study is to propose a framework, implement the high-level architecture of a scalable universal crawler to address the reliability gap, and present the process of evaluating criminal suspects through forensic data analysis. For law enforcement agencies, criminal web data provide relevant and anonymous information. Pieces of digital data are used in the forensic analysis of suspects' social networks, but assessing these pieces of information is difficult. In practice, an operator must manually extract the relevant information from the text of a website, find the links and classify them into a database structure. The resulting data set is then ready for various criminal network evaluation tools to be applied for testing. This procedure is inefficient because it is error-prone, and the quality of the analyzed data depends on the expertise and experience of the investigator; consequently, the reliability of the tests is not constant, and better results come only from knowledgeable operators. The objective of this study is therefore to show the process of investigating criminal suspects through forensic data analysis and to address the reliability gap by proposing a structure and applying the high-level architecture of a scalable universal crawler (a minimal crawler sketch follows this record).
- Published
- 2013
- Full Text
- View/download PDF
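A minimal sketch of the link-and-text extraction loop that the proposed crawler architecture is meant to automate, written with the requests and BeautifulSoup libraries; the seed URL, keyword filter and same-site policy are placeholders, since the paper describes the architecture at a high level only.

# Minimal breadth-first crawler: fetch pages, extract text and links, keep
# keyword-matching pages as candidates for the analyst's database.
from collections import deque
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl(seed, keyword, max_pages=20):
    seen, queue, hits = set(), deque([seed]), []
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        text = soup.get_text(" ", strip=True)
        if keyword.lower() in text.lower():
            hits.append(url)                     # candidate page for further analysis
        for a in soup.find_all("a", href=True):  # follow links within the same site
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == urlparse(seed).netloc:
                queue.append(link)
    return hits

if __name__ == "__main__":
    print(crawl("https://example.org/", keyword="fraud"))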
49. Parameters Consideration in Designing a Magnetorheological Damper
- Author
-
M.J. Mughni, Hairi Zamzuri, Izyan Iryani Mohd Yazid, Saiful Amri Mazlan, and Suriayati Chuprat
- Subjects
Engineering ,business.industry ,Mechanical Engineering ,Circuit design ,Mechanical engineering ,Structural engineering ,Piston rod ,Physics::Classical Physics ,Finite element method ,Damper ,law.invention ,Magnetic circuit ,Piston ,Mechanics of Materials ,law ,Magnetorheological fluid ,General Materials Science ,Magnetorheological damper ,business - Abstract
This paper presents a simulation study of the electromagnetic circuit design for a mixed-mode magnetorheological (MR) damper. The magnetic field generated by the electromagnetic circuit of the MR damper was simulated using the Finite Element Method Magnetics (FEMM) software package. All geometry parameters were considered and adjusted in order to obtain the best MR damper performance. Eventually, six different parameter considerations were proposed: the selection of materials, the polarity of the coils, the diameters of the piston, piston rod and core, the shear and squeeze gap clearances, the piston pole length, and the thickness of the housing. The lumped magnetic-circuit relation behind these choices is sketched after this record.
- Published
- 2013
- Full Text
- View/download PDF
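As background for why these six parameters matter, the lumped magnetic-circuit relation (Hopkinson's law) below relates the coil magnetomotive force to the flux through the circuit; it is standard theory, not the FEMM model itself.

\[
N I = \Phi \sum_i \mathcal{R}_i, \qquad
\mathcal{R}_i = \frac{l_i}{\mu_0\,\mu_{r,i}\,A_i}, \qquad
B_{\text{gap}} = \frac{\Phi}{A_{\text{gap}}}
\]

Here \(N I\) is the magnetomotive force of the coil, \(\Phi\) the magnetic flux, and \(\mathcal{R}_i\) the reluctance of each segment of the path (core, piston pole, housing, and the shear and squeeze gaps). Because the MR fluid gaps have a much lower relative permeability than the steel path, they tend to dominate the total reluctance, which is why the gap clearances, the pole length and area, and the material permeabilities are decisive, while the coil polarity determines how the fluxes from the coils combine in a mixed-mode design.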
50. Hybrid task scheduling framework based on EDF and HPIO in overloaded situation (For preemptive system by multiprocessor)
- Author
-
Amir Hatami Hardoroudi and Suriayati Chuprat
- Subjects
0106 biological sciences ,Computer science ,Distributed computing ,0202 electrical engineering, electronic engineering, information engineering ,Process control ,020201 artificial intelligence & image processing ,Multiprocessing ,Algorithm design ,02 engineering and technology ,010603 evolutionary biology ,01 natural sciences ,Scheduling (computing) - Abstract
These days, handling complicated tasks has become an essential use of real-time embedded systems, and how to allocate resources effectively has become an important issue. Utilizing real-time devices can be very useful for executing complex jobs that need to be scheduled. The authors tackle the overload situation in real-time systems by using HPIO (hybrid PSO and IWO). They illustrate that, compared with PSO and ACO, better results can be achieved by using HPIO in a multiprocessor system in an overloaded situation. The Min-Min algorithm is used to improve the local search in HPIO and to achieve fair load balancing among the processors. Finally, the authors present a new hybrid algorithm that combines HPIO and EDF, called HEDFIWO, which achieves better results, with more successfully completed tasks and shorter calculation times, in overloaded situations. A baseline EDF sketch follows this record.
- Published
- 2016
- Full Text
- View/download PDF
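The sketch below shows only the EDF half of the proposed hybrid: a plain earliest-deadline-first dispatcher driven by a deadline-ordered heap on a single processor, run non-preemptively for brevity (the paper considers preemptive multiprocessor scheduling). The HPIO/Min-Min allocation across processors is not reproduced, and the task set is hypothetical.

# Plain earliest-deadline-first (EDF) dispatcher on one processor, simplified to
# run each job to completion; used here only as a baseline illustration.
import heapq

def edf_schedule(tasks):
    """tasks: list of (release, deadline, exec_time, name) tuples."""
    time, done, missed = 0, [], []
    pending = sorted(tasks)                      # sorted by release time
    ready, i = [], 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= time:
            r, d, c, name = pending[i]
            heapq.heappush(ready, (d, c, name))  # heap keyed by absolute deadline
            i += 1
        if not ready:                            # idle until the next release
            time = pending[i][0]
            continue
        d, c, name = heapq.heappop(ready)        # run the earliest-deadline job
        time += c
        (done if time <= d else missed).append(name)
    return done, missed

if __name__ == "__main__":
    tasks = [(0, 7, 3, "T1"), (1, 5, 2, "T2"), (2, 20, 6, "T3"), (3, 8, 4, "T4")]
    completed, late = edf_schedule(tasks)
    print("completed:", completed, "missed:", late)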