93,067 results
Search Results
202. Maintainable Software Solution Development Using Collaboration Between Architecture and Requirements in Heterogeneous IoT Paradigm (Short Paper)
- Author
-
Wanchun Dou, Wajid Rafique, and Maqbool Khan
- Subjects
Software ,Development (topology) ,Requirements engineering ,business.industry ,Computer science ,Scalability ,Interoperability ,Maintainability ,Architecture ,Software engineering ,business ,Software architecture - Abstract
The Internet of Things (IoT) has become deeply involved in the development of smart infrastructure. Software solutions in IoT have to contend with a lack of abstractions, heterogeneity, multiple stakeholders, scalability, and interoperability among devices. Developers need to implement application logic on multiple hardware platforms to satisfy the fundamental business goals. Moreover, long-term maintenance issues caused by the frequent introduction of new requirements and hardware platforms pose a major challenge in IoT solution development.
- Published
- 2019
- Full Text
- View/download PDF
203. Short Paper: The Proof is in the Pudding
- Author
-
Nadia Heninger, Eric Wustrow, and Marcella Hastings
- Subjects
050101 languages & linguistics ,Theoretical computer science ,Computer science ,business.industry ,05 social sciences ,Cryptography ,02 engineering and technology ,Construct (python library) ,Mathematical proof ,Software ,Discrete logarithm ,Proof-of-work system ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Element (category theory) ,business ,Protocol (object-oriented programming) ,Computer Science::Cryptography and Security - Abstract
We propose a proof of work protocol that computes the discrete logarithm of an element in a cyclic group. Individual provers generating proofs of work perform a distributed version of the Pollard rho algorithm. Such a protocol could capture the computational power expended to construct proof-of-work-based blockchains for a more useful purpose, as well as incentivize advances in hardware, software, or algorithms for an important cryptographic problem. We describe our proposed construction and elaborate on challenges and potential trade-offs that arise in designing a practical proof of work.
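As a point of reference for the abstract above, here is a toy sketch of the classic (non-distributed) Pollard rho walk for discrete logarithms; the group parameters are tiny made-up values, and the paper's distributed, proof-of-work variant of the algorithm is not modelled.

```python
def pollard_rho_dlog(g, h, p, n):
    """Find x with g**x == h (mod p), where n is the (prime) order of g. May fail and return None."""
    def step(x, a, b):
        # Pseudo-random walk over the group, partitioned by x mod 3,
        # while tracking exponents (a, b) such that x == g^a * h^b (mod p).
        if x % 3 == 0:
            return (x * x) % p, (2 * a) % n, (2 * b) % n
        if x % 3 == 1:
            return (x * g) % p, (a + 1) % n, b
        return (x * h) % p, a, (b + 1) % n

    x, a, b = 1, 0, 0          # "tortoise"
    X, A, B = 1, 0, 0          # "hare", moving twice as fast (Floyd cycle detection)
    for _ in range(n):
        x, a, b = step(x, a, b)
        X, A, B = step(*step(X, A, B))
        if x == X:             # collision: g^a * h^b == g^A * h^B
            r = (B - b) % n
            return None if r == 0 else ((a - A) * pow(r, -1, n)) % n
    return None

# Toy parameters (assumptions, not from the paper): p = 1019, g = 4 has prime order n = 509.
p, n, g = 1019, 509, 4
h = pow(g, 123, p)
print(pollard_rho_dlog(g, h, p, n))   # expected: 123 (None only if the walk degenerates)
```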
- Published
- 2019
- Full Text
- View/download PDF
204. European Project Space Papers for the PROFES 2019 - Summary
- Author
-
Davide Fucci and Alessandra Bagnato
- Subjects
Engineering ,Engineering management ,Deliverable ,business.industry ,Space (commercial competition) ,business ,Dissemination ,Outcome (probability) - Abstract
The European Project Space at PROFES 2019 provides an opportunity for researchers involved in ongoing and recently completed research projects (national, European, and international) related to the topics of the conference to present their projects and disseminate their objectives, deliverables, and outcomes.
- Published
- 2019
- Full Text
- View/download PDF
205. Cocks’ Identity-Based Encryption in the Standard Model, via Obfuscation Techniques (Short Paper)
- Author
-
Xin Wang, Shimin Li, and Rui Xue
- Subjects
Discrete mathematics ,Provable security ,Computer science ,business.industry ,Obfuscation ,Hash function ,Cryptosystem ,Cryptography ,Encryption ,business ,Random oracle ,Standard model (cryptography) - Abstract
Identity-based encryption (IBE) is an attractive primitive in modern cryptography. Cocks first gave an elegant construction of IBE under the Quadratic Residuosity (QR) assumption. Unfortunately, its security holds only in the Random Oracle (RO) model. In this work, we aim to provide Cocks' scheme with provable security in the standard model. Specifically, we modify Cocks' scheme by explicitly instantiating the hash function using indistinguishability obfuscation in two different ways, which yields two variants of Cocks' scheme. Their security is established in the well-defined selective-ID and adaptive-ID models, respectively. As an additional contribution, we adapt the same method to the Boneh, LaVigne, Sabin (BLS) e-th residuosity based IBE cryptosystem and obtain an adaptively chosen-ID secure scheme under the Modified e-th Residuosity (MER) assumption.
- Published
- 2019
- Full Text
- View/download PDF
206. Position Paper: Defect Prediction Approaches for Software Projects Using Genetic Fuzzy Data Mining
- Author
-
V. Ramaswamy, T. P. Pushphavathi, and V. Suma
- Subjects
Engineering ,Fuzzy clustering ,business.industry ,computer.software_genre ,Software quality ,Variety (cybernetics) ,Software ,Software bug ,Key (cryptography) ,Position paper ,Data mining ,Project management ,business ,computer - Abstract
Despite significant advances in software engineering research, the ability to produce reliable software products for a variety of critical applications remains an open problem. The key challenge has been the fact that each software product is unique, and existing methods are predominantly not capable of adapting to the observations made during project development. This paper makes the following claim: Genetic fuzzy data mining methods provide an ideal research paradigm for achieving reliable and efficient software defect pattern analysis. A brief outline of some fuzzy data mining methods is provided, along with a justification of why they are applicable to software defect analysis. Furthermore, some practical challenges to the extensive use of fuzzy data mining methods are discussed, along with possible solutions to these challenges.
- Published
- 2014
- Full Text
- View/download PDF
207. The Random Neural Network and Web Search: Survey Paper
- Author
-
Will Serrano
- Subjects
Information retrieval ,Artificial neural network ,Computer science ,business.industry ,Deep learning ,02 engineering and technology ,Recommender system ,Random neural network ,Ranking (information retrieval) ,Search engine ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Relevance (information retrieval) ,Learning to rank ,Artificial intelligence ,business - Abstract
E-commerce customers and general Web users should not assume that the products suggested by recommender systems or the results displayed by Web search engines are either complete or relevant to their search aspirations. The economic priority of Web-related businesses requires a higher rank for their Web snippets or product suggestions in order to attract additional customers; furthermore, Web search engine and recommender system revenue is obtained from advertisements and pay-per-click. This survey paper presents a review of Web search engines, ranking algorithms, citation analysis and recommender systems. In addition, neural networks and deep learning are also analyzed, including their use in learning relevance and ranking. Finally, this survey paper also introduces the Random Neural Network with its practical applications.
- Published
- 2018
- Full Text
- View/download PDF
208. Generic Paper and Plastic Recognition by Fusion of NIR and VIS Data and Redundancy-Aware Feature Ranking
- Author
-
Matthias Zisler, Alla Serebryanyk, and Claudius Schnörr
- Subjects
Waste sorting ,Fusion ,Pixel ,business.industry ,Computer science ,Decision tree learning ,Feature vector ,Small number ,Pattern recognition ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Classifier (UML) ,0105 earth and related environmental sciences ,Hue - Abstract
Near-infrared (NIR) spectroscopy is used in many applications to gather information about the chemical composition of materials. For paper waste sorting, given a small number of scores computed from NIR spectra and assuming more or less unimodally clustered data, a pixel classifier can still be crafted by hand using knowledge about chemical properties and a reasonable amount of intuition. Additional information can be gained from visual data (VIS). However, it is not obvious which features, e.g. based on color, saturation, or textured areas, are ultimately important for successfully separating the paper classes in feature space. Hence, a rigorous feature analysis becomes inevitable. We have chosen a generic machine-learning approach to fuse NIR and VIS information. By exploiting a classification tree and a variety of additional visual features, we could increase the recognition rate to 78% for 11 classes, compared to 63% using NIR scores alone. A modified feature ranking measure, which takes redundancies between features into account, allows us to analyze the importance of features and reduce them effectively. While some visual features such as color saturation and hue proved to be important, some NIR scores could even be dropped. Finally, we generalize this approach to analyze raw NIR spectra instead of score values and apply it to plastic waste sorting.
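The paper's modified redundancy-aware ranking measure is not reproduced here; the following sketch only illustrates the general idea (score each candidate feature by its relevance to the label minus its redundancy with already-selected features) in an mRMR-style form, with synthetic data as a stand-in.

```python
import numpy as np

def redundancy_aware_ranking(X, y, k):
    """Greedy mRMR-style ranking: relevance (|corr| with the label) minus mean
    redundancy (|corr| with already-selected features). Illustrative only."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected, remaining = [], list(range(n_features))
    while remaining and len(selected) < k:
        scores = []
        for j in remaining:
            redundancy = (np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
                          if selected else 0.0)
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Tiny synthetic example: feature 1 nearly duplicates feature 0, feature 2 is noise.
rng = np.random.default_rng(0)
f0 = rng.normal(size=200)
X = np.column_stack([f0, f0 + 0.01 * rng.normal(size=200), rng.normal(size=200)])
y = (f0 > 0).astype(float)
print(redundancy_aware_ranking(X, y, k=2))   # the near-duplicate is penalised for redundancy
```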
- Published
- 2018
- Full Text
- View/download PDF
209. Evaluation of Usability and Workload Associated with Paper Strips as Compared to Virtual Flight Strips Used for Ramp Operations
- Author
-
Victoria L. Dulchinos
- Subjects
Decision support system ,business.industry ,Computer science ,Usability ,Workload ,Human factors integration ,STRIPS ,Statistical power ,Field (computer science) ,law.invention ,law ,User interface ,business ,Simulation - Abstract
This paper describes a study comparing the use of paper strips with virtual flight strips depicted on a new user interface, the Ramp Traffic Console (RTC), designed for ramp controllers to use in place of paper strips. A Human-In-the-Loop (HITL) experiment was performed as the fifth in a series of six HITL simulation studies designed to evaluate a pushback Decision Support Tool (DST) concept for Charlotte Douglas International Airport (CLT). Workload and usability were assessed in post-run and post-study questionnaires. In the RTC virtual flight strip condition, post-run questionnaire results show lower workload ratings across all aspects of workload; additionally, a trend is found toward increased usability ratings. Post-study questionnaire results indicate a preference for RTC over paper strips. Additional research is suggested with more training runs and a greater number of participants to increase statistical power. It is also suggested that this new technology be re-evaluated as part of the ATD-2 field testing activities.
- Published
- 2018
- Full Text
- View/download PDF
210. Vision Paper for Enabling Digital Healthcare Applications in OHP2030
- Author
-
Shuichiro Yamamoto, Yoshimasa Masuda, and Tetsuya Toma
- Subjects
Open platform ,business.industry ,Computer science ,Big data ,Enterprise architecture ,020207 software engineering ,02 engineering and technology ,Digital healthcare ,Health care ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,020201 artificial intelligence & image processing ,Architecture ,Telecommunications ,business ,Internet of Things - Abstract
Internet of Things (IoT) and Big Data applications and services have spread and are rapidly being deployed in the information services of the healthcare and financial industries, among others. However, a previous paper suggested that current IoT services are developed individually; an open platform and architecture for healthcare IoT services is therefore deemed necessary as Big Data applications gradually spread through the healthcare industry. An open healthcare platform is expected to promote and implement digital IT applications for healthcare communities efficiently. In this paper, we suggest that various IoT and Big Data applications be designed and verified on an open platform for healthcare-related IoT services proposed and verified by the research initiative named “Open Healthcare Platform 2030 – OHP2030”. In addition, the vision for enabling digital healthcare applications within the OHP2030 research initiative is explained.
- Published
- 2018
- Full Text
- View/download PDF
211. Detection of Computer-Generated Papers Using One-Class SVM and Cluster Approaches
- Author
-
Renata Avros and Zeev Volkovich
- Subjects
Relation (database) ,business.industry ,Computer science ,05 social sciences ,Context (language use) ,050905 science studies ,Machine learning ,computer.software_genre ,Base (topology) ,Class (biology) ,Support vector machine ,Set (abstract data type) ,Outlier ,Artificial intelligence ,0509 other social sciences ,050904 information & library sciences ,Cluster analysis ,business ,computer - Abstract
The paper presents a novel methodology intended to distinguish between real and artificially generated manuscripts. The approach exploits inherent differences between human and artificially generated writing styles. Taking into account the nature of the generation process, we suggest that the human style is essentially more “diverse” and “rich” in comparison with an artificial one. In order to assess dissimilarities between fake and real papers, a distance between writing styles is evaluated via the dynamic dissimilarity methodology. From this standpoint, the generated papers are very similar to each other in style and differ significantly from human-written documents. A set of fake documents is taken as the training data, so that a real document is expected to appear as an outlier in relation to this collection. Thus, we analyze the proposed task in the context of one-class classification using a one-class SVM approach, compared with a clustering-based procedure. The numerical experiments provided demonstrate the very high ability of the proposed methodology to recognize artificially generated papers.
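A minimal sketch of the one-class setup described above, assuming scikit-learn is available; the paper's dynamic-dissimilarity style features are replaced here by generic character n-gram TF-IDF vectors, and the documents are placeholders.

```python
# Train a one-class SVM on generated papers only; human-written text should appear as an outlier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

fake_papers = [                     # hypothetical training set of generated documents
    "auto generated text sample one ...",
    "auto generated text sample two ...",
    "auto generated text sample three ...",
]
test_papers = [
    "auto generated text sample four ...",
    "a genuinely human-written abstract with a richer and more diverse style ...",
]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X_train = vec.fit_transform(fake_papers)
X_test = vec.transform(test_papers)

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)
print(clf.predict(X_test))   # +1 = looks like the generated class, -1 = outlier (likely human)
```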
- Published
- 2018
- Full Text
- View/download PDF
212. RusNLP: Semantic Search Engine for Russian NLP Conference Papers
- Author
-
Irina Nikishina, Andrey Kutuzov, and Amir Bakarov
- Subjects
Service (systems architecture) ,Source code ,Computer science ,business.industry ,media_common.quotation_subject ,05 social sciences ,Semantic search ,02 engineering and technology ,Recommender system ,computer.software_genre ,Metadata ,Search engine ,Semantic similarity ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,0509 other social sciences ,Web service ,050904 information & library sciences ,business ,computer ,Natural language processing ,media_common - Abstract
We present RusNLP, a web service implementing a semantic search engine and recommendation system over the proceedings of three major Russian NLP conferences (Dialogue, AIST and AINL). The collected corpus spans 12 years and contains about 400 academic papers in English. The web service allows searching for publications semantically similar to arbitrary user queries or to any given paper. Search results can be filtered by authors and their affiliations, conferences or years. They are also interlinked with the NLPub.ru service, making it easier to quickly capture the general focus of each paper. The search engine source code and the publication metadata are freely available to all interested researchers.
- Published
- 2018
- Full Text
- View/download PDF
213. A Citation-Based Recommender System for Scholarly Paper Recommendation
- Author
-
Abdullahi Baffa Bichi, Khalid Haruna, Tutut Herawan, Maizatul Akmar Ismail, Sutrisna Wibawa, and Victor Chang
- Subjects
Information retrieval ,business.industry ,Computer science ,05 social sciences ,Novelty ,02 engineering and technology ,Recommender system ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,The Internet ,0509 other social sciences ,050904 information & library sciences ,business ,Baseline (configuration management) ,Citation ,Block (data storage) - Abstract
Several approaches have been proposed to help researchers acquire relevant and useful scholarly papers from the enormous amount of information (information overload) available over the internet. A significant challenge for those approaches is their assumption that the whole content of each candidate paper is freely accessible, which is not always the case given copyright restrictions. They also depend heavily on a priori user profiles, which require a significant number of registered users for the systems to work effectively and are a stumbling block for the creation of a new recommendation system. This paper proposes a citation-based recommender system built on the latent relations connecting research papers for scholarly paper recommendation. The novelty of the proposed approach is that, unlike existing works, the latent associations that exist between a scholarly paper and its various citations are utilised. The proposed approach aims to personalise scholarly recommendations regardless of the user's expertise and research field, based on paper-citation relations. Experimental results show significant improvement over other baseline methods.
- Published
- 2018
- Full Text
- View/download PDF
214. Temporary Housing Made from Recycled Paper Tubes
- Author
-
Lisiane Ilha Librelotto and Luana Toralles Carbonari
- Subjects
Architectural engineering ,media_common.quotation_subject ,Damages ,Quality (business) ,Context (language use) ,Local population ,Business ,Reuse ,Architecture ,Natural disaster ,Constructive ,media_common - Abstract
Natural disasters have been an increasingly present issue in the media and in the daily life of society. Due to a lack of planning and organization, the damage is intensified, destroying infrastructure and habitable structures. In addition, this damage has left large numbers of people homeless, resulting in the need for temporary housing. In response, the architect Shigeru Ban developed in 1995 a temporary housing project named “Paper Log House” for the homeless after an earthquake in Japan. This housing was built with recycled paper tubes to reduce building costs, speed up construction and reuse an available material, minimizing its impact on the environment. Subsequently, it was used as a response to disasters in different places, being modified to adapt to each context. This paper aims to perform a comparative analysis of this temporary housing as used in 1995 in Japan, in 2000 in Turkey, in 2001 in India and in 2014 in the Philippines. For this, a literature review was performed, identifying concepts regarding the use of recycled paper tubes in Shigeru Ban’s architecture and the design and construction characteristics of the first project for Japan. After that, an analysis was carried out comparing the four cases. From the results it can be concluded that the cultural, economic and environmental aspects of each context are of great importance in the project. Thus, priority should be given to the use of local materials, constructive agility, comfort and privacy for users, aesthetic quality, participation of the local population, and recycling of materials, among others.
- Published
- 2018
- Full Text
- View/download PDF
215. Analysis of Scientific Papers on Organizational Uncertainty in Education and School Administration (1990–2016)
- Author
-
Müzeyyen Petek Dinçman and Didem Koşar
- Subjects
business.industry ,media_common.quotation_subject ,Context (language use) ,Public relations ,Affect (psychology) ,Politics ,Content analysis ,Perception ,Milestone (project management) ,business ,Psychology ,Administration (government) ,media_common ,Qualitative research - Abstract
Organizational uncertainty affects all administration processes and functions whereas perception of uncertainty affects not only organizational administration and functions, but may also adversely affect individual performance. Consequently, a review of the existing papers related to organizational uncertainty and developing suggestions on the issue will direct the course of solutions to be found for this situation. A secondary purpose of this review is researching the titles, types, contents, methods, findings and comments, conclusions, and suggestions of the scientific papers related to organizational uncertainty in education and school administration in the period from 1990 onward. The research was designed with a qualitative research model, and a document examination technique was used. The data were obtained using document analysis. Content analysis was used for interpreting the documents. The year 1990 is a critical milestone in the study since it was a period when social and political events were being experienced rapidly and intensely in both a national and an international context. Since it is of significance to discuss the studies conducted in this period and their outcomes, the year of 1990 was selected as the date of commencement.
- Published
- 2018
- Full Text
- View/download PDF
216. A Hierarchical Neural Extractive Summarizer for Academic Papers
- Author
-
Yoshimasa Tsuruoka and Kazutaka Kinugawa
- Subjects
Training set ,Artificial neural network ,business.industry ,Computer science ,computer.software_genre ,Automatic summarization ,Focus (linguistics) ,Tree (data structure) ,Tree structure ,Recurrent neural network ,Artificial intelligence ,business ,computer ,Natural language processing ,Latent vector - Abstract
Recent neural network-based models have proven successful in summarization tasks. However, previous studies mostly focus on comparatively short texts, and it is still challenging for neural models to summarize long documents such as academic papers. Because of their length, summarizing academic papers presents two obstacles: it is hard for a recurrent neural network (RNN) to squash all the information in the source document into a latent vector, and it is simply difficult to pinpoint a few correct sentences among a large number of sentences. In this paper, we present an extractive summarizer for academic papers. The idea is to convert a paper into a tree structure composed of nodes corresponding to sections, paragraphs, and sentences. First, we build a hierarchical encoder-decoder model based on the tree. This design eases the load on the RNNs and enables us to effectively obtain vectors that represent paragraphs and sections. Second, we propose a tree structure-based scoring method to steer our model toward correct sentences, which also helps the model avoid selecting irrelevant sentences. We collect academic papers available from PubMed Central and build training data suited for supervised machine learning-based extractive summarization. Our experimental results show that the proposed model outperforms several baselines and reduces high-impact errors.
- Published
- 2018
- Full Text
- View/download PDF
217. Modularized and Attention-Based Recurrent Convolutional Neural Network for Automatic Academic Paper Aspect Scoring
- Author
-
Xiaowei Han, Feng Qiao, and Lizhen Xu
- Subjects
Computer science ,business.industry ,Deep learning ,Pooling ,02 engineering and technology ,Audit ,Machine learning ,computer.software_genre ,Convolutional neural network ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Linear layer ,Baseline (configuration management) ,business ,computer ,Network model - Abstract
Thousands of academic papers are submitted to top venues each year. Manual auditing is time-consuming and laborious, and the result may be influenced by human factors. This paper investigates a modularized and attention-based recurrent convolutional network model to represent academic papers and predict aspect scores. The model treats the input text as a module-document hierarchy, uses attention-pooling CNN and LSTM layers to represent the text, and outputs predictions with a linear layer. Empirical results on the PeerRead data show that this model gives the best performance among the baseline models.
- Published
- 2018
- Full Text
- View/download PDF
218. Manufacturing Cellulosic Fibres for Making Paper: A Historical Perspective
- Author
-
Raimo Alén
- Subjects
Chemical pulping ,Architectural engineering ,Engineering ,Economic production ,Cellulosic ethanol ,business.industry ,Secondary sector of the economy ,Papermaking ,business - Abstract
The manufacture of pulp and paper is an important branch of industry worldwide and is based on complex and multidisciplinary technology. The production and modification of cellulosic fibre, which created the foundation for this industrial sector, has had a rich and colourful history. It has gone through many eras and its development has been closely integrated with the growth of our fundamental knowledge of chemistry and other natural sciences. During the last two decades, modern pulp and paper technologies have undergone some significant developments, especially ones involving the creation of a wide range of new options for different feedstock materials. Nevertheless, the fundamental concept behind most pulp and paper technologies has remained practically the same. By the same token, there are still some major challenges to be overcome in the future, and they deal mainly with the economic production of novel by-products and their possible applications as well as environmental concerns. This chapter briefly traces the historical evolution of the development of pulp and paper technology, and also outlines its present situation.
- Published
- 2018
- Full Text
- View/download PDF
219. Pattern Recognition Method for Classification of Agricultural Scientific Papers in Polish
- Author
-
Waldemar Karwowski and Piotr Wrzeciono
- Subjects
Jaccard index ,Computer science ,business.industry ,02 engineering and technology ,Polish ,computer.software_genre ,language.human_language ,Domain (software engineering) ,Set (abstract data type) ,020204 information systems ,Similarity (psychology) ,Pattern recognition (psychology) ,Inflection ,0202 electrical engineering, electronic engineering, information engineering ,language ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Sentence ,Natural language processing - Abstract
Calculating text similarity is an essential task for text analysis and classification. It can be based, e.g., on Jaccard, cosine or other similar measures. Such measures treat the text as a bag-of-words and therefore lose some syntactic and semantic features of its sentences. This article presents a different measure based on the so-called artificial sentence pattern (ASP) method. This method has been developed to analyze texts in the Polish language, which has very rich inflection; ASP therefore utilizes syntactic and semantic rules of Polish. Nevertheless, we argue that it admits extensions to other languages. As a result of the analysis, we obtain several hypernodes which contain the most important words. Each hypernode corresponds to one of the examined documents, the latter being published papers from the agriculture domain written in Polish. Experimental results obtained from that set of papers are described and discussed. The results are illustrated visually using graphs of hypernodes and compared with the Jaccard and cosine measures.
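For reference, the two baseline bag-of-words measures the article compares against (Jaccard and cosine) can be computed as below; the example sentences are placeholders, not taken from the paper's Polish corpus.

```python
from collections import Counter
import math

def jaccard(a, b):
    # Jaccard similarity over the sets of word types in the two texts.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def cosine(a, b):
    # Cosine similarity over term-frequency vectors.
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

d1 = "soil moisture affects wheat yield"
d2 = "wheat yield depends on soil moisture"
print(jaccard(d1, d2), cosine(d1, d2))
```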
- Published
- 2018
- Full Text
- View/download PDF
220. Robust Detection of Water Sensitive Papers
- Author
-
André R. S. Marçal
- Subjects
0106 biological sciences ,Quadrilateral ,business.industry ,Computer science ,Pattern recognition ,04 agricultural and veterinary sciences ,01 natural sciences ,Image (mathematics) ,Set (abstract data type) ,Identification (information) ,Metric (mathematics) ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,Segmentation ,Relevance (information retrieval) ,Artificial intelligence ,business ,Mobile device ,010606 plant biology & botany - Abstract
The automatic analysis of water-sensitive papers (WSP) is of great relevance in agriculture. SprayImageMobile is a software tool developed for mobile devices (iOS) that provides full processing of WSP, from image acquisition to the final reporting. One of the initial processing tasks on SprayImageMobile is the detection (or segmentation) of the WSP on the image acquired by the device. This paper presents the method developed for the detection of the WSP that was implemented in SprayImageMobile. The method is based on the identification of reference points along the WSP margins, and the modeling of a quadrilateral that takes into account possible false positive and negative identifications. The method was tested on a set of 360 images, failing to detect the WSP in only 1 case (detection accuracy of 99.7%). The segmentation accuracy was evaluated using references obtained by a semi-automatic method. The average values obtained for the 359 images tested were: 0.9980 (precision), 0.9940 (recall) and 0.9921 (Hammoude metric).
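A small sketch of the pixel-level evaluation reported above (precision and recall between a detected mask and a reference mask), using synthetic masks; the Hammoude metric and the quadrilateral-fitting detector itself are not shown.

```python
import numpy as np

def precision_recall(detected, reference):
    # Pixel-level precision and recall between two boolean segmentation masks.
    tp = np.logical_and(detected, reference).sum()
    fp = np.logical_and(detected, ~reference).sum()
    fn = np.logical_and(~detected, reference).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Synthetic reference and slightly shifted detection, standing in for WSP masks.
reference = np.zeros((100, 100), dtype=bool); reference[20:80, 25:75] = True
detected = np.zeros((100, 100), dtype=bool); detected[22:80, 25:77] = True
print(precision_recall(detected, reference))
```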
- Published
- 2018
- Full Text
- View/download PDF
221. Using Deep Learning Word Embeddings for Citations Similarity in Academic Papers
- Author
-
El Habib Benlahmar, Sara Mifrah, Nadia Bouhriz, Oumaima Hourrane, and Mohamed Rachdi
- Subjects
Word embedding ,business.industry ,Computer science ,Deep learning ,02 engineering and technology ,computer.software_genre ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Leverage (statistics) ,020201 artificial intelligence & image processing ,Plagiarism detection ,Artificial intelligence ,business ,Citation ,computer ,Natural language processing - Abstract
The citation similarity measurement task is defined as determining how similar the meanings of two citations are. This task plays a significant role in Natural Language Processing applications, especially in academic plagiarism detection. Yet computing citation similarity is not trivial, due to the incomplete and ambiguous information presented in academic papers, which makes it necessary to leverage extra knowledge to understand it; moreover, most similarity measures based on syntactic features, as well as those based on semantics, still have many drawbacks. In this paper, we propose a corpus-based approach using deep learning word embeddings to compute citation similarity more effectively. Our study reviews previous work on text similarity, namely string-based, knowledge-based and corpus-based approaches. We then define our new basis and experiment on a large dataset of scientific papers. The final results demonstrate that a deep learning based approach can enhance the effectiveness of citation similarity.
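A minimal illustration of the corpus-based idea: represent each citation by the average of its word embeddings and compare by cosine similarity. The tiny embedding table below is invented; the paper would use vectors trained on a large scientific corpus.

```python
import numpy as np

embeddings = {                      # hypothetical 3-dimensional word vectors
    "deep":     np.array([0.9, 0.1, 0.0]),
    "learning": np.array([0.8, 0.2, 0.1]),
    "neural":   np.array([0.7, 0.3, 0.0]),
    "networks": np.array([0.6, 0.4, 0.1]),
    "soil":     np.array([0.0, 0.1, 0.9]),
    "erosion":  np.array([0.1, 0.0, 0.8]),
}

def sentence_vector(sentence):
    # Average the embeddings of the known words in the citation.
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

c1, c2, c3 = "deep learning", "neural networks", "soil erosion"
print(cosine(sentence_vector(c1), sentence_vector(c2)))  # high: related citations
print(cosine(sentence_vector(c1), sentence_vector(c3)))  # low: unrelated citations
```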
- Published
- 2018
- Full Text
- View/download PDF
222. A Critical Analysis of Teacher Involvement in the English Language Paper of the First School Leaving Certificate Examination (FSLC) in Cameroon
- Author
-
Achu Charles Tante
- Subjects
Higher education ,business.industry ,Possession (linguistics) ,Mathematics education ,Validity ,State of affairs ,Academic achievement ,business ,Psychology ,Training and development ,Certificate ,Competence (human resources) - Abstract
English is one of two official languages in Cameroon and is used as an L2 from pre-school to university. This implies that the language is crucial to the academic achievement of primary school pupils. Success in the First School Leaving Certificate (FSLC) Examination marks the end of the primary cycle and possession of the first education certificate, which opens the way to several avenues. However, failure in the English language paper would hardly lead to success in the examination. In addition, the majority of careers require success and competence in the English language. The English language paper is therefore high-stakes, with great washback effects not only for pupils and candidates but also for various stakeholders. Recently, there have been loud cries about the disparity between English language results and the language use of primary school students. The trend seems to continue right up to higher education. The question has been raised as to how to explain the mass success rate in English language in the FSLC Examination given the poor level of communication of students. Many reasons and explanations have been suggested for this state of affairs, such as the policy of education for all, the young ages of pupils, inadequate teaching materials, inadequate teacher training and development, and poor parental support. This chapter takes a critical look at the involvement of classroom teachers in the development, organisation, administration and marking of the English language paper in the FSLC Examination. The chapter attempts to examine the roles and duties of classroom teachers, and whether their involvement enhances or reduces the validity and reliability of the examination.
- Published
- 2018
- Full Text
- View/download PDF
223. Clustering Multi-View Data Using Non-negative Matrix Factorization and Manifold Learning for Effective Understanding: A Survey Paper
- Author
-
Thi Ngoc Khanh Luong and Richi Nayak
- Subjects
Class (computer programming) ,business.industry ,Computer science ,Nonlinear dimensionality reduction ,02 engineering and technology ,Machine learning ,computer.software_genre ,law.invention ,Matrix decomposition ,Non-negative matrix factorization ,ComputingMethodologies_PATTERNRECOGNITION ,law ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Latent structure ,Focus (optics) ,business ,Cluster analysis ,Manifold (fluid mechanics) ,computer - Abstract
Multi-view data, in which the data is represented by many types of features, has received much attention recently. The class of methods utilising non-negative matrix factorization (NMF) and manifold learning to seek a meaningful latent structure in the data has been popular for both traditional and multi-view data. NMF- and manifold-based multi-view clustering methods focus on dealing with the challenges of manifold learning and applying manifold learning within the NMF framework. This paper provides a comprehensive review of this important class of methods for multi-view data. We conduct extensive experiments on several datasets and raise many open problems that can be addressed in the future so that higher clustering performance can be achieved.
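A single-view sketch of the NMF clustering idea surveyed above, assuming scikit-learn; the multi-view methods reviewed in the paper additionally couple the factorisations of several views and add manifold (graph) regularisation, neither of which is shown here. The data is synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Two blocks of samples concentrated on different feature groups.
X = np.vstack([
    np.hstack([rng.uniform(1, 2, (30, 5)), rng.uniform(0, 0.1, (30, 5))]),
    np.hstack([rng.uniform(0, 0.1, (30, 5)), rng.uniform(1, 2, (30, 5))]),
])

# Factorise X ≈ W·H and cluster each sample by its dominant latent factor.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)            # sample-by-factor matrix
labels = W.argmax(axis=1)
print(labels[:5], labels[-5:])        # the two blocks should land in different clusters
```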
- Published
- 2018
- Full Text
- View/download PDF
224. Strengthening Public Key Authentication Against Key Theft (Short Paper)
- Author
-
Conrad Irwin and Martin Kleppmann
- Subjects
Password ,business.industry ,Internet privacy ,Computer security ,computer.software_genre ,Encryption ,Public-key cryptography ,4606 Distributed Computing and Systems Software ,46 Information and Computing Sciences ,Server ,Authentication protocol ,Key (cryptography) ,Strong authentication ,4604 Cybersecurity and Privacy ,Business ,Replay attack ,computer - Abstract
Authentication protocols based on an asymmetric keypair provide strong authentication as long as the private key remains secret, but may fail catastrophically if the private key is lost or stolen. Even when encrypted with a password, stolen key material is susceptible to offline brute-force attacks. In this paper we demonstrate a method for rate-limiting password guesses on stolen key material, without requiring special hardware or changes to servers. By slowing down offline attacks and enabling easy key revocation our algorithm reduces the risk of key compromise, even if a low-entropy password is used.
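For context only, the baseline the paper improves upon can be sketched as follows: a private key encrypted under a password-derived key, where a deliberately expensive KDF makes each offline guess costly. This is not the paper's rate-limiting scheme, just the problem setting; the scrypt parameters below are illustrative assumptions.

```python
import hashlib, os, time

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Memory-hard KDF: each password guess against stolen key material must pay this cost.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)
start = time.time()
key = derive_key(b"correct horse battery staple", salt)   # key that would encrypt the private key
print(f"one guess costs ~{time.time() - start:.2f}s")      # an attacker pays this per password tried
```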
- Published
- 2018
- Full Text
- View/download PDF
225. Opportunistic Fog Computing for 5G Radio Access Networks: A Position Paper
- Author
-
Boon-Chong Seet and Jofina Jijin
- Subjects
010302 applied physics ,Edge device ,business.industry ,End user ,Computer science ,02 engineering and technology ,021001 nanoscience & nanotechnology ,01 natural sciences ,Base station ,0103 physical sciences ,Scalability ,Cellular network ,Macrocell ,Latency (engineering) ,0210 nano-technology ,business ,5G ,Computer network - Abstract
Fog-based radio access networks (F-RAN) are poised to play a pivotal role in the much-anticipated 5th Generation (5G) cellular networks. The philosophy of F-RAN is to harness the distributed resources of collaborative edge devices to deliver localized RAN services to end users. Current F-RANs are implemented mainly using dedicated hardware and do not leverage the large number of available distributed edge devices. This paper introduces the idea of an opportunistic fog RAN (OF-RAN), which comprises virtual fog access points (v-FAPs). The v-FAPs are formed opportunistically by one or more local edge devices, also referred to as service nodes, such as WiFi access points, femtocell base stations and more resource-rich end user devices, under the coverage and management of the physical FAP, which can be a dedicated fog server, fog-enabled remote radio heads (RRHs) or macrocell base stations. The proposed OF-RAN can be a low-latency and highly scalable solution for 5G cellular networks.
- Published
- 2018
- Full Text
- View/download PDF
226. Gender, Colonialism, and Italian Difference: Duras and The Aspern Papers
- Author
-
Kathryn Wichelns
- Subjects
Postcolonialism ,Literature ,History ,business.industry ,media_common.quotation_subject ,Ethnic group ,French ,Ambivalence ,Colonialism ,language.human_language ,Reading (process) ,Novella ,language ,Racialization ,business ,media_common - Abstract
Les Papiers d’Aspern (1961) is Marguerite Duras’s untranslated play based on Henry James’s 1888 novella The Aspern Papers. French scholars generally have viewed this play as a direct translation of an earlier English-language adaptation by Michael Redgrave, but the differences between the two versions are substantive. Quite differently than Redgrave, Duras interprets James’s novella as an account of gendered racialization and linguistic difference that resonates with the emergence of French and Francophone postcolonial writing in Paris during the late 1950s and early 1960s. Her reading of James presents a systematic, period-specific analysis that should be considered within the larger histories of both feminist scholarly interpretations and French readings of his work. Additionally, in its analysis of Italy and the Italian language, this under-read play merits attention in discussions of Duras’s ambivalent portrayals of colonialism and ethnic gendering.
- Published
- 2018
- Full Text
- View/download PDF
227. SVC Based Multiple Access Protocol with QoS Guarantee for Next Generation WLAN (Invited Paper)
- Author
-
Run Zhou, Bo Li, Mao Yang, and Zhongjiang Yan
- Subjects
Access network ,business.industry ,Computer science ,Quality of service ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,Throughput ,02 engineering and technology ,0202 electrical engineering, electronic engineering, information engineering ,Media access control ,Wireless ,Channel access method ,business ,Computer network - Abstract
With the increasing demand for video traffic, video services have become more and more diversified. Scalable video coding (SVC) has become one of the most common video coding technologies for meeting the requirements of different video service types. SVC-based quality of service (QoS) guarantees for video users are therefore one of the most basic problems of the network, yet few studies focus on SVC-based QoS guarantee protocols for the next generation wireless local area network (WLAN). This paper proposes an SVC-based media access control (MAC) protocol with QoS guarantees for next generation WLAN, referred to as QoS-SVC. If some sub-channel resources remain after the first channel contention, the protocol offers another opportunity, named second random contention, to the video users that collided or succeeded in the first random contention phase, enabling them to transmit their data in the residual sub-channels. The simulation results show that the throughput of QoS-SVC is improved by 154% compared with a non-second random contention access (Non-SRCA) protocol.
- Published
- 2018
- Full Text
- View/download PDF
228. Beat the Bookmaker – Winning Football Bets with Machine Learning (Best Application Paper)
- Author
-
Julian Knoll and Johannes Stübinger
- Subjects
050208 finance ,Statistical arbitrage ,business.industry ,05 social sciences ,Football ,League ,Machine learning ,computer.software_genre ,Odds ,0502 economics and business ,Economics ,The Internet ,Artificial intelligence ,050207 economics ,business ,computer - Abstract
Over the past decades, football (soccer) has continued to draw more and more attention from people all over the world. Meanwhile, the appearance of the internet led to a rapidly growing market for online bookmakers, companies which offer sports bets at specific odds. With numerous matches every week in dozens of countries, football league matches hold enormous potential for developing betting strategies. In this context, a betting strategy beats the bookmaker if it generates positive average profits over time. In this paper, we develop a data-driven framework for predicting the outcome of football league matches and generating meaningful profits by betting accordingly. A simulation study based on the matches of the five top European football leagues from season 2013/14 to 2017/18 showed that economically and statistically significant returns can be achieved by exploiting large data sets with modern machine learning algorithms. Furthermore, it turned out that these results cannot be reached with a linear regression model or simple betting strategies, such as always betting on the home team.
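A toy version of the decision rule implied above: bet only where the model's estimated win probability times the bookmaker's decimal odds exceeds one, and track average profit per bet. The probabilities, odds, and outcomes below are invented; the paper's actual models and data are not reproduced.

```python
def backtest(predictions, stake=1.0):
    """Average profit per bet for a positive-expected-value betting rule."""
    profit, bets = 0.0, 0
    for p_win, odds, won in predictions:
        if p_win * odds > 1.0:            # positive expected value under the model
            bets += 1
            profit += stake * (odds - 1.0) if won else -stake
    return profit / bets if bets else 0.0

sample = [
    (0.60, 2.10, True),    # (model probability, decimal odds, actual outcome)
    (0.35, 2.50, False),
    (0.55, 1.70, False),
    (0.70, 1.80, True),
]
print(f"average profit per bet: {backtest(sample):.2f}")
```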
- Published
- 2018
- Full Text
- View/download PDF
229. Rhoban Football Club: RoboCup Humanoid Kid-Size 2017 Champion Team Paper
- Author
-
Julien Allali, Rémi Fabre, Loic Gondry, Ludovic Hofer, Olivier Ly, Steve N’Guyen, Grégoire Passault, Antoine Pirrone, and Quentin Rouxel
- Subjects
0209 industrial biotechnology ,Engineering ,business.industry ,ComputingMilieux_PERSONALCOMPUTING ,Champion ,Advertising ,02 engineering and technology ,League ,Competition (economics) ,Football club ,Engineering management ,020901 industrial engineering & automation ,Software ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business - Abstract
In 2019, Rhoban Football Club took first place in the KidSize soccer competition for the fourth time and performed the first in-game throw-in in the history of the Humanoid league. Building on our existing code base, we improved some specific functionalities, introduced new behaviors, and experimented with original methods for labeling videos. This paper presents and reviews our latest changes to both software and hardware, highlighting the lessons learned during RoboCup.
- Published
- 2018
- Full Text
- View/download PDF
230. Secure Third Party Data Clustering Using Φ Data: Multi-User Order Preserving Encryption and Super Secure Chain Distance Matrices (Best Technical Paper)
- Author
-
Nawal Almutairi, Frans Coenen, and Keith Dures
- Subjects
0209 industrial biotechnology ,Information privacy ,Third party ,business.industry ,Computer science ,02 engineering and technology ,Encryption ,Multi-user ,computer.software_genre ,Dbscan clustering ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Cluster analysis ,business ,Proxy (statistics) ,computer ,Distance matrices in phylogeny - Abstract
The paper introduces the concept of Φ-data: data that is a proxy for some underlying data and offers advantages of data privacy and security while still allowing particular data mining operations without requiring data owner participation once the proxy has been generated. The nature of the proxy representation depends on the nature of the desired data mining. Secure collaborative clustering is considered, where the Φ-data takes the form of a Super Secure Chain Distance Matrix (SSCDM) encrypted using a proposed Multi-User Order Preserving Encryption (MUOPE) scheme. SSCDMs can be produced with respect to horizontal and vertical data partitioning. The DBSCAN clustering algorithm is adopted for illustrative and evaluation purposes. The results indicate that the proposed solution is efficient and produces clustering configurations comparable to those produced using an unencrypted, “standard” algorithm, while maintaining data privacy and security.
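A rough sketch of the third party's clustering step, assuming scikit-learn's DBSCAN on a precomputed distance matrix; the MUOPE encryption and the SSCDM construction themselves are not modelled, and the toy matrix below is computed in the clear.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Two well-separated synthetic clusters standing in for the partitioned owners' data.
points = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(3, 0.2, (20, 2))])
distance_matrix = squareform(pdist(points))   # stand-in for the secure chain distance matrix

labels = DBSCAN(eps=0.5, min_samples=3, metric="precomputed").fit_predict(distance_matrix)
print(set(labels))   # expect two clusters (plus possibly -1 for noise points)
```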
- Published
- 2018
- Full Text
- View/download PDF
231. Importance of Information and Communication Technology (ICT) in Higher Education Paper
- Author
-
Viney Dhiman, Anupama Bharti, and Vijai Sharma (editors: Arulmurugan Ramu, Chow Chee Onn, and M. G. Sumithra; series editor: Imrich Chlamtac)
- Published
- 2022
- Full Text
- View/download PDF
232. Poster Paper: Data Integration for Supporting Biomedical Knowledge Graph Creation at Large-Scale
- Author
-
Tatiana Novikova, Samaneh Jozashoori, and Maria-Esther Vidal
- Subjects
Information retrieval ,Exploit ,Computer science ,business.industry ,Interoperability ,Big data ,02 engineering and technology ,computer.software_genre ,03 medical and health sciences ,Open data ,0302 clinical medicine ,030220 oncology & carcinogenesis ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Semantic integration ,business ,computer ,Data integration - Abstract
In recent years, following FAIR and open data principles, the amount of available big data, including biomedical data, has increased exponentially. In order to extract knowledge, these data should be curated, integrated, and semantically described. Accordingly, several semantic integration techniques have been developed; albeit effective, they may suffer from scalability problems with respect to different properties of big data. Even scaled-up approaches may be highly costly because the tasks of semantification, curation, and integration are performed independently. To overcome these issues, we devise ConMap, a semantic integration approach which exploits knowledge encoded in ontologies to describe mapping rules in a way that performs all these tasks at the same time. An empirical evaluation of ConMap on different data sets shows that it can significantly reduce the time required for knowledge graph creation, by up to 70% of the time consumed by a traditional approach. Accordingly, the experimental results suggest that ConMap can be a semantic data integration solution that embodies FAIR principles, specifically in terms of interoperability.
- Published
- 2018
- Full Text
- View/download PDF
233. Short Paper: Psychosocial Aspects of New Technology Implementation
- Author
-
Dennis R. Jones
- Subjects
Teamwork ,Process management ,Process (engineering) ,Emerging technologies ,media_common.quotation_subject ,Job satisfaction ,Profitability index ,Quality (business) ,Business ,Productivity ,Psychosocial ,media_common - Abstract
New technology is dramatically changing the workplace by allowing companies to increase efficiency, productivity, quality, safety, and overall profitability. Effective implementation of new technology is required for companies to compete successfully in the marketplace. Time and money wasted on unsuccessful or improper implementation run contrary to the overall goal of improving the competitiveness and profitability of the company. Teams and teamwork have been recommended as a way to improve efficiency, productivity, quality, safety, profitability, and employee satisfaction. New technology challenges current implementation methods and techniques. To utilize new technologies effectively, it is best to consider all the factors involved in the implementation process, most importantly the individual human elements. It is recommended to use a cooperative, team-oriented approach to new technology implementation, which relies heavily on obtaining employee input and participation throughout the entire process. By doing this, it is hoped that the new technology can be implemented in the most effective way possible.
- Published
- 2018
- Full Text
- View/download PDF
234. T-SCMA: Time Domain Sparse Code Multiple Access for Narrow Band Internet of Things (NB-IoT) (Invited Paper)
- Author
-
Zhenzhen Yan, Zhicheng Bai, Zhongjiang Yan, Bo Li, and Mao Yang
- Subjects
business.industry ,Computer science ,Frame (networking) ,Time division multiple access ,020206 networking & telecommunications ,Throughput ,02 engineering and technology ,Multiplexing ,Spread spectrum ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Wireless ,020201 artificial intelligence & image processing ,Time domain ,business ,Computer network - Abstract
Nowadays the development of the internet of things (IoT) has become the next major growth point of wireless communication, and narrow band has become a new trend in IoT development. Achieving massive connectivity and ever-increasing network capacity has become an important subject of our research. We find that sparse code multiple access (SCMA), a type of non-orthogonal multiple access, should meet these needs, but it can hardly be applied to NB-IoT because a narrow band does not allow spread spectrum. On this basis, we introduce SCMA into the time domain, named T-SCMA, for NB-IoT. By multiplexing SCMA in the time domain and defining a MAC frame structure, we can significantly improve both connectivity and network capacity, and ultimately obtain a much higher throughput gain over time division multiple access (TDMA).
- Published
- 2018
- Full Text
- View/download PDF
235. The Social Acceptance of Paper Credit as Currency in Eighteenth-Century England: A Case Study of Glastonbury c. 1720–1742
- Author
-
Craig Muldrew
- Subjects
Consumption (economics) ,Currency ,Common law ,Debt ,media_common.quotation_subject ,Scrivener ,Economic history ,Bond market ,Business ,Metropolitan area ,Financial Revolution ,media_common - Abstract
The period from roughly 1700 until the rise of county banking saw one of the most acute long-term shortages of small change in the whole of the history of early modern England. The recoinage in 1774 produced only about £800,000 in silver against £18.2 million in gold. But, this was a period of increasing production and consumption, and it is a puzzle how the British economy managed to achieve such continued growth without currency to pay wages and make small transactions, while at the same time relying less on informal credit. However, changes were happening in credit networks below the radar of the very well established history of the financial revolution. Informal written bills and notes were taking the place of unwritten obligations. Although bills for goods sold or work done commonly appear as debts in inventories in the early seventeenth century, it is difficult to know when they became commonly transferable. Certainly transferred bills had no separate legal status in the common law. Fortunately, now, with the publication of the Chronicles of John Cannon, a poor Somerset husbandman’s son who became scrivener for the less wealthy of the small town of Glastonbury, we have an excellent source to trace the transformation of a very rural credit market far away from the stocks and shares of metropolitan finance.
- Published
- 2018
- Full Text
- View/download PDF
236. Semi-granted Sparse Code Multiple Access (SCMA) for 5G Networks (Invited Paper)
- Author
-
Bo Li, Mao Yang, Xiaoya Zuo, Zhicheng Bai, Zhongjiang Yan, and Yusheng Liang
- Subjects
business.industry ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,020206 networking & telecommunications ,Throughput ,02 engineering and technology ,business ,Throughput (business) ,5G ,Computer network - Abstract
Sparse Code Multiple Access (SCMA) is a promising non-orthogonal multiple access technology for 5G radio access networks. It improves connectivity and capacity. However, the two multiple access methods of SCMA, granted and grant-free, cannot dynamically match real-time demands, since the resources allocated to the granted method and the grant-free method are strictly separated from each other. This further deteriorates system performance. This article proposes a semi-granted SCMA method that enables granted and grant-free demands to share the same resources. Simulation results confirm that semi-granted SCMA matches dynamically fluctuating demands and significantly improves the throughput of an SCMA 5G system.
- Published
- 2018
- Full Text
- View/download PDF
237. Categorisation of Papers: Systematic Review on Prostate Cancer Survivorship and Psychosexual Care
- Author
-
Raj Persad and Sanchia S. Goonewardene
- Subjects
Gerontology ,Prostate cancer ,Psychosexual development ,business.industry ,Survivorship curve ,Medicine ,business ,medicine.disease - Published
- 2018
- Full Text
- View/download PDF
238. List of Braverman’s Papers Published in the 'Avtomatika i telemekhanika' Journal, Moscow, Russia, and Translated to English as 'Automation and Remote Control' Journal
- Author
-
Ilya Muchnik
- Subjects
Information retrieval ,Feature (computer vision) ,Computer science ,law ,business.industry ,Visual patterns ,A priori and a posteriori ,business ,Automation ,Remote control ,law.invention - Abstract
Braverman E.M. Certain problems in the design of machines which classify objects according to an identifying feature which is not specified a priori. Automation and Remote Control 21, 971–978 (1960). Braverman, E.M. Experiments on machine learning to recognize visual patterns. Automation and Remote Control 23, 315–327 (1962).
- Published
- 2018
- Full Text
- View/download PDF
239. When Your Browser Becomes the Paper Boy
- Author
-
Joachim Posegga, Eduard Brehm, and Juan D. Parra Rodriguez
- Subjects
021110 strategic, defence & security studies ,Computer science ,business.industry ,Computation ,0211 other engineering and technologies ,02 engineering and technology ,JavaScript ,computer.software_genre ,Internet security ,World Wide Web ,Scripting language ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,020201 artificial intelligence & image processing ,business ,computer ,computer.programming_language - Abstract
We present a scenario where browsers’ network and computation capabilities are used by an attacker without the user’s knowledge. For this kind of abuse, an attacker needs to trigger JavaScript code in the browser, e.g. through an advertisement. However, unlike other Web attacks, e.g. cross-site scripting, the attack can be executed in isolation from the Origin of the site visited by the user.
- Published
- 2018
- Full Text
- View/download PDF
240. The European Account Preservation Order: Nuclear Weapon or Paper Tiger?
- Author
-
Tibor Tajti and Peter Iglikowski
- Subjects
Engineering ,Risk analysis (engineering) ,Casting (metalworking) ,Tiger ,Order (business) ,business.industry ,Nuclear weapon ,business - Abstract
Proper assessment of the utility of the Mareva Injunction and the Saisie Conservatoire would not be possible without devoting a few words to one of the most recent European developments: the availability of the European Account Preservation Order (hereinafter: EAPO) since 18 January 2017, when Regulation 655/2014, which created it, came into being.
- Published
- 2018
- Full Text
- View/download PDF
241. A Novel Matching Technique for Two-Sided Paper Fragments Reassembly
- Author
-
Hao Wu, Yi Wei, Lumeng Cao, and Wen Yu
- Subjects
Matrix difference equation ,Image stitching ,Matching (statistics) ,business.industry ,Computer science ,Public security ,Computer vision ,Artificial intelligence ,business ,Travelling salesman problem ,Grayscale ,Image (mathematics) - Abstract
Paper fragment reassembly plays an important role in many areas such as public security and even archaeology. Combined with the travelling salesman problem, a novel approach based on matching a greyscale difference matrix is adopted. Experimental results demonstrate its potential in speed, accuracy and reduced human intervention for double-sided paper fragment reassembly. The study may provide a new direction for automatic stitching and image mosaic techniques.
- Published
- 2017
- Full Text
- View/download PDF
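The abstract above describes matching via a greyscale difference matrix combined with a travelling-salesman-style ordering, but does not spell out the computation. The following Python sketch is only an assumed illustration of that idea: `edge_cost`, the greedy nearest-neighbour ordering, and the synthetic strips are hypothetical, not the authors’ method.

```python
import numpy as np

def edge_cost(left_frag, right_frag):
    """Mean absolute greyscale difference between the right edge of one
    fragment and the left edge of another (lower = better match)."""
    return float(np.mean(np.abs(left_frag[:, -1].astype(float) -
                                right_frag[:, 0].astype(float))))

def greedy_order(fragments):
    """Nearest-neighbour heuristic over the pairwise edge-cost matrix,
    a simple stand-in for a full travelling-salesman solver."""
    n = len(fragments)
    cost = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                cost[i, j] = edge_cost(fragments[i], fragments[j])
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: cost[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

if __name__ == "__main__":
    # Cut a random greyscale strip into four pieces, shuffle, then reorder.
    rng = np.random.default_rng(0)
    strip = rng.integers(0, 256, size=(64, 120), dtype=np.uint8)
    pieces = [strip[:, k * 30:(k + 1) * 30] for k in range(4)]
    shuffled = [pieces[i] for i in (2, 0, 3, 1)]
    print(greedy_order(shuffled))  # an ordering of the shuffled strips
```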
242. Implicit Social Networks for Social Recommendation of Scholarly Papers
- Author
-
Julita Vassileva and Shaikhah Alotaibi
- Subjects
Social network ,business.industry ,Bookmarking ,media_common.quotation_subject ,02 engineering and technology ,Social relation ,Domain (software engineering) ,World Wide Web ,Friendship ,020204 information systems ,Political science ,0202 electrical engineering, electronic engineering, information engineering ,Collaborative filtering ,020201 artificial intelligence & image processing ,business ,Social psychology ,media_common - Abstract
Combining social network information with collaborative filtering recommendation algorithms has successfully reduced some of the drawbacks of collaborative filtering and increased the accuracy of recommendations. However, all approaches in the domain of research paper recommendation have used explicit social relations that are initiated by users. Moreover, the results of previous studies have shown that the recommendations produced cannot compete with traditional collaborative filtering. We argue that the data available on social bookmarking websites can be exploited to connect similar users through implicit social connections based on their bookmarking behavior. We explore the implicit social relations between users on social bookmarking websites such as CiteULike and Mendeley, and propose three different implicit social networks to recommend relevant papers and people to users. We show that the proposed implicit social networks connect users with similar interests and that these relations propagate through the networks. In addition, we show that implicit social networks connect more users than two well-known explicit social networks (co-authorship and friendship). (An illustrative sketch of deriving implicit links from bookmarks follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
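The abstract does not specify how the implicit links are constructed, so the sketch below is an assumption: it connects two users when the Jaccard overlap of their bookmarked papers exceeds a chosen threshold. Both the overlap measure and the threshold are illustrative placeholders.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two users' bookmarked-paper sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def implicit_network(bookmarks, threshold=0.2):
    """Add an implicit edge between two users when their bookmark
    overlap (Jaccard similarity) reaches the threshold."""
    edges = []
    for u, v in combinations(bookmarks, 2):
        sim = jaccard(bookmarks[u], bookmarks[v])
        if sim >= threshold:
            edges.append((u, v, round(sim, 3)))
    return edges

if __name__ == "__main__":
    # Toy bookmarking data (placeholders, not from CiteULike or Mendeley).
    bookmarks = {
        "alice": {"p1", "p2", "p3"},
        "bob":   {"p2", "p3", "p4"},
        "carol": {"p7", "p8"},
    }
    print(implicit_network(bookmarks))  # [('alice', 'bob', 0.5)]
```

Such edges could then feed a standard neighbourhood-based collaborative filtering step in place of explicit friendship links.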
243. Study of Expert Technology on Producing Paper Tubes
- Author
-
Hamada Hiroyuki, Takanori Kitamura, Kanta Ito, Zhang Zhiyuan, Suguru Teramura, Tomoko Ota, and Mitsunori Suda
- Subjects
Engineering ,Work (electrical) ,business.industry ,Mechanical engineering ,Factory (object-oriented programming) ,Production (economics) ,Work site ,business ,Motion (physics) ,Manufacturing engineering ,Task (project management) - Abstract
Paper tubes are usually produced by expert workers who are trained through experience, and the manufacturing technology has likewise developed by this empirical rule. The so-called expert is a person who started as a non-expert, was trained through daily work, and improved the technique on his own. How to train workers effectively in a short time is therefore a challenge for small and medium-sized enterprises, not only in paper tube production but for any small factory that must pass on its technology. The objective of this study is to analyse the motions involved in fabricating paper tubes by observing the differences between expert and non-expert workers at the work site, as one example of a “great master’s work” in small and medium-sized enterprises.
- Published
- 2017
- Full Text
- View/download PDF
244. Professional Value of Scientific Papers and Their Citation Responding
- Author
-
Jaroslav Fiala and Jaroslav Šesták
- Subjects
Value (ethics) ,Beilstein database ,business.industry ,media_common.quotation_subject ,Library science ,Public relations ,Boom ,Value of information ,Order (business) ,Political science ,Quality (business) ,Special case ,Citation ,business ,media_common - Abstract
Over the course of the last thirty years, science has enjoyed a remarkable quantitative boom. For example, the total number of substances registered in the Chemical Abstracts Service Registry File (CAS RF) at the end of 1985 was about 8 million, while at the end of 2015 it had reached 104 million. However, some qualitative aspects of science lag ever further behind this quantitative boom. For instance, the x–y–z coordinates of atoms in molecules are presently known for no more than 1 million substances. For the majority of substances registered in the CAS RF, we do not know much about their properties, how they react with other substances, or what purpose they could serve. The Gmelin Institute for Inorganic Chemistry and the Beilstein Institute for Organic Chemistry, which had systematically gathered and extensively published such information since the nineteenth century, were closed down in 1997 (Gmelin) and 1998 (Beilstein). The number of scientific papers published annually increases, but the value of the information they bring falls. The spread of sophisticated “push-button” instruments makes it easier to prepare publications by delivering ready-to-publish data; articles can thus be compiled by merely combining different measurements, often without any idea of what it all means or what end it may serve. The driving force behind the production of an ever-growing number of scientific papers is the authors’ need to distinguish themselves in order to be favourably considered for financial support. Money and fame are distributed to scientists according to their publication and citation scores. While the number of publications is clearly a quantitative criterion, much hope has been placed on citations, which promised to serve as an adequate measure of genuine scientific value, i.e. of the quality of scientific work. Why these hopes were not fulfilled is discussed in detail in our contribution, and the special case of the Journal of Thermal Analysis and Calorimetry is examined in more detail.
- Published
- 2017
- Full Text
- View/download PDF
245. Mode Effects in Correcting Students’ Errors: A Comparison of Computer-Based and Paper-Pencil Tests
- Author
-
Eveline Wuttke, Jürgen Seifried, and Claudia Krille
- Subjects
Data collection ,business.industry ,Computer science ,05 social sciences ,Comparability ,Applied psychology ,050301 education ,050801 communication & media studies ,Variety (cybernetics) ,Test (assessment) ,0508 media and communications ,Mode (computer interface) ,Software ,business ,0503 education ,Equivalence (measure theory) ,Pencil (mathematics) - Abstract
Computer-based testing (CBT) is considered to have several advantages over paper-pencil-based tests (PPT). It allows the embedding of different formats (e.g. audio and video files); quick and (semi-)automatic scoring and therefore opportunities for adaptation; the measurement of additional information such as response times; the inclusion of a broader variety of test subjects; and the avoidance of errors in data transmission (e.g. ambiguous or illegible responses) and analysis. Overall, it is generally accepted that CBT saves resources such as time, materials, and personnel in comparison with PPT. Against this background we favour the CBT approach for data collection. Nevertheless, CBT also entails several disadvantages that need to be considered, such as hardware or software problems (e.g. freezing, crashing, display errors, or the same content being displayed differently). In addition, influencing factors that arise not only from the technical aspects of a CBT situation but also from individual characteristics of the participants are discussed in the literature. The main concerns about using CBT address the comparability and equivalence of paper-pencil testing (PPT) and computer-based testing (CBT), which have provoked a long history of research on so-called mode effects. To answer the question of whether the tests used to evaluate the training programme for prospective teachers can be used in CBT as well as in PPT, we conducted a pilot study and analysed whether mode effects exist. Results indicate that there is no systematic influence of the testing mode on test takers’ performance. (A minimal sketch of such a mode comparison follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
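A mode-effect check of this kind ultimately compares score distributions across the two test modes. The study’s actual analysis is not detailed in the abstract; the following is a minimal sketch, assuming independent groups and a two-sample t-test, with fabricated placeholder scores that stand in for real test data.

```python
from scipy import stats

# Placeholder score vectors for the two testing modes (not the study's data).
ppt_scores = [14, 17, 13, 18, 16, 15, 17, 14]
cbt_scores = [15, 16, 14, 18, 17, 15, 16, 13]

# Welch's two-sample t-test: does the testing mode shift mean performance?
t_stat, p_value = stats.ttest_ind(ppt_scores, cbt_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value is consistent with "no systematic mode effect".
```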
246. ECG Waveform Extraction from Paper Records
- Author
-
Yanwei Pang, Jian Wang, Yuqing He, and Jing Pan
- Subjects
Computer science ,business.industry ,Skew ,Comparison results ,Human heart ,Image processing ,Pattern recognition ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Ecg waveforms ,0202 electrical engineering, electronic engineering, information engineering ,Trajectory ,Waveform ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,business - Abstract
The electrocardiogram (ECG) is one of the most widely used methods for detecting abnormalities in human heart function. ECG waveforms are usually recorded in paper form, but paper records are inconvenient for the storage and retrieval of patient data. In this paper, an improved algorithm for extracting ECG waveforms from paper records is proposed. It recovers accurate ECG trajectory information through a series of adaptive image processing techniques, including skew correction, waveform segmentation, and tracking. The presented algorithm is tested on a number of ECG records printed by different equipment. Furthermore, three metrics are adopted for quantitative comparison between the reconstructed signals and the original waveforms. The comparison shows an average accuracy of 95.5%, which demonstrates the effectiveness of the method. (A simplified illustrative sketch of column-wise trace extraction follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
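The pipeline steps named in the abstract (skew correction, segmentation, tracking) are not specified in detail, so the sketch below is only an assumed, much simplified illustration: for each column of a greyscale scan it takes the mean row index of the “ink” pixels as the waveform position. The function name and threshold are hypothetical, not from the paper.

```python
import numpy as np

def extract_trace(gray, ink_threshold=128):
    """Column-wise trace of the ink pixels in a greyscale ECG scan.
    Columns with no ink reuse the previous value (simple gap filling)."""
    h, w = gray.shape
    trace = np.empty(w)
    last = h / 2.0
    for x in range(w):
        ink_rows = np.where(gray[:, x] < ink_threshold)[0]
        last = ink_rows.mean() if ink_rows.size else last
        trace[x] = last
    # Flip so larger values mean higher on the page, like signal amplitude.
    return h - trace

if __name__ == "__main__":
    # Synthetic scan: white page with a dark sine-like trace drawn on it.
    h, w = 100, 200
    page = np.full((h, w), 255, dtype=np.uint8)
    ys = (50 + 20 * np.sin(np.linspace(0, 4 * np.pi, w))).astype(int)
    page[ys, np.arange(w)] = 0
    signal = extract_trace(page)
    print(signal[:5])
```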
247. A Comparison of Adaptive Learning Within the SOI Model Using Paper and Computer Presentation
- Author
-
Huei-Ping Chen, Wen-Yi Lin, and Fang-Ming Hwang
- Subjects
Descriptive statistics ,Multimedia ,Concept map ,Computer science ,business.industry ,media_common.quotation_subject ,Cognition ,computer.software_genre ,Empirical research ,Software ,Reading comprehension ,Reading (process) ,Adaptive learning ,Artificial intelligence ,business ,computer ,Natural language processing ,media_common - Abstract
The main purpose of this paper is to analyse elementary students’ adaptive learning of reading comprehension with multiple strategies using different tools. Applying the SOI model (Mayer 1996), which describes three cognitive processes of knowledge construction, we developed a website tool named the Multiple Online Reading Strategies System to guide these three processes during reading comprehension. The three strategies in the online reading strategies system, selecting relevant information, organizing incoming information, and integrating incoming information with existing knowledge, are applied as highlighting important information, concept mapping, and summarizing to determine topic sentences or important sentences in an article. Data were collected from an elementary school in Taiwan, yielding 245 questionnaires. The data were analysed in SPSS for Windows using descriptive statistics, t-tests, and one-way analysis of variance (ANOVA). The results of the empirical study suggest that adopting the highlighting method to help students understand articles, and employing concept mapping and summarizing strategies to improve students’ comparative analysis abilities, are conducive to establishing compact reading strategies for them, whether the students work online or on paper. (An illustrative sketch of the group comparison follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
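The one-way ANOVA mentioned above was run in SPSS; as a rough, assumed illustration of the same kind of group comparison, the snippet below applies an ANOVA to placeholder comprehension scores for three strategy groups (the group names and numbers are illustrative only, not the study’s data).

```python
from scipy.stats import f_oneway

# Placeholder comprehension scores for three strategy groups (illustrative only).
highlighting = [72, 75, 70, 78, 74]
concept_map  = [80, 77, 82, 79, 81]
summarizing  = [76, 74, 79, 75, 77]

# One-way ANOVA: do the three strategy groups differ in mean comprehension?
f_stat, p_value = f_oneway(highlighting, concept_map, summarizing)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```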
248. Yet Another ADNI Machine Learning Paper? Paving the Way Towards Fully-Reproducible Research on Classification of Alzheimer’s Disease
- Author
-
Jorge Samper-González, Hugo Bertin, Stanley Durrleman, Sabrina Fontanella, Theodoros Evgeniou, Olivier Colliot, Marie-Odile Habert, and Ninon Burgos
- Subjects
Multiple kernel learning ,Computer science ,business.industry ,BETA (programming language) ,Image (category theory) ,Data management ,Core component ,05 social sciences ,Feature extraction ,Disease classification ,Machine learning ,computer.software_genre ,050105 experimental psychology ,Support vector machine ,03 medical and health sciences ,0302 clinical medicine ,0501 psychology and cognitive sciences ,Artificial intelligence ,business ,computer ,030217 neurology & neurosurgery ,computer.programming_language - Abstract
In recent years, the number of papers on Alzheimer’s disease classification has increased dramatically, generating interesting methodological ideas on the use of machine learning and feature extraction methods. However, the practical impact is much more limited and, ultimately, one cannot tell which of these approaches is the most efficient. While over 90% of these works make use of ADNI, an objective comparison between approaches is impossible due to variations in the subjects included, image pre-processing, performance metrics, and cross-validation procedures. In this paper, we propose a framework for reproducible classification experiments using multimodal MRI and PET data from ADNI. The core components are: (1) code to automatically convert the full ADNI database into BIDS format; (2) a modular architecture based on Nipype that makes it easy to plug in different classification and feature extraction tools; (3) feature extraction pipelines for MRI and PET data; (4) baseline classification approaches for unimodal and multimodal features. This provides a flexible framework for benchmarking different feature extraction and classification tools in a reproducible manner. Data management tools for obtaining the lists of subjects in the AD, MCI converter, MCI non-converter, and CN classes are also provided. We demonstrate its use on all (1519) baseline T1 MR images and all (1102) baseline FDG PET images from ADNI 1, GO and 2, with SPM-based feature extraction pipelines and three different classification techniques (linear SVM, anatomically regularized SVM, and multiple kernel learning SVM). The highest accuracies achieved were: 91% for AD vs CN, 83% for MCIc vs CN, 75% for MCIc vs MCInc, 94% for AD-Aβ+ vs CN-Aβ-, and 72% for MCIc-Aβ+ vs MCInc-Aβ+. The code will be made publicly available at the time of the conference (https://gitlab.icm-institute.org/aramislab/AD-ML). (An assumed, generic sketch of the baseline classification step follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
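The framework pairs feature-extraction pipelines with baseline classifiers such as a linear SVM under cross-validation; the actual Nipype/BIDS code lives in the authors’ repository linked above. The snippet below is only an assumed, generic illustration of the baseline step, with synthetic features standing in for the MRI/PET feature vectors and a CV scheme that may differ from the paper’s.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for subject-level feature vectors and AD/CN labels
# (placeholders only; real features would come from the MRI/PET pipelines).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

# Baseline: standardized features fed to a linear-kernel SVM, scored by
# cross-validation (the paper's exact CV scheme and metrics may differ).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```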
249. From Derek Price’s Network of Scientific Papers to Advanced Science Mapping
- Author
-
Henk F. Moed
- Subjects
Engineering ,business.industry ,Relational structure ,Immediacy ,Media studies ,Subject (documents) ,Information flow (information theory) ,Scientific literature ,Space (commercial competition) ,business ,Bibliographic coupling ,Data science ,Science mapping - Abstract
This chapter presents two visionary papers published by Derek de Solla Price, the founding father of the science of science. It outlines his view of the scientific literature as a network of scientific papers and introduces important informetric concepts, including the ‘research front’ and the ‘immediacy effect’. The chapter then shows how his pioneering work on modelling the relational structure of subject space evolved into a series of currently available, advanced science mapping tools.
- Published
- 2017
- Full Text
- View/download PDF
250. Reporting Robot Ethics for Children-Robot Studies in Contemporary Peer Reviewed Papers
- Author
-
K. Padda, L. Parry, and Marilena Kyriakidou
- Subjects
Data collection ,Impact factor ,business.industry ,05 social sciences ,Applied psychology ,06 humanities and the arts ,Roboethics ,0603 philosophy, ethics and religion ,050105 experimental psychology ,Publishing ,Political science ,Robot ,0501 psychology and cognitive sciences ,060301 applied ethics ,business ,Social psychology - Abstract
How are robot ethics described in peer-reviewed papers on children-robot studies? Do publications report robot ethics practices such as: (a) gaining children’s assent, (b) providing a description of the robot prior to data collection, (c) including a robot exposure phase before data collection, and (d) informing children whether the robot is semi-autonomous? A total of 27 peer-reviewed papers with an average impact factor of 1.8 were analysed. 63% of the studies did not state any ethical procedures followed. In eight studies children gave their assent for the experiment; six studies described the robot to children prior to data collection; two studies provided a robot exposure phase prior to data collection; and one study informed children that robots are operated machines. The outcomes indicate problematic application of robot ethics in peer-reviewed journals and the need for the publishing industry to consider stricter requirements on this aspect of a publication.
- Published
- 2017
- Full Text
- View/download PDF