994,233 results
Search Results
202. Tourism in Peru - Making Machu Picchu: The Politics of Tourism in Twentieth-Century Peru. By Mark Rice. Chapel Hill: University of North Carolina Press, 2018. Pp. xvi, 233. Abbreviations. Illustrations. Notes. Bibliography. Index. $90.00 cloth; $29.95 paper
- Author
-
Claire F. Fox
- Subjects
Cultural Studies, History, Politics, Tourism - Published
- 2021
203. A categorization scheme for software engineering conference papers and its application
- Author
-
Eda Marchetti, Breno Miranda, Antonello Calabrò, Francesca Lonetti, and Antonia Bertolino
- Subjects
Paper type, Paper categorization, Research problem, Computer science, Conference, Research contribution, Categorization, Hardware and Architecture, Validation, Software engineering, Software, Information Systems - Abstract
Background: In Software Engineering (SE), conference publications have high importance both in effective communication and in academic careers. Researchers actively discuss how a paper should be organized to be accepted in mainstream conferences. Aim: This work tackles the problem of generalizing and characterizing the types of papers accepted at SE conferences. Method: The paper offers a new perspective on the analysis of SE literature: a categorization scheme for SE papers is obtained by merging, extending and revising related proposals from a few existing studies. The categorization scheme is used to classify the papers accepted at three top-tier SE conferences over five years (2012–2016). Results: While broader experience is certainly needed for validation and fine-tuning, preliminary outcomes can be observed regarding which problems and topics are addressed, what types of contributions are presented, and how they are validated. Conclusions: The results provide insights to help paper writers, paper reviewers and conference organizers focus their future efforts, without any intent to provide judgments or authoritative guidelines.
- Published
- 2018
204. Advances of Proof Scores in CafeOBJ : Invited Paper
- Author
-
Kokichi Futatsugi
- Subjects
Computer science, Programming language, Design specification, Knowledge engineering, Algebraic specification, Domain (software engineering), Executable, Software - Abstract
Critical flaws continue to exist at the level of domain, requirement, and/or design specification, and specification verification (i.e. checking whether a specification has desirable properties) is still one of the most important challenges in software/system engineering. CafeOBJ is an executable algebraic specification language system, and domain/requirement/design engineers can write proof scores in CafeOBJ to improve the quality of specifications through specification verification. This paper describes advances in proof scores for specification verification in CafeOBJ from the author's point of view.
- Published
- 2021
205. ANALYSIS OF PACKAGING PRINT QUALITY WITH CTF AND CTCP PLATE MAKING SYSTEMS USING DUPLEX PAPER MATERIALS
- Author
-
Septia Ardiani, Roni Fransiscus Lumbantoruan, and Untung Basuki
- Subjects
Offset printing, Computer science, Process (computing), Duplex paper, Dot gain, Raster graphics, Direct printing - Abstract
A comparative analysis of packaging print quality was carried out on printed output produced with Computer to Film (CtF) and Computer to Conventional Plate (CtCP) platemaking. The print results from the two systems were compared in terms of cost, durability, and dot gain. The analysis used 400-gram duplex paper and two print references, one for CtF and one for CtCP. The pre-press equipment used was for the manufacture of conventional printing plates. The observation consisted of comparing the two print references with the help of a quality control (QC) tool. Before the comparison, direct printing was performed from both references on an offset machine. The differences between CtF and CtCP stem from their different exposure processes: CtF goes through two exposure steps, while CtCP is exposed in a single step using only lasers. The laser exposure is uneven and prevents the plate from producing a neat raster result, which strongly influences the final print. Although CtF involves more process steps, its results appear better than those of CtCP.
Keywords: Print packaging, Duplex, CtF, CtCP
- Published
- 2021
206. 3D Rendering Framework for Data Augmentation in Optical Character Recognition : (Invited Paper)
- Author
-
Jurgen Seiler, Andre Kau, Anatol Maier, Christian Riess, Maximiliane Hawesch, and Andreas Spruck
- Subjects
Computer science, Character (computing), Word error rate, Percentage point, Pattern recognition, Optical character recognition, User requirements document, 3D rendering, Artificial intelligence - Abstract
In this paper, we propose a data augmentation framework for Optical Character Recognition (OCR). The proposed framework is able to synthesize new viewing angles and illumination scenarios, effectively enriching any available OCR dataset. Its modular structure allows it to be modified to match individual user requirements, and it makes it easy to scale the enlargement factor of the available dataset. Furthermore, the proposed method is not restricted to single-frame OCR but can also be applied to video OCR. We demonstrate the performance of our framework by augmenting a 15% subset of the common Brno Mobile OCR dataset. Our proposed framework is capable of boosting the performance of OCR applications, especially for small datasets. Applying the proposed method, improvements of up to 2.79 percentage points in terms of Character Error Rate (CER) and up to 7.88 percentage points in terms of Word Error Rate (WER) are achieved on the subset. Especially the recognition of challenging text lines can be improved: the CER may be decreased by up to 14.92 percentage points and the WER by up to 18.19 percentage points for this class. Moreover, we are able to achieve smaller error rates when training on the 15% subset augmented with the proposed method than on the original non-augmented full dataset.
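The core augmentation idea in the abstract above (synthesizing new illumination scenarios to enlarge a dataset by a chosen factor) can be sketched in a few lines. This is an illustrative sketch only, not the authors' 3D rendering pipeline; the image representation, brightness factors, and function names are invented for the example.

```python
def augment_brightness(image, factor):
    """Scale pixel intensities to mimic a new illumination scenario.
    `image` is a row-major list of rows of grayscale values in [0, 255]."""
    return [[min(255, max(0, round(p * factor))) for p in row] for row in image]

def enlarge_dataset(images, factors):
    """Return the originals plus one brightness variant per factor,
    giving an enlargement factor of 1 + len(factors)."""
    out = list(images)
    for f in factors:
        out.extend(augment_brightness(img, f) for img in images)
    return out

dataset = [[[100, 150], [200, 50]]]              # one tiny 2x2 "image"
augmented = enlarge_dataset(dataset, [0.5, 1.5])  # darker and brighter variants
print(len(augmented))  # 3: the original plus two illumination variants
```

A real pipeline would render viewpoint changes as well, but the enlargement-factor bookkeeping is the same.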
- Published
- 2021
207. Automatic test suite generation for key-points detection DNNs using many-objective search (experience paper)
- Author
-
Donghwan Shin, Jun Wang, Fitash Ul Haq, Lionel C. Briand, and Thomas Stifter
- Subjects
Computer Science - Computer Vision and Pattern Recognition (cs.CV), Computer Science - Software Engineering (cs.SE), Test data generation, Key-point detection, Automotive industry, Machine learning, Random search, Search algorithm, Test suite, Deep neural network, Software testing, Many-objective search algorithm, Artificial intelligence, Test data - Abstract
Automatically detecting the positions of key-points (e.g., facial key-points or finger key-points) in an image is an essential problem in many applications, such as driver's gaze detection and drowsiness detection in automated driving systems. With the recent advances of Deep Neural Networks (DNNs), Key-Points detection DNNs (KP-DNNs) have been increasingly employed for that purpose. Nevertheless, KP-DNN testing and validation have remained a challenging problem because KP-DNNs predict many independent key-points at the same time -- where each individual key-point may be critical in the targeted application -- and images can vary a great deal according to many factors. In this paper, we present an approach to automatically generate test data for KP-DNNs using many-objective search. In our experiments, focused on facial key-points detection DNNs developed for an industrial automotive application, we show that our approach can generate test suites to severely mispredict, on average, more than 93% of all key-points. In comparison, random search-based test data generation can only severely mispredict 41% of them. Many of these mispredictions, however, are not avoidable and should not therefore be considered failures. We also empirically compare state-of-the-art, many-objective search algorithms and their variants, tailored for test suite generation. Furthermore, we investigate and demonstrate how to learn specific conditions, based on image characteristics (e.g., head posture and skin color), that lead to severe mispredictions. Such conditions serve as a basis for risk analysis or DNN retraining. (To appear in ISSTA 2021.)
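The contrast the abstract draws between many-objective search and random search can be illustrated with a toy version: treat each key-point's prediction error as its own objective and track, per objective, the worst error any generated test has induced. Everything here (the three 1-D "key-points", the error model, the threshold) is invented for illustration and is not the authors' fitness function.

```python
import random

KEY_POINTS = (2.0, 5.0, 9.0)  # toy 1-D "ground-truth" key-point positions

def errors(candidate):
    """One prediction-error objective per key-point for a candidate test input."""
    return [abs(candidate - k) for k in KEY_POINTS]

def search(budget, threshold, seed=0):
    """Track the worst error reached per objective; a key-point counts as
    severely mispredicted if some generated test pushes its error past the
    threshold. (Real many-objective search evolves inputs per objective
    rather than sampling uniformly.)"""
    rng = random.Random(seed)
    worst = [0.0] * len(KEY_POINTS)
    for _ in range(budget):
        c = rng.uniform(0.0, 10.0)
        for i, e in enumerate(errors(c)):
            worst[i] = max(worst[i], e)
    return sum(e > threshold for e in worst)

print(search(budget=200, threshold=4.0))  # with this budget, all 3 objectives are exceeded
```

The point of the per-objective bookkeeping is that a single scalar fitness would let one easy key-point mask the others, which is exactly what many-objective formulations avoid.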
- Published
- 2021
208. Modern Trends and Skill Gaps of Cyber Security in Smart Grid : Invited Paper
- Author
-
Sebastian Lehnhoff, Bjorn Siemers, Shadi Attarha, Michael Brand, Mike Mekkanen, Jirapa Kamsamrong, Nadezhda Kunicina, Tero Vartiainen, Jānis Grabis, Maria Valliou, and Ruta Pirta-Dreimane
- Subjects
Computer science, Information technology, Computer security, Smart grid, Information and Communications Technology, Internet access, Energy system, Curriculum - Abstract
The convergence of information technology (IT) and operational technology (OT) has driven smart grid technology adoption in the European Union (EU) energy system for better visibility and automated controllability. On the other hand, energy system infrastructures can be threatened by cyber attacks due to the increasing integration of information and communication technology; the vulnerabilities stem from growing internet connectivity and application complexity. It is therefore crucial to identify the essential skills students need in the field of cyber security protection and defense. This paper presents the outcome of a literature review and a workshop with stakeholders from industry and academia on the state of the art and trends in cyber security education for smart grids.
- Published
- 2021
209. CGRA-ME: An Open-Source Framework for CGRA Architecture and CAD Research : (Invited Paper)
- Author
-
Xinyuan Wang, Xiaoyi Ling, Hsuan Hsiao, Rami Beidas, Omar Ragheb, Tianyi Yu, Vimal Chacko, and Jason H. Anderson
- Subjects
Computer science, CAD, Solid modeling, Software framework, Application-specific integrated circuit, Computer architecture, Systems architecture, Verilog, Field-programmable gate array - Abstract
Coarse-grained reconfigurable arrays (CGRAs) are programmable hardware platforms that can be used to realize application-specific accelerators for higher performance and energy efficiency. A CGRA is a 2D array of configurable logic blocks & interconnect, where the logic blocks are typically large & ALU-like, and the interconnect is word-wide. CGRA-ME is a software framework that enables the modelling and exploration of CGRA architectures, as well as research on CGRA CAD algorithms. With CGRA-ME, an architect can specify a CGRA architecture at a high level of abstraction. A set of applications can be mapped onto the architecture to assess the mappability, power, performance and cost. CGRA-ME also allows one to generate synthesizable Verilog RTL for the modelled CGRA, permitting its implementation as an ASIC or FPGA overlay. In this paper, we describe the CGRA-ME framework [5] and overview its capabilities and current limitations. We discuss ongoing and prior research conducted with the framework, as well as outline future plans. We believe CGRA-ME will be a valuable contribution to the community, enabling new research on CGRA CAD & architectures.
- Published
- 2021
210. Face-Fake-Net: The Deep Learning Method for Image Face Anti-Spoofing Detection : Paper ID 45
- Author
-
Mays Alshaikhli, Omar Elharrouss, Somaya Al-Maadeed, and Ahmed Bouridane
- Subjects
Channel (digital image), Computer science, Deep learning, Machine learning, Facial recognition system, Visualization, Identification (information), Artificial intelligence - Abstract
Due to the growing demand for user identification on cell phones, PCs, laptops, and so on, face anti-spoofing has risen to significance and is an active research area in academia and industry. Detecting a real face and then recognizing it is an important challenge, given the techniques that can be used to spoof a recognition system, such as masks and printed photos. In this paper we present a face anti-spoofing method for the real-world scenario that learns a target-domain classifier based on samples used for training in a particular source domain. Specifically, Spatial/Channel-wise Attention Modules were introduced into a conventional regression CNN. The two modules, the Spatial-wise Attention Module and the Channel-wise Attention Module, are applied at the spatial and channel levels to enhance local features and ignore irrelevant ones. Extensive experiments on current benchmark datasets verify that the proposed solution benefits significantly from the two modules, providing better generalization capability and significantly improved anti-spoofing results.
- Published
- 2021
211. WasmAndroid: a cross-platform runtime for native programming languages on Android (WIP paper)
- Author
-
Gerald Weber, Elliott Wen, and Suranga Nanayakkara
- Subjects
Source code, Computer science, Programming language, Software ecosystem, Bytecode, Cross-platform, Scalability, Compiler, Android (operating system) - Abstract
Open-source hardware such as RISC-V has been gaining substantial momentum. Recently, such platforms have begun to embrace Google's Android operating system to leverage its software ecosystem. Despite the encouraging progress, a challenging issue arises: a majority of Android applications are written in native languages and need to be recompiled to target new hardware platforms. Unfortunately, this recompilation process is not scalable because of the explosion of new hardware platforms. To address this issue, we present WasmAndroid, a high-performance cross-platform runtime for native programming languages on Android. WasmAndroid only requires developers to compile their source code to WebAssembly, an efficient and portable bytecode format that can be executed everywhere without additional reconfiguration. WasmAndroid can also transpile existing application binaries to WebAssembly when source code is not available. WebAssembly's language model is very different from C/C++'s, and this mismatch leads to many unique implementation challenges. In this paper, we provide workable solutions and conduct a preliminary system evaluation. We show that WasmAndroid provides acceptable performance to execute native applications in a cross-platform manner.
- Published
- 2021
212. Review Paper on Yawning Detection Prediction System for Driver Drowsiness
- Author
-
Deepak Garg, Anil Kumar Bari, Kapil Kumar Gupta, Sanjay Kumar, and Nitin Kumar Gupta
- Subjects
Artificial neural network, Computer science, Deep learning, Monitoring system, Prediction system, Machine learning, Yawn, Support vector machine, Eye twitch, Head movements, Artificial intelligence - Abstract
Drowsiness can be dangerous when performing tasks that require constant attention, such as driving a vehicle. Sleepiness is correlated with a variety of physiological variables, such as eye closing, head movements, pulse rate, and eye twitch rate. The yawn can also be considered an accurate indicator of drowsiness and fatigue. Yawning detection is very important for driver safety, as it can warn the driver that he or she is getting drowsy and that continuing to drive may not be safe. Several automatic yawning detection techniques have been developed for driver drowsiness monitoring systems. Nevertheless, correctly detecting the driver's yawning and predicting exhaustion in real-time situations is still a crucial challenge. In this paper, we review various existing machine learning approaches for driver yawning detection. Previous approaches used classical machine learning algorithms such as Viola-Jones, contour activation and SVM for yawning detection, but these failed to predict yawning in real-time situations. Using deep learning techniques, a real-time yawn detection system with high accuracy can be built. We find that established deep learning architectures such as CNN, RNN, LSTM and Bi-LSTM can detect the relevant patterns with high accuracy. After comparing various algorithms and techniques, we conclude that, with the help of deep learning algorithms, yawning can be detected in real time with high accuracy.
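As a concrete illustration of how a classical (non-deep) yawning detector of the kind this review covers can work, the common mouth-aspect-ratio heuristic compares the vertical mouth opening to its width. The landmark coordinates and the threshold below are made up for the example and are not taken from any of the surveyed papers.

```python
import math

def mouth_aspect_ratio(landmarks):
    """Vertical mouth opening divided by mouth width, from (x, y) landmarks."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(landmarks["top"], landmarks["bottom"])
    horizontal = dist(landmarks["left"], landmarks["right"])
    return vertical / horizontal

YAWN_THRESHOLD = 0.6  # assumed value; in practice tuned per dataset

closed = {"top": (0, 10), "bottom": (0, 14), "left": (-10, 12), "right": (10, 12)}
wide_open = {"top": (0, 5), "bottom": (0, 20), "left": (-10, 12), "right": (10, 12)}
print(mouth_aspect_ratio(closed) > YAWN_THRESHOLD)     # False: mouth nearly closed
print(mouth_aspect_ratio(wide_open) > YAWN_THRESHOLD)  # True: wide-open mouth, likely yawn
```

Deep approaches replace the hand-built ratio and threshold with learned features, which is why they generalize better to real-time, in-the-wild footage.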
- Published
- 2021
213. Review Paper of Human Activity Recognition using Smartphone
- Author
-
Deepak Garg, Satyam Porwal, Saurabh Singh, and Nidhi Yadav
- Subjects
Computer science, Gyroscope, Accelerometer, Machine learning, Support vector machine, Activity recognition, Identification (information), Sports activity, Artificial intelligence - Abstract
Human recognition technologies are gaining significant research attention; models can be trained to recognize the poses performed by subjects more precisely. Activity identification is a kind of problem that needs further research and improvement, and there is an increasing need to detect the different poses of subjects. To tackle the issue, sensors such as gyroscopes and accelerometers are used, and the resulting data are classified with machine learning algorithms such as SVM and CNN. These approaches enable real-time applications such as health monitoring, sports activity tracking, and security. The paper also discusses the benefits, limitations and prominent approaches for human activity recognition.
- Published
- 2021
214. Error Resilient Machine Learning for Safety-Critical Systems: Position Paper
- Author
-
Zitao Chen, Guanpeng Li, and Karthik Pattabiraman
- Subjects
Artificial neural network, Computer science, Commodity hardware, Fault injection, Machine learning, Soft error, Life-critical system, Redundancy (engineering), Position paper, Industrial robotics, Artificial intelligence - Abstract
Machine learning (ML) has increasingly been adopted in safety-critical systems such as autonomous vehicles (AVs) and industrial robotics. In these domains, reliability and safety are important considerations, and hence it is critical to ensure the resilience of ML systems to faults and errors. On the other hand, soft errors are becoming more frequent in commodity computer systems due to the effects of technology scaling and reduced supply voltages. Further, traditional solutions for masking hardware faults such as Triple-Modular Redundancy (TMR) are prohibitively expensive in terms of their energy and performance overheads. Therefore, there is a compelling need to ensure the resilience of ML applications to soft errors on commodity hardware platforms. We first experimentally assess the resilience of safety-critical ML applications to soft errors. We demonstrate through fault injection experiments that even a single bit flip due to a soft error can lead to misclassification in Deep Neural Network (DNN) applications deployed in AVs, leading to safety violations. However, not all errors in a DNN will result in severe consequences such as safety violations, and hence it is sufficient to protect the DNN from the ones that do. Unfortunately, finding all possible errors that result in safety violations is a very compute-intensive task. We propose BinFI, a fault injection approach that efficiently injects critical faults that are highly likely to result in safety violations, based on the unique properties of DNNs. Finally, we propose Ranger, an approach to protect DNNs from critical faults with minimal performance overheads and no accuracy loss. We conclude by presenting some of our ongoing work, and the future challenges in this area.
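The abstract's claim that a single bit flip can derail a DNN is easy to see at the level of one weight: flipping a high exponent bit in an IEEE-754 float changes its magnitude by dozens of orders of magnitude. The sketch below is a generic fault-injection primitive of the kind such experiments use, not the BinFI implementation itself.

```python
import struct

def flip_bit(value, bit):
    """Flip one bit (0-31) in the IEEE-754 single-precision encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.5
corrupted = flip_bit(weight, 30)   # flip the top exponent bit
print(weight, "->", corrupted)     # 0.5 -> 1.7014118346046923e+38
assert flip_bit(corrupted, 30) == weight  # the fault is its own inverse
```

Injected into a real network, a weight of that magnitude saturates every downstream activation, which is how one soft error becomes a misclassification.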
- Published
- 2020
215. 132kV Oil Impregnated Paper Bushing Transformer - Design by CAD, Analysed by FEM
- Author
-
Mohamad Nur Khairul Hafizi Rohani, Mohd Mustafa Al Bakri Abdullah, Haris Hafizal Abd Hamid, Mohamad Rodi Isa, and N. Abd. Rahman
- Subjects
Computer science, Mechanical engineering, High voltage, CAD, Finite element method, Bushing, Computer Aided Design, Software, Electrical and Electronic Engineering, Transformer, Voltage - Abstract
The electric field and voltage distribution (EFVD) is an important parameter for assessing high-voltage bushing transformer performance. However, conducting laboratory experiments is dangerous, difficult and expensive for several reasons. Therefore, Finite Element Method (FEM) software is the best option as a tool for assessing a bushing transformer's performance in terms of EFVD. But before such an analysis can be carried out, an accurate model of the bushing transformer must first be designed. In this research, Computer Aided Design (CAD) software was employed to design the 145kV bushing transformer based on its actual dimensions. Upon completion, the design was exported to FEM software for further analysis, where the EFVD was measured and analyzed. The measurements were performed at various locations of the bushing transformer, such as the porcelain surface (both the air and oil sides), along the aluminum foils, and at the oil-impregnated paper (OIP). The results obtained were compared with those of other researchers and found to be very satisfactory.
- Published
- 2019
216. The effects of reading on pixel vs. paper: a comparative study
- Author
-
Dilek Doğan, Murat Çınar, and Süleyman Sadi Seferoğlu
- Subjects
Pixel, Computer science, General Social Sciences, Human-Computer Interaction, Comprehension, Arts and Humanities (miscellaneous), Reading (process), Developmental and Educational Psychology, Mobile device - Abstract
The aim of this study was to examine the effects that reading on screens (using digital devices with different screen sizes) and on paper have on reading time and comprehension. The study group was...
- Published
- 2019
217. 55.3: Invited Paper: Development and Manufacture for Free Form Mobile Phone Screen
- Author
-
Jiang wei, Tan Li, Li Lingxia, Cai Kaimin, Li Wei, Li Ting, and Li Xiaohu
- Subjects
Multimedia ,Computer science ,Laser cutting ,Mobile phone ,Free form ,computer.software_genre ,computer - Published
- 2019
218. Exploring Quantity and Diversity of Informal Digital Learning of English (IDLE): A Review of Selected Paper
- Author
-
Ratih Saltri Yudar, Nofita Sari Gowasa Nofita, and Mutia Sari Nursafira
- Subjects
IDLE, English digital learning, IDLE quantity and diversity, EFL learning outcomes, Multimedia, Computer science, Theory and practice of education, Digital learning - Abstract
This article reviews research on informal digital learning of English (IDLE), a topic of growing interest in the fields of teaching English to speakers of other languages and computer-assisted language learning, focusing on Ju Seong Lee's (2019) article "Quantity and Diversity of Informal Digital Learning of English," published in Language Learning & Technology. The present paper uses descriptive qualitative analysis to understand, from the researcher's perspective, how the quantity and diversity of IDLE can make a unique contribution to the English language outcomes of EFL learners. Lee uses hierarchical linear regression analysis to show that IDLE quantity, age, and major are significant predictors of two affective variables (confidence and pleasure), while IDLE diversity and major significantly predict productive language outcomes (speaking and productive vocabulary knowledge), scores on a standardized English test (TOEIC), and one affective variable (lack of anxiety). This article reviews and discusses the strengths and weaknesses found in Lee's 2019 article. Lee's article has a clear flow in explaining these two dimensions of IDLE, which makes it easy to understand; replicating Lee's research for similar purposes is therefore straightforward.
- Published
- 2019
219. Saving History: How White Evangelicals Tour the Nation's Capital and Redeem a Christian America. By Lauren R. Kerby. Where Religion Lives. Chapel Hill: The University of North Carolina Press, 2020. x + 196 pp. $90.00 cloth; $22.00 paper
- Author
-
Mark Thomas Edwards
- Subjects
Cultural Studies, History, Religious studies, Sociology - Published
- 2020
220. EFL Learners' Lexico-grammatical Competence in Paper-based Vs. Computer-based in Genre Writing
- Author
-
Sundus Ziad AlKadi and Abeer Ahmed Madini
- Subjects
Collaborative writing, Paper-based writing (PB), Computer-based typing (CB), Lexico-grammar, Computer-assisted language learning (CALL), Second language writing, Text analysis, Error analysis, Narrative, Psychology - Abstract
With new technology, writing has become a skill that is developed year after year. The present study asks whether there is a difference between paper-based and computer-based writing in terms of errors and lexico-grammar. It aims at exploring sentence-level errors and lexico-grammatical competence across two writing genres in a collaborative writing environment, comparing paper-based and computer-based writing. A sample of 73 female intermediate-level learners participated in the study at the University of Business and Technology (UBT) in Saudi Arabia. This mixed-methods research is significant in the literature on second language writing since it highlights genre awareness, lexico-grammatical competence, error analysis, and collaboration across two styles of writing. The reading-based writing tasks acted as a reflection of the learners' lexico-grammatical competence on paper and via a Web 2.0 tool (Padlet). Statistically, Mann-Whitney U-tests showed no significant difference between the paper-based and computer-based groups in sentence-level errors in the narrative genre, whereas there was a significant difference between the two writing-tool groups in sentence-level errors in the opinion genre. However, there was no significant difference between the paper-based and computer-based groups in the clauses (lexico-grammar) of the two groups. Immediate semi-structured interviews were conducted and analyzed through NVIVO to gain more insights from the learners and explain the comparison between paper-based and computer-based writing. In light of the significant findings, implications are sought to create an equilibrium between paper-based and computer-based writing, along with enhancing collaboration in second language writing.
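For readers unfamiliar with the Mann-Whitney U test used in the abstract above, the U statistic simply counts, across all cross-group pairs, how often a value from one group exceeds a value from the other (ties count half). A minimal sketch of the statistic itself follows; significance testing against the U distribution is omitted, and the error counts are invented for the example.

```python
def mann_whitney_u(a, b):
    """U statistic for group `a` versus group `b`: the number of pairs
    (x, y) with x from a and y from b where x > y, counting ties as 0.5."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Sentence-level error counts for two hypothetical writing groups:
paper_based = [5, 7, 8]
computer_based = [4, 6, 9]
print(mann_whitney_u(paper_based, computer_based))  # 5.0 of 9 possible pairs
```

When U is close to half of len(a) * len(b), as here, the two groups overlap heavily, which is the situation behind a non-significant result.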
- Published
- 2019
221. Comparing a Multimedia Digital Informed Consent Tool With Traditional Paper-Based Methods: Randomized Controlled Trial
- Author
-
James Dziura, Fuad Abujarad, Sandra L. Alfano, Cynthia Brandt, Chelsea Edwards, Kristina Carlson, Sophia Mun, Geoffrey Chupp, and Peter Peduzzi
- Subjects
Original Paper, Mobile phone, Multimedia, Informed consent, Digital health, Digital consent, e-consent, Medicine (miscellaneous), Health Informatics, Usability, Cognition, Coaching, Computer Science Applications, Comprehension, Randomized controlled trial, Psychology - Abstract
Background: The traditional informed consent (IC) process rarely emphasizes research participants' comprehension of medical information, leaving them vulnerable to unknown risks and consequences associated with procedures or studies. Objective: This paper explores how we evaluated the feasibility of a digital health tool called Virtual Multimedia Interactive Informed Consent (VIC) for advancing the IC process and compared the results with traditional paper-based methods of IC. Methods: Using digital health and web-based coaching, we developed the VIC tool, which uses multimedia and other digital features to improve the current IC process. The tool was developed on the basis of the user-centered design process and Mayer's cognitive theory of multimedia learning. This study is a randomized controlled trial that compares the feasibility of VIC with standard paper consent to understand the impact of interactive digital consent. Participants were recruited from the Winchester Chest Clinic at Yale New Haven Hospital in New Haven, Connecticut, and healthy individuals were recruited from the community using fliers. In this coordinator-assisted trial, participants were randomized to complete the IC process using VIC on an iPad or with traditional paper consent. The study was conducted at the Winchester Chest Clinic, and the outcomes were self-assessed through coordinator-administered questionnaires. Results: A total of 50 participants were recruited (VIC, n=25; paper, n=25). Participants in both groups had high comprehension. VIC participants reported higher satisfaction, higher perceived ease of use, higher ability to complete the consent independently, and shorter perceived time to complete the consent process. Conclusions: The use of dynamic, interactive audiovisual elements in VIC may improve participants' satisfaction and facilitate the IC process. We believe that using VIC in an ongoing, real-world study rather than a hypothetical study improved the reliability of our findings, which demonstrates VIC's potential to improve research participants' comprehension and the overall process of IC. Trial Registration: ClinicalTrials.gov NCT02537886; https://clinicaltrials.gov/ct2/show/NCT02537886
- Published
- 2020
222. Survey Paper on Algorithms used for Sentiment Analysis
- Author
-
Meghashree K
- Subjects
Computer science ,business.industry ,Sentiment analysis ,Artificial intelligence ,Machine learning ,computer.software_genre ,business ,computer - Published
- 2020
223. SURVEY PAPER ON CURRENT BLOCKCHAIN SOLUTIONS FOR SECURE BANK TRANSACTIONS
- Author
-
Aniruddha M.N, Akhilesh N.S, and S Preetha
- Subjects
Blockchain ,Computer science ,Current (fluid) ,Computer security ,computer.software_genre ,computer - Published
- 2020
224. Survey on Research Paper Classification based on TF-IDF and Stemming Technique using Classification Algorithm
- Author
-
S. A. and Kshitija G.
- Subjects
Computer science ,Data mining ,tf–idf ,computer.software_genre ,computer - Published
- 2020
225. SwiftPad: Exploring WYSIWYG TEX Editing on Electronic Paper
- Author
-
Elliott Wen and Gerald Weber
- Subjects
010302 applied physics ,Focus (computing) ,Computer science ,05 social sciences ,Composition system ,WYSIWYG ,Cascading Style Sheets ,01 natural sciences ,Readability ,Refresh rate ,law.invention ,Human–computer interaction ,law ,0103 physical sciences ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Identity (object-oriented programming) ,0501 psychology and cognitive sciences ,Electronic paper ,computer ,050107 human factors ,computer.programming_language - Abstract
Electronic paper (i.e., e-paper) is a display technology that aims to imitate and substitute for conventional paper. Previous studies of e-paper mainly focus on evaluating or making practical use of its readability; however, there is little research exploring the potential of e-paper for input-oriented applications. In this paper, we introduce a document composition system named SwiftPad for e-paper. Specifically, SwiftPad renovates the well-known TEX typesetting system, enabling users to compose documents of high typographic quality on e-paper in a WYSIWYG (what you see is what you get), offline-first, and collaborative fashion. Building such a system on resource-constrained e-paper with a low screen refresh rate creates unique challenges. In this paper, we identify these challenges and provide workable solutions. We also provide a preliminary evaluation of the new system.
- Published
- 2020
226. Intelligent Framework for Long-text Political Speeches Summarization and Visualization Using Sentiment Lexicons: A Study Directed at King Abdullah II Discussion Papers
- Author
-
Zaher Salah
- Subjects
Politics ,Computer science ,business.industry ,Computer Science (miscellaneous) ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,computer.software_genre ,Automatic summarization ,computer ,Natural language processing ,Visualization - Published
- 2020
227. Statistics in Proteomics: A Meta-analysis of 100 Proteomics Papers Published in 2019
- Author
-
Paul A. Haynes and David C. L. Handler
- Subjects
Proteomics ,False discovery rate ,Biomedical Research ,Chemistry ,business.industry ,Statistics as Topic ,computer.software_genre ,Hierarchical clustering ,Set (abstract data type) ,Naive Bayes classifier ,Structural Biology ,Meta-analysis ,Principal component analysis ,Multiple comparisons problem ,Humans ,Multinomial distribution ,Artificial intelligence ,business ,computer ,Spectroscopy ,Natural language processing - Abstract
We randomly selected 100 journal articles published in five proteomics journals in 2019 and manually examined each of them against a set of 13 criteria concerning the statistical analyses used, all of which were based on items mentioned in the journals' instructions to authors. This included questions such as whether a pilot study was conducted and whether false discovery rate calculation was employed at either the quantitation or identification stage. These data were then transformed to binary inputs, analyzed via machine learning algorithms, and classified accordingly, with the aim of determining if clusters of data existed for specific journals or if certain statistical measures correlated with each other. We applied a variety of classification methods including principal component analysis decomposition, agglomerative clustering, and multinomial and Bernoulli naïve Bayes classification and found that none of these could readily determine journal identity given extracted statistical features. Logistic regression was useful in determining high correlative potential between statistical features such as false discovery rate criteria and multiple testing corrections methods, but was similarly ineffective at determining correlations between statistical features and specific journals. This meta-analysis highlights that there is a very wide variety of approaches being used in statistical analysis of proteomics data, many of which do not conform to published journal guidelines, and that contrary to implicit assumptions in the field there are no clear correlations between statistical methods and specific journals.
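The binary-feature classification described above can be illustrated with a small Bernoulli naive Bayes model, one of the classifiers the authors applied. This is a hedged sketch, not the paper's code: the toy data below (e.g. whether a study reports FDR control or a pilot study) is invented for illustration.

```python
from collections import defaultdict
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Fit a Bernoulli naive Bayes model on binary feature vectors.

    X: list of 0/1 feature vectors, y: class labels.
    Returns per-class log priors and per-feature log probabilities.
    """
    n_features = len(X[0])
    counts = defaultdict(lambda: [0] * n_features)  # feature "on" counts per class
    class_sizes = defaultdict(int)
    for xi, yi in zip(X, y):
        class_sizes[yi] += 1
        for j, v in enumerate(xi):
            counts[yi][j] += v
    n = len(y)
    model = {}
    for c, size in class_sizes.items():
        log_prior = math.log(size / n)
        # Laplace-smoothed probability that each feature is 1 given class c
        log_p = [math.log((counts[c][j] + alpha) / (size + 2 * alpha))
                 for j in range(n_features)]
        log_q = [math.log(1 - math.exp(lp)) for lp in log_p]
        model[c] = (log_prior, log_p, log_q)
    return model

def predict(model, x):
    """Return the most probable class for binary vector x."""
    best, best_score = None, float("-inf")
    for c, (log_prior, log_p, log_q) in model.items():
        score = log_prior + sum(lp if v else lq
                                for v, lp, lq in zip(x, log_p, log_q))
        if score > best_score:
            best, best_score = c, score
    return best
```

As the paper found for journal identity, such a classifier only separates classes when the binary criteria actually differ between them.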
- Published
- 2020
228. Survey Paper on Fraud Detection in Medicare Using Machine Learning
- Author
-
S Muthulakshmi
- Subjects
Psychiatry and Mental health ,Clinical Psychology ,Computer science ,business.industry ,Artificial intelligence ,Pshychiatric Mental Health ,business ,Machine learning ,computer.software_genre ,computer - Published
- 2020
229. Meta-analysis of Hate Speech in Communication Studies: Focusing on Papers Published in Journals Selected in the KCI(Korean Citation Index) in the 2010s
- Author
-
Kim Sooah, Minjeong Kim, Sungil Hong, and Dong-Hoo Lee
- Subjects
business.industry ,Meta-analysis ,Citation index ,Communication studies ,Artificial intelligence ,computer.software_genre ,Psychology ,business ,computer ,Natural language processing - Abstract
As hate speech rose to prominence in Korean society in the 2010s, communication studies paid particular attention to the problem of hate. This paper conducts a meta-analysis of hate-related papers published in the communication studies field over the past ten years, aiming to examine in depth how hate became problematized in Korean society and how it reached its current state. The analyzed papers were limited to journals in the social sciences / journalism and broadcasting field registered with the National Research Foundation of Korea, and both quantitative and qualitative analyses were performed. The quantitative analysis confirmed that hate speech, freedom of expression, and misogyny were the main research keywords, and that the scholarly engagement of female researchers was prominent. The qualitative analysis shows that over the past decade the targets of hate in Korean society have proliferated in virtually every direction: not only misogyny but also the red complex, regionalism, migrant workers and foreigners, and political hatred. This reflects the combined influence of the deepening social conflict brought on by neoliberalism, the shift of the media landscape toward mobile media, and the weak self-regulation of the press and media platforms. In this context, communication studies actively reinterpreted freedom of expression and, in the late 2010s, began to develop discussions of hate speech regulation aimed at protecting the freedom of expression of social minorities.
- Published
- 2020
230. Automatic Identification of Compare Paper Relations
- Author
-
Yuliant Sibaroni, Masayu Leylia Khodra, and Dwi H. Widyantoro
- Subjects
Computer science ,General Engineering ,Identification (biology) ,Data mining ,computer.software_genre ,computer - Published
- 2020
231. Panel Paper: Towards Geospatial Humanities: Reflections from Two Panels
- Author
-
Robert T. Tally, Jim Thatcher, Yingjie Hu, Karl Grossner, Edward Ayers, Clio Andris, Alberto Giordano, and Kathy Hart
- Subjects
Human-Computer Interaction ,Geography ,Geospatial analysis ,General Computer Science ,General Arts and Humanities ,computer.software_genre ,Data science ,computer - Published
- 2020
232. Allegories of Encounter: Colonial Literacy and Indian Captivities. By Andrew Newman. (Chapel Hill: University of North Carolina Press, 2019. Pp. 236. $90.00 cloth; $24.95 paper.)
- Author
-
Lorrayne Carroll
- Subjects
History ,Literature and Literary Theory ,media_common.quotation_subject ,Chapel ,Art history ,Art ,Colonialism ,computer ,Literacy ,computer.programming_language ,media_common - Published
- 2020
233. Short Paper: An Update on Marked Mix-Nets: An Attack, a Fix and PQ Possibilities
- Author
-
Thomas Haines, Olivier Pereira, Peter B. Rønne, and UCL - SST/ICTM - Institute of Information and Communication Technologies, Electronics and Applied Mathematics
- Subjects
Scheme (programming language) ,Computer science [C05] [Engineering, computing & technology] ,050101 languages & linguistics ,Computer science ,business.industry ,05 social sciences ,Short paper ,Proof of security ,02 engineering and technology ,Computer security ,computer.software_genre ,Encryption ,Sciences informatiques [C05] [Ingénierie, informatique & technologie] ,Order (exchange) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,business ,computer ,ElGamal encryption ,Quantum computer ,computer.programming_language - Abstract
Marked mix-nets were introduced by Pereira and Rivest as a mechanism to allow very efficient mixing that ensures privacy but at the cost of not guaranteeing integrity. This is useful in a number of e-voting schemes such as STAR-Vote and Selene. However, the proposed marked mix-net construction comes with no proof of security and, as we show in this paper, does not provide privacy even in the presence of a single corrupt authority. Fortunately, the attack that we present is easy to prevent, and we show several possible ways to address it. Finally, while the original marked mix-net paper worked with ElGamal, we identify conditions that the adopted encryption scheme should satisfy in order to be appropriate for a marked mix-net. This opens the possibility of building marked mix-nets based on intractability assumptions which are believed to hold in the presence of a quantum computer.
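The discussion above presumes ElGamal's re-encryption property, which is what lets a mix server re-randomize ciphertexts without decrypting them. A rough sketch of that primitive (not the marked mix-net construction itself, and with toy parameters nowhere near a secure group size):

```python
import random

# Toy ElGamal group; illustrative only, far too small for real security.
P = 467   # prime modulus
G = 2     # generator of the multiplicative group mod P

def keygen():
    x = random.randrange(2, P - 1)        # secret key
    return x, pow(G, x, P)                # (secret x, public h = g^x)

def encrypt(h, m):
    r = random.randrange(2, P - 1)
    return pow(G, r, P), (m * pow(h, r, P)) % P

def reencrypt(h, ct):
    """Re-randomize a ciphertext without knowing the plaintext --
    the operation a mix server applies before shuffling."""
    c1, c2 = ct
    s = random.randrange(2, P - 1)
    return (c1 * pow(G, s, P)) % P, (c2 * pow(h, s, P)) % P

def decrypt(x, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, x, P), -1, P)) % P
```

Re-encryption multiplies a fresh encryption of 1 into the ciphertext, so the decryption result is unchanged while the ciphertext bits are unlinkable to the original.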
- Published
- 2020
234. Adaptive weights clustering of research papers
- Author
-
Kirill Efimov, Larisa Adamyan, Wolfgang Karl Härdle, and Cathy Yi-Hsuan Chen
- Subjects
JEL system ,Adaptive algorithm ,Point (typography) ,Computer science ,330 Wirtschaft ,05 social sciences ,Nonparametric statistics ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Clustering ,Weighting ,0502 economics and business ,ddc:330 ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Economic articles ,Nonparametric ,Data mining ,050207 economics ,Cluster analysis ,computer ,Research center - Abstract
The JEL (Journal of Economic Literature) classification system is a standard way of assigning key topics to economic articles so that they can be retrieved more easily from today's massive body of literature. Usually the JEL codes are picked by the author(s), bearing the risk of suboptimal assignment. Using the database of the Collaborative Research Center at Humboldt-Universität zu Berlin, we employ a new adaptive clustering technique to identify interpretable JEL (sub)clusters. The proposed Adaptive Weights Clustering (AWC) is available on http://www.quantlet.de/ and is based on the idea of locally weighting each point (document, abstract) in terms of cluster membership. Comparison with k-means or CLUTO reveals excellent performance of AWC.
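For context, the k-means baseline the authors compare AWC against can be sketched in a few lines; this is a generic textbook implementation on dense document vectors, not the AWC algorithm or the authors' code.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: the baseline AWC is compared against.
    points: list of equal-length float lists (e.g. document vectors)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        assign = [min(range(k),
                      key=lambda c: sum((p_i - centers[c][i]) ** 2
                                        for i, p_i in enumerate(p)))
                  for p in points]
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                dim = len(members[0])
                centers[c] = [sum(m[i] for m in members) / len(members)
                              for i in range(dim)]
    return assign, centers
```

Unlike AWC's local weighting, k-means requires the number of clusters up front, which is one motivation for adaptive alternatives.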
- Published
- 2020
235. Machine learning approaches and databases for prediction of drug–target interaction: a survey paper
- Author
-
Maureen A. Sartor, Maryam Bagherian, Zaneta Nikolovska-Coleska, Kai Wang, Kayvan Najarian, and Elyas Sabeti
- Subjects
Databases, Factual ,AcademicSubjects/SCI01060 ,Computer science ,Process (engineering) ,Drug target ,Review Article ,computer.software_genre ,Machine learning ,Task (project management) ,Machine Learning ,03 medical and health sciences ,0302 clinical medicine ,Drug Discovery ,DTI software ,Humans ,Set (psychology) ,Molecular Biology ,030304 developmental biology ,0303 health sciences ,Database ,business.industry ,drug–target interaction prediction ,Computational Biology ,Key (cryptography) ,Artificial intelligence ,Erratum ,business ,computer ,DTI database ,030217 neurology & neurosurgery ,Information Systems - Abstract
The task of predicting the interactions between drugs and targets plays a key role in the process of drug discovery. There is a need to develop novel and efficient prediction approaches in order to avoid the costly and laborious, yet not always deterministic, experiments otherwise required to determine drug–target interactions (DTIs). These approaches should be capable of identifying potential DTIs in a timely manner. In this article, we describe the data required for the task of DTI prediction, followed by a comprehensive catalog of machine learning methods and databases that have been proposed and utilized to predict DTIs. The advantages and disadvantages of each set of methods are also briefly discussed. Lastly, we highlight the challenges one may face in predicting DTIs using machine learning approaches and conclude by shedding some light on important future research directions.
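One of the simplest families of methods in such catalogs is similarity-based (guilt-by-association) prediction. As a hedged illustration only, with made-up binary vectors standing in for drug fingerprints concatenated with target descriptors:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def predict_interaction(pair, known_pairs, labels, k=3):
    """Score a candidate drug-target pair by the labels of its k most
    similar known pairs: similar pairs are assumed to behave similarly."""
    scored = sorted(zip(known_pairs, labels),
                    key=lambda pl: tanimoto(pair, pl[0]), reverse=True)
    top = [lbl for _, lbl in scored[:k]]
    return sum(top) / len(top)   # fraction of interacting neighbors
```

Real DTI pipelines surveyed in the paper replace these toy vectors with chemical fingerprints, sequence descriptors, and far richer models, but the neighbor-voting intuition carries over.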
- Published
- 2020
236. Cross-Validation, Risk Estimation, and Model Selection: Comment on a Paper by Rosset and Tibshirani
- Author
-
Stefan Wager
- Subjects
Statistics and Probability ,Estimation ,Computer science ,business.industry ,Model selection ,05 social sciences ,Machine learning ,computer.software_genre ,01 natural sciences ,Cross-validation ,Task (project management) ,010104 statistics & probability ,0502 economics and business ,Range (statistics) ,Artificial intelligence ,0101 mathematics ,Statistics, Probability and Uncertainty ,business ,computer ,050205 econometrics - Abstract
How best to estimate the accuracy of a predictive rule has been a longstanding question in statistics. Approaches to this task range from simple methods like Mallows' Cp to algorithmic techniques l...
- Published
- 2020
237. An Analysis on Differences of Word Usage between Full and Short Conference Papers
- Author
-
Toshiro Minami
- Subjects
Computer science ,business.industry ,Word usage ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language processing - Published
- 2020
238. Knowledge Disembodied: From Paper to Digital Media
- Author
-
Abdullah Ibrahim Omran
- Subjects
Multimedia ,business.industry ,General Medicine ,Sociology ,computer.software_genre ,business ,computer ,Digital media ,Knowledge production - Published
- 2020
239. A Review Paper on Food Security
- Author
-
Tjprc and Ansumansamal Ansumansamal
- Subjects
Fluid Flow and Transfer Processes ,Food security ,Mechanical Engineering ,Aerospace Engineering ,Business ,Computer security ,computer.software_genre ,computer - Published
- 2020
240. Biointelligenz/Biointelligence – Definition and Categorization – A Discussion Paper
- Author
-
Thomas Bauernhansl, Robert Miehe, and Yannick Baumgarten
- Subjects
Categorization ,Control and Systems Engineering ,Computer science ,business.industry ,Automotive Engineering ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language processing - Abstract
Biointelligence is currently one of the most noted innovation paths in Germany and Europe. Crucial to the development of the biointelligence vision are recent breakthroughs in biotechnology and information technology. It is therefore not surprising that the scientific community, political decision-makers, and industry representatives are enthusiastic about the subject of biointelligence. In the Stuttgart-Tübingen region, for example, more than 40 scientists have joined forces in the Competence Center Biointelligence to promote the topic in an interdisciplinary fashion. Not only the federal government but also various state governments are planning extensive support measures. At the level of the European Commission, the term Industry 5.0 is sometimes even used behind closed doors. The fact is that biointelligence opens up enormous potential for breakthrough innovations, leapfrogging, and competitive advantages in almost all industrial sectors. In many cases, however, the term is misunderstood, which ultimately hollows out its meaning and thus undermines its potential for Germany as a business location. For science, industry, and politics, it is crucial to find a consensus. This article aims to contribute to that goal by summarizing the origin of the term, presenting a detailed definition, and developing a categorization system.
- Published
- 2020
241. Computer-based versus paper-based testing: Investigating testing mode with cognitive load and scratch paper use
- Author
-
Jared A. Danielson and Anna Agripina Prisacari
- Subjects
Class (computer programming) ,Multimedia ,Computer science ,05 social sciences ,Computer based ,050301 education ,Paper based ,computer.software_genre ,050105 experimental psychology ,Test (assessment) ,Variety (cybernetics) ,Human-Computer Interaction ,Mode (computer interface) ,Arts and Humanities (miscellaneous) ,Scratch ,Human–computer interaction ,0501 psychology and cognitive sciences ,0503 education ,computer ,General Psychology ,Cognitive load ,computer.programming_language - Abstract
The aim of the present study was to explore the relationship between testing mode (taking a test on computer versus paper) and two other factors: (1) cognitive load and (2) scratch paper use in an undergraduate general chemistry class setting. Cognitive load was measured with two self-report questions (perceived mental effort and level of difficulty) and scratch paper use was analyzed manually. All 221 students completed three assessments administered either on computer or paper. The assessments included a variety of chemistry topics with an equal number of three question types (algorithmic, conceptual, and definition). There was no significant difference in the cognitive load imposed by computer or paper-based tests at the overall test level or by question type. Students utilized scratch paper more on paper-based than online tests, especially when the questions were conceptual, and they used scratch paper the most for algorithmic questions. Altogether, these results provide further support that online testing can be implemented in educational settings without imposing additional cognitive load on students.
- Published
- 2017
242. Structure and Technology of Paper Furniture Panel Based on Computer Aided Design
- Author
-
Lin Shi
- Subjects
Structure (mathematical logic) ,Architectural engineering ,Mode (computer interface) ,Point (typography) ,Product design ,Computer science ,Process (engineering) ,Production (economics) ,Computer Aided Design ,Design methods ,computer.software_genre ,computer - Abstract
Furniture design depends on its materials. Against the social background of building an ecological and civilized society, furniture design based on paper materials gives people a new understanding of paper products. Design works exist in dependence on their materials: given a material, designers can use their imagination and their hands to give a piece different shapes and colors, combining the designer's personal consciousness and emotions in the creation. Across industries, traditional design ideas, methods, and means lag behind the development of the times due to their inherent shortcomings, restricting improvements in production efficiency. Future furniture cannot be separated from the concepts of green environmental protection and recycling. Taking furniture products as its starting point, this paper explores new structures and new technologies for furniture product design based on computer-aided design, looks for future design trends for furniture products under a social imperative of energy saving and environmental protection, and analyzes and explains how paper materials themselves circulate and are recycled.
- Published
- 2021
243. Reproducibility Report for the Paper: 'Differentiable Agent-Based Simulation for Gradient-Guided Simulation-Based Optimization'
- Author
-
Emilio Incerto and Matteo Principe
- Subjects
Upload ,Reproducibility ,Simulation-based optimization ,Software ,Computer science ,business.industry ,Review process ,Data mining ,Artifact (software development) ,Differentiable function ,computer.software_genre ,business ,computer - Abstract
The author claimed the following ACM Reproducibility badges for the artifact associated with the paper: (1) Artifact Available, (2) Artifact Evaluated-Functional, (3) Results Reproduced. After an in-depth review process, we agreed to assign all the requested badges, as we found the artifact to meet the following requirements: (i) it is uploaded to a persistent repository and accessible via a DOI; (ii) it is well documented, consistent with the presented data, complete with all necessary software sources and packages, and exercisable; (iii) it is exhaustive in reproducing all the relevant data of the paper. Some curves in some reproduced plots are truncated due to the computational limits imposed by the short-term deadline of the review process. Nevertheless, the overall trends are respected, and the curves support the paper's claims.
- Published
- 2021
244. This is Not a Paper
- Author
-
David Philip Green, Joseph Lindley, Hayley Alter, and Miriam Sturdee
- Subjects
Research design ,Coronavirus disease 2019 (COVID-19) ,Computer science ,05 social sciences ,020207 software engineering ,02 engineering and technology ,Publication Formats ,computer.software_genre ,World Wide Web ,Videoconferencing ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,computer ,050107 human factors - Abstract
This is like an abstract to a paper, but it is more abstract. In fact, it is the introduction to something which is a not paper. The global Covid-19 pandemic of 2020 represented an inflection point for our post-post-modern world, a moment where our old normal was dramatically arrested. We are now in a state of comprehensive flux as ‘new normals’ emerge, begin to solidify, and may evolve into as-yet-undetermined futures. This not paper is a facet and exploration of that flux as it relates to publication and conference culture, video conferencing systems, and how we both conduct, and share, research. You should read the whole of this abstract, but then you should take a step inside the not paper; it lives on the web over here https://designresearch.works/thisisnotapaper/
- Published
- 2021
245. Time-aware Neural Collaborative Filtering with Multi-dimensional Features on Academic Paper Recommendation
- Author
-
Yong Tang, Zelin Peng, Yibo Lu, Yi He, and Yixiang Cai
- Subjects
050101 languages & linguistics ,Artificial neural network ,Social network ,business.industry ,Computer science ,media_common.quotation_subject ,05 social sciences ,02 engineering and technology ,Machine learning ,computer.software_genre ,Factor (programming language) ,Perception ,0202 electrical engineering, electronic engineering, information engineering ,Multi dimensional ,Collaborative filtering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Artificial intelligence ,business ,computer ,computer.programming_language ,media_common - Abstract
In modern academic social networks, it is very difficult for scholars to find academic papers consistent with their research direction. Time is a critical factor in paper recommendation: as time goes on, the impact of an academic paper gradually fades, and the research interests of users may likewise change. Therefore, we propose a time-aware neural collaborative filtering model that integrates the multi-dimensional features of papers. We conducted experiments on a dataset from CiteULike, comparing recommendation results across four time-decay functions and evaluating our model with multiple evaluation indicators. The satisfactory results show that our model is effective at filtering out expired papers by considering both the characteristics of papers and the changing interests of scholars.
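The time-decay idea can be sketched with an exponential half-life weighting. The abstract does not specify the four decay functions the authors compared, so the function and half-life below are illustrative assumptions:

```python
def time_decay(score, age_days, half_life_days=365.0):
    """Exponential time decay: a paper's relevance score is halved every
    half_life_days, modelling the fading impact of older papers."""
    return score * 0.5 ** (age_days / half_life_days)

def rank_papers(papers, now):
    """papers: list of (title, base_score, published_day).
    Returns papers sorted by time-decayed score, best first."""
    return sorted(papers,
                  key=lambda p: time_decay(p[1], now - p[2]),
                  reverse=True)
```

With this weighting, a recent paper with a modest base score can outrank an older paper with a higher one, which is the behavior the model aims for.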
- Published
- 2021
246. Session details: Session 2: Short Papers
- Author
-
Thanh-Binh Nguyen
- Subjects
Multimedia ,Session (computer science) ,Psychology ,computer.software_genre ,computer - Published
- 2021
247. Session details: Session 1: Full Papers
- Author
-
Cathal Gurrin
- Subjects
Multimedia ,Session (computer science) ,computer.software_genre ,Psychology ,computer - Published
- 2021
248. Hierarchical Encoder-Decoder Summary Model with an Instructor for Long Academic Papers
- Author
-
Shasha Li, Jianling Li, Jie Yu, Wuhang Lin, and Jun Ma
- Subjects
business.industry ,Computer science ,Process (computing) ,computer.software_genre ,Communications system ,Salient ,Process control ,Artificial intelligence ,Source text ,State (computer science) ,business ,Encoder ,computer ,Natural language processing ,Abstraction (linguistics) - Abstract
Summary models, whether extractive or abstractive, have achieved great success recently. For long academic papers, abstractive models with an encoder-decoder architecture rely mainly on the attentional context vector for generation, unlike humans, who first master the salient information of the source text and thus have full control over what to write. Extracted sentences, by contrast, always contain correct and salient information that can be used to control the abstraction process. Therefore, building on a hierarchical encoder-decoder architecture designed specifically for academic papers, we propose a summary model with an Instructor, in essence an encoder that takes the guiding sentences as input to further control the generation process. On the encoder side, the final hidden state from the Instructor is added directly to the basic hierarchical hidden state from the encoder. Experimental results on arXiv/PubMed show that the encoder-only improvement already generates better abstracts. On the decoder side, the context vector from the Instructor is integrated with the original discourse-aware context vector for generation. The results show that the Instructor is effective for control, and our model generates more accurate and fluent abstracts with significantly higher ROUGE values.
- Published
- 2021
249. Predicting Paper Acceptance via Interpretable Decision Sets
- Author
-
Weihui Hong, Xuanya Li, and Peng Bao
- Subjects
Hierarchy (mathematics) ,business.industry ,Computer science ,media_common.quotation_subject ,Machine learning ,computer.software_genre ,Consistency (database systems) ,Statistical classification ,Component (UML) ,Quality (business) ,Artificial intelligence ,business ,Set (psychology) ,Construct (philosophy) ,computer ,media_common ,Interpretability - Abstract
Measuring the quality of research work is an essential component of the scientific process. With ever-growing rates of articles being submitted to top-tier conferences, and the consistency and bias issues in the peer review process identified by the scientific community, it is both necessary and challenging to evaluate submissions automatically. Existing works mainly focus on exploring relevant factors and applying machine learning models that are merely accurate at predicting the acceptance of a given academic paper, ignoring the interpretability required by a wide range of applications. In this paper, we propose a framework that constructs decision sets consisting of unordered if-then rules for predicting paper acceptance. We formalize the decision set learning problem via a joint objective function that simultaneously optimizes the accuracy and interpretability of the rules, rather than organizing them in a hierarchy. We evaluate the effectiveness of the proposed framework by applying it to a public dataset of scientific peer reviews. Experimental results demonstrate that the interpretable decision sets learned by our framework perform on par with state-of-the-art classification algorithms that optimize exclusively for predictive accuracy, while being much more interpretable than rule-based methods.
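The unordered if-then rules and the joint accuracy/interpretability objective can be sketched as follows; the rule predicates, the penalty weight `lam`, and the toy data are assumptions for illustration, not the paper's actual formalization:

```python
def accuracy(rules, default, X, y):
    """Fraction of examples classified correctly by an unordered rule set.
    Each rule is (condition_fn, label); any matching rule may fire, and
    ties among fired rules are broken by majority vote."""
    correct = 0
    for xi, yi in zip(X, y):
        fired = [label for cond, label in rules if cond(xi)]
        pred = max(set(fired), key=fired.count) if fired else default
        correct += pred == yi
    return correct / len(y)

def objective(rules, default, X, y, lam=0.05):
    """Joint objective in the spirit of the paper: reward accuracy,
    penalize rule-set size so the set stays interpretable."""
    return accuracy(rules, default, X, y) - lam * len(rules)
```

A learner would search over candidate rule sets maximizing this objective, trading a little accuracy for far fewer, human-readable rules.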
- Published
- 2021
250. Finding Keystone Citations for Constructing Validity Chains among Research Papers
- Author
-
Catherine Blake, Yuanxi Fu, and Jodi Schneider
- Subjects
Sociology of scientific knowledge ,Dependency (UML) ,Computer science ,business.industry ,computer.software_genre ,Argumentation theory ,Focus (linguistics) ,Rhetorical question ,Graph (abstract data type) ,Artificial intelligence ,business ,Citation ,computer ,Sentence ,Natural language processing - Abstract
New discoveries in science are often built upon previous knowledge. Ideally, such dependency information should be made explicit in a scientific knowledge graph. The Keystone Framework was proposed for tracking validity dependencies among papers: a keystone citation indicates that the validity of a given paper depends on a previously published paper it cites. In this paper, we propose and evaluate a strategy that repurposes rhetorical category classifiers for the novel application of extracting keystone citations related to research methods. Five binary rhetorical category classifiers were constructed to identify Background, Objective, Methods, Results, and Conclusions sentences in biomedical papers. The resulting classifiers were used to test the strategy against two datasets. The initial strategy assumed that only citations contained in Methods sentences were methods keystone citations, but our analysis revealed that citations contained in sentences classified as either Methods or Results were highly likely to be methods keystone citations. Future work will focus on fine-tuning the rhetorical category classifiers, experimenting with multiclass classifiers, evaluating the revised strategy with more data, and constructing a larger gold-standard citation context sentence dataset for model training.
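The revised strategy (flag citations in sentences classified as Methods or Results) can be sketched with a crude keyword classifier standing in for the trained ones described above; the cue phrases and the bracket-style citation regex are illustrative assumptions:

```python
import re

# Crude keyword stand-in for the trained rhetorical category classifiers.
SECTION_CUES = {
    "Methods": ("was performed", "we used", "following the protocol",
                "as described"),
    "Results": ("we found", "showed", "was observed"),
}

def rhetorical_category(sentence):
    """Assign a sentence to a rhetorical category by cue-phrase lookup."""
    low = sentence.lower()
    for cat, cues in SECTION_CUES.items():
        if any(cue in low for cue in cues):
            return cat
    return "Other"

def keystone_citations(sentences):
    """Return citation markers (e.g. '[12]') appearing in sentences
    classified as Methods or Results."""
    found = []
    for s in sentences:
        if rhetorical_category(s) in ("Methods", "Results"):
            found.extend(re.findall(r"\[\d+\]", s))
    return found
```

The paper's actual pipeline uses trained classifiers rather than cue phrases, but the filtering step (keep only citations from Methods/Results sentences) has this shape.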
- Published
- 2021