46,868 results
Search Results
2. Paper Wrapping, Based on Knowledge about Face Connectivity among Paper Fragments
- Author
-
Toyohide Watanabe and Kenta Matsushima
- Subjects
Paper sheet ,Tree (data structure) ,Creative work ,Human–computer interaction ,Interface (Java) ,business.industry ,Computer science ,Process (engineering) ,Face (geometry) ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Construct (python library) ,Artificial intelligence ,business - Abstract
Paper wrapping serves to protect goods from external shocks, decorate them attractively, carry materials and goods safely, and so on. Paper wrapping is also intelligent, creative work, and the knowledge it requires depends on the features of the target objects, the paper sheets, and the wrapping purposes. This article addresses a method for designing the wrapping process. We introduce knowledge about paper wrapping and then construct a stage tree that represents various kinds of wrapping procedures. We propose a framework for designing a wrapping process appropriate to the target objects, and also describe an interactive interface that supports the wrapping process.
- Published
- 2013
3. Position Paper: Pragmatics in Fuzzy Theory
- Author
-
Karl Erich Wolff
- Subjects
business.industry ,Fuzzy set ,Formal concept analysis ,Position paper ,Distributed object ,Artificial intelligence ,Pragmatics ,Type-2 fuzzy sets and systems ,business ,Fuzzy logic ,Fuzzy cognitive map ,Mathematics - Abstract
This position paper presents the main problems in classical and modern Fuzzy Theory and gives solutions in Formal Concept Analysis for many of them. To support cooperation between scientists from the Fuzzy Theory and Formal Concept Analysis communities, the author uses this position paper to launch an initiative called "Pragmatics in Fuzzy Theory".
- Published
- 2011
4. Visual Texture Characterization of Recycled Paper Quality
- Author
-
Manuel Graña Romay, José Orlando Maldonado, and David Vicente Herrera
- Subjects
Surface (mathematics) ,Paper sheet ,Gabor filter ,business.industry ,Computer science ,Wavelet transform ,Paper quality ,Computer vision ,Artificial intelligence ,Visual texture ,business ,Image (mathematics) ,Characterization (materials science) - Abstract
When performing quality inspection of recycled paper, one phenomenon of concern is the appearance of macroscopic undulations on the paper sheet surface that may emerge shortly or some time after production. In this paper we explore the detection and measurement of this defect by means of computer vision and statistical pattern recognition techniques that may allow early detection at the production site. We propose features computed from Gabor Filter Banks (GFB) and Discrete Wavelet Transforms (DWT) for characterizing paper sheet surface bumpiness in recycled paper images. The lack of a precise definition of the defect and the great variability of the sheet deformation shapes and scales, both within each image and between images, introduce additional difficulties. With both proposed modeling approaches (GFB and DWT) we obtain classification accuracies comparable to the agreement between human observers; the best performance is obtained using DWT features.
- Published
- 2007
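A minimal sketch of the kind of Gabor-filter-bank texture features described in entry 4 above, written in Python. The kernel parameters, the synthetic "bumpy" surface, and the energy statistic are illustrative assumptions, not the authors' feature set.

```python
# Hedged sketch: Gabor-filter-bank energy features for paper-surface texture,
# in the spirit of entry 4 (not the authors' exact features or parameters).
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor kernel with given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier

def gabor_bank_features(image, freqs=(0.05, 0.1, 0.2),
                        thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean absolute filter response per (frequency, orientation) pair."""
    image = image.astype(float)
    feats = []
    for f in freqs:
        for t in thetas:
            response = convolve2d(image, gabor_kernel(f, t), mode="same", boundary="symm")
            feats.append(np.abs(response).mean())
    return np.array(feats)

# Example on a synthetic "bumpy" surface image (stand-in for a scanned sheet).
rng = np.random.default_rng(0)
surface = rng.normal(size=(128, 128)) + np.sin(np.linspace(0, 20, 128))[None, :]
print(gabor_bank_features(surface).round(3))
```

Such a feature vector would then be passed to an ordinary classifier, which is the role the GFB/DWT features play in the paper.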
5. Paper Currency Denomination Recognition Based on GA and SVM
- Author
-
Ou Jin, Hua-Min Zhang, Jianbiao He, Jun Liang, and Li Xi
- Subjects
Banknote ,business.industry ,Computer science ,Process (computing) ,Pattern recognition ,Function (mathematics) ,Support vector machine ,symbols.namesake ,ComputingMethodologies_PATTERNRECOGNITION ,Computer Science::Computer Vision and Pattern Recognition ,Pattern recognition (psychology) ,Genetic algorithm ,Gaussian function ,symbols ,Artificial intelligence ,business ,Statistic - Abstract
The support vector machine (SVM) is a general learning method grounded in statistical learning theory that is effective for small-sample, nonlinear, and high-dimensional pattern recognition. This paper studies the SVM learning algorithm, extracts characteristic banknote data using PCA in accordance with the characteristics of the SVM, and applies the SVM to banknote denomination recognition by combining the SMO training algorithm with a one-versus-rest multi-class classification scheme. In addition, a genetic algorithm is used to optimize parameters such as the penalty coefficient C of the soft-margin SVM and the width parameter of the Gaussian kernel function. The ultimate goal is to recognize banknote denominations efficiently and accurately. The experimental results verify that this recognition method raises the recognition accuracy to 90% or more.
- Published
- 2015
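A minimal sketch of the GA-over-SVM-hyperparameters idea from entry 5 above. The digits dataset stands in for PCA banknote features, and the population size, encoding, and GA operators are invented for illustration; this is not the authors' pipeline.

```python
# Hedged sketch: a tiny genetic search over SVM hyperparameters (C, gamma).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                       # small subsample to keep the demo fast

def fitness(individual):
    C, gamma = 10.0 ** individual             # genes are log10(C), log10(gamma)
    clf = SVC(C=C, gamma=gamma)                # RBF kernel
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.uniform([-1, -5], [3, -1], size=(8, 2))     # initial population
for generation in range(3):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]             # keep the best four
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = (a + b) / 2 + rng.normal(scale=0.3, size=2)  # crossover + mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best log10(C), log10(gamma):", best.round(2))
```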
6. An Ontology-Based System for Generating Mathematical Test Papers
- Author
-
Deqian Liu, Xuzhi Zhou, Jiayi Cheng, Can Lin, and Jianfeng Du
- Subjects
Information retrieval ,Optimization problem ,Computer science ,business.industry ,Artificial intelligence ,Ontology (information science) ,business ,Entrance exam ,Task (project management) ,Test (assessment) - Abstract
Automatic test paper generation is highly helpful in teaching and learning. In order to generate a test paper that covers as many knowledge points as possible, knowledge points must first be discovered from exam questions. However, the problem of automatically finding knowledge points has seldom been investigated in existing work. To fill this gap, this paper proposes an ontology-based method to discover knowledge points from mathematical exam questions. Accordingly, a system for automatically generating mathematical test papers is also proposed. It composes a test paper by solving a pseudo-Boolean optimization problem. Its practicality is demonstrated by the task of generating mathematical test papers from hundreds of postgraduate entrance exam questions.
- Published
- 2014
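Entry 6 above composes a test paper by solving a pseudo-Boolean optimization problem. As a stand-in, here is a greedy set-cover style sketch that picks questions to cover many knowledge points; the data and the question limit are invented, and a real pseudo-Boolean solver would replace the greedy loop.

```python
# Hedged sketch: greedy selection of questions to cover knowledge points.
def greedy_test_paper(questions, max_questions):
    """questions: dict question_id -> set of knowledge points it covers."""
    covered, chosen = set(), []
    while len(chosen) < max_questions:
        best = max(questions, key=lambda q: len(questions[q] - covered))
        gain = questions[best] - covered
        if not gain:                      # nothing new can be covered
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

questions = {
    "q1": {"limits", "derivatives"},
    "q2": {"derivatives", "integrals"},
    "q3": {"matrices"},
    "q4": {"integrals", "series"},
}
paper, points = greedy_test_paper(questions, max_questions=3)
print(paper, points)
```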
7. Classifying Papers from Different Computer Science Conferences
- Author
-
Avi Rosenfeld, Yaakov HaCohen-Kerner, Daniel Nisim Cohen, and Maor Tzidkani
- Subjects
Computer science ,business.industry ,Decision tree learning ,Document classification ,Key (cryptography) ,Feature (machine learning) ,Artificial intelligence ,computer.software_genre ,business ,Part of speech ,computer ,Natural language processing - Abstract
This paper analyzes the stylistic characteristics that differentiate styles of writing, and specifically different types of A-level computer science articles. To do so, we compared full papers using stylistic feature sets and a supervised machine learning method. We report on the success of this approach in identifying papers from the last 6 years of three conferences: SIGIR, ACL, and AAMAS. The approach achieves high accuracies of 95.86%, 97.04%, 93.22%, and 92.14% for the following four classification experiments: (1) SIGIR / ACL, (2) SIGIR / AAMAS, (3) ACL / AAMAS, and (4) SIGIR / ACL / AAMAS, respectively. The Part of Speech (PoS) and Orthographic feature sets were superior to all others and were found to be key components in distinguishing the different types of writing.
- Published
- 2013
8. Text Classification of Technical Papers Based on Text Segmentation
- Author
-
Thien Hai Nguyen and Kiyoaki Shirai
- Subjects
Structure (mathematical logic) ,Multi-label classification ,business.industry ,Computer science ,Supervised learning ,Text segmentation ,Binary number ,Feature selection ,computer.software_genre ,Text mining ,Artificial intelligence ,Representation (mathematics) ,business ,computer ,Natural language processing - Abstract
The goal of this research is to design a multi-label classification model which determines the research topics of a given technical paper. Based on the idea that papers are well organized and some parts of papers are more important than others for text classification, segments such as title, abstract, introduction and conclusion are intensively used in text representation. In addition, new features called Title Bi-Gram and Title SigNoun are used to improve the performance. The results of the experiments indicate that feature selection based on text segmentation and these two features are effective. Furthermore, we proposed a new model for text classification based on the structure of papers, called Back-off model, which achieves 60.45% Exact Match Ratio and 68.75% F-measure. It was also shown that Back-off model outperformed two existing methods, ML-kNN and Binary Approach.
- Published
- 2013
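Entry 8 above assigns multiple research topics per paper and up-weights informative segments such as the title. A minimal multi-label sketch is given below; the tiny corpus, the labels, and the crude "repeat the title" weighting are illustrative assumptions, not the Back-off model.

```python
# Hedged sketch: multi-label topic classification with a title-weighted representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer

papers = [
    {"title": "svm kernels", "abstract": "support vector machines for text"},
    {"title": "neural parsing", "abstract": "deep parsing with neural networks"},
    {"title": "svm parsing", "abstract": "kernels for syntactic parsing"},
]
labels = [["ml"], ["nlp"], ["ml", "nlp"]]

# Emphasize the title segment by repeating it before the abstract.
docs = [(p["title"] + " ") * 3 + p["abstract"] for p in papers]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)
X = TfidfVectorizer().fit_transform(docs)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.classes_, clf.predict(X))
```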
9. Classification of Mexican Paper Currency Denomination by Extracting Their Discriminative Colors
- Author
-
Jair Cervantes, Asdrúbal López, Lisbeth Rodríguez, and Farid García-Lamont
- Subjects
Discriminative model ,Pixel ,Machine vision ,Computer science ,business.industry ,Orientation (computer vision) ,RGB color model ,Pattern recognition ,Image processing ,Artificial intelligence ,HSL and HSV ,business - Abstract
In this paper we describe a machine vision approach to recognizing the denomination classes of Mexican paper currency by extracting their color features. A banknote's color is characterized by summing all the color vectors of the image's pixels to obtain a resultant vector; the banknote's denomination is then classified from the orientation of the resultant vector within the RGB space. In order to obtain a more precise characterization of the paper currency, the less discriminative colors of each denomination are eliminated from the images; this color selection is applied in the RGB and HSV spaces separately. Experimental results with current Mexican banknotes are presented.
- Published
- 2013
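A minimal sketch of the resultant-vector idea from entry 9 above: sum the pixel color vectors and classify by the direction of the sum in RGB space. The reference directions and the synthetic "note" are invented; the paper additionally removes the least discriminative colors first.

```python
# Hedged sketch: denomination classification by the orientation of the summed RGB vector.
import numpy as np

def resultant_direction(image_rgb):
    """Unit vector giving the orientation of the summed RGB pixel vectors."""
    v = image_rgb.reshape(-1, 3).astype(float).sum(axis=0)
    return v / np.linalg.norm(v)

# Hypothetical per-denomination reference directions (would be learned from training images).
references = {
    "20_pesos": np.array([0.45, 0.60, 0.66]),   # bluish note
    "50_pesos": np.array([0.70, 0.55, 0.45]),   # pinkish note
    "100_pesos": np.array([0.65, 0.60, 0.47]),  # reddish note
}

def classify(image_rgb):
    d = resultant_direction(image_rgb)
    return max(references,
               key=lambda k: float(d @ (references[k] / np.linalg.norm(references[k]))))

fake_note = np.dstack([np.full((50, 100), 180),
                       np.full((50, 100), 150),
                       np.full((50, 100), 120)])
print(classify(fake_note))
```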
10. Personalized Paper Recommendation Based on User Historical Behavior
- Author
-
Jie Liu, Yuan Wang, Tianbi Liu, XingLiang Dong, and Yalou Huang
- Subjects
Information retrieval ,Computer science ,business.industry ,computer.software_genre ,Field (computer science) ,Preference ,World Wide Web ,Recommendation model ,Similarity (psychology) ,Language model ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
With the increasing number of scientific papers, it is both important and difficult for paper-sharing platforms to recommend related papers accurately to users. This paper tackles the problem by proposing a method that models user historical behavior. By collecting the operations of online users on scientific papers and analyzing them in detail, we build a preference model for each user. The personalized recommendation model is constructed from a content-based filtering model and a statistical language model. Experimental results show that users' historical behavior plays an important role in user preference modeling and that the proposed method improves prediction performance in the field of technical paper recommendation.
- Published
- 2012
11. Paper Retrieval Based on Specific Paper Features: Chain and Laid Lines
- Author
-
Pavel Paclík, J.C.A. van der Lubbe, M. van Staalduinen, and E. Backer
- Subjects
Similarity (geometry) ,business.industry ,Computer science ,Image processing ,Similarity measure ,computer.software_genre ,Similitude ,Set (abstract data type) ,Metric (mathematics) ,Visual Word ,Artificial intelligence ,Data mining ,business ,computer - Abstract
This paper presents paper retrieval using the specific paper features of chain and laid lines. These features are detected in digitized paper images and represented such that they can be used for retrieval. Optimal retrieval performance is achieved by means of a trainable similarity measure for a given set of paper features. With these methods, a retrieval system is developed that art experts can use in real time to speed up their paper research.
- Published
- 2006
12. Advances in Deep Parsing of Scholarly Paper Content
- Author
-
Bernd Kiefer and Ulrich Schäfer
- Subjects
Head-driven phrase structure grammar ,Information retrieval ,Parsing ,Computer science ,business.industry ,Semantic search ,computer.software_genre ,Semantic similarity ,Language technology ,Question answering ,Artificial intelligence ,Computational linguistics ,business ,Phrase structure grammar ,computer ,Natural language processing - Abstract
We report on advances in deep linguistic parsing of the full textual content of 8200 papers from the ACL Anthology, a collection of electronically available scientific papers in the fields of Computational Linguistics and Language Technology. We describe how - by incorporating new techniques - we increase both speed and robustness of deep analysis, specifically on long sentences where deep parsing often failed in former approaches. With the current open source HPSG (Head-driven phrase structure grammar) for English (ERG), we obtain deep parses for more than 85% of the sentences in the 1.5 million sentences corpus, while the former approaches achieved only approx. 65% coverage. The resulting sentence-wise semantic representations are used in the Scientist's Workbench, a platform demonstrating the use and benefit of natural language processing (NLP) to support scientists or other knowledge workers in fast and better access to digital document content. With the generated NLP annotations, we are able to implement important, novel applications such as robust semantic search, citation classification, and (in the future) question answering and definition exploration.
- Published
- 2011
13. COMPENDIUM: A Text Summarization System for Generating Abstracts of Research Papers
- Author
-
Manuel Palomar, Elena Lloret, and María Teresa Romá-Ferri
- Subjects
Information retrieval ,business.industry ,Computer science ,User satisfaction ,computer.software_genre ,Automatic summarization ,Compendium ,Preliminary analysis ,Multi-document summarization ,Information system ,Selection (linguistics) ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
This paper presents COMPENDIUM, a text summarization system that has achieved good results in extractive summarization. Our main goal in this research is to extend it, suggesting a new approach for generating abstractive-oriented summaries of research papers. We conduct a preliminary analysis in which we compare the extractive version of COMPENDIUM (COMPENDIUM-E) with the new abstractive-oriented approach (COMPENDIUM-E-A). The final summaries are evaluated according to three criteria (content, topic, and user satisfaction) and, from the results obtained, we conclude that COMPENDIUM is appropriate for producing summaries of research papers automatically, going beyond the simple selection of sentences.
- Published
- 2011
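For orientation only, here is a bare-bones extractive summarizer (TF-IDF sentence scoring) that illustrates the extractive baseline entry 13 starts from. It is not the COMPENDIUM system or its abstractive-oriented extension, and the example document is invented.

```python
# Hedged sketch: score sentences by summed TF-IDF weight and keep the top k.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(sentences, k=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()   # sentence weight = sum of term weights
    top = sorted(np.argsort(scores)[-k:])             # keep original sentence order
    return [sentences[i] for i in top]

doc = [
    "We present a summarization system for research papers.",
    "The system scores sentences with term weights.",
    "Evaluation considers content, topic and user satisfaction.",
    "Results suggest the approach is useful for abstract generation.",
]
print(extractive_summary(doc, k=2))
```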
14. Handwriting on Paper as a Cybermedium
- Author
-
Akira Yoshida, Marcus Liwicki, Masakazu Iwamura, Seiichi Uchida, Shinichiro Omachi, and Koichi Kise
- Subjects
Sequence ,Handwriting ,Computer science ,business.industry ,Speech recognition ,Carry (arithmetic) ,Value (computer science) ,Image processing ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language processing - Abstract
In this paper, we report recent work on the data-embedding pen, which adds an ink-dot sequence along a handwritten pattern during writing. The ink-dot sequence represents information such as the writer's name, the date of writing, or a URL. This information greatly increases the value of handwriting on paper. The embedded information can be extracted from the handwritten pattern by image processing techniques and a stroke recovery technique. Consequently, the data-embedding pen allows us to augment handwritten patterns so that they carry arbitrary information.
- Published
- 2011
15. A Divide-and-Conquer Tabu Search Approach for Online Test Paper Generation
- Author
-
Minh Luan Nguyen, Siu Cheung Hui, and Alvis C. M. Fong
- Subjects
Divide and conquer algorithms ,Optimization problem ,business.industry ,Computer science ,Constraint satisfaction ,Machine learning ,computer.software_genre ,Swarm intelligence ,Multi-objective optimization ,Tabu search ,Dynamic programming ,Constraint (information theory) ,Artificial intelligence ,business ,computer - Abstract
Online Test Paper Generation (Online-TPG) is a promising approach for Web-based testing and intelligent tutoring. It generates a test paper automatically online according to a user specification based on multiple assessment criteria, and the generated test paper can then be attempted over the Web by the user for self-assessment. Online-TPG is challenging because it is an NP-hard multi-objective optimization problem over constraint satisfaction, and it must also meet online runtime requirements. Current techniques such as dynamic programming, tabu search, swarm intelligence, and biologically inspired algorithms are ineffective for Online-TPG because they generally require long runtimes to generate good-quality test papers. In this paper, we propose an efficient approach, called DAC-TS, which is based on the principle of constraint-based divide-and-conquer (DAC) and tabu search (TS) for constraint decomposition and multi-objective optimization in Online-TPG. Our empirical performance results show that the proposed DAC-TS approach outperforms other techniques in terms of runtime and paper quality.
- Published
- 2011
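As a flavor of the tabu-search component in entry 15 above, here is a miniature tabu search that selects a question subset whose total time is close to a target. The data, the single objective, and the move rules are invented stand-ins for the much richer multi-objective DAC-TS formulation.

```python
# Hedged sketch: tabu search over "include/exclude question" flips.
import random

random.seed(0)
times = [5, 8, 12, 3, 7, 10, 6, 4]        # minutes per candidate question
target = 30

def cost(selection):
    return abs(sum(times[i] for i in selection) - target)

current = set(random.sample(range(len(times)), 3))
best, tabu = set(current), []
for _ in range(50):
    # Neighborhood: flip one question in or out, skipping recently flipped (tabu) ones.
    moves = [i for i in range(len(times)) if i not in tabu]
    i = min(moves, key=lambda j: cost(current ^ {j}))
    current ^= {i}
    tabu = (tabu + [i])[-3:]               # short tabu list of recent flips
    if cost(current) < cost(best):
        best = set(current)

print(sorted(best), "total minutes:", sum(times[i] for i in best))
```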
16. ACO-GA Approach to Paper-Reviewer Assignment Problem in CMS
- Author
-
Dariusz Król and Tomasz Kolasa
- Subjects
Computer science ,business.industry ,Carry (arithmetic) ,Ant colony optimization algorithms ,Genetic algorithm ,Artificial intelligence ,Recommender system ,business ,Travelling salesman problem ,Assignment problem ,Generalized assignment problem ,Task (project management) - Abstract
Conference management, which requires proper coordination and international communication, is a complex task. There are many Conference Management Systems (CMS) that can be used to run a conference, but a convenient, free, and effective module for automatic paper-reviewer assignment is still not available. Searching for the best assignments by relying only on common paper-reviewer topics does not always give good solutions. This paper proposes an approach that uses information from reviewers' responses to tune the solution. The proposed algorithm combines a genetic algorithm (GA) and ant colony optimization (ACO) to quickly find good solutions. The experimental results confirm the superiority of the proposed algorithm.
- Published
- 2010
17. Extraction of Co-existent Sentences for Explaining Figures toward Effective Support for Scientific Papers Reading
- Author
-
Toyohide Watanabe and Ryo Takeshima
- Subjects
Word-sense disambiguation ,Computer science ,business.industry ,Process (engineering) ,media_common.quotation_subject ,Keyword extraction ,Data science ,Focus (linguistics) ,Reading (process) ,Artificial intelligence ,Function (engineering) ,business ,Mechanism (sociology) ,media_common - Abstract
It is important for researchers to read and understand scientific papers effectively. However, it takes much time and effort to read and understand the many papers directly related to their research, even when the necessary papers can be found in time. In this paper, we address a function for supporting the process of understanding scientific papers. We focus on figures, which usually explain the important topics of a series of successive paragraphs, and develop a tool that collects the mutually related sentences attached to a focused figure and supports paper understanding through that figure. We introduce a propagation mechanism for important words over the corresponding sentences; this mechanism selects candidate sentences appropriate for explaining the focused figure.
- Published
- 2010
18. Principles and practice of multi-agent systems : 13th International Conference, PRIMA 2010 : Kolkata, India, November 12-15, 2010 : revised selected papers
- Author
-
Jean-Daniel Zucker, Tuong-Vinh Ho, Duc-An Vo, and Alexis Drogoul (Unité de modélisation mathématique et informatique des systèmes complexes [Bondy] (UMMISCO), Institut de Recherche pour le Développement (IRD), Université Pierre et Marie Curie - Paris 6 (UPMC), and partner institutions); edited by N. Desai, A. Liu, and M. Winikoff
- Subjects
[INFO.INFO-CC]Computer Science [cs]/Computational Complexity [cs.CC] ,Agent-based model ,Theoretical computer science ,Computer science ,abstraction ,02 engineering and technology ,03 medical and health sciences ,0302 clinical medicine ,agent-based modelling language ,GAMA platform ,0202 electrical engineering, electronic engineering, information engineering ,emergence ,sort ,Abstraction ,Representation (mathematics) ,Simple (philosophy) ,business.industry ,ACM ,simulation ,[INFO.INFO-MO]Computer Science [cs]/Modeling and Simulation ,Visualization ,emerging structure ,030220 oncology & carcinogenesis ,Obstacle ,Boids ,020201 artificial intelligence & image processing ,Artificial intelligence ,GAML modelling language ,business - Abstract
All modellers have come across, one day, one of the popular toy agent-based models (ABMs), like "Ants", for instance, which depicts the appearance of pheromone trails built by simulated ants. They are simple, but representative of the way "real", more complex ABMs are designed: in addition to explicitly describing the individual entities used to represent the system, modellers make implicit references to abstractions corresponding to the emerging structures they are tracking in the simulations. Yet these abstractions are not represented in the models themselves as first-class entities: they are either hidden in ex-post computations or only part of visualization tasks, as if an explicit representation could somehow damage the processes at work in their emergence. This clearly constitutes an obstacle to the development of multi-level models, where emergence is likely to occur at different levels of abstraction of the system: if some of these levels are not represented in the models, the emergence of higher-level structures is not likely to be observed. This paper describes a modelling language that allows a modeller to represent and specify emerging structures in agent-based models. Firstly, to ease the description, we present these structures and their properties in four toy ABMs: Schelling, Boids, Collective Sort, and Ants. Then we define the operations that are needed to represent and specify them without sacrificing the properties of the original model. An implementation of these operations in the GAML modelling language (part of the GAMA agent-based platform) is then presented. Finally, two simulations of the Boids model are used to illustrate the expressivity of this language and the multiple advantages it brings in terms of analysis, visualization, and modelling of multi-level ABMs.
- Published
- 2012
19. Relating Halftone Dot Quality to Paper Surface Topography
- Author
-
Pekka Kumpulainen, Heimo Ihalainen, Mikko Lauri, and Marja Mettänen
- Subjects
Surface (mathematics) ,Quality (physics) ,Halftone ,business.industry ,Computer science ,Computer vision ,Artificial intelligence ,Cluster analysis ,business ,Key issues ,Reflectivity ,Subpixel rendering ,Support vector machine classification - Abstract
Most printed material is produced by printing halftone dot patterns. One of the key issues that determine the attainable print quality is the structure of the paper surface but the relation is non-deterministic in nature. We examine the halftone print quality and study the statistical dependence between the defects in printed dots and the topography measurement of the unprinted paper. The work concerns SC paper samples printed by an IGT gravure test printer. We have small-scale 2D measurements of the unprinted paper surface topography and the reflectance of the print result. The measurements before and after printing are aligned with subpixel resolution and individual printed dots are detected. First, the quality of the printed dots is studied using Self Organizing Map and clustering and the properties of the corresponding areas in the unprinted topography are examined. The printed dots are divided into high and low print quality. Features from the unprinted paper surface topography are then used to classify the corresponding paper areas using Support Vector Machine classification. The results show that the topography of the paper can explain some of the print defects. However, there are many other factors that affect the print quality and the topography alone is not adequate to predict the print quality.
- Published
- 2009
20. Screening Paper Runnability in a Web-Offset Pressroom by Data Mining
- Author
-
Ahmad Alzghoul, Magnus Hållander, Antanas Verikas, Adas Gelzinis, and Marija Bacauskiene
- Subjects
Offset (computer science) ,Computer science ,business.industry ,Data classification ,Information and Computer Science ,Feature selection ,Machine learning ,computer.software_genre ,Data mapping ,Search engine ,Test set ,Data mining ,Artificial intelligence ,business ,computer ,Classifier (UML) - Abstract
This paper is concerned with data mining techniques for identifying the main parameters of the printing press, the printing process, and the paper that affect the occurrence of paper web breaks in a pressroom. Two approaches are explored. The first one treats the problem as a task of data classification into "break" and "non-break" classes. The procedures of classifier design and selection of relevant input variables are integrated into one process based on genetic search. The search process results in a set of input variables providing the lowest average loss incurred in taking decisions. The second approach, also based on genetic search, combines procedures of input variable selection and data mapping into a low-dimensional space. The tests have shown that the web tension parameters are amongst the most important ones. It was also found that, provided the basic off-line paper parameters are in an acceptable range, the paper-related parameters recorded online contain more information for predicting the occurrence of web breaks than the off-line ones. Using the selected set of parameters, on average 93.7% of the test set data were classified correctly. The average classification accuracy for the break cases was 76.7%.
- Published
- 2009
21. A Reliable Classification Method for Paper Currency Based on LVQ Neural Network
- Author
-
Xiaofeng Li, Xuedong Li, Hongling Gou, and Jing Yi
- Subjects
Learning vector quantization ,Artificial neural network ,business.industry ,Computer science ,Feature vector ,Pattern recognition ,computer.software_genre ,Kernel principal component analysis ,Principal component analysis ,Classification methods ,Artificial intelligence ,Data mining ,business ,Classifier (UML) ,computer ,Lvq neural network - Abstract
To increase the reliability of currency classification, this paper proposes a classification method using neural networks with multi-pattern vectors. The data space of the samples is divided into three blocks, each of which is further divided into four sub-pattern vectors; kernel principal component analysis is then applied to extract features and assemble feature vectors for training an LVQ neural network classifier. Tests on the new fifth-edition RMB (1 Yuan, 5 Yuan, 10 Yuan, and 20 Yuan notes in four input orientations, up to 800 samples) show that PCA compresses the data, reduces the dimension of the input vectors, and extracts the feature vectors effectively, so that high reliability can be achieved with the LVQ network classifier.
- Published
- 2011
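A minimal sketch of a PCA-plus-LVQ pipeline in the spirit of entry 21 above. The digits dataset, ordinary (not kernel) PCA, a single prototype per class, and training-set accuracy are simplifying assumptions; the paper works on banknote images with kernel PCA and multi-pattern vectors.

```python
# Hedged sketch: PCA feature compression followed by a minimal LVQ1 classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)
Xp = PCA(n_components=16, random_state=0).fit_transform(X)

classes = np.unique(y)
protos = np.array([Xp[y == c].mean(axis=0) for c in classes])   # prototype init

lr = 0.05
for epoch in range(5):                        # LVQ1 updates
    for xi, yi in zip(Xp, y):
        k = np.argmin(((protos - xi) ** 2).sum(axis=1))
        sign = 1.0 if classes[k] == yi else -1.0   # attract correct, repel wrong
        protos[k] += sign * lr * (xi - protos[k])

pred = classes[np.argmin(((Xp[:, None, :] - protos[None]) ** 2).sum(-1), axis=1)]
print("training accuracy:", (pred == y).mean().round(3))
```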
22. S-SimRank: Combining Content and Link Information to Cluster Papers Effectively and Efficiently
- Author
-
Xiaoyong Du, Pei Li, Jun He, Yuanzhe Cai, and Hongyan Liu
- Subjects
SimRank ,Computer science ,Content analysis ,business.industry ,Graph (abstract data type) ,Artificial intelligence ,Data mining ,business ,Machine learning ,computer.software_genre ,Cluster analysis ,computer ,Link analysis - Abstract
Both content analysis and link analysis have their advantages in measuring relationships among documents. In this paper, we propose a new method that combines the two to compute the similarity of research papers, so that these papers can be clustered more accurately. In order to improve the efficiency of similarity calculation, we develop a strategy to process the relationship graph separately without affecting accuracy. We also design an approach that assigns different weights to different links between papers, which enhances the accuracy of similarity calculation. Experimental results on the ACM data set show that our new algorithm, S-SimRank, outperforms other algorithms.
- Published
- 2008
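For reference, here is plain SimRank on a tiny invented citation graph; it shows the link-based similarity that entry 22's S-SimRank extends with content information and link weights. The graph, decay constant, and iteration count are illustrative assumptions.

```python
# Hedged sketch: basic SimRank fixed-point iteration.
import numpy as np

# in_links[i] = list of nodes that cite paper i (invented graph).
in_links = {0: [2, 3], 1: [2, 3], 2: [], 3: [2]}
n, C = len(in_links), 0.8
S = np.eye(n)

for _ in range(10):                           # fixed-point iteration
    S_new = np.eye(n)
    for a in range(n):
        for b in range(n):
            if a == b or not in_links[a] or not in_links[b]:
                continue
            total = sum(S[i, j] for i in in_links[a] for j in in_links[b])
            S_new[a, b] = C * total / (len(in_links[a]) * len(in_links[b]))
    S = S_new

print(S.round(3))
```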
23. Screening Paper Formation Variations on Production Line
- Author
-
Carl Magnus Nilsson, Marcus Ejnarsson, and Antanas Verikas
- Subjects
Production line ,Computer science ,business.industry ,Feature vector ,Speech recognition ,Pattern recognition ,Novelty detection ,Data point ,Kernel (image processing) ,Outlier ,Artificial intelligence ,Canonical correlation ,business ,Diode - Abstract
This paper is concerned with a multi-resolution tool for screening paper formation variations in various frequency regions on production line. A paper web is illuminated by two red diode lasers and the reflected light recorded as two time series of high resolution measurements constitute the input signal to the papermaking process monitoring system. The time series are divided into blocks and each block is analyzed separately. The task is treated as kernel based novelty detection applied to a multi-resolution time series representation obtained from the band-pass filtering of the Fourier power spectrum of the series. The frequency content of each frequency region is characterized by a feature vector, which is transformed using the canonical correlation analysis and then categorized into the inlier or outlier class by the novelty detector. The ratio of outlying data points, significantly exceeding the predetermined value, indicates abnormalities in the paper formation. The tools developed are used for online paper formation monitoring in a paper mill.
- Published
- 2007
24. Discovering User Profiles from Semantically Indexed Scientific Papers
- Author
-
Pasquale Lops, Pierpaolo Basile, Marco de Gemmis, and Giovanni Semeraro
- Subjects
Information retrieval ,User profile ,business.industry ,Computer science ,Search engine indexing ,WordNet ,Lexical database ,computer.software_genre ,Session (web analytics) ,Naive Bayes classifier ,Text mining ,Categorization ,Artificial intelligence ,business ,computer ,Word (computer architecture) ,Natural language processing - Abstract
Typically, personalized information recommendation services automatically infer the user profile, a structured model of the user's interests, from documents that were already deemed relevant by the user. We present an approach based on Word Sense Disambiguation (WSD) for the extraction of user profiles from documents. This approach relies on a knowledge-based WSD algorithm, called JIGSAW, for the semantic indexing of documents: JIGSAW exploits the WordNet lexical database to select, among all the possible meanings (senses) of a polysemous word, the correct one. Semantically indexed documents are used to train a naive Bayes learner that infers "semantic", sense-based user profiles as binary text classifiers (user-likes and user-dislikes). Two empirical evaluations are described in the paper. In the first experimental session, JIGSAW was evaluated according to the parameters of the Senseval-3 initiative, which provides a forum where WSD systems are assessed against disambiguated datasets. The goal of the second empirical evaluation was to measure the accuracy of the user profiles in selecting relevant documents to be recommended. The performance of classical keyword-based profiles was compared to that of sense-based profiles in the task of recommending scientific papers. The results show that sense-based profiles outperform keyword-based ones.
- Published
- 2007
25. Automatic Recognition and Interpretation of Pen- and Paper-Based Document Annotations
- Author
-
Markus Weber, Andreas Dengel, and Marcus Liwicki
- Subjects
Information management ,Information retrieval ,Computer science ,business.industry ,computer.software_genre ,Semantic desktop ,Gesture recognition ,Handwriting recognition ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Semantic Web Stack ,Artificial intelligence ,business ,computer ,Semantic Web ,Natural language processing ,Gesture ,Meaning (linguistics) - Abstract
In this paper we present a system that recognizes handwritten annotations on printed text documents and interprets their semantic meaning. The system proceeds in several steps. First, document analysis methods are applied to identify possible gestures and text regions. Second, the text and gestures are recognized using several state-of-the-art recognition methods. Next, the actual marked text is identified. Finally, the recognized information is sent to the Semantic Desktop, the personal Semantic Web on the desktop computer, which supports users in their information management. In order to assess the performance of the system, we performed an experimental study in which we evaluated the different stages of the system and measured the overall performance.
- Published
- 2009
26. A Method to Analyze Preferred MTF for Printing Medium Including Paper
- Author
-
Norimichi Tsumura, Yoichi Miyake, Toshiya Nakaguchi, Martti Mäkinen, Masayuki Ukishima, and Jussi Parkkinen
- Subjects
Liquid-crystal display ,Inkwell ,Image quality ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Observer rating ,law.invention ,law ,Computer graphics (images) ,Optical transfer function ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Computer vision ,Artificial intelligence ,business ,Radiant intensity - Abstract
A method is proposed to analyze the preferred Modulation Transfer Function (MTF) of a printing medium such as paper with respect to the image quality of the print. First, the spectral intensity distribution of the printed image is simulated while varying the MTF of the medium. Next, the simulated image is displayed on a high-precision LCD to reproduce the appearance of the printed image. An observer rating experiment is then carried out on the displayed images to determine the preferred MTF. The appearance simulation of the printed image was conducted under particular printing conditions: several contents, ink colors, a halftoning method, and a print resolution (dpi). Experiments under different printing conditions can be conducted, since the simulation method is flexible with respect to changing conditions.
- Published
- 2009
27. Evaluating the Adaptation of a Learning System before the Prototype Is Ready: A Paper-Based Lab Study
- Author
-
Barbara Kump, Antonia Maas, Tobias Ley, Dietrich Albert, and Neil Maiden
- Subjects
Proactive learning ,Computer science ,business.industry ,Active learning (machine learning) ,Context (language use) ,Machine learning ,computer.software_genre ,Robot learning ,Task (project management) ,Human–computer interaction ,Adaptive system ,Adaptive learning ,Artificial intelligence ,Adaptation (computer science) ,business ,computer - Abstract
We report on results of a paper-based lab study that used information on task performance, self appraisal and personal learning need assessment to validate the adaptation mechanisms for a work-integrated learning system. We discuss the results in the wider context of the evaluation of adaptive systems where the validation methods we used can be transferred to a work-based setting to iteratively refine adaptation mechanisms and improve model validity.
- Published
- 2009
28. A Bayesian Approach to Classify Conference Papers
- Author
-
Kok-Chin Khor and Choo-Yee Ting
- Subjects
business.industry ,Computer science ,Bayesian probability ,Bayesian network ,Feature selection ,Machine learning ,computer.software_genre ,Intelligent tutoring system ,Classifier (linguistics) ,Expectation–maximization algorithm ,Prior probability ,The Internet ,Artificial intelligence ,business ,computer - Abstract
This article presents a methodological approach for classifying educational conference papers using a Bayesian Network (BN). A total of 400 conference papers were collected and categorized into 4 major topics (Intelligent Tutoring System, Cognition, e-Learning, and Teacher Education). In this study, we implemented an 80-20 split of the collected papers: 80% of the papers were used for keyword extraction and BN parameter learning, whereas the other 20% were used to assess predictive accuracy. A feature selection algorithm was applied to automatically extract keywords for each topic. The extracted keywords were then used to construct the BN, and the prior probabilities were subsequently learned using the Expectation Maximization (EM) algorithm. The network went through a series of validations by human experts and an experimental evaluation of its predictive accuracy. The results demonstrate that the proposed BN outperforms both a Naive Bayesian classifier and a BN learned only from the training data.
- Published
- 2006
29. Modelling Citation Networks for Improving Scientific Paper Classification Performance
- Author
-
Mengjie Zhang, Minh Duc Cao, Xiaoying Gao, and Yuejin Ma
- Subjects
Computer science ,business.industry ,Probabilistic logic ,Bayesian network ,Hyperlink ,computer.software_genre ,Machine learning ,Class (biology) ,Data set ,Naive Bayes classifier ,Content analysis ,Data mining ,Artificial intelligence ,Citation ,business ,computer - Abstract
This paper describes an approach to the use of citation links to improve the scientific paper classification performance. In this approach, we develop two refinement functions, a linear label refinement (LLR) and a probabilistic label refinement (PLR), to model the citation link structures of the scientific papers for refining the class labels of the documents obtained by the content-based Naive Bayes classification method. The approach with the two new refinement models is examined and compared with the content-based Naive Bayes method on a standard paper classification data set with increasing training set sizes. The results suggest that both refinement models can significantly improve the system performance over the content-based method for all the training set sizes and that PLR is better than LLR when the training examples are sufficient.
- Published
- 2006
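A minimal sketch of a linear label-refinement step in the spirit of entry 29 above: class probabilities from a content-based classifier are smoothed with the predictions of cited/citing papers. The graph, probabilities, and mixing weight are invented, and the probabilistic refinement (PLR) variant is not shown.

```python
# Hedged sketch: smooth content-based class probabilities over citation links.
import numpy as np

# Content-based class probabilities for 4 papers over 2 classes (invented).
P = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.2, 0.8],
              [0.5, 0.5]])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # citation links
alpha = 0.6                                           # weight on the paper's own prediction

refined = P.copy()
for i, nbrs in neighbors.items():
    refined[i] = alpha * P[i] + (1 - alpha) * P[nbrs].mean(axis=0)

print(refined.argmax(axis=1), refined.round(2))
```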
30. Inventing Malleable Scores: From Paper to Screen Based Scores
- Author
-
Arthur Clay
- Subjects
Malleability ,business.industry ,Computer science ,Mathematics education ,Artificial intelligence ,Standard score ,business ,Notation ,computer.software_genre ,computer ,Composition (language) ,License ,Interpreter - Abstract
This paper examines the idea of artistic license of the interpreter as a positive aspect of composition. The possibilities of participating in the creative act beyond the role of the traditional interpreter are illustrated by tracing the development of malleability in score writing in selected works of the author. Starting with the standard score, examples are given for the various forms of malleable scores that lead up to the application of real-time electronic scores in which a concept of self-conduction is feasibly implemented for use in distributed ensembles.
- Published
- 2008
31. Searching for Illustrative Sentences for Multiword Expressions in a Research Paper Database
- Author
-
Hidetsugu Nanba and Satoshi Morishita
- Subjects
Information retrieval ,Database ,business.industry ,Computer science ,Parse tree ,Limiting ,computer.software_genre ,Measure (mathematics) ,Expression (mathematics) ,Focus (linguistics) ,Component (UML) ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
We propose a method to search for illustrative sentences for English multiword expressions (MWEs) from a research paper database. We focus on syntactically flexible expressions such as "regard --- as." Traditionally, illustrative sentences that contain such expressions have been searched for by limiting the maximum number of words between the component words of the MWE. However, this method could not collect enough illustrative sentences in which clauses are inserted between component words of MWEs. We therefore devised a measure that calculates the distance between component words of an MWE in a parse tree, and use it for flexible expression search. We conducted experiments, and obtained a precision of 0.832 and a recall of 0.911.
- Published
- 2008
32. Extracting and Querying Relations in Scientific Papers
- Author
-
Yajing Zhang, Torsten Marek, Hans Uszkoreit, Christian Federmann, and Ulrich Schäfer
- Subjects
Head-driven phrase structure grammar ,Parsing ,Information retrieval ,Grammar ,Computer science ,business.industry ,media_common.quotation_subject ,WordNet ,computer.software_genre ,Named-entity recognition ,Language technology ,Minimal recursion semantics ,Artificial intelligence ,Computational linguistics ,business ,computer ,Natural language processing ,media_common - Abstract
High-precision linguistic and semantic analysis of scientific texts is an emerging research area. We describe methods and an application for extracting interesting factual relations from scientific texts in computational linguistics and language technology. We use a hybrid NLP architecture with shallow preprocessing for increased robustness and domain-specific, ontology-based named entity recognition, followed by a deep HPSG parser running the English Resource Grammar (ERG). The extracted relations in the MRS (minimal recursion semantics) format are simplified and generalized using WordNet. The resulting `quriples' are stored in a database from where they can be retrieved by relation-based search. The query interface is embedded in a web browser-based application we call the Scientist's Workbench. It supports researchers in editing and online-searching scientific papers.
- Published
- 2008
33. A Memetic Differential Evolution in Filter Design for Defect Detection in Paper Production
- Author
-
Tuomo Rossi, Ville Tirronen, Kirsi Majava, Ferrante Neri, and Tommi Kärkkäinen
- Subjects
Engineering ,education.field_of_study ,Finite impulse response ,business.industry ,Process (engineering) ,Population ,Evolutionary algorithm ,Machine learning ,computer.software_genre ,Filter design ,Differential evolution ,Memetic algorithm ,Artificial intelligence ,business ,education ,computer ,Digital filter - Abstract
This article proposes a Memetic Differential Evolution (MDE) for designing digital filters which aim at detecting defects of the paper produced during an industrial process. The MDE is an adaptive evolutionary algorithm which combines the powerful explorative features of Differential Evolution (DE) with the exploitative features of two local searchers. The local searchers are adaptively activated by means of a novel control parameter which measures fitness diversity within the population. Numerical results show that the DE framework is efficient for the class of problems under study and employment of exploitative local searchers is helpful in supporting the DE explorative mechanism in avoiding stagnation and thus detecting solutions having a high performance.
- Published
- 2007
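To make the evolutionary core of entry 33 above concrete, here is a plain DE/rand/1/bin loop on a toy objective. The MDE of the paper adds adaptively triggered local searchers and a fitness-diversity control parameter, which are omitted here; the objective and settings are illustrative.

```python
# Hedged sketch: standard differential evolution (DE/rand/1/bin) on a toy objective.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                      # stand-in for the filter-design fitness
    return np.sum(x ** 2)

dim, NP, F, CR = 5, 20, 0.7, 0.9
pop = rng.uniform(-5, 5, size=(NP, dim))
fit = np.array([objective(p) for p in pop])

for gen in range(100):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True            # ensure at least one gene crosses over
        trial = np.where(cross, mutant, pop[i])
        f = objective(trial)
        if f < fit[i]:                              # greedy selection
            pop[i], fit[i] = trial, f

print("best fitness:", fit.min().round(6))
```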
34. Mapping Unstructured Applications into Nested Parallelism Best Student Paper Award: First Prize
- Author
-
Arturo Gonzalez-Escribano, Valentín Cardeñoso-Payo, and Arjan J. C. van Gemund
- Subjects
Nested parallelism ,Computer science ,business.industry ,Computation ,Message passing ,Artificial intelligence ,Parallel computing ,business ,Critical path method ,Scheduling (computing) - Abstract
Nested parallel programming models, where the task graph associated with a computation is series-parallel, are easy to program and show good analysis properties. These can be exploited for efficient scheduling, accurate cost estimation, or automatic mapping to different architectures. Restricting synchronization structures to nested series-parallelism may bring performance losses due to a less parallel solution, as compared to more generic ones based on unstructured models (e.g. message passing).
- Published
- 2010
35. From Simulations to Theorems: A Position Paper on Research in the Field of Computational Trust
- Author
-
Karl Krukow and Mogens Nielsen
- Subjects
Theoretical computer science ,business.industry ,Reputation system ,Computer science ,Prior probability ,Principal (computer security) ,Divergence-from-randomness model ,Probabilistic logic ,Artificial intelligence ,Computational trust ,business ,Probabilistic relevance model ,Probabilistic argumentation - Abstract
Since the millennium, a quickly increasing number of research papers in the field of "computational trust and reputation" have appeared in the Computer Science literature. However, it remains hard to compare and evaluate the respective merits of proposed systems. We argue that rigorous use of formal probabilistic models enables the clear specification of the assumptions and objectives of systems, which is necessary for comparisons. To exemplify such probabilistic modeling, we present a simple probabilistic trust model in which the system assumptions as well as its objectives are clearly specified. We show how to compute (in this model) the so-called predictive probability: The probability that the next interaction with a specific principal will have a specific outcome. We sketch preliminary ideas and first theorems indicating how the use of probabilistic models could enable us to quantitatively compare proposed systems in various different environments.
- Published
- 2007
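One standard way to compute the "predictive probability" discussed in entry 35 above is a Beta-Bernoulli model of interaction outcomes; the sketch below uses that formulation with an invented prior and observation counts, and does not claim to reproduce the paper's exact model.

```python
# Hedged sketch: Beta-Bernoulli predictive probability for the next interaction.
def predictive_probability(successes, failures, alpha=1.0, beta=1.0):
    """P(next interaction succeeds | history) under a Beta(alpha, beta) prior."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# A principal observed in 8 good and 2 bad interactions, with a uniform prior:
print(predictive_probability(8, 2))   # (1 + 8) / (2 + 10) = 0.75
```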
36. Vectorization-Free Reconstruction of 3D CAD Models from Paper Drawings
- Author
-
Frank Ditrich, Herbert Suesse, and Klaus Voss
- Subjects
Engineering drawing ,business.industry ,Computer science ,3D reconstruction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Image processing ,CAD ,Iterative reconstruction ,computer.software_genre ,Computer graphics (images) ,Pattern recognition (psychology) ,Computer Aided Design ,Computer vision ,Image tracing ,Artificial intelligence ,business ,computer - Abstract
We propose a new approach for the reconstruction of 3D CAD models from paper drawings. Our method uses a combination of the well-known fleshing-out-projections method and accumulation techniques from image processing to reconstruct part models. It should provide a comfortable method to handle inaccuracies and missing elements unavoidable in scanned paper drawings while giving the user the chance to observe and interactively control the reconstruction process.
- Published
- 2004
37. An Intelligent Grading System for Descriptive Examination Papers Based on Probabilistic Latent Semantic Analysis
- Author
-
Jae-Young Lee, Yu-Seop Kim, Jeong-Ho Chang, and Jung-Seok Oh
- Subjects
Probabilistic latent semantic analysis ,Computer science ,business.industry ,Semantics ,computer.software_genre ,Similitude ,Semantic similarity ,Vector space model ,Semantic memory ,Artificial intelligence ,business ,Grading (education) ,computer ,Natural language processing - Abstract
In this paper, we developed an intelligent grading system that scores descriptive examination papers automatically, based on Probabilistic Latent Semantic Analysis (PLSA). For grading, we estimated the semantic similarity between a student paper and a model paper. PLSA is able to represent complex semantic structures of given contexts, such as text passages, and is used to build linguistic semantic knowledge for estimating contextual semantic similarity. We graded real examination papers and achieved about 74% agreement with manual grading, 7% higher than that obtained with the simple vector space model.
- Published
- 2004
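A rough sketch of the similarity-based grading idea in entry 37 above: project the model answer and the student answer into a latent semantic space and compare them with cosine similarity. Truncated SVD (LSA) stands in for PLSA here because it is readily available in scikit-learn, and all texts are invented.

```python
# Hedged sketch: latent-space cosine similarity between a model answer and a student answer.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "photosynthesis converts light energy into chemical energy",
    "plants use sunlight to make sugar from carbon dioxide and water",
    "cells release energy through respiration",
    "the mitochondria is the powerhouse of the cell",
]
model_answer = "plants convert light into chemical energy by photosynthesis"
student_answer = "photosynthesis lets plants turn sunlight into sugar"

vec = TfidfVectorizer().fit(corpus + [model_answer, student_answer])
svd = TruncatedSVD(n_components=2, random_state=0).fit(vec.transform(corpus))

def topic_vector(text):
    return svd.transform(vec.transform([text]))[0]

a, b = topic_vector(model_answer), topic_vector(student_answer)
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(round(similarity, 3))   # higher similarity -> higher score
```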
38. Conference Paper Assignment Using a Combined Greedy/Evolutionary Algorithm
- Author
-
Pedro Castillo-Valdivieso and Juan J. Merelo-Guervós
- Subjects
business.industry ,Process (engineering) ,Genetic algorithm ,Evolutionary algorithm ,Artificial intelligence ,business ,Greedy algorithm ,Mathematics - Abstract
This paper presents a method that combines a greedy and an evolutionary algorithm to assign papers submitted to a conference to reviewers. The evolutionary algorithm tries to maximize the match between referee expertise and paper topics, under the constraints that no referee should get more papers than a preset maximum and no paper should get fewer reviewers than an established minimum, while also taking into account incompatibilities and conflicts of interest. A previous version of the method was tested at another conference and obtained not only a good match but also high satisfaction of referees with the papers they were assigned. The current version has also been applied to that conference's data and to the conference to which this paper has been submitted; results were obtained in a short time and yielded a better match between reviewers and assigned papers than a greedy algorithm. The paper finishes with some conclusions and reflections on how the whole submission and refereeing process should be conducted.
- Published
- 2004
39. Extracting Positive Attributions from Scientific Papers
- Author
-
Achim Hoffmann and Son Bao Pham
- Subjects
business.industry ,Computer science ,media_common.quotation_subject ,Context (language use) ,computer.software_genre ,Machine learning ,Knowledge acquisition ,Task (project management) ,Term (time) ,Information extraction ,Knowledge base ,Expression (architecture) ,Reading (process) ,Artificial intelligence ,business ,computer ,Natural language processing ,media_common - Abstract
The aim of our work is to provide support for reading (or skimming) scientific papers. In this paper we report on the task to identify concepts or terms with positive attributions in scientific papers. This task is challenging as it requires the analysis of the relationship between a concept or term and its sentiment expression. Furthermore, the context of the expression needs to be inspected. We propose an incremental knowledge acquisition framework to tackle these challenges. With our framework we could rapidly (within 2 days of an expert’s time) develop a prototype system to identify positive attributions in scientific papers. The resulting system achieves high precision (above 74%) and high recall rates (above 88%) in our initial experiments on corpora of scientific papers. It also drastically outperforms baseline machine learning algorithms trained on the same data.
- Published
- 2004
40. Mechatronics Education: From Paper Design to Product Prototype Using LEGO NXT Parts
- Author
-
Paul Y. Oh, Daniel M. Lofaro, and Tony Truong Giang Le
- Subjects
Engineering ,Industrial design ,Process (engineering) ,business.industry ,Robot kit ,Design process ,Robotics ,Product (category theory) ,Artificial intelligence ,Mechatronics ,Engineering design process ,business ,Manufacturing engineering - Abstract
The industrial design cycle starts with design then simulation, prototyping, and testing. When the tests do not match the design requirements the design process is started over again. It is important for students to experience this process before they leave their academic institution. The high cost of the prototype phase, due to CNC/Rapid Prototype machine costs, makes hands on study of this process expensive for students and the academic institutions. This document shows that the commercially available LEGO NXT Robot kit is a viable low cost surrogate to the expensive industrial CNC/Rapid Prototype portion of the industrial design cycle.
- Published
- 2009
41. A Kernel Based Multi-resolution Time Series Analysis for Screening Deficiencies in Paper Production
- Author
-
Marcus Ejnarsson, Antanas Verikas, and Carl Magnus Nilsson
- Subjects
Discrete wavelet transform ,Artificial neural network ,Computer science ,business.industry ,Multiresolution analysis ,Detector ,Pattern recognition ,Novelty detection ,Kernel (linear algebra) ,Kernel method ,Discrete time and continuous time ,Kernel (statistics) ,Artificial intelligence ,Time series ,business - Abstract
This paper is concerned with a multi-resolution tool for the analysis of a time series, aiming to detect abnormalities in various frequency regions. The task is treated as kernel-based novelty detection applied to a multi-level time series representation obtained from the discrete wavelet transform. Given a priori knowledge that the abnormalities manifest themselves in several frequency regions, a committee of detectors utilizing data-dependent aggregation weights is built by combining the outputs of detectors operating in those regions.
- Published
- 2006
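A minimal sketch in the direction of entry 41 above: per-level wavelet energies of a signal window fed to a one-class SVM novelty detector. The synthetic signals, wavelet choice, and single detector are illustrative assumptions; the committee of region-specific detectors with data-dependent weights is not reproduced. PyWavelets (`pywt`) and scikit-learn are assumed to be installed.

```python
# Hedged sketch: DWT band energies + one-class SVM novelty detection.
import numpy as np
import pywt                                   # PyWavelets
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def wavelet_energies(window, wavelet="db4", level=4):
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.mean(c ** 2) for c in coeffs])   # one energy per band

# Train on "normal" windows only.
normal = [wavelet_energies(rng.normal(size=256)) for _ in range(200)]
detector = OneClassSVM(nu=0.05, gamma="scale").fit(normal)

# A window with an added oscillation should be flagged as novel.
odd_window = rng.normal(size=256) + 2.0 * np.sin(np.linspace(0, 40 * np.pi, 256))
print(detector.predict([wavelet_energies(odd_window)]))   # -1 marks an outlier
```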
42. Quadtree Decomposition Texture Analysis in Paper Formation Determination
- Author
-
Erik Lieng
- Subjects
Set (abstract data type) ,Basis (linear algebra) ,Computer science ,business.industry ,Process (computing) ,Quadtree ,Pattern recognition ,Context (language use) ,Artificial intelligence ,business ,Texture (geology) ,Block (data storage) ,Image (mathematics) - Abstract
The main topic of this article is a detailed description of the new and promising quadtree decomposition texture analysis method used for paper formation determination. Paper formation, the configuration of fibers, fines, and fillers in the two-dimensional spatial xy-domain of the paper, is a very important property and image analysis application for the paper industry. The basis of the method is a successive quadtree decomposition process resulting in a two-dimensional block partitioning of the analysed formation structure image. In this context each block represents a unit of variation, and the size of a quadtree block is controlled by a set of parameters. In addition to the primary features detected by the algorithm, a large set of secondary features is characterized, including gradient analysis and spatial distribution analysis.
- Published
- 2003
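A minimal sketch of a quadtree decomposition of a formation image, splitting a block while its variance exceeds a threshold. The stopping rule, threshold, and synthetic image are invented stand-ins for the article's block-control parameters and feature extraction.

```python
# Hedged sketch: recursive quadtree split of a grayscale image by block variance.
import numpy as np

def quadtree_blocks(img, x=0, y=0, var_thresh=50.0, min_size=8):
    h, w = img.shape
    if h <= min_size or w <= min_size or img.var() <= var_thresh:
        return [(x, y, w, h)]                     # leaf block
    h2, w2 = h // 2, w // 2
    blocks = []
    blocks += quadtree_blocks(img[:h2, :w2], x, y, var_thresh, min_size)
    blocks += quadtree_blocks(img[:h2, w2:], x + w2, y, var_thresh, min_size)
    blocks += quadtree_blocks(img[h2:, :w2], x, y + h2, var_thresh, min_size)
    blocks += quadtree_blocks(img[h2:, w2:], x + w2, y + h2, var_thresh, min_size)
    return blocks

rng = np.random.default_rng(3)
formation = rng.normal(128, 5, size=(64, 64))
formation[16:32, 16:32] += rng.normal(0, 30, size=(16, 16))   # a high-variation patch
blocks = quadtree_blocks(formation)
print(len(blocks), "blocks; smallest block width:", min(b[2] for b in blocks))
```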
43. Working Group II — Acquisition — Position Paper: Data collection and 3D reconstruction
- Author
-
Sisi Zlatanova
- Subjects
Data collection ,Laser scanning ,business.industry ,Computer science ,3D reconstruction ,Point cloud ,law.invention ,Photogrammetry ,Software ,Data acquisition ,law ,Computer vision ,Artificial intelligence ,Radar ,business - Abstract
3D Geographical Information Systems need 3D representations of objects and, hence, 3D data acquisition and reconstruction methods. Developments in these two areas, however, are not compatible. While numerous operational sensors for 3D data acquisition are readily available on the market (optical, laser scanning, radar, thermal, acoustic, etc.), 3D reconstruction software offers predominantly manual and semi-automatic tools (e.g. Cyclone, Leica Photogrammetry Suite, PhotoModeler or Sketch-up). The ultimate 3D reconstruction algorithm is still a challenge and a subject of intensive research. Many 3D reconstruction approaches have been investigated; they can be classified into two large groups, optical image-based and point cloud-based, with respect to the sensor used, which can be mounted on different platforms.
- Published
- 2008
44. The New Area Subdivision Methods for Producing Shapes of Colored Paper Mosaic
- Author
-
Dae Wook Kang, Sanghyun Seo, Young Park, and Kyunghyun Yoon
- Subjects
Voronoi polygon ,business.industry ,Color image ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Image segmentation ,Mosaic ,Rendering (computer graphics) ,Computer graphics ,Colored ,Computer Science::Computer Vision and Pattern Recognition ,Polygon ,Quadtree ,Computer vision ,Segmentation ,Artificial intelligence ,business ,Voronoi diagram ,ComputingMethodologies_COMPUTERGRAPHICS ,Subdivision - Abstract
This paper proposes a colored paper mosaic rendering technique based on image segmentation that can automatically generate a torn and tagged colored paper mosaic effect. The previous method [12] did not produce satisfactory results because it had to use pieces of the same size. The two proposed methods for determining paper shape and location, both based on segmentation, subdivide an image area according to the characteristics of the image. The first method generates a Voronoi polygon after subdividing the segmented image again using a quadtree. The second method applies the Voronoi diagram to each segmentation layer. Through these methods, the characteristics of the image are expressed in more detail than with the previous colored paper mosaic rendering method. (A sketch of the per-layer Voronoi step follows this entry.)
- Published
- 2002
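A minimal sketch of the second idea, a Voronoi diagram built separately inside each segmentation layer, assuming NumPy and scipy.spatial.Voronoi. The seed density and the function name voronoi_per_segment are illustrative choices, not part of the paper's method.

# Per-segment Voronoi subdivision so that tile size follows image regions (sketch).
import numpy as np
from scipy.spatial import Voronoi

def voronoi_per_segment(label_img, seeds_per_1000_px=3, rng=None):
    """Return {segment_label: Voronoi diagram} with seeds sampled inside each segment."""
    rng = np.random.default_rng(rng)
    diagrams = {}
    for label in np.unique(label_img):
        ys, xs = np.nonzero(label_img == label)
        n_seeds = max(4, int(len(ys) * seeds_per_1000_px / 1000))
        pick = rng.choice(len(ys), size=min(n_seeds, len(ys)), replace=False)
        points = np.stack([xs[pick], ys[pick]], axis=1).astype(float)
        if len(points) >= 4:           # Qhull needs a few non-degenerate points in 2-D
            diagrams[label] = Voronoi(points)
    return diagrams

Each resulting cell can then be clipped to its segment mask and rendered as one torn paper piece, so that detailed regions receive smaller pieces than flat background regions.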
45. Relevant Information Extraction Driven with Rhetorical Schemas to Summarize Scientific Papers
- Author
-
Abdelmajid Ben Hamadou and Mariem Ellouze
- Subjects
Structural linguistics ,Phrase ,Computer science ,business.industry ,computer.software_genre ,Cohesion (linguistics) ,Information extraction ,Knowledge base ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Rhetorical question ,Artificial intelligence ,Source text ,business ,computer ,Natural language processing ,Sentence - Abstract
Automatic summaries are often subject to several criticisms (e.g., lack of cohesion and coherence). In this paper, we propose an approach that uses coherent Summary-Schemas (templates) conceived from the rhetorical structure of scientific papers including their abstracts. The Summary-Schemas embed rhetorical roles specified by signatures (sets of positional, structural, linguistic and thematic features) that guide the search for appropriate sentences in the source text.
- Published
- 2002
46. 3D Reconstruction of Paper Based Assembly Drawings: State of the Art and Approach
- Author
-
Harald Kunze, Hans Grabowski, Arno Michelis, El-Fathi El-Mejbri, and Ralf-Stefan Lossack
- Subjects
Engineering drawing ,Process (engineering) ,Computer science ,business.industry ,3D reconstruction ,Computer Aided Design ,Image tracing ,Artificial intelligence ,State (computer science) ,business ,computer.software_genre ,computer ,Digitization - Abstract
Engineering solutions are generally documented in assembly and part drawings and bills of materials. A great benefit, both qualitative and commercial, can be achieved if these paper-based stores of information can be transformed into digital information archives. The process of this transformation is called reconstruction. The reconstruction process for paper-based assembly drawings consists of four steps: digitization; vectorization/interpretation; 3D reconstruction of the parts; and 3D reconstruction of the assembly. This paper evaluates existing commercial systems worldwide for the interpretation of paper-based mechanical engineering drawings. For a complete reconstruction process, 3D reconstruction is needed. This functionality is already supported by some CAD systems to a certain extent, but it remains a major topic of research. One CAD system which converts 2D CAD models into 3D CAD models is presented. Finally, after the reconstruction of the parts, the whole assembly can be reconstructed. Until now, no system for the automatic reconstruction of assemblies has been available. In our paper we present a general approach for the automatic reconstruction of 3D assembly model data by interpreting mechanical engineering 2D assembly drawings, their part drawings, and the bill of materials. (A skeleton of the four-step pipeline follows this entry.)
- Published
- 2002
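A skeleton of the four-step pipeline named in the abstract, written as a chain of stages. The stage callables are placeholders for illustration only, not an existing system's API.

# Four-step reconstruction pipeline as a chain of named stages (illustrative skeleton).
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[Any], Any]

def reconstruction_pipeline(scan_image,
                            digitize, vectorize, reconstruct_parts, reconstruct_assembly):
    """Run the four stages in order, returning intermediate results for inspection."""
    stages: List[Stage] = [
        Stage("digitization", digitize),                  # raster scan of the drawing
        Stage("vectorization/interpretation", vectorize), # lines, symbols, dimensions
        Stage("3D reconstruction of parts", reconstruct_parts),
        Stage("3D reconstruction of assembly", reconstruct_assembly),
    ]
    results, data = {}, scan_image
    for stage in stages:
        data = stage.run(data)
        results[stage.name] = data
    return results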
47. Capturing Abstract Matrices from Paper
- Author
-
Volker Sorge, Toshihiro Kanahori, Masakazu Suzuki, and Alan P. Sexton
- Subjects
business.industry ,Computer science ,media_common.quotation_subject ,Semantic analysis (machine learning) ,Image processing ,Ambiguity ,computer.software_genre ,Semantics ,Task (project management) ,Matrix (mathematics) ,Artificial intelligence ,business ,computer ,Natural language processing ,media_common - Abstract
Capturing and understanding mathematics from printed form is an important task in translating written mathematical knowledge into electronic form. While the problem of syntactically recognising mathematical formulas from scanned images has received attention, very little work has been done on the semantic validation and correction of recognised formulas. We present a first step towards an integrated system by combining the Infty system with a semantic analyser for matrix expressions. We applied the combined system in experiments on the semantic analysis of matrix images scanned from textbooks. While the first results are encouraging, they also demonstrate the many ambiguities one has to deal with when analysing matrix expressions in different contexts. We give a detailed overview of the problems we encountered, which motivate further research into the semantic validation of mathematical formula recognition. (A sketch of one simple semantic check follows this entry.)
- Published
- 2006
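A minimal sketch of one kind of semantic check hinted at above: verifying that a recognised matrix has consistent row lengths and that ellipsis symbols line up plausibly. This illustrates semantic validation in general; it is not the Infty system's interface or the paper's analyser.

# Simple semantic checks on a recognised matrix (illustrative sketch).
from typing import List

def validate_matrix(rows: List[List[str]]):
    """Return a list of human-readable problems found in a recognised matrix."""
    problems = []
    if not rows:
        return ["empty matrix"]
    width = len(rows[0])
    for i, row in enumerate(rows):
        if len(row) != width:
            problems.append(f"row {i} has {len(row)} entries, expected {width}")
    # heuristic: an ellipsis column should have concrete entries only at top and bottom
    for j in range(width):
        col = [row[j] for row in rows if j < len(row)]
        if any(c == "..." for c in col) and not all(
                c == "..." or k in (0, len(col) - 1) for k, c in enumerate(col)):
            problems.append(f"column {j}: ellipsis mixed with concrete entries mid-column")
    return problems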
48. Formal Versus Rigorous Mathematics: How to Get Your Papers Published
- Author
-
Erik Rosenthal
- Subjects
Computer science ,Proof theory ,business.industry ,Perspective (graphical) ,Mathematics education ,Quality (philosophy) ,Artificial intelligence ,Mathematical proof ,Symbolic computation ,business ,Formal verification ,Abstract structure - Abstract
This talk will consider rigorous mathematics and the nature of proof. It begins with an historical perspective and follows the development of formal mathematics. The talk will conclude with examples demonstrating that understanding the relationship between formal mathematics and rigorous proof can assist with both the discovery and the quality of real proofs of real results.
- Published
- 2005
49. Helli-Respina 2001 Team Description Paper
- Author
-
Omid Aladini, B Bahador Nooraei, and N Siavash Rahbar
- Subjects
Intelligent agent ,Computer science ,business.industry ,Unsupervised learning ,Artificial intelligence ,computer.software_genre ,Agent architecture ,business ,computer - Abstract
One of the most important problems in the development of intelligent agents is adaptation to the environment. In this paper we briefly describe the Helli-Respina soccer simulator team, which uses a new self-adaptive method named Dynamic Multi-Behavior Assessment (DMBA). A built-in behavior manager, the dynamic behavior transformer, lets the agent choose the best algorithms to apply during the game. The system continually tries to choose the set of available algorithms that gives the best result against each opponent. The main objective of this research is how to choose a set of algorithms dynamically so as to obtain the best result against an opponent. (An illustrative selection scheme follows this entry.)
- Published
- 2002
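An illustrative self-adaptive selection scheme in the spirit of the abstract: keep a running score per candidate behaviour against the current opponent and mostly play the best one, occasionally exploring. The abstract does not specify DMBA's internals, so this is a generic epsilon-greedy sketch, not the authors' method.

# Generic epsilon-greedy behaviour selection against an opponent (illustrative sketch).
import random

class BehaviorSelector:
    def __init__(self, behaviors, explore_rate=0.1):
        self.behaviors = list(behaviors)            # e.g. ["press", "possession", "counter"]
        self.explore_rate = explore_rate
        self.score = {b: 0.0 for b in self.behaviors}
        self.count = {b: 0 for b in self.behaviors}

    def choose(self):
        """Pick the currently best-scoring behaviour, with occasional exploration."""
        if random.random() < self.explore_rate:
            return random.choice(self.behaviors)
        return max(self.behaviors, key=lambda b: self.score[b])

    def feedback(self, behavior, reward):
        """Update the running average score after observing a game outcome."""
        self.count[behavior] += 1
        n = self.count[behavior]
        self.score[behavior] += (reward - self.score[behavior]) / n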
50. Learning on Paper: Diagrams and Discovery in Game Playing
- Author
-
J.-Holger Keibel and Susan L. Epstein
- Subjects
Cognitive science ,Game playing ,Computer science ,Process (engineering) ,business.industry ,media_common.quotation_subject ,Spatial intelligence ,Cognition ,Task (project management) ,Artificial intelligence ,business ,Game tree ,Game theory ,Diversity (politics) ,media_common - Abstract
Diagrams play an important role in human problem solving. In response to a challenging assignment, three students produced diagrams and subsequent verbal protocols that offer insight into human cognition. The diversity and richness of their response, and their ability to address the task via diagrams, provide an incisive look at the role diagrams play in the development of expertise. This paper recounts how their diagrams led and misled them, and how the diagrams both explained and drove explanation. It also considers how this process might be adapted for a computer program.
- Published
- 2002