3,813 results
Search Results
2. Generating Color Documents from Segmented and Synthetic Elements.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kamel, Mohamed, Campilho, Aurélio, Lins, Rafael Dueire, and da Silva, João Marcelo Monte
- Abstract
This paper presents a way of generating color documents from elements extracted from an original document image. The scheme yields synthetic documents whose visual information is similar to the original's. The method has several advantages, as it allows far more efficient storage and transmission of the information. Its rationale is to decompose a color document into paper and ink texture parameters and textual information. The textual information may be either typed or handwritten and is stored as a compressed monochromatic image. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
3. Synthesizing the Artistic Effects of Ink Painting.
- Author
-
Kalviainen, Heikki, Parkkinen, Jussi, Kaarna, Arto, Ching-tsorng Tsai, Chishyan Liaw, Cherng-yue Huang, and Jiann-Shu Lee
- Abstract
A novel method is developed that simulates the artistic effects of ink refusal and stroke-trace reservation in ink paintings. The main ingredients of ink are water, carbon particles and glue. Glue, however, has not been taken into account in previous research, although it plays an important role in ink diffusion. In our ink-diffusion model, we consider the number of fibers and the quantity of glue as parameters of the paper structure, and we simulate the physical interaction among water, carbon particles, glue, and the fiber mesh of the paper. The realistic renderings created from our models demonstrate that they are successful and able to imitate the special artistic effects of ink painting. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
4. Automating Visual Inspection of Print Quality.
- Author
-
Vartiainen, J., Lyden, S., Sadovnikov, A., Kamarainen, J. -K., Lensu, L., Paalanen, P., Kalviainen, H., Campilho, Aurélio, and Kamel, Mohamed
- Abstract
Automatic evaluation of visual print quality is addressed in this study. Because perceived visual quality depends on many complex factors, its evaluation is divided into separate parts that can be individually evaluated using standardized assessments. Most of the assessments, however, require active evaluation by trained experts. In this paper one quality assessment, missing-dot detection in printed dot patterns, is addressed by defining sufficient hardware for image acquisition and a method for detecting and counting missing dots on a test strip. The experimental results show how human assessment can be automated with the help of machine vision, making the test more repeatable and accurate. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
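The missing-dot assessment above amounts to comparing detected dark dots against an expected regular grid. A minimal illustrative sketch in Python with NumPy; the grid spacing, search radius and threshold here are assumptions for illustration, not the paper's actual acquisition setup:

```python
import numpy as np

def find_missing_dots(image, spacing, radius, thresh=0.5):
    """Count expected grid positions with no printed dot nearby.

    image   : 2D float array, dark dots (0) on a light background (1)
    spacing : nominal dot pitch in pixels (assumed known)
    radius  : search radius around each expected dot centre
    """
    h, w = image.shape
    missing = []
    # expected dot centres on a regular grid (a simplifying assumption)
    for y in range(spacing // 2, h, spacing):
        for x in range(spacing // 2, w, spacing):
            patch = image[max(0, y - radius):y + radius + 1,
                          max(0, x - radius):x + radius + 1]
            if patch.min() > thresh:      # no dark pixel => dot is missing
                missing.append((y, x))
    return missing

# synthetic test strip: a 4x4 dot grid with one dot erased
img = np.ones((40, 40))
for y in range(5, 40, 10):
    for x in range(5, 40, 10):
        img[y-1:y+2, x-1:x+2] = 0.0
img[14:17, 24:27] = 1.0  # erase the dot at (15, 25)
print(find_missing_dots(img, spacing=10, radius=3))  # -> [(15, 25)]
```

A real system would first estimate the grid position and pitch from the scan itself rather than assuming them.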
5. Shape-Based Co-occurrence Matrices for Defect Classification.
- Author
-
Kalviainen, Heikki, Parkkinen, Jussi, Kaarna, Arto, Rautkorpi, Rami, and Iivarinen, Jukka
- Abstract
This paper discusses two statistical shape descriptors, the Edge Co-occurrence Matrix (ECM) and the Contour Co-occurrence Matrix (CCM), and their use in surface defect classification. Experiments are run on two image databases, one containing metal surface defects and the other paper surface defects. The extraction of Haralick features from the matrices is considered. The descriptors are compared to other shape descriptors from e.g. the MPEG-7 standard. The results show that the ECM and the CCM give superior classification accuracies. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
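A contour co-occurrence matrix of the kind the CCM descriptor builds on, with a few Haralick features extracted from it, can be sketched as follows. The 8-bin direction quantization and the particular features chosen are illustrative assumptions, not the authors' exact definitions:

```python
import numpy as np

def contour_cooccurrence(directions, n_bins=8, offset=1):
    """Co-occurrence matrix of quantized contour directions (CCM-style).

    directions : sequence of direction codes in [0, n_bins)
    offset     : distance along the contour between co-occurring pairs
    """
    M = np.zeros((n_bins, n_bins))
    for i in range(len(directions) - offset):
        M[directions[i], directions[i + offset]] += 1
    return M / max(M.sum(), 1)  # normalise to a joint probability

def haralick_features(P):
    """A few classic Haralick features of a normalised co-occurrence matrix."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

# chain code of a small square contour: right, down, left, up (2 steps each)
chain = [0, 0, 2, 2, 4, 4, 6, 6]
P = contour_cooccurrence(chain)
contrast, energy, homogeneity = haralick_features(P)
print(round(float(contrast), 4), round(float(energy), 4))
```

The feature triple then serves as a fixed-length shape descriptor for a classifier, regardless of the contour's length.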
6. Motion Information Exploitation in H.264 Frame Skipping Transcoding.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Blanc-Talon, Jacques, Philips, Wilfried, Popescu, Dan, Scheunders, Paul, and Li, Qiang
- Abstract
This paper proposes an adaptive motion mode selection method for H.264 frame skipping transcoding. In order to reduce the high complexity arising from variable block sizes in H.264, the proposed method exploits the original motion information in the incoming bitstreams. In addition, the paper adopts the Forward Dominant Vector Selection approach for MV composition in H.264 transcoding and compares it with the Bilinear Interpolation method. The simulation results show that the proposed method achieves a good trade-off between computational complexity and video quality. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
7. Recognition and Classification of Figures in PDF Documents.
- Author
-
Wenyin Liu, Lladós, Josep, Mingyan Shao, and Futrelle, Robert P.
- Abstract
Graphics recognition for raster-based input discovers primitives such as lines, arrowheads, and circles. This paper focuses on graphics recognition of figures in vector-based PDF documents. The first stage consists of extracting the graphic and text primitives corresponding to figures. An interpreter was constructed to translate PDF content into a set of self-contained graphics and text objects (in Java), freed from the intricacies of the PDF file. The second stage consists of discovering simple graphics entities which we call graphemes, e.g., a pair of primitive graphic objects satisfying certain geometric constraints. The third stage uses machine learning to classify figures using grapheme statistics as attributes. A boosting-based learner (LogitBoost in the Weka toolkit) was able to achieve 100% classification accuracy in hold-out-one training/testing using 16 grapheme types extracted from 36 figures from BioMed Central journal research papers. The approach can readily be adapted to raster graphics recognition. Keywords: Graphics Recognition, PDF, Graphemes, Vector Graphics, Machine Learning, Boosting. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
8. Combining Fuzzy Clustering and Morphological Methods for Old Documents Recovery.
- Author
-
Marques, Jorge S., Pérez de la Blanca, Nicolás, Caldas Pinto, João R., Bandeira, Lourenço, Sousa, João M.C., and Pina, Pedro
- Abstract
In this paper we tackle the specific problem of old document recovery. Spots, print-through, underlines and other ageing features are undesirable not only because they harm the visual appearance of the document, but also because they hinder subsequent Optical Character Recognition (OCR). This paper proposes a new method integrating fuzzy clustering of the color properties of the original images with mathematical morphology. We show that this technique leads to higher-quality recovered images and, at the same time, delivers clean binary text for OCR applications. The proposed method was applied to XIX-century books, which were cleaned very effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
9. A Segmentation Method Based on Dynamic Programming for Breast Mass in MRI Images.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Jihong Liu, Weina Ma, and Soo-Young Lee
- Abstract
Tumor segmentation in breast MRI (Magnetic Resonance Imaging) images is difficult due to the complicated galactophore structure. The work in this paper attempts to accurately segment abnormal breast masses in MRI images. The ROI (Region of Interest) is segmented using a novel optimal edge detection technique based on DP (Dynamic Programming), an optimal approach to multistage decision-making. The method processes the image to obtain the minimum cumulative cost matrix, combining it with a LUM nonlinear enhancement filter, Gaussian preprocessing, non-maximum suppression and double-threshold filtering, and then traces the whole optimal edge. The experimental results show that this method is robust and efficient for image edge detection and can segment the breast tumor area more accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
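The core DP step, accumulating a minimum cumulative cost matrix and backtracking the optimal edge, can be sketched as follows. This is a generic seam-style formulation; the paper's actual cost terms (LUM enhancement, Gaussian preprocessing, gradient suppression) are not reproduced here:

```python
import numpy as np

def dp_trace_edge(cost):
    """Trace a minimum-cost left-to-right path through a cost image.

    cost : 2D array, low values where an edge is likely.
    Returns one row index per column (the traced edge).
    """
    h, w = cost.shape
    C = cost.copy()                      # cumulative cost matrix
    back = np.zeros((h, w), dtype=int)
    for x in range(1, w):
        for y in range(h):
            ys = range(max(0, y - 1), min(h, y + 2))   # 8-connected moves
            prev = min(ys, key=lambda yy: C[yy, x - 1])
            C[y, x] += C[prev, x - 1]
            back[y, x] = prev
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(C[:, -1]))]
    for x in range(w - 1, 0, -1):
        path.append(back[path[-1], x])
    return path[::-1]

# a diagonal valley of zero cost: the DP should follow it exactly
cost = np.ones((5, 5))
for k in range(5):
    cost[k, k] = 0.0
print([int(v) for v in dp_trace_edge(cost)])  # -> [0, 1, 2, 3, 4]
```

For a closed tumor contour the same recurrence is typically run in polar coordinates around a seed point, so "left to right" becomes "around the boundary".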
10. Gradient Direction Edge Enhancement Based Nucleus and Cytoplasm Contour Detector of Cervical Smear Images.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Shys-Fan Yang-Mao, Yung-Fu Chen, Yung-Kuan Chan, and Meng-Hsin Tsai
- Abstract
This paper presents a gradient direction edge enhancement based contour (GDEEBC) detector to segment the nucleus and cytoplasm in cervical smear images. To eliminate noise, the paper proposes a trim-meaning filter that effectively removes impulse and Gaussian noise while preserving the edge sharpness of an object. In addition, a bi-group enhancer is proposed to make a clear-cut separation of the pixels lying between two objects. Finally, a gradient direction (GD) enhancer is presented to suppress the gradients of noise and to brighten the gradients of object contours. The experimental results show that all the proposed techniques perform impressively. Beyond cervical smear images, they can also be used for object segmentation in other types of images. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
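A trimmed-mean filter of the kind the "trim-meaning" filter describes can be sketched as follows; the window size and trim count are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def trimmed_mean_filter(image, size=3, trim=2):
    """Trimmed-mean smoothing filter.

    For each pixel, sort the size*size neighbourhood, drop the `trim`
    smallest and `trim` largest values, and average the rest. Impulse
    outliers are rejected outright while Gaussian noise is averaged,
    so edges blur less than under a plain mean filter.
    """
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            win = np.sort(padded[y:y + size, x:x + size], axis=None)
            out[y, x] = win[trim:len(win) - trim].mean()
    return out

# a flat patch with one impulse: the spike is rejected entirely
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # salt noise
print(trimmed_mean_filter(img)[2, 2])  # -> 10.0
```

With trim=0 this degrades to a mean filter and with maximal trimming to a median filter, which is why a middle setting handles mixed impulse-plus-Gaussian noise well.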
11. Integrating Disparity Images by Incorporating Disparity Rate.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Sommer, Gerald, Klette, Reinhard, Vaudrey, Tobi, Badino, Hernán, and Gehrig, Stefan
- Abstract
Intelligent vehicle systems need to distinguish which objects are moving and which are static. A static concrete wall lying in the path of a vehicle should be treated differently than a truck moving in front of the vehicle. This paper proposes a new algorithm that addresses this problem, by providing dense dynamic depth information, while coping with real-time constraints. The algorithm models disparity and disparity rate pixel-wise for an entire image. This model is integrated over time and tracked by means of many pixel-wise Kalman filters. This provides better depth estimation results over time, and also provides speed information at each pixel without using optical flow. This simple approach leads to good experimental results for real stereo sequences, by showing an improvement over previous methods. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
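Tracking disparity and disparity rate with a per-pixel Kalman filter reduces, at a single pixel, to a standard two-state constant-velocity filter over the scalar disparity measurement. A sketch under assumed noise levels (the q and r values are hypothetical, not the paper's tuned parameters):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.5):
    """One predict/update cycle for one pixel's [disparity, rate] state.

    x : state estimate [d, d_dot],  P : 2x2 covariance
    z : measured disparity for this frame
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-rate motion model
    H = np.array([[1.0, 0.0]])                # we only observe disparity
    x = F @ x                                 # predict
    P = F @ P @ F.T + q * np.eye(2)
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + r                       # innovation covariance
    K = P @ H.T / S                           # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# disparity growing by 1 px/frame: the filter learns the rate
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0, 5.0]:
    x, P = kalman_step(x, P, z)
print(x)  # disparity estimate near 5, rate near 1
```

The rate state is what yields per-pixel speed information without computing optical flow: an object approaching the camera shows a positive, growing disparity rate.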
12. Probabilistic Combination of Visual Cues for Object Classification.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Recent solutions to object classification have focused on the decomposition of objects into representative parts. However, the vast majority of these methods are based on single visual cue measurements. Psychophysical evidence suggests that humans use multiple visual cues to accomplish recognition. In this paper, we address the problem of integrating multiple visual information for object recognition. Our contribution in this paper is twofold. First, we describe a new probabilistic integration model of multiple visual cues at different spatial locations across the image. Secondly, we use the cue integration framework to classify images of objects by combining two-dimensional and three-dimensional visual cues. Classification results obtained using the method are promising. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
13. Exploitation of Combined Scalability in Scalable H.264/AVC Bitstreams by Using an MPEG-21 XML-Driven Framework.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Blanc-Talon, Jacques, Philips, Wilfried, Popescu, Dan, Scheunders, Paul, and De Schrijver, Davy
- Abstract
The heterogeneity of contemporary multimedia environments requires a format-agnostic adaptation framework for the consumption of digital video content. Preferably, scalable bitstreams are used in order to satisfy as many circumstances as possible. In this paper, the scalable extension of the H.264/AVC specification is used to obtain the parent bitstreams. Adaptation along the combined scalability axis of the bitstreams must occur in a format-independent manner, so an abstraction layer over the bitstream is needed. XML descriptions are used to represent the high-level structure of the bitstreams, relying on the MPEG-21 Bitstream Syntax Description Language standard. The adaptation process is executed in the XML domain by transforming the XML descriptions according to the usage environment. Such an adaptation engine is discussed in this paper, in which all communication is based on XML descriptions without knowledge of the underlying coding format. From the performance measurements, one can conclude that the transformations in the XML domain and the generation of the corresponding adapted bitstream can be realized in real time. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
14. CLEAR Evaluation of Acoustic Event Detection and Classification Systems.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Rangan, C. Pandu, Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Stiefelhagen, Rainer, Garofolo, John, Temko, Andrey, Malkin, Robert, and Zieger, Christian
- Abstract
In this paper, we present the results of the Acoustic Event Detection (AED) and Classification (AEC) evaluations carried out in February 2006 by the three participant partners from the CHIL project. The primary evaluation task was AED of the testing portions of the isolated sound databases and seminar recordings produced in CHIL. Additionally, a secondary AEC evaluation task was designed using only the isolated sound databases. The set of meeting-room acoustic event classes and the metrics were agreed by the three partners and ELDA was in charge of the scoring task. In this paper, the various systems for the tasks of AED and AEC and their results are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
15. A Study of Hippocampal Shape Difference Between Genders by Efficient Hypothesis Test and Discriminative Deformation.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Ayache, Nicholas, Ourselin, Sébastien, Maeder, Anthony, Luping Zhou, and Hartley, Richard
- Abstract
Hypothesis testing is an important way to detect the statistical difference between two populations. In this paper, we use the Fisher permutation and bootstrap tests to differentiate hippocampal shape between genders. These methods are preferred to traditional hypothesis tests which impose assumptions on the distribution of the samples. An efficient algorithm is adopted to rapidly perform the exact tests. We extend this algorithm to multivariate data by projecting the original data onto an "informative direction" to generate a scalar test statistic. This "informative direction" is found to preserve the original discriminative information. This direction is further used in this paper to isolate the discriminative shape difference between classes from the individual variability, achieving a visualization of shape discrepancy. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
16. Facial Expression Recognition Using 3D Facial Feature Distances.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kamel, Mohamed, Campilho, Aurélio, Soyel, Hamit, and Demirel, Hasan
- Abstract
In this paper, we propose a novel approach to facial expression analysis and recognition. The approach relies on distance vectors retrieved from the 3D distribution of facial feature points to classify universal facial expressions. A neural network architecture is employed as a classifier to recognize facial expressions from a distance vector obtained from 3D facial feature locations. Facial expressions such as anger, sadness, surprise, joy, disgust, fear and neutral are successfully recognized with an average recognition rate of 91.3%; the highest rate, 98.3%, is achieved for surprise. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
17. Data Segmentation of Stereo Images with Complicated Background.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kamel, Mohamed, Campilho, Aurélio, Yi Wei, Shushu Hu, and Yu Li
- Abstract
With the development of computer science, there is an increasing demand for object recognition in stereo images. Because a binocular image pair contains more, and more complicated, information than a monocular image, stereo vision analysis is a difficult task, and extracting the region of the user's interest is a vital step in reducing data redundancy and improving the robustness and reliability of the analysis. The original stereo sequences used in the paper were obtained from two parallel video cameras mounted on a vehicle driving through a residential area. This paper targets the problem of data segmentation of those stereo images and proposes a set of algorithms to separate the foreground from the complicated, changing background. Experiments show that the whole process is fast, reduces data redundancy efficiently, and improves the overall performance of the subsequent obstacle extraction. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
18. Surface Reconstruction Using Polarization and Photometric Stereo.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kropatsch, Walter G., Kampel, Martin, Hanbury, Allan, Atkinson, Gary A., and Hancock, Edwin R.
- Abstract
This paper presents a novel shape recovery technique that combines photometric stereo with polarization information. First, a set of ambiguous surface normals are estimated from polarization data. This is achieved using Fresnel theory to interpret the polarization patterns of light reflected from dielectric surfaces. The process is repeated using three different known light source positions. Photometric stereo is then used to disambiguate the surface normals. The relative pixel brightnesses for the different light source positions reveal the correct surface orientations. Finally, the resulting unambiguous surface normal estimates are integrated to recover a depth map. The technique is tested on various objects of different materials. The paper also demonstrates how the depth estimates can be enhanced by applying methods suggested in earlier work. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
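The photometric-stereo step rests on the Lambertian relation I = albedo * L n for three known light directions, so at each pixel the (unambiguous) normal comes from a 3x3 linear solve. A minimal sketch; the idealised lights and noise-free intensities are assumptions, and the polarization-based disambiguation is not shown:

```python
import numpy as np

def photometric_stereo_normal(L, I):
    """Recover a surface normal from three intensities under known lights.

    L : 3x3 matrix, one unit light direction per row
    I : 3 intensities at the same pixel (Lambertian: I = albedo * L @ n)
    """
    g = np.linalg.solve(L, I)       # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# synthetic check: render a known normal, then recover it
n_true = np.array([0.0, 0.6, 0.8])
L = np.eye(3)                        # three orthogonal light directions
I = 0.9 * L @ n_true                 # albedo 0.9
n, albedo = photometric_stereo_normal(L, I)
print(n, albedo)  # recovers [0, 0.6, 0.8] and albedo 0.9
```

In the paper's pipeline the roles are reversed: polarization supplies candidate normals and the relative brightnesses under the three lights select the consistent one.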
19. Reduction of Ring Artifacts in High Resolution X-Ray Microtomography Images.
- Author
-
Franke, Katrin, Müller, Klaus-Robert, Nickolay, Bertram, Schäfer, Ralf, Axelsson, Maria, Svensson, Stina, and Borgefors, Gunilla
- Abstract
Ring artifacts can occur in reconstructed images from X-ray microtomography as full or partial circles centred on the rotation axis. In this paper, a 2D method is proposed that reduces these ring artifacts in the reconstructed images. The method consists of two main parts. First, the artifacts are localised in the image using local orientation estimation of the image structures and filtering to find ring patterns in the orientation information. Second, the map of the located artifacts is used to calculate a correction image using normalised convolution. The method is evaluated on 2D images from volume data of paper fibre imaged at the European Synchrotron Radiation Facility (ESRF) with high resolution X-ray microtomography. The results show that the proposed method reduces the artifacts and restores the pixel values for all types of partial and complete ring artifacts where the signal is not completely saturated. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
20. Analysis System of Endoscopic Image of Early Gastric Cancer.
- Author
-
Campilho, Aurélio, Kamel, Mohamed, Kwang-Baek Kim, Sungshin Kim, and Gwang-Ha Kim
- Abstract
Gastric cancer accounts for a large share of cancer incidence and cancer mortality in Korea, and its early detection is very important for treatment and convalescence. For the early detection of gastric cancer, this paper proposes an analysis system for endoscopic images of the stomach, which detects abnormal regions by using color changes in the image and provides surface tissue information to the examiner. While advanced inflammation and cancer may be easily detected, early inflammation and cancer are difficult to detect and require more attention. The system first converts the endoscopic image to the IHb (Index of Hemoglobin) model and removes noise caused by illumination; it then automatically detects regions suspected of being cancerous and provides the related information to the examiner, or provides surface tissue information for regions appointed by the examiner. The paper does not intend to provide a final diagnosis of the detected abnormal regions as gastric cancer, but rather a supplementary means of reducing the examiner's workload and misdiagnoses, by automatically detecting abnormal regions not easily seen by the human eye and providing additional information for the diagnosis. Experiments using real endoscopic images for performance evaluation showed that the proposed system is effective for the analysis of endoscopic images of the stomach. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
21. Ant Based Fuzzy Modeling Applied to Marble Classification.
- Author
-
Campilho, Aurélio, Kamel, Mohamed, Vieira, Susana M., Sousa, João M. C., and Pinto, João R. Caldas
- Abstract
Automatic classification of objects based on their visual appearance is often performed with clustering algorithms, which can be based on soft computing techniques. One of the most widely used methods is fuzzy clustering; however, it can converge to local minima. This problem has recently been addressed by applying ant colony optimization. This paper proposes using this fuzzy-ant clustering approach to derive fuzzy models, which are used to classify marbles based on their visual appearance; both color and vein classification are performed. The proposed fuzzy modeling approach is compared to other soft computing classification algorithms, namely fuzzy, neural, simulated annealing, genetic and combinations of these approaches. The fuzzy-ant models achieved higher classification rates than the other soft computing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
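The clustering the paper starts from is standard fuzzy c-means, whose susceptibility to local minima is exactly what motivates the ant-colony variant. A plain FCM sketch (the fuzzifier m, iteration count and seed are illustrative choices, not the paper's settings):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means (the baseline the ant-colony variant improves).

    X : (n, d) data; returns (centres, membership matrix U of shape (c, n)).
    Note: this basic version can converge to local minima, which is the
    weakness the paper's fuzzy-ant approach addresses.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centres = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0)
    return centres, U

# two obvious groups on a line
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centres, U = fuzzy_cmeans(X)
print(np.sort(centres.ravel()))  # centres near 0.1 and 5.1
```

The soft memberships in U, rather than hard labels, are what feed naturally into fuzzy classification rules for the marble color and vein models.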
22. Extraction of Index Components Based on Contents Analysis of Journal's Scanned Cover Page.
- Author
-
Wenyin Liu, Lladós, Josep, and Young-Bin Kwon
- Abstract
In this paper, a method is sought for automatically indexing journal contents, reducing the effort previously required to enter paper information and construct an index. Various contents formats for journals, which differ from those of general documents, are described. The principal elements to represent are the title, authors, and pages of each paper. These three elements are modeled according to the order of their arrangement, and their features are then generalized. A content analysis system is implemented based on the suggested modeling method. The system, implemented to verify the suggested method, takes as input grayscale images scanned at 300 dpi or more and analyzes the structural features of the contents. It classifies titles, authors and pages using an efficient projection method. The definition of each item is classified by region and then extracted automatically as index information; this also helps to recognize characters region by region. Experimental results were obtained by applying the system to some of the six suggested models, and it shows a 97.3% success rate for various journals. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
23. On Using a Dissimilarity Representation Method to Solve the Small Sample Size Problem for Face Recognition.
- Author
-
Blanc-Talon, Jacques, Philips, Wilfried, Popescu, Dan, Scheunders, Paul, and Kim, Sang-Woon
- Abstract
For high-dimensional classification tasks such as face recognition, the number of samples is smaller than the dimensionality of the samples. In such cases, Linear Discriminant Analysis (LDA)-based methods for dimension reduction encounter what is known as the Small Sample Size (SSS) problem. Recently, a number of approaches that attempt to solve the SSS problem have been proposed in the literature. In this paper, a different way of solving the SSS problem is proposed: a dissimilarity representation method in which an object is represented by its dissimilarity measures to representatives extracted from the training samples, instead of by the feature vector itself. By appropriately selecting representatives and defining the dissimilarity measure, it is possible to reduce the dimensionality and achieve better classification performance in terms of both speed and accuracy. Apart from the dissimilarity representation, this paper also proposes simultaneously employing a fusion technique to increase classification accuracy; the rationale for this is explained in the paper. The proposed scheme is completely different from conventional ones in the computation of the transformation matrix as well as in controlling the number of dimensions. The experimental results, which to the best of the authors' knowledge are the first such reported results, demonstrate that the proposed mechanism achieves nearly the same classification accuracy as conventional LDA-extension approaches on the well-known AT&T and Yale face databases. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
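The dissimilarity representation itself is simple: each object becomes a vector of distances to a few representatives, shrinking the dimensionality below the sample count. A sketch with Euclidean distances and one representative per class (both are illustrative assumptions; the paper treats both choices as design decisions):

```python
import numpy as np

def dissimilarity_representation(X, prototypes):
    """Represent each sample by its distances to a small prototype set.

    This replaces the raw (possibly very high-dimensional) feature vector
    with a len(prototypes)-dimensional vector, sidestepping the SSS problem.
    """
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

# toy example: 100-dim samples reduced to 2 distances each
rng = np.random.default_rng(1)
class_a = rng.normal(0.0, 0.1, (5, 100))
class_b = rng.normal(1.0, 0.1, (5, 100))
protos = np.vstack([class_a[0], class_b[0]])   # one representative per class
D = dissimilarity_representation(np.vstack([class_a, class_b]), protos)
# nearest-prototype classification in the 2-D dissimilarity space
pred = D.argmin(axis=1)
print(pred)  # class-a samples map to prototype 0, class-b to prototype 1
```

Any classifier can then be trained on D in place of the raw features; the nearest-prototype rule above is just the simplest possible choice.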
24. Recognizing Face or Object from a Single Image: Linear vs. Kernel Methods on 2D Patterns.
- Author
-
Dit-Yan Yeung, Kwok, James T., Fred, Ana, Roli, Fabio, de Ridder, Dick, Daoqiang Zhang, Songcan Chen, and Zhi-Hua Zhou
- Abstract
We consider the problem of recognizing a face or object when only a single training image per class is available, as typically encountered in law enforcement and in passport or identification card verification. In such cases, many discriminant subspace methods such as Linear Discriminant Analysis (LDA) fail because of the non-existence of intra-class variation. In this paper, we propose a novel framework called 2-Dimensional Kernel PCA (2D-KPCA) for face or object recognition from a single image. In contrast to conventional KPCA, 2D-KPCA is based on 2D image matrices and hence can effectively utilize the intrinsic spatial structure information of the images. In contrast to 2D-PCA, 2D-KPCA is capable of capturing part of the higher-order statistical information. Moreover, this paper reveals that the current 2D-PCA algorithm and its many variants consider only row or column information, and thus do not fully exploit the information contained in the image matrices. So, besides proposing the unilateral 2D-KPCA, this paper also proposes the bilateral 2D-KPCA, which can exploit more of the information concealed in the image matrices. Furthermore, some approximation techniques are developed to improve computational efficiency. Experimental results on the FERET face database and the COIL-20 object database show that: 1) the performance of KPCA is not necessarily better than that of PCA; 2) 2D-KPCA almost always outperforms 2D-PCA significantly; 3) the kernel methods are more appropriate for 2D patterns than for 1D patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
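For contrast with the kernel variant, plain (unilateral, row-direction) 2D-PCA, which the paper extends, can be sketched as follows; the image sizes and number of components are arbitrary illustrative choices:

```python
import numpy as np

def two_d_pca(images, k=2):
    """Unilateral 2D-PCA: project image matrices directly, no vectorisation.

    images : (n, h, w) stack. Returns the top-k projection axes (w, k);
    each image A is then reduced to the (h, k) feature matrix A @ W.
    """
    mean = images.mean(axis=0)
    # image scatter matrix built from matrices, not flattened vectors
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    vals, vecs = np.linalg.eigh(G)        # eigh: ascending eigenvalues
    W = vecs[:, ::-1][:, :k]              # keep the k largest
    return W

rng = np.random.default_rng(0)
imgs = rng.random((10, 8, 6))
W = two_d_pca(imgs, k=2)
features = imgs[0] @ W                    # (8, 2) feature matrix per image
print(features.shape)  # -> (8, 2)
```

Because G is only w x w (here 6 x 6) instead of (h*w) x (h*w), the eigenproblem stays tiny even for single-sample training sets; the bilateral variant applies a second projection on the row side as well.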
25. Segmentation of Triangular Meshes Using Multi-scale Normal Variation.
- Author
-
Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, Remagnino, Paolo, Nefian, Ara, Meenakshisundaram, Gopi, Pascucci, Valerio, Zara, Jiri, Molineros, Jose, Theisel, Holger, Malzbender, Thomas, Min, Kyungha, and Jung, Moon-Ryul
- Abstract
In this paper, we present a scheme that segments triangular meshes into several meaningful patches using multi-scale normal variation. In differential geometry, there is a traditional scheme that segments smooth surfaces into several patches such as elliptic, hyperbolic, or parabolic regions, with several curves such as ridge, valley, and parabolic curve between these regions, by means of the principal curvatures of the surface. We present a similar segmentation scheme for triangular meshes. For this purpose, we develop a simple and robust scheme that approximates the principal curvatures on triangular meshes by multi-scale normal variation scheme. Using these approximated principal curvatures and modifying the classical segmentation scheme for triangular meshes, we design a scheme that segments triangular meshes into several meaningful regions. This segmentation scheme is implemented by evaluating a feature weight at each vertex, which quantifies the likelihood that each vertex belongs to one of the regions. We test our scheme on several face models and demonstrate its capability by segmenting them into several meaningful regions. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
26. A Simple Solution to the Six-Point Two-View Focal-Length Problem.
- Author
-
Leonardis, Aleš, Bischof, Horst, Pinz, Axel, and Li, Hongdong
- Abstract
This paper presents a simple and practical solution to the 6-point 2-view focal-length estimation problem. Based on the hidden-variable technique, we derive a 15th-degree polynomial in the unknown focal length. In the course of the derivation, a simple and constructive algorithm is established. To make use of multiple redundant measurements and then select the best solution, we suggest a kernel-voting scheme. The algorithm has been tested on both synthetic data and real images, with satisfactory results in both cases. For reference purposes, we include our Matlab implementation in the paper; it is quite concise, consisting of only 20 lines of code. The result of this paper can serve as a small but useful module in many computer vision systems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
27. Unsupervised Texture Segmentation with Nonparametric Neighborhood Statistics.
- Author
-
Leonardis, Aleš, Bischof, Horst, Pinz, Axel, Awate, Suyash P., Tasdizen, Tolga, and Whitaker, Ross T.
- Abstract
This paper presents a novel approach to unsupervised texture segmentation that relies on a very general nonparametric statistical model of image neighborhoods. The method models image neighborhoods directly, without the construction of intermediate features. It does not rely on using specific descriptors that work for certain kinds of textures, but is rather based on a more generic approach that tries to adaptively capture the core properties of textures. It exploits the fundamental description of textures as images derived from stationary random fields and models the associated higher-order statistics nonparametrically. This general formulation enables the method to easily adapt to various kinds of textures. The method minimizes an entropy-based metric on the probability density functions of image neighborhoods to give an optimal segmentation. The entropy minimization drives a very fast level-set scheme that uses threshold dynamics, which allows for a very rapid evolution towards the optimal segmentation during the initial iterations. The method does not rely on a training stage and, hence, is unsupervised. It automatically tunes its important internal parameters based on the information content of the data. The method generalizes in a straightforward manner from the two-region case to an arbitrary number of regions and incorporates an efficient multi-phase level-set framework. This paper presents numerous results, for both the two-texture and multiple-texture cases, using synthetic and real images that include electron-microscopy images. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
28. A Comparative Study of Energy Minimization Methods for Markov Random Fields.
- Author
-
Leonardis, Aleš, Bischof, Horst, Pinz, Axel, Szeliski, Richard, Zabih, Ramin, Scharstein, Daniel, Veksler, Olga, Kolmogorov, Vladimir, Agarwala, Aseem, Tappen, Marshall, and Rother, Carsten
- Abstract
One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Markov Random Fields (MRFs), the resulting energy minimization problems were widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. Unfortunately, most papers define their own energy function, which is minimized with a specific algorithm of their choice. As a result, the tradeoffs among different energy minimization algorithms are not well understood. In this paper we describe a set of energy minimization benchmarks, which we use to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) as well as the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, and interactive segmentation. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods with minimal overhead. We expect that the availability of our benchmarks and interface will make it significantly easier for vision researchers to adopt the best method for their specific problems. Benchmarks, code, results and images are available at http://vision.middlebury.edu/MRF. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
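The ICM baseline mentioned in the abstract above is simple enough to sketch: each pixel is greedily reassigned the label that minimizes its local energy (unary cost plus a Potts smoothness penalty over the 4-neighbourhood). The grid size, costs, and weight below are illustrative assumptions, not the paper's benchmarks:

```python
import numpy as np

# Iterated conditional modes (ICM) on a tiny synthetic MRF.
rng = np.random.default_rng(1)
H, W, L = 8, 8, 2
unary = rng.random((H, W, L))          # data cost per pixel and label
lam = 0.5                               # Potts smoothness weight (assumed)
labels = unary.argmin(axis=2)           # initialize with unary-optimal labels

def local_energy(labels, y, x, l):
    """Unary cost plus Potts penalty against the 4-neighbourhood."""
    e = unary[y, x, l]
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W:
            e += lam * (labels[ny, nx] != l)
    return e

for _ in range(5):                      # a few greedy sweeps
    for y in range(H):
        for x in range(W):
            costs = [local_energy(labels, y, x, l) for l in range(L)]
            labels[y, x] = int(np.argmin(costs))
print(labels.shape)
```

ICM converges quickly but only to a local minimum, which is exactly why the paper compares it against graph cuts, LBP, and tree-reweighted message passing.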
29. Discriminant Analysis Based on Kernelized Decision Boundary for Face Recognition.
- Author
-
Kanade, Takeo, Jain, Anil, Ratha, Nalini K., Baochang Zhang, Xilin Chen, and Wen Gao
- Abstract
A novel nonlinear discriminant analysis method, Kernelized Decision Boundary Analysis (KDBA), is proposed in this paper, whose decision boundary feature vectors are the normal vectors of the optimal decision boundary in terms of the Structural Risk Minimization principle. We also use a simple method to prove a property of the Support Vector Machine (SVM) algorithm, which is combined with the optimal decision boundary feature matrix to make our method consistent with the Kernel Fisher Discriminant method (KFD). Moreover, KDBA is easy to apply, whereas traditional decision boundary analysis implementations are computationally expensive and sensitive to the size of the problem. A text classification problem is first used to verify the effectiveness of KDBA. Then, experiments on a large-scale face database, the CAS-PEAL database, illustrate its excellent performance compared with popular face recognition methods such as Eigenface, Fisherface, and KFD. Keywords: Face Recognition, Kernel Fisher, Support Vector Machine [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
30. Modelling the Time-Variant Covariates for Gait Recognition.
- Author
-
Kanade, Takeo, Jain, Anil, Ratha, Nalini K., Veres, Galina V., Nixon, Mark S., and Carter, John N.
- Abstract
This paper deals with the problem of recognition by gait when time-dependent covariates are added, i.e. when six months have passed between recording of the gallery and the probe sets. We show how recognition rates fall significantly when data are captured over lengthy time intervals, for both static and dynamic gait features. Under the assumptions that some subjects from the probe set are available for training and that similar subjects exhibit similar changes in gait over time, a predictive model of changes in gait is suggested in this paper, which can improve recognition capability. A small number of subjects were used for training and a much larger number for classification; the probe contains the covariate data for a smaller number of subjects. Our new predictive model yields high recognition rates for different features, a considerable improvement on the recognition capability without this new approach. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
31. Comparison of Two Different Prediction Schemes for the Analysis of Time Series of Graphs.
- Author
-
Marques, Jorge S., Pérez de la Blanca, Nicolás, Pina, Pedro, Bunke, Horst, Dickinson, Peter, and Kraetzl, Miro
- Abstract
This paper is concerned with time series of graphs and compares two novel schemes that are able to predict the presence or absence of nodes in a graph. Our work is motivated by applications in computer network monitoring. However, the proposed prediction methods are generic and can be used in other applications as well. Experimental results with graphs derived from real computer networks indicate that a correct prediction rate of up to 97% can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
32. Enhanced Fourier Shape Descriptor Using Zero-Padding.
- Author
-
Kalviainen, Heikki, Parkkinen, Jussi, Kaarna, Arto, Kunttu, Iivari, Lepistö, Leena, and Visa, Ari
- Abstract
The shapes occurring in images are essential features in image classification and retrieval. Due to their compactness and classification accuracy, Fourier-based shape descriptors are popular boundary-based methods for shape description. However, in the case of short boundary functions, the frequency resolution of the Fourier spectrum is low, which yields an inadequate shape description. Therefore, we have applied a zero-padding method to short boundary functions to improve their Fourier-based shape description. In this paper, we show that using this method, Fourier-based shape classification can be significantly improved. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
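The zero-padding idea in the abstract above is easy to demonstrate: appending zeros to a short boundary function before the FFT samples its spectrum on a denser frequency grid. The boundary function below is a synthetic centroid-distance signature, an illustrative assumption rather than the paper's data:

```python
import numpy as np

# A short boundary function: 32 samples of a 3-lobed shape signature.
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
boundary = 1.0 + 0.3 * np.cos(3 * t)

spec_short = np.abs(np.fft.fft(boundary))        # 32 frequency bins
padded = np.concatenate([boundary, np.zeros(224)])
spec_padded = np.abs(np.fft.fft(padded))         # 256 bins: denser sampling

# Normalize by the DC term so the descriptor is scale-invariant.
desc = spec_padded[1:11] / spec_padded[0]
print(len(spec_short), len(spec_padded))         # 32 256
```

Zero-padding does not add new information; it interpolates the same underlying spectrum, which is what makes descriptors from boundaries of different lengths comparable on a common frequency grid.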
33. Modeling Inaccurate Perception: Desynchronization Issues of a Chaotic Pattern Recognition Neural Network.
- Author
-
Kalviainen, Heikki, Parkkinen, Jussi, Kaarna, Arto, Calitoiu, Dragos, Oommen, B. John, and Nusbaumm, Dorin
- Abstract
The usual goal of modeling natural and artificial perception is to determine how a system can extract the object that it perceives from a noisy image. The "inverse" of this problem is that of modeling how even a clear image can be perceived to be blurred in certain contexts. We propose a chaotic model of Pattern Recognition (PR) for the theory of "blurring". The paper, which is an extension of a companion paper [3], demonstrates how one can model blurring from the viewpoint of a chaotic PR system. Unlike the companion paper, in which the chaotic PR system extracts the pattern from the input, this paper shows that perception can be "blurred" if the dynamics of the chaotic system are modified. We thus propose a formal model, the Mb-AdNN, and present a rigorous analysis using the Routh-Hurwitz criterion and Lyapunov exponents. We also demonstrate, experimentally, the validity of our model using a numeral dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
34. Three Dimensional Reconstruction and Dynamic Analysis of Mitral Annular Based on Connected Equi-length Curve Angle Chain.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Zhu Lei, Yang Xin, Yao Liping, and Sun Kun
- Abstract
Based on three-dimensional echocardiography sequences, a new technique using a connected equi-length curve angle chain (CELCAC) model for reconstruction and dynamic analysis of the mitral annulus is proposed in this paper. Firstly, the boundary points of the mitral annulus are extracted by an interactive method and ordered according to their positions. Then, the three-dimensional mitral annulus visualization model is established based on a non-uniform rational B-spline algorithm. Finally, dynamic analysis of the mitral annulus represented by the CELCAC model is presented. The representation is invariant to rotation, scaling, and translation. Results show that the reconstruction and analysis method proposed in this paper is feasible for assessing the geometry and analyzing the motion of the mitral annulus. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
35. Indicator and Calibration Material for Microcalcifications in Dual-Energy Mammography.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Xi Chen, Xuanqin Mou, and Lei Zhang
- Abstract
Dual-energy mammography can suppress the contrast between adipose and glandular tissues and improve the detectability of microcalcifications (MCs). In published work, MCs were calibrated by aluminum and identified by their thickness. However, the variety of MC compositions causes a variety of attenuation differences between MCs and the MC calibration material, which brings about large calculation errors. In our study, we selected calcium carbonate and calcium phosphate as the most suitable MC calibration materials, and the correction coefficient was reasonably determined. Area density was used as the MC indicator instead of thickness. Therefore, the calculation errors arising from the MC calibration materials can be greatly reduced, and the determination of MCs becomes possible. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
36. Compression of Medical Images Using Enhanced Vector Quantizer Designed with Self Organizing Feature Maps.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Dandawate, Yogesh H., Joshi, Madhuri A., and Umrani, Shrirang
- Abstract
Nowadays, all medical imaging equipment gives output as digital images, and as non-invasive techniques become cheaper, the archive of images grows to a significant size; in telemedicine-based applications, storage and transmission require large memory and bandwidth respectively. There is a need for compression to save memory space and to allow fast transmission over the internet and 3G mobile networks, with good-quality decompressed images even though the compression is lossy. This paper presents a novel approach to designing an enhanced vector quantizer, which uses Kohonen's Self-Organizing neural network. The vector quantizer (codebook) is designed by training with a carefully designed training image and by a selective training approach. Compressing images using it gives better quality. The quality of the decompressed images is evaluated using various quality measures along with the conventionally used PSNR. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
37. Real-Time RFID-Based Intelligent Healthcare Diagnosis System.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Khosla, Rajiv, and Chowdhury, Belal
- Abstract
In a health care context, RFID (Radio Frequency Identification) technology can be employed not only to bring down health care costs but also to facilitate the automatic streamlining of patient identification processes in health centers and to assist medical practitioners in quick and accurate diagnosis and treatment. In this paper, we outline the design and application of an RFID-based Real-time Intelligent Clinical Diagnosis and Treatment Support System (ICDTS) in health care. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
38. An Effective Recognition Method of Breast Cancer Based on PCA and SVM Algorithm.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Zhang, David, Jihong Liu, and Weina Ma
- Abstract
Breast cancer is the leading cancer among females, and the key to preventing it is early detection. Based on the advantages of the support vector machine (SVM), namely finding a global solution and possessing high generalization capability on small samples, a new method of diagnosing breast cancer by CAD is proposed in this paper. Firstly, principal component analysis is used to represent the information of the ROI image, accounting for most of the variance of the original data set while significantly reducing the data dimension. After the extraction of principal components, only the components that account for most of the variance are retained as the feature vector and input into an SVM classifier and a BP neural network classifier. Finally, the experimental results show that the accuracy and specificity of breast cancer diagnosis using the SVM classifier are good. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
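The PCA-then-classify pipeline in the abstract above can be sketched with NumPy alone. The data are synthetic, and a nearest-centroid rule stands in for the SVM so the sketch stays dependency-free; the 95% variance threshold is also an assumption, not the paper's setting:

```python
import numpy as np

# Two synthetic classes of 50-dimensional "ROI feature" vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(2, 1, (30, 50))])
y = np.array([0] * 30 + [1] * 30)

# PCA: center, SVD, keep enough components to explain 95% of the variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = np.searchsorted(np.cumsum(S**2) / np.sum(S**2), 0.95) + 1
Z = Xc @ Vt[:k].T                    # reduced feature vectors

# Nearest-centroid classification in the reduced space (SVM stand-in).
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == y).mean())
```

In practice the reduced features would be fed to a trained SVM, as the abstract describes, and accuracy would be measured on held-out data rather than the training set.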
39. Fast Line-Segment Extraction for Semi-dense Stereo Matching.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Sommer, Gerald, Klette, Reinhard, McKinnon, Brian, and Baltes, Jacky
- Abstract
This paper describes our work on practical stereo vision for mobile robots using commodity hardware. The approach described in this paper is based on line segments, since those provide a lot of information about the environment, provide more depth information than point features, and are robust to image noise and colour variations. However, stereo matching with line segments is a difficult problem due to poorly localized end points and perspective distortion. Our algorithm uses integral images and Haar features for line segment extraction. Dynamic programming is used in the line segment matching phase. The resulting line segments track accurately from one frame to the next, even in the presence of noise. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
40. Automatic Subcortical Structure Segmentation Using Probabilistic Atlas.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Automatic segmentation of sub-cortical structures is of great use in studying various neurodegenerative diseases. In this paper, we propose a fully automatic solution to this problem through the use of a distribution atlas built from a set of training MR images. Our model consists of two major components: a local likelihood based active contour (LLAC) model and a guiding probabilistic atlas. The former is very effective at highlighting structures that are in low contrast with the surrounding tissues. The latter defines and guides the segmentation procedure to capture the structure of interest. Formulated under the maximum a posteriori framework, a probabilistic atlas for the structure of interest, e.g. the caudate or putamen, can be seamlessly integrated into the level set evolution procedure, and no thresholding step is needed for capturing the target. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
41. Robust Classification of Strokes with SVM and Grouping.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
The ability to recognize the strokes drawn by the user is central to most sketch-based interfaces. However, very few solutions that rely on recognition are robust enough to make sketching a definitive alternative to traditional WIMP user interfaces. In this paper, we propose a classification-based approach that, given an unconstrained sketch, can robustly assign a label to each stroke that comprises the sketch. A key contribution of our approach is a technique for grouping strokes that eliminates outliers and enhances the robustness of the classification. We also propose a set of features that capture important attributes of the shape and mutual relationships of strokes. These features are statistically well-behaved and enable robust classification with Support Vector Machines (SVMs). We conclude by presenting a concrete implementation of these techniques in an interface for driving facial expressions. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
42. Probe-It! Visualization Support for Provenance.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Visualization is a technique used to facilitate the understanding of scientific results such as large data sets and maps. Provenance techniques can also increase the understanding and acceptance of scientific results by providing access to information about the sources and methods used to derive them. Visualization and provenance techniques, although rarely used in combination, may further increase scientists' understanding of results, since scientists may be able to use a single tool to see and evaluate result derivation processes, including any final or partial result. In this paper we introduce Probe-It!, a visualization tool for scientific provenance information that enables scientists to move the visualization focus back and forth between intermediate and final results and their provenance. To evaluate the benefits of Probe-It! in the context of maps, this paper presents a quantitative user study on how the tool was used by scientists to discriminate between quality results and results with known imperfections. The study demonstrates that only a very small percentage of the scientists tested can identify imperfections using maps without the help of knowledge provenance, and that most scientists, whether GIS experts, subject matter experts (i.e., experts on gravity data maps) or not, can identify and explain several kinds of map imperfections when using maps together with knowledge provenance visualization. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
43. Enhanced Visual Experience and Archival Reusability in Personalized Search Based on Modified Spider Graph.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Academia and search engine industry followers consider personalization to be the future of search engines, a view well supported by the tremendous amount of research in this field. However, the impact of technological advancement seems to be focused on bringing more relevant results to users, not on the way results are presented to them. User archives are useful resources which can be exploited more efficiently if reusability is promoted appropriately. In this paper, we present a theoretical framework which can sit on top of existing search technologies and deliver a visually enhanced user experience and archival reusability. The contribution of this paper is twofold: first, a visual interface for personal search engine setup, self-updating user interests, and session mapping based on a modified spider graph; and second, better archival reusability through user archival maps, session maps, interest-specific maps, and visual bookmarking. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
44. A Mesh Meaningful Segmentation Algorithm Using Skeleton and Minima-Rule.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
In this paper, a hierarchical shape decomposition algorithm is proposed which integrates the advantages of skeleton-based and minima-rule-based meaningful segmentation algorithms. The method makes use of new geometrical and topological functions of the skeleton to define initial cutting critical points, and then employs salient contours with negative minimal principal curvature values to determine the natural final boundary curves between parts. Extensive experiments have been carried out on many meshes and show that our framework provides more reasonable perceptual results than a single skeleton-based [8] or minima-rule-based [15] algorithm. In addition, our algorithm not only can divide a mesh of any genus into a collection of genus-zero parts, but also partitions level-of-detail meshes into similar parts. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
45. A Robust Method for Near Infrared Face Recognition Based on Extended Local Binary Pattern.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Face recognition is one of the most successful applications of biometric authentication. However, methods reported in the literature still suffer from problems which prevent further development in face recognition. This paper presents a novel robust method for face recognition under near infrared (NIR) lighting conditions based on the Extended Local Binary Pattern (ELBP), which properly solves the problems produced by variations in illumination, since NIR images are insensitive to variations in ambient lighting and ELBP can extract adequate texture features from the NIR images. By combining the local feature vectors, a global feature vector is formed; as the global feature vectors extracted by the ELBP operator often have very high dimension, a classifier has been trained using the AdaBoost algorithm to select the most representative features for better performance and dimensionality reduction. Compared with the huge number of features produced by the ELBP operator, only a small fraction of the features are selected in this paper, which saves much computation and time. The comparison with the results of classic algorithms proves the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
46. Content-Based Image Retrieval Using Shape and Depth from an Engineering Database.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Content-based image retrieval (CBIR), a technique which uses visual content to search images in large-scale image databases, has been an active area of research for the past decade. It is increasingly evident that an image retrieval system has to be domain specific. In this paper, we present an algorithm for retrieving images from a database consisting of engineering/computer-aided design (CAD) models. The algorithm uses the shape information in an image along with its 3D information. A linear approximation procedure that can capture the depth information using the idea of shape from shading has been used. Retrieval of objects is then done using a similarity measure that combines the shape and depth information. Plotted precision/recall curves show that this method is very effective for an engineering database. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
47. A New Set of Normalized Geometric Moments Based on Schlick's Approximation.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Schlick's approximation of the term x^p is used primarily to reduce the complexity of specular lighting calculations in graphics applications. Since moment functions have a kernel defined using a monomial x^p y^q, the same approximation can be effectively used in the computation of normalized geometric moments and invariants. This paper outlines a framework for computing moments of various orders of an image using a simplified kernel, and shows the advantages provided by the approximating function through a series of experimental results. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
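Schlick's approximation replaces the power x^p on [0, 1] with a rational function, x / (p - p*x + x), that needs no transcendental calls; it is exact at x = 0 and x = 1. A minimal sketch of the approximation itself (the error measurement below is illustrative, not the paper's experiment):

```python
import numpy as np

def schlick(x, p):
    """Schlick's rational approximation of x**p for x in [0, 1], p >= 1."""
    return x / (p - p * x + x)

x = np.linspace(0.0, 1.0, 101)
for p in (2, 8, 32):
    err = np.max(np.abs(x**p - schlick(x, p)))
    print(f"p={p:2d}  max abs error = {err:.3f}")
```

In the moment-kernel setting the abstract describes, the same substitution would be applied to each factor of the monomial kernel, trading a bounded approximation error for cheaper evaluation.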
48. Motion Projection for Floating Object Detection.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
Floating mines are a significant threat to the safety of ships in theatres of military or terrorist conflict. Automating mine detection is difficult due to the unpredictable environment and high requirements for robustness and accuracy. In this paper, a floating mine detection algorithm using motion analysis methods is proposed. The algorithm aims to locate suspicious regions in the scene using contrast and motion information, specifically regions that exhibit certain predefined motion patterns. Throughput of the algorithm is improved with a parallel pipelined data flow. Moreover, this data flow enables further computational performance improvements through special hardware such as field programmable gate arrays (FPGAs) or graphics processing units (GPUs). Experimental results show that this algorithm is able to detect mine regions in video with a reasonable false positive rate and a minimal false negative rate. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
49. A Hardware-Friendly Adaptive Tensor Based Optical Flow Algorithm.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Bebis, George, Boyle, Richard, Parvin, Bahram, Koracin, Darko, and Paragios, Nikos
- Abstract
A tensor-based optical flow algorithm is presented in this paper. This algorithm uses a cost function that is an indication of tensor certainty to adaptively adjust weights for tensor computation. By incorporating a good initial value and an efficient search strategy, this algorithm is able to determine optimal weights in a small number of iterations. The weighting mask for the tensor computation is decomposed into rings to simplify a 2D weighting into 1D. The devised algorithm is well-suited for real-time implementation using a pipelined hardware structure and can thus be used to achieve real-time optical flow computation. This paper presents simulation results of the algorithm in software, and the results are compared with our previous work to show its effectiveness. It is shown that the proposed new algorithm automatically achieves equivalent accuracy to that previously achieved via manual tuning of the weights. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
50. Shape Registration by Simultaneously Optimizing Representation and Transformation.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Ayache, Nicholas, Ourselin, Sébastien, Maeder, Anthony, Yifeng Jiang, and Jun Xie
- Abstract
This paper proposes a novel approach that achieves shape registration by optimizing the shape representation and the transformation simultaneously; they are modeled by a constrained Gaussian Mixture Model (GMM) and a regularized thin plate spline respectively. The problem is formulated within a Bayesian framework and solved by an expectation-maximization (EM) algorithm. Compared with popular methods based on landmark sliding, its advantages include: (1) it can naturally deal with shapes of complex topology and with 3D shapes; (2) it is more robust against data noise; (3) the registration performance is better in terms of the generalization error of the resultant statistical shape model. These advantages are demonstrated on both synthetic and biomedical shapes. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF