34 results for "Frank Dehne"
Search Results
2. Efficient External Memory Algorithms by Simulating Coarse-Grained Parallel Algorithms.
- Author
- Frank Dehne, Wolfgang Dittrich, and David Hutchinson
- Subjects
- ALGORITHMS, FLASH memory, COMPUTER simulation
- Abstract
External memory (EM) algorithms are designed for large-scale computational problems in which the size of the internal memory of the computer is only a small fraction of the problem size. Typical EM algorithms are specially crafted for the EM situation. In the past, several attempts have been made to relate the large body of work on parallel algorithms to EM, but with limited success. The combination of EM computing, on multiple disks, with multiprocessor parallelism has been posed as a challenge by the ACM Working Group on Storage I/O for Large-Scale Computing. In this paper we provide a simulation technique which produces efficient parallel EM algorithms from efficient BSP-like parallel algorithms. The techniques obtained can accommodate one or multiple processors on the EM target machine, each with one or more disks, and they also adapt to the disk blocking factor of the target machine. When applied to existing BSP-like algorithms, our simulation technique produces improved parallel EM algorithms for a large number of problems. [ABSTRACT FROM AUTHOR]
- Published
- 2003
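The core idea behind such simulations, virtualizing many BSP processors onto fewer physical ones so that each context can live on disk between supersteps, can be sketched in a few lines. This is only an illustrative toy, not the paper's actual technique; `simulate_bsp`, the context dictionaries, and the `(dest, msg)` message format are all invented for this sketch.

```python
def simulate_bsp(num_procs, local_data, superstep_fns):
    """Run a BSP-style computation one virtual processor at a time."""
    contexts = [{"pid": i, "data": d, "result": None}
                for i, d in enumerate(local_data)]
    inboxes = [[] for _ in range(num_procs)]
    for superstep in superstep_fns:
        outboxes = [[] for _ in range(num_procs)]
        # Visit contexts one by one -- in a real EM setting each context
        # (and its inbox) would be read from disk here, used, and written back.
        for ctx, inbox in zip(contexts, inboxes):
            for dest, msg in superstep(ctx, inbox):
                outboxes[dest].append(msg)
        inboxes = outboxes  # "barrier": messages arrive in the next superstep
    return contexts

# Toy BSP program: global sum in two supersteps.
def step1(ctx, inbox):
    return [(0, sum(ctx["data"]))]   # every processor sends a partial sum to P0

def step2(ctx, inbox):
    if ctx["pid"] == 0:
        ctx["result"] = sum(inbox)   # P0 combines the partial sums
    return []

procs = simulate_bsp(4, [[1, 2], [3, 4], [5, 6], [7, 8]], [step1, step2])
```

Because only one virtual processor's state is resident at a time, the access pattern is sequential per superstep, which is the property that makes disk blocking effective.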
3. Parallel Sorting for GPUs.
- Author
- Dehne, Frank and Zaboli, Hamidreza
- Published
- 2017
- Full Text
- View/download PDF
4. Introduction to Special Issue.
- Author
- Frank Dehne and Jörg-Rüdiger Sack
- Published
- 2008
- Full Text
- View/download PDF
5. Proteome‐wide assessment of human interactome as a source of capturing domain–motif and domain‐domain interactions.
- Author
- Idrees, Sobia and Paudel, Keshav Raj
- Abstract
Protein–protein interactions (PPIs) play a crucial role in various biological processes by establishing domain–motif (DMI) and domain–domain interactions (DDIs). While the existence of real DMIs/DDIs is generally assumed, it is rarely tested; therefore, this study extensively compared high‐throughput methods and public PPI repositories as sources for DMI and DDI prediction based on the assumption that the human interactome provides sufficient data for the reliable identification of DMIs and DDIs. Different datasets from leading high‐throughput methods (Yeast two‐hybrid [Y2H], Affinity Purification coupled Mass Spectrometry [AP‐MS], and Co‐fractionation‐coupled Mass Spectrometry) were assessed for their ability to capture DMIs and DDIs using known DMI/DDI information. High‐throughput methods were not notably worse than PPI databases and, in some cases, appeared better. In conclusion, all PPI datasets demonstrated significant enrichment in DMIs and DDIs (p‐value <0.001), establishing Y2H and AP‐MS as reliable methods for predicting these interactions. This study provides valuable insights for biologists in selecting appropriate methods for predicting DMIs, ultimately aiding in SLiM discovery. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Binding Site Prediction for Protein-Protein Interactions and Novel Motif Discovery using Re-occurring Polypeptide Sequences
- Author
- Adam Amos-Binks, Catalin Patulea, Ashkan Golshani, Andrew Schoenrock, Sylvain Pitre, Frank Dehne, James R. Green, and Yuan Gui
- Subjects
- Saccharomyces cerevisiae Proteins, Amino Acid Motifs, education, Plasma protein binding, Computational biology, Saccharomyces cerevisiae, Biology, Biochemistry, Protein–protein interaction, Structural Biology, Prediction methods, Protein Interaction Mapping, Humans, Protein Interaction Domains and Motifs, Binding site, Molecular Biology, Genetics, Applied Mathematics, Methodology Article, Proteins, Protein Structure, Tertiary, Computer Science Applications, DNA binding site, Proteome, Experimental methods, DNA microarray, Algorithms, Protein Binding
- Abstract
Background: While there are many methods for predicting protein-protein interactions, very few can determine the specific site of interaction on each protein. Characterization of the specific sequence regions mediating interaction (binding sites) is crucial for an understanding of cellular pathways. Experimental methods often report false binding sites due to experimental limitations, while computational methods tend to require data which is not available at the proteome scale. Here we present PIPE-Sites, a novel method of protein-specific binding site prediction based on pairs of re-occurring polypeptide sequences, which have been previously shown to accurately predict protein-protein interactions. PIPE-Sites operates at high specificity and requires only the sequences of query proteins and a database of known binary interactions with no binding site data, making it applicable to binding site prediction at the proteome scale. Results: PIPE-Sites was evaluated using a dataset of 265 yeast and 423 human interacting protein pairs with experimentally determined binding sites. We found that PIPE-Sites predictions were closer to the confirmed binding site than those of two existing binding site prediction methods based on domain-domain interactions, when applied to the same dataset. Finally, we applied PIPE-Sites to two datasets of 2347 yeast and 14,438 human novel interacting protein pairs predicted to interact with high confidence. An analysis of the predicted interaction sites revealed a number of protein subsequences which are highly re-occurring in binding sites and which may represent novel binding motifs. Conclusions: PIPE-Sites is an accurate method for predicting protein binding sites and is applicable at the proteome scale. Thus, PIPE-Sites could be useful for exhaustive analysis of protein binding patterns in whole proteomes as well as discovery of novel binding motifs. PIPE-Sites is available online at http://pipe-sites.cgmlab.org/.
- Full Text
- View/download PDF
7. FrontMatter.
- Published
- 2015
8. FrontMatter.
- Published
- 2017
9. FrontMatter.
- Published
- 2017
10. FPT at Work: Using Fixed Parameter Tractability to Solve Larger Instances of Hard Problems.
- Author
- Bodlaender, Hans L., Langston, Michael A., and Dehne, Frank
- Abstract
When dealing with hard computational problems (NP-complete or worse), scientists, engineers, and other users of software provided by computer scientists care most about how large a problem instance a particular method is able to solve in reasonable time. In this talk, we discuss the contributions and deficiencies of current fixed-parameter tractability methods within this context. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
11. Shortest Paths in Time-Dependent FIFO Networks.
- Author
- Dehne, Frank, Omran, Masoud, and Sack, Jörg-Rüdiger
- Subjects
- GRAPH theory, LINEAR time invariant systems, ALGORITHMS, PIECEWISE linear topology, TRAVEL time (Traffic engineering), FIRST in, first out (Accounting), COMPUTER networks
- Abstract
In this paper, we study the time-dependent shortest paths problem for two types of time-dependent FIFO networks. First, we consider networks where the availability of links, given by a set of disjoint time intervals for each link, changes over time. Here, each interval is assigned a non-negative real value which represents the travel time on the link during the corresponding interval. The resulting shortest path problem is the time-dependent shortest path problem for availability intervals ($\mathcal{TDSP}_{\mathrm{int}}$), which asks to compute all shortest paths to any (or all) destination node(s) d for all possible start times at a given source node s. Second, we study time-dependent networks where the cost of using a link is given by a non-decreasing piece-wise linear function of a real-valued argument. Here, each piece-wise linear function represents the travel time on the link based on the time when the link is used. The resulting shortest paths problem is the time-dependent shortest path problem for piece-wise linear functions ($\mathcal{TDSP}_{\mathrm{lin}}$), which asks to compute, for a given source node s and destination d, the shortest paths from s to d for all possible starting times. We present an algorithm for the $\mathcal{TDSP}_{\mathrm{lin}}$ problem that runs in time O((F+γ)(|E|+|V| log |V|)), where F is the output size (i.e., the number of linear pieces needed to represent the earliest arrival time function to d) and γ is the input size (i.e., the number of linear pieces needed to represent the local earliest arrival time functions for all links in the network). We then solve the $\mathcal{TDSP}_{\mathrm{int}}$ problem in O(λ(|E|+|V| log |V|)) time by reducing it to an instance of the $\mathcal{TDSP}_{\mathrm{lin}}$ problem. Here, λ denotes the total number of availability intervals in the entire network. Both methods improve significantly on the previously known algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
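For a single fixed departure time (a much easier special case than the all-start-times problems the paper solves), the FIFO property already makes plain label-setting Dijkstra correct: arriving earlier at a node never hurts, because arrival functions are non-decreasing. A minimal sketch, with an invented graph encoding in which each edge carries an arrival-time function:

```python
import heapq

def td_dijkstra(graph, source, t0):
    """Earliest arrival times from `source` when departing at time t0.

    graph[u] is a list of (v, arrival) pairs, where arrival(t) is the time
    we reach v after leaving u at time t. FIFO (arrival non-decreasing in t)
    is what makes this label-setting approach correct.
    """
    earliest = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > earliest.get(u, float("inf")):
            continue  # stale heap entry
        for v, arrival in graph.get(u, []):
            tv = arrival(t)
            if tv < earliest.get(v, float("inf")):
                earliest[v] = tv
                heapq.heappush(heap, (tv, v))
    return earliest

# Travel time on edge a->b jumps from 1 to 3 at t = 5 ("rush hour").
graph = {
    "a": [("b", lambda t: t + (1 if t < 5 else 3)), ("c", lambda t: t + 2)],
    "c": [("b", lambda t: t + 1)],
}
```

Departing at t = 0 the direct edge wins; departing at t = 5 the detour via c ties it, showing why the answer genuinely depends on the start time.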
12. PnP: sequential, external memory, and parallel iceberg cube computation.
- Author
- Ying Chen, Dehne, Frank, Eavis, Todd, and Rau-Chaplin, Andrew
- Subjects
- QUERYING (Computer science), SEARCH algorithms, SYSTEMS design, COMPUTER science, SYSTEM analysis, SYSTEMS development
- Abstract
We present "Pipe 'n Prune" (PnP), a new hybrid method for iceberg-cube query computation. The novelty of our method is that it achieves a tight integration of top-down piping for data aggregation with bottom-up a priori data pruning. A particular strength of PnP is that it is efficient for all of the following scenarios: (1) Sequential iceberg-cube queries, (2) External memory iceberg-cube queries, and (3) Parallel iceberg-cube queries on shared-nothing PC clusters with multiple disks. We performed an extensive performance analysis of PnP for the above scenarios with the following main results: In the first scenario PnP performs very well for both dense and sparse data sets, providing an interesting alternative to BUC and Star-Cubing. In the second scenario PnP shows a surprisingly efficient handling of disk I/O, with an external memory running time that is less than twice the running time for full in-memory computation of the same iceberg-cube query. In the third scenario PnP scales very well, providing near linear speedup for a larger number of processors and thereby solving the scalability problem observed for the parallel iceberg-cubes proposed by Ng et al. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
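PnP itself combines top-down piping with bottom-up pruning; the bottom-up a priori pruning half can be illustrated with a minimal BUC-style sketch (not the PnP algorithm): a cell whose count falls below the iceberg threshold is never refined, because refining a cell can only shrink counts. The `buc` function and the `"*"` wildcard encoding are assumptions of this sketch.

```python
def buc(rows, dims, minsup, prefix=(), out=None):
    """Bottom-up iceberg cube: all cells with count >= minsup.

    rows is a list of tuples, dims the (ordered) column indices still to
    process; a cell is a tuple with one entry per dimension, either a value
    or "*" (aggregated away).
    """
    if out is None:
        out = {}
    if not dims:
        out[prefix] = len(rows)
        return out
    d, rest = dims[0], dims[1:]
    buc(rows, rest, minsup, prefix + ("*",), out)   # aggregate dimension d away
    parts = {}
    for r in rows:                                   # partition on dimension d
        parts.setdefault(r[d], []).append(r)
    for val, part in parts.items():
        if len(part) >= minsup:   # a priori pruning: refinements cannot grow
            buc(part, rest, minsup, prefix + (val,), out)
    return out

rows = [("a", "x"), ("a", "y"), ("b", "x"), ("a", "x")]
cube = buc(rows, [0, 1], 2)
```

Here the entire "b" branch and the ("*", "y") cell are pruned without ever being enumerated, which is the source of the speedups iceberg methods report.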
13. Scalable Parallel Algorithms for FPT Problems.
- Author
- Faisal N. Abu-Khzam, Michael A. Langston, Pushkar Shanbhag, and Christopher T. Symons
- Subjects
- ALGORITHMS, COMPUTATIONAL biology, COMPUTER science, ELECTRONIC systems
- Abstract
Algorithmic methods based on the theory of fixed-parameter tractability are combined with powerful computational platforms to launch systematic attacks on combinatorial problems of significance. As a case study, optimal solutions to very large instances of the NP-hard vertex cover problem are computed. To accomplish this, an efficient sequential algorithm and various forms of parallel algorithms are devised, implemented, and compared. The importance of maintaining a balanced decomposition of the search space is shown to be critical to achieving scalability. Target problems need only be amenable to reduction and decomposition. Applications in high throughput computational biology are also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2006
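The classic sequential starting point for FPT vertex cover work like this is the O(2^k · m) bounded search tree: every edge must have at least one endpoint in the cover, so branch on the two endpoints of any remaining edge. A minimal sketch (the paper's contribution is parallelizing and load-balancing this search tree, which is not shown here):

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.

    Bounded search tree: pick any uncovered edge (u, v); one of u, v must be
    in the cover, so try both, decrementing the budget k each time.
    """
    if not edges:
        return set()          # nothing left to cover
    if k == 0:
        return None           # budget exhausted but edges remain
    u, v = edges[0]
    for w in (u, v):
        remaining = [(a, b) for (a, b) in edges if a != w and b != w]
        cover = vertex_cover(remaining, k - 1)
        if cover is not None:
            return cover | {w}
    return None
```

The running time depends exponentially only on k, not on the graph size, which is exactly the fixed-parameter tractability the records above exploit.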
14. The cgmCUBE project: Optimizing parallel data cube generation for ROLAP.
- Author
- Dehne, Frank, Eavis, Todd, and Rau-Chaplin, Andrew
- Subjects
- OLAP technology, ONLINE databases, DECISION support systems, ONLINE data processing, MATHEMATICAL optimization, MULTIDIMENSIONAL databases, DATA warehousing
- Abstract
On-line Analytical Processing (OLAP) has become one of the most powerful and prominent technologies for knowledge discovery in VLDB (Very Large Database) environments. Central to the OLAP paradigm is the data cube, a multi-dimensional hierarchy of aggregate values that provides a rich analytical model for decision support. Various sequential algorithms for the efficient generation of the data cube have appeared in the literature. However, given the size of contemporary data warehousing repositories, multi-processor solutions are crucial for the massive computational demands of current and future OLAP systems. In this paper we discuss the cgmCUBE Project, a multi-year effort to design and implement a multi-processor platform for data cube generation that targets the relational database model (ROLAP). More specifically, we discuss new algorithmic and system optimizations relating to (1) a thorough optimization of the underlying sequential cube construction method and (2) a detailed and carefully engineered cost model for improved parallel load balancing and faster sequential cube construction. These optimizations were key in allowing us to build a prototype that is able to produce data cube output at a rate of over one terabyte per hour. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
15. Guest Editors' Foreword.
- Author
- Cormen, T., Dehne, F., Fraigniaud, P., and Matias, Y.
- Subjects
- PARALLEL algorithms, PERIODICAL publishing
- Abstract
No Abstract. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
16. Parallel ROLAP Data Cube Construction on Shared-Nothing Multiprocessors.
- Author
- Ying Chen, Dehne, Frank, Eavis, Todd, and Rau-Chaplin, Andrew
- Subjects
- DATA warehousing, OLAP technology, HIGH performance computing, DATABASE management, MANAGEMENT information systems
- Abstract
The pre-computation of data cubes is critical to improving the response time of On-Line Analytical Processing (OLAP) systems and can be instrumental in accelerating data mining tasks in large data warehouses. In order to meet the need for improved performance created by growing data sizes, parallel solutions for generating the data cube are becoming increasingly important. This paper presents a parallel method for generating data cubes on a shared-nothing multiprocessor. Since no (expensive) shared disk is required, our method can be used on low-cost Beowulf-style clusters consisting of standard PCs with local disks connected via a data switch. Our approach uses a ROLAP representation of the data cube where views are stored as relational tables. This allows for tight integration with current relational database technology. We have implemented our parallel shared-nothing data cube generation method and evaluated it on a PC cluster, exploring relative speedup, local vs. global schedule trees, data skew, cardinality of dimensions, data dimensionality, and balance tradeoffs. For an input data set of 2,000,000 rows (72 Megabytes), our parallel data cube generation method achieves close to optimal speedup, generating a full data cube of ≈227 million rows (5.6 Gigabytes) on a 16-processor cluster in under 6 minutes. For an input data set of 10,000,000 rows (360 Megabytes), our parallel method, running on a 16-processor PC cluster, created a data cube consisting of ≈846 million rows (21.7 Gigabytes) in under 47 minutes. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
17. Bulk Synchronous Parallel Algorithms for the External Memory Model.
- Author
- Dehne, Frank, Dittrich, Wolfgang, Hutchinson, David, and Maheshwari, Anil
- Subjects
- COMPUTER storage devices, PARALLEL algorithms
- Abstract
Blockwise access to data is a central theme in the design of efficient external memory (EM) algorithms. A second important issue, when more than one disk is present, is fully parallel disk I/O. In this paper we present a simple, deterministic simulation technique which transforms certain Bulk Synchronous Parallel (BSP) algorithms into efficient parallel EM algorithms. It optimizes blockwise data access and parallel disk I/O and, at the same time, utilizes multiple processors connected via a communication network or shared memory. We obtain new improved parallel EM algorithms for a large number of problems including sorting, permutation, matrix transpose, several geometric and GIS problems including three-dimensional convex hulls (two-dimensional Voronoi diagrams), and various graph problems. We show that certain parallel algorithms known for the BSP model can be used to obtain EM algorithms that meet well known I/O complexity lower bounds for various problems, including sorting. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
18. Parallelizing the Data Cube.
- Author
- Dehne, Frank, Eavis, Todd, Hambrusch, Susanne, and Rau-Chaplin, Andrew
- Abstract
This paper presents a general methodology for the efficient parallelization of existing data cube construction algorithms. We describe two different partitioning strategies, one for top-down and one for bottom-up cube algorithms. Both partitioning strategies assign subcubes to individual processors in such a way that the loads assigned to the processors are balanced. Our methods reduce interprocessor communication overhead by partitioning the load in advance instead of computing each individual group-by in parallel. Our partitioning strategies create a small number of coarse tasks. This allows for sharing of prefixes and sort orders between different group-by computations. Our methods enable code reuse by permitting the use of existing sequential (external memory) data cube algorithms for the subcube computations on each processor. This supports the transfer of optimized sequential data cube code to a parallel setting. The bottom-up partitioning strategy balances the number of single attribute external memory sorts made by each processor. The top-down strategy partitions a weighted tree in which weights reflect algorithm-specific cost measures like estimated group-by sizes. Both partitioning approaches can be implemented on any shared-disk type parallel machine composed of p processors connected via an interconnection fabric and with access to a shared parallel disk array. We have implemented our parallel top-down data cube construction method in C++ with the MPI message passing library for communication and the LEDA library for the required graph algorithms. We tested our code on an eight processor cluster, using a variety of different data sets with a range of sizes, dimensions, density, and skew. Comparison tests were performed on a SunFire 6800. The tests show that our partitioning strategies generate a close to optimal load balance between processors. The actual run times observed show an optimal speedup of p. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
19. Editorial.
- Published
- 1986
- Full Text
- View/download PDF
20. Translation separability of sets of polygons.
- Author
- Dehne, Frank and Sack, Jörg-Rüdiger
- Abstract
We consider the problem of separating a set of polygons by a sequence of translations (one such collision-free translation motion for each polygon). If all translations are performed in a common direction the separability problem so obtained has been referred to as the uni-directional separability problem; for different translation directions, the more general multi-directional separability problem arises. The class of such separability problems has been studied previously and arises e.g. in computer graphics and robotics. Existing solutions to the uni-directional problem typically assume the objects to have a certain predetermined shape (e.g., rectangular or convex objects), or to have a direction of separation already available. Here we show how to compute all directions of unidirectional separability for sets of arbitrary simple polygons. The problem of determining whether a set of polygons is multi-directionally separable had been posed by G.T. Toussaint. Here we present an algorithm for solving this problem which, in addition to detecting whether or not the given set is multidirectionally separable, also provides an ordering in which to separate the polygons. In case that the entire set is not multi-directionally separable, the algorithm will find the largest separable subset. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
21. Optical clustering.
- Author
- Dehne, Frank
- Abstract
This paper presents a definition of 'optical clusters' which is derived from the concept of optical resolution. The clustering problem (induced by this definition) is transformed such that the application of well-known Computational Geometry methods yields efficient solutions. One result (which can be extended to different classes of objects and metrics) is the following: Given a set S of N disjoint line segments in E. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
22. Randomized parallel list ranking for distributed memory multiprocessors.
- Author
- Dehne, Frank and Song, Siang W.
- Subjects
- ALGORITHMS, MULTIPROCESSORS
- Abstract
Focuses on a parallel list ranking algorithm for distributed memory multiprocessors. Description of a simple version; Improved version requiring high probability; Reid-Miller's algorithm as a list ranking implementation; Practical relevance of the results.
- Published
- 1997
- Full Text
- View/download PDF
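The textbook parallel list ranking primitive that randomized algorithms like this one improve upon is pointer jumping: in each synchronous round every node adds its successor's rank to its own and jumps its pointer two hops ahead, so O(log n) rounds suffice. Below is a sequential simulation of those rounds as a sketch; `list_rank` and the tail-points-to-itself convention are choices of this sketch, not the paper's randomized algorithm.

```python
def list_rank(succ):
    """rank[i] = number of links from node i to the tail (tail points to itself)."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    for _ in range(n.bit_length()):   # ~log2(n) pointer-jumping rounds suffice
        # Compute both arrays from the old values, simulating one synchronous
        # parallel step; extra rounds are harmless once pointers hit the tail.
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```

Each round doubles the distance a pointer spans, which is why the round count is logarithmic even though the list itself cannot be traversed in parallel directly.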
23. A workbench for computational geometry.
- Author
- Epstein, P., Kavanagh, J., Knight, A., May, J., Nguyen, T., and Sack, J.
- Abstract
We describe the design and implementation of a workbench for computational geometry. We discuss issues arising from this implementation, including comparisons of different algorithms for constant factors, code size, and ease of implementation. The workbench is not just a library of computational geometry algorithms and data structures, but is designed as a geometrical programming environment, providing tools for: creating, editing, and manipulating geometric objects; demonstrating and animating geometric algorithms; and, most importantly, for implementing and maintaining complex geometric algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
24. Finding a minimal cover for binary images: An optimal parallel algorithm.
- Author
- Moitra, Dipen
- Abstract
Given a black-and-white image, represented by an array of √n × √n binary-valued pixels, we wish to cover the black pixels with a minimal set of (possibly overlapping) maximal squares. It was recently shown that obtaining a minimum square cover for a polygonal binary image with holes is NP-hard. We derive an optimal parallel algorithm for the minimal square cover problem, which for any desired computation time T in [log n, n] runs on an EREW-PRAM with (n/T) processors. The cornerstone of our algorithm is a novel data structure, the cover graph, which compactly represents the covering relationships between the maximal squares of the image. The size of the cover graph is linear in the number of pixels. This algorithm has applications to problems in VLSI mask generation, incremental update of raster displays, and image compression. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
25. Computing convexity properties of images on a pyramid computer.
- Author
- Miller, Russ and Stout, Quentin
- Abstract
We present efficient parallel algorithms for using a pyramid computer to determine convexity properties of digitized black/white pictures and labeled figures. Algorithms are presented for deciding convexity, identifying extreme points of convex hulls, and using extreme points in a variety of fashions. For a pyramid computer with a base of n simple processing elements arranged in an n × n square, the running times of the algorithms range from Θ(log n) to find the extreme points of a convex figure in a digitized picture, to Θ( n ) to find the diameter of a labeled figure, Θ( n log n) to find the extreme points of every figure in a digitized picture, to Θ( n ) to find the extreme points of every labeled set of processing elements. Our results show that the pyramid computer can be used to obtain efficient solutions to nontrivial problems in image analysis. We also show the sensitivity of efficient pyramid-computer algorithms to the rate at which essential data can be compressed. Finally, we show that a wide variety of techniques are needed to make full and efficient use of the pyramid architecture. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
26. Parallel computation of distance transforms.
- Author
- Schwarzkopf, Otfried
- Abstract
Distance transforms are an important computational tool for the processing of binary images. For an n × n image, distance transforms can be computed in time O(n) on a mesh-connected computer and in polylogarithmic time on hypercube-related structures. We investigate the possibilities of computing distance transforms in polylogarithmic time on the pyramid computer and the mesh of trees. For the pyramid, we obtain a polynomial lower bound using a result by Miller and Stout, so we turn our attention to the mesh of trees. We give a very simple O(log n) algorithm for the distance transform with respect to the L-metric, an O(log n) algorithm for the transform with respect to the L-metric, and find that the Euclidean metric is much more difficult. Based on evidence from number theory, we conjecture the impossibility of computing the Euclidean distance transform in polylogarithmic time on a mesh of trees. Instead, we approximate the distance transform up to a given error. This works for any L-metric and takes time O(log n). [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
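As background for what is being parallelized here, the L1 distance transform itself is easy sequentially: two sweeps over the image suffice, because any shortest L1 path to a feature pixel decomposes into a top/left-moving part and a bottom/right-moving part. A sketch of that sequential baseline (the paper's subject is polylogarithmic-time parallel versions, which look nothing like this):

```python
def l1_distance_transform(img):
    """L1 distance from every pixel to the nearest feature (value 1) pixel.

    img is a 2D list of 0/1 values containing at least one 1.
    """
    n, m = len(img), len(img[0])
    inf = n + m  # larger than any possible L1 distance inside the grid
    d = [[0 if img[i][j] else inf for j in range(m)] for i in range(n)]
    for i in range(n):               # forward pass: propagate from top/left
        for j in range(m):
            if i > 0:
                d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0:
                d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    for i in reversed(range(n)):     # backward pass: propagate from bottom/right
        for j in reversed(range(m)):
            if i < n - 1:
                d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j < m - 1:
                d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d
```

The two sweeps are inherently sequential along rows and columns, which is precisely why achieving polylogarithmic time on parallel architectures is the non-trivial question the abstract addresses.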
27. Processor-time optimal parallel algorithms for digitized images on mesh-connected processor arrays.
- Author
- Alnuweiri, Hussein and Prasanna Kumar, V.
- Abstract
We present processor-time optimal parallel algorithms for several problems on n × n digitized image arrays, on a mesh-connected array having p processors and a memory of size O( n ) words. The number of processors p can vary over the range [1, n ] while providing optimal speedup for these problems. The class of image problems considered here includes labeling the connected components of an image; computing the convex hull, the diameter, and a smallest enclosing box of each component; and computing all closest neighbors. Such problems arise in medium-level vision and require global operations on image pixels. To achieve optimal performance, several efficient data-movement and reduction techniques are developed for the proposed organization. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
28. Topological numbering of features on a mesh.
- Author
- Atallah, Mikhail, Hambrusch, Susanne, and TeWinkel, Lynn
- Abstract
Assume we are given an n × n binary image containing horizontally convex features; i.e., for each feature, the pixels in each of its rows form an interval on that row. In this paper we consider the problem of assigning topological numbers to such features, i.e., assigning a number to every feature f so that all features to the left of f in the image have a smaller number assigned to them. This problem arises in solutions to the stereo matching problem. We present a parallel algorithm to solve the topological numbering problem in O(n) time on an n × n mesh of processors. The key idea of our solution is to create a tree from which the topological numbers can be obtained even though the tree does not uniquely represent the 'to the left of' relationship of the features. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
29. Computational geometry algorithms for the systolic screen.
- Author
- Dehne, F., Hassenklover, A., Sack, J., and Santoro, N.
- Abstract
A digitized plane Π of size M is a rectangular √M × √M array of integer lattice points called pixels. A √M × √M mesh-of-processors in which each processor P represents pixel (i, j) is a natural architecture to store and manipulate images in Π; such a parallel architecture is called a systolic screen. In this paper we consider a variety of computational-geometry problems on images in a digitized plane, and present optimal algorithms for solving these problems on a systolic screen. In particular, we present O(√M)-time algorithms for determining all contours of an image; constructing all rectilinear convex hulls of an image (peeling); solving the parallel and perspective visibility problem for n disjoint digitized images; and constructing the Voronoi diagram of n planar objects represented by disjoint images, for a large class of object types (e.g., points, line segments, circles, ellipses, and polygons of constant size) and distance functions (e.g., all L metrics). These algorithms imply O(√M)-time solutions to a number of other geometric problems: e.g., rectangular visibility, separability, detection of pseudo-star-shapedness, and optical clustering. One of the proposed techniques also leads to a new parallel algorithm for determining all longest common subsequences of two words. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
30. Editor's foreword special issue on parallel algorithms for geometric problems on digitized pictures.
- Author
- Dehne, Frank
- Published
- 1991
- Full Text
- View/download PDF
31. FrontMatter.
- Published
- 2009
32. FrontMatter.
- Published
- 2008
33. FrontMatter.
- Published
- 2008
34. Computational geometry.
- Author
- Toussaint, Godfried
- Published
- 1988
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library