72 results
Search Results
2. Machine recognition of printed Chinese characters via transformation algorithms
- Author
-
Robert C. Shiau and Paul P. Wang
- Subjects
Topological property ,Structure (mathematical logic) ,Engineering ,Philosophy of design ,business.industry ,Intelligent character recognition ,Speech recognition ,Feature extraction ,Pattern recognition ,Transformation (function) ,Artificial Intelligence ,Hadamard transform ,Signal Processing ,Pattern recognition (psychology) ,Feature (machine learning) ,Artificial intelligence ,Computer Vision and Pattern Recognition ,Chinese characters ,business ,Algorithm ,Software ,Mathematics - Abstract
This paper presents some novel results concerning the recognition of single-font printed Chinese characters via the Fourier, Hadamard, and rapid transformation algorithms. The new design philosophy of a three-stage structure is believed to offer at least a suboptimal search strategy for recognizing printed Chinese characters from a dictionary of 7000–8000 characters. The transformation algorithms discussed in this paper will be used in the last two stages. Extensive experiments and simulations concerning feature extraction and the recognition of noisy or abnormal patterns have been carried out (the simulations were restricted to a 63-character subset called “Radicals”). The three transforms are compared according to their ability to recognize characters.
- Published
- 1973
3. Partitioned estimation algorithms, II: Linear estimation
- Author
-
Demetrios G. Lainiotis
- Subjects
Information Systems and Management ,Stochastic process ,Linear system ,Initialization ,Computer Science Applications ,Theoretical Computer Science ,Controllability ,Matrix (mathematics) ,symbols.namesake ,Artificial Intelligence ,Control and Systems Engineering ,symbols ,Partition (number theory) ,Observability ,Fisher information ,Algorithm ,Software ,Smoothing ,Linear filter ,Mathematics - Abstract
In a radically new approach to linear estimation, Lainiotis [33, 36–37, 52–53], using the “partition theorem” (an explicit Bayes theorem), obtained fundamentally new linear filtering and smoothing algorithms for both continuous and discrete data. The new algorithms are given as explicit, integral expressions of a “partitioned” form, in terms of decoupled forward filters. The “partitioned” algorithms were shown to be especially advantageous from a computational as well as an analytical standpoint. They are essentially based on the decomposition of the innovations into partial or conditional innovations and residuals. In this paper, the “partitioned” algorithms are shown to be the natural framework in which to study such important concepts as observability, controllability, unbiasedness, and the solution of Riccati equations. Specifically, the “partitioned” algorithms are re-examined, yielding further insight as well as several significant new results on: (a) unbiased estimation and filter initialization procedures; (b) stochastic observability and stochastic controllability; (c) the interconnection between stochastic observability, the Fisher information matrix, and the Cramér-Rao bound; (d) estimation error bounds; and, most importantly, (e) computationally effective “partitioned” solutions of time-varying matrix Riccati equations. All of these results are obtained for general, time-varying, lumped, linear systems. In addition, it is shown that previously established smoothing algorithms, such as the Meditch differential algorithm and the Kailath-Frost total innovation algorithm, are readily obtained from the “partitioned” algorithms. The properties of the “partitioned” algorithms are obtained, thoroughly examined, and compared to those of other algorithms.
- Published
- 1974
4. Numerical methods for fuzzy clustering
- Author
-
Enrique H. Ruspini
- Subjects
Information Systems and Management ,Fuzzy clustering ,Fuzzy classification ,Fuzzy set ,Type-2 fuzzy sets and systems ,Defuzzification ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Fuzzy set operations ,Fuzzy number ,Algorithm ,Software ,Membership function ,Mathematics - Abstract
In a previous paper [1] the use of the concept of fuzzy sets in clustering was proposed. The convenience of fuzzy clustering over conventional representation was then stressed. Assigning each point a degree of belongingness to each cluster provides a way of characterizing bridges, strays, and undetermined points. This is especially useful when considering scattered data. The classificatory process may be considered as the breakdown of the probability density function of the original set into the weighted sum of the component fuzzy set densities. Such decomposition should be performed so that the components really represent clusters. This is done by optimization of some functional defined over all possible fuzzy classifications of the data set. Several functionals were suggested in [1]. The bulk of this paper is concerned with numerical techniques useful in the solution of such problems. The first two functionals treated do not provide an acceptable fuzzy classification but yield good starting points for the minimization of a third functional. This last method obtains very good dichotomies but is characterized by slower convergence than the previous processes. Using that functional, a modification is suggested to obtain partitions into more than two sets. Numerous computational experiments are presented.
- Published
- 1970
5. Compression algorithms that preserve basic topological features in binary-coded patterns
- Author
-
Mariagiovanna Sami and Renato Stefanelli
- Subjects
Class (set theory) ,media_common.quotation_subject ,Binary number ,Ambiguity ,Type (model theory) ,Topology ,Set (abstract data type) ,Reduction (complexity) ,Artificial Intelligence ,Signal Processing ,Point (geometry) ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Data compression ,media_common ,Mathematics - Abstract
In this paper, some problems related to the recognition of plane, two-tone pictures (e.g. handprinted characters) are considered. It is assumed that suitable algorithms (already described elsewhere) have been applied to the original picture in order to obtain linewise, two-tone figures. A problem that arises at this point is classifying these results in a suitable way; to this purpose, it is necessary to define and perform a set of measurements such that the results obtained by applying them to figures of the same class are identical, the possible ambiguity is minimal, and the loss of information is as small as possible. In this paper, some algorithms are described that transform the figure into another one of the least possible dimensions while retaining a set of basic topological and quasi-topological characteristics of the original picture. While implicitly defining a set of measurements to be performed, the “reduction” of the figure is implemented so that several figures having the same basic characteristics give the same “reduced” figure as the final result. The considerable reduction of the figure's dimensions may furthermore make recognition simpler. Several algorithms are described and their results compared; all are of parallel type, and therefore particularly suited for hardware implementation.
- Published
- 1973
6. Statistical Determination of Certain Mathematical Constants and Functions Using Computers
- Author
-
Satya D. Dubey
- Subjects
Pseudorandom number generator ,Mathematical constants and functions ,Scientific literature ,Confidence interval ,symbols.namesake ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Euler's formula ,symbols ,Applied mathematics ,Inverse trigonometric functions ,Electronic computer ,Constant (mathematics) ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
With the availability of high-speed electronic computers it is now quite convenient to devise statistical experiments for the purpose of estimating certain mathematical constants and functions. The paper contains statistical formulas for estimating such mathematical constants and functions as π, C (Euler's constant), e, ψ⁽¹⁾(1), Γ(x), ln x, B(x, y), arctan x, ψ(p), and the polygamma functions. Statistical estimates of these quantities may be used to construct confidence intervals for these parameters. Although numerical techniques are available to approximate these mathematical quantities very satisfactorily, a statistical approach to these problems seems to deserve mention in the scientific literature. Numerical illustrations are given in the paper, which also give some indication of the effect of pseudorandom numbers on the final results. The statistical procedures considered in the paper make extensive use of a high-speed electronic computer, which may help to develop a positive attitude among theoreticians, promising students of mathematics, and others toward the role of computers in ever-expanding scientific work.
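As a rough illustration of the kind of statistical experiment described above, here is a minimal Monte Carlo sketch, not taken from the paper and with arbitrary sample sizes: π is estimated from the fraction of random points falling inside the quarter circle, and e from the mean number of uniform draws needed for their running sum to exceed 1.

```python
import random

def estimate_pi(n=200_000):
    """Monte Carlo estimate of pi: fraction of random points inside the quarter circle."""
    inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * inside / n

def estimate_e(n=100_000):
    """Monte Carlo estimate of e: mean number of U(0,1) draws whose sum first exceeds 1."""
    total = 0
    for _ in range(n):
        s, k = 0.0, 0
        while s <= 1.0:
            s += random.random()
            k += 1
        total += k
    return total / n

print(estimate_pi(), estimate_e())   # roughly 3.14 and 2.72, up to statistical error
```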
- Published
- 1966
7. Empirical tests of a theory of human acquisition of concepts for sequential patterns
- Author
-
Herbert A. Simon and Kenneth Kotovsky
- Subjects
Linguistics and Language ,Pattern language ,Series (mathematics) ,business.industry ,Minor (linear algebra) ,Extrapolation ,Experimental and Cognitive Psychology ,computer.software_genre ,Task (project management) ,Test (assessment) ,Neuropsychology and Physiological Psychology ,Artificial Intelligence ,Developmental and Educational Psychology ,Thurstone scale ,Artificial intelligence ,Construct (philosophy) ,business ,Algorithm ,computer ,Natural language processing ,Mathematics - Abstract
The paper examines a body of empirical data on subjects (Ss) performing the Thurstone Letter Series Completion task, in order to test the theory proposed by the authors in 1963 for explaining behavior on this task. The data confirm the theory in its main aspects, while indicating the need for some minor extensions and modifications. In particular, the data show that subjects first discover the periodicity of the letter series, then construct a description of the pattern, and finally use the pattern description to make an extrapolation. Most of the pattern descriptions used by Ss fall within the pattern language defined in the earlier paper.
- Published
- 1973
8. On the computational complexity of finite functions and semigroup multiplication
- Author
-
X. Cheng and Heng-Da Cheng
- Subjects
Discrete mathematics ,Information Systems and Management ,Group (mathematics) ,Semigroup ,Function (mathematics) ,Upper and lower bounds ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Multiplication ,Function composition ,Unit (ring theory) ,Realization (systems) ,Algorithm ,Software ,Mathematics - Abstract
In a prior paper [5] we have given a lower bound on the time to multiply in a group using a circuit of unit-delay elements with limited fan-in. Also in that paper was a circuit realization of group multiplication requiring time at most one unit greater than the lower bound. In the present work we generalize that circuit construction method so as to render it applicable to any function f: X₁ × X₂ → Y, where X₁ and X₂ are finite sets. We then examine the optimality of the method when S is a finite semigroup and f: S × S → S is semigroup multiplication.
- Published
- 1970
9. Location of the Maximum on Unimodal Surfaces
- Author
-
Donald J. Newman
- Subjects
Computation ,Dimension (graph theory) ,Function (mathematics) ,Unimodality ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,A priori and a posteriori ,Order (group theory) ,Algorithm ,Software ,Information Systems ,Mathematics ,Variable (mathematics) - Abstract
Pinning down the maximum of a function of one or more variables is a basic computing problem. Without further a priori information regarding the nature of the function, of course, the problem is not feasible computationally: if the function is permitted to oscillate infinitely often, then no number of evaluations can give information regarding its maximum. J. Kiefer, in his paper, treats the one-dimensional problem and shows that the correct a priori assumption regarding the function is that of “unimodality.” He then gives the complete and exact optimal procedure within this framework. This paper gives what the author believes is the correct a priori setting for functions of several variables, analogous to unimodality in one dimension. The exactness of Kiefer's result is not obtained; instead, the correct procedures are determined to within “order of magnitude.”
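Kiefer's one-dimensional result concerns the optimal sequential search for the maximum of a unimodal function (Fibonacci search); the closely related golden-section search is sketched below purely for orientation. It is not the several-variable procedure of this paper, and the test function and tolerance are illustrative assumptions.

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Locate the maximizer of a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2                       # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)     # two interior probe points
    while (b - a) > tol:
        if f(c) > f(d):          # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_max(lambda x: -(x - 1.3) ** 2, 0.0, 3.0))   # about 1.3
```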
- Published
- 1965
10. Statistical Complexity of Algorithms for Boolean Function Minimization
- Author
-
Gianfranco R. Putzolu and Franco Mileto
- Subjects
Parity function ,Variance (accounting) ,Prime (order theory) ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Maximum satisfiability problem ,Boolean expression ,Minification ,Circuit minimization for Boolean functions ,Boolean function ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
The first problem in two-level Boolean minimization is the determination of the prime k-cubes. This paper is concerned with estimating the statistical complexity of some well-known algorithms which solve this problem. Formulas are given for the average number of comparison operations among k-cubes occurring in Quine's method and in McCluskey's method; these quantities provide indications of the average execution time of computer programs based on the corresponding algorithms. Numerical values are given and commented on. Formulas are also obtained for the variance of the number of k-cubes and the variance of the number of cubes of a Boolean function; in fact the calculation of these quantities is closely related to that of the average number of comparison operations among k-cubes. These variances give an idea of the probable error made by using the corresponding average values (obtained in a previous paper by the authors) to make forecasts. It turns out that this error is quite small.
- Published
- 1965
11. An investigation into the consistency of marking examination scripts in B.Sc. Part I in mechanical engineering
- Author
-
B. J. Hill
- Subjects
business.industry ,Consistency (statistics) ,Artificial intelligence ,business ,computer.software_genre ,computer ,Algorithm ,Natural language processing ,Standard deviation ,Education ,Mathematics - Abstract
Copies of the unmarked scripts of ten candidates in the 1970 B.Sc. (Eng.) Part I paper on Fluid Mechanics were marked by eight separate markers, each working to the same marking schedule. For the whole paper (maximum 100 marks) the standard deviation of a candidate's marks about his mean mark fell within the range of 3 to 11 marks. If the results of this investigation are typical, the standard deviation of the marks aggregated over N equally weighted examinations is 5√N marks and is independent of the candidate's mean mark.
- Published
- 1972
12. On Euclid's Algorithm and the Theory of Subresultants
- Author
-
Joseph F. Traub and W. S. Brown
- Subjects
Discrete mathematics ,Polynomial ,Sequence ,Relation (database) ,Fundamental theorem ,Rational function ,Algebra ,Number theory ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Computer Science::Symbolic Computation ,Remainder ,Algorithm ,Sturm's theorem ,Software ,Information Systems ,Mathematics - Abstract
A key ingredient of systems which perform symbolic computer manipulation of multivariate rational functions is an efficient algorithm for calculating polynomial greatest common divisors. Euclid's algorithm cannot be used directly because of problems with coefficient growth. The search for better methods leads naturally to the theory of subresultants. This paper presents an elementary treatment of the theory of subresultants, and examines the relationship of the subresultants of a given pair of polynomials to their polynomial remainder sequence as determined by Euclid's algorithm. This relation is expressed in the fundamental theorem of the paper. The results are essentially the same as those of Collins, but the presentation is briefer, simpler, and somewhat more general. The fundamental theorem finds further application in the proof that the modular algorithm for polynomial GCD computation terminates.
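The coefficient growth that motivates subresultant theory is easy to observe by running the naive Euclidean algorithm on polynomials with exact rational arithmetic. A minimal sketch; the polynomial pair is a well-known textbook example of the phenomenon, not one drawn from this paper.

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of a divided by b; dense coefficient lists, highest degree first."""
    a = a[:]
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i, c in enumerate(b):
            a[i] -= q * c
        a.pop(0)
    while a and a[0] == 0:       # strip leading zeros
        a.pop(0)
    return a or [Fraction(0)]

def remainder_sequence(a, b):
    """Polynomial remainder sequence produced by the naive Euclidean algorithm."""
    seq = [a, b]
    while len(seq[-1]) > 1:      # stop once the remainder is a constant
        seq.append(poly_rem(seq[-2], seq[-1]))
    return seq

# A relatively prime pair whose remainders show dramatic coefficient growth.
a = [Fraction(c) for c in [1, 0, 1, 0, -3, -3, 8, 2, -5]]
b = [Fraction(c) for c in [3, 0, 5, 0, -4, -9, 21]]
for r in remainder_sequence(a, b):
    print(r)
```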
- Published
- 1971
13. The Theory of Search. I. Kinematic Bases
- Author
-
B. O. Koopman
- Subjects
business.industry ,Relative motion ,Probabilistic behavior ,Kinematics ,Management Science and Operations Research ,Sonar ,Computer Science Applications ,law.invention ,Feature (computer vision) ,Simple (abstract algebra) ,law ,Artificial intelligence ,Radar ,business ,Algorithm ,Mathematics - Abstract
In the conventional search situation there are three general features. I. The kinematic bases, involving the positions, geometrical configurations, and motions of the searchers and targets, with particular reference to the statistics of their contacts and the probabilities of their reaching various specified relative positions. II. The probabilistic behavior of the instrument of detection (eye, radar, sonar, etc.) when making a given passage relative to the target. III. The over-all result: the probability of contact under general stated conditions, along with the possibility of optimizing the results by improving the methods of directing the search. The present paper is devoted to the first feature and is intended to provide the kinematic bases of the theory. It studies fairly simple cases of relative motion and draws inspiration from the kinetic theory of matter. The second paper, entitled Target Detection, will deal with the second feature. It is based on the methods of survival probabilities familiar in actuarial studies and is, it must be emphasized, confined to the case of independent trials. A third paper will deal with an important aspect of the third feature and will be entitled The Optimum Distribution of Searching Effort. It represents the continuous extension of a paper of similar title published by the author in Operations Research in February 1953.
- Published
- 1956
14. Image Quality in Sampled Data Systems
- Author
-
Sr. Schade, Frederick A. Rosell, Richard Legault, A. Fenner Milton, H Schnitzler Otto, Lucien M. Biberman, L Harry, and D Snyder Alvin
- Subjects
Operator performance ,business.industry ,Image quality ,Observer (special relativity) ,Transfer function ,Sampling (signal processing) ,Optical transfer function ,Cutoff ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Sampled data systems ,Mathematics - Abstract
The paper deals with aliasing, an important effect of one- and two-dimensional sampling on image quality. Aliasing changes the normal criteria of utility of the modulation transfer function (MTF) as a measure of system quality. Aliasing can be eliminated by letting the MTF fall to zero at one-half the sampling frequency. This, of course, markedly reduces the signal-to-noise ratio at the display, SNR(D), near cutoff and thus reduces operator performance. Several factors control the amount of information an observer extracts from an image and the rate at which he extracts it. The paper presents the results of a series of experimental programs relating observer performance to the modulation transfer function area (MTFA) and to the SNR(D), and derives the relationship between MTFA and SNR(D). The paper discusses the tradeoffs as they are presently understood.
- Published
- 1971
15. NON-OPTIMALITY OF LIKELIHOOD RATIO TESTS FOR SEQUENTIAL DETECTION OF SIGNALS IN GAUSSIAN NOISE
- Author
-
Bennett Eisenberg
- Subjects
symbols.namesake ,Gaussian noise ,Noise (signal processing) ,business.industry ,Likelihood-ratio test ,symbols ,Pattern recognition ,Artificial intelligence ,business ,Algorithm ,Signal ,Mathematics - Abstract
This paper is motivated by two papers by Selin [1964, 1965] on the problem of the sequential detection of signals in normal noise. Selin's problem was to construct a decision procedure with given error probabilities and minimal expected time for the hypotheses signal present and signal absent. Here, the behaviour of the likelihood ratio test used by Selin is re-examined. It is shown that this test need not be optimal.
- Published
- 1971
16. Computer generation of optimized subroutines
- Author
-
Harry H. Denman
- Subjects
Chebyshev polynomials ,Polynomial ,Mathematical optimization ,Computer science ,Subroutine ,Interval (mathematics) ,Computational science ,Matrix polynomial ,Reciprocal polynomial ,symbols.namesake ,Artificial Intelligence ,Degree of a polynomial ,Limit (mathematics) ,Wilkinson's polynomial ,Mathematics ,Lagrange polynomial ,Function (mathematics) ,Hardware and Architecture ,Control and Systems Engineering ,Computer Science::Mathematical Software ,symbols ,Chebyshev nodes ,Algorithm ,Software ,Information Systems ,Generator (mathematics) - Abstract
Ideally, subroutines used for function evaluation should be both flexible (suitable for use over arbitrary intervals as well as accurate to any desired order) and fast. Often these characteristics conflict, and, as an alternative, a number of subroutines for the same function may be kept in the Subroutine Library, with the programmer selecting the one closest to his needs. Among the fastest subroutines are those using fixed-degree polynomial approximations, where the fixed number of terms has been selected so that the error does not exceed a certain limit throughout the interval. This paper describes a subroutine generator program which, for a number of functions, produces a fixed-degree polynomial approximation subroutine which is adapted to the particular interval and accuracy needed by the programmer (within wide limits). The generation of the polynomial is based on the use of Chebyshev polynomials, and therefore the approximation is optimum in the sense of the Chebyshev polynomial approximations (for a given degree polynomial, very nearly minimum maximum error over the interval).
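A rough sketch of the underlying idea, not the generator program itself: sample the function at Chebyshev nodes of the requested interval, form the Chebyshev expansion coefficients, and keep only the degree needed for the error limit. The target function, interval, and degree below are illustrative assumptions.

```python
import numpy as np

def cheb_coeffs(f, a, b, degree):
    """Chebyshev expansion coefficients of f on [a, b], sampled at Chebyshev nodes."""
    k = np.arange(degree + 1)
    theta = (k + 0.5) * np.pi / (degree + 1)
    x = np.cos(theta)                                  # nodes on [-1, 1]
    fx = f(0.5 * (b - a) * x + 0.5 * (b + a))          # map nodes onto [a, b]
    c = 2.0 / (degree + 1) * np.array(
        [np.sum(fx * np.cos(j * theta)) for j in range(degree + 1)])
    c[0] /= 2.0
    return c

def cheb_eval(c, a, b, x):
    """Evaluate the truncated expansion, mapping x in [a, b] back to [-1, 1]."""
    t = (2.0 * x - (a + b)) / (b - a)
    return np.polynomial.chebyshev.chebval(t, c)

c = cheb_coeffs(np.exp, 0.0, 1.0, 6)                   # degree chosen to meet an error target
xs = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(cheb_eval(c, 0.0, 1.0, xs) - np.exp(xs))))   # near-minimax error
```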
- Published
- 1959
17. An Improved Algorithm for the Generation of Nonparametric Curves
- Author
-
W.J. Lennon, B.W. Jordan, and B.D. Holm
- Subjects
Current (mathematics) ,business.industry ,Improved algorithm ,Nonparametric statistics ,Machine learning ,computer.software_genre ,Theoretical Computer Science ,Decision variables ,Computational Theory and Mathematics ,Hardware and Architecture ,Plotter ,Numerical control ,Point (geometry) ,Artificial intelligence ,business ,Representation (mathematics) ,Algorithm ,computer ,Software ,Mathematics - Abstract
Generation of curves using incremental steps along fixed coordinate axes is important in such diverse areas as computer displays, digital plotters, and numerical control. Direct implementation of a nonparametric representation of a curve, f(x, y) = 0, has been shown to be attractive for digital generation. The algorithm in this paper is developed directly from the nonparametric representation of the curve, allows steps to be taken to any point adjacent to the current one, and uses decision variables closely related to an error criterion. Consequently, the algorithm is more general and produces curves closer to the actual curve than do previously reported algorithms.
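The idea of stepping from the current raster point to an adjacent one under an error criterion on f(x, y) can be caricatured by a greedy tracer that always moves to the unvisited neighbour with the smallest |f|. This toy sketch is not the paper's algorithm (which maintains incremental decision variables instead of re-evaluating f at every neighbour), but it shows the nonparametric stepping idea on a circle.

```python
def trace_curve(f, start, steps):
    """Greedy raster trace of f(x, y) = 0: at each step move to the unvisited
    8-neighbour with the smallest |f|, i.e. the smallest error criterion."""
    x, y = start
    visited = {start}
    path = [start]
    for _ in range(steps):
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0) and (x + dx, y + dy) not in visited]
        if not neighbours:
            break
        x, y = min(neighbours, key=lambda p: abs(f(*p)))
        visited.add((x, y))
        path.append((x, y))
    return path

# Trace part of the circle x^2 + y^2 - 100 = 0, starting at (10, 0).
print(trace_curve(lambda x, y: x * x + y * y - 100, (10, 0), 10))
```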
- Published
- 1973
18. Unsupervised root locus gain selection
- Author
-
Charles J. Kocourek and Richard A. Northouse
- Subjects
business.industry ,Root locus ,Machine learning ,computer.software_genre ,Computer Science Applications ,ComputingMethodologies_PATTERNRECOGNITION ,Control and Systems Engineering ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Point (geometry) ,Artificial intelligence ,business ,computer ,Algorithm ,Selection (genetic algorithm) ,Mathematics - Abstract
This paper describes an algorithm for the unsupervised selection of gains for a Root Locus analysis. This algorithm includes breakaway point gains, critical gains, and additive increments.
- Published
- 1974
19. Experiments with some cluster analysis algorithms
- Author
-
Chin-Liang Chang, Richard C. T. Lee, and James R. Slagle
- Subjects
Discrete mathematics ,Spanning tree ,Shortest-path tree ,Correlation clustering ,Single-linkage clustering ,Minimum spanning tree ,Combinatorics ,Distributed minimum spanning tree ,Artificial Intelligence ,Euclidean minimum spanning tree ,Signal Processing ,Reverse-delete algorithm ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Mathematics - Abstract
In this paper, we shall show the following experimental results: (1) the one-dimensional clustering algorithm advocated by Slagle and Lee (1) can be generalized to the n-dimensional case, n > 1; (2) if a set of points in some n-space (n > 1) are linearly ordered through the short spanning path algorithm, then this set of points can be considered as occupying a one-dimensional space and the original n-dimensional clustering problem can be viewed as a one-dimensional clustering problem; (3) a short spanning path usually contains as much information as a minimal spanning tree; (4) the one-dimensional clustering algorithm can be used to find the long links in a short spanning path or a minimal spanning tree. These long links have to be broken to obtain clusters.
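A minimal sketch of result (4) applied to a minimal spanning tree, using SciPy; the fixed cluster count and the toy data are illustrative assumptions, whereas the paper finds the long links with its one-dimensional clustering algorithm rather than by specifying the number of clusters in advance.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_clusters(points, n_clusters):
    """Cluster by deleting the longest links of the minimal spanning tree."""
    d = squareform(pdist(points))
    mst = minimum_spanning_tree(d).toarray()
    edges = np.argwhere(mst > 0)
    order = np.argsort(mst[edges[:, 0], edges[:, 1]])[::-1]   # longest edges first
    for i, j in edges[order[: n_clusters - 1]]:               # break the long links
        mst[i, j] = 0
    _, labels = connected_components(mst, directed=False)
    return labels

pts = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
print(mst_clusters(pts, 2))   # two groups of three points
```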
- Published
- 1974
20. Optimization in non-hierarchical clustering
- Author
-
Edwin Diday
- Subjects
Fuzzy clustering ,Theoretical computer science ,Single-linkage clustering ,Correlation clustering ,Partition (database) ,Hierarchical clustering ,Artificial Intelligence ,Signal Processing ,Pattern recognition (psychology) ,Computer Vision and Pattern Recognition ,Cluster analysis ,Finite set ,Algorithm ,Software ,Mathematics - Abstract
Algorithms which are operationally efficient and which give a good partition of a finite set produce solutions that are not necessarily optimal. The main aim of this paper is a synthetic study of the properties of optimality in spaces formed by partitions of a finite set. We formalize, and take as a model for that study, a family of particularly efficient techniques of the “cluster centers” type. The proposed algorithm operates on groups of points, or “kernels”; these kernels adapt and evolve into interesting clusters. After having developed the notion of “strong” and “weak” patterns, and the computational aspects, we illustrate the different results with an artificial example and with two applications: one in mineral geology, the other in medicine, to determine biological profiles.
- Published
- 1974
21. On the most rational algorithm of recognition
- Author
-
S Guiaşu
- Subjects
business.industry ,Information Theory ,Complex system ,General Medicine ,Models, Psychological ,Information theory ,Machine learning ,computer.software_genre ,Pattern Recognition, Automated ,Quantitative measure ,Computer Science::Sound ,Computer Science::Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,computer ,Mathematics ,Problem Solving - Abstract
The aim of the present paper is to present some applications of information theory to the problem of recognition algorithms. In all these considerations, a qualitative-quantitative measure of information is used to give a generalisation of Landa's algorithm of recognition.
- Published
- 1968
22. Uniform Random Number Generators
- Author
-
M. Donald MacLaren and George Marsaglia
- Subjects
Pseudorandom number generator ,Random number generation ,Monte Carlo method ,Multiplicative function ,Probability theory ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Linear congruential generator ,Algorithm ,Software ,Randomness ,Information Systems ,Statistical hypothesis testing ,Mathematics - Abstract
This paper discusses the testing of methods for generating uniform random numbers in a computer: the commonly used multiplicative and mixed congruential generators, as well as two alternative methods. The tests proposed here are more stringent than those usually applied, because the usual tests for randomness have passed several of the commonly used procedures which subsequently gave poor results in actual Monte Carlo calculations. The principal difficulty seems to be that certain simple functions of n-tuples of uniform random numbers do not have the distribution that probability theory predicts. Two alternative generating methods are described, one of them using a table of uniform numbers, the other combining two congruential generators. Both of these methods passed the tests, whereas the conventional multiplicative and mixed congruential methods did not.
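One way to combine two congruential generators is through a shuffle table: the first generator keeps the table filled, while the second picks which entry to emit next, breaking up the regularities of either generator alone. A minimal sketch in that spirit; the multipliers, moduli, and table size below are illustrative assumptions, not the parameters used in the paper.

```python
class LCG:
    """Mixed congruential generator x <- (a*x + c) mod m, returning values in [0, 1)."""
    def __init__(self, a, c, m, seed):
        self.a, self.c, self.m, self.x = a, c, m, seed
    def next(self):
        self.x = (self.a * self.x + self.c) % self.m
        return self.x / self.m

class Combined:
    """Table-shuffled combination of two congruential generators: one fills a table,
    the other chooses which table entry to emit next."""
    def __init__(self, gen_fill, gen_index, size=128):
        self.fill, self.index, self.size = gen_fill, gen_index, size
        self.table = [gen_fill.next() for _ in range(size)]
    def next(self):
        k = int(self.index.next() * self.size)
        u, self.table[k] = self.table[k], self.fill.next()
        return u

rng = Combined(LCG(69069, 1, 2**32, 12345), LCG(1103515245, 12345, 2**31, 6789))
print([round(rng.next(), 4) for _ in range(5)])
```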
- Published
- 1965
23. Walsh Functions in Image Processing, Feature Selection and Pattern Recognition
- Author
-
Harry C. Andrews
- Subjects
Signal processing ,Hadamard code ,business.industry ,Orthogonal functions ,Reed–Muller code ,Feature selection ,Pattern recognition ,Image processing ,Condensed Matter Physics ,Atomic and Molecular Physics, and Optics ,Hadamard transform ,Walsh function ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Mathematics - Abstract
Walsh functions have become quite useful in the applications of image processing and feature selection. Owing to their inherent efficiency of implementation (they are a subset of the Reed-Muller codes and Hadamard matrices), they have become popular for coding, enhancement, and other signal processing tasks. This paper briefly describes applications in some of these areas, with emphasis on a correlation analysis for justification purposes.
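The implementation efficiency mentioned above rests on the fast Walsh-Hadamard transform, which uses only additions and subtractions, about N log₂ N of them. A minimal textbook sketch, not code from the paper:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence whose length is a power of two."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference replace the pair of entries
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```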
- Published
- 1971
24. Image detection through bipolar correlation
- Author
-
E. Trombini, P. Mengert, and A. Arcese
- Subjects
Signal processing ,business.industry ,Autocorrelation ,Map matching ,Library and Information Sciences ,Computer Science Applications ,Convolution ,Optical correlator ,Pattern recognition (psychology) ,Range (statistics) ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Digital filter ,Information Systems ,Mathematics - Abstract
The purpose of this paper is to describe the bipolar reference concept. A bipolar reference is simply a reference that can take on a continuous range of both positive and negative numerical values. It will be shown that for map matching via the convolution operation, references that yield maximum signal-to-noise ratios, in general, will be bipolar references. Algorithms for generating bipolar references and digital simulation results are given.
- Published
- 1970
25. Systematic image errors in aerial triangulation
- Author
-
E.R. Bosman, D. Eckhart, K. Kubik, and E. Clerici
- Subjects
Systematic error ,business.industry ,General Engineering ,Triangulation (social science) ,Image (mathematics) ,Compensation (engineering) ,Complete information ,Component (UML) ,Random error ,General Earth and Planetary Sciences ,Computer vision ,Artificial intelligence ,business ,Algorithm ,General Environmental Science ,Block (data storage) ,Mathematics - Abstract
A review is given of the research into the effects of systematic image errors in aerial triangulation and of the known methods of compensation. Rather complete information is already available about the propagation of random image errors in block adjustment. However, random errors account for only a part of the total image error, the other component being those errors which are termed systematic. In recent years the effects of these systematic errors after block adjustment have been investigated in more detail, and methods for compensating for these effects have been proposed by various authors. This paper reviews these investigations and summarizes the principal problems which remain, together with recommendations for their solution.
- Published
- 1973
26. Computer Evaluation of Radar Ambiguity Functions Using Fast Fourier Transform Techniques
- Author
-
J. Kundu and Lt. A. Paul Raj
- Subjects
Ambiguity function ,business.industry ,Estimation theory ,Matched filter ,media_common.quotation_subject ,Ambiguity ,Computer Science Applications ,Theoretical Computer Science ,law.invention ,symbols.namesake ,Noise ,Gaussian noise ,law ,symbols ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Radar ,Computational problem ,business ,Algorithm ,Mathematics ,media_common - Abstract
The ambiguity function is a useful tool for the design of radar signals. It is well known that the matched filter receiver is the optimum receiver for the detection and parameter estimation of signals corrupted by Gaussian noise. The ambiguity function represents the matched filter response in the absence of noise. For a given radar environment, the proper choice of an ambiguity function is necessary to minimize self-clutter while maintaining the required measure of local accuracy and resolution. No efficient methods are known for determining the signal given an ambiguity function. It is, therefore, customary for radar designers to evaluate the ambiguity functions of a number of likely signals before a choice is made. Closed-form solutions for the ambiguity function are known only for a few simple signals, and computational methods need to be used in most cases. A direct approach to the computational problem is extremely tedious. In this paper a novel solution to this problem is discussed. We define a dis...
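Though the paper's own algorithm is not reproduced in the abstract, one standard FFT-based route is to note that for each fixed delay the ambiguity surface, as a function of Doppler, is the Fourier transform of u(t)·u*(t − τ). A minimal sketch on a 13-bit Barker code, an assumed test signal rather than one necessarily considered in the paper:

```python
import numpy as np

def ambiguity(u):
    """|chi(tau, nu)| on a delay-Doppler grid: for each integer delay, the FFT of
    u[n] * conj(u[n - tau]) over n gives one Doppler cut of the ambiguity surface."""
    n = len(u)
    amb = np.zeros((2 * n - 1, n))
    for k, d in enumerate(range(-(n - 1), n)):
        shifted = np.roll(u, d)
        if d >= 0:
            shifted[:d] = 0          # zero-fill instead of wrapping around
        else:
            shifted[d:] = 0
        amb[k] = np.abs(np.fft.fft(u * np.conj(shifted)))
    return amb

barker = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=complex)
print(ambiguity(barker).shape)       # (25, 13) delay-Doppler samples
```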
- Published
- 1973
27. An Algorithm for the Numerical Application of a Linear Operator
- Author
-
Terence G. Jones
- Subjects
Definite integrals ,Function (mathematics) ,Linear map ,Set (abstract data type) ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Simple (abstract algebra) ,Point (geometry) ,Algorithm ,Software ,Information Systems ,Mathematics ,Interpolation - Abstract
The use of iterative procedures for interpolation is well known. In this paper an iterative procedure that may be used to compute values of derivatives and definite integrals is derived. The procedure may also be used to compute the result of applying any linear operator to a function. The data used are a set of point values at any reasonable set of abscissae. Within certain limitations, values of the derivatives may also be used. Worked examples are given to demonstrate the use of the procedure in three simple problems.
- Published
- 1962
28. Simulation of a classical detection problem: Detection of a phase-coherent rf pulse train of unknown phase in additive gaussian noise
- Author
-
Cliff A. Harris and J. E. Belt
- Subjects
Numerical Analysis ,General Computer Science ,Noise measurement ,Stochastic resonance ,business.industry ,Applied Mathematics ,Machine learning ,computer.software_genre ,Noise (electronics) ,Noise floor ,Theoretical Computer Science ,symbols.namesake ,Additive white Gaussian noise ,Signal-to-noise ratio ,Gaussian noise ,Modeling and Simulation ,symbols ,Detection theory ,Artificial intelligence ,business ,Algorithm ,computer ,Mathematics - Abstract
This paper describes a hybrid computer simulation of a simple signal detection problem. A signal of unknown phase plus white Gaussian noise is detected with a correlation receiver. Measured detection probabilities and false-alarm probabilities for various signal-to-noise ratios are compared with the theoretical curves for the Bayes optimum receiver. Good results are obtained using a low-cost educational computer, the APE II.
- Published
- 1969
29. Space distortion and decomposition theory
- Author
-
Hans Marko
- Subjects
Adaptive control ,Computers ,business.industry ,Generalization ,Complex system ,Process (computing) ,Pattern recognition ,General Medicine ,Space (mathematics) ,Form Perception ,Space Perception ,Distortion ,Pattern recognition (psychology) ,Humans ,Detection theory ,Artificial intelligence ,business ,Cybernetics ,Algorithm ,Mathematics ,Vision, Ocular - Abstract
It is shown that the mathematical methods hitherto used for solving pattern recognition problems are usually not well adapted to the natural class-forming processes of patterns generated from objects of the real world. In particular, the well-known method using a multidimensional signal space is not suited to typical pattern recognition problems, although its value for signal detection problems is not in question. The paper proposes to treat the class-forming process of patterns with the aid of spatial transformations, where the three-dimensional space of the real world is considered. Accordingly, a space distortion theory for both two-dimensional and three-dimensional objects has been developed. It leads to recognition schemes using a generalized correlation technique. The generalization consists of an adaptive spatial transformation process prior to correlation. In this way, pattern recognition is regarded as an adaptive control process using optimization methods. The decomposition theory proposes the application of this method to separate parts of the decomposed picture.
- Published
- 1973
30. Models of Computations and Systems—Evaluation of Vertex Probabilities in Graph Models of Computations
- Author
-
Gerald Estrin and David Martin
- Subjects
Theoretical computer science ,Computation ,Graph model ,Graph ,Vertex (geometry) ,Graph bandwidth ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,A priori and a posteriori ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
This paper concerns itself with the modeling of computations and systems and the generation of a priori estimates of expected computation time for given problems on given processing systems. In particular, methods are discussed for determining the probabilities of reaching vertices in a graph model of computations.
- Published
- 1967
31. Integer Arithmetic Algorithms for Polynomial Real Zero Determination
- Author
-
Lee E. Heindel
- Subjects
Discrete mathematics ,Factor theorem ,Polynomial ,Modular arithmetic ,Univariate ,Zero (complex analysis) ,Interval (mathematics) ,Combinatorics ,Reciprocal polynomial ,Real data type ,Finite field ,Integer ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Simple (abstract algebra) ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
This paper discusses a set of algorithms which, given a univariate polynomial with integer coefficients (with possible multiple zeros) and a positive rational error bound, use infinite-precision integer arithmetic and Sturm's theorem to compute intervals containing the real zeros of the polynomial whose lengths are less than the given error bound. The algorithms also provide a simple means of determining the number of real zeros in any interval. Theoretical computing-time bounds are developed for the algorithms and some empirical results are reported.
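A minimal sketch of the root-counting step behind such algorithms, using Python's exact Fraction arithmetic as a stand-in for infinite-precision integer arithmetic and assuming a squarefree polynomial (the paper itself also handles multiple zeros): Sturm's theorem gives the number of distinct real zeros in (a, b] as the drop in sign variations of the Sturm chain, and repeated bisection with this count isolates each zero.

```python
from fractions import Fraction

def poly_eval(p, x):
    """Evaluate p (coefficients highest degree first) at x by Horner's rule."""
    v = Fraction(0)
    for c in p:
        v = v * x + c
    return v

def rem(a, b):
    """Polynomial remainder of a divided by b, exact rational arithmetic."""
    a = a[:]
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i, c in enumerate(b):
            a[i] -= q * c
        a.pop(0)
    while a and a[0] == 0:
        a.pop(0)
    return a or [Fraction(0)]

def count_real_zeros(p, a, b):
    """Number of distinct real zeros of a squarefree p in (a, b], by Sturm's theorem."""
    chain = [p, [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]]   # p and p'
    while len(chain[-1]) > 1:
        chain.append([-c for c in rem(chain[-2], chain[-1])])
    def variations(x):
        signs = [s for s in (poly_eval(q, x) for q in chain) if s != 0]
        return sum((u < 0) != (w < 0) for u, w in zip(signs, signs[1:]))
    return variations(Fraction(a)) - variations(Fraction(b))

p = [Fraction(c) for c in [1, 0, -2, 0]]        # x^3 - 2x: zeros at -sqrt(2), 0, sqrt(2)
print(count_real_zeros(p, -2, 2), count_real_zeros(p, 0, 2))   # 3 and 1
```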
- Published
- 1971
32. An innovations approach to least-squares estimation--Part V: Innovations representations and recursive estimation in colored noise
- Author
-
R. Geesey and Thomas Kailath
- Subjects
Covariance function ,Stochastic process ,business.industry ,State vector ,Machine learning ,computer.software_genre ,Computer Science Applications ,law.invention ,Invertible matrix ,Control and Systems Engineering ,Colors of noise ,law ,Artificial intelligence ,Electrical and Electronic Engineering ,Representation (mathematics) ,Linear combination ,business ,computer ,Algorithm ,Linear filter ,Mathematics - Abstract
We show that least-squares filtered and smoothed estimates of a random process given observations of another colored noise process can be expressed as certain linear combinations of the state vector of the so-called innovations representation (IR) of the observed process. The IR of a process is a representation of it as the response of a causal and causally invertible linear filter to a white-noise "innovations" process. For nonstationary colored noise processes, the IR may not always exist and a major part of this paper is devoted to the problem of identifying a proper class of such processes and of giving effective recursive algorithms for their determination. The IR can be expressed either in terms of the parameters of a known lumped model for the process or in terms of its covariance function. In the first case, our results on estimation encompass most of those found in the previous literature on the subject; in the second case, there seems to be no prior literature. Finally, we may note that our proofs rely on, and exploit in both directions, the intimate relation that exists between least-squares estimation and the innovations representation.
- Published
- 1973
33. An algorithm to generate prime implicants and its application to the selection problem
- Author
-
Richard C. T. Lee
- Subjects
Information Systems and Management ,Implicant ,Brute-force search ,Space (mathematics) ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Conjunctive normal form ,Boolean function ,Algorithm ,Software ,Selection (genetic algorithm) ,Mathematics - Abstract
In this paper, the algorithm to generate prime implicants presented in Ref. 5 is extended to a new algorithm. The new algorithm does not require the Boolean function to be in conjunctive normal form and is complete in the sense that it generates all the prime implicants. The author also shows that this new algorithm can be used to efficiently solve the selection problem [4]. By using this algorithm, the original selection problem can be solved by a branch-and-bound approach [1] which avoids an exhaustive search of the solution space and reduces the amount of necessary calculations.
- Published
- 1972
34. A method of comparing two patterns independent of possible transformations and small distortions
- Author
-
J.C. Simon, Christophe Roche, and A. Checroun
- Subjects
Artificial Intelligence ,Distortion ,Signal Processing ,Computer Vision and Pattern Recognition ,Translation (geometry) ,Rotation (mathematics) ,Algorithm ,Software ,Mathematics ,Homothetic transformation - Abstract
This paper discusses a recognition procedure for comparing patterns represented by N points in R². As the procedure is founded on the comparison of “ordonnances” obtained by ordering the Euclidean distances between the N points, it does not depend on translation, rotation, or homothety, and it also allows up to 10 per cent distortion. Experimental results are presented.
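A minimal sketch of the invariance idea only (the helper names are made up, and the paper's comparison additionally tolerates small distortions rather than requiring identical orderings): translation, rotation, and homothety preserve the ordering of the pairwise Euclidean distances, so two patterns related by such a transformation give the same distance ranking.

```python
import numpy as np

def ordonnance(points):
    """Ranking ("ordonnance") of all pairwise Euclidean distances between N points."""
    pts = np.asarray(points, float)
    i, j = np.triu_indices(len(pts), k=1)
    d = np.linalg.norm(pts[i] - pts[j], axis=1)
    return np.argsort(np.argsort(d))          # rank of each point pair's distance

def same_shape(p, q):
    """Patterns match, up to translation, rotation, and homothety, when the
    orderings of their pairwise distances coincide."""
    return np.array_equal(ordonnance(p), ordonnance(q))

p = [(0, 0), (3, 0), (0, 4), (5, 5)]
q = [(0, 0), (0, 6), (-8, 0), (-10, 10)]      # p rotated 90 degrees and scaled by 2
print(same_shape(p, q))                        # True
```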
- Published
- 1972
35. Data Reduction Problems in Pattern Recognition
- Author
-
Jan Bialasiewicz
- Subjects
Set (abstract data type) ,business.industry ,Pattern recognition (psychology) ,Decomposition (computer science) ,Pattern recognition ,Artificial intelligence ,business ,Space (mathematics) ,Algorithm ,Mathematics ,Data reduction - Abstract
In this paper the problem of determining the set of essential pattern features, given the initial set of pattern features, is investigated. It is shown that this problem may be treated as the problem of finding a suitable decomposition of the measurement space whose coordinates represent the initial pattern features. Two broad categories of such decompositions are distinguished: those which do not, and those which do, result in loss of information. An algorithm for determining decompositions belonging to the second category is given.
- Published
- 1968
36. On the use of binary and gray code schemes for continuous-tone picture representation
- Author
-
Edward S. Deutsch
- Subjects
Basis (linear algebra) ,Binary number ,Gray code ,Artificial Intelligence ,Simple (abstract algebra) ,Signal Processing ,Binary code ,Computer Vision and Pattern Recognition ,Representation (mathematics) ,Algorithm ,Software ,Continuous tone ,Data compression ,Mathematics - Abstract
The paper discusses the idea of iterating the conversion of a continuous-tone picture from binary to Gray code representation. A simple recursive function is defined by means of which the value of the i-th bit representing a picture point's grey level after the m-th iteration is determined. The resulting rearrangement of grey levels, as well as that of the bits throughout the planes, is discussed next. The iteration is shown to repeat after 2^n steps, where n is the number of bit planes representing the picture. A data compression scheme is developed on the basis that, prior to exact repetition, approximate repetitions take place during the iterations, and the resulting picture is almost the same as the original.
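The per-level binary-to-Gray conversion and its inverse are one-liners; a minimal sketch follows (the paper's iterated, bit-plane-wise scheme is not reproduced here).

```python
def to_gray(n):
    """Binary to reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g):
    """Gray code back to binary."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

grey_levels = [0, 1, 2, 3, 254, 255]                    # 8-bit picture-point grey levels
coded = [to_gray(v) for v in grey_levels]
print(coded)                                            # adjacent levels differ in one bit
print([from_gray(c) for c in coded] == grey_levels)     # round trip succeeds
```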
- Published
- 1973
37. Least-square methods in abstract pattern recognition
- Author
-
William S. Meisel
- Subjects
Information Systems and Management ,business.industry ,Probability density function ,Pattern recognition ,Class (philosophy) ,Computer Science Applications ,Theoretical Computer Science ,Bayes' theorem ,Discriminant function analysis ,Discriminant ,Artificial Intelligence ,Control and Systems Engineering ,Pattern recognition (psychology) ,Artificial intelligence ,business ,Finite set ,Algorithm ,Complex problems ,Software ,Mathematics - Abstract
One approach to abstract pattern recognition is to represent the discriminant functions such that they are characterized by linearly appearing parameters, e.g., a general multivariate polynomial. This paper discusses a class of methods for deriving values for these parameters given a finite set of samples of each pattern class; the methods discussed are those which use least-square or least-mean-square approximation to an unknown probability density or partially specified discriminant function. The algorithms which result are feasible for the solution of complex problems using a general-purpose digital computer. This viewpoint emphasizes the common denominator of many procedures in abstract pattern recognition.
- Published
- 1968
38. Two New Classes of Algorithms for Finding the Eigenvalues and Eigenvectors of Real Symmetric Matrices
- Author
-
C. Donald La Budde
- Subjects
Tridiagonal matrix ,Algebra ,symbols.namesake ,Jacobi eigenvalue algorithm ,Jacobi rotation ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Diagonal matrix ,symbols ,Symmetric matrix ,Skew-symmetric matrix ,Divide-and-conquer eigenvalue algorithm ,Algorithm ,Software ,Eigendecomposition of a matrix ,Information Systems ,Mathematics - Abstract
The algorithms described in this paper are essentially Jacobi-like iterative procedures employing Householder orthogonal similarity transformations and Jacobi orthogonal similarity transformations to reduce a real symmetric matrix to diagonal form. The convergence of the first class of algorithms depends upon the fact that the algebraic value of one diagonal element is increased at each step of the iteration, and the convergence of the second class depends upon the fact that the absolute value of one off-diagonal element is increased at each step of the iteration. It is then shown how one algorithm from each class can be combined into a “mixed” strategy for diagonalizing a real symmetric matrix.
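For orientation, here is a minimal sketch of the classical cyclic Jacobi iteration, in which plane rotations annihilate one off-diagonal element at a time; it is the textbook baseline rather than either of the two new classes introduced in the paper.

```python
import numpy as np

def jacobi_eigen(a, sweeps=10):
    """Cyclic Jacobi iteration on a real symmetric matrix: repeated plane rotations
    drive the off-diagonal elements to zero; the diagonal then holds the eigenvalues."""
    a = np.array(a, float)
    n = a.shape[0]
    v = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p, q]) < 1e-12:
                    continue
                theta = 0.5 * np.arctan2(2 * a[p, q], a[q, q] - a[p, p])
                c, s = np.cos(theta), np.sin(theta)
                r = np.eye(n)
                r[p, p] = r[q, q] = c
                r[p, q], r[q, p] = s, -s
                a = r.T @ a @ r                 # similarity transform zeroes a[p, q]
                v = v @ r                       # accumulate eigenvectors
    return np.diag(a), v

m = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, 1.0]])
vals, _ = jacobi_eigen(m)
print(np.sort(vals), np.sort(np.linalg.eigvalsh(m)))    # the two should agree
```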
- Published
- 1964
39. A transformation with invariance under cyclic permutation for applications in pattern recognition
- Author
-
Herbert J. Reitboeck and T. P. Brody
- Subjects
Digital computer ,business.industry ,Fast Fourier transform ,General Engineering ,Pattern recognition ,Cyclic permutation ,Machine recognition ,Autonomous learning ,Artificial intelligence ,business ,Classifier (UML) ,Algorithm ,Engineering(all) ,Mathematics - Abstract
The paper describes a transformation that can be used to characterize patterns independently of their position. Examples of the application of the transform to the machine recognition of letters are discussed. The program achieved a recognition rate of 80–100% for letters having different positions, distortions, inclination, rotation up to 15°, and size variation up to 1:3, relative to a reference set of 10 letters. Results with a program for the autonomous learning of new varieties of a pattern (using a learning matrix as an adaptive classifier) are given. When executed on a digital computer, this transform is 10–100 times faster than the fast Fourier transform (depending on the number of sampling points).
- Published
- 1969
40. Digital One-Third Octave Spectral Analysis
- Author
-
C. D. Negron
- Subjects
Autocorrelation technique ,Autocorrelation ,Filter (signal processing) ,A-weighting ,Octave (electronics) ,symbols.namesake ,Transformation (function) ,Fourier transform ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Statistics ,Bandwidth (computing) ,symbols ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
An economical approach is described for estimating power spectra from sampled data through the application of z-transform theory. The method consists of computing a weighting sequence whose frequency characteristics approximate a one-third octave bandwidth filter, and applying these coefficients recursively to the digitized data record. Filtering is followed by a variance calculation and a division by the appropriate filter bandwidth. A specific example of power spectra computed in the usual manner (Fourier transformation of the autocorrelation function) and power spectra computed by the method in this paper demonstrates the practicability of the technique. The most significant advantage is the economical aspect. It is shown that owing to the variable bandwidth and the small number of filtering coefficients, the savings that may be realized by the employment of this technique, in lieu of the autocorrelation transformation approach, may be quite considerable, depending on the record length and the number of lag products.
- Published
- 1966
41. A Class of Merging Algorithms
- Author
-
D. N. Deutsch and F. K. Hwang
- Subjects
Combinatorics ,Discrete mathematics ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Ordered set ,Disjoint sets ,Algorithm ,Software ,Ordered subsets ,Information Systems ,Mathematics - Abstract
Suppose we are given two disjoint linearly ordered subsets A and B of a linearly ordered set C, say A = {a_1 < a_2 < ··· < a_m} and B = {b_1 < b_2 < ··· < b_n}. The problem is to determine the linear ordering of their union (i.e. to merge A and B) by means of a sequence of pairwise comparisons between an element of A and an element of B (which we refer to in the paper as the (m, n) problem). Given any algorithm s to solve the (m, n) problem, we are interested in the maximum number of comparisons K_s(m, n) required under all possible orderings of A ∪ B. An algorithm s is said to be minimax if K_s(m, n) = K(m, n), where K(m, n) = min_s K_s(m, n). It is a rather difficult task to determine K(m, n) in general. In this study the authors are concerned only with the minimax over a particular class of merging algorithms. This class includes the tape merge algorithm, the simple binary algorithm, and the generalized binary algorithm.
- Published
- 1973
42. Classification by cascaded threshold elements
- Author
-
F. Holdermann
- Subjects
Nonlinear system ,Artificial Intelligence ,Signal Processing ,Computer Vision and Pattern Recognition ,Classifier (UML) ,Algorithm ,Software ,Linear separability ,Mathematics - Abstract
This paper is concerned with the synthesis of networks of threshold elements to separate patterns into two classes. There exist many algorithms which determine the parameters of a classifier, when the two classes are linearly separable. But a large number of problems do not fulfill this requirement. In such cases it is necessary to generate nonlinear decision boundaries to separate the patterns.
- Published
- 1971
43. Optimum Adaptive Reception for Binary Sequences
- Author
-
Paul A. Wintz
- Subjects
Multivariate random variable ,business.industry ,Autocorrelation ,Aerospace Engineering ,Estimator ,Pattern recognition ,Probability density function ,symbols.namesake ,Gaussian noise ,Pattern recognition (psychology) ,Binary data ,symbols ,Statistical inference ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Mathematics - Abstract
The a posteriori probability density function p(θ | X_1, X_2, ..., X_K), where the X_i, i = 1, 2, ..., K, represent K vector-valued observations statistically related to the random vector θ, appears in many applications of the methods of statistical inference to problems in pattern recognition and statistical communication theory. In this paper it is shown that, for equally likely binary sequences (M = 2) of anticorrelated patterns observed in additive Gaussian noise, a device that computes p(θ | X_1, X_2, ..., X_K) can be synthesized from a correlator, a simple instantaneous nonlinearity, and a multiplier. These results are used to derive some equally simple structures for various optimum nonsupervised estimators, pattern recognition machines, and signal detectors.
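For two equally likely antipodal patterns ±s in white Gaussian noise, the posterior probability is a logistic function of the correlator output, i.e. a correlator followed by a simple instantaneous nonlinearity, with the factors from the K observations combined (here by summing the correlations before the nonlinearity). A minimal sketch of this standard result, with made-up signal and noise values; it is not code from the paper.

```python
import numpy as np

def posterior_plus(observations, s, sigma):
    """P(theta = +s | X_1..X_K) for antipodal patterns +s / -s in white Gaussian noise:
    a correlator (summed over the K observations) followed by a logistic nonlinearity."""
    corr = sum(np.dot(x, s) for x in observations)
    return 1.0 / (1.0 + np.exp(-2.0 * corr / sigma ** 2))

rng = np.random.default_rng(0)
s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
xs = [s + rng.normal(0.0, 1.0, s.size) for _ in range(3)]   # three noisy looks at +s
print(posterior_plus(xs, s, 1.0))                            # close to 1
```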
- Published
- 1970
44. Correlation Technique for ECG Interpretation
- Author
-
S. C. Gupta, S. K. Sud, and A. K. Kamal
- Subjects
Normalization (statistics) ,Heart disease ,business.industry ,Pattern recognition ,Precordial examination ,medicine.disease ,Entire heart ,Computer Science Applications ,Theoretical Computer Science ,Correlation ,medicine ,Classification methods ,Waveform ,Experimental work ,cardiovascular diseases ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Mathematics - Abstract
This paper describes the application of a newly developed pattern-recognition technique, the cross-correlation method, to the diagnosis of heart disease as evidenced in the electrocardiogram. A study is made to see whether the electrocardiographic leads can be used, employing a cross-correlation scheme, to classify electrocardiograms into their respective disease categories. Experimental work has been limited to the analysis of the entire heart cycle of precordial leads V1, V2, and V3. In addition to one 'normal' ECG waveform, four cases of known cardiac disease and one unknown case (which is to be classified) are taken for the purpose of this study. Studies are also made to determine the appropriate sampling rate, the time and amplitude normalization procedures, and the classification method. The results indicate that the cross-correlation technique may be used for ECG interpretation.
- Published
- 1972
45. An algorithm for pattern classification using the concept of characteristic vector of the output of threshold element
- Author
-
Wah-Chun Chan
- Subjects
Information Systems and Management ,Artificial Intelligence ,Control and Systems Engineering ,Element (category theory) ,Boolean function ,Realization (systems) ,Algorithm ,Finite set ,Software ,Eigenvalues and eigenvectors ,Computer Science Applications ,Theoretical Computer Science ,Mathematics - Abstract
This paper presents an algorithm for pattern classification using the concept of the characteristic vector of a Boolean function, first proposed by Dertouzos for single-threshold element realization. It is shown that the algorithm converges in a finite number of iterations. Depending on the choice of a parameter, three correction rules can be obtained from the algorithm: the fixed increment rule, the absolute correction rule, and the fractional correction rule. The quantity (b − b_j) used in the algorithm is shown to be equivalent to the augmented pattern used in pattern classification in the literature.
- Published
- 1973
46. A Starting Method for the Three-Point Adams Predictor-Corrector Method
- Author
-
R. Alonso
- Subjects
Predictor–corrector method ,Truncation error ,Polynomial ,Minor (linear algebra) ,Zero (complex analysis) ,Value (computer science) ,Combinatorics ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Ordinary differential equation ,Order (group theory) ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
Certain higher-order iterative procedures for the numerical solution of ordinary differential equations require a separate procedure for determining starting values. Such is the case in the Adams and Milne methods. The work involved in programming the computation of starting values may be of the same magnitude as that required in programming the method itself. This paper shows that the three-point Adams formula leads to a method which may be considered "self-starting" in the sense that very minor modifications of the method provide starting values of sufficient accuracy for continuing the solution.
Let the equation under solution be $y' = f(y, t)$, and let the values $y'(t_0) = y_0'$, $y'(t_0 + h) = y_1'$, $y(t_0) = y_0$, $y(t_0 + h) = y_1$ be given. The general three-point predictor-corrector process consists of estimating (in some unspecified way) a value of $y_2'$; computing a first estimate of $y_2$ by means of a closed three-point integration formula; obtaining the (presumably) better value $y_2' = f(y_2, t_0 + 2h)$; and then repeating the process until some convergence criterion is satisfied, either for $y_2'$ or for $y_2$. The process could have started by first estimating $y_2$ rather than $y_2'$. It is important to note that as long as the step size $h$ and the first estimate of $y_2'$ result in a converging process, the values to which there is convergence are independent of the method of prediction. The Adams three-point formula is
$$y_{n+1} = y_n + \frac{h}{12}\left[5 f(t_{n+1}, y_{n+1}) + 8 f(t_n, y_n) - f(t_{n-1}, y_{n-1})\right], \quad (1)$$
and the single-step truncation error $T_n$ is given by $T_n = -\frac{h^4}{24} f'''(\xi)$, $t_{n-1} \le \xi \le t_{n+1}$; the quantities $y_n$, $y_{n+1}'$, $y_n'$, $y_{n-1}'$ are assumed exact.
Let the initial conditions $y_0'$, $y_0$ be specified for $y' = f(y, t)$. As a first approximation, let $y_{-1}^{(0)\prime} = y_0'$, $y_{+1}^{(0)\prime} = y_0'$, $y_{-1}^{(0)} = y_0$, $y_{+1}^{(0)} = y_0$; the superscript zero in parentheses indicates that these are the initial estimates of $y$, $y'$. The starting procedure consists of performing corrective iterations in the forward ($+1$) and backward ($-1$) directions as follows:
$$y_{+1}^{(i)} = y_0 + \frac{h}{12}\left[5 f(t_{+1}, y_{+1}^{(i-1)}) + 8 f(t_0, y_0) - f(t_{-1}, y_{-1}^{(j)})\right],$$
$$y_{-1}^{(j)} = y_0 - \frac{h}{12}\left[5 f(t_{-1}, y_{-1}^{(j-1)}) + 8 f(t_0, y_0) - f(t_{+1}, y_{+1}^{(i)})\right]. \quad (2)$$
The superscript notation does not by itself imply any order of the forward and backward iterations; in the following analysis, however, it is assumed that the process starts with a forward iteration and alternates thereafter. It must be shown under what conditions the process converges and under what conditions the resulting values are accurate enough to warrant continuing the solution by the Adams method.
Let $y_{+1}$ be the value of $y$ at $t_{+1}$ to which the iterative process converges (if at all). The error at the $i$-th forward iteration is $\alpha^{(i)}$, defined by $y_{+1}^{(i)} = y_{+1} + \alpha^{(i)}$ (3); the error in the backward direction is $\beta^{(j)}$, defined by $y_{-1}^{(j)} = y_{-1} + \beta^{(j)}$ (3'). Substituting these error definitions into equations (2) yields
$$\alpha^{(i)} = \frac{5h}{12}\left\{f[t_{+1}, y_{+1} + \alpha^{(i-1)}] - f[t_{+1}, y_{+1}]\right\} - \frac{h}{12}\left\{f[t_{-1}, y_{-1} + \beta^{(j)}] - f[t_{-1}, y_{-1}]\right\},$$
$$\beta^{(j)} = -\frac{5h}{12}\left\{f[t_{-1}, y_{-1} + \beta^{(j-1)}] - f[t_{-1}, y_{-1}]\right\} + \frac{h}{12}\left\{f[t_{+1}, y_{+1} + \alpha^{(i)}] - f[t_{+1}, y_{+1}]\right\}. \quad (4)$$
Let
$$g_{+1} = \frac{\partial f[t_{+1}, y(t_0 + \xi_1)]}{\partial y}, \quad 0 \le \xi_1 \le h, \qquad g_{-1} = \frac{\partial f[t_{-1}, y(t_0 + \xi_2)]}{\partial y}, \quad -h \le \xi_2 \le 0. \quad (5)$$
If it is assumed that $g_{+1}$ and $g_{-1}$ are insensitive to small changes in $\xi_1$ and $\xi_2$, then, by the law of the mean, equations (4) can be written as
$$\alpha^{(i)} = \frac{5h}{12} g_{+1}\,\alpha^{(i-1)} - \frac{h}{12} g_{-1}\,\beta^{(j)}, \qquad \beta^{(j)} = -\frac{5h}{12} g_{-1}\,\beta^{(j-1)} + \frac{h}{12} g_{+1}\,\alpha^{(i)}. \quad (6)$$
The order in which the iterations are performed may now be specified by letting $i = 2k + 3$, $j = 2k + 2$, $k = -1, 0, 1, \dots$. Equations (6) can be reduced to a single equation in either $\alpha$ or $\beta$; choosing $\beta$ results in
$$\beta^{(2k+4)} + \frac{5h}{12}(g_{-1} - g_{+1})\,\beta^{(2k+2)} - \frac{24h^2}{144}\, g_{+1} g_{-1}\,\beta^{(2k)} = 0. \quad (7)$$
The condition for convergence of the starting process is that $\beta \to 0$ as $k \to \infty$, i.e., that the roots of equation (7), when considered as a polynomial in $\beta^2$, be less than one in magnitude. The conditions for convergence are then
$$\left\| -\frac{5h}{12}(g_{+1} - g_{-1}) \pm \frac{h}{12}\sqrt{[5(g_{+1} - g_{-1})]^2 - 4 g_{+1} g_{-1}} \right\| < 1. \quad (8)$$
Choosing $\alpha$ instead of $\beta$ yields the same result. The condition for convergence of the usual predictor-corrector process, in which $y_{-1}$ and $y_0$ are given, is
$$\left\| \frac{5h}{12}\, g_{+1} \right\| < 1. \quad (9)$$
Conditions (8) are quite similar to condition (9); as can be seen, convergence can always be obtained for sufficiently small values of $h$.
To say that the starting process gives values $y_{+1}$ and $y_{-1}$ of sufficient accuracy to warrant the further use of the normal Adams procedure implies that the error introduced by the former is not significantly larger than the error introduced by a single step of the latter. Let $y_{-1}$ and $y_0$ be given exactly, and let $\bar y_{+1}$ be the value to which successive forward iterations converge. If $y_{+1}$ is the true value of $y$ at $t_{+1}$, the error $\varepsilon_1$ is defined by
$$\bar y_{+1} = y_{+1} + \varepsilon_1. \quad (10)$$
By definition, $\bar y_{+1}$ satisfies
$$\bar y_{+1} = y_0 + \frac{h}{12}\left[5 f(t_{+1}, \bar y_{+1}) + 8 f(t_0, y_0) - f(t_{-1}, y_{-1})\right]. \quad (11)$$
It is known that
$$y_{+1} = y_0 + \frac{h}{12}\left[5 f(t_{+1}, y_{+1}) + 8 f(t_0, y_0) - f(t_{-1}, y_{-1})\right] + T_1, \quad (1')$$
where $T_1$ is the truncation error. From equations (1'), (10) and (11), and by the law of the mean,
$$\varepsilon_1 = -T_1 + \frac{5h}{12}\,\varepsilon_1\,\frac{\partial f}{\partial y}[t_{+1}, y(t_0 + \xi_1)], \quad 0 \le \xi_1 \le h. \quad (12)$$
Defining
$$\gamma_1 = \frac{h}{12}\,\frac{\partial f}{\partial y}[t_{+1}, y(t_0 + \xi_1)], \quad (13)$$
the error equation becomes
$$\varepsilon_1 = \frac{-T_1}{1 - 5\gamma_1}. \quad (14)$$
The quantity $\gamma_1$ may be estimated, since
$$\gamma_1 = \frac{h}{12}\,\frac{f[t_{+1}, y_{+1}^{(i)}] - f[t_{+1}, y_{+1}^{(i-1)}]}{y_{+1}^{(i)} - y_{+1}^{(i-1)}}, \quad (15)$$
where the superscripts again refer to iterations. As a rule, the step size $h$ is chosen so as to make $\gamma_1$ small compared to one. A similar analysis for the proposed starting process yields
$$\varepsilon_1 = \frac{-T_1 - \gamma_{-1}(5T_1 - T_{-1})}{1 - 5\gamma_1 + 5\gamma_{-1} - 24\gamma_1\gamma_{-1}}. \quad (16)$$
The difference between the error of equation (14) and that of equation (16) is
$$\Delta = \frac{\gamma_{-1} T_{-1} - 5\gamma_1 T_1 - 24\gamma_1\gamma_{-1} T_1}{(1 - 5\gamma_1)(1 + 5\gamma_{-1}) + \gamma_1\gamma_{-1}}. \quad (17)$$
This equation may be simplified by considering a worst case in which $T_{-1}$ and $T_1$ are replaced by some maximum $T$, $\gamma_{-1}$ and $\gamma_1$ by some maximum $\gamma$, and the signs adjusted. This yields
$$\|\Delta\| \le \left\| \frac{\gamma(6 + \gamma)}{1 - 26\gamma^2}\, T \right\|. \quad (18)$$
Thus, if $\gamma$ is of the order of $10^{-2}$, the difference $\Delta$ is less than seven percent of the single-step truncation error.
In general, if the step size $h$ is chosen so that the error introduced by a single step of the iterative procedure is of the order of magnitude of the single-step truncation error (the error in $y_{n+1}$ when $y_n$, $y_{n+1}'$, $y_n'$, and $y_{n-1}'$ are known exactly), then the proposed procedure yields starting values of sufficient accuracy to warrant continuing the solution by means of the three-point Adams method.
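As a concrete illustration of the scheme described above, the following Python fragment is a minimal sketch reconstructed from equations (1) and (2) in the abstract, not the author's original program: it alternates the forward (+1) and backward (-1) corrector iterations of (2) from the single initial value, then continues with the ordinary three-point Adams corrector of (1). The function names, the crude Euler predictor, the argument order f(t, y), and the tolerance are illustrative assumptions.

```python
# Minimal sketch of the "self-starting" three-point Adams scheme; names and
# tolerances are illustrative, not taken from the paper.

def adams_start(f, t0, y0, h, tol=1e-10, max_iter=50):
    """Alternate forward/backward corrector iterations (eq. (2)) to produce
    starting values y(t0 - h) and y(t0 + h) from y(t0) alone."""
    f0 = f(t0, y0)                     # y0' = f(t0, y0), assumed exact
    y_plus, y_minus = y0, y0           # initial estimates y_{+1}^{(0)} = y_{-1}^{(0)} = y0
    for _ in range(max_iter):
        y_plus_new = y0 + h / 12.0 * (5 * f(t0 + h, y_plus) + 8 * f0 - f(t0 - h, y_minus))
        y_minus_new = y0 - h / 12.0 * (5 * f(t0 - h, y_minus) + 8 * f0 - f(t0 + h, y_plus_new))
        if abs(y_plus_new - y_plus) < tol and abs(y_minus_new - y_minus) < tol:
            return y_minus_new, y_plus_new
        y_plus, y_minus = y_plus_new, y_minus_new
    return y_minus, y_plus

def adams_corrector_step(f, t_prev, y_prev, t_n, y_n, h, tol=1e-10, max_iter=50):
    """One step of the three-point Adams corrector (eq. (1)), iterated to convergence."""
    y_next = y_n + h * f(t_n, y_n)     # crude predictor; any reasonable first guess works
    for _ in range(max_iter):
        y_new = y_n + h / 12.0 * (5 * f(t_n + h, y_next) + 8 * f(t_n, y_n) - f(t_prev, y_prev))
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next

if __name__ == "__main__":
    f = lambda t, y: -y                                   # test problem y' = -y, y(0) = 1
    h = 0.05
    y_m1, y_p1 = adams_start(f, 0.0, 1.0, h)
    y2 = adams_corrector_step(f, 0.0, 1.0, h, y_p1, h)    # advance from t = h to t = 2h
    print(y_m1, y_p1, y2)                                 # compare against exp(h), exp(-h), exp(-2h)
```

In this sketch the convergence conditions corresponding to (8) and (9) simply show up as the fixed-point loops settling within the iteration cap, which they do for sufficiently small h.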
- Published
- 1960
47. An admissible and optimal algorithm for searching AND/OR graphs
- Author
-
James R. Slagle and Chin-Liang Chang
- Subjects
Discrete mathematics ,Factor-critical graph ,Linguistics and Language ,Voltage graph ,Butterfly graph ,Language and Linguistics ,law.invention ,Combinatorics ,Artificial Intelligence ,law ,Line graph ,Null graph ,Graph property ,Graph factorization ,Algorithm ,Complement graph ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
An AND/OR graph is a graph which represents a problem-solving process. A solution graph is a subgraph of the AND/OR graph which represents a derivation of a solution to the problem. Solving a problem can therefore be viewed as searching for a solution graph in an AND/OR graph. A “cost” is associated with every solution graph, and a minimal solution graph is a solution graph with minimal cost. In this paper, an algorithm for searching for a minimal solution graph in an AND/OR graph is described. If the “lower bound” condition is satisfied, the algorithm is guaranteed to find a minimal solution graph whenever one exists. The “optimality” of the algorithm is also proved.
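The abstract does not reproduce the search algorithm itself, but the notion of a minimal solution graph can be illustrated with a short hedged sketch in Python: an OR node is solved by its cheapest solvable successor, while an AND node requires all of its successors to be solved and accumulates their costs. The graph encoding, the unit edge cost, and the recursive cost function below are illustrative assumptions, not the authors' admissible search procedure.

```python
# Illustrative cost model for an acyclic AND/OR graph; not the admissible
# search algorithm of the paper, only the notion of a minimal solution graph.
import math

# graph: node -> ("AND" | "OR", [children]); terminal nodes have no entry.
# leaf_cost: cost of solving a terminal node (math.inf = unsolvable).
def min_solution_cost(node, graph, leaf_cost, edge_cost=1, memo=None):
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    if node not in graph:                      # terminal node
        memo[node] = leaf_cost.get(node, math.inf)
        return memo[node]
    kind, children = graph[node]
    costs = [edge_cost + min_solution_cost(c, graph, leaf_cost, edge_cost, memo)
             for c in children]
    memo[node] = min(costs) if kind == "OR" else sum(costs)
    return memo[node]

# Tiny example: A is solved either by B alone or by solving both C and D.
graph = {"A": ("OR", ["B", "CD"]), "CD": ("AND", ["C", "D"])}
leaf_cost = {"B": 7, "C": 1, "D": 2}
print(min_solution_cost("A", graph, leaf_cost))   # 6: A is solved through the AND node {C, D}
```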
- Published
- 1971
48. Statistical Properties of Random Digital Sequences
- Author
-
Taylor L. Booth
- Subjects
Markov kernel ,Computer science ,Markov process ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Machine learning ,computer.software_genre ,Markov model ,Time reversibility ,Theoretical Computer Science ,Matrix (mathematics) ,symbols.namesake ,Markov renewal process ,Process control ,Computer Science::Databases ,Mathematics ,Discrete mathematics ,Sequence ,business.industry ,Stochastic process ,Variable-order Markov model ,Process (computing) ,Random function ,Computational Theory and Mathematics ,Hardware and Architecture ,symbols ,Multinomial distribution ,Noise (video) ,Artificial intelligence ,business ,computer ,Algorithm ,Software - Abstract
Recent studies have shown that many noise processes in sequential networks can be described as a multinomial process, a Markov process, or a linearly-dependent process. This paper extends the results of those studies to show how the statistics of any sequence selected from processes of this type can be calculated. Through the use of z-transform techniques, it is shown that the characteristic roots of the characterization matrix which describes the process are useful in categorizing the statistical properties of the process. It is also shown that the logical combination of two or more linearly-dependent sequences is another linearly-dependent sequence. The relationship between the characterization matrices of the input sequences to a sequential machine and the characterization matrix of the output process is derived.
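As a hedged illustration of how the characteristic roots of a characterization matrix categorize the statistics of a random digital sequence (the paper's z-transform development is not reproduced here), the Python sketch below uses a two-state binary Markov source: the dominant root 1 fixes the stationary digit probabilities, and the magnitude of the second root sets the rate at which correlations between digits decay. The transition probabilities are made up for the example.

```python
# Two-state binary Markov source: state 0 emits 0, state 1 emits 1.
# The eigenvalues (characteristic roots) of the transition matrix P summarize
# the statistics of the emitted sequence: lambda_1 = 1 gives the stationary
# behaviour, and |lambda_2| < 1 sets the geometric decay of digit correlations.
import numpy as np

P = np.array([[0.9, 0.1],        # illustrative transition probabilities
              [0.3, 0.7]])

eigvals, eigvecs = np.linalg.eig(P.T)
order = np.argsort(-np.abs(eigvals))
eigvals = eigvals[order]

# Stationary distribution = left eigenvector of P for eigenvalue 1, normalized.
pi = np.real(eigvecs[:, order[0]])
pi = pi / pi.sum()

print("characteristic roots:", np.round(np.real(eigvals), 4))   # [1.0, 0.6]
print("stationary P(digit = 1):", round(pi[1], 4))              # 0.25
print("lag-k correlation decays like", round(np.real(eigvals[1]), 4), "** k")
```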
- Published
- 1968
49. On Effective Procedures for Speeding Up Algorithms
- Author
-
Manuel Blum
- Subjects
Discrete mathematics ,Large class ,Weighted Majority Algorithm ,Speedup ,Computer science ,Population-based incremental learning ,Cornacchia's algorithm ,Function (mathematics) ,Fast algorithm ,Nondeterministic algorithm ,Shortest Path Faster Algorithm ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Ramer–Douglas–Peucker algorithm ,Output-sensitive algorithm ,Suurballe's algorithm ,Finite set ,Algorithm ,Software ,FSA-Red Algorithm ,Information Systems ,Mathematics - Abstract
This paper is concerned with the nature of speedups. Let f be any recursive function. We show that there is no effective procedure for going from an algorithm for f to another algorithm for f that is significantly faster on all but a finite number of inputs. On the other hand, for a large class of functions f, one can go effectively from any algorithm for f to one that is faster on infinitely many integers. Finally, if one has an algorithm for a given function f, and if there is an algorithm which is faster on all but a finite number of inputs, then even though one cannot obtain this faster algorithm effectively, one can still obtain a pseudo-speedup: a very fast algorithm which computes a variant of the function, one which differs from the original function on only a finite number of inputs.
- Published
- 1971
50. Some observations on analytical aerial triangulation
- Author
-
E.H. Thompson
- Subjects
Orientation (computer vision) ,business.industry ,Computation ,General Engineering ,Triangulation (social science) ,Stereographic projection ,STRIPS ,law.invention ,law ,General Earth and Planetary Sciences ,Computer vision ,Surface triangulation ,Artificial intelligence ,business ,Algorithm ,Iteration process ,General Environmental Science ,Mathematics - Abstract
This paper raises certain problems in the computation of analytical aerial triangulations in such a form as to stimulate discussion. The subjects raised are: the treatment of redundant observations; the incorporation of models into strips and blocks; the condition equations for relative orientation; the use of an intermediate stereographic projection; and the iteration process.
- Published
- 1959