478 results
Search Results
2. Differentiation of Bence Jones Protein from Uroglobulins: A New Test Based on Differential Extraction of Uroproteins Dried on Filter Paper
- Author
-
Hans N. Naumann
- Subjects
Electrophoresis ,Lymphoma ,Filter paper ,business.industry ,Globulins ,Pattern recognition ,General Medicine ,Urine ,Chemistry Techniques, Analytical ,Bence Jones protein ,Proteinuria ,Urinary Tract Infections ,Immunology ,Humans ,Kidney Diseases ,Artificial intelligence ,Waldenstrom Macroglobulinemia ,Differential extraction ,Multiple Myeloma ,business ,Bence Jones Protein ,Mathematics - Published
- 1965
3. Two-Dimension Paper Chromatography of Vanilla Extract
- Author
-
J Fitelson
- Subjects
Paper chromatography ,food.ingredient ,food ,Dimension (vector space) ,business.industry ,Vanilla extract ,Pattern recognition ,Artificial intelligence ,business ,Mathematics - Published
- 1962
4. An Artificial Intelligence Approach to the Symbolic Factorization of Multivariable Polynomials. Technical Report No. CS74019-R.
- Author
-
Virginia Polytechnic Inst. and State Univ., Blacksburg. Dept. of Computer Science. and Claybrook, Billy G.
- Abstract
A new heuristic factorization scheme uses learning to improve the efficiency of determining the symbolic factorization of multivariable polynomials with integer coefficients and an arbitrary number of variables and terms. The factorization scheme makes extensive use of artificial intelligence techniques (e.g., model-building, learning, and automatic classification) in an attempt to reduce the amount of searching for the irreducible factors of the polynomial. The approach taken to polynomial factorization is quite different from previous attempts because: (1) it is distinct from numerical techniques; (2) possibilities for terms in a factor are generated from the terms in the polynomial; and (3) a reclassification technique is used to allow the application of different sets of heuristics to a polynomial during factorization attempts on it. Data presented show the importance of learning to the efficiency of operation of the scheme. Factorization times of polynomials factored by both the scheme described in this paper and Wang's implementation of Berlekamp's algorithm are given and compared, and an analysis of variance experiment provides an indication of the significant sources of variation influencing the factorization time. (Author/DGC)
- Published
- 1974
5. A measure of the color characteristics of white paper
- Author
-
R.E. Lofton
- Subjects
White paper ,Computer Networks and Communications ,Control and Systems Engineering ,business.industry ,Applied Mathematics ,Signal Processing ,Measure (physics) ,Pattern recognition ,Artificial intelligence ,business ,Mathematics - Published
- 1924
6. A note on a paper by Rumelhart and Greeno
- Author
-
Joseph L. Zinnes, Wilson S. Geisler, and Stephen E. Edgell
- Subjects
business.industry ,Applied Mathematics ,media_common.quotation_subject ,Artificial intelligence ,business ,Mathematical economics ,Equivalence (measure theory) ,General Psychology ,Neglect ,media_common ,Mathematics - Abstract
The parameters of the models described by Rumelhart and Greeno in this journal (1971) are constrained in certain ways. Neglect of this fact in their paper leads them to an inadmissible set of parameter values, and an invalid argument and statement concerning the equivalence of two models. However, their conclusion that the Restle model fits their data better than the Luce model remains unchanged.
- Published
- 1973
7. A chronometric study of mental paper folding
- Author
-
Christine Feng and Roger N. Shepard
- Subjects
Combinatorics ,Quantitative Biology::Biomolecules ,Linguistics and Language ,Neuropsychology and Physiological Psychology ,Flat surface ,Artificial Intelligence ,Developmental and Educational Psychology ,Experimental and Cognitive Psychology ,Folding (DSP implementation) ,Edge (geometry) ,Cube ,Square (algebra) ,Mathematics - Abstract
On each trial Ss viewed one of the patterns of six connected squares that result when the faces of a cube are unfolded onto a flat surface. The Ss tried, as rapidly as possible, to decide whether two arrows, each marked on an edge of a (different) square, would or would not meet if the squares were folded back up into the cube. The time required to make such decisions increased linearly (from 2 to about 15 sec) with the sum of the number of squares that would be involved in each fold, if those folds were actually performed physically.
- Published
- 1972
8. Two Papers on the Comparison of Bayesian and Frequentist Approaches to Statistical Problems of Prediction: Bayesian Tolerance Regions
- Author
-
J. Aitchison
- Subjects
Statistics and Probability ,business.industry ,010102 general mathematics ,Bayesian probability ,Machine learning ,computer.software_genre ,01 natural sciences ,010104 statistics & probability ,Frequentist inference ,Artificial intelligence ,0101 mathematics ,business ,computer ,Mathematics - Published
- 1964
9. GLOSS AND SMOOTHNESS OF PAPER
- Author
-
Minoru Kometani and Shigenobu Takagi
- Subjects
Smoothness (probability theory) ,business.industry ,Computer vision ,Artificial intelligence ,business ,Gloss (optics) ,Mathematics - Published
- 1938
10. Flesh, Filter, Paper
- Author
-
Robert Sward
- Subjects
Literature and Literary Theory ,Filter paper ,business.industry ,Flesh ,Computer vision ,Artificial intelligence ,business ,Mathematics - Published
- 1957
11. Erratum to: Correction to the paper 'On the theory of statistical decision functions'
- Author
-
Kameo Matusita
- Subjects
Statistics and Probability ,business.industry ,Evidential reasoning approach ,Decision rule ,Artificial intelligence ,Statistical theory ,business ,Algorithm ,Mathematics - Published
- 1952
12. CONNECTING CHILDREN'S LANGUAGE AND LINGUISTIC THEORY**Revised version of a paper presented at the Buffalo Conference on Psycholinguistics. I am indebted to members of the conference for a number of pertinent criticisms, some of which have led to alterations in the paper. Further theoretical discussion can be found in Roeper (1972)
- Author
-
Thomas Roeper
- Subjects
Structure (mathematical logic) ,business.industry ,computer.software_genre ,Linguistics ,language.human_language ,German ,Transformation (function) ,Simple (abstract algebra) ,Component (UML) ,Theoretical linguistics ,language ,Artificial intelligence ,Element (category theory) ,business ,computer ,Natural language processing ,Word order ,Mathematics - Abstract
Publisher Summary This chapter focuses on connection theory of children's language and linguistic theory. It discusses the problem of a child's recognition of nonuniversal aspects of deep structure. It is not the case that deep structure is the same for all languages. Grammatical relations are, by hypothesis, universal, but the order of elements is not. It is not the case that simple, active, declarative sentences always reflect deep structure order directly. It is not the case that the most frequent forms in surface structure reflect deep structure. Traditionally, the determination of base word order depends upon analysis of the transformational component. The most important transformation in German is the verb-second transformation which transfers an element from final position to second position in the creation of declarative sentences. It is the broad applicability of this transformation that makes elegantly simple a description of German that uses a verb-final base structure. The subordinate-clause strategy is a powerful mechanism for the discovery of deep structure.
- Published
- 1973
13. A Remark on a Paper of Trawinski and David Entitled: 'Selection of the Best Treatment in a Paired-Comparison Experiment'
- Author
-
Peter J. Huber
- Subjects
Operations research ,business.industry ,Paired comparison ,Artificial intelligence ,business ,computer.software_genre ,computer ,Selection (genetic algorithm) ,Natural language processing ,Mathematics - Published
- 1963
14. A 'CRISIS' IN THE THEORY OF PATTERN RECOGNITION**This is the first manuscript submitted to the conference, but the author was unable to attend the meeting. We tried to contact him to obtain the final version of the paper but were unsuccessful
- Author
-
A. Lerner
- Subjects
Sequence ,Flow (mathematics) ,business.industry ,Pattern recognition (psychology) ,Principal (computer security) ,Pattern recognition ,Minification ,Artificial intelligence ,business ,Stochastic approximation ,Field (computer science) ,Period (music) ,Mathematics - Abstract
Publisher Summary This chapter discusses crises in the theory of pattern recognition. The rules obtained in pattern recognition learning, even with scarce empirical data, are sometimes found to be much stronger than could be expected from the estimates valid for the problem as stated, in the sense that they yield correct solutions in situations that have not been encountered in the training sequence. The researchers are attracted by the unusual success of the experiments in applying pattern recognition to various fields of science by using the most diverse learning algorithms. Different facets of the problem were viewed as the principal ones over the period of research in the field. It was assumed that one should understand the learning technology of living beings and then computerize it. Then, however, the problem was restated as one of risk minimization. As a result, a huge number of algorithms appeared, and this flow is strong even at present. Soon these algorithms were found to be based on the same ideas, either stochastic approximation or minimization of the empirical risk.
- Published
- 1972
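The "minimization of the empirical risk" named in the abstract above can be shown in its simplest form. A minimal sketch: pick, from a small family of decision rules, the one that misclassifies the fewest training samples. The one-dimensional threshold family and the toy data are invented for illustration and are not from the paper.

```python
# Toy training sequence of (feature, label) pairs -- an illustrative
# assumption, not data from the source.
samples = [(0.5, 0), (1.1, 0), (1.9, 1), (2.7, 1), (1.4, 0)]

def empirical_risk(threshold):
    """Fraction of training samples misclassified by the rule 'x > threshold'."""
    errors = sum((x > threshold) != bool(label) for x, label in samples)
    return errors / len(samples)

# Empirical risk minimization over a grid of candidate thresholds.
best = min((t / 10 for t in range(0, 31)), key=empirical_risk)
print(empirical_risk(best))  # 0.0
```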
15. CLASS INCLUSION PROCESSES**This work was supported in part by a grant to Dr. Klahr from the Ford Foundation and to Dr. Wallace from the British Social Science Research Council. We wish to thank our colleagues Guy Groen, Allen Newell, Don Waterman and Richard Young for many stimulating discussions about production systems and cognitive development. We are further indebted to Allen Newell for his careful critique of an earlier version of this paper
- Author
-
David Klahr and J.G. Wallace
- Subjects
Class (set theory) ,Generality ,Information processing theory ,Series (mathematics) ,Action (philosophy) ,Basis (linear algebra) ,business.industry ,Decomposition (computer science) ,Artificial intelligence ,business ,Mathematical economics ,Unitary state ,Mathematics - Abstract
Publisher Summary This chapter discusses class inclusion processes and describes the problems of stage and transition. A gap exists between the hypothetical structures and processes that form the basis of the theory and the level of performance. The theoretical account is presented at a level of generality that makes it uncertain as to whether it is sufficient to account for the complex and varied behavior that it purports to explain. There is no way at all of determining what can be its consequences on the level of performance. A much more detailed account of the functioning of specific processes is necessary before these uncertainties can be dispelled. The information processing approach provides a methodology that bridges the gap between theory and performance. The most specific theory of human problem solving deals entirely with adult subjects. A repeated decomposition of action chains in unfamiliar situations can ultimately lead to a series of unitary productions.
- Published
- 1972
16. A Selection of Early Statistical Papers of J. Neyman
- Author
-
Jerzy Neyman
- Subjects
business.industry ,Artificial intelligence ,business ,Machine learning ,computer.software_genre ,computer ,Selection (genetic algorithm) ,Mathematics - Published
- 1967
17. Comments on a Paper By Schild and Fredman
- Author
-
Willard L. Eastman
- Subjects
Theoretical computer science ,business.industry ,Strategy and Management ,Scheduling (production processes) ,Linear loss ,Artificial intelligence ,Management Science and Operations Research ,business ,Mathematics - Abstract
A comment on Schild, A., I. Fredman. 1961. On scheduling tasks with associated linear loss functions. Management Sci. 7 (3, April) 280–285.
- Published
- 1965
18. A Semi-quantitative Test Paper Method for Organophosphate by the Use of Indophenylacetate
- Author
-
K. Ueda, T. Mori, M. Nishimura, and Y. Usui
- Subjects
chemistry.chemical_compound ,chemistry ,business.industry ,Organophosphate ,Public Health, Environmental and Occupational Health ,Pattern recognition ,Artificial intelligence ,Toxicology ,business ,Semi quantitative ,Mathematics ,Test (assessment) - Published
- 1961
19. Comments on the paper by C. A. Coulson
- Author
-
W. Moffitt
- Subjects
Electronegativity ,Force constant ,Theoretical physics ,General Energy ,business.industry ,Atom (measure theory) ,Criticism ,Context (language use) ,Artificial intelligence ,Term (logic) ,business ,Measure (mathematics) ,Mathematics - Abstract
The interesting new definition of electronegativity put forward by Dr Walsh is open to the same sort of criticism as that which may be levelled at that of Pauling. As applied to an atom, the term ‘electronegativity’, in its original context, was surely a measure of its polar propensities. And both of these empirical definitions appear to preserve only the most tenuous of connexions with this connotation. It is true that A —H bond force constants appear to increase on going from left to right across the periodic table and that they decrease on going down. Deviations from bond additivity exhibit closely related regularities. We may observe that Mulliken's electronegativities ½( I + E ) behave in a similar fashion. But it is quite another thing to define electronegativities on the basis of such correlations. Indeed, the work reported by Dr Cottrell and Dr Sutton suggests that both Dr Walsh’s and Pauling’s data may be satisfactorily explained in terms of a purely homopolar effect. Professor Coulson has drawn attention to the very delicate position in which electronegativity theory now finds itself. It may therefore be of interest to the discussion if I may summarize an entirely theoretical approach to these problems which I have developed recently.
- Published
- 1951
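Mulliken's electronegativity, quoted in the abstract above as ½(I + E), is a direct computation from ionization energy I and electron affinity E. A minimal sketch; the hydrogen values below (in eV) are illustrative assumptions, not taken from the source.

```python
def mulliken_electronegativity(ionization_energy, electron_affinity):
    """Mulliken electronegativity: the mean of ionization energy I
    and electron affinity E, as cited in the discussion above."""
    return (ionization_energy + electron_affinity) / 2

# Approximate hydrogen values in eV (illustrative assumption).
print(mulliken_electronegativity(13.6, 0.75))
```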
20. Correction Notes: Correction to Abstract of 'Optimal Classification Rules': Abstracts of Papers
- Author
-
Somesh Das Gupta
- Subjects
business.industry ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language processing ,Mathematics - Published
- 1962
21. Th. Skolem. Forklaring til foranstående avhandling av L. Kalmár (Explanation to the foregoing paper of L. Kalmár). Norsk matematisk tidsskrift, vol. 19 (1937), pp. 130–133
- Author
-
Alonzo Church
- Subjects
Philosophy ,Logic ,business.industry ,Artificial intelligence ,business ,Humanities ,Mathematics - Published
- 1938
22. EFFECT OF HIGH TEMPERATURE ON CHARACTER OF GROWTH OF CABBAGE
- Author
-
Julian C. Miller
- Subjects
Brief Papers ,Character (mathematics) ,Physiology ,business.industry ,Genetics ,Plant Science ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language processing ,Mathematics - Published
- 1928
23. Introduction to String Analysis
- Author
-
Zellig S. Harris
- Subjects
Interpretation (logic) ,Parsing ,business.industry ,Meaning (non-linguistic) ,computer.software_genre ,Syntax ,Set (abstract data type) ,Morpheme ,Artificial intelligence ,business ,computer ,Natural language processing ,Utterance ,Sentence ,Mathematics - Abstract
String analysis has developed out of an attempt to carry out syntactic analysis on a computer, just as, some ten years earlier, transformational analysis developed out of the attempt to normalize texts for discourse analysis. The arrangement of syntax for computability, following in part the method presented in ‘From Morpheme to Utterance’ (Language 22 (1946), 161–83; Paper VI of this volume) was based on an effective procedure for finding in each sentence a sequence (in general, broken) of words which was itself a sentence, belonging to a certain set of minimal sentence structures. This minimal sentence was called the center of the given sentence, and its meaning had an important and central relation to the meaning of the given sentence; this relation can be specified independently of the given sentence. The remainder of the sentence consisted of adjunctions to the center or to the adjunctions; an effective procedure was presented for an ordered determining of these adjunctions, and the ordered adjunctions had an interpretation independent of the given sentence. The original version of this analysis, made for the Univac sentence-decomposing program of 1959, is given in Computable Syntactic Analysis (TDAP 15), 1959 (Paper XVI of this volume).1
- Published
- 1970
24. The implicit conditioning method in statistical mechanics
- Author
-
John M. Richardson
- Subjects
A priori probability ,Information Systems and Management ,Crystal system ,Conditional probability ,Statistical mechanics ,Computer Science Applications ,Theoretical Computer Science ,Least mean squares filter ,Combinatorics ,Artificial Intelligence ,Control and Systems Engineering ,Variational principle ,Simple (abstract algebra) ,Conditioning ,Applied mathematics ,Software ,Mathematics - Abstract
It is well known that least mean square estimation can be employed to calculate conditional means, a procedure called the implicit conditioning method in this paper. It is possible to construct a priori probability densities of tractable form that, when conditioned on certain sets of variables, reduce to conditional probability densities which are identical to the canonical probability densities occurring in the statistical mechanics of certain classical systems. This yields a new variational principle for the calculation of canonical mean values in classical statistical mechanics. In this paper, two versions of this variational principle are applied to a simple lattice system to yield approximate expressions for the canonical mean values of certain properties of physical interest.
- Published
- 1974
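The opening fact of the abstract above, that least mean square estimation yields conditional means, can be checked numerically: among all functions g, the choice g(x) = E[Y | X = x] minimizes the mean square error E[(Y - g(X))^2]. A minimal sketch; the toy data are an assumption of this sketch, not from the paper.

```python
from collections import defaultdict

# Toy (x, y) samples -- illustrative assumption.
samples = [(0, 1.0), (0, 3.0), (1, 5.0), (1, 9.0)]

# Conditional mean of y given each value of x.
groups = defaultdict(list)
for x, y in samples:
    groups[x].append(y)
cond_mean = {x: sum(ys) / len(ys) for x, ys in groups.items()}

def mse(g):
    """Empirical mean square error of the estimator g (a dict x -> g(x))."""
    return sum((y - g[x]) ** 2 for x, y in samples) / len(samples)

print(cond_mean)       # {0: 2.0, 1: 7.0}
print(mse(cond_mean))  # 2.5 -- any other g over the same x values scores worse
```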
25. THE CONCEPT AND MEASUREMENT OF CENTRALITY: AN INFORMATION APPROACH
- Author
-
Charles S. Tapiero and Arie Y. Lewin
- Subjects
Structure (mathematical logic) ,Information Systems and Management ,Basis (linear algebra) ,Inequality ,business.industry ,Process (engineering) ,Strategy and Management ,media_common.quotation_subject ,Structural system ,Information theory ,General Business, Management and Accounting ,Interdependence ,Management of Technology and Innovation ,Artificial intelligence ,business ,Centrality ,media_common ,Cognitive psychology ,Mathematics - Abstract
The complexity of interdependent structural systems greatly complicates the analysis of any single structure. This is particularly the case when a structure represents some behavioral process. For this reason it is necessary to devise measures which can differentiate qualitatively and quantitatively between structures as well as between subsets (or points) of a particular structure. For example, consider the authority structures of two different organizations. They exhibit similarities and differences which a behavioral analyst tries to identify and explain. Typically, both similarities and differences are compared by structural indices which, on the basis of past data and prior information, tend to reflect certain organizational traits. The purpose of this paper is to investigate one particularly important index—centrality. Centrality conveys the notion that points in a structure are not all ‘equal’. This ‘inequality’ vis-a-vis the structure creates a situation in which certain points will be more ‘central’ than others. In this paper we first identify the characteristics of centrality and observe how they may relate to behavioral research. We then develop a procedure for measuring centrality which is based on information theory.
- Published
- 1973
26. Compression algorithms that preserve basic topological features in binary-coded patterns
- Author
-
Mariagiovanna Sami and Renato Stefanelli
- Subjects
Class (set theory) ,media_common.quotation_subject ,Binary number ,Ambiguity ,Type (model theory) ,Topology ,Set (abstract data type) ,Reduction (complexity) ,Artificial Intelligence ,Signal Processing ,Point (geometry) ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Data compression ,media_common ,Mathematics - Abstract
In this paper, some problems related to the recognition of plane, two-tone pictures (e.g. handprinted characters) are considered. It is assumed that suitable algorithms (already described) have been applied to the original picture, in order to obtain linearwise, two-tone figures. A problem that arises at this point is classifying these results in a suitable way; to this purpose, it is necessary to define and perform a set of measurements such that the results obtained by applying them to figures of the same class be the same, the possible ambiguity be minimal and the loss of information be as reduced as possible. In this paper, some algorithms are described that transform the figure into another one of the least possible dimensions, but retaining a set of basic topological and quasi-topological characteristics of the original picture. While implicitly defining a set of measurements to be performed, the “reduction” of the figure is implemented so that several figures having the same basic characteristics give the same “reduced” figure as final result. The considerable reduction of the figure's dimensions may furthermore make the recognition simpler. Several algorithms are described, and the results are compared; all are of parallel type, and therefore particularly suited for hardware implementation.
- Published
- 1973
27. Unit Refutations and Horn Sets
- Author
-
Lawrence J. Henschen and Larry Wos
- Subjects
Discrete mathematics ,Horn clause ,Unit propagation ,SLD resolution ,Horn-satisfiability ,Resolution (logic) ,First-order logic ,Combinatorics ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Rule of inference ,Software ,Axiom ,Information Systems ,Mathematics - Abstract
The key concepts for this automated theorem-proving paper are those of Horn set and strictly-unit refutation. A Horn set is a set of clauses such that none of its members contains more than one positive literal. A strictly-unit refutation is a proof by contradiction in which no step is justified by applying a rule of inference to a set of clauses all of which contain more than one literal. Horn sets occur in many fields of mathematics such as the theory of groups, rings, Moufang loops, and Henkin models. The usual translation into first-order predicate calculus of the axioms of these and many other fields yields a set of Horn clauses. The striking feature of the Horn property for finite sets of clauses is that its presence or absence can be determined by inspection. Thus, the determination of the applicability of the theorems and procedures of this paper is immediate. In Theorem 1 it is proved that, if S is an unsatisfiable Horn set, there exists a strictly-unit refutation of S employing binary resolution alone, thus eliminating the need for factoring; moreover, one of the immediate ancestors of each step of the refutation is in fact a positive unit clause. A theorem similar to Theorem 1 for paramodulation-based inference systems is proven in Theorem 3 but with the inclusion of factoring as an inference rule. In Section 3 two reduction procedures are discussed. For the first, Chang's splitting, a rule is provided to guide both the choice of clauses and the way in which to split. The second reduction procedure enables one to refute a Horn set by refuting but one of a corresponding family of simpler subproblems.
- Published
- 1974
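The abstract above notes that the Horn property of a finite clause set "can be determined by inspection": a clause set is Horn when no clause contains more than one positive literal. A minimal sketch of that inspection; the clause representation, with "~" marking negative literals, is an assumption of this sketch, not the paper's notation.

```python
def is_horn_clause(clause):
    """True if the clause contains at most one positive literal.
    A literal is a string; the prefix "~" marks negation (assumed encoding)."""
    positives = [lit for lit in clause if not lit.startswith("~")]
    return len(positives) <= 1

def is_horn_set(clauses):
    """True if every clause in the set is a Horn clause."""
    return all(is_horn_clause(c) for c in clauses)

# Group-theory-style clauses: a positive unit fact and a rule with one
# positive literal -- both Horn.
group_axioms = [
    {"P(e, x, x)"},
    {"~P(x, y, u)", "~P(y, z, v)", "~P(u, z, w)", "P(x, v, w)"},
]
not_horn = [{"P(a)", "P(b)"}]  # two positive literals: not Horn

print(is_horn_set(group_axioms))  # True
print(is_horn_set(not_horn))      # False
```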
28. Computing a Subinterval of the Image
- Author
-
P. L. Richman
- Subjects
Discrete mathematics ,Class (set theory) ,Image (category theory) ,Zero (complex analysis) ,Function (mathematics) ,Interval (mathematics) ,System of linear equations ,Set (abstract data type) ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Bounding overwatch ,Software ,Information Systems ,Mathematics - Abstract
The problem of computing a desired function value to within a prescribed tolerance can be formulated in the following two distinct ways: Formulation I: Given x and ε > 0, compute f(x) to within ε. Formulation II: Given only that x is in a closed interval X, compute a subinterval of the image, f(X) = {f(x) : x ∈ X}. The first formulation is applicable when x is known to arbitrary accuracy. The second formulation is applicable when x is known only to a limited accuracy, in which case the tolerance is prescribed albeit indirectly by the interval X, and one must be satisfied with all or part of the set f(X) of possible function values. Elsewhere the author has presented an efficient solution to Formulation I for any rational f and many nonrational f. B. A. Chartres has presented an efficient solution to Formulation II for a very restricted class of rational f and for a few nonrational f. In this paper a solution to Formulation II for the arbitrary nonconstant rational f is presented. By bounding df/dx away from zero over some subset of X, it is shown how to reduce Formulation II to Formulation I, yielding the solution given here. In generalizing to vector-valued functions f, Chartres has solved Formulation II only for rational f which satisfy a linear system of equations, while this paper presents a solution for arbitrary non-degenerate rational vector-valued f.
- Published
- 1974
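The monotonicity reduction summarized above has a simple core: once df/dx is bounded away from zero on a subinterval [a, b] of X, f is strictly monotone there, so the image f([a, b]) is exactly the interval with endpoints f(a) and f(b), which is a valid subinterval of f(X). A minimal sketch of that final step (the function name is an illustrative assumption; establishing the derivative bound itself is the substance of the paper and is not reproduced here).

```python
def image_subinterval(f, a, b):
    """Image of [a, b] under a function assumed strictly monotone on [a, b]:
    the closed interval with endpoints f(a) and f(b)."""
    fa, fb = f(a), f(b)
    return (min(fa, fb), max(fa, fb))

# f(x) = 1/x has df/dx = -1/x**2, bounded away from zero on [1, 2],
# so the image is exactly [1/2, 1].
print(image_subinterval(lambda x: 1.0 / x, 1.0, 2.0))  # (0.5, 1.0)
```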
29. Two-automata games
- Author
-
Akihiro Takeuchi, Kokichi Tanaka, and Tadahiro Kitahashi
- Subjects
Discrete mathematics ,Computer Science::Computer Science and Game Theory ,Information Systems and Management ,Expected value ,Computer Science Applications ,Theoretical Computer Science ,Automaton ,Matrix (mathematics) ,symbols.namesake ,Zero-sum game ,Artificial Intelligence ,Control and Systems Engineering ,Example of a game without a value ,symbols ,Mathematical economics ,Game theory ,Value (mathematics) ,Software ,Von Neumann architecture ,Mathematics - Abstract
It is an interesting point of view to consider automata games as a model of human behaviors in a society. Automata games have been proposed by Tsetlin and Krylov. In their model, however, the final expected value of winnings was not equal to the value of Von Neumann in game theory. In this paper, we consider two-automata zero-sum games between the automata proposed by Fu and Li, each with two strategies, and show that if the game matrix G = [g_ij] satisfies a certain condition, then the final expected value of winnings is equal to the value of Von Neumann. Satisfactory results are obtained by computer simulations, as shown at the end of this paper.
- Published
- 1974
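For the two-strategy games discussed above, the value of Von Neumann that the automata's expected winnings are compared against can be computed in closed form. A minimal sketch using the textbook 2x2 mixed-strategy formula (this is the reference value only, not the automata simulation of the paper).

```python
def von_neumann_value_2x2(G):
    """Von Neumann value of a 2x2 zero-sum game for the row player.
    G[i][j] is the row player's payoff. A saddle point is checked first;
    otherwise the standard mixed-strategy formula applies."""
    (a, b), (c, d) = G
    maximin = max(min(a, b), min(c, d))  # row player's guaranteed floor
    minimax = min(max(a, c), max(b, d))  # column player's guaranteed cap
    if maximin == minimax:               # saddle point: pure strategies suffice
        return maximin
    return (a * d - b * c) / (a + d - b - c)

# Matching pennies: no saddle point; value 0 at mixed strategy (1/2, 1/2).
print(von_neumann_value_2x2([[1, -1], [-1, 1]]))  # 0.0
```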
30. A formalization of cluster analysis
- Author
-
William E. Wright
- Subjects
Class (set theory) ,Fuzzy clustering ,Correlation clustering ,Constrained clustering ,Conceptual clustering ,computer.software_genre ,Set (abstract data type) ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Signal Processing ,Computer Vision and Pattern Recognition ,Data mining ,Cluster analysis ,computer ,Software ,Axiom ,Mathematics - Abstract
This paper presents a formalization of the concept of cluster analysis. It begins with an intuitive description of clustering, and discusses the separation of the measurement problem from the clustering problem. It develops the nature of the elements to be clustered, the nature of the possible clusters, and the nature of the clustering results. Particularly significant is the introduction of the attribute of mass of an element. The paper defines a class of functions, called clustering functions, having a certain domain and range and satisfying a certain set of properties, called axioms. It discusses some properties which are inadequate to serve as axioms. Finally, it presents a function which is in the class of clustering functions.
- Published
- 1973
31. Recognition of X-Ray Picture Patterns
- Author
-
King-Sun Fu and Y. P. Chien
- Subjects
business.industry ,Feature extraction ,General Engineering ,Boundary (topology) ,Class (biology) ,Set (abstract data type) ,Polygon ,Pattern recognition (psychology) ,Medical imaging ,Preprocessor ,Computer vision ,Artificial intelligence ,business ,Mathematics - Abstract
It is generally a problem to select the appropriate preprocessing and feature-extraction technique in most pictorial pattern recognition applications so that an accurate classification is possible. In this paper a class of pictures of medical importance, namely, chest X-ray pictures, is used to test the proposed preprocessing and feature-extraction technique. The technique presented in this paper is applied only to chest X-ray images; however, the same technique could also be applied to a fairly broad class of picture patterns with only some minor modifications. The proposed preprocessing technique, which utilizes the local and global information of the picture patterns, is to extract the lung boundary. The lung field is then enclosed by a polygon which is the piecewise linear approximation of the lung boundary. The set of texture features, which are the averages of some local property measures, has then been extracted in this approximated lung area. The proposed technique has been tested on two sets of X-ray picture classes: one with abnormalities caused by a known disease and the other with abnormalities caused by some unknown effects in the lung region. The classification results presented in this paper show the feasibility of the proposed pictorial pattern recognition system in effectively screening out the abnormal pictures without human intervention.
- Published
- 1974
32. Bernstein-Bézier Methods for the Computer-Aided Design of Free-Form Curves and Surfaces
- Author
-
Richard F. Riesenfeld and William J. Gordon
- Subjects
Pure mathematics ,Bézier curve ,Monotonic function ,computer.software_genre ,Bernstein polynomial ,Convexity ,Algebra ,Geometric design ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Computer Aided Design ,Differentiable function ,computer ,Software ,Information Systems ,Parametric statistics ,Mathematics - Abstract
The mth-degree Bernstein polynomial approximation to a function ƒ defined over [0, 1] is Σ_{μ=0}^{m} ƒ(μ/m) φ_μ(s), where the weights φ_μ(s) are binomial density functions. The Bernstein approximations inherit many of the global characteristics of ƒ, like monotonicity and convexity, and they always are at least as “smooth” as ƒ, where “smooth” refers to the number of undulations, the total variation, and the differentiability class of ƒ. Historically, their relatively slow convergence in the L∞-norm has tended to discourage their use in practical applications. However, in a large class of problems the smoothness of an approximating function is of greater importance than closeness of fit. This is especially true in connection with problems of computer-aided geometric design of curves and surfaces where aesthetic criteria and the intrinsic properties of shape are major considerations. For this latter class of problems, P. Bézier of Renault has successfully exploited the properties of parametric Bernstein polynomials. The purpose of this paper is to analyze the Bézier techniques and to explore various extensions and generalizations. In a sequel, the authors consider the extension of the results contained herein to free-form curve and surface design using polynomial splines. These B-spline methods have several advantages over the techniques described in the present paper.
- Published
- 1974
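The approximation quoted in the abstract above can be evaluated directly, with the binomial-density weights written out as φ_μ(s) = C(m, μ) s^μ (1 - s)^(m - μ). A minimal sketch (function names are illustrative); note the endpoint interpolation, one of the inherited global properties mentioned above.

```python
from math import comb

def bernstein_approx(f, m, s):
    """Degree-m Bernstein polynomial approximation of f over [0, 1]:
    sum over mu of f(mu/m) * C(m, mu) * s**mu * (1-s)**(m-mu)."""
    return sum(f(mu / m) * comb(m, mu) * s**mu * (1 - s)**(m - mu)
               for mu in range(m + 1))

f = lambda s: s * s  # a convex test function; convexity is inherited

# The approximation interpolates f at both endpoints of [0, 1].
print(bernstein_approx(f, 10, 0.0))  # 0.0
print(bernstein_approx(f, 10, 1.0))  # 1.0
```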
33. ON A PATTERN CLASSIFICATION PROBLEM ON THE BASIS OF A TRAINING SEQUENCE ASSOCIATED WITH DEPENDENT RANDOM VARIABLES
- Author
-
Masafumi Watanabe
- Subjects
Basis (linear algebra) ,business.industry ,Materials Science (miscellaneous) ,Training (meteorology) ,Pattern recognition ,General Business, Management and Accounting ,Industrial and Manufacturing Engineering ,Dependent random variables ,Statistics ,Artificial intelligence ,Business and International Management ,General Agricultural and Biological Sciences ,business ,Sequence (medicine) ,Mathematics - Abstract
In this paper we shall be concerned with the pattern classification problem related to "learning with a teacher". Many previous authors have studied this problem under the given situation of a training sequence composed of observed patterns independently sampled from a common population. From the practical point of view, however, the situation that observed patterns are independently sampled is rather restrictive. For this reason, in this paper the author treats the pattern classification problem on the basis of a dependent sequence of observed patterns. In [8], [9] and [10], K. Tanaka treated the same problem as the present author, but restricted himself to the parametric case. In this paper, we shall consider the non-parametric case, and appeal to the method which has been developed in [12]. Consequently, our various conditions imposed are different from those in [8], [9] and [10]. This paper consists of five sections. In Section 2, we shall give the formulation of our problem and five assumptions necessary for subsequent arguments. In Section 3, we shall define a recursive algorithm for the pattern classification problem, which is an application of the dynamic stochastic approximation method [3], and investigate the convergence of it. The meaning of the convergence is "in the mean". In Section 4 we shall give two examples.
- Published
- 1974
34. Machine recognition of printed Chinese characters via transformation algorithms
- Author
-
Robert C. Shiau and Paul P. Wang
- Subjects
Topological property ,Structure (mathematical logic) ,Engineering ,Philosophy of design ,business.industry ,Intelligent character recognition ,Speech recognition ,Feature extraction ,Pattern recognition ,Transformation (function) ,Artificial Intelligence ,Hadamard transform ,Signal Processing ,Pattern recognition (psychology) ,Feature (machine learning) ,Artificial intelligence ,Computer Vision and Pattern Recognition ,Chinese characters ,business ,Algorithm ,Software ,Mathematics - Abstract
This paper presents some novel results concerning the recognition of single-font printed Chinese characters via the transformation algorithms of Fourier, Hadamard, and Rapid. The new design philosophy of a three-stage structure is believed to offer at least a suboptimal search strategy for recognizing printed Chinese characters with a dictionary of 7000–8000 characters. The transformation algorithms discussed in this paper will be used in the last two stages. Extensive experiments and simulations concerning feature extraction and noisy or abnormal pattern recognition have been carried out (the simulations have been restricted to a 63-character subset called “Radicals”). Comparison has been made of all three transforms according to their ability to recognize characters.
- Published
- 1973
35. The Animals of Architecture: Some Census Results on N-Omino Populations for N = 6, 7, 8
- Author
-
L March and R Matela
- Subjects
Discrete mathematics ,Polyomino ,business.industry ,Geography, Planning and Development ,Census ,Square (algebra) ,Set (abstract data type) ,Golomb coding ,Architectural education ,Artificial intelligence ,Architecture ,business ,General Environmental Science ,Mathematics - Abstract
An N-omino is a planar edge-connected set of N square cells. The terms polyomino (N-omino) and animal are equivalent, the first attributable to Golomb (1954) and the latter to Read (1962). This paper presents some results derived from a census of three different populations of N-ominoes for N = 6, 7, and 8. In all, 512 different polyominoes have been analysed in terms of properties of form which are of architectural interest. In this respect the paper extends our knowledge of the so-called ‘animals of architecture’ (Frew et al., 1972). The paper concludes with some reflections on the nature of architectural solution spaces and design processes based on the experience of conducting this census, and the potential role of polyomino studies in architectural education.
- Published
- 1974
36. The shape-oriented dissimilarity of polygons and its application to the classification of chromosome images
- Author
-
E.T Lee
- Subjects
Combinatorics ,Artificial Intelligence ,business.industry ,Signal Processing ,Pattern recognition ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Invariant (mathematics) ,business ,Software ,Mathematics - Abstract
Pavlidis in his “analysis of set patterns” proposed three size-oriented similarity measures. In this paper, shape-oriented similarity and dissimilarity measures of triangles and polygons are proposed and investigated. There are three advantages of shape-oriented similarity measures. First, two polygons may have the same shape but differ in area and dimensions and still be similar. Second, shape-oriented similarity and dissimilarity measures can be normalized between zero and one. Third, shape-oriented similarity measures are invariant with respect to rotation, translation, or expansion or contraction in size. The “rubber-mask” technique proposed by Widrow utilized length and width parameters in the classification of chromosomes. In this paper, chromosome images are classified through the use of angular and dimensional proximity measures which are in terms of angle and length parameters. The results obtained in this paper may contribute to processing a picture from the polygonal approximation stage to the final filtering stage in order to recognize or classify a picture.
- Published
- 1974
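As a hypothetical illustration of a shape-oriented, size-invariant dissimilarity in the spirit of the abstract above (this is not Lee's actual measure), triangles can be compared by their sorted interior angles, which are unchanged by rotation, translation, and scaling:

```python
import math

def angles(tri):
    """Sorted interior angles of a triangle given as three (x, y) vertices."""
    a = math.dist(tri[1], tri[2])
    b = math.dist(tri[0], tri[2])
    c = math.dist(tri[0], tri[1])
    A = math.acos((b * b + c * c - a * a) / (2 * b * c))  # law of cosines
    B = math.acos((a * a + c * c - b * b) / (2 * a * c))
    return sorted([A, B, math.pi - A - B])

def shape_dissimilarity(t1, t2):
    """Angle-based dissimilarity: zero iff the triangles have the same shape.
    Dividing by 2*pi keeps the value in [0, 1), echoing the normalization
    property claimed in the abstract."""
    return sum(abs(x - y) for x, y in zip(angles(t1), angles(t2))) / (2 * math.pi)

t = [(0, 0), (1, 0), (0, 1)]
t_scaled = [(0, 0), (3, 0), (0, 3)]   # same shape, three times the size
assert shape_dissimilarity(t, t_scaled) < 1e-9
```

Two similar triangles of very different areas score zero, whereas any area-based measure would separate them; this is the first advantage the abstract lists.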
37. Partitioned estimation algorithms, II: Linear estimation
- Author
-
Demetrios G. Lainiotis
- Subjects
Information Systems and Management ,Stochastic process ,Linear system ,Initialization ,Computer Science Applications ,Theoretical Computer Science ,Controllability ,Matrix (mathematics) ,symbols.namesake ,Artificial Intelligence ,Control and Systems Engineering ,symbols ,Partition (number theory) ,Observability ,Fisher information ,Algorithm ,Software ,Smoothing ,Linear filter ,Mathematics - Abstract
In a radically new approach to linear estimation, Lainiotis [33, 36–37, 52–53], using the “partition theorem”-an explicit Bayes theorem-obtained fundamentally new linear filtering and smoothing algorithms both for continuous as well as discrete data. The new algorithms are given in explicit, integral expressions of a “partitioned” form, and in terms of decoupled forward filters. The “partitioned” algorithms were shown to be especially advantageous from a computational as well as from an analysis standpoint. They are essentially based on the decomposition of the innovations into partial or conditional innovations and residuals. In this paper, the “partitioned” algorithms are shown to be the natural framework in which to study such important concepts as observability, controllability, unbiasedness, and the solution of Riccati equations. Specifically, in this paper, the “partitioned” algorithms are re-examined yielding further insight as well as several significant new results on: (a) unbiased estimation and filter initialization procedures; (b) stochastic observability and stochastic controllability; (c) the interconnection between stochastic observability, the Fisher information matrix, and the Cramer-Rao bound; (d) estimation error-bounds; and most importantly (e) computationally effective “partitioned” solutions of time-varying matrix Riccati equations. In fact, all of the above results have been obtained for general, time-varying, lumped, linear systems. In addition, it is shown that previously established smoothing algorithms, such as the Meditch differential algorithm and the Kailath-Frost total innovation algorithm, are readily obtained from the “partitioned” algorithms. The properties of the “partitioned” algorithms are obtained, thoroughly examined, and compared to those of other algorithms.
- Published
- 1974
38. A MODEL OF INFORMATION INTEGRATION AND ITS
- Author
-
Hitoshi Fujisawa and Kimiyoshi Hirota
- Subjects
business.industry ,Weight factor ,Impression formation ,Artificial intelligence ,business ,Information integration ,Mathematics ,Impression - Abstract
This paper points out some problems with models of impression formation under the traditional point of view. One of the problems may stem from the one-dimensional view of the scale; the model is therefore constructed from a multi-dimensional point of view. The other lies in the weight factor, so in this paper the weight is defined as the degree of relevance with respect to the meaning factor of the stimulus word. After these considerations, a new progressive redundancy model is presented and confirmed by experiment and computer simulation. The feature of this model lies in reducing the impression to common meaning factors and unique meaning factors based on a factor-analytic method.
- Published
- 1974
39. The Theory of Search. II. Target Detection
- Author
-
B. O. Koopman
- Subjects
Visual detection ,Observer (quantum physics) ,Series (mathematics) ,business.industry ,Computer vision ,Kinematics ,Artificial intelligence ,Management Science and Operations Research ,business ,Computer Science Applications ,Course (navigation) ,Mathematics - Abstract
“Kinematic bases,” the first paper of this series, discussed the geometric and kinematic factors involved in search—the positions, motions, and contacts of observers and targets. Probability was introduced only in assuming specific relative positions for the observer and target. The present paper discusses the uncertainties inherent in the act of detection under various specific conditions of contact. In the course of the discussion a body of methods for applying probability to problems of detection is developed. It must be emphasized, however, that these methods are conditioned by the particular situation in the case of visual detection because the different elementary acts of looking or “glimpses” are essentially independent trials. The reason for the distinction follows.
- Published
- 1956
40. Eigenschaften und Aufbau von Lernmatrizen für nichtbinäre Signale
- Author
-
P. Müller
- Subjects
Matrix (mathematics) ,Similarity (network science) ,Basis (linear algebra) ,business.industry ,Cybernetics ,Pattern recognition ,General Medicine ,Artificial intelligence ,business ,Realization (systems) ,Mathematics - Abstract
This paper is concerned with the learning matrix for non-binary signals (LMn) as a classifying network. It can be shown that the properties of a LMn are determined by five parameters. One of these parameters corresponds to a criterion of similarity as the basis of the classification process. Two types of LMn, incorporating different criteria of similarity are investigated and compared as far as the accuracy and the invariance of classification are concerned. The second part of the paper deals with the technical realization of both types of the LMn. There the main problem is the storage of non-binary information in the connecting elements of the LMn. This difficulty can be met by the application of control techniques. Finally some results of investigations on transfluxors are reported which show that transfluxors are suitable connecting elements of a LMn.
- Published
- 1964
41. Numerical methods for fuzzy clustering
- Author
-
Enrique H. Ruspini
- Subjects
Information Systems and Management ,Fuzzy clustering ,Fuzzy classification ,Fuzzy set ,Type-2 fuzzy sets and systems ,Defuzzification ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Fuzzy set operations ,Fuzzy number ,Algorithm ,Software ,Membership function ,Mathematics - Abstract
In a previous paper [1] the use of the concept of fuzzy sets in clustering was proposed. The convenience of fuzzy clustering over conventional representation was then stressed. Assigning each point a degree of belongingness to each cluster provides a way of characterizing bridges, strays, and undetermined points. This is especially useful when considering scattered data. The classificatory process may be considered as the breakdown of the probability density function of the original set into the weighted sum of the component fuzzy set densities. Such decomposition should be performed so that the components really represent clusters. This is done by optimization of some functional defined over all possible fuzzy classifications of the data set. Several functionals were suggested in [1]. The bulk of this paper is concerned with numerical techniques useful in the solution of such problems. The first two formulas treated do not provide an acceptable fuzzy classification but yield good starting points for the minimization of a third functional. This last method obtains very good dichotomies and is characterized by slower convergence than the previous processes. Using that functional, a modification is suggested to obtain partitions in more than two sets. Numerous computational experiments are presented.
- Published
- 1970
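Ruspini's own functionals are not reproduced here, but the graded "degree of belongingness" the abstract describes can be illustrated with the later, closely related fuzzy c-means membership formula (an assumption of this sketch, not the paper's method):

```python
def fuzzy_memberships(points, centers, m=2.0):
    """Degree of belongingness of each 1-D point to each cluster center,
    using the standard fuzzy c-means membership update (fuzzifier m > 1).
    This is a later relative of Ruspini's functional-optimization approach."""
    out = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if any(di == 0 for di in d):                  # point coincides with a center
            out.append([1.0 if di == 0 else 0.0 for di in d])
            continue
        u = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(centers)))
             for i in range(len(centers))]
        out.append(u)
    return out

pts = [0.0, 1.0, 5.0]
u = fuzzy_memberships(pts, centers=[0.0, 5.0])
assert all(abs(sum(row) - 1.0) < 1e-9 for row in u)   # memberships sum to 1
# The "bridge" point 1.0 gets a graded membership rather than a hard label,
# which is exactly the advantage the abstract claims for fuzzy clustering.
```

A point midway between clusters receives roughly equal membership in both, making bridges and strays visible in a way a hard partition cannot express.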
42. Organismic sets: II. Some general considerations
- Author
-
N. Rashevsky
- Subjects
Pharmacology ,Mathematical sociology ,Fundamental theorem ,business.industry ,General Mathematics ,General Neuroscience ,Immunology ,General Medicine ,General Biochemistry, Genetics and Molecular Biology ,Epistemology ,Zero (linguistics) ,Mathematical biophysics ,Core (game theory) ,Range (mathematics) ,Sociology ,Computational Theory and Mathematics ,Specialization (logic) ,Artificial intelligence ,General Agricultural and Biological Sciences ,business ,Set (psychology) ,Biology ,Mathematics ,General Environmental Science - Abstract
The theory of organismic sets, developed in previous papers (Bull. Math. Biophysics, 29, 139–152; 389–393; 643–647), is further generalized. To conform better with some biological and sociological facts the basic definitions are made more general. The conclusion is reached that every organismic set S_o is in general the union of three disjoint subsets S_o1, S_o2 and S_o3. Of these the subset S_o1, called the “core”, is equivalent to an organismic set defined in previous publications. Its functioning is essential for the functioning of S_o. The subsets S_o2 and S_o3, taken alone, are not organismic sets. The first of them is responsible for such biological or sociological functions which are not necessary for the “immediate” survival of S_o but which are important for adaptation to a changing environment and are therefore essential for a “long range survival.” The second one, S_o3, is responsible for biological or social functions which are irrelevant for the survival of S_o. Biological and sociological examples of S_o2 and S_o3 are given. In addition to the fundamental theorem established in the first of the above mentioned papers, three new conclusions are derived. One is that in organismic sets of order higher than zero not all elements are specialized. The second is that every organismic set of order higher than zero is mortal. The third is that with increasing specialization the intensities of some activities in some elements of S_o are reduced. Again biological and sociological examples are given.
- Published
- 1968
43. Application of Ternary Algebra to the Study of Static Hazards
- Author
-
Michael Yoeli and Shlomo Rinon
- Subjects
Hazard (logic) ,Mathematical optimization ,Combinational switching ,Function (mathematics) ,Ternary algebra ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Lattice (order) ,Algebraic number ,Ternary operation ,Software ,Hardware_LOGICDESIGN ,Information Systems ,Electronic circuit ,Mathematics - Abstract
This paper is concerned with the study of static hazards in combinational switching circuits by means of a suitable ternary switching algebra. Techniques for hazard detection and elimination are developed which are analogous to the Huffman-McCluskey procedures. However, gate and series-parallel contact networks are treated by algebraic methods exclusively, whereas a topological approach is applied to non-series-parallel contact networks only. Moreover, the paper derives necessary and sufficient conditions for a ternary function to adequately describe the steady-state and static hazard behavior of a combinational network. The sufficiency of these conditions is proved constructively leading to a method for the synthesis of combinational networks containing static hazards as specified. The section on non-series-parallel contact networks also includes a brief discussion of the applicability of lattice matrix theory to hazard detection. Finally, hazard prevention in contact networks by suitable contact sequencing techniques is discussed and a ternary map method for the synthesis of such networks is explained.
- Published
- 1964
44. Strategy construction using homomorphisms between games
- Author
-
George W. Ernst and R. B. Banerji
- Subjects
Discrete mathematics ,Computer Science::Computer Science and Game Theory ,Linguistics and Language ,Theoretical computer science ,ComputingMilieux_PERSONALCOMPUTING ,Combinatorial game theory ,Solved game ,Class (philosophy) ,Language and Linguistics ,Artificial Intelligence ,Similarity (psychology) ,Homomorphism ,Special case ,Representation (mathematics) ,Mathematics - Abstract
One reason for changing the representation of a game is to make it similar to a previously solved game. As a definition of similarity, people have proposed homomorphism-like structures. Two such structures are discussed in this paper and it is proven that they "preserve" winning strategies. They are incomparable in their strength and areas of applicability; i.e., neither of them is a special case of the other. The games to which these homomorphisms have been applied are positional games and decomposable games. The reason for concentrating on these two classes of games is that powerful methods for playing these games are known. For motivation, these methods are briefly described in the paper. The two homomorphisms discussed in this paper effectively extend the methods for playing positional and decomposable games to a much larger class of games. For several specific games which are neither positional nor decomposable, it is shown how they can be played as though they were positional or decomposable by using the homomorphisms.
- Published
- 1972
45. Statistical Determination of Certain Mathematical Constants and Functions Using Computers
- Author
-
Satya D. Dubey
- Subjects
Pseudorandom number generator ,Mathematical constants and functions ,Scientific literature ,Confidence interval ,symbols.namesake ,Artificial Intelligence ,Hardware and Architecture ,Control and Systems Engineering ,Euler's formula ,symbols ,Applied mathematics ,Inverse trigonometric functions ,Electronic computer ,Constant (mathematics) ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
With the availability of high speed electronic computers it is now quite convenient to devise statistical experiments for the purpose of estimating certain mathematical constants and functions. The paper contains statistical formulas for estimating such mathematical constants and functions as π, C (Euler's constant), e , Ψ (1) (1), Γ( x ), In x , B ( x, y ), arctan x , Ψ( p ) and polygamma functions. Statistical estimates of these quantities may be used to construct desired confidence intervals for these parameters. Although numerical techniques are available to approximate these mathematical quantities very satisfactorily, a statistical approach to these problems seems to deserve mention in the scientific literature. Numerical illustrations are given in the paper which also give some indication of the effect of pseudorandom numbers on the final results. Statistical procedures, considered in the paper, make extensive use of a high speed electronic computer which may help to develop a positive attitude among theoreticians, promising students of mathematics and others toward the role of computers in the ever-expanding scientific work.
- Published
- 1966
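In the spirit of the statistical experiments described in the abstract above (a generic Monte Carlo sketch, not Dubey's exact formulas), π can be estimated by sampling points in the unit square and attaching an approximate confidence interval to the estimate:

```python
import random
import math

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi with an approximate 95% confidence interval.
    The fraction of random points in the unit square that fall inside the
    quarter circle estimates pi/4; a normal-approximation interval for that
    binomial proportion is scaled up by 4."""
    rng = random.Random(seed)           # fixed seed: pseudorandom, as the paper notes
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    p = hits / n                        # estimates pi/4
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return 4 * p, (4 * (p - half), 4 * (p + half))

est, (lo, hi) = estimate_pi(100_000)
assert 3.0 < est < 3.3 and lo < est < hi
```

The dependence of the result on the pseudorandom number stream, which the abstract singles out, can be seen directly by varying the seed.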
46. Empirical tests of a theory of human acquisition of concepts for sequential patterns
- Author
-
Herbert A. Simon and Kenneth Kotovsky
- Subjects
Linguistics and Language ,Pattern language ,Series (mathematics) ,business.industry ,Minor (linear algebra) ,Extrapolation ,Experimental and Cognitive Psychology ,computer.software_genre ,Task (project management) ,Test (assessment) ,Neuropsychology and Physiological Psychology ,Artificial Intelligence ,Developmental and Educational Psychology ,Thurstone scale ,Artificial intelligence ,Construct (philosophy) ,business ,Algorithm ,computer ,Natural language processing ,Mathematics - Abstract
The paper examines a body of empirical data on S's performing the Thurstone Letter Series Completion task, in order to test the theory proposed by the authors in 1963 for explaining behavior on this task. The data confirm the theory in its main aspects, while indicating the need for some minor extensions and modifications. In particular the data show that subjects first discover the periodicity of the letter series, then construct a description of the pattern, and finally use the pattern description to make an extrapolation. Most of the pattern descriptions used by Ss fall within the pattern language defined in the earlier paper.
- Published
- 1973
47. On the Plantar Papillar Ridges of Japanese Twins
- Author
-
Shiro Kondo
- Subjects
Combinatorics ,business.industry ,Significant difference ,General Medicine ,Artificial intelligence ,business ,Sexual difference ,Mathematics - Abstract
This study was made, as a part of research on twins at the Tokyo University since 1950, upon twins in Tokyo. The author printed toe patterns upon adequate paper, by sticking it to each toe and rolling them both, toe and paper. As to nomenclature, reference to his previous report is hoped. The results obtained are as follows: (1) The author represents combinations of the patterns of each foot (r, l) of each twin (A, B), such as Ar-Br, Al-Bl, Ar-Bl and Br-Bl. From Table 2, it may be concluded that DT shows rather more significant difference than MT. And MT shows a stronger likeness in the respective side of foot of a pair than in the feet of a same person, and vice versa in DT. The characteristics of toes III and IV are noteworthy in respect to MT. (2) MT males reveal the significant difference of the likeness in metatarsal patterns between l and B or A and between r and A, though not so remarkable with females. Sexual differences between each combination are not significant in MT. (3) Metatarsal triradii: In MT males, the likeness of M1 between Ar and Br is much stronger than in other combinations, that of M2 between Al and Bl than in others. (4) Fibular sinus: Every combination resembles equally. (5) The author obtained the transitory series of bailer patterns and toe patterns arranged as in Table 9. Setting the distance between neighbouring patterns as value one, those separated from each other are valued according to the smallest number of gaps between the two patterns concerned, but if the patterns concerned are the same, the value for the distance is zero. As regards metatarsal triradii, fibular and calcaneal sinus, he gives value one when each of them is present in one and is absent in another, excepting those cases when both feet have the neighbouring metatarsal triradii, when it is valued at zero. In this way, points of a twin pair are obtained by the summation of points of both sides of A and B.
Statistically speaking, the mean and the variance are smaller in MT than in DT. The approximation shows the significant difference between MT (_??__??_, _??__??_) and DT (_??__??_, _??__??_) at a 5% level of significance. There is no significant difference between DT (_??__??_, _??__??_) and DT (_??_, _??_).
- Published
- 1953
48. The lambda-gamma calculus: A language adequate for defining recursive functions
- Author
-
P. C. Gilmore
- Subjects
Epsilon calculus ,Information Systems and Management ,Natural deduction ,Simply typed lambda calculus ,Process calculus ,Time-scale calculus ,Lambda cube ,Computer Science Applications ,Theoretical Computer Science ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Artificial Intelligence ,Control and Systems Engineering ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Church encoding ,Calculus ,Typed lambda calculus ,Software ,Mathematics - Abstract
This paper provides another formalization of the concept of an effectively calculable function motivated by the LISP language. In addition to the lambda functional abstraction operator the calculus described in this paper has a gamma decision operator of four arguments. There are two primitive relations of the calculus. The first is a denotation relation which in computer terms is the relationship holding between a name for a memory location and the contents of the location. The second is an identity relationship. An applied lambda-gamma calculus with a successor function as the only primitive function is described in the paper as a fully formal theory with axioms and rules of deduction.
- Published
- 1970
49. On the computational complexity of finite functions and semigroup multiplication
- Author
-
X. Cheng and Heng-Da Cheng
- Subjects
Discrete mathematics ,Information Systems and Management ,Group (mathematics) ,Semigroup ,Function (mathematics) ,Upper and lower bounds ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Multiplication ,Function composition ,Unit (ring theory) ,Realization (systems) ,Algorithm ,Software ,Mathematics - Abstract
In a prior paper [5] we have given a lower bound on the time to multiply in a group using a circuit of unit delay elements with limited fan-in. Also in that paper was a circuit realization for group multiplication requiring time at most one unit greater than the lower bound. In the present work we generalize that circuit construction method so as to render it applicable to any function f: X1 × X2 → Y, where X1 and X2 are finite sets. We then examine the optimality of the method when S is a finite semigroup and f1: S × S → S is semigroup multiplication.
- Published
- 1970
50. A Class of Sequential Games
- Author
-
David A. Kohler and R. Chandrasekaran
- Subjects
Value (ethics) ,Class (computer programming) ,Non-cooperative game ,Relation (database) ,Sequential game ,business.industry ,ComputingMilieux_PERSONALCOMPUTING ,Screening game ,Management Science and Operations Research ,Object (philosophy) ,Outcome (game theory) ,Computer Science Applications ,Artificial intelligence ,business ,Mathematical economics ,Mathematics - Abstract
This paper considers a game between two players who choose from a collection of objects. The players make their choices alternately and V(i,j) represents the value or amount that the ith player will gain if he selects the jth object. In relation to these values, the players may have various strategies or approaches to the game, and each of them constitutes a distinct theoretical problem. The paper formulates and solves three of these problems, each one having practical significance (for example, for the draft of professional football players).
- Published
- 1971
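The alternating-choice game in the abstract above can be sketched with plain minimax over the remaining objects (a generic illustration under the stated setup, not the paper's specialized solution methods):

```python
from functools import lru_cache

def best_play(v):
    """Optimal totals (player 0's, player 1's) when players alternate picks
    from a pool of objects and v[i][j] is player i's value for object j.
    Each player maximizes his own total; plain minimax over remaining objects."""
    n = len(v[0])

    @lru_cache(maxsize=None)
    def play(remaining, player):
        if not remaining:
            return (0.0, 0.0)
        best = None
        for j in remaining:
            rest = tuple(k for k in remaining if k != j)
            totals = list(play(rest, 1 - player))
            totals[player] += v[player][j]
            if best is None or totals[player] > best[player]:
                best = tuple(totals)
        return best

    return play(tuple(range(n)), 0)

# When both players value the objects identically, optimal play is simply to
# take the most valuable object still available (the "draft" intuition).
v = [[5, 3, 1], [5, 3, 1]]
assert best_play(v) == (6, 3)
```

Each of the paper's three problems corresponds to a different assumption about what the players know and optimize; this sketch covers only the case where both players play optimally against each other with full information.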