Search Results (143 results)
2. Automatic Subject Recognition in Scientific Papers: An Empirical Study
- Author
- John O'Connor
- Subjects
- Information retrieval, Empirical research, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Computer science, Subject (documents), Data science, Software, Information Systems
- Published
- 1965
3. Footnote to a Recent Paper
- Author
- Herman H. Goldstine
- Subjects
- Class (set theory), Computer science, Generalization, Stability (learning theory), Jacobi method, Statistical proof, Unitary state, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Calculus, Element (category theory), Software, Information Systems, Von Neumann architecture
- Abstract
In a recent paper 1 I stated that von Neumann had originated the suggestion for the use of Schur's canonical form for arbitrary matrices. I have since learned that the suggestion actually is due in the first instance to John Greenstadt, who brought it to von Neumann's attention. The history of this is rather interesting and was communicated to me in a letter from John Greenstadt, which I quote below. “The full story is, that the triangularization occurred to me early in 1953, after trying in vain to find a general iterative diagonalization procedure, even where one knew that it was possible to diagonalize (defective degeneracy being the impossible case). It seemed to me that one thing that made for the stability of the Jacobi method was the fact that all the elements in the transformation matrix were less than 1. A natural generalization embodying this requirement was to consider unitary transformations. Then, a quick check of Murnaghan's book showed that one could hope only to triangularize, but that this was always possible. “I did some hand calculations on this, and lo and behold! it converged in the few cases I tried. I then programmed it for the CPC and tried many other cases. For several months thereafter, Kogbetliantz, John Sheldon, and I tried to prove convergence, when the algorithm involved the sequential annihilation of off-diagonal elements. We (particularly Sheldon) tried many approaches, but with no hint of success. Finally, in the latter part of 1953, we decided to ask von Neumann, who was then a consultant for IBM, when he was in New York at our offices. “I had prepared a writeup describing the procedure, but von Neumann (rightly) didn't want to bother reading it, so I explained it to him in about two minutes. He spent the next 15 minutes thinking up all the approaches we had thought of in three or four months, plus a few new ones—all, however, without promise.
“At this point he decided that it was a nontrivial problem, and perhaps not worth it anyway, and immediately suggested minimizing the sum of squares of subdiagonal elements, which is, of course, the truly natural generalization of the Jacobi method. For the next 15 minutes he investigated the case when it would be impossible to make an improvement for a particular pivotal element and found that these cases were of measure zero. “I recoded my procedure for the 701 and tried many other matrices of various sizes. I myself never had a failure, but it has since been demonstrated that the method will indeed fail for a class of matrices. Hence, a proof is clearly impossible. However, I think a statistical proof is possible, along lines suggested by Kogbetliantz, which, however, I have not been able to find. I do not think von Neumann's variation of the method would fail. (However, it is more complicated and time consuming.)”
- Published
- 1960
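The letter above turns on the stability of the Jacobi method, in which plane rotations (whose entries are bounded by 1) successively annihilate off-diagonal elements. The following is a minimal illustrative sketch of the classical symmetric case only; it is not Greenstadt's triangularization procedure or von Neumann's least-squares variant discussed in the letter.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10, max_rotations=100):
    """Classical Jacobi method: repeatedly zero the largest
    off-diagonal element of a symmetric matrix with a plane
    rotation, driving the off-diagonal sum of squares to zero."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_rotations):
        # locate the largest off-diagonal element
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle that annihilates A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J   # similarity transform; eigenvalues preserved
    return np.sort(np.diag(A))

eigs = jacobi_eigen([[4.0, 1.0, 0.0],
                     [1.0, 3.0, 1.0],
                     [0.0, 1.0, 2.0]])
```

For symmetric matrices this iteration provably converges; as the letter recounts, it is exactly the unsymmetric (triangularization) generalization whose convergence resisted proof.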
4. Comments on a Paper by Gaver
- Author
- W. Chiu, Roger C. Wood, E. Balkovich, and L. Presser
- Subjects
- Theoretical computer science, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Computer science, Statistical model, Data mining, Computer multitasking, Software, Information Systems
- Abstract
In a 1967 publication, D. P. Gaver studied a probabilistic model of a multiprogramming computer system. His results have been utilized recently by a number of authors. However, we have observed that Gaver's results contain inconsistencies. These inconsistencies are discussed in detail, and a correction is suggested and verified through an independent derivation.
- Published
- 1974
5. An Answer to Mr. J. A. Lively's Remarks on the Paper 'Amphisbaenic Sorting'
- Author
- H. Nagler
- Subjects
- Theoretical computer science, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Computer science, Sorting, Algorithm, Software, Information Systems
- Published
- 1961
6. Unit Refutations and Horn Sets
- Author
- Lawrence J. Henschen and Larry Wos
- Subjects
- Discrete mathematics, Horn clause, Unit propagation, SLD resolution, Horn-satisfiability, Resolution (logic), First-order logic, Combinatorics, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Rule of inference, Software, Axiom, Information Systems, Mathematics
- Abstract
The key concepts for this automated theorem-proving paper are those of Horn set and strictly-unit refutation. A Horn set is a set of clauses such that none of its members contains more than one positive literal. A strictly-unit refutation is a proof by contradiction in which no step is justified by applying a rule of inference to a set of clauses all of which contain more than one literal. Horn sets occur in many fields of mathematics such as the theory of groups, rings, Moufang loops, and Henkin models. The usual translation into first-order predicate calculus of the axioms of these and many other fields yields a set of Horn clauses. The striking feature of the Horn property for finite sets of clauses is that its presence or absence can be determined by inspection. Thus, the determination of the applicability of the theorems and procedures of this paper is immediate. In Theorem 1 it is proved that, if S is an unsatisfiable Horn set, there exists a strictly-unit refutation of S employing binary resolution alone, thus eliminating the need for factoring; moreover, one of the immediate ancestors of each step of the refutation is in fact a positive unit clause. A theorem similar to Theorem 1 for paramodulation-based inference systems is proven in Theorem 3 but with the inclusion of factoring as an inference rule. In Section 3 two reduction procedures are discussed. For the first, Chang's splitting, a rule is provided to guide both the choice of clauses and the way in which to split. The second reduction procedure enables one to refute a Horn set by refuting but one of a corresponding family of simpler subproblems.
- Published
- 1974
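The abstract's claim that the Horn property "can be determined by inspection" is easy to make concrete. A minimal sketch, with an assumed encoding (clauses as tuples of signed integers: positive n for an atom, -n for its negation):

```python
def is_horn_set(clauses):
    """A set of clauses is a Horn set iff no clause contains
    more than one positive literal (the abstract's definition).
    One linear scan over the literals decides the property."""
    return all(sum(1 for lit in clause if lit > 0) <= 1
               for clause in clauses)

# Axioms of equational theories typically translate this way, e.g.
# transitivity of equality:  ~E(x,y) | ~E(y,z) | E(x,z)
horn = [(-1, -2, 3), (1,), (-3,)]   # at most one positive literal each
not_horn = [(1, 2)]                 # two positive literals: not Horn
```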
7. Computing a Subinterval of the Image
- Author
- P. L. Richman
- Subjects
- Discrete mathematics, Class (set theory), Image (category theory), Zero (complex analysis), Function (mathematics), Interval (mathematics), System of linear equations, Set (abstract data type), Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Software, Information Systems, Mathematics
- Abstract
The problem of computing a desired function value to within a prescribed tolerance can be formulated in the following two distinct ways. Formulation I: Given x and ε > 0, compute f(x) to within ε. Formulation II: Given only that x is in a closed interval X, compute a subinterval of the image f(X) = {f(x) : x ∈ X}. The first formulation is applicable when x is known to arbitrary accuracy. The second formulation is applicable when x is known only to limited accuracy, in which case the tolerance is prescribed, albeit indirectly, by the interval X, and one must be satisfied with all or part of the set f(X) of possible function values. Elsewhere the author has presented an efficient solution to Formulation I for any rational f and many nonrational f. B. A. Chartres has presented an efficient solution to Formulation II for a very restricted class of rational f and for a few nonrational f. In this paper a solution to Formulation II for an arbitrary nonconstant rational f is presented. By bounding df/dx away from zero over some subset of X, it is shown how to reduce Formulation II to Formulation I, yielding the solution given here. In generalizing to vector-valued functions f, Chartres has solved Formulation II only for rational f which satisfy a linear system of equations, while this paper presents a solution for arbitrary nondegenerate rational vector-valued f.
- Published
- 1974
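The reduction in the abstract rests on a simple observation: where df/dx is bounded away from zero on X, f is strictly monotone there, so the image of the interval is spanned by the two endpoint values. A hedged sketch of just that step, assuming monotonicity is already established (which is the hard part the paper addresses) and ignoring rounding error in evaluating f:

```python
def image_interval_monotone(f, a, b):
    """For f continuous with df/dx bounded away from zero on
    [a, b] (hence strictly monotone there), the image f([a, b])
    is exactly the interval between the endpoint values."""
    fa, fb = f(a), f(b)
    return (min(fa, fb), max(fa, fb))

# f(x) = x^3 + x has derivative 3x^2 + 1 >= 1 > 0 everywhere,
# so the hypothesis holds on any interval.
lo, hi = image_interval_monotone(lambda x: x**3 + x, 0.0, 2.0)
```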
8. Application of the Diffusion Approximation to Queueing Networks I: Equilibrium Queue Distributions
- Author
- Hisashi Kobayashi
- Subjects
- Mathematical optimization, Queueing theory, Computer science, M/M/1 queue, G/G/1 queue, Fork–join queue, Heavy traffic approximation, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Mean value analysis, Layered queueing network, Applied mathematics, Bulk queue, Software, Information Systems
- Abstract
The practical value of queueing theory in engineering applications such as in computer modeling has been limited, since the interest in mathematical tractability has almost always led to an oversimplified model. The diffusion process approximation is an attempt to break away from the vogue in queueing theory. The present paper introduces a vector-valued normal process and its diffusion equation in order to obtain an approximate solution to the joint distribution of queue lengths in a general network of queues. In this model, queueing processes of various service stations which interact with each other are approximated by a vector-valued Wiener process with some appropriate boundary conditions. Some numerical examples are presented and compared with Monte Carlo simulation results. A companion paper, Part II, discusses transient solutions via the diffusion approximation.
- Published
- 1974
9. Bernstein-Bézier Methods for the Computer-Aided Design of Free-Form Curves and Surfaces
- Author
- Richard F. Riesenfeld and William J. Gordon
- Subjects
- Pure mathematics, Bézier curve, Monotonic function, Bernstein polynomial, Convexity, Algebra, Geometric design, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Computer Aided Design, Differentiable function, Software, Information Systems, Parametric statistics, Mathematics
- Abstract
The mth-degree Bernstein polynomial approximation to a function ƒ defined over [0, 1] is ∑_{μ=0}^{m} ƒ(μ/m) φ_μ(s), where the weights φ_μ(s) are binomial density functions. The Bernstein approximations inherit many of the global characteristics of ƒ, like monotonicity and convexity, and they always are at least as “smooth” as ƒ, where “smooth” refers to the number of undulations, the total variation, and the differentiability class of ƒ. Historically, their relatively slow convergence in the L∞-norm has tended to discourage their use in practical applications. However, in a large class of problems the smoothness of an approximating function is of greater importance than closeness of fit. This is especially true in connection with problems of computer-aided geometric design of curves and surfaces where aesthetic criteria and the intrinsic properties of shape are major considerations. For this latter class of problems, P. Bézier of Renault has successfully exploited the properties of parametric Bernstein polynomials. The purpose of this paper is to analyze the Bézier techniques and to explore various extensions and generalizations. In a sequel, the authors consider the extension of the results contained herein to free-form curve and surface design using polynomial splines. These B-spline methods have several advantages over the techniques described in the present paper.
- Published
- 1974
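The Bernstein approximation in the abstract can be evaluated directly: the weights φ_μ(s) are the binomial densities C(m, μ) s^μ (1−s)^(m−μ). A minimal sketch of that formula:

```python
from math import comb

def bernstein(f, m, s):
    """m-th degree Bernstein approximation to f on [0, 1]:
    sum over mu of f(mu/m) * C(m, mu) * s**mu * (1 - s)**(m - mu),
    i.e. f sampled at mu/m, weighted by binomial densities."""
    return sum(f(mu / m) * comb(m, mu) * s**mu * (1 - s)**(m - mu)
               for mu in range(m + 1))

# Endpoints are reproduced exactly; interior values converge
# slowly in m, illustrating the L-infinity behavior noted above.
val = bernstein(lambda x: x * (1 - x), 10, 0.5)
```

For f(x) = x(1−x) the degree-m approximant is x(1−x)(1 − 1/m), so the slow O(1/m) convergence is visible directly.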
10. The Expression of Algorithms by Charts
- Author
- John L. Bruno and Kenneth Steiglitz
- Subjects
- Flowchart, Sequence, Sorting algorithm, Theoretical computer science, Competitive analysis, Programming language, Computer science, Term (logic), Expression (computer science), Control flow, Simplex algorithm, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Algorithm, Software, Information Systems
- Abstract
This paper discusses the expression of algorithms by flowcharts, and in particular by flowcharts without explicit go-to's (D-charts). For this purpose we introduce a machine-independent definition of algorithm which is broader than usual. Our conclusion is that D-charts are in one technical sense more restrictive than general flowcharts, but not if one allows the introduction of additional variables which represent a history of control flow. The term "algorithm" is used in many different ways. Sometimes we speak of an algorithm as a process in the abstract, without reference to a particular computer. It is in this sense, for example, that we speak of the "radix exchange sort algorithm" or the "simplex algorithm." Often we identify an algorithm with a particular sequence of instructions for a particular computer. In this paper we shall present a new definition of algorithm which emphasizes the sequence of commands associated with a particular "input." We then define the notion of "expression" of algorithms by general flowcharts and by flowcharts without explicit go-to's (D-charts). Some theorems are given which exhibit some of the relationships between algorithms, flowcharts, and D-charts.
- Published
- 1972
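The device of "additional variables which represent a history of control flow" can be illustrated in a few lines. A hypothetical sketch (Euclid's algorithm, with an explicit node variable standing in for the go-to structure; the encoding is illustrative, not the paper's):

```python
def gcd_structured(a, b):
    """Euclid's algorithm as a single while-loop (a D-chart shape):
    the 'node' variable records which flowchart box control is in,
    replacing explicit jumps between labels."""
    node = "test"
    while node != "done":
        if node == "test":
            node = "done" if b == 0 else "step"
        else:  # node == "step": one Euclidean division, then re-test
            a, b = b, a % b
            node = "test"
    return a
```

The single loop plus the state variable simulates an arbitrary flowchart, which is the sense in which the restriction to D-charts costs nothing once such variables are allowed.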