1,055 results
Search Results
2. AN ALGOL COMPUTER PROGRAM FOR THE COMPUTATION OF QUENCH CORRECTION BY REMOTE TERMINAL USING PUNCHED PAPER TAPE
- Author
-
A.W. Forrey
- Subjects
Quenching, Computer program, Terminal (electronics), Computer science, Paper tape, Computation, Scintillation counter, Divergence (statistics), Algorithm, Computer hardware, Communication channel - Abstract
A computer program has been written which calculates DPM for single and dual labeled samples after correction for quenching. The program, written in ALGOL, is executed from a remote terminal and uses punched paper tape input logged directly from the liquid scintillation counter in the proper format. Multiple quench indexes are used to calculate the best estimate of efficiencies for each isotope: channel ratios, AES ratios, and AES counts are all used for single isotope samples, while only the two AES indexes are used for dual labeled samples; in either case the estimate is computed from a single sample counting. A separate counting produces data for the channels-ratio computation for dual label samples, using either the screening method or the general method where appropriate. Divergence of the efficiencies computed from the different indexes indicates non-identity with the standards. The algorithm has been tested with serum, urine and aqueous biological samples.
- Published
- 1971
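The abstract above describes recovering disintegrations per minute from dual-labeled counts; the ALGOL source is not reproduced here, but the core arithmetic reduces to a 2x2 linear system. A minimal Python sketch, with invented efficiency values standing in for the quench-index lookup:

```python
# Sketch of dual-label DPM recovery (not the paper's ALGOL code).
# With two counting channels and two isotopes, observed net CPM in each
# channel is a mix of both isotopes:
#   cpm_A = eA1 * dpm1 + eA2 * dpm2
#   cpm_B = eB1 * dpm1 + eB2 * dpm2
# The efficiencies would come from the quench-index lookup; the numbers
# below are invented for illustration.

def dual_label_dpm(cpm_a, cpm_b, eff):
    """Solve the 2x2 system for (dpm1, dpm2).

    eff = ((eA1, eA2), (eB1, eB2)): channel-A and channel-B counting
    efficiencies for isotope 1 and isotope 2.
    """
    (ea1, ea2), (eb1, eb2) = eff
    det = ea1 * eb2 - ea2 * eb1
    if abs(det) < 1e-12:
        raise ValueError("efficiency matrix is singular; channels not independent")
    dpm1 = (cpm_a * eb2 - ea2 * cpm_b) / det
    dpm2 = (ea1 * cpm_b - cpm_a * eb1) / det
    return dpm1, dpm2

# Invented efficiencies: isotope 1 counts mostly in channel A, isotope 2 in both.
eff = ((0.35, 0.10), (0.01, 0.60))
dpm_h, dpm_c = dual_label_dpm(cpm_a=450.0, cpm_b=610.0, eff=eff)
# Counts were constructed from 1000 DPM of each isotope, so both recover ~1000.
```

The determinant check matters in practice: if the two channels see the isotopes in nearly the same proportions, the system is ill-conditioned and the recovered DPM values are unreliable.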
3. The dimensional accuracy of Mr Hampton's paper on 'The annealing of glass'
- Author
-
F W Preston
- Subjects
Operations research, Computer science, Algorithm, Annealing (glass) - Abstract
No corrections are made in Mr Hampton's results, and it is shown that they are based on sound reasoning. The dimensional requirements are straightened out, however, and, together with the theoretical justification of the Adams and Williamson formula for the fundamental law of annealing (see previous paper), this seems to put the theory of the annealing process on a fairly sound basis, and to justify in every way the practical conclusions of Mr Hampton's paper.
- Published
- 1925
4. A novel paper-tape function generator and multiplier
- Author
-
A. C. Soudack and Laszlo T. Lovas
- Subjects
Numerical Analysis, External variable, General Computer Science, Computer science, Applied Mathematics, Analog computer, Electrical engineering, Function generator, Analog multiplier, Theoretical Computer Science, Modeling and Simulation, Product (mathematics), Range (statistics), Multiplier (economics), Circuit diagram - Abstract
This paper describes a novel function generator and multiplier using digital-to-analog conversion techniques. The device has been designed and built to supplement the existing analog computer facilities at the University of British Columbia. The function generator and multiplier, basically a digital-to-analog converter, is controlled by one or two punched paper tapes. The device is capable of generating two independent functions specified by the input tapes, their product, and the product of an external variable and one of the tape inputs. All outputs are in analog form with a maximum range of +/- 100 volts and 0.5% accuracy. Principle of operation, error analysis, circuit diagrams, photographs and test results are included in the paper to illustrate the operation and application of the device.
- Published
- 1966
5. Notes and Correspondence: Franklin Filter Paper
- Author
-
Herman A. Holz
- Subjects
Filter paper, Computer science, General Medicine, Algorithm - Published
- 1920
6. Paper 35: Some Aspects of Measurement of Roundness
- Author
-
A. N. Tabenkin, I. A. Faradjev, V. G. Shuster, Yu. L. Polunov, and A. N. Avdulov
- Subjects
Embryology, Digital computer, Computer science, Graph (abstract data type), Cell Biology, Anatomy, Algorithm, Roundness (object), Developmental Biology - Abstract
This paper deals with the analysis of the extent to which the results obtained from the measurements of the same graph may vary while using different reference circles. The statistical characteristics of the graph were obtained experimentally, and the algorithms for circles were worked out. The analysis was carried out by the Monte-Carlo method on a digital computer.
- Published
- 1967
7. Quantitative Spot Test on Filter Paper and Examples of its Application
- Author
-
Yohei Hashimoto, Shuichi Shimizu, and Tatsuo Kariyone
- Subjects
Chromatography, Hematologic Tests, Multidisciplinary, Filter paper, Computer science, Filter (video), Humans, Algorithm, Filtration - Abstract
While the spot test on filter paper, developed by Dr. F. Feigl and his co-workers, has so far been used exclusively for qualitative purposes, we have discovered a method for the estimation of various substances by using circular filter papers and the solvents which are commonly used in paper partition chromatography.
- Published
- 1952
8. Microwave Circuit Optimization Employing Exact Algebraic Partial Derivatives (Short Papers)
- Author
-
G.R. Branner
- Subjects
Flowchart, Radiation, Computer science, Control engineering, Function (mathematics), Condensed Matter Physics, Component (UML), Equivalent circuit, Partial derivative, Algorithm design, Sensitivity (control systems), Electrical and Electronic Engineering, Algebraic number, Algorithm - Abstract
A technique for the optimization and sensitivity analysis of broad classes of electrical networks is illustrated. The method utilizes the exact algebraic partial derivatives of functions with respect to any desired independent variable. This completely automated technique has the obvious advantage that the derivatives of any circuit response function with respect to any desired component parameter may be obtained with no additional analytical effort on the part of the designer. Several examples are given to illustrate the procedure.
- Published
- 1974
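The short paper's own symbolic-derivative machinery is not given in the abstract; as an illustration of what exact algebraic partial derivatives buy over finite differences, here is a forward-mode (dual-number) sketch applied to a toy circuit response. The `gain` function and component values are invented for the example:

```python
# Exact derivatives without finite differences, illustrated with dual
# numbers (forward-mode differentiation). This only illustrates the idea
# of algebraically exact sensitivities; it is not the paper's method.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / (o.val ** 2))

def gain(r1, r2):
    # Voltage-divider response H = r2 / (r1 + r2): a toy "circuit response".
    return r2 / (r1 + r2)

# Sensitivity of H with respect to r2 at r1 = r2 = 1000 ohms: seed r2 with
# a unit derivative, evaluate once.
h = gain(Dual(1000.0), Dual(1000.0, 1.0))
# h.val = 0.5; h.der = dH/dr2 = r1/(r1+r2)^2 = 0.00025, exact to rounding.
```

A single evaluation yields both the response value and the exact sensitivity, with no step-size choice or truncation error, which is the practical appeal the abstract claims for algebraic derivatives.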
9. On tracing rays through an optical system (Fifth paper)
- Author
-
T Smith
- Subjects
Relation (database), Computer science, Iterative method, Fully automatic, Skew, Shaping, Tracing, Algorithm - Abstract
An iterative method of tracing rays through an optical system which is suitable for operation by a fully automatic recording machine is described. The rays may be axial or skew, and the surfaces of any rotationally symmetrical form suitable for optical working. The relation of this method to earlier schemes and the advantages to be gained by successive approximations are considered.
- Published
- 1945
10. Paper 16: Some Principles of Design of Combined-Stress Testing Machines
- Author
-
T.-C. Hsu
- Subjects
Embryology, Computer science, Design elements and principles, Cell Biology, Anatomy, Stress testing (software), Algorithm, Developmental Biology, Reliability engineering - Published
- 1965
11. Reviews of Books and Papers in the Computer Field
- Author
-
Donald L. Epley
- Subjects
Combinational logic, Computer science, And-inverter graph, Boolean circuit, Theoretical Computer Science, Automaton, Boolean algebra, Algebra, Computational Theory and Mathematics, Hardware and Architecture, Circuit minimization for Boolean functions, Circuit complexity, Boolean function, Algorithm, Software - Published
- 1966
12. Paper Negatives-A Simple, Inexpensive Technique for Obtaining Microphotographs
- Author
-
Robert C. Goss and Larry H. Ogren
- Subjects
Mathematics (miscellaneous), History and Philosophy of Science, Physics and Astronomy (miscellaneous), Simple (abstract algebra), Computer science, Engineering (miscellaneous), Algorithm, Education - Published
- 1960
13. Reviews of Books and Papers in the Computer Field
- Author
-
Arthur Gill
- Subjects
Engineering drawing, Computational Theory and Mathematics, Hardware and Architecture, Computer science, Logic testing, Algorithm, Software, Fault detection and isolation, Theoretical Computer Science - Published
- 1965
14. Invited papers---1
- Author
-
R. J. Nelson
- Subjects
Cognitive science, Computer science, Subject (philosophy), Automata theory, Algorithm, Exposition (narrative), Reflexive pronoun, Automaton - Abstract
I WAS INVITED to present a survey of the theory of automata. Since there are already several surveys, especially the very competent ones of Chomsky [10], McNaughton [37], and Rogers [54], all of which deal with various aspects of the theory of automata and more, I shall confine myself to an exposition of what might loosely be called a “philosophy of automata”. On the way, however, I hope to touch upon certain developments of the subject which have come about since McNaughton's 1961 paper. Readers who feel they are up on the theory may wish to omit sections 2 and 3.
- Published
- 1965
15. MAGIC PAPER - AN ON-LINE SYSTEM FOR THE MANIPULATION OF SYMBOLIC MATHEMATICS
- Author
-
Ellen J. Wax, Lewis C. Clapp, Robert S Wolf, and Dale E. Jordan
- Subjects
Symbolic mathematics, Data processing, Algebraic equation, Computer science, Programming language, Magic (programming), Light pen, Mathematical notation, Algorithm - Abstract
The report describes the preliminary version of the MAGIC PAPER system. Through a conversational interaction, the system aids the scientist, engineer or mathematician as he performs symbolic operations on linear algebraic equations. The user begins by entering his initial equations and conditions through a mathematical keyboard. As he types these equations, they are displayed on a flicker-free scope in standard mathematical notation. Using a push-button control panel and a light pen, he may select expressions and operations which are to be performed on them. If the operation is legal, the system generates a new equation which is then added to the scope display. With the basic set of operations, the user may create new operators which can then be added to the system. He can also introduce special notational conventions. The user has considerable control which enables him to personalize the system to meet his own particular needs. (Author)
- Published
- 1966
16. The Estimation of Fibers in Paper
- Author
-
Roger C. Griffin
- Subjects
Estimation, Computer science, General Medicine, Algorithm - Abstract
n/a
- Published
- 1919
17. On the Calibration Process of Automatic Network Analyzer Systems (Short Papers)
- Author
-
S. Rehnmark
- Subjects
Accuracy and precision, Radiation, Computational complexity theory, Iterative method, Computer science, System of measurement, System testing, Condensed Matter Physics, Test set, Electronic engineering, Scattering parameters, Electrical and Electronic Engineering, Complex number, Algorithm - Abstract
Formulas are presented for the direct calculation of the scattering parameters of a linear two-port, when it is measured by an imperfect network analyzer. Depending on the hardware configuration of the test set, the measurement system is represented by one of two flowgraph models. In both models presented, leakage paths are included. The error parameters, i.e., the scattering parameters of the measuring system, are six and ten complex numbers, respectively, for each frequency of interest. A calibration procedure, where measurements are made on standards, will determine the error parameters. One of many possible calibration procedures is briefly described. By using explicit formulas instead of iterative methods, the computing time for the correction of the scattering parameters of the unknown two-port is significantly reduced. The addition of leakage paths will only have a minor effect on computational complexity while measurement accuracy will increase.
- Published
- 1974
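The two-port six/ten-term models of the paper are not reproduced in the abstract; the one-port three-term case shows the principle of solving for error parameters from measurements of known standards and then correcting by explicit formula rather than iteration. All numeric error terms below are invented for a round-trip check:

```python
# One-port error-correction sketch. The paper treats the full two-port
# case; this three-term one-port model (directivity, source match, and
# reflection tracking folded into a, b, c) only shows the principle:
#   Gm = (a*G + b) / (c*G + 1)
# Calibration: each known standard G with measured Gm gives one linear
# equation  a*G + b - c*G*Gm = Gm  in the unknowns (a, b, c).

def solve3(m, v):
    """Gaussian elimination with partial pivoting for a 3x3 complex system."""
    a = [row[:] + [rhs] for row, rhs in zip(m, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for col in range(i, 4):
                a[r][col] -= f * a[i][col]
    x = [0j, 0j, 0j]
    for i in (2, 1, 0):
        x[i] = (a[i][3] - sum(a[i][j] * x[j] for j in range(i + 1, 3))) / a[i][i]
    return x

def calibrate(standards):
    """standards: [(G_known, G_measured)] for e.g. short, open, load."""
    m = [[g, 1.0 + 0j, -g * gm] for g, gm in standards]
    v = [gm for _, gm in standards]
    return solve3(m, v)                       # -> [a, b, c]

def correct(gm, abc):
    """Invert the bilinear model: an explicit formula, no iteration."""
    a, b, c = abc
    return (gm - b) / (a - c * gm)

# Invented error terms for a round-trip check:
a0, b0, c0 = 0.98 + 0.02j, 0.05 - 0.01j, 0.03 + 0.01j
model = lambda g: (a0 * g + b0) / (c0 * g + 1)   # "imperfect analyzer"
cal = calibrate([(-1 + 0j, model(-1)), (1 + 0j, model(1)), (0j, model(0j))])
gamma = correct(model(0.3 + 0.4j), cal)          # recovers 0.3+0.4j
```

The explicit inversion in `correct` is the point the abstract makes: once the error terms are known, each corrected measurement costs one formula evaluation instead of an iterative solve.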
18. Discussion of paper by Mark A. Melton, ‘Methods for measuring the effect of environmental factors on channel properties’
- Author
-
B. T. Harris
- Subjects
Atmospheric Science, Ecology, Series (mathematics), Computer science, Use of time, Paleontology, Soil Science, Forestry, Aquatic Science, Oceanography, Geophysics, Space and Planetary Science, Geochemistry and Petrology, Statistics, Earth and Planetary Sciences (miscellaneous), Time series, Algorithm, Earth-Surface Processes, Water Science and Technology, Communication channel - Abstract
In general, I am pleased to note Dr. Melton's use of time series analysis in studying the channel gradient of rivers. This is an important body of techniques, whose application to hydrologic problems warrants careful study. Time series analysis has far greater applicability to hydrologic problems than is generally realized, and hydrologists would benefit greatly from considering the methods in this area and could thus obtain much useful information about hydrologic phenomena.
- Published
- 1962
19. A SIMPLE METHOD OF MAGNETOTELLURIC INTERPRETATION. (DISCUSSION ON A PAPER BY SUDHIR JAIN)
- Author
-
J. Chauveau
- Subjects
Geophysics, Geochemistry and Petrology, Magnetotellurics, Simple (abstract algebra), Computer science, Algorithm, Interpretation (model theory) - Published
- 1966
20. An Answer to Mr. J. A. Lively's Remarks on the Paper ' Amphisbaenic Sorting'
- Author
-
H. Nagler
- Subjects
Theoretical computer science, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Computer science, Sorting, Algorithm, Software, Information Systems - Published
- 1961
21. Computer Method for Treatment Planning in External Radiotherapy
- Author
-
Bengt Roos, Inger Ragnhult, and Hans Halldén
- Subjects
Electronic Data Processing, Computer science, Paper tape, Radiotherapy Dosage, General Medicine, Programming method, Field (computer science), External radiotherapy, Cobalt Isotopes, Distribution (mathematics), Humans, Cartesian coordinate system, Medical physics, Radiation treatment planning, Algorithm - Abstract
A method of automatic computation of dose distributions in multifield and moving field therapy is described. The isodose diagrams were transferred to numerical lattices in a Cartesian system and punched on paper tape. The treatment area was drawn in another Cartesian system in which the dose contributions from the different fields are added at a number of points. The calculation was carried out on a digital computer. The method is illustrated by examples, and corrections for different types of tissue are discussed. (auth)
- Published
- 1963
22. Two FORTRAN programs for rosette calculations
- Author
-
M. Cockrill
- Subjects
Diagnostic information, Error message, Computer science, Fortran, Paper tape, Mechanical Engineering, Principal (computer security), Mechanics of Materials, Data logger, Arithmetic, Algorithm, Range (computer programming) - Abstract
These programs calculate principal stresses, strains and directions from any rosette readings. The appropriate values are read in, the angles placed in the range 0 to 180° and the strains arranged in descending order. Next the necessary quantities are calculated and checked; a choice of output formats is provided together with an error message and diagnostic information if the check fails. One of the programs is written to accept strain readings on punched cards, the other accepts voltage readings on paper tape from a data logger. The programming language is FORTRAN IV. The programs have been used on an ICL 1903A computer.
- Published
- 1972
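The FORTRAN listings themselves are not given; for a standard 0/45/90-degree rectangular rosette, the principal-value computation the abstract describes comes down to a few textbook formulas (not necessarily the authors' exact code):

```python
# Principal strains from a 0/45/90-degree rectangular rosette, using the
# standard strain-transformation formulas; the paper's FORTRAN source is
# not reproduced here.
import math

def rosette_principal(e0, e45, e90):
    """Return (e_max, e_min, theta_deg) from three gauge readings.

    e0, e45, e90: normal strains along the 0-, 45- and 90-degree gauges.
    theta_deg: angle from gauge 0 to the major principal direction.
    """
    ex, ey = e0, e90
    gxy = 2.0 * e45 - e0 - e90          # shear strain from the 45-degree gauge
    avg = 0.5 * (ex + ey)
    r = math.hypot(0.5 * (ex - ey), 0.5 * gxy)
    theta = 0.5 * math.degrees(math.atan2(gxy, ex - ey))
    return avg + r, avg - r, theta

# Constructed uniaxial case (microstrain): principal values 1000 and 0,
# major axis along gauge 0.
emax, emin, theta = rosette_principal(1000.0, 500.0, 0.0)
```

The sorting into descending order and the angle range 0 to 180 degrees mentioned in the abstract correspond to the `avg ± r` ordering and the half-angle `atan2` above; conversion to principal stresses would additionally need the elastic constants.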
23. Piecewise method for large-scale electrical networks
- Author
-
K. Wang
- Subjects
Scale (ratio), Computer science, Diakoptics, Short paper, General Engineering, Piecewise, Algorithm, Electrical impedance, Admittance parameters - Abstract
A new method for diakoptics, the piecewise method for large-scale networks, is presented. While the subject of diakoptics is well known, the existing treatments handle only networks described by either an impedance or an admittance matrix. Using the sparse-matrix approach, the new method treats networks whose constitutive relationships are not restricted to impedances or admittances. Finally, the previously known techniques for diakoptics are merely special cases of the method described here; thus this short paper provides a broad view of the subject.
- Published
- 1973
24. An FFT chart
- Author
-
H.L. Groginsky
- Subjects
Computer science, Pipeline (computing), Fast Fourier transform, Short-time Fourier transform, Graph paper, Fourier transform, Chart, Electronic engineering, Electrical and Electronic Engineering, Harmonic wavelet transform, Algorithm, Random access - Abstract
This letter contains a graph paper summarizing many properties of Fourier transforms and fast Fourier transform (FFT) devices. It shows the interrelationships among data block length, frequency resolution, sampling rate, FFT stages, and other related variables. The letter shows the limiting factors in a pipeline FFT and allows its performance to be effectively compared with a random access oriented FFT.
- Published
- 1970
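The chart itself cannot be reproduced here, but the interrelationships it summarizes are simple formulas; a sketch (the dictionary keys are descriptive names, not the letter's notation):

```python
# The quantities an FFT design chart interrelates, written out as formulas.
import math

def fft_relations(n, fs):
    """For an n-point radix-2 FFT at sample rate fs (Hz), return the
    quantities one would read off such a chart graphically."""
    assert n > 0 and (n & (n - 1)) == 0, "radix-2 FFT assumes a power of two"
    return {
        "block_time_s": n / fs,          # duration of one data block
        "freq_resolution_hz": fs / n,    # bin spacing = 1 / block time
        "nyquist_hz": fs / 2.0,          # highest unambiguous frequency
        "stages": int(math.log2(n)),     # radix-2 butterfly stages
    }

r = fft_relations(1024, 48000.0)
# A 1024-point FFT at 48 kHz: 46.875 Hz bins, 10 butterfly stages.
```

Fixing any two of block length, sampling rate, and resolution determines the third, which is exactly the trade-off the graph paper makes visible at a glance.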
25. THE 'PATH' THEORY OF CORTICAL FUNCTION
- Author
-
W. R. Ashby
- Subjects
Psychiatry and Mental health, Computer science, Path (graph theory), Surgery, Neurology (clinical), Function (mathematics), Bioinformatics, Original Papers, Algorithm - Published
- 1931
26. Special Techniques in TLC
- Author
-
Egon Stahl
- Subjects
Set (abstract data type), Computer science, Paper electrophoresis, Special problem, Algorithm - Abstract
If the procedures discussed above fail to give a satisfactory separation, or if a special problem is set, other techniques often bring progress.
- Published
- 1969
27. A progress report on computer applications in computer design
- Author
-
R. N. Kisch and S. R. Cray
- Subjects
Computer science, Computer Applications, Design engineer, Subject (documents), Boolean algebra, Reduction (complexity), Debugging, Work (electrical), Production (economics), Software engineering, Algorithm - Abstract
The subject of computers designing other computers has been a popular one for several years. This subject generally brings to mind Boolean algebra reduction or generation of design logic. This is a difficult problem which the authors of this paper have investigated only superficially, and is not the subject of this paper. Another aspect of computer development work, however, lends itself to mechanization and represents the greatest portion of the time, money, and manpower consuming business of developing a new computing system. This paper summarizes the progress which has been made to date in writing, debugging, and placing in production a general purpose computer program (ERA 1103) for handling this portion of the development work. This is a program for processing the logical design engineer's work through simulated operation of the proposed equipment to the production of detailed wiring tabulations for manufacturing purposes.
- Published
- 1956
28. Some remarks on abstract machines
- Author
-
Seymour Ginsburg
- Subjects
Set (abstract data type), Discrete mathematics, Computer science, Applied Mathematics, General Mathematics, Function (mathematics), State (computer science), Sequential switching, Algorithm, Abstract machine, Mathematics - Abstract
Introduction. In 1954 the mathematical entity called a (sequential) machine was found to be a valuable tool in designing sequential switching circuits [2; 8; 9]. Since then there has been considerable mathematical activity by mathematicians and nonmathematicians relating to the analysis and the synthesis of these machines. As was to be expected of a topic which arose because of an engineering need, most of these results have appeared in engineering and computing journals. Recently though, some of the papers have appeared in mathematical journals [3; 4; 5; 6; 10]. Also, much of the recent literature has dealt with questions almost exclusively of mathematical, as contrasted with engineering, interest [1; 3; 4; 5; 6; 10; 12]. The present paper is written in that spirit. The Moore-Mealy (complete, sequential) machine is defined [8; 9] as a nonempty set K (of "states"), a nonempty set D (of "inputs"), a nonempty set F (of "outputs"), and two functions δ (the "next state" function) and λ (the "output" function), δ mapping K×D into K and λ mapping K×D into F. Then δ and λ are extended to sequences of inputs I1 ... Ik (written without commas) by
- Published
- 1958
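The Moore-Mealy definition quoted above translates directly into code; a sketch of δ and λ as lookup tables, extended to input sequences exactly as the definition describes (the parity machine is an invented example, not from the paper):

```python
# The Moore-Mealy machine of the abstract, directly as code: states K,
# inputs D, outputs F, next-state function delta: K x D -> K and output
# function lam: K x D -> F, extended to sequences of inputs.

def run(delta, lam, state, inputs):
    """Extend delta and lam to a sequence of inputs; return
    (final state, output sequence)."""
    outputs = []
    for i in inputs:
        outputs.append(lam[(state, i)])
        state = delta[(state, i)]
    return state, outputs

# Example machine: parity of the 1s seen so far; the output reports the
# parity after consuming each symbol.
delta = {('even', 0): 'even', ('even', 1): 'odd',
         ('odd', 0): 'odd', ('odd', 1): 'even'}
lam = {('even', 0): 'even', ('even', 1): 'odd',
       ('odd', 0): 'odd', ('odd', 1): 'even'}

final, outs = run(delta, lam, 'even', [1, 1, 0, 1])
# final == 'odd'; outs == ['odd', 'even', 'even', 'odd']
```

Representing δ and λ as dictionaries keyed on (state, input) pairs mirrors the K×D domain in the definition; a Moore machine is the special case where λ depends only on the state.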
29. Derivation and evaluation of improved tracking filter for use in dense multitarget environments
- Author
-
K. Housewright, R. Sea, and R. Singer
- Subjects
Computer science, Covariance matrix, Monte Carlo method, Kalman filter, Library and Information Sciences, Tracking (particle physics), Computer Science Applications, Adaptive filter, Extended Kalman filter, Filter design, Control theory, Filter (video), Clutter, Algorithm, Information Systems - Abstract
When tracking targets in dense environments, sensor reports originating from sources other than the target being tracked (i.e., from clutter, thermal false alarms, other targets) are occasionally incorrectly used in track updating. As a result tracking performance degrades, and the error covariance matrix calculated on-line by the usual types of tracking filters becomes extremely unreliable for estimating actual accuracies. This paper makes three contributions in this area. First, a new tracking filter is developed that incorporates, in an a posteriori statistical fashion, all data available from sensor reports located in the vicinity of the track, and that provides both optimal performance and reliable estimates of this performance when operating in dense environments. The optimality of and the performance equations for this filter are verified by analytical and simulation results. Second, several computationally efficient classes of suboptimal tracking filters based on the optimal filter developed in this paper and on an optimal filter of another class that appeared previously in the literature are developed. Third, using an extensive Monte Carlo simulation, the various optimal and suboptimal filters as well as the Kalman filter are compared, with regard to the differences between the on-line calculated and experimental covariances of each filter, and with regard to relative accuracies, computational requirements, and numbers of divergences or lost tracks each produces.
- Published
- 1974
30. Simulation of a chemical reaction process using GASP IV
- Author
-
Nicholas R. Hurst and A. Alan B. Pritsker
- Subjects
Operations research, Computer science, Modeling and Simulation, Computer Graphics and Computer-Aided Design, Algorithm, Chemical reaction, Software, Coding (social sciences) - Abstract
This paper is a sequel to the preceding paper describing the GASP IV language. In this paper, the simulation of a chemical reaction process is used as a vehicle for illustrating the procedures for using GASP IV. The details of the computer coding as well as samples of computer output are included. The intent of the paper is to illustrate the use of GASP IV to analyze a system involving both continuous variables and discrete events.
- Published
- 1973
31. The Effects of Races, Delays, and Delay Faults on Test Generation
- Author
-
Melvin A. Breuer
- Subjects
Correctness, Computer science, Fault injection, Automatic test pattern generation, Fault (power engineering), Fault detection and isolation, Theoretical Computer Science, Stuck-at fault, Computational Theory and Mathematics, Hardware and Architecture, Control theory, Fault coverage, Fault model, Algorithm, Software - Abstract
Test sequences constructed by most test generation procedures often create time dependent results when applied to a circuit. These dependencies often invalidate the test. The main cause for this situation is that the test generation procedures and circuit models employed do not take into account many aspects of delay associated with a circuit. In this paper we present modeling techniques to be used by conventional test generation procedures to alleviate some of these problems. These models include the cases of equal, unequal and ambiguous delay values. Both inertial and transport delays are considered. Both static and dynamic output behavior is studied, though we restrict inputs to fundamental mode operation. Finally, a new type of fault, called a delay fault, is introduced, and a model developed so that a test to detect this class of fault can be generated via conventional test generation techniques. In summary, this paper attempts to outline procedures and identify problem areas so that test generation is more of a science rather than a hit-and-miss process, and so that the correctness of results need not always be verified via simulation or physical fault injection.
- Published
- 1974
32. Data structuring to efficiently implement the primal-dual transportation algorithm
- Author
-
William G. Truscott
- Subjects
General Computer Science, Computer science, Fortran, Modeling and Simulation, Transportation theory, Management Science and Operations Research, Execution time, Structuring, Algorithm, Primal dual - Abstract
This paper focuses on the computer implementation of Ford and Fulkerson's primal-dual method for solving the transportation problem. It explores the practical implications of improving the efficiency with which transportation problems can be solved. By using appropriate techniques for storing data elements and manipulating their structural relationships, a computer procedure, which is quite efficient in terms of central processor execution time and core storage requirements, is developed. Computational results using a FORTRAN program on test problems ranging in size from 15 sources and 25 destinations to 30 sources and 400 destinations are presented and discussed. These results indicate that the method of implementation is instrumental in reducing the effect of problem size on computational requirements. The paper emphasizes the general principles of the implementation rather than those aspects which are specifically related to the language and computer used for experimentation. Also, the applicability of the research is extended by an investigation of the potential for using the implementation concepts with other algorithms.
- Published
- 1974
33. A Formalism for Description and Synthesis of Logical Algorithms and their Hardware Implementation
- Author
-
W.K. Giloi and H. Liebig
- Subjects
Formalism (philosophy of mathematics), Finite-state machine, Theoretical computer science, Computational Theory and Mathematics, Hardware and Architecture, Computer science, Algorithm, Software, Theoretical Computer Science - Abstract
The design methodology developed in the paper is based on an APL-like definition of logical arrays and operations on such arrays. First, the notion of a logical algorithm is introduced as a finite state automaton described by transition and output matrices. The technical realizations of such algorithms are then uniformly described as time-discrete, space-discrete, and time-space-discrete systems, and the transition of an algorithm from state to state (or space node to space node) can be explicitly formulated in a very concise way. An application of this formalism to the state reduction problem is shown. Thus the paper extends the APL-based design of logical networks beyond the area of combinational networks as developed first by Iverson.
- Published
- 1974
34. Application of fuzzy logic to the detection of static hazards in combinational switching systems
- Author
-
Abraham Kandel
- Subjects
Fuzzy classification, Sequential logic, Computer science, Fuzzy set, Fuzzy logic, Theoretical Computer Science, Computational Theory and Mathematics, Fuzzy mathematics, Fuzzy number, Fuzzy set operations, Algorithm, Software, Membership function, Information Systems - Abstract
In this paper the fuzzy set as discussed by Zadeh is viewed as a multivalued logic with a continuum of truth values in the interval [0,1]. The concept of static hazard in combinational switching systems is related to fuzzy logic and various properties of this relation are established. The paper derives the necessary and sufficient conditions for a fuzzy function to adequately describe the steady-state and static hazard behavior of a combinational system, by extending the ternary method discussed by Yoeli and Rinon and using the resolution principle of mechanical theorem-proving.
- Published
- 1974
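The paper's necessary-and-sufficient conditions are not reproduced in the abstract; the underlying idea of the ternary/fuzzy evaluation can be sketched as follows, holding a changing input at the truth value 0.5 and checking whether the output can dip below its steady-state value. The example circuit is the classic hazardous multiplexer, not taken from the paper:

```python
# Ternary/fuzzy static-hazard check in the spirit of the abstract: evaluate
# the combinational function with min/max fuzzy operators, holding the
# switching input at 0.5 during its transition. If both steady states give
# 1 but the transition value is below 1, a static-1 hazard exists.
# (A sketch of the idea only, not the paper's full test.)

f_or = max                      # fuzzy OR
f_and = min                     # fuzzy AND
f_not = lambda x: 1.0 - x       # fuzzy NOT

def static1_hazard(f, idx, inputs):
    """Can input `idx` switching 0 -> 1 expose a static-1 hazard of f?"""
    lo = list(inputs); lo[idx] = 0.0
    hi = list(inputs); hi[idx] = 1.0
    mid = list(inputs); mid[idx] = 0.5
    return f(*lo) == 1.0 and f(*hi) == 1.0 and f(*mid) < 1.0

# Classic hazardous circuit: y = (a AND b) OR (NOT a AND c), with b = c = 1.
y = lambda a, b, c: f_or(f_and(a, b), f_and(f_not(a), c))
static1_hazard(y, 0, [None, 1.0, 1.0])   # True: a's transition can glitch
# Adding the consensus term (b AND c) removes the hazard:
y2 = lambda a, b, c: f_or(y(a, b, c), f_and(b, c))
static1_hazard(y2, 0, [None, 1.0, 1.0])  # False
```

The truth value 0.5 plays the role of the "unknown" value in the ternary method the abstract cites: any cover that keeps the fuzzy output at 1 throughout the transition is hazard-free for that input change.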
35. Drawing Contours from Arbitrary Data Points
- Author
-
D. H. McLain
- Subjects
Set (abstract data type), Data point, General Computer Science, Computer science, Plotter, Programming method, Algorithm, Derived Data - Abstract
This paper describes a computer method for drawing, on an incremental plotter, a set of contours when the height is available only for some arbitrary collection of points. The method is based on a distance-weighted, least-squares approximation technique, with the weights varying with the distance of the data points. It is suitable not only for mathematically derived data, but also for data of geographical and other non-mathematical origins, for which numerical approximations are not usually appropriate. The paper includes a comparison with other approximation techniques.
- Published
- 1974
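McLain's method fits a local least-squares surface with distance-dependent weights; a much simpler inverse-distance-weighted sketch illustrates only the "weights varying with distance" ingredient, not the paper's full technique:

```python
# Distance-weighted estimate of height at an arbitrary point from
# scattered data. The paper fits a local weighted least-squares surface;
# this inverse-distance weighting is a simpler stand-in showing how the
# influence of data points decays with distance.
import math

def idw_height(x, y, points, power=2.0):
    """points: list of (xi, yi, zi). Returns the interpolated z at (x, y)."""
    num = den = 0.0
    for xi, yi, zi in points:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi                     # exactly on a data point
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den

pts = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
idw_height(0, 0, pts)      # 10.0: the surface passes through the data
idw_height(0.5, 0.0, pts)  # a weighted blend, dominated by the nearest points
```

A contouring program would evaluate such an estimate over a regular grid and then trace level curves through the grid, which is where the plotter output described in the abstract comes in.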
36. Inherited error in finite element analyses of structures
- Author
-
R. J. Melosh
- Subjects
Mathematical problem, Computer science, Mechanical Engineering, Process (computing), Rigid body, Finite element method, Computer Science Applications, Simple (abstract algebra), Modeling and Simulation, General Materials Science, Point (geometry), Boundary value problem, Algorithm, Civil and Structural Engineering, Equation solving - Abstract
Mathematicians are quick to point out that round-off and truncation errors induced by the digital computer are only half of the manipulation errors in numerical analyses. The other half are the errors in quantizing the mathematical problem for computer solution: errors inherited from the equation-solving process. This paper examines the relevance of inherited errors in structural analyses using the finite element concept and the digital computer. It illustrates error magnitudes using numerical experiments on simple structures. It constructs a theory explaining the errors for these systems. Having identified the most significant controllable computer error source, it describes a process for minimizing its contribution to the inherited error. The paper concludes that the orders-of-magnitude differences between errors reported by various investigators can be explained by differences in inherited error. The most significant effect of these errors can be identified with inconsistency in problem formulation. These inconsistencies can be eliminated by exploiting the existence of rigid body states in the finite element models. Thereby, solution errors introduced by inherited errors can be reduced to intrinsic errors in the parameters defining the geometry, material characteristics, and boundary conditions.
- Published
- 1973
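The paper's final remedy, exploiting rigid body states, suggests a simple consistency check that is easy to code: a correctly assembled stiffness matrix must map a rigid-body motion to zero forces. A sketch for a 1D chain of axial elements (element stiffness chosen arbitrarily, not from the paper):

```python
# Rigid-body consistency check for an assembled stiffness matrix:
# K @ (uniform translation) must be (numerically) zero, since a rigid
# motion strains nothing and so produces no elastic forces.

def assemble_bar(n_elems, k=1.0):
    """Global stiffness of a chain of identical axial springs.

    Each element contributes k * [[1, -1], [-1, 1]] on its two nodes."""
    n = n_elems + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    return K

K = assemble_bar(4)
rigid = [1.0] * len(K)                      # uniform translation of all nodes
forces = [sum(kij * uj for kij, uj in zip(row, rigid)) for row in K]
# forces is all zeros: every row of K sums to zero, so the assembly
# passes the rigid-body test.
```

An assembly whose rows fail to sum to zero (for example, through an inconsistent element formulation) would generate spurious forces under rigid motion, which is one face of the "inconsistency in problem formulation" the abstract identifies.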
37. Document retrieval by means of an automatic classification algorithm for citations
- Author
-
Ronald G. Parsons and Julie Bichteler
- Subjects
Information retrieval, Recall, Automatic indexing, Computer science, General Engineering, Relevance (information retrieval), Subject (documents), Document retrieval, Citation, Precision and recall, Algorithm, Test (assessment) - Abstract
The applicability of an automatic classification technique to information retrieval was investigated. A modified version of the Schiminovich algorithm was used to classify articles in the data base, utilizing citations found in their bibliographies and a “triggering file” of bibliographically related papers. Classes of articles were formed by comparing the citations in the articles with members of the triggering file. Those papers with similar citing patterns formed a group; citations occurring sufficiently often within the papers of a group formed a bibliography. Bibliographies in turn became new triggering files in an iterative procedure. Results of this nontraditional method were compared with those of retrieval by standard American Institute of Physics (AIP) subject analysis of the same material. A combination of the two methods was also investigated to test the hypothesis that one could be used to augment or refine the other. Nine cooperating physicists defined queries in terms of (1) the AIP classification scheme and (2) a list of articles likely to be cited in current relevant literature. The data base was 18 months of 1971–1972 physics journal articles (AIP SPIN tapes). The chief means of evaluating the results of the three retrieval approaches was a comparison of recall and precision values obtained from relevance judgments by the physicists. Average precision for the AIP subject analysis was 17 per cent and for the citation processing 62 per cent. Based on an assumption of 100 per cent recall for the AIP analysis, average recall for the citation processing was 45 per cent. All but one physicist would prefer to have both the citation and subject approaches available for information retrieval. Sometimes, only a few relevant articles are desired; at other times comprehensiveness is necessary. However, if only one method were available, all except one participant would choose the subject approach.
They felt that one must be able to retrieve all the relevant articles, even if that would mean examining 25–30 irrelevant notices for every relevant one. Several expressed the notion that looking at a file with a very high percentage of relevant articles made one wonder what was missing.
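The recall and precision comparison described above can be sketched in a few lines. This is a generic illustration of the two measures, not the study's evaluation code; the document identifiers and relevance judgments below are made up.

```python
# Precision: fraction of retrieved documents that are relevant.
# Recall: fraction of relevant documents that were retrieved.
def precision_recall(retrieved, relevant):
    """Return (precision, recall) for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Illustrative query: 8 notices retrieved, 5 judged relevant by a physicist.
p, r = precision_recall(["a", "b", "c", "d", "e", "f", "g", "h"],
                        ["a", "b", "c", "d", "e"])
```

Averaging these values over all nine physicists' queries gives figures comparable to the 17 and 62 per cent precision numbers quoted in the abstract.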
- Published
- 1974
38. A new method to determine the failure frequency of a complex system
- Author
-
Roy Billinton and Chanan Singh
- Subjects
Computer Applications ,Computer science ,Explicit formulae ,Complex system ,Condensed Matter Physics ,Upper and lower bounds ,Atomic and Molecular Physics, and Optics ,Reliability engineering ,Surfaces, Coatings and Films ,Electronic, Optical and Magnetic Materials ,Mean down time ,Cut ,Key (cryptography) ,Electrical and Electronic Engineering ,Safety, Risk, Reliability and Quality ,Algorithm ,Reliability (statistics) ,Drawback - Abstract
Two key indices in system reliability evaluation are the probability that the system is failed and the frequency of system failure. Other measures such as the mean cycle time and the mean down time can be easily derived from these quantities. This paper considers the reliability evaluation of a complex maintainable system using a cut set approach. The available literature on this subject generally deals with the failure probabilities. The technique proposed by Buzacott to determine the frequency of failure has the drawback that explicit formulae for system availability must be first derived. Numerical values are then obtained by further manipulation. This approach is, therefore, not suitable for computer application. The contribution of this paper is the development of a new formula for the frequency of system failure using a cut set approach, from which the numerical values can be obtained directly. This method overcomes the drawback of Buzacott's method and is suitable for computer application. Upper and lower bounds for frequency are also given and the method is illustrated by an example.
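The cut-set idea behind the frequency formula can be sketched as follows: the system is failed while every component of a minimal cut set is down, and the cut is left when any one of them is repaired, so each cut contributes (probability of the cut) times (sum of repair rates). Summing over cut sets gives an upper bound on system failure frequency. This is a hedged illustration of the general approach, not the paper's exact formula; the component rates are made up.

```python
def cut_set_frequency(cut, lam, mu):
    """Frequency contribution of one minimal cut set.

    cut: list of component names; lam/mu: dicts of failure and repair
    rates (per hour) for statistically independent components.
    """
    prob = 1.0
    for c in cut:
        prob *= lam[c] / (lam[c] + mu[c])    # steady-state unavailability
    return prob * sum(mu[c] for c in cut)    # leave the cut via any repair

lam = {"A": 0.01, "B": 0.02}   # illustrative failure rates
mu = {"A": 1.0, "B": 0.5}      # illustrative repair rates
f = cut_set_frequency(["A", "B"], lam, mu)
```

The numerical value is obtained directly from the rates, which is precisely the property the paper claims over Buzacott's method.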
- Published
- 1973
39. RTPFIT: A program to fit the extended general field equation to therapy unit data
- Author
-
R. A. Kolde and J.J. Weinkam
- Subjects
Computers ,Computer science ,Medicine (miscellaneous) ,Radiotherapy Dosage ,Dose distribution ,Linear particle accelerator ,Task (computing) ,Distribution (mathematics) ,Range (statistics) ,Radioisotope Teletherapy ,Field equation ,Algorithm ,Nonlinear regression ,Unit (ring theory) ,Simulation - Abstract
The first formula for computing dose distribution was developed by Sterling et al. for the Eldorado A therapy unit at 80 cm ssd. This formula later evolved into the General Field Equation which was applicable over the entire range of source skin distances. Recently, this equation has been extended to fit the dose distributions of a wide variety of therapy units, including cobalt units, linear accelerators, and betatrons. Fitting the Extended General Field Equation to dose data is a complex task involving the use of nonlinear regression. This paper describes a software package to perform this task. The system is implemented on an IBM 360. The procedure can readily accept either directly measured dose data or digitized data from manufacturer-supplied isodose charts. In addition to the basic task of fitting the equation to the dose data, the program performs a comparison of the computed and observed dose values and an analysis of the error distribution. The research reported in this paper has been supported by NCI Grant CA10208.
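The Extended General Field Equation itself is not reproduced in the abstract, so the sketch below fits a simple stand-in depth-dose model, dose(d) = a·exp(−b·d), by log-linearization. It only illustrates the kind of curve-fitting task RTPFIT automates; the real program uses full nonlinear regression on the actual equation, and the depth-dose numbers here are synthetic.

```python
import math

def fit_exponential(depths, doses):
    """Least-squares fit of dose = a*exp(-b*depth) via log-linearization."""
    n = len(depths)
    xs, ys = depths, [math.log(v) for v in doses]
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope       # recovered a, b

depths = [0.0, 1.0, 2.0, 3.0]                       # cm, illustrative
doses = [100.0 * math.exp(-0.05 * d) for d in depths]  # synthetic exact data
a, b = fit_exponential(depths, doses)
```

Comparing the fitted curve against the observed doses, as the last step of RTPFIT does, would here reproduce the data exactly because the synthetic data contain no measurement error.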
- Published
- 1973
40. Computer Simulation for Mixed-Model Production Lines
- Author
-
J. L. C. Macaskill
- Subjects
Production line ,Mixed model ,Section (archaeology) ,Computer science ,Strategy and Management ,Product (mathematics) ,Work (physics) ,Line (geometry) ,Management Science and Operations Research ,Algorithm ,Simulation - Abstract
This paper presents results of research into computer simulation of a mixed-model assembly line. The problem to be solved is described and the mathematical model of the line is presented. Algorithms that provide acceptable sequences of product models in various conditions are given, and results of simulator experiments are analysed and discussed. In the final section of the paper conclusions are drawn from the results of the work, and the general usefulness of simulation for assembly work is considered.
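The paper's own sequencing algorithms are not given in the abstract. As a hedged illustration of the task they solve, the sketch below uses a simple ratio-chasing rule: at each position on the line, schedule the model that is furthest behind its ideal cumulative share, which keeps the product mix smooth along the line.

```python
def sequence(demand):
    """demand: dict model -> units required. Returns a smoothed sequence."""
    total = sum(demand.values())
    produced = {m: 0 for m in demand}
    seq = []
    for k in range(1, total + 1):
        # pick the model furthest behind its ideal share k * d / total
        m = max(demand, key=lambda m: k * demand[m] / total - produced[m])
        produced[m] += 1
        seq.append(m)
    return seq

seq = sequence({"A": 2, "B": 1, "C": 1})   # illustrative model mix
```

The two units of model A are spread out rather than scheduled back to back, which is the kind of "acceptable sequence" behavior the simulator experiments evaluate.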
- Published
- 1973
41. Large capacity equation solver for structural analysis
- Author
-
Graham H. Powell and Digambar P. Mondkar
- Subjects
Computer science ,Mechanical Engineering ,Large capacity ,Type (model theory) ,Solver ,Computer Science Applications ,General purpose ,Low speed ,Modeling and Simulation ,General Materials Science ,Coefficient matrix ,Algorithm ,Civil and Structural Engineering ,Data transmission ,Block (data storage) - Abstract
This paper extends the procedures and ideas presented in an earlier paper [1] to the out-of-core solution of sets of equations of almost unlimited size. Block partitioning of the coefficient matrix and load vectors is considered and an illustrative example is presented. The data transfer procedures, for input to or output from low-speed storage, which are essential for an equation solver of this type, are explained in detail. Based on these procedures, a listing of a large capacity general purpose equation solver is presented. Example problems are solved to demonstrate the computational efficiency of the equation solver.
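The out-of-core idea can be sketched in miniature: the coefficient matrix is stored in blocks on low-speed storage (a dict stands in for disk files here), and only the blocks needed for the current step are held in core. The sketch shows block forward-substitution for a block lower-triangular system; the paper's solver handles the full factorization in the same block-wise fashion. The block contents are illustrative.

```python
def block_forward_solve(load_block, rhs, nblocks, bsize):
    """Solve L x = b where L is block lower-triangular with
    lower-triangular diagonal blocks, fetching one block at a time
    via load_block(i, j) (the stand-in for low-speed storage)."""
    x = []
    for i in range(nblocks):
        b = list(rhs[i])
        for j in range(i):                   # subtract known contributions
            L = load_block(i, j)             # fetch one off-diagonal block
            for r in range(bsize):
                for c in range(bsize):
                    b[r] -= L[r][c] * x[j][c]
        D = load_block(i, i)                 # diagonal block
        xi = [0.0] * bsize
        for r in range(bsize):
            s = b[r] - sum(D[r][c] * xi[c] for c in range(r))
            xi[r] = s / D[r][r]
        x.append(xi)
    return x

blocks = {                                    # "disk" holding 2x2 blocks
    (0, 0): [[2.0, 0.0], [1.0, 1.0]],
    (1, 0): [[1.0, 0.0], [0.0, 1.0]],
    (1, 1): [[1.0, 0.0], [0.0, 2.0]],
}
x = block_forward_solve(lambda i, j: blocks[(i, j)],
                        [[2.0, 3.0], [4.0, 10.0]], nblocks=2, bsize=2)
```

Because each step touches only a handful of blocks, core memory needs are bounded by the block size, not the system size.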
- Published
- 1974
42. Heuristic-Programming Solution of a Flowshop-Scheduling Problem
- Author
-
Kenneth Steiglitz and Martin J. Krone
- Subjects
Mathematical optimization ,Schedule ,Job shop scheduling ,Computer science ,Heuristic ,Computation ,Heuristic programming ,Function (mathematics) ,Management Science and Operations Research ,Algorithm ,Computer Science Applications - Abstract
This paper considers the static flowshop-scheduling problem with the objective of minimizing, as a cost function, the mean job-completion time. Within the more general framework of combinatorial optimization problems, it defines a heuristic search technique—an approach that has been successful in the past in obtaining near-optimal solutions for problems that could not be solved exactly, either for lack of theory or because of exorbitant computational requirements. The paper presents a two-phase algorithm: The first phase searches among schedules with identical processing orders on all machines; the second refines the schedule by allowing passing. Results of computer study are presented for a large ensemble of pseudorandom problems, and for two particular problems previously cited in the literature. The method is shown to provide solutions that are exceptionally low in cost, and superior to those provided by sampling techniques in the cases for which comparison is possible. Computation time is also discussed and is given in machine-independent terms.
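The first phase described above, a search among schedules with identical processing orders on all machines, can be sketched as a pairwise-interchange local search that accepts any swap lowering total job-completion time. The second phase, which permits passing, is omitted here, and the processing times are illustrative; this is a sketch of the search idea, not the authors' exact algorithm.

```python
import itertools

def total_completion_time(order, proc):
    """Sum of job completion times for a permutation flowshop.
    proc[j][m] = processing time of job j on machine m."""
    machines = len(proc[0])
    finish = [0.0] * machines        # finish time of last job on each machine
    total = 0.0
    for j in order:
        for m in range(machines):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + proc[j][m]
        total += finish[-1]
    return total

def improve(order, proc):
    """Pairwise-interchange descent over permutation schedules."""
    order = list(order)
    best = total_completion_time(order, proc)
    improved = True
    while improved:
        improved = False
        for i, k in itertools.combinations(range(len(order)), 2):
            order[i], order[k] = order[k], order[i]
            cost = total_completion_time(order, proc)
            if cost < best:
                best, improved = cost, True
            else:
                order[i], order[k] = order[k], order[i]   # undo the swap
    return order, best

proc = [[3, 2], [1, 1], [2, 2]]          # 3 jobs, 2 machines, illustrative
order, cost = improve([0, 1, 2], proc)
```

On this tiny instance the descent reaches the optimal permutation, consistent with the paper's observation that heuristic search yields near-optimal schedules at modest cost.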
- Published
- 1974
43. Performance Matching With Constraints
- Author
-
R.D. Carter, A.C. Pierce, L.F. Kemp, and D.L. Williams
- Subjects
Matching (statistics) ,Computer science ,General Engineering ,Algorithm - Abstract
Abstract This paper describes two new iterative reservoir performance matching techniques for the single-phase compressible flow case. The influence coefficients employed in these procedures are computed by Jacquard's method, a very economical one when the number of reservoir parameters to be estimated exceeds the number of observation locations. A derivation of the method based on the properties of the diffusion equation is presented. Sample results are given for the two new procedures and the magnified diagonal method of Jacquard and Jain. It is concluded that all the procedures discussed in this paper are about equally effective in reducing the difference between computed and observed pressure. However, the new procedures that employ linear programming methods can reduce the vector of pressure differences to an acceptably low level while guaranteeing that computed values of the reservoir parameters are within predetermined constraint intervals, whereas the method of Jacquard and Jain, being unconstrained, can lead to unacceptable values of the reservoir parameters. On the other hand, the (magnified diagonal) procedure of Jacquard and Jain requires less computing time per iteration. Introduction Performance matching is the process of varying reservoir characteristics in a reservoir simulator until the performance predicted by the simulator agrees, within some acceptable tolerance, with a set of observed performance data, while at the same time the parameters meet some criterion of reasonableness. This paper presents performance matching techniques that are applicable to single-phase isotropic compressible flow governed by Darcy's law.
Thus, the basic equation is (1). The performance matching problem is to estimate M(x, y) and V(x, y) given a complete description of Q and usually an incomplete and sometimes inaccurate description of p. (Variables are described in the Nomenclature.) Many schemes have been presented for automatically solving this problem. The papers of Jacquard and of Jacquard and Jain stand out because they present an ingenious convolutional method for computing the influence coefficients of a linear system that relates the differences between calculated and observed pressures to changes in reservoir properties. The calculation requires a number of simulations per iteration equal to one plus the number of observation points. The value of their technique may have been overlooked by later authors. The papers by Jacquard and Jain also present a magnified diagonal variant of the classical least-squares procedure for solving an underdetermined, exactly determined, or overdetermined linear system. In the case of underdetermined systems, the magnified diagonal method interpolates between corrections based on steepest descent and corrections based on linearity. The Jacquard and Jain procedure also scales corrections to prevent the calculation of negative values of M and V. Dupuy applied the methods of Refs. 6 and 7 to additional problems. Jahns then presented a method for the single-phase compressible flow problem based on the same perturbation principle as given by Jacquard and Jain, but differing from their method as follows. First, the influence coefficients are calculated by difference quotients based on numerical simulation results. Second, the reservoir description is increased in detail as the calculation progresses. Third, statistical measures of the reliability of the calculated reservoir properties are provided.
The direct method of obtaining influence coefficients employed in Jahns' work requires a number of simulations per iteration equal to one plus the number of reservoir parameters to be determined. SPEJ P. 187
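The difference-quotient idea attributed above to Jahns can be sketched for a single parameter: perturb the parameter, rerun the simulator to get the influence coefficients (sensitivities), then take a least-squares step toward the observed pressures, clipping to a constraint interval as the new constrained procedures do. The "simulator" below is a toy linear function, and all numbers are illustrative; this is not any of the papers' actual codes.

```python
def match_one(simulate, p, observed, lo, hi, steps=25, h=1e-6):
    """One-parameter performance match by perturbation sensitivities."""
    for _ in range(steps):
        base = simulate(p)
        resid = [o - s for o, s in zip(observed, base)]
        # one extra simulation gives the influence coefficients
        sens = [(s2 - s1) / h for s1, s2 in zip(base, simulate(p + h))]
        denom = sum(s * s for s in sens)
        if denom == 0.0:
            break
        p += sum(s * r for s, r in zip(sens, resid)) / denom
        p = min(max(p, lo), hi)              # predetermined constraint interval
    return p

# toy "simulator": pressures at two observation wells as a function of a
# single permeability-like parameter m (purely illustrative)
sim = lambda m: [100.0 - 5.0 * m, 80.0 - 3.0 * m]
observed = sim(4.0)                          # data generated with m = 4
m = match_one(sim, 1.0, observed, lo=0.0, hi=10.0)
```

Each iteration costs one simulation per parameter plus the base run, matching the operation count quoted in the abstract for the direct method.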
- Published
- 1974
44. Towards a language for the description of IC chips
- Author
-
Reiner W. Hartenstein
- Subjects
Programming language ,Computer science ,business.industry ,General Medicine ,Symbolic notation ,computer.software_genre ,Notation ,Field (computer science) ,Set (abstract data type) ,Software ,Block (programming) ,business ,computer ,Algorithm ,Register transfer - Abstract
This paper continues, as "part II", a paper from the preceding number of the SIGMICRO Newsletter. That preceding paper (for its title see ref. |10|) is referenced as "part I" in the following lines. (As in part I |10|, some of the ideas in part II are half-baked.) In part I a draft design was given of a set of register transfer primitives (RTP) and a corresponding symbolic notation, as well as its block diagram equivalent. Part I particularly aimed at demonstrating the use of register transfer notations for modelling constructs known from the software field, as seen with the eyes of a hardware man.
- Published
- 1973
45. The representation of two-dimensional sequences as one-dimensional sequences
- Author
-
Russell M. Mersereau and D. Dudgeon
- Subjects
Overlap–add method ,Multidimensional signal processing ,Theoretical computer science ,Finite impulse response ,Computer science ,Signal Processing ,Image processing ,Pyramid (image processing) ,Filter (signal processing) ,Quadrature filter ,Algorithm ,Linear filter - Abstract
A number of signal processing techniques which have been developed for processing one-dimensional sequences do not generalize to the processing of two-dimensional signals, largely due to the absence of a two-dimensional factorization theorem. In an attempt to circumvent this problem, a specific representation of two-dimensional sequences as one-dimensional sequences is presented in this paper. Using this mapping several two-dimensional problems can be viewed as one-dimensional problems and approached using one-dimensional techniques. This representation is valid both for signals of finite extent and for the more general class of signals with rational Z-transforms. In this paper we consider applications of these techniques for high speed convolution, processing of drum scans, and two-dimensional finite impulse response (FIR) filter design.
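The specific mapping is not reproduced in the abstract; one natural member of this family is row concatenation with zero padding. If each row of an array is embedded at a stride K ≥ N1 + N2 − 1 (the output row width), ordinary 1-D convolution of the mapped sequences equals the mapping of the 2-D convolution, because column indices can never carry across a row boundary. The sketch below verifies this identity on illustrative data; it is a demonstration of the principle, not the authors' exact construction.

```python
def to_1d(x, K):
    """Map a 2-D array (list of rows) to a 1-D sequence with row stride K."""
    out = [0.0] * (len(x) * K)
    for i, row in enumerate(x):
        for j, v in enumerate(row):
            out[i * K + j] = v
    return out

def conv1d(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] += u * v
    return out

def conv2d(x, h):
    M = len(x) + len(h) - 1
    N = len(x[0]) + len(h[0]) - 1
    out = [[0.0] * N for _ in range(M)]
    for i, xr in enumerate(x):
        for j, u in enumerate(xr):
            for k, hr in enumerate(h):
                for l, v in enumerate(hr):
                    out[i + k][j + l] += u * v
    return out

x = [[1.0, 2.0], [3.0, 4.0]]
h = [[1.0, 1.0], [0.0, 1.0]]
K = len(x[0]) + len(h[0]) - 1            # stride 3 prevents row wrap-around
lhs = conv1d(to_1d(x, K), to_1d(h, K))   # 1-D convolution of the mappings
rhs = to_1d(conv2d(x, h), K)             # mapping of the 2-D convolution
```

The two results agree term by term, which is what lets a two-dimensional convolution be executed with one-dimensional high-speed convolution machinery.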
- Published
- 1974
46. A Characterization of Ten Hidden-Surface Algorithms
- Author
-
Ivan E. Sutherland, Robert A. Schumacker, and Robert F. Sproull
- Subjects
General Computer Science ,Computer science ,Bogosort ,Hidden surface determination ,Merge algorithm ,Sorting ,Sorting network ,Potentially visible set ,External sorting ,Algorithm ,Hidden line removal ,ComputingMethodologies_COMPUTERGRAPHICS ,Theoretical Computer Science - Abstract
The paper asserts that the hidden-surface problem is mainly one of sorting. The various surfaces of an object to be shown in hidden-surface or hidden-line form must be sorted to find out which ones are visible at various places on the screen. Surfaces may be sorted by lateral position in the picture (XY), by depth (Z), or by other criteria. The paper shows that the order of sorting and the types of sorting used form differences among the existing hidden-surface algorithms. (Modified author abstract)
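The depth-sorting view can be made concrete with the painter's-algorithm family: surfaces covering a pixel are sorted by Z and painted back to front, so nearer surfaces overwrite farther ones. The one-pixel "screen" and the polygons-as-tuples below are illustrative stand-ins for a real rasterizer, not any of the ten algorithms the paper characterizes.

```python
def paint(surfaces):
    """surfaces: list of (depth, color) pairs covering the same pixel.
    Returns the color left visible after back-to-front painting."""
    pixel = None
    for depth, color in sorted(surfaces, key=lambda s: s[0], reverse=True):
        pixel = color                    # nearer surfaces are painted later
    return pixel

visible = paint([(5.0, "red"), (2.0, "blue"), (9.0, "green")])
```

Replacing the Z sort with an XY sort (or interleaving the two) is exactly the kind of variation by which the paper distinguishes the existing hidden-surface algorithms.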
- Published
- 1974
47. Computer Implementation of Digital Techniques and Spectral Estimation
- Author
-
Arvind Kumar and D. Dutta Majumder
- Subjects
Digital computer ,Computer science ,Spectrum (functional analysis) ,Electronic engineering ,Bilinear interpolation ,Frequency shift ,Spectral density estimation ,Electrical and Electronic Engineering ,Performance results ,Algorithm ,Digital filter ,Computer Science Applications ,Theoretical Computer Science - Abstract
In this paper, along with a brief theoretical analysis of the design aspects of recursive and non-recursive digital filters, the derivation of design algorithms and their performance characteristics are studied with the help of a general-purpose digital computer, the Honeywell-400. The basic mathematical tools used are the well-known z-transformation calculus and the bilinear z-transformation methods. We present in some detail the digital filtering with sampled-spectrum frequency-shift technique and its application to spectral estimation, together with the design procedure for a bank of filters using these methods, along with the performance results.
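The bilinear z-transformation step mentioned above can be sketched for the simplest analog prototype, H(s) = a/(s + a), a first-order low-pass: substituting s → (2/T)(1 − z⁻¹)/(1 + z⁻¹) yields a recursive difference equation. The cutoff a and sample period T below are illustrative, and this is a textbook instance of the method, not the paper's Honeywell-400 implementation.

```python
def bilinear_lowpass(a, T):
    """Coefficients for y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    c = 2.0 / T                     # bilinear-transform constant
    b0 = a / (c + a)
    b1 = b0
    a1 = (a - c) / (c + a)
    return b0, b1, a1

def filt(coeffs, xs):
    """Run the first-order recursive filter over a sequence."""
    b0, b1, a1 = coeffs
    y, xprev, yprev = [], 0.0, 0.0
    for x in xs:
        yn = b0 * x + b1 * xprev - a1 * yprev
        y.append(yn)
        xprev, yprev = x, yn
    return y

coeffs = bilinear_lowpass(a=10.0, T=0.01)   # illustrative cutoff and period
step = filt(coeffs, [1.0] * 200)            # unit-step response
```

The step response rises toward unity gain at DC, confirming that the bilinear mapping preserves the prototype's low-frequency behavior.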
- Published
- 1973
48. Flow Profile Computation by Direct Integration
- Author
-
Amruthur S. Ramamurthy and Kanakatti Subramanya
- Subjects
Variable (computer science) ,Flow (mathematics) ,Hydraulics ,law ,Differential equation ,Computer science ,Computation ,General Engineering ,Direct integration of a beam ,Algorithm ,law.invention ,Open-channel flow - Abstract
Using Chow’s procedure to calculate gradually varied flow profiles in channels with variable hydraulic exponents may lead to large errors because of an erroneous assumption in the formulation of the basic equation. This paper identifies the source of the error and presents a generalized procedure which overcomes these discrepancies. In view of the extensive use of Chow’s procedure in standard texts and handbooks dealing with open-channel hydraulics, it is hoped that this paper will help to promote a corrected method.
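The governing gradually-varied-flow equation can be sketched numerically for a wide rectangular channel with Manning friction, dy/dx = S₀(1 − (yₙ/y)^(10/3)) / (1 − (y_c/y)³), marched downstream by a direct-step Euler scheme. This illustrates the computation the paper treats by direct (analytical) integration; the channel numbers are illustrative and the crude Euler step is not the paper's method.

```python
def profile(y0, yn, yc, s0, dx, steps):
    """March the depth y downstream from y0; returns the list of depths.
    yn: normal depth, yc: critical depth, s0: bed slope."""
    ys = [y0]
    y = y0
    for _ in range(steps):
        dydx = s0 * (1.0 - (yn / y) ** (10.0 / 3.0)) / (1.0 - (yc / y) ** 3)
        y += dydx * dx
        ys.append(y)
    return ys

# M2-type profile: a depth between critical (1 m) and normal (2 m)
# falls toward critical as the flow moves downstream.
depths = profile(y0=1.5, yn=2.0, yc=1.0, s0=0.001, dx=1.0, steps=50)
```

For channels with variable hydraulic exponents the exponents 10/3 and 3 are no longer constant along the profile, which is exactly where Chow's tabulated procedure goes astray.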
- Published
- 1974
49. An Algorithm for the Dynamic Relocation of Fire Companies
- Author
-
Warren E. Walker and Peter Kolesar
- Subjects
ALARM ,Operations research ,Computer science ,Fire protection ,Management Science and Operations Research ,Relocation ,Algorithm ,Computer Science Applications ,Test (assessment) - Abstract
When all the fire companies in a region are engaged in fighting fires, protection against a future fire is considerably reduced. It is standard practice in many urban fire departments to protect the exposed region by relocating outside fire companies temporarily to some of the vacant houses. In New York City, situations requiring such relocations arise ten times a day on the average. The Fire Department of the City of New York (FDNY) currently makes its relocations according to a system of preplanned moves. This system was designed at a time when alarm rates were low and is based on the assumption that only one fire is in progress at a time. Because of the high alarm rates currently being experienced in parts of New York City, this assumption is no longer valid, and the preplanned relocation system breaks down at the times when it is needed most. This paper describes a computer-based method for determining relocations that overcomes the deficiencies of the existing method by utilizing the computer’s ability to (1) store up-to-date information about the status of all fires in progress and the location and activity of all fire companies, (2) generate and compare many alternative relocation plans quickly. The method, which will become part of the FDNY’s real-time Management Information and Control System (MICS), is designed to be fast and to require little computer memory. After giving some background of the problem and the objectives of relocation, we give the problem a mathematical programming formulation and then describe the heuristic algorithm to be used for generating relocations in the MICS. The remainder of the paper is devoted to a discussion of an example illustrating how the algorithm works, a rigorous test of the algorithm using a computer simulation model of Fire-Department operations, and a description of the current use of the computer algorithm by dispatchers in an interactive time-shared environment. 
The results of the testing indicate that the proposed algorithm is a significant improvement over existing methods, particularly in crisis situations. Although designed to solve a problem for the New York City Fire Department, the algorithm should be applicable to other cities.
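The flavor of generating a relocation plan from current status information can be sketched with a greedy rule: fill each uncovered house with the nearest still-available outside company. This is an illustrative stand-in, not the FDNY algorithm itself, whose objectives and constraints are richer; the coordinates and the Manhattan metric are made up.

```python
def relocate(vacant_houses, available, dist):
    """Assign one available company to each vacant house, nearest first."""
    plan = {}
    free = set(available)
    for house in vacant_houses:
        if not free:
            break                           # no companies left to move
        company = min(free, key=lambda c: dist(c, house))
        plan[house] = company
        free.remove(company)
    return plan

houses = [(0, 0), (5, 5)]                   # uncovered house locations
companies = [(1, 0), (6, 5), (9, 9)]        # available outside companies
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
plan = relocate(houses, companies, manhattan)
```

Because each candidate plan is cheap to score, many alternatives can be generated and compared quickly, which is the computational property the MICS design relies on.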
- Published
- 1974
50. Statistical Behavior of Deep Fades of Diversity Signals
- Author
-
Sing-Hsiung Lin
- Subjects
Signal processing ,Computer science ,Rayleigh distribution ,Radio Link Protocol ,law.invention ,Diversity combining ,symbols.namesake ,Signal-to-noise ratio ,Diversity gain ,law ,Electronic engineering ,symbols ,Probability distribution ,Fading ,Electrical and Electronic Engineering ,Rayleigh scattering ,Algorithm ,Computer Science::Information Theory - Abstract
This paper presents the results of a statistical analysis of diversity combining systems. Previous theoretical work on this topic often assumes that the input signals are jointly Rayleigh distributed, which may not hold for a practical fading environment. In this paper, we use a new formulation and analysis to show that the major results of previous theoretical work are actually valid without the restrictive assumption of a joint Rayleigh distribution. The statistics include the probability of fade, the expected number of fades per unit time, and the average fade duration. We also extend the analysis to include the effect of a dominant interfering signal such as the water-reflected ray of an overwater radio link. The statistics of diversity signals on these radio links are different and are not treated in the literature.
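The classical baseline that the paper generalizes beyond can be sketched in closed form: with independent Rayleigh branches and selection combining, the probability that the combined signal fades below a threshold is the single-branch probability raised to the number of branches, which is why deep fades become far rarer with diversity. This is the textbook Rayleigh result, used here only as an illustration; the paper's point is that its major conclusions hold without this distributional assumption.

```python
import math

def fade_prob_single(rho):
    """P(normalized Rayleigh envelope < rho), rho relative to RMS level."""
    return 1.0 - math.exp(-rho * rho)

def fade_prob_selection(rho, branches=2):
    """Selection combining over independent, identical branches."""
    return fade_prob_single(rho) ** branches

p1 = fade_prob_single(0.1)        # single-branch deep fade, ~1%
p2 = fade_prob_selection(0.1)     # two-branch diversity, ~0.01%
```

A dominant interfering component, such as the water-reflected ray the abstract mentions, breaks the independent-Rayleigh picture, which is why those links need the separate treatment the paper provides.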
- Published
- 1972