117 results for "README"
Search Results
102. Structures and Analysis: Help
- Author
-
Bill Scott
- Subjects
Set (abstract data type) ,Maple ,Database ,Computer science ,README ,engineering ,Initialization ,engineering.material ,computer.software_genre ,computer ,Directory structure - Abstract
Understand the ‘bones’ of Maple structures: arguments, indexes, and ordering. Analysis of units and a greenhouse chamber. The details of execution groups and producing documents. Set up a self-help system using readme files and help from Maple. Combine initialization and help to make an environment you enjoy.
- Published
- 2001
- Full Text
- View/download PDF
103. Geologic map and digital database of the Cougar Buttes 7.5' quadrangle, San Bernardino County, California
- Author
-
Jonathan C. Matti, Pamela M. Cossette, and Robert E. Powell
- Subjects
Database ,Geologic map ,computer.software_genre ,Geography ,Quadrangle ,README ,Computer graphics (images) ,Polygon ,Geological survey ,Data set (IBM mainframe) ,Line (text file) ,Scale (map) ,Cartography ,computer - Abstract
This data set maps and describes the geology of the Cougar Buttes 7.5' quadrangle, San Bernardino County, California. Created using Environmental Systems Research Institute's ARC/INFO software, the database consists of the following items: (1) a map coverage showing geologic contacts and units, (2) a separate coverage layer showing structural data, (3) a scanned topographic base at a scale of 1:24,000, and (4) attribute tables for geologic units (polygons), contacts (arcs), and site-specific data (points). The database is accompanied by a readme file and this metadata file. In addition, the data set includes the following graphic and text products: (1) A portable document file (.pdf) containing a browse-graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map Units (DMU), a Correlation of Map Units (CMU), and a key to point and line symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic plot-file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that summarizes the late Cenozoic geology of the Cougar Buttes quadrangle. The geologic map database contains original U.S. Geological Survey data generated by detailed field observation and by interpretation of aerial photographs, including low-altitude color and black-and-white photographs and high-altitude infrared photographs. The map was created by transferring lines from the aerial photographs to a 1:24,000 topographic base via a mylar orthophoto-quadrangle or by using a PG-2 plotter. The map was then scribed, scanned, and imported into ARC/INFO, where the database was built. Within the database, geologic contacts are represented as lines (arcs), geologic units as polygons, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information. (A small illustrative sketch of this attribute-table linking follows this entry.)
- Published
- 2000
- Full Text
- View/download PDF
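Entry 103 above describes how the ARC/INFO database links polygons, arcs, and points to fuller descriptions through attribute tables (.pat, .aat) and related tables (.rel). The sketch below only illustrates that linking, assuming the tables have been exported to CSV; the file names, the PTYPE key, and the column names are placeholders, not the survey's actual schema.

```python
# Hedged sketch: joining an exported polygon attribute table (.pat) to a
# related unit-description table (.rel), as described in entry 103.
# File names, key, and column names are assumptions for illustration only.
import pandas as pd

# Polygon attribute table: one row per mapped geologic-unit polygon.
pat = pd.read_csv("cb_geol.pat.csv")     # hypothetical CSV export
# Related table with fuller unit descriptions (age, lithology, ...).
rel = pd.read_csv("cb_units.rel.csv")    # hypothetical CSV export

# Each polygon carries a unit label that keys into the description table,
# so a left join recovers the fuller description for every polygon.
units = pat.merge(rel, on="PTYPE", how="left")   # "PTYPE" is an assumed key

print(units[["PTYPE", "AREA", "UNIT_NAME"]].head())
```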
104. A documentation generator for (C)LP systems
- Author
-
Manuel V. Hermenegildo
- Subjects
Documentation generator ,Computer science ,media_common.quotation_subject ,0102 computer and information sciences ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Documentation ,Software ,README ,0202 electrical engineering, electronic engineering, information engineering ,media_common ,Unix ,Informática ,business.industry ,Programming language ,Program specification ,Assertion language ,Program optimization ,Software distribution ,Debugging ,010201 computation theory & mathematics ,Scripting language ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,020201 artificial intelligence & image processing ,Compiler ,business ,computer - Abstract
We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in Ciao, ISO-Prolog, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to which version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, debugging, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and treat programs documented in this way normally. The documentation can be generated interactively from emacs or from the command line, in many formats including texinfo, dvi, ps, pdf, info, ascii, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc. (A loose Python analogue of this assertion-driven approach follows this entry.)
- Published
- 2000
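Entry 104 explains that lpdoc keeps documentation and source in step by reading assertions and machine-readable comments from the program files themselves. The sketch below is a loose Python analogue of that idea, not lpdoc or the Ciao assertion language: it walks a module's functions, treats their signatures and docstrings as the "assertions", and emits a plain-text readme-style interface summary. The module name passed at the bottom is arbitrary.

```python
# Hedged analogy in Python (not lpdoc's Ciao assertion language): harvest
# machine-readable interface information from a module and emit a plain-text
# "readme"-style summary, mirroring the idea of keeping documentation and
# source together.  Module name and output format are assumptions.
import importlib
import inspect

def module_readme(module_name: str) -> str:
    mod = importlib.import_module(module_name)
    lines = [f"{module_name} -- interface summary", ""]
    for name, obj in sorted(vars(mod).items()):
        if inspect.isfunction(obj) and obj.__module__ == module_name:
            sig = inspect.signature(obj)          # plays the role of a mode/type assertion
            doc = inspect.getdoc(obj) or "(undocumented)"
            lines.append(f"{name}{sig}")
            lines.append(f"    {doc.splitlines()[0]}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(module_readme("json"))   # any importable module works here
```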
105. Geologic map and digital database of the Apache Canyon 7.5' quadrangle, Ventura and Kern counties, California
- Author
-
Paul Stone and Pamela M. Cossette
- Subjects
Quadrangle ,Geography ,Database ,README ,Polygon ,Geological survey ,Data set (IBM mainframe) ,Line (text file) ,Scale (map) ,Geologic map ,computer.software_genre ,Cartography ,computer - Abstract
This data set maps and describes the geology of the Apache Canyon 7.5' quadrangle, Ventura and Kern Counties, California. Created using Environmental Systems Research Institute's ARC/INFO software, the database consists of the following items: (1) a map coverage showing geologic contacts, faults and units, (2) a separate coverage layer showing structural data, (3) an additional point coverage which contains bedding data, (4) a point coverage containing sample localities, (5) a scanned topographic base at a scale of 1:24,000, and (6) attribute tables for geologic units (polygons), contacts (arcs), and site-specific data (points). The database is accompanied by a readme file and this metadata file. In addition, the data set includes the following graphic and text products: (1) A jpg file (.jpg) containing a browse-graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a List of Map Units, a Correlation of Map Units, and a key to point and line symbols. (2) A .pdf file of a geologic explanation pamphlet that includes a Description of Map Units. (3) Two PostScript graphic plot-files: one containing the geologic map on a 1:24,000 topographic base and the other containing three accompanying structural cross sections. The geologic map database contains original U.S. Geological Survey data generated by detailed field observation and by interpretation of aerial photographs. The map was created by transferring lines and point data from the aerial photographs to a 1:24,000 topographic base by using a PG-2 plotter. The map was scribed, scanned, and imported into ARC/INFO, where the database was built. Within the database, geologic contacts are represented as lines (arcs), geologic units as polygons, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.
- Published
- 2000
- Full Text
- View/download PDF
106. Program Code README File
- Author
-
Peter Schattner
- Subjects
World Wide Web ,Computer science ,README ,Computational genomics ,Genomics ,Program code ,Genome - Published
- 2008
- Full Text
- View/download PDF
107. Cases of Practical Scheduling
- Author
-
Eugene Khmelnitsky, Konstantin Kogan, and Oded Maimon
- Subjects
Rate-monotonic scheduling ,business.industry ,Computer science ,Scheduling (production processes) ,Time horizon ,Flow shop scheduling ,computer.software_genre ,Fair-share scheduling ,Software ,README ,Operating system ,business ,computer ,Graphical user interface - Abstract
This chapter presents seven case studies of practical scheduling implemented at several manufacturing plants located in the northeast of the United States. Production engineers managing the manufacturing helped us gather data on the shop floor. To study these cases, we developed scheduling software based on the numerical methods presented in Chapter 7. The software has an easy-to-use graphical user interface and runs under a Windows environment. The software, as well as a tutorial, can be found on the disk attached to this book (see the file “Readme” for details of the disk).
- Published
- 1998
- Full Text
- View/download PDF
108. An Einstein X-ray Survey of Optically-Selected Galaxies: I. Data
- Author
-
Christine Jones, William R. Forman, A. P. Marston, Ronald O. Marzke, and David Burstein
- Subjects
Physics ,Astrophysics::High Energy Astrophysical Phenomena ,Astrophysics (astro-ph) ,X-ray ,Proportional counter ,FOS: Physical sciences ,Astronomy and Astrophysics ,Astrophysics ,Astrophysics::Cosmology and Extragalactic Astrophysics ,Redshift ,Galaxy ,symbols.namesake ,Space and Planetary Science ,Observatory ,README ,symbols ,Electronic data ,Einstein ,Astrophysics::Galaxy Astrophysics - Abstract
We present the results of a complete Einstein Imaging Proportional Counter (IPC) X-ray survey of optically-selected galaxies from the Shapley-Ames Catalog, the Uppsala General Catalog and the European Southern Observatory Catalog. X-ray fluxes have been surveyed for 1018 galaxies, 21% of which are detected. 827 galaxies have either independent distance estimates or radial velocities. Associated optical, redshift and distance data have been assembled. The five large data tables in the paper are in easily accessible ASCII formats with an ADC-style README file, for easy electronic data handling. The accuracy of the X-ray fluxes has been checked in three different ways; all are consistent with the derived X-ray fluxes being accurate to ≤ 0.1 dex. In particular, there is agreement with previously published X-ray fluxes for galaxies in common with Roberts et al. (1991) and Fabbiano et al. (1992). The data presented here will be used in further studies to characterize the X-ray output of galaxies of various morphological types and thus to enable the determination of the major sources contributing to the X-ray emission from galaxies. Comment: 14 files: .ps file of text, 19 pages; .ps files of 6 figures, 1 table; ASCII ADC-style format for 5 data tables plus 1 ADC-style README file. Get original files via ftp://samuri.la.asu.edu/pub/burstein/einsteinpap; to appear 1997 ApJS. (A minimal sketch of reading such a fixed-width ASCII table follows this entry.)
- Published
- 1996
- Full Text
- View/download PDF
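Entry 108 notes that the survey's data tables are plain ASCII accompanied by an ADC-style README, i.e. a byte-by-byte description of each column. The snippet below sketches how such a table could be read once the byte ranges are copied out of the README; the file name, byte ranges, and column names here are invented placeholders, not the survey's actual layout.

```python
# Hedged sketch: reading a fixed-width ASCII table using the byte-by-byte
# column layout an ADC-style README typically documents.  The colspecs and
# file name below are placeholders, not the actual layout from this survey.
colspecs = {            # (start_byte, end_byte) pairs, 1-indexed as in ADC READMEs
    "name":  (1, 16),   # galaxy name
    "logFx": (18, 24),  # log X-ray flux
    "dist":  (26, 31),  # distance
}

def read_adc_table(path: str):
    """Return a list of dicts, one per data row, sliced by byte range."""
    rows = []
    with open(path) as fh:
        for line in fh:
            rows.append({col: line[lo - 1:hi].strip()
                         for col, (lo, hi) in colspecs.items()})
    return rows

# rows = read_adc_table("table1.dat")   # placeholder file name
```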
109. Extended calculus of constructions as a specification language
- Author
-
Rod M. Burstall
- Subjects
Epsilon calculus ,Natural deduction ,Type theory ,Computer science ,Programming language ,README ,Process calculus ,Calculus of constructions ,Specification language ,Calculus of communicating systems ,computer.software_genre ,computer - Abstract
Huet and Coquand's Calculus of Constructions, an implementation of type theory, was extended by Luo with sigma types, a type of pairs where the type of the second component depends on the value of the first one. This calculus has been implemented as ‘Lego’ by Pollack. The system and documentation are obtainable as follows: ftp ftp.dcs.ed.ac.uk, then cd export/lego, after which one should read the file README. (A scripted version of these retrieval steps follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
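Entry 109 spells out the retrieval steps for the Lego system (anonymous ftp to ftp.dcs.ed.ac.uk, cd export/lego, read README). The snippet below simply scripts those same steps with Python's standard ftplib; whether the 1993-era server still answers anonymous logins is, of course, not guaranteed.

```python
# Hedged sketch of the retrieval steps quoted in entry 109, using Python's
# standard ftplib instead of an interactive ftp client.  The host and path
# come from the abstract; availability of the server is not guaranteed.
from ftplib import FTP

with FTP("ftp.dcs.ed.ac.uk") as ftp:
    ftp.login()                    # anonymous login
    ftp.cwd("export/lego")
    ftp.retrlines("RETR README")   # print the README to stdout
```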
110. Three sets of Macintosh AppleScripts for the automatic submission of sequence data to the Internet BLAST server
- Author
-
Brian I. Osborne
- Subjects
Statistics and Probability ,File Transfer Protocol ,Base Sequence ,Databases, Factual ,Computer science ,DNA ,Sequence Analysis, DNA ,computer.software_genre ,File format ,Biochemistry ,Computer Science Applications ,Clipboard ,World Wide Web ,Computer Communication Networks ,Computational Mathematics ,Microcomputers ,Computational Theory and Mathematics ,AppleScript ,Scripting language ,README ,Molecular Biology ,computer ,Software ,computer.programming_language - Abstract
AppleScript is a Macintosh scripting language. This note describes AppleScripts that automatically mail sequences to a BLAST server. The existence of efficient methods for DNA sequence generation has led to the need for substantial sequence analysis, even for those labs not directly involved in genome-scale sequencing. This analysis can be effectively accomplished by delivering sequences via email to powerful server computers. Software tools for automated email delivery have existed for the most part only on platforms using varieties of the Unix operating system. Unfortunately, many laboratories use Macintosh computers exclusively. However, Macintosh computers now come packaged with AppleScript, which is a simple scripting language that can direct the operations of 'scriptable' programs. I have written free software for Macintosh computers in AppleScript which automates the delivery of DNA or protein sequences to the BLAST (Altschul et al., 1990) server at ncbi.nlm.nih.gov. These AppleScripts make use of the 'scriptability' of the commercial email program Eudora (Qualcomm Inc., eudora-sales@qualcomm.com). The sets of scripts are entitled 'Seq-EudoraBlast' and 'Automatic-BLAST', and are available through the Internet at ftp://fly.bio.indiana.edu/molbio/mac and at the fly.bio.indiana.edu mirror sites. Although there are three types of scripts (the CLIPBOARD scripts and the 'droplet' scripts, contained in 'Seq-EudoraBlast', and the 'Automatic-BLAST' script), they all operate similarly: they receive a sequence or sequences, strip out descriptive text, launch Eudora, read essential parameters from an accompanying 'BLAST parameters' text file, then compose a properly addressed letter with the parameters and a sequence, and send the letter or letters to the BLAST server. The user stipulates which of the five BLAST programs (blastn, blastp, blastx, tblastn, tblastx) should be used by selecting a particular script when using the CLIPBOARD or 'droplet' scripts. One set of scripts (CLIPBOARD) collects the contents of the Clipboard, and can deliver only one sequence at a time. One simply double-clicks on the desired script to activate it. The 'droplet' scripts take drag-and-dropped text files, and thus can deliver a large number of sequences at one time. The Automatic-BLAST script is fully automatic. At a daily or weekly time set by the researcher, the script searches the contents of specified folders (e.g. 'daily blastn', 'weekly tblastx'). If the script finds text files, it reads their contents and sends those sequences to the server. Note that this means the Automatic-BLAST script must be kept running in order for it to periodically check for the specified time or day. The Automatic-BLAST and 'droplet' scripts also record their submissions cumulatively in a log file, which could be useful given the large number of sequences that can be delivered by these scripts. The 'droplet' and Automatic-BLAST scripts can parse DNA sequences in plain, DNA Strider ASCII, Pearson/Fasta, GenBank/GB, EMBL or Zuker format. The result from a BLAST analysis can be sizeable, and large results are split into smaller messages for delivery to the researcher. Both sets of scripts contain 'BLASTextract', a script to extract BLAST messages from a Eudora 'mailbox' to text files, concatenating smaller messages to form a single file when necessary. There are some known deficiencies in these scripts. First, they cannot accommodate every known sequence file format. Second, the scripts cannot, as yet, accommodate multiple sequences in one file. Third, the scripts appear to run out of memory if a given sequence exceeds 4000 characters in length. This may be remedied by increasing the script's 'Preferred size' using the script's 'Get Info' box. Finally, these scripts require recent versions of AppleScript's Scripting Additions; please read the accompanying README files for the specific details. I note that there are a multitude of analytical email servers now in service for the community of molecular biologists, performing diverse functions, apart from the sequence comparison servers at ncbi.nlm.nih.gov (for a listing, see http://expasy.hcuge.ch/info/serv_ema.txt). A researcher with a beginner's knowledge of AppleScript could easily modify any of these scripts and their accompanying parameters files to automate the delivery of sequences to any of these other servers. (A Python paraphrase of the scripts' folder-watching flow follows this entry.)
- Published
- 1996
- Full Text
- View/download PDF
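Entry 110 describes the flow common to all three script sets: gather sequence files, strip descriptive text, read a parameters file, compose a message, and log the submission. The Python sketch below paraphrases that folder-watching flow only; it is not one of the AppleScripts, it does not drive Eudora or actually send mail (the 1990s e-mail BLAST service has long been retired), and the folder, parameters-file, and log names are placeholders.

```python
# Hedged paraphrase of the folder-watching flow described in entry 110
# (the originals are AppleScripts driving Eudora).  Folder, parameters-file,
# and log names are placeholders; nothing is actually mailed here.
from pathlib import Path
from datetime import datetime

SUBMIT_DIR = Path("daily_blastn")        # placeholder folder, one sequence file per submission
PARAMS = Path("blast_parameters.txt")    # placeholder parameters file
LOG = Path("submission.log")

def build_message(seq_file: Path) -> str:
    # Drop descriptive text (e.g. FASTA '>' header lines), keep the sequence.
    sequence = "\n".join(line for line in seq_file.read_text().splitlines()
                         if not line.startswith(">"))
    return PARAMS.read_text() + "\n" + sequence + "\n"

for seq_file in sorted(SUBMIT_DIR.glob("*.txt")):
    body = build_message(seq_file)
    # A real client would now hand `body` to a mail program, as the
    # AppleScripts hand it to Eudora; here we only record the submission.
    with LOG.open("a") as log:
        log.write(f"{datetime.now().isoformat()}  {seq_file.name}  "
                  f"{len(body)} chars\n")
```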
111. Catalogue of HII regions measured on 6 m telescope plates at Observatoire de Marseille
- Author
-
M. Petit, P. Figon, and H. Petit
- Subjects
Telescope ,Physics ,law ,README ,General Physics and Astronomy ,Astronomy ,Astrophysics ,Galaxy ,Original data ,law.invention - Abstract
Several galaxies were studied at Observatoire de Marseille using films obtained at the 6 m telescope with the Great Focal Reducer installed at the prime focus of the Russian telescope. We thought that, before we retire, it would be of some interest for the astronomical community to find in a single catalogue all the data we have published, to facilitate further research (see the references below). We give in this catalogue the data for the following galaxies: M 33, M 51, M 81, NGC 2403, NGC 4258 and NGC 7331. NGC 4258 was not studied with the 6 m telescope, but we added its results because they were published by the same team using the same software. For NGC 7331 (see Petit 1998), the flux values are followed by an asterisk (*); this means that they are not absolute fluxes, unlike those in the other publications. In order to make the catalogue easier to consult, we homogenised the data, so one can find some slight differences between the original data and the catalogue values. The details of these modifications are given in the Readme file associated with the catalogue in the database. We did not keep the remarks, so it is necessary to consult the original publication for the full information. We give the full references below to facilitate finding the original articles.
- Published
- 1998
- Full Text
- View/download PDF
112. Update from the Software Editor
- Author
-
M. Victor Wickerhauser
- Subjects
World Wide Web ,Software ,business.industry ,Plain text ,Applied Mathematics ,Home page ,README ,Home directory ,computer.file_format ,business ,computer ,Mathematics - Abstract
Home Directory: ~acha (/Mathematics/Staff/acha/public_html)
Subdirectories:
  README -> ./readme.txt   # User instructions
  gifs/                    # Browser images
  index.html               # Home page definitions
  readme.dvi               # Agreement for submitters
  readme.tex               # ...in TeX source form
  readme.txt               # ...in plain text form
  uploads/                 # Put submissions here
  utility/                 # Get ‘unzip’, etc., here
  volume03/                # ACHA vol. 3 submissions
  volume13/                # ACHA vol. 13 submissions
- Full Text
- View/download PDF
113. Reusability library framework (RLF)
- Author
-
M. A. Simos, K. C. Wallnau, and J. J. Solderitsch
- Subjects
World Wide Web ,Software ,Computer science ,business.industry ,README ,Key (cryptography) ,Software engineering ,business ,Naval research ,Reusability - Abstract
This document provides an overview of the R-increment enhancements to the Reusability Library Framework (RLF) software. This software was developed in its original form under the STARS Foundations program (contract number N00014-88-C-2052, administered by the Naval Research Laboratory). This delivery includes modifications to the RLF software in several key areas, which are briefly described below. Additional information can be found in README files within the various subdirectories that comprise this delivery. In particular, these README files contain information necessary to install the software for evaluation purposes. The user manuals for each of the major subsystems included in this delivery remain in their original STARS Foundations form. (JS)
- Published
- 1989
- Full Text
- View/download PDF
114. A system for automatically generating documentation for (C)LP programs
- Author
-
Manuel V. Hermenegildo
- Subjects
General Computer Science ,Computer science ,Interface (Java) ,0102 computer and information sciences ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Theoretical Computer Science ,Software ,Documentation ,README ,0202 electrical engineering, electronic engineering, information engineering ,Informática ,Unix ,business.industry ,Programming language ,Assertion language ,010201 computation theory & mathematics ,Scripting language ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,020201 artificial intelligence & image processing ,Compiler ,business ,computer ,Computer Science(all) - Abstract
We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to which version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and treat programs documented in this way normally. The documentation can be generated in many formats including texinfo, dvi, ps, pdf, info, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate “man” pages (Unix man page format), nicely formatted plain ascii “readme” files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc.
115. [Untitled]
- Subjects
0301 basic medicine ,Gaussian ,Luminance ,Retinal ganglion ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,symbols.namesake ,0302 clinical medicine ,README ,Genetics ,Logical matrix ,Molecular Biology ,Ecology, Evolution, Behavior and Systematics ,Mathematics ,Ecology ,Artificial neural network ,computer.file_format ,030104 developmental biology ,Computational Theory and Mathematics ,Modeling and Simulation ,symbols ,Raster graphics ,Algorithm ,computer ,030217 neurology & neurosurgery ,Decoding methods - Abstract
This package contains data for the publication "Nonlinear decoding of a complex movie from the mammalian retina" by Deny S. et al., PLOS Comput Biol (2018). The data consist of (i) responses of 91 spike-sorted, isolated rat retinal ganglion cells that pass stability and quality criteria, recorded on a multi-electrode array during presentation of a complex movie with many randomly moving dark discs. The responses are represented as a 648000 x 91 binary matrix, where the first index indicates the time bin of duration 12.5 ms and the second index the neural identity; the matrix entry is 0/1 if the neuron didn't/did spike in the particular time bin. (ii) A README file and a graphical illustration of the structure of the experiment, specifying how the 648000 time bins are split into epochs where 1, 2, 4, or 10 discs were displayed, and which stimulus segments are exact repeats or unique ball trajectories. (iii) A 648000 x 400 matrix of luminance traces for each of the 20 x 20 positions ("sites") in the movie frame, with timing locked to the recorded raster. The luminance traces are produced, as described in the manuscript, by filtering the raw disc movie with a small Gaussian spatial kernel. (A minimal loading sketch follows this entry.)
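Entry 115 specifies the array shapes precisely: a 648000 x 91 binary spike matrix in 12.5 ms bins and a 648000 x 400 luminance matrix. The snippet below is a minimal check-and-summarize sketch under the assumption that the two matrices are available as NumPy arrays; the .npy file names are placeholders, not the package's actual file names.

```python
# Hedged sketch of working with the matrices described in entry 115, assuming
# they load as plain NumPy arrays; the .npy file names are placeholders.
import numpy as np

spikes = np.load("spikes_648000x91.npy")        # 0/1 entries, 12.5 ms bins
luminance = np.load("luminance_648000x400.npy") # one trace per 20 x 20 site

assert spikes.shape == (648000, 91)
assert luminance.shape == (648000, 400)

bin_s = 0.0125                                  # 12.5 ms time bins
rates_hz = spikes.mean(axis=0) / bin_s          # mean firing rate per cell
print(f"population mean rate: {rates_hz.mean():.2f} Hz")
```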
116. README. WRITING NOTES
- Author
-
Andrew Benjamin
- Subjects
README ,media_common.quotation_subject ,Library science ,General Medicine ,Art ,media_common - Abstract
Meditations on the temporality of writing.
117. README.DOC: On Oulipo
- Author
-
Jean-Jacques Thomas and Lee Hilliker
- Subjects
Surprise ,Phrase ,History ,Literature and Literary Theory ,Nothing ,README ,media_common.quotation_subject ,Face (sociological concept) ,Paratext ,Set (psychology) ,Linguistics ,Computer technology ,media_common - Abstract
According to accepted critical standards, a good essay is one which manages both to elucidate and justify its title. I have therefore chosen a rather inexplicit heading in order to have the opportunity for immediate clarification that at the same time will permit me briefly to identify and contextualize the subject matter of this essay. Anyone who is even somewhat familiar with computer technology knows that numerous hours of practice are required to learn how to use computer software, be it for the manipulation of a text or of a spreadsheet. In many cases this apprenticeship is facilitated by what might be called a para-program: an electronic file called README.DOC. The label has become generic and is found as an introductory file on numerous software diskettes in France and in the United States. This preliminary document should be consulted before actual use begins. README.DOC is in some respects the computer version of the old "Preface," "Publisher's Note," or "Foreword," with which we are so familiar from (our present) book culture. However, it is even better rendered by the phrase "Directions For Use" [Mode d'emploi]. My linking the terms "Preface" with "Directions For Use" might come as a surprise to some since we usually associate "Preface" with a literary work and "Directions For Use" with a common household appliance. And one might legitimately ask what connects a novel of Balzac's with a food processor. On the face of it, nothing. However, recent French experimental literature provides such a connection since it relies on preliminary remarks (or what is now called the "paratext" [paratexte]) that are much closer to "Directions For Use" than to the traditional "Preface." It is for just this reason that an apparently straightforward title, such as "Prefaces," would have been inappropriate for this essay. Until now, the conventional utterances that precede a literary text have appeared in the form of declarative statements designed to direct the reader. Authors or their representatives set up boundaries and landmarks aimed at
- Published
- 1988
- Full Text
- View/download PDF