377 results for "Interpreted language"
Search Results
2. Minimizing Power Consumption in Cloud Computing: An Evaluation of Optimization Approaches
- Author
-
Soloveva, Julia
- Subjects
Virtuelle Maschinen (VMs) ,Virtualisierung ,Energieverbrauch ,Virtual machines (VMs) ,Cloud Computing ,Interpretierte Sprache ,Compiled language ,Energie-Optimierung ,Energy consumption ,Infrastructure as a Service (IaaS) ,Energy efficiency ,Energy-aware programming ,Virtualization ,Energieeffiziente Programmierung ,Energy optimization ,Kompilierte Sprache ,Energie-Effizienz ,Interpreted language ,Java ,Python - Abstract
The escalating focus on energy efficiency across various sectors accentuates the challenges inherent in cloud computing, an emerging technology with substantial energy requirements. Ensuring energy efficiency in cloud computing is crucial for tech businesses striving for CO2 neutrality. This thesis scrutinizes several optimization strategies that can potentially reduce energy consumption within cloud environments. Assessments are grounded on the concept of approximate energy allocation in these environments, while considering other key parameters such as execution time and total network traffic. A salient objective of this thesis is a comparative analysis between Python and Java, recreating the same application in the two languages and incorporating additional adjustments, such as merging virtual machines (VMs).
Findings suggest that the platform, when based on a Java application with a single VM, achieves a lower energy consumption.
- Published
- 2023
3. English learner education and teacher preparation in the U.S.: an interpretive language education policy analysis
- Author
-
M. Katherine Ford and Linda Harklau
- Subjects
Linguistics and Language ,Teacher preparation ,Pedagogy ,Interpreted language ,Language education ,Lens (geology) ,Education policy ,Sociology ,Policy analysis ,Language and Linguistics ,Education - Abstract
This article examines recent U.S. language education policies regarding English learners (ELs) and their teachers using an interpretive policy analysis theoretical lens (Wagenaar 2011) incorporatin...
- Published
- 2021
4. Compositionality
- Author
-
Kracht, Marcus
- Published
- 2011
- Full Text
- View/download PDF
5. On Equivalence Relations Between Interpreted Languages, with an Application to Modal and First-Order Language
- Author
-
Kai F. Wehmeier
- Subjects
Logic ,Computer science ,010102 general mathematics ,Interpreted language ,Structure (category theory) ,06 humanities and the arts ,Modal operator ,Ontology (information science) ,0603 philosophy, ethics and religion ,01 natural sciences ,Algebra ,Philosophy ,Modal ,060302 philosophy ,Equivalence relation ,0101 mathematics ,Algebraic number ,Equivalence (measure theory) - Abstract
I examine notions of equivalence between logics (understood as languages interpreted model-theoretically) and develop two new ones that invoke not only the algebraic but also the string-theoretic structure of the underlying language. As an application, I show how to construe modal operator languages as what might be called typographical notational variants of bona fide first-order languages.
- Published
- 2021
6. Next-generation Web Applications with WebAssembly and TruffleWasm
- Author
-
D. Muharemagic, Matija Šipek, Aleksander Radovan, Branko Mihaljević, and Skala, Karolj
- Subjects
FOS: Computer and information sciences ,Java ,Computer science ,business.industry ,Interoperability ,Interpreted language ,Software development ,Polyglot ,computer.file_format ,JavaScript ,Software Engineering (cs.SE) ,Computer Science - Software Engineering ,Web application ,Executable ,Software engineering ,business ,computer ,WebAssembly ,GraalVM ,Truffle ,Binary Format ,computer.programming_language - Abstract
In modern software development, the JavaScript ecosystem of frameworks and libraries used to build contemporary web applications presents many advantages. JavaScript is a widely known interpreted programming language, simple to learn and begin developing with, and with numerous third-party libraries and extensions. However, with the rise of highly user-interactive websites and browser-based games, JavaScript's execution engine can in some cases lack performance. Developers can therefore combine several other programming languages to create a polyglot, interoperable, user-interactive system for building efficient modern web applications. The interoperability modules offer significant advantages but also present execution challenges due to high complexity and longer compilation times. This paper explores WebAssembly, a binary-format compilation target with a low-level assembly-like language that other programming languages can target. The binary format allows near-native performance due to its compactness and its focus on low-level languages. Moreover, continuing our previous research on the GraalVM ecosystem, we analyzed a guest-language implementation of a WebAssembly-based system, TruffleWasm, hosted on GraalVM and the Truffle Java framework. This paper presents the architecture and a review of TruffleWasm within the GraalVM-based ecosystem, as well as performance test results from our academic environment., Comment: 6 pages, 3 figures
- Published
- 2021
7. Userspace Software Integrity Measurement
- Author
-
Tim Riemann and Michael Eckel
- Subjects
Service (systems architecture) ,Computer science ,Network security ,business.industry ,Event (computing) ,Interpreted language ,Trusted Computing ,Python (programming language) ,computer.software_genre ,Software ,Operating system ,Trusted Platform Module ,business ,computer ,computer.programming_language - Abstract
Today's computing systems are more interconnected and sophisticated than ever before. Especially in healthcare 4.0, services and infrastructures rely on cyber-physical systems (CPSs) and Internet of Things (IoT) devices. This adds to the complexity of these highly connected systems and their manageability. Even worse, emerging cyber attacks are becoming more varied, severe, and sophisticated, making healthcare one of the most important sectors with major security risks. The development of appropriate countermeasures constitutes one of the most complex and difficult challenges in cyber security research. Research areas include, among others, anomaly detection, network security, multi-layer event detection, cyber resiliency, and integrity protection. Securing the integrity of software running on a device is a desirable protection goal in the context of systems security. With a Trusted Platform Module (TPM), measured boot, and remote attestation, there exist technologies to ensure that a system has booted up correctly and runs only authentic software. The Linux Integrity Measurement Architecture (IMA) extends these principles into the operating system (OS), measuring native binaries before they are loaded. However, interpreted-language files, such as Java classes and Python scripts, are not considered executables and are not measured as such. Contemporary OSes ship with many of these, and it is vital to consider them as security-critical as native binaries. In this paper, we introduce Userspace Software Integrity Measurement (USIM) for Linux. USIM enables interpreters to measure, log, and irrevocably anchor critical events in the TPM. We develop a software library in C which provides TPM-based measurement functionality as well as the USIM service, which handles concurrent access to TPM-based event logging.
Further, we develop and implement a concept to realize high-frequency event logging on the slow TPM. We integrate this library into the Java Virtual Machine (JVM) to measure Java classes and show that it can easily be integrated into other interpreters. With performance measurements we demonstrate that our contribution is feasible and that its overhead is negligible.
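The irrevocable anchoring the abstract describes rests on the TPM's extend operation: a register is never overwritten, only folded forward as a hash chain, so an event can be neither removed nor reordered after the fact. A minimal Python sketch of that semantics follows; the function names are ours, not the USIM library's, and a real TPM would perform the chaining in hardware.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new register value is the hash of the
    old value concatenated with the digest of the new measurement."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def anchor_events(events: list) -> bytes:
    """Fold a sequence of measured events (e.g. loaded script files)
    into a single 32-byte anchor value. Order matters: any insertion,
    removal, or reordering yields a different final value."""
    pcr = bytes(32)  # PCRs start zeroed at boot
    for e in events:
        pcr = extend(pcr, e)
    return pcr
```

Because the final value depends on every event and its position, a verifier holding the event log can replay the chain and compare against the anchored register.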
- Published
- 2021
8. The Resh Programming Language for Multirobot Orchestration
- Author
-
Itai Segall, Martin D. Carroll, and Kedar S. Namjoshi
- Subjects
FOS: Computer and information sciences ,Multirobot systems ,Computer Science - Programming Languages ,business.industry ,Programming language ,Computer science ,Interpreted language ,computer.software_genre ,Automation ,Software Engineering (cs.SE) ,Computer Science - Robotics ,Computer Science - Software Engineering ,Task analysis ,Robot ,Orchestration (computing) ,business ,Programmer ,Robotics (cs.RO) ,computer ,Programming Languages (cs.PL) - Abstract
This paper describes Resh, a new, statically typed, interpreted programming language and associated runtime for orchestrating multirobot systems. The main features of Resh are: (1) It offloads much of the tedious work of programming such systems away from the programmer and into the language runtime; (2) It is based on a small set of temporal and locational operators; and (3) It is not restricted to specific robot types or tasks. The Resh runtime consists of three engines that collaborate to run a Resh program using the available robots in their current environment. This paper describes both Resh and its runtime and gives examples of its use., Comment: Accepted for publication at ICRA'21
- Published
- 2021
9. OSS Scripting System for Game Development in Rust
- Author
-
Rodrigo Oliveira Campos, Carla Rocha, Pablo Diego Silva da Silva, Universidade de Brasilia [Brasília] (UnB), Davide Taibi, Valentina Lenarduzzi, Terhi Kilamo, Stefano Zacchiroli, TC 2, and WG 2.13
- Subjects
Rust language ,OSS ,Video game development ,business.industry ,Computer science ,Amethyst game engine ,Interpreted language ,Software development ,Python (programming language) ,computer.software_genre ,Software portability ,Entity component system ,Foreign function interface ,Scripting language ,Game engine ,Scripting system ,[INFO]Computer Science [cs] ,Tool paper ,business ,Software engineering ,computer ,Rust (programming language) ,computer.programming_language - Abstract
International audience; Software development for electronic games has remarkable performance and portability requirements, and the system and low-level languages usually provide those. This ecosystem became homogeneous at commercial levels around C and C++, both for open source or proprietary solutions. However, innovations brought other possibilities that are still growing in this area, including Rust and other system languages. Rust has low-level language properties and modern security guarantees in access to memory, concurrency, dependency management, and portability. The Open Source game engine Amethyst has become a reference solution for game development in Rust, has a large and active community, and endeavors in being an alternative to current solutions. Amethyst brings parallelism and performance optimizations, with the advantages of the Rust language. This paper presents scripting concepts that allow the game logic to be implemented in an external interpreted language. We present a scripting module called Legion Script that was implemented for the entity and component system (ECS) called Legion, part of the Amethyst organization. As a Proof-of-Concept (POC), we perform the Python code interpretation using the Rust Foreign Function Interface (FFI) with CPython. This POC added scripting capabilities to Legion. We also discuss the benefit of using the alternative strategy of developing a POC before contributing to OSS communities in emergent technologies.
- Published
- 2021
10. Pedagogical strategies for developing interpretive language about images
- Author
-
Louise J. Ravelli
- Subjects
060201 languages & linguistics ,Structure (mathematical logic) ,Linguistics and Language ,media_common.quotation_subject ,05 social sciences ,Interpreted language ,050301 education ,The arts ,Language and Linguistics ,Education ,Focus (linguistics) ,Style (sociolinguistics) ,Reading (process) ,0602 languages and literature ,ComputingMilieux_COMPUTERSANDEDUCATION ,Mathematics education ,Psychology ,Set (psychology) ,0503 education ,Curriculum ,media_common - Abstract
Purpose The purpose of this paper is to reflect on pedagogical strategies which support the teaching of critical analysis of visual and multimodal texts in a tertiary-level course for Arts students. Design/methodology/approach The paper describes strategies which focus on developing students’ abilities to express interpretive critique, as opposed to mere description. These strategies give students strong scaffolding towards success in their interpretive writing. The course in question is a tertiary-level Arts course which teaches Kress and van Leeuwen’s (2006) approach to “reading images” in relation to contemporary media texts. The basic structure of the course is described, along with the macro steps which underpin the pedagogy. Examples of highly successful and less successful student writing are compared to reveal the key components of effective interpretive answers. Findings In addition to the normal expectations regarding essay structure and style, and in addition to mastery of the technicality of the course, successful and less successful student writing depends on their mastery of a specific set of moves within the essay. These moves integrate textual observations with clear explanations and a strong relation to interpretation. Practical implications While the course and strategies discussed are for tertiary-level students, the strategies described are adaptable to primary and secondary levels also. Multimodal texts are an integral part of the English curriculum, and all teachers need to explore strategies for enabling their students’ critical engagement with such texts. Originality/value Visual and multimodal texts are an exciting and also challenging part of English curricula, and new analytical frameworks and pedagogical strategies are needed to tackle these texts. 
In particular, the gap between simply describing visual resources (applying the tools) and critical analysis (using the tools) is vast, and specific pedagogical strategies are needed to help students develop the necessary interpretive language.
- Published
- 2019
11. CHORT: an original system for cardiological database hospital reports
- Author
-
Dee, D., Derwael, C., Matton, J.L., Vanbutsele, R., Brohet, Christian, De Kock, Marc, Computers in Cardiology 1994, UCL - MD/MINT - Département de médecine interne, UCL - (SLuc) Service de pathologie cardiovasculaire, UCL - MD/CHIR - Département de chirurgie, and UCL - (SLuc) Service d'anesthésiologie
- Subjects
Flexibility (engineering) ,Data acquisition ,Generator (computer programming) ,Data collection ,Database ,Computer science ,Control (management) ,Interpreted language ,Data input ,computer.software_genre ,computer ,Data administration - Abstract
Data collection and management are a tedious and time-consuming activity. With CHORT (Cardiac HOspital ReporT), the authors have designed a new approach to integrating all data related to a specific patient. CHORT allows interactive data input or data acquisition from external systems. CHORT is able to access other local databases. CHORT can initiate REGAL, a report generator, at any time during a patient's hospital stay. All information is converted into fluent French text before being integrated into the report and merged with free text. REGAL is an interpreted language permitting flexibility in data selection and control over data display and page layout. CHORT improves and speeds up medical file access, suppresses typed report output, and provides a database for clinical and scientific purposes.
- Published
- 2021
12. Cooperative Dynamic Programmable Devices Using Actor Model for Embedded Systems of Microcontrollers
- Author
-
Sharil Tumin and Sylvia Encheva
- Subjects
Microcontroller ,Software ,business.industry ,Software deployment ,Computer science ,Coroutine ,Concurrency ,Embedded system ,Interpreted language ,Actor model ,business ,Agile software development - Abstract
IoT devices are everywhere. Developing embedded system software is a challenge. Embedded systems are inherently event-driven and concurrent multitasks. Coroutine-based concurrency is better suited than multi-threading on microcontrollers with limited computing resources. Using higher interpreted language and a robust cooperative model will help. An Actor Model implemented in MicroPython can provide a solution for agile development and deployment of dynamically programmable devices in actor-based networks.
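The combination the abstract proposes, coroutine-based concurrency plus an actor model, can be sketched in a few lines of plain Python (MicroPython-compatible in spirit, though the class and function names here are our own illustration, not the paper's implementation): each actor is a generator that suspends until a message arrives, and a cooperative scheduler delivers mailbox messages one at a time, with no threads or locks.

```python
from collections import deque

def doubler(name, results):
    """A coroutine actor: suspend at `yield` until a message arrives,
    process it, then suspend again -- cooperative multitasking."""
    while True:
        msg = yield
        results.append((name, msg * 2))

class Scheduler:
    """Round-robin dispatcher delivering queued messages to actors."""
    def __init__(self):
        self.actors = {}
        self.mailbox = deque()

    def spawn(self, name, gen_fn, *args):
        actor = gen_fn(name, *args)
        next(actor)  # prime the coroutine to its first yield
        self.actors[name] = actor

    def send(self, name, msg):
        self.mailbox.append((name, msg))

    def run(self):
        while self.mailbox:
            name, msg = self.mailbox.popleft()
            self.actors[name].send(msg)
```

Because actors only run when a message is delivered, this model fits microcontrollers where preemptive threading is too expensive.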
- Published
- 2021
13. Extending the R Language with a Scalable Matrix Summarization Operator
- Author
-
Carlos Ordonez, Siva Uday Sampreeth Chebolu, and Sikder Tahsin Al-Amin
- Subjects
Multiplication algorithm ,Source code ,Computer science ,media_common.quotation_subject ,Interpreted language ,02 engineering and technology ,Parallel computing ,Automatic summarization ,Matrix multiplication ,Matrix (mathematics) ,020204 information systems ,Spark (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Multiplication ,media_common - Abstract
Analysts prefer simpler interpreted languages to program their computations. Prominent languages include R, Python, and Matlab. On the other hand, analysts aim to compute mathematical models as fast as possible, especially with large data sets. Data summarization remains a fundamental technique to accelerate machine learning computations. Based on this motivation, we propose a novel summarization mechanism computed via a single matrix multiplication in the statistical R language. We show our summarization benefits a large family of linear models, including Linear Regression, PCA, and Naive Bayes. We present a subsystem that enables exploiting summarization by detecting Gramian matrix products in R. We optimize the existing R source code by overriding the internal R matrix multiplication algorithm with ours. Our solution can be plugged into R and helps solve problems where a similar matrix multiplication appears, much faster and without RAM limitations. Moreover, our solution benefits from the parallel processing ability of the summarization matrix. We present an experimental validation showing our subsystem incurs little overhead, since it works on source code, while providing much faster speeds compared to the R language's built-in functions. To round out our comparisons, we also compare our subsystem with Spark on parallel machines. For our solution, we assume that data can be in HDFS, on disk, or already partitioned. Our solution outperforms Spark in most cases, showing we can also compete in the big-data space.
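The core idea, summarizing a data set as a small Gramian matrix from which several linear models can then be solved without revisiting the data, can be sketched outside R. Below is a pure-Python illustration for simple linear regression under our own naming (the paper's operator works on general matrices inside R; this one-dimensional version only shows why the summary suffices).

```python
def gamma_summary(x, y):
    """Build the summarization (Gramian) matrix Gamma = Z^T Z for
    Z = [1, x, y]: a single pass over the data, constant-size result."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    syy = sum(v * v for v in y)
    return [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]

def linreg_from_gamma(g):
    """Solve y = a + b*x from Gamma alone -- the raw data is no
    longer needed once the summary exists."""
    n, sx, sy = g[0]
    sxx, sxy = g[1][1], g[1][2]
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b
```

Because Gamma has constant size regardless of the number of rows, it can be computed in parallel over partitions and merged by addition, which is what makes the approach competitive with Spark.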
- Published
- 2020
14. ACE
- Author
-
Anthony Byrne, Ayse K. Coskun, and Shripad Nadgowda
- Subjects
business.industry ,Computer science ,Interpreted language ,Cloud computing ,computer.software_genre ,Software ,Workflow ,Software deployment ,Application security ,Component-based software engineering ,Container (abstract data type) ,Operating system ,business ,computer - Abstract
While much of the software running on today's serverless platforms is written in easily-analyzed high-level interpreted languages, many performance-conscious users choose to deploy their applications as container-encapsulated compiled binaries on serverless container platforms such as AWS Fargate or Google Cloud Run. Modern CI/CD workflows make this deployment process nearly-instantaneous, leaving little time for in-depth manual application security reviews. This combination of opaque binaries and rapid deployment prevents cloud developers and platform operators from knowing if their applications contain outdated, vulnerable, or legally-compromised code. This paper proposes Approximate Concrete Execution (ACE), a just-in-time binary analysis technique that enables automatic software component discovery for serverless binaries. Through classification and search engine experiments with common cloud software packages, we find that ACE scans binaries 5.2x faster than a state-of-the-art binary analysis tool, minimizing the impact on deployment and cold-start latency while maintaining comparable recall.
- Published
- 2020
15. Overview of JavaScript Engines for Resource-Constrained Microcontrollers
- Author
-
Kai Grunert
- Subjects
Focus (computing) ,business.industry ,Programming language ,Computer science ,Resource constrained ,Interpreted language ,computer.software_genre ,JavaScript ,Task (computing) ,Microcontroller ,Internet of Things ,business ,computer ,Interpreter ,computer.programming_language - Abstract
IoT devices often use small, constrained micro-controllers to implement their functionality. Usually, they are programmed with languages like C or C++, but there is a trend to use interpreted languages for this task. In this paper, we focus on JavaScript. We discuss the advantages and disadvantages of using this interpreted language for microcontroller development, we give an overview of the available JavaScript engines for constrained devices, and we compare these interpreters' general properties. One finding shows that the projects can be divided into embeddable interpreters (JerryScript, Duktape, mJS) and standalone runtimes (Espruino, Moddable, Mongoose OS). It was also identified that the interpreters follow different architectural approaches.
- Published
- 2020
16. Scalable Machine Learning in C++ (CAMEL)
- Author
-
Anshuman Raina, Saumye Mehrotra, Kashish Khullar, Harshit Khandelwal, and Moolchand Sharma
- Subjects
Traverse ,business.industry ,Computer science ,Interpreted language ,computer.software_genre ,Perceptron ,Machine learning ,Expert system ,Everyday tasks ,Scalability ,Artificial intelligence ,Compiler ,business ,Compiled language ,computer - Abstract
As the technology to collect and operate on data from everyday tasks has grown, there has been a significant rise in the conclusions extrapolated from datasets. This has made Machine Learning seemingly omnipresent in decision-making processes around the world. From decision-driven programs to expert systems, we count on Machine Learning for optimization and for increasing the efficiency of subsystems. Yet we find that Machine Learning today is constrained by the programming language used in development. In this paper, we created a library developed purely in C++, a widely used compiled language. We also aim to measure the performance of compiled versus interpreted languages after developing the algorithms. The scientific library "Armadillo" is used to ease many math-related functions and to handle dynamic datasets instead of statically coded matrices. This paper aims to highlight the differences between compiled and interpreted languages and to determine whether compiled languages are a better alternative for ML algorithms. This research is also intended as a continuing effort toward a library, like TensorFlow, that offers Application Program Interfaces (APIs) with C as a medium. Lastly, we also aim to increase the scalability of these algorithms to remove any language-based constraints. These are the main reasons for developing the C++ Augmented Machine Learning (CAMEL) library.
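The compiled-versus-interpreted comparison the paper sets out to measure can be previewed inside Python itself: an explicit interpreted loop pays bytecode-dispatch cost on every iteration, while the built-in `sum` runs as compiled C. This is only a sketch of the benchmarking methodology, not the paper's CAMEL-versus-Python experiment, and the helper names are ours.

```python
import time

def bench(fn, *args, repeat=5):
    """Return the best wall-clock time over several runs (best-of-N
    reduces noise from the OS scheduler)."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

def interpreted_sum(xs):
    """Summation written as an explicit interpreted loop."""
    total = 0.0
    for v in xs:  # each iteration pays interpreter dispatch cost
        total += v
    return total

data = [float(i) for i in range(100_000)]
t_loop = bench(interpreted_sum, data)   # interpreted path
t_compiled = bench(sum, data)           # C-implemented builtin
```

On typical CPython builds the explicit loop is several times slower, which is the gap a fully compiled library such as CAMEL aims to close for whole algorithms rather than single reductions.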
- Published
- 2020
17. Risk Prediction Of Chronic Kidney Disease Using Machine Learning Algorithms
- Author
-
Pritilata, Nazmus Sakib, Ashikul Islam, Tanzila Islam, Sadaf Salman Pantho, Shanila Yunus Yashfi, and Mohammad Shahbaaz
- Subjects
Artificial neural network ,business.industry ,Computer science ,Interpreted language ,Developing country ,Python (programming language) ,medicine.disease ,Machine learning ,computer.software_genre ,Random forest ,Chronic Kidney Diseases ,medicine ,Artificial intelligence ,business ,computer ,computer.programming_language ,Kidney disease - Abstract
CKD is a serious cause of death and disability. It was the 27th leading cause of death in 1990 and rose to the 18th leading cause in 2010. Nearly 1 million people died from it in 2013. People in developing countries are particularly affected by CKD. We analyzed data from CKD patients and propose a system that predicts the risk of CKD. We used data from 455 patients: an online dataset collected from the UCI Machine Learning Repository and a real-time dataset collected from Khulna City Medical College. We used Python, a high-level interpreted programming language, to develop our system. We trained the data using 10-fold cross-validation and applied Random Forest and an artificial neural network (ANN). The accuracy achieved by the Random Forest algorithm is 97.12% and by the ANN is 94.5%. This system will help with early detection of chronic kidney disease.
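The 10-fold cross-validation protocol the authors use generalizes to any classifier: split the data into ten folds, train on nine, test on the held-out one, and average the accuracies. A minimal pure-Python sketch follows, with a trivial majority-vote model standing in for the paper's Random Forest and ANN; all names here are illustrative.

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k contiguous folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(X, y, train_fn, predict_fn, k=10):
    """Average held-out accuracy over k train/test splits."""
    accs = []
    for test_idx in k_fold_indices(len(X), k):
        test = set(test_idx)
        X_tr = [X[i] for i in range(len(X)) if i not in test]
        y_tr = [y[i] for i in range(len(X)) if i not in test]
        model = train_fn(X_tr, y_tr)
        correct = sum(predict_fn(model, X[i]) == y[i] for i in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / k

def train_majority(X_tr, y_tr):
    """Toy stand-in classifier: memorize the majority training label."""
    return max(set(y_tr), key=y_tr.count)

def predict_majority(model, x):
    return model
```

Swapping `train_majority`/`predict_majority` for a real learner (e.g. a random forest) reproduces the evaluation loop the paper describes.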
- Published
- 2020
18. PSS/E Based Power System Stabilizer Tuning Tool
- Author
-
Yulian Rangelov, Ngoc Tuan Trinh, Ara Panosyan, and Nikolay Nikolaev
- Subjects
phase compensation ,PSS/E ,Operating point ,Computer science ,Interpreted language ,Control engineering ,Python (programming language) ,stabilizer ,computer.software_genre ,Dynamic simulation ,Electric power system ,tuning ,Scripting language ,Linearization ,Maximum power transfer theorem ,computer ,Python ,computer.programming_language - Abstract
Since their introduction, Power System Stabilizers have become widely accepted as an efficient supplementary excitation controller for extending the power transfer capability limits of the system by adding damping to the low-frequency electromechanical modes of oscillation, improving overall system stability. The purpose of the tool presented in this paper is to enable easy-to-use, flexible, and accurate tuning of various types of Power System Stabilizers, both for SMIB and multi-machine system configurations. The tool is built in Python and relies on the dynamic models of the PSS/E library and the PSS/E steady-state and dynamic simulation modules. The key component of this new tool is PSS/E's linearization module, which generates the linear state-space model around a steady-state operating point of the full non-linear dynamic model of the system. It provides valuable tools for analyzing the stability phenomena, optimizing the stabilizer parameters, and validating the results. The core features of the tool were demonstrated with the well-established Kundur four-machine test system. Utilizing an interpreted language such as Python makes it very convenient for potential users to flexibly create custom scripts to perform the analysis and optimization according to their needs and preferences.
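The phase-compensation calculation at the heart of such tuning (one of the listed subject tags) reduces to the phase contributed by a lead-lag block (1 + jωT1)/(1 + jωT2) at the oscillation frequency of interest. The sketch below shows that calculation in plain Python; it illustrates the arithmetic a tuning tool performs and is not PSS/E's API (the function names are ours).

```python
import math

def lead_lag_phase(T1, T2, w):
    """Phase (radians) contributed by a lead-lag block
    (1 + j*w*T1) / (1 + j*w*T2) at angular frequency w (rad/s)."""
    return math.atan(w * T1) - math.atan(w * T2)

def pick_T1(T2, w, target_phase):
    """Solve for the lead time constant T1 that yields a target
    phase lead at w, given a chosen lag constant T2
    (valid while target_phase + atan(w*T2) < pi/2)."""
    return math.tan(target_phase + math.atan(w * T2)) / w
```

A tuning workflow evaluates the phase lag of the generator/exciter path at the electromechanical mode frequency (e.g. from the linearized state-space model) and chooses T1, T2 so the stabilizer compensates it.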
- Published
- 2020
19. WTFHE: neural-netWork-ready Torus Fully Homomorphic Encryption
- Author
-
Martin Novotny and Jakub Klemsa
- Subjects
Scheme (programming language) ,Service (systems architecture) ,Focus (computing) ,Theoretical computer science ,Artificial neural network ,business.industry ,Process (engineering) ,Computer science ,Interpreted language ,Homomorphic encryption ,0102 computer and information sciences ,Encryption ,01 natural sciences ,03 medical and health sciences ,0302 clinical medicine ,010201 computation theory & mathematics ,030220 oncology & carcinogenesis ,business ,computer ,computer.programming_language - Abstract
We are currently witnessing two rising trends with huge potential to threaten our privacy: the invasive sensors of the Internet of Things (IoT), and powerful data mining techniques, in particular Neural Networks (NNs). For this reason, powerful countermeasures must be called into service, namely end-to-end encryption. Such an approach, however, requires an encryption scheme that enables processing of the encrypted data; this is known as Fully Homomorphic Encryption (FHE). In this paper, we revisit an FHE scheme named TFHE, which is suitable for evaluating NNs over encrypted input data, and we suggest incorporating a verifiability feature into the evaluation process. Since there already exist other variants of the original TFHE scheme (currently implemented only in C++, which is rigid), we further introduce a library for rapid prototyping of new concepts related to TFHE. Our library is implemented in Ruby, an interpreted language with an interactive shell, so any new method can be speedily verified before being implemented in a high-performance library.
- Published
- 2020
20. JIT Leaks: Inducing Timing Side Channels through Just-In-Time Compilation
- Author
-
Tegan Brennan, Tevfik Bultan, and Nicolás Rosner
- Subjects
Profiling (computer programming) ,021110 strategic, defence & security studies ,Java ,Programming language ,Computer science ,String (computer science) ,Interpreted language ,0211 other engineering and technologies ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,JavaScript ,Oracle ,Template ,Just-in-time compilation ,Virtual machine ,0202 electrical engineering, electronic engineering, information engineering ,computer ,computer.programming_language - Abstract
Side-channel vulnerabilities in software are caused by an observable imbalance in resource usage across different program paths. We show that just-in-time (JIT) compilation, which is crucial to the runtime performance of modern interpreted languages, can introduce timing side channels in cases where the input distribution to the program is non-uniform. Such timing channels can enable an attacker to infer potentially sensitive information about predicates on the program input. We define three attack models under which such side channels are harnessable and five vulnerability templates to detect susceptible code fragments and predicates. We also propose profiling algorithms to generate the representative statistical information necessary for the attacker to perform accurate inference. We systematically evaluate the strength of these JIT-based side channels on the java.lang.String, java.lang.Math, and java.math.BigInteger classes from the Java standard library, and on the JavaScript built-in objects String, Math, and Array. We carry out our evaluation using two widely adopted, open-source, JIT-enhanced runtime engines for the Java and JavaScript languages: the Oracle HotSpot Java Virtual Machine and the Google V8 JavaScript engine, respectively. Finally, we demonstrate a few examples of JIT-based side channels in the Apache Shiro security framework and the GraphHopper route planning server, and show that they are observable over the public Internet.
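The inference step works because the JIT-compiled (hot) path runs measurably faster than the interpreted (cold) one, so observed timings separate into two classes. A much-simplified sketch of a timing distinguisher follows, using synthetic timing samples; the paper's actual profiling algorithms and attack models are richer, and the function names here are our own.

```python
from statistics import median

def profile_threshold(fast_samples, slow_samples):
    """Offline profiling phase: place a decision threshold midway
    between the median timing of the JIT-compiled (fast) path and
    the interpreted (slow) path."""
    return (median(fast_samples) + median(slow_samples)) / 2

def infer_predicate(observed, threshold):
    """Attack phase: a median timing below the threshold suggests the
    hot, JIT-compiled path was taken, leaking which input class the
    victim processed."""
    return median(observed) < threshold
```

Using the median rather than the mean makes the distinguisher robust to occasional outlier measurements (e.g. garbage-collection pauses), which matters when observing over a network.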
- Published
- 2020
21. Efficiently and easily integrating differential equations with JiTCODE, JiTCDDE, and JiTCSDE
- Author
-
Gerrit Ansmann
- Subjects
FOS: Computer and information sciences ,Computer science ,Differential equation ,FOS: Physical sciences ,General Physics and Astronomy ,Dynamical Systems (math.DS) ,01 natural sciences ,010305 fluids & plasmas ,Stochastic differential equation ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,0103 physical sciences ,FOS: Mathematics ,Mathematics - Dynamical Systems ,010306 general physics ,Mathematical Physics ,computer.programming_language ,Stochastic process ,Applied Mathematics ,Interpreted language ,Statistical and Nonlinear Physics ,Delay differential equation ,Computational Physics (physics.comp-ph) ,Complex network ,Python (programming language) ,Numerical integration ,Computer Science - Mathematical Software ,Physics - Computational Physics ,Mathematical Software (cs.MS) ,computer ,Algorithm - Abstract
We present a family of Python modules for the numerical integration of ordinary, delay, or stochastic differential equations. The key features are that the user enters the derivative symbolically and it is just-in-time-compiled, allowing the user to efficiently integrate differential equations from a higher-level interpreted language. The presented modules are particularly suited for large systems of differential equations such as those used to describe dynamics on complex networks. Through the selected method of input, the presented modules also allow almost complete automatization of the process of estimating regular as well as transversal Lyapunov exponents for ordinary and delay differential equations. We conceptually discuss the modules' design, analyze their performance, and demonstrate their capabilities by application to timely problems.
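The core idea — the user enters the derivative symbolically and it is compiled once before integration — can be sketched in plain Python (this is a generic illustration, not the JiTCODE/JiTCDDE/JiTCSDE API; the real modules compile to C):

```python
# Sketch of "enter the derivative symbolically, compile it once, then
# integrate fast". Generic illustration of the idea behind JiTCODE-style
# tools, not their actual interface.

def make_rhs(expression):
    """Compile a derivative dy/dt = f(t, y) given as a string."""
    code = compile(expression, "<derivative>", "eval")
    def rhs(t, y):
        return eval(code, {"__builtins__": {}}, {"t": t, "y": y})
    return rhs

def integrate_rk4(rhs, y0, t0, t1, steps):
    """Classical fixed-step fourth-order Runge-Kutta integrator."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h * k1 / 2)
        k3 = rhs(t + h / 2, y + h * k2 / 2)
        k4 = rhs(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# dy/dt = -y with y(0) = 1 has the exact solution y(1) = exp(-1).
rhs = make_rhs("-y")
y1 = integrate_rk4(rhs, 1.0, 0.0, 1.0, 100)
```

Compiling the expression once keeps string parsing out of the inner integration loop, which is the same reason the presented modules compile the symbolic derivative just in time.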
- Published
- 2020
22. Implicit Formulations of Bounded-Impulse Trajectory Models for Preliminary Interplanetary Low-Thrust Analysis
- Author
-
Steven L. McCarty, Jeffrey Pekosh, Kaushik Ponnapalli, and Robert D. Falck
- Subjects
business.industry ,Computer science ,Interpreted language ,Thrust ,Modular design ,Impulse (physics) ,symbols.namesake ,Software ,Bounded function ,Jacobian matrix and determinant ,symbols ,business ,Interplanetary spaceflight ,Algorithm - Abstract
The bounded-impulse approach to low-thrust interplanetary trajectory optimization is widely used. In an effort to efficiently implement this approach using NASA’s OpenMDAO optimization software, the authors have implemented implicit formulations of the forward-shooting/backward-shooting methods commonly used in bounded-impulse models. These implicit approaches allow for vectorization of the underlying calculations, which can significantly reduce runtime in interpreted languages. An implicit approach may be converged either by using an underlying nonlinear solver to drive the state propagation to convergence, or by posing the propagation as a constraint in an optimizer-driven multiple-shooting approach. Significant computational efficiency gains are realized through the modular approach to unified derivatives. Further computational efficiency is achieved by capitalizing on the sparsity of the constraint Jacobian matrix. This work demonstrates that a vectorized multiple-shooting approach for propagating a state-time history is superior in terms of computational efficiency as the number of segments in the state propagation is increased.
- Published
- 2020
23. RAPL: A Domain Specific Language for Resource Allocation of Indivisible Goods
- Author
-
Cristopher Zhunio, Rigoberto Fonseca-Delgado, Israel Pineda, and Franklin Camacho
- Subjects
Structure (mathematical logic) ,Domain-specific language ,Parsing ,Programming language ,Computer science ,Interpreted language ,Pareto principle ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Digital subscriber line ,0202 electrical engineering, electronic engineering, information engineering ,Resource allocation ,020201 artificial intelligence & image processing ,computer ,Interpreter - Abstract
We propose a new Domain Specific Language (DSL) to represent and solve resource allocation problems of indivisible goods. Resource allocation problems can be represented using matrices; this representation is flexible and has interesting mathematical properties that the solution can exploit. However, programming such a problem in a general-purpose language can introduce an unnecessary level of complexity. This new DSL allows the user to declare the agents and their preferences over resources. The language can also manipulate those elements with the proper operations involved in the resource allocation problem. The proposed DSL can measure efficiency criteria such as Pareto optimality, measure fairness criteria such as envy-freeness, and represent results using matrices. This work shows the structure of the interpreter of this language and provides details about its scanner, parser, and interpreter. This DSL is called the Resource Allocation Programming Language (RAPL). We hope that the ease of use of this DSL can motivate further research on this topic.
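As a flavor of the criteria such a DSL evaluates, here is a plain-Python envy-freeness check over a valuation matrix (the encoding is an assumption for illustration, not RAPL's actual representation):

```python
# Envy-freeness over an allocation of indivisible goods: no agent values
# another agent's bundle more than its own. The matrix encoding below is
# an illustrative assumption, not RAPL's internal representation.

def is_envy_free(valuations, allocation):
    """valuations[i][g]: agent i's value for good g.
    allocation[i]: set of goods assigned to agent i."""
    def bundle_value(agent, bundle):
        return sum(valuations[agent][g] for g in bundle)
    agents = range(len(valuations))
    return all(
        bundle_value(i, allocation[i]) >= bundle_value(i, allocation[j])
        for i in agents for j in agents
    )

valuations = [
    [8, 1, 2],   # agent 0 mostly wants good 0
    [2, 7, 3],   # agent 1 mostly wants good 1
]
fair = is_envy_free(valuations, [{0, 2}, {1}])      # each gets what it wants
unfair = is_envy_free(valuations, [{1}, {0, 2}])    # bundles swapped
```

A DSL's value is that this check, Pareto optimality, and the matrix bookkeeping become built-in operations instead of ad hoc code like the above.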
- Published
- 2020
24. CoSaTa: A Constraint Satisfaction Solver and Interpreted Language for Semi-Structured Tables of Sentences
- Author
-
Peter Jansen
- Subjects
Programming language ,Computer science ,Interpreted language ,Inference ,02 engineering and technology ,Solver ,Constraint satisfaction ,computer.software_genre ,Constraint (information theory) ,03 medical and health sciences ,0302 clinical medicine ,Simple (abstract algebra) ,030221 ophthalmology & optometry ,0202 electrical engineering, electronic engineering, information engineering ,Question answering ,020201 artificial intelligence & image processing ,computer ,Interpreter - Abstract
This work presents CoSaTa, an intuitive constraint satisfaction solver and interpreted language for knowledge bases of semi-structured tables expressed as text. The stand-alone CoSaTa solver allows easily expressing complex compositional "inference patterns" for how knowledge from different tables tends to connect to support inference and explanation construction in question answering and other downstream tasks, while including advanced declarative features and the ability to operate over multiple representations of text (words, lemmas, or part-of-speech tags). CoSaTa also includes a hybrid imperative/declarative interpreted language for expressing simple models through minimally-specified simulations grounded in constraint patterns, helping bridge the gap between question answering, question explanation, and model simulation. The solver and interpreter are released as open source. Screencast Demo: https://youtu.be/t93Acsz7LyE
- Published
- 2020
25. The Helium Cryptocurrency Project
- Author
-
Karan Singh Garewal
- Subjects
Development environment ,Cryptocurrency ,Syntax (programming languages) ,Computer science ,Programming language ,Interpreted language ,Python (programming language) ,computer.software_genre ,computer ,Simple (philosophy) ,computer.programming_language - Abstract
In this chapter, we shall begin the construction of the Helium cryptocurrency. Our implementation is going to include all of the features that you expect in a production-quality cryptocurrency. Helium is modeled after Bitcoin, which is the canonical standard. We will implement Helium in Python. Python is a simple yet powerful language whose syntax is lucid and exceptionally easy to understand. Python is an ideal language for clearly exposing the algorithms and techniques that constitute the backbone of a cryptocurrency. The downside of using Python is that it is an interpreted language and thus inherently slow. This limitation makes it unsuitable for a production environment.
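A minimal sketch of the hash-linked block structure that a Bitcoin-style cryptocurrency such as Helium builds on, using only the standard library (field names are illustrative, not Helium's actual schema):

```python
import hashlib
import json

# Minimal hash-linked blocks: each block commits to its predecessor's
# hash, so altering history breaks the chain. Field names are
# illustrative, not Helium's actual block schema.

def block_hash(block):
    # Serialize deterministically so the same block always hashes the same.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = {"height": 0, "prev_hash": "0" * 64, "transactions": []}
block1 = {"height": 1, "prev_hash": block_hash(genesis),
          "transactions": ["alice->bob:5"]}

# Tampering with an earlier block changes its hash and breaks the link.
tampered = dict(genesis, transactions=["mallory->mallory:999"])
link_ok = block1["prev_hash"] == block_hash(genesis)
link_broken = block1["prev_hash"] != block_hash(tampered)
```

This clarity of expression is exactly why the chapter argues Python suits exposing cryptocurrency algorithms, even though its interpreted nature rules it out for production.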
- Published
- 2020
26. Towards Measuring Supply Chain Attacks on Package Managers for Interpreted Languages
- Author
-
Ranjita Pai Kasturi, Wenke Lee, Omar Alrawi, Ruian Duan, Ryan Elder, and Brendan Saltaformaggio
- Subjects
FOS: Computer and information sciences ,Computer Science - Cryptography and Security ,Computer science ,Supply chain ,Interpreted language ,computer.software_genre ,Computer security ,Pipeline (software) ,Metadata ,Software development process ,Program analysis ,Malware ,computer ,Cryptography and Security (cs.CR) ,Codebase - Abstract
Package managers have become a vital part of the modern software development process. They allow developers to reuse third-party code, share their own code, minimize their codebase, and simplify the build process. However, recent reports showed that package managers have been abused by attackers to distribute malware, posing significant security risks to developers and end-users. For example, eslint-scope, a package with millions of weekly downloads in Npm, was compromised to steal credentials from developers. To understand the security gaps and the misplaced trust that make recent supply chain attacks possible, we propose a comparative framework to qualitatively assess the functional and security features of package managers for interpreted languages. Based on this qualitative assessment, we apply well-known program analysis techniques such as metadata, static, and dynamic analysis to study registry abuse. Our initial efforts found 339 new malicious packages that we reported to the registries for removal. The package manager maintainers confirmed 278 (82%) of the 339 reported packages, three of which had more than 100,000 downloads. For these packages we were issued official CVE numbers to help expedite their removal from infected victims. We outline the challenges of tailoring program analysis tools to interpreted languages and release our pipeline as a reference point for the community to build on and help in securing the software supply chain. To appear at the NDSS Symposium 2021.
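One kind of metadata analysis such a study might apply — shown here as an illustrative stdlib sketch, not the paper's actual pipeline — is flagging newly published package names that sit suspiciously close to popular ones (typosquatting):

```python
import difflib

# Illustrative metadata check in the spirit of registry-abuse analysis
# (not the paper's implementation): flag new package names that are
# nearly identical to popular ones, a common typosquatting pattern.

POPULAR = ["requests", "numpy", "lodash", "express", "flask"]

def typosquat_candidates(new_name, popular=POPULAR, cutoff=0.85):
    matches = difflib.get_close_matches(new_name, popular, n=3, cutoff=cutoff)
    # An exact match is the real package, not a squat.
    return [m for m in matches if m != new_name]

suspicious = typosquat_candidates("reqeusts")   # transposed letters
clean = typosquat_candidates("requests")        # the genuine package
```

Real pipelines combine several such signals (install hooks, maintainer churn, obfuscated code) with static and dynamic analysis; name distance alone is a weak indicator.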
- Published
- 2020
- Full Text
- View/download PDF
27. EmbML Tool: Supporting the use of Supervised Learning Algorithms in Low-Cost Embedded Systems
- Author
-
Lucas Tsutsui da Silva, Gustavo E. A. P. A. Batista, and Vinicius M. A. Souza
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,010401 analytical chemistry ,Interpreted language ,02 engineering and technology ,01 natural sciences ,0104 chemical sciences ,020901 industrial engineering & automation ,Software ,Embedded system ,business ,Classifier (UML) ,Supervised training - Abstract
Machine Learning (ML) is becoming a ubiquitous technology employed in many real-world applications. In some applications, sensors measure the environment while ML algorithms are responsible for interpreting the data. These systems often face three main restrictions: power consumption, cost, and lack of infrastructure. Therefore, we need highly efficient classifiers suitable for execution on resource-constrained hardware. However, this scenario conflicts with the state of practice in ML, in which classifiers are frequently implemented in high-level interpreted languages, make unrestricted use of floating-point operations, and assume plentiful resources. In this paper, we present a software tool named EmbML that implements a pipeline to develop classifiers for low-powered embedded systems. It starts with learning a classifier using popular software packages or libraries. Then, EmbML converts the classifier into carefully crafted C++ code with support for embedded hardware. Our experimental evaluation shows that EmbML classifiers achieve competitive results in terms of accuracy, time, and memory cost.
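The conversion step can be sketched as follows (a hand-written toy decision tree and emitter, assuming a simple nested-dict model; EmbML itself converts classifiers trained with popular ML packages):

```python
# Sketch of the EmbML idea: turn a trained classifier into plain C++ that
# can run on a low-cost microcontroller. The tree below is hand-made for
# illustration; EmbML converts real models from common ML toolkits.

tree = {
    "feature": 0, "threshold": 2.5,
    "left": {"label": 0},
    "right": {
        "feature": 1, "threshold": 1.0,
        "left": {"label": 1},
        "right": {"label": 2},
    },
}

def emit_cpp(node, indent="    "):
    # Recursively unroll the tree into nested if/else statements, so the
    # target device needs no ML runtime at all.
    if "label" in node:
        return f"{indent}return {node['label']};\n"
    return (
        f"{indent}if (x[{node['feature']}] <= {node['threshold']}) {{\n"
        + emit_cpp(node["left"], indent + "    ")
        + f"{indent}}} else {{\n"
        + emit_cpp(node["right"], indent + "    ")
        + f"{indent}}}\n"
    )

cpp_source = "int classify(const float x[]) {\n" + emit_cpp(tree) + "}\n"
```

The generated function has no dependencies, no interpreter, and a memory footprint known at compile time — the properties an unresourceful embedded target needs.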
- Published
- 2019
28. Hydra image processor: 5-D GPU image analysis library with MATLAB and python wrappers
- Author
-
Mark R. Winter, Eric Wait, and Andrew R. Cohen
- Subjects
Statistics and Probability ,Source code ,Computer science ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Biochemistry ,Computational science ,03 medical and health sciences ,CUDA ,Software ,MATLAB ,Molecular Biology ,Gene Library ,030304 developmental biology ,computer.programming_language ,media_common ,0303 health sciences ,Computers ,Image processor ,business.industry ,030302 biochemistry & molecular biology ,Interpreted language ,Python (programming language) ,Frame rate ,Applications Notes ,Computer Science Applications ,Computational Mathematics ,Computational Theory and Mathematics ,business ,computer ,Algorithms - Abstract
Summary Light microscopes can now capture data in five dimensions at very high frame rates, producing terabytes of data per experiment. Five-dimensional data has three spatial dimensions (x, y, z), multiple channels (λ) and time (t). Current tools are prohibitively time-consuming and do not efficiently utilize available hardware. The hydra image processor (HIP) is a new library providing hardware-accelerated image processing accessible from interpreted languages including MATLAB and Python. HIP automatically distributes data/computation across system and video RAM allowing hardware-accelerated processing of arbitrarily large images. HIP also partitions compute tasks optimally across multiple GPUs. HIP includes a new kernel renormalization reducing boundary effects associated with widely used padding approaches. Availability and implementation HIP is free and open source software released under the BSD 3-Clause License. Source code and compiled binary files will be maintained on http://www.hydraimageprocessor.com. A comprehensive description of all MATLAB and Python interfaces and user documents are provided. HIP includes GPU-accelerated support for most common image processing operations in 2-D and 3-D and is easily extensible. HIP uses the NVIDIA CUDA interface to access the GPU. CUDA is well supported on Windows and Linux with macOS support in the future.
- Published
- 2019
29. Flexibility and coordination in event-based, loosely coupled, distributed systems
- Author
-
Silvestre, B., Rossetto, S., Rodriguez, N., and Briot, J.-P.
- Subjects
- *
PROGRAMMING languages , *LUA (Computer program language) , *AJAX (Web development technology) , *DISTRIBUTED computing , *SYNCHRONIZATION , *COROUTINES (Computer programs) - Abstract
Abstract: The scale and diversity of interactions in current wide-area distributed programming environments, especially in Internet-based applications, point to the fact that there is no single solution for coordinating distributed applications. Instead, what is needed is the ability to easily build and combine different coordination abstractions. In this paper, we discuss the role of some language features, such as first-class function values, closures, and coroutines, in allowing different coordination mechanisms to be constructed out of a small set of communication primitives, and to be easily mixed and combined. Using the Lua programming language, we define a basic asynchronous primitive, which allows programming in a direct event-driven style with the syntax of function calls, and, based on this primitive, we build different well-known coordination abstractions for distributed computing.
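A hedged Python analogue of the mechanism (the paper's code is in Lua): a coroutine yields each asynchronous request, so communication reads like a direct function call while a small event loop resumes it with the reply.

```python
import collections

# Python analogue (not the paper's Lua code) of wrapping an asynchronous
# primitive in a synchronous-looking call: a coroutine yields a request
# and a tiny event loop resumes it with the reply.

def rpc(service, arg):
    # Looks like a blocking call inside a coroutine; actually suspends.
    reply = yield ("call", service, arg)
    return reply

def client(log):
    doubled = yield from rpc("double", 21)
    log.append(doubled)
    greeting = yield from rpc("greet", "world")
    log.append(greeting)

SERVICES = {"double": lambda x: 2 * x, "greet": lambda s: f"hello {s}"}

def event_loop(coro):
    # Dispatch each yielded request, then resume the coroutine with the
    # result, until it finishes.
    queue = collections.deque([(coro, None)])
    while queue:
        task, value = queue.popleft()
        try:
            kind, service, arg = task.send(value)
        except StopIteration:
            continue
        queue.append((task, SERVICES[service](arg)))

log = []
event_loop(client(log))
```

The `client` body stays in a direct style with no callbacks, which is the property the paper attributes to building coordination on coroutines and closures.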
- Published
- 2010
- Full Text
- View/download PDF
30. OIL—Output input language for data connectivity between geoscientific software applications
- Author
-
Amin Khan, Khalid, Akhter, Gulraiz, and Ahmad, Zulfiqar
- Subjects
- *
APPLICATION software , *COMPUTER software development , *DATA conversion , *PROGRAMMING languages , *COMPUTER input-output equipment , *UTILITIES (Computer programs) , *COMPUTER architecture , *ELECTRONIC systems - Abstract
Abstract: Geoscientific computing has become so complex that no single software application can perform all the processing steps required to get the desired results. Thus for a given set of analyses, several specialized software applications are required, which must be interconnected for electronic flow of data. In this network of applications, the outputs of one application become inputs of other applications. Each of these applications usually involves more than one data type and may have its own data formats, making it incompatible with other applications in terms of data connectivity. Consequently, several data format conversion utilities are developed in-house to provide data connectivity between applications. Practically there is no end to this problem, as each time a new application is added to the system, a set of new data conversion utilities needs to be developed. This paper presents a flexible data format engine, programmable through a platform-independent, interpreted language named Output Input Language (OIL). Its unique architecture allows input and output formats to be defined independently of each other by two separate programs. Thus read and write for each format are coded only once, and a data connectivity link between two formats is established by a combination of their read and write programs. This results in fewer programs with no redundancy and maximum reuse, enabling rapid application development and easy maintenance of data connectivity links.
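The architectural idea — one reader and one writer per format, written independently, with a connectivity link being just a (reader, writer) pair — can be sketched with stand-in formats:

```python
import csv
import io
import json

# Sketch of the OIL architecture: one reader and one writer per format,
# written independently; a data-connectivity link between two formats is
# simply a (reader, writer) pair. CSV and JSON are stand-ins here.

def read_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def read_json(text):
    return json.loads(text)

def write_csv(records):
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

def write_json(records):
    return json.dumps(records)

def convert(text, reader, writer):
    # Any reader can be paired with any writer: n readers + n writers
    # cover n*n conversions, instead of one utility per pair.
    return writer(reader(text))

csv_in = "depth,velocity\n100,1500\n200,1800\n"
json_out = convert(csv_in, read_csv, write_json)
csv_round_trip = convert(json_out, read_json, write_csv)
```

With n formats this needs n readers and n writers rather than the n·(n−1) one-off conversion utilities the abstract describes as the in-house status quo.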
- Published
- 2010
- Full Text
- View/download PDF
31. A comparative study of programming languages for next-generation astrodynamics systems
- Author
-
Frazer McLean, Reiner Anderl, Helge Eichhorn, and Juan Luis Cano
- Subjects
0301 basic medicine ,Computer science ,Programming language ,Comparison of multi-paradigm programming languages ,Interpreted language ,Aerospace Engineering ,Second-generation programming language ,computer.software_genre ,01 natural sciences ,03 medical and health sciences ,Third-generation programming language ,030104 developmental biology ,Control flow analysis ,Space and Planetary Science ,0103 physical sciences ,Fourth-generation programming language ,Fifth-generation programming language ,010303 astronomy & astrophysics ,Compiled language ,computer - Abstract
Due to the computationally intensive nature of astrodynamics tasks, astrodynamicists have relied on compiled programming languages such as Fortran for the development of astrodynamics software. Interpreted languages such as Python, on the other hand, offer higher flexibility and development speed thereby increasing the productivity of the programmer. While interpreted languages are generally slower than compiled languages, recent developments such as just-in-time (JIT) compilers or transpilers have been able to close this speed gap significantly. Another important factor for the usefulness of a programming language is its wider ecosystem which consists of the available open-source packages and development tools such as integrated development environments or debuggers. This study compares three compiled languages and three interpreted languages, which were selected based on their popularity within the scientific programming community and technical merit. The three compiled candidate languages are Fortran, C++, and Java. Python, Matlab, and Julia were selected as the interpreted candidate languages. All six languages are assessed and compared to each other based on their features, performance, and ease-of-use through the implementation of idiomatic solutions to classical astrodynamics problems. We show that compiled languages still provide the best performance for astrodynamics applications, but JIT-compiled dynamic languages have reached a competitive level of speed and offer an attractive compromise between numerical performance and programmer productivity.
- Published
- 2017
32. Just-In-Time GPU Compilation for Interpreted Languages with Partial Evaluation
- Author
-
Christophe Dubach, Juan Fumero, Lukas Stadler, and Michel Steuwer
- Subjects
Profiling (computer programming) ,Flexibility (engineering) ,QA75 ,010302 applied physics ,Interface (Java) ,Computer science ,Programming language ,Interpreted language ,020207 software engineering ,Parallel computing ,02 engineering and technology ,computer.software_genre ,Data type ,01 natural sciences ,Computer Graphics and Computer-Aided Design ,Partial evaluation ,020202 computer hardware & architecture ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,computer ,Software - Abstract
Computer systems are increasingly featuring powerful parallel devices with the advent of many-core CPUs and GPUs. This offers the opportunity to solve computationally-intensive problems at a fraction of the time traditional CPUs need. However, exploiting heterogeneous hardware requires the use of low-level programming language approaches such as OpenCL, which is incredibly challenging, even for advanced programmers. On the application side, interpreted dynamic languages are increasingly becoming popular in many domains due to their simplicity, expressiveness and flexibility. However, this creates a wide gap between the high-level abstractions offered to programmers and the low-level hardware-specific interface. Currently, programmers must rely on high performance libraries or they are forced to write parts of their application in a low-level language like OpenCL. Ideally, non-expert programmers should be able to exploit heterogeneous hardware directly from their interpreted dynamic languages. In this paper, we present a technique to transparently and automatically offload computations from interpreted dynamic languages to heterogeneous devices. Using just-in-time compilation, we automatically generate OpenCL code at runtime which is specialized to the actual observed data types using profiling information. We demonstrate our technique using R, which is a popular interpreted dynamic language predominantly used in big data analytics. Our experimental results show the execution on a GPU yields speedups of over 150x compared to the sequential FastR implementation and the obtained performance is competitive with manually written GPU code. We also show that when taking into account start-up time, large speedups are achievable, even when the applications run for as little as a few seconds.
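A stdlib analogue of the profiling step (not the paper's R-to-OpenCL system): observe the types that actually flow through a call site, then dispatch to an implementation specialized for the dominant type once it is warm.

```python
import math

# Stdlib analogue (not the paper's R-to-OpenCL system) of profile-guided
# specialization: count the element types observed at a call site, and
# once a type is "hot", dispatch to an implementation specialized for it.

def specializing(generic, specialized, threshold=5):
    counts = {}
    def wrapper(xs):
        t = type(xs[0]) if xs else None
        counts[t] = counts.get(t, 0) + 1
        impl = specialized.get(t)
        if impl is not None and counts[t] > threshold:
            return impl(xs)   # specialized fast path, taken once warm
        return generic(xs)    # generic path while still profiling
    wrapper.counts = counts
    return wrapper

def generic_sum(xs):
    # Element-by-element loop with generic dispatch on every addition.
    total = 0
    for x in xs:
        total += x
    return total

# int lists go to the C-level sum(); float lists to math.fsum().
fast_sum = specializing(generic_sum, {int: sum, float: math.fsum})
results = [fast_sum(list(range(10))) for _ in range(8)]
```

The paper's system does the same in spirit, except its "specialized implementation" is OpenCL code generated at runtime for the observed types.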
- Published
- 2017
33. Performance Evaluation of Dynamic and Static WordPress-based Websites
- Author
-
Marko Cacic, Mario Tomiša, Marin Milković, and Kamolphiwong, Sinchai
- Subjects
Interface (Java) ,Computer science ,Interpreted language ,020207 software engineering ,Static web page ,02 engineering and technology ,computer.software_genre ,World Wide Web ,Relational database management system ,Virtual machine ,Web page ,WordPress, CMS, performance, static website, Apache Bench ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Theme (computing) ,computer - Abstract
WordPress is a popular website Content Management System (CMS) based on the PHP programming language and the MySQL or MariaDB Relational Database Management System (RDBMS). Since PHP is an interpreted rather than a compiled language, every time a WordPress-based web page is requested from the server, the server must load the core CMS files and custom theme files, read the related content from the associated database, and generate the final HTML output to be sent to the user's web browser. This significantly prolongs a website's loading time, because the browser normally cannot display any part of the web page's interface until the server sends the complete front-end code. In this paper, we set up a standard WordPress-based website and its static version on a virtual machine. Using the Apache Bench program, we tested their performance by observing chosen metrics. The experimental results indicate that using a static version of a normally dynamic WordPress-based website can greatly benefit both server-side and client-side website operation.
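The measured gap follows from a simple asymmetry, sketched below with a toy model (`string.Template` standing in for PHP templating plus the database work): the dynamic site renders on every request, while the static site renders once at publish time.

```python
from string import Template

# Toy model of the dynamic-vs-static trade-off: a dynamic site renders
# its template (and would query its database) on every request; a static
# site renders once when published and then just serves a file.

PAGE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")

render_count = {"dynamic": 0, "static": 0}

def serve_dynamic(request):
    render_count["dynamic"] += 1          # template + DB work per request
    return PAGE.substitute(title="Home", body="Welcome")

published = None

def publish_static():
    global published
    render_count["static"] += 1           # rendered once at publish time
    published = PAGE.substitute(title="Home", body="Welcome")

def serve_static(request):
    return published                      # just send the prebuilt file

publish_static()
responses = [(serve_dynamic(i), serve_static(i)) for i in range(100)]
```

Both variants serve identical HTML, but under 100 requests the dynamic path renders 100 times against the static path's single publish-time render — the asymmetry Apache Bench exposes as throughput.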
- Published
- 2019
34. Combination of NumPy, SciPy and Matplotlib/Pylab -a good alternative methodology to MATLAB - A Comparative analysis
- Author
-
A. Sheela, J Ranjani, and K. Pandi Meena
- Subjects
Commercial software ,Computer science ,Interfacing ,Programming language ,NumPy ,Interpreted language ,Programming paradigm ,Minification ,Python (programming language) ,MATLAB ,computer.software_genre ,computer ,computer.programming_language - Abstract
Python is a simple, powerful, and well-organized interpreted language that is frequently used to develop high-performance scientific and numeric applications. For scientific computing, Python typically relies on external libraries, the most important being NumPy, SciPy, and Matplotlib. These libraries are open-source add-on modules that provide common mathematical and numerical routines as pre-compiled, fast functions, and together they offer functionality that meets, or perhaps exceeds, that of common commercial software like MATLAB. NumPy (Numeric Python) is used for manipulating large arrays and matrices of numeric data. SciPy (Scientific Python) extends the functionality of NumPy with a considerable collection of valuable algorithms, like minimization, Fourier transformation, regression, and other applied mathematical techniques. MATLAB (Matrix Laboratory) is a proprietary, multi-paradigm programming language and numerical computing environment that supports plotting of functions and data, matrix manipulations, user interfaces, algorithm implementation, and interfacing with programs written in other languages. MATLAB is well suited to scientific work, but the combination of the three packages NumPy, SciPy, and Matplotlib/Pylab provides a good alternative to it.
- Published
- 2019
35. Using the Bentley MicroStation environment to program calculations of predicted ground subsidence caused by underground mining exploitation
- Author
-
Artur Krawczyk and Paweł Owsianka
- Subjects
lcsh:GE1-350 ,Engineering drawing ,Computer science ,business.industry ,Interpreted language ,0211 other engineering and technologies ,Underground mining (hard rock) ,Coal mining ,MicroStation VBA programming ,Subsidence ,Terrain ,Context (language use) ,02 engineering and technology ,computer.file_format ,Visual Basic for Applications ,020401 chemical engineering ,Interpolating isolines ,021108 energy ,Executable ,Subsidence calculation ,0204 chemical engineering ,business ,computer ,lcsh:Environmental sciences ,Underground mining - Abstract
The paper presents a new concept for creating a program to calculate ground subsidence caused by underground mining extraction, which differs substantially from previously used solutions. Instead of compiling the calculation algorithm to an executable file, the whole application algorithm has been written in the Bentley MicroStation graphic environment's interpreted language. This graphic environment is used in mining for keeping mining maps, and its format is used for storing data sets of completed and planned mining exploitation. The paper describes S. Knothe's theory of the influence of mining exploitation on the terrain surface in the context of the calculation methodology used for terrain subsidence computation. Based on the theory of influence, the structure of the created MVBA (MicroStation Visual Basic for Applications) algorithm for interpolating subsidence contours is described. The new application is called uDEFO. Calculation results from the program are compared with calculation results from software currently used in coal mines.
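Knothe's theory is commonly written with a Gaussian influence function; the sketch below numerically evaluates a subsidence profile over an extracted panel under that standard form. Parameter values are illustrative and this is not the uDEFO implementation:

```python
import math

# Numerical sketch of a Knothe-type subsidence profile (illustrative
# parameters; not the uDEFO implementation). The influence function is
# assumed Gaussian, f(u) = exp(-pi * u**2 / r**2) / r, where r is the
# radius of main influence; it is integrated over the extracted panel.

def subsidence_profile(xs, panel=(-100.0, 100.0), w_max=1.0, r=50.0, n=2000):
    a, b = panel
    ds = (b - a) / n
    profile = []
    for x in xs:
        total = 0.0
        for i in range(n):               # midpoint-rule integration
            s = a + (i + 0.5) * ds
            total += math.exp(-math.pi * (x - s) ** 2 / r ** 2) * ds
        profile.append(w_max * total / r)
    return profile

# Subsidence at the panel edges is about half the central value, and the
# trough flattens toward w_max over a wide panel.
xs = [-200.0, -100.0, 0.0, 100.0, 200.0]
w = subsidence_profile(xs)
```

Contour interpolation, as done by the MVBA algorithm, then amounts to evaluating such profiles on a grid and tracing level curves of equal subsidence.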
- Published
- 2019
36. Language-Agnostic Optimization and Parallelization for Interpreted Languages
- Author
-
Julio Cárdenas-Rodríguez, Bonnie L. Hurwitz, Jon Stephens, Theo Sackos, Sam Badger, Benjamin Gaska, Kat Volk, Brandon Neth, Ian J. Bertolacci, Anthony Encinas, Barbara Kreaseck, Michelle Mills Strout, Sarah Willer, Jesse Bartels, Babak Yadegari, Saumya K. Debray, Katherine E. Isaacs, and Sabin Devkota
- Subjects
060201 languages & linguistics ,Java ,Computer science ,business.industry ,Interpreted language ,Software development ,06 humanities and the arts ,02 engineering and technology ,Parallel computing ,Python (programming language) ,computer.software_genre ,0602 languages and literature ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Compiler ,Perl ,business ,MATLAB ,computer ,Implementation ,computer.programming_language - Abstract
Scientists are increasingly turning to interpreted languages, such as Python, Java, R, Matlab, and Perl, to implement their data analysis algorithms. While such languages permit rapid software development, their implementations often run into performance issues that slow down the scientific process. Source-level approaches for parallelization are problematic for two reasons: first, many of the language features common to these languages can be challenging for the kinds of analyses needed for parallelization; and second, even where such analysis is possible, a language-specific approach implies that each language would need its own parallelizing compiler and/or constructs, resulting in significant duplication of effort.
- Published
- 2019
37. Computing Languages for Bioinformatics: Python
- Author
-
Pietro H. Guzzi
- Subjects
Software_GENERAL ,Hardware_GENERAL ,Computer science ,Programming language ,Interpreted language ,Programming paradigm ,Maintenance release ,Python (programming language) ,computer.software_genre ,computer ,Interpreter ,Readability ,computer.programming_language - Abstract
Python is a high-level programming language created by Guido van Rossum in 1991. Python is an interpreted language that aims to emphasise the readability of the code using a simple syntax and to avoid special cases and exceptions. It uses a dynamic type system, and it supports multiple programming paradigms. Python interpreters are available for many operating systems. The distribution is available from python.org, and the current version is Python 3.6.3, the third maintenance release of Python 3.6, which was initially released in December 2016.
- Published
- 2019
38. JavaScript: Unique Parts
- Author
-
Sammie Bae
- Subjects
Syntax (programming languages) ,Computer science ,Process (engineering) ,Programming language ,Interpreted language ,JavaScript ,computer.software_genre ,computer ,computer.programming_language - Abstract
This chapter will briefly discuss some exceptions and cases of JavaScript’s syntax and behavior. As a dynamic and interpreted programming language, its syntax is different from that of traditional object-oriented programming languages. These concepts are fundamental to JavaScript and will help you to develop a better understanding of the process of designing algorithms in JavaScript.
- Published
- 2019
39. Vectors, Matrices, and Multidimensional Arrays
- Author
-
Robert Johansson
- Subjects
Computer science ,Computation ,Interpreted language ,Numerical computing ,Parallel computing ,Python (programming language) ,computer ,Data type ,computer.programming_language - Abstract
Vectors, matrices, and arrays of higher dimensions are essential tools in numerical computing. When a computation must be repeated for a set of input values, it is natural and advantageous to represent the data as arrays and the computation in terms of array operations. Computations that are formulated this way are said to be vectorized. Many modern processors provide instructions that operate on arrays. These are also known as vectorized operations, but here vectorized refers to high-level array-based operations, regardless of how they are implemented at the processor level. Vectorized computing eliminates the need for many explicit loops over the array elements by applying batch operations on the array data. The result is concise and more maintainable code, and it enables delegating the implementation of (for example, elementwise) array operations to more efficient low-level libraries. Vectorized computations can therefore be significantly faster than sequential element-by-element computations. This is particularly important in an interpreted language such as Python, where looping over arrays element-by-element entails a significant performance overhead.
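The delegation point can be seen even without NumPy: CPython's built-in `sum()` runs its loop in C, so relative to an explicit interpreted loop it plays the role of the vectorized batch operation.

```python
import timeit

# The point about delegating loops to low-level code, shown with stdlib
# only: built-in sum() iterates in C, while the explicit for-loop below
# is executed element by element by the interpreter.

data = list(range(100_000))

def loop_sum(xs):
    total = 0
    for x in xs:                # interpreted element-by-element loop
        total += x
    return total

def batch_sum(xs):
    return sum(xs)              # the loop happens inside the C runtime

t_loop = timeit.timeit(lambda: loop_sum(data), number=20)
t_batch = timeit.timeit(lambda: batch_sum(data), number=20)
# On CPython, t_batch is typically several times smaller than t_loop;
# NumPy's elementwise array operations win for the same reason.
```

The results are identical; only where the per-element loop executes differs, which is exactly the overhead that vectorized array operations eliminate.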
- Published
- 2018
40. Chapter 6: Using Python to Backtest Your Algorithm
- Author
-
George Pruitt
- Subjects
Programming language ,Computer science ,Interpreted language ,Python (programming language) ,computer.software_genre ,computer ,computer.programming_language - Published
- 2016
41. Hard real-time multibody simulations using ARM-based embedded systems
- Author
-
Javier Cuadrado, Frank Naets, Roland Pastorino, Francesco Cosco, and Wim Desmet
- Subjects
0209 industrial biotechnology ,Control and Optimization ,Source code ,Dead code ,Computer science ,media_common.quotation_subject ,Aerospace Engineering ,02 engineering and technology ,computer.software_genre ,020901 industrial engineering & automation ,0203 mechanical engineering ,Code generation ,Compiled language ,media_common ,computer.programming_language ,Programming language ,business.industry ,Mechanical Engineering ,Interpreted language ,Python (programming language) ,Computer Science Applications ,020303 mechanical engineering & transports ,Procedural programming ,Modeling and Simulation ,Embedded system ,business ,Automatic programming ,computer - Abstract
© 2016, Springer Science+Business Media Dordrecht. The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques that were chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST tree walkers for Source Code Generation and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples has been generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real-time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai for both embedded systems. 
This work shows through measurements that Automatic Programming and Source Code Translation are valuable techniques for developing real-time multibody models to be used in embedded observers and controllers. Part of: Multibody System Dynamics, vol. 37, issue 1, pp. 127-143. Location: Busan, South Korea. Status: published.
- Published
- 2016
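The two-stage pipeline described in the abstract above (Automatic Programming into an intermediate representation, then Source Code Translation to a target language) can be sketched schematically; the IR layout and all names below are invented for illustration and are not the paper's actual toolchain:

```python
def build_ir(name, terms):
    # "Automatic Programming" stage: an intermediate representation (IR)
    # of a model equation, here a function name plus a list of
    # (coefficient, variable) products to be summed.
    return {"name": name, "terms": terms}

def emit_python(ir):
    # "Source Code Translation" stage: render the IR as Python source.
    # A second emitter targeting C++ could walk the same IR.
    body = " + ".join(f"{c} * {v}" for c, v in ir["terms"])
    args = ", ".join(v for _, v in ir["terms"])
    return f"def {ir['name']}({args}):\n    return {body}\n"

ir = build_ir("force", [(2.0, "x"), (0.5, "v")])
src = emit_python(ir)   # "def force(x, v):\n    return 2.0 * x + 0.5 * v\n"
namespace = {}
exec(src, namespace)    # compile and load the generated function
print(namespace["force"](1.0, 4.0))   # 2.0*1.0 + 0.5*4.0 = 4.0
```

Targeting a compiled language instead of `exec`-ing Python source is what buys the real-time performance the paper measures.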
42. Parallel processing and visualization for results of molecular simulation problems
- Author
-
Viktoriia O. Podryga, Sergey Polyakov, and D. V. Puzyrkov
- Subjects
Workstation ,Computer science ,ПАРАЛЛЕЛЬНАЯ ОБРАБОТКА,ВИЗУАЛИЗАЦИЯ,МОЛЕКУЛЯРНАЯ ДИНАМИКА,PARALLEL PROCESSING,VISUALIZATION,MOLECULAR DYNAMICS,PYTHON,MAYAVI2 ,Computation ,визуализация ,Interpreted language ,Scientific visualization ,Parallel algorithm ,Multiprocessing ,Parallel computing ,Python (programming language) ,mayavi2 ,lcsh:QA75.5-76.95 ,Visualization ,Computational science ,law.invention ,python ,law ,General Earth and Planetary Sciences ,lcsh:Electronic computers. Computer science ,параллельная обработка ,молекулярная динамика ,computer ,General Environmental Science ,computer.programming_language - Abstract
In this paper the authors present the “mmdlab” library for the interpreted programming language Python. The library allows reading, processing, and visualization of the results of numerical calculations in molecular simulation tasks. Given the large volume of data obtained from such simulations, parallel implementations of the processing algorithms are needed. Parallel processing should be possible both on multicore systems, such as a common scientific workstation, and on supercomputer systems and clusters where the MD simulations were run. During development we studied the effectiveness of the Python language for such tasks and examined tools for its acceleration, as well as multiprocessing capabilities and tools for cluster computation in this language. We also investigated the problems of receiving and processing data located on multiple computational nodes; this was prompted by the need to process data produced by a parallel algorithm that was executed on multiple computational nodes and saved its output on each of them. The open-source “Mayavi2” package was chosen as the scientific visualization tool. The developed “mmdlab” library was used to analyse the results of an MD simulation of gas and metal plate interaction. As a result, we were able to observe the adsorption effect in detail, which is important for many practical applications.
- Published
- 2016
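The kind of parallel post-processing the abstract above describes can be sketched with Python's standard multiprocessing module (a generic pattern, not mmdlab's actual API; the data are toy values):

```python
from multiprocessing import Pool

def chunk_stats(chunk):
    # Per-chunk reduction, e.g. the mean value over one node's output data.
    return sum(chunk) / len(chunk)

def process_in_parallel(chunks, workers=2):
    # Fan the chunks out over worker processes, as one would on a multicore
    # workstation or in a cluster post-processing step.
    with Pool(workers) as pool:
        return pool.map(chunk_stats, chunks)

if __name__ == "__main__":
    print(process_in_parallel([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))  # [2.0, 5.0]
```

The same map-reduce shape extends to per-node output files: each worker reads and reduces one file, and only the small per-chunk results are gathered.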
43. A model of conference interpretation
- Author
-
Adil Al-Kufaishi
- Subjects
060201 languages & linguistics ,Linguistics and Language ,030504 nursing ,Arabic ,Communication ,Interpreted language ,06 humanities and the arts ,computer.software_genre ,Language and Linguistics ,language.human_language ,Lexical item ,Linguistics ,Rendering (computer graphics) ,03 medical and health sciences ,Idiomatic expressions ,0602 languages and literature ,Rhetorical device ,language ,Rhetorical question ,0305 other medical science ,Psychology ,computer ,Interpreter - Abstract
The objective of this research is to develop a model of consecutive interpretation that can cope with a number of linguistic, pragmatic, stylistic, thematic, discourse and communicative problems a conference interpreter encounters while interpreting from English into Arabic or vice versa. A linguistic corpus of one hundred pages of English speeches delivered at United Nations General Assembly sessions and interpreted into Arabic is analysed. The proposed model caters for both the SL and TL communicative contexts and views the conference interpreter as a mediator who decodes the original message and encodes it appropriately. The model is tested against the collected sample of linguistic data. It has proved capable of identifying inconsistencies and inaccuracies in five major areas: textual, stylistic, lexical, collocational and structural; the percentage of each is statistically calculated. Stylistic aspects constitute 39.3% of the inconsistencies; these cover deviant forms that are not acceptable in Arabic: stylistic variants, modes of request, and language forms that need to be reformulated to be consistent with Arabic rhetorical patterns. Inappropriate rendering of lexical items makes up 26.1% of the inconsistencies; this comprises inappropriately rendered collocation patterns, clichés and idiomatic expressions. Structural aspects constitute 18% of the incorrectly interpreted language forms; these are inappropriately rendered passive and modification constructions. Textual aspects constitute 10.9% of the inconsistencies; these are parallel constructions that are not properly handled in Arabic. Translation inaccuracies, items missed or incorrectly interpreted, constitute 5.1% of the inconsistencies.
- Published
- 2015
44. An interpreted language implementation of the Vaganov–Shashkin tree-ring proxy system model
- Author
-
Kevin J. Anchukaitis, Malcolm K. Hughes, Eugene A. Vaganov, and Michael N. Evans
- Subjects
0106 biological sciences ,010504 meteorology & atmospheric sciences ,Ecology ,Computer science ,Fortran ,Interpreted language ,Plant Science ,Systems modeling ,01 natural sciences ,System model ,Data assimilation ,Boundary value problem ,MATLAB ,Algorithm ,computer ,010606 plant biology & botany ,0105 earth and related environmental sciences ,Codebase ,computer.programming_language - Abstract
We describe the implementation of the Vaganov–Shashkin tree-ring growth model (VSM) in MATLAB. VSM, originally written in Fortran, mimics subdaily- and daily-resolution processes of cambial growth as a function of soil moisture, air temperature, and insolation, with environmental forcing modeled according to the principle of limiting factors. The re-implementation in a high-level interpreted language, while sacrificing speed, provides opportunities to systematically evaluate model parameters, generate large ensembles of simulated tree-ring chronologies, and embed proxy system modeling within data-assimilation approaches to climate reconstruction. We provide a versioned code repository and examples of model applications that permit process-level understanding of tree-ring width variations in response to environmental variations and boundary conditions.
- Published
- 2020
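The principle of limiting factors mentioned above means that on each step growth is governed by whichever environmental response is smallest; a schematic sketch (the response functions and thresholds are invented for illustration, not VSM's calibrated values):

```python
def ramp(x, lo, hi):
    # Piecewise-linear response rising from 0 at `lo` to 1 at `hi`.
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def daily_growth(temp_c, soil_moisture, insolation):
    # Limiting-factor principle: the smallest partial response governs growth,
    # rather than, say, the product or the mean of the responses.
    f_temp = ramp(temp_c, 4.0, 18.0)
    f_moist = ramp(soil_moisture, 0.05, 0.3)
    f_light = ramp(insolation, 50.0, 250.0)
    return min(f_temp, f_moist, f_light)

# A cool but moist, bright day: temperature is the limiting factor.
print(daily_growth(8.0, 0.4, 300.0))   # ≈ 0.286, set by the temperature ramp
```

Implementing such response curves in an interpreted language makes it cheap to perturb the thresholds and rerun ensembles, which is exactly the use case the abstract highlights.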
45. Executing linear algebra kernels in heterogeneous distributed infrastructures with PyCOMPSs
- Author
-
Anciaux-Sedrakian, A., Tran, Q. H., Amela, R., Ramon-Cortes, C., Ejarque, Jorge [0000-0003-4725-5097], Conejero, Javier [0000-0001-6401-6229], Badia, Rosa M. [0000-0003-2941-5499], Barcelona Supercomputing Center, Universitat Politècnica de Catalunya. Doctorat en Bioinformàtica, and Universitat Politècnica de Catalunya. Doctorat en Arquitectura de Computadors
- Subjects
Programació (Ordinadors) ,Computer science ,General Chemical Engineering ,Energy Engineering and Power Technology ,Parallel programming (Computer science) ,010103 numerical & computational mathematics ,02 engineering and technology ,computer.software_genre ,lcsh:Chemical technology ,lcsh:HD9502-9502.5 ,01 natural sciences ,Task-based programming ,Informàtica [Àrees temàtiques de la UPC] ,Task based programming ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:TP1-1185 ,0101 mathematics ,computer.programming_language ,020203 distributed computing ,Programming language ,Interpreted language ,Python (programming language) ,lcsh:Energy industries. Energy policy. Fuel trade ,Fuel Technology ,Linear algebra ,Parallel programming model ,computer ,Xeon Phi ,Python - Abstract
Python is a popular programming language due to the simplicity of its syntax, while still achieving good performance even though it is an interpreted language. Its adoption by multiple scientific communities has led to the emergence of a large number of libraries and modules, which has helped put Python at the top of the list of programming languages [1]. Task-based programming has been proposed in recent years as an alternative parallel programming model. PyCOMPSs follows such an approach for Python, and this paper presents its extensions to combine task-based parallelism and thread-level parallelism. We also present how PyCOMPSs has been adapted to support heterogeneous architectures, including Xeon Phi and GPUs. Results obtained with linear algebra benchmarks demonstrate that significant performance can be obtained with a few lines of Python.
- Published
- 2018
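The task-based idea underlying PyCOMPSs can be illustrated with the standard library's concurrent.futures so the example is self-contained; this is a sketch of the programming style, not PyCOMPSs's actual @task API:

```python
from concurrent.futures import ThreadPoolExecutor

def block_multiply(a, b):
    # One "task": multiply two small matrix blocks (plain lists of lists).
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def parallel_blocks(pairs):
    # Submit each block product as an independent task; the executor
    # schedules them concurrently, much as a task-based runtime would
    # schedule tasks whose data dependencies are satisfied.
    with ThreadPoolExecutor() as ex:
        futures = [ex.submit(block_multiply, a, b) for a, b in pairs]
        return [f.result() for f in futures]

identity = [[1, 0], [0, 1]]
mat = [[1, 2], [3, 4]]
print(parallel_blocks([(identity, mat), (mat, identity)]))
```

In PyCOMPSs the decomposition into tasks is expressed with decorators on ordinary functions and the runtime distributes them across nodes and accelerators; the executor here only mimics that structure on one machine.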
46. Interactive control of purpose built analytical instruments with Forth on microcontrollers - A tutorial
- Author
-
Peter C. Hauser and Jasmine S. Furter
- Subjects
Scientific instrument ,Terminal (telecommunication) ,business.industry ,Chemistry ,010401 analytical chemistry ,Interpreted language ,SIGNAL (programming language) ,Detector ,02 engineering and technology ,021001 nanoscience & nanotechnology ,01 natural sciences ,Biochemistry ,0104 chemical sciences ,Analytical Chemistry ,Microcontroller ,Interactivity ,Personal computer ,Environmental Chemistry ,0210 nano-technology ,business ,Spectroscopy ,Computer hardware - Abstract
The use of the computer language Forth for controlling experimental analytical instruments built in laboratories is described. Forth runs on a microcontroller, and since it is an interpreted language the user can communicate with it directly through a terminal emulator program running on a personal computer. Thus the user can test attached hardware, such as pumps, valves, electronic pressure regulators, detectors and chemical sensors, directly from the keyboard. This overcomes the lack of interactivity, a significant shortcoming of the computer languages C and C++, which are the default on microcontroller platforms such as the Arduino boards that have become very popular in recent years for laboratory applications. Common examples of purpose-built experimental analytical laboratory instruments are sequential injection analysis systems, microfluidic devices, and automated sample extraction systems. Application examples from our laboratory are given, namely the regulation of mass-flow controllers for gases, the sequencing of an experimental capillary electrophoresis instrument, and the acquisition of a signal from an alcohol sensor.
- Published
- 2018
47. Performance improvements of evolutionary algorithms in perl 6
- Author
-
Juan-Julián Merelo-Guervós and José-Mario García-Valdez
- Subjects
Point (typography) ,Computer science ,Programming language ,Concurrency ,Interpreted language ,Evolutionary algorithm ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Code refactoring ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Perl ,Perl 6 ,computer ,computer.programming_language ,Codebase - Abstract
Perl 6 is a recently released language that belongs to the Perl family but was actually designed from scratch, not as a refactoring of the Perl 5 codebase. Over the two years since its release, it has increased performance by several orders of magnitude, recently arriving at the point where it can safely be used in production. In this paper, we compare the historical and current performance of Perl 6 on a single problem, OneMax, to that of other interpreted languages; in addition, we use implicit concurrency and examine what kind of performance and scaling can be expected from it.
- Published
- 2018
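The OneMax benchmark named above is small enough to sketch; this Python version (parameters arbitrary) shows the problem itself, not the paper's Perl 6 implementation:

```python
import random

def onemax(bits):
    # OneMax fitness: the number of 1-bits; the optimum is the all-ones string.
    return sum(bits)

def evolve(length=16, pop_size=20, generations=300, seed=42):
    # A deliberately tiny elitist EA with single-bit-flip mutation only;
    # all parameters are toy values chosen so the run finishes quickly.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=onemax, reverse=True)
        if onemax(pop[0]) == length:
            break
        # Replace the worst half with mutated copies of the best half.
        half = pop_size // 2
        for i in range(half, pop_size):
            child = pop[i - half][:]
            child[rng.randrange(length)] ^= 1   # flip one random bit
            pop[i] = child
    return max(pop, key=onemax)

best = evolve()
print(onemax(best), "of", len(best), "bits set")
```

Because fitness evaluation is a tight loop over bit strings, OneMax is a natural stress test for an interpreted language's raw loop performance, which is why it serves as the benchmark in the paper.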
48. Open-Source Package for PJVS Testing and Calibration
- Author
-
Paolo Durandetto and Andrea Sosso
- Subjects
Programming language ,Computer science ,business.industry ,020208 electrical & electronic engineering ,Interpreted language ,Control reconfiguration ,02 engineering and technology ,Open source software ,Modular design ,Python (programming language) ,Software package ,computer.software_genre ,Open source ,0202 electrical engineering, electronic engineering, information engineering ,business ,computer ,computer.programming_language - Abstract
This paper describes a software package for PJVS automated measurements written in Python, an interpreted programming language widely used in scientific applications. The code is modular and expandable, with the support of many libraries allowing easy reconfiguration for different calibration and testing purposes.
- Published
- 2018
49. Programmability versus performance tradeoff
- Author
-
Rosa M. Badia
- Subjects
Multi-core processor ,Philosophy of design ,Source lines of code ,Memory hierarchy ,business.industry ,Computer science ,Interpreted language ,Python (programming language) ,Supercomputer ,Programming paradigm ,business ,computer ,Computer hardware ,computer.programming_language - Abstract
Programming languages that offer simple, elegant interfaces with strong semantics are valued by application developers. Python is one example of such a programming language, adopted by both the High Performance Computing and Data Analytics communities, with a design philosophy that emphasizes code readability and a syntax that allows programmers to express concepts in fewer lines of code, while still offering object orientation and advanced programming features such as generators and list comprehensions. However, Python is an interpreted language and concurrency is ill-supported. This talk will be based on PyCOMPSs, a task-based programming model that aims to parallelize sequential Python codes and to execute them on distributed computing platforms. The talk will give an overview of the system and present how different hardware challenges are overcome: multicore architectures, accelerators such as GPUs with specific APIs, the memory hierarchy, distributed computing, and distributed file systems.
- Published
- 2018
50. A Technology for Optimizing the Process of Maintaining Software Up-to-Date
- Author
-
Andrei Panu
- Subjects
Source code ,business.industry ,Process (engineering) ,Computer science ,media_common.quotation_subject ,Interpreted language ,Software maintenance ,computer.software_genre ,Information extraction ,Software ,Web mining ,Software engineering ,business ,computer ,Interpreter ,media_common - Abstract
In this paper we propose a solution for reducing the time needed to change an application in order to support a new version of a software dependency (e.g., a library or interpreter). When such an update is available, we do not know whether it includes changes that can break the execution of the application. This issue is especially serious for interpreted languages, because errors appear at runtime. System administrators and software developers are directly affected by this problem. Usually administrators do not know many details about the applications hosted on their infrastructure beyond the required execution environment; thus, when an update is available for a separately packaged library or for an interpreter, they do not know whether the applications will run on the new version, making it very hard for them to decide to update. The developers of the application must assess and support the new version, but these tasks are time-consuming. Our approach automates this assessment by analyzing the source code and verifying whether, and how, the changes in the new version affect the application. With such information obtained automatically, it is easier for system administrators to make a decision about the update and faster for developers to find out the impact of the new version.
- Published
- 2018
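The core of such an assessment can be sketched for Python code using the standard ast module; the removed-API list and the sample source below are invented for illustration:

```python
import ast

def called_names(source):
    # Statically collect every function and method name that the source calls.
    # ast.parse never executes the code, so undefined names are harmless.
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Attribute):
                names.add(func.attr)
            elif isinstance(func, ast.Name):
                names.add(func.id)
    return names

def breaking_calls(source, removed_apis):
    # Any call into the removed API surface is a potential runtime break
    # that would otherwise only surface when the code path executes.
    return sorted(called_names(source) & set(removed_apis))

app_src = """
import somelib
somelib.load_all(path)
somelib.process(data)
"""
print(breaking_calls(app_src, {"load_all", "old_helper"}))   # ['load_all']
```

A real tool would also resolve which module each name comes from and compare against the dependency's changelog, but the static scan above is the step that replaces waiting for a runtime error.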