5,240 results for "High-level programming language"
Search Results
2. Using the capabilities of modern programming languages in solving problems of technical specialties
- Author
- Gayratovich, Ergashev Nuriddin, Uktamovich, Shukurov Akmal, and Erkinogli, Jabborov Elbek
- Published
- 2020
3. The Notification Oriented Paradigm Language to Digital Hardware as an Intuitive High-level Synthesis Tool
- Author
- Gabriel Rodrigues Garcia, Jean Marcelo Simão, André Augusto Kaviatkovski, Carlos R. Erig Lima, and Ricardo Kerschbaumer
- Subjects
General Computer Science, Computer science, Notification Oriented Paradigm (NOP), Notification Oriented Paradigm to Digital Hardware (NOP-DH), NOP Language (NOPL), NOPL-DH, High-level programming language, High-level synthesis, VHDL, Redundancy (engineering), Field-programmable gate array (FPGA), Software, Computer hardware
- Abstract
The parallelism offered by FPGAs has attracted attention for applications that need processing power. However, the need for specific and very technical development languages has not stimulated their broad use. As an alternative, there are High-level Synthesis Languages (HSL), which allow less complicated FPGA use. However, they tend not to take full advantage of the FPGA technology. Therefore, another alternative was developed, based on the Notification Oriented Paradigm (NOP), called NOP for Digital Hardware (NOP-DH). NOP allows high-level development with its rule-oriented language called NOPL. Its entity decoupling, parallelism, and redundancy avoidance are useful for achieving good performance. In turn, NOP-DH brings NOP to the FPGA context, with the benefits observed in software enhanced by the nature of hardware. This paper reviews the NOPL for NOP-DH (NOPL-DH), which aims at high-level programming for FPGAs. The paper puts NOPL-DH to the test with independent developers, who built a monitoring device for a bidirectional box-transporting conveyor. As a result, NOPL-DH allowed high-level development under the NOP-DH structure in an FPGA, without the need for specialized technical knowledge, while maintaining and exploiting the NOP properties in the FPGA.
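The core mechanism is easy to picture outside of hardware. Below is a minimal Python sketch of the notification idea only (NOPL is a rule-oriented language of its own; the class names here are invented for the example):

```python
# Minimal sketch of the notification idea behind NOP (illustrative only;
# NOPL itself is a rule-oriented language, not Python).

class Attribute:
    """Holds a value and notifies subscribed rules only when it changes."""
    def __init__(self, value):
        self.value = value
        self.rules = []

    def set(self, value):
        if value != self.value:          # redundancy avoidance: no change, no work
            self.value = value
            for rule in self.rules:
                rule.evaluate()          # notify only the rules that depend on us

class Rule:
    """Re-evaluated on notification instead of being polled in a loop."""
    def __init__(self, condition, action, attributes):
        self.condition, self.action = condition, action
        for attr in attributes:
            attr.rules.append(self)

    def evaluate(self):
        if self.condition():
            self.action()

# Example: raise an alarm when a conveyor sensor reads 'blocked'.
sensor = Attribute("clear")
Rule(lambda: sensor.value == "blocked",
     lambda: print("alarm: conveyor blocked"),
     [sensor])
sensor.set("blocked")   # triggers the rule exactly once
```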
- Published
- 2021
4. The Database System Environment
- Author
- Shripad V. Godbole and Elvis C. Foster
- Subjects
Database server, Spatiotemporal database, Database, High-level programming language, Computer science, Query language, Database design, Conceptual schema, Integrity check
- Abstract
This chapter discusses the environment of a database system. The sections in this chapter are as follows.
- Published
- 2022
5. Alchemy: Distributed financial quantitative analysis system with high‐level programming model
- Author
- Rong Gu, Zhixiang Zhang, Zhihao Xu, Chunfeng Yuan, Kai Zhang, Zhaokang Wang, and Yihua Huang
- Subjects
Alchemy, Quantitative analysis (finance), Computer science, High-level programming language, Industrial engineering, Software
- Published
- 2021
6. Interfacing C and TMS320C6713 Assembly Language (Part II)
- Author
- Abdullah A. Wardak
- Subjects
General Computer Science, Assembly language, Computer science, Programming language, Subroutine, Microprocessor, Stack (abstract data type), High-level programming language, Interfacing, Compiler, Function (engineering)
- Abstract
In this paper, the interfacing of C and the assembly language of the TMS320C6713 is presented. Interfacing of C with the assembly language of the Motorola 68020 (MC68020) microprocessor is also presented for comparison. It should be noted that the way the C compiler passes arguments from the main function in C to a TMS320C6713 assembly language subroutine is totally different from the way arguments are passed on a conventional microprocessor such as the MC68020. Therefore, it is very important for a user of a TMS320C6713-based system to properly understand and follow the register conventions and stack operation when interfacing C with a TMS320C6713 assembly language subroutine. This paper describes the use of special registers and the stack in the interfacing of these programming languages. Working examples in C and their implementation in TMS320C6713 assembly language are described in detail. Finally, the approach presented in this paper has been tested extensively by examining different examples under various conditions and has proved highly reliable in operation.
- Published
- 2021
7. Inherent Parallelism and Speedup Estimation of Sequential Programs
- Author
- Sesha Kalyur and Nagaraja G.S
- Subjects
Class (computer programming), Speedup, General Computer Science, Computer science, Parallel computing, Flattening, High-level programming language, Parallelism, Electrical and Electronic Engineering
- Abstract
Although several automated parallel conversion solutions are available, very few have attempted to provide proper estimates of the available inherent parallelism and the expected parallel speedup. CALIPER, the outcome of this research work, is a parallel performance estimation technology that can fill this void. High-level language structures such as functions, loops, and conditions, which ease program development, can be a hindrance to effective performance analysis. We refer to these program structures as the Program Shape. As a preparatory step, CALIPER removes these shape-related hindrances, an activity we refer to as Program Shape Flattening. Programs are also characterized by dependences that exist between instructions and impose an upper limit on the parallel conversion gains. For parallel estimation, we first group instructions that share dependences into a class we refer to as a Dependence Class or Parallel Class. While instructions belonging to a class run sequentially, the classes themselves run in parallel. Parallel runtime is then the runtime of the class that runs the longest. We report performance estimates of parallel conversion as two metrics: the inherent parallelism in the program, reported as Maximum Available Parallelism (MAP), and the speedup after conversion, reported as Speedup After Parallelization (SAP).
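The class-based estimate is straightforward to prototype. The following Python sketch (not CALIPER itself; the unit instruction cost and the union-find grouping are assumptions for illustration) groups dependent instructions into classes and derives MAP and a SAP-style speedup:

```python
# Illustrative sketch: group instructions that share dependences into
# classes with union-find, then estimate speedup as sequential time
# divided by the runtime of the longest class.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def estimate_speedup(n_instructions, dependences, cost=lambda i: 1):
    parent = list(range(n_instructions))
    for a, b in dependences:                     # each dependence merges two classes
        ra, rb = find(parent, a), find(parent, b)
        parent[ra] = rb
    classes = {}
    for i in range(n_instructions):
        classes.setdefault(find(parent, i), []).append(i)
    seq_time = sum(cost(i) for i in range(n_instructions))
    par_time = max(sum(cost(i) for i in cls) for cls in classes.values())
    map_metric = len(classes)                    # available parallel classes
    sap_metric = seq_time / par_time             # estimated speedup
    return map_metric, sap_metric

# 6 instructions; 0-1-2 form one dependence chain, 3-4 another, 5 is free.
print(estimate_speedup(6, [(0, 1), (1, 2), (3, 4)]))   # -> (3, 2.0)
```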
- Published
- 2021
8. Connection Methodology Using the NeuroSky MindWave MW003 with MATLAB
- Author
- Bryan Quino Ortiz, Marcia Lorena Hernández Nieto, Aldo R. Sartorius Castellanos, Antonia Zamudio Radilla, and José de Jesús Moreno Vázquez
- Subjects
Process (engineering), Computer science, Headset, Bluetooth, Human–computer interaction, High-level programming language, Wireless, MATLAB
- Abstract
Nowadays, the drive to understand how the brain works has motivated companies such as Neurosky to create and refine low-cost, highly accurate headsets for acquiring encephalographic signals, marketed to every type of user. This paper presents the connection methodology for the Neurosky MindWave MW003 headset, covering the process of reception, transmission, and wireless (Bluetooth) configuration with the computer, using the Thinkgear.h library provided by Neurosky. It gives a brief and concise explanation of how to use the device, establishing its characteristics, operating methods, and the main functions for its connection, using MATLAB R2015B. The process is described systematically, aimed at resolving the doubts of inexperienced users, while also helping users experienced in high-level languages to create new applications.
- Published
- 2020
9. Efficiently Translating Complex SQL Query to MapReduce Jobflow on Cloud
- Author
- Junzhou Luo, Zhiang Wu, Lu Zhang, Aibo Song, and Jie Cao
- Subjects
SQL, Computer Networks and Communications, Computer science, Cloud computing, Parallel computing, Set (abstract data type), Declarative programming, Computer Science Applications, Hardware and Architecture, High-level programming language, Scalability, Programming paradigm, Benchmark (computing), Data mining, Software, Information Systems
- Abstract
MapReduce is a widely used programming model in cloud environments for parallel processing of large-scale data sets. The combination of a high-level language with a SQL-to-MapReduce translator allows programmers to code in a SQL-like declarative language, so that each program can afterwards be compiled into a MapReduce jobflow automatically. This helps narrow the gap between non-professional users and cloud platforms, and thus significantly improves the usability of the cloud. Although a number of translators have been developed, the auto-generated MapReduce programs still suffer from extreme inefficiency. In this paper, we present an efficient Cost-Aware SQL-to-MapReduce Translator (CAT). CAT has two notable features. First, it defines two intra-SQL correlations, Generalized Job Flow Correlation (GJFC) and Input Correlation (IC), based on which a set of looser merging rules are introduced. Both Top-Down (TD) and Bottom-Up (BU) merging strategies are thus proposed and integrated into CAT simultaneously. Second, it adopts a cost estimation model for MapReduce jobflows to guide the selection of the more efficient MapReduce jobflow auto-generated by the TD and BU merging strategies. Finally, comparative experiments on the TPC-H benchmark demonstrate the effectiveness and scalability of CAT.
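The cost-guided selection step can be pictured with a toy model. The sketch below is an assumption-laden Python illustration (the cost function and the job representation are invented; CAT's actual model is more detailed): it scores the candidate jobflows produced by the two merging strategies and keeps the cheaper one.

```python
# Illustrative sketch of cost-aware jobflow selection. Each candidate
# jobflow is a list of MapReduce jobs; its cost sums per-job I/O plus a
# fixed startup overhead (both assumptions for this example).

JOB_STARTUP = 1.0   # assumed fixed per-job overhead

def jobflow_cost(jobflow):
    # jobflow: list of (input_size, output_size) per MapReduce job
    return sum(JOB_STARTUP + read + write for read, write in jobflow)

def select_jobflow(candidates):
    """Pick the cheapest auto-generated jobflow, mirroring how a cost
    model can arbitrate between merge strategies."""
    return min(candidates, key=jobflow_cost)

top_down  = [(10.0, 4.0), (4.0, 1.0)]             # two jobs after TD merging
bottom_up = [(10.0, 6.0), (6.0, 2.0), (2.0, 1.0)] # three jobs after BU merging
best = select_jobflow([top_down, bottom_up])
print(best is top_down, jobflow_cost(best))       # True 21.0
```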
- Published
- 2020
10. Implementation of deep neural networks on FPGA-CPU platform using Xilinx SDSOC
- Author
- Rania O. Hassan and Hassan Mostafa
- Subjects
Computational complexity theory, Contextual image classification, Computer science, Convolutional neural network, Acceleration, Hardware and Architecture, High-level programming language, High-level synthesis, Embedded system, Signal Processing, Graphics, Field-programmable gate array
- Abstract
Deep Convolutional Neural Networks (CNNs) are the state-of-the-art systems for image classification due to their high accuracy, but their high computational complexity is very costly. Acceleration is therefore the current target in this field for using these systems in real-time applications. Graphics Processing Units are one solution, but their high power consumption prevents their use in everyday equipment. The Field Programmable Gate Array (FPGA), by contrast, has low power consumption and a flexible architecture that fits CNN implementations better. This work discusses this problem and provides a solution that trades off the speed of the CNN against the power consumption of the FPGA. The solution depends on two main techniques for speeding up: parallelism of layer resources and pipelining inside some layers. In addition, we introduce a methodology that balances area requirements against speed and design time by implementing the CNN using the Xilinx SDSoC tool (including a processor and FPGA on the same board). Implementing the design with HW/SW partitioning shortens design time, based on a high-level language (C or C++) in Vivado HLS (High-Level Synthesis). It also suits larger designs than using an FPGA alone, and is faster in design time.
- Published
- 2020
11. HLock: Locking IPs at the High-Level Language
- Author
- Mark Tehranipoor, Farimah Farahmandi, Rafid Muttaki, and Roshanak Mohammadivojdan
- Subjects
Record locking, Computer science, Supply chain, Integrated circuit, Business model, High-level programming language, Logic gate, Embedded system, Key size, Abstraction
- Abstract
The introduction of the horizontal business model to the semiconductor industry has introduced trust issues into the integrated circuit supply chain. The most common vulnerabilities affecting intellectual properties stem from untrusted third-party vendors and malicious foundries. Various techniques have been proposed to lock the design at the gate level or RTL before sending it to the untrusted foundry for fabrication. However, such techniques have been proven to be easily broken by SAT attacks and machine-learning-based attacks. In this paper, we propose HLock, a framework for ensuring hardware protection in the form of locking at the high-level description of the design. Our approach includes a formal analysis of design specifications, assets, and critical operations to determine the points at which locking keys are inserted. The locked design is then synthesized using high-level synthesis, which has become an integral part of modern IP design due to its advantage of reduced development and verification effort. Locking at the higher abstraction, combined with multiple synthesis passes, shows that HLock delivers superior performance in terms of attack resiliency (i.e., SAT attacks, removal attacks, machine-learning-based attacks) and overheads compared to conventional locking techniques. Additionally, HLock provides a dynamic, automatic locking solution for any high-level design based on performance constraints, attack resiliency, power and area overheads, and locking key size, and it is well suited to large-scale designs.
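The flavor of high-level locking can be conveyed with a toy example. The following Python sketch is purely illustrative (HLock operates on hardware descriptions before high-level synthesis, and real keys are far wider than this one): a wrong key silently corrupts a critical arithmetic operation.

```python
# Conceptual sketch of locking at the high level (illustrative only; the
# key value and operation are invented for this example).

CORRECT_KEY = 0xA5  # assumption for illustration; real keys are much wider

def locked_mac(a, b, acc, key):
    """Multiply-accumulate whose datapath is corrupted by a wrong key."""
    product = a * b
    # The key is folded into the arithmetic itself, so simply removing a
    # comparison does not recover the original function.
    return acc + (product ^ (key ^ CORRECT_KEY))

print(locked_mac(3, 4, 10, CORRECT_KEY))  # 22: correct behaviour
print(locked_mac(3, 4, 10, 0x00))         # wrong key corrupts the result
```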
- Published
- 2021
12. A Survey on System-on-a-Chip Design Using Chisel HW Construction Language
- Author
- Timo Hämäläinen and Matti Kayra
- Subjects
Functional programming, Source code, Computer science, Hardware description language, Abstraction layer, Computer architecture, High-level programming language, High-level synthesis, System on a chip, Implementation
- Abstract
This paper presents a survey of functional programming languages in System-on-a-Chip (SoC) design. The motivation is improving design productivity through better source code expressiveness, an increased abstraction level at design entry, or improved automation. The survey focuses on Chisel, one of the most promising High Level Language (HLL) based design frameworks. We include 26 papers that report implementations ranging from IP blocks to complete chips. The result is that functional programming languages are viable for SoC design and can also be deployed in production use. However, Chisel does not increase the abstraction level in the same way as High Level Synthesis (HLS), since it is used to create circuit generators instead of direct descriptions. An additional benefit is that Chisel offloads user effort from control and connectivity structures, and improves reusability and configurability over traditional Hardware Description Language (HDL) designs.
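The generator idea, as opposed to a direct description, can be sketched in any host language. Below is a hypothetical Python analogue (Chisel itself embeds generators in Scala; the module shape here is invented) in which one function produces a Verilog adder for any width:

```python
# A program that *produces* a circuit description for given parameters,
# rather than being a fixed HDL text (illustrative sketch only).

def make_adder(width: int, name: str = "adder") -> str:
    """Generate a parameterized Verilog adder."""
    return f"""module {name} (
  input  [{width - 1}:0] a,
  input  [{width - 1}:0] b,
  output [{width}:0]     sum
);
  assign sum = a + b;
endmodule
"""

print(make_adder(8))    # one generator, any width: reuse and configurability
```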
- Published
- 2021
13. Forecasting the Life of a Structure Relative to the Operating Mode
- Author
- Ogorelkov, D. A., and Lukashuk, O. A.
- Abstract
Forecasting the service life of transport machines designed by taking into consideration a load spectrum close to the real one is an important problem in design calculations. One way to simulate real operating conditions in the design calculation is the randomization of quasirandom loads. Randomization methods are widely used in many areas of science and technology. In this article, a numeric comparison of different ways of randomizing the service-time calculation is shown, using two techniques: a standard randomization function of a high-level programming language, and the law of normal distribution with different parameters. The use of the law of normal distribution yields a more exact fatigue calculation, because it makes it possible to simulate a quasirandom process that corresponds to the real operating picture to a greater degree. The results presented in this work make it possible to calculate the service time of a metallic structure under cyclic asymmetric loads when the nature of the applied loading is known.
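For illustration, the two randomization techniques can be compared in a few lines. The sketch below uses Python and NumPy, with assumed load-spectrum parameters (the article does not give its values here):

```python
# Uniform randomization (a language's standard random function) versus the
# normal (Gaussian) law; parameter values are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(seed=42)
n_cycles = 100_000
mean_load, std_load = 100.0, 15.0           # assumed load spectrum parameters

uniform_loads = rng.uniform(mean_load - 3 * std_load,
                            mean_load + 3 * std_load, n_cycles)
normal_loads = rng.normal(mean_load, std_load, n_cycles)

# The normal law concentrates loads around the mean, which better matches
# a real operating picture than the flat uniform spectrum.
print(uniform_loads.std(), normal_loads.std())
```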
- Published
- 2021
14. Improving Monitoring Greenhouse System using Smart wireless Sensors Actuators Network
- Author
- Mohamed Fezari and Ali Al-Dahoud
- Subjects
Installation, Computer science, High-level programming language, Laptop, Real-time computing, Greenhouse, Wireless, Software design, Wireless sensor network, Graphical user interface
- Abstract
The aim of this work is to improve on previous work on greenhouse monitoring using a smart sensor network. A hardware and software design is presented to control and monitor greenhouse parameters such as air temperature, humidity, and CO2 emission, and then provide irrigation, ventilation, and enrichment of the soil with appropriate chemical products. The smart nodes of the WSAN (wireless sensor and actuator network) transfer data to and from a laptop via a wireless transmission hardware agent, and a simulation based on a fuzzy logic controller is provided in order to make smart decisions. As a result, the commanded devices (fans, vapor injectors, and heaters) are driven to reach the requested conditions. A friendly graphical user interface, developed in a high-level language, carries out the monitoring tasks. The control algorithm receives data, including the set points, and issues signals to activate the appropriate devices to obtain the desired conditions. The system performance was tested by installing four smart actuators and sensors in the greenhouse.
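A fuzzy decision of this kind can be sketched compactly. The following Python fragment is an illustration only, with invented membership functions and set points rather than the authors' controller:

```python
# Minimal fuzzy-controller sketch for a ventilation decision (membership
# functions and thresholds are assumptions for this example).

def mu_hot(temp_c, low=25.0, high=35.0):
    """Ramp membership: 0 below `low`, 1 above `high`."""
    return min(max((temp_c - low) / (high - low), 0.0), 1.0)

def mu_humid(rh, low=60.0, high=90.0):
    return min(max((rh - low) / (high - low), 0.0), 1.0)

def fan_speed(temp_c, rh):
    # Rule: IF hot AND humid THEN run fans; AND is modelled as min().
    activation = min(mu_hot(temp_c), mu_humid(rh))
    return activation * 100.0          # defuzzify to a 0-100% duty cycle

print(fan_speed(30.0, 75.0))   # 50.0: partially hot and humid
print(fan_speed(36.0, 95.0))   # 100.0: fully activated
```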
- Published
- 2021
15. Array languages make neural networks fast
- Author
- Sven-Bodo Scholz, Artjoms Šinkarovs, Hans-Nikolai Vießmann, and T. Meng Low
- Subjects
Computer Science - Machine Learning (cs.LG), Computer Science - Programming Languages (cs.PL), Source lines of code, Artificial neural network, Computer science, Python (programming language), Convolutional neural network, Computer engineering, High-level programming language, Software Science, Compiler, Implementation
- Abstract
Modern machine learning frameworks are complex: they are typically organised in multiple layers, each of which is written in a different language, and they depend on a number of external libraries, but at their core they mainly consist of tensor operations. As array-oriented languages provide perfect abstractions for implementing tensor operations, we consider a minimalistic machine learning framework that is shallowly embedded in an array-oriented language, and we study its productivity and performance. We do this by implementing a state-of-the-art Convolutional Neural Network (CNN) and comparing it against implementations in TensorFlow and PyTorch, two state-of-the-art industrial-strength frameworks. It turns out that our implementation is 2 and 3 times faster, respectively, even after fine-tuning TensorFlow and PyTorch for our hardware, a 64-core GPU-accelerated machine. The size of all three CNN specifications is the same, about 150 lines of code. Our mini framework is 150 lines of highly reusable, hardware-agnostic code that does not depend on external libraries. The compiler for the host array language automatically generates parallel code for a chosen architecture. The key to such a balance between performance and portability lies in the design of the array language; in particular, the ability to express rank-polymorphic operations concisely, yet still optimise across them. This design builds on very few assumptions, and it is readily transferable to other contexts, offering a clean approach to high-performance machine learning.
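The point about whole-array abstractions can be illustrated with a small example. The sketch below uses Python with NumPy (not the authors' array language, which the abstract leaves unnamed): a convolution written as a couple of array operations, with no element-level loops in user code.

```python
# Why array abstractions suit CNNs: a convolution is a few whole-array
# operations (illustrative sketch; requires NumPy >= 1.20).

import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution written with array operations only."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # Gather all kh x kw windows as one array, then contract with the kernel.
    windows = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
    return np.einsum('ijkl,kl->ij', windows[:h, :w], kernel)

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0          # mean filter
print(conv2d(image, kernel).shape)      # (3, 3)
```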
- Published
- 2021
16. High-level language brain regions are sensitive to sub-lexical regularities
- Author
- Affourtit J, Regev Ti, Leon Bergen, Schipper Ae, Evelina Fedorenko, Kyle Mahowald, and Xiang Chen
- Subjects
Comprehension, High-level programming language, Computer science, Phonology, Artificial intelligence, Sentence, Natural language processing, Word, Language network
- Abstract
A network of left frontal and temporal brain regions supports ‘high-level’ language processing, including the processing of word meanings as well as word-combinatorial processing, across presentation modalities. This ‘core’ language network has been argued to store our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about sounds (phonemes) and how they combine to form clusters, syllables, and words. Is this knowledge of phoneme combinatorics also represented in these language regions? Across five fMRI experiments, we investigated the sensitivity of high-level language processing brain regions to sub-lexical linguistic sound patterns by examining responses to diverse nonwords, sequences of sounds/letters that do not constitute real words (e.g., punes, silory, flope). We establish robust responses in the language network to visually (Experiment 1a, n=605) and auditorily (Experiments 1b, n=12, and 1c, n=13) presented nonwords relative to baseline. In Experiment 2 (n=16), we find stronger responses to nonwords that obey the phoneme-combinatorial constraints of English. Finally, in Experiment 3 (n=14) and a post-hoc analysis of Experiment 2, we provide suggestive evidence that the responses in Experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that knowledge of phoneme combinatorics and representations of sub-lexical linguistic sound patterns are stored within the same fronto-temporal network that stores higher-level linguistic knowledge and supports word and sentence comprehension.
- Published
- 2021
17. Taichi
- Author
- Frédo Durand, Jonathan Ragan-Kelley, Luke Anderson, Yuanming Hu, and Tzu-Mao Li
- Subjects
Computer science, Sparse grid, Data structure, Computer Graphics and Computer-Aided Design, Finite element method, Convolution, Computational science, Rendering (computer graphics), CUDA, Multigrid method, High-level programming language, Path tracing, Compiler, General-purpose computing on graphics processing units, Sparse matrix
- Abstract
3D visual computing data are often spatially sparse. To exploit such sparsity, people have developed hierarchical sparse data structures, such as multi-level sparse voxel grids, particles, and 3D hash tables. However, developing and using these high-performance sparse data structures is challenging, due to their intrinsic complexity and overhead. We propose Taichi, a new data-oriented programming language for efficiently authoring, accessing, and maintaining such data structures. The language offers a high-level, data structure-agnostic interface for writing computation code. The user independently specifies the data structure. We provide several elementary components with different sparsity properties that can be arbitrarily composed to create a wide range of multi-level sparse data structures. This decoupling of data structures from computation makes it easy to experiment with different data structures without changing computation code, and allows users to write computation as if they are working with a dense array. Our compiler then uses the semantics of the data structure and index analysis to automatically optimize for locality, remove redundant operations for coherent accesses, maintain sparsity and memory allocations, and generate efficient parallel and vectorized instructions for CPUs and GPUs. Our approach yields competitive performance on common computational kernels such as stencil applications, neighbor lookups, and particle scattering. We demonstrate our language by implementing simulation, rendering, and vision tasks including a material point method simulation, finite element analysis, a multigrid Poisson solver for pressure projection, volumetric path tracing, and 3D convolution on sparse grids. Our computation-data structure decoupling allows us to quickly experiment with different data arrangements, and to develop high-performance data structures tailored for specific computational tasks. With 1/10th as many lines of code, we achieve 4.55× higher performance on average, compared to hand-optimized reference implementations.
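A flavor of this decoupling, sketched with Taichi's published Python front end (details may vary across versions):

```python
# The data structure is declared separately from the computation that
# runs over it (illustrative sketch based on Taichi's Python API).
import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(dtype=ti.f32)
# Two-level sparse grid: 64x64 pointer blocks of 16x16 dense cells.
ti.root.pointer(ti.ij, 64).dense(ti.ij, 16).place(x)

@ti.kernel
def scale(s: ti.f32):
    for i, j in x:        # struct-for visits only the active (allocated) cells
        x[i, j] *= s

x[3, 7] = 1.0             # activates one block; the rest stays sparse
scale(2.0)
print(x[3, 7])            # 2.0
```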
- Published
- 2019
18. MakeCode and CODAL: Intuitive and efficient embedded systems programming for education
- Author
- James Devine, Michal Moskal, Thomas Ball, Peli de Halleux, Steve Hodges, and Joe Finney
- Subjects
Computer science, Domain (software engineering), Set (abstract data type), Software, Web application, Firmware, Computer Graphics and Computer-Aided Design, Microcontroller, Hardware and Architecture, High-level programming language, Embedded system, Barriers to entry, Range (computer programming)
- Abstract
Historically, embedded systems development has been a specialist skill, requiring knowledge of low-level programming languages, complex compilation toolchains, and specialist hardware, firmware, device drivers and applications. However, it has now become commonplace for a broader range of non-specialists to engage in the making (design and development) of embedded systems, including educators seeking to motivate and excite their students in the classroom. This diversity brings its own set of unique requirements, and the complexities of existing embedded systems development platforms introduce insurmountable barriers to entry. In this paper we present the motivation, requirements, implementation, and evaluation of a new programming platform that enables novice users to create effective and efficient software for embedded systems. The platform has two major components: (1) Microsoft MakeCode (www.makecode.com), a web app that encapsulates an accessible IDE for microcontrollers; and (2) CODAL, an efficient component-oriented C++ runtime for microcontrollers. We show how MakeCode and CODAL combine to provide an accessible, cross-platform, installation-free, high-level programming experience for embedded devices without sacrificing performance and efficiency.
- Published
- 2019
19. A Proposed Mobile Based Payment System (Quickpay) for Nigerian Universities
- Author
- Alo Uzoma Rita, Igwe Joseph Sunday, Achi Ifeanyi Isaiah, Odegwo Ifeanyi, and Agwu Chukwuemeka Odi
- Subjects
Authentication, Multidisciplinary, Database, Computer science, Payment system, Payment, Security token, High-level programming language, User identifier, Mobile payment, Mobile technology
- Abstract
Objectives: In this research work, we propose and design a QR-code, 2FA-authentication-based mobile payment system (Quickpay) for Nigerian universities, with a focus on Ebonyi State University, Abakaliki. The Quickpay mobile payment system will aid students in paying all the necessary fees required by the university. Methods and Analysis: We employed object-oriented analysis and design, utilizing the Unified Modelling Language (UML) to analyze and model the new system. The methodology then guided the use of a capable high-level programming tool (the PHP programming language) to develop the software (the Quickpay mobile app) after detailed system analysis and design. Findings: The security model used in the design of the mobile payment solution is seamlessly integrated into the system to ensure authorized access. This model deploys a 2FA authentication system at the initial stage of verification and an SMS token (a request for an SMS code and User ID) at the second stage of verification to guarantee user access to the system. On successful authentication, the user scans the QR code with a phone and sends an SMS using a USSD code; for payment, the user clicks to generate an invoice in the form of a QR code and uses the mobile-generated invoice to complete the payment transaction. Application Improvements: Unlike the QR-code systems deployed in most mobile payment solutions, Quickpay is designed to interface seamlessly with the university portal while utilizing several layers of possession-factor authentication. These include an SMS module that generates a security token for authentication and the 2FA authentication system. Keywords: Mobile Payment, Mobile Technology, QR code, 2FA Authentication, USSD Code
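The two-stage verification flow can be sketched in a few lines. The fragment below is an illustrative Python model (Quickpay itself is built in PHP, and every identifier and secret here is invented):

```python
# Toy two-stage (password + SMS token) verification; not production code.

import hmac, hashlib, secrets

USERS = {"EBSU/2019/001": hashlib.sha256(b"password123").hexdigest()}
SMS_TOKENS = {}

def stage1_login(user_id, password):
    digest = hashlib.sha256(password.encode()).hexdigest()
    if hmac.compare_digest(USERS.get(user_id, ""), digest):
        SMS_TOKENS[user_id] = f"{secrets.randbelow(10**6):06d}"  # sent via SMS
        return True
    return False

def stage2_verify(user_id, sms_code):
    expected = SMS_TOKENS.pop(user_id, None)
    return expected is not None and hmac.compare_digest(expected, sms_code)

if stage1_login("EBSU/2019/001", "password123"):
    code = SMS_TOKENS["EBSU/2019/001"]            # in reality, delivered by SMS
    print(stage2_verify("EBSU/2019/001", code))   # True
```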
- Published
- 2019
20. [ANT]: A Machine Learning Approach for Building Performance Simulation: Methods and Development
- Author
- Ahmed Mohamed Yousef Toutou and Mahmoud Abdelrahman
- Subjects
Computer science, Machine learning, Building performance simulation, Architecture, Rhino3d, Plug-in, Cluster analysis, City planning, Model selection, Usability, Python (programming language), High-level programming language, Unsupervised learning, Design process, Grasshopper, scikit-learn, Artificial intelligence
- Abstract
In this paper, we present an approach for combining machine learning (ML) techniques with building performance simulation by introducing four methods in which ML can be effectively involved in this field: classification, regression, clustering, and model selection. The Rhino3d/Grasshopper SDK was used to develop a new plugin that brings machine learning into the design process using the Python programming language and the scikit-learn module, a Python module which offers a general-purpose, high-level interface to nonspecialist users by integrating a wide range of supervised and unsupervised learning algorithms with high performance, ease of use, and well-documented features. The ANT plugin makes these modules usable inside Rhino/Grasshopper so that they are handy for designers. The tool is open source and is released under the simplified BSD license. The approach shows promising results in using data to automate building performance development and could be widely applied. Future studies include providing parallel computation using the PyOpenCL module, as well as computer vision integration using scikit-image.
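The Regression method can be illustrated with plain scikit-learn. In the sketch below, the design features and the simulated target are invented stand-ins for real simulation samples:

```python
# Fit a scikit-learn model on (design parameters -> simulated performance)
# samples, then score it on held-out designs. Data here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: window-to-wall ratio, orientation (deg), shading depth (m)
X = rng.uniform([0.1, 0.0, 0.0], [0.9, 360.0, 1.5], size=(200, 3))
# stand-in for a simulated target, e.g. annual cooling load
y = 50 + 80 * X[:, 0] - 10 * X[:, 2] + rng.normal(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))   # R^2 of the surrogate on held-out designs
```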
- Published
- 2019
21. Application of Custom Macro B high level CNC programming language in a five-axis milling machine for drilling holes distributed in axi-symmetric working planes
- Author
- G. Guerrero-Vaca, O. Rodriguez-Alabanda, and Pablo E. Romero
- Subjects
Flexibility (engineering), Computer science, Programming language, Industrial and Manufacturing Engineering, Software, Group technology, Artificial Intelligence, High-level programming language, Macro, Scope (computer science), Parametric programming
- Abstract
This paper describes a specific application of the Custom Macro B high-level language within the scope of CNC programming. This type of programming language allows working with parametric part programs based on group technology, through which it is possible to save programming time and costs; it also facilitates setup work on the machines, implying fewer NC files, of smaller size, in the memory of the CNC equipment. Small spring bars are commonly used to attach the belt to a watchcase, and the necessary drilling operation on the part has the particularity that it is impossible to make the holes perpendicular to the plane to be drilled. CAD-CAM solutions exist for this specific task, allowing inclined drilling to be programmed on 4-axis and 5-axis milling machines. The proposed solution is the best and most flexible choice for this specific case; it has been developed, simulated, and evaluated for different part designs, showing a remarkable improvement in flexibility and programming time with respect to two different CAD-CAM software tools commonly used to solve this 5-axis CNC programming task.
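The parametric idea is that one program computes every hole position from a few arguments instead of storing one NC file per design. A rough Python rendering of the geometry (the parameter names and the plane convention are assumptions, not the paper's macro):

```python
# Compute hole positions evenly distributed on a circle lying in an
# inclined (axi-symmetric) working plane; illustrative geometry only.

import math

def hole_positions(n_holes, pitch_radius, plane_angle_deg):
    tilt = math.radians(plane_angle_deg)
    for k in range(n_holes):
        theta = 2 * math.pi * k / n_holes
        x = pitch_radius * math.cos(theta)
        y = pitch_radius * math.sin(theta) * math.cos(tilt)
        z = pitch_radius * math.sin(theta) * math.sin(tilt)
        yield round(x, 3), round(y, 3), round(z, 3)

for pos in hole_positions(n_holes=4, pitch_radius=10.0, plane_angle_deg=30.0):
    print(pos)
```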
- Published
- 2019
22. Off-chain Execution and Verification of Computationally Intensive Smart Contracts
- Author
- Gaurav Panwar, Emrah Sariboz, Kartick Kolachala, Roopa Vishwanathan, and Satyajayant Misra
- Subjects
Cryptocurrency, Cryptography and Security (cs.CR), Computer science, High-level programming language, Computation, Distributed computing, Blockchain
- Abstract
We propose a novel framework for off-chain execution and verification of computationally intensive smart contracts. Our framework is the first solution that avoids duplication of computing effort across multiple contractors, does not require trusted execution environments, supports computations that do not have deterministic results, and supports general-purpose computations written in a high-level language. Our experiments reveal that some intensive applications may require as much as 141 million gas, approximately 71x more than the current block gas limit for computation in Ethereum today; this can be avoided by utilizing the proposed framework.
- Published
- 2021
23. Metamorphic Edge Processor Simulation Framework Using Flexible Runtime Partial Replacement of Software-Embedded Verilog RTL Models
- Author
- Sejong Oh, Daejin Park, and Jisu Kwon
- Subjects
Speedup, Finite impulse response, Computer science, Processor design, Verilog Procedural Interface, Software, High-level programming language, Embedded system, Verilog, Register-transfer level implementation
- Abstract
Iterative register-transfer level (RTL) simulation is essential for edge processor design, but RTL simulation becomes significantly slower in a system where various RTL models are intricately integrated. In this paper, we propose a novel metamorphic edge processor simulation framework that partitions out the software part and virtualizes it in a system emulator, removing it from full RTL simulation. The system emulator, which is written in a high-level language, and the Verilog simulation have different abstraction levels, so a Verilog procedural interface (VPI) module is plugged into the Verilog simulator to connect with the virtual layer interface. In the system emulator, a Verilog RTL simulation session corresponding to a specific parameter set can be dynamically loaded at runtime, providing metamorphism through flexible, parameter-driven partial replacement of RTL models. We applied the proposed framework to a finite impulse response (FIR) filter, successfully demonstrating it and achieving simulation speedup for the given parameters.
- Published
- 2021
24. Domain-Specific Language Abstractions for Compression
- Author
- Shoaib Kamil, Jessica Ray, Richard Y. Wang, Vivienne Sze, Ajay Brahmakshatriya, Albert Reuther, and Saman Amarasinghe
- Subjects
Set (abstract data type), Domain-specific language, Computer science, High-level programming language, Programming language, Block (programming), Optimizing compiler, Implementation, Data compression
- Abstract
Little attention has been given to language support for block-based compression algorithms, despite their high implementation complexity. Current implementations have to deal both with the intricacies of the algorithm itself and with the low-level optimizations necessary for generating fast code. However, many block-based compression algorithms share a common structure in terms of their data representations, data partitioning operations, and data traversals. In this work, we propose a set of high-level language abstractions that can succinctly capture this structure. These abstractions provide the building blocks for the development of a domain-specific language and an associated optimizing compiler. With compression-specific language support, researchers can focus on algorithm development rather than low-level implementation details.
- Published
- 2021
25. An Experience with Code-Size Optimization for Production iOS Mobile Applications
- Author
- Jin Lin, Milind Chabbi, and Raj Barik
- Subjects
Swift, Computer science, Pipeline (software), Software, High-level programming language, Operating system, Machine code, Language construct
- Abstract
Modern mobile application binaries are bulky for many reasons: software and its dependencies, fast-paced addition of new features, high-level language constructs, and statically linked platform libraries. Reduced application size is critical not only for the end-user experience but also for vendors' download size limitations. Moreover, download size restrictions may impact revenues for critical businesses. In this paper, we highlight some of the key reasons for code-size bloat in iOS mobile applications, specifically apps written in a mix of Swift and Objective-C. Our observations reveal that machine code sequences systematically repeat throughout an app's binary. We highlight source-code patterns and high-level language constructs that lead to an increase in code size. We propose whole-program, fine-grained machine-code outlining as an effective optimization to constrain code-size growth. We evaluate the effectiveness of our new optimization pipeline on the UberRider iOS app used by millions of customers daily. Our optimizations reduce the code size by 23%. The impact of our optimizations on the code size grows in magnitude over time as the code evolves. For a set of performance spans defined by the app developers, the optimizations do not statistically regress production performance. We applied the same optimizations to Uber's UberDriver and UberEats apps and gained 17% and 19% size savings, respectively.
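Machine-code outlining itself is simple to model. The toy Python sketch below finds one repeated instruction sequence and replaces each occurrence with a call to an outlined copy (a string-level caricature of what the paper's pipeline does on real machine code):

```python
# Toy model of outlining: factor a repeated instruction sequence into one
# outlined function and replace each occurrence with a call.

from collections import Counter

def outline(program, window=3):
    seqs = Counter(tuple(program[i:i + window])
                   for i in range(len(program) - window + 1))
    candidate, count = seqs.most_common(1)[0]
    if count < 2:
        return program, None
    out, i = [], 0
    while i < len(program):
        if tuple(program[i:i + window]) == candidate:
            out.append("call OUTLINED_0")      # one call replaces the sequence
            i += window
        else:
            out.append(program[i])
            i += 1
    return out, list(candidate) + ["ret"]

prog = ["ldr", "add", "str", "mov", "ldr", "add", "str", "b"]
new_prog, outlined = outline(prog)
print(new_prog)    # ['call OUTLINED_0', 'mov', 'call OUTLINED_0', 'b']
print(outlined)    # ['ldr', 'add', 'str', 'ret']
```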
- Published
- 2021
26. Software Tools for the PicoBlaze Softcore Microprocessor
- Author
- Uwe Meyer-Baese
- Subjects
Programming language, Computer science, Data type, PicoBlaze, Microprocessor, Software, High-level programming language, Architecture, Programmer
- Abstract
This chapter gives an overview of the popular PicoBlaze's internal architecture from a programmer's perspective only. In particular, based on the KCPSM6 architecture, the complete instruction set is discussed, as well as all the steps to develop a program from the C level, the assembler level, or the ISS view. Unlike the previous chapter, this chapter does not require detailed HDL knowledge.
- Published
- 2021
27. High-Level Languages for Geospatial Analysis of Big Data
- Author
- Sami Faiz and Symphorien Monsia
- Subjects
Geospatial analysis, Computer science, Big data, Data science, High-level programming language
- Abstract
In recent years, big data has become a major concern for many organizations. An essential component of big data is the spatio-temporal dimension known as geospatial big data, which designates the application of big data issues to geographic data. One of the major aspects of (geospatial) big data systems is the data query language (i.e., a high-level language) that allows non-technical users to interact with these systems easily. In this chapter, the researchers explore high-level languages, focusing in particular on the spatial extensions of Hadoop for geospatial big data queries. The main objective is to examine three open-source and popular implementations of SQL on Hadoop intended for querying geospatial big data: (1) Pigeon of SpatialHadoop, (2) QLSP of Hadoop-GIS, and (3) ESRI Hive of GIS Tools for Hadoop. Along the same lines, the authors present their current research toward the analysis of geospatial big data.
- Published
- 2021
28. A Multi-engine Aspect-Oriented Language with Modeling Integration for Video Game Design
- Author
- Shane L. Kavage and Ben J. Geisler
- Subjects
Domain-specific language, Computer science, Game programming, Code reuse, System programming, Game design, High-level programming language, Scripting language, Software engineering, Video game design
- Abstract
Video game programming is a diverse, multi-faceted endeavor involving elements of graphics programming, systems programming, UI, HCI, and other software engineering disciplines. Game programmers typically employ a new codebase per software artifact, which often means a unique choice of game engine and scripting language. Non-portable code is exacerbated by the lack of a shared language and of translation utilities between languages. Meanwhile, many game programming tasks recur time and time again. Aspect-oriented programming was largely developed to help software engineers decouple tasks while maintaining software reuse. GAMESPECT is a language that promotes software reuse through aspects while also providing a platform for translating software artifacts, which has enabled it to be used in multiple game engines across multiple projects. Code reuse on these projects has been high, and our methodology can be summarized by discussing three tenets of GAMESPECT: 1) composition specifications, which define source-to-source translation properties; 2) pluggable aspect interpreters; and 3) high-level language constructs and modeling language constructs (MDAML) that encourage designer-friendly terminology. By comparing accuracy, efficiency, pluggability, and modularity, these three tenets are shown to be effective in creating a new game programming language.
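The cross-cutting idea can be miniaturized in Python (an analogy only; GAMESPECT has its own syntax and weaving machinery): a logging concern is woven around gameplay functions without editing them.

```python
# Decorator-as-aspect: weave a cross-cutting concern (logging) around a
# join point without modifying the gameplay function itself.

import functools

def aspect(advice):
    def weave(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            advice("before", func.__name__, args)
            result = func(*args, **kwargs)
            advice("after", func.__name__, result)
            return result
        return wrapper
    return weave

def damage_log(phase, name, payload):
    print(f"[{phase}] {name}: {payload}")

@aspect(damage_log)
def apply_damage(target, amount):
    return f"{target} takes {amount} damage"

apply_damage("orc", 7)   # logging happens without touching gameplay code
```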
- Published
- 2021
29. Implementation of an Intent Layer for SDN-enabled and QoS-Aware Network Slicing
- Author
- Molka Gharbaoui, Piero Castoldi, and Barbara Martini
- Subjects
Flexibility (engineering), Software deployment, Computer science, High-level programming language, Quality of service, Network service, Software engineering, Network slicing, Personalization
- Abstract
Vertical industries are attracted by the flexibility and customization offered by network operators through network slicing, which lets them run their application platforms on virtual infrastructure assets tailored to their needs. On the other hand, in addition to specifying the desired virtual capacity and component network functions in the slice, verticals may also want to specify connectivity and QoS slice requirements in abstract terms, without dealing with the details of network service descriptors and the technicalities of the underlying infrastructure. In this work, we propose an intent-based, QoS-aware, and SDN-enabled slicing implementation that allows customized specification and establishment of network slices for flexible delivery of vertical services. The aim is to simplify and automate the deployment of network slices through a declarative approach, while freeing verticals from technology-specific, low-level networking directives. The paper focuses on the implementation of the intent layer using an ETSI NFV MANO platform and presents preliminary experimental results assessing the feasibility and effectiveness of our approach.
- Published
- 2021
30. Hardware Implementation of Floating Point Matrix Inversion Modules on FPGAs
- Author
- V Lekshmi, J. Manikandan, Sudhakar S, and Chetan S
- Subjects
Floating point, Computer science, Image processing, Video processing, Matrix (mathematics), Compressed sensing, High-level programming language, Model-based design, Field-programmable gate array, Computer hardware
- Abstract
Matrices are employed in diverse applications such as image processing, control systems, video processing, radar signal processing, compressive sensing, and many more. Finding the inverse of a large-scale floating-point matrix is considered computationally intensive, and its hardware implementation is still a research topic. FPGA implementation of four different floating-point matrix inversion algorithms, using a novel combination of high-level language programming and model-based design, is proposed in this paper. The proposed designs can compute the inverse of a floating-point matrix up to a size of 25×25 and can easily be scaled to larger matrices. The performance evaluation of the proposed matrix inversion modules is carried out through hardware implementation on a Zynq-7000 FPGA based ZED board, and the results are reported.
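For reference, one of the standard inversion algorithms is easy to state in a high-level language. The sketch below shows Gauss-Jordan elimination with partial pivoting in Python/NumPy; the abstract does not list the paper's four algorithms, so this is illustrative rather than their implementation:

```python
# Gauss-Jordan inversion with partial pivoting (illustrative sketch).

import numpy as np

def gauss_jordan_inverse(a):
    n = a.shape[0]
    aug = np.hstack([a.astype(float), np.eye(n)])         # [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))   # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]             # swap rows
        aug[col] /= aug[col, col]                         # normalize pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]      # eliminate column
    return aug[:, n:]                                     # [I | A^-1]

a = np.array([[4.0, 7.0], [2.0, 6.0]])
print(np.allclose(gauss_jordan_inverse(a) @ a, np.eye(2)))   # True
```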
- Published
- 2020
31. Enabling High-Level Programming Languages on IoT Devices
- Author
- Ioana Culic, Teona Severin, and Alexandru Radovici
- Subjects
Multimedia, Computer science, Wearable computer, Smartwatch, High-level programming language, The Internet
- Abstract
Nowadays, the Internet of Things (IoT) is no longer a novel and ambiguous phrase. As IoT technologies such as smartwatches improve the quality of everyday life, people gradually rely more and more on connected devices. However, despite the exponential increase in popularity, the technologies used for connecting everyday devices and embedded computers to the Internet are still limited. While the range of programming languages and runtimes for web and desktop applications keeps widening, IoT development tools lack diversity. In this context, this paper aims to adapt a popular programming language for building Internet of Things applications. D is an accessible and widely used programming language that is not currently designed for the IoT domain. By adapting D to run on constrained devices, we aim to make IoT prototyping more accessible.
- Published
- 2020
32. Cephalopode: A custom processor aimed at functional language execution for IoT devices
- Author
- Jules Saget, Carl-Johan H. Seger, and Jeremy Pope
- Subjects
Functional programming, Software, High-level programming language, Computer science, Internet of Things, Software engineering, Vulnerability (computing)
- Abstract
The Internet of Things (IoT) conceives a future where "things" are interconnected by means of suitable information and communication technologies. Unfortunately, recent events have demonstrated the high vulnerability of IoT. One of the main reasons for this is the use of low-level programming languages. The Octopi project is developing technologies to easily and securely program IoT devices using functional high-level languages. Unfortunately, a traditional implementation of a modern functional language running on traditional hardware is very resource demanding, so much so that few, if any, IoT devices can run it. In the Cephalopode project (a subproject of Octopi) we are exploring the implementation of a very low-power hardware device directly aimed at running a high-level functional language. By moving many resource-heavy tasks into dedicated hardware, we aim to create an execution engine for IoT devices that will allow secure programming.
- Published
- 2020
33. Survey of Deep Learning Neural Networks Implementation on FPGAs
- Author
- El Hadrami Cheikh Tourad and Mohsine Eleuldj
- Subjects
Artificial neural network, Computer science, Deep learning, Hardware description language, Cloud computing, Python (programming language), Computer architecture, High-level programming language, Artificial intelligence, Field-programmable gate array
- Abstract
Recent work in deep learning has shown that FPGAs (Field-Programmable Gate Arrays) play a significant role in accelerating DLNNs (Deep Learning Neural Networks). The initial specification of a DLNN is usually done in a high-level language such as Python, followed by a manual transformation to an HDL (Hardware Description Language) for synthesis using a vendor tool. This transformation is tedious and needs HDL expertise, which limits the relevance of FPGAs. This paper presents an updated survey of the existing frameworks for mapping DLNNs onto FPGAs, comparing their characteristics, architectural choices, and achieved performance. In addition, we provide a comprehensive evaluation of the different tools and their effectiveness for mapping DLNNs onto FPGAs. Finally, we outline future work.
- Published
- 2020
34. Programming microcontrollers through high-level abstractions
- Author
- Benoît Vaugon, Emmanuel Chailloux, Basile Pesin, and Steven Varoumas
- Subjects
Programming Languages (cs.PL), Computer science, Programming language, Concurrency, Extensibility, Microcontroller, Embedded software, Virtual machine, High-level programming language, Electronic component, Abstraction
- Abstract
In this paper, we present an approach for programming microcontrollers that provides more expressivity and safety than the low-level language approach traditionally used to program such devices. To this end, we provide various abstraction layers (abstraction of the microcontroller, of the electronic components of the circuit, and of concurrency) which, while being adapted to the scarce resources of the hardware, offer high-level programming traits for the development of embedded applications. The presented abstractions make use of an OCaml virtual machine able to run on devices with limited resources, and take advantage of the expressivity and extensibility of the language. We illustrate the value of our work with both entertainment applications and embedded software examples.
- Published
- 2020
35. Combining the Functional and Object-Oriented Paradigms in the FOBS-X Scripting Language
- Author
- James Gil de Lamadrid
- Subjects
Domain-specific language, Programming language, Computer science, Specification language, Language primitive, Hybrid, scripting, functional, object-oriented, Scripting language, High-level programming language, Macro
- Abstract
The language FOBS-X (Extensible FOBS) is described. FOBS-X is an interpreted language, intended as a universal scripting language. An interesting feature of the language is its ability to be extended, allowing it to be adapted to new scripting environments. The interpretation process is structured as a core-language parser back-end and a macro-processor front-end. The macro processor allows the language syntax to be modified. A configurable library is used to help modify the semantics of the language, adding the capabilities required for interacting with a new scripting environment. This paper focuses on the macro capability of the language. A macro extension to the language, called the standard extension, has been developed to give FOBS-X a friendlier syntax. It also serves as a convenient tool for demonstrating the macro expansion process.
- Published
- 2020
- Full Text
- View/download PDF
36. Fast Prototyping of a Deep Neural Network on an FPGA
- Author
-
HyeGang Jun and Wonjong Kim
- Subjects
Hardware architecture ,Fpga design ,Artificial neural network ,Computer science ,High-level programming language ,business.industry ,Embedded system ,Code (cryptography) ,Field-programmable gate array ,business ,Virtual platform ,Software implementation - Abstract
This paper describes a prototyping methodology for implementing deep neural network (DNN) models in hardware. Starting from a DNN model developed in the C or C++ programming language, we develop a hardware architecture using an SoC virtual platform and verify the functionality on an FPGA board. This demonstrates the viability of using FPGAs to accelerate specific applications written in a high-level language. Using the High-Level Synthesis tools provided by Xilinx [3], it is shown to be possible to implement an FPGA design that runs the inference calculations required by the MobileNetV2 [1] deep neural network. With minimal alterations to the C++ code developed for a software implementation of MobileNetV2, HDL code could be synthesized directly from the original C++ code, dramatically reducing the complexity of the project. Consequently, when the design was implemented on an FPGA, a speedup of more than 5x was realized compared to similar processors (ARM7).
- Published
- 2020
37. Resource Utilization Comparison between Plain FPGA and SoC Combined with FPGA for Image Processing Applications Used by Robotic Arms
- Author
-
Aurel Gontean and Rol Szabo
- Subjects
Software ,business.industry ,High-level programming language ,Computer science ,Embedded system ,Robot ,Image processing ,business ,Field-programmable gate array ,Resource management (computing) ,Robotic arm ,Implementation - Abstract
This paper presents a comparison of two FPGA implementations for controlling a robotic arm with image processing. Image processing is useful in the robotics industry, since it allows robots to be made more autonomous, reducing human intervention in production as much as possible. Image processing is also known to drain considerable resources from the control system, so a comparison of several implementations can help choose the most suitable one. One implementation is the classic FPGA approach, where everything is built from scratch; the other is the more advanced one, where a microprocessor architecture is instantiated on the FPGA, a graphical operating system is installed, and the control software is written in a high-level programming language.
- Published
- 2020
38. High level programming abstractions for leveraging hierarchical memories with micro-core architectures
- Author
-
Maurice Jamieson and Nick Brown
- Subjects
FOS: Computer and information sciences ,MicroBlaze ,Computer Networks and Communications ,Computer science ,02 engineering and technology ,Theoretical Computer Science ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,computer.programming_language ,Computer Science - Programming Languages ,Memory hierarchy ,Artificial neural network ,020206 networking & telecommunications ,Hardware accelerators ,Python (programming language) ,Runtime environments ,Parallel programming languages ,Computer architecture ,Computer Science - Distributed, Parallel, and Cluster Computing ,Hardware and Architecture ,High-level programming language ,Interpreters ,020201 artificial intelligence & image processing ,Distributed, Parallel, and Cluster Computing (cs.DC) ,computer ,Neural networks ,Software ,Programming Languages (cs.PL) - Abstract
Micro-core architectures combine many low-memory, low-power computing cores together in a single package. These are attractive for use as accelerators, but due to limited on-chip memory and multiple levels of memory hierarchy, the way in which programmers offload kernels needs to be carefully considered. In this paper we use Python as a vehicle for exploring the semantics and abstractions of higher-level programming languages to support the offloading of computational kernels to these devices. By moving to a pass-by-reference model, along with leveraging memory kinds, we demonstrate the ability to easily and efficiently take advantage of multiple levels in the memory hierarchy, even ones that are not directly accessible to the micro-cores. Using a machine learning benchmark, we perform experiments on both Epiphany-III and MicroBlaze based micro-cores, demonstrating the ability to compute with data sets of arbitrarily large size. To provide context for our results, we explore the performance and power efficiency of these technologies, demonstrating that whilst these two micro-core technologies are competitive within their own embedded class of hardware, there is still a way to go to reach HPC-class GPUs. (Accepted manuscript of paper in Journal of Parallel and Distributed Computing 138.)
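The pass-by-reference idea can be sketched compactly: instead of copying a whole data set into a core's small on-chip RAM, the kernel receives a handle and the runtime streams chunks through the memory hierarchy. The decorator, the `Reference` wrapper, and the memory-kind label below are invented for illustration; they are not the paper's actual API.

```python
# Conceptual sketch only: the offload decorator and memory-kind names are
# assumptions made for illustration, not the paper's framework.
import numpy as np

class Reference:
    """Pass-by-reference handle: the data stays in a larger memory level
    (labelled by 'kind') and is streamed to the micro-core in chunks, rather
    than being copied wholesale into the core's tiny on-chip RAM."""
    def __init__(self, array, kind="host-ddr", chunk=256):
        self.array, self.kind, self.chunk = array, kind, chunk

    def chunks(self):
        for i in range(0, len(self.array), self.chunk):
            yield self.array[i:i + self.chunk]

def offload(kernel):
    """Stand-in for launching 'kernel' on a micro-core: Reference arguments
    are streamed chunk by chunk, so data sets larger than on-chip memory can
    still be processed."""
    def launch(ref, *args):
        partial = 0.0
        for chunk in ref.chunks():        # only one chunk resident at a time
            partial += kernel(chunk, *args)
        return partial
    return launch

@offload
def sum_of_squares(chunk):
    return float(np.sum(chunk * chunk))

data = Reference(np.arange(1_000_000, dtype=np.float32), kind="host-ddr")
print(sum_of_squares(data))
```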
- Published
- 2020
39. Toward OpenACC-enabled GPU-FPGA Accelerated Computing
- Author
-
Makito Abe, Masayuki Umemura, Kohji Yoshikawa, Ryohei Kobayashi, Norihisa Fujita, and Yoshiki Yamaguchi
- Subjects
Moore's law ,010308 nuclear & particles physics ,Computer science ,Computation ,media_common.quotation_subject ,Supercomputer ,01 natural sciences ,Computer architecture ,High-level programming language ,0103 physical sciences ,Key (cryptography) ,Hardware acceleration ,Field-programmable gate array ,010303 astronomy & astrophysics ,media_common - Abstract
Field-programmable gate arrays (FPGAs) have garnered significant interest in research on high-performance computing because their computation and communication capabilities have drastically improved in recent years, thanks to advances in semiconductor integration technologies that rely on Moore's Law. These improvements make it feasible to offload, on the fly, computations at which CPUs/GPUs perform poorly to FPGAs, while keeping data movement low-latency. We believe this concept is key to improving the performance of heterogeneous supercomputers that use accelerators such as the GPU. In this paper, we propose a GPU-FPGA-accelerated simulation based on this concept and show preliminary results.
- Published
- 2020
40. Acceleration of Simulation Models Through Automatic Conversion to FPGA Hardware
- Author
-
Daniel Jung, Frans Skarman, Mattias Krysander, and Oscar Gustafsson
- Subjects
010302 applied physics ,business.industry ,Computer science ,02 engineering and technology ,01 natural sciences ,Data type ,020202 computer hardware & architecture ,Dynamic programming ,Acceleration ,High-level programming language ,High-level synthesis ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Verilog ,business ,Field-programmable gate array ,computer ,Computer hardware ,computer.programming_language - Abstract
By running simulation models on FPGAs, their execution speed can be significantly improved, at the cost of increased development effort. This paper describes a project to develop a tool which converts simulation models written in high-level languages into fast FPGA hardware. The tool currently converts code written using custom C++ data types into Verilog. A model of a hybrid electric vehicle is used as a case study, and the resulting hardware runs significantly faster than on a general-purpose CPU.
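One way such a converter can work is to have the model written against custom numeric types that carry the format information a hardware generator needs. The Python class below only mimics that idea (the paper's tool uses custom C++ data types); the bit widths and the example model fragment are assumptions.

```python
# Illustrative sketch, not the paper's tool: a custom numeric type carrying the
# fixed-point format information a hardware generator would need.
class Fixed:
    """Fixed-point value with an explicit integer/fractional bit split, so a
    code generator could map arithmetic onto fixed-width Verilog signals."""
    def __init__(self, value, int_bits=8, frac_bits=8):
        self.int_bits, self.frac_bits = int_bits, frac_bits
        self.raw = round(value * (1 << frac_bits))   # stored as a scaled integer

    def to_float(self):
        return self.raw / (1 << self.frac_bits)

    def __add__(self, other):
        assert (self.int_bits, self.frac_bits) == (other.int_bits, other.frac_bits)
        out = Fixed(0, self.int_bits, self.frac_bits)
        out.raw = self.raw + other.raw               # maps to one hardware adder
        return out

    def __mul__(self, other):
        out = Fixed(0, self.int_bits, self.frac_bits)
        out.raw = (self.raw * other.raw) >> self.frac_bits  # rescale the product
        return out

# A simulation-model fragment written against the custom type: because every
# operation goes through Fixed, a tool could record the operations and widths
# and emit equivalent Verilog instead of executing them.
a, b = Fixed(1.5), Fixed(2.25)
print((a * b + a).to_float())   # 4.875
```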
- Published
- 2020
41. CausaLM: Causal Model Explanation Through Counterfactual Language Models
- Author
-
Uri Shalit, Roi Reichart, Nadav Oved, and Amir Feder
- Subjects
FOS: Computer and information sciences ,Counterfactual thinking ,Linguistics and Language ,Computer Science - Machine Learning ,Computer science ,Computer Science - Artificial Intelligence ,Machine learning ,computer.software_genre ,Language and Linguistics ,Machine Learning (cs.LG) ,Adversarial system ,Artificial Intelligence ,Correlation does not imply causation ,Representation (mathematics) ,Causal model ,Computer Science - Computation and Language ,business.industry ,Computer Science Applications ,Artificial Intelligence (cs.AI) ,High-level programming language ,Key (cryptography) ,Language model ,Artificial intelligence ,business ,Computation and Language (cs.CL) ,computer - Abstract
Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all machine learning based methods, they are only as good as their training data, and can also capture unwanted biases. While there are tools that can help understand whether such biases exist, they do not distinguish between correlation and causation, and might be ill-suited for text-based models and for reasoning about high-level language concepts. A key problem in estimating the causal effect of a concept of interest on a given model is that this estimation requires the generation of counterfactual examples, which is challenging with existing generation technology. To bridge that gap, we propose CausaLM, a framework for producing causal model explanations using counterfactual language representation models. Our approach is based on fine-tuning of deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem. Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance. A byproduct of our method is a language representation model that is unaffected by the tested concept, which can be useful in mitigating unwanted bias ingrained in the data. Our code and data are available at https://amirfeder.github.io/CausaLM/. (Accepted for publication in the Computational Linguistics journal.)
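A conceptual sketch of the final estimation step may help: once a counterfactual representation model exists, the causal effect of a concept can be estimated by comparing predictions made from the original and counterfactual representations. Obtaining the counterfactual model (adversarial fine-tuning of BERT) is far beyond this snippet, so both models below are invented stand-ins and the numbers are synthetic.

```python
# Conceptual sketch only: 'model' and 'counterfactual_model' are stand-in
# callables returning class probabilities; CausaLM's actual counterfactual
# model comes from adversarial fine-tuning of a BERT-style encoder.
import numpy as np

rng = np.random.default_rng(0)

def model(reps):                 # stand-in: probabilities from the original model
    return 1 / (1 + np.exp(-(reps @ np.array([0.9, 0.4]))))

def counterfactual_model(reps): # stand-in: a representation blind to dimension 0,
    return 1 / (1 + np.exp(-(reps @ np.array([0.0, 0.4]))))  # the tested concept

# Each row: [concept-of-interest signal, other signal] for one document.
representations = rng.normal(size=(1000, 2))

# Treatment-effect-style estimate: average change in predicted probability
# when the concept is "removed" from the representation.
effect = np.mean(np.abs(model(representations) - counterfactual_model(representations)))
print(f"estimated causal effect of concept: {effect:.3f}")
```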
- Published
- 2020
42. Towards High Performance, Portability, and Productivity: Lightweight Augmented Neural Networks for Performance Prediction
- Author
-
Ajitesh Srivastava, Rajgopal Kannan, Naifeng Zhang, and Viktor K. Prasanna
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Speedup ,Computer Science - Performance ,Artificial neural network ,Computer science ,Optimizing compiler ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Machine Learning (cs.LG) ,Performance (cs.PF) ,Software portability ,Computer engineering ,High-level programming language ,0202 electrical engineering, electronic engineering, information engineering ,Performance prediction ,020201 artificial intelligence & image processing ,Compiler ,computer ,Compile time - Abstract
Writing high-performance code requires significant expertise in the programming language, compiler optimizations, and hardware knowledge. This often leads to poor productivity and portability, and is inconvenient for a non-programmer domain specialist such as a physicist. More desirable is a high-level language where the domain specialist simply specifies the workload in terms of high-level operations (e.g., matrix-multiply(A, B)), and the compiler identifies the best implementation fully utilizing the heterogeneous platform. For creating a compiler that supports productivity, portability, and performance simultaneously, it is crucial to predict the performance of the various available implementations (variants) of the dominant operations (kernels) contained in the workload on various hardware, to decide (a) which variant should be chosen for each kernel in the workload, and (b) on which hardware resource the variant should run. To enable this performance prediction, we propose lightweight augmented neural networks for arbitrary combinations of kernel, variant, and hardware. A key innovation is utilizing the mathematical complexity of the kernels as a feature to achieve higher accuracy. These models are compact, which reduces training time and allows fast inference during compile time and run time. Using models with fewer than 75 parameters, and only 250 training data instances, we are able to obtain accurate performance predictions, significantly outperforming traditional feed-forward neural networks on 48 kernel-variant-hardware combinations. We further demonstrate that our variant-selection approach can be used in Halide implementations to obtain up to 1.7x speedup over the Halide auto-scheduler.
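The augmentation idea can be sketched in a few lines: feed the network an analytic complexity feature (here, a GEMM-style FLOP count) alongside a hardware feature, and train a tiny regressor on log runtimes. Everything below is synthetic and the feature choices are assumptions, not the paper's models.

```python
# Minimal sketch of the idea, not the paper's models: predict a kernel variant's
# runtime from a hardware feature augmented with the kernel's mathematical
# complexity. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 250                                    # the paper reports ~250 training instances

size = rng.integers(64, 1024, n)           # matrix dimension of a GEMM-like kernel
peak = rng.choice([0.5, 2.0, 8.0], n)      # device peak throughput (TFLOP/s), 3 devices
flops = 2.0 * size.astype(float) ** 3      # complexity feature: GEMM FLOP count

X = np.column_stack([np.log(flops), np.log(peak)])          # augmented features
y = np.log(flops / (peak * 1e12)) + rng.normal(0, 0.05, n)  # log runtime + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)  # ~33 params
net.fit(X_tr, y_tr)
print(f"R^2 on held-out kernels: {net.score(X_te, y_te):.3f}")
```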
- Published
- 2020
43. High-level language and thought
- Author
-
Rob Ellis and Glyn W. Humphreys
- Subjects
Structure (mathematical logic) ,Connectionism ,Grammar ,Principle of compositionality ,High-level programming language ,Computer science ,media_common.quotation_subject ,Verb ,Set (psychology) ,Linguistics ,Natural language ,media_common - Abstract
Noam Chomsky, in 1959, made a critical and influential attack on the idea that human language might be an example of a stimulus-response relationship, associatively learned by children. Training was carried out using a set of 700 verbs, in which the frequency of occurrence of words in the language was varied by changing the number of times each verb was included in one run through the training set. Natural language has a well-defined syntactic structure which forms the basis of its “compositional semantics”. The rules of grammar specify how the elements of language can be combined, and thereby define the set of legal sequences of symbols or expressions. Fodor and Pylyshyn argue that connectionist networks cannot develop representations that have the combinatorial structure necessary for language processing. If this is the case, connectionist networks cannot be the basis of complete accounts of cognition.
- Published
- 2020
44. A Grid for Multidimensional and Multivariate Spatial Representation and Data Processing
- Author
-
Tobias Stål and Anya M. Reading
- Subjects
Computer science ,0207 environmental engineering ,02 engineering and technology ,Library and Information Sciences ,computer.software_genre ,01 natural sciences ,010305 fluids & plasmas ,Regular grid ,0103 physical sciences ,Spatial model ,Multivariate processing ,Python ,Geophysics, Remote sensing ,020701 environmental engineering ,computer.programming_language
Researchers use 2D and 3D spatial models of multivariate data of differing resolutions and formats. It can be challenging to work with multiple datasets, and it is time consuming to set up a robust, performant grid to handle such spatial models. We share ‘agrid’, a Python module which provides a framework for containing multidimensional data and functionality for working with those data. The module provides methods for defining the grid, data import, visualisation, processing, and export. To facilitate reproducibility, the grid can point to original data sources and provides support for structured metadata. The module is written in an intelligible high-level programming language and uses well-documented libraries such as NumPy, xarray, dask and rasterio. Funding statement: This research was supported under Australian Research Council’s Special Research Initiative for Antarctic Gateway Partnership (Project ID SR140300001).
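The core pattern can be sketched with xarray directly: define one regular target grid, then resample each heterogeneous source dataset onto it so that all layers stay aligned for multivariate processing. This is only a sketch of the idea using libraries agrid builds on; the coordinates, variable names, and extent are invented and the snippet does not reproduce agrid's actual API.

```python
# Sketch of the shared-grid idea (not agrid's API). Requires numpy, xarray,
# and scipy (used internally by xarray's interp).
import numpy as np
import xarray as xr

# Target regular grid: 1 km spacing over an invented projected extent.
x = np.arange(0, 100_000, 1_000)
y = np.arange(0, 80_000, 1_000)
grid = xr.Dataset(coords={"x": x, "y": y})

# A coarse source dataset (e.g., a 10 km geophysical model) on its own coords.
xc = np.arange(0, 100_000, 10_000)
yc = np.arange(0, 80_000, 10_000)
coarse = xr.DataArray(
    np.random.default_rng(0).normal(size=(len(yc), len(xc))),
    coords={"y": yc, "x": xc}, dims=("y", "x"), name="heat_flux",
)

# Resample onto the shared grid; further variables can be added the same way,
# keeping all layers aligned on one spatial model.
grid["heat_flux"] = coarse.interp(x=grid.x, y=grid.y)
grid.attrs["source"] = "synthetic example"   # structured metadata travels with the data
print(grid)
```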
- Published
- 2020
45. Copy-and-Patch Compilation: A fast compilation algorithm for high-level languages and bytecode
- Author
-
Fredrik Kjolstad and Haoran Xu
- Subjects
FOS: Computer and information sciences ,Computer Science - Programming Languages ,Computer science ,Construct (python library) ,computer.software_genre ,Metaprogramming ,Bytecode ,High-level programming language ,Code (cryptography) ,Binary code ,Code generation ,Compiler ,Software_PROGRAMMINGLANGUAGES ,Safety, Risk, Reliability and Quality ,computer ,Algorithm ,Software ,Programming Languages (cs.PL) - Abstract
Fast compilation is important when compilation occurs at runtime, such as query compilers in modern database systems and WebAssembly virtual machines in modern browsers. We present copy-and-patch, an extremely fast compilation technique that also produces good quality code. It is capable of lowering both high-level languages and low-level bytecode programs to binary code, by stitching together code from a large library of binary implementation variants. We call these binary implementations stencils because they have holes where missing values must be inserted during code generation. We show how to construct a stencil library and describe the copy-and-patch algorithm that generates optimized binary code. We demonstrate two use cases of copy-and-patch: a compiler for a high-level C-like language intended for metaprogramming and a compiler for WebAssembly. Our high-level language compiler has negligible compilation cost: it produces code from an AST in less time than it takes to construct the AST. We have implemented an SQL database query compiler on top of this metaprogramming system and show that on TPC-H database benchmarks, copy-and-patch generates code two orders of magnitude faster than LLVM -O0 and three orders of magnitude faster than higher optimization levels. The generated code runs an order of magnitude faster than interpretation and 14% faster than LLVM -O0. Our WebAssembly compiler generates code 4.9X-6.5X faster than Liftoff, the WebAssembly baseline compiler in Google Chrome. The generated code also outperforms Liftoff's by 39%-63% on the Coremark and PolyBenchC WebAssembly benchmarks.
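The core mechanism is simple enough to caricature in a few lines: code generation reduces to copying a pre-compiled template and writing values into known hole offsets. Real copy-and-patch stencils are machine-code fragments whose holes are derived from relocations; the toy byte format below is invented so the copy-then-patch step can run in pure Python.

```python
# Toy illustration of the stencil idea, not the paper's implementation: the
# "binary code" here is a fake fixed-width format (1-byte opcode + 4-byte hole).
import struct

HOLE = 0xDEADBEEF  # placeholder written into the stencil where a value belongs

def make_stencil(opcode):
    """A 'compiled' fragment plus the byte offsets of its holes."""
    buf = struct.pack("<BI", opcode, HOLE)
    holes = [1]                      # the operand hole starts at byte 1
    return buf, holes

def copy_and_patch(stencil, holes, values):
    """Code generation: memcpy the stencil, then overwrite each hole in place.
    No parsing or optimization happens at generation time, which is why this
    approach is fast."""
    out = bytearray(stencil)
    for off, val in zip(holes, values):
        out[off:off + 4] = struct.pack("<I", val)
    return bytes(out)

PUSH_CONST, ADD = 0x01, 0x02
push, push_holes = make_stencil(PUSH_CONST)

# Emit code for "push 2; push 40; add" by stitching patched stencils together.
code = (copy_and_patch(push, push_holes, [2])
        + copy_and_patch(push, push_holes, [40])
        + copy_and_patch(*make_stencil(ADD), [0]))
print(code.hex())
```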
- Published
- 2020
- Full Text
- View/download PDF
46. Enabling system wide shared memory for performance improvement in PyCOMPSs applications
- Author
-
Clément Foyer, Jorge Ejarque, Adrian Tate, Rosa M. Badia, Simon McIntosh-Smith, Javier Conejero, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Barcelona Supercomputing Center, and Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions
- Subjects
Source lines of code ,Computer science ,Distributed computing ,Parallel programming ,Libraries ,Parallel programming (Computer science) ,02 engineering and technology ,Programació en paral·lel (Informàtica) ,Data management ,Tools ,Shared memory ,Memory ,NumPy ,0202 electrical engineering, electronic engineering, information engineering ,Decorator pattern ,Task ,Informàtica::Arquitectura de computadors [Àrees temàtiques de la UPC] ,computer.programming_language ,Metadata ,020203 distributed computing ,Parallel processing (Electronic computers) ,Processament en paral·lel (Ordinadors) ,Python (programming language) ,Distributed memory ,Runtime ,High-level programming language ,High-level programming languages ,Task analysis ,Programming paradigm ,020201 artificial intelligence & image processing ,computer ,Python - Abstract
Python has been gaining traction for years in the world of scientific applications. However, the high-level abstraction it provides may not allow the developer to use machines to their peak performance. To address this, multiple strategies, sometimes complementary, have been developed to enrich the software ecosystem, either by relying on additional libraries dedicated to efficient computation (e.g., NumPy) or by providing a framework to better use HPC-scale infrastructures (e.g., PyCOMPSs). In this paper, we present a Python extension based on SharedArray that enables support for system-provided shared memory and its integration into the PyCOMPSs programming model, as an example of integration into a complex Python environment. We also evaluate the impact such a tool may have on performance in two types of distributed execution flows, one for linear algebra with a blocked matrix-multiplication application and the other in the context of data clustering with a k-means application. We show that, with very little modification of the original decorator of the task-based application (3 lines of code), the gain in performance can rise above 40% for tasks relying heavily on data reuse in a distributed environment, especially when loading the data is prominent in the execution time. This work was partly funded by the EXPERTISE project (http://www.msca-expertise.eu/), which has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 721865. BSC authors have also been supported by the Spanish Government through contracts SEV2015-0493 and TIN2015-65316-P, and by Generalitat de Catalunya through contract 2014-SGR-1051.
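The underlying mechanism can be shown with only the standard library: the paper builds on the SharedArray package and integrates it into PyCOMPSs tasks, but the same zero-copy idea works with `multiprocessing.shared_memory`, as the sketch below illustrates.

```python
# Standard-library sketch of system-provided shared memory for NumPy arrays;
# this illustrates the mechanism, not the paper's SharedArray/PyCOMPSs code.
from multiprocessing import Process, shared_memory
import numpy as np

def worker(shm_name, shape, dtype):
    # Attach to the existing segment: no copy, no (de)serialization of the array.
    shm = shared_memory.SharedMemory(name=shm_name)
    block = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    block *= 2                     # the task works directly on the shared data
    shm.close()

if __name__ == "__main__":
    a = np.arange(1_000_000, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
    shared = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf)
    shared[:] = a                  # one copy in; workers then reuse it in place

    p = Process(target=worker, args=(shm.name, a.shape, a.dtype))
    p.start(); p.join()
    print(shared[:3])              # [0. 2. 4.] -- the worker's update is visible

    shm.close(); shm.unlink()      # release the system-wide segment
```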
- Published
- 2020
- Full Text
- View/download PDF
47. A Smart Farm – An Introduction to IoT for Generation Z
- Author
-
Lakshmi Prayaga, Andrew Hart, Aaron Wade, and Chandra Prayaga
- Subjects
Multimedia ,Computer science ,business.industry ,Smart device ,Context (language use) ,computer.software_genre ,Automation ,law.invention ,Bluetooth ,law ,High-level programming language ,Scalability ,Table (database) ,The Internet ,business ,computer - Abstract
This paper describes an Internet of Things (IoT) project within the context of an inexpensive tabletop model of a smart farm. The purpose of the project is to use advances in technology, including Bluetooth, the Internet and micro sensors, to design an inexpensive model of a smart farm. The project brings ideas related to IoT and smart cities within the reach of the common man. Advances in technology and the simplicity of high-level programming languages also make it possible for anyone who is interested to easily build a smart connected system that can be communicated with, using inexpensive components and a simple program. This project demonstrates one such novel application of a smart farm, using a micro:bit, a humidity sensor and WiFi to monitor the moisture levels of plants; depending on a set threshold, the program sends an appropriate message to the owner of the plants. The use of a very simple microcontroller (the micro:bit) makes the project accessible to students at college and even school level. This technology is not only scalable, but also expandable to other domains, such as inventory management, power systems, etc. The project has broad applicability and fits in with recent trends in resource planning and allocation.
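A hedged sketch of the kind of program the project describes, written for the micro:bit's MicroPython runtime, is shown below. The pin assignment, the threshold value, and the use of the micro:bit radio (the paper relays notifications via WiFi) are assumptions, not the authors' actual code.

```python
# Illustrative MicroPython sketch for the BBC micro:bit; pin, threshold, and
# messaging are assumptions made for this example.
from microbit import pin0, display, sleep
import radio

THRESHOLD = 400          # assumed ADC threshold for "soil too dry" (range 0-1023)

radio.on()               # simple broadcast; a gateway could relay this over WiFi

while True:
    moisture = pin0.read_analog()        # moisture sensor wired to pin 0
    if moisture < THRESHOLD:
        display.show("!")                # local warning on the LED matrix
        radio.send("water the plants")   # notify the owner via the gateway
    else:
        display.clear()
    sleep(60_000)                        # re-check once per minute
```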
- Published
- 2020
48. MicroPython as a Development Platform for IoT Applications
- Author
-
Michal Kuba, Gabriel Gaspar, Juraj Dudak, Peter Fabo, and Eduard Nemlaha
- Subjects
Computer science ,business.industry ,010401 analytical chemistry ,STM32 ,020206 networking & telecommunications ,02 engineering and technology ,Python (programming language) ,01 natural sciences ,0104 chemical sciences ,Microcontroller ,Software portability ,Computer architecture ,High-level programming language ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Internet of Things ,business ,computer ,computer.programming_language - Abstract
The paper focuses on the possibilities of using a high-level language, Python, in the development of IoT applications. It describes the basic features of the MicroPython implementation and its use for peripheral development on the STM32 microcontroller platform, as well as the possibilities for modifying and extending the standard implementation. Also discussed are code portability, modularity, flexibility, scalability and expandability, which are significant for MicroPython applications.
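To give a flavour of the peripheral development the paper discusses, here is a generic MicroPython sketch using the portable `machine` API. The pin identifiers ("C13", "A0") and peripheral numbering are board-specific assumptions and may need adjusting for a particular STM32 board.

```python
# Generic MicroPython sketch of STM32 peripheral access via the portable
# 'machine' API; pin names are board-specific assumptions.
from machine import Pin, ADC, I2C
import time

led = Pin("C13", Pin.OUT)       # on-board LED on many STM32 boards (assumed)
sensor = ADC(Pin("A0"))         # analog-capable input pin (assumed name)

i2c = I2C(1)                    # hardware I2C peripheral 1
print("I2C devices:", [hex(a) for a in i2c.scan()])

while True:
    value = sensor.read_u16()   # 0-65535, portable across MicroPython ports
    led.value(value > 30_000)   # thresholded indicator
    time.sleep_ms(500)
```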
- Published
- 2020
49. High-level Programming via Generalized Planning and LTL Synthesis
- Author
-
Sasha Rubin, Blai Bonet, Hector Geffner, Fabio Patrizi, and Giuseppe De Giacomo
- Subjects
Reasoning about Actions ,Computer science ,Programming language ,0102 computer and information sciences ,02 engineering and technology ,Artificial Intelligence ,Knowledge Representation ,computer.software_genre ,01 natural sciences ,010201 computation theory & mathematics ,High-level programming language ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,computer - Abstract
We look at program synthesis where the aim is to automatically synthesize a controller that operates on data structures and from which a concrete program can be easily derived. We do not aim at a fully-automatic process or tool that produces a program meeting a given specification of the program’s behaviour. Rather, we aim at the design of a clear and well-founded approach for supporting programmers at the design and implementation phases. Concretely, we first show that a program synthesis task can be modeled as a generalized planning problem. This is done at an abstraction level where the involved data structures are seen as black-boxes that can be interfaced with actions and observations, the first corresponding to the operations and the second to the queries provided by the data structure. The abstraction level is high enough to capture intuitive and common assumptions as well as general and simple strategies used by programmers, and yet it contains sufficient structure to support the automated generation of concrete solutions (in the form of controllers). From such controllers and the use of standard data structures, an actual program in a general language like C++ or Python can be easily obtained. Then, we discuss how the resulting generalized planning problem can be reduced to an LTL synthesis problem, thus making available any LTL synthesis engine for obtaining the controllers. We illustrate the effectiveness of the approach on a series of examples.
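To make the notion of a controller operating on a black-box data structure concrete, the Python sketch below hand-writes the kind of finite-state controller the paper targets: it interacts with a list only through an observation (emptiness) and an action (pop), and the same controller works for every input size. The states, observations and actions are invented for illustration; the paper derives such controllers automatically via generalized planning and LTL synthesis.

```python
# Toy illustration of a generalized controller, hand-written rather than
# synthesized; state/observation/action names are assumptions.
def controller(state, observation):
    """Maps (controller state, observation) -> (action, next state).
    A generalized plan like this is independent of the list's length."""
    if observation == "nonempty":
        return "pop", "scan"
    return "stop", "done"

def run(items):
    stack, state, visited = list(items), "scan", []
    while True:
        obs = "nonempty" if stack else "empty"   # query provided by the structure
        action, state = controller(state, obs)
        if action == "stop":
            return visited
        visited.append(stack.pop())              # operation provided by the structure

print(run([1, 2, 3]))   # [3, 2, 1] -- the same controller works for any size
```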
- Published
- 2020
- Full Text
- View/download PDF
50. Leveraging Hybrid Cloud HPC with Multitier Reactive Programming
- Author
-
Christian Bischof, Guido Salvaneschi, Daniel Sokolowski, and Jan-Patrick Lehr
- Subjects
Speedup ,Software deployment ,Computer science ,High-level programming language ,business.industry ,Distributed computing ,Code (cryptography) ,Reactive programming ,Resource allocation ,Cloud computing ,Solver ,business - Abstract
The advent of cloud computing has enabled large-scale availability of on-demand computing and storage resources. However, these benefits are not yet at the fingertips of HPC developers: typical HPC applications use on-premise computing resources and rely on static deployment setups, reliable hardware, and rather homogeneous resources. This hinders (partial) execution in the cloud, even though applications could benefit from scaling beyond on-premise resources and from the variety of hardware available in the cloud to speed up execution. To address this issue, we orchestrate computationally intensive kernels using a high-level programming language that ensures advanced optimization and improves execution flexibility, enabling hybrid cloud/on-premise HPC deployments. Our approach is based on multitier reactive programming, where distributed code is defined within the same compilation unit and computations are placed explicitly using placement types. We adjust placement based on performance characteristics measured before execution, apply our approach to a shortest vector problem (SVP) solver from cryptanalysis, and evaluate it to be effective.
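The measure-then-place step can be sketched dynamically in Python, although the paper expresses placement statically through placement types in a multitier reactive language. The tier names, relative speeds, and the stand-in kernel below are all assumptions made for this sketch.

```python
# Hedged sketch of measurement-driven placement; tiers and kernel are invented.
import time

TIERS = {"on-premise": 1.0, "cloud-gpu": 0.2}   # assumed relative kernel slowdowns

def svp_like_kernel(n, slowdown=1.0):
    # Stand-in for a compute-heavy cryptanalysis kernel.
    time.sleep(0.01 * n * slowdown)

def measure(kernel, arg, tier):
    """Pre-execution profiling run of one kernel on one (simulated) tier."""
    start = time.perf_counter()
    kernel(arg, slowdown=TIERS[tier])
    return time.perf_counter() - start

def place(kernel, arg):
    """Pick the tier with the best measured runtime for this kernel."""
    timings = {tier: measure(kernel, arg, tier) for tier in TIERS}
    return min(timings, key=timings.get)

print("placing kernel on:", place(svp_like_kernel, 3))   # -> cloud-gpu
```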
- Published
- 2020
- Full Text
- View/download PDF