3,027 results on '"json"'
Search Results
152. τJSchema: A Framework for Managing Temporal JSON-Based NoSQL Databases
- Author
-
Brahmia, Safa, Brahmia, Zouhaier, Grandi, Fabio, Bouaziz, Rafik, Hartmann, Sven, editor, and Ma, Hui, editor
- Published
- 2016
- Full Text
- View/download PDF
153. Efficient Management of Data Models in Constrained Systems by Using Templates and Context Based Compression
- Author
-
Berzosa, Jorge, Gardeazabal, Luis, Cortiñas, Roberto, García, Carmelo R., editor, Caballero-Gil, Pino, editor, Burmester, Mike, editor, and Quesada-Arencibia, Alexis, editor
- Published
- 2016
- Full Text
- View/download PDF
154. JSON Patch for Turning a Pull REST API into a Push
- Author
-
Cao, Hanyang, Falleri, Jean-Rémy, Blanc, Xavier, Zhang, Li, Sheng, Quan Z., editor, Stroulia, Eleni, editor, Tata, Samir, editor, and Bhiri, Sami, editor
- Published
- 2016
- Full Text
- View/download PDF
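The entry above builds on the JSON Patch format (RFC 6902), which lets a server push incremental changes instead of clients re-polling whole resources. As a minimal illustrative sketch (not the authors' implementation), a toy patch applier supporting only top-level "add", "replace" and "remove" operations:

```python
import json

# Minimal sketch of applying an RFC 6902-style JSON Patch. This toy version
# handles only top-level object members; real libraries also support nested
# paths, arrays, "copy", "move" and "test" operations.
def apply_patch(doc, patch):
    result = dict(doc)
    for op in patch:
        key = op["path"].lstrip("/")  # top-level paths only in this sketch
        if op["op"] in ("add", "replace"):
            result[key] = op["value"]
        elif op["op"] == "remove":
            del result[key]
    return result

state = {"status": "open", "assignee": None}
patch = [
    {"op": "replace", "path": "/status", "value": "closed"},
    {"op": "add", "path": "/closed_by", "value": "alice"},
]
print(json.dumps(apply_patch(state, patch), sort_keys=True))
# {"assignee": null, "closed_by": "alice", "status": "closed"}
```

Pushing such patch documents over a change feed conveys the same state as repeated full GETs at a fraction of the transfer size.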
155. Describing of elements IO field in a testing computer program
- Author
-
Igor V. Loshkov and Dmitriy I. Loshkov
- Subjects
computer testing, standard, json, object, object properties, parameters of the element, formatting string, io field, Special aspects of education, LC8-6691 - Abstract
The article presents a standard for describing the process of displaying interactive windows on a computer monitor, through which questions are output and answers are input during computer testing [11]. According to the proposed standard, this process is described with a format line containing element names and their parameters, as well as grouping and auxiliary symbols. Program objects are described using elements of the standard; most of these objects create input and output windows on the monitor. The aim of our research was to develop the smallest possible set of standard elements sufficient for testing in mathematics and computer science. The choice of elements was carried out in parallel with the development and testing of the program that uses them, which made it possible to select a sufficiently complete set of elements for testing in the fields mentioned above. Names for the proposed elements were chosen so that, first, they indicate the element's function and, second, they coincide with the names of functionally similar elements in other programming languages. Parameters, their names, assignments and accepted values are proposed for the elements; the same naming principle applies to parameters as to elements. The parameters define object properties: while the elements of the standard create windows, the parameters define properties such as location, size and appearance, and the sequence in which windows are created. All elements of the standard proposed in the article are collected in a table whose columns give the names and functions of these elements.
Inside the table, the elements are grouped row by row into four sets: input elements, output elements, input-output elements, and grouping elements. All parameters are collected in another table whose columns give the names, assignments and values of the parameters; full-width rows in this table indicate which elements each parameter applies to. Necessary explanations of individual items follow each table. An example of using the standard to create an input window for polynomial coefficients is given at the end of the article; it demonstrates the notation's compactness and ease of use. A testing program based on the proposed elements of the standard was written in HTML, JavaScript and PHP and supports testing in mathematics and computer science. The program is available on the website [20] and has been used repeatedly to test students of Moscow State (National Research) University of Civil Engineering. The proposed set of elements and parameters is convenient and does not require the authors of test problems to have a high level of knowledge of programming languages.
- Published
- 2018
- Full Text
- View/download PDF
156. CREATING AN INFORMATIONAL WEBSITE FOR PHYSICS ACADEMIC COURSE: WEB DESIGN SPECIFICS
- Author
-
Іryna A. Slipukhina, Taras V. Gedenach, and Vyacheslav V. Olkhovyk
- Subjects
Web development, web site, web design, server configuration, content management system, graphics software, content, usability testing, WordPress, Joomla, plugin, AJAX, JSON, Physics academic course, educational informational portal, electronic laboratory, Theory and practice of education, LB5-3640 - Abstract
The article analyzes the means and methods of creating an educational informational website for a Physics academic course. The stages of creating the technical specification, designing the main and typical pages of the website, layout, programming, content filling and publication are considered. Libraries, frameworks and the popular WordPress and Joomla CMSes have been analyzed, and usability testing carried out. Features of ready-made tools suitable for the efficient creation of such web applications are considered. The contents of the front-end and back-end components of the given specification, as well as their connection via AJAX, are determined. The features of the WordPress architecture and the location of JSON files for the transmission of structured information are described. An original Student Score plugin for WordPress has been created that allows a teacher and students to manage and view the contents of the e-register, along with plugins for managing electronic laboratory reports and user administration.
- Published
- 2017
157. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application
- Author
-
Marcus D. Hanwell, Wibe A. de Jong, and Christopher J. Harris
- Subjects
Chemistry, Web, Data, Semantic, NWChem, JSON, Information technology, T58.5-58.64, QD1-999 - Abstract
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices, and provides pragmatic solutions to make large and complex data sets more accessible. Existing desktop applications and frameworks were extended to integrate with high-performance computing resources, and command-line tools are offered to automate interaction, connecting distributed teams to the platform on their own terms. The platform was developed openly, with all source code hosted on GitHub and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. It is designed to let teams reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area, and aims to support federated instances that can be customized to particular sites and research. Data is stored using JSON, extending upon previous approaches that used XML, with structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.
- Published
- 2017
- Full Text
- View/download PDF
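The abstract above stores chemical structures as JSON, extending earlier XML approaches, partly because nested data round-trips directly to native types in JavaScript and Python clients. A hedged illustration (the field names below are invented for illustration and are not the actual Open Chemistry schema):

```python
import json

# Hypothetical molecule record (illustrative field names, not the actual
# Open Chemistry schema): coordinates and bonds nest naturally in JSON and
# parse back to native lists/dicts without a parser-specific mapping layer.
water = {
    "formula": "H2O",
    "atoms": {
        "elements": ["O", "H", "H"],
        "coords": [[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]],
    },
    "bonds": [[0, 1], [0, 2]],
}
restored = json.loads(json.dumps(water))  # serialize, then parse back
assert restored == water  # lossless round trip with no schema code
```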
158. Sistem Informasi Akademik untuk Layanan Mahasiswa UMS berbasis Mobile
- Author
-
Hanif Amrullah and Bana Handaga
- Subjects
information system, academic, mobile, android, api, json, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
The use of technology for delivering information in academia is shifting from web technology to mobile technology, which is more flexible, effective and efficient. Growing mobile access to information without serious, well-planned development creates new problems concerning system security and the comfort and satisfaction of using mobile devices, which come in many screen sizes. This research therefore aims to build a native mobile academic information system for the Android operating system, focused on academic services for students at UMS. The system was built with Android Studio as the IDE, Java as the programming language, and a JSON API for data exchange. Direct observation of the existing web-based UMS academic information system was used to gather feature requirements. The research produced a mobile academic information system for UMS student services with features for searching schedules, editing personal data, managing study plans, viewing lecture schedules, viewing academic grades, and changing passwords. Testing with 40 UMS students from several study programs showed that 88% of respondents agreed that the system is easy to use, attractive, useful and needed.
- Published
- 2017
- Full Text
- View/download PDF
159. Cloud Computing Application For Romanian Smes
- Author
-
Pistol Luminiţa, Bucea-Manea Ţoniș Rocsana, and Bucea-Manea Ţoniș Radu
- Subjects
cloud computing, smes, angularjs, json, model of innovation, supply chain, decision tree, Social Sciences - Abstract
The article studies the current economic state of Romanian SMEs and the utility of cloud computing technologies in the process of sustainable open innovation. The study is based on a supply chain adapted for SMEs, on a model of innovation within a networked business environment, and on a decision tree for SMEs starting a new project. Building on these elements, a new framework of cloud computing economics can be developed.
- Published
- 2017
- Full Text
- View/download PDF
160. Systems Integration Using Web Services, REST and SOAP: A Practical Report
- Author
-
GARCIA, C. M. and ABÍLIO, R.
- Subjects
Service-Oriented Architecture, Information Systems Integration, Web Services, JSON, SOAP, Electronic computers. Computer science, QA75.5-76.95 - Abstract
In corporate environments, it is common for several systems to coexist to ease daily activities. The same is true of academic environments, which may be even more heterogeneous, as they host many specialized activities: restaurant, library, academic processes, administrative processes, and computer network services such as email and network authentication. To maintain data consistency across all of these systems, they must be integrated. At the Federal University of Lavras, this integration was carried out using the Simple Object Access Protocol (SOAP) as the communication protocol. During the development of a new system (a mobile application), it was noticed that SOAP is very CPU-intensive and slow, since mobile devices face constraints such as limited connectivity and processing power. Thus, a REST-JSON layer was developed to integrate the mobile application with the integration architecture, benefiting from all the resources the architecture already offered. Through this new layer, the functions of the integration architecture were also exposed over REST, serving other applications without major code changes. Measurements showed that the REST-JSON layer consumes around 73% less data than SOAP. The layer was released and now serves about 5,600 installations of the application, which request the integration around 54,000 times a day.
- Published
- 2017
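The ~73% data reduction reported above follows largely from encoding overhead: a SOAP envelope repeats tags and namespace declarations on every call. A hedged, self-contained comparison (the operation and field names are invented for illustration, not the university's actual service):

```python
import json

# The same logical request encoded as JSON and as a SOAP envelope.
# The XML tags and namespace declarations are fixed per-call overhead,
# which is why the REST-JSON payload comes out much smaller.
json_body = json.dumps({"studentId": 12345})
soap_body = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    "<soap:Body><getStudent><studentId>12345</studentId></getStudent>"
    "</soap:Body></soap:Envelope>"
)
print(len(json_body), len(soap_body))  # the JSON payload is a fraction of the SOAP size
```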
161. Implementation of a simplified Access Control language using JSON
- Author
-
Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Delgado Mercè, Jaime, and Bertran Serrano, Albert
- Abstract
The aim of this project is to develop an access control validator specifically tailored for image resources using a simplified version of XACML (eXtensible Access Control Markup Language) called JACSON, which is based on JSON instead of XML. Building upon the existing JACSON specification produced by a previous project, the validator is responsible for validating requests against access control policies for images, and it returns a response indicating whether access to the resource should be allowed or denied. Additionally, the project includes the development of a demonstration application to showcase the correct implementation of the validator and to propose possible use cases for it.
- Published
- 2023
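As a purely hypothetical sketch of what a JSON-encoded, XACML-like rule and its validator might look like (the real JACSON syntax is defined by the cited project and is not reproduced here), the core idea is a Permit/Deny decision over subject, action and resource attributes:

```python
# Hypothetical JSON-style access-control rule and evaluator, in the spirit
# of an XACML-like language encoded as JSON. All field names are invented
# for illustration; they are not the actual JACSON specification.
policy = {
    "resource": "images/*",
    "action": "view",
    "subject_role": "student",
    "effect": "Permit",
}

def evaluate(policy, request):
    # Deny unless every policy attribute matches the request
    # (the resource pattern is matched by prefix in this sketch).
    if (request["resource"].startswith(policy["resource"].rstrip("*"))
            and request["action"] == policy["action"]
            and request["subject_role"] == policy["subject_role"]):
        return policy["effect"]
    return "Deny"

print(evaluate(policy, {"resource": "images/scan01.jpg",
                        "action": "view", "subject_role": "student"}))
# Permit
```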
162. CAD/CAM Software Integration for Toolpath Application : at Sandvik Coromant
- Author
-
Dhanapal, Karthikeyan, and Ameen, Tariq Aslam Mohamed
- Abstract
In the domain of CAD/CAM, innovative strategies are being devised to transform communication and streamline machining processes. Recent advancements have fueled the need for tailored solutions integrated into CAM software, with a transformative impact on the field. To address this, Sandvik Coromant is currently developing digital machining solutions that improve the standard of manufacturing data by extending its development into different CAM systems. This thesis project focuses on improving the efficiency of tool path generation using the extensive knowledge embodied in CoroPlus® Tool Path, a cloud-based specialized tool path solution. It integrates CoroPlus® Tool Path as a plug-in within Siemens NX and Mastercam, simplifying complex setups and enabling users to create precise tool motion with ease. The project explores the vital role of APIs in seamless CAM interactions and presents a feasible data management framework between the components involved. A program was developed to perform the stated functions through a generic plug-in named CSI (CAM System Integration). A data flow architecture was designed so that each node has a structured data transfer mechanism that facilitates data communication in the plug-in/solution. The possibility of using a STEP file for this data transfer is also discussed. The results of the framework implementation, as well as its challenges and limitations, are discussed with a view to improving the existing integration solution.
- Published
- 2023
163. Ecosistema de Interoperabilidad basado en C++/MicroPython sobre plataformas Raspberry y ESP32
- Author
-
Niño, Jorge Andres, Politi, Marcos, Gulfo, Maximiliano Ezequiel, Laiz, Héctor, Lucangioli, Lucien, and Quiroga, Camilo
- Abstract
This work describes a low-cost solution based on the ESP32 processor and the SX1276 LoRa controller for interoperability of photovoltaic inverters with WiFi interfaces. The system firmware is developed in C++ and MicroPython, and a Python solution for Raspberry Pi is also described. The firmware was tested on a HELTEC platform. The work also presents a PCB hardware design with additional functionality, together with configurable firmware, for handling different kinds of interfaces in PV inverters.
- Published
- 2023
164. Integration of GraphQL data sources into an RDF federation engine
- Author
-
Karlsson, Wiktor
- Abstract
As data interoperability has become increasingly important of late, so has the topic of database query language translation. Being able to translate between different query languages, both old and new, could improve many different systems. One of these systems is HeFQUIN, a federated query engine that primarily focuses on querying many different RDF data sources by sending out SPARQL sub-queries. Extending this engine to include non-RDF data sources, such as GraphQL, would naturally broaden its uses. This thesis presents and implements a generalized approach for translating SPARQL queries, primarily Basic Graph Patterns (BGPs), into GraphQL queries, after which the JSON response received by executing the query is translated back into solution mappings. The performance of the translator implementation is then evaluated using a set of pre-determined, specially designed test queries, structured to cover many different scenarios the proposed translator could face. The evaluation reveals that the translation steps constitute only a comparatively small fraction of the full request to the GraphQL endpoints. Finally, it is concluded that, thanks in part to the generalized nature of the approach, a wide range of differently structured basic graph patterns can be translated, as long as they follow the setup requirements imposed on the BGP and the GraphQL endpoint.
- Published
- 2023
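A toy sketch of the translation direction studied in the thesis above: mapping a basic graph pattern's predicates onto a GraphQL selection set. The `books` root field and the mapping itself are invented for illustration; the thesis defines the actual, far more general translation.

```python
# Toy SPARQL-BGP-to-GraphQL sketch: each triple pattern (subject, predicate,
# object) contributes its predicate as a field in the selection set. Real
# translators must also handle nesting, filters and variable bindings.
bgp = [("?book", "title", "?t"), ("?book", "author", "?a")]

def bgp_to_graphql(bgp, root="books"):
    fields = " ".join(pred for _, pred, _ in bgp)
    return "query { %s { %s } }" % (root, fields)

print(bgp_to_graphql(bgp))
# query { books { title author } }
```

The JSON response to such a query would then be walked to produce one solution mapping (bindings for `?book`, `?t`, `?a`) per returned object.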
165. Prototipo de videojuego 3D procedural
- Author
-
Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, Godoy Balil, Guillem, Doncel Gutiérrez, Gerard, and Zimmermann García, Daniel
- Abstract
This project consists of the development of a 3D video game prototype in Unity. This document shows the development process of the prototype and describes the various techniques we used to that end. Since our knowledge in this field was rather scarce at the beginning, we came across many obstacles, some of which are documented in the text with the aim of better showing our learning progress and that of the video game itself. The document includes a large number of images, models, textures and code fragments, all created entirely by us, as well as descriptions of the tools used to build them. The work therefore extends beyond what a pure computer engineer would do, as it also comprises aspects of graphic design, which are very relevant in this context.
Among the most remarkable features are a procedural scenario generated automatically and randomly on each load; objects on the scenario with their own properties and objectives; a character with a wide range of features, such as an inventory, an animated interface, interactions with other objects, and collisions; height-dependent and animated textures; and saving and restoring scenarios using JSON data structures. In conclusion, this document describes the development process of our first serious Unity project in a way that is accessible to a non-expert audience.
- Published
- 2023
166. Användargränssnitt för systematiskt experimenterandeCoordination_oru
- Author
-
Alkeswani, Maria
- Abstract
Coordination_oru is a research software framework created at Örebro University in Sweden. It is a test framework for a specific robot-coordination algorithm that is being developed further into a simulation platform enabling systematic experiments. In this thesis, an experiment specification is created containing all the parts needed to fully configure systematic experiments for the Coordination_oru framework. The experiment specification is developed to fit a graphical user interface that makes it easier for users to control, modify and manage the system. In addition, users can create and run experiments, adjusting the map, path, robot speed, acceleration, size/shape, color and destination, and view the results. The user interface uses the JSON format to manage the configuration of the experiment specification, and the CSV format to store result data in tabular form.
- Published
- 2023
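A hypothetical example of what a JSON experiment specification of the kind described above might look like. All field names here are invented for illustration; the actual Coordination_oru specification is defined in the thesis.

```python
import json

# Invented experiment-specification shape (not the real Coordination_oru
# format): a map, per-robot kinematic and appearance settings, and an
# output format, all configurable from a GUI and persisted as JSON.
spec = {
    "map": "warehouse.yaml",
    "robots": [
        {"id": 1, "maxSpeed": 1.5, "maxAcceleration": 0.5,
         "footprint": "circle", "color": "#ff0000", "goal": [12.0, 3.5]},
    ],
    "resultsFormat": "csv",
}
text = json.dumps(spec, indent=2)       # what the GUI would save to disk
loaded = json.loads(text)               # what the framework would read back
print(loaded["robots"][0]["maxSpeed"])
# 1.5
```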
167. Desarrollo de una solución basada en vistas de mapa para afrontar caídas en un centro de procesamiento de datos
- Author
-
Valderas Aranda, Pedro José, Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació, Universitat Politècnica de València. Escola Tècnica Superior d'Enginyeria Informàtica, and López Server, Adrián
- Abstract
This document presents the methodologies and procedures employed to develop an application that automates the population of a Configuration Management Database (CMDB). The application is implemented as a microservice designed following the Cloud Native pattern, which allows the solution to be containerized and deployed on Google Cloud Platform through a Kubernetes cluster responsible for its management. The microservice gathers data from different tools, analyzes it, and generates a specific data model in order to expose the information through an API.
- Published
- 2023
168. Benchmarking the request throughput of conventional API calls and gRPC : A Comparative Study of REST and gRPC
- Author
-
Berg, Johan, and Mebrahtu Redi, Daniel
- Abstract
As the demand for better and faster applications increases every year, so does the demand for new communication systems between computers. Today, a common method for computers and software systems to exchange information is the use of REST APIs, but there are cases where more efficient solutions are needed. In such cases, RPC can provide a solution. There are many RPC libraries to choose from, but gRPC is the most widely used today and is said to offer faster and more efficient communication than conventional web-based API calls. The problem investigated in this thesis is that there are few available resources demonstrating how this performance difference translates into request throughput on a server. The purpose of the study is to benchmark the difference in request throughput between conventional API calls (REST) and gRPC, with the goal of providing a basis for better decisions regarding the choice of communication infrastructure between applications. A qualitative research method supported by quantitative data was used to evaluate the results. REST and gRPC servers were implemented in three programming languages, and a benchmarking client was implemented to benchmark the servers and measure request throughput. The benchmarks were conducted on a local network between two hosts. The results indicate that gRPC achieves higher request throughput than REST for larger message payloads; REST initially outperforms gRPC for small payloads but falls behind as the payload size increases. The results can benefit software developers and other stakeholders who strive to make informed decisions regarding communication infrastructure when developing and maintaining applications at scale.
- Published
- 2023
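The client-side methodology of entry 168 (fire requests for a fixed window, count completions, report requests per second) can be sketched as below. This is a minimal in-process illustration, not the thesis's setup: `measure_throughput` and `rest_like_handler` are hypothetical names, and JSON encode/decode stands in for a networked REST call so the payload-size effect is visible.

```python
import json
import time

def measure_throughput(handler, payload, duration_s=0.2):
    """Call handler(payload) repeatedly for duration_s seconds
    and return completed requests per second."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        handler(payload)
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed

def rest_like_handler(payload):
    # Stand-in for a REST round trip: serialization dominates per-request cost.
    return json.loads(json.dumps(payload))

small = {"id": 1}
large = {"id": 1, "data": "x" * 100_000}

rps_small = measure_throughput(rest_like_handler, small)
rps_large = measure_throughput(rest_like_handler, large)
```

Even in-process, throughput drops as payload size grows, which is the axis along which the thesis compares REST and gRPC.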
169. A comparison of web-based REST and GraphQL APIs: A technology-oriented study comparing suitable API technologies for SPV.
- Author
-
Haj Rashid, Kinan
- Abstract
This thesis is based on a project at SPV. It presents the results of a study aimed at evaluating and comparing the performance of different web API technologies and API managers. The objective of this research was to select web API technologies that meet the functional requirements and deploy them on carefully selected API managers in order to draw conclusions about their performance. The study focused on comparing the REST and GraphQL web API technologies and deploying them on two API managers, WSO2 and 3Scale.
Both REST and GraphQL APIs have their own advantages and disadvantages, with the use case and the API's functional requirements determining which one is best. REST APIs are known for their simple implementation and resource management, while GraphQL APIs are better suited to handling complex relationships. The project found that the REST API outperformed the GraphQL API on both API managers. In addition, the results showed that the API manager WSO2 exhibited slightly faster response times than 3Scale for both the REST and GraphQL APIs. The project's results contribute to the scientific understanding of web API technologies and managers and provide valuable insights for future research and development. The documented source code and data, available on GitHub, ensure transparency. Ethical considerations were also addressed, with emphasis on responsible data handling and the protection of user privacy.
- Published
- 2023
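The REST/GraphQL distinction that entry 169 evaluates can be sketched in a few lines: a REST endpoint returns a fixed, full representation, while a GraphQL-style query lets the client name the fields it wants. The data and function names below are invented for illustration; the study's actual servers sat behind the WSO2 and 3Scale managers.

```python
PENSION = {"id": 7, "holder": "Alice", "balance": 1200,
           "history": [{"year": 2022, "amount": 100}]}

def rest_get_pension(_id):
    """REST style: fixed endpoint, whole resource comes back."""
    return dict(PENSION)

def graphql_get_pension(_id, fields):
    """GraphQL style: the client selects fields, avoiding over-fetching."""
    return {f: PENSION[f] for f in fields if f in PENSION}

full = rest_get_pension(7)
partial = graphql_get_pension(7, ["id", "balance"])
```

The trade-off the abstract names follows directly: the REST handler is simpler, while the field-selecting variant pays off when relationships get complex.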
170. A system for preprocessing metrics and an API for a business-process assistance system
- Author
-
Regueiro, Carlos V., Figueira Muñiz, Sandra, Universidade da Coruña. Facultade de Informática, and Alba Sineiro, Julián Jesús
- Abstract
[Abstract]: Today's websites need to process a large amount of data and metrics when building customised reports, or when they need to generate very large tables. These are the issues that this final degree thesis aims to address, with the development of a microservice that acts as an intermediary between database queries and the user's visualisation. To this end, a Java/Spring Boot application will be designed, capable of preprocessing the data and generating customised reports, which will be stored in JSON or CSV files. In addition to the reports having to be generated asynchronously, it will also be necessary for already generated reports to be updated periodically if there are new data or metrics in the database.
The processing operations available in the system will be: different data sources, filtering by property values, filtering by date, selection of object properties, sorting objects by a property, adding titles to objects, and sorting properties within objects.
- Published
- 2023
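The processing operations listed in entry 170 (filter by property value, filter by date, select properties, sort, then emit JSON or CSV) compose naturally as a pipeline. A minimal sketch, with invented data and a hypothetical `build_report` helper; the thesis's service is Java/Spring Boot and also refreshes reports asynchronously, which is omitted here.

```python
import csv
import io
import json
from datetime import date

ROWS = [
    {"metric": "visits", "value": 120, "when": "2023-03-01"},
    {"metric": "visits", "value": 80,  "when": "2023-01-15"},
    {"metric": "errors", "value": 5,   "when": "2023-02-02"},
]

def build_report(rows, metric=None, since=None, keep=None, sort_by=None):
    """Apply the abstract's operations: value filter, date filter,
    property selection, and ordering."""
    out = [dict(r) for r in rows]
    if metric is not None:
        out = [r for r in out if r["metric"] == metric]
    if since is not None:
        out = [r for r in out if date.fromisoformat(r["when"]) >= since]
    if sort_by is not None:
        out.sort(key=lambda r: r[sort_by])
    if keep is not None:
        out = [{k: r[k] for k in keep} for r in out]
    return out

report = build_report(ROWS, metric="visits", since=date(2023, 2, 1),
                      keep=["value", "when"])
as_json = json.dumps(report)          # JSON output file contents

buf = io.StringIO()                   # CSV output file contents
w = csv.DictWriter(buf, fieldnames=["value", "when"])
w.writeheader()
w.writerows(report)
```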
171. RAMSÉS: database-JFLAP import/export system, language management, and backups
- Author
-
Etxeberria Agiriano, Ismael, E.U.I.T. INDUSTRIAL - E I.T. TOPOGRAFIA -VITORIA, GASTEIZKO INGENIARITZAKO U.E., Grado en Ingeniería Informática de Gestión y Sistemas de Información, Kudeaketaren eta Informazio Sistemen Informatikaren Ingeniaritzako Gradua, and Pozo Yubero, Maider
- Abstract
71 p. – Bibliogr.: p. 55-57. This document describes the development of the author's final degree project. The project consists of adding functionality and specifications to one area of RAMSÉS, a web application for simulating abstract machines. It is a more complete academic tool for the second-year course Languages, Computation and Intelligent Systems, covering the functionality of an existing tool, JFLAP. In this project, the option to import any JFLAP or JSON file into the database was developed, as well as the option to export any automaton to JFLAP or JSON. Functionality was also added to tag abstract machines in the database, allowing academic information to be associated with them, such as exercise collections or exam questions. A suitable mechanism was also developed to make RAMSÉS a multi-language tool. Finally, a backup procedure for RAMSÉS was developed to protect academic activities from information loss caused by security incidents. It is worth noting that this project is largely written in the JavaScript programming language on both the client and server sides, supported by technologies such as Node.js, Git, JSON, SVG, CSS3 and HTML5, together with MySQL for data storage and retrieval. The application was also deployed to production on virtual machines on a server, using Docker and Proxmox. RAMSÉS is currently in production on the LSI department's server and is accessible from the Web.
- Published
- 2023
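The JFLAP/JSON export described in entry 171 amounts to serializing one automaton model into two formats: JSON directly, and JFLAP's XML-based .jff files. A minimal sketch with an invented automaton structure; the tag names below follow the .jff convention for finite automata but are a simplification, not RAMSÉS's actual code.

```python
import json
import xml.etree.ElementTree as ET

AUTOMATON = {
    "states": [{"id": 0, "initial": True, "final": False},
               {"id": 1, "initial": False, "final": True}],
    "transitions": [{"from": 0, "to": 1, "read": "a"}],
}

def to_json(automaton):
    return json.dumps(automaton, indent=2)

def to_jflap_xml(automaton):
    """Emit a minimal finite-automaton .jff-style XML document."""
    root = ET.Element("structure")
    ET.SubElement(root, "type").text = "fa"
    aut = ET.SubElement(root, "automaton")
    for s in automaton["states"]:
        st = ET.SubElement(aut, "state", id=str(s["id"]))
        if s["initial"]:
            ET.SubElement(st, "initial")
        if s["final"]:
            ET.SubElement(st, "final")
    for t in automaton["transitions"]:
        tr = ET.SubElement(aut, "transition")
        ET.SubElement(tr, "from").text = str(t["from"])
        ET.SubElement(tr, "to").text = str(t["to"])
        ET.SubElement(tr, "read").text = t["read"]
    return ET.tostring(root, encoding="unicode")

jff = to_jflap_xml(AUTOMATON)
roundtrip = json.loads(to_json(AUTOMATON))
```

Import is the mirror image: parse the XML (or JSON) back into the same dictionary shape before inserting it into the database.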
172. A process for migrating insurance policies from Oracle to MongoDB
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Mayol Sarroca, Enric, Vilaseca Parga, Javier, and Sans Pallarés, Oscar Daniel
- Published
- 2023
173. Implementation of Web Push Notification in an Archive Management Information System Using PushJS
- Author
-
Alam Rahmatulloh, Andi Nur Rachman, and Fahmi Anwar
- Subjects
ajax ,json ,notification api html5 ,pushjs ,webstorage api html5 ,Technology ,Information technology ,T58.5-58.64 - Abstract
Technology continues to evolve, and new kinds of systems keep emerging, such as archive management information systems; the problem is that workers sometimes do other work on the computer, so the archive goes unmonitored. Web Push Notification can display website-based notifications even when the web browser is not open in the foreground or is minimized. Web Push Notification is a notification mechanism implemented with JavaScript in the web browser. The feature is available through the HTML5 Push API, which uses a push service or messaging server to send notifications to subscribed web browsers without the website being open, enabling broadcast messages; the HTML5 Notification API, by contrast, requires no push service or messaging server but does require the website to be open, and it is not yet supported by all web browsers. This paper therefore discusses the implementation of web push notifications in an archive management information system using PushJS; the development method used is the Rational Unified Process (RUP). The notification technology best suited to a web-based archive management information system is the HTML5 Notification API, because it does not send the same notification to every user. However, it has no background process and therefore does not run automatically; this is addressed by using AJAX to fetch JSON and run the check repeatedly in the web browser, while clashes between the web push notification scripts across multiple browser tabs or windows are avoided using localStorage from the HTML5 WebStorage API. The test results show that applying Web Push Notification technology to the Archive Management Information System helps users manage large numbers of archives, and that the use of AJAX affects web access speed.
- Published
- 2019
- Full Text
- View/download PDF
174. DESIGN OF AN ANDROID-BASED ORDERING APPLICATION FOR RESTAURANT X
- Author
-
Bagas Asih Sudharmo, Yoka and Kusuma Wardana, Hartanto
- Subjects
aplikasi ,JSON ,NoSQL ,pemesanan ,android ,self-service - Abstract
Technological advancements and an increasingly competitive environment encourage food and beverage businesses to innovate and create differentiated services. The use of digital menus can increase profits and improve service performance in a business sector, especially in the culinary sector; in this case, the system addresses common problems of the old or traditional approach, namely queuing and ordering errors. This system focuses on using a digital menu on Android devices with a self-service concept for the ordering process. The system is implemented with Java as the frontend and Firebase as a NoSQL database using JSON as the storage format. The system's performance was evaluated with responsiveness and stability tests, with user feedback included to support the test results.
A survey of 30 respondents was conducted to assess the performance, feasibility and user-friendliness of the application; the results show that the application performs quickly, is feasible to use, has clear features, and is easy to use and understand.
- Published
- 2023
175. Deserializing JSON Data in Hadoop
- Author
-
Chen, Shih-Ying, Chen, Hung-Ming, Chen, I-Hsueh, Huang, Chien-Che, Park, James J. (Jong Hyuk), editor, Pan, Yi, editor, Kim, Cheonshik, editor, and Yang, Yun, editor
- Published
- 2015
- Full Text
- View/download PDF
176. Nonintrusive SSL/TLS Proxy with JSON-Based Policy
- Author
-
Jawi, Suhairi Mohd, Ali, Fakariah Hani Mohd, Zulkipli, Nurul Huda Nik, and Kim, Kuinam J., editor
- Published
- 2015
- Full Text
- View/download PDF
177. Route Tracking of Moving Vehicles for Collision Avoidance Using Android Smartphones
- Author
-
Ibrahim, Ehab Ahmed, El Noubi, Said, Aly, Moustafa H., Elleithy, Khaled, editor, and Sobh, Tarek, editor
- Published
- 2015
- Full Text
- View/download PDF
178. Three Levels of R Language Involvement in Global Monitoring Plan Warehouse Architecture
- Author
-
Kalina, Jiří, Hůlek, Richard, Borůvkova, Jana, Jarkovský, Jiří, Klánová, Jana, Dušek, Ladislav, Denzer, Ralf, editor, Argent, Robert M., editor, Schimak, Gerald, editor, and Hřebíček, Jiří, editor
- Published
- 2015
- Full Text
- View/download PDF
179. Inferring Versioned Schemas from NoSQL Databases and Its Applications
- Author
-
Sevilla Ruiz, Diego, Morales, Severino Feliciano, García Molina, Jesús, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Johannesson, Paul, editor, Lee, Mong Li, editor, Liddle, Stephen W., editor, Opdahl, Andreas L., editor, and Pastor López, Óscar, editor
- Published
- 2015
- Full Text
- View/download PDF
180. PRINCIPLES OF BUILDING DATA TAG CLOUDS.
- Author
-
Хараш, К. М., Ольшевська, О. В., and Титуренко, Ж. А.
- Subjects
TAGS (Metadata) ,VISUAL perception ,DATA warehousing ,K-means clustering ,REGRESSION trees ,WEB-based user interfaces - Abstract
Visualization mechanisms for constructing terminological (tag) clouds are considered. JSON, HTML, CSV, XLSX, XML and TXT are examples of the file types and resources involved. The possibilities for extracting and storing input data are analyzed. Studies of similar systems were performed, on the basis of which two optimal file types were selected, namely CSV and TXT. An approach was identified for forming a list of keywords for scholarly publications or for distinguishing the leading topics of different texts. When large collaborative texts need to be handled, such as literary works, scientific articles, judgments, etc., small web applications for building tag clouds are sufficient. K-means tag clouds can effectively identify key concepts, the most commonly used words, and leading concepts. A comparison of the CSV and TXT formats confirmed that processing speed depends more on the amount of input than on the file structure; hence, the choice of one format or the other comes down to user preference. The analysis noted that the CSV format needs a header line that specifies the attributes. For the correctness of further analysis, the attributes must be specified and each successive row of data formed in strict order. This slight structural feature helps the researcher navigate the set of textual information, and in further processing the first line can be ignored. Unlike the previous format, the TXT format does not require a first line of attributes, which complicates the visual perception of the available information. Entering the attributes yourself is not recommended, as it will negatively affect the correctness of the clustering results in later processing. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
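The TXT-versus-CSV distinction in entry 180 is visible in the word-frequency step that underlies any tag cloud: TXT is tokenized directly, while CSV must skip its mandatory header line before counting. A small sketch with invented sample data; real systems would add stop-word lists and the clustering the article discusses.

```python
import csv
import io
from collections import Counter

def tags_from_txt(text, top=3):
    """TXT input: tokenize the raw text and count frequent words."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    return Counter(w for w in words if len(w) > 3).most_common(top)

def tags_from_csv(data, column, top=3):
    """CSV input: the header row names the attributes and is not counted."""
    rows = csv.DictReader(io.StringIO(data))
    return Counter(r[column].lower() for r in rows).most_common(top)

txt = "Tag clouds show leading concepts; tag clouds weight frequent words."
sample_csv = "word\ncloud\ntag\ntag\n"

top_txt = tags_from_txt(txt)
top_csv = tags_from_csv(sample_csv, "word")
```

Both paths reduce to the same frequency table, which matches the article's finding that speed depends on input volume rather than file structure.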
181. WEB TECHNOLOGIES IN THE SMART LIBRARY.
- Author
-
Зінченко, І. І., Шершун, О. О., and Іванова, А. Г.
- Subjects
WEB 2.0 ,HIGHER education research ,INFORMATION services ,COMPUTER software development ,INFORMATION modeling ,ELECTRONIC journals - Abstract
The problem of an incorrect and outdated interpretation of library activity, as well as its positioning in the modern world, is considered. An information model of the future software was compiled, that is, the goal and the means by which the system can operate to achieve this goal. According to the results of the research, in order to systematize the modern view of the library, it was decided to implement the Library 3.0 standard, which is responsible for modernizing the form of library services in the scientific-technical library using technologies such as the semantic web, cloud services and mobile devices. Library 3.0 is the standard responsible for the electronic systematization of services that support the life of the academic library through communication between departments and the user. At the moment, this is the most exciting achievement in the research and development of higher education libraries, as the standard emphasizes context, not just the means of providing information services. A thorough analysis was carried out, on the basis of existing methodologies and approaches, to create the conceptual model of the project, the basic principles of software development, auxiliary parameters and tools for achieving the desired result, and analytical data. A web resource was developed to present the ONAHT Library of Science and Technology on Library 3.0 principles, taking into account the latest Web 2.0 capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
182. Data Aggregation in Microservice Architecture.
- Author
-
Damyanov, Ivo
- Subjects
SOFTWARE architecture ,RELATIONAL databases ,INFORMATION sharing - Abstract
In a microservice architecture, aggregating data collected from different sources is a common task. Today's technology trends require us to exchange data that is no longer tabular. The JSON data format has gained popularity among web developers and has become the main format for exchanging information over the web. When we need to aggregate data collected from the web, storing it in a relational database just to perform this task and pass it to the next unit to process or display is often excessive. In this paper, we discuss a scenario and an implementation of in-memory preprocessing and aggregation of data using lazy evaluation, value tuples and LINQ. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
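The in-memory, lazily evaluated aggregation in entry 182 is built on C# LINQ and value tuples; the same idea can be sketched in Python with generators, which are likewise consumed on demand and avoid materializing an intermediate table. The data source and function names are invented for illustration.

```python
def fetch_orders():
    """Stand-in for records collected lazily from several web sources:
    nothing is produced until the aggregator pulls it."""
    yield {"customer": "a", "total": 10}
    yield {"customer": "b", "total": 7}
    yield {"customer": "a", "total": 5}

def aggregate_by(records, key, value):
    """Group-and-sum in a single pass over the stream, without first
    storing the records in a database or list."""
    acc = {}
    for r in records:  # the generator is consumed lazily, exactly once
        acc[r[key]] = acc.get(r[key], 0) + r[value]
    return acc

totals = aggregate_by(fetch_orders(), "customer", "total")
```

This mirrors LINQ's deferred execution: the query describes the aggregation, and work happens only when the stream is enumerated.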
183. Exporting Diabetic Retinopathy Images from VA VistA Imaging for Research.
- Author
-
Kuzmak, Peter, Demosthenes, Charles, and Maa, April
- Subjects
ALGORITHMS ,DIABETIC retinopathy ,DIAGNOSTIC imaging ,EYE examination ,JAVA programming language ,MEDICAL research ,RETINA ,EMPLOYEES' workload ,IMAGE retrieval ,DICOM (Computer network protocol) ,ELECTRONIC health records - Abstract
The US Department of Veterans Affairs has been acquiring store-and-forward digital diabetic retinopathy surveillance retinal fundus images for remote reading since 2007. There are 900+ retinal cameras at 756 acquisition sites. These images are manually read remotely at 134 sites. A total of 2.1 million studies have been performed in the teleretinal imaging program. The human workload for reading images is rapidly growing. It would be ideal to develop an automated computer algorithm that detects multiple eye diseases, as this would help standardize interpretations and improve the efficiency of the image readers. Deep learning algorithms for the detection of diabetic retinopathy in retinal fundus photographs have been developed, and additional image data are needed to validate this work. To further this research, the Atlanta VA Health Care System (VAHCS) has extracted 112,000 DICOM diabetic retinopathy surveillance images (13,000 studies) that can subsequently be used for the validation of automated algorithms. An extensive amount of associated clinical information was added to the DICOM header of each exported image to facilitate correlation of the image with the patient's medical condition. The clinical information was saved as a JSON object and stored in a single Unlimited Text (VR = UT) DICOM data element. This paper describes the methodology used for this project and the results of applying it. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
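The key move in entry 183 is packing a structured JSON object into a single free-text DICOM element so the clinical context travels with the image. A minimal sketch using a plain dict as a stand-in for the DICOM header; real code would use a DICOM library and write the string into an Unlimited Text (VR = UT) element, and the field name and values below are invented.

```python
import json

def attach_clinical_json(header, clinical):
    """Serialize the clinical record compactly and store it in one
    text-valued header field (stand-in for a VR=UT data element)."""
    header["ClinicalDataUT"] = json.dumps(clinical, separators=(",", ":"))
    return header

def read_clinical_json(header):
    """Recover the structured record from the text field."""
    return json.loads(header["ClinicalDataUT"])

hdr = attach_clinical_json({"Modality": "OP"},
                           {"hba1c": 7.2, "laterality": "R"})
clinical = read_clinical_json(hdr)
```

Because the payload is one self-describing string, downstream tools that ignore the field still handle the image normally, while research pipelines can parse it back into structured data.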
184. Spring Framework Reliability Investigation Against Database Bridging Layer Using Java Platform.
- Author
-
Ginanjar, Arief and Hendayun, Mokhamad
- Subjects
JAVA programming language ,WEB-based user interfaces ,PROGRAMMING languages ,SPRING ,WEB services ,RELIABILITY in engineering - Abstract
Several frameworks can be used to make creating applications easier in the Java programming environment, whether for web applications or desktop applications. Focusing on Java web frameworks, the Spring Framework has been popular since 2004, especially because it can be combined with various other frameworks such as Hibernate, iBATIS (today known as MyBatis) and several others. This research compares the data-loading ability of a web service application built using the Java programming language with the Spring Framework when combined with a database bridging layer such as Java Database Connectivity (JDBC), the Hibernate Framework or the MyBatis Framework, plus the additional data-caching layers provided by Hibernate and MyBatis. The performance test scenario creates a web service in the Spring Framework, which is then accessed by a custom test script built with third-party code and called repeatedly over a fixed time period. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
185. Parametric schema inference for massive JSON datasets.
- Author
-
Baazizi, Mohamed-Amine, Colazzo, Dario, Ghelli, Giorgio, and Sartiani, Carlo
- Abstract
In recent years, JSON established itself as a very popular data format for representing massive data collections. JSON data collections are usually schemaless. While this ensures several advantages, the absence of schema information has important negative consequences as well: data analysts and programmers cannot exploit a schema for a reliable description of the structure of the dataset, the correctness of complex queries and programs cannot be statically checked, and many schema-based optimizations are not possible. In this paper, we deal with the problem of inferring a schema from massive JSON datasets. We first identify a JSON type language which is simple and, at the same time, expressive enough to capture irregularities and to give complete structural information about input data. We then present our contributions, which are the design of a parametric and parallelizable schema inference algorithm, its theoretical study, and its implementation based on Spark, enabling reasonable schema inference time for massive collections. Our algorithm is parametric in that the analyst can specify a parameter determining the level of precision and conciseness of the inferred schema. Finally, we report on an experimental analysis showing the effectiveness of our approach in terms of execution time, conciseness of inferred schemas, and scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
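The core of the inference problem in entry 185 is mapping each JSON value to a type and then fusing the per-record types into one schema. The toy sketch below does this sequentially with a crude union rule for diverging fields; it is only an illustration of the idea, not the paper's parametric, Spark-parallelized algorithm, and all names are invented.

```python
def infer_type(value):
    """Map one JSON value to a structural type description."""
    if isinstance(value, dict):
        return {k: infer_type(v) for k, v in value.items()}
    if isinstance(value, list):
        return [merge_types([infer_type(v) for v in value])]
    return type(value).__name__

def merge_types(types):
    """Fuse types: shared record fields merge recursively, diverging
    atomic types collapse to a union (a crude stand-in for the paper's
    precision/conciseness parameter)."""
    if not types:
        return "empty"
    if not all(isinstance(t, dict) for t in types):
        names = sorted({str(t) for t in types})
        return names[0] if len(names) == 1 else "|".join(names)
    merged = {}
    for t in types:
        for k, v in t.items():
            merged[k] = v if k not in merged else merge_types([merged[k], v])
    return merged

docs = [{"id": 1, "tags": ["a"]}, {"id": 2, "name": "x"}]
schema = merge_types([infer_type(d) for d in docs])
```

Because the merge is associative, records can be typed independently and fused pairwise, which is what makes a map-reduce implementation over Spark natural.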
186. Implementation of a CQRS-Pattern Query Model Using Weather Data.
- Author
-
서보민, 전철호, 전현식, 안세윤, and 박현주
- Subjects
METADATA ,INFORMATION storage & retrieval systems ,DATA warehousing ,SOFTWARE architecture ,INTERNET servers ,HUMIDITY ,METEOROLOGICAL precipitation - Abstract
At a time when large amounts of data are being poured out, software architectures and data storage patterns are changing because workloads are far more read-intensive than write-intensive. Accordingly, in this paper, the query model of the Command Query Responsibility Segregation (CQRS) pattern, which separates the responsibilities of commands and queries, is used to implement an efficient lookup system for high-volume data according to users' requirements. This paper uses 2018 temperature, humidity and precipitation data from the Korea Meteorological Administration Open API, storing about 2.3 billion records in RDBMS (PostgreSQL) and NoSQL (MongoDB) databases. It also compares and analyzes, from the perspective of the implemented web server, the performance of systems with and without the CQRS pattern, the storage performance of each database, and the performance corresponding to the data processing characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
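The CQRS separation in entry 186 can be shown in miniature: commands append raw observations to the write model, while the query model maintains a denormalized read view so lookups do no aggregation at request time. A hypothetical `WeatherStore` sketch; the paper's system backs these two sides with PostgreSQL and MongoDB rather than in-memory structures.

```python
class WeatherStore:
    """Write side records events; read side keeps a precomputed view."""

    def __init__(self):
        self._events = []        # command/write model: append-only log
        self._daily_view = {}    # query/read model: day -> (min, max) temp

    def record(self, day, temp):
        """Command: append the observation and update the read view."""
        self._events.append((day, temp))
        lo, hi = self._daily_view.get(day, (temp, temp))
        self._daily_view[day] = (min(lo, temp), max(hi, temp))

    def daily_range(self, day):
        """Query: O(1) lookup, no scan over the event log."""
        return self._daily_view[day]

store = WeatherStore()
store.record("2018-07-01", 24.0)
store.record("2018-07-01", 31.5)
rng = store.daily_range("2018-07-01")
```

The cost is moved to write time, which is the right trade when reads dominate, as the abstract argues.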
187. Generic input template for cloud simulators: A case study of CloudSim.
- Author
-
Jammal, Manar, Hawilo, Hassan, Kanso, Ali, and Shami, Abdallah
- Subjects
CLOUD computing ,INFORMATION & communication technologies ,COMPUTER software ,MUSIC orchestration ,INFORMATION theory - Abstract
Summary: Cloud computing and its service models, such as Platform as a Service (PaaS), have changed the way that computing resources are allocated to Information and Communications Technology enterprises and users. Although multiple cloud providers support dynamic service provisioning, it is necessary to facilitate the management of the cloud infrastructure and applications in order to allow the continuous refinement of cloud models. Therefore, issues are raised regarding cloud orchestration, including the flexible portability and interoperability of cloud applications among multiple cloud providers. That said, there is a need for a standardized design and management of cloud use cases (during the creation of scenarios, application deployment, and patching) to ensure efficient application migration between different providers. This paper proposes an artifact, GITS, a generic input template for CloudSim and other cloud simulators. GITS can be provided by a PaaS offering to manage the creation, monitoring, administration, and patching of infrastructure and applications in the cloud. GITS defines a cloud schema that can be used with conforming cloud models and independent cloud providers; thus, portability and interoperability can be enabled in PaaS cloud models. GITS focuses on architecture-based modeling of cloud infrastructure and applications, not only in terms of computational resources but also in terms of the high-availability properties associated with infrastructure and applications. The main objective of the GITS template is to provide the cloud user with a modular, simple, readable, and reusable model that still supports the essential components, along with the ability to control application execution, deployment, and other management needs in addition to the allocation environment. This paper describes GITS usage, specifically as an input template for CloudSim. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
188. A Highly-Available Move Operation for Replicated Trees
- Author
-
Kleppmann, M [0000-0001-7252-6958], Mulligan, DP [0000-0003-4643-3541], Gomes, VBF [0000-0002-2954-4648], Beresford, AR [0000-0003-0818-6535], and Apollo - University of Cambridge Repository
- Subjects
Correctness ,Conflict-free replicated data types (CRDTs) ,distributed filesystems ,Computer science ,Distributed computing ,HOL ,Drives ,Synchronization ,formal verification ,Block (data storage) ,computer.programming_language ,Internet ,Proof assistant ,Data models ,XML ,JSON ,distributed collaboration ,Tree (data structure) ,Tree structure ,Computational Theory and Mathematics ,Software bug ,Computer bugs ,Hardware and Architecture ,Signal Processing ,computer ,Software - Abstract
Replicated tree data structures are a fundamental building block of distributed filesystems, such as Google Drive and Dropbox, and collaborative applications with a JSON or XML data model. These systems need to support a move operation that allows a subtree to be moved to a new location within the tree. However, such a move operation is difficult to implement correctly if different replicas can concurrently perform arbitrary move operations, and we demonstrate bugs in Google Drive and Dropbox that arise with concurrent moves. In this article we present a CRDT algorithm that handles arbitrary concurrent modifications on trees, while ensuring that the tree structure remains valid (in particular, no cycles are introduced), and guaranteeing that all replicas converge towards the same consistent state. Our algorithm requires no synchronous coordination between replicas, making it highly available in the face of network partitions. We formally prove the correctness of our algorithm using the Isabelle/HOL proof assistant, and evaluate the performance of our formally verified implementation in a geo-replicated setting.
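The cycle hazard the article describes can be illustrated with a minimal sketch. This is not the paper's CRDT algorithm (which also handles concurrent operations via timestamped logs and undo/redo); it only shows the safety check that a move must not make a node a descendant of itself.

```python
def is_ancestor(parent, a, b):
    """Return True if node `a` is an ancestor of node `b` in the parent map."""
    while b in parent:
        b = parent[b]
        if b == a:
            return True
    return False

def apply_move(parent, node, new_parent):
    """Apply a move unless it would introduce a cycle (then skip it)."""
    if node == new_parent or is_ancestor(parent, node, new_parent):
        return False  # skipping the unsafe move keeps the tree acyclic
    parent[node] = new_parent
    return True

# Tree a <- b <- c, expressed as a child-to-parent map:
tree = {"b": "a", "c": "b"}
apply_move(tree, "a", "c")  # rejected: "c" is a descendant of "a"
apply_move(tree, "c", "a")  # accepted: "c" becomes a direct child of "a"
```

The hard part the paper solves is that two replicas may each perform a move that is safe locally but cyclic in combination; their algorithm resolves this deterministically so all replicas converge.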
- Published
- 2022
- Full Text
- View/download PDF
189. The construction of syntax trees using external data for partially formalized text documents
- Author
-
Kirill Chuvilin
- Subjects
abstract syntax tree ,JSON ,LaTeX ,parsing ,text mining ,tree ,Telecommunication ,TK5101-6720 - Abstract
This article investigates the automatic construction of the logical structure (abstract syntax tree) of text documents whose format is not fully defined by standards or other rules common to all the documents. In contrast to syntax described by formal grammars, in such cases there is no way to build the parser automatically. Text files in LaTeX format are typical examples of formatted documents with not completely formalized syntax markup, and they serve as the resources for the implementation of the algorithms developed in this work. The relevance of LaTeX document analysis is due to the fact that many scientific publishers and conferences use the LaTeX typesetting system, which gives rise to important applied tasks of automating categorization, correction, comparison, statistics collection, rendering for the Web, etc. Parsing documents in this format requires additional information about styles: symbols, commands, and environments. A method to describe them in JSON format is proposed in this work. It allows specifying not only the information necessary for parsing but also meta-information that facilitates further data mining, which is needed, for example, for the correct comparison of documents that arises in the automatic correction problem. This approach is used here for the first time. The developed algorithms for constructing a syntax tree of a document in LaTeX format, which use such information as an external parameter, are described. The results have been successfully applied to the comparison, auto-correction, and categorization of scientific papers. The implementation of the developed algorithms is available as a set of libraries released under the LGPLv3. The key features of the proposed approach are flexibility (within the framework of the problem) and simplicity of the parameter descriptions. The proposed approach solves the problem of parsing documents in LaTeX format, but a base of style element descriptions must still be built up before the developed algorithms can see widespread practical use.
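A JSON style description of this kind could, for instance, take the following shape. The field names here are invented for illustration; the actual schema is defined by the released libraries.

```python
import json

# Hypothetical style description: each LaTeX command is annotated with its
# parameter count plus meta-information. Field names are illustrative only.
style = json.loads("""
{
  "commands": {
    "\\\\section": {"params": 1, "meaning": "heading level 1"},
    "\\\\textbf":  {"params": 1, "meaning": "bold text"}
  }
}
""")

# A parser can look up how many arguments a command consumes:
section_arity = style["commands"]["\\section"]["params"]
```

The point of carrying the `meaning` field alongside the parsing information is exactly the paper's argument: the same description file can drive both tree construction and later mining steps such as document comparison.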
- Published
- 2016
- Full Text
- View/download PDF
190. A modular lightweight implementation of the Smart-M3 semantic information broker
- Author
-
Fabio Viola, Alfredo D'Elia, Luca Roffia, and Tullio Salmon Cinotti
- Subjects
smart-m3 ,interoperability ,iot ,json ,embedded systems ,Telecommunication ,TK5101-6720 - Abstract
Interoperability among heterogeneous devices is one of the main topics investigated nowadays to realize the Ubiquitous Computing vision. Smart-M3 is a software architecture born to provide interoperability through Semantic Web technologies and reactivity through the publish-subscribe paradigm. In this paper we present a new implementation in Python of the central component of the Smart-M3 architecture: the Semantic Information Broker (SIB). The new component, named pySIB, has been specifically designed for embedded or resource-constrained devices. pySIB is a new open-source, lightweight, and portable SIB implementation that also introduces new features and interesting performance characteristics. JSON has been adopted as the default information-encoding notation, as it offers the flexibility of XML with lower bandwidth requirements. Memory allocation, both on disk and at runtime, is on the order of kilobytes, i.e. minimal compared with the other reference implementations. Performance tests on existing (SP2B) and ad hoc benchmarks point out possible improvements but also encouraging results, such as the best insertion time among the existing SIB implementations.
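The bandwidth argument can be illustrated with a toy comparison. This is illustrative only: pySIB's actual message formats are defined by the Smart-M3 protocol, and the XML string below is hand-written for the example.

```python
import json

# Encode an RDF-style triple as compact JSON and as equivalent XML-like text.
triple = {"subject": "sensor1", "predicate": "hasValue", "object": "23.5"}
as_json = json.dumps(triple, separators=(",", ":"))
as_xml = ("<triple><subject>sensor1</subject><predicate>hasValue</predicate>"
          "<object>23.5</object></triple>")

# The JSON encoding carries the same structure in fewer bytes,
# which matters on constrained devices.
saving = len(as_xml) - len(as_json)
```

The gap grows with message volume, which is why an encoding choice that looks minor per message becomes significant for embedded brokers.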
- Published
- 2016
- Full Text
- View/download PDF
191. Policychain: A Decentralized Authorization Service With Script-Driven Policy on Blockchain for Internet of Things
- Author
-
Yan Zhu, Shou-Yu Lee, E Chen, W. Eric Wong, Zhiyuan Zhou, and William C. Chu
- Subjects
Service (systems architecture) ,Blockchain ,Computer Networks and Communications ,Computer science ,business.industry ,Access control ,JavaScript ,Computer security ,computer.software_genre ,JSON ,Computer Science Applications ,Application lifecycle management ,Hardware and Architecture ,Scripting language ,Signal Processing ,business ,computer ,Database transaction ,Information Systems ,computer.programming_language - Abstract
Decentralization provides manufacturers and distributors with the greater customization and flexibility they need through IoT-based Industrial Collaboration Systems (IoT-ICS), but it has brought forward security concerns about shared data-processing tasks and IoT-based access to services and resources. To address them, we propose a practical blockchain solution that achieves decentralized policy management and evaluation for Attribute-Based Access Control (ABAC). By offloading the responsibility of ABAC policy administration and decision-making to blockchain nodes, a blockchain-based access control framework, called Policychain, is presented to ensure policies with high availability, autonomy, and traceability. To deliver a solid design, we first present a transaction-oriented policy expression scheme with well-defined syntax and semantics. The scheme can translate ABAC policies into blockchain transactions with JavaScript Object Notation (JSON) syntax and Script-based logical expressions. We further realize script-driven policy evaluation by extending the blockchain's inherent scripting instructions to support attribute acquisition for ABAC entities. Furthermore, we propose a policy lifecycle management scheme covering policy creation, renovation, and revocation, in which policies are verified by three validation principles at the transaction level. Finally, we provide detailed analysis and experiments showing that our framework is secure and practical for decentralized policy management with ABAC in IoT-ICS.
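A minimal sketch of the idea of expressing an ABAC policy in JSON and evaluating it against request attributes follows. The policy shape and the all-rules-must-match semantics are assumptions for the example, not Policychain's actual transaction format or Script semantics.

```python
import json

# Hypothetical ABAC policy as JSON (shape invented for illustration).
policy = json.loads("""
{
  "effect": "permit",
  "rules": [
    {"attribute": "role",   "value": "engineer"},
    {"attribute": "device", "value": "plc-7"}
  ]
}
""")

def evaluate(policy, request):
    """Permit only if every attribute rule matches the request."""
    ok = all(request.get(r["attribute"]) == r["value"]
             for r in policy["rules"])
    return policy["effect"] if ok else "deny"
```

In the paper's design this evaluation step is performed by blockchain nodes rather than a central policy decision point, which is what yields the availability and traceability properties.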
- Published
- 2022
- Full Text
- View/download PDF
192. BLOSOM: BLOckchain technology for Security Of Medical records
- Author
-
Kalpana Gupta, Rahul Johari, Deo Prakash Vidyarthi, and Vivek Kumar
- Subjects
Blockchain ,Computer Networks and Communications ,Computer science ,020208 electrical & electronic engineering ,020206 networking & telecommunications ,02 engineering and technology ,Information security ,Computer security ,computer.software_genre ,Merkle tree ,JSON ,Upload ,Artificial Intelligence ,Hardware and Architecture ,Proof-of-work system ,0202 electrical engineering, electronic engineering, information engineering ,Cryptographic hash function ,computer ,Software ,Information Systems ,computer.programming_language ,Block (data storage) - Abstract
Today, the information security world is witnessing frequent attacks on electronic records by both naive and professional hackers and crackers. The security of a patient's Electronic Medical Record (EMR) in a Hospital Management System is of paramount importance and so warrants immediate attention. In this research work, BlockChain containers running on multiple ports are proposed for holding patients' medical records. To perform this task effectively, a BlockChain framework named "Medichain" was developed from scratch; it contains all the basic functionality of a BlockChain needed to secure patient data. Each block of the BlockChain keeps a list of records of patient details. Users are given the option of uploading the file containing the list of records as a JSON file, in a decisive, distributed, and decentralized network. The proposed BlockChain algorithm, consisting of a cryptographic hash of the records, proof of work, and a Merkle tree formulation, was simulated in Python. Results have been positive and encouraging.
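The ingredients named in the abstract (JSON-encoded records, a cryptographic hash, proof of work) can be sketched in a few lines of Python. This is a generic toy illustration, not the Medichain implementation.

```python
import hashlib
import json

def block_hash(records, prev_hash, nonce):
    """SHA-256 over the JSON serialization of a block's contents."""
    payload = json.dumps({"records": records, "prev": prev_hash,
                          "nonce": nonce}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def proof_of_work(records, prev_hash, difficulty=2):
    """Search for a nonce so the block hash starts with `difficulty` zeros."""
    nonce = 0
    while not block_hash(records, prev_hash, nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce
```

Chaining each block to the hash of its predecessor is what makes tampering with an earlier patient record detectable: every later hash stops matching.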
- Published
- 2022
- Full Text
- View/download PDF
193. Modèles basés sur CBOR pour les données de séries chronologiques de l'Internet des objets : un pas vers la normalisation
- Author
-
Molina, Sebastian, Martinez, Ivan, Montavont, Nicolas, Toutain, Laurent, Objets communicants pour l'Internet du futur (OCIF), IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-RÉSEAUX, TÉLÉCOMMUNICATION ET SERVICES (IRISA-D2), Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université de Bretagne Sud (UBS)-École normale supérieure - Rennes (ENS Rennes)-Institut National de Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université de Bretagne Sud (UBS)-École normale supérieure - Rennes (ENS Rennes)-Institut National de Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS), Département Systèmes Réseaux, Cybersécurité et Droit du numérique (IMT Atlantique - SRCD), and Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)
- Subjects
[INFO.INFO-NI]Computer Science [cs]/Networking and Internet Architecture [cs.NI] ,Internet of Things (IoT) ,Interoperability ,Time Series (TS) ,JSON ,CBOR
International audience; IoT (Internet of Things) technology is currently expanding rapidly, with widespread deployment of IoT devices. As a result, ensuring interoperability with information systems has become a critical issue in the accelerated rollout of these devices. Despite the prevalence of time-series representations of IoT information, no standardized format has emerged; indeed, most studies in the literature focus on time-series processing, prediction, or compression. In this article, a new time-series representation format for IoT devices is proposed based on Concise Binary Object Representation (CBOR). The format introduces deltas, tags, and different models such as Measurements-based TS (MTS) and Variable-based TS (VTS) to enable a compact data representation and efficient information-system integration. Moreover, the proposed representation can compactly aggregate the time-series data collected by IoT devices and can reduce the amount of transmitted data by up to 76% and 96% compared with JavaScript Object Notation (JSON), leading to potential extensions of battery life and of the useful lifetime of IoT devices.
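The delta mechanism mentioned in the abstract can be sketched with plain Python. CBOR itself requires a third-party library (such as cbor2), so JSON stands in below just to show the size effect; the sample values are made up.

```python
import json

# Raw (timestamp, value) samples with large absolute timestamps.
samples = [(1700000000, 21.5), (1700000060, 21.5), (1700000120, 21.7),
           (1700000180, 21.7), (1700000240, 21.9)]

# Delta encoding: keep the first point, then store only differences.
t0, v0 = samples[0]
deltas = [(t - t0, round(v - v0, 3)) for t, v in samples[1:]]
compact = {"t0": t0, "v0": v0, "d": deltas}
```

Even in this text-based stand-in the delta form serializes to fewer bytes than the raw list; a binary encoding like CBOR compounds the saving, since small integers take fewer bytes than large ones.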
- Published
- 2023
194. NeuronX - NEventSpace single neuron model of the soliton spike of mimosa pudica as Ca, Na, K, Cl ion channels in NEURON script of Yale University in python
- Author
-
K(Kumar), Bheemaiah Anil
- Subjects
Yale ,NeuronX ,Mimosa Pudica ,NEvents ,NEURON ,JSON - Abstract
NEvents and NeuronX JSON-based events can be compiled to the NEURON scripting language in Python. NEURON, developed by Hines at Yale University, is a popular language for neuronal modelling. In this short paper, a model of a Mimosa Pudica hair cell, the plant's neurobiological interaction with peduncles, is described with two NEvent pathways, compiled to NEURON script source code.
- Published
- 2023
- Full Text
- View/download PDF
195. Miller: a swiss-army chainsaw for CSV and more
- Author
-
Kerl, John
- Subjects
json ,miller ,csv ,tsv ,commandline - Abstract
Miller (`mlr`) is one of many command-line tools available for modern data processing. In this talk, we'll start from the basics of CSV manipulation: querying, sorting, converting to/from TSV and JSON, etc. We'll peek at some of the expressive things you can do using Miller's query language -- as well as some very simple and powerful things you can do without it. We'll see how Miller is useful for non-programmers as well as programmers: data analysts, system admins, researchers, etc. https://miller.readthedocs.io/en/latest/
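The core CSV-to-JSON conversion that an invocation like `mlr --icsv --ojson cat` performs can be sketched with the Python standard library. This illustrates the record-oriented data model, not Miller's implementation.

```python
import csv
import io
import json

# Each CSV row becomes one JSON object keyed by the header fields,
# which is Miller's "every record is a map" view of tabular data.
csv_text = "name,qty\napple,3\npear,5\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
as_json = json.dumps(rows)
```

Once data is seen as a stream of such records, querying, sorting, and format conversion all become operations on maps, which is what makes a single tool usable across CSV, TSV, and JSON.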
- Published
- 2023
- Full Text
- View/download PDF
196. NEventSpace, an OOPS EVent Calculus
- Author
-
Kumar, Bheemaiah Anil K
- Subjects
Feynman Diagrams ,JSON ,NEvents - Abstract
Poster Presented at the 10th Neuromodulation Symposium, April 20-21 2023, Twin Cities, MN, USA. The poster presents a pre-talk on the use of Feynman diagrams in expressing NEventSpace, neuron, axon, dendritic action events, potentiation events and spike phenomenon, second messenger, messenger pathways and other pathways and phenomenon.
- Published
- 2023
- Full Text
- View/download PDF
198. Sistema de recolección y registro de datos para el laboratorio de aguas en la empresa Veolia Aguas de Tunja
- Author
-
Leguizamo Mancipe, Juan Diego, Galarza Bogotá, Cesar Mauricio, Sosa Quintero, Luis Fredy, and Universidad Santo Tomas
- Subjects
JSON ,Mobile app ,CSV file ,AndroidStudio ,Desarrollo de Apps ,App Development ,Archivo CSV ,smartphone ,android programming ,Bases de datos ,Navigation ,Programación Android ,Databases ,Programación ,Aplicación Móvil ,Navegación ,API ,LabVIEW ,Programming ,Excel ,Android Studio ,Software - Abstract
The Aguas de Tunja water laboratory of the multinational Veolia Environnement needed a data-acquisition system: two applications with friendly interfaces that make it easier for users to record field measurements. These measurements consist of customer data and the water samples taken by the operator, which are later analysed so that a final report can be delivered to the client. These procedures and forms had previously been handled on paper and were therefore subject to damage, deterioration, and even loss. The objective of this project was to meet that need by building two applications that collect and display the information, both in the field and in the laboratory, quickly, safely, and reliably, through a system for writing and acquiring information. The first application collects data in the field; it was built with the tools provided by Android Studio, starting from a blank canvas, and saves the information in CSV (comma-separated values) files, which it sends via Google Drive for display in the second application. The second application was developed with LabVIEW; it receives and displays the data taken in the field and generates PDF reports for delivery to the end customer. Finally, the applications worked optimally, achieving all the stated objectives. Undergraduate thesis for the degree of Electronic Engineer.
- Published
- 2023
199. Leveraging Structural and Semantic Measures for JSON Document Clustering
- Author
-
Priya D, Uma and Thilagam, P. Santhi
- Subjects
JSON ,Data Mining ,Similarity Measures ,Clustering - Abstract
In recent years, the increased use of smart devices and digital business opportunities has generated massive heterogeneous JSON data daily, making efficient data storage and management more difficult. Existing research uses different similarity metrics and clusters the documents to support the above tasks effectively. However, extant approaches have focused on either structural or semantic similarity of schemas. As JSON documents are application-specific, differently annotated JSON schemas are not only structurally heterogeneous but also differ by the context of the JSON attributes. Therefore, there is a need to consider the structural, semantic, and contextual properties of JSON schemas to perform meaningful clustering of JSON documents. This work proposes an approach to cluster heterogeneous JSON documents using the similarity fusion method. The similarity fusion matrix is constructed using structural, semantic, and contextual measures of JSON schemas. The experimental results demonstrate that the proposed approach outperforms the existing approaches significantly.
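A minimal sketch of the fusion step follows, assuming a simple weighted average of the per-aspect similarity matrices. The weights and the averaging rule are illustrative; the paper defines its own fusion method.

```python
def fuse(matrices, weights):
    """Combine similarity matrices element-wise by weighted average."""
    n = len(matrices[0])
    return [[sum(w * m[i][j] for m, w in zip(matrices, weights))
             for j in range(n)] for i in range(n)]

# Toy 2-document example: one matrix per aspect of JSON schema similarity.
structural = [[1.0, 0.2], [0.2, 1.0]]
semantic   = [[1.0, 0.6], [0.6, 1.0]]
contextual = [[1.0, 0.4], [0.4, 1.0]]
fused = fuse([structural, semantic, contextual], [0.4, 0.4, 0.2])
```

The fused matrix can then be handed to any standard clustering algorithm, so that documents whose schemas differ structurally but agree semantically and contextually still end up in the same cluster.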
- Published
- 2023
200. RAMSÉS. Sistema de importación/exportación BD-JFLAP, gestión de idiomas y copias de seguridad
- Author
-
Pozo Yubero, Maider, Etxeberria Agiriano, Ismael, E.U.I.T. INDUSTRIAL - E I.T. TOPOGRAFIA -VITORIA, GASTEIZKO INGENIARITZAKO U.E., Grado en Ingeniería Informática de Gestión y Sistemas de Información, and Kudeaketaren eta Informazio Sistemen Informatikaren Ingeniaritzako Gradua
- Subjects
JavaScript ,Docker ,autómatas ,Proxmox ,software ,RAMSÉS ,JSON ,JFLAP ,intérprete ,servidor ,Turing - Abstract
71 p. – Bibliogr.: p. 55-57. This document describes the development of the author's final degree project. The project adds functionality and specifications to one area of RAMSÉS, a web application for simulating abstract machines. It is a more complete academic tool for the second-year course Languages, Computation and Intelligent Systems, covering the functionality of an existing tool, JFLAP. In this particular project, the option of importing any JFLAP or JSON file into the database was developed, as was the option of exporting any automaton to JFLAP or JSON. Functionality was also added to tag abstract machines in the database, allowing academic information, such as exercise collections or exam questions, to be associated with them. In addition, a suitable mechanism was developed to make RAMSÉS a multi-language tool. Finally, a backup procedure for RAMSÉS was developed to protect academic activities against information loss caused by security incidents. It is worth noting that the project is written largely in the JavaScript programming language, on both the client and server sides, supported by technologies such as Node.js, Git, JSON, SVG, CSS3 and HTML5, together with MySQL for data storage and retrieval. The application was also put into production on a server using virtual machines, via Docker and Proxmox. RAMSÉS is currently in production on the LSI department's server and is accessible from the Web.
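The JFLAP-to-JSON import direction described above can be sketched as follows. The `.jff` layout shown is a simplified assumption about JFLAP's XML format, and the JSON field names are invented for the example.

```python
import json
import xml.etree.ElementTree as ET

# A minimal JFLAP-style .jff automaton (simplified, assumed layout).
jff = """<structure><type>fa</type><automaton>
  <state id="0" name="q0"/><state id="1" name="q1"/>
  <transition><from>0</from><to>1</to><read>a</read></transition>
</automaton></structure>"""

root = ET.fromstring(jff)
automaton = {
    "states": [s.get("name") for s in root.iter("state")],
    "transitions": [{"from": t.findtext("from"), "to": t.findtext("to"),
                     "read": t.findtext("read")}
                    for t in root.iter("transition")],
}
as_json = json.dumps(automaton)
```

Going the other way (JSON export back to `.jff`) is the mirror image: build the XML tree from the JSON fields and serialize it, which is why a single schema mapping covers both import and export.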
- Published
- 2023
Catalog
Discovery Service for Jio Institute Digital Library
For full access to our library's resources, please sign in.