661 results on '"Hypertext Transfer Protocol"'
Search Results
2. Design and implementation of an open-source IoT and blockchain-based peer-to-peer energy trading platform using ESP32-S2, Node-Red, and MQTT protocol
- Author
-
Mohsin Jamil, Mirza Jabbar Aziz Baig, Jahangir Khan, and M. Tariq Iqbal
- Subjects
MQTT ,Hypertext Transfer Protocol (http) ,Hypertext Transfer Protocol ,Smart contract ,Peer-to-Peer (P2P) ,Computer science ,computer.internet_protocol ,business.industry ,Interface (computing) ,Node (networking) ,Local area network ,Peer-to-peer ,computer.software_genre ,TK1-9971 ,Internet of things (IoT) ,General Energy ,Electrical engineering. Electronics. Nuclear engineering ,Ethereum blockchain ,Message Queuing Telemetry Transport (MQTT) ,business ,computer ,Message queue ,Computer network - Abstract
An open-source P2P energy trading platform facilitates energy trading among peers. The proposed system provides real-time data acquisition, monitoring, and control of self-generated energy at a remote location. Trading activities are conducted on a web interface that uses a private Ethereum blockchain: a smart contract is deployed on the blockchain, and the trades performed on the web interface are recorded on a tamper-proof blockchain network. An Internet of Things (IoT) platform is used to monitor and control the self-generated energy. Energy data is collected and processed by ESP32-S2 microcontrollers using field instrumentation devices connected to the voltage source and load. An open-source decentralized Peer-to-Peer (P2P) energy trading system, designed on a blockchain and IoT architecture, is thus proposed. The hardware setup includes a relay, a current sensor, a voltage sensor, a Wi-Fi router, and an ESP32-S2 microcontroller. For data transfer, the Message Queuing Telemetry Transport (MQTT) protocol is used over a local network: the ESP32-S2 is set up as the MQTT client, and a Node-RED IoT server is used as the MQTT broker. The Hypertext Transfer Protocol (HTTP) request method is implemented to connect the Node-RED server with the web interface, developed using the React.JS library. The system design, implementation, testing, and results are presented in this paper.
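The data path described in this abstract, sensor readings published over MQTT and then served to the web interface via HTTP, can be sketched with a hypothetical JSON payload; the topic and field names below are illustrative assumptions, not taken from the paper.

```python
import json

# Hypothetical topic and payload layout for an ESP32-S2 MQTT client;
# all names here are illustrative, not the paper's.
TOPIC = "energy/house1/telemetry"

def make_reading(voltage_v, current_a):
    """Build the JSON payload a sensor node might publish via MQTT."""
    return json.dumps({
        "voltage_v": voltage_v,
        "current_a": current_a,
        "power_w": round(voltage_v * current_a, 2),
    })

def parse_reading(payload):
    """What a broker-side flow (e.g. in Node-RED) would do on arrival."""
    data = json.loads(payload)
    return data["power_w"]

msg = make_reading(230.0, 1.5)
assert parse_reading(msg) == 345.0
```

A real deployment would publish `msg` with an MQTT client library and expose the last parsed value through an HTTP endpoint for the React front end.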
- Published
- 2021
- Full Text
- View/download PDF
3. Comparison of WebSocket and HTTP protocol performance
- Author
-
Marcin Badurowicz and Wojciech Łasocha
- Subjects
websocket protocol ,http protocol ,protocols performance comparison ,Hypertext Transfer Protocol ,business.industry ,computer.internet_protocol ,Computer science ,Local area network ,Information technology ,QA75.5-76.95 ,Client ,T58.5-58.64 ,computer.software_genre ,Encryption ,Software ,WebSocket ,Electronic computers. Computer science ,Operating system ,Overhead (computing) ,Web application ,business ,computer - Abstract
The aim of this article is to analyze and compare the performance of the WebSocket and HTTP protocols. For this purpose, equipment operating in a local network was used, consisting of a server, two client computers, a switch, and a purpose-built research web application. Using the test application, the time of data transfer between clients and server, as well as between server and clients, was measured. The tests included transmitting 100-character texts in a specified number of copies, taking into account the speed of the hardware (laptops) and software (web browsers). Additionally, the impact of protocol overhead and TLS encryption on performance was investigated. The obtained results are illustrated in the form of charts and discussed, and appropriate conclusions are drawn.
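The overhead gap the article measures can be estimated on the back of an envelope: an HTTP/1.1 request resends full headers with every message, while a client-to-server WebSocket data frame for a small payload costs only a 2-byte header plus a 4-byte masking key (RFC 6455). The header set below is a minimal assumption; real browsers send more, which only widens the gap.

```python
# Rough per-message overhead comparison for a 100-byte text payload.
payload = b"x" * 100

# Illustrative minimal HTTP/1.1 request carrying the payload.
http_request = (
    b"POST /echo HTTP/1.1\r\n"
    b"Host: server.local\r\n"
    b"Content-Type: text/plain\r\n"
    b"Content-Length: 100\r\n"
    b"\r\n"
) + payload

# WebSocket client frame, payload <= 125 bytes: 2-byte header + 4-byte mask.
ws_frame_len = 2 + 4 + len(payload)

http_overhead = len(http_request) - len(payload)
ws_overhead = ws_frame_len - len(payload)
assert ws_overhead == 6
assert http_overhead > 10 * ws_overhead
```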
- Published
- 2021
- Full Text
- View/download PDF
4. Protokol HTTPS, Apakah Benar-benar Aman?
- Author
-
Deddy Prayama, Amelia Yolanda, and Yuhefizar
- Subjects
Data traffic ,Hypertext Transfer Protocol ,computer.internet_protocol ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Data theft ,security ,QA75.5-76.95 ,Computer security ,computer.software_genre ,Communications security ,http ,Electronic computers. Computer science ,website ,protocol ,computer ,Protocol (object-oriented programming) - Abstract
The method used in this research is a case study: previously, the p3m.pnp.ac.id website still used the HTTP protocol, which provides no mechanism for securing the data in its communication. The final result of this research is the design and implementation of the HTTPS protocol to secure the data communication between visitors and the p3m.pnp.ac.id website. The implemented HTTPS protocol was tested to ensure that data traffic on the p3m.pnp.ac.id website is protected from possible interception and data theft.
- Published
- 2021
5. A Deep Learning-Based Data Minimization Algorithm for Fast and Secure Transfer of Big Genomic Datasets
- Author
-
Mohammed Aledhari, Fahad Saeed, Mohamed Hefeida, and Marianne Di Pierro
- Subjects
0301 basic medicine ,Information Systems and Management ,File Transfer Protocol ,Hypertext Transfer Protocol ,020205 medical informatics ,business.industry ,Computer science ,computer.internet_protocol ,Deep learning ,Bandwidth (signal processing) ,Code word ,02 engineering and technology ,computer.software_genre ,03 medical and health sciences ,030104 developmental biology ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,Minification ,Data mining ,business ,computer ,Information Systems ,Data transmission ,Communication channel - Abstract
In the age of Big Genomics Data, institutions such as the National Human Genome Research Institute (NHGRI) are challenged in their efforts to share volumes of data between researchers, a process that has been plagued by unreliable transfers and slow speeds. These occur due to throughput bottlenecks of traditional transfer technologies. Two factors that affect the efficiency of data transmission are the channel bandwidth and the amount of data. Increasing the bandwidth is one way to transmit data efficiently, but might not always be possible due to resource limitations. Another way to maximize channel utilization is by decreasing the bits needed for transmission of a dataset. Traditionally, transmission of big genomic data between two geographical locations is done using general-purpose protocols, such as hypertext transfer protocol (HTTP) and file transfer protocol (FTP) secure. In this paper, we present a novel deep learning-based data minimization algorithm that 1) minimizes the datasets during transfer over the carrier channels; 2) protects the data from the man-in-the-middle (MITM) and other attacks by changing the binary representation (content-encoding) several times for the same dataset: we assign different codewords to the same character in different parts of the dataset. Our data minimization strategy exploits the alphabet limitation of DNA sequences and modifies the binary representation (codeword) of dataset characters using deep learning-based convolutional neural network (CNN) to ensure a minimum of code word uses to the high frequency characters at different time slots during the transfer time. This algorithm ensures transmission of big genomic DNA datasets with minimal bits and latency and yields an efficient and expedient process. 
Our tested heuristic model, simulation, and real implementation results indicate that the proposed data minimization algorithm is up to 99 times faster and more secure than the content-encoding scheme currently used in HTTP, and 96 times faster than FTP, on the tested datasets. The developed protocol, implemented in C#, will be made available to the wider genomics community and domain scientists.
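The codeword-rotation idea, assigning the shortest codes to the most frequent characters of each block so that the mapping changes over the course of the transfer, can be illustrated with a toy prefix code over the DNA alphabet. This is a deliberate simplification, not the paper's CNN-driven scheme.

```python
from collections import Counter

# Prefix-free codewords, shortest first; the most frequent base in a
# block gets the shortest codeword, so the mapping varies per block.
CODEWORDS = ["0", "10", "110", "111"]

def block_code(block):
    ranked = [c for c, _ in Counter(block).most_common()]
    for base in "ACGT":          # pad so the table always covers A,C,G,T
        if base not in ranked:
            ranked.append(base)
    return dict(zip(ranked, CODEWORDS))

def encode(block):
    table = block_code(block)
    return "".join(table[c] for c in block), table

def decode(bits, table):
    rev = {v: k for k, v in table.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in rev:
            out.append(rev[buf])
            buf = ""
    return "".join(out)

bits, table = encode("AACGTAAAC")
assert decode(bits, table) == "AACGTAAAC"
assert table["A"] == "0"   # most frequent base gets the 1-bit code
```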
- Published
- 2021
- Full Text
- View/download PDF
6. A novel framework for delivering static search capabilities to large textual corpora directly on the Web domain: an implementation for Migne’s Patrologia Graeca
- Author
-
Sozon Papavlasopoulos, Marios Poulos, Ilias Giarenis, and Evagelos Varthis
- Subjects
Representational state transfer ,Hypertext Transfer Protocol ,Computer Networks and Communications ,computer.internet_protocol ,Computer science ,Interface (Java) ,020207 software engineering ,02 engineering and technology ,NoSQL ,computer.software_genre ,World Wide Web ,Greek language ,Software portability ,Web mining ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,computer ,Information Systems - Abstract
Purpose This study aims to provide a system capable of static searching over a large number of unstructured texts directly on the Web domain while keeping costs to a minimum. The proposed framework is applied to the unstructured texts of Migne’s Patrologia Graeca (PG) collection, setting PG as an implementation example of the method. Design/methodology/approach The unstructured texts of PG have been automatically transformed into a read-only Not-Only-SQL (NoSQL) database with a structure identical to that of a representational state transfer access-point interface. The transformation makes it possible to execute queries and retrieve ranked results based on a specialized application of the extended Boolean model. Findings Using a purpose-built Web-browser-based search tool, the user can quickly locate ranked relevant fragments of text, with the ability to navigate back and forth. The user can search using the initial parts of words and can ignore the diacritics of the Greek language. The performance of the search system is examined comparatively when different versions of the Hypertext Transfer Protocol (HTTP) are used, for various network latencies and different modes of network connection. Queries using HTTP/2 have by far the best performance, compared to any of the HTTP/1.1 modes. Originality/value The system is not limited to the case study of PG and has generic application in the field of humanities. The expandability of the system in terms of semantic enrichment is feasible by taking into account synonyms and topics if they are available. The system’s main advantage is that it is totally static, which implies important features such as simplicity, efficiency, fast response, portability, security and scalability.
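The ranking step relies on the extended Boolean model; a minimal p-norm scoring sketch follows. The term weights and the value of p are illustrative, not the paper's parameters.

```python
# P-norm (extended Boolean) scoring: OR rewards any high term weight,
# AND penalises documents that miss any query term.

def or_score(weights, p=2.0):
    n = len(weights)
    return (sum(w ** p for w in weights) / n) ** (1 / p)

def and_score(weights, p=2.0):
    n = len(weights)
    return 1 - (sum((1 - w) ** p for w in weights) / n) ** (1 / p)

# A fragment matching both query terms outranks one matching only one.
assert and_score([0.9, 0.8]) > and_score([0.9, 0.0])
assert or_score([0.9, 0.0]) > or_score([0.1, 0.0])
```

As p grows, both operators approach strict Boolean AND/OR; p = 1 reduces them to a simple average, which is what makes the model useful for ranking.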
- Published
- 2021
- Full Text
- View/download PDF
7. Software Simulation of the Network Traffic Processing Device in the Information System
- Author
-
K. I. Budnikov and A. V. Kurochkin
- Subjects
Emulation ,Hypertext Transfer Protocol ,computer.internet_protocol ,Computer science ,Interface (computing) ,Real-time computing ,Condensed Matter Physics ,computer.software_genre ,Networking hardware ,Set (abstract data type) ,Virtual machine ,Component (UML) ,Information system ,Electrical and Electronic Engineering ,Instrumentation ,computer - Abstract
The article presents a method for computer simulation of a network device using its digital emulator, for which a virtual environment is created to generate network traffic. During emulation, the simulated objects are defined by a set of functional and interface threads that interact through virtual communication lines implemented as shared memory areas. The proposed approach minimizes the time cost of packet transmission between digital objects and focuses on the algorithmic component of the simulated device. The method is illustrated by a computer simulation of the operation of an HTTP protocol filtering device as part of a Web information system.
- Published
- 2021
- Full Text
- View/download PDF
8. Edge Intelligence (EI)-Enabled HTTP Anomaly Detection Framework for the Internet of Things (IoT)
- Author
-
Jianyong Chen, Jianqiang Li, Victor C. M. Leung, Yufei An, and F. Richard Yu
- Subjects
021110 strategic, defence & security studies ,Hypertext Transfer Protocol ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,Feature extraction ,0211 other engineering and technologies ,Process (computing) ,Vulnerability ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Computer Science Applications ,Hardware and Architecture ,Signal Processing ,Header ,0202 electrical engineering, electronic engineering, information engineering ,Anomaly detection ,Data mining ,Enhanced Data Rates for GSM Evolution ,Cluster analysis ,computer ,Information Systems - Abstract
In recent years, with the rapid development of the Internet of Things (IoT), various applications based on IoT have become more and more popular in industrial and living sectors. However, the hypertext transfer protocol (HTTP) as a popular application protocol used in various IoT applications faces a variety of security vulnerabilities. This article proposes a novel HTTP anomaly detection framework based on edge intelligence (EI) for IoT. In this framework, both clustering and classification methods are used to quickly and accurately detect anomalies in the HTTP traffic for IoT. Unlike the existing works relying on a centralized server to perform anomaly detection, with the recent advances in EI, the proposed framework distributes the entire detection process to different nodes. Moreover, a data processing method is proposed to divide the detection fields of HTTP data, which can eliminate redundant data and extract features from the fields of an HTTP header. Simulation results show that the proposed framework can significantly improve the speed and accuracy of HTTP anomaly detection, especially for unknown anomalies.
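The field-division step described here, splitting HTTP data into detection fields and extracting features from the header, might look like the following sketch. The chosen features are assumptions for illustration, not the paper's exact set.

```python
# Illustrative feature extraction from a raw HTTP request header, in the
# spirit of the framework's preprocessing step.

def extract_features(raw_request):
    lines = raw_request.split("\r\n")
    method, path, _ = lines[0].split(" ", 2)
    headers = dict(l.split(": ", 1) for l in lines[1:] if ": " in l)
    return {
        "method": method,
        "path_len": len(path),
        "n_headers": len(headers),
        "has_user_agent": "User-Agent" in headers,
        "content_length": int(headers.get("Content-Length", 0)),
    }

req = "GET /index.html HTTP/1.1\r\nHost: a.example\r\nUser-Agent: curl/8.0\r\n\r\n"
f = extract_features(req)
assert f["method"] == "GET" and f["n_headers"] == 2 and f["has_user_agent"]
```

Feature vectors like this one are what the clustering and classification stages, distributed across edge nodes, would consume.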
- Published
- 2021
- Full Text
- View/download PDF
9. Fostering secure cross-layer collaborative communications by means of covert channels in MEC environments
- Author
-
Castiglione, Aniello, Nappi, Michele, Narducci, Fabio, and Pero, Chiara
- Subjects
Covert channel ,D2D communications ,HTTP protocol ,Mobile Edge Computing ,Steganography ,Mobile edge computing ,Hypertext Transfer Protocol ,Exploit ,Computer Networks and Communications ,business.industry ,computer.internet_protocol ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Cryptographic protocol ,Computer security ,computer.software_genre ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,The Internet ,business ,computer ,Protocol (object-oriented programming) - Abstract
Recently, due to the unexpected conditions introduced by the COVID-19 outbreak, collaborative tools have been widely adopted in almost all sectors of daily life. Almost all such tools rely mainly on World Wide Web technologies that, in turn, are built upon the HTTP protocol. The HTTP protocol can be considered the “bricks” of all kinds of communication among people and devices that exchange messages with different purposes and meanings. Unfortunately, it is also widely used to track and monitor people using the Internet. This paper exploits the HTTP protocol and tries to reverse this negative aspect by designing and implementing a way to help users (and devices) avoid disclosing too much information when collaborating with each other, even in an unfriendly environment. A novel steganographic protocol is proposed that uses HTTP “control” messages. The proposed protocol can easily be adopted by devices communicating in a MEC (Mobile Edge Computing) environment, where it is important to guarantee the integrity and confidentiality of all communications, especially messages that give “instructions” to devices or in device-to-device communications. The proposed protocol avoids the use of complex and computationally demanding cryptographic protocols, which are very difficult to run on devices with limited resources.
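As a toy illustration of header-based steganography (emphatically not the paper's actual scheme), one classic trick hides bits in the letter case of an ETag's hex digits, since HTTP header values are often compared case-insensitively by intermediaries:

```python
# Hide bits in the letter case of an ETag's hex digits. Unused trailing
# letters stay lowercase and read back as padding zeros.

def embed(bits, etag="d41d8cd98f00b204"):
    out, it = [], iter(bits)
    for ch in etag:
        if ch.isalpha():
            b = next(it, None)  # None once all payload bits are placed
            out.append(ch.upper() if b == "1" else ch)
        else:
            out.append(ch)
    return "".join(out)

def extract(etag):
    return "".join("1" if ch.isupper() else "0" for ch in etag if ch.isalpha())

stego = embed("1011")
assert extract(stego).startswith("1011")
```

The covert capacity is tiny (one bit per hex letter), which is typical of such channels: they trade bandwidth for deniability.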
- Published
- 2021
- Full Text
- View/download PDF
10. Perancangan Sistem Monitoring Konduktivitas dan Padatan Terlarut PDAM Banyumas Berbasis IoT
- Author
-
Indah Permatasari, Nur Zen, and Nia Annisa Ferani Tanjung
- Subjects
MQTT ,Hypertext Transfer Protocol ,Database ,computer.internet_protocol ,Computer science ,media_common.quotation_subject ,kualitas air ,Clean water ,mqtt ,Monitoring system ,Engineering (General). Civil engineering (General) ,computer.software_genre ,pdam ,http ,Quality standard ,konduktivitas ,Quality (business) ,Water quality ,TA1-2040 ,User interface ,computer ,tds ,media_common - Abstract
PDAM is a company engaged in the distribution of clean water to the public. Many Indonesians subscribe to PDAM water to meet their daily water needs. Water quality is an important issue because it is closely related to health. This paper presents the design of an IoT-based water quality monitoring system for PDAM Banyumas that observes the conductivity (EC) and total dissolved solids (TDS) parameters. The measurement data can be accessed through an Android application on a smartphone. The application is designed using the HTTP and MQTT protocols: HTTP is used by the user interface to retrieve the latest measurement, while MQTT is used for measurement-data updates so that data transmission is faster. The system sends a notification via Telegram if the water quality falls below the quality standard. Measurement accuracy was tested by comparing the monitoring device with a certified measuring instrument on samples of bottled drinking water and PDAM water. The results show that the designed PDAM water quality monitoring device achieves 97.31% accuracy, and that the distributed PDAM Banyumas water is very stable and safe for consumption.
- Published
- 2021
- Full Text
- View/download PDF
11. High Performance Distributed Web-Scraper
- Subjects
Hypertext Transfer Protocol ,Source code ,Database ,business.industry ,computer.internet_protocol ,Computer science ,media_common.quotation_subject ,Load balancing (computing) ,Python (programming language) ,computer.software_genre ,Data extraction ,General Earth and Planetary Sciences ,The Internet ,Web crawler ,business ,computer ,Web scraping ,General Environmental Science ,media_common ,computer.programming_language - Abstract
Over the past decade, the Internet has become a gigantic and rich source of data. This data is used to extract knowledge through machine learning analysis. In order to perform data mining on web information, the data must be extracted from its source and placed in analytical storage; this is the ETL process. Different web sources provide different ways to access their data: either an API over the HTTP protocol, or parsing of the HTML source code. The article is devoted to an approach for high-performance data extraction from sources that do not provide an API. The distinctive features of the proposed approach are load balancing, two levels of data storage, and separation of the file-downloading process from the scraping process. The approach is implemented with the following technologies: Docker, Kubernetes, Scrapy, Python, MongoDB, Redis Cluster, and CephFS. The results of testing the solution are also described in this article.
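The "separate downloading from scraping" design can be sketched as two stages linked by a queue, so that slow network I/O never blocks parsing. Fetching is stubbed out here so the example runs offline; a real worker would use an HTTP client and many concurrent processes.

```python
from queue import Queue

def fetch(url):
    # Stand-in for an HTTP GET; returns a fake page for this sketch.
    return f"<html><title>{url}</title></html>"

def downloader(urls, page_queue):
    """Stage 1: download pages and hand raw HTML to the queue."""
    for url in urls:
        page_queue.put((url, fetch(url)))
    page_queue.put(None)  # sentinel: no more pages

def scraper(page_queue):
    """Stage 2: parse queued pages independently of download speed."""
    titles = []
    while (item := page_queue.get()) is not None:
        url, html = item
        titles.append(html.split("<title>")[1].split("</title>")[0])
    return titles

q = Queue()
downloader(["http://a.example", "http://b.example"], q)
assert scraper(q) == ["http://a.example", "http://b.example"]
```

In the distributed version described by the article, the in-process queue would be replaced by a shared store (e.g. Redis) so downloader and scraper pods can scale independently.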
- Published
- 2021
- Full Text
- View/download PDF
12. Detection and Classification of DDoS Flooding Attacks on Software-Defined Networks: A Case Study for the Application of Machine Learning
- Author
-
Abimbola O. Sangodoyin, Mobayode O. Akinsolu, Prashant Pillai, and Vic Grout
- Subjects
QA75 ,Hypertext Transfer Protocol ,General Computer Science ,computer.internet_protocol ,Computer science ,Transmission Control Protocol ,TK ,SDN security ,Denial-of-service attack ,Machine learning ,computer.software_genre ,QA76 ,Robustness (computer science) ,network security ,User Datagram Protocol ,General Materials Science ,Network architecture ,T1 ,business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,General Engineering ,DDoS flooding attack ,TK1-9971 ,Flooding (computer networking) ,machine learning ,TA ,Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,Software-defined networking ,business ,computer - Abstract
Software-defined networks (SDNs) offer robust network architectures for current and future Internet of Things (IoT) applications. At the same time, SDNs constitute an attractive target for cyber attackers due to their global network view and programmability. One of the major vulnerabilities of typical SDN architectures is their susceptibility to Distributed Denial of Service (DDoS) flooding attacks. DDoS flooding attacks can render SDN controllers unavailable to their underlying infrastructure, causing service disruption or a complete outage in many cases. In this paper, machine learning-based detection and classification of DDoS flooding attacks on SDNs is investigated using popular machine learning (ML) algorithms. The ML algorithms, classifiers and methods investigated are quadratic discriminant analysis (QDA), Gaussian Naïve Bayes (GNB), k-nearest neighbor (k-NN), and classification and regression tree (CART). The general principle is illustrated through a case study, in which, experimental data (i.e. jitter, throughput, and response time metrics) from a representative SDN architecture suitable for typical mid-sized enterprise-wide networks is used to build classification models that accurately identify and classify DDoS flooding attacks. The SDN model used was emulated in Mininet and the DDoS flooding attacks (i.e. hypertext transfer protocol (HTTP), transmission control protocol (TCP), and user datagram protocol (UDP) attacks) have been launched on the SDN model using low orbit ion cannon (LOIC). Although all the ML methods investigated show very good efficacy in detecting and classifying DDoS flooding attacks, CART demonstrated the best performance on average in terms of prediction accuracy (98%), prediction speed (5.3 × 10^5 observations per second), training time (12.4 ms), and robustness.
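The classification step can be illustrated with a hand-rolled nearest-neighbour rule on made-up (jitter, response-time) samples; the numbers below are synthetic stand-ins, not the paper's measurements.

```python
import math

# Synthetic training samples: (jitter_ms, response_time_ms) -> label.
train = [
    ((2.0, 10.0), "normal"),
    ((2.5, 12.0), "normal"),
    ((40.0, 300.0), "ddos"),
    ((55.0, 420.0), "ddos"),
]

def classify(sample):
    """1-NN: label of the closest training point in metric space."""
    return min(train, key=lambda t: math.dist(sample, t[0]))[1]

assert classify((3.0, 11.0)) == "normal"
assert classify((50.0, 380.0)) == "ddos"
```

The paper's CART and k-NN models follow the same pattern at scale: learn a decision boundary over these traffic metrics, then label new observations against it.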
- Published
- 2021
- Full Text
- View/download PDF
13. Implementation of Microservices Architecture on E-Commerce Web Service
- Author
-
Juan Andrew Suthendra and Magdalena Ariance Ineke Pakereng
- Subjects
Representational state transfer ,e-commerce web service ,Technology ,Service (systems architecture) ,Hypertext Transfer Protocol ,computer.internet_protocol ,Computer science ,business.industry ,General Medicine ,Microservices ,Engineering (General). Civil engineering (General) ,computer.software_genre ,JavaScript ,JSON ,microservices architecture ,TA1-2040 ,Web service ,Software engineering ,business ,computer ,Protocol (object-oriented programming) ,computer.programming_language - Abstract
The research aimed to build an e-commerce web service using a microservices architecture. The web service was built in the Representational State Transfer (REST) style over the Hypertext Transfer Protocol (HTTP), with responses in JavaScript Object Notation (JSON) format. The microservices architecture was developed using the Domain-Driven Design (DDD) approach. The research began by analyzing e-commerce business processes, which were modeled using the Unified Modeling Language (UML). Next, bounded contexts were used to create small services, each responsible for a single function. The programming language used to build the system was Go, with the Go-kit toolkit, applying the database-per-service pattern for data management. The system also applied the concept of containerization, using Docker as the container platform and an API Gateway to manage each endpoint. Finally, the evaluation was carried out using the Postman application by testing each endpoint based on the white-box testing method. The evaluation results show that the e-commerce web service works as expected, and that the system has a high level of resilience, meaning it has a low level of dependency between services and can adapt to future changes.
- Published
- 2020
- Full Text
- View/download PDF
14. Improving Efficiency of Web Application Firewall to Detect Code Injection Attacks with Random Forest Method and Analysis Attributes HTTP Request
- Author
-
Nguyen Manh Thang
- Subjects
Hypertext Transfer Protocol ,business.industry ,computer.internet_protocol ,Computer science ,020207 software engineering ,Denial-of-service attack ,0102 computer and information sciences ,02 engineering and technology ,Computer security ,computer.software_genre ,01 natural sciences ,Firewall (construction) ,010201 computation theory & mathematics ,Web query classification ,0202 electrical engineering, electronic engineering, information engineering ,Web application ,The Internet ,Application firewall ,business ,computer ,Software ,Computer technology - Abstract
In the era of information technology, the use of computer technology for both work and personal purposes is growing rapidly. Unfortunately, with the increasing number and size of computer networks and systems, their vulnerability also increases. Protecting the web applications of organizations is becoming increasingly relevant, as most transactions are carried out over the Internet. Traditional security devices control attacks at the network level, but modern web attacks occur through the HTTP protocol at the application level. Moreover, attacks often come together: for example, a denial-of-service attack may be used to hide code injection attacks. System administrators spend a lot of time keeping systems running and may overlook code injection attacks. Therefore, the main task for system administrators is to detect network attacks at the application level using a web application firewall, and to apply effective algorithms that train the web application firewall automatically in order to increase its efficiency. The article introduces a parameterization of the task that increases the accuracy of query classification by the random forest method, thereby creating a basis for detecting attacks at the application level.
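The attribute-analysis idea, turning an HTTP request into numeric features a random forest can consume, might be sketched as follows. The feature list and the suspicious-token patterns are assumptions for illustration, not the paper's parameterization.

```python
import re

# A few patterns typical of code-injection payloads (illustrative only).
SUSPICIOUS = re.compile(r"('|--|<script|union\s+select|;)", re.IGNORECASE)

def request_attributes(query_string):
    """Numeric attributes of an HTTP query string for a classifier."""
    return {
        "length": len(query_string),
        "n_special": sum(not c.isalnum() and c not in "=&" for c in query_string),
        "suspicious_tokens": len(SUSPICIOUS.findall(query_string)),
    }

benign = request_attributes("id=42&page=2")
attack = request_attributes("id=1' UNION SELECT password FROM users--")
assert benign["suspicious_tokens"] == 0
assert attack["suspicious_tokens"] >= 2
```

A trained forest would take vectors like these as input; the point of the sketch is only that raw requests must first be reduced to comparable attributes.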
- Published
- 2020
- Full Text
- View/download PDF
15. A HIGHLY SCALABLE DATA MANAGEMENT SYSTEM FOR POINT CLOUD AND FULL WAVEFORM LIDAR DATA
- Author
-
C. N. L. Hewage, Nhien-An Le-Khac, M. Trifkovic, Michela Bertolotto, Ulrich Ofterdinger, Anh-Vu Vo, and Debra F. Laefer
- Subjects
lcsh:Applied optics. Photonics ,Hypertext Transfer Protocol ,computer.internet_protocol ,Computer science ,Distributed computing ,Data management ,Big data ,02 engineering and technology ,Information repository ,computer.software_genre ,lcsh:Technology ,Node (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Web application ,Distributed database ,business.industry ,lcsh:T ,Spatial database ,05 social sciences ,050301 education ,lcsh:TA1501-1820 ,020207 software engineering ,Visualization ,lcsh:TA1-2040 ,Scalability ,Web service ,business ,lcsh:Engineering (General). Civil engineering (General) ,0503 education ,computer - Abstract
The massive amounts of spatio-temporal information often present in LiDAR data sets make their storage, processing, and visualisation computationally demanding. There is an increasing need for systems and tools that support all the spatial and temporal components and the three-dimensional nature of these datasets for effortless retrieval and visualisation. In response to these needs, this paper presents a scalable, distributed database system that is designed explicitly for retrieving and viewing large LiDAR datasets on the web. The ultimate goal of the system is to provide rapid and convenient access to a large repository of LiDAR data hosted in a distributed computing platform. The system is composed of multiple, share-nothing nodes operating in parallel. Namely, each node is autonomous and has a dedicated set of processors and memory. The nodes communicate with each other via an interconnected network. The data management system presented in this paper is implemented based on Apache HBase, a distributed key-value datastore within the Hadoop eco-system. HBase is extended with new data encoding and indexing mechanisms to accommodate both the point cloud and the full waveform components of LiDAR data. The data can be consumed by any desktop or web application that communicates with the data repository using the HTTP protocol. The communication is enabled by a web servlet. In addition to the command line tool used for administration tasks, two web applications are presented to illustrate the types of user-facing applications that can be coupled with the data system.
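A common way to build spatial row keys for a key-value store such as HBase is bit interleaving (a Z-order/Morton code), which keeps nearby points close in key order. The sketch below shows the idea only; it is not the paper's exact encoding or indexing mechanism.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of two unsigned ints into one sortable key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
    return key

assert morton2d(0, 0) == 0
assert morton2d(1, 0) == 1
assert morton2d(0, 1) == 2
assert morton2d(3, 3) == 15
```

Because the key is a single integer, range scans over it visit spatially clustered points together, which is what makes retrieval of LiDAR tiles from a distributed datastore efficient.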
- Published
- 2020
16. Detection of Attacks on Apache2 Web Server Using Genetic Algorithm Based On Jaro Winkler Algorithm
- Author
-
M Rizqi Maulana
- Subjects
Web server ,Security monitoring ,Hypertext Transfer Protocol ,computer.internet_protocol ,Computer science ,business.industry ,HTML ,computer.software_genre ,Software ,Genetic algorithm ,Data as a service ,business ,Algorithm ,computer ,computer.programming_language ,Hacker - Abstract
A web server is software that serves data in the form of HTTP (Hypertext Transfer Protocol) requests and responses carrying HTML (Hypertext Markup Language) documents, with the aim of managing data such as text files, images, videos, and other files. When managing large amounts of data, good security monitoring is needed so that the data stored on the web server is not easily compromised. To protect a web server from hackers, an application is needed to detect activities that are considered suspicious or that indicate possible hacking. The logs of the web server are processed using the Jaro-Winkler algorithm to identify hacking attempts, producing a similarity matrix and hacking-activity reports for the administrator. Thus the web server administrator can directly observe suspicious activity on the web server. Keywords: Web Server, Jaro-Winkler Algorithm.
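Since the detection hinges on Jaro-Winkler similarity, a compact textbook implementation is sketched below (this is the standard algorithm, not the paper's code):

```python
def jaro(s1, s2):
    """Jaro similarity: matches within a window, minus transpositions."""
    if s1 == s2:
        return 1.0
    window = max(len(s1), len(s2)) // 2 - 1
    matches, used = [], [False] * len(s2)
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not used[j] and s2[j] == c:
                used[j] = True
                matches.append((j, c))
                break
    if not matches:
        return 0.0
    m = len(matches)
    in_s2_order = [c for j, c in sorted(matches)]
    t = sum(a != b for (_, a), b in zip(matches, in_s2_order)) / 2
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost the Jaro score for a shared prefix of up to 4 characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# Classic textbook pair: one transposition, three-character shared prefix.
assert abs(jaro_winkler("MARTHA", "MARHTA") - 0.9611) < 0.001
```

Applied to log lines, scores near 1.0 against known attack signatures flag the suspicious requests the abstract describes.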
- Published
- 2020
- Full Text
- View/download PDF
17. 5G Media Data System Architecture
- Author
-
Thomas Stockhammer, Lucia D’Acunto, Gunnar Heikkilä, Thorsten Lohmar, and Frederic Gabin
- Subjects
Hypertext Transfer Protocol ,Multimedia ,computer.internet_protocol ,Network packet ,Computer science ,Service provider ,computer.software_genre ,New media ,Media Technology ,Systems architecture ,Cellular network ,Data as a service ,Electrical and Electronic Engineering ,Unicast ,computer - Abstract
The existing 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) architecture, which was developed for 3G and 4G and evolved to carry streaming content with DASH [Dynamic Adaptive Streaming over Hypertext Transfer Protocol (HTTP)], is now seen as too limited for 5G. 3GPP has, therefore, started developing a new media streaming architecture, considering the latest advances in the media industry and the features offered by the 5G system. Recognizing that most media and video content delivered to the user is provided by several online service providers, the new 3GPP 5G media streaming architecture focuses on different collaboration and deployment models between mobile network operators and media data service providers. These collaboration and deployment models also cater to traditional broadcasters, which increasingly see the need for high-quality contributions via a mobile network, for example, to cover unplanned or transient events. The new architecture supports unicast downlink media distribution and uplink streaming.
- Published
- 2020
- Full Text
- View/download PDF
18. Representation model of requests to Web resources, based on a vector space model and attributes of requests for HTTP protocol
- Author
-
Alexander Kozachok and Thang Manh Nguyen
- Subjects
Hypertext Transfer Protocol ,Computer science ,business.industry ,computer.internet_protocol ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language processing - Abstract
Recently, the number of incidents related to Web applications has grown, due to the increase in the number of mobile-device users, the development of the Internet of Things, the expansion of many services and, as a consequence, the expansion of possible computer attacks. Malicious programs can be used to collect information about users and their personal data, and to gain access to Web resources or block them. The purpose of the study is to enhance the detection accuracy of computer attacks on Web applications. In this work, a model for representing requests to Web resources is proposed, based on a vector space model and the attributes of requests made via the HTTP protocol. Comparison with previously conducted research yields an estimated detection accuracy of approximately 96% for Web applications on the KDD 99 dataset, using the vector-based query representation and a decision-tree-based classifier.
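The vector-space representation can be sketched by tokenising a request, building term-frequency vectors, and comparing them by cosine similarity; the tokenisation rules below are assumptions, not the paper's attribute set.

```python
import math
from collections import Counter

def tokens(request):
    """Split an HTTP request line on URL delimiters (illustrative rule)."""
    return request.replace("?", " ").replace("&", " ").replace("=", " ").lower().split()

def cosine(a, b):
    """Cosine similarity of the term-frequency vectors of two requests."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

normal = "GET /news id 42"
similar = "GET /news id 7"
attack = "GET /news id 1' or '1' '1"
# A benign request is closer to other benign traffic than to an injection.
assert cosine(normal, similar) > cosine(normal, attack)
```

A classifier (the paper uses decision trees) then learns a boundary over these vectors rather than over raw request strings.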
- Published
- 2020
- Full Text
- View/download PDF
19. A Comparison Study of DASH Technique by Video Streaming over IP with the Use of RTP and HTTP Protocols
- Author
-
Christian Hoppe and Tadeus Uhl
- Subjects
Hypertext Transfer Protocol ,Multimedia ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,020206 networking & telecommunications ,020302 automobile design & engineering ,02 engineering and technology ,computer.software_genre ,0203 mechanical engineering ,Dash ,0202 electrical engineering, electronic engineering, information engineering ,Comparison study ,Video streaming ,Electrical and Electronic Engineering ,computer - Abstract
Today’s Internet knows no bounds. New applications are marketed every single day. Many of them incorporate video sequences. These must be transported over the Internet quickly (often in real time). However, the Internet has not been designed for live communications and, regrettably, this may become apparent all too quickly. Countermeasures are required in the form of new, efficient transport techniques facilitating online video services. MPEG-DASH is one such modern technique. But how good is this new technique really? This paper delves into the matter. The paper contains an analysis of the impact that the new technology exerts on the quality of video streaming over IP networks. It also describes a new numerical tool, QoSCalc (DASH-HTTP), which has been used to analyze MPEG-DASH under different use scenarios. The results are presented graphically and their interpretation is provided.
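The adaptive-bitrate decision at the heart of a DASH client, which a tool such as QoSCalc would exercise under different scenarios, can be sketched as follows; the bitrate ladder and the 0.8 safety margin are assumptions for illustration:

```python
# Available representations (kbps) in an assumed DASH manifest.
LADDER_KBPS = [250, 500, 1000, 2500, 5000]

def pick_representation(measured_kbps: float, safety: float = 0.8) -> int:
    """Return the highest bitrate sustainable at the measured throughput,
    falling back to the lowest representation when nothing fits."""
    budget = measured_kbps * safety
    eligible = [b for b in LADDER_KBPS if b <= budget]
    return max(eligible) if eligible else min(LADDER_KBPS)

for throughput in (300, 1400, 8000):
    print(throughput, "->", pick_representation(throughput))
```

Real clients additionally smooth throughput estimates and consider buffer occupancy before switching.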
- Published
- 2020
- Full Text
- View/download PDF
20. Web intrusion detection system combined with feature analysis and SVM optimization
- Author
-
Jing Yang, Jin-qiu Wu, and Chao Liu
- Subjects
Hypertext Transfer Protocol ,Support vector machine ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,lcsh:TK7800-8360 ,02 engineering and technology ,Intrusion detection system ,computer.software_genre ,lcsh:Telecommunication ,lcsh:TK5101-6720 ,0202 electrical engineering, electronic engineering, information engineering ,Redundancy (engineering) ,Grid search ,Protocol analysis ,Hidden Markov model ,lcsh:Electronics ,020206 networking & telecommunications ,Computer Science Applications ,Signal Processing ,020201 artificial intelligence & image processing ,Anomaly detection ,Data mining ,Application firewall ,computer - Abstract
The current network traffic is large, and network attacks are of multiple types. Therefore, anomaly detection models combined with machine learning are developing rapidly. Frequent occurrences of Web Application Firewall (WAF) bypass attacks and the redundancy of data characteristics in the Hypertext Transfer Protocol (HTTP) make it difficult to extract data characteristics. In this paper, an integrated web intrusion detection system combining feature analysis and support vector machine (SVM) optimization is proposed. Using expert knowledge, the characteristics of common Web attacks are analyzed, and the related data characteristics are selected through analysis of the HTTP protocol. For classification learning, the mature and robust support vector machine algorithm is utilized, with the grid search method used for parameter optimization. Consequently, a better detection capability on Web attacks can be obtained. Using the HTTP DATASET CSIC 2010 data set, experiments have been carried out to compare the detection capability of different kernel functions. The results show that the proposed system performs well in detection capability and can detect WAF bypass attacks effectively.
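The grid-search step can be sketched as an exhaustive scan over (C, gamma) pairs; the candidate grids are typical choices, and the scoring function below is only a stand-in for cross-validated SVM accuracy on the HTTP-derived features:

```python
import itertools
import math

# Candidate RBF-SVM hyper-parameters, as in a typical grid search.
C_GRID = [0.1, 1, 10, 100]
GAMMA_GRID = [0.001, 0.01, 0.1, 1]

def cv_score(c: float, gamma: float) -> float:
    """Stand-in for cross-validated SVM accuracy; in practice this would
    train and evaluate an SVM per fold. Peaks at an assumed optimum."""
    return 1.0 - abs(math.log10(c) - 1) * 0.05 - abs(math.log10(gamma) + 2) * 0.05

# Grid search = evaluate every combination, keep the best.
best = max(itertools.product(C_GRID, GAMMA_GRID), key=lambda p: cv_score(*p))
print(best)
```

With a real library the same loop collapses to a single cross-validated search call over the parameter grid.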
- Published
- 2020
21. Mimicking attack by botnet and detection at gateway
- Author
-
R. Subhashini and V. Rama Krishna
- Subjects
Hypertext Transfer Protocol ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,Botnet ,02 engineering and technology ,Intrusion detection system ,Network layer ,Computer security ,computer.software_genre ,Application layer ,020202 computer hardware & architecture ,Default gateway ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Anomaly detection ,computer ,Software ,Block (data storage) - Abstract
In the cyber world, botnets are becoming more popular and pose a great challenge to security. Using botnets, attackers take legacy attacks toward a new dimension. Existing Intrusion Prevention/Intrusion Detection (IPS/IDS) systems can detect botnet attacks using anomaly detection methods or signatures. To fly under the radar of IDS/IPS systems, a botmaster crafts an attack that matches neither an anomaly nor any known signature. One possibility is a mimicking attack: the attacker hacks the browsing history of a popular website and, using that history, simulates thousands of users through bots to try to degrade the performance of the website. Mimicking attacks can be made distributed by using a botnet. In this paper, we discuss the possibility of a mimicking attack using a botnet. In the first phase, the attacker injects bots into the targeted systems. In the second phase, the botmaster injects into the targeted systems a mimicking profile similar to their browsing behavior. We propose an algorithm to identify the mimicking attack at the gateway level, tied in with a NIDS. We work through an example of a mimicking attack using the HTTP protocol: the attacker collects user profiles, extracts a mimicking profile from them, and executes a heterogeneous mimicking attack. A NIDS installed at the gateway collects connection statistics, which are passed to the detection algorithm to identify similar flows based on Layer 3, Layer 4, and Layer 7 features. Suspicious flows are sent challenges to prove the identity of the user; under attack, mimicking applications cannot respond to the challenges, and source IP addresses that fail to respond are added to the block list.
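The gateway-level grouping step, clustering connection records by their Layer 3/4/7 attributes and flagging groups replayed by many distinct sources, might be sketched like this (the records and the threshold are hypothetical):

```python
from collections import defaultdict

# Hypothetical connection records: (src_ip, dst_ip, dst_port, uri).
flows = [
    ("10.0.0.1", "203.0.113.5", 80, "/news"),
    ("10.0.0.2", "203.0.113.5", 80, "/news"),
    ("10.0.0.3", "203.0.113.5", 80, "/news"),
    ("10.0.0.4", "203.0.113.5", 80, "/about"),
]

# Group by L3 destination, L4 port and L7 resource; many distinct
# sources replaying the same profile is the suspicious pattern.
groups = defaultdict(set)
for src, dst, port, uri in flows:
    groups[(dst, port, uri)].add(src)

THRESHOLD = 3  # assumed tuning parameter
suspects = {k: v for k, v in groups.items() if len(v) >= THRESHOLD}
print(suspects)
```

In the paper's scheme, the sources in such suspicious groups would then be challenged to prove they are human-driven clients.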
- Published
- 2020
- Full Text
- View/download PDF
22. (In-)Security of Cookies in HTTPS: Cookie Theft by Removing Cookie Flags
- Author
-
Sangtae Lee, Hyunsoo Kwon, Hyun-Jae Nam, Junbeom Hur, and Changhee Hahn
- Subjects
021110 strategic, defence & security studies ,Authentication ,Transport Layer Security ,Hypertext Transfer Protocol ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,InformationSystems_INFORMATIONSYSTEMSAPPLICATIONS ,ComputingMilieux_PERSONALCOMPUTING ,0211 other engineering and technologies ,02 engineering and technology ,Security token ,Computer security ,computer.software_genre ,ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS ,Information sensitivity ,Stateful firewall ,Server ,ComputingMilieux_COMPUTERSANDSOCIETY ,Safety, Risk, Reliability and Quality ,computer - Abstract
HyperText Transfer Protocol (HTTP) cookies are widely used on the web to enhance communication efficiency between a client and a server by storing stateful information. However, cookies may contain private and sensitive information about users. Thus, in order to guarantee the security of cookies, most web browsers and servers support not only Transport Layer Security (TLS) but also other mechanisms such as HTTP Strict Transport Security and cookie flags. However, a recent study has shown that it is possible to circumvent cookie flags in HTTPS by exploiting a vulnerability in HTTP software that allows message truncation. In this paper, we propose a novel cookie hijacking attack called “rotten cookie”, which deactivates cookie flags even if they are protected by TLS by exploiting a weakness in HTTP in terms of integrity checks. According to our investigation, all major browsers ignore uninterpretable sections of the header of HTTP response messages and accept incorrect formats without any rejection. We demonstrate that, when combined with TLS or application vulnerabilities, this form of attack can obtain private cookies by removing cookie flags. Thus, the attacker can impersonate a legitimate user in the eyes of the server when cookies are used as an authentication token. We prove the practicality of our attack by demonstrating that our attack can lead five major web browsers to accept a cookie without any cookie flags. We thus present a mitigation strategy for the transport layer to preserve cookie security against our attack.
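A quick way to see what the attack strips is to inspect a Set-Cookie header value for its security attributes; this small checker is illustrative, not the paper's tooling:

```python
def missing_flags(set_cookie: str) -> list:
    """Return the security flags absent from a Set-Cookie header value."""
    # Attributes are semicolon-separated; flags like Secure/HttpOnly
    # are bare tokens, so take the part before any '=' sign.
    attrs = {part.strip().split("=")[0].lower() for part in set_cookie.split(";")}
    return [flag for flag in ("secure", "httponly") if flag not in attrs]

# A header as a server might send it, and the same cookie after an
# attacker removes its flags (the condition the attack creates).
print(missing_flags("sid=abc123; Secure; HttpOnly"))
print(missing_flags("sid=abc123"))
```

A cookie missing `Secure` can be sent over plaintext HTTP, and one missing `HttpOnly` is readable from scripts, which is why removing these flags enables theft.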
- Published
- 2020
- Full Text
- View/download PDF
23. Offline but still connected with IPFS based communication
- Author
-
Lenuta Alboaie, Vlad Radulescu, Alexandru-Gabriel Cristea, and Andrei Panu
- Subjects
File system ,business.product_category ,Hypertext Transfer Protocol ,Computer science ,business.industry ,computer.internet_protocol ,Distributed computing ,computer.software_genre ,Computer data storage ,Internet access ,General Earth and Planetary Sciences ,The Internet ,Android (operating system) ,business ,computer ,General Environmental Science - Abstract
InterPlanetary File System (IPFS), with its specific features, is a novelty in the data distribution area. In this article we analyze this filesystem/protocol and conduct a parallel study based on comparisons with other distributed file systems (DFSs) and with HTTP (Hypertext Transfer Protocol). IPFS provides efficient data storage and distribution, data permanence, offline-oriented paradigms, and a no-central-administration feature. Furthermore, we employ an applied example to prove the strengths of IPFS: an Android-based system implemented with the goal of helping people during emergency situations, when no access to global Internet connectivity is available. The application has IPFS as the core system for distribution and message propagation. Overall, we prove that the vision and the architecture of the Internet can be significantly improved in the (near) future if developers and engineers take into consideration and start to use the massive amount of modern, efficient existing technologies.
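IPFS's key departure from HTTP is content addressing: data is located by the hash of its content rather than by a host name. A plain SHA-256 digest gives the flavour (real IPFS CIDs use multihash plus base32/base58 encoding, and the "store" is a distributed hash table, not a dict):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address content by its hash, the core IPFS idea; hex is used here
    for simplicity instead of the real CID encoding."""
    return hashlib.sha256(data).hexdigest()

store = {}  # stand-in for the distributed network of peers
block = b"emergency message: meet at shelter B"
cid = content_address(block)
store[cid] = block

# Retrieval needs only the address, not a server location, and the
# content self-verifies: rehashing it must reproduce the address.
assert content_address(store[cid]) == cid
print(cid[:16])
```

Because any peer holding the block can serve it and the hash verifies integrity, this model suits the offline, infrastructure-less scenario the article targets.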
- Published
- 2020
- Full Text
- View/download PDF
24. Multiplexed Asymmetric Attacks: Next-Generation DDoS on HTTP/2 Servers
- Author
-
P. Santhi Thilagam and Amit Praseed
- Subjects
021110 strategic, defence & security studies ,Web server ,Hypertext Transfer Protocol ,Computer Networks and Communications ,business.industry ,computer.internet_protocol ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,0211 other engineering and technologies ,Denial-of-service attack ,02 engineering and technology ,computer.software_genre ,Application layer ,Multiplexing ,Push technology ,Server ,Safety, Risk, Reliability and Quality ,business ,computer ,Computer network - Abstract
Distributed Denial of Service (DDoS) attacks using the HTTP protocol have started gaining popularity in recent years. A recent trend in this direction has been the use of computationally expensive requests to launch attacks. These attacks, called Asymmetric Workload attacks, can bring down servers using limited resources and are extremely difficult to detect. The introduction of HTTP/2 has been welcomed by developers because it improves user experience and efficiency. This was made possible by the ability to transport HTTP requests and their associated inline resources simultaneously using Multiplexing and Server Push. However, multiplexing has made request traffic bursty and rendered DDoS detection mechanisms based on connection limiting obsolete. Contrary to its intention, multiplexing can also be misused to launch sophisticated DDoS attacks using multiple high-workload requests in a single TCP connection. Sufficient research has not been done in this area: existing work demonstrates that the HTTP/2 protocol allows users to launch DDoS attacks easily, but does not examine whether an HTTP/2 server can handle DDoS attacks more efficiently, nor the possibility of Multiplexing and Server Push being misused. In this work, we analyse the performance of an HTTP/2 server compared to an HTTP/1.1 server under an Asymmetric DDoS attack for the same load. We propose a new DDoS attack vector called a Multiplexed Asymmetric DDoS attack, which uses multiplexing in a different way than intended. We show that such an attack can bring down a server with just a few attacking clients. We also show that a Multiplexed Asymmetric Attack on a server with Server Push enabled can trigger an egress network layer flood in addition to an application layer attack.
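The evasion described above can be illustrated by contrasting a connection-limiting detector with a workload-based one; the request logs and thresholds below are invented for the sketch:

```python
from collections import Counter

# Hypothetical request logs: (client, connection_id, cost_of_request).
# An HTTP/1.1 flood needs many connections; HTTP/2 multiplexes a few
# expensive requests onto one connection, evading connection limits.
h1_attack = [("bot", f"conn{i}", 1) for i in range(100)]
h2_attack = [("bot", "conn0", 50) for _ in range(8)]

def flagged_by_connection_limit(log, max_conns=10):
    """Classic detector: too many simultaneous connections per client."""
    return len({conn for _, conn, _ in log}) > max_conns

def flagged_by_workload(log, max_cost=100):
    """Workload-aware detector: total computational cost per client."""
    cost = Counter()
    for client, _, c in log:
        cost[client] += c
    return any(v > max_cost for v in cost.values())

print(flagged_by_connection_limit(h1_attack), flagged_by_connection_limit(h2_attack))
print(flagged_by_workload(h2_attack))
```

The multiplexed attack slips past the connection limiter entirely, while a cost-accounting detector still catches it, which is the detection gap the paper highlights.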
- Published
- 2020
- Full Text
- View/download PDF
25. Real-Time Remote Sensorless Control of PMSM Using Embedded System and Webserver
- Author
-
Marcel Nicola and Claudiu-Ionel Nicola
- Subjects
Web server ,Electronic speed control ,Hypertext Transfer Protocol ,Open platform ,Computer science ,business.industry ,computer.internet_protocol ,Particle swarm optimization ,computer.software_genre ,law.invention ,Microcontroller ,law ,Embedded system ,business ,MATLAB ,computer ,Remote control ,computer.programming_language - Abstract
This paper presents an application of real-time remote control of a Permanent Magnet Synchronous Motor (PMSM) using an embedded system and a Webserver. The starting point is the global Field Oriented Control (FOC) approach for PMSM control and the numerical simulations performed using a Sliding Mode Observer (SMO); the optimal tuning of the speed controller parameters is performed by means of the Particle Swarm Optimization (PSO) method. The F28069 MCU is selected for its good performance/cost ratio, and the Matlab/Simulink development environment is used together with the Motor Control Blockset (MCB) Toolbox, the Embedded Coder Support Package (ECSP) developed for TI C2000 controllers from Texas Instruments, and LabVIEW to obtain a flexible and robust software implementation of the real-time remote control application. Open Platform Communications (OPC) communication technologies and a LabVIEW embedded Webserver are used. The proposed implementation architecture enables real-time remote control both for clients that have the minimal resources specific to the LabVIEW environment and for clients that can connect only through the Hypertext Transfer Protocol (HTTP).
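The PSO tuning step can be sketched with a minimal particle swarm. The cost function here is a stand-in for the speed-controller error criterion, with an assumed optimum, and the inertia/acceleration constants are common defaults rather than the paper's values:

```python
import random

random.seed(0)

def cost(kp: float, ki: float) -> float:
    """Stand-in for the controller error criterion being minimised;
    an optimum near kp=2.0, ki=0.5 is assumed for illustration."""
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2

# Minimal PSO: positions, velocities, personal and global bests.
swarm = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=lambda p: cost(*p))

for _ in range(60):
    for i, p in enumerate(swarm):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i][d] = 0.7 * vel[i][d] + 1.5 * r1 * (pbest[i][d] - p[d]) \
                        + 1.5 * r2 * (gbest[d] - p[d])
            p[d] += vel[i][d]
        if cost(*p) < cost(*pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=lambda p: cost(*p))

print([round(x, 2) for x in gbest])
```

In the actual workflow, each cost evaluation would run a simulated speed-step response of the FOC loop rather than a closed-form function.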
- Published
- 2021
- Full Text
- View/download PDF
26. Development of QR-code based Interactive Dynamic Billboard System with Motion Detection
- Author
-
Salama Ndayisaba, Kisangiri Francis Michael, Yvonne Iradukunda, Devotha Nyambo, and Innocent Ciza
- Subjects
Web server ,Hypertext Transfer Protocol ,QR-Code ,Computer science ,business.industry ,computer.internet_protocol ,Raspberry Pi ,Motion detection ,Usability ,Dynamic Billboard ,Smart Advertisement ,computer.software_genre ,GeneralLiterature_MISCELLANEOUS ,Display device ,Code (cryptography) ,Web Application ,Web application ,business ,PIR Motion Sensor ,Protocol (object-oriented programming) ,computer ,Computer hardware - Abstract
This paper presents the design and implementation of an intelligent dynamic electronic billboard based on the QR-Code (Quick Response Code) that can be used in a variety of indoor locations such as offices, malls, universities, supermarkets, and other similar establishments. The system comprises a screen display, a sensor such as a PIR motion sensor, and a QR-Code reader. Once scanned, the QR-Code provides rapid access to information. The display device continues to show the company's products or announcements until the motion sensor detects a nearby person, at which point the system displays the QR-Code on the billboard for the person to scan. In our prototype the QR-Code contains Uniform Resource Locators (URLs); once it is scanned, a user follows the link and explores what he or she needs to see on the display device, based on what the companies sell or communicate to the general public. As a result, the system allows users to interact with it by searching for what they need using QR-Codes. The Raspberry Pi houses the sensors and display device, allowing the system to relay sensory data to a web server using HTTP (Hypertext Transfer Protocol). Because of its low cost, efficiency, and ease of use, the designed system is beneficial.
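The billboard's core behaviour, adverts by default and a QR code when motion is detected, reduces to a small selection function; the names and content below are illustrative only:

```python
def billboard_content(motion_detected: bool, ad: str, qr_url: str) -> str:
    """Choose what the screen shows: the advert loop by default, or the
    QR code (encoding a URL) when the PIR sensor reports a person."""
    return f"QR:{qr_url}" if motion_detected else f"AD:{ad}"

print(billboard_content(False, "Spring sale", "https://example.com/catalog"))
print(billboard_content(True, "Spring sale", "https://example.com/catalog"))
```

On the actual device this function would sit in a loop polling the PIR sensor GPIO pin and rendering either the advert media or a generated QR image.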
- Published
- 2021
27. Routing Communication Inside Ad Hoc Drones Network
- Author
-
Abderrahim Hasbi, Hamza Zemrane, and Youssef Baddi
- Subjects
Routing protocol ,File Transfer Protocol ,Hypertext Transfer Protocol ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,Wireless ad hoc network ,Optimized Link State Routing Protocol (OLSR) Ad hoc On Demand Distance Vector (AODV) ,drones communication architectures, uaanet drones network, routing protocols, optimized link state routing protocol (olsr) ad hoc on demand distance vector (aodv), ad hoc network ,TK5101-6720 ,computer.software_genre ,Ad Hoc network ,Videoconferencing ,Ad hoc On-Demand Distance Vector Routing ,Protocol (object-oriented programming) ,Drones communication architectures ,business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,UAANET drones network ,Computer Science Applications ,Optimized Link State Routing Protocol ,Telecommunication ,business ,computer ,routing protocols ,Computer network - Abstract
Technology is in constant development across different sectors of activity: health, factories, homes, transportation, and others. One of the major fields attracting attention today is drones. To communicate information, a fleet of drones can use different communication architectures: a centralized communication architecture, a satellite communication architecture, a cellular-network communication architecture, or a specific ad hoc communication architecture called the UAANET drones architecture. In our work we focus specifically on the routing of information inside the UAANET, where we analyze and compare the performance of the reactive protocol AODV and the proactive protocol OLSR when the UAANET runs applications based on the HTTP protocol, the FTP protocol, database queries, a voice application, and a video-conferencing application.
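Both OLSR and AODV ultimately favour minimum-hop routes; the practical difference is when routes are computed. A BFS sketch over an assumed toy topology can illustrate the proactive (all routes up front) versus reactive (on demand) split:

```python
from collections import deque

# Toy UAANET topology as an adjacency list (assumed, not from the paper).
topo = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}

def bfs_route(src: str, dst: str) -> list:
    """Minimum-hop route, the metric both protocol families favour."""
    parent, q = {src: None}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in topo[node]:
            if nxt not in parent:
                parent[nxt] = node
                q.append(nxt)
    return []

# Proactive (OLSR-like): routes to every destination maintained up front.
table = {d: bfs_route("A", d) for d in topo}
# Reactive (AODV-like): a route discovered only when traffic needs it.
on_demand = bfs_route("A", "E")
print(on_demand)
```

Proactive upkeep costs control traffic even when idle, while on-demand discovery adds latency to the first packet, which is exactly the trade-off the paper's application-mix experiments probe.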
- Published
- 2021
28. Encryption of Data over HTTP (Hypertext Transfer Protocol)/HTTPS (Hypertext Transfer Protocol Secure) Requests for Secure Data transfers over the Internet
- Author
-
Stevina Correia and Rushank Shah
- Subjects
Hypertext Transfer Protocol ,business.industry ,Computer science ,computer.internet_protocol ,Data security ,Cryptography ,Encryption ,Computer security ,computer.software_genre ,Hypertext Transfer Protocol over Secure Socket Layer ,Personal computer ,The Internet ,business ,computer ,Hacker - Abstract
With the increasing use of the internet for transferring data, the security of this data has been a serious concern from the very beginning, and the number of cyber-attacks across the internet is ever increasing. A hacker who gains access to an end user's personal computer has complete control over all the data flowing in and out of that computer. If any sensitive data then gets into the hacker's hands, it can mean catastrophe for that person and for the party he or she wants to communicate with. Hence, there is a need for an encryption system for transfers of extremely sensitive data such as criminal records, banking details, and a person's private details such as account passwords. For any such sensitive data transfer, we need a very strong encryption system, one that ensures the data being transferred is safe and accessible only to the person who is authenticated to view it. This paper discusses various methodologies and algorithms, and proposes a solution for securely transferring sensitive data over the internet.
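A common design for this kind of protection is encrypt-then-MAC applied to the request body before it is sent over HTTP/HTTPS. The sketch below derives a keystream from SHA-256 purely for illustration; a real deployment should use a vetted authenticated cipher such as AES-GCM, not this construction:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative SHA-256 counter keystream (NOT production crypto)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: XOR with the keystream, then HMAC the result."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(key: bytes, blob: bytes) -> bytes:
    """Verify the MAC before decrypting; reject tampered payloads."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tampered payload")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)
body = b'{"account": "12345", "amount": 10}'
assert open_(key, seal(key, body)) == body
```

The sealed blob would then be the body of an HTTP POST, so that even a compromised endpoint on the path sees only ciphertext with an integrity tag.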
- Published
- 2021
- Full Text
- View/download PDF
29. Latency Comparison of MMT and ROUTE/DASH for the Transport Layer of the TV 3.0 Project
- Author
-
Cesar Augusto Diez Alves, Cristiano Akamine, Natalia Silva Santiago, Gustavo de Melo Valeira, George Henrique Maranhao Garcia de Oliveira, Fadi Jerji, Felipe Costa Pais, Leonardo Chaves, and Allan Seiti Sassaqui Chaubet
- Subjects
Hypertext Transfer Protocol ,Multimedia ,business.industry ,Computer science ,computer.internet_protocol ,computer.software_genre ,Digital terrestrial television ,Dynamic Adaptive Streaming over HTTP ,Transport layer ,Broadband ,Dash ,The Internet ,Digital television ,business ,computer - Abstract
The need for internet compatibility in modern digital television systems led to the development of new standards for their transport layer. The Moving Picture Experts Group (MPEG) took part by working on a standard based on Hypertext Transfer Protocol (HTTP), called Dynamic Adaptive Streaming over HTTP (DASH) for broadband services, and by developing MPEG Media Transport (MMT) for broadcast and hybrid services. A protocol called Real-time Object delivery over Unidirectional Transport (ROUTE) allows for the delivery of DASH-formatted content on a broadcast channel, making ROUTE/DASH another alternative for television systems. The Brazilian Digital Terrestrial Television System (SBTVD) Forum released a Call for Proposals named TV 3.0 Project, with requirements and evaluation methods to be considered by technology proponents. This paper compares the latency of MMT and ROUTE/DASH on an Advanced Television Systems Committee 3.0 (ATSC 3.0) setup considering the specifications of the TV 3.0 Project.
- Published
- 2021
- Full Text
- View/download PDF
30. A Hybrid Analysis-Based Approach to Android Malware Family Classification
- Author
-
Wenhui Zhang, Bei Lu, Chao Ding, and Nurbol Luktarhan
- Subjects
Hypertext Transfer Protocol ,Computer science ,computer.internet_protocol ,Science ,QC1-999 ,malware detection and family classification ,General Physics and Astronomy ,Feature selection ,02 engineering and technology ,Astrophysics ,computer.software_genre ,Article ,Protocol stack ,dynamic networking flow ,0202 electrical engineering, electronic engineering, information engineering ,Android (operating system) ,Physics ,020206 networking & telecommunications ,Static analysis ,Random forest ,QB460-466 ,hybrid analysis ,android malware ,machine learning ,Transport layer ,Malware ,020201 artificial intelligence & image processing ,Data mining ,computer - Abstract
With the popularity of Android, malware detection and family classification have become a research focus. Many excellent methods have been proposed by previous authors, but static and dynamic analyses inevitably require complex processes. A hybrid analysis method for detecting Android malware and classifying malware families is presented in this paper, partially optimized for multiple-feature data. For static analysis, we use permissions and intents as static features and use three feature selection methods to form three candidate feature subsets. Compared with various models, including k-nearest neighbors and random forest, random forest performs best, with a detection rate of 95.04%, while the chi-square test is the best feature selection method. After using feature selection to explore the critical static features contained in this dataset, we analyzed the subset of important features to gain more insight into the malware. In the dynamic analysis, based on network traffic, unlike approaches that focus on a one-way flow of traffic and work only on the HTTP and transport-layer protocols, we focused on sessions and retained all protocol layers. The Res7LSTM model is then used to further classify the malicious and partially benign samples detected in the static detection. The experimental results show that our approach can not only work with fewer static features while guaranteeing sufficient accuracy, but also improve the detection rate of Android malware family classification from 71.48% in previous work to 99% when cutting the traffic in terms of sessions and the protocols of all layers.
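The chi-square selection that performed best in the static analysis can be computed directly from a 2x2 contingency table per binary permission feature; the permission matrix below is hypothetical:

```python
def chi2_binary(feature: list, label: list) -> float:
    """Chi-square statistic for a binary feature vs. a binary class,
    computed from the 2x2 contingency table counts a, b, c, d."""
    a = sum(1 for f, y in zip(feature, label) if f and y)      # f=1, y=1
    b = sum(1 for f, y in zip(feature, label) if f and not y)  # f=1, y=0
    c = sum(1 for f, y in zip(feature, label) if not f and y)  # f=0, y=1
    d = sum(1 for f, y in zip(feature, label) if not f and not y)
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

# Hypothetical permission matrix: rows = apps, columns = permissions.
perms = ["SEND_SMS", "INTERNET", "CAMERA"]
X = [[1, 1, 0], [1, 1, 1], [0, 1, 0], [0, 1, 1]]
y = [1, 1, 0, 0]  # 1 = malware

scores = {p: chi2_binary([row[j] for row in X], y) for j, p in enumerate(perms)}
top = max(scores, key=scores.get)
print(top, scores)
```

Features that are equally common in both classes (here INTERNET, held by every app) score zero and would be dropped, while class-discriminating permissions survive into the candidate subset.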
- Published
- 2021
- Full Text
- View/download PDF
31. Indonesia’s Public Application Programming Interface (API)
- Author
-
Nur Aini Rakhmawati, Deny Hermansyah, Sayekti Harits Suryawan, and Muhammad Ariful Furqon
- Subjects
Authentication ,Hypertext Transfer Protocol ,Application programming interface ,Computer science ,computer.internet_protocol ,computer.software_genre ,JSON ,Open API ,World Wide Web ,Key (cryptography) ,Web service ,computer ,Scope (computer science) ,computer.programming_language - Abstract
Indonesia has the fifth-largest number of internet users in the world. Consequently, data transactions over the HTTP protocol have seen an increase. An open API can make it easier for Indonesian users to access data and build applications over the HTTP protocol. In this paper, 38 open APIs were investigated and classified using five criteria, namely technology, authentication, scope, source, and approval request. In general, the open APIs in Indonesia employ RESTful web services and the JSON data format. In terms of authentication, an API key is the common method in most of the open APIs.
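Consuming such an open API typically means attaching the API key as a request header and decoding a JSON response; the endpoint, header name, and payload below are illustrative only, not from any of the surveyed APIs:

```python
import json
import urllib.request

# Building an API-key-authenticated request to a hypothetical endpoint.
req = urllib.request.Request(
    "https://api.example.org/v1/regions",
    headers={"X-Api-Key": "YOUR_KEY", "Accept": "application/json"},
)

# urllib normalises header names via str.capitalize(), hence the lookup key.
print(req.get_header("X-api-key"))

# A RESTful response would then be decoded from JSON:
sample_body = '{"regions": ["Jakarta", "Surabaya"]}'
print(json.loads(sample_body)["regions"][0])
```

The request object is only constructed here, not sent; an actual call would pass it to `urllib.request.urlopen`.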
- Published
- 2019
- Full Text
- View/download PDF
32. Using XGBoost to Discover Infected Hosts Based on HTTP Traffic
- Author
-
Ting Li, Xiaosong Zhang, Heng Wu, Niu Weina, Jiang Tianyu, and Teng Hu
- Subjects
Software_OPERATINGSYSTEMS ,Hypertext Transfer Protocol ,Article Subject ,Computer Networks and Communications ,Computer science ,business.industry ,computer.internet_protocol ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,computer.software_genre ,ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS ,Feature (computer vision) ,lcsh:Technology (General) ,Command and control ,lcsh:T1-995 ,Malware ,False positive rate ,Extreme gradient boosting ,lcsh:Science (General) ,business ,computer ,Host (network) ,lcsh:Q1-390 ,Information Systems ,Computer network - Abstract
In recent years, the number of malware samples and infected hosts has increased exponentially, causing great losses to governments, enterprises, and individuals. However, traditional technologies find it difficult to detect in a timely manner malware that has been deformed, obfuscated, or modified, since they usually inspect hosts before they are infected by malware. Host detection during malware infection can make up for this deficiency. Moreover, an infected host usually sends connection requests to the command and control (C&C) server using the HTTP protocol, which generates malicious external traffic; thus, if a host is found to have malicious external traffic, it may be a host infected by malware. Against this background, this paper uses HTTP traffic combined with the eXtreme Gradient Boosting (XGBoost) algorithm to detect infected hosts, in order to improve detection efficiency and accuracy. The proposed approach uses a template automatic generation algorithm to generate feature templates for HTTP headers and uses the XGBoost algorithm to distinguish between malicious traffic and normal traffic. We conduct a performance analysis to demonstrate that our approach is efficient on a dataset that includes malware traffic from MALWARE-TRAFFIC-ANALYSIS.NET and normal traffic from UNSW-NB 15. Experimental results show that the detection speed is about 1859 HTTP traffic flows per second, the detection accuracy reaches 98.72%, and the false positive rate is less than 1%.
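The header-template idea can be approximated by mapping each HTTP header block onto a fixed-length numeric vector that a gradient-boosting classifier could consume; the templated fields and the sample headers below are assumptions, not the paper's generated templates:

```python
# Fields retained by the (assumed) template, in a fixed order.
TEMPLATE = ["host", "user-agent", "accept", "referer", "cookie"]

def header_features(headers: dict) -> list:
    """One (presence, value-length) pair per templated header field."""
    lower = {k.lower(): v for k, v in headers.items()}
    feats = []
    for field in TEMPLATE:
        value = lower.get(field, "")
        feats += [1 if field in lower else 0, len(value)]
    return feats

benign = {"Host": "example.com", "User-Agent": "Mozilla/5.0", "Accept": "*/*"}
beacon = {"Host": "198.51.100.7"}  # C&C beacons often omit browser headers

print(header_features(benign))
print(header_features(beacon))
```

Vectors like these, built per HTTP request, are the kind of tabular input XGBoost then learns to separate into malicious and normal classes.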
- Published
- 2019
- Full Text
- View/download PDF
33. Detection of HTTP flooding attacks in cloud using fuzzy bat clustering
- Author
-
T. Raja Sree and S. Mary Saira Bhanu
- Subjects
0209 industrial biotechnology ,Hypertext Transfer Protocol ,Computer science ,business.industry ,computer.internet_protocol ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Fuzzy logic ,Flooding (computer networking) ,020901 industrial engineering & automation ,Artificial Intelligence ,Virtual machine ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,False alarm ,Data mining ,Cluster analysis ,business ,computer ,Software ,Bat algorithm - Abstract
Cloud computing plays a major role in reducing infrastructure expenditure on the basis of a pay-per-use model. Security is the major concern, and detecting security attacks and crimes is very difficult. Due to the distributed nature of attacks and crimes in the cloud, an efficient security mechanism is needed; traditional security mechanisms cannot be applied directly to identify the source of an attack because of the dynamic changes in the cloud. Hypertext Transfer Protocol (HTTP) flooding attacks are identified by keeping track of all the activities of the virtual machine instances running in the cloud, yet it is hard to identify the source of an attack since an attacker deletes all possible traces. To mitigate this issue, the proposed method reads the logs, extracts the relevant features, and investigates HTTP flooding attacks by grouping similar input patterns using fuzzy bat clustering, determining anomalous behavior from a deviation-based anomaly score. The suspicious source is determined by finding the event correlation between the virtual machine instances issued by the cloud service provider and the suspicious source list. The experimental results are compared with existing approaches, viz. k-means clustering, fuzzy c-means clustering, bat clustering, and the Bartd method, and the proposed method determines the anomalies accurately with fewer false alarms than the existing approaches.
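The fuzzy memberships underlying such clustering, and a distance-based anomaly score, can be sketched as below. The paper optimises the cluster centres with a bat algorithm; fixed centres and a simple nearest-centre score are assumed here for illustration:

```python
import math

def memberships(point, centers, m=2.0):
    """Fuzzy c-means style membership of a point in each cluster centre:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))."""
    d = [math.dist(point, c) for c in centers]
    if any(x == 0 for x in d):
        return [1.0 if x == 0 else 0.0 for x in d]
    return [1 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(d)))
            for i in range(len(d))]

# Assumed cluster centres for normal request patterns (2-D toy features).
centers = [(1.0, 1.0), (8.0, 8.0)]
normal_point = (1.2, 0.9)
outlier = (20.0, -5.0)

def anomaly_score(p):
    # Heuristic: distance to the nearest normal cluster; far => anomalous.
    return min(math.dist(p, c) for c in centers)

print(round(anomaly_score(normal_point), 2), round(anomaly_score(outlier), 2))
u = memberships(normal_point, centers)
print([round(x, 3) for x in u])
```

Points with high memberships in a normal cluster score low, while requests far from every learned pattern stand out as HTTP-flood candidates.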
- Published
- 2019
- Full Text
- View/download PDF
34. Technical analysis on security realization in web services for e-business management
- Author
-
K. Srihari, V. Sakthivel, Priyadharshini Muthukrishnan, and Baskaran Ramachandran
- Subjects
0209 industrial biotechnology ,Service (systems architecture) ,Hypertext Transfer Protocol ,Electronic business ,Computer science ,computer.internet_protocol ,business.industry ,Quality of service ,Interoperability ,02 engineering and technology ,computer.software_genre ,Variety (cybernetics) ,World Wide Web ,020901 industrial engineering & automation ,020204 information systems ,Business intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Web service ,business ,computer ,Information Systems - Abstract
Web services have proved to be a significant milestone in the evolution of distributed computing. Applications interoperate with programs providing simple services to deliver sophisticated value-added services. Web services offer a loosely coupled way of achieving complex operations, with less ownership of the resources, in a standard way. A variety of platforms and frameworks communicate with the aim of transferring business intelligence, domain-specific functionality, and so on. The communication between the server providing the service and the client revolves around two main web technologies: the World Wide Web and the Hypertext Transfer Protocol. As specified earlier, web service invocation is achieved through the collaboration of multiple entities on the web. Quality-of-service factors such as performance, reliability, security, response time, and availability are very important to enable this web service invocation, and among them security proves to be a challenging factor, owing to vulnerabilities in the web arising from the use of numerous methods, tools, and technologies. At the same pace, numerous standards and mechanisms have been introduced to handle the security threats, yet it remains difficult to arrive at a complete solution or standard that addresses the security issues of web services. As an initiative to provide a broader perspective on the security of web services, the review presented here offers glimpses of the security vulnerabilities and the solutions available.
- Published
- 2019
35. Malicious Web traffic detection for Internet of Things environments
- Author
-
Qingguo Zhou, Binbin Yong, Liang Huang, Qingchen Yu, and Xin Liu
- Subjects
Hypertext Transfer Protocol ,General Computer Science ,business.industry ,computer.internet_protocol ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Computer security ,computer.software_genre ,Control and Systems Engineering ,Server ,Web traffic ,Injection attacks ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Architecture ,Internet of Things ,business ,Hidden Markov model ,computer ,Range (computer programming) - Abstract
The Internet of Things (IoT) is gradually becoming an infrastructure, providing a wide range of applications, from health monitoring to industrial control and many other social domains. Unfortunately, for open connectivity, it is always built on the Hypertext Transfer Protocol (HTTP), which inherently brings in new and challenging security threats. Parameter injection, a common and powerful attack, is often exploited by attackers to break into the HTTP servers of IoT systems by injecting malicious code into the parameters of HTTP requests. In this work we present a Hidden Markov Model (HMM) based detection system, designed as a novel bidirectional scoring architecture that utilizes both benign and malicious Web traffic, to defend against parameter injection attacks in IoT systems. We evaluate the proposed system on Web traffic data from real IoT environments. Results show improvements over the baselines.
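The bidirectional scoring idea, scoring a request parameter under both a benign-traffic model and a malicious-traffic model and taking the better-fitting label, can be sketched with a simplified first-order Markov scorer (a stand-in for the paper's full HMM; all training strings below are invented for illustration):

```python
import math
from collections import defaultdict

def train_markov(samples, smooth=0.5):
    """Character-transition log-probabilities estimated from sample strings."""
    counts = defaultdict(lambda: defaultdict(float))
    alphabet = {c for s in samples for c in s}
    for s in samples:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    model = {}
    for a in alphabet:
        total = sum(counts[a].values()) + smooth * len(alphabet)
        model[a] = {b: math.log((counts[a][b] + smooth) / total) for b in alphabet}
    return model

def score(model, s, fallback=-10.0):
    """Average transition log-likelihood; out-of-alphabet transitions are penalized."""
    if len(s) < 2:
        return fallback
    logp = sum(model.get(a, {}).get(b, fallback) for a, b in zip(s, s[1:]))
    return logp / (len(s) - 1)

def classify(param, benign_model, malicious_model):
    """Bidirectional scoring: label by whichever model explains the string better."""
    return ("benign" if score(benign_model, param) >= score(malicious_model, param)
            else "malicious")

# invented training data, standing in for real benign/malicious parameter corpora
benign = train_markov(["id=123", "page=2", "name=alice", "q=shoes", "lang=en"])
malicious = train_markov(["' OR 1=1 --", "'; DROP TABLE users; --",
                          "<script>alert(1)</script>"])
```

A real HMM adds hidden states and a learned emission structure; this sketch only keeps the likelihood-comparison idea.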
- Published
- 2019
36. Impact of class distribution on the detection of slow HTTP DoS attacks using Big Data
- Author
-
Taghi M. Khoshgoftaar and Chad Calvert
- Subjects
Big Data ,Information Systems and Management ,Hypertext Transfer Protocol ,Class imbalance ,lcsh:Computer engineering. Computer hardware ,Exploit ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,Big data ,Denial-of-service attack ,lcsh:TK7885-7895 ,02 engineering and technology ,Computer security ,computer.software_genre ,lcsh:QA75.5-76.95 ,Slow HTTP DoS ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Block (data storage) ,Class (computer programming) ,lcsh:T58.5-58.64 ,business.industry ,lcsh:Information technology ,Application layer ,Random forest ,Hardware and Architecture ,020201 artificial intelligence & image processing ,lcsh:Electronic computers. Computer science ,business ,computer ,Information Systems - Abstract
The integrity of modern network communications is constantly being challenged by more sophisticated intrusion techniques. Attackers are consistently shifting to stealthier and more complex forms of attack in an attempt to bypass known mitigation strategies. In recent years, attackers have begun to focus their efforts on the application layer, allowing them to produce attacks that exploit known issues within specific application protocols. Slow HTTP Denial of Service attacks are one such attack variant, which targets the HTTP protocol and can imitate legitimate user traffic in order to deny resources from a service. Successful mitigation of this attack type requires network analysts to evaluate large quantities of network traffic to identify and block intrusive traffic. The issue is that the number of legitimate traffic instances can far outnumber the number of attack instances, making detection problematic. Machine learning techniques can aid in detection, but the large imbalance between normal (majority) and attack (minority) instances can lead to inaccurate detection results. In this work, we evaluate the use of data sampling to produce varying class distributions in order to counteract the effects of severely imbalanced Slow HTTP DoS big datasets. We also detail our process for collecting real-world, representative Slow HTTP DoS attack traffic from a live network environment to create our datasets. Five class distributions are generated to evaluate the Slow HTTP DoS detection performance of eight machine learning techniques. Our results show that the optimal learner and class distribution combination is Random Forest with a 65:35 distribution ratio, obtaining an AUC value of 0.99904. Further, we determine through significance testing that the use of sampling techniques can significantly increase learner performance when detecting Slow HTTP DoS attack traffic.
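The sampling step can be sketched as plain undersampling of the majority class down to a target ratio such as the 65:35 distribution found optimal above (the instance counts below are invented; real pipelines often use dedicated resampling libraries):

```python
import random

def resample_to_ratio(majority, minority, maj_pct, min_pct, seed=0):
    """Undersample the majority class so the final class ratio is maj_pct:min_pct.
    Every minority instance is kept (they are the scarce attack examples)."""
    # majority count needed so the minority makes up min_pct of the result
    target_major = round(len(minority) * maj_pct / min_pct)
    if target_major > len(majority):
        raise ValueError("not enough majority instances for this ratio")
    rng = random.Random(seed)
    return rng.sample(majority, target_major), list(minority)

normal = list(range(10_000))   # stand-ins for normal-traffic instances
attack = list(range(350))      # stand-ins for Slow HTTP DoS instances

maj, mino = resample_to_ratio(normal, attack, 65, 35)
```

With 350 attack instances, a 65:35 target keeps 650 of the 10,000 normal instances.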
- Published
- 2019
37. An Ensemble Intrusion Detection Technique Based on Proposed Statistical Flow Features for Protecting Network Traffic of Internet of Things
- Author
-
Benjamin Turnbull, Kim-Kwang Raymond Choo, and Nour Moustafa
- Subjects
MQTT ,Hypertext Transfer Protocol ,Computer Networks and Communications ,Computer science ,computer.internet_protocol ,Domain Name System ,Botnet ,020206 networking & telecommunications ,02 engineering and technology ,Intrusion detection system ,computer.software_genre ,Ensemble learning ,Computer Science Applications ,Internet protocol suite ,Hardware and Architecture ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,computer ,Message queue ,Information Systems - Abstract
Internet of Things (IoT) plays an increasingly significant role in our daily activities, connecting physical objects around us into digital services. In other words, IoT is the driving force behind home automation, smart cities, modern health systems, and advanced manufacturing. This also increases the likelihood of cyber threats against IoT devices and services. Attackers may attempt to exploit vulnerabilities in application protocols, including Domain Name System (DNS), Hyper Text Transfer Protocol (HTTP) and Message Queue Telemetry Transport (MQTT) that interact directly with backend database systems and client–server applications to store data of IoT services. Successful exploitation of one or more of these protocols can result in data leakage and security breaches. In this paper, an ensemble intrusion detection technique is proposed to mitigate malicious events, in particular botnet attacks against DNS, HTTP, and MQTT protocols utilized in IoT networks. New statistical flow features are generated from the protocols based on an analysis of their potential properties. Then, an AdaBoost ensemble learning method is developed using three machine learning techniques, namely decision tree, Naive Bayes (NB), and artificial neural network, to evaluate the effect of these features and detect malicious events effectively. The UNSW-NB15 and NIMS botnet datasets with simulated IoT sensors’ data are used to extract the proposed features and evaluate the ensemble technique. The experimental results show that the proposed features have the potential characteristics of normal and malicious activity using the correntropy and correlation coefficient measures. Moreover, the proposed ensemble technique provides a higher detection rate and a lower false positive rate compared with each classification technique included in the framework and three other state-of-the-art techniques.
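The abstract evaluates the proposed flow features with correntropy and the correlation coefficient; the latter can be sketched as a plain Pearson computation (the feature vectors below are invented for illustration):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length feature vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g., how strongly a flow feature tracks the (0/1) attack label
feature = [0.2, 0.9, 0.8, 0.1, 0.95]
label = [0, 1, 1, 0, 1]
assoc = pearson(feature, label)
```

Features whose correlation with the label is near zero are weak candidates for the ensemble.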
- Published
- 2019
38. An OpenStack based cloud testbed framework for evaluating HTTP flooding attacks
- Author
-
P. Nithyanandam and A. Dhanapal
- Subjects
Hypertext Transfer Protocol ,Computer Networks and Communications ,Computer science ,business.industry ,computer.internet_protocol ,Distributed computing ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Data_MISCELLANEOUS ,Testbed ,020206 networking & telecommunications ,020302 automobile design & engineering ,Denial-of-service attack ,Cloud computing ,02 engineering and technology ,Virtualization ,computer.software_genre ,Flooding (computer networking) ,0203 mechanical engineering ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,business ,Cloud storage ,computer ,Information Systems - Abstract
Cloud computing poses inherent challenges for detecting the Hyper Text Transfer Protocol (HTTP) flooding Distributed Denial of Service (DDoS) attack due to its natural characteristics such as virtualization, elasticity and multi-tenancy. Using the cloud is user-friendly, but implementing the cloud infrastructure (compute nodes, networking, cloud storage) is very complex in order to achieve these characteristics. Similarly, detecting an HTTP flooding attack in the cloud is also very complex, as it requires an understanding of the various potential attack paths in such a complex environment. Designing a cloud testbed framework to detect HTTP flooding attacks is therefore a challenging problem. The testbed framework has to consider several aspects of attack scenarios while accounting for the cloud's characteristics. This paper reviews existing DDoS attack detection frameworks and their gaps, and proposes a cloud testbed framework for evaluating HTTP flooding DDoS attack solutions. The proposed framework is implemented in an OpenStack cloud environment. The Federation Internationale de Football Association (FIFA) World Cup 1998 real-time dataset is used to generate HTTP flooding attacks against the OpenStack cloud testbed framework for experimentation.
- Published
- 2019
39. LIFH: Learning Interactive Features from HTTP Payload using Image Reconstruction
- Author
-
Shuhao Li, Zhenyu Cheng, Zhicheng Liu, Jinbu Geng, and Yongzheng Zhang
- Subjects
Web server ,Hypertext Transfer Protocol ,Network packet ,Computer science ,computer.internet_protocol ,business.industry ,Deep learning ,Payload (computing) ,Deep packet inspection ,computer.software_genre ,Application layer ,Constant false alarm rate ,Data mining ,Artificial intelligence ,business ,computer - Abstract
The complexity and intelligence of attacks on the application layer have risen to an unprecedented level. HyperText Transfer Protocol (HTTP), as a widely used application layer protocol, is one of the main vectors for various malicious attacks. Previous detection approaches based on Deep Packet Inspection (DPI) rely heavily on individual packets, which leads to insufficient detection and a high false alarm rate. In this paper, we propose LIFH, a deep neural network model equipped with interactive information for detecting application-layer attacks. First, an image reconstruction method is designed to reconstruct an HTTP traffic session into an image. Then, latent features, instead of the explicit features typically used in machine learning models, are extracted by HTTP-CNN in order to withstand forgery attacks. Finally, the high-level features are fed to multi-classifiers to identify the traffic involved in malicious activities. We conduct extensive experiments and evaluate the performance of LIFH on the standard dataset CICIDS_2017 and on IIE_data collected from critical web servers. The results demonstrate that the proposed model can significantly improve the performance of malicious traffic detection, with an accuracy of 99.07% and a false positive rate of 0.40%, which is superior to the state of the art.
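The first step, reconstructing an HTTP session's bytes into a fixed-size grayscale image, can be sketched as truncation/zero-padding into a matrix (the 28x28 side length is an assumption for illustration, not necessarily the paper's choice):

```python
def payload_to_image(payload: bytes, side: int = 28) -> list:
    """Map raw session bytes onto a side x side grayscale matrix (0..255).
    Longer payloads are truncated, shorter ones zero-padded."""
    n = side * side
    buf = payload[:n] + b"\x00" * max(0, n - len(payload))
    return [list(buf[r * side:(r + 1) * side]) for r in range(side)]

img = payload_to_image(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

Each byte becomes one pixel intensity, so a CNN can learn spatial patterns over the request bytes.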
- Published
- 2021
40. Slicing Wi‐Fi links based on QoE video streaming fairness
- Author
-
Daniel F. Macedo and Mauricio de Oliveira
- Subjects
Hypertext Transfer Protocol ,Multimedia ,Computer Networks and Communications ,business.industry ,Computer science ,computer.internet_protocol ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,computer.software_genre ,Slicing ,Computer Science Applications ,Dash ,The Internet ,Video streaming ,business ,computer - Abstract
Summary Dynamic Adaptive Streaming over HyperText Transfer Protocol (DASH) video streaming is one of the dominant sources of traffic on the Internet, and this traffic is often delivered to users vi...
- Published
- 2021
41. The Cyber Attack on the Corporate Network Models Theoretical Aspects
- Author
-
S. S. Shavrin, V. V. Goncharov, N. A. Shishova, and A. V. Goncharov
- Subjects
Web server ,Theoretical computer science ,Hypertext Transfer Protocol ,Computer science ,Network packet ,computer.internet_protocol ,Cartesian product ,computer.software_genre ,symbols.namesake ,symbols ,Cyber-attack ,Graph (abstract data type) ,Representation (mathematics) ,Protocol (object-oriented programming) ,computer ,Computer Science::Cryptography and Security - Abstract
A mathematical model of web server protection is proposed, based on filtering HTTP (Hypertext Transfer Protocol) packets that do not match the semantic parameters of the protocol's request standards. The model is defined as a graph, and the relationship between its parameters (the sets of corporate network vulnerabilities, attack methods, and their consequences) is described by a Cartesian product, which provides a correct interpretation of a cyber attack on a corporate network. To represent the individual stages of simulated attacks, the graph models can be separated, so that more complex attacks are modeled on the basis of the existing simplest ones. The unity of the proposed model's representation of a cyber attack is shown in three variants: graphical, textual, and formulaic.
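The Cartesian-product relationship between vulnerabilities, attack methods, and consequences can be sketched directly (the element names are invented for illustration, not taken from the paper):

```python
from itertools import product

vulnerabilities = {"unvalidated-header", "oversized-uri"}
methods = {"http-smuggling", "parameter-injection"}
consequences = {"server-crash", "data-leak"}

# every (vulnerability, method, consequence) triple the model can interpret
attack_space = set(product(vulnerabilities, methods, consequences))
```

An individual simulated attack is then one element of this product, and a graph model selects the subset of triples that are actually feasible.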
- Published
- 2021
42. Real Time Weather Monitoring System Using IoT
- Author
-
Puja Sharma and Shiva Prakash
- Subjects
Web server ,Hypertext Transfer Protocol ,computer.internet_protocol ,business.industry ,Node (networking) ,Real-time computing ,General Engineering ,Cloud computing ,Information technology ,computer.software_genre ,T58.5-58.64 ,Upload ,Arduino ,Default gateway ,Controller (irrigation) ,business ,computer - Abstract
In today's world, knowing the live environmental conditions is one of the biggest issues, because many hurdles arise when live environmental conditions are measured. The proposed system removes this problem, since it monitors real-time weather conditions. In this work we monitor the live weather parameters of the Gorakhpur region. The proposed system works on a client-server architecture model using IoT and is organized in a two-tier architecture. It contains various sensors that monitor the temperature, humidity, rain value and pressure of the region. The sensors capture data and send it to the NodeMCU controller; the Arduino IDE is used to upload the sensed data. The serial monitor works as a gateway between the sensors and the cloud: the data is pushed by the sensors to the serial monitor, which reports an IP address. The HTTP protocol is used to view the data on the web server. This paper displays the data on the web server and monitors real-time weather data using the environmental sensors. Using a web server, anyone can monitor the weather conditions from anywhere, without depending on any application or website, as the data is publicly available. With the help of the proposed system we measure the weather conditions of the Gorakhpur region. From the results obtained from the various sensors, it is observed that the proposed model achieves better results in comparison with the standard weather parameters.
- Published
- 2021
43. Machine Learning as a Service for High Energy Physics on heterogeneous computing resources
- Author
-
Daniele Spiga, Luca Giommi, Daniele Bonacorsi, Valentin Kuznetsov, Giommi, Luca, Kuznetsov, Valentin, Bonacorsi, Daniele, and Spiga, Daniele
- Subjects
Service (systems architecture) ,Machine Learning, Cloud, as a service, HEP, Data Science, ROOT ,Hypertext Transfer Protocol ,business.industry ,computer.internet_protocol ,Symmetric multiprocessor system ,Modular design ,Machine learning ,computer.software_genre ,Upload ,Workflow ,Scalability ,Resource allocation (computer) ,Artificial intelligence ,business ,computer - Abstract
Machine Learning (ML) techniques in the High-Energy Physics (HEP) domain are ubiquitous and will play a significant role also in the upcoming High-Luminosity LHC (HL-LHC) upgrade foreseen at CERN: a huge amount of data will be produced by the LHC and collected by the experiments, facing challenges at the exascale. Although ML models are successfully applied in many use cases (online and offline reconstruction, particle identification, detector simulation, Monte Carlo generation, just to name a few), there is a constant search for scalable, performant, and production-quality operation of ML-enabled workflows. In addition, the scenario is complicated by the gap between HEP physicists and ML experts, caused by the specificity of some parts of typical HEP workflows and solutions, and by the difficulty of formulating HEP problems in a way that matches the skills of the Computer Science (CS) and ML community and hence its potential ability to step in and help. Among other factors, one technical obstacle resides in the difference in data formats used by ML practitioners and physicists: the former mostly use flat-format data representations, while the latter typically store data in tree-based objects via the ROOT data format. Another obstacle to the further development of ML techniques in HEP is the difficulty of securing adequate computing resources for training and inference of ML models in a scalable and transparent way, in terms of CPU vs GPU vs TPU vs other resources, as well as local vs cloud resources. This yields a technical barrier that prevents a relatively large portion of HEP physicists from fully accessing the potential of ML-enabled systems for scientific research. In order to close this gap, a Machine Learning as a Service for HEP (MLaaS4HEP) solution is presented as a product of R&D activities within the CMS experiment.
It offers a service that is capable of directly reading ROOT-based data, using the ML solution provided by the user, and ultimately serving predictions by pre-trained ML models "as a service" accessible via the HTTP protocol. This solution can be used by physicists or by experts outside the HEP domain, and it provides access to local or remote data storage without requiring any modification of, or integration with, the experiment-specific framework. Moreover, MLaaS4HEP is built with a modular design allowing independent resource allocation, which opens up the possibility of training ML models on PB-size datasets remotely accessible from the WLCG sites without physically downloading data into local storage. To prove the feasibility and utility of the MLaaS4HEP service with large datasets, and thus be ready for the near future when an increase in the data produced is expected, an exploration of different hardware resources is required. In particular, this work aims to provide the MLaaS4HEP service with transparent access to heterogeneous resources, which opens up the usage of more powerful resources without requiring any effort from the user side during the access and use phases.
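A client of such a service would typically ship feature rows to a prediction endpoint over HTTP. The sketch below only builds the request object; the endpoint URL and JSON shape are assumptions for illustration, not the MLaaS4HEP API:

```python
import json
import urllib.request

def prediction_request(url, rows):
    """Build (not send) a POST request carrying feature rows as JSON to a
    hypothetical ML-as-a-service prediction endpoint."""
    body = json.dumps({"instances": rows}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = prediction_request("http://mlaas.example/predict", [[0.1, 0.2, 0.3]])
```

Sending it would be a single `urllib.request.urlopen(req)` call against a live server.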
- Published
- 2021
44. Web Server Part 1: Apache/Nginx Basics
- Author
-
Robert La Lau
- Subjects
Web browser ,Web server ,Hypertext Transfer Protocol ,Computer science ,business.industry ,computer.internet_protocol ,computer.software_genre ,Encryption ,Port (computer networking) ,World Wide Web ,Software ,business ,Web crawler ,Protocol (object-oriented programming) ,computer - Abstract
The web server is the software that makes the website(s) accessible. It does this by listening on ports 80 and 443 and serving the files in certain directories as responses to requests received on those ports. Port 80 is the default port for HTTP (Hypertext Transfer Protocol), and port 443 is the port for the encrypted HTTPS variant (the S meaning Secure). Even though web servers can usually be configured to listen on other ports, a client such as a web browser or a web crawler will always send HTTP and HTTPS requests without an explicit port indication to ports 80 and 443, respectively; if the user does not specify a protocol, clients will usually fall back to HTTP.
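The default-port fallback described above can be sketched with the standard library's URL parser:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url: str) -> int:
    """Return the port a client would actually connect to: the explicit port
    if the URL names one, otherwise the scheme's default (80 or 443)."""
    parts = urlsplit(url)
    if parts.port is not None:
        return parts.port
    return DEFAULT_PORTS[parts.scheme]
```

So `http://example.com/` resolves to port 80 and `https://example.com/` to port 443, exactly as a browser would.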
- Published
- 2021
45. Performance Testing of Five Back-End JavaScript Frameworks Using GET and POST Methods
- Author
-
I Putu Agus Eka Pratama
- Subjects
Router ,performansi ,Hypertext Transfer Protocol ,Web development ,Computer science ,computer.internet_protocol ,Loopback ,JavaScript ,computer.software_genre ,lcsh:TA168 ,framework ,Back-End JavaScript ,Performance measurement ,POST ,computer.programming_language ,lcsh:T58.5-58.64 ,Database ,lcsh:Information technology ,business.industry ,GET ,lcsh:Systems engineering ,Validator ,Routing (electronic design automation) ,business ,computer ,performance - Abstract
Currently, JavaScript is widely used in server-side (Back-End) website development. Five Back-End JavaScript frameworks are in common use: Koa, Express, Plumier, Loopback, and Nest. Developers need to know which framework has the best performance in order to produce a website with the best performance. For this reason, this research compares the performance of the five Back-End JavaScript frameworks over the HTTP protocol using the GET and POST methods. Performance is measured with two assessment parameters: 1) the framework's ability to handle requests per second (req/s), and 2) the decrease in data-processing speed (%) related to parsing, validation, routing, and requests. Each tested framework is equipped with a router, body parser, validator, and NPM. Tests were carried out ten times each for GET and POST, and the average performance of each framework was then computed. The results show that Koa has the best performance and Loopback the worst. Based on these results, Koa, Express, and Plumier are recommended to developers over Nest and Loopback.
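The two assessment parameters can be sketched as simple measurement helpers: a crude sequential throughput probe and the percentage slowdown relative to a baseline (the handler and figures below are illustrative, not the study's benchmarking harness):

```python
import time

def requests_per_second(handler, n=1000):
    """Crude throughput probe: time n sequential calls to a request handler,
    standing in for hammering a framework's GET/POST route."""
    start = time.perf_counter()
    for i in range(n):
        handler({"method": "GET", "path": "/bench", "id": i})
    elapsed = time.perf_counter() - start
    return n / elapsed

def slowdown_pct(baseline_rps, measured_rps):
    """Percentage drop in throughput relative to a baseline (parameter 2)."""
    return 100.0 * (baseline_rps - measured_rps) / baseline_rps
```

For example, a framework that drops from 1000 req/s to 650 req/s once parsing, validation, and routing are enabled shows a 35% slowdown.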
- Published
- 2020
46. Implementation Messaging Broker Middleware for Architecture of Public Transportation Monitoring System
- Author
-
Hafiyyan Putra Pratama, Ary Setijadi Prihatmanto, and Agus Sukoco
- Subjects
Hypertext Transfer Protocol ,business.industry ,computer.internet_protocol ,Computer science ,Information technology ,HTML ,computer.software_genre ,Asynchronous communication ,Middleware (distributed applications) ,The Internet ,Message broker ,Web service ,business ,computer ,computer.programming_language ,Computer network - Abstract
Information technology has become an important part of human life. The World Wide Web is an example of an information technology success story. The HyperText Transfer Protocol (HTTP) and HyperText Markup Language (HTML) used in web browsers have proven to be an effective means of human-computer interaction at internet scale. Web services are also used for communication between two systems, such as communication between an Automated Teller Machine (ATM) and a server, which is all done synchronously. There are two different ways for two systems to communicate: synchronous and asynchronous. In synchronous communication both parties must be online (for example, a phone call), whereas in asynchronous communication only one party must be online (e.g. sending an email). Nowadays there are many devices that produce data continuously, such as temperature sensors, Global Positioning System (GPS) receivers, humidity sensors, and motion capture sensors. Due to the number of sensors available, the volume of data generated every second is difficult to manage in a good and easy way. This paper aims to provide an overview of the role information technology plays in managing such data and then providing information on a particular problem. In the Public Transportation Monitoring System there are components that communicate synchronously, so when one part does not work (e.g. the web service), the communication does not occur and data such as geolocation is not stored. With the implementation of a Message Broker Middleware architecture that works asynchronously in the public transportation monitoring system, we expect to solve the problems that occur with the synchronous approach. Testing was conducted for 7 days with multiple vehicles, some using mobile applications and others using a GPS Tracker module, to obtain raw location data and store it successfully in the database.
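The asynchronous decoupling a message broker provides can be sketched in-process with one queue per topic: the publisher (a vehicle) never waits for, or even needs, a live subscriber. All names and coordinates below are illustrative:

```python
import queue

class Broker:
    """Minimal in-process stand-in for a message broker: topics are queues,
    so publishing never blocks on a consumer being online."""
    def __init__(self):
        self.topics = {}

    def publish(self, topic, message):
        self.topics.setdefault(topic, queue.Queue()).put(message)

    def consume(self, topic):
        """Drain and return all messages currently queued on a topic."""
        q = self.topics.get(topic)
        drained = []
        while q is not None and not q.empty():
            drained.append(q.get())
        return drained

broker = Broker()
# the vehicle keeps publishing while the web service (consumer) is down
broker.publish("bus-42/gps", {"lat": -6.914, "lon": 107.609})
broker.publish("bus-42/gps", {"lat": -6.915, "lon": 107.610})
positions = broker.consume("bus-42/gps")  # the consumer catches up later
```

A production broker adds persistence, acknowledgements, and network transport, but the decoupling principle is the same.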
- Published
- 2020
47. Firewall for Intranet Security
- Author
-
Archana Wankhade and Premchand Ambhore
- Subjects
Unix ,Software_OPERATINGSYSTEMS ,Hypertext Transfer Protocol ,computer.internet_protocol ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,computer.software_genre ,Proxy server ,ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS ,Upload ,Firewall (construction) ,Internet protocol suite ,Backup ,Operating system ,Application firewall ,computer - Abstract
At present, there are many commercial and noncommercial firewalls on the market, among them the Squid firewall for the Unix environment, the Raptor firewall, the Tunix firewall, the Security firewall, and the Trustix firewall for the Linux operating system. Squid is the generally available open-source firewall. It works on the Unix platform and has good features such as Hypertext Transfer Protocol caching. Its architectural design, documentation, and source code are freely available. Nowadays one can obtain this firewall from the installation package of a Linux operating system or by downloading it from the Internet free of cost, which saves development cost; but since the firewall is easily available with its full source code and documentation, anyone can understand its design architecture and working policies and find holes in it. This firewall does not provide a data compression facility for data files stored in its cache. Raptor is a commercial firewall that runs on the Windows NT operating system. It creates a log file for incoming and outgoing data but does not maintain a backup copy of that data. As a commercial firewall its purchase cost is very high, and the maintenance cost is also considerable, so it is suitable only for large-scale organizations; it also does not provide data compression on behalf of the web server. The Security firewall was developed by eEye Digital Security as the first-ever IIS application firewall. It is also commercial. It provides a log file for incoming/outgoing data but has no facility for keeping a backup copy of the data sent through the firewall. Its purchase and maintenance costs are too high to be useful for small organizations and corporate offices.
This firewall also does not provide the Hypertext Transfer Protocol data compression facility which the web server provides. Trustix is another commercial firewall that operates on the Linux operating system; it basically uses the Squid proxy server with its own features added. The cost of the Trustix firewall is about USD 1000, and its maintenance cost is also high. It too does not provide a Hypertext Transfer Protocol data compression facility; it maintains log files for every incoming and outgoing request/response but does not keep a backup copy of the actual data sent out from the intranet. Considering the initial and maintenance costs, this firewall is not suitable for small organizations, corporate offices, colleges, and the like. Tunix is a commercial firewall that provides the basic features of a firewall, but its high cost of purchase and maintenance makes it difficult for small corporate organizations to use it for their networks. It also does not keep a backup copy of incoming and outgoing data at the proxy server; it only maintains a log file with the source and destination IP addresses, the name of the requested file, and the time of service, and it has no facility for Hypertext Transfer Protocol data compression. The drawbacks of the above firewalls motivate us to implement an application firewall that achieves the following benefits. Although many firewalls are available on the market, some organizations want to build a firewall from scratch with their own design and implementation; this knowledge may not exist in-house with a vendor-supported firewall. In deciding whether to purchase or build a firewall, the organization first gathers the requirements and then determines whether it has sufficient resources to build and test the firewall, preparing a cost analysis for building it and comparing it with the cost of commercial firewalls.
After the requirement analysis and cost analysis, the organization decided to build a firewall. It requires additional features such as a backup copy of data at the proxy server, data compression at the proxy server, and virus scanning. To fulfill all these requirements we started to design and implement an application firewall having these additional features along with the common ones. The paper is organized around the proposed model of a firewall implemented using a data mining technique.
- Published
- 2020
48. Journey to MARS: Interplanetary Coding for relieving CDNs
- Author
-
Juan A. Cabrera, Frank H. P. Fitzek, Justus Rischke, and Sandra Zimmermann
- Subjects
File system ,021110 strategic, defence & security studies ,Hypertext Transfer Protocol ,computer.internet_protocol ,business.industry ,Computer science ,Testbed ,0211 other engineering and technologies ,02 engineering and technology ,computer.software_genre ,Bottleneck ,Transmission (telecommunications) ,Linear network coding ,Server ,The Internet ,business ,computer ,Computer network - Abstract
The amount of consumer data transmitted over the internet will increase in the future. At the moment, this traffic is mainly handled by Content Delivery Networks (CDN) via Hypertext Transfer Protocol (HTTP). However, this server-based approach has the disadvantage that the servers themselves and their connections to the Internet are under a high load, which makes them a bottleneck. Therefore, new approaches for efficient content distribution are sought. One outstanding approach is the Interplanetary File System (IPFS), the synthesis of various successful Peer-to-Peer (P2P) approaches. Theoretically, the load on the server can be reduced by distributing the data to several nodes. To transmit data optimally, however, a high degree of coordination between nodes is necessary. Research on other content distribution schemes has shown that the cooperation effort can be avoided by using Random Linear Network Coding (RLNC). This has not been used in IPFS so far. In this work we present Multi Access Recoding System (MARS), a protocol which combines IPFS with RLNC. Simulations on the Interplanetary Testbed (IPTB) have shown that MARS is able to reduce the server load and download time by up to 50% compared to a transmission using only a single server and up to 45% compared to the same setup without RLNC. Even without coordination, this only causes an additional network load of 30%. In addition, MARS improves the download time by at least 15% even for very few peer nodes or peers with little storage.
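The core mechanism MARS borrows, Random Linear Network Coding, can be sketched over GF(2): each coded packet is an XOR combination of the source packets, and a receiver decodes once it holds enough linearly independent combinations. For a deterministic illustration the coefficient vectors are fixed below; in real RLNC they are drawn at random, and often over GF(256) rather than GF(2):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, coeff_vectors):
    """Linear network coding over GF(2): each coded packet is the XOR of the
    source packets selected by its coefficient vector, shipped alongside it."""
    coded = []
    for coeffs in coeff_vectors:
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor_bytes(payload, p)
        coded.append((list(coeffs), payload))
    return coded

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2): recover the k source packets from
    k linearly independent coded packets."""
    rows = [[list(c), bytearray(p)] for c, p in coded]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i][0] = [a ^ b for a, b in zip(rows[i][0], rows[col][0])]
                rows[i][1] = bytearray(xor_bytes(rows[i][1], rows[col][1]))
    return [bytes(rows[i][1]) for i in range(k)]

packets = [b"GET ", b"/x.h", b"tml "]            # three equal-length source packets
coded = encode(packets, [[1, 1, 0], [0, 1, 1], [1, 1, 1]])  # full-rank combinations
decoded = decode(coded, 3)
```

Because any full-rank set of combinations suffices, peers can serve coded packets independently, which is exactly the coordination-avoiding property the abstract exploits.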
- Published
- 2020
- Full Text
- View/download PDF
49. A Parallel Volunteer Computing Based on Server Assisted Communications
- Author
-
Masaru Fukushi and Yuto Watanabe
- Subjects
Database server ,Web server ,Hypertext Transfer Protocol ,Parallel processing (DSP implementation) ,Computer Applications ,Computer science ,computer.internet_protocol ,Computation ,Operating system ,Communication source ,computer.software_genre ,Realization (systems) ,computer - Abstract
Toward the realization of parallel volunteer computing (VC), this paper proposes a parallel computing method based on the concept of server-assisted communications and develops a prototype parallel VC system. In VC, individual nodes cannot communicate with each other directly; hence, current VC is limited to bag-of-tasks computation, and this prevents its widespread use. The proposed method replaces each inter-node communication with two request-driven communications, between sender and server and between server and receiver. In our parallel VC system, the VC server consists of an Apache web server and a MySQL database server, and the worker nodes are implemented in C++. A software tool is also developed to convert a parallel program written with an MPI communication library into a program that uses a standard socket library over the HTTP protocol. We have confirmed the correct behavior of one-to-one and collective communication functions and evaluated the execution time.
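The core trick described here, replacing a direct send/receive pair with two request-driven HTTP exchanges through a relay server, can be sketched as follows. This is a hypothetical illustration, not the paper's Apache/MySQL/C++ implementation: the `/send` and `/recv` endpoints, the `(src, dst, tag)` addressing, and the polling loop are all assumed names mirroring MPI's send/recv semantics.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

mailbox = {}            # (src, dst, tag) -> payload, held by the server
lock = threading.Lock()

class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):  # sender -> server: deposit a message
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        with lock:
            mailbox[(body["src"], body["dst"], body["tag"])] = body["data"]
        self.send_response(200)
        self.end_headers()

    def do_GET(self):   # server -> receiver: hand over a message, if present
        _, _, src, dst, tag = self.path.split("/")  # path: /recv/src/dst/tag
        with lock:
            data = mailbox.pop((int(src), int(dst), int(tag)), None)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"data": data}).encode())

    def log_message(self, *args):  # silence per-request logging
        pass

def vc_send(port, src, dst, tag, data):
    """Stand-in for MPI_Send: one HTTP POST to the relay server."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/send", method="POST",
        data=json.dumps({"src": src, "dst": dst, "tag": tag,
                         "data": data}).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req).read()

def vc_recv(port, src, dst, tag):
    """Stand-in for MPI_Recv: poll the relay until the message arrives."""
    while True:
        url = f"http://127.0.0.1:{port}/recv/{src}/{dst}/{tag}"
        with urllib.request.urlopen(url) as resp:
            data = json.loads(resp.read())["data"]
        if data is not None:
            return data
```

Because both halves are client-initiated requests, the worker nodes never need to accept inbound connections, which is exactly what makes the scheme viable for volunteer nodes behind NATs and firewalls.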
- Published
- 2020
- Full Text
- View/download PDF
50. Detection of Malicious HTTP Requests Using Header and URL Features
- Author
-
Safwan Omari, Jason Perry, Piotr Szczurek, and Ashley Laughter
- Subjects
Web server ,Hypertext Transfer Protocol ,Honeypot ,computer.internet_protocol ,Computer science ,business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,020206 networking & telecommunications ,02 engineering and technology ,Internet traffic ,computer.software_genre ,Internet security ,Web traffic ,Header ,Web page ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business ,computer ,Computer network - Abstract
Cyber attackers leverage the openness of internet traffic to send specially crafted HyperText Transfer Protocol (HTTP) requests and launch sophisticated attacks for a myriad of purposes, including disruption of service, illegal financial gain, and alteration or destruction of confidential medical or personal data. Detection of malicious HTTP requests is therefore essential to counter and prevent web attacks. In this work, we collected web traffic data and used HTTP request header features with supervised machine learning techniques to predict whether a message is likely to be malicious or benign. Our analysis was based on two real-world datasets: one collected over a period of 42 days from a low-interaction honeypot deployed on a Comcast business-class network, and the other collected from a university web server for a similar duration. In our analysis, we observed that: (1) benign and malicious requests differ with respect to their header usage, (2) three specific HTTP headers (accept-encoding, accept-language, and content-type) can be used to classify a request as benign or malicious with 93.6% accuracy, (3) the HTTP request line lengths of benign and malicious requests differ, and (4) HTTP request line length can be used to classify a request as benign or malicious with 96.9% accuracy. This implies we can use a relatively simple predictive model with a fast classification time to efficiently and accurately filter out malicious web traffic.
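The features the abstract names are cheap to extract from a raw request. The sketch below shows that extraction plus a crude threshold rule; to be clear, the rule and its cutoffs are made up for illustration and are not the paper's trained model, which achieved the accuracies quoted above with supervised learning.

```python
def extract_features(raw_request: str):
    """Pull out the features the study found discriminative: presence of
    three specific headers and the length of the request line."""
    lines = raw_request.split("\r\n")
    headers = {line.split(":", 1)[0].strip().lower()
               for line in lines[1:] if ":" in line}
    return {
        "request_line_len": len(lines[0]),
        "has_accept_encoding": "accept-encoding" in headers,
        "has_accept_language": "accept-language" in headers,
        "has_content_type": "content-type" in headers,
    }

def is_suspicious(feats, min_len=14, max_len=200):
    # Illustrative rule of thumb, not the paper's classifier: benign browser
    # traffic tends to send these headers and a moderate-length request line.
    # The thresholds here are invented for the sketch.
    header_score = sum([feats["has_accept_encoding"],
                        feats["has_accept_language"],
                        feats["has_content_type"]])
    odd_length = not (min_len <= feats["request_line_len"] <= max_len)
    return odd_length or header_score == 0
```

In practice these feature vectors would be fed to a trained classifier rather than hand-set thresholds, but the point stands: both feature families are computable per request in constant time, which is what makes inline filtering feasible.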
- Published
- 2020
- Full Text
- View/download PDF