2,297 results
Search Results
2. Service and Energy Management in Fog Computing: A Taxonomy, Approaches, and Future Directions.
- Author
-
Hashemi, S. M., Sahafi, A., Rahmani, A. M., and Bohlouli, M.
- Subjects
ENERGY management ,INTERNET of things ,ENERGY consumption ,COMPUTING platforms ,COMPUTER architecture ,EDGE computing - Abstract
Background and Objectives: Today, the increasing number of Internet-connected smart devices requires powerful processing servers such as cloud and fog and necessitates fulfilling more requests and services than ever before. The geographical distance of IoT devices from fog and cloud servers has turned issues such as delay and energy consumption into major challenges. Fog computing has emerged as a promising technology in this field. Methods: In this paper, service/energy management approaches are surveyed in general. Then, we explain our motivation for the systematic literature review (SLR) procedure and how the related works were selected. Results: This paper introduces four domains of service management and energy management: Architecture, Resource Management, Scheduling Management, and Service Management. Scheduling Management is used in 38% of the papers and is therefore the most common domain for service and energy management. Resource Management is the second domain, attracting about 26% of the papers on service and energy management. Conclusion: About 81% of the fog computing papers simulated their approaches, and the others implemented their schemes using a testbed in a real environment. Furthermore, 30% of the papers presented an architecture or framework alongside their research. In this systematic literature review, papers were extracted from five established databases, including IEEE Xplore, Wiley, Science Direct (Elsevier), Springer Link, and Taylor & Francis, from 2013 to 2022. We obtained 1596 papers related to the discussed subject, filtered them, and arrived at 47 distinct studies. In the following, we analyze and discuss these studies; then we review the service-quality parameters in the papers, and ultimately we present the benefits, drawbacks, and innovations of each study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. COMPARATIVE SECURITY AND COMPLIANCE ANALYSIS OF SERVERLESS COMPUTING PLATFORMS: AWS LAMBDA, AZURE FUNCTIONS, AND GOOGLE CLOUD FUNCTIONS.
- Author
-
KHANAL, DIBYA DARSHAN and MAHARJAN, SUSHIL
- Subjects
COMPLIANT platforms ,CLOUD computing ,DATA encryption ,COMPUTING platforms ,INFRASTRUCTURE (Economics) - Abstract
Serverless computing has revolutionized cloud services by abstracting infrastructure management, enabling developers to focus on application logic. This paper examines the security and compliance features of three major serverless platforms: AWS Lambda, Azure Functions, and Google Cloud Functions. By evaluating authentication mechanisms, data encryption practices, vulnerability management, and compliance certifications, this paper aims to provide a comparative analysis that informs businesses and developers on the most secure and compliant platform for their needs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Is Online Enrollment the Future of Group Benefits?
- Author
-
LANDRY, KIMBERLY A.
- Subjects
EMPLOYEE benefits ,PAPER arts ,COMPUTING platforms - Abstract
The article focuses on employees' methods of enrolling in the insurance or workplace benefits offered by their employers, including paper enrollment and online enrollment platforms, and presents research indicating that the majority of users now enroll online.
- Published
- 2016
5. A Lightweight Model of Underwater Object Detection Based on YOLOv8n for an Edge Computing Platform.
- Author
-
Fan, Yibing, Zhang, Lanyong, and Li, Peng
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,EDGE computing ,COMPUTING platforms ,NATURAL resources ,IMAGE analysis - Abstract
Deep-learning-based visual object detection is a high-precision perception technology that can be adopted in various image analysis applications, and it has important application prospects for the utilization and protection of marine biological resources. The marine environment, however, is generally far from cities, so abundant urban computing power cannot be utilized, and deploying models on mobile edge devices is an efficient solution. Because of computing resource limitations on edge devices, computationally intensive deep-learning object detection on such devices often cannot meet high-precision and low-latency requirements. To address this problem, this paper proposes a lightweighting process based on neural structure search and knowledge distillation, using YOLOv8 as the baseline model. Firstly, the neural structure search algorithm was used to compress the YOLOv8 model and reduce its computational complexity. Secondly, a new knowledge distillation architecture was designed, which distills the detection head output layer and the neck feature layer to compensate for the accuracy loss caused by model reduction. Compared to YOLOv8n, the computational complexity of the lightweight model optimized in this study was 7.4 GFLOPs (floating point operations), a reduction of 1.3 GFLOPs; the multiply–accumulate operations (MACs) stood at 2.72 G, a decrease of 32%; and AP50, AP75, and mAP increased by 2.0%, 3.0%, and 1.9%, respectively. Finally, this paper designed an edge computing service architecture and deployed the model on the Jetson Xavier NX platform through TensorRT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
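The knowledge-distillation step summarized in the abstract above can be illustrated with a generic sketch. This is standard logit-plus-feature distillation, not the paper's exact head/neck distillation design; the temperature, weighting, and layer choices are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_feat, teacher_feat,
                      temperature=4.0, alpha=0.5):
    """Generic logit + feature distillation loss (illustrative; hyperparameters assumed)."""
    # Soft-label loss between teacher and student detection-head outputs.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Feature-imitation loss on an intermediate (e.g. neck) feature map.
    feat_loss = F.mse_loss(student_feat, teacher_feat)
    return alpha * soft_loss + (1.0 - alpha) * feat_loss
```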
6. GLC_FCS30: Global land-cover product with fine classification system at 30 m using time-series Landsat imagery.
- Author
-
Xiao Zhang, Liangyun Liu, Xidong Chen, Yuan Gao, Shuai Xie, and Jun Mi
- Subjects
SPATIAL systems ,COMPUTING platforms ,LAND cover ,CLASSIFICATION ,RANDOM forest algorithms ,PAPER products ,TIME series analysis - Abstract
Over the past decades, many global land-cover products have been released; however, there is still no global land-cover map that offers both a fine classification system and fine spatial resolution. In this study, a novel global 30-m land-cover classification with a fine classification system for the year 2015 (GLC_FCS30-2015) was produced by combining time-series Landsat imagery and high-quality training data from the GSPECLib (Global Spatial Temporal Spectra Library) on the Google Earth Engine computing platform. First, the global training data from the GSPECLib were developed by applying a series of rigorous filters to the MCD43A4 NBAR and CCI_LC land-cover products. Secondly, a local adaptive random forest model was built for each 5° x 5° geographical tile by using the multi-temporal Landsat spectral and texture features of the corresponding training data, and the GLC_FCS30-2015 land-cover product containing 30 land-cover types was generated for each tile. Lastly, the GLC_FCS30-2015 was validated against three different validation systems (containing different land-cover details) using 44 043 validation samples. The validation results indicated that the GLC_FCS30-2015 achieved an overall accuracy of 82.5% and a kappa coefficient of 0.784 for the level-0 validation system (9 basic land-cover types), an overall accuracy of 71.4% and a kappa coefficient of 0.686 for the UN-LCCS (United Nations Land Cover Classification System) level-1 system (16 LCCS land-cover types), and an overall accuracy of 68.7% and a kappa coefficient of 0.662 for the UN-LCCS level-2 system (24 fine land-cover types). The comparisons against other land-cover products (CCI_LC, MCD12Q1, FROM_GLC and GlobeLand30) indicated that GLC_FCS30-2015 provides more spatial details than CCI_LC-2015 and MCD12Q1-2015 and a greater diversity of land-cover types than FROM_GLC-2015 and GlobeLand30-2010, and that GLC_FCS30-2015 achieved the best overall accuracy of 82.5%, against 59.1% for FROM_GLC-2015 and 75.9% for GlobeLand30-2010. Therefore, it is concluded that the GLC_FCS30-2015 product is the first global land-cover dataset that provides a fine classification system with high classification accuracy at 30 m. The GLC_FCS30-2015 global land-cover product generated in this paper is available at https://doi.org/10.5281/zenodo.3986871 (Liu et al., 2020). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
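The overall accuracy and kappa coefficient reported in the abstract above are standard confusion-matrix statistics. The following minimal sketch (a synthetic 3-class matrix, not the study's data) shows how both are computed.

```python
import numpy as np

# Synthetic 3-class confusion matrix: rows = reference labels, columns = classified labels.
cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 39]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n                        # observed agreement p_o
expected_agreement = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement p_e
kappa = (overall_accuracy - expected_agreement) / (1 - expected_agreement)
print(overall_accuracy, kappa)
```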
7. The HSF Conditions Database Reference Implementation.
- Author
-
Mashinistov, Ruslan, Gerlach, Lino, Laycock, Paul, Formica, Andrea, Govi, Giacomo, and Pinkenburg, Chris
- Subjects
DATABASES ,COMPUTING platforms ,COMPUTER architecture ,METADATA ,REDUNDANCY in engineering - Abstract
Conditions data is the subset of non-event data that is necessary to process event data. It poses a unique set of challenges, namely a heterogeneous structure and high access rates by distributed computing. The HSF Conditions Databases activity is a forum for cross-experiment discussions inviting as broad a participation as possible. It grew out of the HSF Community White Paper work to study conditions data access, where experts from ATLAS, Belle II, and CMS converged on a common language and proposed a schema that represents best practice. Following discussions with a broader community, including NP as well as HEP experiments, a core set of use cases, functionality and behaviour was defined with the aim to describe a core conditions database API. This paper will describe the reference implementation of both the conditions database service and the client which together encapsulate HSF best practice conditions data handling. Django was chosen for the service implementation, which uses an ORM instead of the direct use of SQL for all but one method. The simple relational database schema to organise conditions data is implemented in PostgreSQL. The task of storing conditions data payloads themselves is outsourced to any POSIX-compliant filesystem, allowing for transparent relocation and redundancy. Crucially this design provides a clear separation between retrieving the metadata describing which conditions data are needed for a data processing job, and retrieving the actual payloads from storage. The service deployment using Helm on OKD will be described together with scaling tests and operations experience from the sPHENIX experiment running more than 25k cores at BNL. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
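The separation described above between metadata (kept in PostgreSQL via Django's ORM) and payloads (files on a POSIX filesystem) can be sketched with a minimal Django model. The class and field names here are hypothetical and do not reflect the reference implementation's actual schema.

```python
from django.db import models

class GlobalTag(models.Model):
    """Top-level label grouping a consistent set of conditions (hypothetical schema)."""
    name = models.CharField(max_length=255, unique=True)

class PayloadIOV(models.Model):
    """Metadata row: interval of validity plus a pointer to the payload on disk."""
    global_tag = models.ForeignKey(GlobalTag, on_delete=models.CASCADE)
    iov_start = models.BigIntegerField()   # first run/timestamp the payload is valid for
    iov_end = models.BigIntegerField()
    payload_path = models.CharField(max_length=1024)  # POSIX path; payload bytes are not stored in the DB
```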
8. A CLOUD COMPUTING PLATFORM TO SUPPORT DIGITAL HERITAGE APPLICATION USING A SERVICE-ORIENTED APPROACH.
- Author
-
Yang, S., Hou, M., Huo, P., Li, A., and Jiang, L.
- Subjects
COMPUTING platforms ,DIGITAL technology ,CLOUD computing ,HTTP (Computer network protocol) ,INFORMATION storage & retrieval systems ,DIGITIZATION ,ELECTRONIC paper - Abstract
Since the emergence of digital technologies, digital heritage has become an integral part of the world's cultural heritage under the leadership of UNESCO. With the development of digitization and the accumulation of data, digital information processing systems for cultural heritage keep evolving. However, there remain problems of a low degree of information integration and weak information-sharing ability, which severely restrict the promotion and integration of cultural heritage in society. This paper proposes a digital cultural heritage cloud platform in which basic data services, knowledge services, engineering applications, and visual exhibition are realized through HTTP requests. The platform completely encapsulates a large number of cultural heritage data processing algorithms. Accordingly, a cultural heritage data management system, a cultural heritage knowledge construction and application system, a cultural heritage display, analysis and evaluation system, and a cultural heritage microenvironment index monitoring system were embedded in the platform. In addition, the platform provides an API for professional customized development to give effective support. The platform can be flexibly adapted and extended, laying a solid foundation for the digital information sharing of cultural heritage. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. Study on Obstacle Detection Method Based on Point Cloud Registration.
- Author
-
Wang, Hongliang, Wang, Jianing, Wang, Yixin, Pi, Dawei, Chen, Yijie, and Fan, Jingjing
- Subjects
POINT cloud ,REMOTELY piloted vehicles ,COMPUTING platforms ,CLOUD computing ,SEARCH algorithms - Abstract
An efficient obstacle detection system is one of the most important guarantees for improving the active safety performance of autonomous vehicles. This paper proposes an obstacle detection method based on high-precision positioning applied to blocked zones to solve the problems of the high complexity of detection results, low computational efficiency, and high load in traditional obstacle detection methods. Firstly, an NDT registration method which uses the likelihood function as the optimal value of the registration score function to calculate the registration parameters is designed to match the scanning point cloud and the target point cloud. Secondly, a target reduction method combined with threshold judgment and the binary tree search algorithm is designed to filter the point cloud of non-road obstacles to improve the processing speed of the computing platform. Meanwhile, KD-tree is used to speed up the clustering process. Finally, a vehicle remote control simulation platform with the combination of a cloud platform and mobile terminal is designed to verify the effectiveness of the strategy in practical application. The results prove that the proposed obstacle detection method can improve the efficiency and accuracy of detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. 46.2: Invited Paper: The Development of Grating Waveguide Technology for Near‐Eye Display in Augmented Reality.
- Author
-
Xiaogang, Shi and Bingjie, Wang
- Subjects
AUGMENTED reality ,OPTICAL head-mounted displays ,COPLANAR waveguides ,SMARTPHONES ,COMPUTING platforms ,OPTICAL gratings ,TECHNOLOGY - Abstract
Augmented Reality (AR) is generally considered the next-generation computing platform after the PC and the smartphone, with extensive applications and colossal market capacity. As the most essential hardware of AR wearables, the Near‐Eye Display (NED) system determines the final form of the product. In this paper, the NED solutions of mainstream AR eyewear are discussed, and the advantages and disadvantages of prism, freeform, arrayed waveguide and grating waveguide, the four popular NED technologies for AR, are compared. Based on the above information, the strengths of grating waveguides in AR products are analyzed, the principles and processing technology are introduced, and the prospect of grating waveguides becoming the future of NED in AR is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
11. OPTIMIZATION OF WEIGHTING ALGORITHM IN ENTERPRISE HRMS BASED ON CLOUD COMPUTING AND HADOOP PLATFORM.
- Author
-
GENLIANG ZHAO
- Subjects
COST functions ,PERSONNEL management ,COMPUTING platforms ,CLOUD computing ,ALGORITHMS - Abstract
As enterprises increasingly rely on cloud-based Human Resource Management Systems (HRMS) deployed on the Hadoop platform, the optimization of weighting algorithms becomes imperative to enhance system efficiency. This paper addresses the complex challenge of load balancing in the cloud environment by proposing the Effective Load Balancing Strategy (ELBS), a hybrid optimization approach that integrates both the Genetic Algorithm (GA) and Grey Wolf Optimization (GWO). The optimization objective involves the allocation of N jobs submitted by cloud users to M processing units, each characterized by a Processing Unit Vector (PUV). The PUV encapsulates critical parameters such as Million Instructions Per Second (MIPS), execution cost α, and delay cost L. Concurrently, each job submitted by a cloud user is represented by a Job Unit Vector (JUV), considering the service type, number of instructions (NIC), job arrival time (AT), and worst-case completion time (wc). The proposed hybrid GA-GWO aims to minimize a cost function ζ incorporating weighted factors of execution cost and delay cost. The challenge lies in determining optimal weights, a task addressed by assigning user preferences or importance as weights. The hybrid algorithm iteratively evolves populations of processing units, applying genetic operators such as crossover and mutation, along with the exploration capabilities of GWO, to efficiently explore the solution space. This research contributes a comprehensive algorithmic solution to the optimization of weighting algorithms in enterprise HRMS on the cloud and Hadoop platform. The adaptability of the hybrid ELBS to dynamic cloud environments and its efficacy in handling complex optimization problems position it as a promising tool for achieving load balancing in HRMS applications. The proposed approach provides a foundation for further empirical validation and implementation in practical enterprise settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
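The weighted cost function ζ outlined in the abstract above can be sketched as follows. The exact weighting scheme, field names, and default weights are assumptions for illustration, not the paper's formulation.

```python
def job_cost(nic, mips, exec_cost_alpha, delay_cost_l, arrival_time, finish_time,
             w_exec=0.6, w_delay=0.4):
    """Illustrative weighted cost zeta for one job on one processing unit.

    nic: number of instructions; mips: processing-unit speed;
    exec_cost_alpha / delay_cost_l: per-unit execution and delay costs (from the PUV);
    w_exec / w_delay: user-preference weights (assumed values).
    """
    exec_time = nic / mips                                   # seconds of pure execution
    delay = max(0.0, finish_time - arrival_time - exec_time) # time spent waiting beyond execution
    return w_exec * exec_cost_alpha * exec_time + w_delay * delay_cost_l * delay

# The total cost of an allocation is the sum of job_cost over all N jobs,
# which a GA-GWO style search would then minimize over candidate allocations.
```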
12. DATA PROTECTION AND PRIVACY PROTECTION OF ADVERTISING BASED ON CLOUD COMPUTING PLATFORM.
- Author
-
ZHISHE CHEN
- Subjects
DATA privacy ,DEEP learning ,GENETIC algorithms ,COMPUTING platforms ,DATA mapping ,FUZZY algorithms - Abstract
This paper uses the hybrid leapfrog algorithm to mine user information in encrypted advertisements effectively and intelligently. The method handles the nonlinearity of the original data by mapping it into a kernel space. The representation of the original ciphertext in the kernel space is obtained by sparsely reconstructing the encrypted original advertising data. A corresponding scoring mechanism is built to select the best advertising data characteristics. The selected data are clustered using a data fuzzy clustering method based on the improved hybrid leapfrog algorithm. An adjustment coefficient is set to improve the local optimization performance of hybrid frog leaping. The algorithm uses the tightness and separation measures found in genetic algorithms and constructs a fitness function to determine the clustering critical value. This enables the practical, intelligent mining of homomorphic ciphertexts with privacy protection. Experimental results show that the method proposed in this article can effectively improve the convergence speed and accuracy of clustering. Blowfish is improved by combining multi-threading, shared encryption, and other methods, which enables the encryption and decryption of large amounts of model data. This research has considerable value for improving the security and effectiveness of cryptographic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Analysis of CLARANS Algorithm for Weather Data Based on Spark.
- Author
-
Jiahao Zhang and Honglin Wang
- Subjects
ALGORITHMS ,K-means clustering ,OUTLIER detection ,COMPUTING platforms ,CLUSTER analysis (Statistics) ,CLOUD computing ,EUCLIDEAN distance - Abstract
With the rapid development of technology, processing the explosively growing volume of meteorological data on traditional standalone computers has become increasingly time-consuming and cannot meet the demands of scientific research and business. Therefore, this paper proposes the implementation of the parallel Clustering Large Applications based upon RANdomized Search (CLARANS) clustering algorithm on the Spark cloud computing platform to cluster China's climate regions using meteorological data from 1988 to 2018. The aim is to address the challenge of applying clustering algorithms to large datasets. In this paper, the morphological similarity distance is adopted as the similarity measure instead of the Euclidean distance, which improves clustering accuracy. Furthermore, the issue of local optima caused by an improper selection of initial clustering centers is addressed by utilizing the max-distance criterion. Compared to the k-means clustering algorithm already implemented in the Spark platform, the proposed algorithm has strong robustness, can reduce the interference of outliers in the dataset on clustering results, and has higher parallel performance than the frequently used serial algorithms, thus improving the efficiency of big data analysis. The experiment compares the clustered centroid data with the annual average meteorological data of representative cities in the five typical meteorological regions of China, and the results show that the clustering results are in good agreement with the meteorological data obtained from the National Meteorological Science Data Center. This algorithm has a positive effect on the clustering analysis of massive meteorological data and deserves attention in scientific research activities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
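The max-distance criterion mentioned above for choosing initial cluster centers can be sketched as follows. This is a plain NumPy illustration, not the paper's Spark implementation, and it uses Euclidean distance purely for brevity.

```python
import numpy as np

def max_distance_init(points, k):
    """Pick k initial centers that are maximally spread out (illustrative sketch)."""
    centers = [points[0]]                      # start from an arbitrary point
    for _ in range(k - 1):
        # distance of every point to its nearest already-chosen center
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])   # the farthest point becomes the next center
    return np.stack(centers)

pts = np.random.rand(1000, 4)                  # synthetic "meteorological" feature vectors
init_centers = max_distance_init(pts, k=5)
```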
14. Selected papers of the Third International Conference on the Theory and Practice of Natural Computing, TPNC 2014.
- Author
-
Dediu, Adrian-Horia and Martín-Vide, Carlos
- Subjects
THEORY of constraints ,MATHEMATICAL models ,MATHEMATICAL statistics ,DECISION trees ,COMPUTING platforms - Published
- 2017
- Full Text
- View/download PDF
15. The Adoption of Green Programming Languages as a Promising Approach to Improve Computing's Sustainability.
- Author
-
Kashyap, Arya
- Subjects
PROGRAMMING languages ,SUSTAINABILITY ,ENVIRONMENTAL impact analysis ,COMPUTING platforms ,ENERGY consumption ,COMPUTER operating systems - Abstract
As the environmental impact of computing worsens, transitioning to more sustainable computing practices, also known as green computing, has become crucial. This literature review examines the concept of green computing, focusing on the environmental impacts of software languages and how millions of developers worldwide interact with popular yet energy-intensive languages such as Python and Perl daily without realizing their negative environmental consequences. The review elucidates how such impacts are measured and the potential advantages and drawbacks of adopting more green programming languages as new tools instead of energy-intensive ones. The paper further argues that green computing, while challenging to implement now, is critical for creating a more sustainable future for our society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. PERFORMANCE COMPARISON OF APACHE SPARK AND HADOOP FOR MACHINE LEARNING BASED ITERATIVE GBTR ON HIGGS AND COVID-19 DATASETS.
- Author
-
SEWAL, PIYUSH and SINGH, HARI
- Subjects
DISTRIBUTED computing ,GRAPH algorithms ,BATCH processing ,REGRESSION trees ,COMPUTING platforms ,SQL ,MACHINE learning - Abstract
In the realm of distributed computing frameworks, such as Apache Spark and MapReduce Hadoop, the efficacy of these frameworks varies across diverse applications and algorithms contingent upon distinctive evaluation metrics and critical parameters. This research paper diligently scrutinizes the extant body of research that compares these two frameworks concerning said evaluation metrics and parameters. Subsequently, it conducts empirical investigations to authenticate the performance of these frameworks in the context of an iterative Gradient Boosting Tree Regression (GBTR) algorithm. Remarkably, the comparative analyses in previous studies encompass a spectrum of iterative machine learning regression and classification techniques, batch processing, SQL, and Graph processing algorithms. Furthermore, numerous investigations have explored the application of machine learning algorithms encompassing logistic regression, Page Rank, K-Means, KNN, and the HiBench suite. This paper presents the comparison between the two distributed computing platforms on iterative GBTR for classification task on the HIGGS dataset from the physics domain and for the regression task on the Covid-19 dataset from the healthcare domain. The empirical findings corroborate that Apache Spark exhibits superior execution speed in iterative tasks when the available physical memory significantly exceeds the dataset size. Conversely, Hadoop outperforms Spark when dealing with substantial datasets or constrained physical memory resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
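A minimal PySpark sketch of the iterative GBTR training referred to above is given below; the file path, column names, and hyperparameters are placeholders, not the study's configuration.

```python
from pyspark.sql import SparkSession
from pyspark.ml.regression import GBTRegressor

spark = SparkSession.builder.appName("gbtr-benchmark").getOrCreate()

# Assumes a pre-assembled "features" vector column and a numeric "label" column.
train_df = spark.read.parquet("hdfs:///data/covid_features.parquet")  # placeholder path

gbt = GBTRegressor(featuresCol="features", labelCol="label",
                   maxIter=50, maxDepth=5)      # each boosting iteration adds one tree
model = gbt.fit(train_df)
predictions = model.transform(train_df)
```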
17. DDoS attacks and machine‐learning‐based detection methods: A survey and taxonomy.
- Author
-
Najafimehr, Mohammad, Zarifzadeh, Sajjad, and Mostafavi, Seyedakbar
- Subjects
DENIAL of service attacks ,COMPUTING platforms ,MACHINE learning ,TAXONOMY ,RESEARCH personnel - Abstract
Distributed denial of service (DDoS) attacks represent a significant cybersecurity challenge, posing a critical risk to computer networks. Developing an effective defense mechanism against these attacks is crucial but challenging, given their diverse attack types, network and computing platform heterogeneity, and complex communication protocols. Moreover, the emergence of innovative DDoS attack methods presents a formidable threat to existing countermeasures. Various machine learning techniques have shown promise in detecting DDoS attacks with low false‐positive rates and high detection rates. This survey paper offers a comprehensive taxonomy of machine learning‐based methods for detecting DDoS attacks, reviewing supervised, unsupervised, hybrid approaches, and analyzing the related challenges. Further, we explore relevant datasets, highlighting their strengths and limitations, and propose future research directions to address the current gaps in this domain. This paper aims to provide a profound understanding of DDoS attack detection mechanisms, aiding researchers, and practitioners in developing effective cybersecurity approaches against such attacks. This research is essential because DDoS attacks are diverse and pose a formidable threat to computer networks, and various machine learning techniques have shown promise in detecting them. Its implications include providing insights that can inform the development of robust defense mechanisms against DDoS attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Design of Fruit-Carrying Monitoring System for Monorail Transporter in Mountain Orchard.
- Author
-
Li, Zhen, Zhou, Yuehuai, Lyu, Shilei, Huang, Ying, Yi, Yuanfei, and Zhao, Chonghai
- Subjects
ORCHARDS ,COMPUTING platforms ,BODY image ,EDGE computing ,TRANSPORTATION equipment ,ORANGES - Abstract
The real-time monitoring and detection of fruit carrying for the monorail transporter in mountain orchards is significant for transporter scheduling and safety. In this paper, we present a fruit-carrying monitoring system, including a pan-tilt camera platform, an AI edge computing platform, an improved detection algorithm, and a web client. The system uses a pan-tilt camera to capture images of the truck body of the monorail transporter, realizing monitoring of fruit carrying. Besides, we present an improved fruit-carrying detection algorithm based on YOLOv5s, taking "basket", "orange" and "fullbasket" as the objects. We introduced the improved attention mechanism E-CBAM (Efficient-Convolutional Block Attention Module), based on CBAM, into the C3 module in the neck network of YOLOv5s. Focal loss was introduced into the classification and confidence losses to improve detection accuracy; to better deploy the model on the embedded platform, we compressed the model through the EagleEye pruning algorithm to reduce the parameters and improve the detection speed. The experiment was performed on custom fruit-carrying datasets; the mAP was 91.5%, which was 9.6%, 9.9% and 12.0% higher than that of Faster-RCNN, RetinaNet-Res50 and YOLOv3-tiny, respectively, and the detection speed on the Jetson Nano was 72 ms/img. The monitoring system and detection algorithm proposed in this paper can provide technical support for the safe transportation of monorail transporters and for scheduling transportation equipment more efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Prediction and Analysis of Heart Diseases Using Heterogeneous Computing Platform.
- Author
-
Sinnapolu, GiriBabu, Alawneh, Shadi, and Dixon, Simon R.
- Subjects
HEART diseases ,COMPUTING platforms ,CENTRAL processing units ,GRAPHICS processing units ,HEART failure ,HETEROGENEOUS computing ,VENTRICULAR fibrillation ,ARRHYTHMIA - Abstract
The work in this paper helps study cardiac rhythms and the electrical activity of the heart for two of the most critical cardiac arrhythmias. Various consumer devices exist, but implementing an appropriate device at a position on the body with a pressure point containing an enormous number of blood vessels, and developing filtering techniques for the most accurate signal extraction from the heart, are challenging tasks. In this paper, we provide evidence of the prediction and analysis of Atrial Fibrillation (AF) and Ventricular Fibrillation (VF). Long-term monitoring of occurrences of diseases such as AF and VF is very important, as these can lead to ischemic stroke, cardiac arrest and complete heart failure. The AF and VF signal classification accuracy is much higher when processed on a Graphics Processing Unit (GPU) than on a Central Processing Unit (CPU) or traditional Holter machines. The classifier COMMA-Z filter is applied to a highly sensitive, industry-certified Bio PPG sensor placed at the earlobe and computed on the GPU. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. A Fine-grained Asynchronous Bulk Synchronous parallelism model for PGAS applications.
- Author
-
Paul, Sri Raj, Hayashi, Akihiro, Chen, Kun, Elmougy, Youssef, and Sarkar, Vivek
- Subjects
COMPUTING platforms ,CONFERENCE papers ,CLOUD computing ,GEOMETRIC approach ,SCALABILITY - Abstract
The Partitioned Global Address Space (PGAS) model is well suited for executing irregular applications on cluster-based systems, due to its efficient support for short, one-sided messages. Separately, the actor model has been gaining popularity as a productive asynchronous message-passing approach for distributed objects in enterprise and cloud computing platforms, typically implemented in languages such as Erlang, Scala or Rust. To the best of our knowledge, there has been no past work on using the actor model to deliver both productivity and scalability to irregular PGAS applications with a large number of small messages. In this paper, we introduce a new programming system for PGAS applications, in which point-to-point remote operations can be expressed as fine-grained asynchronous actor messages. In our approach, the programmer does not need to worry about programming complexities related to message aggregation and termination detection. Our approach can be viewed as extending the classical Bulk Synchronous Parallelism model with fine-grained asynchronous communications within a phase or superstep. We believe that our approach offers a desirable point in the productivity-performance space for PGAS applications, with more scalable performance and higher productivity relative to past approaches. Specifically, for seven irregular mini-applications from the Bale Kernels and three graph kernels executed using 2048 cores in the NERSC Cori system, our approach shows geometric mean performance improvements of ≥ 20 × relative to standard PGAS versions (UPC and OpenSHMEM) while maintaining comparable productivity to those versions. This is an extended version of the conference paper "A Productive and Scalable Actor-Based Programming System for PGAS Applications" (Paul et al., 2022) [1] from ICCS 2022. • Extend the BSP model to a Fine-grained Asynchronous Bulk-Synchronous (FABS) model. • Realise FABS model in PGAS programming using Actor model with message aggregation. • Design an automatic termination protocol for the FABS model. • S2S translator from a productive lambda-based API to an efficient class-based API. • Implement a subset of OpenSHMEM APIs with message aggregation using FABS model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Implementation of efficient low-storage techniques for 3-D seismic simulation using the curved grid finite-difference method.
- Author
-
Wang, Wenqiang, Zhang, Zhenguo, Zhang, Wenqiang, and Liu, Qi
- Subjects
FINITE difference method ,ELASTIC waves ,WAVE equation ,HETEROGENEOUS computing ,GRAPHICS processing units ,COMPUTING platforms ,THEORY of wave motion - Abstract
High-resolution 3-D seismic simulation imposes severe demands for computational memory, making low-storage seismic simulation particularly important. Due to its high efficiency and low storage, the half-precision floating-point 16-bit format (FP16) is widely used in heterogeneous computing platforms, such as Sunway series supercomputers and graphics processing unit (GPU) computing platforms. Furthermore, the low-storage Runge–Kutta (LSRK) technique requires lower memory resources compared with the classical Runge–Kutta. Therefore, FP16 and LSRK provide the possibility for low-storage seismic simulation. However, the orders of magnitude of the physical quantities (velocity, stress and Lamé constants) in the elastic wave equations are influenced by the P-wave and S-wave velocities and the densities of the elastic media. This results in a huge order-of-magnitude difference between the stored velocity and stress values, which exceeds the range of the stored values of FP16. In this paper, we introduce three dimensionless constants, Cv, Cs and Cp, into the elastic wave equations, and new elastic wave equations are derived. The three constants, Cv, Cs and Cp, keep the orders of magnitude of the velocity and stress at a similar level in the new elastic wave equations. Thus, the stored values of these variables in the new equations remain within the range of the stored values of FP16. In addition, we introduce the use of LSRK due to its low-storage characteristic. In this paper, based on the FP16 and LSRK low-storage techniques, we develop three optimized multi-GPU solvers for seismic simulation using the curved grid finite-difference method (CGFDM). Moreover, we perform a series of seismic simulations to verify the accuracy, stability, and validity of the optimized solver coupled with the two techniques. The verifications indicate that while maintaining the calculation accuracy, the computational efficiency of the solver is significantly optimized, and the memory usage is remarkably reduced. In particular, under the best conditions, the memory usage can be reduced to nearly 1/3 that of the original CGFDM solver. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
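One plausible form of the rescaling idea described in the abstract above, written as a LaTeX sketch: dividing velocities, stresses, and elastic moduli by media-dependent constants keeps the stored values near unity and inside the FP16 range. The exact substitutions used in the paper may differ from this assumed form.

```latex
% Illustrative nondimensionalization (assumed form, not the paper's exact equations)
\tilde{v}_i = \frac{v_i}{C_v}, \qquad
\tilde{\sigma}_{ij} = \frac{\sigma_{ij}}{C_s}, \qquad
\tilde{\lambda} = \frac{\lambda}{C_p}, \qquad
\tilde{\mu} = \frac{\mu}{C_p}
% With C_v, C_s, C_p chosen from typical wave speeds and densities of the media,
% \tilde{v}_i and \tilde{\sigma}_{ij} have comparable orders of magnitude and stay
% within the representable range of FP16.
```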
22. Financial Data Security Management Method and Edge Computing Platform Based on Intelligent Edge Computing and Big Data.
- Author
-
Luo, Yanni
- Subjects
COMPUTING platforms ,EDGE computing ,DATA management ,DATA security ,FINANCIAL security ,BIG data - Abstract
This paper mainly studies the financial data security management method and edge computing platform based on intelligent edge computing and big data. The paper first introduces the theoretical basis of edge computing and big data computing, analyzes the collaborative mode of the edge computing model and cloud computing, and discusses the cloud data storage scheme. An edge computing platform is built, with the EC Master and a Docker private image warehouse mainly deployed in the data center. The edge computing platform constructed in this paper was then tested. In the mobile edge computing business processing time test, when there was a big gap in computing amount between business 1 and business 2, the total service time of MEC instances was around 0.3 s, showing a good user experience. In the edge acceleration effect test, before and after the adoption of CDN edge acceleration, the average time consumed to open a web page decreased from 1615 ms to 403 ms, and for a single HTTP access request the average time decreased from 520 ms to 74 ms. The financial data management edge computing platform designed in this paper can effectively manage financial data and make an effective contribution to enterprise data management. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Pure-Attention-Based Multifunction Memristive Neuromorphic Circuit and System.
- Author
-
Xiao, He, Sun, Haohang, Zhao, Tianhao, Zhou, Yue, and Hu, Xiaofang
- Subjects
ARTIFICIAL intelligence ,SPEECH perception ,IMAGE recognition (Computer vision) ,AUTOMATIC speech recognition ,COMPUTING platforms - Abstract
The use of memristive neuromorphic circuits and systems is a promising solution for next-generation Artificial Intelligence (AI) computing, as it offers possibilities that go beyond conventional GPU-based artificial neural network computing platforms. However, most existing memristive neuromorphic circuits and systems are designed for specific networks, which leaves them lacking universality and flexibility. Therefore, this paper proposes a universal memristive circuit and system framework for pure-attention-based transformer networks to implement multifunction applications on edge devices. Furthermore, the verification of image recognition and speech recognition was achieved by extending the size of the memristor crossbar array macros and reconfiguring the memristor weights without changing the memristive transformer circuit and framework. This paper not only provides a universal edge implementation framework for multifunction applications of the transformer, but also offers a low-power and promising solution for the application of pure-attention-based transformers on edge devices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. TAMM: Tensor algebra for many-body methods.
- Author
-
Mutlu, Erdal, Panyala, Ajay, Gawande, Nitin, Bagusetty, Abhishek, Glabe, Jeffrey, Kim, Jinsung, Kowalski, Karol, Bauman, Nicholas P., Peng, Bo, Pathak, Himadri, Brabec, Jiri, and Krishnamoorthy, Sriram
- Subjects
TENSOR algebra ,UNIVERSAL algebra ,GRAPHICS processing units ,COMPUTING platforms ,MODULAR construction ,HETEROGENEOUS computing - Abstract
Tensor algebra operations such as contractions in computational chemistry consume a significant fraction of the computing time on large-scale computing platforms. The widespread use of tensor contractions between large multi-dimensional tensors in describing electronic structure theory has motivated the development of multiple tensor algebra frameworks targeting heterogeneous computing platforms. In this paper, we present Tensor Algebra for Many-body Methods (TAMM), a framework for productive and performance-portable development of scalable computational chemistry methods. TAMM decouples the specification of the computation from the execution of these operations on available high-performance computing systems. With this design choice, the scientific application developers (domain scientists) can focus on the algorithmic requirements using the tensor algebra interface provided by TAMM, whereas high-performance computing developers can direct their attention to various optimizations on the underlying constructs, such as efficient data distribution, optimized scheduling algorithms, and efficient use of intra-node resources (e.g., graphics processing units). The modular structure of TAMM allows it to support different hardware architectures and incorporate new algorithmic advances. We describe the TAMM framework and our approach to the sustainable development of scalable ground- and excited-state electronic structure methods. We present case studies highlighting the ease of use, including the performance and productivity gains compared to other frameworks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
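As a generic illustration of the kind of tensor contraction such frameworks execute (plain NumPy, not TAMM's actual API), a four-index contraction resembling those in coupled-cluster methods can be written as follows; the tensor shapes and index labels are illustrative only.

```python
import numpy as np

n = 8                                   # toy dimension; real calculations are far larger
t2 = np.random.rand(n, n, n, n)         # amplitude-like tensor t[i,j,a,b]
v = np.random.rand(n, n, n, n)          # integral-like tensor  v[a,b,c,d]

# Contract over the shared indices a,b: r[i,j,c,d] = sum_ab t2[i,j,a,b] * v[a,b,c,d]
r = np.einsum("ijab,abcd->ijcd", t2, v)
```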
25. Combining Machine Learning and Edge Computing: Opportunities, Challenges, Platforms, Frameworks, and Use Cases.
- Author
-
Grzesik, Piotr and Mrozek, Dariusz
- Subjects
MACHINE learning ,EDGE computing ,DATA privacy ,SMART cities ,COMPUTING platforms ,ENVIRONMENTAL monitoring ,VEHICLE routing problem - Abstract
In recent years, we have been observing the rapid growth and adoption of IoT-based systems, enhancing multiple areas of our lives. Concurrently, the utilization of machine learning techniques has surged, often for similar use cases as those seen in IoT systems. In this survey, we aim to focus on the combination of machine learning and the edge computing paradigm. The presented research commences with the topic of edge computing, its benefits, such as reduced data transmission, improved scalability, and reduced latency, as well as the challenges associated with this computing paradigm, like energy consumption, constrained devices, security, and device fleet management. It then presents the motivations behind the combination of machine learning and edge computing, such as the availability of more powerful edge devices, improving data privacy, reducing latency, or lowering reliance on centralized services. Then, it describes several edge computing platforms, with a focus on their capability to enable edge intelligence workflows. It also reviews the currently available edge intelligence frameworks and libraries, such as TensorFlow Lite or PyTorch Mobile. Afterward, the paper focuses on the existing use cases for edge intelligence in areas like industrial applications, healthcare applications, smart cities, environmental monitoring, or autonomous vehicles. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Environment Understanding Algorithm for Substation Inspection Robot Based on Improved DeepLab V3+.
- Author
-
Wang, Ping, Li, Chuanxue, Yang, Qiang, Fu, Lin, Yu, Fan, Min, Lixiao, Guo, Dequan, and Li, Xinming
- Subjects
COMPUTING platforms ,ALGORITHMS ,ROBOTIC exoskeletons - Abstract
Compared with traditional manual inspection, inspection robots can not only meet the all-weather, real-time, and accurate inspection needs of substations, but also reduce the work intensity of operation and maintenance personnel and decrease the probability of safety accidents. To meet the urgent demand for more intelligent substation inspection robots, an environment understanding algorithm based on an improved DeepLab V3+ neural network is proposed in this paper. The improved neural network replaces the original dilation rate combination in the ASPP (atrous spatial pyramid pooling) module with a new combination that gives better segmentation accuracy at object edges, and adds a CBAM (convolutional block attention module) in each of the two up-sampling stages. In order to be transplanted to an embedded platform with limited computing resources, the improved neural network is compressed. Multiple sets of comparative experiments were carried out on the standard PASCAL VOC 2012 dataset and the substation dataset. Experimental results show that, compared with DeepLab V3+, the improved DeepLab V3+ achieves a mean intersection-over-union (mIoU) over eight categories of 57.65% on the substation dataset, an improvement of 6.39%, with a model size of 13.9 M, a decrease of 147.1 M. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Toward security quantification of serverless computing.
- Author
-
Ni, Kan, Mondal, Subrota Kumar, Kabir, H M Dipu, Tan, Tian, and Dai, Hong-Ning
- Subjects
COMPUTING platforms ,CLOUD computing ,RESEARCH personnel ,RISK assessment ,QUANTITATIVE research - Abstract
Serverless computing is one of the recent compelling paradigms in cloud computing. Serverless computing can quickly run user applications and services regardless of the underlying server architecture. Despite the availability of several commercial and open-source serverless platforms, there are still some open issues and challenges to address. One of the key concerns in serverless computing platforms is security. Therefore, in this paper, we present a multi-layer abstract model of serverless computing for a security investigation. We conduct a quantitative analysis of the security risks for each layer. We observe that the Attack Tree and Attack-Defense Tree methodologies are viable approaches in this regard. Consequently, we make use of the Attack Tree and the Attack-Defense Tree to quantify the security risks and countermeasures of serverless computing. We also propose a novel measure called the Relative Risk Matrix (RRM) to quantify the probability of attack success. Stakeholders including application developers, researchers, and cloud providers can potentially apply these findings and implications to better understand and further enhance the security of serverless computing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
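As a generic illustration of how an attack-tree analysis quantifies the probability of attack success (standard AND/OR propagation, not the paper's Relative Risk Matrix), consider the following sketch; the tree structure and leaf probabilities are hypothetical.

```python
def success_probability(node):
    """Propagate success probabilities up a simple attack tree (illustrative)."""
    if "prob" in node:                         # leaf: elementary attack step
        return node["prob"]
    child_p = [success_probability(c) for c in node["children"]]
    if node["gate"] == "AND":                  # all sub-attacks must succeed
        p = 1.0
        for cp in child_p:
            p *= cp
        return p
    # OR gate: the attack succeeds if at least one sub-attack succeeds
    p_fail = 1.0
    for cp in child_p:
        p_fail *= (1.0 - cp)
    return 1.0 - p_fail

# Hypothetical serverless attack tree:
# steal credentials AND (inject malicious event OR abuse a misconfigured role)
tree = {"gate": "AND", "children": [
    {"prob": 0.3},
    {"gate": "OR", "children": [{"prob": 0.2}, {"prob": 0.4}]},
]}
print(success_probability(tree))   # 0.3 * (1 - 0.8 * 0.6) = 0.156
```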
28. An ultra lightweight neural network for automatic modulation classification in drone communications.
- Author
-
Wang, Mengtao, Fang, Shengliang, Fan, Youchen, Li, Jinming, Zhao, Yi, and Wang, Yuying
- Subjects
WIRELESS communications ,AUTOMATIC classification ,COMPUTING platforms ,DRONE aircraft ,DATA augmentation - Abstract
Unmanned aerial vehicle (UAV)-assisted communication based on automatic modulation classification (AMC) technology is considered an effective solution to improve the transmission efficiency of wireless communication systems, as it can adaptively select the most suitable modulation method according to the current communication environment. However, many existing deep learning (DL)-based AMC methods cannot be directly applied to UAV platforms with limited computing power and storage space, because of the contradiction between accuracy and efficiency. This paper mainly studies making DL-based AMC networks lightweight to improve their adaptability in resource-constrained scenarios. To address this challenge, we propose an ultra-lightweight neural network (ULNN). This network incorporates a lightweight convolutional structure, an attention mechanism, and a cross-channel feature fusion technique. Additionally, we introduce data augmentation (DA) based on signal phase offsets during model training, aimed at improving the model's generalization ability and preventing overfitting. Through experimental validation on the public dataset RML2016.10A, the proposed ULNN achieves an average precision of 62.83% with only 8815 parameters and reaches a peak classification accuracy of 92.11% at SNR = 10 dB. The experimental results show that the ULNN can achieve high recognition accuracy while keeping the model lightweight, and it is suitable for UAV platforms with limited resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
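The phase-offset data augmentation mentioned above can be sketched in a few lines of NumPy; the offset range and array layout are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def phase_offset_augment(iq, max_offset_rad=np.pi):
    """Rotate complex I/Q samples by a random phase to augment AMC training data."""
    theta = np.random.uniform(-max_offset_rad, max_offset_rad)
    rotated = (iq[..., 0] + 1j * iq[..., 1]) * np.exp(1j * theta)
    return np.stack([rotated.real, rotated.imag], axis=-1)

# e.g. a batch of 128-sample signals stored as (batch, samples, 2) I/Q pairs
batch = np.random.randn(32, 128, 2).astype(np.float32)
augmented = phase_offset_augment(batch)
```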
29. A unified web cloud computing platform MiMedSurv for microbiome causal mediation analysis with survival responses.
- Author
-
Jang, Hyojung and Koh, Hyunwook
- Subjects
COMPUTING platforms ,SURVIVAL analysis (Biometry) ,CLOUD computing ,INTERNET servers ,HUMAN microbiota ,GUT microbiome ,MICROBIAL diversity - Abstract
In human microbiome studies, mediation analysis has recently been spotlighted as a practical and powerful analytic tool to survey the causal roles of the microbiome as a mediator to explain the observed relationships between a medical treatment/environmental exposure and a human disease. We also note that, in a clinical research, investigators often trace disease progression sequentially in time; as such, time-to-event (e.g., time-to-disease, time-to-cure) responses, known as survival responses, are prevalent as a surrogate variable for human health or disease. In this paper, we introduce a web cloud computing platform, named as microbiome mediation analysis with survival responses (MiMedSurv), for comprehensive microbiome mediation analysis with survival responses on user-friendly web environments. MiMedSurv is an extension of our prior web cloud computing platform, named as microbiome mediation analysis (MiMed), for survival responses. The two main features that are well-distinguished are as follows. First, MiMedSurv conducts some baseline exploratory non-mediational survival analysis, not involving microbiome, to survey the disparity in survival response between medical treatments/environmental exposures. Then, MiMedSurv identifies the mediating roles of the microbiome in various aspects: (i) as a microbial ecosystem using ecological indices (e.g., alpha and beta diversity indices) and (ii) as individual microbial taxa in various hierarchies (e.g., phyla, classes, orders, families, genera, species). To illustrate its use, we survey the mediating roles of the gut microbiome between antibiotic treatment and time-to-type 1 diabetes. MiMedSurv is freely available on our web server (http://mimedsurv.micloud.kr). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Maintaining the completion-time mechanism for Greening tasks scheduling on DVFS-enabled computing platforms.
- Author
-
Hagras, Tarek and El-Sayed, Gamal A.
- Subjects
COMPUTING platforms ,ENERGY consumption ,COMPUTER simulation ,SCHEDULING ,VOLTAGE - Abstract
The key factor in reducing the energy consumed when dependent-task applications are scheduled on DVFS-enabled computing platforms is the task execution time slots. The unique and axiomatic approach to reducing energy consumption on such platforms is to scale down the execution frequency of each task within its execution time slot, provided a suitable scaling-down frequency is available. Regrettably, scheduling algorithms often shrink task execution time slots because they minimize task completion times. This paper presents BlueMoon, a mechanism that reschedules application tasks to extend the execution time slot of each task while ensuring that the overall completion time of the application remains unaffected. BlueMoon is implemented and tested on numerous schedules of application graphs. The experimental results, obtained through computer simulations, demonstrate that BlueMoon substantially extends the execution time slots of tasks compared to other mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
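The core DVFS idea referenced above, scaling a task's frequency down to fill its (extended) execution time slot, can be sketched as follows. The available frequency levels and the slowdown model are assumptions for illustration and are not part of BlueMoon itself.

```python
def pick_frequency(exec_time_at_fmax, slot_length, f_max, available_freqs):
    """Lowest frequency that still finishes the task inside its time slot (illustrative)."""
    candidates = [f for f in available_freqs
                  if exec_time_at_fmax * (f_max / f) <= slot_length]
    return min(candidates) if candidates else f_max

freqs = [0.6, 0.8, 1.0, 1.2, 1.5]          # GHz, hypothetical DVFS levels
f = pick_frequency(exec_time_at_fmax=2.0,  # seconds when running at f_max
                   slot_length=4.0,        # slot extended by rescheduling
                   f_max=1.5, available_freqs=freqs)
# f == 0.8 GHz: 2.0 s * (1.5 / 0.8) = 3.75 s <= 4.0 s, the slowest feasible level
```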
31. Real-Time Lane Recognition Based on Feature Extraction and Edge Point Voting.
- Author
-
Yang Da, Wei Changhe, Jia Chengyu, and Ye Siqin
- Subjects
FEATURE extraction ,COMPUTING platforms ,HOUGH transforms ,STATISTICAL sampling ,MOTOR vehicle driving ,ALGORITHMS - Abstract
To satisfy the requirements of low-power vehicle computing platforms for lane detection, this paper proposes a real-time lane recognition method with low computational demands. Considering the variation of illumination during driving, a color separation method based on adaptive illumination is proposed to extract lane characteristics. An effective edge point form is defined, and the lane lines are determined by edge point voting based on classical edge detection and the Hough transform algorithm. The lane lines are used to filter and supplement the edge points, and the lane curve equation is obtained using the random sample consensus algorithm. The results show that the proposed method achieves a recognition accuracy of over 98% and a computation speed of 38 frames per second on a low-power processor. Furthermore, the method has proven to be stable and robust in a variety of scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
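The pipeline summarized above (edge detection, Hough voting, then a fitted lane curve) follows a standard pattern; a minimal OpenCV sketch is given below. The thresholds and the least-squares quadratic fit are chosen only for illustration, since the paper uses its own edge-point voting and random sample consensus.

```python
import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")                     # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                         # classical edge detection

# Hough transform: each detected segment "votes" for candidate lane lines.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)

# Collect endpoints from the segments and fit a simple quadratic lane curve
# (illustrative least-squares fit; the paper fits the curve with RANSAC instead).
pts = segments.reshape(-1, 2) if segments is not None else np.empty((0, 2))
if len(pts) >= 3:
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], deg=2)     # x as a function of y
```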
32. Field‐Free Memristive Spin–Orbit Torque Switching in A1 CoPt Single Layer for Image Edge Detection.
- Author
-
Yang, Liu, Li, Wendi, Zuo, Chao, Tao, Ying, Jin, Fang, Li, Huihui, Tang, RuJun, and Dong, Kaifeng
- Subjects
MAGNETIC control ,COMPUTING platforms ,TORQUE ,SYMMETRY ,HARDWARE ,SPIN-orbit interactions - Abstract
While spin–orbit torque (SOT) devices are extensively investigated due to their potential for use in neural network computation, it remains challenging to explore the hardware for neural networks. In this paper, the field‐free memristive SOT switching of the CoPt single layer is used to propose a neuromorphic hardware circuit for detecting edges in images. Owing to its threefold symmetry of inversion, the polarity of SOT switching can be reversed by rotating the current by 60°. Moreover, the process of current‐induced SOT switching exhibits stable multi‐state magnetic switching behavior, and can be controllably tuned by using the pulse of the current. As the slope of the applied ramp pulse current increased, the wave of the anomalous Hall resistance changed from a curve with normally memristive property to trigonometric, and finally to cosine. The design of the hardware circuit for a single SOT device is subsequently formulated to detect the edges in images. The results of experiments verified the capability of this device to detect the edge lines in images with high accuracy, which confirms its potential for use in the hardware of neuromorphic computing platforms. The work here provides guidance for the application of SOT‐based devices to neuromorphic hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Optimization of Memristor Crossbar's Mapping Using Lagrange Multiplier Method and Genetic Algorithm for Reducing Crossbar's Area and Delay Time.
- Author
-
Cho, Seung-Myeong, Yoon, Rina, Yoon, Ilpyeong, Moon, Jihwan, Oh, Seokjin, and Min, Kyeong-Sik
- Subjects
CONVOLUTIONAL neural networks ,PROCESS capability ,LAGRANGE multiplier ,GENETIC algorithms ,COMPUTING platforms - Abstract
Memristor crossbars offer promising low-power and parallel processing capabilities, making them efficient for implementing convolutional neural networks (CNNs) in terms of delay time, area, etc. However, mapping large CNN models like ResNet-18, ResNet-34, VGG-Net, etc., onto memristor crossbars is challenging due to the line resistance problem limiting crossbar size. This necessitates partitioning full-image convolution into sub-image convolution. To do so, an optimized mapping of memristor crossbars should be considered to divide full-image convolution into multiple crossbars. With limited crossbar resources, especially in edge devices, it is crucial to optimize the crossbar allocation per layer to minimize the hardware resource in term of crossbar area, delay time, and area–delay product. This paper explores three optimization scenarios: (1) optimizing total delay time under a crossbar's area constraint, (2) optimizing total crossbar area with a crossbar's delay time constraint, and (3) optimizing a crossbar's area–delay-time product without constraints. The Lagrange multiplier method is employed for the constrained cases 1 and 2. For the unconstrained case 3, a genetic algorithm (GA) is used to optimize the area–delay-time product. Simulation results demonstrate that the optimization can have significant improvements over the unoptimized results. When VGG-Net is simulated, the optimization can show about 20% reduction in delay time for case 1 and 22% area reduction for case 2. Case 3 highlights the benefits of optimizing the crossbar utilization ratio for minimizing the area–delay-time product. The proposed optimization strategies can substantially enhance the neural network's performance of memristor crossbar-based processing-in-memory architectures, especially for resource-constrained edge computing platforms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
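The constrained cases (1) and (2) described above are classic Lagrange-multiplier problems. In sketch form, with generic symbols where x_l denotes the crossbar allocation of layer l (this is the textbook formulation, not the paper's exact model):

```latex
% Case 1: minimize total delay subject to an area budget (illustrative formulation)
\min_{x_1,\dots,x_L} \; T(x_1,\dots,x_L)
\quad \text{s.t.} \quad A(x_1,\dots,x_L) \le A_{\max}
% Lagrangian and stationarity conditions solved for the per-layer allocations:
\mathcal{L} = T(x) + \lambda \bigl( A(x) - A_{\max} \bigr), \qquad
\frac{\partial T}{\partial x_l} + \lambda \frac{\partial A}{\partial x_l} = 0 \;\; \forall l,
\qquad \lambda \ge 0.
```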
34. A novel adaptive sampling algorithm for cyber-physical systems.
- Author
-
Molhem, Mohammed
- Subjects
CYBER physical systems ,COMPUTING platforms ,ALGORITHMS ,BIG data ,ADAPTIVE sampling (Statistics) ,DETECTORS - Abstract
Sensors are the main components in Cyber-Physical Systems (CPS); they transmit large amounts of physical values and big data to computing platforms for processing. On the other hand, embedded processors (as edge devices in fog computing) spend most of their time reading the sensor signals compared with computing time. The impact of sensors on the performance of fog computing is therefore very great; thus, enhancing the sensor reading time will positively affect the performance of fog computing and help solve CPS challenges such as delay, timed precision, temporal behavior, energy, and cost. In this paper, we propose an algorithm based on the 1st derivative of the sensor signal to generate an adaptive sampling frequency. The proposed algorithm uses the adaptive frequency to capture sudden and rapid changes in the sensor signal in the steady state. Finally, we realized and tested it using the Ptolemy II Modeling Environment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
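A minimal sketch of the first-derivative idea described above, assuming a Python harness and illustrative threshold/period constants (none of these values come from the paper):

```python
import math

def adaptive_sample(signal_fn, t_end, dt_min=0.01, dt_max=0.5, gain=0.1):
    """Sample signal_fn(t) with a period that shrinks when |d signal/dt| is large.

    dt_min, dt_max and gain are illustrative tuning constants.
    """
    samples, t, dt = [], 0.0, dt_max
    prev = signal_fn(0.0)
    while t <= t_end:
        value = signal_fn(t)
        samples.append((t, value))
        derivative = abs(value - prev) / dt            # 1st-derivative estimate
        dt = max(dt_min, min(dt_max, gain / (derivative + 1e-9)))
        prev, t = value, t + dt
    return samples

# Example: a signal with a sudden step at t = 5 gets densely sampled around it.
step_signal = lambda t: 1.0 if t > 5.0 else 0.01 * math.sin(t)
points = adaptive_sample(step_signal, t_end=10.0)
print(f"{len(points)} samples taken")
```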
35. Bibliometric Analysis of Global NDVI Research Trends from 1985 to 2021.
- Author
-
Xu, Yang, Yang, Yaping, Chen, Xiaona, and Liu, Yangxiaoyue
- Subjects
NORMALIZED difference vegetation index ,SCIENTIFIC literature ,REMOTE sensing ,SCIENTIFIC method ,COMPUTING platforms ,MACHINE learning - Abstract
As one of the earliest remote sensing indices, the Normalized Difference Vegetation Index (NDVI) has been employed extensively for vegetation research. However, despite an abundance of NDVI review articles, these studies are predominantly limited to a single subject area or region, and systematic NDVI reviews remain relatively rare. Bibliometrics is a useful method of analyzing scientific literature that has been widely used in many disciplines; however, it has not yet been applied to comprehensively analyze NDVI research. Therefore, we used bibliometrics and scientific mapping methods to analyze citation data retrieved from the Web of Science during 1985–2021 with NDVI as the topic. According to the analysis results, the amount of NDVI research increased exponentially during the study period, and the related research fields became increasingly varied. Moreover, a greater number of satellite and aerial remote sensing platforms resulted in more diverse NDVI data sources. In the future, machine learning methods and cloud computing platforms led by Google Earth Engine will substantially improve the accuracy and production efficiency of NDVI data products for more effective global research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
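For reference, the index the study tracks is computed per pixel from the near-infrared and red reflectance bands as NDVI = (NIR − Red) / (NIR + Red); a small NumPy sketch, with synthetic bands as assumed inputs:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), clipped to the valid [-1, 1] range."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        index = np.where(denom != 0, (nir - red) / denom, 0.0)
    return np.clip(index, -1.0, 1.0)

# Example with tiny synthetic reflectance bands.
nir_band = np.array([[0.6, 0.5], [0.4, 0.3]])
red_band = np.array([[0.1, 0.2], [0.3, 0.3]])
print(ndvi(nir_band, red_band))
```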
36. Development of Cloud Computing Platform Based on Neural Network.
- Author
-
Zhang, Zhi, Liwei Wang, Liu, Ruiying, and Fan, Jinghang
- Subjects
COMPUTING platforms ,SOFTWARE as a service ,DATA warehousing ,ENERGY development ,SOFTWARE architecture - Abstract
Aiming at the problems of limited data storage, low platform throughput, and high energy consumption in existing cloud computing platforms, this paper develops a cloud computing platform based on neural networks. In the design of the platform, the functions of the cloud computing platform are determined first. Based on the functional design, the hardware and software of the cloud computing platform are designed. In the hardware design, the topology of the cloud computing platform, the data acquisition module, the single-chip microcomputer, node deployment, and other functional hardware are designed. In the software design, a neural network is mainly used to remove redundancy from the data stored in the cloud computing platform and to design the data processing flow of the platform, completing the development of the neural-network-based cloud computing platform. The experimental results show that the cloud computing platform based on the neural network designed in this paper runs faster, its data throughput is significantly improved, and its operating energy consumption is low. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Transitioning GlideinWMS, a multi domain distributed workload manager, from GSI proxies to tokens and other granular credentials.
- Author
-
Mambelli, Marco, Coimbra, Bruno, and Box, Dennis
- Subjects
TOKENS ,JETTONS ,COMPUTER systems ,ALGORITHMS ,COMPUTING platforms - Abstract
GlideinWMS is a distributed workload manager that has been used in production for many years to provision resources for experiments like CERN's CMS, many neutrino experiments, and the OSG. Its security model was based mainly on GSI (Grid Security Infrastructure), using X.509 certificate proxies and VOMS (Virtual Organization Membership Service) extensions. Even when other credentials, like SSH keys, were used to authenticate with resources, proxies were still added every time to establish the identity of the requestor and the associated memberships or privileges. This single credential was used for everything and was, often implicitly, forwarded wherever needed. The addition of identity and access tokens and the phase-out of GSI forced us to reconsider the security model of GlideinWMS to handle multiple credentials that can differ in type, technology, and functionality. Both identity tokens and access tokens are supported. GSI proxies, even if no longer mandatory, are still used, together with various JWT (JSON Web Token) based tokens and other certificates. The functionality of the credentials, defined by issuer, audience, and scope, also differs: a credential can allow access to a computing resource, protect the GlideinWMS framework from tampering, grant read or write access to storage, provide an identity for accounting or auditing, or provide a combination of any of the former. Furthermore, the tools in use do not include automatic forwarding and renewal of the new credentials, so credential lifetime and renewal requirements became part of the discussion as well. In this paper, we present how GlideinWMS changed its design and code to respond to all these changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
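The issuer/audience/scope functionality mentioned above is carried as JWT claims. A hedged sketch of inspecting those claims with the PyJWT library; the token string is a placeholder, and real use would also verify the signature and expiry against the issuer's keys:

```python
import jwt  # PyJWT

def inspect_token(token: str) -> dict:
    """Decode a JWT without signature verification, just to read its claims.

    Production code must verify the signature and expiry against the issuer's keys.
    """
    claims = jwt.decode(token, options={"verify_signature": False})
    return {
        "issuer": claims.get("iss"),     # who minted the credential
        "audience": claims.get("aud"),   # which service it is intended for
        "scope": claims.get("scope"),    # what it allows (compute, storage read/write, ...)
        "expires": claims.get("exp"),    # lifetime, relevant for renewal
    }

# example_token = "<a JWT string issued to the workload manager>"
# print(inspect_token(example_token))
```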
38. FTS Service Evolution and LHC Run-3 Operations.
- Author
-
Murray, Steven, Patrascoiu, Mihai, Mascetti, Luca, Lopes, Joao Pedro, Misra, Shubhangi, and Silva Junior, Eraldo
- Subjects
FILE Transfer Protocol (Computer network protocol) ,COMPUTER software ,PHYSICS ,COST estimates ,COMPUTING platforms - Abstract
The File Transfer Service (FTS) is a software system responsible for queuing, scheduling, dispatching, and retrying file transfer requests. It is used by three of the LHC experiments, namely ATLAS, CMS, and LHCb, as well as non-LHC experiments including AMS, DUNE, and NA62. FTS is critical to the success of many experiments, and the service must remain available and performant during the entire LHC Run-3. Experiments use FTS to manage the transfer of their physics files all over the world, or more specifically all over the Worldwide LHC Computing Grid (WLCG). Since the start of LHC Run-3 (from 5th July 2022 to 31st August 2023), FTS has managed the successful transfer of approximately 1.2 billion files totalling 1.83 exabytes of data. This paper describes how the FTS team has evolved the software and its deployment in order to cope with changes in implementation technologies, increase the efficiency of the service, streamline its operations, and meet the ever-changing needs of its user community. We report on the software migration from Python 2 to Python 3, the move from the Pylons web development framework to Flask, and the new database deployment strategy that separates the handling of critical operations from long-duration monitoring queries. In addition, during 2022 a new HTTP-based protocol was finalised that can now be used between FTS and compatible WLCG tape storage endpoints. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Information System Plan for West Corner Hotel (A Case Study).
- Author
-
Cadorniga, Milstein Rei, Cruz, Ashly Mariel, Cruz, Coleen Dela, Suello, Kathleen, and Intal, Grace Lorraine
- Subjects
HOTEL personnel management ,INFORMATION storage & retrieval systems ,CONCEPTUAL models ,COMPUTING platforms ,SWOT analysis - Abstract
The West Corner Hotel (WCH) is located in downtown Olongapo City, at the center of the city. The hotel has 68 adequately furnished rooms complete with amenities for travelers. This paper aims to create a conceptual model, an applicable information system strategy, to improve the use of the information at the hotel staff's disposal. The paper uses Porter's five forces of competition to analyze the external factors affecting the hotel. The current strategies of West Corner are technology as a tool, staff training, budget-friendly accommodations, and a location strategy. Their computing environment includes Wi-Fi, a local area network, and their custom-built Hotel Management System software. They also use Facebook and Gmail to communicate with customers. This paper finds that West Corner is underutilizing its technology and recommends improving the Facebook page's messaging bot to improve communication between customers and staff. The paper concludes that using the Facebook messaging bot through their Facebook page would improve the hotel's day-to-day operations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
40. Experimental results on nonlinear distortion compensation using photonic reservoir computing with a single set of weights for different wavelengths.
- Author
-
Gooskens, Emmanuel, Sackesyn, Stijn, Dambre, Joni, and Bienstman, Peter
- Subjects
FORWARD error correction ,BIT error rate ,WAVELENGTH division multiplexing ,SIGNAL processing ,DELAY lines ,COMPUTING platforms - Abstract
Photonics-based computing approaches in combination with wavelength division multiplexing offer a potential solution to modern data and bandwidth needs. This paper experimentally takes an important step towards wavelength division multiplexing in an integrated waveguide-based photonic reservoir computing platform by using a single set of readout weights for at least 3 ITU-T channels, efficiently scaling the data bandwidth when processing a nonlinear signal equalization task on a 28 Gbps modulated on-off keying signal. Using multiple-wavelength training, we obtain bit error rates well below the 1.5 × 10⁻² forward error correction limit at high fiber input powers of 18 dBm, which result in high nonlinear distortion. The results of the reservoir chip are compared to a tapped delay line filter and clearly show that the system performs nonlinear equalization. This was achieved using only limited post-processing, which in future work can be implemented in optical hardware as well. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
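A single set of readout weights shared across wavelength channels amounts to fitting one linear readout on reservoir states pooled from all channels. A minimal ridge-regression sketch in NumPy, with state sizes and regularization chosen only for illustration (not taken from the experiment):

```python
import numpy as np

def train_shared_readout(states_per_channel, targets_per_channel, reg=1e-3):
    """Fit one linear readout over reservoir states pooled from all wavelength channels.

    states_per_channel: list of (T, N) state matrices, one per channel.
    targets_per_channel: list of (T,) target sequences.
    """
    X = np.vstack(states_per_channel)          # pool training data across channels
    y = np.concatenate(targets_per_channel)
    n = X.shape[1]
    # Ridge regression closed form: w = (X^T X + reg*I)^-1 X^T y
    return np.linalg.solve(X.T @ X + reg * np.eye(n), X.T @ y)

# Toy example: 3 channels, 200 time steps, 16 reservoir nodes each.
rng = np.random.default_rng(0)
states = [rng.standard_normal((200, 16)) for _ in range(3)]
true_w = rng.standard_normal(16)
targets = [s @ true_w + 0.05 * rng.standard_normal(200) for s in states]
w = train_shared_readout(states, targets)
print("weight recovery error:", np.linalg.norm(w - true_w))
```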
41. Towards computational awareness in autonomous robots: an empirical study of computational kernels.
- Author
-
Sifat, Ashrarul H., Bharmal, Burhanuddin, Zeng, Haibo, Huang, Jia-Bin, Jung, Changhee, and Williams, Ryan K.
- Subjects
AUTONOMOUS robots ,EMPIRICAL research ,OPTICAL flow ,COMPUTING platforms ,PRECISION farming ,RESCUE work - Abstract
The potential impact of autonomous robots on everyday life is evident in emerging applications such as precision agriculture, search and rescue, and infrastructure inspection. However, such applications necessitate operation in unknown and unstructured environments with a broad and sophisticated set of objectives, all under strict computation and power limitations. We therefore argue that the computational kernels enabling robotic autonomy must be scheduled and optimized to guarantee timely and correct behavior, while allowing for reconfiguration of scheduling parameters at runtime. In this paper, we consider a necessary first step towards this goal of computational awareness in autonomous robots: an empirical study of a base set of computational kernels from the resource management perspective. Specifically, we conduct a data-driven study of the timing, power, and memory performance of kernels for localization and mapping, path planning, task allocation, depth estimation, and optical flow, across three embedded computing platforms. We profile and analyze these kernels to provide insight into scheduling and dynamic resource management for computation-aware autonomous robots. Notably, our results show that there is a correlation of kernel performance with a robot's operational environment, justifying the notion of computation-aware robots and why our work is a crucial step towards this goal. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
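A minimal sketch of per-kernel timing and memory profiling of the kind described above, assuming a Python harness; on-device power measurement needs platform-specific counters and is omitted:

```python
import time
import tracemalloc

import numpy as np

def profile_kernel(kernel, *args, repeats=10):
    """Return wall-clock timings (s) and peak Python heap usage (bytes) for a kernel call."""
    timings = []
    tracemalloc.start()
    for _ in range(repeats):
        start = time.perf_counter()
        kernel(*args)
        timings.append(time.perf_counter() - start)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"mean_s": sum(timings) / repeats,
            "worst_s": max(timings),
            "peak_bytes": peak}

# Example: profile a stand-in dense kernel (here just a matrix product).
frame_a, frame_b = np.random.rand(256, 256), np.random.rand(256, 256)
print(profile_kernel(lambda a, b: a @ b, frame_a, frame_b))
```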
42. A Review of Implementing Ant System Algorithms on Scheduling Problems.
- Author
-
Kashef, Samar and Elshaer, Raafat
- Subjects
ALGORITHMS ,COMBINATORIAL optimization ,METAHEURISTIC algorithms ,COMPUTING platforms ,QUANTITATIVE research - Abstract
The ant system (AS) and the scheduling problem are well-known concepts in the literature. Ant algorithms have long been an effective tool for solving combinatorial optimization problems. Elitist AS (EAS), rank-based AS (RAS), ant colony system (ACS), and max-min AS (MMAS) are variants of the AS algorithm; they arise from different ways of updating the pheromone trail τ, computing the visibility η, and/or other parameters of the basic AS model. The main contribution of this article is twofold. First, the basic AS and its control parameters are presented, the key variants of the ant algorithms are explained, and the major changes of each variant from the basic model are tracked. Second, sixty papers published between 2015 and 2020 are collected based on a search strategy for tracking the implementation of different AS variants in solving scheduling problems. Numerous findings based on a statistical analysis of the collected papers are reported and discussed. This study allows researchers to understand the essence of the ant algorithm, recognize the fundamental differences among its five systems, and determine how each of them can be implemented. Tracking a sample of articles that apply an ant algorithm to a specific case study gives researchers new ideas on how to adjust the original model to fit their problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
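For orientation, the pheromone update that the AS variants modify can be sketched for the basic ant system: evaporate every edge, then let each ant deposit pheromone proportional to its tour quality. The evaporation rate and deposit constant below are illustrative defaults:

```python
import numpy as np

def basic_as_pheromone_update(tau, ant_tours, tour_lengths, rho=0.5, Q=1.0):
    """Basic AS update: tau <- (1 - rho) * tau, then add Q / L_k on each edge ant k used.

    tau: (n, n) pheromone matrix; ant_tours: list of city sequences; rho: evaporation rate.
    """
    tau = (1.0 - rho) * tau                     # evaporation on every edge
    for tour, length in zip(ant_tours, tour_lengths):
        deposit = Q / length                    # shorter (better) tours deposit more
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i, j] += deposit
            tau[j, i] += deposit                # symmetric problem
    return tau

# Toy example: 4 cities, two ant tours.
tau = np.ones((4, 4))
tours = [[0, 1, 2, 3], [0, 2, 1, 3]]
lengths = [10.0, 12.0]
print(basic_as_pheromone_update(tau, tours, lengths))
```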
43. A convolutional neural network based online teaching method using edge-cloud computing platform.
- Author
-
Zhong, Liu
- Subjects
CONVOLUTIONAL neural networks ,DEEP learning ,ONLINE education ,COMPUTING platforms ,TEACHING methods ,ARTIFICIAL intelligence ,PHYSIOLOGY education - Abstract
Teaching has become a complex but essential tool for developing students' abilities, given their different levels of learning and understanding. In traditional offline teaching, dance teachers lack targeted classroom instruction for individual students. Furthermore, teachers have limited time, so they cannot fully attend to each student's learning needs according to their understanding and learning ability, which leads to polarization of the learning effect. Because of this, this paper proposes an online teaching method based on artificial intelligence and edge computing. In the first phase, key frames are extracted from standard teaching videos and student-recorded dance learning videos using a deep convolutional neural network. In the second phase, human key points are extracted from the key frame images using grid coding, and a fully convolutional neural network is used to predict the human posture. A guidance vector is used to correct the dance movements to achieve the purpose of online learning. The CNN model is split into two parts so that training occurs in the cloud and prediction happens at the edge server. Moreover, a questionnaire is used to obtain the students' learning status, understand their difficulties in dance learning, and record corresponding dance teaching videos to address their weak points. Finally, the edge-cloud computing platform is used to help the training model learn quickly from the vast amount of collected data. Our experiments show that the cloud-edge platform helps to support new teaching forms, enhances the platform's overall application performance and intelligence level, and improves the online learning experience. The application of this paper can help dance students achieve efficient learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. A Formal Verification Framework for Security Issues of Blockchain Smart Contracts.
- Author
-
Sun, Tianyu and Yu, Wensheng
- Subjects
BLOCKCHAINS ,CONTRACTS ,DISTRIBUTED computing ,COMPUTING platforms ,PAPER arts - Abstract
Blockchain technology has attracted more and more attention from academia and industry recently. Ethereum, which uses blockchain technology, is a distributed computing platform and operating system. Smart contracts are small programs deployed to the Ethereum blockchain for execution. Errors in smart contracts will lead to huge losses. Formal verification can provide a reliable guarantee for the security of blockchain smart contracts. In this paper, the formal method is applied to inspect the security issues of smart contracts. We summarize five kinds of security issues in smart contracts and present formal verification methods for these issues, thus establishing a formal verification framework that can effectively verify the security vulnerabilities of smart contracts. Furthermore, we present a complete formal verification of the Binance Coin (BNB) contract. It shows how to formally verify the above security issues based on the formal verification framework in a specific smart contract. All the proofs are checked formally using the Coq proof assistant in which contract model and specification are formalized. The formal work of this paper has a variety of essential applications, such as the verification of blockchain smart contracts, program verification, and the formal establishment of mathematical and computer theoretical foundations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. Development of Cloud Computing Platform Based on Neural Network.
- Author
-
Zhang, Zhi, Liwei Wang, Liu, Ruiying, and Fan, Jinghang
- Subjects
COMPUTING platforms ,SOFTWARE as a service ,ELECTRONIC data processing ,DATA warehousing ,ENERGY development - Abstract
Aiming at the problems of limited data storage, low platform throughput, and high energy consumption in existing cloud computing platforms, this paper develops a cloud computing platform based on neural networks. In the design of the platform, the functions of the cloud computing platform are determined first. Based on the functional design, the hardware and software of the cloud computing platform are designed. In the hardware design, the topology of the cloud computing platform, the data acquisition module, the single-chip microcomputer, node deployment, and other functional hardware are designed. In the software design, a neural network is mainly used to remove redundancy from the data stored in the cloud computing platform and to design the data processing flow of the platform, completing the development of the neural-network-based cloud computing platform. The experimental results show that the cloud computing platform based on the neural network designed in this paper runs faster, its data throughput is significantly improved, and its operating energy consumption is low. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. Fast Gaussian Filter Approximations Comparison on SIMD Computing Platforms.
- Author
-
Rybakova, Ekaterina O., Limonova, Elena E., and Nikolaev, Dmitry P.
- Subjects
COMPUTING platforms ,CENTRAL processing units ,IMAGE analysis ,NOISE control ,APPLICATION software - Abstract
Gaussian filtering, being a convolution with a Gaussian kernel, is a widespread technique in image analysis and computer vision applications and is the traditional approach to noise reduction. In some cases, performing the exact convolution can be computationally expensive and time-consuming. To address this problem, approximations of the convolution are often used to achieve a balance between accuracy and computational efficiency, such as running sums, Bell blur, the Deriche approximation, etc. At the same time, modern computing devices support data parallelism (vectorization) via Single Instruction Multiple Data (SIMD) and can process integer numbers faster than floating-point ones. In this paper, we describe several methods for approximating a Gaussian filter, implement SIMD and quantized versions, and compare them in terms of speed and accuracy. The experiments were performed on central processing units with the x86_64 architecture using the family of SSE SIMD extensions and the ARMv8 architecture using the NEON SIMD extension. All the optimized approximations demonstrated a 10–20× speedup while maintaining accuracy in the range of 1 × 10⁻⁵ or higher. The fastest method is the trivial Stack blur, which has a relatively high error, so we recommend using the second-order Vliet–Young–Verbeek filter and the quantized Bell blur and running sums as more accurate and still computationally efficient alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
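One classic approximation behind the running-sum family compared above is that repeated box blurs converge to a Gaussian. A minimal 1-D NumPy sketch of that idea; the kernel radius and number of passes are illustrative choices, not the paper's settings:

```python
import numpy as np

def box_blur_1d(signal, radius):
    """Moving average with a (2*radius + 1)-wide window, via a running (prefix) sum."""
    window = 2 * radius + 1
    padded = np.pad(signal, radius, mode="edge")
    csum = np.concatenate(([0.0], np.cumsum(padded)))
    return (csum[window:] - csum[:-window]) / window

def approx_gaussian_blur(signal, radius=3, passes=3):
    """Applying several box blurs in a row approximates a Gaussian filter."""
    out = np.asarray(signal, dtype=np.float64)
    for _ in range(passes):
        out = box_blur_1d(out, radius)
    return out

noisy = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * np.random.default_rng(1).standard_normal(200)
print(approx_gaussian_blur(noisy)[:5])
```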
47. A modular approach to build a hardware testbed for cloud resource management research.
- Author
-
Pons, Lucia, Petit, Salvador, Pons, Julio, Gómez, María E., and Sahuquillo, Julio
- Subjects
RESOURCE management ,MOBILE computing ,MODULAR design ,COMPUTING platforms ,ENERGY consumption ,CLOUD computing ,CACHE memory - Abstract
Research on resource management focuses on optimizing system performance and energy efficiency by distributing shared resources like processor cores, caches, and main memory among competing applications. This research spans a wide range of applications, including those from high-performance computing, machine learning, and mobile computing. Existing research frameworks often simplify research by concentrating on specific characteristics, such as the architecture of the computing nodes, resource monitoring, and representative workloads. For instance, this is typically the case with cloud systems, which introduce additional complexity regarding hardware and software requirements. To avoid this complexity during research, experimental frameworks are being developed. Nevertheless, the proposed frameworks often fall short regarding the types of nodes included, virtualization support, and management of critical shared resources. This paper presents Stratus, an experimental framework that overcomes these limitations. Stratus includes different types of nodes, a comprehensive virtualization stack, and the ability to partition the major shared resources of the system. Even though Stratus was originally conceived for cloud research, its modular design allows it to be extended, broadening its use to different computing domains and platforms and matching the complexity of modern cloud environments, as shown in the case studies presented in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Design and Evaluation of CPU-, GPU-, and FPGA-Based Deployment of a CNN for Motor Imagery Classification in Brain-Computer Interfaces.
- Author
-
Pacini, Federico, Pacini, Tommaso, Lai, Giuseppe, Zocco, Alessandro Michele, and Fanucci, Luca
- Subjects
MOTOR imagery (Cognition) ,BRAIN-computer interfaces ,FOOTPRINTS ,CONVOLUTIONAL neural networks ,HETEROGENEOUS computing ,COMPUTING platforms ,CLASSIFICATION - Abstract
Brain–computer interfaces (BCIs) have gained popularity in recent years. Among noninvasive BCIs, EEG-based systems stand out as the primary approach, utilizing the motor imagery (MI) paradigm to discern movement intentions. Initially, BCIs were predominantly focused on nonembedded systems. However, there is now a growing momentum towards shifting computation to the edge, offering advantages such as enhanced privacy, reduced transmission bandwidth, and real-time responsiveness. Despite this trend, achieving the desired target remains a work in progress. To illustrate the feasibility of this shift and quantify the potential benefits, this paper presents a comparison of deploying a CNN for MI classification across different computing platforms, namely, CPU-, embedded GPU-, and FPGA-based. For our case study, we utilized data from 29 participants included in a dataset acquired using an EEG cap for training the models. The FPGA solution emerged as the most efficient in terms of the power consumption–inference time product. Specifically, it delivers an impressive reduction of up to 89% in power consumption compared to the CPU and 71% compared to the GPU and up to a 98% reduction in memory footprint for model inference, albeit at the cost of a 39% increase in inference time compared to the GPU. Both the embedded GPU and FPGA outperform the CPU in terms of inference time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
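The power consumption–inference time product used to rank the platforms above is straightforward to reproduce once per-platform measurements exist. The figures below are placeholders, not the paper's measurements:

```python
# Placeholder measurements (watts, milliseconds); substitute real profiling data.
platforms = {
    "CPU":  {"power_w": 15.0, "latency_ms": 12.0},
    "GPU":  {"power_w": 10.0, "latency_ms": 5.0},
    "FPGA": {"power_w": 2.0,  "latency_ms": 7.0},
}

def power_delay_product(entry):
    # Lower is better: joule-like cost of one inference (W * s).
    return entry["power_w"] * entry["latency_ms"] / 1000.0

for name, entry in sorted(platforms.items(), key=lambda kv: power_delay_product(kv[1])):
    print(f"{name}: {power_delay_product(entry):.4f} W*s per inference")
```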
49. Analysis and prediction of virtual machine boot time on virtualized computing environments.
- Author
-
Auliya, Ridlo Sayyidina, Lee, Yen-Lin, Chen, Chia-Ching, Liang, Deron, and Wang, Wei-Jen
- Subjects
VIRTUAL machine systems ,REGRESSION trees ,COMPUTER workstation clusters ,RANDOM forest algorithms ,COMPUTING platforms ,REGRESSION analysis ,BANDWIDTHS ,CLOUD computing - Abstract
Starting a virtual machine (VM) is a common operation in cloud computing platforms. In order to achieve better management of resource provisioning, a cloud platform needs to accurately estimate the VM boot time. In this paper, we have conducted several experiments to analyze the factors that could affect VM boot time in a computer cluster with shared storage. We also implemented four models for VM boot time prediction and evaluated the performance of the four models based on the datasets of four hosts and seven hosts in our environment, where the four models are the rule-based model, the regression tree model, the random forest regression model, and the linear regression model. According to our analysis, we found that host capability and maximal network bandwidth are two main factors that can influence VM boot time. We also found that VM boot time becomes harder to predict when booting VMs at different hosts concurrently due to competition between hosts to obtain resources. According to the experimental results, the proposed random forest regression is the best model for VM boot time prediction with an average accuracy of 94.76 % and 96.59 % in predicting VM boot time in two clusters with four and seven compute hosts, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
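A minimal sketch of the best-performing model above, a random forest regressor over host-level features such as host capability and maximal network bandwidth, using scikit-learn on synthetic data; the feature set and data-generating assumptions are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Assumed features: host capability score, max network bandwidth (MB/s), concurrent boots.
X = np.column_stack([
    rng.uniform(1.0, 10.0, n),       # host capability
    rng.uniform(100.0, 1000.0, n),   # max bandwidth
    rng.integers(1, 8, n),           # VMs booting concurrently on the cluster
])
# Synthetic boot time: slower hosts, thinner pipes, and contention all add seconds.
y = 60.0 / X[:, 0] + 5000.0 / X[:, 1] + 3.0 * X[:, 2] + rng.normal(0, 1.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```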
50. Cloud benchmarking and performance analysis of an HPC application in Amazon EC2.
- Author
-
Dancheva, Tamara, Alonso, Unai, and Barton, Michael
- Subjects
HIGH performance computing ,COMPUTATIONAL fluid dynamics ,WEB services ,COMPUTING platforms ,RESEARCH personnel ,CLOUD computing - Abstract
Cloud computing platforms have been continuously evolving. Features such as the Elastic Fabric Adapter (EFA) in the Amazon Web Services (AWS) platform have brought yet another revolution in the High Performance Computing (HPC) world, further accelerating the convergence of HPC and cloud computing. Other public clouds also support similar features, further fueling this change. In this paper, we show how and why the performance of a large-scale computational fluid dynamics (CFD) HPC application on AWS competes very closely, in terms of cost-efficiency, with that on Beskow, a Cray XC40 supercomputer at the PDC Center for High-Performance Computing, with strong scaling up to 2304 processes. We perform an extensive set of micro and macro benchmarks in both environments and conduct a comparative analysis. Until as recently as 2020, these benchmarks have notoriously yielded unsatisfactory results for cloud platforms compared with on-premise infrastructures. Our aim is to assess the HPC capabilities of the cloud and, in general, to demonstrate how researchers can scale and evaluate the performance of their applications in the cloud. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF