604 results for "Gigabyte"
Search Results
2. A proposed multi criteria indexing and ranking model for documents and web pages on large scale data
- Author
-
Ayman E. Khedr, Manal A. Abdel-Fattah, and Mohamed Attia
- Subjects
Search engine ,Information retrieval ,General Computer Science ,Index (publishing) ,Gigabyte ,Computer science ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,Search engine indexing ,Web page ,Rank (computer programming) ,Learning to rank ,Ranking (information retrieval) - Abstract
Due to the expansion of data, search engines encounter various obstacles in retrieving content that is relevant to users' search queries. Consequently, various retrieval and ranking algorithms have been applied to improve the relevancy of results according to users' needs. Unfortunately, indexing and ranking processes face several challenges in achieving highly accurate results, since most existing indexes and ranking algorithms crawl documents and web pages based on a limited number of criteria that satisfy user needs. This research therefore studies how search engines work and which factors contribute to higher-ranking results. It also proposes a Multi Criteria Indexing and Ranking Model (MCIR) based on weighted documents and pages that depend on one or more ranking factors, aiming to build a model that achieves high performance and more relevant pages, and that can index and rank both online and offline pages and documents. The MCIR model was applied in three different experiments to compare document and page results in terms of ranking scores, based on one or more criteria of user preference. The results show that final page rankings based on multiple criteria are better than those based on a single criterion, and that some criteria have a greater effect on ranking results than others. It was also observed that the MCIR model achieved high performance when indexing and ranking datasets of up to 100 gigabytes.
- Published
- 2022
- Full Text
- View/download PDF
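To make the ranking idea concrete, here is a minimal Python sketch of weighted multi-criteria scoring in the spirit of the abstract above; the criterion names, weights, and documents are hypothetical illustrations, not taken from the MCIR model itself.

```python
# Minimal sketch of weighted multi-criteria ranking (illustrative only; the
# criterion names and weights below are hypothetical, not the MCIR paper's).

def rank_documents(documents, weights):
    """Rank documents by a weighted sum of per-criterion scores in [0, 1]."""
    def score(doc):
        return sum(weights[c] * doc["scores"].get(c, 0.0) for c in weights)
    return sorted(documents, key=score, reverse=True)

docs = [
    {"id": "page-1", "scores": {"term_match": 0.9, "freshness": 0.2, "popularity": 0.5}},
    {"id": "page-2", "scores": {"term_match": 0.7, "freshness": 0.9, "popularity": 0.8}},
]

# Ranking with a single criterion vs. a weighted combination of criteria.
print([d["id"] for d in rank_documents(docs, {"term_match": 1.0})])
print([d["id"] for d in rank_documents(docs, {"term_match": 0.5, "freshness": 0.3, "popularity": 0.2})])
```

Single-criterion ranking puts page-1 first, while the weighted combination promotes page-2, illustrating how the choice and weighting of criteria changes the final order.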
3. Multi-scale Features Fusion for the Detection of Tiny Bleeding in Wireless Capsule Endoscopy Images
- Author
-
Bin Qian, Hai Jin, Feng Lu, Song Lin, Albert Y. Zomaya, Chengwangli Peng, Wei Li, Zhiyong Wang, and Rajiv Ranjan
- Subjects
Gigabyte ,Computer Networks and Communications ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Gastrointestinal tract examination ,Computer Science Applications ,law.invention ,Hardware and Architecture ,Capsule endoscopy ,law ,Medical imaging ,Wireless ,Computer vision ,Artificial intelligence ,business ,Software ,Information Systems - Abstract
Wireless capsule endoscopy is a modern, non-invasive Internet of Medical Imaging Things technology that has been increasingly used in gastrointestinal tract examination. With about one gigabyte of image data generated per patient in each examination, automatic lesion detection is highly desirable to improve the efficiency of the diagnosis process and mitigate human errors. Although many approaches for lesion detection have been proposed, they mainly focus on large lesions and are not directly applicable to tiny lesions due to the limitations of feature representation. As bleeding lesions are a common symptom in most serious gastrointestinal diseases, detecting tiny bleeding lesions is extremely important for early diagnosis of those diseases, which is highly relevant to the survival, treatment, and expenses of patients. In this article, a method is proposed to extract and fuse multi-scale deep features for detecting and locating both large and tiny lesions. A feature extracting network is first used as our backbone network to extract the basic features from wireless capsule endoscopy images, and then at each layer multiple regions can be identified as potential lesions. The feature maps of those potential lesions are obtained at each level and fused in a top-down manner towards the fully connected layer to produce the final detection results. Our proposed method has been evaluated on a clinical dataset that contains 20,000 wireless capsule endoscopy images with clinical annotation. Experimental results demonstrate that our method can achieve 98.9% prediction accuracy and 93.5% score, a significant performance improvement of up to 31.69% and 22.12% in terms of recall rate and score, respectively, when compared to the state-of-the-art approaches for both large and tiny bleeding lesions. Moreover, our model also has the highest AP and the best medical diagnosis performance compared to state-of-the-art multi-scale models.
- Published
- 2021
- Full Text
- View/download PDF
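The top-down fusion of feature maps described above can be illustrated with a generic sketch (nearest-neighbour upsampling of the coarser map added to the next finer one); this is the common feature-pyramid pattern, not a reproduction of the authors' network or its region proposals.

```python
import numpy as np

# Generic top-down fusion of multi-scale feature maps (illustrative sketch,
# not the authors' architecture): upsample the coarser map by 2x and add it
# to the next finer map, working from the coarsest level down.

def upsample2x(fmap):
    return np.repeat(np.repeat(fmap, 2, axis=0), 2, axis=1)

def top_down_fuse(feature_maps):
    """feature_maps: list ordered fine -> coarse, each of shape (H, W, C)."""
    fused = feature_maps[-1]
    for finer in reversed(feature_maps[:-1]):
        fused = finer + upsample2x(fused)
    return fused

pyramid = [np.random.rand(64, 64, 8), np.random.rand(32, 32, 8), np.random.rand(16, 16, 8)]
print(top_down_fuse(pyramid).shape)  # (64, 64, 8): coarse context merged into fine detail
```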
4. BIG DATA PROCESSING IN THE DIGITALIZATION OF ENTERPRISE ACTIVITIES
- Subjects
Data processing ,Upload ,Gigabyte ,Work (electrical) ,Computer science ,Order (business) ,business.industry ,Emerging technologies ,Big data ,Relevance (information retrieval) ,business ,Data science - Abstract
The growth rate of data in these enterprises has increased significantly in the last decade. Research has shown that over the past two decades the amount of data has increased approximately tenfold every two years, which exceeds Moore's law for the doubling of processor power. About thirty thousand gigabytes of data are accumulated every second, and handling this volume requires more efficient data processing. Uploading videos, photos, and messages from users on social networks leads to the accumulation of large amounts of data, including unstructured data. Enterprises therefore need to work with big data in different formats, which must be prepared in a certain way for further work in order to obtain modeling and calculation results. In this context, the research carried out in this article on processing and storing large enterprise data, developing a model and algorithms, and applying new technologies is relevant. Enterprise information flows will undoubtedly grow every year, so it is important to solve the issues of storing and processing large amounts of data. The relevance of the article also stems from growing digitalization and the increasing shift of professional activities online in many areas of modern society. The article provides a detailed analysis of these new technologies.
- Published
- 2021
- Full Text
- View/download PDF
5. Security Threat and Data Consumption as Major Nuisance of Social Media on Wi-Fi Network
- Author
-
Gaddafi Abdul-Salaam, Ibrahim Mohammed Gunu, and Fuseini Inusah
- Subjects
Consumption (economics) ,Gigabyte ,Computer science ,business.industry ,Network mapping ,Computer security ,computer.software_genre ,Megabyte ,Social media ,business ,computer ,Mobile device ,Graphical user interface ,Vulnerability (computing) - Abstract
This research examines the nuisances of social media applications on a Wi-Fi network at a university campus in Ghana. The aim was to assess the security risk on the network, the speed of the network, and the data consumption of those platforms on the network. The Network Mapper (Nmap/Zenmap) graphical user interface 7.80 application was used to scan the various social media platforms and identify the protocols, ports, services, etc., in order to assess the vulnerability of the network. Data consumption on users' mobile devices was collected and analyzed, using Device Accounting (DA) based on the various social media applications. The results of the analysis revealed that the network is prone to attacks due to the nature of the protocols, ports, and services of the social media applications. The large number of users, with an average monthly data consumption of 4 gigabytes 300 megabytes per user on social media alone, is a clear indication of high traffic as well as a high cost of maintaining the network. URL filtering of the social media websites on the Ruckus outdoor AP was proposed to help curb the nuisance.
- Published
- 2021
- Full Text
- View/download PDF
6. Evaluation of Burundi Physical Education Teachers, Coaches, and Athletes' Sport Nutrition, Massage, and Physiotherapeutic Exercises Knowledge
- Author
-
Japhet Ndayisenga and Yustinus Sukarmin
- Subjects
Medical education ,Massage ,biology ,Gigabyte ,Athletes ,business.industry ,media_common.quotation_subject ,biology.organism_classification ,Coaching ,Physical education ,Quality (business) ,Low correlation ,business ,Psychology ,Representative sampling ,media_common - Abstract
Knowledge is important for everyone, and especially so for physical education teachers, coaches, and athletes. This study investigated knowledge about nutrition, massage, and physiotherapeutic exercises in Burundi. It was a descriptive study with mixed methods. The participants were 15 physical education teachers, coaches, and athletes selected by representative sampling. Data were analyzed using correlation and linear regression among indicators and variables, and between the variables themselves, with PLS-SEM and SPSS 21 software. The results showed a low correlation (r = 0.45) between the items and global knowledge, and a negative correlation (r = -0.068) between the price of a gigabyte (GB) of data and the source of learning, indicating that the more expensive a gigabyte was, the less sufficient the sources of learning information were. The correlation between nutrition and its indicators was not strong (base: 0.339; components: 0.355; knowledge: 0.402). The relationship between learning (X1), nutrition (Y1) {Rx1y1: 0.421}, and knowledge (Y3) {Ry1y3} was not strong; the subjects had little knowledge about nutrition. The correlations of the indicators for massage and physiotherapeutic exercises (basics: 0.366; massage course: 0.378; knowledge: 0.441) showed that the subjects held little knowledge from the previous courses. Knowledge is a strong management tool that helps physical education, sports medicine, and coaching managers decide how to improve peak performance and maintain the quality of athletes and non-athletes. This study was the first applied to evaluate the knowledge of physical education teachers, coaches, and athletes.
- Published
- 2020
- Full Text
- View/download PDF
7. GigaByte: Publishing at the Speed of Research
- Author
-
Scott C. Edmunds and Laurie Goodman
- Subjects
Multimedia ,Gigabyte ,Computer science ,business.industry ,Applied Mathematics ,General Mathematics ,QA75.5-76.95 ,computer.software_genre ,Publishing ,Electronic computers. Computer science ,computer sciences ,business ,data integration ,computer - Abstract
Current practices in scientific publishing are unsuitable for rapidly changing fields and for presenting updatable data sets and software tools. In this regard, and as part of our continuing pursuit of pushing scientific publishing to match the needs of modern research, we are delighted to announce the launch of GigaByte, an online open-access, open data journal that aims to be a new way to publish research following the software paradigm: CODE, RELEASE, FORK, UPDATE and REPEAT. Following on the success of GigaScience in promoting data sharing and reproducibility of research, its new sister, GigaByte, aims to take this even further. With a focus on short articles, using a questionnaire-style review process, and combining that with the custom built publishing infrastructure from River Valley Technologies, we now have a cutting edge, XML-first publishing platform designed specifically to make the entire publication process easier, quicker, more interactive, and better suited to the speed needed to communicate modern research.
- Published
- 2020
- Full Text
- View/download PDF
8. Practical Guide to Storage of Large Amounts of Microscopy Data
- Author
-
Daniel E. S. Koo and Andrey Andreev
- Subjects
Database server ,0303 health sciences ,General Computer Science ,Gigabyte ,business.industry ,Computer science ,Scale (chemistry) ,Big data ,02 engineering and technology ,Terabyte ,021001 nanoscience & nanotechnology ,Data science ,03 medical and health sciences ,Blueprint ,Computer data storage ,0210 nano-technology ,business ,Lagging ,030304 developmental biology - Abstract
Biological imaging tools continue to increase in speed, scale, and resolution, often resulting in the collection of gigabytes or even terabytes of data in a single experiment. In comparison, the ability of research laboratories to store and manage this data is lagging greatly. This leads to limits on the collection of valuable data and slows data analysis and research progress. Here we review common ways researchers store data and outline the drawbacks and benefits of each method. We also offer a blueprint and budget estimation for a currently deployed data server used to store large datasets from zebrafish brain activity experiments using light-sheet microscopy. Data storage strategy should be carefully considered and different options compared when designing imaging experiments.
- Published
- 2020
- Full Text
- View/download PDF
9. Keeping a secure hold on data through modern electronic content management
- Author
-
Paul Hampton
- Subjects
Information Systems and Management ,Gigabyte ,Computer Networks and Communications ,Zettabyte ,Aggregate (data warehouse) ,Urban sprawl ,Plan (drawing) ,Content creation ,Computer security ,computer.software_genre ,Data explosion ,Business ,Electronic content ,Safety, Risk, Reliability and Quality ,computer - Abstract
Proper storage of data is a security necessity. This is because data can often contain valuable company information, which is liable to fall into the wrong hands if not properly managed. The problem is growing in tandem with increased content creation – the world is full of data. It is estimated that the aggregate amount of data, which doubles in size every two years, measures 4.4 zettabytes (trillion gigabytes) and is likely to reach a massive 44 zettabytes by 2020 [1]. Companies are attempting to deal with a tidal wave of data. And a lack of integration inside a business can promote the dangerous phenomenon of content sprawl. This occurs when different departments do not harmonise their processes and there is no plan to address outdated content or to ensure that data is stored in the right way. Companies need to gain a tight rein on their digital assets and institute a rigid content management system to keep up with the data explosion. But that can be easier said than done, explains Paul Hampton of Alfresco.
- Published
- 2020
- Full Text
- View/download PDF
10. Volunteer geographic information in the Global South: barriers to local implementation of mapping projects across Africa
- Author
-
Renee Lynch, Stanley Boakye-Achampong, Jason C. Young, Joel Sam, Chris Jowaisas, and Bree Norlander
- Subjects
Volunteered geographic information ,Gigabyte ,Inequality ,media_common.quotation_subject ,Public libraries ,Geography, Planning and Development ,0507 social and economic geography ,Global South ,Face (sociological concept) ,Crowdsourcing ,Colonialism ,Article ,Human geography ,Regional science ,media_common ,business.industry ,05 social sciences ,VGI ,Geography ,Mapping ,0509 other social sciences ,050904 information & library sciences ,business ,050703 geography - Abstract
The world is awash in data—by 2020 it is expected that there will be approximately 40 trillion gigabytes of data in existence, with that number doubling every 2 to 3 years. However, data production is not equal in all places—the global data landscape remains heavily concentrated on English-speaking, urban, and relatively affluent locations within the Global North. This inequality can contribute to new forms of digital and data colonialism. One partial solution to these issues may come in the form of crowdsourcing and volunteer geographic information (VGI), which allow Global South populations to produce their own data. Despite initial optimism about these approaches, many challenges and research gaps remain in understanding the opportunities and barriers that organizations endemic to the Global South face in carrying out their own sustainable crowdsourcing projects. What opportunities and barriers do these endemic organizations face when trying to carry out mapping projects driven by their own goals and desires? This paper contributes answers to this question by examining a VGI project that is currently mapping public libraries across the African continent. Our findings highlight how dramatically digital divides can bias crowdsourcing results; the importance of local cultural views in influencing participation in crowdsourcing; and the continued importance of traditional, authoritative organizations for crowdsourcing. These findings offer important lessons for researchers and organizations attempting to develop their own VGI projects in the Global South.
- Published
- 2020
11. Data-Aided Doppler Frequency Shift Estimation and Compensation for UAVs
- Author
-
Wei Li, Hui Gao, Huiqing Sun, Qixun Zhang, and Zhiyong Feng
- Subjects
Gigabyte ,Computer Networks and Communications ,Orthogonal frequency-division multiplexing ,Computer science ,Mobile broadband ,Real-time computing ,Frame (networking) ,Testbed ,020302 automobile design & engineering ,020206 networking & telecommunications ,02 engineering and technology ,Computer Science Applications ,Compensation (engineering) ,0203 mechanical engineering ,Hardware and Architecture ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,5G ,Information Systems - Abstract
With the surge of Internet of Things (IoT) applications using unmanned aerial vehicles (UAVs), there is a huge demand for mobile broadband service with gigabyte-per-second data rates in the UAV-aided fifth-generation (5G) IoT system. However, Doppler frequency shift (DFS) deteriorates the link performance of the UAV-aided 5G system in highly dynamic and mobile scenarios. Therefore, a data-aided DFS estimation and compensation approach is proposed to optimize the DFS estimation process using historical estimation results, aiming to achieve fast and accurate DFS compensation. The performance of the proposed DFS estimation algorithm is evaluated by both a cost function of accuracy based on the frame structure and the Cramer–Rao lower bound, in terms of mean-squared error and signal-to-noise ratio. Furthermore, an adaptive frequency-domain DFS compensation algorithm is designed by leveraging the DFS estimation results to enhance the quality of the communication link for the UAV-aided 5G system, achieving an optimal tradeoff between accuracy and complexity. Finally, both a link-level simulation platform and a hardware testbed are designed and developed to evaluate the performance of our proposed data-aided approach against other conventional algorithms.
- Published
- 2020
- Full Text
- View/download PDF
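A standard data-aided frequency-offset estimator, of the kind the approach above builds on, measures the phase rotation between two repeated pilot blocks; the sketch below shows that generic estimator and compensation step in Python, with the sample rate, offset, and pilot length chosen arbitrarily for illustration (it is not the paper's algorithm, which additionally exploits historical estimates).

```python
import numpy as np

# Generic data-aided frequency-offset estimation and compensation
# (illustrative sketch; parameters are arbitrary, not from the paper).
rng = np.random.default_rng(0)
fs = 1e6                 # sample rate in Hz (assumed)
f_off = 1500.0           # true Doppler/carrier offset in Hz (assumed)
N = 256                  # pilot block length

pilot = np.exp(1j * 2 * np.pi * rng.random(N))        # known unit-modulus pilot
tx = np.concatenate([pilot, pilot])                   # the pilot is sent twice
n = np.arange(2 * N)
rx = tx * np.exp(1j * 2 * np.pi * f_off * n / fs)     # channel applies the offset
rx += 0.01 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# The two received blocks differ only by a phase ramp of 2*pi*f_off*N/fs.
corr = np.sum(np.conj(rx[:N]) * rx[N:])
f_est = np.angle(corr) * fs / (2 * np.pi * N)         # estimated offset (Hz)
rx_comp = rx * np.exp(-1j * 2 * np.pi * f_est * n / fs)  # compensation

print(f"estimated offset: {f_est:.1f} Hz")            # close to 1500 Hz
```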
12. Characterizing Big Data Management
- Author
-
Kechi Hirama and Rogério Rossi
- Subjects
FOS: Computer and information sciences ,Big Data ,Gigabyte ,Emerging technologies ,Computer science ,Computer Science - Artificial Intelligence ,Big Data Management ,Big data ,Statistics - Applications ,World Wide Web ,Computer Science - Computers and Society ,Computer Science - Databases ,Big Data Analytics ,Computers and Society (cs.CY) ,Applications (stat.AP) ,Exabyte ,Decision-Making ,lcsh:T58.5-58.64 ,lcsh:Information technology ,business.industry ,Sentiment analysis ,Petabyte ,Databases (cs.DB) ,Big Data Challenges ,Data science ,Artificial Intelligence (cs.AI) ,Data access ,Analytics ,business - Abstract
Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis, and visualization. Technological resources, people, and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can thus be supported by three dimensions: technology, people, and processes. Hence, this article discusses these dimensions: the technological dimension, which is related to storage, analytics, and visualization of big data; the human aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.
- Published
- 2022
13. HIPPI-6400-Designing for Speed
- Author
-
Don E. Tolmie and Jonathan Schaeffer (editor)
- Published
- 1998
- Full Text
- View/download PDF
14. Tens of gigabytes per second JSON-to-Arrow conversion with FPGA accelerators
- Author
-
Robert Morrow, Matthijs Brobbel, Zaid Al-Ars, Johan Peltenburg, and Akos Hadnagy
- Subjects
Parsing ,accelerator ,Gigabyte ,Computer science ,JSON ,parsing ,computer.software_genre ,Operating system ,Apache Arrow ,Field-programmable gate array ,computer ,FPGA ,computer.programming_language - Abstract
JSON is a popular data interchange format for many web, cloud, and IoT systems due to its simplicity, human readability, and widespread support. However, applications must first parse and convert the data to a native in-memory format before being able to perform useful computations. Many big data applications with high performance requirements convert JSON data to Apache Arrow RecordBatches, the latter being a widely used columnar in-memory format for large tabular data sets used in data analytics. In this paper, we analyze the performance characteristics of such applications and show that JSON parsing represents a bottleneck in the system. Various strategies are explored to speed up JSON parsing on the CPU and GPU as much as possible. Due to the performance limitations of the CPU and GPU implementations, we furthermore present an FPGA-accelerated implementation. We explain how hardware components that can parse variable-sized and nested structures can be combined to produce JSON parsers for any type of JSON document. Several fully integrated FPGA-accelerated JSON parser implementations are presented using the Intel Arria 10 GX and Xilinx VU37P devices, and compared to the performance of their respective host systems: an Intel Xeon and an IBM POWER9 system. Results show that the accelerators achieve an end-to-end throughput close to 7 GB/s with the Arria 10 GX using PCIe, and close to 20 GB/s with the VU37P using OpenCAPI 3. Depending on the complexity of the JSON data to parse, the bandwidth is limited by the host-to-accelerator interface or by the available FPGA resources. Overall, this provides a throughput increase of up to 6x compared to the baseline application. We also observe a full-system energy efficiency improvement of up to 59x more JSON data parsed per joule.
- Published
- 2021
- Full Text
- View/download PDF
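For readers who want to see the software baseline the accelerators are compared against, the JSON-to-Arrow step itself can be sketched with the pyarrow library (a CPU implementation; the FPGA designs in the paper are hardware and have no Python API). The example records are hypothetical newline-delimited JSON.

```python
import io
import pyarrow.json as paj

# CPU baseline for JSON -> Arrow conversion (illustrative sketch only).
# pyarrow.json expects newline-delimited JSON records.
raw = b"\n".join([
    b'{"sensor": "a", "value": 1.5}',
    b'{"sensor": "b", "value": 2.5}',
])

table = paj.read_json(io.BytesIO(raw))   # parse into a columnar Arrow Table
batches = table.to_batches()             # the table is backed by RecordBatches
print(table.schema)
print(batches[0].num_rows)
```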
15. Event-Driven Deep Learning for Edge Intelligence (EDL-EI) †
- Author
-
Sayed Khushal Shah, Yugyung Lee, Zeenat Tariq, and Jeehwan Lee
- Subjects
Gigabyte ,Edge device ,Computer science ,air-quality event ,Distributed computing ,event-driven deep learning ,Intelligence ,TP1-1185 ,Biochemistry ,Convolutional neural network ,Article ,Analytical Chemistry ,Deep Learning ,Artificial Intelligence ,Humans ,Electrical and Electronic Engineering ,Instrumentation ,Pandemics ,IoT intelligent system ,sensor fusion ,business.industry ,Event (computing) ,SARS-CoV-2 ,Chemical technology ,Deep learning ,COVID-19 ,Sensor fusion ,Atomic and Molecular Physics, and Optics ,United States ,edge intelligence ,Communicable Disease Control ,Data analysis ,Artificial intelligence ,Enhanced Data Rates for GSM Evolution ,business - Abstract
Edge intelligence (EI) has received a lot of interest because it can reduce latency, increase efficiency, and preserve privacy. More significantly, as the Internet of Things (IoT) has proliferated, billions of portable and embedded devices have been interconnected, producing zillions of gigabytes on edge networks. Thus, there is an immediate need to push AI (artificial intelligence) breakthroughs within edge networks to achieve the full promise of edge data analytics. EI solutions have supported digital technology workloads and applications from the infrastructure level to edge networks; however, there are still many challenges with the heterogeneity of computational capabilities and the spread of information sources. We propose a novel event-driven deep-learning framework, called EDL-EI (event-driven deep learning for edge intelligence), via the design of a novel event model that defines events using correlation analysis with multiple sensors in real-world settings and incorporates multi-sensor fusion techniques, a method for transforming sensor streams into images, and lightweight 2-dimensional convolutional neural network (CNN) models. To demonstrate the feasibility of the EDL-EI framework, we present an IoT-based prototype system that we developed with multiple sensors and edge devices. To verify the proposed framework, we conducted a case study of air-quality scenarios based on the benchmark data provided by the USA Environmental Protection Agency for the most polluted cities in South Korea and China. We obtained outstanding predictive accuracy (97.65% and 97.19%) from two deep-learning models on the cities' air-quality patterns. Furthermore, the air-quality changes from 2019 to 2020 were analyzed to check the effects of the COVID-19 pandemic lockdown.
- Published
- 2021
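A generic version of the "sensor streams into images" step mentioned above can be sketched as a sliding window over multi-sensor readings, normalised per channel so a 2-D CNN can consume it; the window size, step, and data below are arbitrary, and this is not the EDL-EI transformation itself.

```python
import numpy as np

# Illustrative sketch: turn multi-sensor time series into 2-D "images" for a
# 2-D CNN using a sliding window (generic approach, not the EDL-EI method).

def windows_to_images(readings, window, step):
    """readings: array of shape (T, n_sensors) -> (n_windows, window, n_sensors)."""
    images = []
    for start in range(0, len(readings) - window + 1, step):
        w = readings[start:start + window]
        lo, hi = w.min(axis=0), w.max(axis=0)
        # min-max normalise each sensor channel to [0, 1] within the window
        images.append((w - lo) / np.where(hi > lo, hi - lo, 1.0))
    return np.stack(images)

readings = np.random.rand(600, 6)            # e.g. six air-quality sensors over time
imgs = windows_to_images(readings, window=64, step=32)
print(imgs.shape)                            # (17, 64, 6): one "image" per window
```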
16. Testing machine learning techniques for general application by using protein secondary structure prediction. A brief survey with studies of pitfalls and benefits using a simple progressive learning approach
- Author
-
Barry Robson
- Subjects
Artificial neural network ,Gigabyte ,business.industry ,Computer science ,Deep learning ,media_common.quotation_subject ,Proteins ,Health Informatics ,Biological evolution ,Protein secondary structure prediction ,Machine learning ,computer.software_genre ,Protein Structure, Secondary ,Computer Science Applications ,Machine Learning ,Quality (business) ,Artificial intelligence ,business ,Databases, Protein ,computer ,Biomedicine ,Algorithms ,media_common ,Simple (philosophy) - Abstract
Many researchers have recently used the prediction of protein secondary structure (the local conformational states of amino acid residues) to test advances in predictive and machine learning technology such as neural net deep learning. Protein secondary structure prediction continues to be a helpful tool in research in biomedicine and the life sciences, but it is also extremely enticing for testing predictive methods such as neural nets that are intended for different or more general purposes. A complication is highlighted here for researchers testing their methods for other applications. Modern protein databases inevitably contain important, if often obscure, clues to the answer, so-called "strong buried clues"; they are hard to avoid. This is because most proteins or parts of proteins in a modern protein database are related to others by biological evolution. For researchers developing machine learning and predictive methods, this can overstate and so confuse understanding of the true quality of a predictive method. However, for researchers using the algorithms as tools, understanding strong buried clues is of great value, because they need to make maximum use of all information available. A simple method related to the GOR methods, but with some features of neural nets in the sense of progressive learning of large numbers of weights, is used to explore this. It can acquire tens of millions of weights, and hence gigabytes of them, but they are learned stably by exhaustive sampling. The significance of the findings is discussed in the light of promising recent results from AlphaFold, developed by Google's DeepMind.
- Published
- 2021
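In the spirit of the counting-based, GOR-like scheme the abstract describes (large numbers of weights learned stably by exhaustive counting), here is a toy Python sketch that accumulates residue/state counts and turns them into log-odds propensities; the sequences and states are invented, and the real method uses far richer context than single residues.

```python
from collections import Counter
from math import log

# Toy sketch of counting-based secondary-structure propensities (GOR-like
# spirit only): weights are log-odds learned by exhaustive counting of
# residue/state pairs. The training pairs below are invented examples.

training = [("MKV", "HHC"), ("AVKL", "CEEH")]   # (sequence, per-residue states)

pair_counts, state_counts, residue_counts, total = Counter(), Counter(), Counter(), 0
for seq, states in training:
    for aa, s in zip(seq, states):
        pair_counts[(aa, s)] += 1
        state_counts[s] += 1
        residue_counts[aa] += 1
        total += 1

def propensity(aa, s):
    """log P(state | residue) / P(state), with add-one smoothing."""
    p_s_given_aa = (pair_counts[(aa, s)] + 1) / (residue_counts[aa] + len(state_counts))
    p_s = state_counts[s] / total
    return log(p_s_given_aa / p_s)

def predict(seq):
    return "".join(max(state_counts, key=lambda st: propensity(aa, st)) for aa in seq)

print(predict("MKVA"))
```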
17. How Private is Machine Learning?
- Author
-
Nicolas Carlini
- Subjects
Scheme (programming language) ,Gigabyte ,Computer science ,business.industry ,Existential quantification ,Adversary ,Machine learning ,computer.software_genre ,Phone ,Differential privacy ,The Internet ,Artificial intelligence ,Language model ,business ,computer ,computer.programming_language - Abstract
A machine learning model is private if it doesn't reveal (too much) about its training data. This three-part talk examines to what extent current networks are private. Standard models are not private: we develop an attack that extracts rare training examples (for example, individual people's names, phone numbers, or addresses) out of GPT-2, a language model trained on gigabytes of text from the Internet [2]. As a result, there is a clear need for training models with privacy-preserving techniques. We show that InstaHide, a recent candidate, is not private: we develop a complete break of this scheme and can again recover verbatim inputs [1]. Fortunately, there exists provably correct "differentially private" training that guarantees no adversary could ever succeed at the above attacks. We develop techniques that allow us to empirically evaluate the privacy offered by such schemes, and find they may be more private than can be proven formally [3].
- Published
- 2021
- Full Text
- View/download PDF
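The "provably correct" defence mentioned above is usually realised with DP-SGD-style training; its core step (per-example gradient clipping followed by Gaussian noise) can be sketched in plain numpy as below. The clip norm and noise multiplier are arbitrary illustration values, and this is the general recipe rather than the specific schemes evaluated in the talk.

```python
import numpy as np

# Core DP-SGD-style step (illustrative sketch): clip every per-example
# gradient to a maximum L2 norm, then add Gaussian noise before averaging,
# so no single training example can dominate the update.

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

grads = [np.random.randn(10) for _ in range(32)]   # hypothetical per-example gradients
print(private_gradient(grads).shape)               # (10,): the noisy averaged update
```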
18. Efficient Android Software Development Using MIT App Inventor 2 for Bluetooth-Based Smart Home
- Author
-
Sinantya Feranti Anindya, Irfan Gani Purwanda, Khilda Afifah, Syifaul Fuada, and Trio Adiono
- Subjects
Gigabyte ,Computer science ,business.industry ,Software development ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Computer Science Applications ,User interface design ,law.invention ,Bluetooth ,Home automation ,law ,0202 electrical engineering, electronic engineering, information engineering ,Operating system ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Android (operating system) ,User interface ,business ,computer ,Mobile device - Abstract
In this paper, a specific Android application for a Bluetooth-based smart home system is presented. The aim of this research is to design, develop, and evaluate a user interface prototype for the smart home system. The designed mobile app, named MINDS-apps V1, is expected to perform three tasks: (1) control in soft-control mode, i.e., an RGB ambient lamp and a fan; (2) control in hard-control mode, i.e., a generic power switch, curtain, and door lock; and (3) monitoring, i.e., a humidity and temperature sensor. In total, six types of smart home devices are used for the experiment. Using MIT App Inventor 2, the design process is divided into two phases: user interface design using the Components Designer and implementation of the programming logic using the Blocks Editor. Once the design is finished, the application is compiled into a debuggable APK file 2.23 MB in size, after which it is tested on the six aforementioned devices. The MINDS-apps application is able to operate even on a low-end mobile device with 1 gigabyte of random access memory (RAM) and Bluetooth version 2.1.
- Published
- 2019
- Full Text
- View/download PDF
19. Change Your Cluster to Cold: Gradually Applicable and Serviceable Cold Storage Design
- Author
-
Kyungtae Kang, Dongeun Lee, Yoonsoo Jo, and Chanyoung Park
- Subjects
serviceable cold storage ,Database server ,General Computer Science ,Gigabyte ,mobile messenger ,business.industry ,Computer science ,Distributed storage ,energy-efficiency ,General Engineering ,Cold storage ,020206 networking & telecommunications ,02 engineering and technology ,Energy storage ,Backup ,Server ,Embedded system ,Distributed data store ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,lcsh:TK1-9971 - Abstract
Because of its low cost per gigabyte, hard disk drive (HDD)-based storage is still extensively used despite consuming more power than flash-based storage. In particular, HDDs can be used effectively as cold storage for energy efficiency by spinning down some of the drives. However, typical cold storage cannot be used owing to its high access latency, unless it is used for archival or backup purposes. Furthermore, it is difficult to apply many power-proportional solutions because they require reconfiguration of the server power domain and adjustments to the data layout. In this paper, we propose a serviceable cold storage design that can be applied gradually to online services. The proposed design only modifies the data server of a typical distributed storage system to let it utilize the spin-up and spin-down features of disk drives and determine the data location. Because the modified data server appears identical to the existing data nodes, it can be deployed in the same manner as the addition or removal of a data server. Our prototype is implemented on Ceph, a well-known distributed storage system, and its effectiveness in managing the skewed I/O pattern of applications is demonstrated using a benchmark that can reproduce the real I/O patterns of the LINE mobile messenger application.
- Published
- 2019
- Full Text
- View/download PDF
20. Document Compression Improvements Based on Data Clustering
- Author
-
Vaclav Snasel, Jan Martinovič, Jiri Dvorsky, and Jan Platos
- Subjects
Web server ,Information retrieval ,Gigabyte ,business.industry ,Computer science ,Disk array ,Data_CODINGANDINFORMATIONTHEORY ,Document clustering ,computer.software_genre ,The Internet ,Information society ,Cluster analysis ,business ,computer ,Information explosion - Abstract
The present information society creates huge quantities of textual information. This information explosion is handled using Information Retrieval Systems (IRS), whose tasks are the effective storage of, and searching in, text collections. The amount of text stored in an IRS and its auxiliary data structures constitutes suitable material for data compression. However, the data that form the textual database of every IRS are very mixed, and it is therefore useful to study special data compression methods. This chapter focuses on high-compression-ratio algorithms specialized for text compression in IRS that enable fast decompression of individual documents, are fully integrated with the IRS, and work at an adequate compression speed. These methods use word-based compression combined with the topical development of the input documents. Experimental results prove that clustering has a positive impact on the compression ratio. The advantage of this approach is that it is not necessary to change the existing compression algorithm; the only thing that changes is the order in which the compressed documents are input. Decompression algorithms are not influenced at all, and knowledge of topical development is not necessary.
- Published
- 2021
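The reordering idea is easy to try in miniature: cluster the documents (here with TF-IDF and k-means from scikit-learn, standing in for the chapter's clustering), concatenate them in cluster order, and compare the compressed size against the original ordering. The tiny document set and the use of zlib instead of a word-based compressor are simplifications for illustration only.

```python
import zlib
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Miniature version of "reorder by cluster, then compress" (illustrative
# sketch; plain zlib stands in for the chapter's word-based compressors).
docs = [
    "storage systems for large data", "query ranking in search engines",
    "disk arrays and storage servers", "ranking and retrieval of documents",
    "compression of text collections", "text compression and coding",
]

X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

original = zlib.compress(" ".join(docs).encode())
clustered_order = [doc for _, doc in sorted(zip(labels, docs))]
clustered = zlib.compress(" ".join(clustered_order).encode())

# On realistic collections, placing topically similar documents next to each
# other tends to shrink the compressed output.
print(len(original), len(clustered))
```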
21. An Anomaly Detection Study on Automotive Sensor Data Time Series for Vehicle Applications
- Author
-
Cihangir Derse, Mohamed El Baghdadi, Hatice Nur Gezer, Omar Hegazy, Umut Sensoz, Mustafa Nil, Electrical Engineering and Power Electronics, Faculty of Engineering, and Electromobility research centre
- Subjects
Gigabyte ,business.industry ,Computer science ,Renewable Energy, Sustainability and the Environment ,Mechanical Engineering ,Real-time computing ,Big data ,Automotive industry ,Particle swarm optimization ,Energy Engineering and Power Technology ,ECU ,artificial intelligence ,automotive anomaly detection ,CAN bus ,Automotive Engineering ,Data analysis ,CAN Bus anomalies ,Anomaly detection ,Electrical and Electronic Engineering ,business ,Cluster analysis ,data analytics - Abstract
Anomaly detection in automotive systems has been a strong challenge: first during the development phase, then after manufacturing approval in ramp-up production, and finally during the vehicle's life-cycle management. The numerous sensors positioned inside a vehicle generate more than a gigabyte of data every second. These sensors are connected through the vehicle network, which comprises Electronic Control Units (ECUs) and Controller Area Network (CAN) buses. Each ECU gets input from its sensors, executes specific instructions, and aims to monitor the vehicle's normal state, detecting any irregular action in its observed behavior. The aggregation of all sensor data and control actions for detecting anomalies in vehicle systems poses a multi-source big data problem. Detecting anomalies during manufacturing has turned out to be another research challenge since the introduction of Industry 4.0. This paper presents a performance comparison of different anomaly detection algorithms on time series originating from automotive sensor data. Interquartile range, isolation forest, particle swarm optimization, and k-means clustering algorithms are used to detect outlier data in the study.
- Published
- 2021
- Full Text
- View/download PDF
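Two of the detectors compared above are readily available off the shelf; the sketch below runs the interquartile-range rule and scikit-learn's IsolationForest on a synthetic sensor series (the CAN-bus data, features, and tuning from the paper are not reproduced here).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative sketch of two outlier detectors on a synthetic sensor series.
rng = np.random.default_rng(1)
signal = rng.normal(50.0, 2.0, size=1000)      # hypothetical sensor readings
signal[[100, 400, 900]] = [80.0, 10.0, 95.0]   # injected anomalies

# 1) Interquartile-range rule: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(signal, [25, 75])
iqr = q3 - q1
iqr_outliers = np.where((signal < q1 - 1.5 * iqr) | (signal > q3 + 1.5 * iqr))[0]

# 2) Isolation Forest on the same series.
iso = IsolationForest(contamination=0.01, random_state=0)
iso_outliers = np.where(iso.fit_predict(signal.reshape(-1, 1)) == -1)[0]

print("IQR rule:        ", iqr_outliers)
print("Isolation Forest:", iso_outliers)
```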
22. Why We Are Losing the War Against COVID-19 on the Data Front and How to Reverse the Situation
- Author
-
Reecha Sofat, Jorge Bacallao Gallestey, Henry W W Potts, David Prieto-Merino, Rui Bebiano Da Providencia E Costa, and Sheng-Chia Chung
- Subjects
Gigabyte ,Coronavirus disease 2019 (COVID-19) ,Computer science ,business.industry ,Internet privacy ,Big data ,Petabyte ,COVID-19 ,030204 cardiovascular system & hematology ,03 medical and health sciences ,0302 clinical medicine ,Spanish Civil War ,Resource (project management) ,Viewpoint ,Server ,Statistical analysis ,030212 general & internal medicine ,business ,learning health systems - Abstract
With over 117 million COVID-19–positive cases declared and the death count approaching 3 million, we would expect that the highly digitalized health systems of high-income countries would have collected, processed, and analyzed large quantities of clinical data from patients with COVID-19. Those data should have served to answer important clinical questions such as: what are the risk factors for becoming infected? What are good clinical variables to predict prognosis? What kinds of patients are more likely to survive mechanical ventilation? Are there clinical subphenotypes of the disease? All these, and many more, are crucial questions to improve our clinical strategies against the epidemic and save as many lives as possible. One might assume that in the era of big data and machine learning, there would be an army of scientists crunching petabytes of clinical data to answer these questions. However, nothing could be further from the truth. Our health systems have proven to be completely unprepared to generate, in a timely manner, a flow of clinical data that could feed these analyses. Despite gigabytes of data being generated every day, the vast quantity is locked in secure hospital data servers and is not being made available for analysis. Routinely collected clinical data are, by and large, regarded as a tool to inform decisions about individual patients, and not as a key resource to answer clinical questions through statistical analysis. The initiatives to extract COVID-19 clinical data are often promoted by private groups of individuals and not by health systems, and are uncoordinated and inefficient. The consequence is that we have more clinical data on COVID-19 than on any other epidemic in history, but we have failed to analyze this information quickly enough to make a difference. In this viewpoint, we expose this situation and suggest concrete ideas that health systems could implement to dynamically analyze their routine clinical data, becoming learning health systems and reversing the current situation.
- Published
- 2021
23. Hadoop based EMH framework: A Big Data approach
- Author
-
Mohd Suhaib Kidwai, Faiyaz Ahamad, Mohd. Usman Khan, and Mohammad Zunnun Khan
- Subjects
Data visualization ,Gigabyte ,Computer science ,business.industry ,Data management ,Big data ,Data analysis ,Petabyte ,Cloud computing ,business ,Data science ,Data warehouse - Abstract
Terms like gigabyte and megabyte have become outdated: with improvements in storage technology, storage capacities have increased and the cost of storing data has fallen. Terms like terabyte and petabyte are now in vogue as electronic storage capacity has increased worldwide. At the same time, the volume of data keeps growing, so the need for storage is also on the rise. Nowadays, huge amounts of data are generated and updated within short spans of time, so their storage, access, and analysis must be taken care of; as a result, data analysis and data management have become tedious, hectic, and challenging jobs for data centres and managers. Various issues and challenges arise when tackling data-related problems. For example, data come in different types, such as structured, unstructured, semi-structured, heterogeneous, or homogeneous, which makes their sorting and storage difficult. Older tools are often not sufficient to handle all of these, or combinations of them, in real time or in distributed systems such as cloud systems. Data are created and aggregated at a rate of approximately zettabytes per year. Such a volume of data is not only a problem for data management; it also affects attributes like variety, velocity, value, and complexity, and raises technical issues related to data transport and storage. Thus, we need to think about the storage and management of ever-increasing electronic data and develop novel perspectives and innovative ways to address these issues. In this paper, we analyze these issues and challenges by studying different methodologies for big data analysis and design for sensing and emitting medical data, e-health records, public health records, and clinical data. The paper also presents a proposed framework for distributed data-warehouse-based computing, developed to handle large amounts of data processing by supporting a smart and cost-effective medical support system.
- Published
- 2021
- Full Text
- View/download PDF
24. Real-time Optimisation for Industrial Internet of Things (IIoT): Overview, Challenges and Opportunities
- Author
-
Ayse Kortun and Long D. Nguyen
- Subjects
Signal processing ,convex optimization ,lcsh:Computer engineering. Computer hardware ,Gigabyte ,Computer Networks and Communications ,Wireless network ,Computer science ,Emerging technologies ,Scale (chemistry) ,cloud computing ,Context (language use) ,lcsh:TK7885-7895 ,Terabyte ,Computer Science Applications ,lcsh:TA168 ,industrial internet-of-thing ,realtime optimization ,Risk analysis (engineering) ,Control and Systems Engineering ,lcsh:Systems engineering ,5g network ,5G ,Information Systems - Abstract
In the Industrial Internet-of-Things (IIoT), with massive data transfers and huge numbers of connected devices, combined with the high demand for greater quality of service, signal processing no longer produces small data sets but rather very large ones, measured in gigabytes or terabytes, or even more. This has posed critical challenges in the context of optimisation. Communication scenarios such as online applications come with the need for real-time optimisation. In such scenarios, often under a dynamic environment, a strict real-time deadline is the most important requirement to be met. To this end, embedded convex optimisation, which can be redesigned and updated on a fast time-scale given sufficient computing power, is a candidate to deal with the challenges in real-time optimisation applications. Real-time optimisation is now becoming a reality in signal processing and in the wireless networks of the IIoT. Research into new technologies to meet future demands is receiving urgent attention on a global scale, especially with 5G networks expected to be in place in 2020. This work addresses the fundamentals, technologies, and practically relevant questions related to the many challenges arising from real-time optimisation in communications for the industrial IoT.
- Published
- 2021
- Full Text
- View/download PDF
25. Optimizing Spark Applications
- Author
-
Hien Luu
- Subjects
SQL ,Memory management ,Gigabyte ,Computer science ,Process (engineering) ,Spark (mathematics) ,Key (cryptography) ,Terabyte ,Know-how ,Data science ,computer ,computer.programming_language - Abstract
Chapter 4 covered major capabilities in Spark SQL to perform simple to complex data processing. When you use Spark to process large datasets in hundreds of gigabytes or terabytes, you encounter interesting and challenging performance issues; therefore, it is important to know how to deal with them. Mastering Spark application performance issues is a very interesting, challenging, and broad topic. It requires a lot of research and a deep understanding of some of the key areas of Spark related to memory management and data movement.
- Published
- 2021
- Full Text
- View/download PDF
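As a flavour of the kind of knobs such tuning touches, the sketch below sets a few commonly adjusted Spark options around memory and data movement; the values, the path /data/events, and the column customer_id are placeholders, and these are generic examples rather than the book's specific recommendations.

```python
from pyspark.sql import SparkSession

# A few commonly tuned Spark knobs (illustrative sketch; values and names are
# generic placeholders, not recommendations from the book).
spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    .config("spark.executor.memory", "8g")            # memory per executor
    .config("spark.sql.shuffle.partitions", "400")    # shuffle parallelism
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

df = spark.read.parquet("/data/events")    # hypothetical dataset path
df = df.repartition(400, "customer_id")    # control data movement before wide operations
df.cache()                                 # reuse the dataset across multiple actions
print(df.count())
```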
26. Conclusions: Future Directions in Sensing for Precision Agriculture
- Author
-
Alexandre Escolà and Ruth Kerry
- Subjects
Automated data ,Lead (geology) ,Gigabyte ,Process (engineering) ,Computer science ,Key (cryptography) ,Precision agriculture ,Terabyte ,Data science ,Sensing system - Abstract
In this final chapter, we provide an overall conclusion to this book on the use of sensors in precision agriculture (PA) based on the conclusions of the individual chapters concerning key themes and future research needs. The authors highlighted aspects related to the need to improve sensor resolutions (spatial, temporal and spectral), increase accuracy and simplify the process of calibration, when required. The ability to obtain gigabytes or even terabytes of data is complicated by the need to store, process and analyse them. Although computing power is increasing continuously, automated data processes are also required to ease the adoption of new sensing systems. In addition, some barriers to the widespread adoption of sensing approaches in PA are identified. Most important are economics and training. Gathering, processing and analysing data from sensing systems should lead farmers to make more informed management decisions and that is only possible if the information derived helps them to increase profits in a sustainable way.
- Published
- 2021
- Full Text
- View/download PDF
27. A Holistic Approach to Data Access for Cloud-Native Analytics and Machine Learning
- Author
-
Srikumar Venugopal, Yiannis Gkoufas, Panos Koutsovasilis, and Christian Pinto
- Subjects
Gigabyte ,business.industry ,Computer science ,Cloud computing ,Machine learning ,computer.software_genre ,Object storage ,Data access ,Analytics ,Spark (mathematics) ,Benchmark (computing) ,Cache ,Artificial intelligence ,business ,computer - Abstract
Cloud providers offer a variety of storage solutions for hosting data, varying in both price and performance. For analytics and machine learning applications, object storage services are the go-to solution for hosting datasets that exceed tens of gigabytes in size. However, such a choice results in performance degradation for these applications and requires extra engineering effort in the form of code changes to access the data on remote storage. In this paper, we present a generic end-to-end solution that offers seamless data access for remote object storage services, transparent data caching within the compute infrastructure, and data-aware topologies that boost the performance of applications deployed in Kubernetes. We introduce a custom-implemented cache mechanism that supports all the requirements of the former, and we demonstrate that our holistic solution leads to up to 48% improvement for the Spark implementation of the TPC-DS benchmark and up to 191% improvement for the training of deep learning models from the MLPerf benchmark suite.
- Published
- 2021
- Full Text
- View/download PDF
28. From discrete element simulation data to process insights
- Author
-
Daniel N. Wilke, Paul W. Cleary, and Nicolin Govender
- Subjects
Gigabyte ,business.industry ,Physics ,QC1-999 ,Magnetic storage ,Process (computing) ,computer.software_genre ,law.invention ,Interpretation (model theory) ,Knowledge extraction ,law ,Data mining ,Element (category theory) ,Scale (map) ,business ,computer ,Agile software development - Abstract
Industrial-scale discrete element simulations typically generate gigabytes of data per time step, which implies that even opening a single file may require 5-15 minutes on conventional magnetic storage devices. The inherently multi-disciplinary nature of data science makes the extraction of useful information challenging, often leaving details or new insights undiscovered. This study explores the potential of statistical learning to identify potential regions of interest in large-scale discrete element simulations. We demonstrate that our in-house knowledge discovery and data mining system (KDS) can i) decompose large datasets into regions of potential interest to the analyst, ii) produce multiple decompositions that highlight different aspects of the data, iii) simplify the interpretation of DEM-generated data by focusing attention on automatically decomposed regions, and iv) streamline the analysis of raw DEM data by letting the analyst control the number of decompositions and the way they are performed. Multiple decompositions can be automated in parallel and compressed, enabling agile engagement with the analyst's processed data. This study focuses on spatial, not temporal, inferences.
- Published
- 2021
29. Unicode at Gigabytes per Second
- Author
-
Daniel Lemire
- Subjects
Java ,Gigabyte ,Computer science ,computer.internet_protocol ,Transcoding ,computer.software_genre ,ASCII ,Unicode ,JSON ,Operating system ,computer ,XML ,computer.programming_language ,Rust (programming language) - Abstract
We often represent text using Unicode formats (UTF-8 and UTF-16). The UTF-8 format is increasingly popular, especially on the web (XML, HTML, JSON, Rust, Go, Swift, Ruby). The UTF-16 format is most common in Java, .NET, and inside operating systems such as Windows. Software systems frequently have to convert text from one Unicode format to the other. While recent disks have bandwidths of 5 GiB/s or more, conventional approaches transcode non-ASCII text at a fraction of a gigabyte per second. We show that we can validate and transcode Unicode text at gigabytes per second on current systems (x64 and ARM) without sacrificing safety. Our open-source library can be ten times faster than the popular ICU library on non-ASCII strings and even faster on ASCII strings.
- Published
- 2021
- Full Text
- View/download PDF
30. Mind the Amplification: Cracking Content Delivery Networks via DDoS Attacks
- Author
-
Weizhi Meng and Zihao Li
- Subjects
Web server ,Gigabyte ,Computer science ,business.industry ,Network security ,Reliability (computer networking) ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Content delivery ,Denial-of-service attack ,Content delivery network ,computer.software_genre ,The Internet ,business ,computer ,Computer network - Abstract
A content delivery network (CDN) is a geographically distributed network of web servers that aims to enhance website performance and reliability, i.e., to provide fast delivery of Internet content. A CDN is often part of a mature enterprise-level network, but due to its distributed nature it may be vulnerable to distributed denial-of-service (DDoS) attacks, in which cyber-attackers try to flood the victim with excessive traffic or requests. In the literature, the impact of DDoS on CDNs has not been widely studied. Motivated by this challenge, in this work we emulate a CDN environment and investigate the effect of a particular DDoS attack on it. We find that free CDN clusters can be brought down by only a few gigabytes of traffic. Finally, we also discuss some potential solutions to help defend against DDoS attacks.
- Published
- 2021
- Full Text
- View/download PDF
31. Healthcare Data Visualization
- Author
-
Aditya Shastry, H. A. Sanjay, B. S. Prashanth, M. V. Manoj Kumar, and H. R. Sneha
- Subjects
Exploratory data analysis ,Data visualization ,Gigabyte ,business.industry ,Process (engineering) ,Event (computing) ,Computer science ,Information system ,Process mining ,business ,Data science ,Visualization - Abstract
A huge amount of data is currently being generated: across the world, about 2.5 quintillion bytes of data are being recorded, which is almost equivalent to 0.5 million TB, or enough to occupy 10 million Blu-ray discs. The amount of data is expected to surpass 44 trillion gigabytes by the end of 2020 (compared with 4.4 trillion gigabytes at the end of 2013). The lion's share of the data recorded in information systems relates to healthcare activities. Extracting useful information and insights from such large quantities of data is very important, and visualizing data can reveal results and summaries hidden in the data; visualization can do a particularly good job in health care. Data visualization saves time and conveys information more meaningfully; it is a powerful way to summarize data that assists all stakeholders. This chapter presents an attempt to summarize healthcare data through exploratory data analysis and process mining control-flow discovery techniques. Exploratory data analysis presents a way to explore healthcare data meaningfully, and process-mining-based control-flow visualization presents a way to extract the causal relationships between the activities of a process. Visualizing healthcare through process mining helps identify discrepancies between planned and actual healthcare processes. The final sections of this chapter present process-mining-based control-flow visualizations of real event logs recorded in healthcare information systems.
- Published
- 2021
- Full Text
- View/download PDF
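The control-flow discovery mentioned above boils down to counting which activities directly follow which within each case; the sketch below does that with pandas on a toy event log (the case IDs, activities, and timestamps are invented, and real process-mining tools add filtering and visualisation on top).

```python
import pandas as pd
from collections import Counter

# Control-flow discovery in miniature (illustrative sketch): count the
# directly-follows relations between activities per case, the basis of the
# process maps used in process mining. The event log below is invented.
log = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2, 2, 2],
    "activity":  ["register", "triage", "discharge",
                  "register", "triage", "treatment", "discharge"],
    "timestamp": pd.to_datetime([
        "2021-01-01 08:00", "2021-01-01 08:30", "2021-01-01 10:00",
        "2021-01-02 09:00", "2021-01-02 09:20", "2021-01-02 11:00",
        "2021-01-02 12:00",
    ]),
})

dfg = Counter()
for _, trace in log.sort_values("timestamp").groupby("case_id"):
    acts = trace["activity"].tolist()
    dfg.update(zip(acts, acts[1:]))          # pairs of consecutive activities

for (a, b), count in dfg.items():
    print(f"{a} -> {b}: {count}")
```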
32. Multi-class Bagged Proximal Support Vector Machines for the ImageNet Challenging Problem
- Author
-
Thanh-Nghi Do
- Subjects
Support vector machine ,CUDA ,Gigabyte ,Computer science ,business.industry ,Linear svm ,Large numbers ,Binary number ,Pattern recognition ,Artificial intelligence ,Numerical tests ,business ,Class (biology) - Abstract
We propose a new multi-class bagged proximal support vector machine (MC-Bag-PSVM) for handling the challenging ImageNet problem, with its very large number of images and a thousand classes. MC-Bag-PSVM trains, in parallel on a multi-core computer with GPUs, ensembles of binary PSVM classifiers used in a One-Versus-All (OVA) multi-class strategy. Each binary PSVM model is constructed from bagged binary PSVM models built on under-sampled training data. Numerical test results on the ILSVRC 2010 dataset show that our MC-Bag-PSVM algorithm is faster and more accurate than the state-of-the-art linear SVM algorithm. As an example of its effectiveness, an accuracy of 75.64% is obtained when classifying the ImageNet-1000 dataset (1,261,405 images represented by 2,048 deep features) into 1,000 classes in 29.5 min on a PC with an Intel(R) Core i7-4790 CPU (3.6 GHz, 4 cores) and a Gigabyte GeForce RTX 2080 Ti (11 GB GDDR6, 4,352 CUDA cores).
- Published
- 2021
- Full Text
- View/download PDF
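The One-Versus-All ensemble of bagged linear classifiers described above has a compact off-the-shelf analogue in scikit-learn; the sketch below uses LinearSVC as a stand-in for the proximal SVM and a small toy dataset, so it illustrates the ensemble structure rather than the paper's multi-core/GPU training on ImageNet.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# One-Versus-All ensemble of bagged linear SVMs (illustrative sketch;
# LinearSVC stands in for the proximal SVM, and the dataset is a toy one).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = OneVsRestClassifier(
    BaggingClassifier(LinearSVC(dual=False), n_estimators=10, max_samples=0.5),
    n_jobs=-1,   # train the per-class ensembles in parallel
)
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.3f}")
```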
33. Sentiment Analysis of Product Reviews on Social Media
- Author
-
David Yoon and Velam Thanu
- Subjects
Service (business) ,Data flow diagram ,Gigabyte ,Scale (social sciences) ,Sentiment analysis ,Social media ,Advertising ,Business ,Product (category theory) ,Python (programming language) ,computer ,computer.programming_language - Abstract
Social media is a platform where people share their interests and opinions, and users commonly post their opinions about products or services they have used. This information can be important to marketers: knowledge of how a product is perceived in society is crucial for improving sales and formulating marketing strategies. The data flow in social media is of extremely high volume, generated at a scale of hundreds of gigabytes each day. Many applications have been built to make the most of this otherwise idle data, one of the most important being sentiment analysis.
- Published
- 2021
- Full Text
- View/download PDF
34. Computational Analysis of Documents
- Author
-
Timothy A. Campbell
- Subjects
Class (computer programming) ,Gigabyte ,Computer science ,business.industry ,Deep learning ,media_common.quotation_subject ,Digital imaging ,Task (project management) ,Action (philosophy) ,Human–computer interaction ,Quality (business) ,Artificial intelligence ,business ,Programmer ,media_common - Abstract
Artificial intelligence is a general term encompassing all the means by which a computer uses modern methods not merely to execute a specific action that a programmer has prescribed, but to teach itself how to conduct a task; thus, the computer sometimes learns the way a human might. By 2020, digital imaging sensors had matured enough to capture images that far exceed the quality of past film-based photography. Not only has imaging quality increased, but so have the rate and ease with which imaging can be done, making it feasible to acquire gigabytes of imagery within a single examination case. Deep learning is a class of machine learning that deals with a computer's ability to learn, that is, to teach itself a new, previously unknown task, as opposed to methods that are hand-designed and task-specific.
- Published
- 2020
- Full Text
- View/download PDF
35. Design and FPGA Implementation of Optical Fiber Video Image Transmission
- Author
-
Liu Sheng and Li Xian Cheng
- Subjects
Optical fiber ,Gigabyte ,Computer science ,business.industry ,Interface (computing) ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Communications system ,law.invention ,Transmission (telecommunications) ,law ,Transceiver ,business ,Field-programmable gate array ,HDMI ,Computer hardware - Abstract
In modern communication systems, optical fiber transmission is widely used because of its low power consumption and wide frequency band. By using an SFP (Small Form-factor Pluggable) module, a video transmission system can be built quickly. This paper therefore uses the FPGA-integrated GTX (Gigabyte Transceiver) serial high-speed transceiver as the transmission and forwarding medium. The image is captured by a camera, transmitted through the GTX transceiver and SFP fiber interface, and finally displayed over the HDMI interface. Analysis in Vivado shows that the timing and layout requirements can be met.
- Published
- 2020
- Full Text
- View/download PDF
36. Toward a scalable framework for reproducible processing of volumetric, nanoscale neuroimaging datasets
- Author
-
Hannah P. Cowley, Corban G. Rivera, Konrad P. Kording, Erik C. B. Johnson, Miller Wilt, Jordan Matelsky, Brock A. Wester, Raphael Norman-Tenazas, Elizabeth P. Reilly, William Gray-Roncal, Luis M Rodriguez, Theodore J. LaGrow, Joseph Downs, Marisa Hughes, Eva L. Dyer, Nathan Drenkow, and Dean M. Kleissas
- Subjects
microtomography ,Gigabyte ,Computer science ,AcademicSubjects/SCI02254 ,Health Informatics ,Neuroimaging ,Machine learning ,computer.software_genre ,reproducible science ,Workflow ,03 medical and health sciences ,0302 clinical medicine ,Technical Note ,Ecosystem ,030304 developmental biology ,0303 health sciences ,Computational neuroscience ,electron microscopy ,business.industry ,Petabyte ,Computer Science Applications ,Computer data storage ,Scalability ,Connectome ,AcademicSubjects/SCI00960 ,containers ,workflows ,Artificial intelligence ,business ,computer ,optimization ,030217 neurology & neurosurgery ,Algorithms ,Software ,computational neuroscience - Abstract
Background: Emerging neuroimaging datasets (collected with imaging techniques such as electron microscopy, optical microscopy, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods.
Results: We developed an ecosystem of neuroimaging data analysis pipelines that use open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework, which connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines.
Conclusions: Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets but may be applied to similar problems in other domains.
- Published
- 2020
37. Wireless Wi-Fi module Testing Procedure in Gigabyte Passive Optical Network to Optical Network Terminal of Equipment
- Author
-
Ekaterina M. Gryaznova
- Subjects
Gigabyte ,business.industry ,Computer science ,Wireless ,The Internet ,Optical network terminal ,business ,Passive optical network ,Information exchange ,Computer network - Abstract
Gigabyte Passive Optical Network (GPON) technology can be considered one of the most promising and progressive of the ways to access the Internet available today. This network is able to meet people's needs for fast information exchange, satisfies modern requirements, and has the potential for future development.
- Published
- 2020
- Full Text
- View/download PDF
38. The Methods, Benefits and Problems of The Interpretation of Data
- Author
-
Anower Hossain, Rakibul Hasan, Abu Rayhan Soton, and Mohaiminul Islam
- Subjects
Complex data type ,Gigabyte ,business.industry ,Computer science ,Interpretation (philosophy) ,Big data ,Key (cryptography) ,Visual communication ,Qualitative property ,Performance indicator ,business ,Data science - Abstract
Data analysis and interpretation have taken center stage with the advent of the digital age, and the sheer amount of data can be frightening: a Digital Universe study found that the total data supply in 2012 was 2.8 trillion gigabytes. Given that amount of data, the calling card of any successful enterprise in today's global world will be the ability to analyze complex data, produce actionable insights, and adapt to new market needs at the speed of thought. Business dashboards are the digital-age tools for big data. Capable of displaying key performance indicators (KPIs) for both quantitative and qualitative data analyses, they are ideal for making the fast-paced, data-driven market decisions that push today's industry leaders to sustainable success. Through streamlined visual communication, dashboards allow businesses to engage in real-time, informed decision making and are key instruments in data interpretation. This research article discusses the methods, benefits, and problems of data interpretation.
- Published
- 2020
- Full Text
- View/download PDF
39. Selective Event Processing for Energy Efficient Mobile Gaming with SNIP
- Author
-
Prasanna Venkatesh Rengasamy, Chita R. Das, Shulin Zhao, Haibo Zhang, Anand Sivasubramaniam, and Mahmut Kandemir
- Subjects
010302 applied physics ,Gigabyte ,Memoization ,Computer science ,business.industry ,Distributed computing ,Complex event processing ,02 engineering and technology ,01 natural sciences ,020202 computer hardware & architecture ,Software ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Android (operating system) ,business ,Game Developer ,Mobile device ,Efficient energy use - Abstract
Gaming is an important class of workloads for mobile devices. Games are not only one of the biggest markets for developers and app stores, but also among the most stressful applications for the SoC. In these workloads, much of the computation is user-driven, i.e., events captured from sensors drive the computation to be performed; consequently, event processing constitutes the bulk of the energy drain for these applications. To address this problem, we conduct a detailed characterization of event-processing activity in several popular games and show that (i) some events are exactly repetitive in their inputs, requiring no processing at all, and (ii) a significant number of events are redundant in that, even if their inputs differ, the output matches events already processed. Memoization is an obvious choice for optimizing such behavior; however, the problem is much more challenging in this context because the computation can span functional/OS boundaries and the input space required for the tables can take gigabytes of storage. Instead, our Selecting Necessary InPuts (SNIP) software solution uses machine learning to isolate the input features that really need to be tracked, considerably shrinking the memoization tables. We show that SNIP can save up to 32% of the energy in these games without requiring any hardware modifications.
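The sketch below illustrates the underlying idea of feature-selective memoization in a hedged, simplified form; it is not the SNIP implementation. The selected feature names are assumed to come from an offline learning step and are simply passed in here.

```python
# Memoize event handlers on a chosen subset of input features so the table stays small.
import functools

def selective_memo(selected_features):
    """Cache handler outputs keyed only by the features known to affect them."""
    def decorator(handler):
        cache = {}
        @functools.wraps(handler)
        def wrapper(event: dict):
            key = tuple(event.get(f) for f in selected_features)
            if key not in cache:              # only non-redundant events do real work
                cache[key] = handler(event)
            return cache[key]
        return wrapper
    return decorator

# Hypothetical touch-event handler: only x/y matter, the timestamp does not.
@selective_memo(selected_features=("x", "y"))
def handle_touch(event):
    return (event["x"] // 32, event["y"] // 32)   # e.g. map the touch to a grid cell

handle_touch({"x": 100, "y": 200, "t": 1})   # computed
handle_touch({"x": 100, "y": 200, "t": 2})   # served from the memo table
```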
- Published
- 2020
- Full Text
- View/download PDF
40. PCHA: A Fast Packet Classification Algorithm For IPv6 Based On Hash And AVL Tree
- Author
-
Xu Zhang, Yu-Yan Zhang, and Xing-Xing Chen
- Subjects
IPv6 address ,AVL tree ,Residential gateway ,Gigabyte ,computer.internet_protocol ,Computer science ,business.industry ,Network packet ,Quality of service ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Hash function ,Cloud computing ,Throughput ,IPv4 ,IPv6 ,IPv6 packet ,Firewall (construction) ,Tree (data structure) ,Default gateway ,business ,computer ,Algorithm - Abstract
As the core infrastructure for cloud data operation, exchange, and storage, the data center needs to ensure security and reliability, which are important prerequisites for the development of cloud computing. Because of various illegal accesses, attacks, viruses, and other security threats, the boundary of the cloud data center must be protected by a security gateway. Since traffic is growing to the gigabyte level, the security gateway must ensure high transmission efficiency while supporting the different network services required by cloud services. In addition, data centers are gradually evolving from IPv4 to IPv6 due to the exhaustion of IP addresses. Packet classification algorithms, which divide packets into specific flows, are essential for QoS, real-time data stream applications, and firewalls, so a high-performance IPv6 packet classification algorithm suitable for security gateways is needed. As IPv6 has a 128-bit IP address and a different packet structure from IPv4, traditional IPv4 packet classification algorithms are not directly applicable to IPv6. This paper proposes a fast packet classification algorithm for IPv6, PCHA (Packet Classification based on Hash and AVL (Adelson-Velsky-Landis) tree). It adopts the three flow-classification fields of source IP address (SA), destination IP address (DA), and flow label (FL) in the IPv6 packet defined by RFC 3697 to implement fast three-tuple matching of IPv6 packets, through hash matching of the long IPv6 addresses and tree matching of the shorter flow label. Analysis and testing show that the algorithm has a time complexity close to O(1) within an acceptable space complexity, meets the requirement of fast IPv6 packet classification, adapts well to changes in rule-set size, and supports fast preprocessing of rule sets. Our algorithm supports the storage of 500,000 three-tuple rules on the gateway device and maintains 75% of the throughput performance for small packets of 78 bytes.
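A rough illustration of the two-stage lookup described above follows; it is not the paper's code. The (source, destination) pair is hashed into a dictionary, and the flow labels for that pair are kept sorted, with a bisect-based list standing in for the AVL tree.

```python
# Three-tuple (SA, DA, FL) classification sketch: hash on the address pair, ordered search on flow label.
import bisect
import ipaddress

class FlowClassifier:
    def __init__(self):
        self._table = {}            # (src, dst) -> (sorted flow labels, matching actions)

    def add_rule(self, src, dst, flow_label, action):
        key = (ipaddress.IPv6Address(src), ipaddress.IPv6Address(dst))
        labels, actions = self._table.setdefault(key, ([], []))
        i = bisect.bisect_left(labels, flow_label)
        labels.insert(i, flow_label)
        actions.insert(i, action)

    def classify(self, src, dst, flow_label):
        key = (ipaddress.IPv6Address(src), ipaddress.IPv6Address(dst))
        if key not in self._table:
            return None             # no rule for this address pair
        labels, actions = self._table[key]
        i = bisect.bisect_left(labels, flow_label)
        if i < len(labels) and labels[i] == flow_label:
            return actions[i]       # exact three-tuple match
        return None

clf = FlowClassifier()
clf.add_rule("2001:db8::1", "2001:db8::2", flow_label=0x12345, action="permit")
print(clf.classify("2001:db8::1", "2001:db8::2", 0x12345))   # -> "permit"
```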
- Published
- 2020
- Full Text
- View/download PDF
41. High-Performance ELM for Memory Constrained Edge Computing Devices with Metal Performance Shaders
- Author
-
Amaury Lendasse, Leonardo Espinosa Leal, Kaj-Mikael Björk, and Anton Akusok
- Subjects
business.product_category ,Gigabyte ,Computer science ,Laptop ,Solver ,business ,Shader ,Mobile device ,Edge computing ,Block (data storage) ,Computational science ,Extreme learning machine - Abstract
This paper proposes a block solution method for the Extreme Learning Machine. It combines the speed of a direct, non-iterative solver with minimal memory requirements, making it suitable for edge-computing scenarios running on a mobile device with GPU acceleration. The implementation, tested on the GPU of an iPad Pro, outperforms a laptop CPU and trains a 19,000-neuron model using under one gigabyte of memory, confirming the feasibility of Big Data analysis on modern mobile devices.
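A minimal sketch of a block-wise ELM solve is shown below, assuming NumPy on the CPU rather than Metal Performance Shaders and a generic tanh hidden layer; it is not the paper's implementation. The key point is that only the accumulated matrices H'H and H'Y, not the full hidden-layer matrix, must be held in memory.

```python
import numpy as np

def train_elm_blocked(data_blocks, n_neurons, n_outputs, reg=1e-3, seed=0):
    """Accumulate H'H and H'Y block by block, then solve the regularized normal equations."""
    rng = np.random.default_rng(seed)
    W = b = None
    HtH = np.zeros((n_neurons, n_neurons))
    HtY = np.zeros((n_neurons, n_outputs))
    for X, Y in data_blocks:                      # each (X, Y) block fits in memory on its own
        if W is None:                             # random hidden layer, fixed after initialization
            W = rng.standard_normal((X.shape[1], n_neurons))
            b = rng.standard_normal(n_neurons)
        H = np.tanh(X @ W + b)                    # hidden-layer activations for this block only
        HtH += H.T @ H
        HtY += H.T @ Y
    beta = np.linalg.solve(HtH + reg * np.eye(n_neurons), HtY)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy usage: a regression problem streamed in four blocks of 256 samples each.
rng = np.random.default_rng(1)
blocks = []
for _ in range(4):
    X = rng.standard_normal((256, 8))
    blocks.append((X, X[:, :1] ** 2))
model = train_elm_blocked(blocks, n_neurons=64, n_outputs=1)
print(predict_elm(model, rng.standard_normal((3, 8))).shape)   # (3, 1)
```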
- Published
- 2020
- Full Text
- View/download PDF
42. 'Hidden' Integration of Industrial Design-Tools in E-Learning Environments
- Author
-
René Hutschenreuter, Johannes Nau, Heinz-Dietrich Wuttke, Karsten Henke, and Robert-Niklas Bock
- Subjects
0303 health sciences ,business.product_category ,Gigabyte ,Computer science ,business.industry ,Overhead (engineering) ,computer.software_genre ,Toolchain ,03 medical and health sciences ,Microcontroller ,0302 clinical medicine ,Software ,Industrial design ,030220 oncology & carcinogenesis ,Laptop ,Compiler ,business ,Software engineering ,computer ,030304 developmental biology - Abstract
One of the problems that has emerged during years of intensive use of the GOLDi remote lab is the reliance on external third-party development tools required for software- or hardware-oriented design. The installation and setup of these tools (e.g., Atmel Studio or Intel Quartus Prime) on a private PC or laptop is quite complicated, requires expert knowledge to use them safely and effectively, and is not always platform-independent. Moreover, these IDEs occupy several gigabytes of storage and can take hours to install. These very powerful tools, of which only a fraction of the offered functionality is needed to solve a given educational task, generate a large overhead. In addition, high school and university students may have little or no knowledge of microcontrollers, FPGAs, compiler/linker toolchains, hardware-related languages, etc.
- Published
- 2020
- Full Text
- View/download PDF
43. FairHym: Improving Inter-Process Fairness on Hybrid Memory Systems
- Author
-
Satoshi Imamura and Eiji Yoshida
- Subjects
Runtime system ,Multi-core processor ,Hardware_MEMORYSTRUCTURES ,Gigabyte ,Computer science ,business.industry ,Dynamic frequency scaling ,Embedded system ,Metric (mathematics) ,Process (computing) ,Memory bus ,business ,Dram - Abstract
Persistent memory (PMEM) is an emerging byte-addressable memory device that sits on the memory bus like conventional DRAM. The first PMEM product, Intel® Optane™ DC Persistent Memory (DCPM), has a larger capacity and lower cost per gigabyte than DRAM, although its performance is lower. Therefore, hybrid memory systems that combine DRAM and DCPM for main memory are recommended to take advantage of both. However, our previous work revealed significant unfairness between two processes co-running on a real hybrid memory system: the performance of a process accessing DRAM is significantly degraded by another performing frequent writes to DCPM, but not vice versa. In this work, we propose FairHym, a dynamic frequency scaling technique that improves inter-process fairness on hybrid memory systems. It decreases the operating frequencies of CPU cores that run a process performing frequent DCPM writes in order to throttle its access frequency and prevent DRAM accesses from being blocked. We implement FairHym as a user-level runtime system and evaluate it with 36 two-process workloads on a real server. The evaluation results show that FairHym improves a fairness metric from 0.66 to 0.86 on average, compared to the default setting that maximizes the frequencies of all cores.
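The control policy can be pictured roughly as follows. This is a conceptual sketch only, not FairHym itself: the counter read and frequency write are stubs standing in for hardware performance counters and the Linux cpufreq interface, and the threshold value is an assumption.

```python
# Conceptual fairness loop: throttle cores of processes that write DCPM heavily.
import time
from dataclasses import dataclass

HIGH_WRITE_RATE = 1_000_000                    # DCPM writes/s threshold (assumed value)
F_MAX_KHZ, F_MIN_KHZ = 3_600_000, 1_200_000

@dataclass
class Proc:
    pid: int
    cores: list

def read_dcpm_write_rate(proc: Proc) -> float:
    return 0.0                                 # stub: a real runtime reads hardware counters here

def set_core_frequency_khz(cores, khz):
    print(f"cores {cores} -> {khz} kHz")       # stub: a real runtime writes cpufreq settings here

def fairness_step(processes):
    for proc in processes:
        if read_dcpm_write_rate(proc) > HIGH_WRITE_RATE:
            set_core_frequency_khz(proc.cores, F_MIN_KHZ)   # throttle the DCPM-write-heavy process
        else:
            set_core_frequency_khz(proc.cores, F_MAX_KHZ)   # otherwise restore full speed

if __name__ == "__main__":
    while True:
        fairness_step([Proc(pid=1234, cores=[0, 1])])
        time.sleep(0.1)                        # re-evaluate periodically
```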
- Published
- 2020
- Full Text
- View/download PDF
44. Enhancing QoS in a University Network by using Containerized Generic Cache
- Author
-
Mohit P. Tahiliani, S. S. Kamath, H. L. Praveen Raj, and P. G. Mohanan
- Subjects
Gigabyte ,business.industry ,Computer science ,Quality of service ,Ranging ,Python (programming language) ,Megabyte ,The Internet ,Cache ,business ,computer ,computer.programming_language ,Computer network ,Data transmission - Abstract
Ubiquitous access and enhanced Internet speeds have paved the way for online educational reforms at a large scale. Modern educational applications, ranging from interactive applets, video lessons, and online quizzes to remotely conducted laboratory experiments, have been widely adopted; consequently, there is a demand to provision more bandwidth to satisfy users' expectations. In this paper, we propose an approach to enhance the Quality of Service (QoS) in a University campus network and efficiently utilize the available bandwidth. Within a University, many requests are similar, e.g., operating system updates, Linux package installs, and Python pip packages. These are huge data transfers, ranging from Megabytes to Gigabytes, and consume a large amount of bandwidth on external links to the Internet; redundant requests of this nature from a large user base lead to enormous wastage of bandwidth. The proposed approach addresses this by setting up a containerized forward proxy with a generic cache for popular traffic in the University. Our experiments on a live network at the National Institute of Technology Karnataka, Surathkal show that a large number of redundant requests can be successfully served from this Virtualized Network Function (VNF), thereby enhancing the QoS and efficiently utilizing the available bandwidth. The proposed system reduces latency by over 60% and saves 34 GB of data per day on average. Although the approach is tested in a University environment, it is applicable to other caching requirements with minor modifications. Moreover, since the cache is implemented as a VNF, it is portable and easy to deploy.
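A toy version of the caching idea (not the deployed containerized proxy) might look like the following, where repeated downloads of the same artifact are served from a local store keyed by a hash of the URL; the cache directory and the use of urllib are assumptions for illustration.

```python
# Minimal generic-cache sketch: fetch once, then serve repeated requests from disk.
import hashlib
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("/tmp/generic-cache")
CACHE_DIR.mkdir(parents=True, exist_ok=True)

def fetch(url: str) -> bytes:
    key = hashlib.sha256(url.encode()).hexdigest()
    cached = CACHE_DIR / key
    if cached.exists():                           # cache hit: no external bandwidth used
        return cached.read_bytes()
    with urllib.request.urlopen(url) as resp:     # cache miss: fetch once and store locally
        data = resp.read()
    cached.write_bytes(data)
    return data
```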
- Published
- 2020
- Full Text
- View/download PDF
45. Location-Based Social Network Data Generation Based on Patterns of Life
- Author
-
Andreas Züfle, Hyunjee Jin, Dieter Pfoser, Andrew Crooks, Ovi Chris Rouly, Hamdi Kavak, Joon-Seok Kim, and Carola Wenk
- Subjects
education.field_of_study ,Social network ,Gigabyte ,business.industry ,Computer science ,Test data generation ,Population ,02 engineering and technology ,computer.software_genre ,Data science ,Simulation software ,Through-the-lens metering ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Leverage (statistics) ,020201 artificial intelligence & image processing ,business ,education ,computer ,Social simulation - Abstract
Location-based social networks (LBSNs) have been studied extensively in recent years. However, real-world LBSN data sets suffer from several weaknesses: they are sparse and small, raise privacy concerns, and lack authoritative ground truth. To overcome these weaknesses, we leverage large-scale LBSN simulation to create a framework that simulates human behavior and creates synthetic but realistic LBSN data based on human patterns of life. Such data captures not only the location of users over time but also their interactions via social networks. Patterns of life are simulated by giving agents (i.e., people) an array of "needs" that they aim to satisfy; e.g., agents go home when they are tired, to restaurants when they are hungry, to work to cover their financial needs, and to recreational sites to meet friends and satisfy their social needs. While existing real-world LBSN data sets are trivially small, the proposed framework provides a source for massive LBSN benchmark data that closely mimics the real world. As such, it allows us to capture 100% of the (simulated) population without any data uncertainty, privacy-related concerns, or incompleteness, letting researchers see the (simulated) world through the lens of an omniscient entity with perfect data. Our framework is made available to the community. In addition, we provide a series of simulated benchmark LBSN data sets using different synthetic towns and real-world urban environments obtained from OpenStreetMap. The simulation software and data sets, which comprise gigabytes of spatio-temporal and temporal social network data, are made available to the research community.
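A stripped-down sketch of the needs-driven mechanism is given below; it is not the released simulation framework. Agents track a few decaying needs, visit whichever assumed venue satisfies the most pressing one, and emit LBSN-style check-in records.

```python
# Toy patterns-of-life sketch: decaying needs drive venue choices and check-ins.
import random
from dataclasses import dataclass, field

SITES = {"hunger": "restaurant", "rest": "home", "money": "workplace", "social": "pub"}

@dataclass
class Agent:
    name: str
    needs: dict = field(default_factory=lambda: {k: 1.0 for k in SITES})

    def step(self, t):
        for k in self.needs:                          # every need decays each tick
            self.needs[k] -= random.uniform(0.05, 0.15)
        pressing = min(self.needs, key=self.needs.get)
        self.needs[pressing] = 1.0                    # visiting the matching site satisfies it
        return {"time": t, "agent": self.name, "venue": SITES[pressing]}

agents = [Agent("alice"), Agent("bob")]
checkins = [a.step(t) for t in range(3) for a in agents]
print(checkins)
```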
- Published
- 2020
- Full Text
- View/download PDF
46. ACM SIGMOD Jim Gray Dissertation Award Winner Talk
- Author
-
Jose M. Faleiro
- Subjects
Multi-core processor ,Weak consistency ,Exploit ,Gigabyte ,Computer science ,Software deployment ,business.industry ,Distributed computing ,Server ,Strong consistency ,Cloud computing ,business - Abstract
The increasing democratization of server hardware with multi-core CPUs and large main memories has been one of the dominant hardware trends of the last decade. "Bare metal" servers with tens of CPU cores and over 100 gigabytes of main memory have been available for several years, and recently this large-scale hardware has also become available via the cloud. Database systems, with their roots in uniprocessors and scarce main memory, have unsurprisingly been found wanting on modern hardware. In addition to changes in hardware, database systems have had to contend with changing application requirements and deployment environments. Database systems have long provided applications with an interactive interface, in which an application can communicate with the database over several round-trips in the course of a single request. A large class of applications, however, does not require interactive interfaces and is unwilling to pay the performance cost associated with overly flexible interfaces; some of these applications have eschewed database systems altogether in favor of high-performance key-value stores. Finally, modern applications are increasingly deployed at ever-increasing scales, often serving hundreds of thousands to millions of simultaneous clients. These large-scale deployments are more prone to errors due to consistency issues in their underlying database systems. Ever since their inception, database systems have allowed applications to trade off consistency for performance, and they often nudge applications towards weak consistency. When deployed at scale, weak consistency exposes latent consistency-related bugs, in the same way that failures are more likely to occur at scale. Nearly every widely deployed database system provides applications with weak consistency by default, and its widespread use in practice significantly complicates application development, leading to latent Heisenbugs that are only exposed in production. This dissertation proposes and explores the use of deterministic execution to address these concerns. Database systems have traditionally been non-deterministic: given an input list of transactions, the final state of the database, which corresponds to some totally ordered execution of transactions, depends on non-deterministic factors such as thread-scheduling decisions made by the operating system and failures. Deterministic execution, on the other hand, ensures that the database's final state is always determined by its input list of transactions; in other words, the input list of transactions is the same as the total order of transactions that determines the database's state. While non-deterministic database systems expend significant resources in determining valid total orders of transactions, we show that deterministic systems can exploit simple, low-cost up-front total ordering of transactions to execute and schedule transactions much more efficiently. We show that deterministic execution enables low-overhead, highly parallel scheduling mechanisms that can address the performance limitations of existing database systems on modern hardware. Deterministic database systems are designed on the assumption that applications can submit their transactions as one-shot prepared transactions instead of multiple round-trips.
Finally, we attempt to understand the fundamental reason for the observed performance differences between various consistency levels in database systems, and based on this understanding, show that we can exploit deterministic execution to provide strong consistency at a cost that is competitive with that offered by weak consistency levels.
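The core property being exploited can be illustrated with a toy sketch (not the dissertation's system): when transactions are applied strictly in their input total order, the final state is a pure function of that input list.

```python
# Deterministic execution in miniature: replaying a fixed total order always yields the same state.
def apply_in_order(transactions, state=None):
    """Each transaction is a pure function: state -> new state."""
    state = dict(state or {})
    for txn in transactions:          # the input order *is* the serialization order
        state = txn(state)
    return state

def deposit(acct, amount):
    def txn(state):
        new = dict(state)
        new[acct] = new.get(acct, 0) + amount
        return new
    return txn

log = [deposit("a", 100), deposit("b", 50), deposit("a", -30)]
print(apply_in_order(log))            # always {'a': 70, 'b': 50}, regardless of scheduling
```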
- Published
- 2020
- Full Text
- View/download PDF
47. SparkLeBLAST: Scalable Parallelization of BLAST Sequence Alignment Using Spark
- Author
-
Karim Youssef and Wu-chun Feng
- Subjects
020203 distributed computing ,0303 health sciences ,Biological data ,Speedup ,Gigabyte ,Computer science ,Genomic data ,Sequence alignment ,02 engineering and technology ,Parallel computing ,Rendering (computer graphics) ,03 medical and health sciences ,ComputingMethodologies_PATTERNRECOGNITION ,Upgrade ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,030304 developmental biology - Abstract
The exponential growth of genomic data presents challenges in analyzing and computing on such biological data at scale. While NCBI’s BLAST is a widely used pairwise sequence alignment tool, it does not scale to large datasets that are hundreds of gigabytes (GB) in size. To address this scalability problem, mpiBLAST emerged and became widely used, enabling scaling to 65,536 processes. However, mpiBLAST suffers from being tightly coupled with a specific implementation of BLAST, rendering it difficult to upgrade with the ever-evolving NCBI BLAST code. To address this shortcoming, recent parallel BLAST tools, such as SparkBLAST, consist of wrappers that are decoupled from the BLAST code but suffer from poor scalability with large sequence databases. Thus, there does not exist any parallel BLAST tool that can simultaneously address the issues of performance, scalability, programmability, and upgradability. To address this void, we propose SparkLeBLAST, a parallel BLAST tool that leverages our performance modeling and the Spark framework to deliver the performance and scalability of mpiBLAST and the ease of programming and upgradability of SparkBLAST, respectively. Ultimately, SparkLeBLAST delivers a 10x speedup relative to the state-of-the-art SparkBLAST and nearly a 2x speedup relative to the latest version of mpiBLAST.
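The general query-partitioning pattern can be sketched as follows; this is not SparkLeBLAST itself. Each Spark partition pipes its share of query sequences to the stock NCBI blastp binary against a pre-formatted database. The paths are assumptions, and clean handling of FASTA record boundaries across partitions is glossed over here.

```python
# Sketch of query-side parallel BLAST with Spark: one blastp invocation per partition.
import subprocess
from pyspark import SparkContext

def blast_partition(lines):
    fasta = "\n".join(lines)
    if not fasta.strip():
        return []
    out = subprocess.run(
        ["blastp", "-db", "/data/nr", "-outfmt", "6"],   # assumed database path; tabular output
        input=fasta, capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

sc = SparkContext(appName="parallel-blast-sketch")
hits = sc.textFile("/data/queries.fasta") \
         .mapPartitions(lambda lines: blast_partition(list(lines)))
hits.saveAsTextFile("/data/blast-hits")
```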
- Published
- 2020
- Full Text
- View/download PDF
48. Large-scale storage of whole slide images and fast retrieval of tiles using DRAM
- Author
-
Deepthi S. Rao, Daniel E. Lopez Barron, Arun Zachariah, Ossama Tawfik, and Praveen Rao
- Subjects
Software analytics ,Gigabyte ,Computer science ,Analytics ,business.industry ,Big data ,Scalability ,Spark (mathematics) ,Digital pathology ,business ,Dram ,Computer hardware - Abstract
The U.S. Food and Drug Administration (FDA) has approved two digital pathology systems for primary diagnosis. These systems produce and consume whole slide images (WSIs) constructed from glass slides using advanced digital slide scanners. WSIs can greatly improve the workflow of pathologists through the development of novel image analytics software for automatic detection of cellular and morphological features and disease diagnosis using histopathology slides. However, the gigabyte size of a WSI poses a serious challenge for the storage and retrieval of millions of WSIs. In this paper, we propose a system for scalable storage of WSIs and fast retrieval of image tiles using DRAM. A WSI is partitioned into tiles and sub-tiles using a combination of a space-filling curve, recursive partitioning, and Dewey numbering; the tiles are then stored as a collection of key-value pairs in DRAM, and a tile is retrieved with key-value lookups from DRAM. Through a performance evaluation on a 24-node cluster using 100 WSIs, we observed that, compared to Apache Spark, our system was three times faster at storing the 100 WSIs and 1,000 times faster at accessing a single tile, achieving millisecond latency. Such fast access to tiles is highly desirable when developing deep learning-based image analytics solutions on millions of WSIs.
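As an illustration of the tiling-and-lookup idea (not the paper's system), the sketch below keys each tile by its slide id and the Morton (Z-order) index of its grid position, with a plain dictionary standing in for the DRAM key-value store.

```python
# Tile store sketch: space-filling-curve keys make tile retrieval a single key-value lookup.
def morton_index(row: int, col: int, bits: int = 16) -> int:
    """Interleave the bits of row and col into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((col >> i) & 1) << (2 * i)
        key |= ((row >> i) & 1) << (2 * i + 1)
    return key

class TileStore:
    def __init__(self):
        self._kv = {}                                    # stand-in for a DRAM key-value store

    def put_tile(self, slide_id: str, row: int, col: int, tile_bytes: bytes):
        self._kv[(slide_id, morton_index(row, col))] = tile_bytes

    def get_tile(self, slide_id: str, row: int, col: int) -> bytes:
        return self._kv[(slide_id, morton_index(row, col))]

store = TileStore()
store.put_tile("slide-001", row=3, col=5, tile_bytes=b"...jpeg bytes...")
print(len(store.get_tile("slide-001", 3, 5)))
```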
- Published
- 2020
- Full Text
- View/download PDF
49. Implications of Edge Computing in 5G MMW Technology
- Author
-
G. Ramachandra Reddy, Akshaya Balaji, and Harshit Shandilya
- Subjects
Gigabyte ,Computer science ,business.industry ,Transfer (computing) ,Latency (audio) ,Electrical engineering ,Communications system ,business ,Mobile device ,Edge computing ,5G ,Data transmission - Abstract
Mobile phones have become ubiquitous in today's world; functions that were once considered inconceivable on a handheld device are now trivial for the everyday user. The two factors that have led to such developments are the exponential increase in processing power of mobile devices and ever-increasing data transfer rates. The second factor, transferring data at high speed, depends heavily on the generation of the telecommunication network being used, the latest of which is 5G. However, as speeds have increased, so have consumers' requirements: data-hungry applications demand that hundreds of gigabytes of data be transferred with minimum latency. This, combined with high exposure to millimeter-wave radiation, poses various health challenges, among which one of the biggest concerns involves negative ocular effects. Our aim is to propose a technique that utilizes lower-frequency millimeter waves (MMW) and decreases their negative effects on human eyes without compromising on the speed and latency that 5G communication systems are meant to deliver. The solution to speed, latency, and high radiation exposure combines lower bandwidths for data transmission with bringing data closer to the consumer, which can be achieved by combining 5G with edge computing. Hence, this paper proposes how the lower bandwidths of 5G technology, enhanced with edge computing to decrease latency and the amount of MMW exposure, can give us an effective data transfer mechanism.
- Published
- 2020
- Full Text
- View/download PDF
50. Enhanced Mechanism for Efficient Storage, Retrieval and De-Duplication in Cloud
- Author
-
L.Arokia Jepu Prabhu and K. Nandhini
- Subjects
Service (systems architecture) ,Database ,Gigabyte ,business.industry ,Computer science ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Upload ,020204 information systems ,Data_FILES ,0202 electrical engineering, electronic engineering, information engineering ,Data deduplication ,The Internet ,Operating expense ,business ,Cloud storage ,computer - Abstract
Cloud storage is a service model in which data is stored, managed, remotely backed up, and made available to users over a network (usually the Internet). Users generally pay per month for the storage they consume. Although the cost per gigabyte has dropped drastically, cloud storage companies have introduced operating expenses that can make the service more costly than consumers bargained for; the more space you use, the more you are liable to pay. The main objective of this project is to reduce the need to purchase additional storage. Measures are proposed to avoid duplicate files in the centralized cloud storage area, and secured auditing methodologies are introduced to enhance the maintenance and management of files in the storage area. In addition, file classification techniques are introduced whereby users can upload a file into one of the following classes: private, public, sensitive, and confidential. Files categorised as private or public are deleted periodically, thereby saving storage space, while confidential and sensitive files are stored free of tampering unless the user chooses to delete them manually; a file's class can be modified at any time. This helps reduce the need to buy extra storage space. Further, customised file tagging techniques are proposed, where each file can be saved with three tags to aid efficient management of files. Through these approaches, files can be efficiently stored, accessed, and managed in the centralised cloud storage area.
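A minimal sketch of upload-time de-duplication combined with class tagging is shown below; it is not the proposed system. The class names are taken from the abstract, while everything else is illustrative.

```python
# Dedup sketch: one physical copy per content hash, with a class tag per logical file.
import hashlib

class DedupStore:
    def __init__(self):
        self.blobs = {}      # sha256 digest -> file bytes (single physical copy)
        self.files = {}      # file name -> (digest, class tag)

    def upload(self, name: str, data: bytes, file_class: str = "private"):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:          # duplicate contents are never stored twice
            self.blobs[digest] = data
        self.files[name] = (digest, file_class)

    def purge(self, classes=("private", "public")):
        """Periodic cleanup of the classes described as deletable; drop unreferenced blobs."""
        self.files = {n: v for n, v in self.files.items() if v[1] not in classes}
        live = {d for d, _ in self.files.values()}
        self.blobs = {d: b for d, b in self.blobs.items() if d in live}

store = DedupStore()
store.upload("report-a.pdf", b"same bytes", "confidential")
store.upload("report-b.pdf", b"same bytes", "public")     # deduplicated against the first upload
print(len(store.blobs))   # 1
```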
- Published
- 2020
- Full Text
- View/download PDF