1,065 results for "COMPUTER storage capacity"
Search Results
2. Research for Practice: Crash Consistency: Keeping data safe in the presence of crashes is a fundamental problem.
- Author
-
ALAGAPPAN, RAMNATTHAN
- Subjects
- *
COMPUTER system failures , *COMPUTER system failure prevention , *COMPUTER storage capacity , *APPLICATION software , *ELECTRONIC file management , *COMPUTERS - Abstract
This article discusses system crashes and how each level of the system must be implemented correctly, and the system components' interfaces used correctly by applications, in order to keep data safe in the presence of crashes. Several papers cited within the article explore the file system (a lower-level component within the system), interface-level guarantees examined with bug-finders, crash-consistent programs, and how the newer concept of persistent memory interacts with system crashes.
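A minimal sketch (not from the article) of the classic crash-consistent update pattern that this line of work studies: write the new contents to a temporary file, flush them, atomically rename over the original, then flush the directory entry, so a crash leaves either the complete old version or the complete new version. The file name and payload are illustrative, and the directory fsync is POSIX-specific.

```python
import os

def atomic_write(path, data: bytes):
    """Crash-consistent file update: temp file + fsync + atomic rename."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)                      # persist the new contents
    finally:
        os.close(fd)
    os.replace(tmp, path)                 # atomic rename over the old file
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)                  # persist the directory entry (POSIX)
    finally:
        os.close(dir_fd)

atomic_write("config.json", b'{"version": 2}')
```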
- Published
- 2023
- Full Text
- View/download PDF
3. Efficient Estimation and Validation of Shrinkage Estimators in Big Data Analytics.
- Author
-
du Plessis, Salomi, Arashi, Mohammad, Maribe, Gaonyalelwe, and Millard, Salomon M.
- Subjects
- *
COMPUTER storage capacity , *MULTICOLLINEARITY , *BIG data , *REGRESSION analysis , *FRUIT drying - Abstract
Shrinkage estimators are often used to mitigate the consequences of multicollinearity in linear regression models. Despite the ease with which these techniques can be applied to small- or moderate-size datasets, they encounter significant challenges in the big data domain. Some of these challenges are that the volume of data often exceeds the storage capacity of a single computer and that the time required to obtain results becomes infeasible due to the computational burden of a high volume of data. We propose an algorithm for the efficient model estimation and validation of various well-known shrinkage estimators to be used in scenarios where the volume of the data is large. Our proposed algorithm utilises sufficient statistics that can be computed and updated at the row level, thus minimizing access to the entire dataset. A simulation study, as well as an application on a real-world dataset, illustrates the efficiency of the proposed approach. [ABSTRACT FROM AUTHOR]
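To make the row-level sufficient-statistics idea concrete, here is a minimal sketch (not the authors' code): ridge regression, a standard shrinkage estimator, computed from X'X and X'y accumulated one row at a time, so the full dataset never has to sit in memory at once. The toy data, the shrinkage parameter k, and all names are illustrative assumptions.

```python
import numpy as np

def streaming_sufficient_stats(row_iter, p):
    """Accumulate X'X and X'y one row at a time, never holding the full data."""
    XtX, Xty, n = np.zeros((p, p)), np.zeros(p), 0
    for x, y in row_iter:
        XtX += np.outer(x, x)
        Xty += x * y
        n += 1
    return XtX, Xty, n

def ridge_from_stats(XtX, Xty, k):
    """Ridge (shrinkage) estimate computed only from the sufficient statistics."""
    return np.linalg.solve(XtX + k * np.eye(XtX.shape[0]), Xty)

# toy usage on simulated rows streamed from a generator
rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0, 0.5])
rows = ((x, x @ beta_true + rng.normal(scale=0.1))
        for x in rng.normal(size=(10_000, 3)))
XtX, Xty, n = streaming_sufficient_stats(rows, p=3)
print(ridge_from_stats(XtX, Xty, k=1.0))   # close to beta_true
```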
- Published
- 2023
- Full Text
- View/download PDF
4. Emerging Robust Polymer Materials for High-Performance Two-Terminal Resistive Switching Memory.
- Author
-
Li, Bixin, Zhang, Shiyang, Xu, Lan, Su, Qiong, and Du, Bin
- Subjects
- *
SHAPE memory polymers , *COMPUTER storage capacity , *POLYMER blends , *NONVOLATILE random-access memory , *INFORMATION technology , *POLYMERS , *ARTIFICIAL intelligence - Abstract
Facing the era of information explosion and the advent of artificial intelligence, there is a growing demand for information technologies with huge storage capacity and efficient computer processing. However, traditional silicon-based storage and computing technology will reach their limits and cannot meet the post-Moore information storage requirements of ultrasmall size, ultrahigh density, flexibility, biocompatibility, and recyclability. As a response to these concerns, polymer-based resistive memory materials have emerged as promising candidates for next-generation information storage and neuromorphic computing applications, with the advantages of easy molecular design, volatile and non-volatile storage, flexibility, and facile fabrication. Herein, we first summarize the memory device structures, memory effects, and memory mechanisms of polymers. Then, the recent advances in polymer resistive switching materials, including single-component polymers, polymer mixtures, 2D covalent polymers, and biomacromolecules for resistive memory devices, are highlighted. Finally, the challenges and future prospects of polymer memory materials and devices are discussed. Advances in polymer-based memristors will open new avenues in the design and integration of high-performance switching devices and facilitate their application in future information technology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Resonant properties of the memory capacity of a laser-based reservoir computer with filtered optoelectronic feedback.
- Author
-
Danilenko, G. O., Kovalev, A. V., Viktorov, E. A., Locquet, A., Citrin, D. S., and Rontani, D.
- Subjects
- *
OPTOELECTRONIC devices , *COMPUTER storage capacity , *SEMICONDUCTOR lasers , *MEMORY - Abstract
We provide a comprehensive analysis of the resonant properties of the memory capacity of a reservoir computer based on a semiconductor laser subjected to time-delayed filtered optoelectronic feedback. Our analysis reveals first how the memory capacity decreases sharply when the input-data clock cycle is slightly time-shifted from the time delay or its multiples. We attribute this effect to the inertial properties of the laser. We also report on the damping of the memory-capacity drop at resonance with a decrease of the virtual-node density and its broadening with the filtering properties of the optoelectronic feedback. These results are interpreted using the eigenspectrum of the reservoir obtained from a linear stability analysis. Then, we unveil an invariance in the minimum value of the memory capacity at resonance with respect to a variation of the number of nodes, provided the number is large enough, and quantify how the filtering properties impact the system memory in and out of resonance. [ABSTRACT FROM AUTHOR]
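For readers unfamiliar with the measure being analysed, the sketch below computes the standard linear memory capacity, MC = Σ_k r_k², for a generic echo-state reservoir driven by random input; it is not the authors' laser model, and the reservoir size, spectral radius, washout, and delay range are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 200, 4000, 200
W = rng.normal(size=(N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()      # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, size=N)
u = rng.uniform(-1, 1, size=T)

x, states = np.zeros(N), np.zeros((T, N))
for t in range(T):                                  # simple tanh reservoir
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x
X = states[washout:]

def capacity(k, ridge=1e-6):
    """R^2 of a ridge readout trained to reconstruct the input delayed by k steps."""
    target = u[washout - k:T - k]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
    return np.corrcoef(X @ w, target)[0, 1] ** 2

mc = sum(capacity(k) for k in range(1, 41))
print(f"linear memory capacity ~ {mc:.2f}")
```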
- Published
- 2023
- Full Text
- View/download PDF
6. Linear Address Spaces.
- Author
-
KAMP, POUL-HENNING
- Subjects
- *
COMPUTER architecture , *COMPUTER storage capacity , *OBJECT-oriented methods (Computer science) , *MOTHERBOARDS , *COMPUTERS - Abstract
The article discusses various aspects of computer-related linear physical and virtual addresses, and it mentions Cambridge University’s CHERI computer architecture platform and the computer memory-based difficulties that are associated with translating from linear virtual to linear physical addresses. The ARM and x64 types of central processing units (CPUs) are examined, along with object-oriented computing and the Rational R1000/s400 computer.
- Published
- 2022
- Full Text
- View/download PDF
7. Graph Ranked Clustering Based Biomedical Text Summarization Using Top k Similarity.
- Author
-
Gupta, Supriya, Sharaff, Aakanksha, and Nagwani, Naresh Kumar
- Subjects
GRAPHIC methods ,READABILITY (Literary style) ,COMPUTER storage capacity ,BIOMEDICAL signal processing ,COHESION - Abstract
Text summarization models help biomedical clinicians and researchers acquire informative data from enormous domain-specific literature with less time and effort. Evaluating and selecting the most informative sentences from biomedical articles is always challenging. This study aims to develop a dual-mode biomedical text summarization model to achieve enhanced coverage and information. The research also includes checking the fitment of appropriate graph ranking techniques for improved performance of the summarization model. The input biomedical text is mapped to a graph in which meaningful sentences are evaluated as the central nodes and the critical associations between them form the edges. The proposed framework utilizes the top-k similarity technique in combination with UMLS and a sampled probability-based clustering method, which aids in unearthing relevant meanings of the biomedical domain-specific word vectors and finding the best possible associations between crucial sentences. The quality of the framework is assessed via different parameters like information retention, coverage, readability, cohesion, and ROUGE scores in clustering and non-clustering modes. The significant benefits of the suggested technique are capturing crucial biomedical information with increased coverage and reasonable memory consumption. The configurable settings of combined parameters reduce execution time, enhance memory utilization, and extract relevant information, outperforming other biomedical baseline models. An improvement of 17% is achieved when the proposed model is checked against similar biomedical text summarizers. [ABSTRACT FROM AUTHOR]
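A minimal sketch of the graph-ranking backbone described above (the UMLS concept mapping and the sampled probability-based clustering are omitted): TF-IDF sentence vectors, a top-k cosine-similarity graph, and PageRank to pick the highest-ranked sentences. Function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, k=3, n_keep=2):
    """Rank sentences on a top-k similarity graph and return the best n_keep."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, row in enumerate(sim):
        for j in np.argsort(row)[-k:]:            # keep only the k strongest neighbours
            if row[j] > 0:
                g.add_edge(i, int(j), weight=float(row[j]))
    scores = nx.pagerank(g, weight="weight")
    best = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(best)]   # preserve document order

docs = ["Gene expression changed markedly after treatment.",
        "The treatment altered the expression of several genes.",
        "Patients reported only mild side effects.",
        "Side effects were mild and transient."]
print(summarize(docs, k=2, n_keep=2))
```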
- Published
- 2023
- Full Text
- View/download PDF
8. Cancionero e imprenta en la red.
- Author
-
RIPOLL, ENRIQUE
- Subjects
INFORMATION technology ,COMPUTER storage capacity ,DATA warehousing ,COMPUTER science ,PHILOLOGY ,COMPUTATIONAL linguistics - Abstract
Copyright of Scripta is the property of Scripta and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
9. Clean up your hard drive.
- Author
-
PEERS, NICK
- Subjects
MACBOOK (Computer) ,HARD disks ,COMPUTER storage devices ,COMPUTER storage capacity ,ELECTRONIC data processing - Abstract
The article provides information and guidance on effectively managing and clearing up storage space on Mac computers. It addresses the common issue of Mac storage filling up over time and offers tips and tools to identify and remove unnecessary files, manage user folders, and uninstall unused apps to optimize the performance and storage capacity of the device.
- Published
- 2023
10. Succinct Range Filters.
- Author
-
Huanchen Zhang, Lim, Hyeontaek, Leis, Viktor, Andersen, David G., Kaminsky, Michael, Keeton, Kimberly, and Pavlo, Andrew
- Subjects
- *
QUERY languages (Computer science) , *DATABASES , *COMPUTER storage capacity , *INFORMATION theory - Abstract
We present the Succinct Range Filter (SuRF), a fast and compact data structure for approximate membership tests. Unlike traditional Bloom filters, SuRF supports both single-key lookups and common range queries, such as range counts. SuRF is based on a new data structure called the Fast Succinct Trie (FST) that matches the performance of state-of-the-art order-preserving indexes, while consuming only 10 bits per trie node--a space close to the minimum required by information theory. Our experiments show that SuRF speeds up range queries in a widely used database storage engine by up to 5×. [ABSTRACT FROM AUTHOR]
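The FST itself is intricate, but the core idea SuRF exploits, storing truncated key prefixes so that range-membership questions can be answered approximately (false positives are possible, false negatives are not), can be shown with a much simpler and far less compact sketch; this is not the authors' data structure, and the prefix length is an arbitrary assumption.

```python
import bisect

class PrefixRangeFilter:
    """Approximate range filter built from sorted, fixed-length key prefixes."""

    def __init__(self, keys, prefix_len=3):
        self.plen = prefix_len
        self.prefixes = sorted({k[:prefix_len] for k in keys})

    def may_contain_range(self, lo, hi):
        """True if some stored prefix could belong to a key in [lo, hi]."""
        i = bisect.bisect_left(self.prefixes, lo[:self.plen])
        return i < len(self.prefixes) and self.prefixes[i] <= hi[:self.plen]

f = PrefixRangeFilter([b"apple", b"grape", b"melon"])
print(f.may_contain_range(b"app", b"apz"))     # True  (real hit)
print(f.may_contain_range(b"kiwi", b"lemon"))  # False (definitely empty)
```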
- Published
- 2021
- Full Text
- View/download PDF
11. Recurrent neural network modeling of multivariate time series and its application in temperature forecasting.
- Author
-
Nketiah, Edward Appau, Chenlong, Li, Yingchuan, Jing, and Aram, Simon Appah
- Subjects
- *
RECURRENT neural networks , *COMPUTER storage capacity , *TIME series analysis , *ATMOSPHERIC temperature , *DEW point , *FORECASTING - Abstract
Temperature forecasting plays an important role in human production and operational activities. Traditional temperature forecasting mainly relies on numerical forecasting models to operate, which takes a long time and has higher requirements for the computing power and storage capacity of computers. In order to reduce computation time and improve forecast accuracy, deep learning-based temperature forecasting has received more and more attention. Based on the atmospheric temperature, dew point temperature, relative humidity, air pressure, and cumulative wind speed data of five cities in China from 2010 to 2015 in the UCI database, multivariate time series atmospheric temperature forecast models based on recurrent neural networks (RNN) are established. Firstly, the temperature forecast modeling of five cities in China is established by RNN for five different model configurations; secondly, the neural network training process is controlled by using the Ridge Regularizer (L2) to avoid overfitting and underfitting; and finally, the Bayesian optimization method is used to adjust the hyper-parameters such as network nodes, regularization parameters, and batch size to obtain better model performance. The experimental results show that the LSTM-based RNN achieved the minimum atmospheric temperature prediction error compared to the base models, and the five models obtained are the best models for atmospheric temperature prediction in the corresponding cities. In addition, the feature selection method is applied to the established models, resulting in simplified models with higher prediction accuracy. [ABSTRACT FROM AUTHOR]
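A minimal Keras sketch in the spirit of the models described: an LSTM recurrent network with an L2 ('ridge') regularizer predicting the next temperature from a window of weather features. The window length, feature count, layer size, and the random placeholder data are assumptions, and the Bayesian hyper-parameter search and feature selection steps are omitted.

```python
import numpy as np
import tensorflow as tf

T_STEPS, N_FEATURES = 24, 5                      # assumed: 24-step windows, 5 features
x_train = np.random.rand(512, T_STEPS, N_FEATURES).astype("float32")
y_train = np.random.rand(512, 1).astype("float32")

l2 = tf.keras.regularizers.l2(1e-4)              # ridge-style penalty on the weights
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T_STEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64, kernel_regularizer=l2, recurrent_regularizer=l2),
    tf.keras.layers.Dense(1),                    # next-step temperature
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, batch_size=32, epochs=2, verbose=0)
```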
- Published
- 2023
- Full Text
- View/download PDF
12. Application of local optimisation with Steepest Ascent Algorithm for the residual static corrections in a southern Algeria geophysical survey.
- Author
-
BANSIR, F., ELADJ, S., HARROUCHI, L., DOGHMANE, M. Z., and ALIOUANE, L.
- Subjects
- *
GEOPHYSICAL surveys , *OPTIMIZATION algorithms , *COMPUTER storage capacity , *GENETIC algorithms , *ALGORITHMS - Abstract
Static corrections in the seismic data processing sequence are one of the most sensitive steps in seismic exploration undertaken in areas with complex topography and geology. Using the stack energy as an objective function for the inversion problem, static corrections can be performed without using the cross-correlation of all traces of a Common Depth Point (CDP) with all other CDP traces. This step is a time-consuming operation and requires huge computer memory capacities. The pre-calculation step of the cross-correlation table can provide greater processing efficiency in practice, by either a local optimisation algorithm such as Steepest Ascent applied to the traces, or a global search method such as genetic algorithms. The sudden change of the topography and the signal/noise (S/N) ratio decrease can cause failure in residual static (RS) corrections operations; consequently, it may lead to poor quality of the seismic section. In this study, we firstly created a synthetic seismic section (synthetic stack), which describes a geological model. Then, the Steepest Ascent Algorithm (SAA) method is used to estimate RS corrections and evaluate its performance, so that the problems encountered in the field can be overcome. The generated synthetic stack, with a two-layer tabular geological model, has been disturbed by introducing wrong static corrections and random noise. Thus, the model became a noisy stack with a low S/N ratio and poor synthetic-horizon continuity. After 110 iterations, the SAA estimated the appropriate corrections and eliminated the disturbances introduced earlier. Moreover, it improved the quality of the stack and the continuity of the synthetic horizons. Therefore, we have applied this algorithm using the same methodology for calculating the RS corrections of real seismic prospection data from southern Algeria; the input data has poor quality caused by near-surface anomalies. We found that our proposed methodology has improved the RS corrections in comparison to the conventional methods currently used in seismic processing in the Algerian industry. [ABSTRACT FROM AUTHOR]
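To illustrate the objective being optimised, here is a toy sketch, not the authors' implementation: per-trace static shifts are adjusted by greedy local search so as to maximise the stack energy. Real residual-statics codes work with surface-consistent source and receiver terms and pre-computed cross-correlation tables; the shift range and trace generation here are arbitrary assumptions.

```python
import numpy as np

def stack_power(traces, shifts):
    """Energy of the stacked trace after applying per-trace static shifts (samples)."""
    stacked = np.zeros(traces.shape[1])
    for trace, s in zip(traces, shifts):
        stacked += np.roll(trace, -s)
    return float(np.sum(stacked ** 2))

def ascent_statics(traces, max_shift=10, n_iter=50):
    """Hill climbing: keep any one-sample shift change that raises the stack energy."""
    shifts = np.zeros(len(traces), dtype=int)
    best = stack_power(traces, shifts)
    for _ in range(n_iter):
        improved = False
        for i in range(len(traces)):
            for cand in (shifts[i] - 1, shifts[i] + 1):
                if abs(cand) > max_shift:
                    continue
                trial = shifts.copy()
                trial[i] = cand
                p = stack_power(traces, trial)
                if p > best:
                    best, shifts, improved = p, trial, True
        if not improved:
            break
    return shifts, best

# toy data: the same wavelet shifted by unknown statics on 8 traces
rng = np.random.default_rng(0)
wavelet = np.pad(np.exp(-np.linspace(-3, 3, 61) ** 2), 70)
traces = np.array([np.roll(wavelet, s) for s in rng.integers(-5, 6, size=8)])
print(ascent_statics(traces))
```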
- Published
- 2023
- Full Text
- View/download PDF
13. A Hybrid Identity Based Encryption: A Modest countermeasure for device-to-device (D2D) illegal digital file sharing.
- Author
-
E., Ume Leonard and John, Orie Madubuchukwu
- Subjects
INFORMATION technology ,BLUETOOTH technology ,DATA encryption ,PIRACY (Copyright) ,COMPUTER storage capacity - Abstract
The innovations in the field of information and communication technology have come with their attendant problems, which require further research to fix. Innovations such as Bluetooth technology, fast wireless internet connections, high memory capacity, etc. have simplified the way in which digital files can be shared among devices. This escalated the problem of copyright piracy, which is a colossal economic sabotage for any nation. To achieve high security and a faster rate of data encryption, a two-pronged security approach was used in this paper, which included the modified ElGamal key exchange algorithm and obfuscation security. To evaluate the performance of this algorithm and these techniques, programming was done in HTML5, JavaScript, and PHP to produce a platform for digital file encryption and a multi-digital file reader. The testing showed that the suggested technique can be used to counter device-to-device sharing of digital files. [ABSTRACT FROM AUTHOR]
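The abstract does not detail the paper's modifications, so the sketch below is only textbook ElGamal over a prime field, to show the key-generation / encryption / decryption flow such a scheme builds on; the 64-bit prime is illustrative and far too small for real security, and the obfuscation layer is not shown.

```python
import secrets

P = 0xFFFFFFFFFFFFFFC5      # 2**64 - 59, a prime; toy size, NOT secure
G = 5                       # base element (illustrative)

def keygen():
    x = secrets.randbelow(P - 2) + 1          # private key
    return x, pow(G, x, P)                    # (private, public)

def encrypt(pub, m):
    k = secrets.randbelow(P - 2) + 1          # fresh ephemeral key per message
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def decrypt(priv, c1, c2):
    s = pow(c1, priv, P)                      # shared secret g^(xk)
    return (c2 * pow(s, -1, P)) % P           # modular inverse needs Python 3.8+

priv, pub = keygen()
c1, c2 = encrypt(pub, 42)
assert decrypt(priv, c1, c2) == 42
```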
- Published
- 2023
14. Vector–Logic Synthesis of Deductive Matrices for Fault Simulation.
- Author
-
Gharibi, W., Hahanova, A., Hahanov, V., Chumachenko, S., Litvinova, E., and Hahanov, I.
- Subjects
VECTOR processing (Computer science) ,ALGORITHMS ,COMPUTER systems ,COMPUTER storage capacity ,DATA encryption - Abstract
The main idea is to create vector-logic computing that uses only read-write transactions on address memory to process large data. The main task is to implement new simple and reliable models and methods of vector computing based on primitive read-write transactions in the technology of vector flexible interpretive simulation of digital system faults. Vector-logic computing is a computational process based on read-write transactions over bits of a binary vector of functionality, where the input data is the addresses of the bits. A vector method for the synthesis of deductive matrices for transporting input fault lists is proposed, which has a quadratic computational complexity. The method is a development of the deductive vector synthesis algorithm based on the truth table. The deductive matrix is intended for the synthesis and verification of tests using parallel simulation of faults, as addresses, based on a read-write transaction of deductive vector cells in memory. [ABSTRACT FROM AUTHOR]
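The paper's deductive matrices build on classic deductive fault simulation, in which every signal carries the list of faults whose effect is observable on it. A minimal sketch of the standard propagation rules for a 2-input AND gate is shown below, using Python sets rather than the paper's bit-vector read-write transactions; gate and fault identifiers are illustrative.

```python
def and_gate_fault_list(a, b, La, Lb, out_id):
    """Deductive fault propagation for a 2-input AND gate.

    a, b   : fault-free logic values (0/1) on the inputs
    La, Lb : sets of faults observable on inputs a and b
    Returns (fault-free output value, set of faults observable on the output).
    """
    if a == 1 and b == 1:          # both inputs non-controlling
        out, Lout = 1, La | Lb
    elif a == 0 and b == 0:        # both controlling: a fault must flip both inputs
        out, Lout = 0, La & Lb
    elif a == 0:                   # only a carries the controlling value
        out, Lout = 0, La - Lb
    else:                          # only b carries the controlling value
        out, Lout = 0, Lb - La
    # the output's own stuck-at fault opposite to the good value is also observable
    return out, Lout | {(out_id, 1 - out)}

out, faults = and_gate_fault_list(1, 0, {("a", 0)}, {("b", 1)}, "y")
print(out, faults)                 # 0 {('b', 1), ('y', 1)}
```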
- Published
- 2023
- Full Text
- View/download PDF
15. Is Persistent Memory Persistent? A simple and inexpensive test of failure-atomic update mechanisms.
- Author
-
KELLY, TERENCE
- Subjects
- *
NONVOLATILE memory , *ELECTRIC power failures , *APPLICATION software , *COMPUTER storage capacity - Abstract
The article discusses the design and implementation of what is referred to as a testbed for how computer applications on a total hardware/software setup react to power interruptions. The article examines persistent memory and related mechanisms for crash-tolerance and test results, in addition to describing the power-interruption testbed.
- Published
- 2020
- Full Text
- View/download PDF
16. Domain-Specific Hardware Accelerators: DSAs gain efficiency from specialization and performance from parallelism.
- Author
-
DALLY, WILLIAM J., TURAKHIA, YATISH, and SONG HAN
- Subjects
- *
DOMAIN-specific programming languages , *COMPUTER input-output equipment , *MOORE'S law , *ALGORITHMS , *COMPUTER storage capacity - Abstract
The article discusses hardware computing engines known as domain-specific accelerators (DSAs). It discusses Moore's Law, the design of the DSA, and various sources of acceleration for DSAs. It also discusses algorithms, total cost of ownership (TCO), and memory in the accelerators.
- Published
- 2020
- Full Text
- View/download PDF
17. Path ORAM: An Extremely Simple Oblivious RAM Protocol.
- Author
-
STEFANOV, EMIL, VAN DIJK, MARTEN, SHI, ELAINE, CHAN, T.-H. HUBERT, FLETCHER, CHRISTOPHER, REN, LING, Xiangyao Yu, and DEVADAS, SRINIVAS
- Subjects
COMPUTER security ,COMPUTER storage capacity ,BANDWIDTHS ,TELECOMMUNICATION equipment ,INFORMATION sharing - Abstract
We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme known to date with small client storage. We formally prove that Path ORAM has an O(log N) bandwidth cost for blocks of size B = Ω(log² N) bits. For such block sizes, Path ORAM is asymptotically better than the best-known ORAM schemes with small client storage. Due to its practicality, Path ORAM has been adopted in the design of secure processors since its proposal. [ABSTRACT FROM AUTHOR]
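A compact, unoptimised sketch of the Path ORAM access procedure: a binary tree of buckets on the server, plus a client-side stash and position map, with the read-path-then-greedy-write-back step. Bucket size and block count are arbitrary choices, and the real protocol additionally encrypts and pads every bucket it touches, which is omitted here.

```python
import math
import random

class PathORAM:
    """Toy Path ORAM: tree of buckets (server) + stash and position map (client)."""

    def __init__(self, num_blocks, bucket_size=4):
        self.Z = bucket_size
        self.L = max(1, math.ceil(math.log2(num_blocks)))          # tree height
        self.num_leaves = 2 ** self.L
        # heap numbering: node 1 is the root, leaves are 2^L .. 2^(L+1)-1
        self.tree = {n: [] for n in range(1, 2 ** (self.L + 1))}
        self.position = {b: random.randrange(self.num_leaves) for b in range(num_blocks)}
        self.stash = {}                                            # block id -> value

    def _path(self, leaf):
        """Heap indices from the given leaf up to the root."""
        node, nodes = leaf + 2 ** self.L, []
        while node >= 1:
            nodes.append(node)
            node //= 2
        return nodes

    def access(self, op, block, value=None):
        leaf = self.position[block]
        self.position[block] = random.randrange(self.num_leaves)   # remap immediately
        for node in self._path(leaf):           # read the whole path into the stash
            for b, v in self.tree[node]:
                self.stash[b] = v
            self.tree[node] = []
        result = self.stash.get(block)
        if op == "write":
            self.stash[block] = value
        for node in self._path(leaf):           # write back, deepest bucket first
            bucket = []
            for b in list(self.stash):
                if len(bucket) == self.Z:
                    break
                if node in self._path(self.position[b]):  # bucket lies on b's path
                    bucket.append((b, self.stash.pop(b)))
            self.tree[node] = bucket
        return result

oram = PathORAM(num_blocks=16)
oram.access("write", 3, "secret")
assert oram.access("read", 3) == "secret"
```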
- Published
- 2018
- Full Text
- View/download PDF
18. A Comparative Study on the Demand Analysis of Follow-Up Teaching Courses of English Curriculum Based on Embedded Wireless Communication Multimedia Aid.
- Author
-
Xue, Wei and Lu, Xiaoxia
- Subjects
ENGLISH language ,COLLEGE curriculum ,ECONOMIC demand ,COMPUTER storage capacity ,MULTIMEDIA communications ,INTERACTIVE whiteboards ,COMPUTER assisted language instruction - Abstract
College English follow-up courses are an important part of "University English Curriculum Teaching Requirements" and college English curriculum reform, and its purpose is to meet the individual needs of learners at different levels. "College English Curriculum Teaching Requirements" pointed out "Each school should design its own college English curriculum system according to the curriculum requirements and the college English teaching objectives of the school, while ensuring the improvement of students' language proficiency, it should be conducive to students' personalized learning. to meet their respective development requirements." With the rise and development of computer technology, human society has ushered in the fifth information technology revolution. In just a few decades, people's life and production methods have undergone tremendous changes. Computer information technology has been widely used in military, medical, commercial, education, and industrial fields. Multimedia-assisted teaching has incomparable advantages in these situations. The use of network, computer, and multimedia has made the English classroom teaching methods changed essentially. Due to the huge storage capacity of the computer, it can use graphics, sound, and animation to express, organize, and store the corresponding text knowledge materials, so that the sound, image, and text can truly coexist, achieving an effective increase in the amount of information dissemination and enhancing stimulus intensity of information so as to make the students' enthusiasm for learning knowledge improved. It is precisely because of the powerful information dissemination ability, information processing ability, and convenient and fast interactive form of multimedia itself that it can be used in teaching to enrich the way of classroom teaching activities and make classroom teaching methods no longer single, which can make students better complete the transition from concrete image thinking to abstract logical thinking. The smooth development of college English follow-up courses will continuously improve the English quality of our talents. Learners of different majors have different needs for follow-up college English teaching. Only when students of different majors adopt different teaching courses can they fully reflect the individuality and suitability of the courses. In order to understand the specific needs and views of students of different majors on the courses, students of different majors are made to adopt different teaching courses to get personalized and appropriate learning experience. We distributed questionnaires to senior students at a certain university and adopted Dudley-Evans and St John's needs analysis method for the survey results. It is found that liberal arts students are looking forward to the language application and language skills modules, and their satisfaction with these two modules is as high as 86.23%. The science students and engineering students are more satisfied with the two modules of professional English and language application. Among them, the satisfaction of science students on these two modules is 78.63%, and the satisfaction of engineering students is as high as 92.38%. Students of different majors have a positive attitude towards English follow-up courses, but there is a big difference between the courses offered by universities and the teaching mode. 
For example, 85.62% of liberal arts students are satisfied with the recognition of the course mode, and only 14.38% are dissatisfied; 66.3% of science students are satisfied with the curriculum model, 33.7% are dissatisfied; 66.45% of engineering students are satisfied with the curriculum, and 33.55% are dissatisfied. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Access probability optimization for streaming media transmission in heterogeneous cellular networks.
- Author
-
Jia, Jie, Xia, Linjiao, Ji, Pengshuo, Chen, Jian, and Wang, Xingwei
- Subjects
- *
STREAMING media , *GENETIC algorithms , *PROBABILITY theory , *COMPUTER storage capacity , *MATHEMATICAL optimization - Abstract
FemtoCaching technology, aiming at maximizing the access probability of streaming media transmission in heterogeneous cellular networks, is investigated in this paper. Firstly, five kinds of streaming media deployment schemes are proposed based on the network topology and the relationship between users and streaming media. Secondly, a matching algorithm for adaptive streaming media deployment is proposed, where the FemtoCaching can be adjusted dynamically. Thirdly, a joint problem is formulated combined with the channel assignment, the power allocation, and the caching deployment. To address this problem, we propose a joint optimization algorithm combining matching algorithm and genetic algorithm to maximize the access probability of streaming media transmission. Simulation experiments demonstrate that: (1) the average access probability of all users accessing streaming media in the network based on the proposed algorithm compared with recent works can be greatly improved, and (2) the performance increases with increasing the number of channels and the storage capacity of micro base stations, but decreases with increasing the number of users. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Big Data Analysis and Application of Liver Cancer Gene Sequence Based on Second-Generation Sequencing Technology.
- Author
-
Xiao, Chaohui, Wang, Fuchuan, Jia, Tianye, Pan, Liru, and Wang, Zhaohai
- Subjects
- *
BIG data , *CANCER genes , *COMPUTER storage capacity , *LIVER cancer , *LIVER analysis - Abstract
In big data analysis with the rapid improvement of computer storage capacity and the rapid development of complex algorithms, the exponential growth of massive data has also made science and technology progress with each passing day. Based on omics data such as mRNA data, microRNA data, or DNA methylation data, this study uses traditional clustering methods such as kmeans, K-nearest neighbors, hierarchical clustering, affinity propagation, and nonnegative matrix decomposition to classify samples into categories, obtained: (1) The assumption that the attributes are independent of each other reduces the classification effect of the algorithm to a certain extent. According to the idea of multilevel grid, there is a one-to-one mapping from high-dimensional space to one-dimensional. The complexity is greatly simplified by encoding the one-dimensional grid of the hierarchical grid. The logic of the algorithm is relatively simple, and it also has a very stable classification efficiency. (2) Convert the two-dimensional representation of the data into the one-dimensional representation of the binary, realize the dimensionality reduction processing of the data, and improve the organization and storage efficiency of the data. The grid coding expresses the spatial position of the data, maintains the original organization method of the data, and does not make the abstract expression of the data object. (3) The data processing of nondiscrete and missing values provides a new opportunity for the identification of protein targets of small molecule therapy and obtains a better classification effect. (4) The comparison of the three models shows that Naive Bayes is the optimal model. Each iteration is composed of alternately expected steps and maximal steps and then identified and quantified by MS. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Making and Breaking Data Security With Quantum Machines.
- Author
-
KHADER, DALIA and SIDDIQI, HUSNA
- Subjects
QUANTUM cryptography ,DATA security ,QUANTUM computers ,DIGITAL communications ,PUBLIC key cryptography ,BELL'S theorem ,COMPUTER storage capacity ,QUBITS - Abstract
The article focuses on the role of quantum mechanics as an emerging trend in enhancing data privacy and security. Topics include the definition of quantum computing and its connection with cybersecurity, the challenges involved in designing cryptographic schemes based on quantum computing, and the importance of developing quantum-resistant cryptography with mathematical orientation.
- Published
- 2022
22. Investigating the origins of high multilevel resistive switching in forming free Ti/TiO2-x-based memory devices through experiments and simulations.
- Author
-
Bousoulas, P., Giannopoulos, I., Asenov, P., Karageorgiou, I., and Tsoukalas, D.
- Subjects
- *
NONVOLATILE random-access memory , *COMPUTER storage capacity , *SWITCHING circuits , *STOCHASTIC analysis , *COMPUTER simulation - Abstract
Although multilevel capability is probably the most important property of resistive random access memory (RRAM) technology, it is vulnerable to reliability issues due to the stochastic nature of conducting filament (CF) creation. As a result, the various resistance states cannot be clearly distinguished, which leads to memory capacity failure. In this work, due to the gradual resistance switching pattern of TiO2-x-based RRAM devices, we demonstrate at least six resistance states with distinct memory margin and promising temporal variability. It is shown that the formation of small CFs with high density of oxygen vacancies enhances the uniformity of the switching characteristics in spite of the random nature of the switching effect. Insight into the origin of the gradual resistance modulation mechanisms is gained by the application of a trap-assisted-tunneling model together with numerical simulations of the filament formation physical processes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
23. Design of a Resilient, High-Throughput, Persistent Storage System for the ATLAS Phase-II DAQ System.
- Author
-
Abed Abud, Adam, Bonaventura, Matias, Farina, Edoardo, and Le Goff, Fabrice
- Subjects
- *
LARGE Hadron Collider , *DATA flow computing , *COMPUTER storage capacity , *DATA structures , *NUCLEAR physics experiments - Abstract
The ATLAS experiment will undergo a major upgrade to take advantage of the new conditions provided by the upgraded High-Luminosity LHC. The Trigger and Data Acquisition system (TDAQ) will record data at unprecedented rates: the detectors will be read out at 1 MHz generating around 5 TB/s of data. The Dataflow system (DF), a component of TDAQ, introduces a novel design: readout data are buffered on persistent storage while the event filtering system analyses them to select 10000 events per second for a total recorded throughput of around 60 GB/s. This approach allows for decoupling the detector activity from the event selection process. New challenges then arise for DF: designing and implementing a distributed, reliable, persistent storage system supporting several TB/s of aggregated throughput while providing tens of PB of capacity. In this paper we first describe some of the challenges that DF is facing: data safety with persistent storage limitations, indexing of data at high granularity in a highly distributed system, and high-performance management of storage capacity. Then the ongoing R&D to address each of them is presented and the performance achieved with a working prototype is shown. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. FLIGHT DELAY PREDICTION BASED WITH MACHINE LEARNING.
- Author
-
Hatıpoğlu, Irmak, Tosun, Ömür, and Tosun, Nedret
- Subjects
FLIGHT delays & cancellations (Airlines) ,MACHINE learning ,COMPUTER storage capacity ,CUSTOMER satisfaction ,AERONAUTICAL flights - Abstract
Background: The delay of a planned flight causes many undesirable consequences, such as added cost, reduced customer satisfaction, and environmental pollution. There is only one way to prevent these problems before they occur, and that is to know which flights will be delayed. The aim of this study is to predict delayed flights. For this, the use of machine learning techniques, which have become widespread with the development of computer capacities and data storage systems, is preferred. Methods: Estimations are made with three up-to-date gradient-boosting techniques from machine learning: XGBoost, LightGBM, and CatBoost. A Bayesian technique is used for hyper-parameter tuning. In addition, the Synthetic Minority Over-Sampling Technique (SMOTE) is also used, as the majority of flights are on time and delayed flights, which constitute a minority class, may adversely affect the results. The results are analyzed and shared with and without SMOTE. Results: As a consequence of the application, which was run on a data set containing all of an international airline's flights [18148 flights] for a year, it was discovered that flights may be predicted with high accuracy. Conclusions: The application of machine learning techniques to anticipate flight delays is new, but it has a lot of potential. Companies will be able to avert problems before they develop if delays, which can generate plenty of issues, are correctly estimated. As a result, concrete advantages such as lower costs and higher customer satisfaction will emerge. Improvements will be made at the most vulnerable place in the aviation business. [ABSTRACT FROM AUTHOR]
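A minimal sketch of the modelling pipeline described: gradient boosting plus SMOTE oversampling of the minority (delayed) class. The features, labels, and hyper-parameters below are placeholders, and the Bayesian hyper-parameter search used in the paper is omitted.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# placeholder features (departure hour, carrier id, weather code, ...) and labels
rng = np.random.default_rng(0)
X = rng.random((2000, 8))
y = (rng.random(2000) < 0.15).astype(int)        # ~15% delayed: imbalanced classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minority

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_bal, y_bal)
print("held-out accuracy:", model.score(X_te, y_te))
```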
- Published
- 2022
- Full Text
- View/download PDF
25. TOONTRACK Stockholm & Stories SDX Expansions.
- Author
-
Schmidt, Roland
- Subjects
COMPUTER storage capacity ,ARCHITECTURAL acoustics ,PERCUSSION instruments ,DRUM set - Abstract
Toontrack's SDX expansion packs for their Superior Drummer suite offer extensive and high-quality drum sampling. The Stockholm SDX was recorded at Riksmixningsverket studios in Sweden, providing a range of drum kits with acoustic color and the ability to blend ambient mics. The Stories SDX was recorded at Power Station studio complex in New England, capturing punch and depth with a variety of drum kits and percussion instruments. Both SDX packs come with MIDI grooves and production-ready mixes, offering versatile options for creating drum tracks. These packs require Toontrack's Superior Drummer 3 plugin. [Extracted from the article]
- Published
- 2024
26. Optimal selection of application loading on cloud services.
- Author
-
Sharma, Sanjay and Sharma, Bharat
- Subjects
CLOUD computing ,COMPUTER storage capacity ,MOTHERBOARDS ,GOAL programming ,SOFTWARE as a service ,NONLINEAR programming ,INFORMATION technology ,BUSINESS expansion - Abstract
There is a need for identifying computing power hours and storage utilisation along with total cost optimisation. The present paper focuses on optimal selection of application loading process on the cloud services considering relevant factors. Using this model, small companies that plan to develop applications and use cloud services may determine cost and optimal selection of service by taking into account its own as well as provider’s perspectives into consideration. The paper consists of four stages. First stage deals with the estimation of required computing power hours in the planned duration. Second stage relates to the calculation of storage capacity. Third stage corresponds to the formation of multi objective goal programme to prioritise computing power hours and storage utilisation requirements of applications and optimise total cost of usage. Finally, fourth stage deals with the mixed integer non-linear programming to minimise total cost considering other variable factors. For small application developers who cannot afford in-house IT infrastructure, we find an optimal framework for allocating number of applications on cloud services such as Infrastructure as a Service and Platform as a Service. For ease in planning, the user company can quickly decide corresponding number of applications at appropriate services, and at the same time can reduce overall usage cost. With the help of proposed method, the service provider may keep a suitable inventory of cores to provide backup computing power and storage capacity. This adds value to developers also, as company can plan for their operations corresponding to the business growth. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
27. Navigating the Constantly Evolving Cybersecurity Threat Landscape.
- Author
-
Squeo, John and Sukthankar, Vikram
- Subjects
- *
INTERNET security , *OPEN source software , *PHYSICIANS' attitudes , *DIGITAL health , *COMPUTER storage capacity - Abstract
The article highlights how the cyberattacks on health systems and medical practices that are omnipresent in the headlines can tempt overextended administrators and executives to reach a point at which the gravity of the situation feels overwhelming.
- Published
- 2022
28. The Vehicle Routing Problem: State-of-the-Art Classification and Review.
- Author
-
Tan, Shi-Yi and Yeh, Wei-Chang
- Subjects
VEHICLE routing problem ,COMPUTER storage capacity ,TRANSPORTATION planning ,CLASSIFICATION - Abstract
Transportation planning has been established as a key topic in the literature and social production practices. An increasing number of researchers are studying vehicle routing problems (VRPs) and their variants considering real-life applications and scenarios. Furthermore, with the rapid growth in the processing speed and memory capacity of computers, various algorithms can be used to solve increasingly complex instances of VRPs. In this study, we analyzed recent literature published between 2019 and August of 2021 using a taxonomic framework. We reviewed recent research according to models and solutions, and divided models into three categories of customer-related, vehicle-related, and depot-related models. We classified solution algorithms into exact, heuristic, and meta-heuristic algorithms. The main contribution of our study is a classification table that is available online as Appendix A. This classification table should enable future researchers to find relevant literature easily and provide readers with recent trends and solution methodologies in the field of VRPs and some well-known variants. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. REAL-TIME STREAMING TECHNOLOGY AND ANALYTICS FOR VALUE CREATION.
- Author
-
Shim, J. P., O'Leary, Daniel E., and Nisar, Karan
- Subjects
VALUE creation ,COMPUTER storage capacity ,OPTICAL disks ,HIGH performance computing ,STREAMING technology - Abstract
Real-time streaming technology and analytics capabilities are growing rapidly, whereas a great number of firms and organizations are considering implementing this technology to meet rising business demands. Traditional computer infrastructures for high performance computing and big data analytics are not able to conduct such tasks. To tackle this obstacle, rapid analysis of streaming data requires significant amounts of computer and data storage capacity, which requires real-time streaming technology and analytics. Real-time streaming has become a crucial component where tremendous volumes of data from thousands of sensors and other information sources are processed so that a company extracting the copious amount of real-time data can react to changing conditions instantaneously. Streaming technology and analytics generate real value from real-time data. This paper presents the current architecture, status, and trend of real-time streaming technology and analytics. It discusses value creation of streaming analytics. The paper describes continuous intelligence and value for streaming analytics and the current architecture and status of streaming technology and analytics; showcases the leading vendors for streaming technology and analytics; discusses various real-world use cases and benefits across various industries; analyzes value creation of streaming analytics; and proposes several research issues, along with challenges and recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. The Failures of Mathematical Anti-Evolutionism.
- Author
-
ROSENHOUSE, JASON
- Subjects
COMPUTER storage capacity ,EVOLUTIONARY theories ,MATHEMATICAL notation ,PARAPSYCHOLOGY ,BIOLOGICAL adaptation - Abstract
The article reports that Mathematics has long had a place in the arsenal of anti-evolutionism, but in recent years it has become especially prominent. This is understandable because mathematics affords the possibility of an "in-principle" argument against evolution. It is like representing a defendant at a criminal trial. One approach is to challenge each piece of evidence against your client individually in the hope of creating reasonable doubt.
- Published
- 2022
31. EOS architectural evolution and strategic development directions.
- Author
-
Doglioni, C., Kim, D., Stewart, G.A., Silvestris, L., Jackson, P., Kamleh, W., Bitzes, Georgios, Luchetti, Fabio, Manzi, Andrea, Patrascoiu, Mihai, Peters, Andreas Joachim, Simon, Michal Kamil, and Sindrilaru, Elvin Alin
- Subjects
- *
NUCLEAR physics , *COMPUTER software , *COMPUTER storage capacity , *DATA analysis , *DATA management - Abstract
EOS [1] is the main storage system at CERN providing hundreds of PB of capacity to both physics experiments and also regular users of the CERN infrastructure. Since its first deployment in 2010, EOS has evolved and adapted to the challenges posed by ever-increasing requirements for storage capacity, user-friendly POSIX-like interactive experience and new paradigms like collaborative applications along with sync and share capabilities. Overcoming these challenges at various levels of the software stack meant coming up with a new architecture for the namespace subsystem, completely redesigning the EOS FUSE module and adapting the rest of the components like draining, LRU engine, file system consistency check and others, to ensure a stable and predictable performance. In this paper we detail the issues that triggered all these changes along with the software design choices that we made. In the last part of the paper, we move our focus to the areas that need immediate improvements in order to ensure a seamless experience for the end-user along with increased over-all availability of the service. Some of these changes have far-reaching effects and are aimed at simplifying both the deployment model but more importantly the operational load when dealing with (non/)transient errors in a system managing thousands of disks. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. The DIRAC interware: current, upcoming and planned capabilities and technologies.
- Author
-
Doglioni, C., Kim, D., Stewart, G.A., Silvestris, L., Jackson, P., Kamleh, W., Stagni, Federico, Tsaregorodtsev, Andrei, Sailer, André, and Haen, Christophe
- Subjects
- *
DISTRIBUTED computing , *PARTICLE physics , *NUCLEAR physics , *COMPUTER storage capacity , *WORKLOAD of computers - Abstract
Efficient access to distributed computing and storage resources is mandatory for the success of current and future High Energy and Nuclear Physics Experiments. DIRAC is an interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for the Workload, Data and Production Management tasks of large scientific communities. A single DIRAC installation provides a complete solution for the distributed computing of one, or more than one collaboration. The DIRAC Workload Management System (WMS) provides a transparent, uniform interface for managing computing resources. The DIRAC Data Management System (DMS) offers all the necessary tools to ensure data handling operations: it supports transparent access to storage resources based on multiple technologies, and is easily expandable. Distributed Data management can be performed, also using third party services, and operations are resilient with respect to failures. DIRAC is highly customizable and can be easily extended. For these reasons, a vast and heterogeneous set of scientific collaborations have adopted DIRAC as the base for their computing models. Users from different experiments can interact with the system in different ways, depending on their specific tasks, expertise level and previous experience using command line tools, python APIs or Web Portals. The requirements of the diverse DIRAC user communities and hosting infrastructures triggered multiple developments to improve the system usability: examples include the adoption of industry standard authorization and authentication infrastructure solutions, the management of diverse computing resources (cloud, HPC, GPGPU, etc.), the handling of high-intensity work and data flows, but also advanced monitoring and accounting using no-SQL based solutions and message queues. This contribution will highlight DIRAC's current, upcoming and planned capabilities and technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
33. Distributed resources of Czech WLCG Tier-2 center.
- Author
-
Doglioni, C., Kim, D., Stewart, G.A., Silvestris, L., Jackson, P., Kamleh, W., Adam, Martin, Adamová, Dagmar, Chudoba, Jiří, Mikula, Alexandr, Svatoš, Michal, Uhlířová, Jana, and Vokáč, Petr
- Subjects
- *
DISTRIBUTED computing , *NUCLEAR physics , *COMPUTER storage capacity , *CACHE memory , *DATA analysis - Abstract
The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences provides compute and storage capacity to several physics experiments. Most resources are used by two LHC experiments, ALICE and ATLAS. In the WLCG, which coordinates computing activities for the LHC experiments, the computing center is Tier-2. The rest of computing resources is used by astroparticle experiments like the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA) or particle experiments like NOvA and DUNE. Storage capacity is distributed to several locations. DPM servers used by the ATLAS and the PAO are all in the same server room. ALICE uses several xrootd servers located at the Nuclear Physics Institute in Rez, about 10 km away. The storage capacity for the ATLAS and the PAO is extended by resources of the CESNET (the Czech National Grid Initiative representative) located in Ostrava, more than 100 km away from the CC IoP. Storage is managed by dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources using the standard ATLAS tools in the same way as the local storage without noticing this geographical distribution. The computing center provides about 8k CPU cores which are used by the experiments based on fair-share. The CPUs are distributed amongst server rooms in the Institute of Physics, in the Faculty of Mathematics and Physics of the Charles University, and in CESNET. For the ATLAS experiment, the resources are extended by opportunistic usage of the Salomon HPC provided by the Czech national HPC center IT4Innovations in Ostrava. The HPC provides 24-core nodes. The maximum number of allowed single-node jobs in the batch system is 200. The contribution of the HPC to the CPU consumption by the ATLAS experiment is about 15% on average. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
34. Idle-Time Garbage-Collection Scheduling.
- Author
-
DEGENBAEV, ULAN, EISINGER, JOCHEN, ERNST, MANFRED, MCILROY, ROSS, and PAYER, HANNES
- Subjects
- *
JAVASCRIPT programming language , *FRAMES (Computer science) , *COMPUTER storage capacity , *INTERNET users - Abstract
The article provides information on an approach used by V8, the JavaScript engine in the Chrome internet browser, to schedule garbage-collection pauses when Chrome is idle. It details how this approach can reduce user-visible dropped frames, or jank, on real-world websites and lower memory consumption.
- Published
- 2016
- Full Text
- View/download PDF
35. 3D anisotropic structure of the Japan subduction zone.
- Author
-
Zewei Wang and Dapeng Zhao
- Subjects
- *
SEISMIC anisotropy , *SUBDUCTION , *SUBDUCTION zones , *SPACE sciences , *INTERNAL structure of the Earth , *COMPUTER storage capacity - Abstract
The article offers information on the three-dimensional anisotropic structure of the Japan subduction zone. It identifies how mantle materials flow and how intraslab fabrics align in subduction zones as two essential issues for clarifying material recycling between Earth's interior and surface.
- Published
- 2021
- Full Text
- View/download PDF
36. Improvement of Radiant Heat Efficiency of the Radiant Tube Used for Continuous Annealing Line by Application of Additive Manufacturing Technology.
- Author
-
Ha, Won, Ha, Jaehyun, Roh, Yonghoon, and Lee, Youngseog
- Subjects
COMPUTER storage capacity ,HONEYCOMB structures ,HEAT ,STEEL mills ,CURVED surfaces - Abstract
This study presents the application of additive manufacturing (AM) technology to a W-type INCONEL radiant tube (RT) used to improve its radiant heat efficiency. Appropriate dimensions of honeycomb structure were determined from finite element (FE) analysis and the resulting increase in radiant heat was computed. The honeycomb patterns on the RT surfaces were printed using the directed energy deposition (DED) method. Radiant heat efficiency of a prototype RT with a honeycomb pattern printed was examined in a pilot furnace emulating the continuous annealing line (CAL). Finally, soundness of the prototype RT was tested on-site on the actual the CAL of No. 3 CGL in POSCO Gwangyang Steel Works. The results revealed that partial FE analysis, which predicts the amount of radiant heat by partially modeling the RT structure rather than modeling the entire RT structure, is suitable for overcoming the limitation of the computer memory capacity and calculating the design parameters of honeycomb patterns. The DED is suitable for printing honeycomb patterns on RT with large and curved surfaces. The average amount of gas consumed to maintain 780 °C and 880 °C for 1440 min was reduced by 10.42% and 12.31%, respectively. There were no cracks and no gas leaks on the RT surface in an annual inspection over three years. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
37. A hybrid approach for load balancing in cloud computing.
- Author
-
Roomi, Ghsuoon Badr and Arif, Khaldun. I.
- Subjects
CLOUD computing ,INFORMATION technology ,VIRTUAL machine systems ,DISTRIBUTED computing ,COMPUTER storage capacity - Abstract
Cloud computing is one of the most promising technical developments of recent years and has emerged as a dominant and transformational paradigm. It can be considered a vital form of information technology that allows the delivery of services to users via the Internet upon request and based on immediate payment. One of the main challenges and important fields for research in the cloud computing environment is load balancing, which has become an important point for stability and good system performance. Therefore, the main goal is to establish an effective load balancing algorithm for task scheduling. In this paper, we introduce the Hybrid Algorithm for Load Balancing (HALB), which aims to balance the load among virtual machines by balancing the percentage of RAM usage in each VM. The resulting percentages were close, and none of them reached the overload condition. The proposed hybrid algorithm also reduces average waiting time and turnaround time. The hybrid algorithm relies on two types of scheduling algorithms, one dependent on the other, namely the lottery algorithm and the Shortest Job First algorithm. A specific mechanism has been implemented to allocate the tasks resulting from scheduling with each algorithm separately; when the total size of the task data in each VM is calculated, the results show that the volumes of data allocated to the VMs converge across all VMs. [ABSTRACT FROM AUTHOR]
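The abstract does not spell out how the two schedulers interact, so the sketch below is only one plausible reading, not the authors' algorithm: a lottery weighted by each VM's free RAM picks the target VM, and Shortest Job First then orders each VM's queue. All names, sizes, and runtimes are made up.

```python
import random

def pick_vm_by_lottery(vms):
    """Lottery step: a VM holds as many tickets as it has free RAM (MB)."""
    tickets = {vm: max(info["ram_total"] - info["ram_used"], 0) for vm, info in vms.items()}
    draw = random.uniform(0, sum(tickets.values()))
    for vm, t in tickets.items():
        draw -= t
        if draw <= 0:
            return vm
    return vm                                    # fallback (all tickets zero)

def schedule(tasks, vms):
    """Assign tasks: the lottery chooses the VM, SJF orders each VM's queue."""
    queues = {vm: [] for vm in vms}
    for name, size_mb, runtime in tasks:
        vm = pick_vm_by_lottery(vms)
        queues[vm].append((name, size_mb, runtime))
        vms[vm]["ram_used"] += size_mb           # so later draws see the new load
    for vm in queues:                            # shortest job first within each VM
        queues[vm].sort(key=lambda t: t[2])
    return queues

vms = {"vm1": {"ram_total": 8192, "ram_used": 1024},
       "vm2": {"ram_total": 8192, "ram_used": 4096}}
tasks = [("t1", 512, 30), ("t2", 256, 5), ("t3", 1024, 12)]
print(schedule(tasks, vms))
```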
- Published
- 2020
- Full Text
- View/download PDF
38. WLCG space accounting in the SRM-less world.
- Author
-
Andreeva, Julia, Christidis, Dimitrios, Di Girolamo, Alessandro, Keeble, Oliver, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
- *
ACCOUNTING , *DISTRIBUTED computing , *COMPUTER storage capacity , *DATA analysis , *INFORMATION retrieval - Abstract
The WLCG computing infrastructure provides distributed storage capacity hosted at the geographically dispersed computing sites. In order to effectively organize storage and processing of the LHC data, the LHC experiments require a reliable and complete overview of the storage capacity in terms of the occupied and free space, the storage shares allocated to different computing activities, and the possibility to detect "dark" data that occupies space while being unknown to the experiment's file catalogue. The task of the WLCG space accounting activity is to provide such an overview and to assist LHC experiments and WLCG operations to manage storage space and to understand future requirements. Several space accounting solutions which have been developed by the LHC experiments are currently based on Storage Resource Manager (SRM). In the coming years SRM becomes an optional service for sites which do not provide tape storage. Moreover, already now some of the storage implementations do not provide an SRM interface. Therefore, the next generation of the space accounting systems should not be based on SRM. In order to enable possibility for exposing storage topology and space accounting information the Storage Resource Reporting proposal has been agreed between LHC experiments, sites and storage providers. This contribution describes the WLCG storage resource accounting system which is being developed based on Storage Resource Reporting proposal. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. NAPEL: Near-Memory Computing Application Performance Prediction via Ensemble Learning.
- Author
-
Singh, Gagandeep, Gomez-Luna, Juan, Mariani, Giovanni, Oliveira, Geraldo F., Corda, Stefano, Stuijk, Sander, Mutlu, Onur, and Corporaal, Henk
- Subjects
ENERGY consumption ,AUTOMATION design & construction ,COMPUTER storage capacity ,ELECTRONIC data processing ,RANDOM forest algorithms - Abstract
The cost of moving data between the memory/storage units and the compute units is a major contributor to the execution time and energy consumption of modern workloads in computing systems. A promising paradigm to alleviate this data movement bottleneck is near-memory computing (NMC), which consists of placing compute units close to the memory/storage units. There is substantial research effort that proposes NMC architectures and identifies workloads that can benefit from NMC. System architects typically use simulation techniques to evaluate the performance and energy consumption of their designs. However, simulation is extremely slow, imposing long times for design space exploration. In order to enable fast early-stage design space exploration of NMC architectures, we need high-level performance and energy models. We present NAPEL, a high-level performance and energy estimation framework for NMC architectures. NAPEL leverages ensemble learning to develop a model that is based on microarchitectural parameters and application characteristics. NAPEL training uses a statistical technique, called design of experiments, to collect representative training data efficiently. NAPEL provides early design space exploration 220x faster than a state-of-the-art NMC simulator, on average, with error rates of 8.5% and 11.6% for performance and energy estimations, respectively, compared to the NMC simulator. NAPEL is also capable of making accurate predictions for previously-unseen applications. [ABSTRACT FROM AUTHOR]
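NAPEL's full pipeline (design-of-experiments sampling, microarchitecture-aware feature construction, separate performance and energy targets) is not reproduced here; the sketch only illustrates its ensemble-learning core, a random-forest regressor mapping configuration and application features to a predicted runtime, trained on synthetic placeholder data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# placeholder features: e.g. core count, DRAM banks, cache size, memory intensity, ...
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 6))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] ** 2 + rng.normal(scale=0.05, size=300)  # fake runtimes

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
new_config = rng.uniform(size=(1, 6))            # an unseen design point
print("predicted runtime:", model.predict(new_config)[0])
```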
- Published
- 2019
- Full Text
- View/download PDF
40. An Efficient Spare-Line Replacement Scheme to Enhance NVM Security.
- Author
-
Jie XU, Dan Feng, Yu Hua, Fangting Huang, Wen Zhou, Wei Tong, and Jingning Liu
- Subjects
NONVOLATILE memory ,PHASE change memory ,MALWARE ,STANDARD deviations ,COMPUTER storage capacity - Abstract
Non-volatile memories (NVMs) are vulnerable to serious threat due to the endurance variation. We identify a new type of malicious attack, called Uniform Address Attack (UAA), which performs uniform and sequential writes to each line of the whole memory, and wears out the weaker lines (lines with lower endurance) early. Experimental results show that the lifetime of NVMs under UAA is reduced to 4.1% of the ideal lifetime. To address such attack, we propose a spare-line replacement scheme called Max-WE (Maximize the Weak lines' Endurance). By employing weak-priority and weak-strong-matching strategies for spare-line allocation, Max-WE is able to maximize the number of writes that the weakest lines can endure. Furthermore, Max-WE reduces the storage overhead of the mapping table by 85% through adopting a hybrid spare-line mapping scheme. Experimental results show that Max-WE can improve the lifetime by 9.5X with the spare-line overhead and mapping overhead as 10% and 0.016% of the total space respectively. [ABSTRACT FROM AUTHOR]
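The abstract names the weak-priority and weak-strong-matching strategies without giving their details, so the following is only one plausible reading for illustration: lines needing replacement are served weakest-first, and the weakest of them are paired with the strongest spare lines. Line identifiers and endurance figures are made up.

```python
def assign_spares(weak_lines, spare_lines):
    """Pair the weakest data lines with the strongest spare lines.

    weak_lines  : {line_id: remaining endurance}  (lines that need replacement)
    spare_lines : {spare_id: endurance}
    Returns {line_id: spare_id}; if spares run out, the strongest weak lines wait.
    """
    weakest_first = sorted(weak_lines, key=weak_lines.get)               # weak priority
    strongest_first = sorted(spare_lines, key=spare_lines.get, reverse=True)
    return dict(zip(weakest_first, strongest_first))                     # weak-strong match

print(assign_spares({10: 3_000, 11: 250_000, 12: 900},
                    {100: 1_000_000, 101: 400_000}))
# {12: 100, 10: 101} -- the two weakest lines get the two strongest spares
```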
- Published
- 2019
- Full Text
- View/download PDF
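The weak-strong-matching idea mentioned above can be illustrated with a toy pairing: sort memory lines from weakest to strongest and spare lines from strongest to weakest, then pair them so the weakest lines receive the most durable spares. This is only a sketch of that intuition under assumed data structures and made-up endurance numbers, not the paper's Max-WE scheme.

```python
# Toy illustration of a weak-strong matching intuition: give the weakest
# memory lines the strongest spare lines. Endurance values are made up and
# this is not the actual Max-WE algorithm.

def weak_strong_matching(line_endurance, spare_endurance):
    """Return a mapping {memory_line_index: spare_line_index}."""
    weak_first = sorted(range(len(line_endurance)), key=lambda i: line_endurance[i])
    strong_first = sorted(range(len(spare_endurance)),
                          key=lambda j: spare_endurance[j], reverse=True)
    # Only as many memory lines can be remapped as there are spare lines.
    return dict(zip(weak_first, strong_first))

if __name__ == "__main__":
    lines = [3_000_000, 10_000_000, 1_000_000, 8_000_000]   # per-line endurance
    spares = [9_000_000, 12_000_000]                        # spare-line endurance
    print(weak_strong_matching(lines, spares))
    # The weakest line (index 2) is paired with the strongest spare (index 1).
```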
41. A 1.17 TOPS/W, 150fps Accelerator for Multi-Face Detection and Alignment.
- Author
-
Huiyu Mo, Leibo Liu, Wenping Zhu, Qiang Li, Hong Liu, Wenjing Hu, Yao Wang, and Shaojun Wei
- Subjects
HUMAN facial recognition software , COMPUTER storage capacity , HARDWARE , DEEP learning , COMPUTATIONAL complexity - Abstract
Face detection and alignment are highly correlated, computation-intensive tasks, yet neither is flexibly supported by any facial-oriented accelerator. This work proposes the first unified accelerator for multi-face detection and alignment, along with optimizations of the multi-task cascaded convolutional networks algorithm, to implement both multi-face detection and alignment. First, clustering non-maximum suppression is proposed to significantly reduce intersection-over-union computation and eliminate the sorting process that interferes with hardware efficiency, bringing a 16.0% speed-up without any loss. Second, a new pipeline architecture is presented to implement the proposal network in a more computation-efficient manner, with 41.7% less multiplier usage and a 38.3% decrease in memory capacity compared with a similar method. Third, a batch schedule mechanism is proposed to improve the hardware utilization of the fully-connected layer by 16.7% on average with a variable number of inputs in batch processing. Based on the TSMC 28 nm CMOS process, this accelerator takes only 6.7 ms at 400 MHz to simultaneously process 5 faces per image and achieves 1.17 TOPS/W power efficiency, which is 54.8x higher than the state-of-the-art solution. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
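The clustering non-maximum suppression above is the authors' hardware-oriented variant; it is not reproduced here. For context only, the sketch below shows the conventional greedy, sort-based NMS with intersection-over-union that such a variant improves upon, making explicit the IoU computation and sorting step the paper targets. The box format and the 0.5 threshold are assumptions.

```python
# Baseline greedy NMS with IoU (the sort-based procedure that hardware-oriented
# NMS variants try to avoid). Boxes are (x1, y1, x2, y2, score); the 0.5
# threshold is an arbitrary illustrative choice.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def greedy_nms(boxes, threshold=0.5):
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept

if __name__ == "__main__":
    candidates = [(10, 10, 60, 60, 0.9), (12, 12, 62, 62, 0.8),
                  (100, 100, 150, 150, 0.7)]
    print(greedy_nms(candidates))  # the overlapping lower-score box is suppressed
```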
42. Wonder stuff.
- Author
-
Cartwright, Jon, Adee, Sally, Webb, Richard, Crystall, Ben, and Ridgway, Andy
- Subjects
- *
GREEN technology , *RESEARCH & development , *TECHNOLOGICAL innovations & the environment , *FUSED silica , *COMPUTER storage capacity , *CHITIN , *BIODEGRADABLE plastics , *HYDROGEN storage , *BERYLLIUM , *NANOSTRUCTURED materials - Abstract
The article discusses seven revolutionary green technologies. Topics include a memory material made of a type of glass called fused quartz, developed by physicist Peter Kazansky and colleagues, which offers a higher storage density than an earlier fused-quartz method developed by the Japanese technology company Hitachi; biodegradable plastics derived from the organic polymer chitin, called Shrilk, developed by Donald Ingber and Javier Fernandez; and beryllium nanofoams, a hydrogen storage material developed by Fedor Naumkin and colleagues.
- Published
- 2014
- Full Text
- View/download PDF
43. Edging Machinery.
- Author
-
Wilkinson, Peter
- Subjects
BAR codes , COMPUTER storage capacity , MACHINERY - Published
- 2021
44. Leaking Space.
- Author
-
MITCHELL, NEIL
- Subjects
- *
PROGRAMMING languages , *COMPUTER programming , *SYSTEMS programming (Computer science) , *COMPUTER storage capacity , *COMPUTER storage devices , *COMPUTER software , *COMPUTER science - Abstract
The article discusses space leaks, which occur when a computer program uses more of the computer's memory than is necessary. It discusses controlling a program's evaluation order to shorten the time between allocating and discarding memory and thereby eliminate space leaks. Lazy evaluation, in which evaluation of an expression is delayed until its value is needed, and closures, functions combined with their environments, are both considered as causes of space leaks and are attributed to lazy functional programming languages. (A rough illustrative sketch follows this record.)
- Published
- 2013
- Full Text
- View/download PDF
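The article itself concerns lazy functional languages, which are not shown here. As a rough analogy only, the basic shape of a space leak, holding on to memory longer than necessary, can be suggested in Python: materializing an entire sequence before reducing it keeps every element alive at once, whereas streaming lets each value be discarded as soon as it has been consumed.

```python
# Rough analogy to a space leak: the first version materializes the whole
# sequence before summing it, keeping every element in memory at once;
# the second consumes values one at a time, so each can be discarded
# immediately. (The article itself discusses lazy functional languages.)

def sum_with_leaky_memory(n):
    values = [i * i for i in range(n)]   # all n values live simultaneously
    return sum(values)

def sum_streaming(n):
    return sum(i * i for i in range(n))  # one value alive at a time

if __name__ == "__main__":
    n = 1_000_000
    assert sum_with_leaky_memory(n) == sum_streaming(n)
    print("both compute", sum_streaming(n))
```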
45. Viewpoint on the Application of Virtual Microscopy in Teaching at a Medical College in Saudi Arabia.
- Author
-
Akhund, Shahid
- Subjects
- *
MEDICAL microscopy , *MEDICAL schools , *COLLEGE teaching , *STUDENT engagement , *COMPUTER storage capacity , *SIMULATED patients - Published
- 2023
- Full Text
- View/download PDF
46. New Technology 2020.
- Subjects
TECHNOLOGICAL innovations , POWER transmission , FOREST products industry , LOGGING , COMPUTER storage capacity , COMPUTER firmware - Published
- 2020
47. Associating working memory capacity and code change ordering with code review performance.
- Author
-
Baum, Tobias, Schneider, Kurt, and Bacchelli, Alberto
- Subjects
SHORT-term memory , COMPUTER software quality control , QUALITY assurance , COGNITIVE load , COMPUTER storage capacity - Abstract
Change-based code review is a software quality assurance technique that is widely used in practice. Better understanding what influences performance in code reviews and finding ways to improve it can therefore have a large impact. In this study, we examine the association of working memory capacity and cognitive load with code review performance, and we test the predictions of a recent theory regarding improved code review efficiency with certain orderings of code change parts. We perform a confirmatory experiment with 50 participants, mostly professional software developers. The participants performed code reviews on one small and two larger code changes from an open source software system into which we had seeded additional defects. We measured their efficiency and effectiveness in defect detection, their working memory capacity, and several potential confounding factors. We find a moderate association between working memory capacity and the effectiveness of finding delocalized defects, influenced by other factors, whereas the association with other defect types is almost non-existent. We also confirm that the effectiveness of reviews is significantly higher for small code changes. We cannot conclude reliably whether the order in which code change parts are presented influences the efficiency of code review. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
48. On a Fractional Version of Haemers’ Bound.
- Author
-
Bukh, Boris and Cox, Christopher
- Subjects
- *
COMPUTER storage capacity , *MATHEMATICAL bounds , *CHANNEL coding , *GRAPH theory , *LINEAR programming , *HAFNIUM - Abstract
In this paper, we present a fractional version of Haemers' bound on the Shannon capacity of a graph, originally due to Blasiak. This bound is a common strengthening of both Haemers' bound and the fractional chromatic number of a graph. We show that this fractional version outperforms any bound on the Shannon capacity that can be attained through Haemers' bound. We also show that this bound is multiplicative, unlike Haemers' bound. [ABSTRACT FROM AUTHOR] (The standard definitions are recalled after this record.)
- Published
- 2019
- Full Text
- View/download PDF
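For context, the standard formulations of the Shannon capacity and of Haemers' minimum-rank bound are recalled below; these are textbook statements, not the paper's fractional refinement, and the field F is left unspecified.

```latex
% Standard definitions (not the paper's fractional refinement).
% Shannon capacity via strong graph powers:
\[
  \Theta(G) \;=\; \sup_{k \ge 1} \alpha\!\left(G^{\boxtimes k}\right)^{1/k}
\]
% Haemers' bound over a field F: minimize rank over matrices that "fit" G,
% i.e., nonzero on the diagonal and zero on non-edges.
\[
  \Theta(G) \;\le\; \min\Bigl\{\operatorname{rank}(M) \;:\; M \in F^{V \times V},\;
  M_{vv} \neq 0 \ \forall v,\;
  M_{uv} = 0 \ \text{whenever } u \neq v \text{ and } uv \notin E(G)\Bigr\}
\]
```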
49. The Capacity of Private Computation.
- Author
-
Sun, Hua and Jafar, Syed Ali
- Subjects
- *
INFORMATION storage & retrieval systems , *COMPUTER storage capacity , *CLIENT/SERVER computing , *DATA privacy , *ELECTRICAL engineering - Abstract
We introduce the problem of private computation, comprising $N$ distributed and non-colluding servers, $K$ independent datasets, and a user who wants to compute a function of the datasets privately, i.e., without revealing to any individual server which function is being computed. This private computation problem is a strict generalization of the private information retrieval (PIR) problem, obtained by expanding the PIR message set (which consists of only independent messages) to also include functions of those messages. The capacity of private computation, $C$, is defined as the maximum number of bits of the desired function that can be retrieved per bit of total download from all servers. We characterize the capacity of private computation for $N$ servers and $K$ independent datasets that are replicated at each server, when the functions to be computed are arbitrary linear combinations of the datasets. Surprisingly, the capacity, $C = \left(1 + 1/N + \cdots + 1/N^{K-1}\right)^{-1}$, matches the capacity of PIR with $N$ servers and $K$ messages. Thus, allowing arbitrary linear computations does not reduce the communication rate compared to pure dataset retrieval. The same insight is shown to hold even for arbitrary non-linear computations when the number of datasets $K \rightarrow \infty$. [ABSTRACT FROM AUTHOR] (A worked example follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
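Plugging numbers into the capacity expression quoted above gives a quick sanity check. The tiny sketch below evaluates $C = (1 + 1/N + \cdots + 1/N^{K-1})^{-1}$ exactly for a few example $(N, K)$ pairs; the pairs themselves are arbitrary illustrative choices.

```python
# Evaluate the capacity formula quoted in the abstract,
# C = (1 + 1/N + ... + 1/N^(K-1))^(-1), for a few example (N, K) pairs.
from fractions import Fraction

def private_computation_capacity(n_servers, k_datasets):
    return 1 / sum(Fraction(1, n_servers**i) for i in range(k_datasets))

if __name__ == "__main__":
    for n, k in [(2, 2), (2, 3), (3, 3)]:
        c = private_computation_capacity(n, k)
        print(f"N={n}, K={k}: C = {c} = {float(c):.4f}")
    # N=2, K=3 gives C = 4/7, matching the PIR capacity with 2 servers
    # and 3 messages, as the abstract notes.
```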
50. An effective online data monitoring and saving strategy for large-scale climate simulations.
- Author
-
Xian, Xiaochen, Archibald, Rick, Mayer, Benjamin, Liu, Kaibo, and Li, Jian
- Subjects
COMPUTER simulation of climate change , BIG data , COMPUTER storage capacity , INFORMATION retrieval , INFORMATION theory - Abstract
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for months to capture changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with high temporal and spatial resolution; however, how to effectively monitor and record climate changes based on these large-scale simulation results, which are produced continuously in real time, remains unresolved. Because writing data to disk is slow, the current practice is to save a snapshot of the simulation results at a constant, slow rate even though the data are generated at a very high speed. This paper proposes an effective online data monitoring and saving strategy over the temporal and spatial domains that takes practical storage and memory capacity constraints into account. Our proposed method intelligently selects and records the most informative extreme values in the raw data generated from real-time simulations so as to better monitor climate changes. [ABSTRACT FROM AUTHOR] (A simplified sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
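The paper's strategy operates jointly over temporal and spatial domains and is not reproduced here. As a much-simplified sketch of the underlying idea only, keeping the most informative extreme values from a fast stream under a fixed memory budget, the snippet below retains the k largest values seen so far with a bounded min-heap. The budget and the simulated data are illustrative assumptions.

```python
# Much-simplified sketch of retaining extreme values from a fast data stream
# under a fixed memory budget: keep only the k largest values seen so far
# using a bounded min-heap. This is not the paper's full spatio-temporal
# monitoring strategy.
import heapq
import random

def record_extremes(stream, budget):
    """Return the `budget` largest values observed in `stream`."""
    kept = []
    for value in stream:
        if len(kept) < budget:
            heapq.heappush(kept, value)
        elif value > kept[0]:
            heapq.heapreplace(kept, value)  # drop the smallest kept value
    return sorted(kept, reverse=True)

if __name__ == "__main__":
    random.seed(0)
    simulated_output = (random.gauss(15.0, 5.0) for _ in range(1_000_000))
    print(record_extremes(simulated_output, budget=10))
```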