182 results on '"Data content"'
Search Results
2. Optimization of Balanced Menu for Pregnant Women in Grobogan-Central Java using Simplex Method
- Author
-
Fitroh Resmi, Pukky Tetralian Bantining Ngastiti, and Nihaya Alivia Coraima Dewi
- Subjects
Vitamin ,Dietary composition ,stunting ,food and beverages ,Toxicology ,chemistry.chemical_compound ,chemistry ,QA1-939 ,Optimal combination ,Data content ,simplex method ,pregnant women ,Mathematics - Abstract
This study aims to determine the optimal balanced dietary composition for pregnant women. The optimization is carried out by forming a linear model with boundary conditions and an objective function, taking as inputs the mother's age, gestational age and nutritional needs; the simplex method is then applied to obtain the weights of food ingredients that must be consumed to achieve balanced nutrition. Seventy-five food combinations were analyzed for groups of pregnant women aged 19-29 years and 30-49 years across the three trimesters, covering staple foods, vegetables (spinach, green mustard, cauliflower, kale, carrots), fruit, vegetable side dishes, nuts, sugar and milk, against the recommended nutritional adequacy rates for water, energy, protein, fat, carbohydrate, fiber, vitamins A, B1, B2, B3 and vitamin C. For both age groups in all three trimesters, combination 55 (rice, kale, watermelon, and tofu) was found to be the optimal combination.
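To make the technique concrete, below is a minimal sketch of a diet problem of this kind posed as a linear program and solved with SciPy's LP solver (which includes a simplex implementation). The ingredient list echoes the paper's optimal combination, but the nutrient values, requirements and objective are illustrative placeholders, not the paper's data.

```python
# Minimal diet linear program: minimize total food weight subject to nutrient adequacy.
from scipy.optimize import linprog

foods = ["rice", "kale", "watermelon", "tofu"]   # decision variables: 100-g units of each food
cost = [1.0, 1.0, 1.0, 1.0]                      # objective: minimize total weight eaten
energy = [130.0, 35.0, 30.0, 76.0]               # kcal per 100 g (placeholder values)
protein = [2.7, 2.9, 0.6, 8.0]                   # g per 100 g (placeholder values)

# Adequacy constraints (>=) rewritten as <= for linprog: -energy @ x <= -2200, etc.
A_ub = [[-v for v in energy], [-v for v in protein]]
b_ub = [-2200.0, -60.0]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(foods), method="highs")
if res.success:
    for food, units in zip(foods, res.x):
        print(f"{food}: {100 * units:.0f} g")
```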
- Published
- 2021
- Full Text
- View/download PDF
3. The impact of acquisition geometry on full-waveform inversion updates
- Author
-
Denes Vigh, Xin Cheng, Wei Kang, Kun Jiao, and Nolan Brand
- Subjects
Earth model ,Geophysics ,Geology ,Inversion (meteorology) ,Data content ,Full waveform - Abstract
Full-waveform inversion (FWI) is a high-resolution model-building technique that uses the entire recorded seismic data content to build the earth model. Conventional FWI usually utilizes diving and refracted waves to update the low-wavenumber components of the velocity model. However, updates are often depth limited due to the limited offset range of the acquisition design. To extend conventional FWI beyond the limits imposed by using only transmitted energy, we must utilize the full acquired wavefield. Analyzing FWI kernels for a given geology and acquisition geometry can provide information on how to optimize the acquisition so that FWI is able to update the velocity model for targets as deep as basement level. Recent long-offset ocean-bottom node acquisition helped FWI succeed, but we would also like to be able to utilize the shorter-offset data from wide-azimuth data acquisitions to improve imaging of these data sets by developing the velocity field with FWI. FWI models are heading toward higher and higher wavenumbers, which allows us to extract pseudoreflectivity directly from the developed velocity model built with the acoustic full wavefield. This is an extremely early start to obtaining a depth image that one would usually produce in much later processing stages.
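For context (standard FWI background, not stated in this abstract), the velocity-model update comes from minimizing a data misfit such as the least-squares functional below; its gradient, the FWI kernel mentioned above, shows which parts of the model a given acquisition geometry can actually illuminate.

```latex
\Phi(m) \;=\; \frac{1}{2}\sum_{s,r}\int \bigl\| d_{\mathrm{obs}}(s,r,t) - d_{\mathrm{sim}}(s,r,t;\,m) \bigr\|^{2}\,dt ,
\qquad m \;\leftarrow\; m - \alpha\,\nabla_{m}\Phi(m)
```

Here m is the earth (velocity) model, s and r index sources and receivers, d_obs is the recorded data, d_sim the simulated data, and alpha a step length.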
- Published
- 2021
- Full Text
- View/download PDF
4. Model of Learning Activities for Improving Life Quality of the Elderly Using Elderly School as Base
- Author
-
Phackaphon Salathong and Rawiwan Thoranee
- Subjects
Syllabus ,Medical education ,Process (engineering) ,education ,Developmental and Educational Psychology ,Life quality ,Sample (statistics) ,Data content ,Psychology ,Set (psychology) ,General Psychology ,Field (computer science) ,Education - Abstract
This research aims to analyze learning activities, and the formats of those activities, for developing the quality of life of the elderly using the elderly school as a base. Data were collected through in-depth interviews, discussions, observation and field reporting, and analyzed with content-oriented and descriptive analysis of actual occurrences. The survey finds that learning activities to develop the quality of life of the elderly take into consideration: 1) the background of the school for the elderly, 2) the objective of knowledge management, 3) the content or course outline taught within the school for the elderly, 4) the format, process and method of knowledge management, and 5) evaluation or monitoring. Regarding the format of knowledge activities, two characteristics were found: 1. knowledge-management activities organized as an institution with a course and syllabus, and 2. knowledge-management activities organized in a rural form, in which attendees are selected and teaching is delivered by demonstration, providing a model to follow.
- Published
- 2021
- Full Text
- View/download PDF
5. EXTENSION PROJECT: CONTRIBUTIONS TO ACHIEVING THE 2030 AGENDA FOR SUSTAINABLE DEVELOPMENT
- Author
-
Adeildo Cabral da Silva, Nájila Rejanne Alencar Julião Cabral, and Maria de Lourdes da Silva Neta
- Subjects
Social commitment ,Political science ,Local Development ,Data content ,General Medicine ,Humanities - Abstract
This paper aimed to analyze the environmental education actions carried out by students and teachers of the IFCE (Federal Institute of Ceará) in extension projects from 2015 to 2019, developed within the Casa Maranguape Project, with respect to the adoption of the Sustainable Development Goals (SDGs) of the 2030 Agenda. The research had a qualitative approach and was based on document research using the data content analysis technique described by Silva and Fossa (2015). The educational contexts emphasized the extension action and its contribution to achieving the SDGs. The results show an environmental education approach in 67% of the activities, with adherence to 11 of the 17 SDGs, demonstrating interdisciplinarity with the 2030 Agenda. As for the contribution to teaching practice, the social commitment of the institution was observed, along with the fulfillment of the fundamental mission of extension, which is to provide answers to society, in addition to the acquisition of experimental knowledge aimed at sustainable local development.
- Published
- 2021
- Full Text
- View/download PDF
6. Employing data journalism in the coverage of the novel coronavirus on Arab and international websites: an analytical study
- Subjects
Medical staff ,Coronavirus disease 2019 (COVID-19) ,Information accessibility ,Data content ,Sociology ,Social science ,Information coverage ,Data journalism - Abstract
This study examined how Arab and global websites utilized data journalism in covering the Novel Coronavirus (COVID-19), by analyzing the format and content of various types of data journalism, particularly data-driven and data-supported news stories related to COVID-19, published on the websites "Masrawy," "Okaz," "Al Ain," "The Guardian" and "USA Today" during the period from January to April 2020. The study used media richness theory as its theoretical framework, employed a media survey methodology, and applied a format-and-content analysis tool to a sample of 1,398 media items representing various types of data journalism on the topic. The study found that the global websites gave more attention to data journalism in terms of the number of data-driven news stories about COVID-19, whereas the Arab sites had an edge in utilizing infographics and multimedia on the same topic. The global websites also showed considerable information richness in their data journalism content, owing to their reliance on specialized sources: scientists, researchers, medical staff, and research centers and laboratories specializing in diseases and epidemiology. The Arab websites, in contrast, showed weaker information content, as they relied mainly on official sources regardless of the specialization demanded by the topic, and suffered from limitations on access to information sources in general. The findings also reflected a considerable gap between global and Arab websites in benefiting from media richness: interactivity and audience engagement (participating, sharing, commenting and crowdsourcing) were much higher on the global websites than on the Arab websites, with the exception of the "Masrawy" website.
- Published
- 2020
- Full Text
- View/download PDF
7. A Novel Approach of Data Content Zeroization Under Memory Attacks
- Author
-
Ankush Srivastava and Prokash Ghosh
- Subjects
Authentication ,business.industry ,Computer science ,020208 electrical & electronic engineering ,02 engineering and technology ,020202 computer hardware & architecture ,Software ,Mode (computer interface) ,Embedded system ,0202 electrical engineering, electronic engineering, information engineering ,Data content ,Electrical and Electronic Engineering ,Federal Information Processing Standards ,business ,Wearable technology ,Conventional memory ,Random access - Abstract
Protecting users' secret data on devices such as smartphones, tablets and wearables from memory attacks is always a challenge for system designers. The most stringent security requirements and protocols in today's state-of-the-art systems are governed by the Federal Information Processing Standards (FIPS). Specifically, they require that sensitive data be erased from random access memories (RAMs) and associated flip-flop based registers as soon as a security violation is detected. Traditionally, sensitive data such as authentication credentials, cryptographic keys and other on-chip secrets are erased (zeroized) by sequential write transactions initiated either by dedicated hardware or by software programs. This paper, for the first time, proposes a novel approach for erasing secured data content from on-chip RAMs using conventional memory built-in self-test (MBIST) hardware in mission mode. The proposed zeroization approach proves to be substantially faster than traditional techniques at erasing data content. Because it reuses memory BIST hardware for on-chip data content zeroization, it saves silicon area and power by removing dedicated conventional zeroization hardware from the device. The paper also discusses the micro-architectural implementation and security challenges of using memory BIST hardware in mission mode and proposes practical solutions to fill the gaps.
- Published
- 2020
- Full Text
- View/download PDF
8. Drug databases and their contributions to drug repurposing
- Author
-
Yadollah Omidi, Yosef Masoudi-Sobhanzadeh, Massoud Amanlou, and Ali Masoudi-Nejad
- Subjects
0106 biological sciences ,Drug Databases ,0303 health sciences ,Scope (project management) ,Databases, Pharmaceutical ,Drug discovery ,Drug Repositioning ,Biology ,computer.software_genre ,01 natural sciences ,Data science ,Field (computer science) ,03 medical and health sciences ,Drug repositioning ,Genetics ,Data content ,Web service ,computer ,Repurposing ,030304 developmental biology ,010606 plant biology & botany - Abstract
Drug repurposing is an attractive area within drug discovery because it reduces time and cost. It is also considered an appropriate approach for finding treatments for orphan and rare diseases. Hence, many researchers have proposed novel methods based on databases containing different kinds of information, and a suitable organization of data that facilitates repurposing applications and provides a tool or web service can be beneficial. In this review, we categorize drug databases and discuss their advantages and disadvantages. Surprisingly, to the best of our knowledge, the importance and potential of databases in drug repurposing are yet to be emphasized. Indeed, the available databases can be divided into several groups based on data content, and the different classes can be applied to find new applications for existing drugs. Furthermore, we offer some suggestions for making databases more effective and popular in this field.
- Published
- 2020
- Full Text
- View/download PDF
9. An interactive approach to text browsing based on anchor location
- Author
-
Lei Wang, Shuai Liu, Xiaodan Xie, Xiangzhen Li, Xindong Cui, and Jiajian Lu
- Subjects
Information retrieval ,Anchor point ,Computer science ,Product (mathematics) ,Web page ,Process (computing) ,Feature (machine learning) ,Data content - Abstract
Due to the rapid advancement of informatization, a large amount of product characteristic data has accumulated during testing, and its volume is growing exponentially. Traditionally, such characteristic data are presented through queries and similar means, which fail to display the target data quickly and completely. This paper designs an interactive, anchor-point-based approach to text browsing that displays product feature information as text on a web page; convenient interactions give users a fast text-browsing experience. The paper elaborates the technical implementation of the method and finally shows that it enables a more intuitive preview and improves the efficiency of viewing and analyzing data.
- Published
- 2021
- Full Text
- View/download PDF
10. Evaluation of an Evolutionary Algorithm to Dynamically Alter Partition Sizes in Web Caching Systems
- Author
-
Richard Hurley and Graeme Young
- Subjects
Web server ,Computer science ,business.industry ,Distributed computing ,Web cache ,Hit rate ,Evolutionary algorithm ,Data content ,The Internet ,computer.software_genre ,business ,Partition (database) ,computer - Abstract
There has been an explosion in the volume of data that is being accessed from the Internet. As a result, the risk of a Web server being inundated with requests is ever-present. One approach to reducing the performance degradation that potentially comes from Web server overloading is to employ Web caching where data content is replicated in multiple locations. In this paper, we investigate the use of evolutionary algorithms to dynamically alter partition size in Web caches. We use established modeling techniques to compare the performance of our evolutionary algorithm to that found in statically-partitioned systems. Our results indicate that utilizing an evolutionary algorithm to dynamically alter partition sizes can lead to performance improvements especially in environments where the relative size of large to small pages is high.
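The abstract does not reproduce the algorithm itself; the sketch below is only a hypothetical illustration of the general idea of evolving partition sizes against a hit-rate objective, with an invented fitness function standing in for the trace-driven evaluation the authors would use.

```python
# Toy evolutionary search over Web-cache partition sizes (hypothetical fitness function).
import random

CACHE_SIZE = 1000      # total cache capacity (arbitrary units)
N_PARTITIONS = 3       # e.g. partitions for small, medium and large pages

def fitness(sizes):
    # Stand-in for a simulated hit rate; a real evaluation would replay a request trace.
    demand = [0.6, 0.3, 0.1]
    return sum(min(s / CACHE_SIZE, d) for s, d in zip(sizes, demand))

def mutate(sizes):
    # Move a random amount of capacity from one partition to another.
    src, dst = random.sample(range(N_PARTITIONS), 2)
    delta = random.randint(0, sizes[src])
    child = list(sizes)
    child[src] -= delta
    child[dst] += delta
    return child

population = [[CACHE_SIZE // N_PARTITIONS] * N_PARTITIONS for _ in range(20)]
for _ in range(100):                               # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print("best partition sizes:", max(population, key=fitness))
```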
- Published
- 2020
- Full Text
- View/download PDF
11. Development of Mathematical Questions in the PISA Model on Uncertainty and Data Content to Measure the Mathematical Reasoning Ability of Middle School Students
- Author
-
Siti Asyah, Elya Rosalina, and As Elly
- Subjects
Mathematical problem ,Computer science ,Mathematics education ,Data content ,Mathematical reasoning ,Predicate (grammar) - Abstract
This study aims to produce valid and practical PISA-model mathematics problems for measuring the mathematical reasoning ability of SMP 1 (junior high school) students, and to examine students' mathematical reasoning abilities when solving PISA-model problems on uncertainty and data content. This is development research using the ADDIE model, which consists of five stages: Analysis, Design, Development, Implementation and Evaluation. The products developed are PISA-type questions on statistics and probability material. The research results show: (1) the validity of the questions depends on the feasibility category; the developed questions obtained good results for language feasibility with a mean score of 3.16, a good category for material feasibility with a mean score of 3.22, and a very good rating for construct feasibility with a mean score of 3.85, so that the average score given by the three experts is 3.35, categorized as very good; (2) the quality of the questions, viewed from aspects such as practicality, is classified as "good" with a mean score of 3.43, as determined from student responses to the PISA-model problems on uncertainty and data content, so that the problems can measure students' mathematical reasoning abilities. Students' mathematical reasoning ability on the PISA-model questions obtained an average score of 2.19, categorized as quite good.
- Published
- 2019
- Full Text
- View/download PDF
12. A Dual Blockchain Framework to Enhance Data Trustworthiness in Digital Twin Network
- Author
-
Ke Wang, Wenyu Dong, Shen He, Junzhi Yan, and Bo Yang
- Subjects
Set (abstract data type) ,Blockchain ,Trustworthiness ,Basis (linear algebra) ,Computer science ,Distributed computing ,Scalability ,Data security ,Data content ,DUAL (cognitive architecture) - Abstract
Data are the basis on which a Digital Twin (DT) sets up the bidirectional mapping between physical and virtual spaces and realizes critical environmental sensing, decision making and execution. Trustworthiness is therefore a necessity for data content as well as for data operations. A dual blockchain framework is proposed to realize comprehensive data security in various DT scenarios. It is highly adaptable, scalable and evolvable, and can easily be integrated into the Digital Twin Network (DTN) as an enhancement.
- Published
- 2021
- Full Text
- View/download PDF
13. Qualitative analysis of technology management for financial services innovation: a multiple-case study of FinTech startups in Metropolitan Lima
- Author
-
Héctor Guardamino and Marta Tostes
- Subjects
business.industry ,Business administration ,Data content ,business ,Management process ,Financial services - Abstract
This multiple-case study analyses, from a qualitative approach, technological management for the innovation of financial services in four FinTech startups from Metropolitan Lima, Peru: Apurata, Difondy, TasaTop and Tranzfer.me. The Six Facets model (Kearns et al., 2005) was used to assess the technological management processes of these FinTechs, and a rubric was developed from criteria and sub-criteria. In-depth interviews were then applied as the technique to collect information on each of the six processes (variables) of the model and their respective principles (sub-variables). The collected information was organized and systematized using the qualitative data analysis software WebQDA. The results indicate that the Six Facets model is a useful tool for analyzing the technological management processes carried out in FinTechs with internal technology areas, in order to find points of improvement whose solution contributes to the innovation of FinTech services and thus to their competitiveness. Among the FinTechs studied, the most solid technological process is planning, whereas the process in which the analyzed FinTechs struggle most is customer formation. Finally, TasaTop was determined to be the FinTech best prepared for the innovation of its services.
- Published
- 2021
- Full Text
- View/download PDF
14. Field-Portable Microplastic Sensing in Aqueous Environments: A Perspective on Emerging Techniques
- Author
-
Patricia Swierk, Jose A. Santos, William Robberson, Mark F. Witinski, Beckett C. Colson, Louis B. Kratchman, Alexandra Z. Greenbaum, Joseph L. Hollmann, Peter Miraglia, Melissa M. Sprachman, Harry L. Allen, Steven Tate, Kenneth A. Markoski, Anna-Marie Cook, Morgan G. Blevins, Sheila S. Hemami, Ernest S. Kim, Anna P. M. Michel, Vienna L. Mott, and Ava A. LaRocca
- Subjects
aqueous solutions ,microplastics ,Data products ,water ,Nanotechnology ,TP1-1185 ,010501 environmental sciences ,sensors ,01 natural sciences ,Biochemistry ,Field (computer science) ,plastic pollution ,analytical chemistry ,Sample preparation ,Data content ,Electrical and Electronic Engineering ,freshwater ,Instrumentation ,polymers ,0105 earth and related environmental sciences ,Aqueous solution ,Chemical technology ,010401 analytical chemistry ,Perspective (graphical) ,Dielectrophoresis ,ocean ,Atomic and Molecular Physics, and Optics ,0104 chemical sciences ,marine pollution ,Perspective ,Environmental science ,Sample collection ,environment - Abstract
Microplastics (MPs) have been found in aqueous environments ranging from rural ponds and lakes to the deep ocean. Despite the ubiquity of MPs, our ability to characterize MPs in the environment is limited by the lack of technologies for rapidly and accurately identifying and quantifying MPs. Although standards exist for MP sample collection and preparation, methods of MP analysis vary considerably and produce data with a broad range of data content and quality. The need for extensive analysis-specific sample preparation in current technology approaches has hindered the emergence of a single technique which can operate on aqueous samples in the field, rather than on dried laboratory preparations. In this perspective, we consider MP measurement technologies with a focus on both their eventual field-deployability and their respective data products (e.g., MP particle count, size, and/or polymer type). We present preliminary demonstrations of several prospective MP measurement techniques, with an eye towards developing a solution or solutions that can transition from the laboratory to the field. Specifically, experimental results are presented from multiple prototype systems that measure various physical properties of MPs: pyrolysis-differential mobility spectroscopy, short-wave infrared imaging, aqueous Nile Red labeling and counting, acoustophoresis, ultrasound, impedance spectroscopy, and dielectrophoresis.
- Published
- 2021
15. Metrics for identifying food security status
- Author
-
Nicholas Ogot
- Subjects
Measure (data warehouse) ,Food security ,Risk analysis (engineering) ,Computer science ,Process (engineering) ,Stability (learning theory) ,Data content ,Data type ,ComputingMilieux_MISCELLANEOUS ,Field (computer science) ,Variety (cybernetics) - Abstract
This chapter introduces food security as "a state when, at all times, all people can access adequate, safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life." Metrics refer to the measurement of the quantitative elements assessed in research, and the wide variety of data content obtained from the field is guided by numerous methods that aim at efficiency. Food security metrics focus on the four domains of food security: availability, accessibility, utilization and stability. New metrics are essential for improving current food security measurements. The types of data used, the assumptions made in measuring food security, and the intended uses of the different measurements call for precision, accuracy, careful interpretation of results, and implementation of policies. The metrics identified here complement approaches for determining food security status while gauging other factors.
- Published
- 2021
- Full Text
- View/download PDF
16. Community Preference-Based Information-Centric Networking Cache
- Author
-
Haozhe Liang, Chaofeng Zhang, Xiao Yang, and Caijuan Chen
- Subjects
Computer science ,05 social sciences ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,Recommender system ,Preference ,World Wide Web ,0508 media and communications ,Information-centric networking ,Server ,0202 electrical engineering, electronic engineering, information engineering ,Data content ,Cache - Abstract
The information-centric networking (ICN) framework has been proposed to connect data content and network users together, which makes it considerably more efficient than conventional networks. The cache policy is an essential part of ICN, and many studies have investigated it. However, two problems remain unsolved: 1) existing policies ignore the features of network user communities, and 2) they do not consider the correlations between users and data content. To address these shortcomings, a community preference-based ICN cache policy is proposed. The policy comprehensively considers data content features and user community preferences, and then uses a recommendation system to cache data on the corresponding servers. The advantages of the policy are evaluated by simulation in this paper.
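The abstract does not spell out the scoring model; as a purely hypothetical sketch, a policy of this kind might rank cached items by combining network-wide popularity with the local community's preference for the item's category, evicting the lowest-scoring item.

```python
# Hypothetical community-preference cache: score = popularity x community preference.
def score(item, community_preference):
    return item["popularity"] * community_preference.get(item["category"], 0.05)

def cache_insert(cache, item, capacity, community_preference):
    cache.append(item)
    if len(cache) > capacity:
        cache.remove(min(cache, key=lambda it: score(it, community_preference)))

# Example: an edge server serving a community that strongly prefers sports content.
prefs = {"sports": 0.8, "news": 0.15, "music": 0.05}
cache = []
for item in [{"name": "match-highlights", "category": "sports", "popularity": 120},
             {"name": "headlines", "category": "news", "popularity": 300},
             {"name": "new-album", "category": "music", "popularity": 500}]:
    cache_insert(cache, item, capacity=2, community_preference=prefs)

print([it["name"] for it in cache])   # the sports item survives despite lower raw popularity
```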
- Published
- 2020
- Full Text
- View/download PDF
17. AgentG: An Engaging Bot to Chat for E-Commerce Lovers
- Author
-
Aditi Katiyar, V. Srividya, Neha Akhtar, and B. K. Tripathy
- Subjects
Web browser ,Point (typography) ,Computer science ,business.industry ,E-commerce ,computer.software_genre ,Chatbot ,World Wide Web ,Product (business) ,Order (business) ,Customer service ,Data content ,business ,computer - Abstract
Regular customer assistance chatbots are generally based on dialogues delivered by a human and face issues related to data scaling and the privacy of personal information. In this paper, we present AgentG, an intelligent chatbot for customer assistance built on a deep neural network architecture. It leverages large-scale, freely and publicly accessible e-commerce data. Unlike existing counterparts, AgentG takes advantage of in-page data containing product descriptions along with user-generated content from these online e-commerce websites. This makes it more practical and cost-effective when answering repetitive questions with highly accurate answers, freeing the people who work in customer service from having to answer them. We demonstrate how AgentG acts as an extension to mainstream Web browsers and how it gives users doing online shopping a better experience.
- Published
- 2020
- Full Text
- View/download PDF
18. The Branching Data Model as a New Level of Abstraction over the Relational Database
- Author
-
H. Paul Zellweger
- Subjects
Theoretical computer science ,Relational database ,Computer science ,lcsh:A ,General Medicine ,Tree structure ,relational database ,Schema (psychology) ,end-user navigation ,named set theory ,Data content ,branching data model ,lcsh:General Works ,Data patterns - Abstract
Information is often stored in the relational database. This technology is now fifty years old, but there remain patterns of relational data that have not yet been studied. The abstract presents a new data pattern called the branching data model (BDM). It represents the pure alignment of the table’s schema, its data content, and the operations on these two structures. Using a well-defined SELECT statement, an input data condition and its output values form a primitive tree structure. While this relationship is formed outside of the query, the abstract shows how we can view it as a tree structure within the table. Using algorithms, including AI, it goes on to show how this data model connects with others, within the table and between them, to form a new, uniform level of abstraction over the data throughout the database.
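As a small, hypothetical illustration of the SELECT-based branch described above (the table and column names are invented), one input condition value and the distinct output values it selects form a primitive one-level tree:

```python
# Hypothetical illustration: a SELECT condition and its outputs as a primitive branch.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, product TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("alice", "book"), ("alice", "pen"), ("bob", "book")])

condition_value = "alice"
outputs = [row[0] for row in con.execute(
    "SELECT DISTINCT product FROM orders WHERE customer = ?", (condition_value,))]

# The branch: the input condition is the root, its output values are the leaves.
print({condition_value: outputs})   # {'alice': ['book', 'pen']}
```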
- Published
- 2020
- Full Text
- View/download PDF
19. Naming and Routing Scheme for Data Content Objects in Information-Centric Network
- Author
-
Natallia Pastei, Fatima Rahal, Ghassan Jaber, and Ahmad Abboud
- Subjects
Sinc function ,business.industry ,Computer science ,Routing table ,The Internet ,Data content ,Reuse ,business ,Flooding (computer networking) ,Computer network - Abstract
This article is devoted to the problems of naming and routing schemes for data content objects in information-centric networks (ICN). The relevance of the work stems from the view of information-centric networks as a promising future technology for the Internet. The article introduces a new naming strategy for ICN, named Semantic Information-Centric Network (SICN). SICN uses three addresses: a geographical address, a semantic address and a publisher ID address. Data are classified into four types and subscriber requests into four classes, and SICN copes with these different types and classes. The structure of the Geo-ID, Geo-Semantic and ID-Semantic routing tables is briefly discussed, along with algorithms for updating, removing, merging and matching records. Results of modeling two scenarios are presented, comparing SICN with other schemes and projects, including IP, DONA, PURSUIT, CBCB and KBN, according to the following metrics: time delay, flooding (traffic), and the data reuse efficiency factor. In terms of flooding and time delay, SICN outperforms the other ICN projects; in terms of efficiency, SICN shows good results compared with the other schemes.
- Published
- 2020
- Full Text
- View/download PDF
20. A Survey on Contribution of Data Mining Techniques and Graph Reading Algorithms in Concept Map Generation
- Author
-
A. Auxilia Princy and B. Lavanya
- Subjects
Complex data type ,Concept map ,business.industry ,Computer science ,Big data ,Data content ,Data mining ,business ,computer.software_genre ,Algorithm ,computer ,Graph - Abstract
Concept maps are a pictorial representation of the concepts found in data and of the relationships between those concepts. Concept maps help us understand the whole data content and make it easily readable and memorable. They deliver complex data in an understandable form (map, tree, graph, etc.), supporting better understanding and decision making for researchers, businesses and others. This paper discusses recent research on concept maps, on data mining techniques, and on the graph reading algorithms used for concept map generation.
- Published
- 2018
- Full Text
- View/download PDF
21. PubChem 2019 update: improved access to chemical data
- Author
-
Paul A. Thiessen, Sunghwan Kim, Jia He, Leonid Zaslavsky, Qingliang Li, Bo Yu, Tiejun Cheng, Evan E Bolton, Benjamin A. Shoemaker, Asta Gindulyte, Jian Zhang, Siqian He, and Jie Chen
- Subjects
Information Storage and Retrieval ,Biology ,Patents as Topic ,Small Molecule Libraries ,World Wide Web ,Structure-Activity Relationship ,03 medical and health sciences ,0302 clinical medicine ,Information resource ,Research community ,Drug Discovery ,Genetics ,Animals ,Humans ,Database Issue ,Data content ,030304 developmental biology ,Internet ,0303 health sciences ,Molecular Structure ,business.industry ,Computational Biology ,Chemical data ,High-Throughput Screening Assays ,Pharmaceutical Preparations ,Biological Assay ,The Internet ,business ,Databases, Chemical ,030217 neurology & neurosurgery ,PubChem - Abstract
PubChem (https://pubchem.ncbi.nlm.nih.gov) is a key chemical information resource for the biomedical research community. Substantial improvements were made in the past few years. New data content was added, including spectral information, scientific articles mentioning chemicals, and information for food and agricultural chemicals. PubChem released new web interfaces, such as PubChem Target View page, Sources page, Bioactivity dyad pages and Patent View page. PubChem also released a major update to PubChem Widgets and introduced a new programmatic access interface, called PUG-View. This paper describes these new developments in PubChem.
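A minimal sketch of calling the new PUG-View programmatic interface from Python, assuming the REST URL pattern and JSON keys shown in PubChem's published examples (consult the current PUG-View documentation for the authoritative form):

```python
# Fetch the PUG-View annotation record for aspirin (CID 2244) as JSON.
import json
import urllib.request

cid = 2244
url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/{cid}/JSON"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

# The record bundles annotation sections (spectra, literature, safety, ...) for the compound.
print(record["Record"]["RecordTitle"])   # key names assumed from published examples
```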
- Published
- 2018
- Full Text
- View/download PDF
22. Data quality of the TraumaRegister DGU®
- Author
-
G. Matthes, Ulrike Nienaber, F. Laue, R. Volland, T. Ziprian, N. Ramadanov, and Rolf Lefering
- Subjects
business.industry ,030208 emergency & critical care medicine ,medicine.disease ,03 medical and health sciences ,0302 clinical medicine ,Emergency Medicine ,Medicine ,Orthopedics and Sports Medicine ,Surgery ,Data content ,030212 general & internal medicine ,Medical emergency ,business ,Trauma surgery - Abstract
Background Registries are becoming increasingly more important in clinical research. The TraumaRegister DGU® of the German Society for Trauma Surgery plays an excellent role with respect to the care of severely injured patients. Aim Within the framework of this investigation the quality of data provided by this registry was to be verified. Material and methods Certified hospitals participating in the TraumaNetzwerk DGU® of the German Society for Trauma Surgery are obliged to submit data of treated severely injured patients to the TraumaRegister DGU®. Participating hospitals have to undergo a re-certification process every 3 years. Within the framework of this re-audit, data from 5 out of 8 randomly chosen patient cases included in the registry are controlled and compared to the patient files of the certified hospital. In the present investigation discrepancies concerning data provided were documented and the pattern of deviation was analyzed. Results The results of 1075 re-certification processes carried out in 631 hospitals including the documentation of 5409 checked patient cases from 2012-2017 were analyzed. The highest number of discrepancies detected concerned the documented time until initial CT (15.8%) and the lowest concerned the discharge site (3.2%). The majority of data sheets with discrepancies showed deviations in only one out of seven checked parameters. Interestingly, large trauma centers with a high throughput of severely injured patients showed the most deviations. Conclusion The present investigation underlines the importance of standardized checks concerning data provided for registries in order to be able to guarantee an improvement in entering data.
- Published
- 2018
- Full Text
- View/download PDF
23. Prevention of Data Content Leakage
- Author
-
Meghana N. Jadhav
- Subjects
Petroleum engineering ,Environmental science ,Data content ,Leakage (electronics) - Published
- 2018
- Full Text
- View/download PDF
24. ROSETTA: How to archive more than 10 years of mission
- Author
-
F. Vallejo, P. Martin, David Heather, Sebastien Besse, M. F. A'Hearn, D. Fraga, E. Grotheer, Matthew Taylor, Ludmilla Kolokolova, Maud Barthelemy, R. Andres, Laurence O'Rourke, and T. Barnes
- Subjects
Scientific instrument ,Engineering ,010504 meteorology & atmospheric sciences ,business.industry ,Astronomy and Astrophysics ,01 natural sciences ,Planetary Data System ,Astrobiology ,Planetary science ,Aeronautics ,Space and Planetary Science ,0103 physical sciences ,Comet (programming) ,Data content ,business ,010303 astronomy & astrophysics ,0105 earth and related environmental sciences - Abstract
The Rosetta spacecraft was launched in 2004 and, after several planetary and two asteroid fly-bys, arrived at comet 67P/Churyumov-Gerasimenko in August 2014. After escorting the comet for two years and executing its scientific observations, the mission ended on 30 September 2016 through a touch down on the comet surface. This paper describes how the Planetary Science Archive (PSA) and the Planetary Data System – Small Bodies Node (PDS-SBN) worked with the Rosetta instrument teams to prepare the science data collected over the course of the Rosetta mission for inclusion in the science archive. As Rosetta is an international mission in collaboration between ESA and NASA, all science data from the mission are fully archived within both the PSA and the PDS. The Rosetta archiving process, supporting tools, archiving systems, and their evolution throughout the mission are described, along with a discussion of a number of the challenges faced during the Rosetta implementation. The paper then presents the current status of the archive for each of the science instruments, before looking to the improvements planned both for the archive itself and for the Rosetta data content. The lessons learned from the first 13 years of archiving on Rosetta are finally discussed with an aim to help future missions plan and implement their science archives.
- Published
- 2018
- Full Text
- View/download PDF
25. NON-PHARMACOLOGICAL PAIN MANAGEMENT IN POSTOPERATIVE CARE OF SCHOOL-AGE CHILDREN
- Author
-
Sari Laanterä, Svajūnė Goštautaitė, Viktorija Piščalkienė, and Leena Uosukainen
- Subjects
medicine.medical_specialty ,School age child ,030504 nursing ,business.industry ,Lithuanian ,Pain management ,language.human_language ,03 medical and health sciences ,0302 clinical medicine ,Pain assessment ,030225 pediatrics ,Family medicine ,Assessment methods ,language ,medicine ,Data content ,0305 other medical science ,business ,Non pharmacological - Abstract
The aim of this study was to evaluate the postoperative pain assessment and management methods applied in practice to children by nurses from Lithuania and Finland. Methods: individual in-depth semi-structured interviews using non-probability snowball (network) and purposive sampling, and data content analysis; participants were 20 nurses in Lithuania and 5 nurses in Finland who work in pediatric surgical and pediatric wards where children are treated after surgery. Results: the research showed differences in postoperative pain management practice for school-age children between Lithuanian and Finnish nurses. Lithuanian nurses use a smaller variety of these methods than nurses from Finland. All nurses agree that non-pharmacological pain management in children is effective and useful. Conclusions: the use of subjective and objective pain assessment methods by Finnish and Lithuanian nurses is similar, except that Lithuanian nurses mostly trust subjective verbal and objective behavioral and appearance-based assessment methods, whereas Finnish nurses combine all the subjective methods (verbal, parental assessment and the use of scales) with objective behavioral assessment. There is a difference in pain management practice between Finnish and Lithuanian nurses: Finnish nurses use all the non-pharmacological methods evenly, whereas Lithuanian nurses mostly rely on physical and rehabilitation methods as well as communication.
- Published
- 2017
- Full Text
- View/download PDF
26. Constructing automatic domain-specific sentiment lexicon using KNN search via terms discrimination vectors
- Author
-
Hatem Abdelkader, Fahd Alqasemi, and Amira Abdelwahab
- Subjects
Computer science ,business.industry ,Sentiment analysis ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Lexicon ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Domain (software engineering) ,Text mining ,Hardware and Architecture ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data content ,Artificial intelligence ,business ,computer ,Software ,Natural language processing - Abstract
Web textual data content is a viable source of knowledge for decision-makers, as are text analytic applications. Sentiment analysis (SA) is one of the text mining fields, in which text is analyzed to rec...
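The abstract is truncated in this record. Purely as an illustrative sketch of the KNN idea named in the title (not the authors' actual term-discrimination vectors), polarity can be propagated from seed sentiment words to domain terms by nearest-neighbour search over term vectors:

```python
# Hypothetical KNN propagation of sentiment polarity from seed terms to domain terms.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy term vectors (rows = terms); a real system derives these from corpus statistics.
terms = ["excellent", "awful", "crisp", "laggy"]
vectors = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.2, 0.8]])
seed_polarity = {"excellent": +1, "awful": -1}

seed_idx = [i for i, t in enumerate(terms) if t in seed_polarity]
knn = NearestNeighbors(n_neighbors=1).fit(vectors[seed_idx])

for i, term in enumerate(terms):
    if term in seed_polarity:
        continue
    _, nbr = knn.kneighbors(vectors[i:i + 1])
    nearest_seed = terms[seed_idx[nbr[0][0]]]
    print(term, "->", seed_polarity[nearest_seed])   # crisp -> 1, laggy -> -1
```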
- Published
- 2017
- Full Text
- View/download PDF
27. A Case for Memory Content-Based Detection and Mitigation of Data-Dependent Failures in DRAM
- Author
-
Onur Mutlu, Donghyuk Lee, Alaa R. Alameldeen, Samira Khan, and Christopher B. Wilkerson
- Subjects
010302 applied physics ,Hardware_MEMORYSTRUCTURES ,Computer science ,business.industry ,02 engineering and technology ,System level testing ,01 natural sciences ,020202 computer hardware & architecture ,Hardware and Architecture ,Embedded system ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Data content ,Latency (engineering) ,business ,Data dependent ,Dram ,Efficient energy use ,Dram chip - Abstract
DRAM cells in close proximity can fail depending on the data content in neighboring cells. These failures are called data-dependent failures . Detecting and mitigating these failures online while the system is running in the field enables optimizations that improve reliability, latency, and energy efficiency of the system. All these optimizations depend on accurately detecting every possible data-dependent failure that could occur with any content in DRAM. Unfortunately, detecting all data-dependent failures requires the knowledge of DRAM internals specific to each DRAM chip. As internal DRAM architecture is not exposed to the system, detecting data-dependent failures at the system-level is a major challenge. Our goal in this work is to decouple the detection and mitigation of data-dependent failures from physical DRAM organization such that it is possible to detect failures without knowledge of DRAM internals. To this end, we propose MEMCON , a memory content-based detection and mitigation mechanism for data-dependent failures in DRAM. MEMCON does not detect every possible data-dependent failure. Instead, it detects and mitigates failures that occur with the current content in memory while the programs are running in the system. Using experimental data from real machines, we demonstrate that MEMCON is an effective and low-overhead system-level detection and mitigation technique for data-dependent failures in DRAM.
- Published
- 2017
- Full Text
- View/download PDF
28. Towards Geo-Context Aware IoT Data Distribution
- Author
-
Jonathan Hasenburg and David Bermbach
- Subjects
Focus (computing) ,Work (electrical) ,business.industry ,Computer science ,Control data ,Distribution (economics) ,Context (language use) ,Relevance (information retrieval) ,Data content ,Internet of Things ,business ,Data science - Abstract
In the Internet of Things, the relevance of data often depends on the geographic context of data producers and consumers. Today’s data distribution services, however, mostly focus on data content and not on geo-context, which would benefit many scenarios greatly. In this paper, we propose to use the geo-context information associated with devices to control data distribution. We define what geo-context dimensions exist and compare our definition with concepts from related work. By example, we discuss how geo-contexts enable new scenarios and evaluate how they also help to reduce unnecessary data distributions.
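As a toy illustration of geo-context-controlled distribution (the dimensions used here, a producer position and a circular geofence, are assumptions rather than the paper's definition), a broker could deliver a message only when the consumer lies inside the producer's geofence:

```python
# Hypothetical geo-context filter: deliver only to consumers inside the producer's geofence.
import math

def distance_km(a, b):
    # Haversine great-circle distance between two (lat, lon) points in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def should_deliver(producer_pos, geofence_radius_km, consumer_pos):
    return distance_km(producer_pos, consumer_pos) <= geofence_radius_km

sensor = (52.52, 13.405)                            # e.g. a sensor in Berlin
print(should_deliver(sensor, 5, (52.53, 13.41)))    # True: nearby consumer
print(should_deliver(sensor, 5, (48.85, 2.35)))     # False: consumer in Paris
```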
- Published
- 2020
- Full Text
- View/download PDF
29. Subset Ratio Dynamic Selection for Consistency Enhancement Evaluation
- Author
-
Shi Tingting, Hao Liu, Shen Gang, and Wang Kaixun
- Subjects
Degree (graph theory) ,Selection (relational algebra) ,Computer science ,business.industry ,Sampling (statistics) ,Pattern recognition ,02 engineering and technology ,Confidence interval ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Distribution (mathematics) ,Consistency (statistics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data content ,Selection method ,Artificial intelligence ,business - Abstract
Due to poor imaging conditions, large-scale underwater images need consistency enhancement. Under the subset-guided consistency enhancement evaluation criterion, existing subset selection methods need too many candidate samples from the whole imageset, without any adaptation to the data content. This paper therefore proposes a subset ratio dynamic selection method for consistency enhancement evaluation. The proposed method first divides the candidate samples into several sampling subsets. Based on a non-return (without-replacement) sampling strategy, the consistency enhancement degree of an enhancement algorithm is obtained for each sampling subset. Using the Student's t-distribution at a given confidence level, the proposed method can adaptively determine the subset ratio for the whole imageset, and the candidate subset is used to predict the consistency enhancement degree of the enhancement algorithm on the whole imageset. Experimental results show that, compared with existing subset selection methods, the proposed method reduces the subset ratio in all cases and correctly judges the consistency performance of each enhancement algorithm. With similar evaluation error, the subset ratio of the proposed method decreases by 2%~20% relative to the fixed-ratio subset method and by 3%~13% relative to the gradual-addition subset method, thus reducing the complexity of subset-guided consistency enhancement evaluation.
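A minimal sketch of the statistical step described above, assuming per-subset enhancement scores and a chosen confidence level; the tolerance and the score values are placeholders, not the paper's data.

```python
# Keep sampling subsets until the t-based confidence interval on the mean
# consistency-enhancement degree is narrow enough, then report how many subsets were used.
import statistics
from scipy import stats

def enough_subsets(scores, confidence=0.95, tolerance=0.02):
    n = len(scores)
    if n < 2:
        return False
    sem = statistics.stdev(scores) / n ** 0.5
    half_width = stats.t.ppf((1 + confidence) / 2, df=n - 1) * sem
    return half_width <= tolerance

# Placeholder per-subset scores from a non-return sampling of the imageset.
subset_scores = [0.81, 0.79, 0.82, 0.80, 0.81]
used = []
for s in subset_scores:
    used.append(s)
    if enough_subsets(used):
        break

print("subsets used:", len(used), "estimated degree:", statistics.mean(used))
```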
- Published
- 2020
- Full Text
- View/download PDF
30. PHI-base: the pathogen–host interactions database
- Author
-
Kim E. Hammond-Kosack, Kim Rutherford, Keywan Hassani-Pak, Alayne Cuzick, Martin Urban, Manuel Carbajo Martinez, Helder Pedro, Shilpa Yagwakote Venkatesh, Nishadi De Silva, James Seager, Andrew D. Yates, and Valerie Wood
- Subjects
Crops, Agricultural ,Antifungal Agents ,PHI-base ,Databases, Factual ,Multiple host species ,multiple host species ,Multiple pathogen species ,Biology ,Crop species ,computer.software_genre ,Communicable Diseases ,Genome ,fully downloadable content ,Fully downloadable content ,03 medical and health sciences ,Pathogen-Host Interactions ,Genetics ,Animals ,Humans ,Database Issue ,Data content ,Gene ,Data Management ,030304 developmental biology ,Internet ,0303 health sciences ,Database ,multiple pathogen species ,030306 microbiology ,Computational Biology ,Manually curated literature ,Plants ,Pathogenicity ,manually curated literature ,Phenotype ,Search Engine ,BLAST tool ,Host-Pathogen Interactions ,Biological Assay ,computer ,Algorithms ,Genome, Plant - Abstract
The pathogen–host interactions database (PHI-base) is available at www.phi-base.org. PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen–host interactions reported in peer reviewed research articles. PHI-base also curates literature describing specific gene alterations that did not affect the disease interaction phenotype, in order to provide complete datasets for comparative purposes. Viruses are not included, due to their extensive coverage in other databases. In this article, we describe the increased data content of PHI-base, plus new database features and further integration with complementary databases. The release of PHI-base version 4.8 (September 2019) contains 3454 manually curated references, and provides information on 6780 genes from 268 pathogens, tested on 210 hosts in 13,801 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Host species consist of approximately 60% plants (split 50:50 between cereal and non-cereal plants), and 40% other species of medical and/or environmental importance. The information available on pathogen effectors has risen by more than a third, and the entries for pathogens that infect crop species of global importance has dramatically increased in this release. We also briefly describe the future direction of the PHI-base project, and some existing problems with the PHI-base curation process.
- Published
- 2019
- Full Text
- View/download PDF
31. Text Competence Of Pupils Attending The Second Grade Of The Elementary School
- Author
-
Jana Adámková
- Subjects
High rate ,Mathematics education ,Writing process ,Metacognition ,Survey research ,Data content ,Psychology ,Competence (human resources) ,Research data ,Qualitative research - Abstract
This study presents the results of monitoring the text production process of pupils in the second grade of elementary school. Through a research survey, we observe the relation between pupils' metacognition and the quality of their texts. The goals of this research are to verify an innovative didactic conception through systematic work with an experimental group of second-grade pupils, to interconnect research effectively with pedagogical practice and maximize the influence of the research probe on the educational process itself, and to monitor individuals during text production with a detailed analysis of the psycho-didactic aspects of the writing process. Within the qualitative research we used the following methods: the General Self-Efficacy Scale and its modified version, and data content analysis. Based on the analysis of all research data, we found a direct relation between the level of metacognitive knowledge and metacognitive skills (monitoring, self-regulation and self-evaluation) and the quality of texts. Pupils with a high level of perceived personal ability usually have a higher level of metacognitive skills and are at the same time authors of good texts.
- Published
- 2019
- Full Text
- View/download PDF
32. Company Case Study 12: Employee Perceptions in Innovation-Driven SMEs—D-Orbit
- Author
-
Giuseppe Lentini and Giorgia Nigri
- Subjects
Benefit corporation ,Employee perceptions ,Corporate social responsibility ,Data content ,Business ,Certification ,Marketing ,Public benefit ,Value systems ,Profit (economics) - Abstract
In recent years, boundaries between profit and non-profit company forms and assessments are increasingly blurred, converging towards new types of hybrid organizations that mix elements, value systems and action logics of various sectors of society. On the one hand, we find organizations employed in the social industry that behave in a more business-like way and, on the other, business organizations that progress a social agenda in addition to their for-profit objective. Benefit Corporations and B Corps, with their reinforced commitment to corporate social responsibility (CSR) practices and a mission to generate a public benefit, are a clear example of the convergence of for-profit companies toward a strong CSR focus. For sustainable outputs to be valid, it is essential to have not only the right practices in place but the correct employee understanding of those practices. Companies need to become aware of the power of knowledge, learn what circumstances are likely to cause wrong opinions and learn how to manage their employee perceptions. The study aims to evaluate employee perceptions in an innovation-driven small and medium-sized enterprise through an empirical case study on D-Orbit, a high-tech certified Benefit Corporation. To achieve this goal, D-Orbit’s Benefit Reports and Annual Impact Report were analyzed and an in-depth interview with D-Orbit’s Quality and Impact Manager was carried out. Future research utilizes this case as a pilot to refine data content and procedure.
- Published
- 2019
- Full Text
- View/download PDF
33. The case for using Mapped Exonic Non-Duplicate (MEND) read counts in RNA-Seq experiments: examples from pediatric cancer datasets
- Author
-
Holly C. Beale, Jacquelyn M. Roger, Matthew A. Cattle, Liam T. McKay, Drew K. A. Thomson, Katrina Learned, A. Geoffrey Lyle, Ellen T. Kephart, Rob Currie, Du Linh Lam, Lauren Sanders, Jacob Pfeil, John Vivian, Isabel Bjork, Sofie R. Salama, David Haussler, and Olena M. Vaske
- Subjects
Gene expression profiling ,Gene expression ,Sequencing data ,RNA ,Sample (statistics) ,Data content ,Computational biology ,Biology ,Gene ,Genome - Abstract
Background: The accuracy of gene expression as measured by RNA sequencing (RNA-Seq) is dependent on the amount of sequencing performed. However, some types of reads are not informative for determining this accuracy. Unmapped and non-exonic reads do not contribute to gene expression quantification. Duplicate reads can be the product of high gene expression or technical errors. Findings: We surveyed bulk RNA-Seq datasets from 2179 tumors in 48 cohorts to determine the fractions of uninformative reads. Total sequence depth was 0.2-668 million reads (median (med.) 61 million; interquartile range (IQR) 53 million). Unmapped reads constitute 1-77% of all reads (med. 3%; IQR 3%); duplicate reads constitute 3-100% of mapped reads (med. 27%; IQR 30%); and non-exonic reads constitute 4-97% of mapped, non-duplicate reads (med. 25%; IQR 21%). Informative reads, i.e. Mapped, Exonic, Non-duplicate (MEND) reads, constitute 0-79% of total reads (med. 50%; IQR 31%). Further, we find that MEND read counts have a 0.22 Pearson correlation to the number of genes expressed above 1 Transcript Per Million, while total reads have a correlation of −0.05. Conclusions: Since the fraction of uninformative reads varies, we propose using only definitively informative reads, MEND reads, for the purposes of asserting the accuracy of gene expression measured in a bulk RNA-Seq experiment. We provide a Docker image containing 1) the existing required tools (RSeQC, sambamba and samblaster) and 2) a custom script. We recommend that all results, sensitivity studies and depth recommendations use MEND units.
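Following the definitions above, the MEND fraction is simple arithmetic on the read-category fractions; plugging in the reported medians (3% unmapped, 27% duplicates, 25% non-exonic) gives a value close to the reported median MEND fraction of about 50%.

```python
# MEND = Mapped, Exonic, Non-Duplicate reads, here expressed as a fraction of total reads.
def mend_fraction(unmapped_frac, duplicate_frac_of_mapped, nonexonic_frac_of_mapped_nondup):
    mapped = 1.0 - unmapped_frac
    mapped_nondup = mapped * (1.0 - duplicate_frac_of_mapped)
    mend = mapped_nondup * (1.0 - nonexonic_frac_of_mapped_nondup)
    return mend

# Median values reported in the survey.
print(f"{mend_fraction(0.03, 0.27, 0.25):.0%}")   # ~53%, close to the reported median of 50%
```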
- Published
- 2019
- Full Text
- View/download PDF
34. Synchronization of the Pancasila Economy and the Islamic Economy
- Author
-
Muhammad Ali Akbar and Moh. Idil Ghufron
- Subjects
media_common.quotation_subject ,Subject (philosophy) ,Islam ,Social justice ,language.human_language ,Epistemology ,Indonesian ,language ,Data content ,Sociology ,Prosperity ,Islamic economics ,media_common ,Social equality - Abstract
This study discusses the concepts of Pancasila economics and Islamic economics, which share fundamental similarities and do not conflict. Both economic concepts share the goal of realizing social justice for all Indonesian people and social equality in welfare and prosperity, as stated in the fifth precept of Pancasila, and both are imbued with the first precept, belief in the One God, as their basis. The study also explains that the Pancasila economic concept and Islamic economics are two economic concepts whose basic principles, characteristics and systems are in accordance with the teachings of the Qur'an, the holy book of Muslims. This can strengthen the confidence of the Indonesian people not to hesitate in implementing the Pancasila economy, the noble heritage of the nation's founding fathers. The method used in this study is library research, referring to primary materials from the two components under discussion, namely studies of Pancasila economics and of Islamic economics, supplemented by other books relating to the problems studied. The writing method is descriptive-analytical: collecting detailed and thorough factual information from the data obtained, describing the problem under study exactly, and then analyzing it directly, as needed in this study, using data content analysis. Keywords: Economics, Pancasila, Islam, Al-Qur'an, Justice, Welfare.
- Published
- 2019
- Full Text
- View/download PDF
35. Data Content Weighing for Subjective versus Objective Picture Quality Assessment of Natural Pictures
- Author
-
Suresha D and H. N. Prakash
- Subjects
03 medical and health sciences ,0302 clinical medicine ,Multimedia ,Computer science ,Image quality ,0202 electrical engineering, electronic engineering, information engineering ,Natural (music) ,020206 networking & telecommunications ,Data content ,02 engineering and technology ,computer.software_genre ,computer ,030218 nuclear medicine & medical imaging - Published
- 2017
- Full Text
- View/download PDF
36. Web Usage Mining using Patterns with Different Algorithms
- Author
-
Sobia Mehrban
- Subjects
Web server ,Web mining ,Computer science ,Data discovery ,Data content ,Web usage ,computer.software_genre ,computer ,Algorithm - Abstract
Web usage mining is a part of data mining. Data mining on the web is divided into three parts: 1) data content mining, 2) data structure mining, and 3) data usage mining. This paper discusses the log files used in data usage mining. Log files store users' activity on a web server as they use websites, so that the websites can be improved by gathering user data. Web usage mining has three sub-parts: preprocessing, pattern discovery and pattern analysis. The paper then gives details about web log files and discusses three algorithms used to mine patterns from them; their comparison is shown with the help of graphs. (A sketch of the log preprocessing step follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
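As a hedged illustration of the preprocessing step mentioned in entry 36, the sketch below parses web-server access-log lines (Common Log Format is assumed) and groups successful requests per client host, which is a typical starting point for pattern discovery. The log format, file name and filtering rule are assumptions and are not taken from the paper.

```python
# Minimal sketch: preprocess web-server access logs (Common Log Format assumed)
# and group requests per client, as a first step toward pattern discovery.
import re
from collections import defaultdict

LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def sessions_from_log(path):
    """Return {client host: [requested paths, in order]} from an access log."""
    visits = defaultdict(list)
    with open(path) as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m and m.group("status").startswith("2"):  # keep successful hits only
                visits[m.group("host")].append(m.group("path"))
    return visits

# "access.log" is a hypothetical log file.
for host, pages in sessions_from_log("access.log").items():
    print(host, "->", " > ".join(pages[:5]))
```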
37. Diagnostic and Assessment Benefits and Barriers of BIM in Construction Project Management
- Author
-
Kamil A. K. Al-Shaikhli, Ibraheem A. Mohammed, and Faiq M. S. Al-Zwainy
- Subjects
Construction management ,Engineering ,Environmental Engineering ,Process management ,business.industry ,0211 other engineering and technologies ,02 engineering and technology ,Building and Construction ,010501 environmental sciences ,Geotechnical Engineering and Engineering Geology ,01 natural sciences ,Building information modeling ,Data exchange ,021105 building & construction ,Operations management ,Data content ,business ,Personal interview ,0105 earth and related environmental sciences ,Civil and Structural Engineering - Abstract
This paper aims to diagnose and assess the benefits of and barriers to Building Information Modelling (BIM) in construction project management. Both open and closed questionnaires were used to explore the views of a number of Iraqi engineers, in order to investigate the level of BIM implementation in the Iraqi construction sector. The questionnaire indicated an acceptable awareness of BIM in Iraq, especially among the young generation of engineers, which points to the arrival of an evolutionary stream of BIM in the next few years. Moreover, the questionnaire showed that the most important advantage of implementing BIM in the Iraqi construction sector was the ability to generate accurate 2D plans at any stage, while the least important advantage was the ability to provide careful planning of the site facilities, with relative importances of 82% and 33% respectively. Furthermore, the most important barrier to its implementation was the unspecified responsibilities for data content, while the least important barrier was the limited efficiency of programs in data exchange and internal collaboration, with relative importances of 81% and 34% respectively. (A sketch of a relative importance calculation follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
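Entry 37 reports relative importance percentages for BIM benefits and barriers. A common convention for deriving such figures from Likert-scale questionnaire responses is a Relative Importance Index (RII); the sketch below uses that convention with invented ratings, since the paper does not state its exact formula.

```python
# Minimal sketch: Relative Importance Index (RII) for questionnaire items,
# a common convention for ranking benefits/barriers. The formula choice and
# the example ratings are assumptions, not taken from the paper.
def relative_importance(responses, max_scale=5):
    """RII = sum of ratings / (highest possible rating * number of respondents)."""
    return sum(responses) / (max_scale * len(responses))

benefits = {
    "Accurate 2D plans at any stage": [5, 4, 5, 4, 4],        # hypothetical ratings
    "Careful planning of site facilities": [2, 1, 2, 2, 1],
}
for item, ratings in benefits.items():
    print(f"{item}: {relative_importance(ratings):.0%}")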
38. Understanding the content and features of open data portals in American cities
- Author
-
Kristen Wolslegel, Genie N. L. Stowers, Jeffrey Thorsby, and Ellie Tumbuan
- Subjects
Open government ,education.field_of_study ,Index (economics) ,Sociology and Political Science ,05 social sciences ,Population ,Regression analysis ,Variation (game tree) ,Library and Information Sciences ,Data science ,0506 political science ,Open data ,Geography ,Categorization ,050602 political science & public administration ,Data content ,0509 other social sciences ,050904 information & library sciences ,education ,Law - Abstract
In this paper, we present the results of research on the features and content of open data portals in American cities. Five scales are developed to categorize and describe these portals: the Open Data Portal Index (ODPI), the Data Content Index (DCI), a compilation of the two (the Overall Index), the number of datasets, and the number of datasets per 100,000 residents. Regression models explaining variation between cities on these scales indicate city population as an important influence, along with participation in a regional consortium. More variation could be explained in the number-of-datasets model (79.8%) than in any other model. Overall, the results indicate that portals are in a very early stage of development and need a great deal of work to improve user help and analysis features, as well as the inclusion of features that help citizens understand the data, such as more charting and analysis. (A sketch of the index-and-regression setup follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
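To make the analysis in entry 38 concrete, the sketch below combines two portal sub-indices into an overall score (a simple average is assumed; the paper's exact compilation may differ) and fits an ordinary least squares regression of dataset counts on city population. All city values are invented.

```python
# Minimal sketch: combine portal sub-indices and regress dataset counts on
# population. City names, index values and counts are hypothetical.
import numpy as np

# (city, ODPI, DCI, population, number of datasets)
cities = [
    ("City A", 0.8, 0.7, 850_000, 420),
    ("City B", 0.5, 0.6, 300_000, 150),
    ("City C", 0.9, 0.8, 2_700_000, 1100),
    ("City D", 0.4, 0.3, 120_000, 60),
]

# Overall Index assumed here to be the mean of ODPI and DCI.
overall_index = {name: (odpi + dci) / 2 for name, odpi, dci, _, _ in cities}

# OLS regression: number of datasets ~ population.
pop = np.array([c[3] for c in cities], dtype=float)
n_datasets = np.array([c[4] for c in cities], dtype=float)
X = np.column_stack([np.ones_like(pop), pop])
coef, *_ = np.linalg.lstsq(X, n_datasets, rcond=None)

print("overall index per city:", overall_index)
print("intercept and slope per resident:", coef)
```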
39. A Novel Approach of Encoding for Image Steganography with Reduced Data Content
- Author
-
Rakesh Kumar, Ashadeep Kaur, and Nidhi Bhatla
- Subjects
Steganography tools ,Information retrieval ,Multimedia ,Computer science ,05 social sciences ,050301 education ,02 engineering and technology ,Image steganography ,computer.software_genre ,Encoding (memory) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data content ,0503 education ,computer - Published
- 2016
- Full Text
- View/download PDF
40. Mouse Genome Database (MGD)-2017: community knowledge resource for the laboratory mouse
- Author
-
Carol J. Bult, Cynthia L. Smith, James A. Kadin, Judith A. Blake, Joel E. Richardson, and Janan T. Eppig
- Subjects
0301 basic medicine ,Mouse Genome Database ,Computational biology ,Web Browser ,Biology ,ENCODE ,Polymorphism, Single Nucleotide ,Genome ,Mice ,03 medical and health sciences ,0302 clinical medicine ,Resource (project management) ,Databases, Genetic ,Genetics ,Database Issue ,Animals ,Humans ,Genetic Predisposition to Disease ,Data content ,Alleles ,Web browser ,Laboratory mouse ,Computational Biology ,Genomics ,Search Engine ,Gene Ontology ,030104 developmental biology ,Knowledge resource ,Software ,030217 neurology & neurosurgery - Abstract
The Mouse Genome Database (MGD: http://www.informatics.jax.org) is the primary community data resource for the laboratory mouse. It provides a highly integrated and highly curated system offering a comprehensive view of current knowledge about mouse genes, genetic markers and genomic features as well as the associations of those features with sequence, phenotypes, functional and comparative information, and their relationships to human diseases. MGD continues to enhance access to these data, to extend the scope of data content and visualizations, and to provide infrastructure and user support that ensures effective and efficient use of MGD in the advancement of scientific knowledge. Here, we report on recent enhancements made to the resource and new features.
- Published
- 2016
- Full Text
- View/download PDF
41. PISA-LIKE: Uncertainty and data content in Statistics subject with futsal context
- Author
-
Mutia, K N S Effendi, and Sutirna
- Subjects
History ,Mathematics education ,Subject (documents) ,Context (language use) ,Data content ,Psychology ,Computer Science Applications ,Education - Abstract
This research is development research that produces a valid PISA-LIKE task on uncertainty and data content with a futsal context. The development model used in this study is the 4-D model (Define, Design, Develop, and Disseminate). The development phases discussed are the results of expert review and simulation. The expert review stage involved material experts and a language expert, and grade IX students in a junior high school in Karawang served as the subjects at the simulation stage. Data collection was done by documentation and interviews. Based on the results of the data analysis, it can be concluded that this research has produced a valid PISA-LIKE task on uncertainty and data content with a futsal context, as can be seen from the comments given by the validators and by the students at the simulation stage.
- Published
- 2021
- Full Text
- View/download PDF
42. GeoBroker: A pub/sub broker considering geo-context information
- Author
-
Jonathan Hasenburg and David Bermbach
- Subjects
World Wide Web ,Software ,business.industry ,Computer science ,Control data ,Bandwidth (signal processing) ,Data content ,business ,Credential - Abstract
The majority of today's pub/sub brokers focus only on data content when deciding where to distribute data. In this paper, we introduce the GeoBroker pub/sub broker software, which additionally considers the geo-context associated with data producers and consumers. As a result, data consumers only receive relevant messages, which preserves bandwidth and compute power, while data producers can use the geo-context of clients to control data distribution, e.g., for enhanced privacy or as an alternative to credential-based authentication. (A sketch of geo-context matching follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
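The sketch below illustrates the core matching idea from entry 42: a broker delivers a message only to subscribers whose topic matches and whose geofence (here a circle defined by a centre and radius) contains the publisher's location. GeoBroker itself is a Java system; the class names, the haversine check and the circular fences here are illustrative assumptions.

```python
# Minimal sketch: topic + geo-context matching in a pub/sub broker.
# Names, the circular geofence model and the haversine check are assumptions.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

@dataclass
class Subscription:
    topic: str
    lat: float
    lon: float
    radius_km: float  # circular geofence around the subscriber

def matching_subscribers(subs, topic, pub_lat, pub_lon):
    """Deliver only to subscribers whose topic and geofence both match."""
    return [s for s in subs
            if s.topic == topic
            and haversine_km(s.lat, s.lon, pub_lat, pub_lon) <= s.radius_km]

subs = [Subscription("traffic", 52.52, 13.40, 5.0),
        Subscription("traffic", 48.86, 2.35, 5.0)]
# A publisher near the first subscriber: only that geofence matches.
print(matching_subscribers(subs, "traffic", 52.53, 13.41))
```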
43. Systematic Assessment of Exposure Variations on Observed Bioactivity in Zebrafish Chemical Screening
- Author
-
Lindsay B Wilson, Robyn L Tanguay, Lisa Truong, and Michael T. Simonich
- Subjects
Prioritization ,Health, Toxicology and Mutagenesis ,010501 environmental sciences ,Pharmacology ,Biology ,lcsh:Chemical technology ,Toxicology ,high-throughput screening ,01 natural sciences ,Article ,03 medical and health sciences ,Screening design ,Full model ,lcsh:TP1-1185 ,Data content ,alternative testing ,Zebrafish ,030304 developmental biology ,0105 earth and related environmental sciences ,Protocol (science) ,0303 health sciences ,Chemical Health and Safety ,zebrafish ,exposure regimen ,biology.organism_classification ,Chemical screening ,bioactivity ,Standard protocol - Abstract
The embryonic zebrafish is a powerful tool for high-throughput screening of chemicals. While this model has significant potential for use in safety assessments and chemical prioritization, the lack of an exposure protocol harmonized across laboratories has limited full adoption of the model. To assess the potential for exposure protocols to alter chemical bioactivity, we screened a set of eight chemicals and one 2D nanomaterial across four different regimens: (1) the Tanguay laboratory's current standard protocol of dechorionated embryos and static exposure in darkness; (2) exposure with the chorion intact; (3) exposure under a 14 h light: 10 h dark cycle; and (4) exposure with daily chemical renewal. The latter three regimens altered the concentrations at which the test agents showed bioactivity compared to those observed with the Tanguay laboratory's standard regimen, though not in the same direction for each chemical. The results of this study indicate that, with the exception of the 2D nanomaterial, the screening design did not change the conclusion regarding chemical bioactivity, just the nominal concentrations producing the observed activity. Since the goal of tier-one chemical screening is often to differentiate active from non-active chemicals, researchers could consider the trade-offs regarding cost, labor, and sensitivity in their study design without altering hit rates. Taken further, these results suggest that it is reasonably feasible to reach agreement on a standardized exposure regimen, which will promote data sharing without sacrificing data content.
- Published
- 2020
- Full Text
- View/download PDF
44. A Robust Blind Video Watermarking Scheme based on Discrete Wavelet Transform and Singular Value Decomposition
- Author
-
Amal Ben Hamida, Amal Hammami, and Chokri Ben Amar
- Subjects
Discrete wavelet transform ,business.industry ,Computer science ,Data_MISCELLANEOUS ,020207 software engineering ,Pattern recognition ,Watermark ,02 engineering and technology ,Digital media ,Mid-frequency ,Robustness (computer science) ,Singular value decomposition ,0202 electrical engineering, electronic engineering, information engineering ,Data content ,Artificial intelligence ,business ,Digital watermarking - Abstract
Growth in the technological world has massively promoted information fraud and misappropriation through the ease with which multimedia data content can be regenerated and modified. Consequently, the security of digital media is considered one of the biggest issues in multimedia services. Watermarking, which consists of hiding a signature known as a watermark in a host signal, is one of the potential solutions used for media security and authentication. In this paper, we propose a robust video watermarking scheme using the Discrete Wavelet Transform and Singular Value Decomposition. We embed the watermark into the mid-frequency sub-bands using an additive method, and the extraction process follows a blind detection algorithm. Several attacks are applied and different performance metrics are computed to assess the robustness and the imperceptibility of the proposed watermarking. The results reveal that the proposed scheme is robust against different attacks and achieves a good level of imperceptibility. (A sketch of the embedding step follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
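Entry 44 outlines additive embedding in mid-frequency DWT sub-bands combined with SVD. The sketch below shows one plausible embedding step for a single frame using PyWavelets and NumPy; the choice of the HL sub-band, the Haar wavelet and the scaling factor alpha are assumptions rather than the authors' exact scheme, and the matching blind extractor is omitted.

```python
# Minimal sketch: additive DWT + SVD watermark embedding for one frame.
# Sub-band choice (HL), wavelet, alpha and library choices are assumptions.
import numpy as np
import pywt

def embed_watermark(frame, watermark, alpha=0.05):
    """Embed 'watermark' into the HL (mid-frequency) sub-band of 'frame'."""
    LL, (LH, HL, HH) = pywt.dwt2(frame.astype(float), "haar")
    U, S, Vt = np.linalg.svd(HL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = S + alpha * Sw                      # additive embedding in singular values
    HL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")

frame = np.random.randint(0, 256, (256, 256))      # stand-in for one video frame
wm = np.random.randint(0, 2, (128, 128))           # binary watermark, one per sub-band pixel
marked = embed_watermark(frame, wm)
print(marked.shape)
```

Extraction would invert these steps on the watermarked frame, which is why the singular values of the original watermark (or equivalent side information) must be available to the blind detector.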
45. A Comparison of Microbial Genome Web Portals
- Author
-
Wai Kit Ong, Peter D. Karp, Natalia Ivanova, Mario Latendresse, Markus Krummenacker, Peter E. Midford, Suzanne M. Paley, Nikos C. Kyrpides, and Rekha Seshadri
- Subjects
0106 biological sciences ,Microbiology (medical) ,genome databases ,Computer science ,Environmental Science and Management ,lcsh:QR1-502 ,Review ,Computational biology ,01 natural sciences ,Genome ,microbial genome databases ,Microbiology ,lcsh:Microbiology ,Omics data ,03 medical and health sciences ,010608 biotechnology ,Genetics ,Ensembl ,Quantitative Biology - Genomics ,Data content ,KEGG ,030304 developmental biology ,Genomics (q-bio.GN) ,0303 health sciences ,microbial genomes ,Human Genome ,microbial genomics ,genome portals ,ComputingMethodologies_PATTERNRECOGNITION ,FOS: Biological sciences ,Soil Sciences ,Table (database) ,Analysis tools ,Microbial genome ,Biotechnology - Abstract
Microbial genome web portals have a broad range of capabilities that address a number of information-finding and analysis needs for scientists. This article compares the capabilities of the major microbial genome web portals to aid researchers in determining which portal(s) are best suited to solving their information-finding and analytical needs. We assessed both the bioinformatics tools and the data content of BioCyc, KEGG, Ensembl Bacteria, KBase, IMG, and PATRIC. For each portal, our assessment compared and tallied the available capabilities. The strengths of BioCyc include its genomic and metabolic tools, multi-search capabilities, table-based analysis tools, regulatory network tools and data, omics data analysis tools, breadth of data content, and large amount of curated data. The strengths of KEGG include its genomic and metabolic tools. The strengths of Ensembl Bacteria include its genomic tools and large number of genomes. The strengths of KBase include its genomic tools and metabolic models. The strengths of IMG include its genomic tools, multi-search capabilities, large number of genomes, table-based analysis tools, and breadth of data content. The strengths of PATRIC include its large number of genomes, table-based analysis tools, metabolic models, and breadth of data content.
- Published
- 2019
46. Research on Addressing Method in XML File Based on XPointer
- Author
-
Chanyuan Fan and Zhijiang Li
- Subjects
Markup language ,Information retrieval ,computer.internet_protocol ,Computer science ,Location systems ,XSLT ,XLink ,XPointer ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Data content ,computer ,XML ,XPath ,computer.programming_language - Abstract
XML is a markup language that describes data and is widely used to exchange data across platforms. Currently, there are three main tools for addressing XML documents: XLink, XPath, and XPointer. Among them, XPointer is an advanced addressing tool, which can address not only elements but also specific data such as strings, points and ranges in XML documents. However, for positioning based on XPointer, how to extract non-well-formed data content from an XML document remains a problem. This paper focuses on the extraction of non-well-formed data content in XML documents. Based on XPath 3.0, the extraction and filtering of nodes is analyzed; based on XSLT templates, the content of XML documents is selectively output. Finally, a location system based on XPointer is derived, ultimately achieving advanced addressing for XML documents. Twenty XML files were selected as experimental samples to verify the model proposed in the paper. The experimental results demonstrate that the proposed method can locate and represent non-well-formed as well as well-formed data content in XML documents. (A sketch of XPath-based addressing follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
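Entry 46 concerns addressing elements and sub-element data in XML. The sketch below shows element and string addressing with XPath via lxml; true XPointer point/range addressing is not implemented by common Python libraries, so the substring step is only a simplified stand-in, and the sample document is invented.

```python
# Minimal sketch: addressing element and string content in an XML document
# with XPath (via lxml). XPointer point/range addressing is not shown; the
# substring step below is a simplified stand-in, and the XML is invented.
from lxml import etree

doc = etree.fromstring(
    "<book><title>XML Addressing</title>"
    "<chapter id='c1'>XPointer extends XPath with points and ranges.</chapter></book>"
)

# Element addressing: select a node by path and predicate.
chapter = doc.xpath("//chapter[@id='c1']")[0]

# String addressing: XPath string functions can pick out character data,
# roughly analogous to what an XPointer string range would select.
fragment = doc.xpath("substring(//chapter[@id='c1']/text(), 1, 8)")

print(chapter.tag, "->", fragment)   # chapter -> XPointer
```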
47. Africa’s Online Access: What Data Is Getting Accessed and Where It Is Hosted?
- Author
-
Alassane Diop, Désiré Bansé, Assane Gueye, and Babacar Mbaye
- Subjects
business.industry ,Internet privacy ,Data content ,Business ,Web content ,Repatriation - Abstract
Recent studies have shown that most of the web traffic going from one African country to another has to transit through ISPs on other continents before coming back to Africa. This phenomenon is known as boomerang routing, and proposals are being made on how to correct it. However, there is a more fundamental question that needs to be addressed: what web content is of interest to Africans, and where is it hosted? Indeed, if most of the data needed by Africans is within the continent and yet boomerang routing is still prevalent, then correcting it is of paramount importance. If, on the other hand, most of the data accessed by Africans is hosted outside the continent, then data repatriation might be more beneficial than boomerang correction.
- Published
- 2019
- Full Text
- View/download PDF
48. Using Models for Communication in Cyber-Physical Systems
- Author
-
Yaser P. Fallah
- Subjects
Structure (mathematical logic) ,Computer science ,Mechanism (biology) ,Distributed computing ,Control (management) ,Process (computing) ,Cyber-physical system ,Communication link ,Data content ,Collision avoidance - Abstract
One of the main components of cyber-physical systems (CPS) is the underlying communication mechanism that enables control and decision-making. Communication has traditionally taken the form of sensing a physical phenomenon, or a cyber process, and then transmitting the sensed data to other entities within the system. Since a CPS is in general much more complex than a single physical or cyber process, the requirements on communication and data content are high, and communicating all the information required for control of a CPS may become a challenge. In this chapter, we present a new communication paradigm that transmits models and model updates rather than raw sensed data. This approach, which transforms the overall communication structure, has the potential to considerably reduce the communication load and to provide a mechanism for a richer understanding of the processes whose data is being received over a communication link. We take the example of a vehicular CPS that relies on communication for collision avoidance and demonstrate the effectiveness of the model-based communication (MBC) concept. (A sketch of the model-update idea follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
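Entry 48 describes communicating models and model updates instead of raw sensor samples. The sketch below applies that idea to a one-dimensional vehicle position: the sender keeps a shared constant-velocity model and transmits an update only when the model's prediction error exceeds a bound. The threshold, state model and data are illustrative assumptions, not the chapter's actual MBC design.

```python
# Minimal sketch: model-based communication (MBC) for a vehicle's position.
# The sender transmits a model update (position + velocity) only when the
# shared constant-velocity model's prediction error exceeds a bound.
# Threshold, model form and data are illustrative assumptions.

def simulate_mbc(samples, dt=0.1, max_error=0.5):
    """Return the model updates a sender would transmit for 1-D positions."""
    updates = []
    model_pos, model_vel, t_last = samples[0], 0.0, 0.0
    updates.append((0.0, model_pos, model_vel))            # initial model
    for i, true_pos in enumerate(samples[1:], start=1):
        t = i * dt
        predicted = model_pos + model_vel * (t - t_last)   # receiver's estimate
        if abs(true_pos - predicted) > max_error:
            model_vel = (true_pos - model_pos) / (t - t_last)
            model_pos, t_last = true_pos, t
            updates.append((t, model_pos, model_vel))       # send a model update
    return updates

# A vehicle accelerating from rest: far fewer updates than raw samples.
positions = [0.5 * 0.8 * (i * 0.1) ** 2 for i in range(60)]
print(len(positions), "raw samples ->", len(simulate_mbc(positions)), "model updates")
```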
49. CPPsite 2.0: a repository of experimentally validated cell-penetrating peptides
- Author
-
Salman Sadullah Usmani, Ankur Gautam, Piyush Agrawal, Sherry Bhalla, Sandeep Singh, Kumardeep Chaudhary, and Gajendra P. S. Raghava
- Subjects
0301 basic medicine ,Drug Carriers ,Protein Conformation ,Cell-Penetrating Peptides ,Computational biology ,Biology ,Bioinformatics ,Protein tertiary structure ,Structure-Activity Relationship ,03 medical and health sciences ,030104 developmental biology ,Genetics ,Database Issue ,Data content ,Databases, Protein - Abstract
CPPsite 2.0 (http://crdd.osdd.net/raghava/cppsite/) is an updated version of the manually curated database (CPPsite) of cell-penetrating peptides (CPPs). The current version holds around 1850 peptide entries, nearly twice the number in the previous version. The updated data were curated from research papers and patents published in the last three years. It was observed that most of the CPPs discovered or tested in the last three years have diverse chemical modifications (e.g. non-natural residues, linkers, lipid moieties, etc.). We have compiled this information on chemical modifications systematically in the updated version of the database. In order to understand the structure-function relationship of these peptides, we predicted the tertiary structure of CPPs, possessing both modified and natural residues, using state-of-the-art techniques. CPPsite 2.0 also maintains information about the model systems (in vitro/in vivo) used for CPP evaluation and the different types of cargo (e.g. nucleic acids, proteins, nanoparticles, etc.) delivered by these peptides. In order to assist a wide range of users, we developed a user-friendly, responsive website with various tools, suitable for smartphone, tablet and desktop users. In conclusion, CPPsite 2.0 provides significant improvements over the previous version in terms of data content.
- Published
- 2015
- Full Text
- View/download PDF
50. EBI metagenomics in 2016 - an expanding and evolving resource for the analysis and archiving of metagenomic data
- Author
-
François Bucchini, Peter Sterk, Petra ten Hoopen, Sebastien Pesseat, Maxim Scheremetjew, Matthew Fraser, Alex L. Mitchell, Robert D. Finn, Simon C. Potter, Guy Cochrane, and Hubert Denise
- Subjects
0301 basic medicine ,Internet ,Resource (biology) ,business.industry ,Gene Expression Profiling ,Oceans and Seas ,Biology ,Bioinformatics ,Data science ,Pipeline (software) ,Ocean sampling ,03 medical and health sciences ,030104 developmental biology ,Metagenomics ,Data presentation ,Genetics ,Database Issue ,Web application ,The Internet ,Data content ,14. Life underwater ,Databases, Nucleic Acid ,business ,Software - Abstract
EBI metagenomics (https://www.ebi.ac.uk/metagenomics/) is a freely available hub for the analysis and archiving of metagenomic and metatranscriptomic data. Over the last 2 years, the resource has undergone rapid growth, with an increase of over five-fold in the number of processed samples and consequently represents one of the largest resources of analysed shotgun metagenomes. Here, we report the status of the resource in 2016 and give an overview of new developments. In particular, we describe updates to data content, a complete overhaul of the analysis pipeline, streamlining of data presentation via the website and the development of a new web based tool to compare functional analyses of sequence runs within a study. We also highlight two of the higher profile projects that have been analysed using the resource in the last year: the oceanographic projects Ocean Sampling Day and Tara Oceans.
- Published
- 2015
- Full Text
- View/download PDF