247 results for "Data content"
Search Results
2. Africa’s Online Access: What Data Is Getting Accessed and Where It Is Hosted?
- Author
-
Mbaye, Babacar, Gueye, Assane, Banse, Desire, Diop, Alassane, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin (Sherman), Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Bassioni, Ghada, editor, Kebe, Cheikh M.F., editor, Gueye, Assane, editor, and Ndiaye, Ababacar, editor
- Published
- 2019
- Full Text
- View/download PDF
3. La genèse systémique d'empreinte pour une maîtrise de l'observation de la Terre [The systemic genesis of imprint for mastering Earth observation].
- Author
-
Fargette, Mireille, Loireau, Maud, Raouani, Najet, and Libourel, Thérèse
- Subjects
- DATA analysis, ACQUISITION of data, ONTOLOGY, CONCEPTUAL models, PHYSIOGNOMY, SCIENTIFIC knowledge
- Abstract
This work concerns observation: scientific knowledge acquired from what is perceived (the Link making Sense) of a complex systemic world. The approach leads to proposing the concept of imprint within the interdisciplinary framework "System - Reality - World as perceived - Model" and testing it against data, and then to proposing systemic ontology as an approach. This makes it possible to deploy the Link making Shape from the systemic domain to the world as perceived, to analyze and describe the relevant part of the data, and to show how this largely symbolic work can contribute, in its semantic, technological and digital aspects, to better control of data collection and analysis, and of Earth observation as a whole. Illustrations focus mainly on oases. The work discusses its contribution to constructing a science of observation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Drug databases and their contributions to drug repurposing.
- Author
-
Masoudi-Sobhanzadeh, Yosef, Omidi, Yadollah, Amanlou, Massoud, and Masoudi-Nejad, Ali
- Subjects
- DATABASES, WEB services, RARE diseases
- Abstract
Drug repurposing is an attractive field within drug discovery because it reduces time and cost. It is also considered an appropriate route to finding medications for orphan and rare diseases. Hence, many researchers have proposed novel methods based on databases containing different kinds of information, and a suitable organization of data that facilitates repurposing applications and provides a tool or web service can be beneficial. In this review, we categorize drug databases and discuss their advantages and disadvantages. Surprisingly, to the best of our knowledge, the importance and potential of databases in drug repurposing are yet to be emphasized. Indeed, the available databases can be divided into several groups based on data content, and the different classes can be applied to find new applications for existing drugs. Furthermore, we offer suggestions for making databases more effective and popular in this field. • We categorize drug databases based on their data content. • We describe how each class of database can be applied to repurposing. • The advantages and disadvantages of the databases are discussed. • Future directions for developing and improving the databases are proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. A Cloud-Based Data Network Approach for Translational Cancer Research
- Author
-
Xing, Wei, Tsoumakos, Dimitrios, Ghanem, Moustafa, Cohen, Irun R., Series editor, Lajtha, N.S. Abel, Series editor, Paoletti, Rodolfo, Series editor, Lambris, John D., Series editor, Vlamos, Panayiotis, editor, and Alexiou, Athanasios, editor
- Published
- 2015
- Full Text
- View/download PDF
6. Long-Term Temporal Data Representation of Personal Health Data
- Author
-
Mallaug, Tore, Bratbergsengen, Kjell, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Dough, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Eder, Johann, editor, Haav, Hele-Mai, editor, Kalja, Ahto, editor, and Penjam, Jaan, editor
- Published
- 2005
- Full Text
- View/download PDF
7. A Study of the Design of Data Content Works Metaphorically Linked to Taste - Focusing on Prototypes, [The Banquet of Mouth] Series
- Author
-
Zune Lee
- Subjects
- Banquet, Taste (sociology), Ocean Engineering, Data content, Art, Visual arts
- Published
- 2021
- Full Text
- View/download PDF
8. Searchable Querical Data Networks
- Author
-
Banaei-Kashani, Farnoush, Shahabi, Cyrus, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Aberer, Karl, editor, Koubarakis, Manolis, editor, and Kalogeraki, Vana, editor
- Published
- 2004
- Full Text
- View/download PDF
9. Optimization of Balanced Menu for Pregnant Women in Grobogan-Central Java using Simplex Method
- Author
-
Fitroh Resmi, Pukky Tetralian Bantining Ngastiti, and Nihaya Alivia Coraima Dewi
- Subjects
- Vitamin, Dietary composition, stunting, food and beverages, Toxicology, chemistry, Optimal combination, Data content, simplex method, pregnant women, Mathematics
- Abstract
This study aims to determine an optimal balanced dietary composition for pregnant women. The optimization is formulated as a linear model with boundary conditions and an objective function; data on the mother's age, gestational age and nutritional needs are used as inputs, and the simplex method is applied to obtain the weights of the food ingredients that must be consumed for balanced nutrition. In total, 75 combinations were analyzed for pregnant women aged 19-29 years and 30-49 years across the three trimesters, covering staple foods, vegetables (spinach, green mustard, cauliflower, kale, carrots), fruit, vegetable side dishes, nuts, sugar and milk, against the recommended nutritional adequacy rates for water, energy, protein, fat, carbohydrate (KH), fiber, vitamins A, B1, B2, B3 and vitamin C. For both age groups in all three trimesters, combination 55 (rice, kale, watermelon, and tofu) was found to be optimal.
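The balanced-menu problem described above is a linear program. As a toy illustration only (two foods, two nutrients, made-up figures, not the paper's 75-combination data), the optimum of such a small LP can be found by enumerating the vertices of the feasible region, which is where a simplex-type method also lands:

```python
from itertools import combinations

def solve_2x2(a, b):
    """Solve a 2x2 linear system a.x = b; return None if singular."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if abs(det) < 1e-12:
        return None
    x = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    y = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return (x, y)

def minimize_cost(cost, constraints):
    """Enumerate vertices of {x : a.x >= b for all (a, b)} and return
    (cost, point) for the cheapest feasible vertex.  For a 2-variable LP
    the optimum, if one exists, lies on a vertex of the feasible region."""
    best = None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        pt = solve_2x2([a1, a2], [b1, b2])
        if pt is None:
            continue
        # keep only points satisfying *all* constraints
        if all(a[0] * pt[0] + a[1] * pt[1] >= b - 1e-9 for a, b in constraints):
            c = cost[0] * pt[0] + cost[1] * pt[1]
            if best is None or c < best[0]:
                best = (c, pt)
    return best

# Hypothetical per-100 g figures (illustrative, not the paper's data):
# rice: 130 kcal, 2.7 g protein, cost 0.10; spinach: 23 kcal, 2.9 g, cost 0.30
constraints = [
    ((130.0, 23.0), 500.0),  # energy  >= 500 kcal
    ((2.7, 2.9), 20.0),      # protein >= 20 g
    ((1.0, 0.0), 0.0),       # rice    >= 0
    ((0.0, 1.0), 0.0),       # spinach >= 0
]
min_cost, (rice, spinach) = minimize_cost((0.10, 0.30), constraints)
```

With these stand-in numbers the cheapest feasible menu is all rice; the paper's full model adds many more foods and nutrient constraints, which is why a proper simplex implementation is needed there.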
- Published
- 2021
- Full Text
- View/download PDF
10. Partial Decoding of the GPS Extended Prediction Orbit File
- Author
-
Vladimir Vinnikov, Maria Gritsevich, Ekaterina Pshehotskaya, Sergey Balandin, Yevgeni Koucheryavy, and Tatiana Tyutina
- Subjects
- extended ephemeris, decoding, Computer science, Data field, assisted GPS, data structure, Physical sciences, almanac, ciphertext-only attack, extended prediction orbit, Mathematics, Data content, binary format, Astronomy and space science, Orbit (dynamics), Global Positioning System, Telecommunication, Table (database), Satellite, Web service, Algorithm, Decoding methods
- Abstract
The paper is concerned with decoding the Extended Prediction Orbit data format file of an Assisted-GPS web service via a ciphertext-only attack. We consider the mandatory data content of the file and reveal how this content changes at different moments. The frequency of changes hints at the location of records for the current GPS date and satellite orbit information. Comparing the repeating data patterns against reference orbit information, we obtain the meaning of the data fields of the orbit record for each operational satellite. The partially deciphered GPS almanac data layout is provided as a table within the paper.
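The first step of such an analysis, spotting which byte offsets change between successive downloads of the file, can be sketched in a few lines (the snapshots below are hypothetical toy records, not the real EPO format):

```python
def changing_offsets(snapshots):
    """Return the byte offsets whose values differ across snapshots.
    Offsets that never change are candidates for format constants;
    offsets that change between downloads hint at dates or orbit data."""
    first = snapshots[0]
    return [i for i in range(len(first))
            if any(snap[i] != first[i] for snap in snapshots[1:])]

# Three hypothetical snapshots of the same 6-byte record taken at
# different moments: bytes 0-2 are static, bytes 3-5 drift over time.
snaps = [
    bytes([0xA5, 0x01, 0x00, 0x10, 0x33, 0x07]),
    bytes([0xA5, 0x01, 0x00, 0x11, 0x35, 0x07]),
    bytes([0xA5, 0x01, 0x00, 0x12, 0x31, 0x08]),
]
variable_fields = changing_offsets(snaps)
```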
- Published
- 2021
11. The impact of acquisition geometry on full-waveform inversion updates
- Author
-
Denes Vigh, Xin Cheng, Wei Kang, Kun Jiao, and Nolan Brand
- Subjects
- Earth model, Geophysics, Geology, Inversion (meteorology), Data content, Full waveform
- Abstract
Full-waveform inversion (FWI) is a high-resolution model-building technique that uses the entire recorded seismic data content to build the earth model. Conventional FWI usually utilizes diving and refracted waves to update the low-wavenumber components of the velocity model. However, updates are often depth limited due to the limited offset range of the acquisition design. To extend conventional FWI beyond the limits imposed by using only transmitted energy, we must utilize the full acquired wavefield. Analyzing FWI kernels for a given geology and acquisition geometry can provide information on how to optimize the acquisition so that FWI is able to update the velocity model for targets as deep as basement level. Recent long-offset ocean-bottom node acquisition helped FWI succeed, but we would also like to be able to utilize the shorter-offset data from wide-azimuth data acquisitions to improve imaging of these data sets by developing the velocity field with FWI. FWI models are heading toward higher and higher wavenumbers, which allows us to extract pseudoreflectivity directly from the developed velocity model built with the acoustic full wavefield. This is an extremely early start to obtaining a depth image that one would usually produce in much later processing stages.
- Published
- 2021
- Full Text
- View/download PDF
12. i-Cube: A Tool-Set for the Dynamic Extraction and Integration of Web Data Content
- Author
-
Poon, Frankie, Kontogiannis, Kostas, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Kou, Weidong, editor, Yesha, Yelena, editor, and Tan, Chung Jen, editor
- Published
- 2001
- Full Text
- View/download PDF
13. A Heuristic Approach for Converting HTML Documents to XML Documents
- Author
-
Lim, Seung-Jin, Ng, Yiu-Kai, Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Lloyd, John, editor, Dahl, Veronica, editor, Furbach, Ulrich, editor, Kerber, Manfred, editor, Lau, Kung-Kiu, editor, Palamidessi, Catuscia, editor, Pereira, Luís Moniz, editor, Sagiv, Yehoshua, editor, and Stuckey, Peter J., editor
- Published
- 2000
- Full Text
- View/download PDF
14. Model of Learning Activities for Improving Life Quality of the Elderly Using Elderly School as Base
- Author
-
Phackaphon Salathong and Rawiwan Thoranee
- Subjects
- Syllabus, Medical education, Process (engineering), Education, Developmental and Educational Psychology, Life quality, Sample (statistics), Data content, Psychology, Set (psychology), General Psychology, Field (computer science)
- Abstract
This research aims to analyze learning activities, and the format of those activities, for developing the quality of life of the elderly using the school as a base. Data were collected from in-depth interviews, discussions, observation and field reporting, and analyzed with content-oriented and descriptive methods based on actual occurrences. The survey finds that learning activities to develop the life quality of the elderly take into consideration: 1) the background of the school for the elderly, 2) the objective of knowledge management, 3) the content or course outline taught within the school, 4) the format, process and method of knowledge management, and 5) evaluation and monitoring. Regarding the format of the activities, two characteristics were found: 1. knowledge-management activities in institutional form, with a course and syllabus, and 2. knowledge-management activities in rural form, which select the attendees and teach by demonstration, setting examples to follow.
- Published
- 2021
- Full Text
- View/download PDF
15. PROJETO DE EXTENSÃO: AS CONTRIBUIÇÕES PARA O ALCANCE DA AGENDA 2030 PARA O DESENVOLVIMENTO SUSTENTÁVEL [Extension project: contributions to achieving the 2030 Agenda for Sustainable Development]
- Author
-
Adeildo Cabral da Silva, Nájila Rejanne Alencar Julião Cabral, and Maria de Lourdes da Silva Neta
- Subjects
- Social commitment, Political science, Local Development, Data content, General Medicine, Humanities
- Abstract
This paper analyzes the environmental education actions performed in extension projects by IFCE (Federal Institute of Ceará) students and teachers from 2015 to 2019, developed in the Casa Maranguape Project, with respect to the adoption of the Sustainable Development Goals (SDGs) of the 2030 Agenda. The research took a qualitative approach and was based on document research using the data content analysis technique described by Silva and Fossa (2015). The educational contexts emphasized the extension action and its contribution to achieving the SDGs. The results show an environmental education approach in 67% of the activities, with adherence to 11 of the 17 SDGs, demonstrating interdisciplinarity with the 2030 Agenda. As for the contribution to teaching practice, the institution's social commitment was observed, along with the achievement of the fundamental mission of extension, which is to provide answers to society, in addition to the acquisition of experimental knowledge aimed at sustainable local development.
- Published
- 2021
- Full Text
- View/download PDF
16. PubChem in 2021: new data content and improved web interfaces
- Author
-
Kim, Sunghwan, Chen, Jie, Cheng, Tiejun, Gindulyte, Asta, He, Jia, He, Siqian, Li, Qingliang, Shoemaker, Benjamin A, Thiessen, Paul A, Yu, Bo, Zaslavsky, Leonid, Zhang, Jian, and Bolton, Evan E
- Subjects
- Coronavirus disease 2019 (COVID-19), Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Information Storage and Retrieval, Biology, World Wide Web, User-Computer Interface, Information resource, Drug Discovery, Genetics, Humans, Data content, Epidemics, Internet, Data collection, Data model, Oncology & carcinogenesis, Public Health, PubChem, Databases (Chemical), Software
- Abstract
PubChem (https://pubchem.ncbi.nlm.nih.gov) is a popular chemical information resource that serves the scientific community as well as the general public, with millions of unique users per month. In the past two years, PubChem made substantial improvements. Data from more than 100 new data sources were added to PubChem, including chemical-literature links from Thieme Chemistry, chemical and physical property links from SpringerMaterials, and patent links from the World Intellectual Property Organization (WIPO). PubChem's homepage and individual record pages were updated to help users find desired information faster. This update involved a data model change for the data objects used by these pages as well as by programmatic users. Several new services were introduced, including the PubChem Periodic Table and Element pages, Pathway pages, and Knowledge panels. Additionally, in response to the coronavirus disease 2019 (COVID-19) outbreak, PubChem created a special data collection that contains PubChem data related to COVID-19 and the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).
- Published
- 2020
17. توظيف صحافة البيانات في تناول فيروس كورونا المستجد بالمواقع الإلكترونية العربية والعالمية - دراسة تحليلية [Arab and international websites' usage of data journalism during the coronavirus epidemic: an analytical study]
- Subjects
- Medical staff, Coronavirus disease 2019 (COVID-19), Information accessibility, Data content, Sociology, Social science, Information coverage, Data journalism
- Abstract
This study examined how Arab and global websites utilize data journalism in covering news of the novel coronavirus (COVID-19). This was achieved by analyzing various types of data journalism, in both format and content, particularly data-driven news stories related to COVID-19, published on the websites "Masrawy", "Okaz", "Al Ain", "The Guardian" and "USA Today" from January to April 2020. The research used media richness theory as its theoretical framework, applied a media survey methodology, and used a format-and-content analysis tool on a sample of 1,398 media items representing various types of data journalism related to COVID-19. The study found that global websites gave more attention to data journalism in terms of the number of data-driven news stories about COVID-19, while the Arab sites had an edge in utilizing infographics and multimedia on the same topic. The global websites also showed considerable information richness in their data journalism content, owing to their reliance on specialized sources: scientists, researchers, medical staff, and research centers and laboratories specialized in diseases and epidemiology. The Arab websites, by contrast, offered weak information coverage, relying mainly on official sources regardless of the specialization the topic demands, and suffering from limitations on access to information sources. The findings also reflected a considerable gap in how the sites benefit from media richness: interactivity and audience engagement (participating, sharing, commenting, and even crowdsourcing) were much higher on the global websites than on the Arab websites, with the exception of the "Masrawy" website.
- Published
- 2020
- Full Text
- View/download PDF
18. A survey on the enablers and nurturers of physical activity in women with prediabetes
- Author
-
Hosein Fallahzadeh, Seyed Saeed Mazloomy Mahmoodabad, Hamid Karimi, Fereshteh Sohrabi Vafa, and Ali Akbar Vaezi
- Subjects
- Gerontology, Medicine, Enabling factors, Physical activity, Endocrinology & metabolism, PEN-3 model, Prediabetes, Service provider, Nature versus nurture, Data content, General & internal medicine, Metabolic syndrome
- Abstract
Background and objective: Metabolic syndrome, especially prediabetes, is one of the most common health problems arising from incomplete glucose metabolism and has a direct relationship with lifestyle. This study was conducted to determine the factors that enable and nurture physical activity in women with prediabetes, based on the PEN-3 model. Material and method: This descriptive-analytical study was conducted on 41 prediabetic women aged 30-65 years and 9 service providers in health centers. Data were collected through semi-structured individual interviews based on the PEN-3 model and analyzed with the Graneheim and Lundman method. Results: Two main themes, enabling factors and nurturing factors in the domain of physical activity, and six classes, enablers (positive, negative, and existential) and nurturers (positive, negative, and existential), were extracted from the data content. Conclusion: By determining enablers and nurturers, service providers can facilitate the participation of prediabetic women in physical activity by applying positive social and structural effects and by eliminating negative environmental conditions.
- Published
- 2020
19. A Novel Approach of Data Content Zeroization Under Memory Attacks
- Author
-
Ankush Srivastava and Prokash Ghosh
- Subjects
- Authentication, Computer science, Electrical and electronic engineering, Mode (computer interface), Embedded system, Data content, Federal Information Processing Standards, Wearable technology, Conventional memory, Random access
- Abstract
Protecting users’ secret data on devices such as smartphones, tablets and wearables from memory attacks is a perennial challenge for system designers. The most stringent security requirements and protocols in today’s state-of-the-art systems are governed by the Federal Information Processing Standards (FIPS). Specifically, they require that sensitive data be erased from random access memories (RAMs) and associated flip-flop based registers as soon as a security violation is detected. Traditionally, sensitive data such as authentication credentials, cryptographic keys and other on-chip secrets are erased (zeroized) by sequential write transactions initiated either by dedicated hardware or by software. This paper, for the first time, proposes a novel approach: erasing secured data content from on-chip RAMs using conventional memory built-in self-test (MBIST) hardware in mission mode. The proposed zeroization approach proves substantially faster than traditional techniques. Because it reuses the memory BIST hardware for on-chip data content zeroization, it saves silicon area and power by removing dedicated zeroization hardware from the device. The paper also discusses the micro-architectural implementation and security challenges of using memory BIST hardware in mission mode and proposes practical solutions to fill the gaps.
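As a software-level illustration of the sequential-write baseline the paper contrasts with its MBIST approach (a sketch only; real zeroization happens in hardware, below any language runtime):

```python
def zeroize(buf: bytearray) -> None:
    """Sequentially overwrite a sensitive buffer in place, then read it
    back to verify -- loosely mirroring a write pass followed by a
    March-style read-verify pass over a zero background pattern."""
    for i in range(len(buf)):
        buf[i] = 0x00
    # verify pass: every cell must now hold the background pattern
    for i, byte in enumerate(buf):
        if byte != 0x00:
            raise RuntimeError(f"zeroization failed at offset {i}")

# hypothetical secret material held in a mutable buffer
secret = bytearray(b"authentication-key-material")
zeroize(secret)
```

The paper's point is that an MBIST engine can perform the equivalent of both passes over all on-chip RAMs far faster than such word-by-word writes issued by a CPU or dedicated state machine.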
- Published
- 2020
- Full Text
- View/download PDF
20. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network.
- Author
-
Kai Lin, Di Wang, and Long Hu
- Abstract
With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. [ABSTRACT FROM AUTHOR]
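The Dempster-Shafer fusion underlying the classification step combines mass functions from independent evidence sources. A minimal sketch of Dempster's rule of combination follows, with hypothetical traffic-class masses rather than the paper's sensor data:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with
    Dempster's rule, normalizing out the conflicting mass K."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:  # intersecting hypotheses reinforce each other
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:      # disjoint hypotheses contribute to conflict K
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two hypothetical sensors reporting belief over traffic classes
A, B = frozenset({"audio"}), frozenset({"bulk"})
AB = A | B  # "either" -- mass assigned to the whole frame
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.5, B: 0.2, AB: 0.3}
fused = dempster_combine(m1, m2)
```

After fusion the mass concentrates on the hypothesis both sources lean toward, which is what lets CMNC assign nodes with similar data content to the same class.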
- Published
- 2016
- Full Text
- View/download PDF
21. An interactive approach to text browsing based on anchor location
- Author
-
Lei Wang, Shuai Liu, Xiaodan Xie, Xiangzhen Li, Xindong Cui, and Jiajian Lu
- Subjects
- Information retrieval, Anchor point, Computer science, Product (mathematics), Web page, Process (computing), Feature (machine learning), Data content
- Abstract
With the rapid advance of informationization, a large volume of product characteristic data has accumulated during testing, and its quantity is growing exponentially. Traditional approaches present such characteristic data through queries and fail to display the data of interest quickly and in full. This paper designs an interactive, anchor-based text-browsing method for displaying product feature information as text in a web page; convenient interactions give users a fast and comfortable text-browsing experience. The paper elaborates the technical implementation of the method and finally shows that it enables a more intuitive preview and improves the efficiency of viewing and analyzing data.
- Published
- 2021
- Full Text
- View/download PDF
22. Evaluation of an Evolutionary Algorithm to Dynamically Alter Partition Sizes in Web Caching Systems
- Author
-
Richard Hurley and Graeme Young
- Subjects
- Web server, Computer science, Distributed computing, Web cache, Hit rate, Evolutionary algorithm, Data content, The Internet, Partition (database)
- Abstract
There has been an explosion in the volume of data that is being accessed from the Internet. As a result, the risk of a Web server being inundated with requests is ever-present. One approach to reducing the performance degradation that potentially comes from Web server overloading is to employ Web caching where data content is replicated in multiple locations. In this paper, we investigate the use of evolutionary algorithms to dynamically alter partition size in Web caches. We use established modeling techniques to compare the performance of our evolutionary algorithm to that found in statically-partitioned systems. Our results indicate that utilizing an evolutionary algorithm to dynamically alter partition sizes can lead to performance improvements especially in environments where the relative size of large to small pages is high.
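The idea can be sketched as an elitist evolutionary loop that mutates partition-size vectors and scores them with a toy hit-rate model (the fitness function and all numbers below are stand-ins, not the paper's web-cache model):

```python
import random

def fitness(parts, demand):
    """Toy hit-rate model: each partition's hits saturate at its demand."""
    return sum(min(p, d) for p, d in zip(parts, demand))

def mutate(parts, rng, step=2):
    """Move up to `step` cache units from one partition to another."""
    parts = parts[:]
    i, j = rng.sample(range(len(parts)), 2)
    move = min(step, parts[i])
    parts[i] -= move
    parts[j] += move
    return parts

def evolve(total=100, demand=(50, 30, 20), gens=200, pop_size=20, seed=1):
    """Elitist loop: keep the better half, refill with mutated survivors."""
    rng = random.Random(seed)
    base = total // len(demand)
    pop = [[base] * len(demand) for _ in range(pop_size)]
    for ind in pop:
        ind[0] += total - sum(ind)   # absorb integer-division remainder
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, demand), reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in survivors]
    return max(pop, key=lambda ind: fitness(ind, demand))

best = evolve()
```

Because the elitist selection never discards the best individual, the partition sizes drift toward the (here static) demand vector; in the dynamic setting the paper studies, the same loop keeps running as demand shifts.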
- Published
- 2020
- Full Text
- View/download PDF
23. Development of Mathematical Questions in the PISA Model on Uncertainty and Data Content to Measure the Mathematical Reasoning Ability of Middle School Students
- Author
-
Siti Asyah, Elya Rosalina, and As Elly
- Subjects
- Mathematical problem, Computer science, Mathematics education, Data content, Mathematical reasoning, Predicate (grammar)
- Abstract
This study aims to produce valid and practical PISA-model mathematics problems for measuring the mathematical reasoning ability of SMP 1 students, and to examine students' reasoning in solving PISA-model problems on uncertainty and data content. This is development research using the ADDIE model, which consists of five stages: Analysis, Design, Development, Implementation and Evaluation. The products developed are PISA-style questions on statistics and probability. The results show: (1) for validity, the developed questions scored well on language feasibility (mean 3.16) and material feasibility (mean 3.22, good category), and very well on construct feasibility (mean 3.85), for an overall expert mean of 3.35, categorized as very good; (2) for quality, practicality was rated "good" with an average score of 3.43, based on student responses to the PISA-model problems on uncertainty and data content. Students' mathematical reasoning ability measured with these questions averaged 2.19, categorized as quite good.
- Published
- 2019
- Full Text
- View/download PDF
24. MISIM v2.0: a web server for inferring microRNA functional similarity based on microRNA-disease associations
- Author
-
Yuan Zhou, Jianwei Li, Yingshu Zhao, Shan Zhang, Jiangcheng Shi, Qinghua Cui, and Yanping Wan
- Subjects
- Web server, Internet, Computational biology, Biology, MicroRNAs, Oncology & carcinogenesis, Genetics, Humans, Data content, Disease, Functional similarity, Algorithms, Software
- Abstract
MicroRNAs (miRNAs) are one class of important small non-coding RNA molecules and play critical roles in health and disease. Therefore, it is important and necessary to evaluate the functional relationship of miRNAs and then predict novel miRNA-disease associations. For this purpose, here we developed the updated web server MISIM (miRNA similarity) v2.0. Besides a 3-fold increase in data content compared with MISIM v1.0, MISIM v2.0 improved the original MISIM algorithm by implementing both positive and negative miRNA-disease associations. That is, the MISIM v2.0 scores could be positive or negative, whereas MISIM v1.0 only produced positive scores. Moreover, MISIM v2.0 achieved an algorithm for novel miRNA-disease prediction based on MISIM v2.0 scores. Finally, MISIM v2.0 provided network visualization and functional enrichment analysis for functionally paired miRNAs. The MISIM v2.0 web server is freely accessible at http://www.lirmed.com/misim/.
- Published
- 2019
25. A Dual Blockchain Framework to Enhance Data Trustworthiness in Digital Twin Network
- Author
-
Ke Wang, Wenyu Dong, Shen He, Junzhi Yan, and Bo Yang
- Subjects
Set (abstract data type) ,Blockchain ,Trustworthiness ,Basis (linear algebra) ,Computer science ,Distributed computing ,Scalability ,Data security ,Data content ,DUAL (cognitive architecture) - Abstract
Data are the basis on which Digital Twin (DT) establishes the bidirectional mapping between physical and virtual spaces and realizes critical environmental sensing, decision making and execution. Trustworthiness is therefore a necessity both in data content and in data operations. A dual blockchain framework is proposed to realize comprehensive data security in various DT scenarios. It is highly adaptable, scalable and evolvable, and easy to integrate into a Digital Twin Network (DTN) as an enhancement.
- Published
- 2021
- Full Text
- View/download PDF
26. Análisis cualitativo de la gestión tecnológica para la innovación de servicios financieros: Estudio de casos múltiples de startups FinTech en Lima Metropolitana
- Author
-
Héctor Guardamino and Marta Tostes
- Subjects
business.industry ,Business administration ,Data content ,business ,Management process ,Financial services - Abstract
This multiple-case study analyses, from a qualitative approach, technological management for the innovation of financial services in four FinTech startups from Metropolitan Lima, Peru: Apurata, Difondy, TasaTop and Tranzfer.me. The Six Facets model (Kearns et al., 2005) was used to assess the technological management processes of these FinTechs, and a rubric was developed from criteria and sub-criteria. In-depth interviews were then conducted to collect information on each of the six processes of the model (variables) and their respective principles (sub-variables). The collected information was organized and systematized using the data-content management software WebQDA. The results indicate that the Six Facets model is a useful tool for analyzing the technological management processes of FinTechs with internal technology areas, in order to find points of improvement whose solution contributes to the innovation of FinTech services and thus to their competitiveness. Among the FinTechs studied, the most solid technological process is planning, whereas the weakest is customer formation. Finally, TasaTop was found to be the FinTech best prepared for the innovation of its services.
- Published
- 2021
- Full Text
- View/download PDF
27. THE EXPERIENCE OF SOCIAL PEDAGOGUES AND SOCIAL WORKERS WHEN PROMOTING POSITIVE BEHAVIOUR IN CHILDREN AT DAY-CARE CENTRES
- Author
-
Rita Raudeliunaite
- Subjects
Teamwork ,day-care centre, children, promoting positive behaviour, social pedagogues, social workers ,Social work ,General partnership ,media_common.quotation_subject ,Meaningful use ,Leisure time ,Data content ,Day care centre ,Day care ,Psychology ,Developmental psychology ,media_common - Abstract
The article presents a study whose purpose is to reveal the experience of social pedagogues and social workers in promoting positive behaviour in children at day-care centres. A qualitative research design was chosen, applying the method of semi-structured interviews, and the study data were processed using data content analysis. Three social pedagogues and five social workers employed at children's day-care centres participated in the study. The data revealed that, when promoting positive behaviour in children at day-care centres, social pedagogues and social workers create positive relationships with children, strengthen positive relationships between children, and hold individual conversations with children. Meaningful use of children's leisure time plays an important role in promoting positive behaviour. Here it is important that the activities organised for children be personally and socially meaningful, and that they encourage independence, responsibility and humane relationships in children. When promoting positive behaviour in children, social pedagogues and social workers also foster a culture of teamwork and cooperation in order to maintain partnerships with the child's parents and other specialists, sharing their experience and rendering assistance to others.
- Published
- 2021
28. Field-Portable Microplastic Sensing in Aqueous Environments: A Perspective on Emerging Techniques
- Author
-
Patricia Swierk, Jose A. Santos, William Robberson, Mark F. Witinski, Beckett C. Colson, Louis B. Kratchman, Alexandra Z. Greenbaum, Joseph L. Hollmann, Peter Miraglia, Melissa M. Sprachman, Harry L. Allen, Steven Tate, Kenneth A. Markoski, Anna-Marie Cook, Morgan G. Blevins, Sheila S. Hemami, Ernest S. Kim, Anna P. M. Michel, Vienna L. Mott, and Ava A. LaRocca
- Subjects
aqueous solutions ,microplastics ,Data products ,water ,Nanotechnology ,TP1-1185 ,010501 environmental sciences ,sensors ,01 natural sciences ,Biochemistry ,Field (computer science) ,plastic pollution ,analytical chemistry ,Sample preparation ,Data content ,Electrical and Electronic Engineering ,freshwater ,Instrumentation ,polymers ,0105 earth and related environmental sciences ,Aqueous solution ,Chemical technology ,010401 analytical chemistry ,Perspective (graphical) ,Dielectrophoresis ,ocean ,Atomic and Molecular Physics, and Optics ,0104 chemical sciences ,marine pollution ,Perspective ,Environmental science ,Sample collection ,environment - Abstract
Microplastics (MPs) have been found in aqueous environments ranging from rural ponds and lakes to the deep ocean. Despite the ubiquity of MPs, our ability to characterize MPs in the environment is limited by the lack of technologies for rapidly and accurately identifying and quantifying MPs. Although standards exist for MP sample collection and preparation, methods of MP analysis vary considerably and produce data with a broad range of data content and quality. The need for extensive analysis-specific sample preparation in current technology approaches has hindered the emergence of a single technique which can operate on aqueous samples in the field, rather than on dried laboratory preparations. In this perspective, we consider MP measurement technologies with a focus on both their eventual field-deployability and their respective data products (e.g., MP particle count, size, and/or polymer type). We present preliminary demonstrations of several prospective MP measurement techniques, with an eye towards developing a solution or solutions that can transition from the laboratory to the field. Specifically, experimental results are presented from multiple prototype systems that measure various physical properties of MPs: pyrolysis-differential mobility spectroscopy, short-wave infrared imaging, aqueous Nile Red labeling and counting, acoustophoresis, ultrasound, impedance spectroscopy, and dielectrophoresis.
- Published
- 2021
29. Clinically relevant updates of the HbVar database of human hemoglobin variants and thalassemia mutations
- Author
-
David H.K. Chui, Ross C. Hardison, Belinda Giardine, Philippe Joly, Henri Wajcman, Serge Pissard, George P. Patrinos, and Pathology
- Subjects
Front page ,Heterozygote ,Genotype ,AcademicSubjects/SCI00010 ,Thalassemia ,MEDLINE ,Gene Expression ,Biology ,computer.software_genre ,03 medical and health sciences ,Hemoglobins ,Research community ,Databases, Genetic ,Genetics ,medicine ,Database Issue ,Humans ,Data content ,Globin ,Clinical phenotype ,030304 developmental biology ,0303 health sciences ,Internet ,Database ,Genome, Human ,030305 genetics & heredity ,Hemoglobin variants ,Genomics ,medicine.disease ,Phenotype ,Genetic Loci ,Mutation ,computer ,Software - Abstract
HbVar (http://globin.bx.psu.edu/hbvar) is a widely-used locus-specific database (LSDB) launched 20 years ago by a multi-center academic effort to provide timely information on the numerous genomic variants leading to hemoglobin variants and all types of thalassemia and hemoglobinopathies. Here, we report several advances for the database. We made clinically relevant updates of HbVar, implemented as additional querying options in the HbVar query page, allowing the user to explore the clinical phenotype of compound heterozygous patients. We also made significant improvements to the HbVar front page, making comparative data querying, analysis and output more user-friendly. We continued to expand and enrich the regular data content, involving 1820 variants, 230 of which are new entries. We also increased the querying potential and expanded the usefulness of HbVar database in the clinical setting. These several additions, expansions and updates should improve the utility of HbVar both for the globin research community and in a clinical setting.
- Published
- 2021
30. BIM:n hyödyntäminen hankintapaketin kustannusten hallinnassa : Runkohankinnan näkökulmasta
- Author
-
Korhonen, Joona, Rakennetun ympäristön tiedekunta - Faculty of Built Environment, and Tampere University
- Subjects
rakennuksen runko ,building information model ,hankinta ,kustannushallinta ,concrete element ,building frame ,kustannukset ,tietosisältö ,Rakennustekniikan DI-ohjelma - Master's Programme in Civil Engineering ,YTV ,BIM ,procurement ,data content ,tietomallit ,betonielementti ,cost control - Abstract
The purpose of this research was to find out how the costs of a precast concrete frame can be managed using building information models (BIM). The challenge is the large amount of cost information obtained from projects, as well as the unclarity and inaccuracy of that information; these factors hinder the formation of cost perceptions and make it more difficult to monitor costs. From a cost-information perspective, the first main objective of the study was to determine the cost distribution of the precast concrete frame and to identify the most significant variable factors in the frame's total cost. From the perspective of BIM-based cost management, the second main objective was to determine the data content requirements that the different stages of the project life cycle place on the building information model.

The study consisted of a literature study, a case study and an interview study. The literature study used both public source material and the company's own material to establish the cost distribution of a typical precast concrete frame as well as the current level and guiding requirements of BIM. The case study examined, with the Solibri Office software, the changes between five structural model revisions at each of five construction sites; the revisions were compared in order to identify the significant changes between them, their magnitudes, and the reasons behind them. In the interview study, the experts interviewed represented both the company's internal and external stakeholders. The extensive interviews covered the entire project life cycle and were conducted using a unified interview template, with the aim of finding out the requirements that the different phases of a project place on the information content of the structural model.

The study revealed that in actual projects the frame's cost distribution follows the distribution presented in the literature quite well. The main finding of the case study was that, from one project to another, significant changes repeat in certain components, and there are also similarities between projects in the reasons behind those changes. A further finding was that cost control of the frame becomes more efficient when the data content required of the structural models is specified and timed precisely. The necessary information is then available when needed: accurate timing and relevant data content support decision making, reducing under- and over-design, and as unnecessary model changes decrease, the loss of materials and work is also reduced.
- Published
- 2021
31. On the Prospects of the Development of Non-public Ownership Economy under the Background of Big Data
- Author
-
Manli Jia
- Subjects
Data source ,Environmental sciences ,Resource development ,Public ownership ,Economy ,Process (engineering) ,business.industry ,Big data ,Data content ,Economic shortage ,GE1-350 ,Business ,Data application - Abstract
This article analyzes the interaction between big data and the non-public ownership economy and studies the problems faced in their deep integration: activating data sources, data cleanup, hidden dangers in data content, a shortage of analytical talent, and the development of integrated platforms. The author analyzes how to establish a comprehensive development platform, improve the deep-integration model, raise awareness of data application, strengthen personnel training, improve the safety management mechanism, optimize the resource development environment, and improve market supervision and management. The purpose of the article is to speed up the integration of big data with the non-public economy and to promote the development of the enterprise economy.
- Published
- 2021
32. Metrics for identifying food security status
- Author
-
Nicholas Ogot
- Subjects
Measure (data warehouse) ,Food security ,Risk analysis (engineering) ,Computer science ,Process (engineering) ,Stability (learning theory) ,Data content ,Data type ,ComputingMilieux_MISCELLANEOUS ,Field (computer science) ,Variety (cybernetics) - Abstract
This chapter introduces food security as "a state when, at all times, all people can access adequate, safe and efficient nutritive food that meets their dietary needs and food preferences for a healthy and active life." Metrics refer to the quantitative elements measured in research; the wide variety of data content obtained from the field is collected through numerous methods that aim at efficiency. Food security metrics focus on the four domains of food security: availability, accessibility, utilization, and stability. New metrics are needed to improve current food security measurements. The types of data used, the assumptions made in measuring food security, and the intended uses of the different measurements determine the precision and accuracy required, the interpretation of results, and the implementation of policies. The metrics identified here complement approaches for determining food security status while gauging other factors.
- Published
- 2021
- Full Text
- View/download PDF
33. Companion guide to the Marsquake catalog from InSight, sols 0–478: Data content and non-seismic events
- Author
-
Clément Perrin, Nikolaj Dahmen, Suzanne E. Smrekar, Raphaël F. Garcia, Martin Schimmel, Simon Stähler, Maren Böse, William B. Banerdt, Anna Horleston, Alexander E. Stott, Sharon Kedar, Eléonore Stutzmann, Ralph D. Lorenz, Jan ten Pierick, K. Hurst, Don Banfield, John Clinton, Taichi Kawamura, Constantinos Charalambous, Fabian Euchner, Aymeric Spiga, Eric Beucler, Amir Khan, William T. Pike, Vincent Conejero, Philippe Lognonné, Savas Ceylan, Martin van Driel, Domenico Giardini, Mark P. Panning, Constanza Pardo, Guenolé Orhand-Mainsant, John-Robert Scholz, Swiss National Science Foundation, Agence Nationale de la Recherche (France), Swiss National Supercomputing Centre, UK Space Agency, Centre National D'Etudes Spatiales (France), ETH Zurich, State Secretariat for Education, Research and Innovation (Switzerland), Schimmel, Martin, Département Electronique, Optronique et Signal (DEOS), Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO), Institut Supérieur de l'Aéronautique et de l'Espace - ISAE-SUPAERO (FRANCE), and Schimmel, Martin [0000-0003-2601-4462]
- Subjects
[SPI.OTHER]Engineering Sciences [physics]/Other ,Seismometer ,010504 meteorology & atmospheric sciences ,Physics and Astronomy (miscellaneous) ,Mars ,Data inventory ,010502 geochemistry & geophysics ,01 natural sciences ,Autre ,Broadband ,Data content ,InSight mission ,Non-seismic signals ,Marsquakes ,0105 earth and related environmental sciences ,Scientific instrument ,Martian ,Payload ,Astronomy and Astrophysics ,Mars Exploration Program ,Geophysics ,13. Climate action ,Space and Planetary Science ,Noise (video) ,Mars seismology ,Geology ,Seismology - Abstract
The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission landed on the surface of Mars on November 26, 2018. One of the scientific instruments in the payload that is essential to the mission is the SEIS package (Seismic Experiment for Interior Structure), which includes a very-broadband and a short-period seismometer. More than one year after landing, SEIS continues to be fully operational and has been collecting an exceptional data set which contains not only signals of seismic origin, but also noise and artifacts induced by the martian environment, the hardware on the ground that includes the seismic sensors, and the programmed operational activities of the lander. Many of these non-seismic signals will be unfamiliar to the scientific community, and many have signatures that may resemble seismic events in the time domain, the frequency domain, or both. Here, we report our observations of common non-seismic signals as seen during the first 478 sols of SEIS data, i.e. from landing until the end of March 2020. This manuscript is intended as a guide for scientists who use the data recorded by SEIS, detailing the general attributes of the most commonly observed non-seismic features. It will help to clarify the characteristics of the seismic dataset for future research, and to avoid misinterpretations when searching for marsquakes. We acknowledge NASA, CNES, their partner agencies and institutions (UKSA, SSO, DLR, JPL, IPGP-CNRS, ETHZ, IC, MPS-MPG) and the flight operations team at JPL, SISMOC, MSDS, IRIS-DMC and PDS for providing SEED SEIS data.
We also acknowledge the funding by (1) Swiss National Science Foundation and French Agence Nationale de la Recherche (SNF-ANR project “Seismology on Mars”, ANR-14-CE36-0012-02 and MAGIS, ANR-19-CE31-0008-08), (2) Swiss State Secretariat for Education, Research and Innovation (SEFRI project “MarsQuake Service-Preparatory Phase”), (3) ETH Research grant ETH-06 17-02 and, for French co-authors, (4) the French Space agency CNES. Additional support came from the Swiss National Supercomputing Centre (CSCS) under project ID s922. AH is funded by UKSA through grant #ST/R002096/1. Visualizations were created using Matplotlib (Hunter, 2007). The data was processed with NumPy (Oliphant, 2006), Scipy (Jones et al., 2001), ObsPy (Krischer et al., 2015) and custom software developed by gempa GmbH. This paper is InSight Contribution Number 150.
- Published
- 2021
- Full Text
- View/download PDF
34. Community Preference-Based Information-Centric Networking Cache
- Author
-
Haozhe Liang, Chaofeng Zhang, Xiao Yang, and Caijuan Chen
- Subjects
Computer science ,05 social sciences ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,Recommender system ,Preference ,World Wide Web ,0508 media and communications ,Information-centric networking ,Server ,0202 electrical engineering, electronic engineering, information engineering ,Data content ,Cache - Abstract
The information-centric networking (ICN) framework has been proposed to connect data content and network users together, which yields great efficiency in comparison to conventional networks. Cache policy is an essential part of ICN, and many studies have investigated it. However, two problems remain unsolved: 1) existing policies ignore the features of network-user communities, and 2) they do not consider the correlations between users and data content. To address these shortcomings, a community-preference-based ICN cache policy is proposed. This policy comprehensively considers data-content features and community preference, and then uses a recommendation system to cache the data on the corresponding servers. The policy's advantages are also evaluated by simulation in this paper.
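As a rough illustration of the idea (the preference table and scoring below are assumptions for this sketch, not the recommender system used in the paper), a community-preference cache might place a content item at the edge node whose user community most prefers the item's category:

```python
# Hedged sketch: assumed community-preference scoring, not the paper's
# recommender. Each edge router serves a community with known category
# preferences; content is cached where its category is most preferred.
def best_cache_node(item_category, communities):
    """communities: dict node -> dict category -> preference weight."""
    return max(communities,
               key=lambda node: communities[node].get(item_category, 0.0))

# Two edge routers, each serving a community with different tastes.
communities = {
    "edge-A": {"sports": 0.7, "news": 0.2},
    "edge-B": {"sports": 0.1, "news": 0.8},
}
print(best_cache_node("news", communities))   # -> edge-B
print(best_cache_node("sports", communities)) # -> edge-A
```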
- Published
- 2020
- Full Text
- View/download PDF
35. AgentG: An Engaging Bot to Chat for E-Commerce Lovers
- Author
-
Aditi Katiyar, V. Srividya, Neha Akhtar, and B. K. Tripathy
- Subjects
Web browser ,Point (typography) ,Computer science ,business.industry ,E-commerce ,computer.software_genre ,Chatbot ,World Wide Web ,Product (business) ,Order (business) ,Customer service ,Data content ,business ,computer - Abstract
Regular customer-assistance chatbots are generally based on dialogues delivered by humans and face issues related to data scaling and the privacy of personal information. In this paper, we present AgentG, an intelligent chatbot for customer assistance built on a deep neural network architecture. It leverages large-scale, freely and publicly accessible e-commerce data. Unlike existing counterparts, AgentG draws its data advantage from web pages that contain product descriptions along with user-generated data content from online e-commerce websites. This makes it practical and cost-effective for answering repetitive questions with highly accurate answers, freeing the people who work in customer service from doing so. We demonstrate how AgentG acts as an extension to mainstream Web browsers and how it improves the experience of users shopping online.
- Published
- 2020
- Full Text
- View/download PDF
36. The Branching Data Model as a New Level of Abstraction over the Relational Database
- Author
-
H. Paul Zellweger
- Subjects
Theoretical computer science ,Relational database ,Computer science ,lcsh:A ,General Medicine ,Tree structure ,relational database ,Schema (psychology) ,end-user navigation ,named set theory ,Data content ,branching data model ,lcsh:General Works ,Data patterns - Abstract
Information is often stored in relational databases. The technology is now fifty years old, but patterns of relational data remain that have not yet been studied. This abstract presents a new data pattern called the branching data model (BDM). It represents the pure alignment of a table's schema, its data content, and the operations on these two structures. Using a well-defined SELECT statement, an input data condition and its output values form a primitive tree structure. Although this relationship is formed outside the query, the abstract shows how we can view it as a tree structure within the table. Using algorithms, including AI, it goes on to show how this data model connects with others, within a table and between tables, to form a new, uniform level of abstraction over the data throughout the database.
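A minimal sketch of that primitive tree, using an illustrative SQLite table (the table and column names are hypothetical): the value in the WHERE condition acts as the root, and the SELECTed output values become its children:

```python
import sqlite3

# Illustrative table (names are hypothetical): one SELECT with a fixed
# input condition yields the one-level tree the BDM describes -- the
# condition value is the root, the distinct output values its children.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (category TEXT, name TEXT)")
con.executemany("INSERT INTO parts VALUES (?, ?)",
                [("bolt", "M3"), ("bolt", "M4"), ("nut", "M3")])

def branch(con, root_value):
    """Root = the input condition; children = its distinct output values."""
    rows = con.execute(
        "SELECT DISTINCT name FROM parts WHERE category = ?", (root_value,))
    return {root_value: sorted(r[0] for r in rows)}

print(branch(con, "bolt"))  # {'bolt': ['M3', 'M4']}
```

Chaining such branches (an output value becoming the next input condition) is how the trees connect across columns and tables.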
- Published
- 2020
- Full Text
- View/download PDF
37. Naming and Routing Scheme for Data Content Objects in Information-Centric Network
- Author
-
Natallia Pastei, Fatima Rahal, Ghassan Jaber, and Ahmad Abboud
- Subjects
Sinc function ,business.industry ,Computer science ,Routing table ,The Internet ,Data content ,Reuse ,business ,Flooding (computer networking) ,Computer network - Abstract
This article addresses naming and routing for data content objects in information-centric networks (ICN). The relevance of the work stems from the view of information-centric networks as a promising technology for the future Internet. The article introduces a new naming strategy for ICN, the Semantic Information-Centric Network (SICN). SICN uses three addresses: a geographical address, a semantic address and a publisher-ID address. Data are classified into four types and subscriber requests into four classes, all of which SICN can handle. The routing-table structures Geo-ID, Geo-Semantic and ID-Semantic are briefly discussed, along with algorithms for updating, removing, merging and matching records. Two modelled scenarios compare SICN with other schemes and projects, including IP, DONA, PURSUIT, CBCB and KBN, on the following metrics: time delay, flooding (traffic), and an efficiency factor for data reuse. SICN outperforms the other ICN projects in flooding and time delay, and shows good results in efficiency compared with the other schemes.
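The three-address lookup can be sketched roughly as follows. This is a simplified illustration, not the paper's actual table structures or matching algorithms: a request carries (geographical, semantic, publisher-ID) fields, the most specific table is tried first, and a coarse semantic match serves as the fallback:

```python
# Simplified sketch of SICN-style request matching; the real Geo-ID,
# Geo-Semantic and ID-Semantic tables and their update/merge/match
# algorithms are richer than this illustration.
def route(request, geo_id_table, semantic_table):
    key = (request["geo"], request["publisher_id"])
    if key in geo_id_table:              # most specific: Geo + publisher ID
        return geo_id_table[key]
    for topic, next_hop in semantic_table.items():
        if topic in request["semantic"]: # coarse semantic fallback
            return next_hop
    return None                          # no route -> would fall back to flooding

geo_id_table = {("paris", "pub42"): "node-7"}
semantic_table = {"weather": "node-3"}

req = {"geo": "lyon", "publisher_id": "pub99", "semantic": "weather/today"}
print(route(req, geo_id_table, semantic_table))  # -> node-3
```

Returning None only for requests that match no table is what keeps flooding, and hence traffic, low relative to pure flooding schemes.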
- Published
- 2020
- Full Text
- View/download PDF
38. A Survey on Contribution of Data Mining Techniques and Graph Reading Algorithms in Concept Map Generation
- Author
-
A. Auxilia Princy and B. Lavanya
- Subjects
Complex data type ,Concept map ,business.industry ,Computer science ,Big data ,Data content ,Data mining ,business ,computer.software_genre ,Algorithm ,computer ,Graph - Abstract
Concept maps are a pictorial representation of the concepts found in data and the relationships between them. These concept maps help us understand the whole data content, making it easily readable and memorable. They are used to present complex data in an understandable form (map, tree, graph, etc.), supporting better understanding and decision making for researchers, businesses, and others. This paper discusses recent research on concept maps and on the data mining techniques and graph-reading algorithms used for concept map generation.
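As a small illustration (the structure is assumed for this sketch, not taken from any surveyed paper), a concept map can be stored as labeled edges, the graph that a graph-reading algorithm would traverse when rendering the map:

```python
# Illustrative structure only: a concept map as labeled edges
# (concept -> list of (relation, concept) pairs).
concept_map = {
    "data mining": [("produces", "patterns")],
    "patterns": [("visualized as", "concept map")],
}

def propositions(cmap):
    """Flatten the map into concept-relation-concept triples."""
    return [(src, rel, dst)
            for src, edges in cmap.items()
            for rel, dst in edges]

for src, rel, dst in propositions(concept_map):
    print(f"{src} --{rel}--> {dst}")
```

Each printed triple is one proposition of the map; mining techniques supply the concepts and relations, while graph algorithms lay them out.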
- Published
- 2018
- Full Text
- View/download PDF
39. PubChem 2019 update: improved access to chemical data
- Author
-
Paul A. Thiessen, Sunghwan Kim, Jia He, Leonid Zaslavsky, Qingliang Li, Bo Yu, Tiejun Cheng, Evan E Bolton, Benjamin A. Shoemaker, Asta Gindulyte, Jian Zhang, Siqian He, and Jie Chen
- Subjects
Information Storage and Retrieval ,Biology ,Patents as Topic ,Small Molecule Libraries ,World Wide Web ,Structure-Activity Relationship ,03 medical and health sciences ,0302 clinical medicine ,Information resource ,Research community ,Drug Discovery ,Genetics ,Animals ,Humans ,Database Issue ,Data content ,030304 developmental biology ,Internet ,0303 health sciences ,Molecular Structure ,business.industry ,Computational Biology ,Chemical data ,High-Throughput Screening Assays ,Pharmaceutical Preparations ,Biological Assay ,The Internet ,business ,Databases, Chemical ,030217 neurology & neurosurgery ,PubChem - Abstract
PubChem (https://pubchem.ncbi.nlm.nih.gov) is a key chemical information resource for the biomedical research community. Substantial improvements were made in the past few years. New data content was added, including spectral information, scientific articles mentioning chemicals, and information for food and agricultural chemicals. PubChem released new web interfaces, such as PubChem Target View page, Sources page, Bioactivity dyad pages and Patent View page. PubChem also released a major update to PubChem Widgets and introduced a new programmatic access interface, called PUG-View. This paper describes these new developments in PubChem.
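PUG-View is reached through PubChem's REST layout. The sketch below only builds a request URL for a compound record (CID 2244, aspirin) and makes no network call; the small wrapper function is a convenience for this illustration, not an official client:

```python
# Builds a PUG-View request URL; no network call is made. The path
# pattern follows PubChem's REST layout; CID 2244 is aspirin.
BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data"

def pug_view_url(record_type, identifier, fmt="JSON"):
    """record_type: e.g. 'compound'; identifier: e.g. a numeric CID."""
    return f"{BASE}/{record_type}/{identifier}/{fmt}"

print(pug_view_url("compound", 2244))
# -> https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/2244/JSON
```

Fetching that URL with any HTTP client returns the annotated record that the PubChem compound pages render.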
- Published
- 2018
- Full Text
- View/download PDF
40. Datengüte des TraumaRegister DGU®
- Author
-
G. Matthes, Ulrike Nienaber, F. Laue, R. Volland, T. Ziprian, N. Ramadanov, and Rolf Lefering
- Subjects
business.industry ,030208 emergency & critical care medicine ,medicine.disease ,03 medical and health sciences ,0302 clinical medicine ,Emergency Medicine ,Medicine ,Orthopedics and Sports Medicine ,Surgery ,Data content ,030212 general & internal medicine ,Medical emergency ,business ,Trauma surgery - Abstract
Background Registries are becoming increasingly important in clinical research, and the TraumaRegister DGU® of the German Society for Trauma Surgery plays a prominent role with respect to the care of severely injured patients. Aim To verify the quality of the data provided by this registry. Material and methods Certified hospitals participating in the TraumaNetzwerk DGU® of the German Society for Trauma Surgery are obliged to submit data on treated severely injured patients to the TraumaRegister DGU® and must undergo a re-certification process every 3 years. Within the framework of this re-audit, data from 5 out of 8 randomly chosen patient cases included in the registry are checked against the patient files of the certified hospital. In the present investigation, discrepancies in the submitted data were documented and the pattern of deviation was analyzed. Results The results of 1075 re-certification processes carried out in 631 hospitals, covering 5409 checked patient cases from 2012-2017, were analyzed. The highest number of discrepancies concerned the documented time until initial CT (15.8%) and the lowest concerned the discharge site (3.2%). The majority of data sheets with discrepancies deviated in only one of the seven checked parameters. Interestingly, large trauma centers with a high throughput of severely injured patients showed the most deviations. Conclusion The present investigation underlines the importance of standardized checks of the data provided to registries in order to guarantee improvements in data entry.
- Published
- 2018
- Full Text
- View/download PDF
41. Prevention of Data Content Leakage
- Author
-
Meghana N. Jadhav
- Subjects
Petroleum engineering ,Environmental science ,Data content ,Leakage (electronics) - Published
- 2018
- Full Text
- View/download PDF
42. IDENTIFICATION OF DATA CONTENT BASED ON MEASUREMENT OF QUALITY OF PERFORMANCE.
- Author
-
Simonová, Stanislava
- Abstract
Managing business processes relies on the decision making and performance of workers in management positions, who use business information systems as a necessary support tool. Business processes and business data form a coherent unit: business processes need relevant data for their execution, while business data should fully serve the business processes. A change to the information system as a whole, or a modification of one of its modules, is initiated exclusively by changes in the flow of business process instances; these are either external changes, such as customer requirements or a change in supplier approach, or internal changes, such as a change in the activity flow or in the work activities themselves. Current information systems are built on relational database systems, whose basic principle is the relational data model. The analysis and design of a relational data model requires specialized methods leading to the definition of relation schemas, including integrity constraints. Developing a data model therefore requires technical expertise that cannot be delegated to the actors of the work activities or to the managers of the business processes. However, only the actors in the process instances and the managers of the process know their needs with respect to the execution of the process; therefore, they should characterize their requirements on the information system module by identifying its data content. The information system serves as support for carrying out a particular business process. The composition of indicators changes together with the methods of measuring and evaluating them, and the process itself changes as a result of its improvements. When controlling the values of indicators, it must be taken into account that within one process it is possible to measure indicators and then evaluate the same process, or to measure indicators that provide data about the quality, or lack of quality, of another process.
The composition of indicators for measuring and evaluating the quality of process performance, or any change in the method of controlling them, should be an impulse for a change of the information environment. Therefore, the development of each data model should be preceded by an identification of the data content, with exact characteristics of the required composition of, and relations between, the measuring indicators. [ABSTRACT FROM AUTHOR]
- Published
- 2012
43. IDENTIFICATION OF DATA CONTENT FOR DATA MODEL DEVELOPMENT (orig. IDENTIFIKACE DATOVÉHO OBSAHU PRO VÝVOJ DATOVÉHO MODELU).
- Author
-
Šimonová, Stanislava and Kořínek, Martin
- Subjects
- *
QUALITY , *DATA modeling , *EXECUTIVES , *INFORMATION resources management , *MANAGEMENT - Abstract
Every organization or company wants to achieve good output in the long term, whether its output products are commodities or services. The goals are to improve the quality and efficiency of production. Quality concepts and norms offer recommendations, among which the process approach and efficient information/data administration take a significant place. Business processes and business data constitute a linked unit, because business processes need relevant data for their work and business data should fully serve the business processes. A similarly linked unit is constituted by process and data modeling, which needs to be done in cooperation between process actors and data users, i.e. in cooperation with managers. The role of managers is irreplaceable, for it is through their models that they formulate the requirements for improvement and higher quality. The main topic of this paper is the identification of business data content from a management point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2011
44. ROSETTA: How to archive more than 10 years of mission
- Author
-
F. Vallejo, P. Martin, David Heather, Sebastien Besse, M. F. A'Hearn, D. Fraga, E. Grotheer, Matthew Taylor, Ludmilla Kolokolova, Maud Barthelemy, R. Andres, Laurence O'Rourke, and T. Barnes
- Subjects
Scientific instrument ,Engineering ,010504 meteorology & atmospheric sciences ,business.industry ,Astronomy and Astrophysics ,01 natural sciences ,Planetary Data System ,Astrobiology ,Planetary science ,Aeronautics ,Space and Planetary Science ,0103 physical sciences ,Comet (programming) ,Data content ,business ,010303 astronomy & astrophysics ,0105 earth and related environmental sciences - Abstract
The Rosetta spacecraft was launched in 2004 and, after several planetary and two asteroid fly-bys, arrived at comet 67P/Churyumov-Gerasimenko in August 2014. After escorting the comet for two years and executing its scientific observations, the mission ended on 30 September 2016 through a touch down on the comet surface. This paper describes how the Planetary Science Archive (PSA) and the Planetary Data System – Small Bodies Node (PDS-SBN) worked with the Rosetta instrument teams to prepare the science data collected over the course of the Rosetta mission for inclusion in the science archive. As Rosetta is an international mission in collaboration between ESA and NASA, all science data from the mission are fully archived within both the PSA and the PDS. The Rosetta archiving process, supporting tools, archiving systems, and their evolution throughout the mission are described, along with a discussion of a number of the challenges faced during the Rosetta implementation. The paper then presents the current status of the archive for each of the science instruments, before looking to the improvements planned both for the archive itself and for the Rosetta data content. The lessons learned from the first 13 years of archiving on Rosetta are finally discussed with an aim to help future missions plan and implement their science archives.
- Published
- 2018
- Full Text
- View/download PDF
45. NON-PHARMACOLOGICAL PAIN MANAGEMENT IN POSTOPERATIVE CARE OF SCHOOL-AGE CHILDREN
- Author
-
Sari Laanterä, Svajūnė Goštautaitė, Viktorija Piščalkienė, and Leena Uosukainen
- Subjects
medicine.medical_specialty ,School age child ,030504 nursing ,business.industry ,Lithuanian ,Pain management ,language.human_language ,03 medical and health sciences ,0302 clinical medicine ,Pain assessment ,030225 pediatrics ,Family medicine ,Assessment methods ,language ,medicine ,Data content ,0305 other medical science ,business ,Non pharmacological - Abstract
The aim of this study was to evaluate the postoperative pain assessment and management methods for children applied in practice by nurses in Lithuania and Finland. Methods. Individual in-depth semi-structured interviews with non-probabilistic snowball (network) and purposive sampling, followed by data content analysis. Participants were 20 nurses in Lithuania and 5 nurses in Finland who work in pediatric surgical and pediatric wards where children are treated after surgery. Results. The research showed differences in the postoperative pain management of school-age children between Lithuanian and Finnish nurses. Lithuanian nurses use a smaller variety of these methods than nurses from Finland. All nurses agree that non-pharmacological pain management in children is effective and useful. Conclusions. The use of subjective and objective pain assessment methods by Finnish and Lithuanian nurses is similar, except that Lithuanians mostly trust subjective verbal and objective behavioral and appearance-based assessment methods, whereas the Finnish nurses combine all the subjective methods, such as verbal assessment, parental assessment and the use of scales, with objective behavioral assessment. There is a difference in pain management practice between Finnish and Lithuanian nurses: Finnish nurses use all of the non-pharmacological methods evenly, whereas Lithuanian nurses mostly rely on physical and rehabilitation methods as well as communication.
- Published
- 2017
- Full Text
- View/download PDF
46. Constructing automatic domain-specific sentiment lexicon using KNN search via terms discrimination vectors
- Author
-
Hatem Abdelkader, Fahd Alqasemi, and Amira Abdelwahab
- Subjects
Computer science ,business.industry ,Sentiment analysis ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Lexicon ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Domain (software engineering) ,Text mining ,Hardware and Architecture ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data content ,Artificial intelligence ,business ,computer ,Software ,Natural language processing - Abstract
Web textual data content is a viable source of knowledge for decision-makers, as are text analytics applications. Sentiment analysis (SA) is one of the fields of text mining, in which text is analyzed to rec...
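The abstract is truncated in this record, but the title outlines the method: terms are represented by discrimination vectors and polarity is assigned to new terms via a k-nearest-neighbour search against a seed lexicon. A minimal, illustrative sketch of that idea follows; the vectors, seed terms, and majority-vote rule here are assumptions for the sketch, not the authors' exact procedure:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_polarity(term_vec, seed_lexicon, k=3):
    """Assign a polarity to an unlabeled term by majority vote
    among its k nearest seed terms under cosine similarity."""
    neighbors = sorted(seed_lexicon.items(),
                       key=lambda item: cosine(term_vec, item[1][0]),
                       reverse=True)[:k]
    votes = sum(1 if label == "pos" else -1 for _, (_, label) in neighbors)
    return "pos" if votes > 0 else "neg"

# Illustrative "discrimination vectors" (e.g., per-class TF-IDF weights).
seeds = {
    "excellent": ([0.9, 0.1], "pos"),
    "great":     ([0.8, 0.2], "pos"),
    "terrible":  ([0.1, 0.9], "neg"),
    "awful":     ([0.2, 0.8], "neg"),
}
print(knn_polarity([0.85, 0.15], seeds))  # vector close to the positive seeds
```

In a real setting the two-dimensional toy vectors would be replaced by term weights computed from the domain corpus, which is what makes the resulting lexicon domain-specific.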
- Published
- 2017
- Full Text
- View/download PDF
47. KNOTTIN: the database of inhibitor cystine knot scaffold after 10 years, toward a systematic structure modeling
- Author
-
Jean-Christophe Gelly, Jérôme Gracy, Charlotte Perin, Laurent Chiche, and Guillaume Postic
- Subjects
0301 basic medicine ,Models, Molecular ,Protein Conformation, alpha-Helical ,Scaffold ,Gene Expression ,Biology ,computer.software_genre ,Ligands ,History, 21st Century ,03 medical and health sciences ,User-Computer Interface ,Sequence Analysis, Protein ,Genetics ,Computer Graphics ,Database Issue ,Humans ,Data content ,Amino Acid Sequence ,Disulfides ,Databases, Protein ,Structure (mathematical logic) ,Internet ,030102 biochemistry & molecular biology ,Database ,Multiple applications ,Disulfide bond ,Cystine-Knot Miniproteins ,3. Good health ,030104 developmental biology ,Protein Conformation, beta-Strand ,Inhibitor cystine knot ,Pseudoknot ,computer ,Sequence Alignment - Abstract
Knottins, or inhibitor cystine knots (ICKs), are ultra-stable miniproteins with multiple applications in drug design and medical imaging. These widespread and functionally diverse proteins are characterized by the presence of three interwoven disulfide bridges in their structure, which form a unique pseudoknot. Since 2004, the KNOTTIN database (www.dsimb.inserm.fr/KNOTTIN/) has been gathering standardized information about knottin sequences, structures, functions and evolution. The website also provides access to bibliographic data and to computational tools that have been specifically developed for ICKs. Here, we present a major upgrade of our database, both in terms of data content and user interface. In addition to the new features, this article describes how KNOTTIN has seen its size multiplied over the past ten years (since its last publication), notably with the recent inclusion of predicted ICK structures. Finally, we report how our web resource has proved useful for researchers working on ICKs, and how the new version of the KNOTTIN website will continue to serve this active community.
- Published
- 2017
48. BCNTB bioinformatics: the next evolutionary step in the bioinformatics of breast cancer tissue banking
- Author
-
Emanuela Gadaleta, Abu Z. Dayem Ullah, Stefano Pirrò, Jacek Marzec, and Claude Chelala
- Subjects
0301 basic medicine ,PubMed ,Breast Neoplasms ,Tissue Banks ,Biology ,Bioinformatics ,Automated data ,03 medical and health sciences ,0302 clinical medicine ,Breast cancer ,Cancer genome ,Cell Line, Tumor ,Genetics ,medicine ,Database Issue ,Data Mining ,Humans ,Data content ,Modalities ,Experimental data ,Computational Biology ,medicine.disease ,030104 developmental biology ,ComputingMethodologies_PATTERNRECOGNITION ,030220 oncology & carcinogenesis ,Tissue bank ,Female ,Tissue Banking - Abstract
Here, we present an update of Breast Cancer Now Tissue Bank bioinformatics, a rich platform for the sharing, mining, integration and analysis of breast cancer data. Its modalities provide researchers with access to a centralised information gateway from which they can access a network of bioinformatic resources to query findings from publicly available, in-house and experimental data generated using samples supplied from the Breast Cancer Now Tissue Bank. This in silico environment aims to help researchers use breast cancer data to its full potential, irrespective of any bioinformatics barriers. For this new release, a complete overhaul of the IT and bioinformatic infrastructure underlying the portal has been conducted and a host of novel analytical modules established. We developed and adopted an automated data selection and prioritisation system, expanded the data content to include tissue and cell line data generated from The Cancer Genome Atlas and the Cancer Cell Line Encyclopedia, designed a host of novel analytical modalities and enhanced the query building process. Furthermore, the results are presented in an interactive format, providing researchers with greater control over the information on which they want to focus. Breast Cancer Now Tissue Bank bioinformatics can be accessed at http://bioinformatics.breastcancertissuebank.org/.
- Published
- 2017
49. A Case for Memory Content-Based Detection and Mitigation of Data-Dependent Failures in DRAM
- Author
-
Onur Mutlu, Donghyuk Lee, Alaa R. Alameldeen, Samira Khan, and Christopher B. Wilkerson
- Subjects
010302 applied physics ,Hardware_MEMORYSTRUCTURES ,Computer science ,business.industry ,02 engineering and technology ,System level testing ,01 natural sciences ,020202 computer hardware & architecture ,Hardware and Architecture ,Embedded system ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Data content ,Latency (engineering) ,business ,Data dependent ,Dram ,Efficient energy use ,Dram chip - Abstract
DRAM cells in close proximity can fail depending on the data content in neighboring cells. These failures are called data-dependent failures . Detecting and mitigating these failures online while the system is running in the field enables optimizations that improve reliability, latency, and energy efficiency of the system. All these optimizations depend on accurately detecting every possible data-dependent failure that could occur with any content in DRAM. Unfortunately, detecting all data-dependent failures requires the knowledge of DRAM internals specific to each DRAM chip. As internal DRAM architecture is not exposed to the system, detecting data-dependent failures at the system-level is a major challenge. Our goal in this work is to decouple the detection and mitigation of data-dependent failures from physical DRAM organization such that it is possible to detect failures without knowledge of DRAM internals. To this end, we propose MEMCON , a memory content-based detection and mitigation mechanism for data-dependent failures in DRAM. MEMCON does not detect every possible data-dependent failure. Instead, it detects and mitigates failures that occur with the current content in memory while the programs are running in the system. Using experimental data from real machines, we demonstrate that MEMCON is an effective and low-overhead system-level detection and mitigation technique for data-dependent failures in DRAM.
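MEMCON itself operates at the DRAM-hardware level; purely as an illustration of the core idea described above (test only the failures that the *current* memory content can trigger, rather than all possible contents), a toy software model might look like the following. The coupling-fault rule, the `weak_cells` set, and all values are invented for the sketch:

```python
def read_with_coupling_faults(row, weak_cells):
    """Toy DRAM read: a 'weak' cell holding 1 flips to 0 when both of
    its neighbors hold 0 (an invented data-dependent failure model)."""
    out = list(row)
    for i in weak_cells:
        left = row[i - 1] if i > 0 else 0
        right = row[i + 1] if i < len(row) - 1 else 0
        if row[i] == 1 and left == 0 and right == 0:
            out[i] = 0
    return out

def detect_content_failures(row, weak_cells):
    """Content-based check: write the *current* content, read it back,
    and report the bit positions that did not survive."""
    readback = read_with_coupling_faults(row, weak_cells)
    return [i for i, (w, r) in enumerate(zip(row, readback)) if w != r]

content = [0, 1, 0, 1, 1, 0, 1, 0]
failing = detect_content_failures(content, weak_cells={1, 4, 6})
# cells 1 and 6 hold a 1 flanked by 0s, so only they fail with this content
```

The point of the sketch is the asymmetry MEMCON exploits: exhaustively testing every neighbor pattern requires knowledge of the chip-internal cell layout, whereas checking the content actually resident in memory does not.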
- Published
- 2017
- Full Text
- View/download PDF
50. Towards Geo-Context Aware IoT Data Distribution
- Author
-
Jonathan Hasenburg and David Bermbach
- Subjects
Focus (computing) ,Work (electrical) ,business.industry ,Computer science ,Control data ,Distribution (economics) ,Context (language use) ,Relevance (information retrieval) ,Data content ,Internet of Things ,business ,Data science - Abstract
In the Internet of Things, the relevance of data often depends on the geographic context of data producers and consumers. Today’s data distribution services, however, mostly focus on data content and not on geo-context, which would benefit many scenarios greatly. In this paper, we propose to use the geo-context information associated with devices to control data distribution. We define what geo-context dimensions exist and compare our definition with concepts from related work. By example, we discuss how geo-contexts enable new scenarios and evaluate how they also help to reduce unnecessary data distributions.
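As a rough illustration of gating data distribution on geo-context, the sketch below delivers a message only when producer and consumer each lie inside the other's geofence. The circular-fence model, the `should_deliver` rule, and the example coordinates are assumptions for the sketch, not the paper's actual geo-context dimensions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def should_deliver(producer, consumer):
    """Deliver only when each device lies inside the other's circular
    geofence -- one possible reading of 'matching geo-contexts'."""
    d = haversine_km(producer["pos"], consumer["pos"])
    return d <= producer["fence_km"] and d <= consumer["fence_km"]

sensor  = {"pos": (52.520, 13.405), "fence_km": 5}    # Berlin
nearby  = {"pos": (52.530, 13.410), "fence_km": 10}   # ~1 km away
faraway = {"pos": (48.137, 11.575), "fence_km": 10}   # Munich, ~500 km away
```

With this filter, `sensor`'s readings reach `nearby` but are never shipped to `faraway`, which is the kind of unnecessary distribution the paper aims to avoid.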
- Published
- 2020
- Full Text
- View/download PDF