668 results for "Data needs"
Search Results
2. Conceptualizing Data Needs within Contexts of Data Discoverability and Reuse: A Study of Environmental and Social Scientists
- Author
-
Liu, Ying-Hsang, Huvila, Isto, Kaiser, Jessica, Friberg, Zanna, Sköld, Olle, Andersson, Lisa, Power, Megan, and Wu, Mingfang
- Subjects
Environmental scientists, Data discovery contexts, Data reuse, Data needs, Social scientists - Abstract
This study contributes to the conference theme of information science processes and practices by examining the research data discovery contexts of ecological and social scientists who reuse datasets for research. The aim of the research was to gain insight into the data discovery contexts of these scientists and to understand the data needs related to data discoverability and reuse. The study identified four dimensions of data needs: research processes, making sense of data, data reuse, and data access. Additionally, it proposed a conceptualization of data needs within the context of data reuse, which has not been thoroughly examined in previous studies. The study employed a mixed-method approach within the post-positivist research paradigm to identify the different contexts in which data is discovered. A combination of survey and in-depth interview techniques was used to investigate the broader contexts of data discovery in people's information-seeking processes. The critical incident technique was used to elicit the contexts of data discovery, and the interview protocol was structured around the stages of a data lifecycle. Interviews were conducted with 24 participants from three organizations: TERN, ADA, and CSIRO. Participants held diverse job roles and were at different career stages. The study examined the relationship between the four dimensions of data needs and the roles of data managers and end-users. The findings contribute to the existing literature on data needs and emphasize the potential usefulness of research data and the need for paradata. The study also suggests that anticipating the contexts of data reuse involves considering what data users may find useful. Ensuring data quality is crucial for successful data reuse, which involves having access to organizational data expertise, providing data in various formats and platforms, and ensuring sufficient data coverage.
Data needs are influenced by the specific research objectives, which in turn affects the criteria for selecting and reusing data. Clear data licensing conditions are crucial to facilitate data reuse.
- Published
- 2023
- Full Text
- View/download PDF
3. National Weather Service Data Needs for Short-Term Forecasts and the Role of Unmanned Aircraft in Filling the Gap: Results from a Nationwide Survey
- Author
-
Lisa M. PytlikZillig, Adam L. Houston, and Janell C Walther
- Subjects
Transport engineering, Atmospheric Science, Data needs, Environmental science, National weather service, Nationwide survey - Abstract
Inclusion of unmanned aircraft systems (UAS) into the weather surveillance network has the potential to improve short-term (
- Published
- 2021
4. Type I and II error rates of Bayesian two-sample tests under preliminary assessment of normality in balanced and unbalanced designs and its influence on the reproducibility of medical research
- Author
-
Riko Kelter
- Subjects
Statistics and Probability, Reproducibility, Applied Mathematics, Data needs, Bayesian probability, Medical research, Randomized controlled trial, Modeling and Simulation, Statistics, Two sample, Statistics, Probability and Uncertainty, Normality, Student's t-test, Mathematics - Abstract
Student's two-sample t-test is often used in medical research like randomized controlled trials. To control type I errors, normality of the observed data needs to be assessed. In practice, a two-st...
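The truncated abstract describes assessing normality before running Student's two-sample t-test. As a hedged illustration only, here is a classical frequentist two-stage procedure of that kind (not the paper's Bayesian tests; the function name, the alpha level, and the Mann-Whitney fallback are my own assumptions):

```python
import numpy as np
from scipy import stats

def two_stage_test(a, b, alpha=0.05):
    """Stage 1: Shapiro-Wilk normality check on each sample.
    Stage 2: Student's t-test if both samples look normal, else Mann-Whitney U."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b).pvalue

# Two simulated treatment arms with a shifted mean
rng = np.random.default_rng(0)
name, p = two_stage_test(rng.normal(0.0, 1.0, 50), rng.normal(0.5, 1.0, 50))
```

The abstract's point is that this preliminary assessment itself affects the overall type I and II error rates, which is what the paper quantifies for the Bayesian analogues.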
- Published
- 2021
5. Comparison of free‐living physical activity data obtained from a Fitbit Zip, the Apple iPhone Health app and a modified Bouchard Activity Record
- Author
-
Rona Macniven, Veronica M Smith, and Rebecca Reynolds
- Subjects
Community and Home Care, Public health, Data needs, Activity tracker, Public Health, Environmental and Occupational Health, Physical activity, Monitoring, Ambulatory, Reproducibility of Results, Mobile Applications, Physical Activity Measurement, Accelerometry, Humans, Community setting, Psychology, Exercise, Human activities, Demography - Abstract
Issue addressed Physical activity tracking devices have potential to improve public health, but their data needs to be reliable. No study has compared movement data between the Fitbit Zip, Apple iPhone Health app and physical activity records in a community setting over 10 days. Methods University students aged 18+ years wore both a Fitbit Zip and an iPhone at/near their right waist and completed a modified Bouchard Activity Record (BAR) for 10 days in a free-living setting. Comparisons were made between the Fitbit Zip and iPhone for the number of steps and the distance travelled and between the Fitbit Zip and BAR for the minutes of activity in three different intensities. Results Eighteen students provided sufficient data for inclusion. There were strong correlations between steps per day (r = .87) and distance travelled (r = .88) between the Fitbit Zips and iPhones. However, the Fitbit Zip measured significantly more steps per day (mean 8437 vs 7303; P ≤ .001) and greater distances (mean 5.9 vs 4.9; P ≤ .001) than the iPhone. Correlations between the Fitbit Zips and the BARs were moderate for minutes of total (r = .51) and light (r = .40) activity and weak for moderate/fairly active (r = .20) and vigorous/very active (r = .25). Conclusions There were strong correlations between the physical activity data measured by Fitbit Zips and iPhones, but the iPhone Health app significantly underestimated the number of steps per day taken and the distance travelled when compared to the Fitbit Zip. SO WHAT?: Understanding the comparability of accelerometer devices provides useful information for future pragmatic physical activity measurement.
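The comparison this abstract reports, correlation between paired devices plus a test for a systematic offset, can be sketched on invented data (the step counts below are hypothetical stand-ins, not the study's measurements):

```python
import numpy as np
from scipy import stats

# Invented daily step counts from two devices worn simultaneously for 10 days
fitbit = np.array([9100, 7800, 8900, 10250, 6900, 8400, 9600, 7700, 8200, 8800])
iphone = np.array([7900, 6700, 7800,  9100, 5800, 7300, 8500, 6600, 7100, 7600])

r, _ = stats.pearsonr(fitbit, iphone)   # do the devices rank days similarly?
t, p = stats.ttest_rel(fitbit, iphone)  # is there a systematic offset?
```

A high `r` with a significant paired difference reproduces the study's pattern: strong correlation between devices, yet one device systematically under-counts.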
- Published
- 2021
6. Place-Based Philanthropy and Measuring Community Well Being in the Age of COVID-19
- Author
-
Frank Ridzi
- Subjects
Community well-being, Value (ethics), 2019-20 coronavirus outbreak, Community indicators, Coronavirus disease 2019 (COVID-19), Data needs, COVID-19, Public relations, Place-based funders, Psychiatry and Mental health, Action (philosophy), Political science, Philanthropy, Community well being, Original Research Article, Scenario planning, Public awareness - Abstract
Place-based philanthropic organizations have long defined their value in terms of ability to improve well-being in the communities they serve. Desire to quantify and prove this impact has led such charities to be interested in and even invest in measures of community well-being. In this paper I explore how the onset of the COVID-19 pandemic has affected local philanthropy's relationship with data and information by increasing public awareness of community data as a tool for describing rapidly changing community needs, raising expectations for an expedited connection between data analysis and action, and compelling civic leaders to engage in scenario planning. I draw on the case of Syracuse, NY to illustrate how the presence of a real time collaborative data infrastructure presents promising opportunities to address the data needs of place-based philanthropy when it comes to monitoring and acting to improve community well-being in the COVID era.
- Published
- 2021
7. Application of Remote Sensing on El Niño Extreme Effect in Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI)
- Author
-
Oliver Valentine Eboy and Ricky Anak Kemarau
- Subjects
Carbon dioxide content, Index (economics), Data needs, Environmental science, Normalized difference water index, Spatial analysis, Extreme temperature, Normalized Difference Vegetation Index, Remote sensing - Abstract
The years 1997/1998 and 2015/2016 saw the strongest El Niño events on record. El Niño causes extreme temperature events higher than usual, as well as drought and prolonged dry periods. These events reduce the ability of plants to carry out photosynthesis, causing the carbon dioxide content to rise above normal. The effects of El Niño and its degrees of strength remain under-studied, especially by researchers in the tropics. This study uses remote sensing technology, which can provide spatial information. The remote sensing data first needs to go through pre-processing before the NDVI (Normalized Difference Vegetation Index) and NDWI (Normalized Difference Water Index) maps are built. The study then identifies the relationship between the Oceanic Niño Index (ONI) and the NDVI and NDWI landscape indices for 1997/1998 and 2015/2016, and compares the statistical and spatial information between NDWI and NDVI for each of those years. This study is important for providing spatial information to those responsible for preparing measures to reduce the impact of El Niño.
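The NDVI and NDWI maps the abstract mentions are simple band ratios. A minimal sketch (the reflectance values are hypothetical, and since the abstract does not state which NDWI variant is used, the McFeeters green/NIR form is assumed here):

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red): dense healthy vegetation approaches +1
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    # McFeeters NDWI = (Green - NIR) / (Green + NIR): open water is positive
    return (green - nir) / (green + nir)

# Hypothetical surface reflectances for a vegetated pixel and a water pixel
veg = ndvi(nir=np.float64(0.45), red=np.float64(0.08))
water = ndwi(green=np.float64(0.25), nir=np.float64(0.05))
```

In practice these functions are applied per-pixel to whole reflectance arrays, which is why the pre-processing step (atmospheric and radiometric correction) matters before the indices are mapped.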
- Published
- 2021
8. Data Needs in Opioid Systems Modeling: Challenges and Future Directions
- Author
-
Reza Kazemi-Tabriz, Sara Eggers, Tse Yang Lim, Calvin B. Bannister, Lukas Glos, Rosalie Liccardo Pacula, Emily Ewing, Celia A. Stafford, Hawre Jalal, Erin Stringfellow, and Mohammad S. Jalali
- Subjects
Government, Data collection, Epidemiology, Computer science, Data needs, Simulation modeling, Public Health, Environmental and Occupational Health, Psychological intervention, Context (language use), Systems modeling, Data science, Article, Analgesics, Opioid, Humans, Opioid Epidemic, Forecasting - Abstract
Introduction The opioid crisis is a pervasive public health threat in the U.S. Simulation modeling approaches that integrate a systems perspective are used to understand the complexity of this crisis and analyze what policy interventions can best address it. However, limitations in currently available data sources can hamper the quantification of these models. Methods To understand and discuss data needs and challenges for opioid systems modeling, a meeting of federal partners, modeling teams, and data experts was held at the U.S. Food and Drug Administration in April 2019. This paper synthesizes the meeting discussions and interprets them in the context of ongoing simulation modeling work. Results The current landscape of national-level quantitative data sources of potential use in opioid systems modeling is identified, and significant issues within data sources are discussed. Major recommendations on how to improve data sources are to: maintain close collaboration among modeling teams, enhance data collection to better fit modeling needs, focus on bridging the most crucial information gaps, engage in direct and regular interaction between modelers and data experts, and gain a clearer definition of policymakers’ research questions and policy goals. Conclusions This article provides an important step in identifying and discussing data challenges in opioid research generally and opioid systems modeling specifically. It also identifies opportunities for systems modelers and government agencies to improve opioid systems models.
- Published
- 2021
9. An output-based measurement of EU bioeconomy services: Marrying statistics with policy insight
- Author
-
George Philippidis, Tévécia Ronzon, and Susanne Iost
- Subjects
Employment, Economics and Econometrics, Data needs, Gross domestic product, Value added, Statistics, Agricultural Economics and Rural Policy, European union, Productivity, Service (business), Scope (project management), Member states, Bioeconomy, Europe, Value (economics), Business - Abstract
In its revised bioeconomy strategy, the European Union (EU) has extended the scope of activities to include services. Employing an output-based approach, this study quantifies the contribution of bioeconomy services to gross domestic product and employment in the EU Member States over 2008-2017. Moreover, it also identifies the main sectoral sources of employment and growth within bioeconomy services. The choice of Eurostat statistics ensures data harmonisation across countries and continuity for future updates, although important data needs are identified to enhance the representation of bioeconomy services within European statistical frameworks. In 2015-2017, economic growth was stronger in bioeconomy services than in the total EU economy. Bioeconomy services accounted for between 5.0-8.6% and 10.2-16.9% of EU gross domestic product and the EU labour force, respectively, whilst three service sectors account for more than 60% of bioeconomy services employment and value added. Interestingly, in the decade up to 2017, labour productivity in bioeconomy services improved.
- Published
- 2022
10. COVID-19 Policy Modeling in Sub-Saharan Africa
- Author
-
Megan Jehn, Valerie Mueller, Glenn Sheriff, and Corinna Keeler
- Subjects
Economics and Econometrics, 2019-20 coronavirus outbreak, Sub saharan, Coronavirus disease 2019 (COVID-19), Poverty, Data needs, Context (language use), Development, Development economics, Pandemic, Economics, Health policy - Abstract
After an initial delay, Sub-Saharan Africa (SSA) is being hit by the pandemic. Demand for exports is falling and caseloads are rising. Governments have approached this crisis with a range of policy options. Optimal policy balances reduced infection rates with lost economic output. This paper discusses how an economic-epidemiological model used to analyze policy in high-income countries could be adapted to a context where poverty considerations are paramount. Differences in country characteristics across the continent affect the benefits and costs of alternative policy designs. We conclude by highlighting data needs and model calibration challenges for COVID-19 policy research in SSA.
- Published
- 2020
11. A preliminary study to identify data needs for improving fit of hand and wrist orthosis using verbal protocol analysis
- Author
-
Xinyang Tan, Jiangang Cao, Saeema Ahmed-Kristensen, and Wei Chen
- Subjects
Adult, Orthotic Devices, Wrist orthosis, Computer science, Data needs, Psychological intervention, Physical Therapy, Sports Therapy and Rehabilitation, Human Factors and Ergonomics, Protocol analysis, Occupational Therapists, Human–computer interaction, Humans, Rehabilitation, Communication, Hand Injuries, Equipment Design, Middle Aged, Delayed delivery, Current practice, Task analysis, Clinical Competence, Needs Assessment - Abstract
Delayed delivery, poor fitting and discomfort of customised orthoses are reported in rehabilitation clinics as resulting in more invasive interventions. The current practice of orthosis customisation relies heavily upon the experience and fabrication processes of therapists. In order to better understand current practice, and thus identify the data required for better comfort in moving towards data-driven customisation, this article describes a study generating working models of therapists. Customisations of hand and wrist orthoses for 18 patients were observed. Verbal protocol analysis was employed to extend the current understanding of fabrication processes. Working models of four therapists were established, with quantitative evaluation of major phases, interactive activities and iterations of tasks performed during fabrication, revealing differences between the in- and out-patient departments (e.g. fabrication for in-patients was more complex and focussed on ergonomic fitting, whereas fabrication for out-patients paid attention to durability) that were qualitatively explained. Practitioner summary: Fit and comfort are imperative for orthosis design and fabrication; however, the current practice of customising an orthosis relies upon the experience of the individual hand therapist. The article presents working models of hand therapists and relevant data that would enable customisation of orthoses for better fit. Abbreviations: VPA: verbal protocol analysis; LTT: low temperature thermoplastic; ANOVA: analysis of variance.
- Published
- 2020
12. Surgical Implantation of Acoustic Tags in American Shad to Resolve Riverine and Marine Restoration Challenges
- Author
-
Benjamin I. Gahagan and Michael M. Bailey
- Subjects
Alosa, Watershed, Biology, Marine biology & hydrobiology, Data needs, Aquatic Science, Spawn (biology), Fishery, Geography, Statistical analyses, Fisheries, American shad, Ecology, Evolution, Behavior and Systematics, Overwintering - Abstract
A variety of data needs challenge the successful restoration and management of alosine populations, including information on the migration, mortality, behavior, demographic rates, and distribution of fish, both in riverine and marine environments. Radiotelemetry with gastric‐implanted transmitters has typically been used to answer some of these questions; however, observing alosines over extended periods and in the marine environment has remained beyond the limitations of this technology and implantation technique. To address these issues, we conducted an acoustic telemetry study on American Shad Alosa sapidissima by using surgical implantation methods. We tagged fish during 2015 (n = 46) and 2016 (n = 52) in the Charles River, Massachusetts, an urbanized watershed where American Shad were believed to be extirpated prior to restoration efforts beginning in 2006. Surgical implantation produced rates of in‐river mortality (40% overall) and posttagging fallback (39% overall) that were comparable to those from traditionally used gastric implantation methods. Data from American Shad that were retained for statistical analyses (n = 59) demonstrated that Watertown Dam (at river kilometer 14.3) impeded upstream migration and that New Boston Dam and Locks (at the mouth of the river) delayed postspawn emigration from the river. In total, 49 American Shad were detected outside of the Charles River. The distribution and low number of total detections, despite a large number of nearshore arrays, suggest that American Shad occupy waters farther offshore during their marine phase. American Shad were detected as overwintering on the Scotian Shelf (n = 5) and the Mid‐Atlantic Bight (n = 1). In 2017, 10 of the individuals that were tagged in 2016 returned to spawn, providing the first reported data on total migration timing and migratory behavior free of handling effects. 
Surgical implantation of acoustic telemetry tags is an effective method that can provide necessary and previously unattainable data on a species of conservation need.
- Published
- 2020
13. Compressed sensing–based electromechanical admittance data loss recovery for concrete structural health monitoring
- Author
-
Hongping Zhu, Hedong Li, Hui Luo, and Demi Ai
- Subjects
Admittance, Computer science, Mechanical Engineering, Data needs, Biophysics, Data loss, Matching pursuit, Compressed sensing, Convex optimization, Electronic engineering, Structural health monitoring - Abstract
A considerable amount of electromechanical admittance data needs to be collected, transmitted and stored during in-situ, long-term structural health monitoring, and data loss is inevitably encountered when processing the monitored electromechanical admittance signals. In this article, an innovative compressed sensing–based approach is proposed to implement data recovery for electromechanical admittance technique–based concrete structural health monitoring. The basis of this approach is to first project the original conductance signature onto an observation vector as sampled data, then transmit the observation vector (with data loss) to the storage station, and finally recover the missing data via a compressed sensing process. For comparison, both convex optimization and the orthogonal matching pursuit algorithm are used to accomplish the compressed sensing–based electromechanical admittance data loss recovery. A preliminary detection test of a concrete cube subjected to varied temperatures and a practical monitoring experiment on a full-scale concrete shield tunnel segment with loosened bolts are used to validate the feasibility of the proposed approach. In the lost-data recovery process, two types of data loss in the sampled data, single-consecutive-segment loss and multiple-consecutive-segment losses, are considered in order to fully assess the effectiveness and accuracy of the convex optimization and orthogonal matching pursuit approaches. In the temperature recognition and damage identification stage, amplitude and frequency shifts in resonance peaks, combined with a common statistical index, the root-mean-square deviation (RMSD), are used to achieve the goal after the lossy conductance signatures are recovered.
The results show that the orthogonal matching pursuit–based data recovery approach is superior to the convex optimization approach because of its lower computational cost and smaller recovery errors.
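The orthogonal matching pursuit step the abstract favours can be sketched minimally: greedily pick the dictionary column most correlated with the residual, then re-fit all chosen columns by least squares. The measurement matrix and sparse signal below are synthetic stand-ins, not the paper's admittance data:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: after each greedy column selection,
    re-fit the whole support by least squares and update the residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Synthetic stand-in: a 3-sparse signal observed through a random Gaussian matrix
rng = np.random.default_rng(1)
Phi = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = omp(Phi, Phi @ x_true, k=3)
```

In the paper's setting the "signal" is the conductance signature in a sparsifying basis and the missing segments correspond to dropped measurements; the sketch only shows why OMP is cheap, as each iteration is one correlation and one small least-squares solve.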
- Published
- 2020
14. Data market platforms
- Author
-
Michael J. Franklin, Raul Fernandez, and Pranav Subramaniam
- Subjects
FOS: Computer and information sciences, Computer science, Data needs, Data market, General Engineering, Databases (cs.DB), Integration problem, Asset (computer security), Data science, Data sharing, Incentive, Value (economics) - Abstract
Data only generates value for a few organizations with expertise and resources to make data shareable, discoverable, and easy to integrate. Sharing data that is easy to discover and integrate is hard because data owners lack information (who needs what data) and they do not have incentives to prepare the data in a way that is easy to consume by others. In this paper, we propose data market platforms to address the lack of information and incentives and tackle the problems of data sharing, discovery, and integration. In a data market platform, data owners want to share data because they will be rewarded if they do so. Consumers are encouraged to share their data needs because the market will solve the discovery and integration problem for them in exchange for some form of currency. We consider internal markets that operate within organizations to bring down data silos, as well as external markets that operate across organizations to increase the value of data for everybody. We outline a research agenda that revolves around two problems. The problem of market design, or how to design rules that lead to desired outcomes, and the systems problem, how to implement the market and enforce the rules. Treating data as a first-class asset is sorely needed to extend the value of data to more organizations, and we propose data market platforms as one mechanism to achieve this goal.
- Published
- 2020
15. UMKM Naik Kelas: Mengkonstruksi Sebuah Desain Faktor Determinant Berluaran Perkembangan Usaha (Studi pada UMKM di Kota Semarang)
- Author
-
Rauly Sijabat
- Subjects
Data needs, Business administration, Unemployment, Asian country, Business, Business development, Competence (human resources), Structural equation modeling, Research data - Abstract
UMKM (MSMEs) are recognized as having an important role as an economic buffer through their contribution to national GDP and to unemployment reduction. Despite this acknowledgment, however, business performance and the development of UMKM businesses have not improved in line with that contribution. Even though they have contributed greatly, their performance and development still cannot compete with MSMEs in other Asian countries. This phenomenon motivates a study to empirically model the determinant factors that influence the development of MSME businesses. The result is an empirical model of business development explained by business performance, entrepreneurial competence, personal factors, organizational factors and environmental factors. To meet the data needs of these variables, interviews were conducted using a questionnaire. The empirical model and hypotheses were tested on the research data using the Structural Equation Modeling (SEM) approach. The findings are that business development is explained by business performance, while business performance is determined by entrepreneurial competence, personal factors and environmental factors. The role of organizational factors could not be confirmed in this study.
- Published
- 2020
16. Statistical Study of Machine Learning Algorithms Using Parametric and Non-Parametric Tests
- Author
-
Gitanjali R. Shinde, Vijay M. Khadse, and Parikshit N. Mahalle
- Subjects
Computer science, Data needs, Nonparametric statistics, Machine learning, Multiple comparisons problem, Health care, Artificial intelligence, Internet of Things, Software, Normality, Parametric statistics - Abstract
The emerging area of the internet of things (IoT) generates a large amount of data from IoT applications such as health care, smart cities, etc. This data needs to be analyzed in order to derive useful inferences. Machine learning (ML) plays a significant role in analyzing such data. It is difficult to select the optimal algorithm from the available set of algorithms/classifiers to obtain the best results, because the performance of algorithms differs when applied to datasets from different application domains. It is also difficult to tell whether a difference in performance is real or due to random variation in the test data, the training data, or the internal randomness of the learning algorithms. This study takes these issues into account in a comparison of ML algorithms for binary and multivariate classification, and provides guidelines for the statistical validation of results. The results obtained show that the accuracy of one algorithm differs from the others by more than the critical difference (CD) over binary and multivariate datasets obtained from different application domains.
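The critical difference (CD) mentioned in this abstract usually comes from a Friedman test followed by a Nemenyi post-hoc comparison (Demšar's procedure). A hedged sketch on invented accuracy scores (3 classifiers over 6 datasets; the q value is the tabulated 0.05-level constant for k = 3, and none of the numbers are from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical accuracies of 3 classifiers (columns) on 6 datasets (rows)
acc = np.array([[0.81, 0.86, 0.90],
                [0.75, 0.80, 0.85],
                [0.88, 0.90, 0.94],
                [0.70, 0.78, 0.82],
                [0.83, 0.85, 0.91],
                [0.79, 0.84, 0.88]])

# Friedman test: do the classifiers' rank distributions differ at all?
stat, p = stats.friedmanchisquare(*acc.T)

# Nemenyi critical difference: two classifiers differ significantly
# when their mean ranks differ by more than CD.
k, N = acc.shape[1], acc.shape[0]
q_alpha = 2.343                                  # tabulated q_0.05 for k = 3
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
mean_ranks = stats.rankdata(-acc, axis=1).mean(axis=0)  # rank 1 = most accurate
```

A significant Friedman p-value licenses the pairwise rank comparison; any pair of classifiers whose mean ranks differ by more than `cd` is then declared significantly different.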
- Published
- 2020
17. Business intelligence in academic libraries in Jordan: Opportunities and challenges
- Author
-
Faten Hamad, Razan Al-Aamr, Sinaria Abdel Jabbar, and Hussam Fakhuri
- Subjects
Knowledge management, Data needs, Academic library, Library and Information Sciences, Business intelligence, Data analysis - Abstract
Data plays a major role in helping to understand clearly the changing needs of academic library users, and in helping libraries to innovate their services and procedures accordingly. Data needs to be transformed into information for decision-making and strategic planning. Business intelligence offers powerful analytical tools, such as visualization and data-mining tools, which lead to informed decisions and hence transform the user’s experience, bringing it to a more advanced level. This research investigates the concept of business intelligence from the perceptions of information department staff at academic libraries in Jordan. The opportunities and challenges associated with it are also discussed and explored. As indicated by the results, information department staff agree that business intelligence improves decision-making, helping decision-makers to make the most accurate and timely decisions for the library. The results also indicate that an appropriate infrastructure is important for the successful implementation of business intelligence in academic libraries in Jordan.
- Published
- 2020
18. How accurate are policy document mentions? A first look at the role of altmetrics database
- Author
-
Zhenyi Yang, Houqiang Yu, Tingting Xiao, and Xueting Cao
- Subjects
Database, Computer science, Data needs, General Social Sciences, Data provider, Library and Information Sciences, Computer Science Applications, High complexity, Data quality, Transcription error, Altmetrics, Coding (social sciences) - Abstract
Policy document mentions are considered to indicate the significance and societal impact of scientific products. However, the accuracy of policy document altmetrics data needs to be evaluated to fully understand its strengths and limitations. An in-depth coding analysis was conducted on a sample of policy document records from the Altmetric.com database. The sample consists of 2079 records from all 79 distinct policy document source platforms tracked by the database. Errors about mentioned publications in the policy documents (type A errors) are found in 8% of the records, while errors about either the recorded policy documents or the mentioned publications in the altmetrics database (type B errors) are found in 70% of the records. Among type B errors, policy document link errors (5% of the records) could be attributable to the policy document websites, and transcription errors (52% of the records) could be attributable to the third-party bibliographic data provider. These two categories of error are relatively minor and may have limited influence on altmetrics research and practice. False positive policy document mentions (13% of the records), however, could be attributable to the Altmetric database itself and may diminish the validity of research based on policy document altmetrics data. The underlying reasons remain to be investigated. Considering the high complexity of extracting mentions of publications from the various sources and formats of policy documents, as well as its short history, the Altmetric database has achieved excellent performance.
- Published
- 2020
19. Optimalisasi Peta Pengendalian Penduduk untuk Diintegrasikan pada Organisasi Perangkat Daerah (OPD) di Kabupaten Bandung Barat
- Author
-
Ryan Puspadianis Ramitha Satya Putri and Muhammad Rozahi Istambul
- Subjects
education.field_of_study ,Demographic Maps, Population Control, Data Integration, Optimization, Duplication ,Data collection ,lcsh:T ,Data needs ,Population ,lcsh:Technology ,Population control ,lcsh:QA75.5-76.95 ,Geography ,National health insurance ,lcsh:Electronic computers. Computer science ,Socioeconomics ,education - Abstract
Demographic maps can present statistical data on education levels, population stages, family stages, age, and the number of household members, based on participation in the National Health Insurance (JKN) program. Every Regional Apparatus Organization (OPD) in West Bandung Regency that requires population control data needs data integration and performance optimization, so that work is no longer duplicated across OPDs. This study aims to map all information about population control data in order to help the community and the OPDs in West Bandung Regency obtain clear information from population control data, based on the results of family data collection, thereby making it easier for the related OPDs to serve the entire community of West Bandung Regency. The proposed solution uses the QGIS application to build population control maps that can be integrated across the OPDs in West Bandung Regency.
- Published
- 2020
20. Validation of Bicycle Level of Traffic Stress and Perceived Safety for Children
- Author
-
Nicholas N. Ferenchak and Wesley E. Marshall
- Subjects
Perceived safety ,050210 logistics & transportation ,Measure (data warehouse) ,030505 public health ,Injury control ,Computer science ,Mechanical Engineering ,Data needs ,05 social sciences ,Poison control ,Track (rail transport) ,Occupational safety and health ,Transport engineering ,03 medical and health sciences ,0502 economics and business ,Stress (linguistics) ,0305 other medical science ,Civil and Structural Engineering - Abstract
The level of traffic stress (LTS) methodology was developed to measure, track, and improve the suitability of bicycle networks. Thanks to the simplicity of its data needs and interpretation, LTS has been implemented by several states, regions, cities, non-profits, and researchers. However, relatively few validations of the methodology exist. There is a specific gap in relation to safety perceptions for children, an important group since it serves as the critical population for LTS 1. This study validates LTS using a survey of parents in Denver, Colorado, who were asked about perceived safety and biking allowance relative to roadway design characteristics. After the LTS scores and biking allowance rates for 612 roadway scenarios were determined, a one-way analysis of variance (ANOVA) was used to determine the suitability of LTS for children. Findings suggest that while LTS 1 and LTS 4 align well with stated preferences, parents indicated that, when adult supervision was allowed, their children could tolerate some roadway conditions currently classified as LTS 2 or even LTS 3. These scenarios are primarily on low-volume roadways that have bike lanes. Further refinement of LTS can help ensure that all populations have access to safe and comfortable bicycle facilities.
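The one-way ANOVA used in the study can be sketched in a few lines of pure Python. The ratings and group labels below are invented for illustration, not the study's 612 scenarios.

```python
# Minimal one-way ANOVA sketch: F statistic for groups of ratings
# keyed by a hypothetical LTS level (data invented for illustration).
def one_way_anova(groups):
    """Return the F statistic for a dict of label -> list of values."""
    all_vals = [v for vals in groups.values() for v in vals]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # Between-group sum of squares (group means vs. grand mean)
    ssb = sum(len(v) * (sum(v) / len(v) - grand) ** 2
              for v in groups.values())
    # Within-group sum of squares (values vs. their group mean)
    ssw = sum(sum((x - sum(v) / len(v)) ** 2 for x in v)
              for v in groups.values())
    return (ssb / (k - 1)) / (ssw / (n - k))

ratings = {"LTS1": [1, 2, 3], "LTS2": [2, 3, 4], "LTS3": [3, 4, 5]}
print(one_way_anova(ratings))  # 3.0
```

A large F relative to the critical value for (k-1, n-k) degrees of freedom indicates that at least one group's mean differs from the others.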
- Published
- 2020
21. Data Needs for Hyperspectral Detection of Algal Diversity Across the Globe
- Author
-
Kevin Ruddick, Vittorio E. Brando, Astrid Bracher, Hubert Loisel, and Heidi M. Dierssen
- Subjects
0106 biological sciences ,010504 meteorology & atmospheric sciences ,business.industry ,010604 marine biology & hydrobiology ,Data needs ,media_common.quotation_subject ,Environmental resource management ,detection ,Hyperspectral imaging ,Globe ,algal ,Oceanography ,01 natural sciences ,hyperspectral ,Geography ,medicine.anatomical_structure ,13. Climate action ,medicine ,14. Life underwater ,business ,0105 earth and related environmental sciences ,Diversity (politics) ,media_common - Abstract
A group of 38 experts specializing in hyperspectral remote-sensing methods for aquatic ecosystems attended an interactive Euromarine Foresight Workshop at the Flanders Marine Institute (VLIZ) in Ostend, Belgium, June 4–6, 2019. The objective of this workshop was to develop recommendations for comprehensive, efficient, and effective laboratory and field programs to supply data for development of algorithms and validation of hyperspectral satellite imagery for micro-, macro- and endosymbiotic algal characterization across the globe. The international group of researchers from Europe, Asia, Australia, and North and South America (see online Supplementary Materials) tackled how to develop global databases that merge hyperspectral optics and phytoplankton group composition to support the next generation of hyperspectral satellites for assessing biodiversity in the ocean and in food webs and for detecting water quality issues such as harmful algal blooms. Through stimulating discussions in breakout groups, the team formulated a host of diverse programmatic recommendations on topics such as how to better integrate optics into phytoplankton monitoring programs; approaches to validating phytoplankton composition with ocean color measurements and satellite imagery; new database specifications that match optical data with phytoplankton composition data; requirements for new instrumentation that can be implemented on floats, moorings, drones, and other platforms; and the development of international task forces. Because in situ observations of phytoplankton biogeography and abundance are scarce, and many vast oceanic regions are too remote to be routinely monitored, satellite observations are required to fully comprehend the diversity of micro-, macro-, and endosymbiotic algae and any variability due to climate change. 
Ocean color remote sensing, which provides regular synoptic monitoring of aquatic ecosystems, is an excellent tool for assessing the biodiversity and abundance of phytoplankton and algae. However, neither the spatial, temporal, nor spectral resolution of the current ocean color missions is sufficient to characterize phytoplankton community composition adequately. The near-daily overpasses of ocean color satellites are useful for detecting the presence of blooms, but the spatial resolution is often too coarse to assess the patchy distribution of blooms, and the multiband spectral resolution is generally insufficient to distinguish different types of phytoplankton from each other, even if progress has undeniably been made during the last two decades (e.g., IOCCG, 2014). Moreover, the methods developed for multichannel sensors are often highly tuned to a region and become inaccurate when applied broadly. New orbital imaging spectrometers are being developed that cover the full visible and near-infrared spectrum with a large number of narrow bands, dubbed "hyperspectral" (e.g., TROPOMI, PRISMA, EnMAP, PACE, CHIME, SBG). Hyperspectral methods have been explored for many years to assess phytoplankton groups and map seafloor habitats. However, the utility of hyperspectral imaging still needs to be demonstrated across diverse aquatic regimes. Aquatic applications of hyperspectral imagery have been limited by both the technology and the ability to validate products. Some past hyperspectral space-based sensors have suffered from calibration artifacts, low sensitivity in aquatic ecosystems (e.g., CHRIS, HICO), and very low spatial resolution (e.g., SCIAMACHY), but the next generation of sensors is planned to have high signal-to-noise ratios and improved performance over aquatic targets. Providing data to develop and validate hyperspectral approaches for characterizing phytoplankton groups across the globe poses new challenges.
Several recent studies have documented gaps that need to be filled in order to assess algal diversity across the globe (IOCCG, 2014; Mouw et al., 2015; Bracher et al., 2017), which motivated the formation of this workshop.
- Published
- 2020
22. Pemanfaatan Aplikasi Google Form dalam Meningkatkan Pelaksanaan Supervisi Pendidikan Pengawas Madrasah
- Author
-
Sri Rahmiyati
- Subjects
World Wide Web ,Supervisor ,Data retrieval ,Computer science ,Data needs ,Field (computer science) - Abstract
The Google Form application was adopted as an alternative when madrasah supervisors encountered obstacles in the field related to the size of their areas and the number of madrasah buildings. For daily activities, Google Forms offers: 1) real-time distribution and tabulation; 2) real-time collaboration; and 3) security, since important files or school assignments stored online are not at risk of being lost, damaged, or exposed to viruses. Google Forms is also easy to use even for beginners, free of charge, presents results in an Excel file for easy processing, and is a fairly lightweight program. Its use makes madrasah supervision more effective and efficient in the following ways: 1) it allows supervisors to collect data without meeting informants in person, which is especially valuable for informants far from the supervision site; 2) for activity or administrative reports that do not require the supervisor's presence in the field, collecting data through instruments created with Google Forms helps supervisors carry out their duties more effectively and efficiently; and 3) the features available in Google Forms, such as charts, tables, and Excel export, strongly support the data needs of the madrasah supervisor.
- Published
- 2020
23. Correlations between macroseismic intensity estimations and ground motion measures of seismic events
- Author
-
Leonardo Chiauzzi, Vincenzo Manfredi, Angelo Masi, and Giuseppe Nicodemo
- Subjects
Ground motion ,021110 strategic, defence & security studies ,Peak ground acceleration ,Data needs ,0211 other engineering and technologies ,02 engineering and technology ,Building and Construction ,Geotechnical Engineering and Engineering Geology ,Technical literature ,Intensity (physics) ,Geophysics ,European Macroseismic Scale ,Statistical analyses ,Seismology ,Geology ,Civil and Structural Engineering - Abstract
Macroseismic intensity data are an important source of information about historical earthquakes. Nevertheless, these data need to be converted into more suitable intensity measures for use in risk analyses, as well as in design practice. To this purpose, correlations between macroseismic scales and ground motion parameters have been derived in this paper. Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), and Housner Intensity (IH) were considered as instrumental measures, and the European Macroseismic Scale (EMS-98) and Mercalli-Cancani-Sieberg (MCS) scale as macroseismic measures. 179 ground-motion records from 32 earthquake events that occurred in Italy in the last 40 years were selected, provided that for each record the macroseismic intensity in terms of EMS-98, MCS, or both was also available. Statistical analyses were carried out to derive both direct (macroseismic vs. instrumental intensity) and inverse (instrumental vs. macroseismic intensity) relationships. Results obtained from the proposed relationships have been analyzed and compared with some of the most prominent results available in the technical literature.
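Correlations of this kind are often expressed as a linear relation between macroseismic intensity and the logarithm of a ground-motion parameter. The sketch below fits such a relation by ordinary least squares; the data and coefficients are invented to check the fit, not taken from the paper.

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Invented (PGA, intensity) pairs lying exactly on I = 2 + 3*log10(PGA),
# so the fit should recover a = 2 and b = 3.
pga = [10, 50, 100, 300, 600]
intensity = [2 + 3 * math.log10(g) for g in pga]
a, b = fit_line([math.log10(g) for g in pga], intensity)
print(round(a, 6), round(b, 6))  # 2.0 3.0
```

The inverse relationship (PGA as a function of intensity) would be obtained by swapping the roles of the two variables in the fit, which in general does not give the algebraic inverse of the direct relation.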
- Published
- 2020
24. The last decade in orthodontics: A scoping review of the hits, misses and the near misses!
- Author
-
Donald J. Ferguson, Pratik Premjani, M. Ali Darendeliler, Nikhilesh R. Vaid, and Narayan H. Gandedkar
- Subjects
Orthodontics ,Data needs ,030206 dentistry ,Practice management ,Outcome assessment ,Near miss ,03 medical and health sciences ,Patient confidentiality ,0302 clinical medicine ,Social media ,Road map ,Psychology ,030217 neurology & neurosurgery ,Patient education - Abstract
The past decade (2009-19) has seen orthodontics incorporate many new infusions into its fold. This scoping review analyzes published orthodontic literature in five domains: (1) recent advancements in orthodontic 3D applications, including 3D printing, diagnosis, and management; (2) recent advancements in orthodontic biomaterials, nanotechnology, biomimetics, and battery-driven devices; (3) recent advancements in orthodontic patient education, orthodontic training, and orthodontic practice management; (4) recent advancements in orthodontic e-health protocols, tele-orthodontics, and teleconsultations; and (5) recent advancements in orthodontic marketing and social media influences. A total of 1245 records were searched, of which 65 potentially relevant articles were retrieved in full. 42 studies met the selection criteria following screening and were included in the scoping review. The review found studies pertaining to morphological features or surface characteristics with respect to 3D applications (3D printing, diagnosis, and management) to be the most represented outcome assessment (49%). Orthodontic marketing and the influence of social media (27%), and biomaterials, nanotechnology, biomimetics, and battery-driven devices (20%), have also been considerably reported in the past decade. More scientific data needs to be gathered in the fields of patient education, e-health, tele-orthodontics, and protection of patient confidentiality. The authors present Core Outcome Sets (COS) that could be a road map for evaluating currently employed developments as well as testing new ones in the future.
- Published
- 2019
25. Building Data Capacity for Patient-Centered Outcomes Research
- Author
-
Telecommunications Board
- Subjects
Data capacity ,Patient-centered outcomes ,Data needs ,Operations management ,Psychology ,Interim report - Published
- 2021
26. CLINICAL AND EPIDEMIOLOGICAL CHARACTERISTICS OF CHILDREN WITH PCR-CONFIRMED COVID-19 IN VOLGOGRAD REGION, Russia
- Author
-
Ivan Shishimorov, Tatyana Zayachnikova, Kirill Kaplunov, and Lubov Kramar
- Subjects
Cultural Studies ,History ,2019-20 coronavirus outbreak ,Pediatrics ,medicine.medical_specialty ,Literature and Literary Theory ,Coronavirus disease 2019 (COVID-19) ,business.industry ,Data needs ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,medicine.disease ,Dysosmia ,Dysgeusia ,Epidemiology ,medicine ,medicine.symptom ,business ,Exanthem - Abstract
The paper presents, for the first time for the Volgograd region (Russia), the clinical and epidemiological characteristics of children aged 0-16 years with a laboratory-confirmed diagnosis of COVID-19 hospitalized in a children's infectious diseases clinic in April-August 2020. The Volgograd region is one of the largest territorial entities in southern Russia; therefore our first published analytical data need to be compared with relevant Russian and European data. The diagnosis was verified by nucleic acid amplification: isolation of SARS-CoV-2 RNA by polymerase chain reaction (PCR) from the upper respiratory tract mucosa. The study did not include outpatients. The rarity of symptoms such as headache and weakness (7.1%) and dysosmia (4.1%), together with the complete absence of dysgeusia, attracted our attention. In none of the patients were we able to identify an exanthem over the course of the infection.
- Published
- 2021
27. Data Needs and Limitations for Public Services Modeling
- Author
-
J. M. (Jack) Whitmer
- Subjects
Computer science ,Data needs ,Data science - Published
- 2021
28. Similarities and Differences in the Data Needs for Farmer Planning, Economic Research, and Policy Analysis
- Author
-
Arne Hallam
- Subjects
Economic research ,Public economics ,Data needs ,Business ,Policy analysis - Published
- 2021
29. Determining Dark Diversity of Different Faunal Groups in Indian Estuarine Ecosystem: A New Approach with Computational Biodiversity
- Author
-
Kartick Chandra Mondal, Anirban Roy, and Moumita Ghosh
- Subjects
geography ,geography.geographical_feature_category ,business.industry ,Computer science ,Data management ,media_common.quotation_subject ,Data needs ,Environmental resource management ,Biodiversity ,Probabilistic logic ,Distribution (economics) ,Estuary ,Ecosystem ,business ,Diversity (politics) ,media_common - Abstract
Computational biodiversity can broadly be understood as the application of computational approaches to exploring, interpreting, and analyzing biodiversity data. The enormous and growing volume of biodiversity data requires algorithmic care for accurate data management; hence the term computational biodiversity. Instead of relying purely on presence data, probabilistic forecasts of species distributions, including regions of non-occurrence, can help counteract biodiversity loss by guiding the restoration of potential ecosystems. This paper presents computational biodiversity as a counter to biodiversity loss by drawing on the concept of dark diversity. The computation of dark diversity is accompanied by a data mining algorithm that establishes rules for managing the depletion of biodiversity. We generate a dataset for the Indian estuarine ecosystem and demonstrate our approach, ending with rules of value to ecologists. These would help reinforce biological diversity by introducing or rehabilitating specific faunal groups in an estuary under survey.
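The rule-mining step can be illustrated with a minimal support/confidence computation over presence/absence survey records. The species names, records, and rule are invented for illustration; this is not the authors' actual algorithm or dataset.

```python
# Toy association-rule sketch over presence/absence survey records.
# Species names and data are invented for illustration.
def support(records, items):
    """Fraction of records containing all of the given items."""
    return sum(items <= r for r in records) / len(records)

def confidence(records, lhs, rhs):
    """Of the records containing lhs, the fraction also containing rhs."""
    return support(records, lhs | rhs) / support(records, lhs)

surveys = [
    {"mudskipper", "fiddler_crab"},
    {"mudskipper", "fiddler_crab", "horseshoe_crab"},
    {"mudskipper"},
    {"fiddler_crab", "horseshoe_crab"},
]
# Candidate rule: sites with mudskippers also host fiddler crabs.
print(support(surveys, {"mudskipper", "fiddler_crab"}))  # 0.5
print(confidence(surveys, {"mudskipper"}, {"fiddler_crab"}))
```

Rules with high confidence but incomplete support point to species expected at, yet absent from, some sites, which is the intuition behind dark diversity.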
- Published
- 2021
30. Cancer Needs a Robust 'Metadata Supply Chain' to Realize the Promise of Artificial Intelligence
- Author
-
David A. Jaffray and Caroline Chung
- Subjects
Cancer Research ,Metadata ,business.industry ,Computer science ,Data needs ,Supply chain ,Volume (computing) ,Context (language use) ,Data governance ,Clinical Practice ,Oncology ,Artificial Intelligence ,Neoplasms ,Humans ,Artificial intelligence ,business - Abstract
Profound advances in computational methods, including artificial intelligence (AI), present the opportunity to use the exponentially growing volume and complexity of available cancer measurements toward data-driven personalized care. While exciting, this opportunity has highlighted the disconnect between the promise of compute and the supply of high-quality data. The current paradigm of ad-hoc aggregation and curation of data needs to be replaced with a “metadata supply chain” that provides robust data in context with known provenance, that is, lineage and comprehensive data governance that will allow the promise of AI technology to be realized to its full potential in clinical practice.
- Published
- 2021
31. Prestack Q compensation with sparse tau-p operators
- Author
-
Mehdi Aharchaou and Erik Neumann
- Subjects
Geophysics ,010504 meteorology & atmospheric sciences ,Radon transform ,Geochemistry and Petrology ,Data needs ,Prestack ,010502 geochemistry & geophysics ,01 natural sciences ,Algorithm ,0105 earth and related environmental sciences ,Mathematics ,Compensation (engineering) - Abstract
The application of Q compensation to prestack marine data requires the proper removal of the water-layer time from the total traveltime, a process known as "time referencing." To obtain the water-layer time, current industry practice uses some form of normal-moveout equation that requires subsurface velocities. We have derived a more straightforward and accurate formula for time referencing that does not require subsurface velocities and works under the same assumptions. The formula is based on a local angle decomposition via the tau-p transform. Further complicating the Q compensation task in the prestack domain is the proper treatment of spatially aliased energy and high-frequency noise. We show how time-slowness sparsity, used as a constraint for Q compensation, gives excellent immunity to incoherent noise and spatial aliasing, and we evaluate its role in accelerating the convergence rate of our iterative inversion algorithm.
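As background to the abstract, standard amplitude Q compensation scales each frequency component by exp(pi*f*t/Q), where t is the traveltime through the attenuating medium (after time referencing removes the water-layer time). The sketch below shows that generic gain formula only; it is not the authors' sparse tau-p method.

```python
import math

def q_gain(freq_hz, traveltime_s, q):
    """Amplitude gain undoing the attenuation exp(-pi*f*t/Q)
    accumulated over traveltime t through a constant-Q medium."""
    return math.exp(math.pi * freq_hz * traveltime_s / q)

# Example: 30 Hz energy after 2 s of traveltime through a Q = 120 medium
g = q_gain(30.0, 2.0, 120.0)
print(round(g, 3))  # 4.81
```

The exponential growth of the gain with frequency is exactly why high-frequency noise and aliased energy are dangerous here: without a constraint such as time-slowness sparsity, they are amplified along with the signal.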
- Published
- 2019
32. Literature review on modeling and simulation of energy infrastructures from a resilience perspective
- Author
-
Jianhui Wang, Landolf Rhode-Barbarigos, Yanling Lin, Jing Wang, Wangda Zuo, and Xing Lu
- Subjects
021110 strategic, defence & security studies ,021103 operations research ,Computer science ,Data needs ,media_common.quotation_subject ,Energy (esotericism) ,Perspective (graphical) ,0211 other engineering and technologies ,02 engineering and technology ,Industrial and Manufacturing Engineering ,Energy infrastructure ,Modeling and simulation ,Interdependence ,Risk analysis (engineering) ,Scale (social sciences) ,Safety, Risk, Reliability and Quality ,Resilience (network) ,media_common - Abstract
Recent years have witnessed an increasing frequency of disasters, both natural and human-induced, putting pressure on critical infrastructures (CIs). Among all the CI sectors, the energy infrastructure plays a critical role, as almost all other CIs depend on it. In this paper, 30 energy infrastructure models dedicated to the modeling and simulation of power or natural gas networks are collected and reviewed using the emerging concept of resilience. Based on the review, typical modeling approaches for energy infrastructure resilience problems are summarized and compared. The authors then propose five indicators for evaluating a resilience model: catering to different stakeholders, intervening in development phases, addressing specific stressors and failures, taking into account different interdependencies, and involving socio-economic characteristics. As a supplement, other modeling features such as data needs and time scale are further discussed. Finally, the paper offers observations on existing energy infrastructure models as well as future trends for energy infrastructure modeling.
- Published
- 2019
33. Factors influencing the choice of freight transport models by local government
- Author
-
Daniel Kaszubowski
- Subjects
Structure (mathematical logic) ,Software ,Risk analysis (engineering) ,Modelling methods ,Process (engineering) ,Computer science ,business.industry ,Data needs ,Local government ,Level of analysis ,business ,Management by objectives - Abstract
Urban freight modeling is a subject of continuous interest for both researchers and practitioners. This has resulted in many transport models, diversified in structure and functionality, ranging from fragmentary theoretical formulations to the few models available as complete software packages. Given the areas in which they can be applied, these models are designed to be used by local governments, which are in charge of managing entire urban transport systems. However, local decision makers have no solutions to help them select a model that meets their requirements. Efforts were therefore made to identify the factors that influence the choice of a freight transport model by a local government and to design the principles for a comprehensive method supporting this process. The need for such a method follows from a clear research gap in this area, which manifests itself in disconnected research on modeling methods, the conditions for implementing improvements, and urban freight management objectives. For this purpose, potential freight management objectives have been parametrised as a way to integrate them with the analytical requirements of potential improvement measures. The use of quantitative indicators for the implementation of strategic objectives, in correlation with the data needs of the implementation tools, has allowed a cross-cutting analysis of these two areas and their practical reference to the functionality provided by the selected models. An approach based on quantitative indicators has also made it possible to introduce a third level of analysis to verify the models' data needs, including the requirements identified at the earlier levels of analysis.
- Published
- 2019
34. Dynamic Heat Flux Measurements from Finishing Pigs
- Author
-
Joseph Darrington, Robert C. Thaler, and Erin L. Cortus
- Subjects
Materials science ,Animal science ,Steady state ,Heat flux ,Data needs ,General Engineering ,Heat losses ,Heat transfer model ,Sensible heat ,Tissue resistance ,Calorimeter - Abstract
Animal heat production and transfer data need updating as genetics evolve, and newer technology provides alternative settings for collecting heat flux measurements. The project objective was to use heat flux sensors to measure the postural (resting, standing) effects on heat flux from finishing pigs, and to compare estimated tissue resistance and sensible heat production based on these measurements to literature values. We measured heat flux from 12 individually housed active barrows with average (±SD) weights of 95.6±15.5 kg and 111±13.9 kg for Trials 1 and 2, respectively. We affixed heat flux sensors to shaved areas on the right and left sides and rumps of the pigs to collect heat flux measurements every minute over a 6-h period during each trial. An overhead video camera system recorded pig behavior and positioning within each pen throughout the trials. Heat flux measurements showed rapid heat loss between a pig and the floor when the pig lies on a cooled surface, but the heat flux starts to decrease almost immediately toward steady-state levels. When standing, the average heat flux from the rear of the pigs (124±8 W m-2) was greater than the heat flux from the sides of the animal (117±8 W m-2). A linear regression (R2=0.2735, n=24) with an intercept of 143 W m-2 suggests the heat flux decreases 0.25 W m-2 for each 1 kg increase in pig mass. Tissue resistance estimates from these heat flux measurements are approximately 60% of tissue resistance model estimates from the 1990s. The heat flux data translate to sensible heat production estimates of 1.8 W kg-1, within the range of calorimeter-based estimates from recent literature. Keywords: Heat transfer model, Pigs, Pig housing, Tissue resistance.
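Using the regression reported in the abstract (intercept 143 W m-2, slope -0.25 W m-2 per kg), the predicted heat flux at a given pig mass can be sketched as follows; the function name is ours, and the relation is only as reliable as the reported R2 of 0.2735.

```python
def predicted_flux(mass_kg):
    """Heat flux (W/m^2) from the abstract's reported regression:
    intercept 143 W/m^2, slope -0.25 W/m^2 per kg of pig mass."""
    return 143.0 - 0.25 * mass_kg

# At the Trial 1 average mass of 95.6 kg:
print(round(predicted_flux(95.6), 1))  # 119.1
```

The value is consistent with the measured side and rear fluxes (117-124 W m-2) for pigs near that mass.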
- Published
- 2019
35. Validation of Sea Surface Temperature from GCOM-C Satellite Using iQuam Datasets and MUR-SST in Indonesian Waters
- Author
-
Bambang Sukresno, Dinarika Jatisworo, and Rizki Hanintyo
- Subjects
Sea surface temperature ,Validation ,Sea surface temperature (SST) ,GCOM-C ,iQuam ,MUR-SST ,Sea Surface temperature ,Error analysis ,Climatology ,Data needs ,Geography, Planning and Development ,Remote Sensing ,Mapping ,Environmental science ,Satellite - Abstract
Sea surface temperature (SST) is an important variable in oceanography. One source of SST data is the Global Change Observation Mission-Climate (GCOM-C) satellite. These data need to be validated before being applied in various fields. This study aimed to validate SST data from the GCOM-C satellite in the Indonesian seas. Validation was performed against the Multi-scale Ultra-high Resolution SST (MUR-SST) and the in situ SST Quality Monitor (iQuam). The data used are the daily GCOM-C SST dataset from January to December 2018, along with the daily MUR-SST and iQuam datasets for the same period. The validation was carried out using the three-way error analysis method. The results showed that the accuracy of the GCOM-C SST was 0.37°C.
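Three-way error analysis is commonly implemented as triple collocation: with three collocated datasets whose errors are mutually independent and zero-mean, each dataset's error variance can be estimated from pairwise products of anomaly differences. The sketch below uses synthetic data, not the study's GCOM-C/MUR-SST/iQuam datasets, and is a generic illustration rather than necessarily the study's exact procedure.

```python
import random

def triple_collocation_var(x, y, z):
    """Estimate the error variance of x from three collocated series
    with mutually independent zero-mean errors (triple collocation)."""
    n = len(x)
    mx, my, mz = sum(x) / n, sum(y) / n, sum(z) / n
    return sum(((xi - mx) - (yi - my)) * ((xi - mx) - (zi - mz))
               for xi, yi, zi in zip(x, y, z)) / n

# Synthetic SSTs: a common truth plus independent noise (std 0.2 deg C)
random.seed(1)
truth = [25 + random.gauss(0, 1) for _ in range(20000)]
sat = [t + random.gauss(0, 0.2) for t in truth]
ana = [t + random.gauss(0, 0.2) for t in truth]
insitu = [t + random.gauss(0, 0.2) for t in truth]
est = triple_collocation_var(sat, ana, insitu)
print(abs(est - 0.04) < 0.005)  # recovers approx. 0.2**2
```

The common "truth" signal cancels in the differences, so the product's mean isolates the first dataset's error variance; its square root is an accuracy figure comparable to the 0.37°C reported in the abstract.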
- Published
- 2021
36. Evaluation of Data Needs for Assessments of Aquifers Supporting Irrigated Agriculture
- Author
-
James J. Butler, B. Brownie Wilson, Donald O. Whittemore, and Geoffrey C. Bohling
- Subjects
geography ,Irrigation ,Water balance ,geography.geographical_feature_category ,Data needs ,Sustainability ,Environmental science ,Aquifer ,Water resource management ,Irrigated agriculture ,Water Science and Technology - Published
- 2021
37. 0083 Injuries prevention: from data needs towards effective strategies in Georgia
- Author
-
Nino Chikhladze, Alexander Tsiskaridze, K Axobadze, Eka Burkadze, M Kereselidze, and Nino Chkhaberidze
- Subjects
Related factors ,medicine.medical_specialty ,Future studies ,business.industry ,Environmental health ,Public health ,Data needs ,Incidence (epidemiology) ,Epidemiology ,medicine ,Baseline data ,Emerging markets ,business - Abstract
Statement of purpose: Traumatic injuries account for a significant share of the global burden of disease, causing 9% of all deaths worldwide and substantial short- and long-term disability. Injury rates are disproportionately high in low- and middle-income countries (LMICs). However, despite the fact that more than 90% of injury-related deaths occur in LMICs, most of the research comes from high-income countries. In spite of the overall impact and importance of the topic, emerging economies such as Georgia experience high injury rates yet have little research addressing incidence, characteristics, risk factors, and prevention strategies. Methods/Approach: The aim of this research was to describe the epidemiological characteristics of injury in two tertiary teaching hospitals in Georgia. The data were extracted from the official database of the National Center for Disease Control and Public Health for 2018. Results: A total of 1494 adult patients were admitted, of whom 912 (61%) were males and 582 (39%) were females. The highest prevalence was in the 25-44 age group (36%), followed by the 45-64 age group (26%). The main mechanisms of injury were falls (61%) and road traffic incidents (22%). Over 17% of injuries resulted in death after hospitalization. These findings provide an empirical basis for future studies. More research is needed to identify injury-related factors useful for planning effective prevention strategies. Conclusion: The study was conducted with the goal of providing baseline data to policy makers and other stakeholders to help guide future research, policy, and funding agendas.
- Published
- 2021
38. Visualizing stemming techniques on online news articles text analytics
- Author
-
Muhammad Zharif Zamri, Noraini Seman, Sharifah Syafiera Syed Ghazalli, and Nurul Atiqah Razmi
- Subjects
Root (linguistics) ,Control and Optimization ,Information retrieval ,Computer Networks and Communications ,Computer science ,Process (engineering) ,business.industry ,Data needs ,Interpretation (philosophy) ,Text analytics ,Visualization ,Porter stemmer ,Lancaster stemmer ,Text mining ,Hardware and Architecture ,Control and Systems Engineering ,Stemming ,Computer Science (miscellaneous) ,Electrical and Electronic Engineering ,business ,Instrumentation ,Information Systems ,Mass media - Abstract
Stemming is the process of reducing words to their root forms. It is one of the main steps in text analytics: text data typically goes through stemming before further analysis. Text analytics is a very common practice nowadays, used to analyze the contents of text data from various sources such as mass media and social media. In this study, two stemming techniques, Porter and Lancaster, are evaluated. The differences in the outputs produced by the two techniques are discussed based on stemming error and the resulting visualizations. The findings show that Porter stemming performs better than Lancaster stemming, by 43%, based on the stemming errors produced. Visualizations can still be built from the stemmed text data, but tool users need some background understanding of the text data to interpret the visualization outputs correctly.
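The contrast between a conservative (Porter-like) and an aggressive (Lancaster-like) stemmer can be shown with a toy suffix-stripping function. The rule lists below are invented simplifications, not the actual Porter or Lancaster rule sets; the real algorithms apply many more rules with measure conditions.

```python
# Toy suffix-stripping sketch. Rule lists are invented simplifications,
# not the real Porter or Lancaster rule sets.
def strip_suffixes(word, rules):
    """Apply the first matching (suffix, replacement) rule, keeping
    a stem of at least three characters."""
    for suffix, replacement in rules:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)] + replacement
    return word

CONSERVATIVE = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("s", "")]
AGGRESSIVE = CONSERVATIVE + [("ation", ""), ("ment", ""), ("er", "")]

for w in ["stemming", "ponies", "department"]:
    print(strip_suffixes(w, CONSERVATIVE), strip_suffixes(w, AGGRESSIVE))
```

The aggressive rule set strips more words to shorter stems (e.g., "department" becomes "depart"), which merges more word families but also produces more stemming errors, mirroring the Porter-vs-Lancaster trade-off the study reports.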
- Published
- 2021
39. The Quest for Seafloor Macrolitter: A Critical Review of Background Knowledge, Current Methods and Future Prospects
- Author
-
Canals, Miquel, Pham, Christopher K., Bergmann, Melanie, Gutow, Lars, Hanke, Georg, van Sebille, Erik, Angiolillo, Michela, Buhl-Mortensen, Lene, Cau, Alessando, Ioakeimidis, Christos, Kammann, Ulrike, Lundsten, Lonny, Papatheodorou, George, Purser, Autun, Sanchez-Vidal, Anna, Schulz, Marcus, Vinci, Matteo, Chiba, Sanae, Galgani, François, Langenkämper, Daniel, Möller, Tiia, Nattkemper, Tim W., Ruiz, Marta, Suikkanen, Sanna, Woodall, Lucy, Fakiris, Elias, Molina Jack, Maria Eugenia, Giorgetti, Alessandra, Sub Physical Oceanography, and Marine and Atmospheric Research
- Subjects
Marine litter ,010504 meteorology & atmospheric sciences ,Ocean modeling ,visual surveys ,roskaaminen ,meriensuojelu ,010501 environmental sciences ,01 natural sciences ,meriroska ,Data harmonisation ,trawl surveys ,Environmental Science(all) ,Marine debris ,merien saastuminen ,ocean bottom ,General Environmental Science ,mittaus ,seafloor ,Environmental resource management ,Comparability ,datan harmonisointi ,syvämeri ,Visual surveys ,Seafloor spreading ,Deep sea ,deep sea ,littering ,Public Health ,meret ,mallintaminen ,marine litter ,conservation of the seas ,Baltic Sea ,Data needs ,Harmonization ,merenpohja ,seas ,Modelling ,modelling ,Trawl surveys ,Seafloor ,Ecosystem ,Renewable Energy ,14. Life underwater ,0105 earth and related environmental sciences ,Sustainability and the Environment ,business.industry ,Renewable Energy, Sustainability and the Environment ,Environmental and Occupational Health ,Public Health, Environmental and Occupational Health ,15. Life on land ,marine research ,troolit ,13. Climate action ,merentutkimus ,Environmental science ,business ,data harmonisation - Abstract
The seafloor covers some 70% of the Earth’s surface and has been recognised as a major sink for marine litter. Still, litter on the seafloor is the least investigated fraction of marine litter, which is not surprising as most of it lies in the deep sea, i.e. the least explored ecosystem. Although marine litter is considered a major threat to the oceans, monitoring frameworks are still being set up. This paper reviews current knowledge and methods, identifies existing needs, and points to future developments that are required to address the estimation of seafloor macrolitter. It provides background knowledge and conveys the views and thoughts of scientific experts on seafloor marine litter, offering a review of monitoring and ocean modelling techniques. Knowledge gaps that need to be tackled, data needs for modelling, and data comparability and harmonisation are also discussed. In addition, it shows how research on seafloor macrolitter can inform international protection and conservation frameworks to prioritise efforts and measures against marine litter and its deleterious impacts.
- Published
- 2021
40. Data-Efficient Training Strategies for Neural TTS Systems
- Author
-
C. V. Jawahar and K R Prajwal
- Subjects
education.field_of_study ,Computer science ,business.industry ,Data needs ,Population ,Speech synthesis ,Content creation ,computer.software_genre ,Training (civil) ,Naturalness ,Scalability ,Artificial intelligence ,Transfer of learning ,education ,business ,computer ,Natural language processing - Abstract
India is a country with thousands of languages and dialects spoken across a billion-strong population. For multi-lingual content creation and accessibility, text-to-speech systems will play a crucial role. However, the current neural TTS systems are data-hungry and need about 20 hours of clean single-speaker speech data for each language and speaker. This is not scalable for the large number of Indian languages and dialects. In this work, we demonstrate three simple, yet effective pre-training strategies that allow us to train neural TTS systems with just about one-tenth of the data needs while also achieving better accuracy and naturalness. We show that such pre-trained neural TTS systems can be quickly adapted to different speakers across languages and genders with less than 2 hours of data, thus significantly reducing the effort for future expansions to the thousands of rare Indian languages. We specifically highlight the benefits of multi-lingual pre-training and its consistent impact across our neural TTS systems for 8 Indian languages.
- Published
- 2021
41. A Novel Multilevel RDH Approach for Medical Image Authentication
- Author
-
Madhusmita Das and Jayanta Mondal
- Subjects
Measure (data warehouse) ,Authentication ,business.industry ,Computer science ,Data needs ,Encryption ,computer.software_genre ,Image (mathematics) ,Least significant bit ,Information hiding ,Confidentiality ,Data mining ,business ,computer - Abstract
Online healthcare is the next big thing, and proper security mechanisms with privacy-preservation techniques for sensitive data are the need of the hour. Handling sensitive datasets such as medical data needs ultimate security, covering confidentiality, integrity, authentication, and reversibility. This paper presents a novel approach to authentication using a reversible data hiding (RDH) technique. Traditional RDH methods provide adequate security for sensitive images. The proposed RDH technique uses a combination of reversible data marking techniques at multiple levels to provide a robust authentication measure for medical images. Least significant bit (LSB) modification works as the base methodology for data marking to ensure complete reversibility.
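The reversibility requirement can be sketched with a minimal toy (not the paper's exact multilevel scheme): the original LSBs are saved as recovery data so the cover "image" (here a flat list of 8-bit pixel values) can be restored bit-exactly after the authentication mark is extracted.

```python
def embed(pixels, mark_bits):
    """Overwrite the first LSBs with the mark; keep originals for recovery."""
    assert len(mark_bits) <= len(pixels)
    recovery = [p & 1 for p in pixels[: len(mark_bits)]]  # saved original LSBs
    marked = list(pixels)
    for i, bit in enumerate(mark_bits):
        marked[i] = (marked[i] & ~1) | bit  # replace LSB with mark bit
    return marked, recovery

def extract_and_restore(marked, recovery):
    """Read the mark back out, then put the original LSBs back."""
    mark_bits = [p & 1 for p in marked[: len(recovery)]]
    restored = list(marked)
    for i, bit in enumerate(recovery):
        restored[i] = (restored[i] & ~1) | bit  # undo the embedding
    return mark_bits, restored

pixels = [120, 37, 255, 0, 88, 199]   # toy 8-bit "image"
mark = [1, 0, 1, 1]                   # authentication bits
marked, recovery = embed(pixels, mark)
extracted, restored = extract_and_restore(marked, recovery)
assert extracted == mark and restored == pixels  # complete reversibility
```

A real scheme embeds the recovery data inside the image itself (and typically encrypts it); keeping it as a separate key here keeps the reversibility property easy to see.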
- Published
- 2021
42. The BIM Management System: A Common Data Environment Using Linked Data to Support the Efficient Renovation in Buildings
- Author
-
Jacopo Chiappetti, Diego Farina, Alessandro Valra, and Davide Madeddu
- Subjects
0209 industrial biotechnology ,Process management ,Computer science ,Process (engineering) ,Data needs ,010401 analytical chemistry ,Interoperability ,02 engineering and technology ,Linked data ,computer.file_format ,01 natural sciences ,0104 chemical sciences ,020901 industrial engineering & automation ,Construction industry ,11. Sustainability ,Management system ,SPARQL ,computer - Abstract
One of the main challenges of the construction industry is the management of the huge amount of data generated by the stakeholders during the whole lifecycle of a building. Data needs to be found, collected, shared, and updated while minimizing process and technological inefficiencies. Recent advances have been seen in the adoption of BIM-based approaches and in the implementation of a CDE as an agreed source of information. This paper describes the development of the BIM management system as a platform to manage building lifecycle data, using the linked data paradigm to improve interoperability and interdisciplinary collaboration.
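The linked-data idea behind such a common data environment can be illustrated with a toy triple store: building-lifecycle facts stored as subject-predicate-object triples and retrieved by pattern matching. All identifiers below are invented for the example; a real CDE would use RDF with a SPARQL endpoint (e.g. via rdflib) rather than Python tuples.

```python
# Hypothetical building facts as (subject, predicate, object) triples.
triples = {
    ("ex:Wall_01", "ex:partOf", "ex:Building_A"),
    ("ex:Wall_01", "ex:uValue", "0.35"),
    ("ex:Wall_01", "ex:lastUpdatedBy", "ex:Architect_1"),
    ("ex:Window_07", "ex:partOf", "ex:Building_A"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# All elements of Building_A -- roughly
# SELECT ?s WHERE { ?s ex:partOf ex:Building_A } in SPARQL terms.
parts = [t[0] for t in match(p="ex:partOf", o="ex:Building_A")]
print(parts)  # ['ex:Wall_01', 'ex:Window_07']
```

Because every fact is a self-describing triple with shared identifiers, data from different disciplines can be merged and queried together, which is the interoperability argument the paper makes.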
- Published
- 2021
43. Using Entropy Measures for Evaluating the Quality of Entity Resolution
- Author
-
Awaad K. Al Sarkhi and John R. Talburt
- Subjects
Computer science ,Process (engineering) ,Data needs ,Data quality ,media_common.quotation_subject ,Sample (statistics) ,Quality (business) ,Data mining ,Entropy (energy dispersal) ,computer.software_genre ,computer ,Blocking (computing) ,media_common - Abstract
This research describes some of the results from an unsupervised ER process using cluster entropy as a way to self-regulate linking. The experiments were performed using synthetic person references of varying quality. The process was able to obtain a linking accuracy of 93% for samples with moderate to high data quality. While results for low-quality references were much lower, there are many possible avenues of research that could further improve the results from this process. The purpose of this research is to allow ER processes to self-regulate linking based on cluster entropy. The results are very promising for entity references of relatively high quality; using this process for low-quality data needs further improvement. The best overall result obtained from the sample was just over 50% linking accuracy.
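The intuition behind cluster entropy as a self-regulation signal can be sketched as follows: references in a correctly linked cluster agree on most attribute values (low entropy), while a wrongly merged cluster shows high value diversity. The name clusters below are invented for illustration; this shows the general idea, not the study's exact measure.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the distribution of attribute values."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A cluster of references that mostly agree vs. one that was over-merged.
clean_cluster = ["John Smith", "John Smith", "John Smith", "Jon Smith"]
noisy_cluster = ["John Smith", "J. Smyth", "Jane Smith", "Jon Smit"]

print(round(entropy(clean_cluster), 3))  # 0.811
print(round(entropy(noisy_cluster), 3))  # 2.0
```

An unsupervised ER process can refuse a candidate link when it would push the cluster's entropy above a threshold, which is the self-regulation mechanism described in the abstract.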
- Published
- 2021
44. Reduction of Stain Variability in Bone Marrow Microscopy Images
- Author
-
Philipp Gräbel, Peter Boor, Tim H. Brümmendorf, Dorit Merhof, Martina Crysandt, Reinhild Herwartz, Melanie Baumann, and Barbara M. Klinkhammer
- Subjects
Computer science ,medicine.medical_treatment ,Data needs ,Normalization (image processing) ,medicine.disease ,Stain ,Staining ,Leukemia ,medicine.anatomical_structure ,Microscopy ,medicine ,Bone marrow ,Reduction (orthopedic surgery) ,Biomedical engineering - Abstract
The analysis of cells in bone marrow microscopy images is essential for the diagnosis of many hematopoietic diseases such as leukemia. Automating detection, classification and quantification of different types of leukocytes in whole slide images could improve throughput and reliability. However, variations in the staining agent used to highlight cell features can reduce the accuracy of these methods. In histopathology, data augmentation and normalization techniques are used to make neural networks more robust but their application to hematological image data needs to be investigated. In this paper, we compare six promising approaches on three image sets with different staining characteristics in terms of detection and classification.
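One of the simplest normalization approaches in this family is per-channel mean/std colour matching (Reinhard-style). The sketch below works on flat per-channel pixel lists rather than real microscopy images, and is only a minimal illustration of the principle, not any specific method evaluated in the paper.

```python
from statistics import mean, pstdev

def normalize_channel(src, target):
    """Shift and scale src pixels to the target channel's mean and std."""
    mu_s, sd_s = mean(src), pstdev(src)
    mu_t, sd_t = mean(target), pstdev(target)
    if sd_s == 0:
        return [mu_t] * len(src)  # flat channel: map to target mean
    return [(p - mu_s) / sd_s * sd_t + mu_t for p in src]

source = [90, 100, 110, 120]      # darker, lower-contrast staining
reference = [140, 160, 180, 200]  # reference slide's channel statistics

out = normalize_channel(source, reference)
print(round(mean(out), 1), round(pstdev(out), 1))  # 170.0 22.4
```

After normalization the source channel has the reference slide's mean and spread, so a downstream classifier sees more consistent colour statistics across differently stained slides.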
- Published
- 2021
45. Safety and Effectiveness of Combining Biologics and Small Molecules in IBD: Systematic Review With Meta-Analysis
- Author
-
Scott D. Lee, Edward V. Loftus, Deborah Thomas, Andres Yarur, Anish Patel, Osama Altayar, Ernesto M. Llano, Kindra Clark-Snustad, David I. Fudman, Lukasz Kwapisz, Matthew A. Ciorba, Quazim A. Alayo, Marc Fenster, Benjamin L Cohen, Jean-Frederic Colombel, Parakkal Deepak, Kerri Glassner, and Bincy Abraham
- Subjects
medicine.medical_specialty ,Web of science ,Competing interests ,business.industry ,Meta-analysis ,Data needs ,Family medicine ,Health care ,medicine ,business ,Bristol-Myers - Abstract
Background: Combining biologics and small molecules could potentially overcome the plateau of drug efficacy in inflammatory bowel disease (IBD). We conducted a systematic review and meta-analysis (SRMA) to assess the safety and effectiveness of dual biologic therapy (DBT) or small molecule combined with a biologic therapy (SBT) in IBD patients. Methods: MEDLINE, EMBASE, Scopus, Web of Science, Cochrane Database of Systematic Reviews and ClinicalTrials.gov were reviewed from inception to Nov 3, 2020. Studies with two or more IBD patients on DBT or SBT were included while single-case reports were excluded. The main outcome was safety assessed as pooled rates of adverse events (AEs) and serious AEs (SAEs) for each combination category. Effectiveness was reported as pooled rates of clinical, endoscopic and/or radiographic response and remission. Random-effects modeling was performed for between-study heterogeneity using the I² statistic. The SRMA was registered with PROSPERO (CRD4202018361). Findings: Of the 3,688 publications identified, 13 studies involving 266 patients on seven different combinations were included. Median number of prior biologics ranged from 0-4, and median duration of follow-up was 16-68 weeks. Most common DBT and SBT were vedolizumab (VDZ) with anti-Tumour Necrosis Factor (aTNF, n=56) or tofacitinib (Tofa, n=57), respectively. Pooled rates of SAE for these were 9·6% (95% CI, 1·5 – 21·4; 8 studies; I² 0%) for VDZ-aTNF and 1·0% (95% CI, 0·0 – 7·6; 5 studies; I² 0%) for Tofa-VDZ. Pooled clinical remission rates for these were 55·1% (95% CI, 19·6 – 88·5; 8 studies; 53 TTs; I² 81%) for VDZ-aTNF and 47·8% (95% CI, 19·0 – 77·4; 5 studies; 49 TTs; I² 69%) for Tofa-VDZ. No new safety signal was reported for any combination. There was considerable heterogeneity across studies (I², 0 to 87%). Interpretation: DBT or SBT appears to be generally safe and effective in IBD patients with conclusions on effectiveness limited by between-study heterogeneity.
This data needs to be confirmed in prospective studies. Registration: The SRMA was registered with PROSPERO (CRD4202018361). Funding Statement: None. Declaration of Interests: Kindra Clark-Snustad: Dr. Clark-Snustad reports personal fees from BMS, personal fees from Pfizer, outside the submitted work; Anish Patel: Dr. Patel reports personal fees from JANSSEN, personal fees from TAKEDA, personal fees from ABBVIE, outside the submitted work; Andres Yarur: Dr. Yarur reports personal fees from Takeda, personal fees from Prometheus Bioscience, personal fees from Arena Pharmaceutical, personal fees from Bristol Myers Squibb, during the conduct of the study; Benjamin L. Cohen: Dr. Cohen reports personal fees from Abbvie, personal fees and nonfinancial support from Pfizer, personal fees from Bristol Myers Squibb, personal fees from Janssen, personal fees from Target RWE, personal fees from Sublimity Therapeutics, outside the submitted work; Matthew A. Ciorba: Dr. Ciorba reports grants and personal fees from Pfizer, grants and personal fees from Takeda, outside the submitted work; Scott D. Lee MD: Dr. Lee reports grants and personal fees from Abbvie, grants and personal fees from UCB, grants and personal fees from JANSSEN, grants and personal fees from TAKEDA, grants from BMS, grants from ABGENOMICS, grants and personal fees from ELI LILLY, grants from ARENA, outside the submitted work; Edward V. Loftus, Jr.: Dr. 
Loftus reports grants and personal fees from AbbVie, personal fees from Allergan, grants and personal fees from Amgen, personal fees from Arena, personal fees from Boehringer Ingelheim, grants and personal fees from Bristol-Myers Squibb, personal fees from Calibr, grants and personal fees from Celgene, personal fees from Celltrion Healthcare, personal fees from Eli Lilly, grants and personal fees from Genentech, grants and personal fees from Gilead, personal fees from Iterative Scopes, grants and personal fees from Janssen, personal fees from Ono Pharma, grants and personal fees from Pfizer, grants from Receptos, grants from Robarts Clinical Trials, personal fees from Sun Pharma, grants and personal fees from Takeda, grants from Theravance, grants and personal fees from UCB, outside the submitted work; David Fudman: Dr. Fudman reports personal fees from Pfizer, outside the submitted work; Bincy P. Abraham: Dr. Abraham reports personal fees from Abbvie, personal fees from Ferring, grants and personal fees from Takeda, personal fees from Janssen, personal fees from Pfizer, personal fees from Medtronics, personal fees from Samsung bioepis, personal fees from Bristol-Myers Squibb, outside the submitted work; Jean-Frederic Colombel: Dr. 
Colombel reports grants and personal fees from Abbvie, personal fees from Amgen, personal fees from Allergan, personal fees from Arena Pharmaceuticals, personal fees from Boehringer Ingelheim, personal fees from Bristol-Myers-Squibb, personal fees from Celgene Corporation, personal fees from Celltrion, personal fees from Eli Lilly, personal fees from Enterome, personal fees from Ferring Pharmaceuticals, personal fees from Genentech, personal fees from Gilead, personal fees from Iterative Scopes, personal fees from Ipsen, personal fees from Immunic, personal fees from Imtbio, personal fees from Inotrem, grants and personal fees from Janssen Pharmaceuticals, personal fees from Landos, personal fees from Limmatech, personal fees from Medimmune, personal fees from Merck, personal fees from Novartis, personal fees from OMass, personal fees from Otsuka, personal fees from Pfizer, personal fees from Shire, grants and personal fees from Takeda, personal fees from Tigenix, personal fees from VielaBio, outside the submitted work; Parakkal Deepak: Dr. Deepak reports personal fees from Janssen, personal fees from Pfizer, personal fees from Prometheus Biosciences, other from Boehringer Ingelheim, personal fees from Arena Pharmaceuticals, grants from Takeda Pharmaceuticals, grants from Arena Pharmaceuticals, grants from Bristol Myers Squibb-Celgene, grants from Boehringer Ingelheim, outside the submitted work; All other authors declare no competing interests.
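The heterogeneity statistic reported throughout the abstract can be sketched from first principles: Cochran's Q computed with fixed-effect (inverse-variance) weights, and I² = max(0, (Q - df)/Q) × 100. The effect sizes and variances below are made-up numbers for illustration, not data from the review.

```python
def i_squared(effects, variances):
    """I² heterogeneity (%) from study effects and within-study variances."""
    w = [1 / v for v in variances]                      # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

effects = [0.55, 0.20, 0.80, 0.45]     # hypothetical study-level remission rates
variances = [0.01, 0.02, 0.015, 0.01]  # hypothetical within-study variances

print(round(i_squared(effects, variances), 1))
```

With these inputs I² comes out around 72%, i.e. substantial heterogeneity of the kind that led the authors to hedge their effectiveness conclusions; identical effects would give I² = 0.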
- Published
- 2021
46. Privacy and Trust Redefined in Federated Machine Learning
- Author
-
William J Buchanan, Will Abramson, Adam James Hall, Pavlos Papadopoulos, and Nikolaos Pitropakis
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Information privacy ,Computer Science - Cryptography and Security ,lcsh:Computer engineering. Computer hardware ,Computer science ,Data needs ,0211 other engineering and technologies ,lcsh:TK7885-7895 ,02 engineering and technology ,Cyber-security ,Machine learning ,computer.software_genre ,Machine Learning (cs.LG) ,Computer Science - Computers and Society ,Health care ,Computers and Society (cs.CY) ,Centre for Distributed Computing, Networking and Security ,0202 electrical engineering, electronic engineering, information engineering ,decentralised identifiers ,021110 strategic, defence & security studies ,federated learning ,business.industry ,trust ,Possession (law) ,verifiable credentials ,AI and Technologies ,Highly sensitive ,Workflow ,machine learning ,Computer Science - Distributed, Parallel, and Cluster Computing ,Identity (object-oriented programming) ,020201 artificial intelligence & image processing ,Verifiable secret sharing ,Artificial intelligence ,Distributed, Parallel, and Cluster Computing (cs.DC) ,business ,computer ,Cryptography and Security (cs.CR) - Abstract
A common privacy issue in traditional machine learning is that data needs to be disclosed for the training procedures. In situations with highly sensitive data such as healthcare records, accessing this information is challenging and often prohibited. Luckily, privacy-preserving technologies have been developed to overcome this hurdle by distributing the computation of the training and ensuring data privacy for its owners. The distribution of the computation to multiple participating entities introduces new privacy complications and risks. In this paper, we present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants. Our proof-of-concept defines a trust framework instantiated using decentralised identity technologies being developed under the Hyperledger projects Aries/Indy/Ursa. Only entities in possession of Verifiable Credentials issued by the appropriate authorities are able to establish secure, authenticated communication channels authorised to participate in a federated learning workflow related to mental health data., Comment: MDPI Mach. Learn. Knowl. Extr. 2021, 3(2), 333-356; https://doi.org/10.3390/make3020017
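The aggregation idea underlying federated learning can be shown with a minimal federated-averaging (FedAvg) sketch; the credential and trust layer from the paper is out of scope here. Each client trains locally and only model parameters leave the client; the server averages them weighted by local data size. All numbers are illustrative.

```python
def fed_avg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg step)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[j] * n for p, n in zip(client_params, client_sizes)) / total
        for j in range(dim)
    ]

# Three hypothetical clients' locally trained 2-parameter models:
params = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 100, 200]  # number of records held by each client

avg = fed_avg(params, sizes)
print(avg)  # approximately [0.45, 0.75]
```

The raw records never appear in this exchange, which is the privacy property the workflow builds on; the paper's contribution is gating who may submit parameters via Verifiable Credentials.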
- Published
- 2021
- Full Text
- View/download PDF
47. Environmental footprint of roses: representative product study
- Author
-
Roel Helmes, Pietro Goglio, Rick van der Linden, and Irina Verweij-Novikova
- Subjects
Ecological footprint ,Performance and Impact Agrosectors ,Data needs ,Greenhouse ,Context (language use) ,Environmental economics ,Performance en Impact Agrosectoren ,Unit (housing) ,Product life-cycle management ,Life Science ,Environmental science ,Consument & Keten ,Product (category theory) ,Consumer and Chain - Abstract
This document presents a representative product study carried out in the context of developing a methodology for calculating the environmental footprints of horticultural products, according to the newly released methodological standard, the HortiFootprint category rules. The purpose of this product study was to identify the most relevant impact categories, life cycle stages, processes and direct elementary flows, and to identify the data needs, all feeding into the methodology development. This publication is meant as an illustration of a product environmental footprint (PEF) study for roses produced in a Dutch greenhouse with a combined heat and power (CHP) system and transported across the main countries of export. The functional unit is one 70 cm long rose stem at commercial grade.
- Published
- 2021
48. Modeling Oil Production, CO2 Injection and Associated Storage in Depleted Oil Reservoirs: A Comparison of Approaches
- Author
-
Samin Raziperchikolaee, Mark Kelley, Priya Ravi Ganesh, Ashwin Pasumarti, Neeraj Gupta, Valerie Smith, Srikanta Mishra, Autumn Haagsma, and Rick Pardini
- Subjects
chemistry.chemical_compound ,geography ,geography.geographical_feature_category ,chemistry ,Petroleum engineering ,Oil production ,Data needs ,Reservoir modeling ,Carbonate ,Environmental science ,Carbon sequestration ,Reef - Abstract
The Midwest Regional Carbon Sequestration Partnership has been investigating various reservoir characterization and modeling technologies as part of its commercial-scale implementation of carbon dioxide injection for geologic storage in multiple Silurian carbonate pinnacle reefs in northern Michigan, USA. This paper compares multiple reservoir modeling approaches for history-matching oil production and CO2 injection responses, and estimating associated storage, to characterize these small spatial footprint depleted reef reservoirs. The three approaches considered are: fully compositional simulation, black-oil with pseudo-miscibility treatment, and capacitance resistance modeling (CRM). Modeling results from three reefs illustrating each modeling approach are presented, and their applicability and limitations with respect to data needs and modeling objectives are discussed.
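Of the three approaches, the capacitance resistance model (CRM) is simple enough to sketch: a data-driven material-balance model in which production responds to injection through a gain f and a time constant tau. The single-producer discretization below, q[k] = q[k-1]·exp(-dt/tau) + (1 - exp(-dt/tau))·f·I[k], is the standard CRM form; all numbers are illustrative, not from the reef studies.

```python
import math

def crm_production(q0, injection, dt, tau, f):
    """Single-producer CRM: decline plus injection support, one step per period."""
    decay = math.exp(-dt / tau)
    q, rates = q0, []
    for inj in injection:
        q = q * decay + (1 - decay) * f * inj  # material-balance update
        rates.append(q)
    return rates

# 500 units/day initial rate; CO2 injection starts in period 3.
rates = crm_production(q0=500.0, injection=[0, 0, 300, 300, 300],
                       dt=30.0, tau=60.0, f=0.9)
print([round(r, 1) for r in rates])
```

Production declines exponentially while injection is zero, then recovers toward the steady-state value f·I once injection starts, which is the response shape history-matched against field data.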
- Published
- 2021
49. Can the palaeoepidemiology of rickets during the industrialisation period in France be studied through bioarchaeological grey literature and French medico-historical literature of the 18th-early 20th centuries? Preliminary examination of a complex topic
- Author
-
Olivier Dutour, Hélène Coqueugniot, and Antony Colombo
- Subjects
Archeology ,education.field_of_study ,060101 anthropology ,History ,060102 archaeology ,Data needs ,Population ,Rickets ,06 humanities and the arts ,Grey literature ,medicine.disease ,Vitamin D Deficiency ,Pathology and Forensic Medicine ,Gray Literature ,Industrialisation ,medicine ,Ethnology ,Humans ,0601 history and archaeology ,Industrial Development ,France ,education ,Child ,Period (music) - Abstract
Objective: This study explores whether data relating to rickets from the French medico-historical literature (FMHL) and bioarchaeological grey literature are useful in evaluating its epidemiology during the industrialisation of France. Unlike other European countries such as England, industrialisation in France was a slow and continuous process with two phases: the first in 1830–1870 and the second in 1870–1914. Materials and methods: A bibliographical analysis of 2800 FMHL sources from the 18th to the early 20th centuries and 50 archaeological excavation reports from the last 21 years was undertaken. Results: The FMHL data is very heterogeneous and predominantly dates to the second phase of industrialisation. The bioarchaeological data is very incomplete and predominantly relates to the period before industrialisation. At the same time, improvements in knowledge and institutional changes to protect children could explain more systematic registration of cases of rickets. Conclusions: No solid conclusions can be made regarding the prevalence of rickets at present; however, these data hold great potential. Significance: In comparison to England, no systematic investigation of rickets prevalence during the period of industrialisation in France has been undertaken to date. Limitations: The lack of archaeological excavations from this period and the limited palaeopathological analysis of the sites excavated have contributed to our current lack of understanding regarding the impact of industrialisation on the prevalence of rickets in the French population. Suggestions for further work: The FMHL data needs to be homogenised and osteoarchaeological collections need to be restudied with a common protocol focusing on signs of vitamin D deficiency.
- Published
- 2020
50. Library Impact Practice Brief: Supporting Bibliometric Data Needs at Academic Institutions
- Author
-
Alison Hitchens and Shannon Gordon
- Subjects
Data needs ,Library services ,Political science ,Technical report ,Library science ,Bibliometrics - Abstract
This practice brief presents research conducted by staff at the University of Waterloo Library as part of the library’s participation in ARL’s Research Library Impact Framework initiative. The research addressed the question, “How can research libraries support their campus community in accessing needed bibliometric data for institutional-level purposes?” The brief explores: service background, partners, service providers and users, how bibliometric data are used, data sources, key lessons learned, and recommended resources.
- Published
- 2020