43,096 results for "database management"
Search Results
2. 50 Years of Queries.
- Author
- Chamberlin, Donald
- Subjects
- *DATABASE industry, *DATABASE management, *RELATIONAL databases, *SQL
- Abstract
This article details the history of database management. The author, Donald Chamberlin, starts with the evolution of digital data management, including Charles Bachman's Integrated Data Store, as well as the author's own contributions to the field. Topics include E.F. Codd's relational data model and its continued influence on the field, as well as the development and current use of the query language SEQUEL, now known as SQL. Also discussed is the current database management movement known as NoSQL.
- Published
- 2024
- Full Text
- View/download PDF
3. Verifiable Conjunctive Searchable Symmetric Encryption with Result Pattern Hiding
- Author
- Chung-Nguyen, Huy-Hoang, Yuan, Dandan, Cui, Shujie, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Joseph K., editor, Chen, Liqun, editor, Sun, Shi-Feng, editor, and Liu, Xiaoning, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Advancing early cervical cancer detection: Integrating infrared imaging, colposcopy and AI-driven database management.
- Author
- Dorle, Akshay, Sengupta, Rajasi, and Wankhade, Nisha
- Subjects
- *EARLY detection of cancer, *DATABASE management, *INFRARED imaging, *COLPOSCOPY, *CERVICAL cancer
- Abstract
Cervical cancer is a leading cause of female mortality on a global scale. The use of infrared (IR) imaging for detecting anomalies in cervical tissues has proven remarkably useful, yet accurate diagnosis of cervical cancer remains a significant challenge. The goal of this research is to develop a comprehensive strategy that integrates IR imaging with colposcopy while utilizing cutting-edge computational approaches for improved database management. The aim of this method is to transform early cervical cancer detection, providing significant benefits to medical professionals and the patients they treat. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Fake certificate detection by using blockchain.
- Author
- Thakare, Nikhil, Narad, Supriya, and Surysvanshi, Yogesh
- Subjects
- *DIGITAL certificates, *STUDENT records, *DATABASE management, *ELECTRONIC records, *PRIVATE sector
- Abstract
Education is developing in India, and people are realizing its importance. Students earn many certificates, with which they can apply for jobs in the public or private sector, where all these certificates must be verified manually. There are cases where students have submitted fake certificates, and identifying those fakes is difficult. This issue of fake academic certificates is a problem for the academic community. To make data safer and more secure, records must be digitized on the basis of confidentiality, reliability, and availability. All of this can be achieved using blockchain technology, which provides strong security guarantees and can be used to generate a digital certificate that is tamper-proof and easily verified. Each certificate carries a unique identifier key that any organization can use through the portal to verify the certificate's authenticity, so certificate validation can be done easily. The large quantity of certificates in the public domain has become challenging, and certificate forgery has become a business, driven by people's need for employment. Existing systems address some of these problems by providing a central database for the electronic management of records such as student records; however, such a system can be easily hacked and manipulated, as it is readily available on servers. In this paper, a combination of certificate generation and verification using blockchain technology and a Quick Response (QR) code is implemented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
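The tamper-evident identifier scheme this abstract describes (a unique key per certificate, checked through a portal) can be sketched with a content hash standing in for the on-chain record. All names and data below are hypothetical, and a plain dictionary stands in for the blockchain ledger; a real deployment would anchor the hash in a blockchain transaction and carry the identifier in the QR code:

```python
import hashlib
import json

def issue_certificate(student, degree, year, ledger):
    """Create a tamper-evident record and append it to a shared ledger.

    The certificate's unique identifier is the SHA-256 hash of its
    canonicalized contents, so any later edit changes the identifier.
    """
    record = {"student": student, "degree": degree, "year": year}
    payload = json.dumps(record, sort_keys=True).encode()
    cert_id = hashlib.sha256(payload).hexdigest()
    ledger[cert_id] = record  # in a real system: a blockchain transaction
    return cert_id

def verify_certificate(cert_id, claimed, ledger):
    """Recompute the hash of the claimed contents and compare both the
    identifier and the ledger entry."""
    payload = json.dumps(claimed, sort_keys=True).encode()
    return (hashlib.sha256(payload).hexdigest() == cert_id
            and ledger.get(cert_id) == claimed)

ledger = {}
cid = issue_certificate("A. Kumar", "B.Tech", 2023, ledger)
genuine = {"student": "A. Kumar", "degree": "B.Tech", "year": 2023}
forged = {"student": "A. Kumar", "degree": "M.Tech", "year": 2023}
```

Because the identifier is derived from the certificate's contents, any altered field (the forged degree above) fails verification without the verifier needing access to the issuing institution's internal records.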
6. How much missing data is too much to impute for longitudinal health indicators? A preliminary guideline for the choice of the extent of missing proportion to impute with multiple imputation by chained equations.
- Author
- Junaid, K. P., Kiran, Tanvi, Gupta, Madhu, Kishore, Kamal, and Siwatch, Sujata
- Subjects
- *STATISTICAL models, *MEDICAL protocols, *REPEATED measures design, *DATA science, *HEALTH status indicators, *DATABASE management, *DATA analysis, *INFANT mortality, *HEALTH, *INFORMATION resources, *DESCRIPTIVE statistics, *CHI-squared test, *BIOINFORMATICS, *LONGITUDINAL method, *ANALYSIS of variance, *STATISTICS
- Abstract
Background: Multiple imputation by chained equations (MICE) is a widely used approach for handling missing data. However, its robustness, especially for high missing proportions in health indicators, is under-researched. The study aimed to provide a preliminary guideline for the choice of the extent of missing proportion to impute longitudinal health-related data using the MICE method. Methods: The study obtained complete data on five mortality-related health indicators of 100 countries (2015–2019) from the Global Health Observatory. Nine incomplete datasets with missing rates from 10 to 90% were generated and imputed using MICE. The robustness of MICE was assessed through three approaches: comparison of means using repeated-measures analysis of variance (RM-ANOVA), estimation of evaluation metrics (root mean square error, mean absolute deviation, bias, and proportionate variance), and visual inspection of box plots of imputed and non-imputed data. Results: The RM-ANOVA revealed significant differences between complete and imputed data, primarily in imputed data with over 50% missing proportions. Evaluation metrics exhibited 'high performance' for the dataset with a 50% missing proportion for various health indicators. However, with missing proportions exceeding 70%, the majority of indicators demonstrated a 'low' performance level in terms of most evaluation metrics. The visual inspection of the box plots revealed severe variance shrinkage in imputed datasets with missing proportions beyond 70%, corroborating the findings from the evaluation metrics. Conclusion: MICE demonstrates high robustness up to 50% missing values, with marginal deviations from complete datasets. Caution is warranted for missing proportions between 50 and 70%, as moderate alterations are observed. Proportions beyond 70% lead to significant variance shrinkage and compromised data reliability, emphasizing the importance of acknowledging imputation limitations for practical decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
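The robustness check this study performs (masking a known-complete indicator series, imputing, and comparing evaluation metrics) can be sketched as follows. Mean imputation serves here as a deliberately simple stand-in for MICE, and the data and seed are synthetic; the point is that the variance shrinkage reported at high missing proportions becomes directly visible:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(loc=50.0, scale=10.0, size=1000)  # a complete indicator series

def impute_and_evaluate(values, missing_prop, rng):
    """Mask a given proportion, impute (mean imputation as a simple stand-in
    for MICE), and return RMSE and the imputed-to-true variance ratio."""
    mask = rng.random(values.size) < missing_prop
    observed = values.copy()
    observed[mask] = np.nan
    imputed = np.where(np.isnan(observed), np.nanmean(observed), observed)
    rmse = float(np.sqrt(np.mean((imputed[mask] - values[mask]) ** 2)))
    var_ratio = float(np.var(imputed) / np.var(values))
    return rmse, var_ratio

results = {p: impute_and_evaluate(truth, p, rng) for p in (0.1, 0.5, 0.9)}
```

At a 10% missing proportion the imputed-to-true variance ratio stays near 1; at 90% it collapses toward 0, mirroring the shrinkage the authors observed beyond 70% missingness.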
7. Increasing Serious Illness Conversations in Patients at High Risk of One-Year Mortality Using Improvement Science: A Quality Improvement Study.
- Author
- Sharma, Kanishk D., Godambe, Sandip A., Chavan, Prachi P., Parks-Savage, Agatha, and Galicia-Castillo, Marissa
- Subjects
- MORTALITY risk factors, RISK assessment, DOCUMENTATION, PEARSON correlation (Statistics), CONVERSATION, CRITICALLY ill, PATIENTS, STATISTICAL significance, DATABASE management, FISHER exact test, CATASTROPHIC illness, CHI-squared test, DESCRIPTIVE statistics, PRE-tests & post-tests, TRAUMA centers, MEDICAL records, ACQUISITION of data, QUALITY assurance, DATA analysis software, ADVANCE directives (Medical care)
- Abstract
Background: Serious illness conversation (SIC) is an important skill set for clinicians. A review of mortality meetings from an urban academic hospital highlighted the need for early engagement in SICs and advance care planning (ACP) to align medical treatments with patient-centered outcomes. The aim of this study was to increase SICs and their documentation in patients with low one-year survival probability identified by updated Charlson Comorbidity Index (CCI) scores. Methods: This was a quality improvement study with data collected pre- and post-intervention at a large urban level one trauma center in Virginia, which also serves as a primary teaching hospital to about 400 residents and fellows. Patient chart reviews were completed to assess medical records and hospitalization data. Chi-square tests were used to identify statistical significance, with the alpha level set at 0.05. Integrated care managers were trained to identify and discuss high CCI scores during interdisciplinary rounds. Providers were encouraged to document SICs with identified patients in extent of care (EOC) notes within the hospital's cloud-based electronic health record, known as EPIC. Results: Sixty-two patients with high CCI scores were documented, with 16 (25.81%, p = 0.0001) having EOC notes. Patients with documented EOC notes were significantly more likely to change their focus of care, prompting palliative care (63.04% vs. 50%, p = 0.007) and hospice consults (93.48% vs. 68.75%, p = 0.01), compared to those without. Post-intervention surveys revealed that although 50% of providers conducted SICs, fewer used EOC notes for documentation. Conclusion: This initial intervention suggests that the documentation of SICs increases engagement in ACP, palliative care, hospice consultations, and do-not-resuscitate decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. AI-Driven Enhancement of Skin Cancer Diagnosis: A Two-Stage Voting Ensemble Approach Using Dermoscopic Data.
- Author
- Chiu, Tsu-Man, Li, Yun-Chang, Chi, I-Chun, and Tseng, Ming-Hseng
- Subjects
- *PREDICTIVE tests, *SKIN tumors, *DATABASE management, *EARLY medical intervention, *RESEARCH funding, *ARTIFICIAL intelligence, *CONVOLUTIONAL neural networks, *DIAGNOSTIC errors, *SKIN, *CLINICAL pathology, *DERMOSCOPY, *MACHINE learning
- Abstract
Simple Summary: This study utilized datasets from two ethnic groups to develop an AI diagnostic model. This model was trained using transfer learning, leveraging eight pre-trained models, including convolutional neural networks and vision transformers. The three-class AI model assists doctors in distinguishing between patients with melanoma who require urgent treatment, those with non-melanoma skin cancers who can be treated later, and benign cases that do not require intervention. The proposed two-stage classification strategy significantly improved diagnostic accuracy and reduced false negatives. This research demonstrates the success of the proposed method in both datasets. These findings highlight the potential of AI technology in skin cancer diagnosis, particularly in resource-limited medical settings, where it could become a valuable clinical tool to improve diagnostic accuracy, reduce skin cancer mortality, and decrease healthcare costs. Background: Skin cancer is the most common cancer worldwide, with melanoma being the deadliest type, though it accounts for less than 5% of cases. Traditional skin cancer detection methods are effective but are often costly and time-consuming. Recent advances in artificial intelligence have improved skin cancer diagnosis by helping dermatologists identify suspicious lesions. Methods: The study used datasets from two ethnic groups, sourced from the ISIC platform and CSMU Hospital, to develop an AI diagnostic model. Eight pre-trained models, including convolutional neural networks and vision transformers, were fine-tuned. The three best-performing models were combined into an ensemble model, which underwent multiple random experiments to ensure stability. To improve diagnostic accuracy and reduce false negatives, a two-stage classification strategy was employed: a three-class model for initial classification, followed by a binary model for secondary prediction of benign cases. 
Results: In the ISIC dataset, the false negative rate for malignant lesions was significantly reduced, and the number of malignant cases misclassified as benign dropped from 124 to 45. In the CSMUH dataset, false negatives for malignant cases were completely eliminated, reducing the number of misclassified malignant cases to zero, resulting in a notable improvement in diagnostic precision and a reduction in the false negative rate. Conclusions: Through the proposed method, the study demonstrated clear success in both datasets. First, a three-class AI model can assist doctors in distinguishing between melanoma patients who require urgent treatment, non-melanoma skin cancer patients who can be treated later, and benign cases that do not require intervention. Subsequently, a two-stage classification strategy effectively reduces false negatives in malignant lesions. These findings highlight the potential of AI technology in skin cancer diagnosis, particularly in resource-limited medical settings, where it could become a valuable clinical tool to improve diagnostic accuracy, reduce skin cancer mortality, and reduce healthcare costs. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
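The two-stage strategy described above (a three-class model whose "benign" calls are re-screened by a more sensitive binary model to cut false negatives) reduces to a small piece of decision logic. The probabilities, class names, and threshold below are illustrative stand-ins for the paper's ensemble outputs, not its actual models:

```python
# Hypothetical probabilities from a stage-1 three-class model and a
# stage-2 binary (benign vs. malignant) model; in the paper these come
# from ensembles of fine-tuned CNN and vision-transformer backbones.

def two_stage_predict(stage1_probs, stage2_prob_malignant, threshold=0.3):
    """Stage 1 picks among melanoma / non-melanoma cancer / benign.
    Cases called 'benign' get a second, more sensitive binary check,
    and suspicious ones are escalated rather than released."""
    classes = ("melanoma", "non_melanoma_cancer", "benign")
    label = classes[max(range(3), key=lambda i: stage1_probs[i])]
    if label == "benign" and stage2_prob_malignant >= threshold:
        return "non_melanoma_cancer"  # escalate the suspicious benign call
    return label
```

Lowering `threshold` trades false positives for fewer false negatives, which is the direction a screening setting usually prefers.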
9. An Intelligent Optimized Compression Framework for Columnar Database.
- Author
- Jadhawar, B. A. and Sharma, Narendra
- Subjects
- *DATABASE management, *OLAP technology, *DATABASES, *ONLINE data processing
- Abstract
A columnar database is a type of database management system (DBMS) that stores data by column instead of by row. To speed up processing and answer queries quickly, a columnar database must efficiently write and read data to and from hard disk storage. Compression is one of the most crucial methods in the creation of column-oriented database systems, yet both heavyweight and lightweight compression techniques have limitations for columns with zero-length string types. These databases are substantially more effective for online analytical processing than for online transaction processing: although they are made to analyze transactions, they are not very efficient at updating them. To overcome these issues, a Zero Length Recurrent based Fruit Fly Optimization (ZLRFF) model is used. The ZLRFF compression technique was designed to achieve a high compression ratio and allow direct lookups on compressed material without decompressing it first. ZLRFF's main goal is to divide a zero-length-string column vertically into smaller columns that can each be compressed using a separate lightweight compression technique. To search directly on compressed data, we also provide a search technique we call FF-search. Extensive testing demonstrates that ZLRFF supports direct searching on compressed data in addition to achieving a decent compression ratio, which enhances query performance. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
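The details of ZLRFF and FF-search are specific to this paper, but the underlying idea, lightweight per-column compression that still supports direct lookups without decompression, can be illustrated with run-length encoding. The column values below are invented; RLE stands in for whichever lightweight scheme each sub-column would actually use:

```python
def rle_compress(column):
    """Lightweight run-length encoding: [value, run_length] pairs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_search(runs, target):
    """Direct lookup on compressed data: return row positions of `target`
    without decompressing the column first."""
    positions, row = [], 0
    for value, length in runs:
        if value == target:
            positions.extend(range(row, row + length))
        row += length
    return positions

# Zero-length strings compress into runs just like any other value.
column = ["", "", "DE", "DE", "DE", "", "US"]
runs = rle_compress(column)
```

The search walks the run list rather than the rows, so its cost scales with the number of runs, which is the property that lets compressed columnar scans outperform row scans.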
10. Disruptions and adaptations of an urban nutrition intervention delivering essential services for women and children during a major health system crisis in Dhaka, Bangladesh.
- Author
- Escobar-DeMarco, Jessica, Nguyen, Phuong, Kundu, Gourob, Kabir, Rowshan, Ali, Mohsin, Ireen, Santhia, Ash, Deborah, Mahmud, Zeba, Sununtnasuk, Celeste, Menon, Purnima, and Frongillo, Edward A.
- Subjects
- *HEALTH services accessibility, *MATERNAL health services, *CLINICAL supervision, *DATABASE management, *MEDICAL care, *LABOR turnover, *PSYCHOLOGY of women, *CRISIS intervention (Mental health services), *PSYCHOLOGICAL adaptation, *DESCRIPTIVE statistics, *PRENATAL care, *METROPOLITAN areas, *ORGANIZATIONAL change, *SOCIAL support, *COUNSELING, *NUTRITION, *COVID-19 pandemic, *LABOR supply
- Abstract
Systematic crises may disrupt well‐designed nutrition interventions. Continuing services requires understanding the intervention paths that have been disrupted and adapting as crises permit. Alive & Thrive developed an intervention to integrate nutrition services into urban antenatal care services in Dhaka, which started at the onset of COVID‐19 and encountered extraordinary disruption of services. We investigated the disruptions and adaptations that occurred to continue the delivery of services for women and children and elucidated how the intervention team made those adaptations. We examined the intervention components planned and those implemented annotating the disruptions and adaptations. Subsequently, we detailed the intervention paths (capacity building, supportive supervision, demand generation, counselling services, and reporting, data management and performance review). We sorted out processes at the system, organizational, service delivery and individual levels on how the intervention team made the adaptations. Disruptions included decreased client load and demand for services, attrition of providers and intervention staff, key intervention activities becoming unfeasible and clients and providers facing challenges affecting utilization and provision of services. Adaptations included incorporating new guidance for the continuity of services, managing workforce turnover and incorporating remote modalities for all intervention components. The intervention adapted to continue by incorporating hybrid modalities including both original activities that were feasible and adapted activities. Amidst health system crises, the adapted intervention was successfully delivered. This knowledge of how to identify disruptions and adapt interventions during major crises is critical as Bangladesh and other countries face new threats (conflict, climate, economic downturns, inequities and epidemics). 
Key messages:
- Well-designed nutrition interventions may be disrupted by crises that affect the interventions themselves and the platforms on which they run.
- Combining contextualized expertise in operational settings with a data-driven decision-making process can facilitate the timely identification of intervention disruptions and enable swift adaptations.
- Continuity of nutrition services amidst crises is feasible by adopting hybrid modalities including both original and adapted implementation paths.
- Visualizing adaptations to the intervention paths sheds light on how to deliver nutrition services during major systematic disruptions.
- Knowledge of how to adapt nutrition interventions during crises is critical going forward to respond successfully in future disruptive events.
[ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
11. Attaining consensus on a core dataset for upper limb lymphoedema using the Delphi method: A foundational step in creating a clinical support system.
- Author
- Sierla, Robyn, Dylke, Elizabeth, Poon, Simon, Shaw, Tim, and Kilbreath, Sharon
- Subjects
- *CONSENSUS (Social sciences), *LYMPHEDEMA, *ARM, *DATABASE management, *RESEARCH funding, *CLINICAL decision support systems, *DIGITAL health, *INTERVIEWING, *JUDGMENT sampling, *DESCRIPTIVE statistics, *COMMUNICATION, *RESEARCH methodology, *DELPHI method, *MANAGEMENT of medical records, *DATA analysis software
- Abstract
Background: Lymphoedema is a condition of localised swelling caused by a compromised lymphatic system. The protein-rich fluid accumulating in the interstitial tissue can create inflammation and irreversible changes to the skin and underlying tissue. An array of methods has been used to assess and report these changes. Heterogeneity is evident in the clinic and in the literature for the domains assessed, outcomes and outcome measures selected, measurement protocols followed, methods of analysis, and descriptors used to report change. Objective: This study seeks consensus on the required items for inclusion in a core data set for upper limb lymphoedema to digitise the monitoring and reporting of upper limb lymphoedema. Methods: The breadth of outcomes and descriptors in common use were captured in prior studies by this research group. This list was refined by frequency and proposed to experts in the field (n = 70) through a two-round online modified Delphi study. These participants rated the importance of each item for inclusion in the dataset and identified outcomes or descriptors they felt were missing in Round 1. In Round 2, participants rated any new outcomes or descriptors proposed and preference for how numeric data is displayed. Results: The core dataset was confirmed on completion of Round 2. Interlimb difference as a percentage, and limb volume were preferred for graphed display over time; and descriptors for observed and palpated change narrowed from 42 to 20. Conclusion: This dataset provides the foundation to create a clinical support system for upper limb lymphoedema. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
12. Critical Appraisal of Evidence: Synthesis and Recommendations.
- Author
- Farus-Brown, Susan, Fineout-Overholt, Ellen, Hays, Deana, Zonsius, Mary C., and Milner, Kerry A.
- Subjects
- *EVIDENCE-based nursing, *NURSES, *DOCTOR of philosophy degree, *DATABASE management, *DECISION making, *RESEARCH methodology, *QUALITY assurance
- Abstract
This is the fifth article in a new series designed to provide readers with insight into educating nurses about evidence-based decision-making (EBDM). It builds on AJN's award-winning previous series—Evidence-Based Practice, Step by Step and EBP 2.0: Implementing and Sustaining Change (to access both series, go to https://links.lww.com/AJN/A133). This follow-up series on EBDM will address how to teach and facilitate learning about the evidence-based practice (EBP) and quality improvement (QI) processes and how they impact health care quality. This series is relevant for all nurses interested in EBP and QI, especially DNP faculty and students. The brief case scenario included in each article describes one DNP student's journey. To access previous articles in this EBDM series, go to https://links.lww.com/AJN/A256. This article discusses the last two phases of the critical appraisal process: synthesis and recommendation. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
13. GPUDLMOI: A Moving Object Indexing Method Using Deep Learning.
- Author
- Xiaofeng Liu, Ji Li, Chuanwen Li, and Liangyu Chu
- Subjects
- *LOCATION data, *DATABASES, *DATABASE management, *DEEP learning, *INDEXING, *DETECTORS
- Abstract
With advancements in positioning technology and the widespread adoption of wireless sensors, numerous wireless handheld and vehicular devices now come equipped with positioning capabilities. This has enabled a variety of new applications and generated large volumes of moving object data. The continuously changing location information of these moving objects requires efficient management in databases. Traditional database systems, which typically assume static attribute values until explicitly updated, face challenges in managing such dynamic, constantly changing location data efficiently. Current moving object indexing structures fall mainly into two categories: grid-based and tree-based indexing. However, each approach has inherent limitations. In this paper, we propose a novel indexing method that combines grid and quadtree structures and utilizes a deep learning model to intelligently determine when leaf nodes should be split or merged. Our method is not just a theoretical concept, but a practical solution that can be applied to a wide range of scenarios. Experimental results demonstrate that our method achieves higher throughput and reduces response times, particularly in skewed moving object distributions, offering significant improvements over existing indexing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2025
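The grid-plus-quadtree structure this abstract combines with deep learning can be sketched as a plain point quadtree. A fixed leaf capacity below stands in for the paper's learned split/merge policy, and the coordinates are invented:

```python
class QuadNode:
    """Minimal point quadtree; a fixed CAPACITY stands in for the paper's
    deep-learning model that decides when leaves split or merge."""
    CAPACITY = 4

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.points, self.children = [], None

    def insert(self, px, py):
        if self.children is not None:
            self._child_for(px, py).insert(px, py)
            return
        self.points.append((px, py))
        if len(self.points) > self.CAPACITY and self.size > 1:
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [QuadNode(self.x + dx * h, self.y + dy * h, h)
                         for dy in (0, 1) for dx in (0, 1)]
        for px, py in self.points:
            self._child_for(px, py).insert(px, py)
        self.points = []

    def _child_for(self, px, py):
        h = self.size / 2
        col = int(px >= self.x + h)
        row = int(py >= self.y + h)
        return self.children[row * 2 + col]

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

    def depth(self):
        if self.children is None:
            return 1
        return 1 + max(c.depth() for c in self.children)

root = QuadNode(0, 0, 64)
for i in range(20):
    root.insert(i % 8, i % 8)  # skewed workload: points along the diagonal
```

Under the skewed insertion pattern above, only the populated quadrant chain subdivides; that is exactly the situation where a learned split policy can outperform a fixed threshold.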
14. Effect of management system and dietary seasonal variability on environmental efficiency and human net food supply of mountain dairy farming systems.
- Author
- Zanon, Thomas, Hörtenhuber, Stefan, Fichter, Greta, Peratoner, Giovanni, Zollitsch, Werner, Gatterer, Markus, and Gauly, Matthias
- Subjects
- *HILL farming, *GREENHOUSE gases, *SUSTAINABLE agriculture, *DATABASE management, *FARM management
- Abstract
Mountain dairy cattle farming systems are pivotal for the economy, as well as for social and environmental aspects. They significantly contribute to rural development, which is currently strongly prioritized in the common European Union agricultural policy; at the same time, they are also increasingly criticized for having a relatively high environmental impact (such as greenhouse gas emissions) per kilogram of product. Consequently, the aim of this study was to assess and compare the environmental efficiency of 2 common alpine dairy farming systems, with a focus on the effects of grazing, considering the seasonal variability in feeding at the individual cow level and farm management over a 3-yr period. This study focuses on alpine farming systems, but can also be considered to effectively represent other topographically disadvantaged mountain areas. We compared an intensively managed and globally dominating production system (high-input) aimed at high milk yield through relatively intensive feeding and the use of the high-yielding dual-purpose Simmental cattle permanently confined in stables, with a forage-based production system (low-input) based on seasonal grazing and the use of the autochthonous dual-purpose breed Tyrolean Grey. For the present analysis, we used a dataset with information on feed intake and diet composition, as well as animal productivity at the individual cow level and farm management data based on multiyear data recording. We quantified 4 impact categories for 3 consecutive years: global warming potential (GWP 100), acidification potential (AP), marine eutrophication potential (MEP), and land use (LU; in square meters per year and eco points [Pt], with the latter additionally considering the soil quality index). 
In addition to being attributed to 1 kg of fat- and protein-corrected milk (FPCM), these impact categories were also related to 1 m2 of on-farm area. Due to limited agronomic options beyond forage production and pasture use in alpine regions, net provision of protein was calculated for both farming systems to assess food supply and quantify the respective food-feed competition. Overall, the low-input farming system had greater environmental efficiency in terms of MEP per kilogram of FPCM, as well as MEP and AP per square meter, than the high-input system. Land use was found to be consistently higher for the high-input than for the low-input system, while the GWP 100 per kilogram of FPCM was lower for the high-input system. Additionally, pasture access had a significant effect on the reduction of environmental impacts. Lastly, the net protein provision was slightly negative for the high-input system and marginally positive for the low-input system, indicating a lower food-feed competition for the latter. Future studies should also address the social and economic aspects of the farming systems in order to offer a comprehensive overview of the 3 key factors necessary for achieving more sustainable farming systems, particularly in disadvantaged marginal regions such as mountain areas. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
15. Interoperability of health data using FHIR Mapping Language: transforming HL7 CDA to FHIR with reusable visual components.
- Author
- Bossenko, Igor, Randmaa, Rainer, Piho, Gunnar, and Ross, Peeter
- Subjects
- DATA transmission systems standards, DOCUMENTATION, CLINICAL medicine, MEDICAL information storage & retrieval systems, DATABASE management, COMPUTER software, HEALTH, MEDICAL care, INFORMATION resources, DATA analytics, EXPERIMENTAL design, INFORMATION science, ELECTRONIC health records, CONCEPTUAL structures, SEMANTICS, HEALTH facilities, HEALTH information systems, ACCESS to information
- Abstract
Introduction: Ecosystem-centered healthcare innovations, such as digital health platforms, patient-centric records, and mobile health applications, depend on the semantic interoperability of health data. This ensures efficient, patient-focused healthcare delivery in a mobile world where citizens frequently travel for work and leisure. Beyond healthcare delivery, semantic interoperability is crucial for secondary health data use. This paper introduces a tool and techniques for achieving health data semantic interoperability, using reusable visual transformation components to create and validate transformation rules and maps, making them usable for domain experts with minimal technical skills. Methods: The tool and techniques for health data semantic interoperability have been developed and validated using Design Science, a common methodology for developing software artifacts, including tools and techniques. Results: Our tool and techniques are designed to facilitate the interoperability of Electronic Health Records (EHRs) by enabling the seamless unification of various health data formats in real time, without the need for extensive physical data migrations. These tools simplify complex health data transformations, allowing domain experts to specify and validate intricate data transformation rules and maps. The need for such a solution arises from the ongoing transition of the Estonian National Health Information System (ENHIS) from Clinical Document Architecture (CDA) to Fast Healthcare Interoperability Resources (FHIR), but it is general enough to be used for other data transformation needs, including the European Health Data Space (EHDS) ecosystem. Conclusion: The proposed tool and techniques simplify health data transformation by allowing domain experts to specify and validate the necessary data transformation rules and maps. Evaluation by ENHIS domain experts demonstrated the usability, effectiveness, and business value of the tool and techniques. 
[ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
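The rule-driven transformation idea this abstract describes (declarative source-to-target maps that domain experts can author and validate) can be sketched in miniature. This is not FHIR Mapping Language syntax; the paths are flattened, the CDA-like document below is hypothetical, and real CDA-to-FHIR maps are far deeper, but the shape of the problem, reading a source path and writing a target path per rule, is the same:

```python
# A declarative rule table standing in for a FHIR StructureMap:
# each rule is a (source path, target path) pair that an expert edits
# while the transformation engine itself stays fixed.
RULES = [
    ("recordTarget.patient.name.given", "name.given"),
    ("recordTarget.patient.name.family", "name.family"),
    ("recordTarget.patient.birthTime", "birthDate"),
]

def get_path(doc, path):
    """Walk nested dicts along a dotted path and return the value."""
    for key in path.split("."):
        doc = doc[key]
    return doc

def set_path(doc, path, value):
    """Create intermediate dicts as needed, then set the leaf value."""
    keys = path.split(".")
    for key in keys[:-1]:
        doc = doc.setdefault(key, {})
    doc[keys[-1]] = value

def transform(cda, rules):
    """Apply each (source, target) rule to build a FHIR-shaped resource."""
    fhir = {"resourceType": "Patient"}
    for src, dst in rules:
        set_path(fhir, dst, get_path(cda, src))
    return fhir

cda_doc = {"recordTarget": {"patient": {
    "name": {"given": "Mari", "family": "Tamm"},
    "birthTime": "1980-04-12"}}}
patient = transform(cda_doc, RULES)
```

Each rule corresponds loosely to one reusable visual component: experts adjust the pairs, and validation amounts to running the fixed engine over test documents and checking the output shape.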
16. Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions.
- Author
- Zantvoort, Kirsten, Nacke, Barbara, Görlich, Dennis, Hornstein, Silvan, Jacobi, Corinna, and Funk, Burkhardt
- Subjects
- MENTAL illness treatment, PSYCHOTHERAPY, PREDICTIVE tests, PSYCHIATRIC treatment, MENTAL health services, DATABASE management, RESEARCH funding, DIGITAL health, PROBABILITY theory, ARTIFICIAL intelligence, DESCRIPTIVE statistics, ARTIFICIAL neural networks, MACHINE learning, ALGORITHMS
- Abstract
Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. When data sharing is an answer and when (often) it is not: Acknowledging data‐driven, non‐data, and data‐decentered cultures.
- Author
-
Huvila, Isto and Sinnamon, Luanne S.
- Subjects
- *
INTELLECTUAL property , *SOCIAL sciences , *DATABASE management , *QUALITATIVE research , *RESEARCH funding , *PRIVACY , *INTERVIEWING , *MEDICAL research , *RESEARCH methodology , *MEDICAL ethics - Abstract
Contemporary research and innovation policies and advocates of data‐intensive research paradigms continue to urge increased sharing of research data. Such paradigms are underpinned by a pro‐data, normative data culture that has become dominant in the contemporary discourse. Earlier research on research data sharing has directed little attention to its alternatives as more than a deficit. The present study aims to provide insights into researchers' perspectives, rationales and practices of (non‐)sharing of research data in relation to their research practices. We address two research questions, (RQ1) what underpinning patterns can be identified in researchers' (non‐)sharing of research data, and (RQ2) how are attitudes and data‐sharing linked to researchers' general practices of conducting their research. We identify and describe data‐decentered culture and non‐data culture as alternatives and parallels to the data‐driven culture, and describe researchers de‐inscriptions of how they resist and appropriate predominant notions of data in their data practices by problematizing the notion of data, asserting exceptions to the general case of data sharing, and resisting or opting out from data sharing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Typed Unknown Values: A Step towards Solving the Problem of Missing Data Representation in Relational Databases.
- Author
-
Kuznetsov, S. D.
- Subjects
- *
MISSING data (Statistics) , *DATABASE management , *DATA management , *PROBLEM solving , *SQL - Abstract
The state of affairs in the field of missing data management in relational databases leaves much to be desired. The SQL standard uses the universal null value to represent missing data, and the management is based on three-valued logic, in which the null value is identified with the third Boolean value. This solution is conceptually inconsistent and often results in DBMS behavior that is not intuitive. An alternative approach based on typed special values leaves all handling of missing data to users. In this paper, we analyze the long history of research and development that led to this situation. We come to the conclusion that no other solution could have appeared in the SQL standard because of the choice of the mechanism of universal null value more than 50 years ago, whereas the alternative mechanism cannot provide system support for special values due to the use of two-valued logic. We propose a combined approach using typed special values based on three-valued logic. This approach allows one to use the semantics of data types when processing queries with conditions that involve unknown data. In addition, our approach makes it possible to define a full-fledged three-valued logic in which a special value of the Boolean type is the third Boolean value. [ABSTRACT FROM AUTHOR]
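The unintuitive DBMS behavior the abstract criticizes is easy to reproduce. A minimal sketch using Python's built-in sqlite3 module, whose SQL follows the standard's NULL semantics and three-valued logic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

# 'x = NULL' evaluates to UNKNOWN for every row, so nothing matches,
# not even the row whose x really is missing.
eq_null = con.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]

# Two-valued intuition says 'x = 1 OR x <> 1' covers every row, but for
# the NULL row both predicates are UNKNOWN and the row is silently dropped.
tautology = con.execute(
    "SELECT COUNT(*) FROM t WHERE x = 1 OR x <> 1"
).fetchone()[0]

# IS NULL is the special predicate the standard provides instead of '='.
is_null = con.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]
```

Here `eq_null` is 0 and `tautology` is 2 rather than 3: the behavior that motivates the paper's proposal of typed special values with a full-fledged three-valued logic.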
- Published
- 2024
- Full Text
- View/download PDF
19. Sexual decision-making: an exploratory interview study of Cambodian adolescents.
- Author
-
Park, Gloria and Yang, Youngran
- Subjects
DECISION making in adolescence ,SOCIAL media ,CULTURAL awareness ,RESEARCH funding ,QUALITATIVE research ,DATABASE management ,HUMAN sexuality ,INTERVIEWING ,ATTITUDES toward sex ,SEX distribution ,SEXUAL excitement ,SEX education ,AFFINITY groups ,CAMBODIANS ,FAMILIES ,SEX customs ,RESEARCH ,RESEARCH methodology ,SEXUAL intercourse ,SOCIAL networks ,HEALTH promotion ,HEALTH education ,ALCOHOL drinking ,PSYCHOSOCIAL factors - Abstract
Introduction: The rate of sexual activity among adolescents is very high, with serious repercussions such as human immunodeficiency virus (HIV) and sexually transmitted diseases. Understanding the factors that influence adolescents' engagement in sexual activity is crucial for promoting healthy sexual attitudes and behaviors in schools, sex education programs, communities, and families. This study aimed to examine the factors influencing sexual decision-making among Cambodian adolescents. Methods: In accordance with the Standards for Reporting Qualitative Research (SRQR), this study used a descriptive qualitative methodology with individual interviews. The participants in the study were 30 Cambodian adolescents (15 males and 15 females) who were all unmarried and sexually active. They were recruited using various methods, including social networking services, and interviewed to explore their sexual decision-making processes. Results: The analysis revealed that the decision-making process was influenced by both internal and external factors. Internal factors included sexually explicit Internet material and arousal from sexy outfits, while external factors included foreign vs. Khmer culture, the surrounding environment including community, peers, and family, and educational advice received at school. Gender differences were noted in responses to stimuli like sexy outfits and perceptions of cultural norms. Conclusions: This study underscores the complexity of adolescent sexual decision-making in Cambodia. It highlights the need for sex education that is not only comprehensive but also culturally sensitive, addressing the diverse influences on these adolescents. Future research should include a broader demographic group, including rural adolescents, to gain more comprehensive insights. Implications for practice: This study uncovers how cultural norms, peers, and the media impact sexual behaviors, emphasizing the significant gender differences in these aspects. 
The findings shed light on the necessity of culturally sensitive and comprehensive sex education and the urgent need for tailored approaches to health promotion and education. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Associations between number and type of conditions and physical activity levels in adults with multimorbidity - a cross-sectional study from the Danish Lolland-Falster health study.
- Author
-
Jørgensen, Lars Bo, Mortensen, Sofie Rath, Tang, Lars Hermann, Grøntved, Anders, Brønd, Jan Christian, Jepsen, Randi, Petersen, Therese Lockenwitz, and Skou, Søren T.
- Subjects
CROSS-sectional method ,BODY mass index ,DATABASE management ,RESEARCH funding ,ACCELEROMETRY ,SEDENTARY lifestyles ,MENTAL illness ,QUESTIONNAIRES ,SEX distribution ,EXERCISE intensity ,MULTIVARIATE analysis ,DESCRIPTIVE statistics ,AGE distribution ,COMPARATIVE studies ,DATA analysis software ,PHYSICAL activity ,ACTIVITIES of daily living ,COMORBIDITY ,REGRESSION analysis - Abstract
Aim: To provide detailed descriptions of the amount of daily physical activity (PA) performed by people with multimorbidity and investigate the association between the number of conditions, multimorbidity profiles, and PA. Methods: All adults (≥18 years) from The Lolland-Falster Health Study, conducted from 2016 to 2020, who had PA measured with accelerometers and reported medical conditions were included (n=2,158). Sedentary behavior and daily PA at light, moderate, vigorous, and moderate to vigorous intensity and number of steps were measured with two accelerometers. Associations were investigated using multivariable and quantile regression analyses. Results: Adults with multimorbidity spent nearly half their day sedentary, and the majority did not adhere to the World Health Organization's (WHO) PA recommendations (two conditions: 63%, three conditions: 74%, ≥four conditions: 81%). The number of conditions was inversely associated with PA at all intensity levels (except sedentary time) and with the daily number of steps. Participants with multimorbidity and presence of mental disorders (somatic/mental multimorbidity) had significantly lower levels of PA at all intensity levels (except sedentary time) and fewer daily steps, compared to participants with multimorbidity combinations of exclusively somatic conditions. Conclusion: Levels of sedentary behavior and non-adherence to PA recommendations in adults with multimorbidity were high. Inverse associations between PA and the number of conditions and mental multimorbidity profiles suggest that physical inactivity increases as multimorbidity becomes more complex. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Top-level design of field test data management for the underground research laboratory for high-level radioactive waste disposal.
- Author
-
王鹏, 王驹, 黄树桃, and 马明清
- Subjects
DATABASE management ,UNDERGROUND construction ,DATA management ,DATA quality ,RADIOACTIVE waste disposal ,TEST methods - Abstract
Copyright of World Nuclear Geoscience is the property of World Nuclear Geoscience Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
22. It's late, but not too late to transform health systems: a global digital citizen science observatory for local solutions to global problems.
- Author
-
Katapally, Tarun Reddy
- Subjects
WORLD Wide Web ,MOBILE apps ,MEDICAL information storage & retrieval systems ,DATABASE management ,DIGITAL health ,DECISION making in clinical medicine ,TELEMEDICINE ,CITIZEN science ,ELECTRONIC health records ,CLOUD computing - Abstract
A key challenge in monitoring, managing, and mitigating global health crises is the need to coordinate clinical decision-making with systems outside of healthcare. In the 21st century, human engagement with Internet-connected ubiquitous devices generates an enormous amount of big data, which can be used to address complex, intersectoral problems via participatory epidemiology and mHealth approaches that can be operationalized with digital citizen science. These big data – which traditionally exist outside of health systems – are underutilized even though their usage can have significant implications for prediction and prevention of communicable and non-communicable diseases. To address critical challenges and gaps in big data utilization across sectors, a Digital Citizen Science Observatory (DiScO) is being developed by the Digital Epidemiology and Population Health Laboratory by scaling up existing digital health infrastructure. DiScO's development is informed by the Smart Framework, which leverages ubiquitous devices for ethical surveillance. The Observatory will be operationalized by implementing a rapidly adaptable, replicable, and scalable progressive web application that repurposes jurisdiction-specific cloud infrastructure to address crises across jurisdictions. The Observatory is designed to be highly adaptable for both rapid data collection as well as rapid responses to emerging and existing crises. Data sovereignty and decentralization of technology are core aspects of the observatory, where citizens can own the data they generate, and researchers and decision-makers can re-purpose digital health infrastructure. The ultimate aim of DiScO is to transform health systems by breaking existing jurisdictional silos in addressing global health crises. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Opportunities, challenges and future perspectives of using bioinformatics and artificial intelligence techniques on tropical disease identification using omics data.
- Author
-
Vidanagamachchi, S. M. and Waidyarathna, K. M. G. T. R.
- Subjects
TROPICAL medicine ,RISK assessment ,GENERATIVE adversarial networks ,FEDERATED learning ,PUBLIC health surveillance ,VACCINE development ,PHARMACOLOGY ,SAFETY ,DATA security ,COST effectiveness ,PREDICTION models ,GENOMICS ,DATABASE management ,THERAPEUTICS ,DIFFUSION of innovations ,ARTIFICIAL intelligence ,MULTIOMICS ,DRUG resistance in microorganisms ,PRIVACY ,CLINICAL trials ,BIOLOGICAL products ,DATA analytics ,BIOINFORMATICS ,COMPUTER-aided diagnosis ,EPIDEMICS ,CRISPRS ,GENE expression profiling ,PROTEOMICS ,COMPUTERS in medicine ,EARLY diagnosis ,ACCURACY ,MACHINE learning ,AUTOMATION ,INDIVIDUALIZED medicine ,DRUG development ,PUBLIC health ,DATA quality ,BIOMARKERS ,SEQUENCE analysis ,SENSITIVITY & specificity (Statistics) ,MEDICAL ethics ,ACCESS to information ,CLOUD computing ,DISEASE risk factors - Abstract
Tropical diseases can often be caused by viruses, bacteria, parasites, and fungi, and they can be spread by vectors. Analysis of multiple omics data types can provide comprehensive insights into biological system functions and disease progression. To this end, bioinformatics tools and diverse AI techniques are pivotal in identifying and understanding tropical diseases through the analysis of omics data. In this article, we provide a thorough review of opportunities, challenges, and future directions of utilizing bioinformatics tools and AI-assisted models for tropical disease identification using various omics data types. We conducted the review from 2015 to 2024, considering reliable databases of peer-reviewed journal and conference articles. Several keywords were used in the article search, and around 40 articles were reviewed. According to the review, we observed that utilization of omics data with bioinformatics tools like BLAST and Clustal Omega can yield significant outcomes in tropical disease identification. Further, the integration of multiple omics data improves biomarker identification and disease predictions, including disease outbreak predictions. Moreover, AI-assisted models can improve the precision, cost-effectiveness, and efficiency of CRISPR-based gene editing, optimizing gRNA design and supporting advanced genetic correction. Several AI-assisted models, including XAI, can be used to identify diseases and repurpose therapeutic targets and biomarkers efficiently. Furthermore, recent advancements, including Transformer-based models such as BERT and GPT-4, have been mainly applied for sequence analysis and functional genomics. Finally, the most recent GeneViT model, utilizing Vision Transformers, and other AI techniques like Generative Adversarial Networks, Federated Learning, Transfer Learning, Reinforcement Learning, Automated ML, and Attention Mechanisms have shown significant performance in disease classification using omics data. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Genomics and Biodiversity: Applications and Ethical Considerations for Climate‐Just Conservation.
- Author
-
Miner, Skye A. and Thurman, Timothy J.
- Subjects
- *
ENVIRONMENTAL health , *GENOMICS , *CONSERVATION of natural resources , *SOCIAL justice , *DATABASE management , *CLIMATE change , *AT-risk people , *GENETIC engineering , *ECOSYSTEMS , *BIOETHICS , *ANTHROPOGENIC effects on nature , *ENVIRONMENTAL justice , *STAKEHOLDER analysis , *SEQUENCE analysis - Abstract
Genomics holds significant potential for conservationists, offering tools to monitor species risks, enhance conservation strategies, envision biodiverse futures, and advance climate justice. However, integrating genomics into conservation requires careful consideration of its impacts on biodiversity, the diversity of scientific researchers, and governance strategies for data usage. These factors must be balanced with the varied interests of affected communities and environmental concerns. We argue that conservationists should engage with diverse communities, particularly those historically marginalized and most vulnerable to climate change. This inclusive approach can ensure that genomic technologies are applied ethically and effectively, aligning conservation efforts with broader social and environmental justice goals. Engaging diverse stakeholders will help guide responsible genomic integration, fostering equitable and sustainable conservation outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Genomics and Health Data Governance in Africa: Democratize the Use of Big Data and Popularize Public Engagement.
- Author
-
Munung, Nchangwi Syntia, Royal, Charmaine D., de Kock, Carmen, Awandare, Gordon, Nembaware, Victoria, Nguefack, Seraphin, Treadwell, Marsha, and Wonkam, Ambroise
- Subjects
- *
CONSUMER education , *DATA transmission systems , *CONTRACTS , *POWER (Social sciences) , *DATA science , *GENOMICS , *DATABASE management , *GOVERNMENT policy , *INTERPROFESSIONAL relations , *SOCIAL justice , *KNOWLEDGE management , *HUMAN research subjects , *DATA analytics , *PHILOSOPHY , *SOCIAL responsibility , *DECISION making , *MEDICAL research , *THEORY of knowledge , *INFORMED consent (Medical law) , *ACTION research , *SOCIOLOGY , *PATIENT participation , *COOPERATIVENESS , *GENETICS , *ACCESS to information - Abstract
Effectively addressing ethical issues in precision medicine research in Africa requires a holistic social contract that integrates biomedical knowledge with local cultural values and Indigenous knowledge systems. Drawing on African epistemologies such as ubuntu and ujamaa and on our collective experiences in genomics and big data research for sickle cell disease, hearing impairment, and fragile X syndrome and the project Public Understanding of Big Data in Genomics Medicine in Africa, we envision a transformative shift in health research data governance in Africa that could help create a sense of shared responsibility between all stakeholders in genomics and data‐driven health research in Africa. This shift includes proposing a social contract for genomics and data science in health research that is grounded in African communitarianism such as solidarity, shared decision‐making, and reciprocity. We make several recommendations for a social contract for genomics and data science in health, including the coproduction of genomics knowledge with study communities, power sharing between stakeholders, public education on the ethical and social implications of genetics and data science, benefit sharing, giving voice to data subjects through dynamic consent, and democratizing data access to allow wide access by all research stakeholders. Achieving this would require adopting participatory approaches to genomics and data governance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Can Open Science Advance Health Justice? Genomic Research Dissemination in the Evolving Data‐Sharing Landscape.
- Author
-
Kraft, Stephanie A. and Mittendorf, Kathleen F.
- Subjects
- *
GENETIC research , *HEALTH services accessibility , *DATA security , *GENOMICS , *SOCIAL justice , *DATABASE management , *PRIVACY , *BIOETHICS , *COMMUNICATION , *PUBLIC health , *ACCESS to information , *MEDICAL ethics - Abstract
Scientific data‐sharing and open science initiatives are increasingly important mechanisms for advancing the impact of genomic research. These mechanisms are being implemented as growing attention is paid to the need to improve the inclusion of research participants from marginalized and underrepresented groups. Together, these efforts aim to promote equitable advancements in genomic medicine. However, if not guided by community‐informed protections, these efforts may harm the very participants and communities they aim to benefit. This essay examines potential benefits and harms of open science and explores how to advance a more just vision of open science in genomics. Drawing on relational ethics frameworks, we argue that researchers should consider their obligations to participants as well as the broader communities that are impacted by their research. We propose eight strategies to provide a foundation of practical steps for researchers to reduce the possibility of harms stemming from open science and to work toward genomic justice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Leveraging natural language processing to aggregate field safety notices of medical devices across the EU.
- Author
-
Ren, Yijun and Caiani, Enrico Gianluca
- Subjects
WORLD Wide Web ,PRODUCT safety ,RESEARCH funding ,TASK performance ,GRAPHICAL user interfaces ,DATABASE management ,COMPUTER software ,RESEARCH evaluation ,NATURAL language processing ,DESCRIPTIVE statistics ,DIAGNOSTIC errors ,BUSINESS ,COMMUNICATION ,MANUFACTURING industries ,DEEP learning ,INFORMATION retrieval ,PUBLISHING ,MEMORY ,AUTOMATION ,REPORT writing ,CONFIDENCE intervals ,ACCURACY ,MEDICAL equipment safety measures ,EQUIPMENT & supplies - Abstract
The European Union (EU) Medical Device Regulation and In Vitro Medical Device Regulation have introduced more rigorous regulatory requirements for medical devices, including new rules for post-market surveillance. However, EU market vigilance is limited by the absence of harmonized reporting systems, languages and nomenclatures among Member States. Our aim was to develop a framework based on Natural Language Processing capable of automatically collecting publicly available Field Safety Notices (FSNs) reporting medical device problems by applying web scraping to EU authority websites, to attribute the most suitable device category based on the European Medical Device Nomenclature (EMDN), and to display processed FSNs in an aggregated way to allow multiple queries. 65,036 FSNs published up to 31/12/2023 were retrieved from 16 EU countries, of which 40,212 (61.83%) were successfully assigned the proper EMDN. The framework's performance was successfully tested, with accuracies ranging from 87.34% to 98.71% for EMDN level 1 and from 64.15% to 85.71% even for level 4. [ABSTRACT FROM AUTHOR]
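The category-attribution step described above can be caricatured with a bag-of-words similarity match between a notice and candidate nomenclature descriptions. This is only a sketch: the category codes, descriptions, and notice text below are invented for illustration (they are not real EMDN entries), and the paper's actual NLP pipeline is not specified here.

```python
import math
from collections import Counter

# Hypothetical mini-nomenclature: category code -> description (illustrative).
categories = {
    "C0101": "cardiac pacemaker implantable pulse generator",
    "J0102": "infusion pump syringe administration set",
    "W0201": "in vitro diagnostic reagent assay kit",
}

def bag(text):
    # Bag-of-words representation: lowercase token counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return num / den if den else 0.0

def classify(notice):
    # Attribute the category whose description is most similar to the notice.
    scores = {code: cosine(bag(notice), bag(desc)) for code, desc in categories.items()}
    return max(scores, key=scores.get)

label = classify(
    "Field safety notice: recall of infusion pump due to syringe driver fault"
)
```

A real system would need multilingual normalization and a far richer model to reach the accuracies reported above; the sketch only shows the shape of mapping free-text notices onto a hierarchical nomenclature.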
- Published
- 2024
- Full Text
- View/download PDF
28. Community-Engaged Data Science (CEDS): A Case Study of Working with Communities to Use Data to Inform Change.
- Author
-
Olvera, Ramona G., Plagens, Courtney, Ellison, Sylvia, Klingler, Kesla, Kuntz, Amy K., and Chase, Rachel P.
- Subjects
- *
DATA science , *COMMUNITY health services , *DATABASE management , *MEDICAL research , *LOGIC , *MATHEMATICAL models , *OPIOID epidemic , *THEORY , *PUBLIC health , *COALITIONS - Abstract
Data-informed decision making is a critical goal for many community-based public health research initiatives. However, community partners often encounter challenges when interacting with data. The Community-Engaged Data Science (CEDS) model offers a goal-oriented, iterative guide for communities to collaborate with research data scientists through data ambassadors. This study presents a case study of CEDS applied to research on the opioid epidemic in 18 counties in Ohio as part of the HEALing Communities Study (HCS). Data ambassadors provided a pivotal role in empowering community coalitions to translate data into action using key steps of CEDS which included: data landscapes identifying available data in the community; data action plans from logic models based on community data needs and gaps of data; data collection/sharing agreements; and data systems including portals and dashboards. Throughout the CEDS process, data ambassadors emphasized sustainable data workflows, supporting continued data engagement beyond the HCS. The implementation of CEDS in Ohio underscored the importance of relationship building, timing of implementation, understanding communities' data preferences, and flexibility when working with communities. Researchers should consider implementing CEDS and integrating a data ambassador in community-based research to enhance community data engagement and drive data-informed interventions to improve public health outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. What is research data "misuse"? And how can it be prevented or mitigated?
- Author
-
Pasquetto, Irene V., Cullen, Zoë, Thomer, Andrea, and Wofford, Morgan
- Subjects
- *
MEDICAL information storage & retrieval systems , *DATABASE management , *PATIENT safety , *PRIVACY , *DATA security failures , *HUMAN research subjects , *IDENTITY theft , *MEDICAL research , *MEDICAL records , *COMMUNICATION , *INFORMED consent (Medical law) , *INFORMATION literacy , *MEDICAL ethics , *ALGORITHMS - Abstract
Despite increasing expectations that researchers and funding agencies release their data for reuse, concerns about data misuse hinder the open sharing of data. The COVID‐19 crisis brought urgency to these concerns, yet we are currently missing a theoretical framework to understand, prevent, and respond to research data misuse. In the article, we emphasize the challenge of defining misuse broadly and identify various forms that misuse can take, including methodological mistakes, unauthorized reuse, and intentional misrepresentation. We pay particular attention to underscoring the complexity of defining misuse, considering different epistemological perspectives and the evolving nature of scientific methodologies. We propose a theoretical framework grounded in the critical analysis of interdisciplinary literature on the topic of misusing research data, identifying similarities and differences in how data misuse is defined across a variety of fields, and propose a working definition of what it means "to misuse" research data. Finally, we speculate about possible curatorial interventions that data intermediaries can adopt to prevent or respond to instances of misuse. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A review of multimodal deep learning methods for genomic-enabled prediction in plant breeding.
- Author
-
Montesinos-López, Osval A, Chavira-Flores, Moises, Kismiantini, Crespo-Herrera, Leo, Piere, Carolina Saint, Li, HuiHui, Fritsche-Neto, Roberto, Al-Nowibet, Khalid, Montesinos-López, Abelardo, and Crossa, José
- Subjects
- *
STATISTICAL models , *GENOMICS , *DATABASE management , *FOOD security , *POPULATION health , *DEEP learning , *CONCEPTUAL structures , *ARTIFICIAL neural networks , *COMPUTER networks , *PREDICTIVE validity , *AGRICULTURE - Abstract
Deep learning methods have been applied when working to enhance the prediction accuracy of traditional statistical methods in the field of plant breeding. Although deep learning seems to be a promising approach for genomic prediction, it has proven to have some limitations, since its conventional methods fail to leverage all available information. Multimodal deep learning methods aim to improve the predictive power of their unimodal counterparts by introducing several modalities (sources) of input information. In this review, we introduce some theoretical basic concepts of multimodal deep learning and provide a list of the most widely used neural network architectures in deep learning, as well as the available strategies to fuse data from different modalities. We mention some of the available computational resources for the practical implementation of multimodal deep learning problems. We finally performed a review of applications of multimodal deep learning to genomic selection in plant breeding and other related fields. We present a meta-picture of the practical performance of multimodal deep learning methods to highlight how these tools can help address complex problems in the field of plant breeding. We discussed some relevant considerations that researchers should keep in mind when applying multimodal deep learning methods. Multimodal deep learning holds significant potential for various fields, including genomic selection. While multimodal deep learning displays enhanced prediction capabilities over unimodal deep learning and other machine learning methods, it demands more computational resources. Multimodal deep learning effectively captures intermodal interactions, especially when integrating data from different sources. To apply multimodal deep learning in genomic selection, suitable architectures and fusion strategies must be chosen. 
It is relevant to keep in mind that multimodal deep learning, like unimodal deep learning, is a powerful tool but should be carefully applied. Given its predictive edge over traditional methods, multimodal deep learning is valuable in addressing challenges in plant breeding and food security amid a growing global population. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Coactivation pattern analysis reveals altered whole-brain functional transient dynamics in autism spectrum disorder.
- Author
-
Li, Lei, Zheng, Qingyu, Xue, Yang, Bai, Miaoshui, and Mu, Yueming
- Subjects
- *
DATABASE management , *BRAIN , *AUTISM , *EXECUTIVE function , *MAGNETIC resonance imaging , *SEVERITY of illness index , *DESCRIPTIVE statistics , *ATTENTION , *SUPPORT vector machines , *LARGE-scale brain networks , *ASPERGER'S syndrome - Abstract
Recent studies on autism spectrum disorder (ASD) have identified recurring states dominated by similar coactivation patterns (CAPs) and revealed associations between dysfunction in seed-based large-scale brain networks and clinical symptoms. However, the presence of abnormalities in moment-to-moment whole-brain dynamics in ASD remains uncertain. In this study, we employed seed-free CAP analysis to identify transient brain activity configurations and investigate dynamic abnormalities in ASD. We utilized a substantial multisite resting-state fMRI dataset consisting of 354 individuals with ASD and 446 healthy controls (HCs, from HC groups 1 and 2). CAPs were generated from a subgroup of all HC subjects (HC group 1) through temporal K-means clustering, identifying four CAPs. These four CAPs exhibited either the activation or inhibition of the default mode network (DMN) and were grouped into two pairs with opposing spatial patterns. CAPs for HC group 2 and ASD were identified by their spatial similarity to those for HC group 1. Compared with individuals in HC group 2, those with ASD spent more time in CAPs involving the ventral attention network but less time in CAPs related to executive control and the dorsal attention network. Support vector machine analysis demonstrated that the aberrant dynamic characteristics of CAPs achieved an accuracy of 74.87% in multisite classification. In addition, we used whole-brain dynamics to predict symptom severity in ASD. Our findings revealed whole-brain dynamic functional abnormalities in ASD from a single transient perspective, emphasizing the importance of the DMN in abnormal dynamic functional activity in ASD and suggesting that temporally dynamic techniques offer novel insights into time-varying neural processes. [ABSTRACT FROM AUTHOR]
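The temporal K-means clustering used above to derive CAPs can be sketched on synthetic data: each time frame is a vector of regional activations, the cluster centroids play the role of CAPs, and cluster occupancy gives the "time spent" in each state. All data, dimensions, and parameters below are illustrative assumptions, not the study's pipeline.

```python
import random

random.seed(1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(frames, cents, iters=20):
    # Temporal K-means: assign each time frame (a vector of regional
    # activations) to its nearest centroid; the centroids are the CAPs.
    for _ in range(iters):
        groups = [[] for _ in cents]
        for f in frames:
            j = min(range(len(cents)), key=lambda i: sqdist(f, cents[i]))
            groups[j].append(f)
        cents = [
            [sum(col) / len(g) for col in zip(*g)] if g else cents[i]
            for i, g in enumerate(groups)
        ]
    return cents, groups

# Synthetic frames over 5 "regions": two opposing states (e.g. a network
# activated vs. suppressed) plus noise; entirely made-up data.
frames = [
    [s + random.gauss(0, 0.3) for _ in range(5)]
    for s in (random.choice([1.0, -1.0]) for _ in range(200))
]

# Seed the clustering with the most anti-correlated pair of frames.
init = [frames[0], min(frames, key=lambda f: dot(f, frames[0]))]
cents, groups = kmeans(frames, init)

# Fraction of time spent in each state: the "time in CAP" measure above.
occupancy = [len(g) / len(frames) for g in groups]
```

The two recovered centroids have opposing signs, mirroring the paired CAPs with opposing spatial patterns described in the abstract; group-level comparisons of `occupancy` would correspond to the time-in-CAP analyses.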
- Published
- 2024
- Full Text
- View/download PDF
32. Preparing for drug diversion software: Enhancing technologies and diversion prevention program growth.
- Author
-
Gamble, Bethanie
- Subjects
- *
SUBSTANCE abuse prevention , *DOCUMENTATION , *PATIENT compliance , *AUDITING , *COMPUTER software , *HUMAN services programs , *DATABASE management , *ARTIFICIAL intelligence , *LEADERSHIP , *REFLECTION (Philosophy) , *WORKFLOW , *TECHNOLOGY , *ELECTRONIC health records , *DRUGS , *AUTOMATION , *ALGORITHMS - Abstract
The article describes several steps that a healthcare system needs to complete to prepare for drug diversion software that helps monitor controlled substances. The steps fall into four phases: evaluating system capabilities, data streams, and limitations while ensuring capture of investigation workflows and documentation; software maintenance and diversion prevention program growth; leveraging the software to create new workflows; and initiating a refinement stage.
- Published
- 2024
- Full Text
- View/download PDF
33. Does personal therapy predict better trainee effectiveness?
- Author
-
Li, Xu, Wang, Yuanming, and Li, Feihan
- Subjects
- *
PSYCHOTHERAPY , *STATISTICAL models , *WORK , *DATABASE management , *SATISFACTION , *TREATMENT effectiveness , *TREATMENT duration , *HOSPITAL medical staff , *CLINICAL competence , *COUNSELING - Abstract
Objectives: The aim of this study was to examine whether the history of personal therapy among therapist trainees predicts their clinical effectiveness in terms of client symptom reduction. Methods: Two anonymous archived datasets from a longitudinal research project on mental health counselling training in China were used. Both datasets included trainee‐reported history of personal therapy and their client‐reported symptom levels prior to each counselling session. Results: Using multilevel modelling, we found that, in Dataset 1, neither of the personal therapy variables (whether trainees had undergone personal therapy or the number of personal therapy hours) significantly predicted trainees' client symptom outcome. Dataset 2, which included whether trainees were satisfied with their personal therapy, showed that more hours of unsatisfactory personal therapy for a trainee were associated with decreased average client symptom improvement, whereas more hours of highly satisfactory personal therapy for a trainee were associated with greater client symptom improvement. Conclusions: Findings in this study suggested that the association between trainees' personal therapy length and their clinical effectiveness may be moderated by the quality of their personal therapy: Whereas satisfactory personal therapy might be beneficial in the trainee's clinical work, longer unsatisfactory personal therapy was associated with decreased trainee effectiveness. Research limitations and implications for training are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. The Effect of Missing Item Data on the Relative Predictive Accuracy of Correctional Risk Assessment Tools.
- Author
-
Perley-Robertson, Bronwen, Babchishin, Kelly M., and Helmus, L. Maaike
- Subjects
- *
RECIDIVISM -- Risk factors , *RISK assessment , *DATABASE management , *SEX crimes , *RESEARCH methodology evaluation , *DOMESTIC violence , *PREDICTIVE validity , *EVALUATION - Abstract
Missing data are pervasive in risk assessment but their impact on predictive accuracy has largely been unexplored. Common techniques for handling missing risk data include summing available items or proration; however, multiple imputation is a more defensible approach that has not been methodically tested against these simpler techniques. We compared the validity of these three missing data techniques across six conditions using STABLE-2007 (N = 4,286) and SARA-V2 (N = 455) assessments from men on community supervision in Canada. Condition 1 was the observed data (low missingness), and Conditions 2 to 6 were generated missing data conditions, whereby 1% to 50% of items per case were randomly deleted in 10% increments. Relative predictive accuracy was unaffected by missing data, and simpler techniques performed just as well as multiple imputation, but summed totals underestimated absolute risk. The current study therefore provides empirical justification for using proration when data are missing within a sample. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
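The summation and proration techniques compared in this abstract can be sketched in a few lines; the five-item scale, scores, and function names below are hypothetical and are not drawn from STABLE-2007 or SARA-V2.

```python
import statistics

def summed_total(items):
    """Sum the available items, ignoring missing ones (None).
    Simple, but the total shrinks toward zero as more items are
    missing, which understates absolute risk."""
    return sum(x for x in items if x is not None)

def prorated_total(items):
    """Rescale the mean of the observed items to the full scale
    length, keeping the total on the instrument's original metric."""
    observed = [x for x in items if x is not None]
    if not observed:
        return None
    return statistics.mean(observed) * len(items)

# A hypothetical 5-item tool with two items missing:
scores = [2, 1, None, 3, None]
summed = summed_total(scores)      # 6
prorated = prorated_total(scores)  # mean(2, 1, 3) * 5 = 10.0
```

Multiple imputation, the third technique the study tested, would instead draw several model-based completions of the missing items and pool the results; the study's conclusion is that proration's simplicity costs little in relative predictive accuracy while avoiding the summed total's underestimation of absolute risk.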
35. A worked example of contextualising and using reflexive thematic analysis in nursing research.
- Author
-
Rowland, Emma and Conolly, Anna
- Subjects
- *
NURSES , *EVIDENCE-based nursing , *DATA analysis , *QUALITATIVE research , *DATABASE management , *ETHNOLOGY research , *THEMATIC analysis , *REFLEXIVITY , *EXPERIMENTAL design , *PHILOSOPHY of nursing , *NURSING research , *RESEARCH , *CONTENT mining , *MEDICAL coding , *MEDICAL research personnel , *ONTOLOGIES (Information retrieval) , *PHENOMENOLOGY , *GROUNDED theory , *CASE studies - Abstract
Why you should read this article: • To support the use of reflexive thematic analysis (RTA) in analysing qualitative systematic reviews and empirical data. • To review your understanding of RTA to analyse nursing research within the context of wider methodological and methods considerations. • To explore practical examples of RTA in nursing research. Background: A researcher must consider their research question within their world view before selecting a technique appropriate for analysing their data. This will affect their choices of methodology and methods for collecting and analysing data. Reflexive thematic analysis (RTA) has become a go-to technique for qualitative nurse researchers. However, the justifications for using it and its application in the context of a wider approach are under-discussed. Aim: To rationalise the use of RTA within a wider philosophical-methodological-methods-analysis approach and provide nurse researchers with practical guidance about how to apply it to qualitative data. Discussion: This article conceptually grounds the seminal work of Braun and Clarke (2006) and provides a process for rigorously and systematically analysing qualitative data. Researchers undertaking qualitative research must use a rigorous philosophical-methodological-methods-analysis approach. Before selecting a technique appropriate for analysing their data, they must consider their research question within their own world view. This has implications for their choice of methodology and consequently the data collection methods and analysis techniques they use. Researchers should be mindful of RTA's conceptual roots when applying it. Conclusion: Transparent and rigorous data analysis leads to credible findings, supports evidence-based practice and contributes to the growing body of nursing research. Within the context of the wider philosophical-methodological-methods-analysis approach, RTA produces high-quality, credible findings when applied well.
Implications for practice: This article can guide nursing students and novice researchers in choosing and applying RTA to their research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Are EU member states ready for the European Health Data Space? Lessons learnt on the secondary use of health data from the TEHDAS Joint Action.
- Author
-
Kessissoglou, Irini A, Cosgrove, Shona M, Abboud, Linda A, Bogaert, Petronille, Peolsson, Michael, and Calleja, Neville
- Subjects
- *
HEALTH services accessibility , *DATABASE management , *INTERPROFESSIONAL relations , *RESEARCH funding , *INTERVIEWING , *HEALTH policy , *INSTITUTIONAL cooperation , *RESEARCH methodology , *HEALTH information systems - Abstract
The proposal for a regulation on the European Health Data Space (EHDS) contains provisions that would significantly change health data management systems in European member states (MS). This article presents results of a country mapping exercise conducted during the Joint Action 'Towards the European Health Data Space' (TEHDAS) in 2022. It presents the state-of-play of health data management systems in 12 MS and their preparedness to comply with the EHDS provisions. The country mapping exercise consisted of virtual or face-to-face semi-structured interviews with a selection of key stakeholders of the health information systems. A semi-quantitative analysis of the reports was conducted and is presented here, focusing on key aspects related to the user journey through the EHDS. This article reveals a heterogeneous picture in countries' readiness to comply with the EHDS provisions. There is a need to improve digitalization and quality of health data at source across most countries. Less than half of the countries visited have or are developing a national datasets catalogue. Although the process to access health data varies, researchers can analyse health data in secure processing environments in all countries visited. Most of the countries use a unique personal identifier for health to facilitate data linkage. The study concluded that the current landscape is heterogeneous, and no member state is fully ready yet to comply with the future regulation. However, there is general political will and ongoing efforts to align health data management systems with the provisions in the EHDS legislative proposal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Bootstrap approach to disparity testing with source uncertainty in the data.
- Author
-
McDonald, Gary C. and Willard, Joseph F.
- Subjects
- *
STATISTICAL models , *DATABASE management , *RACISM , *HYPOTHESIS , *INFERENTIAL statistics , *HEALTH equity - Abstract
This article addresses the problem of one-sided hypothesis testing on the means of two populations when there is uncertainty as to the population from which a datum is drawn. Such situations arise, for example, in the use of Bayesian imputation methods to assess racial and ethnic disparities with certain survey, health, and financial data. Earlier work on this problem, from a frequentist point of view, is limited in the sample sizes for which computations could reasonably be executed. By employing a bootstrap approach to generate summary statistics for the population of p-values arising from all possible configurations of the data, the credibility of the disparity hypothesis is ascertained. Previous limitations on sample sizes are eliminated. R-codes to carry out the relevant computations, illustrated in this article, are provided in the Appendices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
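The general shape of the approach — propagating source uncertainty into a population of p-values — can be sketched as follows. The article supplies R code in its appendices; this Python analogue is illustrative only, with made-up group-membership probabilities and a plain permutation test standing in for the authors' procedure.

```python
import random
import statistics

def one_sided_p(a, b, n_perm=2000, rng=None):
    """Permutation p-value for the one-sided hypothesis mean(a) > mean(b)."""
    rng = rng or random.Random(0)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def bootstrap_p_values(data, n_boot=200, seed=1):
    """data: (value, prob_in_group_a) pairs whose group membership is
    uncertain (e.g. probabilities from Bayesian imputation). Each bootstrap
    draw fixes one plausible configuration of the data and records its
    p-value, instead of enumerating every configuration exhaustively."""
    rng = random.Random(seed)
    p_values = []
    for _ in range(n_boot):
        a, b = [], []
        for value, p_a in data:
            (a if rng.random() < p_a else b).append(value)
        if a and b:
            p_values.append(one_sided_p(a, b, n_perm=500, rng=rng))
    return p_values
```

Summary statistics of the returned p-values (median, quantiles, the share below 0.05) then indicate how credible the disparity hypothesis is across plausible configurations, which is what removes the sample-size limitation of exhaustive enumeration.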
38. Application of edge computing and IoT technology in supply chain finance.
- Author
-
Yin, Yuanxing, Wang, Xinyu, Wang, Huan, and Lu, Baoli
- Subjects
DATABASE management ,EDGE computing ,SUPPLY chain management ,ELECTRONIC data processing ,CREDIT risk ,BLOCKCHAINS - Abstract
This study proposes an IoT data management framework based on blockchain and edge computing and conducts a detailed simulation experiment design and performance evaluation for the field of supply chain finance. Previous methods often struggled with high latency, limited scalability, and inadequate risk management. To address these issues, the experimental platform includes traditional centralized systems, distributed systems, blockchain-based systems, and edge computing + blockchain systems. By monitoring indicators such as data processing delay, data transmission delay, data retrieval time, system throughput, and data integrity on each platform, and introducing evaluation formulas for credit risk, operational risk, and market risk, the performance of the different systems in processing supply chain financial data is comprehensively analyzed. The experimental results show that the edge computing + blockchain system performs well in data processing efficiency, security, real-time performance, and system throughput, especially under high load conditions; its market risk value is reduced to 0.015. This framework improves the efficiency and security of data transmission and effectively reduces credit risk and operational risk, providing an efficient and reliable solution for data management in supply chain finance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. THE IMPACT OF KNOWLEDGE MANAGEMENT PROCESSES IN MAKING INTELLIGENCE DECISIONS AT AL-MANASEER GROUP IN JORDAN.
- Author
-
Khrisat, Diana Abdulrazaq
- Subjects
ARTIFICIAL intelligence ,KNOWLEDGE management ,DATABASE management ,DECISION making ,MIDDLE managers - Abstract
Copyright of International Journal of Professional Business Review (JPBReview) is the property of Open Access Publications LLC and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
40. ANÁLISE COMPARATIVA DE PERFORMANCE ENTRE FROTAS DE CAMINHÕES EM UMA MINA DE OURO A CÉU ABERTO.
- Author
-
Silva, Danilo José, de Vilhena Costa, Leandro, Libera da Silva, Fabiano Della, and de Souza Arcanjo, Silvia Helena
- Subjects
VOLVO trucks ,DATABASE management ,DATABASES ,TRUCKS ,DATA analysis - Abstract
Copyright of Revista Foco (Interdisciplinary Studies Journal) is the property of Revista Foco and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
41. Selection of database for efficient energy management in a power grid.
- Author
-
POWROŹNIK, Piotr, OPŁOTNY, Filip, KOMAROWSKI, Mateusz, KOROPIECKI, Igor, TURCHAN, Krzysztof, and PIOTROWSKI, Krzysztof
- Subjects
GRAPHICAL user interfaces ,DATABASES ,DATABASE management ,ELECTRIC power distribution grids ,DATA security - Abstract
Copyright of Przegląd Elektrotechniczny is the property of Przeglad Elektrotechniczny and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
42. What is Agile Project Management? Developing a New Definition Following a Systematic Literature Review.
- Author
-
Hao Dong, Dacre, Nicholas, Baxter, David, and Ceylan, Serkan
- Subjects
AGILE software development ,COMPUTER software development ,BUSINESS databases ,DATABASE management ,COLLEGE majors - Abstract
The concept of “Agile Project Management” has gained significant traction in various sectors, beyond its origins in software development. However, a coherent, universally accepted definition remains elusive, prompting this study to embark on a systematic exploration of agile practices and their implications in broader contexts. Employing a systematic literature review across three major academic databases on business and management studies in the past two decades, this research scrutinizes a final selection of 80 high-quality academic papers. The principal contribution of our research is the articulation of a nuanced definition of Agile Project Management, which demarcates it from traditional project management frameworks and those agile practices specific to software development. This study not only sheds light on the prevailing ambiguities in the understanding of Agile Project Management but also sets the stage for future research into the emerging organizational dynamics engendered by the adoption of agile practices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Cognitive impairment and exploitation: connecting fragments of a bigger picture through data.
- Author
-
Abubakar, Aisha M, Seymour, Rowland G, Gardner, Alison, Lambert, Imogen, Fyson, Rachel, and Wright, Nicola
- Subjects
COGNITION disorder risk factors ,DATABASE management ,PATIENT safety ,RESEARCH funding ,MEDICAL care ,HEALTH ,DESCRIPTIVE statistics ,COGNITION disorders ,ANALYSIS of variance ,DISCRIMINATION (Sociology) ,SOCIAL support ,NEEDS assessment ,SLAVERY - Abstract
Background Exploitation poses a significant public health concern. This paper highlights 'jigsaw pieces' of statistical evidence, indicating cognitive impairment as a pre- or co-existing factor in exploitation. Methods We reviewed English Safeguarding Adults Collection (SAC) data and Safeguarding Adults Reviews (SARs) from 2017 to 2022. Data relevant to exploitation and cognitive impairment were analysed using summary statistics and 'analysis of variance'. Results Despite estimates suggesting cognitive impairments may be prevalent among people experiencing exploitation in England, national datasets miss opportunities to illuminate this issue. Although SAC data include statistics on support needs and various forms of abuse and exploitation, they lack intersectional data. Significant regional variations in recorded safeguarding investigations and potential conflation between abuse and exploitation also suggest data inconsistencies. Increased safeguarding investigations for people who were not previously in contact with services indicate that adults may be 'slipping through the net'. SARs, although representing serious cases, provide stronger evidence linking cognitive impairment with risks of exploitation. Conclusions This study identifies opportunities to collect detailed information on cognitive impairment and exploitation. The extremely limited quantitative evidence-base could be enhanced using existing data channels to build a more robust picture, as well as improve prevention, identification and response efforts for 'at-risk' adults. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. BIM implementation for Nigeria's polytechnic built environment undergraduates: challenges and possible measures from stakeholders.
- Author
-
Ebekozien, Andrew, Aigbavboa, Clinton, Samsurijan, Mohamad Shaharudin, Azazi, Noor Alyani Nor, and Duru, Okechukwu Dominic Saviour
- Subjects
BUILT environment ,UNIVERSITIES & colleges ,BUILDING information modeling ,DATABASE management ,LABOR productivity - Abstract
Purpose: Studies show that building information modelling (BIM) technology can improve construction productivity regarding the design, construction and maintenance of a project life cycle in the 21st century. Revit has been identified as a frequently used tool for delivering BIM in the built environment. Studies of BIM technology via Revit are scarce in the higher education institutions that train the middle-level workforce. Thus, this study aims to investigate the relevance of BIM technology and offer measures to promote digitalisation in Nigeria's built environment polytechnic undergraduates via Revit. Design/methodology/approach: Given the unexplored nature of training the middle-level workforce in Nigeria, 37 semi-structured virtual interviews were conducted across Nigeria, and saturation was achieved. The participants were knowledgeable about construction-related BIM. The researchers used a thematic analysis for the collected data and honed them with secondary sources. Findings: Improved visualisation of design, effective and efficient work productivity, automatic design and quantification, improved database management and collaboration and data storage in the centrally coordinated model, among others, emerged as BIM's benefits. BIM technology via Revit is challenging, especially in Nigeria's polytechnic education curriculum. The 24 perceived issues were grouped into government/regulatory agencies-related, polytechnic management-related and polytechnic undergraduate students-related hindrances in Nigeria's built environment. Research limitations/implications: This study is limited to BIM implications for Nigeria's built environment polytechnic undergraduates. Originality/value: This study contributes to the literature paucity in attempting to uncover perceived issues hindering the implementation of BIM technology via Revit in training Nigeria's built environment polytechnic undergraduates via a qualitative approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Assessment of publication time in Campbell Systematic Reviews: A cross‐sectional survey.
- Author
-
Pan, Bei, Ge, Long, Wang, Xiaoman, Ma, Ning, Wei, Zhipeng, Honghao, Lai, Hou, Liangying, and Yang, Kehu
- Subjects
SERIAL publications ,CROSS-sectional method ,WORLD Wide Web ,RESEARCH funding ,DATABASE management ,QUESTIONNAIRES ,AUTHORSHIP ,DESCRIPTIVE statistics ,RESEARCH protocols ,SYSTEMATIC reviews ,RECORDING & registration ,KAPLAN-Meier estimator ,PUBLISHING ,ACQUISITION of data ,DATA analysis software - Abstract
Delayed publication of systematic reviews increases the risk of presenting outdated data. To date, no studies have examined the time and review process from title registration and protocol publication to the final publication of Campbell systematic reviews. This study aims to examine the publication time from protocol to full review publication and the time gap between database searches and full review publication for Campbell systematic reviews. All Campbell systematic reviews in their first published version were included. We searched the Campbell systematic review journals on the Wiley Online Library website to identify all completed studies to date. We manually searched the table of contents of all Campbell systematic reviews to obtain the date of title registration from the journal's website. We used SPSS software to perform the statistical analysis. We used descriptive statistics to report publication times, which were calculated stratified by characteristics including year of review publication, type of review, number of authors, difference in authors between protocol and review, and Campbell Review Group. Non‐normally distributed data were reported as medians, interquartile ranges, and ranges, and normally distributed data were reported as means ± standard deviations. We also visualized the overall publication time and the distribution of the data. Approximately 18% of reviews were published within one to 2 years, meeting the target set by Campbell systematic review policies and guidelines, which was 2 years. However, more than 40% of the reviews were published more than 2 years after protocol publication. Furthermore, over 50% of included reviews were published with a time gap of more than 2 years after database searches. No significant difference existed between Campbell coordinating groups in median publication times or in the time gap from database searches to full review publication.
However, the methods group published only one full review, with an almost 3‐year gap between database searches and review publication. There was also a major difference between specific types of review: systematic reviews had the longest median publication time of 2.4 years, whereas evidence and gap maps had the shortest median publication time of 13 months. Half of Campbell reviews were published more than 2 years after protocol publication. Furthermore, the median time from protocol publication to review publication varied widely depending on the specific type of review. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. QRLIT: Quantum Reinforcement Learning for Database Index Tuning.
- Author
-
Barbosa, Diogo, Gruenwald, Le, D'Orazio, Laurent, and Bernardino, Jorge
- Subjects
DATABASE management ,DATABASES ,QUANTUM computing ,DATABASE searching ,RESEARCH implementation - Abstract
Selecting indexes capable of reducing the cost of query processing in database systems is a challenging task, especially in large-scale applications. Quantum computing has been investigated with promising results in areas related to database management, such as query optimization, transaction scheduling, and index tuning. Promising results have also been seen when reinforcement learning is applied for database tuning in classical computing. However, there is no existing research with implementation details and experiment results for index tuning that takes advantage of both quantum computing and reinforcement learning. This paper proposes a new algorithm called QRLIT that uses the power of quantum computing and reinforcement learning for database index tuning. Experiments using the TPC-H database benchmark show that QRLIT exhibits superior performance and a faster convergence compared to its classical counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
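The abstract does not detail QRLIT's algorithm, but the classical reinforcement-learning half of the idea can be sketched as tabular Q-learning over index configurations, with the reward being the drop in estimated workload cost. The index names, cost model, and hyperparameters below are invented for illustration and are not taken from the paper.

```python
import random

# Toy workload model: each candidate index speeds up some queries but
# adds maintenance overhead. All names and numbers are made up.
CANDIDATES = ["idx_orders_date", "idx_lineitem_qty", "idx_nation_name"]
QUERY_BENEFIT = {"idx_orders_date": 10.0, "idx_lineitem_qty": 6.0, "idx_nation_name": 1.0}
MAINTENANCE_COST = 2.0

def workload_cost(index_set):
    """Estimated cost of running the workload under a given index set."""
    benefit = sum(QUERY_BENEFIT[i] for i in index_set)
    return 50.0 - benefit + MAINTENANCE_COST * len(index_set)

def q_learn(episodes=2000, alpha=0.2, gamma=0.9, eps=0.3, seed=7):
    """Tabular Q-learning: a state is the current index set, an action
    adds one more index, and the reward is the resulting cost reduction."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        state = frozenset()
        while len(state) < len(CANDIDATES):
            actions = [a for a in CANDIDATES if a not in state]
            if rng.random() < eps:
                action = rng.choice(actions)          # explore
            else:                                     # exploit
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            nxt = state | {action}
            reward = workload_cost(state) - workload_cost(nxt)
            future = [q.get((nxt, a), 0.0) for a in CANDIDATES if a not in nxt]
            target = reward + gamma * max(future, default=0.0)
            key = (state, action)
            q[key] = q.get(key, 0.0) + alpha * (target - q.get(key, 0.0))
            state = nxt
    return q

def recommend(q):
    """Greedily add indexes while the learned action value stays positive."""
    state = frozenset()
    while len(state) < len(CANDIDATES):
        actions = [a for a in CANDIDATES if a not in state]
        best = max(actions, key=lambda a: q.get((state, a), 0.0))
        if q.get((state, best), 0.0) <= 0.0:
            break
        state = state | {best}
    return set(state)
```

Under this toy cost model, `recommend(q_learn())` keeps the two high-benefit indexes and rejects the one whose maintenance cost exceeds its benefit. QRLIT's contribution is accelerating this kind of search with quantum computing, which a classical sketch cannot capture.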
47. ClusteredLog: Optimizing Log Structures for Efficient Data Recovery and Integrity Management in Database Systems.
- Author
-
Ahmad, Mariha Siddika and Panda, Brajendra Nath
- Subjects
DATABASE management ,DATABASES ,DATA recovery ,SYSTEM failures ,DATA corruption - Abstract
In modern database systems, efficient log management is crucial for ensuring data integrity and facilitating swift recovery from potential data corruption or system failures. Traditional log structures, which store operations sequentially as they occur, often lead to significant delays in accessing and recovering specific data objects due to their scattered nature across the log. ClusteredLog addresses the limitations of traditional logging methods by implementing a novel logical organization of log entries. Instead of simply storing operations sequentially, it groups related operations for each data item into clusters. As a result, ClusteredLog enables faster identification and recovery of damaged data items and thus reduces the need for extensive log scanning, improving overall efficiency in database recovery processes. We introduce data structures and algorithms that facilitate the creation of these clustered logs, which also track dependencies and update operations on data items. Simulation studies demonstrate that our clustered log method significantly accelerates damage assessment and recovery times compared to traditional sequential logs, particularly as the number of transactions and data items increases. This optimization is pivotal for maintaining data integrity and operational efficiency in databases, especially in scenarios involving potential malicious modifications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
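The core idea — grouping log records by data item so that recovering one item does not require scanning the whole log — can be illustrated with a small sketch. The class and record layout here are invented for illustration; the paper's actual data structures also track inter-transaction dependencies, which this toy omits.

```python
from collections import defaultdict

class ClusteredLogSketch:
    """Toy illustration of clustered logging: log records are indexed by
    data item, so recovery of one item replays only that item's cluster
    instead of scanning the entire sequential log."""

    def __init__(self):
        self.sequential = []               # traditional append-only log
        self.clusters = defaultdict(list)  # item -> its operations, in order

    def append(self, txn_id, item, op, value):
        """Record an operation in both the sequential log and its cluster."""
        record = (txn_id, item, op, value)
        self.sequential.append(record)
        self.clusters[item].append(record)

    def recover_item(self, item, bad_txns):
        """Replay only this item's cluster, skipping damaged transactions,
        and return the last good written value (None if there is none)."""
        value = None
        for txn_id, _, op, new_value in self.clusters[item]:
            if txn_id in bad_txns:
                continue
            if op == "write":
                value = new_value
        return value
```

Because damage assessment touches only the clusters of affected items, the work grows with the number of operations on those items rather than with total log length, which matches the speedups the simulation studies report as transaction counts increase.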
48. Minimum Dataset of Lung Cancer Patients: An Initial Step Towards Developing a Web-Based Personal Health Record.
- Author
-
Mahmoodi, Narges, Garavand, Ali, Samadbeik, Mahnaz, Tahvildarzadeh, Monireh, and Kiani, Ali Asghar
- Subjects
MEDICAL information storage & retrieval systems ,WORLD Wide Web ,CROSS-sectional method ,DATABASE management ,RESEARCH funding ,QUESTIONNAIRES ,CONTENT analysis ,CANCER patients ,DESCRIPTIVE statistics ,MEDICAL records ,LUNG tumors ,RESEARCH methodology ,APPLICATION software ,SOFTWARE architecture ,DELPHI method - Abstract
Background: Personal Health Records (PHR), which utilize advanced health information technology tools, play a crucial role in patient self-management and improving the control of chronic diseases such as lung cancer. To optimize the design of these systems, it is essential to determine the necessary data elements. Objectives: This study aims to identify the minimum dataset required for designing a web-based PHR for lung cancer patients. Methods: This descriptive, cross-sectional research was conducted in 2023. Initially, a lung cancer dataset was extracted through text analysis. In the next phase, a proposed minimum lung cancer dataset was formulated into a questionnaire containing 18 data groups, including 126 data elements. The dataset underwent expert validation in two phases using the Delphi technique. Data analysis was performed using SPSS version 26, with descriptive statistics employed. Results: The minimum web-based PHR dataset for lung cancer patients, consisting of 18 data groups (112 data elements), includes demographics, insurance information, emergency contact details, patient symptoms, tumor-related data, physician details, treatment information, patient-reported measurements, personal medical history, history of procedures and surgeries, visits, allergies, family history, medication information, test results, imaging data, dietary information, and educational materials. Conclusions: Based on the study findings, it is recommended that lung cancer data management encompass not only routine information but also additional dimensions such as allergies, tumor-related information, and dietary details. Collecting comprehensive and complete data can significantly enhance the treatment process and post-treatment follow-up. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Stewarding Contextual Integrity in Data Services for Indigenous Scholarship.
- Author
-
Palmer, Carole L., Karcher, Sebastian, Belarde‐Lewis, Miranda, Littletree, Sandy, and Guerrero, Nestor
- Subjects
- *
DATA integration , *METADATA , *DIGITAL preservation , *RESEARCH methodology , *DATABASE management - Abstract
The CARE Principles for Indigenous Data Governance provide essential guideposts for the stewardship of Indigenous data. To put CARE into practice in libraries and repositories, resources are needed to support implementation and integration into current research data services (RDS). Based on analysis of case studies with scholars of Indigenous language and culture, this paper articulates specific Indigenous research and data practices to guide metadata work and other areas of responsibility in RDS. The cases surface the richness of relationships and the significance of accountability in the research process—demonstrating the "relational accountability" inherent in Indigenous research methods. Robust representation of relationality is essential to retaining integrity of context in metadata for Indigenous research data. We consider the practical implications of documenting relational context with current descriptive metadata approaches and challenges in achieving CARE adherent metadata, which we argue is the backbone for broader application of CARE for Indigenous RDS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Privacy Audit of Public Access Computers and Networks at a Public College Library.
- Author
-
Angell, Katelyn
- Subjects
- *
AUDITING , *PRIVACY , *ONLINE education , *ACADEMIC libraries , *DATABASE management , *LEARNING strategies , *ACCESS to information , *COMPUTER systems , *UNIVERSITIES & colleges , *MEDICAL ethics , *INFORMATION resources , *ASSISTIVE technology , *PUBLIC libraries , *INFORMATION technology - Abstract
In 2021, the assessment-data management librarian at Lehman College Library decided to conduct a privacy audit of the Library's public computers and networks. This audit comprised one of the Library's two annual formal assessments of resources and services. The American Library Association's (ALA) Library Privacy Checklist for Public Access Computers and Networks was selected to review 17 key items related to protecting user privacy and confidentiality. Faculty and staff from Circulation, Library Technology, and Online Learning identified 10 indicators needing work. Suggestions are provided for collaboratively resolving these issues and future steps are described to continuously maximize the online security of the campus community. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF