596 results for '"Big-data"'
Search Results
2. NSF DARE-Transforming modeling in neurorehabilitation: Four threads for catalyzing progress.
- Author
-
Valero-Cuevas, Francisco, Finley, James, Orsborn, Amy, Fung, Natalie, Hicks, Jennifer, Huang, He, Reinkensmeyer, David, Schweighofer, Nicolas, Weber, Douglas, and Steele, Katherine
- Subjects
Adaptation, Big-data, Computational neuroscience, Conference, Human-device interactions, Personalization, Plasticity, Rehabilitation, Wearables, Humans, Neurological Rehabilitation, Disabled Persons, Software, Computer Simulation, Algorithms - Abstract
We present an overview of the Conference on Transformative Opportunities for Modeling in Neurorehabilitation held in March 2023. It was supported by the Disability and Rehabilitation Engineering (DARE) program of the National Science Foundation's Engineering Biology and Health Cluster. The conference brought together experts and trainees from around the world to discuss critical questions, challenges, and opportunities at the intersection of computational modeling and neurorehabilitation to understand, optimize, and improve the clinical translation of neurorehabilitation. We organized the conference around four key, relevant, and promising Focus Areas for modeling: Adaptation & Plasticity, Personalization, Human-Device Interactions, and Modeling In-the-Wild. We identified four common threads across the Focus Areas that, if addressed, can catalyze progress in the short, medium, and long terms. These were: (i) the need to capture and curate appropriate and useful data necessary to develop, validate, and deploy useful computational models; (ii) the need to create multi-scale models that span the personalization spectrum from individuals to populations, and from cellular to behavioral levels; (iii) the need for algorithms that extract as much information as possible from available data, while requiring as little data as possible from each client; and (iv) the insistence on leveraging readily available sensors and data systems to push model-driven treatments from the lab into the clinic, home, workplace, and community. The conference archive can be found at dare2023.usc.edu. These topics are also extended by three perspective papers prepared by trainees and junior faculty, clinician researchers, and federal funding agency representatives who attended the conference.
- Published
- 2024
3. New French height velocity growth charts: An innovative big‐data approach based on routine measurements.
- Author
-
Scherdel, Pauline, Taine, Marion, Bergerat, Manon, Werner, Andreas, Breton, Julien Le, Polak, Michel, Linglart, Agnès, Reynaud, Rachel, Frandji, Bruno, Carel, Jean‐Claude, Brauner, Raja, Chalumeau, Martin, and Heude, Barbara
- Subjects
-
*ELECTRONIC health records, *PRIMARY care, *VELOCITY, *WORLD health, *PHYSICIANS - Abstract
Aim: Height velocity is considered a key auxological tool to monitor growth, but updated height velocity growth charts are lacking. We aimed to derive new French height velocity growth charts by using a big‐data approach based on routine measurements. Methods: We extracted all growth data of children aged 1 month–18 years from the electronic medical records of 42 primary care physicians, between 1 January 1990 and 8 February 2018, throughout the French metropolitan territory. We derived annual and biannual height velocity growth charts until age 15 years by using the Lambda‐Mu‐Sigma method. These new growth charts were compared to the 1979 French and 2009 World Health Organisation (WHO) ones. Results: New height velocity growth charts were generated with 193 124 and 209 221 annual and biannual values from 80 204 and 87 260 children, respectively, and showed good internal fit. Median curves were close to the 1979 French or 2009 WHO ones, but SD curves displayed important differences. Similar results were found with the biannual height velocity growth charts. Conclusion: We produced new height velocity growth charts until age 15 years by using a big‐data approach applied to measurements routinely collected in clinical practice. These updated growth charts could help optimise growth‐monitoring performance. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
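A minimal sketch of the data-preparation idea behind the entry above, assuming hypothetical column names (child_id, visit_date, height_cm, age_years): derive annual height velocity from routine longitudinal measurements and summarise empirical percentiles by age. The study itself fits smoothed reference curves with the Lambda-Mu-Sigma method, which this sketch does not reproduce.
```python
# Illustration only, not the study's code: compute annual height velocity from
# routine visits and tabulate crude empirical percentiles. A smoothed
# Lambda-Mu-Sigma (LMS) fit would replace the simple quantile summary below.
import pandas as pd

def annual_height_velocity(visits: pd.DataFrame) -> pd.DataFrame:
    """Height velocity (cm/year) between consecutive visits roughly one year apart."""
    visits = visits.sort_values(["child_id", "visit_date"]).copy()
    visits["prev_date"] = visits.groupby("child_id")["visit_date"].shift()
    visits["prev_height"] = visits.groupby("child_id")["height_cm"].shift()
    years = (visits["visit_date"] - visits["prev_date"]).dt.days / 365.25
    visits["velocity_cm_per_year"] = (visits["height_cm"] - visits["prev_height"]) / years
    return visits[years.between(0.75, 1.25)]  # keep intervals close to one year

def empirical_percentiles(velocities: pd.DataFrame) -> pd.DataFrame:
    """Crude 3rd/50th/97th percentile summary by completed year of age."""
    return (velocities.groupby("age_years")["velocity_cm_per_year"]
            .quantile([0.03, 0.50, 0.97])
            .unstack())
```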
4. Unveiling Linguistic Frontiers in the 21st Century: A Systematic Literature Review of the Rise of Big Data's Role in Data-Driven Language Learning in the Saudi Arabian Context.
- Author
-
Alharbi, Mohammed Salim
- Subjects
LANGUAGE ability, EVIDENCE gaps, LANGUAGE research, BIG data, LINGUISTICS education - Abstract
Copyright of Arab World English Journal is the property of Arab World English Journal and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
5. Databases and data-mining methods for research on environmental pathogenic microorganisms (环境病原微生物研究数据库及数据挖掘方法).
- Author
-
王尚, 冯凯, 李曈, 王洁, 顾松松, 杨兴盛, 李春格, and 邓晔
- Abstract
Environmental biosafety is closely related to social stability, human health, and even national defense security. Due to increasing pressure from climate change and anthropogenic activity, there is a risk of pathogenic ‘spillover’ and ‘spillback’. Current technologies are primarily designed for known pathogens. The rapid development and application of metagenomics-based bioinformatics may provide new opportunities and solutions for identifying unknown environmental pathogens and for early warning of potential environmental health risks. The development of these technologies has not only promoted an understanding of the interactions between microbes and animals, humans, and the environment, but is also an important part of the One Health framework concept. This review briefly discusses pathogenic ‘spillover’ and the increasing environmental health risks under global change. The focus is on bioinformatics technology in environmental biosafety studies, covering data storage, processing, and mining. Finally, the review provides perspectives on big data-driven environmental risk assessment of pathogens and on new methods for pathogen control in the metagenomics era. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Social media based digital file size estimation method using sampling technique with α control chart in big data.
- Author
-
Alim, Abdul and Shukla, Diwakar
- Subjects
SOCIAL media, ESTIMATION theory, STATISTICAL sampling, BIG data, CONFIDENCE intervals, MACHINE learning - Abstract
With the emergence of social networking platforms, a large number of users around the world have become part and parcel of these platforms. Users on social media constantly exchange digital files in the form of text, video, images, voice and music, which ultimately generates big data. The matter of interest is to precisely estimate the average file size at a given time duration (occasion); the time unit may be hours, days or months. This paper presents a sample-based methodology for estimating the mean size of digital communication content spreading on a social media platform. An estimator is suggested using a random sample drawn from the big data, and its properties are derived. A simulation method is suggested that computes the confidence interval (CI) for predicting a precise range of digital file size. The proposed method produces an optimal confidence interval at a suitable choice of constant. These estimated confidence intervals can be used to develop α-control charts for continuous monitoring of the growth in file size in social media storage at the data centre. If the growth of the mean digital file size crosses the upper limit, then additional storage infrastructure is needed at the administration level of the social media site. Machine learning algorithms based on the proposed method can be developed for monitoring the growth of the average digital file size over time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
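The abstract above concerns sample-based estimation of mean file size with confidence intervals that double as α-control limits. The sketch below conveys only that general idea under a plain normal approximation; it is not the authors' estimator, and the names used are assumptions.
```python
# Illustrative sketch, not the paper's estimator: estimate the mean digital file
# size at an occasion from a simple random sample, attach a normal-approximation
# confidence interval, and treat its upper bound as a control limit for storage
# monitoring.
import math
import random
import statistics

def mean_file_size_ci(file_sizes_mb, sample_size, z=1.96):
    """Return (sample mean, lower limit, upper limit) for the mean file size in MB."""
    sample = random.sample(list(file_sizes_mb), sample_size)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(sample_size)
    return mean, mean - z * se, mean + z * se

def needs_more_storage(estimated_mean_mb, upper_control_limit_mb):
    """Flag when the estimated mean crosses the upper control limit."""
    return estimated_mean_mb > upper_control_limit_mb
```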
7. Common core variables for childhood cancer data integration
- Author
-
Daniela Di Carlo, Ruth Ladenstein, Norbert Graf, Johannes Hans Merks, Gustavo Hernández-Peñaloza, Pamela Kearns, and Gianni Bisogno
- Subjects
Harmonisation, Big-data, Paediatric oncology, AI, Health research, Neoplasms. Tumors. Oncology. Including cancer and carcinogens, RC254-282 - Abstract
Introduction: Data-driven research has improved outcomes for children with cancer. However, challenges in sharing data between institutions prevent the use of artificial intelligence (AI) to address substantial unmet needs in children diagnosed with cancer. Harmonising collected data can enable the application of AI for a greater understanding of paediatric cancers. The main goal of the paper was to analyse currently used childhood cancer databases to identify a core set of variables able to capture the most relevant data on the diagnosis and treatment of children and adolescents with cancer. Methods: We arbitrarily identified different types of existing databases dedicated to collecting data on patients with solid tumours: Umbrella, FAR-RMS, PARTNER, the ERN PAEDCAN Registry, INSTRUCT, INRG, and the common data elements for Rare Diseases of the Joint Research Centre. The different elements of the CRFs were analysed and ranked as “essential” or “good to have”. Domains comprising groups of structurally connected variables were identified. Each variable was defined by name, data type, description, and permissible values. Results: We identified six structural domains: Patient registration, Personal information, Disease History, Diagnosis, Treatment, and Follow-up and Events. For each of them, “essential” and “good to have” variables were defined. Discussion: Data harmonisation is essential for enhancing integration and comparability in research. By standardising data formats and variables, researchers can facilitate data sharing, collaboration, and analysis across multiple studies and datasets. Embracing data harmonisation practices will advance the application of AI and scientific knowledge, improve research reproducibility, and contribute to evidence-based decision-making in various fields.
- Published
- 2024
- Full Text
- View/download PDF
8. Characteristics of sialolithiasis in Israel, a big‐data retrospective study of 5100 cases.
- Author
-
Jonas, Ehud, Muchnik, Daniel, Rabinovich, Idan, Masri, Daya, Chaushu, Gavriel, and Avishai, Gal
- Subjects
-
*SIALOLITHIASIS, *DISEASE risk factors, *ELECTRONIC health records, *SMOKING, *DEMOGRAPHIC characteristics - Abstract
Objective: The aim of this study was to identify risk factors for sialolithiasis patients using a large community- and hospital‐based cohort. Methods: A retrospective case–control study was conducted on 20,396 individuals, including 5100 sialolithiasis patients and 15,296 matched controls. Demographics and laboratory data were obtained from electronic medical records. Statistical analyses were performed to identify significant differences between the two groups. A p‐value of <0.05 was considered significant. Results: Sialolithiasis was more prevalent in women, with a mean age at diagnosis of 55.75 years. Several geographic location variables emerged as risk factors for sialolithiasis, including Israeli birth, higher socioeconomic communities, and specific areas of residency. Tobacco smoking (odds ratio = 1.46) was a significant risk factor. Low high‐density lipoprotein levels, elevated triglycerides, and elevated amylase levels were associated with sialolithiasis. Conclusions: This study provides valuable insights into the demographic and laboratory characteristics of sialolithiasis patients, indicating that area of residency and lifestyle factors contribute to the risk of developing sialolithiasis. The findings may contribute to a better understanding of the disease and the development of preventative measures or early diagnostic tools. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Biological big-data sources, problems of storage, computational issues, and applications: a comprehensive review.
- Author
-
Chaudhari, Jyoti Kant, Pant, Shubham, Jha, Richa, Pathak, Rajesh Kumar, and Singh, Dev Bukhsh
- Subjects
BIG data, MULTIOMICS, DATA warehousing, COMPUTER performance, TRANSCRIPTOMES, EPIGENOMICS - Abstract
Biological big data comprise the massive amounts of data generated from multi-omics experiments, such as genomics, transcriptomics, proteomics, metabolomics, phenomics, glycomics, epigenomics, and other omics. These data are used to study biological processes and to gain insights into how living systems work. They can also be used to develop new treatments for diseases and to understand the causes of certain conditions. The storage and analysis of these data present several challenges owing to their sheer size and complexity. Storing these data efficiently requires a large amount of storage space and processing power. Furthermore, there are certain limitations in terms of the kinds of insights that can be gained from multi-omics data because of their complexity. Despite these challenges, biological big data offer great potential for advancing our understanding of biology and developing new treatments for diseases. Big-data research is a rapidly growing field with numerous applications. As the amount of data continues to increase, it is important to understand its storage, utility, limitations, and challenges. In this review article, various sources of big-data research and their storage capacities, limitations, and challenges are discussed. Factors affecting data quality and accuracy are reported. This review will help researchers understand the big data available in biology for further utilization and integration into novel discoveries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Improving Administrative Data Quality on Tourism Using Big Data
- Author
-
Bianchino, Antonella, d’Aniello, Armando, Fusco, Daniela, Bini, Matilde, editor, Balzanella, Antonio, editor, Masserini, Lucio, editor, and Verde, Rosanna, editor
- Published
- 2024
- Full Text
- View/download PDF
11. Did Japanese Students Change Their Social Media Usage Toward Learning After COVID-19?
- Author
-
Iitaka, Toshikazu, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Stephanidis, Constantine, editor, Antona, Margherita, editor, Ntoa, Stavroula, editor, and Salvendy, Gavriel, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Toward a big-data approach for reconstructing regional to global paleogeography and tectonic histories: Preface
- Author
-
Li, Zheng-Xiang, Eglington, Bruce, and Wang, Tao
- Published
- 2025
- Full Text
- View/download PDF
13. Effect of traffic data set on various machine-learning algorithms when forecasting air quality
- Author
-
Sulaimon, Ismail Abiodun, Alaka, Hafiz, Olu-Ajayi, Razak, Ahmad, Mubashir, Ajayi, Saheed, and Hye, Abdul
- Published
- 2024
- Full Text
- View/download PDF
14. Advances in data-driven life science research (数据驱动的生命科学研究进展).
- Author
-
江海平, 高纯纯, 刘文豪, 杨运桂, and 李鑫
- Abstract
The field of life sciences is evolving rapidly, driven by advances in experimental techniques and by vast biological big data, which have gradually emerged and play an increasingly important role in life science research. First, biological big data are diverse and complex, including genomic, epigenomic, proteomic and other types of data. These data provide researchers with more comprehensive information and help reveal the laws behind life phenomena. Second, new data-driven developments and applications in the life sciences cover many fields, such as gene editing, precision medicine and drug development, providing unprecedented possibilities for human health and quality of life. However, the era of big data in life science research also faces challenges in areas including data storage, sharing, and privacy protection, as well as in how to transform massive data into reliable scientific discoveries. This paper provides a brief overview of how biological data have driven the development of the life sciences, summarises the composition, characteristics and sources of biological big data, and discusses the common problems and challenges faced by China under the new paradigm of data-driven life science research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Advances in machine intelligence‐driven virtual screening approaches for big‐data.
- Author
-
Kumar, Neeraj and Acharya, Vishal
- Subjects
VIRTUAL machine systems, DRUG discovery, ARTIFICIAL intelligence, INTEGRAL domains, MACHINE learning - Abstract
Virtual screening (VS) is an integral and ever‐evolving domain of the drug discovery framework. VS is traditionally classified into ligand‐based (LB) and structure‐based (SB) approaches. Machine intelligence, or artificial intelligence, has wide applications in the drug discovery domain to reduce time and resource consumption. In combination with machine intelligence algorithms, VS has emerged as a revolutionary, progressive technology that learns robust decision rules for data curation and hit-molecule screening from large VS libraries in minutes or hours. The exponential growth of chemical and biological data in the public domain has evolved into "big‐data" and demands modern, advanced machine intelligence‐driven VS approaches to screen hit molecules from ultra‐large VS libraries. VS has evolved from individual approaches (LB and SB) to integrated LB and SB techniques that explore various ligand and target protein aspects for an enhanced rate of appropriate hit-molecule prediction. Current trends demand advanced and intelligent solutions to handle the enormous data in the drug discovery domain for screening and optimizing hits or leads with few or no false positive hits. Following the big‐data drift and the tremendous growth in computational architecture, we present this review. Here, the article categorizes and emphasizes individual VS techniques, presents detailed literature on machine learning implementation and modern machine intelligence approaches, and deliberates on limitations and future prospects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Datafied Empiricism Versus Normative Publicness: A Philosophical Grounding for Assessing the Influence of New Technologies on the Digital Public Sphere.
- Author
-
Kaluža, Jernej
- Subjects
-
*DIGITAL technology, *PUBLIC sphere, *HABIT, *RELIGIOUS communities, *PUBLIC opinion, *DATA mining, *ARTIFICIAL intelligence, *EMPIRICISM - Abstract
New technologies of algorithmisation, data mining, and artificial intelligence appear to elicit contentious impacts on the public sphere. Evaluations of these effects frequently diverge. A noticeable schism exists between the critical/normative perspective, which highlights the problematic aspects of data exploitation, surveillance, and imperialism, and market-oriented empirical approaches. Drawing on a conceptual–historical argumentation that links current developments to a longer tradition of social communication research rooted in Enlightenment philosophy, the article highlights the contrast between normative conceptualisations of publicness and the public use of reason on the one hand, and empirical approaches aimed at measuring and managing the public(s) and public opinion on the other. The article first identifies the role of the opposition between Humean empiricism, which is based on the principle of conformity to past habits, and the Kantian pure law of publicity, which is systematically opposed to such empiricism on many different layers. This opposition is also rooted in the Enlightenment's foundational divide between religious and civil communities. It seems that today, with the predominance of data-driven approaches that adapt opinion to past expectations and beliefs, we are paradoxically returning to principles similar to those governing pre-modern (religion- and tradition-based) communities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Design of a publish/subscribe system for secure transmission of spacecraft big data (面向航天器大数据安全传输的发布/订阅系统设计).
- Author
-
覃润楠, 彭晓东, 谢文明, 惠建江, 冯渭春, and 姜加红
- Abstract
Copyright of Systems Engineering & Electronics is the property of Journal of Systems Engineering & Electronics Editorial Department and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
18. Deep Learning Inter-city Roads Conditions in East Africa for Infrastructure Prioritization using Satellite Imagery and Mobile Data
- Author
-
Davy Uwizera, Prof. Patrick McSharry, and Charles Ruranga
- Subjects
Deep Learning, Mobile-data, Classification, Vision-recognition, Big-data, Satellite-imagery, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Traditional survey methods for gathering information, such as questionnaires and field visits, have long been used in East Africa to evaluate road conditions and prioritize their development. These surveys are time-consuming, expensive, and vulnerable to human error. Road building and maintenance, on the other hand, have long been affected by corruption, owing to a lack of accountability and validation in the conventional approaches used to determine which areas to prioritize. With the digital revolution, a large amount of data is generated daily, such as call detail records (CDR), which are likely to contain useful proxy data on the spatial distribution of mobility across different routes. In this research we focus on satellite imagery data with applications in East Africa, together with Google Maps-suggested inter-city roads, to assess road conditions and provide an approach for infrastructure prioritization given mobility patterns between cities. With increasing urban populations, East African cities have been expanding in multiple directions, affecting the overall distribution of residential areas and consequently the mobility trends across cities. We introduce a novel approach for infrastructure prioritization using deep learning and big-data analytics. We apply deep learning to satellite imagery to assess road conditions by area, and big-data analytics to CDR data to rank which roads could be prioritized for construction given mobility trends. Among the deep learning models considered for road-condition classification, EfficientNet-B3 outperforms the others and achieves an accuracy of 99%.
- Published
- 2024
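As a rough illustration of the transfer-learning setup the abstract above describes, the sketch below adapts a torchvision EfficientNet-B3 backbone for road-condition classification. The number of classes, the use of ImageNet weights, and the torchvision version are assumptions; the authors' training pipeline and data are not reproduced.
```python
# Sketch of an EfficientNet-B3 classifier for road conditions (illustrative, not
# the authors' code). Assumes torchvision >= 0.13 for the weights enum.
import torch.nn as nn
from torchvision import models

def build_road_condition_model(num_classes: int = 3) -> nn.Module:
    model = models.efficientnet_b3(weights=models.EfficientNet_B3_Weights.IMAGENET1K_V1)
    in_features = model.classifier[1].in_features  # final linear layer of the backbone
    model.classifier[1] = nn.Linear(in_features, num_classes)  # e.g. good / fair / poor
    return model
```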
19. From Raw Data to Informed Decisions: The Development of an Online Data Repository and Visualization Dashboard for Transportation Data.
- Author
-
Tsouros, Ioannis, Polydoropoulou, Amalia, Tsirimpa, Athena, Karakikes, Ioannis, Tahmasseby, Shahram, Mohammad, Anas Ahmad Nemer, and Alhajyaseen, Wael K.M.
- Subjects
DATA libraries, DATA visualization, BIG data, DATA warehousing, PUBLIC transit, AUTOMOBILE dashboards, DATA management - Abstract
This paper presents the design, implementation, and practical use of a specialized online data repository and an interactive visualization of a transportation dashboard. Specifically tailored to handle and interpret large-scale transportation data within the Qatari context, the combined platform serves as a comprehensive solution for managing extensive datasets, including GPS traces from taxis and e-scooters, which are examined as primary use-cases in this paper. The online data repository provides a centralized hub for efficient data storage and management of public transport including Mobility-as-a-Service (MaaS) data. Concurrently, the visualization dashboard fosters an intuitive, user-friendly interface for data exploration and analysis. Through real-world applications within Qatar's transportation ecosystem, we elucidate how these innovative developments can inform data-driven policy decisions in crucial areas such as infrastructure development, resource allocation, and safety measures. Ultimately, this study underscores the pivotal role of effective data management and advanced visualization in maximizing the potential of big data, providing valuable insights for urban mobility planning and enhancing the landscape of policy-making in Qatar and worldwide. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. An Analysis of Research Trends on the Metaverse Using BERTopic Modeling.
- Author
-
Yumi Kim and Heesop Kim
- Subjects
SHARED virtual environments, DIGITAL divide, TREND analysis, STREAMING video & television, VIRTUAL communities, DATABASES - Abstract
Research on the metaverse has experienced significant growth in recent years, driven by advancements in technology, the thriving gaming industry, the expansion of social media and virtual communities, economic prospects, and the captivating vision of a new digital interface that transcends the current internet environment. To uncover research themes within the metaverse, we conducted a comprehensive analysis of research trends in this field using publications from Scopus, a widely recognized and extensively utilized scholarly database. Employing BERTopic, an advanced topic modeling technique, we analyzed 2,181 research articles focused on the metaverse. The exploration of the metaverse had humble beginnings with a single publication in 1995 but saw a substantial increase after 2020, reaching explosive growth with 1,041 publications in 2022. The application of the BERTopic model revealed 12 primary topics, each associated with significant keywords. These main topics encompass education, healthcare, blockchain, conferences, fashion, NFTs, cybersecurity, web3, research, video streaming, tinyML, and industry. Notably, among these subjects, education, healthcare, and blockchain exhibit significant research activity. In light of the global concern over the digital divide, we conducted investigations focusing on case studies involving digitally disadvantaged groups, such as individuals with visual impairments and the elderly. However, it is noteworthy that we identified only five studies addressing this issue, indicating limited research presence in this crucial area. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
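The study above applies BERTopic to a corpus of article abstracts; a minimal version of that workflow with the bertopic package looks roughly like the sketch below. The parameters shown are illustrative, not the authors' configuration.
```python
# Minimal BERTopic workflow of the kind described above (settings are illustrative).
from bertopic import BERTopic

def discover_topics(abstracts: list[str]):
    """Fit BERTopic on a list of article abstracts and return the topic summary."""
    topic_model = BERTopic(language="english", min_topic_size=10)
    topics, probabilities = topic_model.fit_transform(abstracts)
    return topic_model.get_topic_info(), topics, probabilities
```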
21. Museums and Archives in the Age of Artificial Intelligence and Post-representation
- Author
-
Van Essen, Yael Eylat, Tam, Kwok-kan, Series Editor, Barton, David, Editorial Board Member, Tompkins, Joanne, Associate Editor, Ying-hin Fung, Anthony, Editorial Board Member, Kao, Lang, Associate Editor, Lam, Sunny Sui-kwong, Editorial Board Member, and Tso, Anna Wing-bo, Associate Editor
- Published
- 2023
- Full Text
- View/download PDF
22. The 'Trace and Evidence Paradigm': Its Significance within the Theory of Criminalistic Practice in the Context of Cyber-Criminalistics and Cyber-Criminology (Das 'Spuren- und Indizienparadigma' – Bedeutung innerhalb der kriminalistischen Handlungslehre im Kontext der Cyberkriminalistik und -kriminologie)
- Author
-
Plank, Holger, Fiedler, André, Rüdiger, Thomas-Gabriel, Series Editor, and Bayerl, P. Saskia, Series Editor
- Published
- 2023
- Full Text
- View/download PDF
23. Remote Sensing Imagery for Mapping and Monitoring High Nature Value Farmland Area (HNVF)
- Author
-
Fiorentino, Costanza, Donvito, Angelo R., D’Antonio, Paola, Conte, Domenico, Scalcione, Vincenzo, Toscano, Francesco, di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Ferro, Vito, editor, Giordano, Giuseppe, editor, Orlando, Santo, editor, Vallone, Mariangela, editor, Cascone, Giovanni, editor, and Porto, Simona M. C., editor
- Published
- 2023
- Full Text
- View/download PDF
24. Research on the Problems and Countermeasures of China’s E-Government Under the Background of Big Data
- Author
-
Ke, Yan, Wu, Panpan, Zheng, Zheng, Editor-in-Chief, Xi, Zhiyu, Associate Editor, Gong, Siqian, Series Editor, Hong, Wei-Chiang, Series Editor, Mellal, Mohamed Arezki, Series Editor, Narayanan, Ramadas, Series Editor, Nguyen, Quang Ngoc, Series Editor, Ong, Hwai Chyuan, Series Editor, Sun, Zaicheng, Series Editor, Ullah, Sharif, Series Editor, Wu, Junwei, Series Editor, Zhang, Baochang, Series Editor, Zhang, Wei, Series Editor, Zhu, Quanxin, Series Editor, Zheng, Wei, Series Editor, Zeng, Ziqiang, editor, Gaikar, Vilas, editor, and Lotfi, Reza, editor
- Published
- 2023
- Full Text
- View/download PDF
25. Method of Selecting the Optimal Location of Barrier-Free Bus Stops Using Clustering
- Author
-
Kim, Se Hyoung, Pyun, Chae Won, Ryu, Jeong Yeon, Kim, Yong Hyun, Kang, Ju Young, Kacprzyk, Janusz, Series Editor, and Lee, Roger, editor
- Published
- 2023
- Full Text
- View/download PDF
26. Effect of changes in the hearing aid subsidy on the prevalence of hearing loss in South Korea.
- Author
-
Chul Young Yoon, Junhun Lee, Tae Hoon Kong, and Young Joon Seo
- Subjects
HEARING aids, HEARING disorders, NATIONAL health insurance, NOSOLOGY, DISABILITY insurance, CONDUCTIVE hearing loss - Abstract
Objectives: South Korea's National Health Insurance has provided hearing aids to registered individuals with hearing disabilities since 1989. In 2015, hearing aid subsidies increased to approximately US$1,000. This study aimed to understand hearing loss categories in Korea by analyzing patients between 2010 and 2020 and the effect of the 2015 hearing aid policy change on the prevalence of hearing loss. Methods: The participants were patients with hearing loss registered in the National Health Insurance Service database from 2010 to 2020. A total of 5,784,429 patients were included in this study. Hearing loss was classified into conductive, sensorineural, and other categories. Patients with hearing loss were classified according to the International Classification of Diseases diagnostic code. Disability diagnosis and hearing aid prescription were defined using the National Health Insurance Disability and Hearing Aid Code. Results: The increase in hearing aid prescriptions and hearing disability registrations following the subsidy increase affected hearing loss prevalence. Hearing aid prescription and hearing disability were found to increase hearing loss prevalence in both univariate and multivariate analyses; the r-value of each analysis exceeded 0.95. Other hearing losses increased rapidly after the increased subsidy. Conclusion: A hearing-impaired individual must be diagnosed with a hearing disability and prescribed a hearing aid to receive the subsidy. The prevalence of hearing loss was affected by the increase in registered hearing disabilities following the changes in the hearing aid subsidy and by the number of people prescribed hearing aids. Therefore, caution should be exercised when studying hearing loss prevalence over mid- to long-term periods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Effect of climate on the thermal response of cows from different breed groups in the low tropics (Efecto del clima sobre la respuesta térmica en vacas de diferentes grupos raciales en trópico bajo).
- Author
-
Andrés Molina-Benavides, Raúl, Perilla–Duque, Sandra, Campos-Gaona, Rómulo, Sánchez-Guerrero, Hugo, Camilo Rivera-Palacios, Juan, Armando Muñoz-Borja, Luis, and Jiménez-Rodas, Daniel
- Subjects
-
*AUTOMATIC data collection systems, *MIDDLE ear, *RANDOM matrices, *ANIMAL adaptation, *METEOROLOGICAL stations, *MILK quality, *MILK yield - Abstract
Objective. The main aim of this study was to quantify the relationship between climatic variables and tympanic body temperature recorded with wireless sensors in grazing cows located in the low tropics. Material and methods. The tympanic temperature of twenty-eight crossbred grazing cows in early lactation was monitored. The sensors were manually installed in the tympanic cavity, recording hourly for 17 days. The climate data were obtained from the network of weather stations of the Centro de Investigación de la Caña de Azúcar "Cenicaña", a sugarcane research center located in Cali, Colombia, and were analyzed for the same time interval as the temperature records. The information was analyzed using descriptive statistics, correlation matrices and Random Forest models in the R software. Results. From the physiological data provided by automatic collection systems, the response variables that allow the evaluation of thermoregulation processes were analyzed using big data. We found that environmental temperature, relative humidity and solar radiation were the factors that most influenced the homeothermic adaptation process of the animals. Conclusions. The introduction of remote devices and the use of large amounts of data for the analysis of physiological indicators avoid modifying natural animal behavior and emerge as an important diagnostic and management strategy on the livestock farm, helping in studies of heat stress, physiological adaptation and the prevalence of hemotropic diseases, which reduce the productivity of these systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
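The study above fits Random Forest models in R to relate climate variables to tympanic temperature. An analogous scikit-learn sketch, with assumed column names, is shown below purely to illustrate that analysis step.
```python
# Analogous Python sketch (the study itself used R): fit a Random Forest relating
# hourly climate variables to tympanic temperature and rank the drivers by
# feature importance. Column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

CLIMATE_VARS = ["air_temperature", "relative_humidity", "solar_radiation"]

def rank_climate_drivers(records: pd.DataFrame) -> pd.Series:
    X, y = records[CLIMATE_VARS], records["tympanic_temperature"]
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, y)
    return (pd.Series(model.feature_importances_, index=CLIMATE_VARS)
            .sort_values(ascending=False))
```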
28. Digital file size computational procedure in multimedia big data using sampling methodology.
- Author
-
Alim, Abdul and Shukla, Diwakar
- Subjects
BIG data, DIGITAL technology, PARAMETERS (Statistics), SOCIAL media, CONFIDENCE intervals, DATA transmission systems - Abstract
Multimedia big data tend to grow rapidly over time owing to their basic characteristics of volume, variety and velocity. Sample-based estimates are used to compute unknown population parameters. Multimedia big data are characterized by various features that are prominent in terms of identification and analysis. Social media platforms are major sources of big data by virtue of communication among registered users in the form of text, video and image data, and the number of registered users is also growing at a drastic speed. When a user registers on a portal, a default digital storage space is allotted by the system, which increases over time. A monitoring system is required to anticipate this increase and to alert data-centre managers to further enhance the infrastructure. In medical diagnostics, CT-scan and MRI equipment produces a huge amount of scan-file data when used on a large number of patients. These files are stored in the system's memory for at least a prefixed duration. The digital file size of such reports, pixel densities and the intensity of contents are the prime parameters of interest when comparing the quality of similar types of machines. Doctors and patients on social media platforms exchange digital medical reports such as X-ray, CT-scan, MRI and cancer diagnostics, occupying the default storage. An estimate of digital file size can help determine the expected amount of digital storage to be allocated to medical professionals or other such users. This paper presents sample-based estimation methods for estimating the average file size over several time points. Confidence intervals are used as a tool of comparison, and a new simulation procedure is also suggested for comparing confidence intervals. At multiple optimum values of the constant, the proposed sample-based estimation methods perform better, and the proposed simulation method is also effective. The strategy of using support information from another multimedia variable in the estimation procedure is found to be useful and effective. The findings of the paper are numerically supported, and a significant percentage gain is observed for the proposed methods in the setting of multimedia big data floating over social media platforms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A sustainable Ethereum merge-based Big-Data gathering and dissemination in IIoT System
- Author
-
Ravi Sharma and Balázs Villányi
- Subjects
Data dissemination, Big-Data, Industrial Internet of Things, Enterprise application software, Cloud computing, Engineering (General). Civil engineering (General), TA1-2040 - Abstract
The Industrial Internet of Things (IIoT) has captured the attention of various smart farming industries in recent years by connecting the physical objects of smart farms over the internet so that they can be discovered and queried globally using Enterprise Application Software (EAS). This continuous transmission and processing poses a significant challenge in dealing with the massive amount of Big-Data being gathered and disseminated. As more applications use cloud platforms to increase storage capacity by storing Big-Data on untrusted cloud servers, data security precautions should be taken. However, with the rapid increase in the number of IIoT devices and EAS end-users and their diverse categories, the existing device authentication mechanisms in IIoT suffer from single-factor authentication and poor adaptability. In light of the security requirements for IIoT Big-Data sharing using untrusted cloud servers, we present in this paper a sustainable Ethereum merge-based Big-Data gathering and dissemination system for IIoT. Extensive theoretical and simulation results validate that data gathering with a business incentive mechanism and a proper data propagation threshold is an important parameter for determining equilibrium and analysing the dynamic properties of the IIoT system, directly affecting data processing speed and its final state. The Ethereum merge mechanism controls data dissemination instances in both the industry and the EAS through classification and integrity verification using distributed authentication devices. Proof-of-stake is used for this purpose, relying on randomly selected validators to confirm transactions and add new ledgers to consortium blockchains that transfer data using partial secrets, making the system fast, secure, and sustainable. Our proposed scheme outperforms other state-of-the-art schemes in simulation over Hyperledger Fabric, with improved security measures.
- Published
- 2023
- Full Text
- View/download PDF
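As background to the proof-of-stake mechanism mentioned in the entry above, the toy below shows stake-weighted random selection of a validator. It is a conceptual illustration only, neither the authors' scheme nor Ethereum's actual selection algorithm.
```python
# Conceptual toy: pick a validator with probability proportional to its stake.
import random

def select_validator(stakes, rng=None):
    """stakes: mapping of validator id -> staked amount."""
    rng = rng or random.Random()
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Example: a node holding half of the total stake is chosen about half the time.
print(select_validator({"node-a": 32.0, "node-b": 16.0, "node-c": 16.0},
                       rng=random.Random(42)))
```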
30. A Novel Circa situm Approach to Conserve Forest Genetic Resources of the Western Ghats
- Author
-
Vasudeva, R
- Published
- 2022
- Full Text
- View/download PDF
31. A Study on the Effect of Quality Factors of Smartphone 5G Technology on the Reliability of Information and Communication Policy
- Author
-
Chil-Yuob Choo
- Subjects
5G, AI, big-data, ICT, IOT, smart phone quality, Technology - Abstract
This paper analyzes the effect of the characteristics of 5G services on users' continuous intention to use them, focusing on the technology acceptance model. With the start of the fourth industrial revolution in the 21st century, 5G is a core technology used in the Internet of Things, high-speed information and communication, artificial intelligence, big data, autonomous vehicles, virtual reality, augmented reality, robots, nanotechnology, and blockchain. The technical characteristics of 5G ultra-high-speed information communication are ultra-high speed, ultra-high capacity, ultra-low delay, and ultra-high connectivity. 5G mobile communication technology is essential, and once the technology provided by 5G services is commercialized, it can fully play its role as a practical core new growth engine. 5G mobile communication (hereinafter referred to as 5G) is far superior to LTE, the 4G mobile communication standard, in terms of transmission speed, waiting time, and terminal capacity. 5G service is not just one step in the process of developing mobile communication technology, but also the creation of innovative corporate value through technology. This is because higher network quality and innovation with 5G service technology will improve perceived usability, perceived ease of use, and perceived entertainment, which will ultimately have a positive impact on users' intention to use 5G services. Therefore, given the lack of investment in information and communication infrastructure, platforms, and applications, this paper can be used as a basis for establishing government policies.
- Published
- 2023
- Full Text
- View/download PDF
32. Probabilistic fatigue methodology for aircraft landing gear
- Author
-
Hoole, Joshua, Booker, Julian, and Cooper, Jonathan
- Subjects
629.134, Fatigue, Probabilistic Design, Reliability, Big-Data - Abstract
Aircraft landing gear comprise safety-critical structural components that are exposed to large cyclic loads over long in-service design lives. To prevent the occurrence of fatigue crack initiation, fatigue analysis methods are employed to identify the 'safe-life' at which the component must be retired, to guarantee the structural integrity of the component. However, the engineering parameters relating to the fatigue design of landing gear components, including material properties and loading, demonstrate significant variability. This variability is currently mitigated using design conservatism. Design conservatism can ultimately lead to over-weight components and, as a result, probabilistic design approaches have been proposed to better represent design parameter variability within fatigue analysis processes. Unfortunately, a large number of inhibiting factors, or 'blockers', currently prevent the wider-scale implementation of probabilistic fatigue design approaches. The aim of this research is to develop a probabilistic fatigue methodology that overcomes the blockers to a probabilistic design approach. It is hypothesised that careful selection of a Monte Carlo Simulation-based methodology and the definition of systematic processes and frameworks, along with the exploitation of recent advances in surrogate modelling and 'big-data' sources, can help to combat the blockers to probabilistic design. Following application of the developed probabilistic fatigue methodology to landing gear component case studies, it was demonstrated that probabilistic methodologies can support the fatigue design of landing gear components by identifying the conservatism in existing practices and highlighting areas of component over-design. From implementing the methodology, it was observed that the proposed methodology could overcome the blockers to probabilistic design concerning computational expense, required assumptions, availability of data, accuracy of data characterisation and the large amount of knowledge required to implement such approaches. The remaining blockers to probabilistic design approaches therefore concern the development of reliability targets and the engineering mindset change required to implement such approaches.
- Published
- 2020
33. The role of new ICT-based systems in modern management special issue editors:.
- Author
-
Jafari Navimipour, Nima, Wan, Shaohua, Pasumarti, Srinivas Subbarao, and Fazio, Maria
- Subjects
BUSINESS process management, MARKETING management, CYBERNETICS, KNOWLEDGE management, INDUSTRIAL management - Abstract
In this special issue, we have collected eight articles that offer new starting points for research on information and communications technology (ICT)-based systems. We focused on the intuitive nature of the relationship between new ICT-based systems and contemporary management, forming an integrative unit of analysis instead of focusing solely on new ICT-based systems and leaving contemporary management as a moderating or mediating factor. This special issue promoted interdisciplinary research at the intersection of new ICT-based systems and contemporary management, including cybernetics systems and knowledge management, service management and the Internet of Things, cloud and marketing management, business process re-engineering and management, knowledge management, and strategic business management, among others. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Enabling Real-Time and Big Data-Driven Analysis to Detect Innovation City Patterns and Emerging Innovation Ecosystems at the Local Level
- Author
-
Oikonomaki, Eleni, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Streitz, Norbert A., editor, and Konomi, Shin'ichi, editor
- Published
- 2022
- Full Text
- View/download PDF
35. Data Localization and Privacy-Preserving Healthcare for Big Data Applications: Architecture and Future Directions
- Author
-
Verma, Ashwin, Bhattacharya, Pronaya, Patel, Yogesh, Shah, Komal, Tanwar, Sudeep, Khan, Baseem, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Singh, Pradeep Kumar, editor, Kolekar, Maheshkumar H., editor, Tanwar, Sudeep, editor, Wierzchoń, Sławomir T., editor, and Bhatnagar, Raj K., editor
- Published
- 2022
- Full Text
- View/download PDF
36. Microbiome and Big-Data Mining
- Author
-
Ning, Kang, Chen, Ming, editor, and Hofestädt, Ralf, editor
- Published
- 2022
- Full Text
- View/download PDF
37. The Context of Digital Entrepreneurship. New Technologies Between Evolution and Revolution
- Author
-
Christine, Volkmann, Ileana, Gavrilescu, Dima, Alina Mihaela, editor, and Kelemen, Mihaela, editor
- Published
- 2022
- Full Text
- View/download PDF
38. NoSQL: A Real Use Case
- Author
-
Martins, Pedro, Sá, Filipe, Caldeira, Filipe, Abbasi, Maryam, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, de Paz Santana, Juan F., editor, de la Iglesia, Daniel H., editor, and López Rivero, Alfonso José, editor
- Published
- 2022
- Full Text
- View/download PDF
39. Integrated Micro-Video Recommender Based on Hadoop and Web-Scrapper
- Author
-
Raj, Jyoti, Hoque, Amirul, Saha, Ashim, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Misra, Rajiv, editor, Shyamasundar, Rudrapatna K., editor, Chaturvedi, Amrita, editor, and Omer, Rana, editor
- Published
- 2022
- Full Text
- View/download PDF
40. Grapevine yield gap: identification of environmental limitations by soil and climate zoning in the region of Languedoc-Roussillon (South of France)
- Author
-
Hugo Fernandez-Mena, Nicolas Guilpart, Philippe Lagacherie, Renan Le Roux, Mayeul Plaige, Maxime Dumont, Marine Gautier, Nina Graveline, Jean-Marc Touzard, Hervé Hannin, and Christian Gary
- Subjects
viticulture, classification, mapping, cartography, big-data, terroir, Agriculture, Botany, QK1-989 - Abstract
Often unable to fulfill theoretical production potentials and to obtain the maximum yields set by wine quality labels, many vineyards and cellars need to solve the issue of so-called grapevine yield gaps in order to ensure their durability. These yield gaps particularly occur in Mediterranean wine regions, where extreme events have intensified because of climate change. Yield gaps at the regional level have been widely studied in arable crops using big datasets, but much less so in perennial crops such as grapevine. Understanding the environmental factors involved in yield gaps, such as those linked to climate and soil, is the first step in grapevine yield gap analysis. At a regional scale, there are numerous studies on 'terroir' linked to wine typicity and quality; however, none have classified spatial zones based on environmental factors identified as being involved in grapevine yield. In the present study, we aggregated into one big dataset information obtained from producers at the municipality level in the wine region Languedoc-Roussillon (South of France) between 2010 and 2018. We used a backward stepwise model selection process based on linear mixed-effect models to discriminate and select the statistically significant indicators capable of estimating grapevine yield at the municipality level. We then determined spatial zones by using the selected indicators to create clusters of municipalities with similar soil and climate characteristics. Finally, we analysed the indicators of each zone related to the grapevine yield gap, as well as the variations among grapevine varieties in the zones. Our selection process evidenced six factors that could explain grapevine yield annually (R² = 0.112) and the average yield for the whole period (R² = 0.546): Soil Available Water Capacity (SAWC), soil pH, the Huglin Index, the Climate Dryness Index, the number of Very Hot Days and the number of Days of Frost. The clustering results show seven different zones with two marked yield gap levels, although all the zones had municipalities with no or high yield gaps. In each zone, grapevine yield was found to be driven by a combination of climate and soil factors rather than by a single environmental factor. The white wine varieties showed larger yield gaps than the red and rosé wine varieties. Environmental factors at this scale largely explained yield variability across the municipalities, but they performed poorly for annual yield prediction. Further research is required on the interactions between environmental variables, plant material and farming practices, as well as on vineyard strategies, which also play an important role in grapevine yield gaps at the vineyard and regional scale.
- Published
- 2023
- Full Text
- View/download PDF
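The zoning step described in the entry above, grouping municipalities by the six retained soil and climate indicators, could be sketched with scikit-learn as below. The indicator column names and the use of k-means are assumptions; the mixed-effects model selection reported in the paper is not reproduced.
```python
# Sketch of the zoning step only (not the authors' pipeline): standardise the six
# retained indicators per municipality and group municipalities into zones.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

INDICATORS = ["soil_available_water_capacity", "soil_ph", "huglin_index",
              "climate_dryness_index", "very_hot_days", "frost_days"]

def zone_municipalities(df: pd.DataFrame, n_zones: int = 7) -> pd.Series:
    X = StandardScaler().fit_transform(df[INDICATORS])
    labels = KMeans(n_clusters=n_zones, n_init=10, random_state=0).fit_predict(X)
    return pd.Series(labels, index=df.index, name="zone")
```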
41. New Technique for Monitoring High Nature Value Farmland (HNVF) in Basilicata.
- Author
-
Fiorentino, Costanza, D'Antonio, Paola, Toscano, Francesco, Donvito, Angelo, and Modugno, Felice
- Abstract
The definition of High Nature Value Farmland Areas (HNVF) was provided by Andersen in 2003: "HNVF comprises those areas in Europe where agriculture is the major (usually the dominant) land use and where that agriculture supports or is associated with either a high species and habitat diversity, or the presence of species of European conservation concern or both". The present work gives an overview of the techniques used to produce HNVF maps at different spatio-temporal resolutions; the proposed approach is a statistical one. The study area is the Basilicata region (southern Italy) in 2012, mapped at municipal spatial resolution. The HNVF areas were identified by applying a threshold to the sum of the contributions of the main characterizing indicators. Three indicators contribute to the calculation of the HNVF areas: crop variability (CD Index), extensive practices (EP Index), and the presence of natural elements (Index Ne). Good agreement was found between our HNVF map and the results in the literature, although the analysis approaches were different. The main advantages of the proposed methodology are that it uses only free input data, including remote sensing images, and that it is adaptable to different spatial resolutions (local, regional, and national). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
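The entry above identifies HNVF areas by applying a threshold to the sum of three indicator contributions (the CD, EP and Ne indices). A minimal sketch of that rule, with an assumed threshold value and column names, is given below.
```python
# Illustrative thresholding rule: flag a municipality as HNVF when the sum of its
# (normalised) indicator contributions exceeds a chosen threshold. The threshold
# value and column names are assumptions, not taken from the paper.
import pandas as pd

def flag_hnvf(indices: pd.DataFrame, threshold: float = 1.5) -> pd.Series:
    total = indices["cd_index"] + indices["ep_index"] + indices["ne_index"]
    return (total >= threshold).rename("is_hnvf")
```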
42. Registries: Big data, bigger problems?
- Author
-
Rubinger, Luc, Ekhtiari, Seper, Gazendam, Aaron, and Bhandari, Mohit
- Subjects
-
*BIG data, *MACHINE learning, *MEDICAL registries, *ACQUISITION of data, *EVIDENCE-based medicine - Abstract
Patient registries are data systems organized to allow the prospective collection of clinical data to assess specific outcomes. Types of registries include administrative, linked, and disease-, procedure-, pathology-, or product-specific registries. Registry studies are typically considered level II or III evidence; however, the advent of registry-based RCTs may change this paradigm. Strengths of registries include the volume of data available, the diversity of included participants, and efficient enrollment and data collection. Limitations of registries include the variable quality of data, a lack of active follow-up, and, often, a lack of detail in the data collected. Patient registries have grown in size and number along with general computing power and the digitization of the healthcare world. In contrast to databases, registries typically contain patient data systematically created and collected for the express purpose of answering health-related questions. Registries can be disease-, procedure-, pathology-, or product-based in nature. Registry-based studies typically fit into Level II or III in the hierarchy of evidence-based medicine. However, a recent advent in the use of registry data has been the development and execution of registry-based trials, such as the TASTE trial, which may elevate registry-based studies into the realm of Level I evidence. Some strengths of registries include the sheer volume of data, the inclusion of a diverse set of participants, and their ability to be linked to other registries and databases. Limitations of registries include the variable quality of the collected data and a lack of active follow-up (which may underestimate rates of adverse events). As with any study type, the intended design does not automatically lead to a study of a certain quality. While no specific tool exists for assessing the quality of a registry-based study, some important considerations include ensuring the registry is appropriate for the question being asked, whether the patient population is representative, the presence of an appropriate comparison group, and the validity and generalizability of the registry in question. The future of clinical registries remains to be seen, but the incorporation of big data and machine learning algorithms will certainly play an important role. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. A sustainable Ethereum merge-based Big-Data gathering and dissemination in IIoT System.
- Author
-
Sharma, Ravi and Villányi, Balázs
- Subjects
COGNITIVE processing speed ,ELECTRONIC data processing ,BIG data ,APPLICATION software ,INTERNET of things ,INCENTIVE (Psychology) ,BLOCKCHAINS ,DATA integrity - Abstract
The Industrial Internet of Things (IIoT) has captured the attention of various smart farming industries in recent years by connecting the physical objects of smart farms over the internet so that they can be discovered and queried globally using Enterprise Application Software (EAS). This continuous transmission and processing poses a significant challenge in dealing with the massive amount of Big-Data being gathered and disseminated. As more applications use cloud platforms to increase storage capacity by storing Big-Data on untrusted cloud servers, data security precautions should be taken. However, with the rapid increase in the number of IIoT devices and EAS end-users and their diverse categories, the existing device authentication mechanisms in IIoT suffer from single-factor authentication and poor adaptability. In light of the security requirements for IIoT Big-Data sharing using untrusted cloud servers, we present in this paper a sustainable Ethereum merge-based Big-Data gathering and dissemination system for IIoT. Extensive theoretical and simulation results validate that data gathering with a business incentive mechanism and a proper data propagation threshold is an important parameter for determining equilibrium and analysing the dynamic properties of the IIoT system, directly affecting data processing speed and its final state. The Ethereum merge mechanism controls data dissemination instances in both the industry and the EAS through classification and integrity verification using distributed authentication devices. Proof-of-stake is used for this purpose, relying on randomly selected validators to confirm transactions and add new ledgers to consortium blockchains that transfer data using partial secrets, making the system fast, secure, and sustainable. Our proposed scheme outperforms other state-of-the-art schemes in simulation over Hyperledger Fabric, with improved security measures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
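Entry 43 above relies on proof-of-stake, in which randomly selected validators, weighted by their stake, confirm transactions on a consortium blockchain. The following is a minimal, hypothetical Python sketch of stake-weighted validator selection only; it is not the authors' scheme, and the names (select_validator, stakes, seed) are illustrative assumptions.

    import hashlib

    def select_validator(stakes, seed):
        """Pick one validator with probability proportional to its stake.

        stakes: dict mapping validator IDs to staked amounts.
        seed:   bytes (e.g. a recent block hash), so that every node
                running the same code with the same seed agrees on the pick.
        """
        total = sum(stakes.values())
        # Map the seed deterministically to a point in [0, total).
        digest = int.from_bytes(hashlib.sha256(seed).digest(), "big")
        point = (digest / 2**256) * total
        cumulative = 0.0
        for validator, stake in sorted(stakes.items()):
            cumulative += stake
            if point < cumulative:
                return validator
        return validator  # guard against floating-point rounding at the end

    # Example: three IIoT participants with unequal stakes.
    print(select_validator({"plant-A": 50.0, "plant-B": 30.0, "gateway-C": 20.0},
                           b"previous-block-hash"))

Selecting by stake rather than by computational work is what makes proof-of-stake far less energy-hungry than proof-of-work, which is the sense in which the abstract calls the mechanism sustainable.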
44. Constructing a diagnostic model of the caries microbiota based on a microbiome novelty index.
- Author
-
孙雁斐, 卢洁, 杨加震, 刘育含, 刘璐, 曾飞, 牛玉芬, 董磊, and 杨芳
- Subjects
RECEIVER operating characteristic curves ,SPECIES diversity ,SEARCH engines ,CHINESE people ,RIBOSOMAL RNA - Abstract
Copyright of West China Journal of Stomatology is the property of Sichuan University, West China College of Stomatology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
45. A comprehensive study and review of tuning the performance on database scalability in big data analytics.
- Author
-
Sundarakumar, M.R., Mahadevan, G., Natchadalingam, R., Karthikeyan, G., Ashok, J., Manoharan, J. Samuel, Sathya, V., and Velmurugadass, P.
- Subjects
- *
DATABASES , *BIG data , *DATA libraries , *PYTHON programming language , *BATCH processing , *ELECTRONIC data processing - Abstract
In the modern era, digital data processing with a huge volume of data from the repository is challenging because of the variety of data formats and the extraction techniques available. The accuracy and speed of data processing on larger networks, even with modern tools, limit how quickly results can be obtained. The major problems of data extraction from a repository are locating the data and handling dynamic changes in the existing data. Even though many researchers have created different tools and algorithms for processing those data from the warehouse, they have not given accurate results and suffer from high latency, largely because of batch processing over large networks. The performance and scalability of the database therefore have to be tuned with powerful distributed frameworks and programming languages so that the latest real-time applications can process huge datasets over the network. In big data analytics, data processing has been done effectively using the modern tools Hadoop and Spark. Moreover, a language such as Python provides solutions based on the concepts of MapReduce and erasure coding, but these approaches still face challenges and limitations with huge datasets on network clusters. This review paper deals with Hadoop and Spark features, as well as their challenges and limitations across criteria such as file size, file formats, and scheduling techniques. A detailed survey of the challenges and limitations that occur during the processing phase in big data analytics is presented, together with solutions based on selecting appropriate languages, techniques, and modern tools. The paper offers guidance to researchers working in big data analytics on improving the speed of data processing with proper algorithms over digital data in huge repositories. [ABSTRACT FROM AUTHOR] (A minimal illustrative sketch of the MapReduce pattern follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
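Entry 45 above refers to MapReduce-style processing in Python alongside Hadoop and Spark. The snippet below is a toy, single-machine sketch of the map-shuffle-reduce pattern (a word count) that those frameworks distribute across a cluster; it illustrates the concept and is not a Hadoop or Spark job.

    from collections import defaultdict
    from itertools import chain

    def map_phase(record):
        """Map step: emit (word, 1) pairs for one input record."""
        return [(word.lower(), 1) for word in record.split()]

    def shuffle(pairs):
        """Shuffle step: group values by key, as a cluster shuffle would."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        """Reduce step: aggregate the grouped values for each key."""
        return {key: sum(values) for key, values in groups.items()}

    records = ["big data needs scalable processing",
               "spark and hadoop process big data"]
    pairs = chain.from_iterable(map_phase(r) for r in records)
    print(reduce_phase(shuffle(pairs)))   # {'big': 2, 'data': 2, ...}

In Hadoop or Spark the same three steps run in parallel over partitions of the input, which is where the scheduling and file-format concerns discussed in the review come in.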
46. Sleep and physical activity – the dynamics of bi-directional influences over a fortnight
- Author
-
Anu-Katriina Pesonen, Michal Kahn, Liisa Kuula, Topi Korhonen, Leena Leinonen, Kaisu Martinmäki, Michael Gradisar, and Jari Lipsanen
- Subjects
Physical activity ,Sleep duration ,Sleep quality ,Big-data ,Time-series ,Cross-lagged ,Public aspects of medicine ,RA1-1270 - Abstract
Abstract Study objectives: Day-to-next-day predictions between physical activity (PA) and sleep are not well known, although they are crucial for advancing public health through valid sleep and physical activity recommendations. We used Big Data to examine cross-lagged time-series of sleep and PA over 14 days and nights. Methods: Bi-directional cross-lagged autoregressive pathways over 153,154 days and nights from 12,638 Polar watch users aged 18–60 years (M = 40.1, SD = 10.1; 44.5% female) were analyzed with random-intercept cross-lagged panel modeling (RI-CLPM). We tested the effects of moderate-to-vigorous physical activity (MVPA) vs. high-intensity PA (vigorous, VPA) on sleep duration and quality, and vice versa. Results: Within-subject results showed that more minutes spent in VPA the previous day were associated with shorter sleep duration the next night, whereas no effect was observed for MVPA. Longer sleep duration the previous night was associated with less MVPA but more VPA the next day. Neither MVPA nor VPA was associated with the subsequent night's sleep quality, but better sleep quality predicted more MVPA and VPA the next day. Conclusions: Sleep duration and PA are bi-directionally linked, but only for vigorous physical activity. More time spent in VPA shortens sleep the next night, yet longer sleep duration increases VPA the next day. The results imply that a 24-h framing for the interrelations of sleep and physical activity is not sufficient – the dynamics can extend even further, and are activated specifically for the links between sleep duration and vigorous activity. The results challenge the view that sleep quality can be improved by increasing the amount of PA. Yet, better sleep quality can result in more PA the next day. (A minimal illustrative sketch of a cross-lagged analysis follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
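Entry 46 above analyses bi-directional, day-to-day links between sleep and physical activity with cross-lagged panel modelling. Below is a minimal sketch of the core idea, lag-1 regressions in both directions, run on simulated data for a single person; the series and coefficients are invented for illustration, and this is not the authors' random-intercept model over 12,638 users.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated daily series: sleep duration (hours) and vigorous PA (minutes).
    # The generating coefficients below are made up, not taken from the study.
    n_days = 14
    sleep = np.empty(n_days)
    pa = np.empty(n_days)
    sleep[0], pa[0] = 7.5, 30.0
    for t in range(1, n_days):
        pa[t] = 10 + 0.5 * pa[t - 1] + 2.0 * sleep[t - 1] + rng.normal(0, 5)
        sleep[t] = 4 + 0.4 * sleep[t - 1] - 0.02 * pa[t - 1] + rng.normal(0, 0.5)

    def lag1_regression(y, x_other):
        """Regress y[t] on y[t-1] (autoregressive) and x_other[t-1] (cross-lagged)."""
        X = np.column_stack([np.ones(len(y) - 1), y[:-1], x_other[:-1]])
        coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        return coef  # [intercept, autoregressive, cross-lagged]

    print("sleep[t] ~ sleep[t-1], PA[t-1]:", lag1_regression(sleep, pa))
    print("PA[t]    ~ PA[t-1], sleep[t-1]:", lag1_regression(pa, sleep))

With panel data from many users, the same two cross-lagged paths are estimated jointly while stable between-person differences are separated from the day-to-day (within-person) dynamics, which is what the random-intercept extension adds.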
47. Digging Deeper: The Role of Big Data Analytics in Geotechnical Investigations
- Author
-
Vani V. Divya, Helena Raj Vijilius, Dutt Amit, Raveendranath Reshma, Tyagi Lalit Kumar, Almusawi Muntather, and Yadav Dinesh Kumar
- Subjects
big-data ,geotechnical engineering ,artificial intelligence ,machine learning ,Environmental sciences ,GE1-350 - Abstract
This review paper explores the transformative role of big data analytics in geotechnical engineering, moving past conventional methods to a data-driven paradigm that enhances decision-making and precision in subsurface investigations. By integrating big data analytics with geotechnical engineering, this study demonstrates significant improvements in site characterization, risk assessment, and construction methodologies. The research underscores the potential of big data to revolutionize geotechnical investigations through improved prediction models, risk management, and sustainable engineering practices, highlighting the critical role of big data in addressing global warming and ozone depletion. Through the examination of numerous case studies and AI-driven methodologies, this paper sheds light on the efficiency gains and environmental benefits attainable in geotechnical engineering.
- Published
- 2024
- Full Text
- View/download PDF
48. Some natural hypomethylating agents in food, water and environment are against distribution and risks of COVID-19 pandemic: Results of a big-data research
- Author
-
Mohammad Reza Besharati, Mohammad Izadi, and Alireza Talebpour
- Subjects
covid-19 ,nutrition ,risk ,survey ,hypomethylating agents ,big-data ,Therapeutics. Pharmacology ,RM1-950 - Abstract
Objective: This study analyzes the effects of lifestyle, nutrition, and diets on the status and risks of apparent (symptomatic) COVID-19 infection in Iranian families. Materials and Methods: A relatively extensive questionnaire survey was conducted on more than 20,000 Iranian families (residing in more than 1000 different urban and rural areas in the Islamic Republic of Iran) to collect the big data of COVID-19 and develop a lifestyle dataset. The collected big data included the records of lifestyle effects (e.g. nutrition, water consumption resources, physical exercise, smoking, age, gender, health and disease factors, etc.) on the status of COVID-19 infection in families (i.e. residents of homes). Therefore, an online self-reported questionnaire was used in this retrospective observational study to analyze the effects of lifestyle factors on the COVID-19 risks. The data collection process spanned from May 10, 2020 to March 19, 2021 by selecting 132 samples from more than 40 different social network communities. Results: The research results revealed that food and water sources, which contain some natural hypomethylating agents, mitigated the risks of apparent (symptomatic) COVID-19 infection. Furthermore, the computations on billions of permutations of nutrition conditions and dietary regime items, based on the data collected from people’s diets and infection status, showed that there were many dietary conditions alleviating the risks of apparent (symptomatic) COVID-19 infection by 90%. However, some other diets tripled the infection risk. Conclusion: Some natural hypomethylating agents in food, water, and environmental resources are against the spread and risks of COVID-19.
- Published
- 2022
- Full Text
- View/download PDF
49. The Tendency of Reputation as Psychological Factors Influencing Cycling Performance
- Author
-
Bon-jae Ku and Young-kil Yun
- Subjects
performace ,psychological factors ,reputation ,big-data ,topic modeling analysis ,cycling ,Sports ,GV557-1198.995 - Abstract
PURPOSE This study was conducted to estimate the tendency of psychological factors influencing cycling performance by analyzing the characteristic factors of athlete reputation in news big-data. METHODS To explore the psychological factors influencing cycling performance, an open-ended questionnaire was administered to 82 cyclists, and inductive content analysis was performed. Overall, 89,520 news articles were collected through BIGKinds, and the forming factors of athlete reputation were derived through LDA topic modeling analysis and inductive categorization. Through regression analysis, the time-series tendency of the athlete-reputation factors was calculated. Finally, the tendency of psychological factors to influence cycling performance was estimated based on the results derived earlier in the study. RESULTS The psychological factors influencing cycling performance were found to be emotion control, trust capital, cognitive control, motivation, and communication with the coach. The forming factors of athlete reputation were found to be reporting of sports events, infrastructure creation, performance analysis, moral issues, social-environmental changes, and sports gossip. The time-series tendency of the forming factors of athlete reputation was found to include the categories of Hot, Warm, Cool, and Cold. The psychological factors influencing cycling performance are estimated to expand to exercise performance and moral intelligence. CONCLUSIONS The results of this study suggest that the discussion of psychological factors influencing cycling performance extends not only to exercise performance but also to moral intelligence, reflecting the socio-cultural context in the discussion of performance. (A minimal illustrative sketch of LDA topic modeling follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
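Entry 49 above derives athlete-reputation factors from 89,520 news articles using LDA topic modeling. The snippet below is a minimal sketch of LDA on a toy English corpus with scikit-learn; the documents, topic count, and parameters are stand-ins and do not reproduce the study's Korean-language BIGKinds analysis.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy stand-in for the news corpus.
    docs = [
        "cyclist wins national track race after a strong season",
        "federation invests in new velodrome infrastructure and training facilities",
        "doping allegations raise moral questions for the national team",
        "coach praises the rider's motivation and emotional control before the race",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(X)

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
        print(f"topic {k}: {', '.join(top_terms)}")

Each fitted topic is a weighted list of terms; in the study, such topics were then categorized inductively into the reputation-forming factors reported in the abstract.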
50. Sticky PDMP samplers for sparse and local inference problems.
- Author
-
Bierkens, Joris, Grazzi, Sebastiano, Meulen, Frank van der, and Schauer, Moritz
- Abstract
We construct a new class of efficient Monte Carlo methods based on continuous-time piecewise deterministic Markov processes (PDMPs) suitable for inference in high-dimensional sparse models, i.e. models for which there is prior knowledge that many coordinates are likely to be exactly 0. This is achieved with the fairly simple idea of endowing existing PDMP samplers with “sticky” coordinate axes, coordinate planes, etc. Upon hitting those subspaces, an event is triggered during which the process sticks to the subspace, thereby spending some time in a sub-model. This results in non-reversible jumps between different (sub-)models. While we show that PDMP samplers in general can be made sticky, we mainly focus on the Zig-Zag sampler. Compared to the Gibbs sampler for variable selection, we heuristically derive favourable dependence of the Sticky Zig-Zag sampler on dimension and data size. The computational efficiency of the Sticky Zig-Zag sampler is further established through numerical experiments where both the sample size and the dimension of the parameter space are large. [ABSTRACT FROM AUTHOR] (A minimal sketch of the underlying Zig-Zag dynamics follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
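Entry 50 above builds on piecewise deterministic Markov processes such as the Zig-Zag sampler. As background, here is a minimal one-dimensional Zig-Zag sampler for a standard normal target, where the switching rate is max(0, v*x) and event times are drawn exactly by inverting the integrated rate; the paper's "sticky" mechanism for sparse models is not implemented in this sketch.

    import numpy as np

    def zigzag_1d(n_events=10_000, x0=0.0, v0=1.0, seed=0):
        """Minimal 1-D Zig-Zag sampler targeting the standard normal.

        With potential U(x) = x**2 / 2, the switching rate is max(0, v * x);
        along the line x + v*t (v = +/-1) the integrated rate inverts in
        closed form, so event times are exact. Returns the event skeleton.
        """
        rng = np.random.default_rng(seed)
        x, v, t = x0, v0, 0.0
        times, positions = [t], [x]
        for _ in range(n_events):
            a = v * x                                        # rate at the current point
            e = rng.exponential()                            # Exp(1) threshold
            tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)   # time to next switch
            t += tau
            x += v * tau                                     # deterministic move
            v = -v                                           # flip the velocity
            times.append(t)
            positions.append(x)
        return np.array(times), np.array(positions)

    times, positions = zigzag_1d()
    # Estimate moments by integrating along the linear segments (the skeleton
    # points themselves over-represent the tails and should not be averaged).
    dt = np.diff(times)
    x_lo, x_hi = positions[:-1], positions[1:]
    mean_x = np.sum(dt * (x_lo + x_hi) / 2.0) / times[-1]
    mean_x2 = np.sum(dt * (x_lo**2 + x_lo * x_hi + x_hi**2) / 3.0) / times[-1]
    print(mean_x, mean_x2)   # close to 0 and 1 for the standard normal

The sticky version described in the paper would additionally freeze a coordinate for a random time whenever its trajectory hits zero, which is what produces exact zeros and non-reversible jumps between sub-models.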