Search Results
1,008 results
2. Artificial Intelligence for Knowledge Visualization: An Overview
- Author
- Rozić, Robert, Slišković, Robert, Rosić, Marko, Vasić, Daniel, editor, and Kundid Vasić, Mirela, editor
- Published
- 2023
3. Augmented Virtual Reality in Data Visualization
- Author
- Alves, Pedro, Portela, Filipe, Guarda, Teresa, editor, Portela, Filipe, editor, and Augusto, Maria Fernanda, editor
- Published
- 2022
4. Experiences in WordNet Visualization with Labeled Graph Databases
- Author
- Caldarola, Enrico Giacinto, Picariello, Antonio, Rinaldi, Antonio M., Fred, Ana, editor, Dietz, Jan L.G., editor, Aveiro, David, editor, and Liu, Kecheng, editor
- Published
- 2016
5. Research on product paper packaging container automation system based on computer big data
- Author
- Yang Junjun
- Subjects
Information visualization, Data visualization, Information engineering, Computer science, Container (abstract data type), Big data, Process automation system, Automation, Manufacturing engineering, Visualization - Abstract
China’s packaging industry has formed a packaging big data system along three lines: implementing information engineering, building a collaborative symbiosis network platform for packaging information resources, and building an industrial-chain knowledge map centered on the product chain. Packaging big data visualization design draws on modes of thinking such as big data information acquisition, hierarchical division and management, problem and audience analysis, and visual transformation, and adopts technologies such as text information visualization, multi-dimensional information visualization, and hierarchical relationship visualization to present packaging industry data visually. This paper analyzes network data, extracts keywords related to the key elements, and then analyzes the construction of a product paper packaging container automation system based on the keyword extraction algorithm. On the one hand, it emphasizes the importance of big data in the packaging design of online shopping commodities; on the other, it explores new ideas and methods for such design through the application of big data.
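The abstract does not specify the keyword extraction algorithm it uses; as a rough, hypothetical illustration of the idea, a minimal term-frequency extractor might look like the following sketch (the stop-word list and `extract_keywords` are our own illustrative assumptions, not the paper's method):

```python
import re
from collections import Counter

# Illustrative stop-word list; a real system would use a fuller one.
STOP_WORDS = {"the", "of", "and", "a", "in", "to", "is", "for", "on", "this"}

def extract_keywords(text, top_n=5):
    """Rank candidate keywords by raw term frequency (a simple sketch)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]

sample = ("Packaging big data visualization uses big data acquisition, "
          "hierarchical division, and visual transformation of packaging data.")
print(extract_keywords(sample, top_n=3))
```

Real keyword extractors typically add TF-IDF weighting or phrase detection on top of this kind of frequency counting.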
- Published
- 2021
6. Auditor judgment and decision-making in big data environment: a proposed research framework
- Author
- Hamdam, Adli, Jusoh, Ruzita, Yahya, Yazkhiruni, Abdul Jalil, Azlina, and Zainal Abidin, Nor Hafizah
- Published
- 2022
7. Challenges and trends about smart big geospatial data: A position paper
- Author
- Luis M. Vilches-Blázquez, Andrés Tello, and Victor Saquicela
- Subjects
Spatial contextual awareness, Geospatial analysis, Computer science, Big data, GeoSPARQL, Data science, Linked Data Platform, Visualization, Data visualization, RDF - Abstract
Currently, we are witnessing exponential growth in the amount of data being generated and captured at multiple locations, a trend that will continue over the next years. We have therefore envisioned a scenario in which many objects will be referencing or generating location information, making the need to appropriately manage geospatial data evident. In this paper, we present our vision for an integral Geo Linked Data platform, pointing out the current limitations and challenges in the GeoRDFization, Storage, Query Federation, and Visualization of data with an inherent spatial context.
- Published
- 2017
8. A survey paper on big data analytics
- Author
- B. Bharathi and M. D. Anto Praveena
- Subjects
Data collection, Computer science, Big data, Information technology, Data science, Data warehouse, World Wide Web, Data visualization, Analytics, Data quality, The Internet - Abstract
In recent years, internet applications and communication have seen substantial development in the field of Information Technology. These applications continually generate data of large size, great variety, and genuinely complex, multifaceted structure, called big data. As a consequence, we are now in the era of massive automatic data collection, systematically obtaining many measurements without knowing which will be relevant to the phenomenon of interest. For example, e-commerce transactions include activities such as online buying, selling, and investing, and thus generate data that are high-dimensional and complex in structure. Traditional data storage techniques are not adequate to store and analyze such huge volumes of data, so many researchers work on dimensionality reduction of big data for more effective analytics and data visualization. The aim of this survey paper is to provide an overview of big data analytics, its issues and challenges, and the various technologies related to Big Data.
- Published
- 2017
9. Text Mining and Visualization of Time Series Data Utilizing Big Data.
- Author
- Soo-Tai Nam, Seong-Yoon Shin, and Chan-Yong Jin
- Subjects
TEXT mining, BIG data, TIME series analysis, DATA visualization, DATABASE management, DATA analysis - Abstract
Recently, big data utilization has attracted wide interest across a variety of industrial fields. Big data is the art of extracting value from large sets of structured and unstructured data, beyond the capabilities of traditional database management tools, and analyzing the results. Big data is often characterized by the three Vs: volume, velocity, and variety. Using the R language, a big data analysis tool, various analysis results can be expressed through visualization functions applied to pre-processed unstructured data. The data used in this research were 104 papers from January to May 2021 and 108 papers from September 2022 to January 2023, drawn from the papers published by the Korea Institute of Information and Communication Engineering and analyzed comparatively. The analysis showed that the highest-frequency term was Data (2,038 occurrences). We then discuss the limitations and practical implications of the study based on the analysis results. [ABSTRACT FROM AUTHOR]
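The study's period-by-period term-frequency comparison can be illustrated with a small Python sketch (the mini-corpus and function name are hypothetical stand-ins; the original analysis was performed in R):

```python
from collections import Counter

# Hypothetical mini-corpus: (period, title) pairs standing in for the two
# batches of papers compared in the study.
papers = [
    ("2021", "Big data visualization of sensor data"),
    ("2021", "Deep learning for data analysis"),
    ("2023", "Data mining of time series data"),
    ("2023", "Visualization of network data"),
]

def term_frequency_by_period(docs):
    """Count word occurrences separately for each publication period."""
    freq = {}
    for period, title in docs:
        freq.setdefault(period, Counter()).update(title.lower().split())
    return freq

freq = term_frequency_by_period(papers)
print(freq["2021"]["data"], freq["2023"]["data"])  # → 3 3
```

Comparing the two Counters then shows which terms gained or lost frequency between the periods, which is the shape of the comparison the abstract describes.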
- Published
- 2023
10. BIG DATA VISUALIZATION IN DIGITAL MARKETPLACES – A SYSTEMATIC REVIEW AND FUTURE DIRECTIONS.
- Author
- Kumar, Anal and Ali, ABM Shawkat
- Subjects
BIG data, DATA visualization, MARKETPLACES, SOFTWARE visualization, CONSUMERS, SELF-efficacy - Abstract
As the digital landscape continues to evolve, digital marketplaces have become critical platforms for businesses to connect with customers and thrive in the highly competitive market. Amidst this growing complexity and influx of data, big data visualization has emerged as a powerful tool for extracting meaningful insights and supporting predictive analysis in digital marketplaces. Digital marketplaces have revolutionized the way businesses operate, creating vast streams of data generated by various transactions, customer interactions, and market dynamics. Navigating this data deluge presents a challenge, as businesses strive to uncover valuable insights that can inform strategic decision-making. Big data visualization has emerged as a powerful approach to transforming complex data into visually appealing representations that enable better understanding, analysis, and utilization of information in digital marketplaces. This paper explores the significance of big data visualization in the context of digital marketplaces. It highlights the growing importance of visualization techniques to unlock the hidden potential of massive datasets and facilitate data-driven decision-making. By employing innovative visualization tools and technologies, businesses can gain a comprehensive view of their marketplace, identify patterns, and extract actionable insights to optimize their operations. Additionally, the paper highlights the benefits of big data visualization for stakeholders involved in digital marketplaces. It emphasizes how visualization empowers decision-makers to identify emerging trends, understand customer behavior, and make data-informed strategic choices. Moreover, it addresses the collaborative aspect of visualization, enabling teams to share insights, foster innovation, and drive performance improvements across the marketplace ecosystem.
This paper offers a multidisciplinary overview of the research problems and developments in big data and the tools and strategies used for its display. The primary goal is to give creative solutions for problems relating to the present state of big data visualization and to highlight obstacles in visualization approaches for existing big data. Complex data visualization design projects frequently require collaboration between individuals with various visualization-related talents. For instance, many teams combine designers who produce fresh visualization concepts with engineers who put the resulting visualization software into practice. The authors pinpoint gaps that present difficulties for designer-developer teams trying to produce new data visualizations. Data for this study came from papers published between 2010 and 2022, obtained using a comprehensive literature review procedure covering 12 years. Several publications from a variety of sources were selected using the specified inclusion, exclusion, and quality criteria. The focus is primarily on research regarding big data visualization in the context of digital marketplaces and the methods used for data visualization. The current study compiles and arranges the currently available published literature on big data visualization in digital marketplaces. The findings indicate that the number of papers published annually has risen and that there are several studies on big data in digital marketplaces. The study will aid academics in understanding the research that is now accessible on big data in digital marketplaces and will ultimately be utilized as support in other investigations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. Leveraging Visualization and Machine Learning Techniques in Education: A Case Study of K-12 State Assessment Data.
- Author
- Taylor, Loni, Gupta, Vibhuti, and Jung, Kwanghee
- Subjects
DATA-based decision making in education, ARTIFICIAL intelligence, DATA visualization, MACHINE learning, MICROSOFT Azure (Computing platform), INDIVIDUALIZED instruction - Abstract
As data-driven models gain importance in driving decisions and processes, it has become increasingly important to visualize data with both speed and accuracy. A massive volume of data is presently generated in the educational sphere from various learning platforms, tools, and institutions. The visual analytics of educational big data has the capability to improve student learning, develop strategies for personalized learning, and improve faculty productivity. However, there are limited advancements in the education domain for data-driven decision making that leverage the recent advancements in the field of machine learning. Some recent tools, such as Tableau, Power BI, the Microsoft Azure suite, and Sisense, leverage artificial intelligence and machine learning techniques to visualize data and generate insights from them; however, their applicability in educational settings is limited. This paper focuses on leveraging machine learning and visualization techniques to demonstrate their utility through a practical implementation using K-12 state assessment data compiled from the institutional websites of the States of Texas and Louisiana. Effective modeling and predictive analytics are the focus of the sample use case presented in this research. Our approach demonstrates the applicability of web technology in conjunction with machine learning to provide a cost-effective and timely solution to visualize and analyze big educational data. Additionally, ad hoc visualization provides contextual analysis in areas of concern for education agencies (EAs). [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Quadruple Helix Stakeholders' Interest in the Global Landscape of AI.
- Author
- Kaivo-oja, Jari and Ainamo, Antti
- Subjects
STAKEHOLDERS, BIG data, ARTIFICIAL intelligence, CIVIL society, DATA visualization - Abstract
A paucity of Big Data analysis has characterized research on the "Quadruple Helix"; that is, interactions across Industries (also called Business and Industrial), Government (Law & Government), Academia (Science), and Civil Society (People and Society). This empirical study elaborates on the global interest that these four groups of stakeholders have expressed on Artificial Intelligence (AI). This paper, analysing Big Data sourced from Google Trend Index over five years, describes and unpacks processes of transformation of interest in what we call the global landscape, as concerns AI. We present visualizations, comparisons, and statistical analyses, as well as classification, of how global interest levels as to AI differ or are the same across the Quadruple Helix stakeholder groups. Our findings represent a novel perspective on how the global landscape and innovation governance logic as to AI are configured, providing implications for governance of interactions across national innovation systems and civil society, in particular. [ABSTRACT FROM AUTHOR]
- Published
- 2023
13. An Information System Supporting Insurance Use Cases by Automated Anomaly Detection.
- Author
- Reis, Thoralf, Kreibich, Alexander, Bruchhaus, Sebastian, Krause, Thomas, Freund, Florian, Bornschlegl, Marco X., and Hemmje, Matthias L.
- Subjects
INFORMATION storage & retrieval systems, ANOMALY detection (Computer security), BIG data, INSURANCE companies, INSURANCE, ARTIFICIAL intelligence - Abstract
The increasing availability of vast quantities of data from various sources significantly impacts the insurance industry, although this industry has always been data driven. It accelerates manual processes and enables new products or business models. On the other hand, it also burdens insurance analysts and other users who need to cope with this development in parallel to other global changes. A novel information system (IS) for artificial intelligence (AI)-supported big data analysis, introduced within this paper, shall help to overcome user overload and to empower human data analysts in the insurance industry. The IS research's focus lies neither in novel algorithms nor datasets but in concepts that combine AI and big data analysis for synergies, such as usability enhancements. For this purpose, this paper systematically designs and implements an information system conforming to the AI2VIS4BigData reference model to automatically detect anomalies and increase its users' confidence and efficiency. Practical relevance is assured by an interview with an insurance analyst to verify the demand for the developed system and by deriving all requirements from two insurance industry user stories. A core contribution is the introduction of the IS. Another significant contribution is an extension of the AI2VIS4BigData service-based architecture and user interface (UI) concept toward AI and machine learning (ML)-based user empowerment and data transformation. The implemented prototype was applied to synthetic data to enable the evaluation of the system. The quantitative and qualitative evaluations confirm the system's usability and applicability to the insurance domain, yet reveal the need for improvements toward bigger quantities of data and further evaluations with a more extensive user group. [ABSTRACT FROM AUTHOR]
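The paper does not publish its detection algorithm; as a rough, hypothetical illustration of automated anomaly detection on insurance-style numeric data, a simple z-score rule (our own assumption, not the system's actual method) could be sketched as:

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    Note: with the population stdev of a small sample, z-scores are bounded,
    so a modest threshold is used here.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical claim amounts; the one inflated claim should be flagged.
claims = [120.0, 130.0, 125.0, 118.0, 122.0, 5000.0]
print(zscore_anomalies(claims))  # → [5000.0]
```

Production systems would typically use more robust detectors (e.g. median-based or model-based), but the input/output shape is the same: raw records in, a flagged subset out for the analyst to review.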
- Published
- 2023
14. Bridging Text Visualization and Mining: A Task-Driven Survey.
- Author
- Liu, Shixia, Wang, Xiting, Collins, Christopher, Dou, Wenwen, Ouyang, Fangxin, El-Assady, Mennatallah, Jiang, Liu, and Keim, Daniel A.
- Subjects
DATA mining, DATA visualization, DATA analysis, ALGORITHMS, BIG data - Abstract
Visual text analytics has recently emerged as one of the most prominent topics in both academic research and the commercial world. To provide an overview of the relevant techniques and analysis tasks, as well as the relationships between them, we comprehensively analyzed 263 visualization papers and 4,346 mining papers published between 1992 and 2017 in two fields: visualization and text mining. From the analysis, we derived around 300 concepts (visualization techniques, mining techniques, and analysis tasks) and built a taxonomy for each type of concept. The co-occurrence relationships between the concepts were also extracted. Our research can be used as a stepping-stone for other researchers to 1) understand a common set of concepts used in this research topic; 2) facilitate the exploration of the relationships between visualization techniques, mining techniques, and analysis tasks; 3) understand the current practice in developing visual text analytics tools; 4) seek potential research opportunities by narrowing the gulf between visualization and mining techniques based on the analysis tasks; and 5) analyze other interdisciplinary research areas in a similar way. We have also contributed a web-based visualization tool for analyzing and understanding research trends and opportunities in visual text analytics. [ABSTRACT FROM AUTHOR]
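The co-occurrence extraction the survey describes amounts to counting, for each pair of concepts, how many papers mention both; a toy sketch with made-up annotations (not the authors' pipeline or data) might look like:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper concept annotations, standing in for the ~300
# concepts the survey derived.
papers = [
    {"word cloud", "topic modeling"},
    {"word cloud", "sentiment analysis"},
    {"topic modeling", "word cloud"},
]

def cooccurrence(annotated):
    """Count how often each pair of concepts appears in the same paper."""
    pairs = Counter()
    for concepts in annotated:
        pairs.update(frozenset(p) for p in combinations(sorted(concepts), 2))
    return pairs

co = cooccurrence(papers)
print(co[frozenset({"word cloud", "topic modeling"})])  # → 2
```

The resulting pair counts are exactly the kind of edge weights a co-occurrence graph or taxonomy browser can be built on.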
- Published
- 2019
15. Tourism forecasting research: a bibliometric visualization review (1999–2022).
- Author
- Wu, XiaoXi, Shi, Jinlian, and Xiong, Haitao
- Subjects
TOURISM research, DATA visualization, COVID-19 pandemic, BIBLIOMETRICS, REVENUE management - Abstract
- Published
- 2024
16. An exploratory study on visualizing big data in the internet of things.
- Author
- Rai, Aditi, Misra, Medha, and Sar, Sumit Kumar
- Subjects
BIG data, VISUAL analytics, DATA mining, DEEP learning, DATA visualization, INTERNET of things - Abstract
As the tech industry continues to embrace the Internet of Things (IoT), a multitude of wireless devices are being developed to track various infrastructures. These devices generate vast amounts of statistics from domains such as medicine, supply chain, power, agricultural analytics and intelligence, building automation systems (including HVAC), and similar data-producing industries [1]. To effectively utilize this data and facilitate strategic decision-making, big data techniques are crucial in IoT processes. They serve as valuable instruments for real-time data visualization, enabling the extraction of useful information. This paper provides an extensive analysis of the benefits of big data visualization for IoT approaches, along with related applications, software, and techniques. With a focus on visual analytics, our work situates data visualization as part of the visual analysis phase. It reviews the available tools for data visualization and provides practical guidelines for them, taking into account the specific requirements of each individual case. Although big data methods are applied across various IoT domains, the paper delves into the challenges of visualization and the influence of big data on shaping the IoT landscape. The article includes a study of existing works to establish its foundation; while it does not present any specific new findings, it gives an overview of the progress made thus far in big data visualization and of the application of deep learning in the sphere of IoT. The paper also highlights big data's prominent role in IoT visualization, focusing on the key concepts of real-time visualization of big data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. A systematic review of data-driven approaches to fault diagnosis and early warning.
- Author
- Jieyang, Peng, Kimmig, Andreas, Dongkun, Wang, Niu, Zhibin, Zhi, Fan, Jiahai, Wang, Liu, Xiufeng, and Ovtcharova, Jivka
- Subjects
FAULT diagnosis, DEEP learning, BIG data, MANUFACTURING processes, EARLY diagnosis, MECHANICAL engineering, INDUSTRIALIZATION - Abstract
As an important stage of life cycle management, machinery PHM (prognostics and health management), an emerging subject in mechanical engineering, has seen a huge amount of research. Here the authors present a comprehensive overview that details previous and current efforts in PHM from an industrial big data perspective. The authors first analyze the historical development of industrial big data and its distinction from big data of other domains and summarize the sources, types, and processing modes of industrial big data. Then, the authors provide an overview of common representation and fusion (data pre-processing) methods of industrial big data. Next, the authors comprehensively review common PHM methods in the data-driven context, focusing on the application of deep learning. Finally, two industrial cases from our previous studies are included in this paper to demonstrate how the PHM technique may facilitate the manufacturing industry. Furthermore, a visual bibliography is developed for displaying current results of PHM in an appropriate theme. The bibliography is open source at "https://mango-hund.github.io/". The authors believe that future research endeavors will require an understanding of this previous work, and our efforts in this paper will make it possible to customize and integrate PHM systems quickly for a variety of applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
18. Research on the application of big data visualization technology in urban road congestion.
- Author
- Guo, Haitao and Xu, Lunhui
- Subjects
BIG data, INTELLIGENT transportation systems, TRAFFIC congestion, DATA visualization, CITY traffic, TRAFFIC flow, MULTISCALE modeling - Abstract
To improve urban road congestion analysis, this paper studies the mechanism of traffic congestion diffusion under the condition that users have real-time traffic information, and analyzes the urban road congestion situation with big data visualization technology. According to the characteristics of traffic flow propagation, the paper conducts multi-granularity abstraction and multi-scale modeling of node-intersection-link-network for the complex and dynamic traffic congestion process, and establishes an improved SIS virus propagation model for traffic congestion propagation. In addition, the paper uses state transition probabilities to construct an interactive dynamic model of traffic congestion propagation and early warning information propagation in a multi-layer network. The experimental results show that the big data visualization technology introduced in this paper can play an important role in analyzing urban road congestion. [ABSTRACT FROM AUTHOR]
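The abstract's improved SIS propagation model is not specified in detail; a bare-bones discrete-time SIS simulation on a small road network (with assumed parameters: spread probability `beta`, recovery probability `gamma`, synchronous updates) might be sketched as:

```python
import random

def sis_step(congested, neighbors, beta, gamma, rng):
    """One synchronous SIS update: a congested node clears with probability
    gamma; a free node with at least one congested neighbor becomes
    congested with probability beta."""
    nxt = set()
    for node in neighbors:
        if node in congested:
            if rng.random() > gamma:          # stays congested
                nxt.add(node)
        elif any(n in congested for n in neighbors[node]):
            if rng.random() < beta:           # congestion spreads here
                nxt.add(node)
    return nxt

# A tiny 4-intersection road network as an adjacency list.
roads = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
state = {0}  # intersection 0 starts congested
rng = random.Random(42)
for _ in range(10):
    state = sis_step(state, roads, beta=0.6, gamma=0.2, rng=rng)
print(sorted(state))
```

The paper's model layers road coefficients, state-transition probabilities, and a multi-layer warning network on top of this basic SIS dynamic.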
- Published
- 2023
19. Analysis and visualization of project system.
- Author
- Shete, Dhanashri and Khobragade, Prashant
- Subjects
DATA visualization, DATA analytics, DATA science, DATA analysis, COMPUTER science, BIG data - Abstract
In today's world, data analytics is an increasingly popular term in the field of computer science. This paper presents a comprehensive survey of various data analysis tools for connecting with people through their data. Some of the most popular data analysis tools have been chosen, and a comparison has been made on key parameters to determine the best tool in the data science industry. These tools can use huge collections of data to analyze, predict, and deliver information for bodies such as countries, higher-education institutions, governments, universities, and schools. The tools are compared on parameters such as cost, data handling capabilities, graphical capabilities, big data support, and other considerations. The comparative study is based on the authors' own experience with the various data analysis tools. [ABSTRACT FROM AUTHOR]
- Published
- 2023
20. Research Progress of Tumor Big Data Visualization.
- Author
- Chen, Xingyu and Liu, Bin
- Subjects
BIG data, DATA visualization software, DATA visualization, TUMORS - Abstract
Background: As the number of tumor cases significantly increases, so does the quantity of tumor data. The mining and application of large-scale data have promoted the development of tumor big data. In particular, visualization methods for tumor big data can present the key information in a large volume of data and make it easier for the human brain to absorb, so such methods are a key part of the development of tumor big data. Process: This paper first summarizes the connotation, sources, characteristics, and applications of tumor big data, and reviews the current state of research on tumor big data visualization at home and abroad. It then focuses on four mainstream visualization presentation methods for tumor big data, namely the visualization of tumor spatiotemporal data, of tumor hierarchy and network data, of tumor text data, and of multidimensional tumor data, and gives specific application scenarios. After this, the paper introduces the advantages, disadvantages, and scope of use of five data visualization websites and software packages that readers can easily obtain. Finally, the paper analyzes the problems existing in tumor big data visualization, summarizes the visualization methods, and outlines future directions for tumor big data visualization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
21. Enhanced Data Mining and Visualization of Sensory-Graph-Modeled Datasets through Summarization.
- Author
- Hashmi, Syed Jalaluddin, Alabdullah, Bayan, Al Mudawi, Naif, Algarni, Asaad, Jalal, Ahmad, and Liu, Hui
- Subjects
DATA mining, DATA visualization - Abstract
The acquisition, processing, mining, and visualization of sensory data for knowledge discovery and decision support has recently been a popular area of research and exploration. Its usefulness is paramount because of its role in the continuous improvement of healthcare and other related disciplines. As a result, a huge amount of data has been collected and analyzed. These data are made available to the research community in various shapes and formats; their representation and study in the form of graphs or networks is also an area of research on which many scholars are focused. However, the large size of such graph datasets poses challenges in data mining and visualization. For example, knowledge discovery from the Bio–Mouse–Gene dataset, which has over 43 thousand nodes and 14.5 million edges, is a non-trivial job. In this regard, summarizing such large graphs is a useful alternative. Graph summarization aims to provide efficient analysis of such complex and large-sized data, and hence is a beneficial approach. During summarization, all the nodes that have similar structural properties are merged together. In doing so, traditional methods often overlook the importance of personalizing the summary, which would be helpful in highlighting certain targeted nodes. Personalized or context-specific scenarios require a more tailored approach for accurately capturing distinct patterns and trends. Hence, personalized graph summarization aims to acquire a concise depiction of the graph, emphasizing connections that are closer in proximity to a specific set of given target nodes. In this paper, we present a faster algorithm for the personalized graph summarization (PGS) problem, named IPGS; it has been designed to facilitate enhanced and effective data mining and visualization of datasets from various domains, including biosensors.
Our objective is to obtain a compression ratio similar to that of the state-of-the-art PGS algorithm, but in a faster manner. To achieve this, we improve the execution time of the current state-of-the-art approach using weighted locality-sensitive hashing, evaluated through experiments on eight large publicly available datasets. The experiments demonstrate the effectiveness and scalability of IPGS while providing a compression ratio similar to the state-of-the-art approach. In this way, our research contributes to the study and analysis of sensory datasets through the perspective of graph summarization. We also present a detailed study on the Bio–Mouse–Gene dataset, conducted to investigate the effectiveness of graph summarization in the domain of biosensors. [ABSTRACT FROM AUTHOR]
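Graph summarization as described above merges nodes with similar structural properties; a toy sketch of the simplest variant, grouping nodes with identical neighbor sets into supernodes (not the IPGS algorithm itself, which uses weighted locality-sensitive hashing and personalization), could be:

```python
from collections import defaultdict

def summarize_by_neighbors(edges):
    """Group nodes sharing an identical neighbor set into one supernode."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    groups = defaultdict(list)
    for node, neigh in adj.items():
        groups[frozenset(neigh)].append(node)
    return [sorted(g) for g in groups.values() if len(g) > 1]

# In a 4-cycle, 1 and 4 both connect exactly to {2, 3}, and 2 and 3 both
# connect exactly to {1, 4}, so both pairs merge.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(summarize_by_neighbors(edges))  # → [[1, 4], [2, 3]]
```

LSH-based approaches like the one in the paper relax "identical neighbor set" to "similar neighbor set", which is what makes summarization feasible on graphs with millions of edges.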
- Published
- 2024
22. A bibliometric mapping overview of Fintech academic literature between 1984 and 2019.
- Author
- Purnomo, Agung, Asitah, Nur, Khan, Humera Asad Ullah, Mufliq, Achmad, and Rosyidah, Elsa
- Subjects
FINANCIAL technology, BIBLIOMETRICS, COMPUTER science, CRYPTOCURRENCIES, BIG data, BLOCKCHAINS, DATA visualization - Abstract
Fintech is quite important for companies. This paper reviews the status and visual map of Scopus-indexed international Fintech publications from a bibliometric perspective. The research was carried out using bibliometric techniques, with data analysis and visualization performed using the VOSviewer program and Scopus's built-in analysis of search results. The review covers 745 documents published from 1984 to 2019. The study reveals that Lee Kuo Chuen and Bina Nusantara University were the most active individual scientist and affiliated institution in Fintech publication, that Computer Science and Economics were the most common areas of study, and that the United Kingdom was the leading source of dissemination. There were three worldwide group maps of collaborating researchers. To identify the body of knowledge created over thirty-five years of publication, this study constructed a grouping of convergence axes in the Fintech literature: Blockchain, Innovation, Big data, China, Sales, Insurance, Computers, and Cryptocurrencies, abbreviated as the BIBCSICC themes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. A Clustering Visualization Method for Density Partitioning of Trajectory Big Data Based on Multi-Level Time Encoding.
- Author
-
Wei, Boan, Zhang, Jianqin, Hu, Chaonan, and Wen, Zheng
- Subjects
BIG data ,DATA visualization ,PARALLEL algorithms ,K-means clustering ,ENCODING ,VIDEO coding - Abstract
The proliferation of the Internet and the widespread adoption of mobile devices have given rise to an immense volume of real-time trajectory big data. However, a single computer and conventional databases with limited scalability struggle to manage this data effectively, and during visual rendering, issues such as page stuttering and subpar visual outcomes often arise. This paper, founded on a distributed architecture, introduces a multi-level time encoding method using "minutes", "hours", and "days" as fundamental units, yielding a storage model for trajectory data at multiple time scales. Furthermore, building on an improved DBSCAN clustering algorithm integrated with the K-means clustering algorithm, a novel density-based partitioning clustering algorithm is introduced, which incorporates road coefficients to circumvent architectural obstacles, successfully resolving page stuttering and significantly enhancing visualization quality. The results indicate the following: (1) when data is extracted using the units of "minutes", "hours", and "days", the retrieval efficiency of this model is 6.206 times, 12.475 times, and 18.634 times higher, respectively, than that of the original storage model. As the volume of retrieved data increases, the retrieval efficiency of the proposed storage model becomes increasingly superior to that of the original model; under identical experimental conditions, it also outperforms the space–time-coded storage model. (2) At a consistent rendering level, the clustered trajectory data shows a 40% improvement in heat map loading speed compared to the unclustered raw data, with no page stuttering. The heat kernel phenomenon in the heat map was also resolved while enhancing visualization rendering speed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
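The multi-level time encoding described above, with "minutes", "hours", and "days" as fundamental units, can be illustrated with a minimal key scheme: each trajectory point is indexed under three keys of increasing coarseness, so a query can read the coarsest level that covers its range with a single lookup. This sketch is an assumption about the general approach, not the authors' exact storage model:

```python
from datetime import datetime

def time_keys(ts: datetime):
    """Encode a timestamp at three granularities ("days", "hours", "minutes")."""
    return {
        "day": ts.strftime("%Y%m%d"),
        "hour": ts.strftime("%Y%m%d%H"),
        "minute": ts.strftime("%Y%m%d%H%M"),
    }

def store(index, point_id, ts):
    """Write one trajectory point id under all three granularity keys."""
    for level, key in time_keys(ts).items():
        index.setdefault((level, key), []).append(point_id)

index = {}
store(index, "p1", datetime(2023, 5, 1, 8, 30))
store(index, "p2", datetime(2023, 5, 1, 9, 15))
# A "day"-level read returns both points in one key lookup, while an
# "hour"-level read narrows to the points in that hour.
```

The trade-off is threefold write amplification in exchange for range queries that never have to scan finer-grained keys than the request needs.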
24. Real-Time Big Data Analytics and Proactive Traffic Safety Management Visualization System.
- Author
-
Abdel-Aty, Mohamed, Zheng, Ou, Wu, Yina, Abdelraouf, Amr, and Rim, Heesub
- Subjects
BIG data ,DATA visualization ,DATA distribution ,CLOSED-circuit television ,TRANSPORTATION safety measures ,TRAFFIC safety ,ROAD safety measures - Abstract
Big data and data-driven analysis can be utilized in traffic management to improve road safety and the performance of transportation systems. This paper introduces a web-based proactive traffic safety management (PATM) and real-time big data visualization tool, based on an award-winning system that won the US Department of Transportation (USDOT) Solving for Safety Visualization Challenge and was selected as one of the USDOT Safety Data Initiative (SDI) Beta Tools. State-of-the-art research, especially for real-time crash prediction and PATM, is deployed in this study. The system accesses a significant amount of real-time data for data-driven analysis, including traffic data, weather data, and video data from closed-circuit television (CCTV) live streams. Based on these data, multiple modules have been developed, including real-time crash/secondary crash prediction, CCTV-based expedited detection, PATM recommendation, data sharing, and report generation. Both the real-time data and the system outputs are visualized at the front end using interactive maps and various types of figures to represent the data distribution and efficiently reveal hidden patterns. The real-time crash prediction outputs are evaluated against one month of real-world crash data. The comparison indicates excellent prediction performance: when considering spatial-temporal tolerance, the sensitivity and false alarm rate of the prediction results [i.e., high crash potential event (HCPE)] are 0.802 and 0.252, respectively. Current and potential implementations are also discussed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Geospatial big data handling theory and methods
- Author
-
Stephan Winter, James Haworth, Monika Sester, Francesc Antón Castro, Tao Cheng, Christopher Pettit, Alfred Stein, Suzana Dragićević, Bin Jiang, Songnian Li, Arzu Çöltekin, University of Zurich, and Li, Songnian
- Subjects
Visual analytics ,Analytics ,Geospatial analysis ,Group method of data handling ,Big data ,Data visualization ,Computers in Earth Sciences ,Geospatial ,Data handling ,Data science ,Spatial modeling ,Photogrammetry ,Physics and Society ,Computers and Society ,Position paper ,Review - Abstract
Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume and varying format of collected geospatial big data presents challenges in storing, managing, processing, analyzing, visualizing and verifying the quality of data. This has implications for the quality of decisions made with big data. Consequently, this position paper of the International Society for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current developments as well as recommending what needs to be developed further in the near future. Keywords: Big data, Geospatial, Data handling, Analytics, Spatial Modeling, Review
- Published
- 2016
26. Wetland Classification, Attribute Accuracy, and Scale.
- Author
-
Carlson, Kate, Buttenfield, Barbara P., and Qiang, Yi
- Subjects
WETLANDS ,BIG data ,CLASSIFICATION ,DATA visualization ,SPATIAL resolution ,MULTISCALE modeling - Abstract
Quantification of all types of uncertainty helps to establish reliability in any analysis. This research focuses on uncertainty in two attribute levels of wetland classification and creates visualization tools to guide analysis of spatial uncertainty patterns over several scales. A novel variant of confusion matrix analysis compares the Cowardin and Hydrogeomorphic wetland classification systems, identifying areas and types of misclassification for binary and multivariate categories. The specific focus on uncertainty in the paper refers to categorical consistency, that is, agreement between the two classification systems, rather than comparing observed data to ground truth. Consistency is quantified using confusion matrix analysis. Aggregation across progressive focal windows transforms the confusion matrix into a multiscale data pyramid for quick determination of where attribute uncertainty is highly variant, and at what spatial resolutions classification inconsistencies emerge. The focal pyramids summarize precision, recall, and F1 scores to visualize classification differences across spatial scales. Findings show that the F1 scores appear most informative on agreement about wetlands misclassification at both coarse and fine attribute scales. The pyramid organizes multi-scale uncertainty in a single unified framework and can be "sliced" to view individual focal levels of attribute consistency. Results demonstrate how the confusion matrix can be used to quantify the percentage of a study area in which inconsistencies occur reflecting wetland presence and type. The research provides confusion metrics and display tools to focus attention on specific areas of large data sets where attribute uncertainty patterns may be complex, thus reducing land managers' workloads by highlighting areas of uncertainty where field checking might be appropriate, and improving analytics by providing visualization tools to quickly see where such areas occur. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
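The wetland study's focal confusion-matrix analysis rests on standard precision, recall, and F1 computed between two classification systems (consistency, not ground truth). A minimal sketch of that core computation, where the labels and the "wetland" positive class are illustrative rather than taken from the Cowardin or Hydrogeomorphic schemes:

```python
from collections import Counter

def confusion_counts(labels_a, labels_b):
    """Pairwise confusion between two classification systems over the same cells."""
    return Counter(zip(labels_a, labels_b))

def binary_scores(labels_a, labels_b, positive="wetland"):
    """Precision, recall, and F1 treating system A as reference and B as prediction."""
    c = confusion_counts(labels_a, labels_b)
    tp = c[(positive, positive)]
    fp = sum(v for (a, b), v in c.items() if a != positive and b == positive)
    fn = sum(v for (a, b), v in c.items() if a == positive and b != positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Two systems labelling the same four map cells; they disagree on two cells.
a = ["wetland", "wetland", "upland", "wetland"]
b = ["wetland", "upland", "wetland", "wetland"]
p, r, f1 = binary_scores(a, b)
```

Aggregating such scores over progressively larger focal windows, as the paper describes, would simply re-run this computation on the pooled counts within each window to build the multiscale pyramid.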
27. Open Research Issues and Tools for Visualization and Big Data Analytics.
- Author
-
Gahar, Rania Mkhinini, Arfaoui, Olfa, and Hidri, Minyar Sassi
- Subjects
BIG data ,DATA visualization ,DIGITAL technology ,ELECTRONIC data processing - Abstract
The new age of digital growth has marked all fields. This technological evolution has impacted data flows, which have witnessed such rapid expansion over the last decade that traditional data processing can no longer keep up with the flow of massive data. In this context, implementing a big data analytics system becomes crucial to making big data more relevant and valuable. These new opportunities therefore bring new issues in processing very high data volumes, requiring companies to look for big data-specialized solutions. Such solutions rely on techniques for processing these masses of information to facilitate decision-making. Among them is data visualization, which makes big data more intelligible through accurate illustrations that are accessible to all. This paper examines the big data visualization project in terms of its characteristics, benefits, challenges, and issues. The paper also surveys available tools for beginners as well as experienced users. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Information Visualization Design of Web under the Background of Big Data.
- Author
-
Deng, Ran and Ni, Taile
- Subjects
DATA visualization ,WEB design ,DATABASE design ,BIG data ,WEBSITES - Abstract
With the rapid development of the Internet, information on the Internet is growing explosively, and cloud computing and big data analysis technologies based on Internet information have risen accordingly. However, web pages contain not only important information but also noise irrelevant to the subject matter, which seriously affects the accuracy of information extraction; research on web page information extraction technology has therefore emerged as a research hotspot. The quality of web page text information directly affects the accuracy of later information processing and decision-making. If we can accurately evaluate the information of web pages captured from the Internet and classify the extracted pages according to their characteristics, we can improve both the efficiency of information processing and the practical value of the information decision-making system. Starting from practical application requirements and user-friendly operation, this paper studies the information visualization of web design based on big data. Specifically, the system designed in this paper improves the traditional template-based web information extraction method, establishes a web information extraction rule scheme combined with templates, and achieves rule selection and template generation for web information extraction in a visual environment. Finally, a visualization algorithm based on t-SNE verifies the effectiveness of the web page information visualization algorithm designed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Smart Mobility with Big Data: Approaches, Applications, and Challenges.
- Author
-
Lee, Dohoon, Camacho, David, and Jung, Jason J.
- Subjects
BIG data ,ARTIFICIAL intelligence ,MACHINE learning ,DATA visualization - Abstract
Many vehicles are connected to the Internet, and big data are continually created. Various studies have been conducted on the development of artificial intelligence, machine learning technology, and big data frameworks. Analysis of smart mobility big data is essential and helps to address problems that arise as society faces increased future mobility. In this paper, we analyze in detail application issues arising from increased data exchange, such as personal information leakage and data visualization, as well as approaches focusing on analyses exploiting machine learning and architecture research exploiting big data frameworks such as Apache Hadoop, Apache Spark, and Apache Kafka. Finally, future research directions and open challenges are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Biological, chemical, and nutritional food risks and food safety issues from Italian online information sources: Web monitoring, content analysis, and data visualization
- Author
-
Barbara Tiozzo, Laura D'Este, Valentina Rizzoli, Mirko Ruzza, Licia Ravarotto, and Mosè Giaretta
- Subjects
Big Data ,content analysis ,Food Safety ,food risks ,Computer science ,web monitoring ,Health Informatics ,Data visualization ,risk communication ,Surveys and Questionnaires ,Humans ,Social media ,Internet ,Descriptive statistics ,Communication ,Food safety ,Data science ,Nutrition Assessment ,Italy ,Web content ,online information sources - Abstract
Background With rapid evolution of the internet and web 2.0 apps, online sources have become one of the main channels for most people to seek food risk information. Thus, it would be compelling to analyze the coverage of online information sources related to biological, chemical, and nutritional food risks, and related safety issues, to understand the type of content that online readers are exposed to, possibly influencing their perceptions. Objective The aim of this study was to identify the types of online sources that are predominantly covering this theme, and the topics that have received the most attention in terms of coverage and engagement on social media. Methods We performed an analysis of big data related to food risks by combining web monitoring techniques, content analysis, and data visualization of a large amount of unstructured text. Using a dictionary-based approach, a web monitoring app was instructed to automatically collect web content referring to the food risk and safety field. Data were retrieved from March 2017 to February 2018. The validated corpus (N=12,163) was subject to automatic and manual content analysis. Results were combined with descriptive statistics extracted from Web-Live and processed with Qlik Sense. Results Nutritional risks and news about outbreaks, controls, and alerts were the most widely covered topics. Thematic sources devoted major attention to nutritional topics, whereas national sources covered food risks, especially during food emergencies. Regarding engagement on social media, readers’ interest was higher for nutritional topics and animal welfare. Although traditional sources still publish a great amount of content related to food risks and safety, new mediators have emerged as alternative sources for food risk information. 
Conclusions This mixed methodological approach was demonstrated to be a useful means for obtaining an accurate characterization of the online discourse on food risks, and can provide insight into how the monitored sources contribute to the process of risk communication.
- Published
- 2020
31. A quantitative and text-based characterization of big data research.
- Author
-
Gupta, Vedika, Singh, Vivek Kumar, Ghose, Udayan, Mukhija, Pankaj, Pinto, David, and Singh, Vivek
- Subjects
THEMATIC analysis ,BIG data ,SOCIAL medicine ,CONTENT analysis ,DATA visualization ,COMPUTER science ,DEEP learning - Abstract
This paper maps the research carried out in the field of Big Data through a detailed analysis of scholarly articles published on the theme during 2010-16, as indexed in Scopus. We collected and analyzed all relevant publications on Big Data indexed in Scopus through quantitative as well as textual characterization. The analysis delves into parameters such as research productivity, growth of research and citations, thematic trends, top publication sources, and emerging topics in this field. The study also investigates country-wise publication output and impact in terms of average citations per paper, country-level collaboration patterns, authorship, and leading contributors (countries, institutions). The scholarly publication data is also subjected to a detailed textual analysis to identify key themes in Big Data research, disciplinary variations, and thematic trends and patterns. The results produce interesting inferences. Quantitative measures show a tremendous increase in the number of publications related to Big Data during the last few years. Research in Big Data, though primarily considered a sub-discipline of Computer Science, is now carried out by researchers in many disciplines. Thematic analysis of publications shows that Big Data is a discipline drawing research interest from fields as diverse as Medicine and the Social Sciences. The paper also identifies major keywords now associated with Big Data research, such as Cloud Computing, Deep Learning, Social Media, and Data Analytics. This helps in a thorough understanding and visualization of the Big Data research area. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
32. On Evaluating Runtime Performance of Interactive Visualizations.
- Author
-
Bruder, Valentin, Muller, Christoph, Frey, Steffen, and Ertl, Thomas
- Subjects
BIG data ,VISUALIZATION ,SCIENTIFIC visualization ,DATA visualization ,GRAPHICS processing units ,STATISTICS - Abstract
As our field matures, evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters like different data sets, camera paths, viewport sizes, and GPUs are investigated, which make comparison with other techniques or generalization to other parameter ranges at least questionable. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports on our insights from statistical analysis of this data, discussing independent and linear parameter behavior and non-obvious effects. We give recommendations for best practices when evaluating runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
33. Impact on stock exchange due to Covid-19 using apache spark.
- Author
-
Gupta, Yogesh Kumar and Sharma, Ms. Nidhi
- Subjects
STOCK exchanges ,COVID-19 ,BIG data ,PRICES ,DATA visualization ,STOCK prices - Abstract
Big data analytics is used to predict and analyze data that is available in huge volumes and has structured, unstructured, and sometimes semi-structured values. In this research, analysis is performed on the available NIFTY 50 stock market data to assess the impact of COVID-19 on the index. The dataset was collected from Kaggle.com. The framework used is Apache Spark, and the implementation language is Scala. Results are presented on the basis of an analysis of the closing and opening prices of different stocks across different months and weeks, and are expressed as graphs using data visualization in Tableau. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
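The month-by-month open/close aggregation this abstract describes would be a groupBy/agg job in Spark. A minimal pure-Python stand-in of that grouping logic, where the rows and the monthly-average-close metric are hypothetical and not the paper's exact queries:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows in the shape of a NIFTY 50 dataset: (symbol, date, open, close).
rows = [
    ("INFY", "2020-02-03", 780.0, 775.5),
    ("INFY", "2020-03-02", 740.0, 700.0),
    ("INFY", "2020-03-23", 520.0, 511.1),
]

def monthly_avg_close(rows):
    """Group records by (symbol, month) and average the closing price,
    mirroring a groupBy("symbol", "month").agg(avg("close")) pipeline in Spark."""
    groups = defaultdict(list)
    for symbol, date, _open, close in rows:
        groups[(symbol, date[:7])].append(close)  # date[:7] is "YYYY-MM"
    return {k: mean(v) for k, v in groups.items()}

result = monthly_avg_close(rows)
```

Comparing the resulting per-month averages before and after February 2020 is the kind of summary the paper then charts in Tableau.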
34. Research on the Teaching of Visual Communication Design Based on Digital Technology.
- Author
-
Bian, Jianying and Ji, Ying
- Subjects
DIGITAL technology ,VISUAL communication ,DATA visualization ,COMMUNICATION education ,DESIGN thinking ,OPTICAL information processing ,BIG data - Abstract
In the era of big data, the rapid development of information technology has made the sharing of data and information freer, bringing convenience to the public; at the same time, the massive amount of data also brings "information anxiety." In this paper, we draw on the design discipline, combined with psychological cognitive science, statistics, and communication studies, and analyze a large number of excellent cases to effectively connect information visualization, visual representation, and design. The purpose of this study is to enable audiences to efficiently access information through visual symbols within large amounts of data and to increase their interest in reading and satisfaction in accessing information. As a carrier of communication between designers and audiences in the process of visual representation of information visualization, visual symbols can effectively convey information content and emotional concepts to audiences. Finally, based on the theoretical support of the preceding discussion and our own practical experience, we conduct a systematic study of the application of design thinking and the construction of design methods for visual representations of information visualization. A comparative study on the aesthetics of different categories of app interface design suggests that different categories of apps can create distinctive interface aesthetics through differentiated design of interface layout, content expression, and visual form, which is considered the ultimate goal that app interface design must pursue. Visual representation is a method and means of realizing information visualization, a representational practice that expresses the meaning of information in the form of visual symbols. The visual representation of information visualization uses visual symbols as a medium, and the audience interprets these symbols based on cognitive experience to obtain information, which helps maximize the dissemination of information. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. From Raw Data to Informed Decisions: The Development of an Online Data Repository and Visualization Dashboard for Transportation Data.
- Author
-
Tsouros, Ioannis, Polydoropoulou, Amalia, Tsirimpa, Athena, Karakikes, Ioannis, Tahmasseby, Shahram, Mohammad, Anas Ahmad Nemer, and Alhajyaseen, Wael K.M.
- Subjects
DATA libraries ,DATA visualization ,BIG data ,DATA warehousing ,PUBLIC transit ,AUTOMOBILE dashboards ,DATA management - Abstract
This paper presents the design, implementation, and practical use of a specialized online data repository and an interactive transportation visualization dashboard. Specifically tailored to handle and interpret large-scale transportation data within the Qatari context, the combined platform serves as a comprehensive solution for managing extensive datasets, including GPS traces from taxis and e-scooters, which are examined as the primary use cases in this paper. The online data repository provides a centralized hub for efficient storage and management of public transport and Mobility-as-a-Service (MaaS) data. Concurrently, the visualization dashboard offers an intuitive, user-friendly interface for data exploration and analysis. Through real-world applications within Qatar's transportation ecosystem, we elucidate how these developments can inform data-driven policy decisions in crucial areas such as infrastructure development, resource allocation, and safety measures. Ultimately, this study underscores the pivotal role of effective data management and advanced visualization in maximizing the potential of big data, providing valuable insights for urban mobility planning and enhancing the policy-making landscape in Qatar and worldwide. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. HTwitt: a hadoop-based platform for analysis and visualization of streaming Twitter data.
- Author
-
Demirbaga, Umit
- Subjects
DATA visualization ,ELECTRONIC data processing ,BIG data ,MACHINE learning ,ECOSYSTEMS - Abstract
Twitter's popularity means it produces a massive amount of data, which is one of the sources of big data problems. One of those problems is the classification of tweets: their sophisticated and complex language makes current tools insufficient. We present our framework HTwitt, built on top of the Hadoop ecosystem, which consists of a MapReduce algorithm and a set of machine learning techniques embedded within a big data analytics platform to efficiently address the following problems: (1) traditional data processing techniques are inadequate for big data; (2) data preprocessing needs substantial manual effort; (3) domain knowledge is required before classification; (4) semantic explanation is ignored. In this work, these challenges are overcome by combining different algorithms with a Naïve Bayes classifier to ensure reliable and highly precise recommendations in virtualization and cloud environments. These features distinguish HTwitt as an effective and practical design for text classification in big data analytics. The main contribution of the paper is a framework for building landslide early warning systems by pinpointing useful tweets and visualizing them along with the processed information. We demonstrate experiments quantifying the level of overfitting in the training stage of the model, using real-world datasets of different sizes in the machine learning phases. Our results show that the proposed system provides high-quality results with a score of nearly 95% and meets the requirements of a Hadoop-based classification system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
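The Naïve Bayes classifier at the core of HTwitt can be sketched as a standard multinomial model with add-one smoothing; the training tweets, labels, and class names below are invented for illustration and are not the paper's data:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing over word counts."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)            # class frequencies
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        total_docs = sum(self.priors.values())
        for c in self.classes:
            lp = math.log(self.priors[c] / total_docs)
            total_words = sum(self.word_counts[c].values())
            for w in doc.lower().split():
                # Add-one smoothing keeps unseen words from zeroing the product.
                lp += math.log((self.word_counts[c][w] + 1)
                               / (total_words + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = NaiveBayes()
nb.fit(["heavy rain slope collapse", "mud covered the road", "great match tonight"],
       ["landslide", "landslide", "other"])
label = nb.predict("rain caused slope failure")
```

In a Hadoop setting, the word counting in `fit` is exactly the kind of aggregation MapReduce parallelizes, while `predict` stays a cheap per-tweet computation.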
37. Reform of college students' teaching management informatization under the background of big data and IoT.
- Author
-
Wu, Lei
- Subjects
BIG data ,STUDENT teaching ,COLLEGE students ,INTERNET of things ,DATA visualization ,REFORMS - Abstract
With the continuous deployment of intelligent processing technology in the teaching field, applying big data and Internet of Things (IoT) technology to the teaching management of college students has become an important trend. Big data technology can comprehensively analyze students' teaching situations through massive data and then provide the best solution; current big data technology already has a strong technical foundation and can assist student teaching. IoT technology can collect students' personal information through sensors and miniature portable devices and provide the data to a back-end server for big data analysis. To analyze the current status of informationization in college students' teaching management, this paper first surveys several colleges and institutions through questionnaires, analyzes the collected responses, and, based on the results, proposes a solution built on big data and the IoT. Finally, the combination of big data and IoT technology is analyzed for student teaching data collection, effect evaluation, intelligent arrangement of integrated courses, intelligent recommendation of students' teaching needs, and visualization of teaching management information; the combination of big data and IoT-related technologies is found to effectively improve the efficiency of college students' teaching management. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. AUTOMATED STUDENT PERFORMANCE ANALYSER AND RECOMMENDER.
- Author
-
Arunkumar, Nisha Maria, Miriam A., Angela, and Christina J.
- Subjects
RATING of students ,BIG data ,ELECTRONIC data processing ,DATA warehousing ,DATA visualization - Abstract
Big data is a torrent of data streams so complex that traditional data-processing software is inadequate to deal with it. Dealing with big data involves challenges in capturing, processing, storing, analyzing, searching, sharing, transferring, visualizing, querying, and updating data. Predictive modelling analyzes historical events or data to predict known or unknown facts. Neural networks are used to train models efficiently on input datasets with the parametric features provided. Educational institutions deal with voluminous student mark records used to understand and compare institutional academic competency with other institutions, and analysing this performance involves manual computations for report generation. In this paper we introduce a fully automated student performance analyser and recommender that uses prediction algorithms and a content-based recommendation approach to predict academic performance, reducing manual calculations while enabling students to select from a range of recommended subjects, based on multiple queries, suited to each student's calibre. The prediction algorithm uses back-propagation techniques that take multiple parametric input features to reduce the error rate; performance with multiple integrated input features is compared against performance with single input features. This work automates the tedious calculations involved in predicting student performance and recommending papers for the upcoming year. Based on the automatically generated analysis reports, predictions of students' future performance are found to be accurate, as the multiple input features gave higher accuracy and a lower error rate compared with other traditional models. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
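The back-propagation setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three "parametric features" and the synthetic marks are fabricated assumptions, and the network is a one-hidden-layer model trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic student records: three illustrative features (e.g. internal marks,
# attendance, prior GPA -- hypothetical names, not from the paper), scaled to [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

def train_mlp(X, y, hidden=8, lr=0.2, epochs=3000, seed=1):
    """Train a one-hidden-layer network with plain back-propagation; return final MSE."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y                       # gradient of 0.5 * squared error
        dW2 = h.T @ err / len(X); db2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)     # back-propagate through tanh
        dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    final = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((final - y) ** 2))

mse_multi = train_mlp(X, y)           # all three features, as the paper advocates
mse_single = train_mlp(X[:, :1], y)   # single-feature baseline for comparison
```

Because the target depends on all three features, the single-feature model has an irreducible error floor, which is the effect the abstract reports.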
39. Digital Contact Tracing Based on a Graph Database Algorithm for Emergency Management During the COVID-19 Epidemic: Case Study
- Author
-
Qi Zou, Weiting Zhang, Hong Yao, Ying Dong, and Zijun Mao
- Subjects
Big Data ,China ,Databases, Factual ,digital contact tracing ,Computer science ,Population ,graph database ,Data security ,Health Informatics ,emergency management ,Tracing ,Computer Graphics ,Humans ,Epidemics ,education ,visualization ,Digital Technology ,Original Paper ,Graph database ,Data collection ,Emergency management ,Data Visualization ,COVID-19 ,Contact Tracing ,Algorithm ,Algorithms ,Contact tracing - Abstract
Background The COVID-19 epidemic is still spreading globally. Contact tracing is a vital strategy in epidemic emergency management; however, traditional contact tracing faces many limitations in practice. The application of digital technology provides an opportunity for local governments to trace the contacts of individuals with COVID-19 more comprehensively, efficiently, and precisely. Objective Our research aimed to provide new solutions to overcome the limitations of traditional contact tracing by introducing the organizational process, technical process, and main achievements of digital contact tracing in Hainan Province. Methods A graph database algorithm, which can efficiently process complex relational networks, was applied in Hainan Province; this algorithm relies on a governmental big data platform to analyze multisource COVID-19 epidemic data and build networks of relationships among high-risk infected individuals, the general population, vehicles, and public places to identify and trace contacts. We summarized the organizational and technical process of digital contact tracing in Hainan Province based on interviews and data analyses. Results An integrated emergency management command system and a multi-agency coordination mechanism were formed during the emergency management of the COVID-19 epidemic in Hainan Province. The collection, storage, analysis, and application of multisource epidemic data were realized based on the government’s big data platform using a centralized model. The graph database algorithm is compatible with this platform and can analyze multisource and heterogeneous big data related to the epidemic. These practices were used to quickly and accurately identify and trace 10,871 contacts among hundreds of thousands of epidemic data records; 378 closest contacts and a number of public places with high risk of infection were identified. A confirmed patient was found after quarantine measures were implemented by all contacts. 
Conclusions During the emergency management of the COVID-19 epidemic, Hainan Province used a graph database algorithm to trace contacts in a centralized model, which can identify infected individuals and high-risk public places more quickly and accurately. This practice can provide support to government agencies to implement precise, agile, and evidence-based emergency management measures and improve the responsiveness of the public health emergency response system. Strengthening data security, improving tracing accuracy, enabling intelligent data collection, and improving data-sharing mechanisms and technologies are directions for optimizing digital contact tracing.
- Published
- 2021
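The relationship network the study builds (people, vehicles, and public places linked by co-presence) lends itself to a breadth-first traversal for contact identification. The sketch below is a stdlib-only illustration of that idea; all node names and edges are fabricated, and the real system runs on a governmental big data platform rather than an in-memory dict.

```python
from collections import deque

# Toy relationship network: edges record co-presence between a person and a
# vehicle or public place. Every name here is an illustrative assumption.
edges = [
    ("patient_A", "bus_12"), ("bus_12", "person_B"),
    ("patient_A", "market_1"), ("market_1", "person_C"),
    ("person_C", "taxi_7"), ("taxi_7", "person_D"),
    ("person_E", "park_3"),  # disconnected from the patient
]

graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def trace_contacts(graph, source, max_hops):
    """Breadth-first traversal: map each node within max_hops of source to its hop distance."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # do not expand beyond the hop limit
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    seen.pop(source)
    return seen

contacts = trace_contacts(graph, "patient_A", max_hops=2)
# Keep only person nodes: two-hop paths through a shared vehicle or place.
close = {n for n in contacts if not n.startswith(("bus", "market", "taxi", "park"))}
```

A two-hop person-to-person path (person, shared place, person) corresponds to the co-presence contacts the study identifies.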
40. Cornac: Tackling Huge Graph Visualization with Big Data Infrastructure
- Author
-
Alexandre Perrot, David Auber, Laboratoire Bordelais de Recherche en Informatique (LaBRI), and Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)
- Subjects
Information Systems and Management ,Computer science ,Distributed computing ,Big data ,Data visualization ,Graph drawing ,Cluster analysis ,Interactive visualization ,Visualization ,[INFO.INFO-IR]Computer Science [cs]/Information Retrieval [cs.IR] ,Scalability ,Canopy clustering algorithm ,Information Systems - Abstract
International audience; The size of available graphs has drastically increased in recent years. Real-time visualization of graphs with millions of edges is a challenge, but it is necessary to grasp information hidden in huge datasets. This article presents an end-to-end technique to visualize huge graphs using an established Big Data ecosystem and a lightweight client running in a Web browser. For that purpose, levels of abstraction and graph tiles are generated by a batch layer, and interactive visualization is provided by a serving layer together with client-side real-time computation of edge bundling and graph splatting. A major challenge is to create techniques that work without moving data to an ad hoc system and that take advantage of the horizontal scalability of these infrastructures. We introduce two novel scalable algorithms that generate a canopy clustering and aggregate graph edges; both are used to produce levels of abstraction and graph tiles. We prove that our technique guarantees visualization quality by controlling both the bandwidth required for data transfer and the quality of the produced visualization. Furthermore, we demonstrate the usability of our technique with a complete prototype. We present benchmarks on graphs with millions of elements and compare our results to those obtained by state-of-the-art techniques. Our results show that new Big Data technologies can be incorporated into the visualization pipeline to push out the size limits of the graphs one can visually analyze.
- Published
- 2018
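The canopy clustering the article scales up can be sketched in its classic sequential form: with two thresholds T1 > T2, each pass picks a center, gathers all points within T1 into a canopy, and removes points within T2 from further consideration. This is the textbook algorithm, not the paper's distributed variant, and the points and thresholds are illustrative.

```python
import math

def canopy(points, t1, t2):
    """Classic canopy clustering with Euclidean distance; requires t1 > t2."""
    assert t1 > t2
    remaining = list(points)
    canopies = []
    while remaining:
        center = remaining.pop(0)        # next unprocessed point becomes a center
        members = [center]
        keep = []
        for p in remaining:
            d = math.dist(center, p)
            if d < t1:
                members.append(p)        # loosely bound: joins this canopy
            if d >= t2:
                keep.append(p)           # only strongly bound points are consumed
        remaining = keep
        canopies.append((center, members))
    return canopies

pts = [(0, 0), (0.1, 0), (5, 5), (5.2, 5.1)]
groups = canopy(pts, t1=1.0, t2=0.5)     # two well-separated canopies
```

The cheap-distance pre-grouping is what makes canopies attractive as a first abstraction level over huge graphs: expensive work only happens within a canopy.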
41. Design of a Human–Computer Interaction Method for Intelligent Electric Vehicles.
- Author
-
Ba, Tao, Li, Shan, Gao, Ying, and Wang, Shijun
- Subjects
HUMAN-computer interaction ,SATISFACTION ,BIG data ,DATA visualization ,INFORMATION design - Abstract
In order to improve user satisfaction during human–machine interaction with intelligent electric vehicles, this paper presents a human–machine interaction method for intelligent electric vehicles. Firstly, the principle of human–computer interaction in intelligent electric vehicles is analyzed, the application of interaction in big data visualization is expounded, and the cognitive mechanism of big data visualization interaction is designed. According to this mechanism, the design of the information interface and the HUD interface is completed and the interaction model is established, completing the design of the human–computer interaction method. The experimental results show that the human–computer interaction response time of the designed method was only 5 ms, and human–computer interaction satisfaction with the intelligent electric vehicle reached 99%, which has certain application value. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. A Vector Field Visualization Method for Trajectory Big Data.
- Author
-
Li, Aidi, Xu, Zhijie, Zhang, Jianqin, Li, Taizeng, Cheng, Xinyue, and Hu, Chaonan
- Subjects
VECTOR fields ,BIG data ,DATA visualization ,CITY traffic ,TRAFFIC density ,TRAFFIC congestion ,TRAFFIC flow - Abstract
With the rapid growth of trajectory big data, there is a need for more efficient methods to extract, analyze, and visualize these data. However, existing research on trajectory big data visualization mainly focuses on displaying trajectories for a specific period or showing spatial distribution characteristics of trajectory points in a single time slice using clustering, filtering, and other techniques. Therefore, this paper proposes a vector field visualization model for trajectory big data, aiming to effectively represent the inherent movement trends in the data and provide a more intuitive visualization of urban traffic congestion trends. The model utilizes the motion information of vehicles to create a travel vector grid and employs WebGL technology for vector field visualization rendering. The vector field effects are effectively displayed by generating many particles and simulating their movements. Furthermore, this research also designs and implements congestion trend point identification and hotspot congestion analysis, thus validating the practicality and effectiveness of trajectory big data vector field visualization. The results indicate that compared to traditional visualization methods, the vector field visualization method can demonstrate the direction and density changes in traffic flow and predict future traffic congestion. This work provides valuable data references and decision support for urban traffic management and planning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
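The core of the travel vector grid described above is aggregating per-vehicle motion vectors into grid cells, each cell holding the average direction and magnitude of the traffic passing through it. The sketch below shows that binning step only (the paper's WebGL particle rendering is out of scope here); the sample points and cell size are fabricated for illustration.

```python
from collections import defaultdict

# Toy trajectory samples: (x, y, vx, vy) per GPS fix. Coordinates, velocities,
# and the 1.0-unit cell size are illustrative assumptions, not the paper's grid.
samples = [
    (0.2, 0.3, 1.0, 0.0), (0.8, 0.1, 1.0, 0.2),
    (1.4, 0.5, 0.0, 1.0), (1.9, 0.9, 0.2, 1.0),
]

def build_vector_field(samples, cell=1.0):
    """Average the motion vectors of all samples falling in each grid cell."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for x, y, vx, vy in samples:
        key = (int(x // cell), int(y // cell))   # which cell this fix lands in
        s = sums[key]
        s[0] += vx; s[1] += vy; s[2] += 1
    # Mean vector per cell; the sample count doubles as a density signal.
    return {k: (sx / n, sy / n) for k, (sx, sy, n) in sums.items()}

field = build_vector_field(samples)
```

A renderer then seeds particles and advects them along these cell vectors; low average speed with high density in a cell is the congestion signature the paper visualizes.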
43. JP-DAP: An Intelligent Data Analytics Platform for Metro Rail Transport Systems.
- Author
-
Mulerikkal, Jaison, Thandassery, Sajanraj, Dixon K, Deepa Merlin, Rejathalal, Vinith, and Ayyappan, Binu
- Abstract
This paper deals with an intelligent data analytics platform - the Jaison-Paul Data Analytics Platform (JP-DAP) - for metro rail transport systems. JP-DAP is intended to ensure smooth functioning, improved customer experience, ridership forecasting, and efficient administration of metro rail transportation systems by integrating and analysing their many data sources. It consists of middleware built on top of a Hadoop Distributed File System (HDFS) and the Spark framework, along with a set of open-source software tools such as Apache Hive, Pandas, Google TensorFlow, and Spark ML-lib for real-time and legacy data processing. Benchmarking of JP-DAP with TestDFSIO found that it performs well by industry standards. The specific use case for this project is Kochi Metro Rail Limited (KMRL). Analysis of Automated Fare Collection data from KMRL on the JP-DAP framework has produced descriptive-statistics visualisations of inflow and outflow, travel patterns during weekdays and weekends, the origin-destination matrix, etc. Moreover, the JP-DAP framework can produce short-term passenger flow predictions using the SVR machine learning algorithm with linear, radial basis function, and polynomial kernels. Our experiments have shown that the SVR linear kernel gives the most accurate results, with the least error in predicting the next day's passenger count from the previous five weekdays' data. Station usage (one-to-all) prediction using Long Short-Term Memory (LSTM) is also integrated into this framework. The visualisation and analytical outcomes of the JP-DAP framework are made available to the external world through a rich set of REST APIs and are projected onto a web dashboard. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
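The lag-feature setup behind the next-day prediction above can be sketched with stdlib tools. Note the substitutions: the paper fits SVR with a linear kernel on real fare-collection data, whereas this sketch fits a plain least-squares linear model by gradient descent on fabricated daily counts, keeping only the "previous five weekdays predict the next day" structure.

```python
# Fabricated daily passenger counts (thousands); the paper's real data differs.
counts = [52, 55, 53, 58, 60, 57, 59, 61, 64, 62, 65, 67]

LAG = 5  # previous five weekdays form one feature window
X = [counts[i:i + LAG] for i in range(len(counts) - LAG)]
y = counts[LAG:]

# Gradient-descent least squares (stand-in for the paper's SVR linear kernel).
w = [0.0] * LAG
b = 0.0
lr = 5e-5
for _ in range(500):
    gw = [0.0] * LAG
    gb = 0.0
    for xi, yi in zip(X, y):
        err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
        for j in range(LAG):
            gw[j] += err * xi[j]
        gb += err
    w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
    b -= lr * gb / len(X)

# Predict the next day from the most recent five-day window.
next_day = sum(wj * xj for wj, xj in zip(w, counts[-LAG:])) + b
```

The same windowing applies unchanged if the regressor is swapped for a real SVR; only the fit step differs.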
44. Chinese Character Culture Communication and Comparative Analysis Based on Big Data Visualization.
- Author
-
He, Pingting
- Subjects
CHINESE characters ,DATA visualization ,DATABASES ,COMPARATIVE studies ,GRAND strategy (Political science) ,BIG data - Abstract
With the rise of China's international status, the study of Chinese characters has attracted scholars all over the world, and the international promotion of Chinese has become an important part of the national diplomatic strategy. With the rapid development of big data visualization technology, Chinese culture can be spread quickly and made visually accessible. In view of the problems in the current dissemination of Chinese culture - overly scattered content, unsystematic channels, and poor effect - this paper puts forward a method of Chinese character culture dissemination based on big data visualization. Firstly, the visual features of Chinese characters are recognized and the rules of Chinese characters are detected; secondly, big data visualization analysis of Chinese character imaging is carried out, and performance under different iteration counts is compared. Chinese visual technology shows clear advantages in the effect and performance of Chinese character transmission. Finally, the development trend of Chinese communication is analyzed, and the acceptance, speed, coverage, understanding, learning, and practical use of Chinese communication abroad are compared. The big data visualization model has certain advantages in this functional comparison. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. MULTI COMPONENT INFORMATION FUSION METHOD OF BIG DATA VISUALIZATION BASED ON RADAR MAP.
- Author
-
Na Zhang, Hongsen Xie, Dongxiang Tao, and Kaiyuan Liu
- Subjects
DATABASES ,DATA visualization ,RADAR ,BIG data ,ELECTRONIC data processing ,DATA reduction ,DIGITAL images - Abstract
The big data of digital images contains massive information drawn from many aspects that needs to be fused effectively. This paper studies a multi-level information fusion method for big data in digital images, so as to obtain accurate information and formulate effective emergency strategies in time. Under a reasonable sleep-scheduling mechanism, nodes in the clustered acquisition environment sense information and transmit the sampled data to the cluster head node; the cluster head node fits a linear regression model to the sampled data, and the model parameters expressing the characteristics of the data are uploaded to the base station according to query-statistics needs. After dimensionality reduction of the high-dimensional data, hierarchical visualization of the information is realized through a radar map: once the processed data are mapped into the radar coordinate system, different variables are allocated to different directions of the radar map. In this paper, the multi-level information of big data in digital images is cleaned and processed, and the radar map is used for hierarchical visual fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2020
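The radar-map mapping described above (one variable per direction, radius proportional to the normalized value) is straightforward to sketch. The variables and maxima below are illustrative, not the paper's data.

```python
import math

def radar_vertices(values, max_values):
    """Map normalized variables to evenly spaced directions of a radar map.

    Each variable gets its own axis at angle 2*pi*i/n; the plotted radius is
    the value normalized by its axis maximum, giving one polygon vertex per axis.
    """
    n = len(values)
    verts = []
    for i, (v, m) in enumerate(zip(values, max_values)):
        theta = 2 * math.pi * i / n
        r = v / m                        # normalize to [0, 1] along this axis
        verts.append((r * math.cos(theta), r * math.sin(theta)))
    return verts

# Three illustrative fused variables, each on a 0-4 scale.
verts = radar_vertices([3, 4, 2], [4, 4, 4])
```

Connecting the vertices yields the radar polygon; overlaying polygons from different fusion levels gives the hierarchical visual comparison the paper describes.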
46. Special issue on information visualisation.
- Author
-
Francese, Rita, Banissi, Ebad, and Risi, Michele
- Subjects
VISUALIZATION ,VISUAL analytics ,DATA visualization ,DIRECTED acyclic graphs ,MICROBLOGS ,BIG data - Abstract
We are in the Big Data era, characterized by an increasing amount of information generated every day by all the phenomena concerning our lives. Virtual Reality (VR) technologies are adopted by Okada et al. [[16]] to perform spatio-temporal social media data exploration through three-dimensional temporal visualization of microblog tweets with location information. The authors of [[1]] explore analytics in video games by using visualization techniques, in particular animated maps, for the control and analysis of spatio-temporal data associated with player performance within the game environment. [Extracted from the article]
- Published
- 2019
- Full Text
- View/download PDF
47. Displaying Data Effectively Using an Automated Process Dashboard.
- Author
-
Riege, Jens, Lee, Rainier, and Ebrahimi, Nercy
- Subjects
SEMICONDUCTOR manufacturing ,BIG data ,MANUFACTURING processes ,PRODUCTION engineering ,IMAGE color analysis ,CUSUM technique ,COMPOUND semiconductors ,DATA modeling - Abstract
The concept of Big Data is often used to describe any extremely large set of data and the analytical methods used to derive meaning from it. In semiconductor manufacturing, more sensors are added to manufacturing equipment to capture deeper levels of equipment and process data which gives engineers the opportunity to achieve tighter control of their manufacturing processes. The amount of data is increasing at an exponential rate, but the time available to analyze the data is not. This means engineers must become more efficient in finding the most critical data and responding to out-of-control events. This paper expands on a paper presented at the 2019 Compound Semiconductor Manufacturing Technology Conference in Minneapolis. We present automated data filtering techniques and additional Data Visualization tools that effectively communicate results when data is presented in tables and graphs. We show how we implemented these tools as part of a Process Engineering Dashboard. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
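The automated filtering the abstract mentions typically reduces to flagging out-of-control readings against statistical limits so that engineers only review exceptions. The sketch below is a generic control-limit filter, not the paper's dashboard logic; the readings and the k-sigma threshold are illustrative assumptions.

```python
def out_of_control(readings, k=2.0):
    """Flag readings outside mean +/- k*sigma control limits (index, value pairs)."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((r - mean) ** 2 for r in readings) / n
    sigma = var ** 0.5
    lo, hi = mean - k * sigma, mean + k * sigma
    return [(i, r) for i, r in enumerate(readings) if r < lo or r > hi]

# Illustrative sensor trace with one excursion the dashboard should surface.
data = [10.0, 10.2, 9.9, 10.1, 10.0, 14.5, 10.05, 9.95]
flags = out_of_control(data)
```

A dashboard then renders only the flagged points (and the charts containing them), which is how filtering keeps pace with data volumes that grow faster than review time.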
48. Literature Review on the Development of Visualization Studies (2012–2022) †.
- Author
-
Jiang, Tianyin, Hou, Yaxin, and Yang, Jaebum
- Subjects
DATA visualization ,ARTIFICIAL intelligence ,BIG data ,DEEP learning ,DIGITAL technology - Abstract
In the past decade, the visualization and transformation of data and information have attracted considerable research interest, and visualization has gradually extended to all industries. Based on a retrieval of core literature from a Web of Science search covering 2012 to 2022, this study examines how visualization and its related concepts have changed, identifies current research hotspots, and surveys influential journal papers. It aims to expose research gaps and provide directional guidance for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Radiation Dose Tracking in Computed Tomography Using Data Visualization.
- Author
-
Alotaibi, Reem and Abukhodair, Felwa
- Subjects
COMPUTED tomography ,RADIATION doses ,DATA visualization software ,DATA visualization ,VISUAL analytics ,SATISFACTION - Abstract
Radiation dose tracking is becoming very important due to the popularity of computerized tomography (CT) scans. One challenge of radiation dose tracking is that several variables affect the dose from the patient side, the machine side, and the procedure side. Although some tracking software programs exist, they are based on static analysis, cause integration errors due to the heterogeneity of Hospital Information Systems (HISs), and prevent users from obtaining accurate answers to their questions. In this paper, a visual analytics approach is used to track radiation dose data from CT scans with Tableau data visualization software. The web solution is evaluated in real-life scenarios by domain experts. The results show that the visual analytics approach improves the tracking process, as users completed the tasks with a 100% success rate. The process increased user satisfaction and also provided invaluable insight into the analytical process. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. From Data to Wisdom: A Review of Applications and Data Value in the context of Small Data.
- Author
-
Werner, Jonas, Beisswanger, Philipp, Schürger, Christoph, Klaiber, Marco, and Theissler, Andreas
- Subjects
BIG data ,MACHINE learning ,RESEARCH personnel ,WISDOM ,DATA analysis ,DATA visualization - Abstract
Small data and big data are distinct approaches to data analysis and utilization across applications. While big data has been the focus of research and business efforts for more than ten years, small data is increasingly recognized as having potential value in certain settings. We systematically review the literature and conclude that small data can indeed be valuable in certain scenarios. This paper incorporates the data-value perspective of small data within various application areas: we apply the data-information-knowledge-wisdom (DIKW) hierarchy to categorize papers and findings, and discuss the papers from the viewpoint of "data value". Our review identifies various contexts where small data can create value, such as data pre-processing, classification tasks, anomaly detection, forecasting, and decision support. We also highlight industries that may be particularly promising for practitioners and researchers focused on small data, and provide an overview of methods and tools for small data analysis, including statistical techniques, visualization, and machine learning algorithms. Finally, based on our results, we suggest that further research should focus on small data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF