1,046 results on '"raster data"'
Search Results
152. Data on the dynamics of landscape structure and fragmentation in Ambo district, central highlands of Ethiopia
- Author
-
Berhanu Kefale, Fanta Obsa, Moges Kidane, and Terefe Tolessa
- Subjects
Science (General) ,ved/biology.organism_classification_rank.species ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Land cover ,Shrub ,Raster data ,03 medical and health sciences ,Q1-390 ,0302 clinical medicine ,Fragmentation ,Landscape ,Central Highlands ,Data Article ,030304 developmental biology ,0303 health sciences ,Multidisciplinary ,Land use ,ved/biology ,Fragmentation (computing) ,Forestry ,Land use/Land cover ,Multispectral Scanner ,Geography ,FRAGSTAT ,Thematic Mapper ,Metrics ,030217 neurology & neurosurgery - Abstract
The data presented in this article show changes in land use/land cover and fragmentation of land at the landscape level over a period of 45 years (1973-2018) in Ambo district of the central highlands of Ethiopia. Data generated from satellite images of the Multispectral Scanner (MSS), Enhanced Thematic Mapper (ETM) and Operational Land Imager (OLI), with path/row values of 181/54, 169/54 and 169/54 for each image respectively, were analyzed with ArcGIS 10.1 software using a standard method. The precision of the images was verified with data collected at ground control points using a Global Positioning System (GPS) receiver. Raster LULC data were used as input to the FRAGSTATS software to analyze fragmentation at the landscape level. The data presented in this article show that cultivated land and settlement increased by 45.7% (376.5 ha/yr) and 111% (78.3 ha/yr), respectively, over the 1973-2018 period. Forest land, shrub land and bare land shrank by 38% (147.5 ha/yr), 17.1% (88.5 ha/yr) and 63.9% (218 ha/yr), respectively, over the periods considered. The transition matrix indicated that 64,781.86 ha of land remained unchanged over the years 1973-2018. The number of patches increased by 143% while the largest patch index increased by 226% over the same period. In contrast, the aggregation index declined (by 9.3%), and other metrics such as SIDI (12) and IJI (8.1) showed an overall decreasing trend.
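The authors' workflow runs in ArcGIS 10.1 and FRAGSTATS, but the change-detection step that yields the transition matrix is easy to illustrate. A minimal numpy sketch, assuming two co-registered LULC rasters of integer class codes (the class codes, sizes and random data below are placeholders):

```python
# Minimal sketch (not the authors' ArcGIS/FRAGSTATS workflow): cross-tabulate two
# co-registered land-use/land-cover rasters into a class-to-class transition matrix.
import numpy as np

def transition_matrix(lulc_t1, lulc_t2, n_classes):
    """Count pixels moving from each class at t1 to each class at t2."""
    t1 = np.asarray(lulc_t1).ravel()
    t2 = np.asarray(lulc_t2).ravel()
    pairs = t1 * n_classes + t2                     # encode (from, to) as one index
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Hypothetical example with 5 classes (e.g. forest, shrub, cultivated, ...).
rng = np.random.default_rng(0)
lulc_1973 = rng.integers(0, 5, size=(100, 100))
lulc_2018 = rng.integers(0, 5, size=(100, 100))
tm = transition_matrix(lulc_1973, lulc_2018, n_classes=5)
unchanged_px = np.trace(tm)                         # diagonal = area left unchanged
# For 30 m Landsat pixels, multiply pixel counts by 0.09 to express them in hectares.
```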
- Published
- 2021
153. Experimental Study of Big Raster and Vector Database Systems
- Author
-
Tina Diao, Samriddhi Singla, Elia Scudiero, Ayan Mukhopadhyay, and Ahmed Eldawy
- Subjects
Raster data ,Information engineering ,Pixel ,Computer science ,Scalability ,Process (computing) ,computer.file_format ,Data mining ,Raster graphics ,computer.software_genre ,computer ,Spatial analysis ,Data modeling - Abstract
Spatial data is traditionally represented using two data models, raster and vector. Raster data refers to satellite imagery, while vector data includes GPS data, tweets, and regional boundaries. While there are many real-world applications that need to process both raster and vector data concurrently, state-of-the-art systems are limited to processing one of these two representations while converting the other, which limits their scalability. This paper draws the attention of the research community to the research problems that emerge from the concurrent processing of raster and vector data. It describes three real-world applications and explains their computation and access patterns for raster and vector data. Additionally, it runs an extensive experimental evaluation using state-of-the-art big spatial data systems, with raster data of up to a trillion pixels and vector data with up to hundreds of millions of edges. The results show that while most systems can analyze raster and vector data concurrently, they have limited scalability for large-scale data.
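A typical workload that needs both representations at once is zonal statistics, i.e. summarising a raster inside vector zones. A minimal single-machine sketch with rasterio and geopandas, not one of the benchmarked big spatial data systems; the file paths are placeholders and both layers are assumed to share a CRS:

```python
# Single-machine sketch of a concurrent raster+vector workload (zonal statistics);
# the benchmarked systems distribute this, but the access pattern is the same.
import numpy as np
import rasterio
from rasterio.features import rasterize
import geopandas as gpd

def zonal_mean(raster_path, polygons_path):
    zones = gpd.read_file(polygons_path)            # vector zones (assumed same CRS)
    with rasterio.open(raster_path) as src:
        data = src.read(1)
        # Burn zone ids (1..n) into a raster aligned with the image grid.
        zone_raster = rasterize(
            ((geom, i + 1) for i, geom in enumerate(zones.geometry)),
            out_shape=data.shape, transform=src.transform, fill=0)
    means = {}
    for i in range(len(zones)):
        inside = zone_raster == i + 1
        means[i] = float(data[inside].mean()) if inside.any() else None
    return means

# stats = zonal_mean("satellite_band.tif", "regions.gpkg")   # placeholder files
```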
- Published
- 2021
- Full Text
- View/download PDF
154. A tile-based scalable raster data management system based on HDFS.
- Author
-
Guangqing Zhang, Chuanjie Xie, Lei Shi, and Yunyan Du
- Abstract
Hadoop has become a worldwide popular open source platform for large-scale data analysis in commercial applications, and the Hadoop Distributed File System (HDFS) is its core part. However, HDFS cannot be used directly for managing raster data, because geographic location information is involved. In this paper, we describe the implementation of a tile-based scalable raster data management system based on HDFS. While preserving the basic architecture of HDFS, we reorganize the data structure within blocks, add some additional metadata, design an index data structure within each block, keep an overlapping region between adjacent blocks, and offer a compression option for users. In addition, we provide functions for reading the raster data from HDFS as a tile stream. These optimizations match the features of raster data to the architecture of HDFS. MapReduce applications can be built on top of the raster data management system. [ABSTRACT FROM PUBLISHER]
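The block layout itself lives inside HDFS, but the underlying tiling idea is simple. A numpy sketch of cutting a raster into tiles with an overlapping halo and per-tile metadata (tile size and overlap values are arbitrary, and the HDFS storage and indexing are not shown):

```python
# Sketch of the tiling idea only: fixed-size tiles, an overlap (halo) shared with
# neighbouring tiles, and per-tile metadata so each tile can be used independently.
import numpy as np

def split_into_tiles(raster, tile_size=256, overlap=8):
    rows, cols = raster.shape
    tiles = []
    for r0 in range(0, rows, tile_size):
        for c0 in range(0, cols, tile_size):
            r1 = min(r0 + tile_size + overlap, rows)
            c1 = min(c0 + tile_size + overlap, cols)
            tile = raster[max(r0 - overlap, 0):r1, max(c0 - overlap, 0):c1]
            meta = {"row_off": r0, "col_off": c0,   # nominal tile origin in the mosaic
                    "halo": overlap,                 # overlap kept with neighbours
                    "shape": tile.shape}             # stored tile size including halo
            tiles.append((meta, tile))
    return tiles

dem = np.random.rand(1000, 1500).astype("float32")
tiles = split_into_tiles(dem)
print(len(tiles), tiles[0][0])
```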
- Published
- 2012
- Full Text
- View/download PDF
155. Research on management method of raster data based on Oracle 10g spatial.
- Author
-
Li, Guangshi
- Abstract
This paper investigated the raster data storage and management mechanisms of Oracle 10g Spatial, including the GeoRaster data model, physical storage structure, data blocks and the pyramid strategy. On this basis, the paper demonstrated a typical example of storing and managing raster data. Through a large number of experiments, the paper pointed out that Oracle's raster data loading tool cannot upload large-capacity raster images. To address this limitation in data capacity, the paper presented a simple yet effective solution that adopts a block-processing approach consistent with the “divide and conquer” idea and can improve data processing and network transmission speed. [ABSTRACT FROM PUBLISHER]
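The Oracle GeoRaster calls themselves are not given in the abstract; the sketch below only illustrates the generic divide-and-conquer step of walking a large image block by block (here with rasterio windows) so that each block can be loaded separately. The loader call at the end is hypothetical.

```python
# Generic sketch of the "divide and conquer" loading step: read a large raster
# block by block so each block can be inserted separately (the Oracle GeoRaster
# insert itself is not shown).
import rasterio
from rasterio.windows import Window

def iter_blocks(path, block_size=512):
    with rasterio.open(path) as src:
        for row_off in range(0, src.height, block_size):
            for col_off in range(0, src.width, block_size):
                window = Window(col_off, row_off,
                                min(block_size, src.width - col_off),
                                min(block_size, src.height - row_off))
                yield window, src.read(1, window=window)

# for window, block in iter_blocks("large_image.tif"):      # placeholder file
#     load_block_into_georaster(window, block)               # hypothetical loader
```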
- Published
- 2012
- Full Text
- View/download PDF
156. A Comparison of Raster-Based Forestland Data in Cropland Data Layer and the National Land Cover Database
- Author
-
Chinazor S. Azubike, Lyubov A. Kurkalova, and Timothy J. Mulrooney
- Subjects
Geographic Information Systems (GISs) ,mapping ,raster data ,forestland ,remote sensing ,national land-cover database ,cropland data layer ,forest dynamics ,Forestry - Abstract
The National Agricultural Statistics Service, the statistical arm of the US Department of Agriculture, and the Multi-Resolution Land Characteristics Consortium, a group of US federal agencies, collect and publish several land-use and land-cover data sets. The aim of this study is to analyze the consistency of forestland estimates based on two widely used, publicly available products: the National Land-Cover Database (NLCD) and the Cropland Data Layer (CDL). Both remote-sensing-based products provide raster-formatted land-cover categorization at a spatial resolution of 30 m. Although the processing of the yearly published CDL non-agricultural land-cover data is based on the less frequently updated NLCD, the consistency of large-area forestland mapping between these two datasets has not been assessed. To assess the similarities and the differences between CDL- and NLCD-based forestland mappings for the state of North Carolina, we overlay the two data products for the years 2011 and 2016 in ArcMap 10.5.1 and analyze the location and attributes of the matched and mismatched forestland. We find that the mismatch is relatively smaller for the areas of the state where forests occupy larger shares of the total land, and that the relative mismatch is smaller in 2011 than in 2016. We also find that a large portion of the forestland mismatch is attributable to the dynamics of re-growth of periodically harvested and otherwise disturbed forests. Our results underscore the need for a holistic approach to data preparation, data attribution, and data accuracy when performing high-scale map-based analyses using each of these products.
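A hedged sketch of the kind of matched/mismatched forestland tabulation described (the study does this in ArcMap), assuming the two products are already co-registered numpy arrays and using placeholder forest class codes rather than the real CDL/NLCD codes:

```python
# Compare two co-registered categorical rasters and report matched/mismatched
# forest area; the class code lists below are placeholders, not CDL/NLCD codes.
import numpy as np

def forest_agreement(cdl, nlcd, cdl_forest_codes, nlcd_forest_codes):
    cdl_forest = np.isin(cdl, cdl_forest_codes)
    nlcd_forest = np.isin(nlcd, nlcd_forest_codes)
    cell_area_ha = 30 * 30 / 10_000                  # 30 m pixels -> hectares
    return {
        "matched_ha": np.count_nonzero(cdl_forest & nlcd_forest) * cell_area_ha,
        "cdl_only_ha": np.count_nonzero(cdl_forest & ~nlcd_forest) * cell_area_ha,
        "nlcd_only_ha": np.count_nonzero(~cdl_forest & nlcd_forest) * cell_area_ha,
    }
```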
- Published
- 2022
- Full Text
- View/download PDF
157. CAD and GIS in the graphic environment of MicroStation
- Author
-
Orlitová Erika and Dugáček Dušan
- Subjects
CAD ,GIS ,MicroStation ,ODBC ,MDL ,vector data ,raster data ,Mining engineering. Metallurgy ,TN1-997 ,Geology ,QE1-996.5 - Abstract
The article introduces CAD and GIS systems and the MicroStation graphical environment as a software base. Some important system modules by Bentley Corporation and complementary modules created in the MDL language at our department are described.
- Published
- 1998
158. A web client-based online DICOM browser and NRRD converter for Studierfenster
- Author
-
Christina Gsaxner, Daniel Wild, Jan Egger, Christopher A. Ramirez Bedoya, Jianning Li, and Antonio Pepe
- Subjects
Information retrieval ,Computer science ,business.industry ,Client-side ,JavaScript ,Raster data ,DICOM ,Workflow ,Component (UML) ,Web application ,Software architecture ,business ,computer ,computer.programming_language - Abstract
Imaging data in clinical practice generally use standardized formats such as Digital Imaging and Communications in Medicine (DICOM). Aside from 3D volume data, DICOM files usually include relational and semantic description information. The majority of current applications for browsing and viewing DICOM files online handle the image volume data only, ignoring the relational component of the data. Alternatively, implementations that show the relational information are provided as complete pre-packaged solutions that are difficult to integrate into existing projects and workflows. This publication proposes a modular, client-side web application for viewing DICOM volume data and displaying DICOM description fields containing relational and semantic information. Furthermore, it supports conversion of DICOM data sets into the nearly raw raster data (NRRD) format, which is commonly used in research and academic environments because of its simpler, easily processable structure and the removal of all patient DICOM tags (anonymization). The application was developed in JavaScript and integrated into the online medical image processing framework StudierFenster (http://studierfenster.tugraz.at/). Since our application only requires a standard web browser, it can be used by everyone and can be easily deployed in any wider project without a complex software architecture.
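The paper's converter runs client-side in JavaScript; as a rough offline analogue, the DICOM-to-NRRD step can be sketched in Python with the pydicom and pynrrd packages. Standard tags such as InstanceNumber, SliceThickness and PixelSpacing are assumed to be present, and the file paths are placeholders:

```python
# Offline sketch of a DICOM series -> NRRD conversion (the paper's tool does this
# client-side in JavaScript); assumes pydicom and pynrrd and standard CT/MR tags.
import glob
import numpy as np
import pydicom
import nrrd

def dicom_series_to_nrrd(dicom_dir, out_path):
    slices = [pydicom.dcmread(p) for p in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda ds: int(ds.InstanceNumber))   # order along the scan axis
    volume = np.stack([ds.pixel_array for ds in slices])
    spacing = [float(slices[0].SliceThickness),
               float(slices[0].PixelSpacing[0]),
               float(slices[0].PixelSpacing[1])]
    header = {"spacings": spacing}                       # no patient tags are copied
    nrrd.write(out_path, volume, header)

# dicom_series_to_nrrd("ct_study/", "ct_study.nrrd")     # placeholder paths
```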
- Published
- 2021
- Full Text
- View/download PDF
159. Evaluation of a NoSQL Database for Storing Big Geospatial Raster Data
- Author
-
Jörg Blankenbach and Nicole Hein
- Subjects
Raster data ,Geospatial analysis ,Database ,Computer science ,Geography, Planning and Development ,ddc:550 ,Computers in Earth Sciences ,computer.software_genre ,NoSQL ,computer ,Computer Science Applications ,Education - Abstract
12th International Symposium on Digital Earth, ISDE12, Salzburg, Austria, 6 Jul 2021 - 8 Jul 2021; GI-Forum 9(1), 76-84 (2021). doi:10.1553/giscience2021_01_s76 special issue: "12th International Symposium on Digital Earth / Thomas Blaschke - Josef Strobl - Julia Wegmayr (Eds.)" (978-3-7001-8947-3), Published by [S.l.]
- Published
- 2021
- Full Text
- View/download PDF
160. Research on Hexagonal Remote Sensing Image Sampling
- Author
-
Rui Wang, Jin Ben, Wen Cao, Jianbin Zhou, and Mingyang Zheng
- Subjects
Pixel ,Computer science ,Sampling (statistics) ,Signal ,GeneralLiterature_MISCELLANEOUS ,Raster data ,Hexagonal sampling ,Preprocessor ,Nyquist–Shannon sampling theorem ,Nonlinear Sciences::Pattern Formation and Solitons ,MathematicsofComputing_DISCRETEMATHEMATICS ,ComputingMethodologies_COMPUTERGRAPHICS ,Hexagonal tiling ,Remote sensing - Abstract
Hexagonal discrete global grids can provide an excellent solution for integrating and managing massive multi-source, multi-temporal, and multi-resolution raster data. Traditional images use rectangular pixels and cannot be expressed directly on a hexagonal grid. Therefore, how to obtain images based on hexagonal pixels has attracted widespread academic attention. Combining current hexagonal sampling methods, this paper studies the evaluation criteria for hexagonal sampling accuracy, summarizes previous research, and proposes a more general hexagonal sampling method for remote sensing images. This method mainly involves signal preprocessing, spectrum analysis, calculation of the sampling interval, and establishment of accuracy evaluation standards. Finally, we verify the feasibility of the proposed hexagonal algorithm to provide a reference for hexagonal sampling.
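The spectrum analysis that fixes the sampling interval is the paper's contribution and is not reproduced here; the sketch below only shows the mechanical part of hexagonal sampling, interpolating a rectangular image at hexagon centres with scipy (pitch and interpolation order are arbitrary):

```python
# Resample a rectangular-pixel image onto a hexagonal lattice: odd rows are shifted
# by half a pitch and rows are spaced sqrt(3)/2 * pitch apart; values are obtained
# by interpolating the original image at the hexagon centres.
import numpy as np
from scipy.ndimage import map_coordinates

def resample_to_hex(image, pitch=1.0):
    rows, cols = image.shape
    dy = pitch * np.sqrt(3) / 2
    rr, cc = np.meshgrid(np.arange(int(rows / dy)), np.arange(int(cols / pitch)),
                         indexing="ij")
    y = rr * dy
    x = cc * pitch + (rr % 2) * (pitch / 2)          # offset every other row
    values = map_coordinates(image, [y, x], order=1, mode="nearest")
    return x, y, values                               # hexagon centres + samples
```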
- Published
- 2021
- Full Text
- View/download PDF
161. Cold-Water Coral Habitat Mapping: Trends and Developments in Acquisition and Processing Methods
- Author
-
Andrew J. Wheeler, Luis Américo Conti, and Aaron Lim
- Subjects
0106 biological sciences ,Side-scan sonar ,Geographic information system ,010504 meteorology & atmospheric sciences ,Biodiversity ,SISTEMA DE INFORMAÇÃO GEOGRÁFICA ,01 natural sciences ,Raster data ,habitats ,Marine ecosystem ,14. Life underwater ,cold water corals ,mapping ,Spatial analysis ,0105 earth and related environmental sciences ,multibeam bathymetry ,business.industry ,side-scan sonar ,010604 marine biology & hydrobiology ,lcsh:QE1-996.5 ,Environmental resource management ,lcsh:Geology ,machine learning ,Geography ,Photogrammetry ,Habitat ,General Earth and Planetary Sciences ,business - Abstract
Cold-water coral (CWC) habitats are considered important centers of biodiversity in the deep sea, acting as spawning grounds and feeding areas for many fish and invertebrates. Given their occurrence in remote parts of the planet, research on CWC habitats has largely been derived from remotely-sensed marine spatial data. However, with ever-developing marine data acquisition and processing methods and the non-ubiquitous nature of infrastructure, many studies are completed in isolation, resulting in large inconsistencies. Here, we present a concise review of marine remotely-sensed spatial raster data acquisition and processing methods in CWC habitats to highlight trends and knowledge gaps. Sixty-three studies that acquired and processed marine spatial raster data since the year 2000 were reviewed, noting regional geographic location, data types (‘acquired data’) and how the data were analyzed (‘processing methods’). Results show that global efforts are not uniform, with most studies concentrating in the NE Atlantic. Although side-scan sonar was a popular mapping method between 2002 and 2012, research since then has focused on the use of multibeam echosounders and photogrammetric methods. Despite advances in terrestrial mapping with machine learning, it is clear that manual processing methods are largely favored in marine mapping. On a broader scale, with large-scale mapping programs (INFOMAR, Mareano, Seabed2030), results from this review can help identify where more urgent research efforts can be concentrated for CWC habitats and other vulnerable marine ecosystems.
- Published
- 2021
162. Learning Traffic as Videos: A Spatio-Temporal VAE Approach for Traffic Data Imputation
- Author
-
Hejiao Huang, Shuo Zhang, Xiaofei Chen, Chonglin Gu, Jiayuan Chen, and Qiao Jiang
- Subjects
Raster data ,Data collection ,Artificial neural network ,Computer science ,Unsupervised learning ,Imputation (statistics) ,Data mining ,Traffic flow ,computer.software_genre ,Autoencoder ,computer ,Block (data storage) - Abstract
In the real world, missing data are inevitable in traffic data collection due to detector failures or signal interference. However, missing traffic data imputation is non-trivial, since traffic data usually contain both temporal and spatial characteristics with inherently complex relations. In each time interval, the traffic measurements collected over all spatial regions can be regarded as an image with a varying number of channels. Therefore, the traffic raster data over time can be learned as videos. In this paper, we propose a novel unsupervised generative neural network for traffic raster data imputation called STVAE, which works robustly even under different missing rates. The core idea of our model is to discover more complex spatio-temporal representations inside the traffic data under the architecture of a variational autoencoder (VAE) with Sylvester normalizing flows (SNFs). After transforming the traffic raster data into multi-channel videos, a Detection-and-Calibration Block (DCB), which extends 3D gated convolution and a multi-attention mechanism, is proposed to sense, extract and calibrate more flexible and accurate spatio-temporal dependencies of the original data. The experiments are conducted on three real-world traffic flow datasets and demonstrate that our network STVAE achieves the lowest imputation errors and outperforms state-of-the-art traffic data imputation models.
- Published
- 2021
- Full Text
- View/download PDF
163. Multi-criteria Decision-Making Approach Using Remote Sensing and GIS for Assessment of Groundwater Resources
- Author
-
Santu Guchhait, Gour Dolui, Nirmalya Sankar Das, and Sayan Roy
- Subjects
Raster data ,Permeability (earth sciences) ,Thematic map ,Lineament ,Remote sensing (archaeology) ,Environmental science ,Groundwater recharge ,Drainage ,Groundwater ,Remote sensing - Abstract
In the field of groundwater investigation, integrated remote sensing and geographical information systems (GIS) have become a significant approach to exploring groundwater resources and to their assessment, monitoring, and conservation. The present study of Purulia district in West Bengal, India, tries to assess potential groundwater zones using remote sensing and GIS techniques. As groundwater recharge and availability depend on several geophysical factors, a multi-criteria decision-making approach (MCDA) has been adopted to recognize the potential zones in the studied district. Thus, the most significant factors, viz. geomorphology, lithology, slope, lineament, drainage, soil, and land-use/land-cover, have been considered and assigned weights according to their relative importance in finding groundwater potential zones. Thematic maps of these selected factors were transformed to raster data using the raster conversion tools in ArcGIS. The groundwater prospective zones were obtained through the weighted overlay analysis technique and categorized into six sub-classes, viz. unavailable, very poor, poor, moderate, good, and very good. The outcome shows that about 7.5%, 36.86%, and 27% of the area has very high, high, and moderate potentiality, respectively, whereas about 19.54% and 8.56% of the area is in poor to very poor condition in terms of groundwater potentiality. The recharge and availability of groundwater depend mainly on surface topography, slope, underlying rock composition, and lineaments, as these factors determine the porosity, permeability, and rate of infiltration.
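The weighted overlay itself is a per-pixel weighted sum of reclassified factor rasters. A minimal numpy sketch with placeholder weights and random suitability scores rather than the study's values:

```python
# Sketch of the weighted-overlay step (done with the ArcGIS weighted overlay tool
# in the study): combine reclassified factor rasters with placeholder weights.
import numpy as np

def weighted_overlay(factors, weights):
    """factors: dict name -> reclassified raster (same shape, e.g. scores 1-5)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6, "weights should sum to 1"
    return sum(factors[name] * weights[name] for name in factors)

rng = np.random.default_rng(1)
names = ["geomorphology", "lithology", "slope", "lineament", "drainage", "soil", "lulc"]
factors = {n: rng.integers(1, 6, size=(200, 200)) for n in names}
weights = {"geomorphology": 0.25, "lithology": 0.15, "slope": 0.15, "lineament": 0.15,
           "drainage": 0.10, "soil": 0.10, "lulc": 0.10}
potential = weighted_overlay(factors, weights)
# Cut into zones (very poor ... very good) with thresholds chosen for the study area.
zones = np.digitize(potential, bins=[1.5, 2.3, 3.0, 3.7, 4.4])
```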
- Published
- 2021
- Full Text
- View/download PDF
164. Implementation of the directly-georeferenced hyperspectral point cloud
- Author
-
J. Pablo Arroyo-Mora, George Leblanc, Deep Inamdar, and Margaret Kalacska
- Subjects
Remote sensing application ,Computer science ,Science ,Clinical Biochemistry ,Point cloud ,Hyperspectral point cloud ,010501 environmental sciences ,01 natural sciences ,Raster data ,03 medical and health sciences ,Data integrity ,Spatial data integrity ,Computer vision ,Directly-Georeferenced Hyperspectral Point Cloud ,030304 developmental biology ,0105 earth and related environmental sciences ,0303 health sciences ,Pixel ,business.industry ,Hyperspectral imaging ,computer.file_format ,Method Article ,Data fusion ,Spectral data integrity ,Sensor fusion ,Medical Laboratory Technology ,Artificial intelligence ,Raster graphics ,business ,computer - Abstract
Before pushbroom hyperspectral imaging (HSI) data can be applied in remote sensing applications, they must typically be preprocessed through radiometric correction, atmospheric compensation, geometric correction and spatial resampling procedures. After these preprocessing procedures, HSI data are conventionally given as georeferenced raster images. The raster data model compromises the spatial-spectral integrity of HSI data, leading to suboptimal results in various applications. Inamdar et al. (2021) developed a point cloud data format, the Directly-Georeferenced Hyperspectral Point Cloud (DHPC), that preserves the spatial-spectral integrity of HSI data more effectively than rasters. The DHPC is generated through a data fusion workflow that uses conventional preprocessing protocols with a modification to the digital surface model used in the geometric correction. Even with the additional elevation information, the DHPC is still stored with file sizes up to 13 times smaller than conventional rasters, making it ideal for data distribution. Our article aims to describe the DHPC data fusion workflow from Inamdar et al. (2021), providing all the required tools for its integration into pre-existing processing workflows. This includes a MATLAB script that can be readily applied to carry out the modification that must be made to the digital surface model used in the geometric correction. The MATLAB script first derives the point spread function of the HSI data and then convolves it with the digital surface model input in the geometric correction. By breaking down the MATLAB script and describing its functions, data providers can readily develop their own implementation if necessary. The derived point spread function is also useful for characterizing HSI data, quantifying the contribution of materials to the spectrum of any given pixel as a function of distance from the pixel center. Overall, our work makes the implementation of the DHPC data fusion workflow transparent and approachable for end users and data providers. • Our article describes the Directly-Georeferenced Hyperspectral Point Cloud (DHPC) data fusion workflow, which can be readily implemented with existing processing protocols by modifying the input digital surface model used in the geometric correction. • We provide a MATLAB function that performs the modification to the digital surface model required for the DHPC workflow. This MATLAB script derives the point spread function of the hyperspectral imager and convolves it with the digital surface model so that the elevation data are more spatially consistent with the hyperspectral imaging data as collected. • We highlight the increased effectiveness of the DHPC over conventional raster end products in terms of spatial-spectral data integrity, data storage requirements, hyperspectral imaging application results and site exploration via virtual and augmented reality.
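The paper's MATLAB script derives the point spread function from the HSI data itself; the convolution step it feeds into can be sketched in Python, with an assumed Gaussian PSF standing in for the derived one:

```python
# Sketch of the DSM-smoothing step of the DHPC workflow: convolve the digital
# surface model with the sensor point spread function. An assumed Gaussian PSF
# stands in for the PSF that the paper's MATLAB script derives from the data.
import numpy as np
from scipy.signal import fftconvolve

def smooth_dsm_with_psf(dsm, psf_sigma_px=1.5, radius_px=6):
    ax = np.arange(-radius_px, radius_px + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * psf_sigma_px**2))
    psf /= psf.sum()                          # normalise so elevations are preserved
    return fftconvolve(dsm, psf, mode="same")
```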
- Published
- 2021
165. Visible Light Images
- Author
-
Martin H. Trauth
- Subjects
Raster data ,Computer science ,Computer graphics (images) ,Process (computing) ,Electromagnetic radiation ,Visible spectrum - Abstract
This chapter is concerned with the acquisition, processing, and display of image data. We will first learn about the physical principles of electromagnetic waves, which include visible light (Sect. 5.2), and then discuss how cameras capture visible light images (Sect. 5.3). The various ways that raster data can be stored on a computer are explained in Sect. 5.4 and the main tools for importing, manipulating, and exporting image data are then presented in Sects. 5.5 and 5.6. We will then make use of the knowledge that we have gained to acquire, process, and display images to complete the exercises in Sect. 5.7.
- Published
- 2021
- Full Text
- View/download PDF
166. An automated technique to determine spatio-temporal changes in satellite island images with vectorization and spatial queries.
- Author
-
EKEN, SÜLEYMAN and SAYAR, AHMET
- Subjects
- *
REMOTE-sensing images , *POINT set theory , *PIXELS , *PATTERN perception , *INFORMATION science , *POLYGONS - Abstract
For spatio-temporal and topological analyses, vectorial information (carrying coordinate values defined as point sets) provides better information than its raster (grid of pixels) counterpart. The study presented in this paper is based on (1) recognition and extraction of an island object in a set of digital images captured by the Landsat-7 satellite and (2) modelling it as a polygon (vectorial), making it easy to process and easy to understand for computers and information science applications. Polygon representations of island images can then be stored and manipulated through object-relational spatial databases. Spatial databases have built-in functions and services for spatial objects defined with geometry types such as points, lines, and polygons. In this way we utilize the rapidly developing object-relational database communities' studies and discoveries in spatio-temporal and topological analysis for the investigation of digital satellite images. This approach also enables better service quality and performance. The efficiency and feasibility of the proposed system are examined through various scenarios such as earthquake, erosion and accretion. Scenarios are based on measuring the effects of these natural phenomena on a selected island in satellite images. [ABSTRACT FROM AUTHOR]
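Step (2), turning an extracted island mask into a polygon that an object-relational spatial database can store, can be sketched with rasterio and shapely; the mask file is a placeholder, and step (1), recognising the island in the Landsat-7 scene, is assumed already done:

```python
# Turn a binary island mask into a georeferenced (multi)polygon and serialise it
# as WKT, ready for e.g. an INSERT with ST_GeomFromText in a spatial database.
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape
from shapely.ops import unary_union

def island_mask_to_wkt(mask_path):
    with rasterio.open(mask_path) as src:
        mask = src.read(1).astype("uint8")    # 1 = island, 0 = water
        geoms = [shape(geom) for geom, value in
                 shapes(mask, mask=mask == 1, transform=src.transform)]
    return unary_union(geoms).wkt             # one polygon (or multipolygon)

# wkt = island_mask_to_wkt("island_mask.tif")   # placeholder file
```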
- Published
- 2015
- Full Text
- View/download PDF
167. Distributed processing practice of the massive GIS data based on HBase
- Author
-
Xuemei LI, Junfeng XING, Dawei LIU, Haiyang WANG, and Wei LIU
- Subjects
big data ,HBase ,raster data ,vector data ,rowkey ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Based on the distributed database HBase, a GIS data management system was designed. The system optimized the generation and storage procedures for raster data, which could be written directly into HBase for storage and indexing. At the same time, for the storage, indexing and retrieval of vector spatial data, a new rowkey design was proposed that considers both latitude and longitude as well as the spatial data types and attributes. In this way, when retrieving vector geographic information by spatial location, the data to be returned can be quickly located through the HBase rowkey. The above methods were verified in an HBase cluster environment with real GIS data. The results show that the proposed system has high performance for the storage and retrieval of massive data, and realizes efficient storage and real-time, high-speed retrieval of vast amounts of geographic information.
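The abstract does not spell out the exact rowkey layout; a hedged sketch of the general idea is to prefix a spatial-data-type code and interleave quantised latitude/longitude bits (Z-order) so that features close in space receive lexicographically close keys:

```python
# Illustrative rowkey construction (not the paper's exact layout): type code prefix,
# Z-order interleaving of quantised longitude/latitude, then an attribute/object id.
def interleave_bits(x, y, bits=16):
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def make_rowkey(type_code, lat, lon, obj_id, bits=16):
    lat_q = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))    # quantise to a grid
    lon_q = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    z = interleave_bits(lon_q, lat_q, bits)
    return f"{type_code:02d}_{z:0{2 * bits // 4}x}_{obj_id}"

print(make_rowkey(type_code=1, lat=39.90, lon=116.40, obj_id="road_123"))
```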
- Published
- 2016
- Full Text
- View/download PDF
168. A data porting tool for coupling models with different discretization needs.
- Author
-
Ion, S., Marinescu, D., Cruceanu, S.G., and Iordache, V.
- Subjects
- *
DATA analysis , *DISCRETIZATION methods , *GEOGRAPHIC information systems , *OUTLIERS (Statistics) , *BIOGEOCHEMISTRY , *OPEN source software - Abstract
The presented work is part of a larger research program dealing with developing tools for coupling biogeochemical models in contaminated landscapes. The specific objective of this article is to provide researchers with a data porting tool to build a hexagonal raster using information from rectangular raster data (e.g. GIS format). This tool involves a computational algorithm and open source software (written in C). The method of extending the reticulated functions defined on 2D networks is an essential key of this algorithm and can also be used for purposes other than data porting. The algorithm allows one to build the hexagonal raster with a cell size independent of the geometry of the rectangular raster. The extended function is a bi-cubic spline which can exactly reconstruct polynomials up to degree three in each variable. We validate the method by analyzing errors in some theoretical case studies, followed by other studies with real terrain elevation data. We also introduce and briefly present an iterative water routing method and use it for validation on a case with concrete terrain data. [ABSTRACT FROM AUTHOR]
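The paper's tool is written in C; in Python, the same idea (fit a bi-cubic spline to the rectangular grid, then evaluate it at hexagon centres whose cell size is independent of the source geometry) can be sketched with scipy:

```python
# Sketch of the data-porting idea: a bi-cubic spline fitted to the rectangular
# raster is evaluated at hexagon centres chosen independently of the source grid.
import numpy as np
from scipy.interpolate import RectBivariateSpline

def rect_to_hex(elev, cell_m, hex_pitch_m):
    rows, cols = elev.shape
    y_src = np.arange(rows) * cell_m
    x_src = np.arange(cols) * cell_m
    spline = RectBivariateSpline(y_src, x_src, elev, kx=3, ky=3)   # bi-cubic
    dy = hex_pitch_m * np.sqrt(3) / 2
    centres, values = [], []
    for r, y in enumerate(np.arange(0, y_src[-1], dy)):
        x_offset = (r % 2) * hex_pitch_m / 2                       # shift odd rows
        for x in np.arange(x_offset, x_src[-1], hex_pitch_m):
            centres.append((x, y))
            values.append(spline(y, x)[0, 0])
    return np.array(centres), np.array(values)
```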
- Published
- 2014
- Full Text
- View/download PDF
169. Utilizing Spatio-Temporal Data in Multi-Agent Simulation
- Author
-
Daniel Glake, Norbert Ritter, and Thomas Clemen
- Subjects
Geographic information system ,010504 meteorology & atmospheric sciences ,Semantics (computer science) ,business.industry ,Computer science ,Perspective (graphical) ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Data modeling ,Temporal database ,Raster data ,020204 information systems ,Multiple time dimensions ,0202 electrical engineering, electronic engineering, information engineering ,Data mining ,business ,computer ,Image resolution ,0105 earth and related environmental sciences - Abstract
Spatio-temporal properties strongly influence a large proportion of multi-agent simulations (MAS) in their application domains. Time-dependent simulations benefit from correct and time-sensitive input data that match the current simulated time, or from the possibility of taking previous simulation states into account in their modelling perspective. In this paper, we present the concepts and semantics of data-driven simulations with vector and raster data and extend them by a time dimension that applies at run-time within the simulation execution or in conjunction with the definition of MAS models. We show that the semantics capture the evolution of spatio-temporal objects and the temporal relationships between spatial entities.
- Published
- 2020
- Full Text
- View/download PDF
170. A Parallel Computing Approach to Spatial Neighboring Analysis of Large Amounts of Terrain Data Using Spark
- Author
-
Kai Zheng, Jianbo Zhang, and Zhuangzhuang Ye
- Subjects
010504 meteorology & atmospheric sciences ,Computer science ,Big data ,0211 other engineering and technologies ,Terrain ,02 engineering and technology ,Parallel computing ,lcsh:Chemical technology ,01 natural sciences ,Biochemistry ,Article ,Analytical Chemistry ,Raster data ,spatial neighboring analysis ,Spark (mathematics) ,lcsh:TP1-1185 ,Electrical and Electronic Engineering ,Distributed File System ,Instrumentation ,Spatial analysis ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Spark ,business.industry ,parallel computing ,computer.file_format ,Atomic and Molecular Physics, and Optics ,big data processing ,Scalability ,Raster graphics ,business ,computer - Abstract
Spatial neighboring analysis is an indispensable part of geo-raster spatial analysis. In the big data era, high-resolution raster data offer us abundant and valuable information, and also bring enormous computational challenges to the existing focal statistics algorithms. Simply employing the in-memory computing framework Spark to serve such applications might incur performance issues due to its lack of native support for spatial data. In this article, we present a Spark-based parallel computing approach for the focal algorithms of neighboring analysis. This approach implements efficient manipulation of large amounts of terrain data through three steps: (1) partitioning a raster digital elevation model (DEM) file into multiple square tile files by adopting a tile-based multifile storing strategy suitable for the Hadoop Distributed File System (HDFS), (2) performing the quintessential slope algorithm on these tile files using a dynamic calculation window (DCW) computing strategy, and (3) writing back and merging the calculation results into a whole raster file. Experiments with the digital elevation data of Australia show that the proposed computing approach can effectively improve the parallel performance of focal statistics algorithms. The results also show that the approach has almost the same calculation accuracy as that of ArcGIS. The proposed approach also exhibits good scalability when the number of Spark executors in clusters is increased.
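The slope kernel at the heart of step (2) is compact; a numpy sketch (the Spark distribution and the dynamic calculation window handling of tile edges are not reproduced), assuming each tile is read with a one-cell halo:

```python
# Per-tile slope kernel only; tile distribution and edge handling (the paper's
# dynamic calculation window) are not shown. Tiles are assumed to carry a halo.
import numpy as np

def slope_degrees(dem_tile, cell_size):
    dz_dy, dz_dx = np.gradient(dem_tile, cell_size)      # central differences
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope[1:-1, 1:-1]                             # drop the one-cell halo
```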
- Published
- 2020
171. Analyzing Diabetics for Food Access Training on the Map with CBAM
- Author
-
Huaze Xie, Yukiko Kawai, and Yuanyuan Wang
- Subjects
education.field_of_study ,Computer science ,business.industry ,Population ,Access method ,Geographical feature ,computer.file_format ,Machine learning ,computer.software_genre ,Raster data ,Public records ,Feature (computer vision) ,Artificial intelligence ,Raster graphics ,education ,business ,computer ,Network model - Abstract
The knowledge graph obtained through medical text data mining can be used to store disease-symptom relationships. Medical researchers expect to extract high-quality disease attribute relationships directly from electronic medical records to support clinical decision-making. In our research, we correlated the influencing attributes of type 2 diabetes mellitus (T2DM) with geographic information features to observe the food acquisition of diabetic patients from a population perspective. In this article, we used clinical indicators and food purchase habits of diabetic patients, together with coordinate information, to discuss the high incidence and the incentives created by an unreasonable diet in certain areas. We selected geographic data on human settlements to reduce the temporal and spatial complexity and constructed a statistical method for raster data based on food acquisition queried through the Google Maps API to evaluate public records of type 2 diabetes in the United States from 2012 to 2016. We also use an attention-based convolutional network model to adapt and improve the attribute relationship structure of diabetes features and their spatial distribution. The initial accuracy of the model is not sensitive to geographic features (the average accuracy is 49%, the recall rate is 53%), but by adding the CBAM module to improve the performance of the convolution kernel and the raster-unit approximate predictive density method, the placemark feature classification is significantly improved. Finally, combining the geographic feature prediction, we can make suggestions for susceptible patients on the selection of food access methods. Our geographic information rasterization method might also be applied to the study of other disease characteristics on the map.
- Published
- 2020
- Full Text
- View/download PDF
172. The Prevalence of Objects with Sharp Boundaries in GIS
- Author
-
Andrew U. Frank and A. Frank
- Subjects
business.industry ,Cadastre ,computer.file_format ,Data science ,Raster data ,Geography ,Embodied cognition ,Information system ,Object type ,Artificial intelligence ,Direct experience ,Raster graphics ,business ,computer ,Realism - Abstract
INTRODUCTION: The debate on vector versus raster based models is nearly as old as the concept of a GIS (Dutton 1979). It was restated as a debate between GIS with an object concept (not to be confused with object-oriented as used in software engineering; (Worboys 1994) uses the term object based GIS), where objects have sharp boundaries delimited by vectors, and the GIS which model the continuous variation of attributes over space using a regular tessellation - e.g. a raster (Frank 1990). Efforts to merge the two representations were attempted (Peuquet 1983). It has been a very fruitful debate as it has forced us to consider and reconsider the epistemological bases of our work and has led to an extensive discussion of fundamental questions (Chrisman 1987), (Mark and Frank 1991), (Mark 1993). The debate has promoted the development of ever more powerful software, achieving a nearly complete integration of vector and raster data (Herring 1990).
It has been pointed out repeatedly that only few objects in geographic space have natural boundaries which are sharp and well determined (Couclelis 1992). Most geographic objects seem to be an abstraction of things which have unclear, fuzzy boundaries, if they have boundaries at all. The list includes most natural phenomena, from biotope to mountain range; extensive research efforts center around soil type data (Burrough 1993); (Burrough 1986) and often use the techniques of fuzzy logic (Zadeh 1974). Nevertheless, many practically used GIS model reality in terms of crisply delimited objects. Cadastral systems, GIS used for facilities' management and automated mapping (AM/FM) and communal information systems all are appropriately oriented towards distinct objects with well defined boundaries. The same systems, with the same models, are also used to manage soil maps and land use data, where the fiction of sharp boundaries contrasts with our view of reality. Depending on the application area or profession, one or the other spatial concept is more appropriate and allows one to capture the aspects of interest more succinctly. Users are accustomed to see a process described in a particular spatial framework: a legal discussion is most often cast in the framework of spatial objects with crisp and determined boundaries whereas a discussion of the application of fertilizer might consider soil types as having indeterminate boundaries. (Burrough and Frank 1995) and (Couclelis 1995) classify applications or phenomena according to an evolving taxonomy of object types and of boundary types. This leaves two questions open, namely (1) where do these models of well-defined objects come from, and (2) why are object models with sharp boundaries very often used in applications of GIS despite all obvious problems with delimiting geographic objects?
This paper attempts to address these two questions, starting from the point of view of experiential realism (Lakoff 1987). Experiential realism argues that the natural categories of human cognition are based on the experience we have of the world and that the way we perceive the world is influenced by our cognitive apparatus (Couclelis 1992); for a practical application to geography (Mark 1993). There are two different environments for our experience, namely small scale and large scale (geographic) space. Our daily experience with handling small objects in small scale space, e.g., apples, stones, leads to a concept of objects with well defined boundaries. This contrasts with the direct experience of large scale space, which leads to a different conceptual structure of space, mostly without dividing large scale space into delimited objects. These are both fundamental experiences for human beings which are deeply embodied in our thinking and give rise to the 'object' and 'field' concept. Experiential realism also assumes that metaphorical mapping (Lakoff and Johnson 1980) can be used to conceive situations in terms of previous experiences in different circumstances. For example, the experience with small prototypical objects with well determined boundaries is metaphorically translated to conceive the large scale
- Published
- 2020
- Full Text
- View/download PDF
173. Implementation Case Study: Generalisation of Raster Data
- Author
-
S. Dowers, R.G. Healey, M.J. Mineter, and G. G. Wilkinson
- Subjects
Raster data ,Computer science ,Data mining ,computer.software_genre ,computer - Published
- 2020
- Full Text
- View/download PDF
174. Partitioning Raster Data
- Author
-
M.J. Mineter
- Subjects
Raster data ,Computer science ,Reading (computer) ,Line (geometry) ,Parallel algorithm ,Overlay ,computer.file_format ,Raster graphics ,Grid ,computer ,Bottleneck ,Computational science - Abstract
Partitioning raster data for parallel algorithms requires methods for reading data from disk, distribution of data amongst processors, gathering resultant data from processors, and writing data to disk. The choice of the manner of data distribution is influenced by whether the algorithm to be parallelised is iterative, as in moving-window filtering, or is one-pass, as in a conversion, for example to vector-topology. Raster data can be held on disk as a simple array, or as a compressed dataset. The compression can be line-based or area-based. Holding a raster on disk as a simple array allows straightforward reading, distribution, collation and writing of data. The techniques are identical to those used in other grid-based applications. A variety of algorithms require that multiple datasets be analysed, for example to generate an overlay. The issues here are the same as for one dataset, but the likelihood of a bottleneck is enhanced as more data have to be read and distributed.
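A minimal single-machine sketch of that strip distribution pattern for an iterative (moving-window) algorithm: each worker receives a row strip plus a halo of half the window size, filters it, and the driver collates the de-haloed strips. The worker count, window size and halo are placeholders:

```python
# Strip partitioning for a moving-window (focal mean) filter: scatter row strips
# with a halo, filter each strip in a worker process, gather and collate results.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import uniform_filter

def _filter_strip(args):
    strip, top, bottom = args
    smoothed = uniform_filter(strip, size=5)               # 5x5 moving window
    return smoothed[top:strip.shape[0] - bottom]            # drop the halo rows

def parallel_focal_mean(raster, n_workers=4, halo=2):       # halo = half window size
    rows = raster.shape[0]
    bounds = np.linspace(0, rows, n_workers + 1, dtype=int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        top, bottom = min(halo, lo), min(halo, rows - hi)    # halo rows available
        jobs.append((raster[lo - top:hi + bottom], top, bottom))
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(_filter_strip, jobs))
    return np.vstack(parts)

# Call from under `if __name__ == "__main__":` on platforms that spawn workers.
```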
- Published
- 2020
- Full Text
- View/download PDF
175. Developing the Raster Big Data Benchmark: A Comparison of Raster Analysis on Big Data Platforms
- Author
-
David Haynes, Philip M. Mitchell, and Eric Shook
- Subjects
computation ,Geospatial analysis ,010504 meteorology & atmospheric sciences ,spatial benchmark ,Computer science ,Geography, Planning and Development ,Big data ,0211 other engineering and technologies ,lcsh:G1-922 ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Raster data ,Earth and Planetary Sciences (miscellaneous) ,Satellite imagery ,Computers in Earth Sciences ,geospatial ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Pixel ,business.industry ,computer.file_format ,Benchmarking ,cybergis ,Benchmark (computing) ,Data mining ,Raster graphics ,business ,computer ,lcsh:Geography (General) - Abstract
Technologies around the world produce and interact with geospatial data instantaneously, from mobile web applications to satellite imagery that is collected and processed across the globe daily. Big raster data allow researchers to integrate and uncover new knowledge about geospatial patterns and processes. However, we are at a critical moment, as we have an ever-growing number of big data platforms that are being co-opted to support spatial analysis. A gap in the literature is the lack of a robust assessment comparing the efficiency of raster data analysis on big data platforms. This research begins to address this issue by establishing a raster data benchmark that employs freely accessible datasets to provide a comprehensive performance evaluation and comparison of raster operations on big data platforms. The benchmark is critical for evaluating the performance of spatial operations on big data platforms. The benchmarking datasets and operations are applied to three big data platforms. We report computing times and performance bottlenecks so that GIScientists can make informed choices regarding the performance of each platform. Each platform is evaluated for five raster operations: pixel count, reclassification, raster add, focal averaging, and zonal statistics, using three different raster datasets.
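For reference, the five benchmarked operations have compact single-machine definitions; a numpy/scipy sketch (the benchmark itself measures distributed implementations of these on the big data platforms):

```python
# Single-machine reference versions of the five benchmarked raster operations.
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_count(raster, value):                    # 1. pixel count
    return int(np.count_nonzero(raster == value))

def reclassify(raster, mapping):                   # 2. reclassification
    out = raster.copy()
    for old, new in mapping.items():
        out[raster == old] = new
    return out

def raster_add(a, b):                              # 3. raster add (local algebra)
    return a + b

def focal_average(raster, size=3):                 # 4. focal averaging
    return uniform_filter(raster.astype("float64"), size=size)

def zonal_statistics(values, zones):               # 5. zonal statistics (mean/zone)
    sums = np.bincount(zones.ravel(), weights=values.ravel())
    counts = np.bincount(zones.ravel())
    return sums / np.maximum(counts, 1)
```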
- Published
- 2020
- Full Text
- View/download PDF
176. NOAA Marine Geophysical Data and a GEBCO Grid for the Topographical Analysis of Japanese Archipelago by Means of GRASS GIS and GDAL Library
- Author
-
Polina Lemenkova, Ocean University of China (OUC), and China Scholarship Council (CSC), State Oceanic Administration (SOA), Marine Scholarship of China, Grant No. 2016SOA002, China
- Subjects
ACM: K.: Computing Milieux/K.7: THE COMPUTING PROFESSION ,010504 meteorology & atmospheric sciences ,Geography, Planning and Development ,010502 geochemistry & geophysics ,01 natural sciences ,Gravity anomaly ,[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI] ,GEBCO ,Japan ,Computer Science (miscellaneous) ,Bathymetry ,mapping ,NOAA ,ACM: I.: Computing Methodologies/I.6: SIMULATION AND MODELING/I.6.1: Simulation Theory ,NetCDF ,ACM: I.: Computing Methodologies/I.5: PATTERN RECOGNITION ,[SDU.OCEAN]Sciences of the Universe [physics]/Ocean, Atmosphere ,geography.geographical_feature_category ,ACM: K.: Computing Milieux/K.8: PERSONAL COMPUTING ,computer.file_format ,ACM: K.: Computing Milieux ,ACM: K.: Computing Milieux/K.3: COMPUTERS AND EDUCATION/K.3.1: Computer Uses in Education/K.3.1.3: Distance learning ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] ,[INFO.INFO-IT]Computer Science [cs]/Information Theory [cs.IT] ,Archipelago ,[SDE]Environmental Sciences ,ACM: K.: Computing Milieux/K.3: COMPUTERS AND EDUCATION/K.3.1: Computer Uses in Education/K.3.1.2: Computer-managed instruction (CMI) ,Geology ,ACM: I.: Computing Methodologies ,ACM: K.: Computing Milieux/K.4: COMPUTERS AND SOCIETY ,Environmental Engineering ,[SDU.STU.GP]Sciences of the Universe [physics]/Earth Sciences/Geophysics [physics.geo-ph] ,[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,[SDU.STU]Sciences of the Universe [physics]/Earth Sciences ,ACM: G.: Mathematics of Computing ,ACM: I.: Computing Methodologies/I.3: COMPUTER GRAPHICS ,Raster data ,topography ,Geoid ,ACM: I.: Computing Methodologies/I.6: SIMULATION AND MODELING/I.6.5: Model Development ,ACM: K.: Computing Milieux/K.3: COMPUTERS AND EDUCATION ,General Bathymetric Chart of the Oceans ,[INFO]Computer Science [cs] ,14. 
Life underwater ,[SDU.STU.GM]Sciences of the Universe [physics]/Earth Sciences/Geomorphology ,Computers in Earth Sciences ,ACM: I.: Computing Methodologies/I.6: SIMULATION AND MODELING/I.6.7: Simulation Support Systems ,[SDU.ENVI]Sciences of the Universe [physics]/Continental interfaces, environment ,[SDU.STU.AG]Sciences of the Universe [physics]/Earth Sciences/Applied geology ,[SDU.STU.OC]Sciences of the Universe [physics]/Earth Sciences/Oceanography ,0105 earth and related environmental sciences ,Earth-Surface Processes ,ACM: K.: Computing Milieux/K.3: COMPUTERS AND EDUCATION/K.3.2: Computer and Information Science Education ,[INFO.INFO-MS]Computer Science [cs]/Mathematical Software [cs.MS] ,[SDU.STU.TE]Sciences of the Universe [physics]/Earth Sciences/Tectonics ,geography ,[INFO.INFO-DB]Computer Science [cs]/Databases [cs.DB] ,[INFO.INFO-PL]Computer Science [cs]/Programming Languages [cs.PL] ,ACM: I.: Computing Methodologies/I.4: IMAGE PROCESSING AND COMPUTER VISION ,ACM: K.: Computing Milieux/K.8: PERSONAL COMPUTING/K.8.1: Application Packages ,ACM: K.: Computing Milieux/K.8: PERSONAL COMPUTING/K.8.1: Application Packages/K.8.1.3: Graphics ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,Geophysics ,[INFO.INFO-IA]Computer Science [cs]/Computer Aided Engineering ,[INFO.INFO-MO]Computer Science [cs]/Modeling and Simulation ,ACM: I.: Computing Methodologies/I.2: ARTIFICIAL INTELLIGENCE ,Tectonics ,ACM: K.: Computing Milieux/K.3: COMPUTERS AND EDUCATION/K.3.1: Computer Uses in Education ,ACM: I.: Computing Methodologies/I.6: SIMULATION AND MODELING ,[SDU]Sciences of the Universe [physics] ,[INFO.INFO-IR]Computer Science [cs]/Information Retrieval [cs.IR] ,GRASS GIS ,computer - Abstract
This article analyzes topographical and geological settings in the Japanese Archipelago through comparative raster data processing using GRASS GIS. Data include bathymetric and geological grids in NetCDF format: GEBCO, EMAG2, GlobSed, marine free-air gravity anomaly and EGM96. Data were imported into GRASS by the gdalwarp utility of GDAL and projected via the PROJ library. The methodology includes data processing (projecting and import), mapping and spatial analysis. Visualization was done by shell scripting using a sequence of GRASS modules: ‘d.shade’ for relief mapping, ‘r.slope.aspect’ for modelling based on the DEM, ‘r.contour’ for plotting isolines, ‘r.mapcalc’ for classification, ‘r.category’ for associating labels, and auxiliary modules (d.vect, d.rast, d.grid, d.legend). The results of the geophysical visualization show that marine free-air gravity anomalies vary in the Sea of Japan (–30 to above 40 mGal), reflecting density inhomogeneities of the tectonic structure and correlating with the geological structure of the seafloor. Dominant values of the geoid model are 30–45 m, reflecting the West Pacific rise, determined by deep density inhomogeneities associated with mantle convection. Sediment thickness varies across the sea, reflecting its geological development, reaching 2 km in its deepest part and thinner values in the central part (Yamato Rise). The aspect map and reclassified map express the gradient of steepest descent.
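The same module sequence can also be driven from GRASS's Python scripting API instead of shell; a hedged sketch that must run inside a GRASS session, with raster names and the rules file as placeholders:

```python
# The article drives these modules from shell scripts; an equivalent sequence via
# the GRASS Python scripting API (raster names and labels.txt are placeholders,
# and the d.* rendering modules assume an active display).
import grass.script as gs

gs.run_command("r.slope.aspect", elevation="gebco_dem",
               slope="slope", aspect="aspect")               # DEM-based modelling
gs.run_command("r.contour", input="gebco_dem",
               output="isolines", step=500)                  # plot isolines
gs.mapcalc("depth_class = int(gebco_dem / 1000)")            # classification
gs.run_command("r.category", map="depth_class",
               rules="labels.txt")                           # associate labels
gs.run_command("r.relief", input="gebco_dem", output="relief")
gs.run_command("d.shade", shade="relief", color="gebco_dem") # relief mapping
gs.run_command("d.grid", size="10")
gs.run_command("d.legend", raster="gebco_dem")
```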
- Published
- 2020
- Full Text
- View/download PDF
177. BQC: A free web service to quality control solar irradiance measurements across Europe
- Author
-
Iñigo Sanz-Garcia, Ruben Urraca, and Andres Sanz-Garcia
- Subjects
Reanalysis ,pyranometer ,Pyranometer ,Satellite-based model ,Computer science ,020209 energy ,media_common.quotation_subject ,Control (management) ,Irradiance ,02 engineering and technology ,Solar irradiance ,computer.software_genre ,Raster data ,Upload ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Quality (business) ,quality control ,media_common ,Renewable Energy, Sustainability and the Environment ,021001 nanoscience & nanotechnology ,Data mining ,Web service ,0210 nano-technology ,computer ,Solar radiation measurements - Abstract
Classical quality control (QC) methods of solar irradiance apply easy-to-implement physical or statistical limits that are incapable of detecting low-magnitude measuring errors due to the large width of the intervals. We previously presented the bias-based quality control (BQC), a novel method that flags samples in which the bias of several independent gridded datasets is larger for consecutive days than the historical value. The BQC was previously validated at 313 European and 732 Spanish stations finding multiple low-magnitude errors (e.g., shadows, soiling) not detected by classical QC methods. However, the need for gridded datasets, and ground measurements to characterize the bias, was hindering the BQC implementation. To solve this issue, we present a free web service, www.bqcmethod.com, that implements the BQC algorithm incorporating both the gridded datasets and the reference stations required to use the BQC across Europe from 1983 to 2018. Users only have to upload a CSV file with the global horizontal irradiance measurements to be analyzed. Compared to previous BQC versions, gridded products have been upgraded to SARAH-2, CLARA-A2, ERA5, and the spatial coverage has been extended to all of Europe. The web service provides a flexible environment that allows users to tune the BQC parameters and upload ancillary rain data that help in finding the causes of the errors. Besides, the outputs cover not only the visual and numerical QC flags but also daily and hourly estimations from the gridded datasets, facilitating the access to raster data.
- Published
- 2020
- Full Text
- View/download PDF
178. Spatial Rough k-Means Algorithm for Unsupervised Multi-spectral Classification
- Author
-
Sonajharia Minz and Aditya Raj
- Subjects
Geospatial analysis ,010504 meteorology & atmospheric sciences ,Pixel ,business.industry ,Computer science ,Davies–Bouldin index ,k-means clustering ,Pattern recognition ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Raster data ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Rough set ,business ,Cluster analysis ,computer ,Spatial analysis ,0105 earth and related environmental sciences - Abstract
Geospatial applications have invaded most web- and IT-based services, adding value to information-based solutions. But there are many challenges associated with the analysis of raster data. The scarcity of labeled data, the class uncertainty caused by pixels containing multiple objects, the huge size of the input data, etc., all affect classification accuracy. The proposed Spatial Rough k-Means (SRKM) addresses the issue of mixed pixels in raster data. In Spatial Rough k-Means, the number of boundary or mixed pixels is reduced based on the spatial neighborhood property. Clustering quality parameters are used to understand the impact of the approximation of boundary pixels on the quality of the clusters. The experimental results of analyzing two multi-spectral Landsat 5 TM datasets of the Nagarjuna Sagar Dam and Western Ghats regions using SRKM indicate the potential of the method for addressing the mixed-pixel issues related to raster data.
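SRKM builds on rough k-means, where each cluster has a lower approximation (certain members) and an upper approximation that also holds boundary pixels shared between clusters. A generic Lingras-style sketch of that core, without the paper's spatial-neighborhood refinement:

```python
# Generic rough k-means core (Lingras-style): pixels whose two nearest centroids
# are similarly close go to the boundary (upper approximations of both clusters);
# centroids are updated from a weighted mix of lower and boundary members.
import numpy as np

def rough_kmeans(pixels, k, n_iter=20, threshold=1.3, w_lower=0.7):
    rng = np.random.default_rng(0)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        nearest = np.argsort(d, axis=1)[:, :2]            # two closest clusters
        idx = np.arange(len(pixels))
        ratio = d[idx, nearest[:, 1]] / np.maximum(d[idx, nearest[:, 0]], 1e-12)
        certain = ratio > threshold                        # clearly one cluster
        for c in range(k):
            lower = pixels[certain & (nearest[:, 0] == c)]
            upper = pixels[~certain & np.any(nearest == c, axis=1)]
            if len(lower) and len(upper):
                centroids[c] = w_lower * lower.mean(0) + (1 - w_lower) * upper.mean(0)
            elif len(lower) or len(upper):
                centroids[c] = (lower if len(lower) else upper).mean(0)
    return centroids, nearest[:, 0], certain
```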
- Published
- 2020
- Full Text
- View/download PDF
179. A fast shortest path method based on multi-resolution raster model
- Author
-
Wanfeng Dou and Qiuling Tang
- Subjects
ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,020206 networking & telecommunications ,Terrain ,02 engineering and technology ,computer.file_format ,Data modeling ,Raster data ,Computer Science::Graphics ,Multi resolution ,Shortest path problem ,0202 electrical engineering, electronic engineering, information engineering ,Motion planning ,Raster graphics ,computer ,Algorithm ,021101 geological & geomatics engineering - Abstract
The shortest path problem based on spatial geography is a research hotspot at the intersection of GIS and path planning. In order to improve the efficiency of solving the shortest path on high-precision terrain data, this paper proposes a fast spatial terrain shortest-path method based on multi-resolution DEM data. This method uses different multi-resolution DEM raster data models to represent the same terrain. First, the coarse-grained shortest path is solved on the low-resolution terrain; then the shortest paths between adjacent grid cells are solved in parallel on the high-resolution terrain data according to the corresponding mapping mode. This method reduces the search range between grids and greatly improves the efficiency of solving the shortest path. Experimental results show that the proposed method can not only solve the optimal path on the DEM raster data structure, but also improve the efficiency compared with directly solving the shortest path on large-scale high-resolution terrain.
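A minimal sketch of the coarse stage only: downsample the DEM and run Dijkstra on the low-resolution grid with an elevation-aware edge cost (the parallel high-resolution refinement between adjacent coarse cells is not shown). Start and goal are (row, col) cells on the coarse grid:

```python
# Coarse stage only: downsample the DEM and run Dijkstra on the low-resolution
# grid with an elevation-aware cost; the high-resolution refinement between
# adjacent coarse cells (done in parallel in the paper) is not shown.
import heapq
import numpy as np

def coarse_shortest_path(dem, factor, start, goal, cell_size=30.0):
    coarse = dem[::factor, ::factor]                 # low-resolution terrain model
    rows, cols = coarse.shape
    step = cell_size * factor
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), np.inf):
            continue                                  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr == dc == 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                horiz = step * np.hypot(dr, dc)
                climb = abs(coarse[nr, nc] - coarse[r, c])
                nd = d + np.hypot(horiz, climb)       # 3D surface distance
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1]                                 # coarse cells, start to goal
```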
- Published
- 2020
- Full Text
- View/download PDF
180. Road Vectorization Based on Image Pixel Tracking and Attribute Matching Method
- Author
-
Yang Chao, Yuxia Li, Lang Yuan, Yu Si, Kunlong Fan, and Ling Tong
- Subjects
Matching (graph theory) ,Pixel ,business.industry ,Computer science ,Image processing ,Image segmentation ,Raster data ,Vectorization (mathematics) ,Image tracing ,Computer vision ,Artificial intelligence ,business ,Focus (optics) ,Spatial analysis - Abstract
Extracting road information from remote sensing images is one of the hot topics in image processing. The extraction result is saved as raster data, which is difficult to use for spatial information queries, so it is necessary to convert it into vector data. Existing raster data vectorization methods struggle to maintain the shape of roads and the connections between them, and do not take attribute information into account. Focusing on these problems, this paper proposes an algorithm for raster data vectorization. The algorithm obtains roads by tracking pixels in the image, and the roads are segmented by nodes (endpoints and intersections), which ensures that their connections in the vector data are not disrupted. They are then matched with corresponding attribute information (material and width) by image masking. Finally, vector data with attribute information is obtained. The experimental results show that the accuracy of the connections between roads is 97.5% in the vector data obtained by this method, and attribute matching has a correct rate of 95%.
- Published
- 2020
- Full Text
- View/download PDF
181. GIS-Based Python Simulation of Infiltration over a Landscape
- Author
-
Kathleen M. Trauth and Mohammed G. Mohammed
- Subjects
Hydrology ,geography ,geography.geographical_feature_category ,Hydrological modelling ,Wetland ,04 agricultural and veterinary sciences ,Agricultural and Biological Sciences (miscellaneous) ,Raster data ,Infiltration (hydrology) ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,Environmental science ,Water content ,Water Science and Technology ,Civil and Structural Engineering - Abstract
Accounting for variation in infiltration over space and time is essential in planning for land management. One such example is the consideration of sites for mitigation wetlands, where know...
- Published
- 2020
- Full Text
- View/download PDF
182. Methods of Data Encryption for Use in Safe Space Information
- Author
-
Roberto Ferro Escobar, Javier Felipe Moncada Sánchez, and Yenny Espinosa Gómez
- Subjects
Similarity (geometry) ,Pixel ,Computer science ,business.industry ,Access control ,computer.software_genre ,Encryption ,Field (computer science) ,Image (mathematics) ,Raster data ,Transformation (function) ,Data mining ,business ,computer - Abstract
The transfer of raster-type image data over open computer networks has become widespread in recent years; to avoid alteration of the information, the pixel data must be protected with suitable encryption methods. Image encryption must fulfill at least two objectives: the first is to eliminate the correlation between neighboring pixels, which are highly similar, and the second is to encrypt the resulting data. The encryption methods used for conventional text formats cannot be applied directly to raster data, which is why different methods have been developed based on transformation techniques and chaos theory; their high mathematical complexity allows high-security encryption. Accordingly, spatial databases require security mechanisms, access control and encryption to guarantee the integrity of the information, and several types of procedures and techniques oriented towards encryption and database security exist; some of them are presented in this article in order to open a new field of work in this important area of knowledge.
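A toy sketch of the permutation-plus-diffusion pattern the abstract describes, using a logistic map as the chaotic source. It is illustrative only and not cryptographically secure; the key parameters and the raster band are arbitrary assumptions.
```python
import numpy as np

def logistic_sequence(n, x0, r=3.99):
    """Chaotic logistic-map sequence; (x0, r) play the role of the secret key."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3141):
    flat = img.ravel()
    seq = logistic_sequence(flat.size, x0)
    perm = np.argsort(seq)                      # step 1: permutation removes the
    shuffled = flat[perm]                       # correlation between neighbouring pixels
    keystream = (seq * 255).astype(np.uint8)    # step 2: diffusion by XOR with a keystream
    return np.bitwise_xor(shuffled, keystream).reshape(img.shape)

def decrypt(cipher, x0=0.3141):
    flat = cipher.ravel()
    seq = logistic_sequence(flat.size, x0)
    shuffled = np.bitwise_xor(flat, (seq * 255).astype(np.uint8))
    plain = np.empty_like(shuffled)
    plain[np.argsort(seq)] = shuffled           # undo the permutation from the same key
    return plain.reshape(cipher.shape)

band = (np.random.rand(64, 64) * 255).astype(np.uint8)   # hypothetical raster band
assert np.array_equal(decrypt(encrypt(band)), band)
```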
- Published
- 2020
- Full Text
- View/download PDF
183. Semantic Integration of Raster Data for Earth Observation: An RDF Dataset of Territorial Unit Versions with Their Land Cover
- Author
-
Ba-Huy Tran, Catherine Comparot, Cassia Trojahn, Nathalie Aussenac-Gilles, MEthodes et ingénierie des Langues, des Ontologies et du DIscours (IRIT-MELODI), Institut de recherche en informatique de Toulouse (IRIT), Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées, Centre National de la Recherche Scientifique (CNRS), and Université Toulouse - Jean Jaurès (UT2J)
- Subjects
Computer science ,Geography, Planning and Development ,Triplestore ,lcsh:G1-922 ,02 engineering and technology ,computer.software_genre ,[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI] ,RDF ,Raster data ,land cover ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Earth and Planetary Sciences (miscellaneous) ,SPARQL ,Semantic integration ,Computers in Earth Sciences ,Information retrieval ,semantic integration ,computer.file_format ,15. Life on land ,Semantic technology ,Earth Observation ,020201 artificial intelligence & image processing ,Raster graphics ,computer ,lcsh:Geography (General) ,Data integration - Abstract
Semantic technologies are at the core of Earth Observation (EO) data integration, providing an infrastructure based on RDF representation and ontologies. Because much EO data comes as raster files, this paper addresses the integration of data calculated from rasters as a way of qualifying geographic units through their spatio-temporal features. We propose (i) a modular ontology that contributes to the semantic and homogeneous description of spatio-temporal data to qualify predefined areas, (ii) a Semantic Extraction, Transformation, and Load (ETL) process, allowing us to extract data from rasters and to link them to the corresponding spatio-temporal units and features, and (iii) a resulting dataset that is published as an RDF triplestore, exposed through a SPARQL endpoint, and exploited by a semantic interface. We illustrate the integration process with raster files providing the land cover of a specific French winery geographic area, its administrative units, and their land registers over different periods. The results have been evaluated with regard to three use-cases exploiting these EO data: integration of time series observations, EO process guidance, and data cross-comparison.
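A hypothetical fragment showing what "linking raster-derived values to territorial units as RDF" can look like with rdflib; the namespaces, class names and property names below are invented placeholders, not the ontology proposed in the paper.
```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Placeholder namespaces; the paper's actual ontology IRIs are not reproduced here.
EX   = Namespace("http://example.org/eo#")
UNIT = Namespace("http://example.org/unit/")

g = Graph()
g.bind("ex", EX)

# One value computed from a raster (e.g. share of a land-cover class)
# attached to a territorial unit version and a time period.
unit = UNIT["commune-31555-v2018"]
obs  = URIRef("http://example.org/obs/commune-31555-v2018-vineyard-2018")

g.add((obs, RDF.type, EX.LandCoverObservation))
g.add((obs, EX.ofUnit, unit))
g.add((obs, EX.landCoverClass, EX.Vineyard))
g.add((obs, EX.coveragePercent, Literal(42.7, datatype=XSD.decimal)))
g.add((obs, EX.year, Literal(2018, datatype=XSD.gYear)))

print(g.serialize(format="turtle"))
```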
- Published
- 2020
- Full Text
- View/download PDF
184. Geospatial Clustering and Network Design for Rural Electrification in Africa
- Author
-
Malcolm McCulloch, Scot Wheeler, and Alycia Leonard
- Subjects
Raster data ,Network planning and design ,Geospatial analysis ,Computer science ,Distributed computing ,Rural electrification ,Minimum spanning tree ,Grid ,computer.software_genre ,Cluster analysis ,computer ,Sierra leone - Abstract
This paper proposes automated geospatial clustering and network design methods for rural electrification in Africa. First, clustering techniques are proposed to geographically group homes into communities for grid design. These methods dynamically adjust to regional housing density, taking the legal definition of under-grid distance as the sole user input. Next, graph theory is applied to generate a preliminary distribution network design using a minimum spanning tree of home locations. Community distribution grids are interconnected with transmission lines using a secondary minimum spanning tree of community centroids. A best-fit electrification architecture (e.g. mini-grid, grid extension, etc.) can be proposed for each community based on its density, quantity of potential connections, anticipated infrastructure costs, and distance to grid. These methods demonstrate the detail achievable using home-level geospatial vector data in rural electrification planning instead of commonly-used gridded raster data. These techniques are illustrated through a case study near Koidu in Sierra Leone.
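A compact sketch of the two-level design under assumed data: density-based clustering groups synthetic home coordinates into communities (with `eps` standing in for the under-grid distance), and minimum spanning trees give a first-cut distribution network per community plus an interconnecting backbone. The coordinates and parameters are hypothetical, not the Koidu case-study data.
```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix
from sklearn.cluster import DBSCAN

# Hypothetical home coordinates in metres (projected CRS): two settlements.
rng = np.random.default_rng(0)
homes = np.vstack([rng.normal((0, 0), 60, (40, 2)),
                   rng.normal((1500, 900), 80, (55, 2))])

# 1) Group homes into communities; eps plays the role of the legal
#    under-grid distance, the single user input mentioned above.
labels = DBSCAN(eps=150, min_samples=3).fit_predict(homes)
communities = sorted(set(labels) - {-1})

# 2) Per-community distribution network as a minimum spanning tree of homes.
for community in communities:
    pts = homes[labels == community]
    mst = minimum_spanning_tree(distance_matrix(pts, pts))
    print(f"community {community}: {pts.shape[0]} homes, "
          f"distribution line length ~ {mst.sum():.0f} m")

# 3) Interconnection: a second MST over the community centroids.
centroids = np.array([homes[labels == c].mean(axis=0) for c in communities])
backbone = minimum_spanning_tree(distance_matrix(centroids, centroids))
print(f"transmission backbone ~ {backbone.sum():.0f} m")
```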
- Published
- 2020
- Full Text
- View/download PDF
185. Building Volume Per Capita (BVPC): A Spatially Explicit Measure of Inequality Relevant to the SDGs
- Author
-
Sharolyn Anderson, Luca Coscieme, Tilottama Ghosh, and Paul C. Sutton
- Subjects
inequality ,spatially explicit GINI proxy ,010504 meteorology & atmospheric sciences ,Population ,0211 other engineering and technologies ,02 engineering and technology ,Land cover ,01 natural sciences ,EO of SDGs ,12. Responsible consumption ,Raster data ,Housing inequality ,11. Sustainability ,Affordable housing ,Per capita ,lcsh:Social sciences (General) ,lcsh:Science (General) ,education ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Sustainable development ,education.field_of_study ,1. No poverty ,General Medicine ,SDG 10 ,15. Life on land ,Geography ,Household income ,lcsh:H1-99 ,Cartography ,building volume ,lcsh:Q1-390 - Abstract
Building Volume Per Capita (BVPC, cubic meters of building per person) is presented as a proxy measure of economic inequality and a direct measure of housing inequality. Sustainable Development Goal 10 (SDG 10: reduced inequalities) is synergistic with SDG 11 on sustainable cities and communities. Access to safe and affordable housing, transport systems, and public spaces are among the targets of SDG 11 most closely linked with reducing inequalities. The Habitat III New Urban Agenda sets equal access to urban spaces, infrastructure and basic services as crucial for developing sustainable cities. Earth Observation (EO) data, including remotely sensed satellite data, airborne data, and model outputs, in combination with demographic and other statistical data, have been gaining importance for monitoring progress towards the SDGs. High-spatial-resolution building footprint data derived from aerial photographs, stereo imagery, and LIDAR data, obtained for the cities of California between 2010 and 2015, were used in this study. These measures of building volume were rasterized and juxtaposed with (divided by) a variety of demographic data, including vector-based census data of 2015 and LandScan raster population counts of 2015. The National Land Cover Database of 2011 was used to characterize the land cover variability of the cities. Using these datasets, the spatial pattern and distribution of BVPC for nine cities in California were studied. The results showed that BVPC was inversely related to intensity of development, and positively related to median household income within cities. A BV-GINI was also developed to characterize the variability of BVPC at the census tract level and the pixel level. This measure of income inequality, housing and population density is objective and easily computed. It can be used in other cities and countries and may help overcome the lack of data for SDG indicators.
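A small sketch of how a pixel-level BV-GINI could be computed from co-registered building-volume and population rasters; the rasters, the population mask threshold and the particular Gini formula are assumptions, not the authors' exact procedure.
```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative values (Lorenz-curve form)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical per-pixel rasters for one city (same grid and resolution).
rng = np.random.default_rng(0)
building_volume = rng.random((500, 500)) * 3000   # m^3 of building per pixel
population      = rng.random((500, 500)) * 20     # persons per pixel

mask = population > 0.5                            # ignore effectively unpopulated pixels
bvpc = building_volume[mask] / population[mask]    # m^3 per person
print("pixel-level BV-GINI:", round(gini(bvpc), 3))
```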
- Published
- 2020
- Full Text
- View/download PDF
186. Integrating cellular automata and discrete global grid systems: a case study into wildfire modelling
- Author
-
Colin Robertson and Majid Hojati
- Subjects
business.industry ,Computer science ,Big data ,Context (language use) ,computer.software_genre ,Cellular automaton ,Raster data ,Data model (ArcGIS) ,Data access ,Wildfire modeling ,General Earth and Planetary Sciences ,Data mining ,business ,Spatial analysis ,computer ,General Environmental Science - Abstract
With new forms of digital spatial data driving new applications for monitoring and understanding environmental change, there are growing demands on traditional GIS tools for spatial data storage, management and processing. Discrete Global Grid Systems (DGGS) are methods to tessellate the globe into multi-resolution grids, which represent a global spatial fabric capable of storing heterogeneous spatial data with improved performance in data access, retrieval, and analysis. While DGGS-based GIS may hold potential for next-generation big-data GIS platforms, few studies have tried to implement them as a framework for operational spatial analysis. Cellular Automata (CA) are a classic dynamic modeling framework that has been used with the traditional raster data model for environmental modeling tasks such as wildfire and urban expansion modeling. The main objectives of this paper are to (i) investigate the possibility of using a DGGS for running dynamic spatial analysis, (ii) evaluate CA as a generic data model for modeling dynamic phenomena within a DGGS data model and (iii) evaluate an in-database approach to CA modelling. To do so, a case study into wildfire spread modelling is developed. Results demonstrate that using a DGGS data model not only provides the ability to integrate different data sources, but also provides a framework for spatial analysis without geometry-based operations. This results in a simplified architecture and a common spatial fabric to support the development of a wide array of spatial algorithms. While considerable work remains to be done, CA modelling within a DGGS-based GIS is a robust and flexible modelling framework for big-data GIS analysis in an environmental monitoring context.
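A minimal cellular-automaton fire-spread sketch on a regular raster grid, standing in for the DGGS cells used in the paper (the in-database DGGS implementation is not reproduced). The ignition probabilities, fuel layer and neighbourhood rule are hypothetical.
```python
import numpy as np

UNBURNED, BURNING, BURNED = 0, 1, 2

def step(state, fuel, p_spread=0.45, rng=None):
    """One CA update: each burning cell ignites every unburned 8-neighbour with
    probability p_spread scaled by that neighbour's fuel, then burns out."""
    if rng is None:
        rng = np.random.default_rng()
    burning = (state == BURNING)
    # Count burning neighbours (8-connectivity; edges wrap, fine for a toy example).
    n = np.zeros_like(state, dtype=int)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            n += np.roll(np.roll(burning, dr, axis=0), dc, axis=1)
    p_ignite = 1 - (1 - p_spread * fuel) ** n
    new_fires = (state == UNBURNED) & (rng.random(state.shape) < p_ignite)
    nxt = state.copy()
    nxt[burning] = BURNED
    nxt[new_fires] = BURNING
    return nxt

# Hypothetical 200 x 200 landscape; fuel in [0, 1] would come from a raster layer.
rng = np.random.default_rng(1)
fuel = rng.random((200, 200))
state = np.full((200, 200), UNBURNED)
state[100, 100] = BURNING                      # ignition point

for _ in range(50):
    state = step(state, fuel, rng=rng)
print("burned cells:", int((state == BURNED).sum()))
```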
- Published
- 2020
187. A Raster-Based Methodology to Detect Cross-Scale Changes in Water Body Representations Caused by Map Generalization
- Author
-
Tinghua Ai and Yilang Shen
- Subjects
land use change ,Cartographic generalization ,010504 meteorology & atmospheric sciences ,Computer science ,multi-scale ,0211 other engineering and technologies ,02 engineering and technology ,raster map ,hierarchy ,lcsh:Chemical technology ,01 natural sciences ,Biochemistry ,Article ,Analytical Chemistry ,Set (abstract data type) ,Raster data ,lcsh:TP1-1185 ,Electrical and Electronic Engineering ,water area ,Instrumentation ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,business.industry ,Perspective (graphical) ,Pattern recognition ,computer.file_format ,Atomic and Molecular Physics, and Optics ,visual_art ,visual_art.visual_art_medium ,Tile ,Artificial intelligence ,Raster graphics ,business ,computer ,Change detection - Abstract
In traditional change detection methods, remote sensing images are the primary raster data, and the changes produced by cartographic generalization in multi-scale maps are not considered. The aim of this research was to use a new kind of raster data, map tile data, to detect change information for a multi-scale water system. From the perspective of spatial cognition, a hierarchical system is proposed to detect water area changes in multi-scale tile maps. The detection level of multi-scale water changes is divided into three layers: the body layer, the piece layer, and the slice layer. We also classify the water area changes and establish a set of indicators and rules for the change detection of water areas in multi-scale raster maps. In addition, we describe the key techniques for extracting water from tile maps. For evaluation purposes, the proposed method is applied in several test areas using the Tiandi map. After evaluating the accuracy of change detection, our experimental results confirm the efficiency and high accuracy of the proposed methodology.
- Published
- 2020
- Full Text
- View/download PDF
188. Arrays in Databases
- Author
-
rasdaman Team and Peter Baumann
- Subjects
050101 languages & linguistics ,SQL ,Database ,Computer science ,Online analytical processing ,Science and engineering ,05 social sciences ,02 engineering and technology ,computer.software_genre ,Raster data ,Data cube ,First-class citizen ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Architecture ,computer ,computer.programming_language ,Rasdaman - Abstract
Array Databases close a gap in the database ecosystem by adding modelling, storage, and processing support for multi-dimensional arrays. Packaged as "datacubes", such structures have long been known in OLAP and statistics, but they also appear as spatio-temporal sensor, image, simulation, and statistics data in all science and engineering domains. In our research we address Array Databases in all aspects, from concepts through architecture to applications. Our full-stack implementation rasdaman ("raster data manager"), which effectively pioneered Array Databases, is in operational use on multi-Petabyte, federated Earth data assets. Based on this experience, the rasdaman team has initiated and shaped datacube standards such as ISO SQL/MDA (Multi-Dimensional Arrays) and the OGC Earth datacube standards suite. In our talk we present the concepts and implementation of rasdaman and show its application to Earth datacubes, illustrated by a live demonstration of operational services.
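For readers unfamiliar with datacube-style operations, here is a small in-memory illustration, using xarray, of the trim-and-aggregate queries an Array Database serves; this is not rasdaman's query language, and the cube contents below are synthetic.
```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical 3-D datacube (time, y, x) of the kind an Array Database stores.
times = pd.date_range("2020-01-01", periods=12, freq="MS")
cube = xr.DataArray(np.random.rand(12, 180, 360),
                    dims=("time", "y", "x"),
                    coords={"time": times,
                            "y": np.linspace(89.5, -89.5, 180),
                            "x": np.linspace(-179.5, 179.5, 360)},
                    name="ndvi")

# Datacube-style trimming and aggregation: subset a space-time box, average over time.
subset = cube.sel(time=slice("2020-03-01", "2020-08-31"),
                  y=slice(60, 30), x=slice(-10, 40))
print(subset.mean(dim="time").shape)   # one 2-D map over the remaining (y, x) extent
```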
- Published
- 2020
- Full Text
- View/download PDF
189. Handling large geospatial raster data with the Earth System Data
- Author
-
Cremer, Felix
- Subjects
raster data ,Julia programming ,data handling - Published
- 2020
190. Can the establishment of National Key Ecological Functional Zones improve air quality?: An empirical study from China
- Author
-
Lingyu Li, Xing Li, Yu Chen, and Guangqin Li
- Subjects
Atmospheric Science ,Economics ,0211 other engineering and technologies ,Air pollution ,Social Sciences ,02 engineering and technology ,010501 environmental sciences ,medicine.disease_cause ,01 natural sciences ,Geographical Locations ,Empirical research ,Economic Growth ,media_common ,Conservation Science ,Multidisciplinary ,Ecology ,021107 urban & regional planning ,Pollution ,Chemistry ,Physical Sciences ,Medicine ,Economic Development ,Industrial ecology ,Environmental Monitoring ,Research Article ,Matching (statistics) ,China ,Asia ,Science ,media_common.quotation_subject ,Environment ,Resource Allocation ,Air Quality ,Raster data ,Development Economics ,Population Metrics ,Air Pollution ,medicine ,Environmental Chemistry ,Air quality index ,Environmental quality ,0105 earth and related environmental sciences ,Population Density ,Models, Statistical ,Population Biology ,Ecology and Environmental Sciences ,Biology and Life Sciences ,Atmospheric Chemistry ,People and Places ,Earth Sciences ,Environmental science ,Particulate Matter - Abstract
Drawing on fine particulate matter (PM2.5) concentrations from satellite raster data matched with county-level socio-economic data for 2008 to 2015 in China, this paper investigates the impact of the establishment of National Key Ecological Functional Zones (NKEFZ), which took place in June 2011, on environmental quality by employing the difference-in-differences method. The results indicate that the establishment of the NKEFZ significantly decreased PM2.5 concentrations, a drop of about 20% over the study period, after controlling for other factors affecting air quality. Robustness tests using the maximum and median PM2.5 concentration values show similar results. Further analysis finds that the establishment of NKEFZ can also improve the ecological utilization efficiency of land.
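A schematic difference-in-differences estimation on a synthetic county-year panel, to make the identification strategy concrete; the data-generating process, group sizes and effect size below are invented and do not reproduce the paper's data or estimates.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: counties designated as NKEFZ (treated) vs. not, before/after designation.
rng = np.random.default_rng(0)
rows = []
for c in range(300):
    treated = int(c < 100)
    for y in range(2008, 2016):
        post = int(y >= 2012)
        pm25 = 55 - 0.2 * (y - 2008) - 11 * treated * post + rng.normal(0, 5)
        rows.append({"county": c, "year": y, "treated": treated, "post": post, "pm25": pm25})
df = pd.DataFrame(rows)

# Difference-in-differences: the treated:post coefficient captures the policy effect.
model = smf.ols("pm25 ~ treated * post", data=df).fit()
print(model.params["treated:post"])   # recovers the synthetic effect (~ -11), not the paper's estimate
```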
- Published
- 2020
191. Study on the Preparation and Application of Vector Reference Map
- Author
-
Zhenpo Tian, Delie Ming, Yaqiang Shi, Li Xu, and Jingyuan Bao
- Subjects
Scene matching ,Raster data ,Matching (graph theory) ,business.industry ,Computer science ,Process (computing) ,Reference map ,Computer vision ,Artificial intelligence ,business ,Image (mathematics) - Abstract
In scene matching, we use vector data instead of raster data to prepare the reference map. The BSP algorithm is used to manage the vector data, which is then loaded onto the aircraft. During the guidance process, reference maps are prepared in real time based on the track information and camera parameters. Because the vector reference map is very different from the real-time map, conditional generative adversarial networks are used to convert the real-time map into a vector-style image, which is then matched to the vector reference map, achieving 86% matching accuracy.
- Published
- 2020
- Full Text
- View/download PDF
192. Image, geometry and finite element mesh datasets for analysis of relationship between abdominal aortic aneurysm symptoms and stress in walls of abdominal aortic aneurysm
- Author
-
Hozan Mufty, Angus Tavner, Inge Fourneau, Christopher Rogers, Bradley Saunders, Alastair Catlin, Grand Roman Joldes, Karol Miller, Bart Meuris, Ross Sciarrone, and Adam Wittek
- Subjects
Finite element method ,Computer science ,Patient-specific modelling ,lcsh:Computer applications to medicine. Medical informatics ,Raster data ,Stress (mechanics) ,03 medical and health sciences ,0302 clinical medicine ,Software ,Engineering ,Computer graphics (images) ,Data file ,medicine ,Polygon mesh ,Biomechanics ,lcsh:Science (General) ,030304 developmental biology ,0303 health sciences ,Multidisciplinary ,business.industry ,Image segmentation ,medicine.disease ,Abdominal aortic aneurysm ,Symptoms ,lcsh:R858-859.7 ,business ,030217 neurology & neurosurgery ,lcsh:Q1-390 - Abstract
These datasets contain Computed Tomography (CT) images of 19 patients with Abdominal Aortic Aneurysm (AAA), together with 19 patient-specific geometry datasets and computational grids (finite element meshes) created from these images, applied in the research reported in the Journal of Surgical Research article "Is There A Relationship Between Stress in Walls of Abdominal Aortic Aneurysm and Symptoms?" [1]. The images were randomly selected from the retrospective database of University Hospitals Leuven (Leuven, Belgium) and provided to The University of Western Australia's Intelligent Systems for Medicine Laboratory. The analysis was conducted using our freely available open-source software BioPARR (Joldes et al., 2017), created at The University of Western Australia. The analysis steps include image segmentation to obtain the patient-specific AAA geometry, construction of computational grids (finite element meshes), and AAA stress computation. We use well-established and widely used data file formats (Nearly Raw Raster Data, or NRRD, for the images; Stereolithography, or STL, for geometry; and the Abaqus finite element code keyword format for the finite element meshes). This facilitates re-use of our datasets in a practically unlimited range of studies that rely on medical image analysis and computational biomechanics to investigate and formulate indicators and predictors of AAA symptoms.
- Published
- 2020
193. Investigating spatial error structures in continuous raster data
- Author
-
Alexis Comber, Pedro Rodriguez-Veiga, Paul Harris, Narumasa Tsutsumida, and Heiko Balzter
- Subjects
FOS: Computer and information sciences ,Earth observation ,010504 meteorology & atmospheric sciences ,Mean squared error ,Computer science ,Monte Carlo method ,0211 other engineering and technologies ,02 engineering and technology ,Management, Monitoring, Policy and Law ,Statistics - Applications ,01 natural sciences ,Raster data ,Permutation ,Applications (stat.AP) ,Computers in Earth Sciences ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Earth-Surface Processes ,Local error diagnostics ,Global and Planetary Change ,computer.file_format ,Error distribution ,Kernel (statistics) ,Spatial accuracy ,Spatial heterogeneity ,Spatial variability ,Raster graphics ,computer ,Algorithm - Abstract
The objective of this study is to investigate spatial structures of error in the assessment of continuous raster data. Conventional error diagnostics often overlook possible spatial variation in error because they report only the average error or deviation between predicted and reference values. In this respect, this work uses a moving window (kernel) approach to generate geographically weighted (GW) versions of the mean signed deviation, the mean absolute error and the root mean squared error, and to quantify their spatial variations. This approach computes local error diagnostics from data weighted by distance to the centre of a moving kernel and allows spatial surfaces of each type of error to be mapped. In addition, a GW correlation analysis between predicted and reference values provides an alternative view of local error. The full abstract can be found in the PDF. This version of the manuscript was accepted for publication in the International Journal of Applied Earth Observation and Geoinformation on 28 September 2018.
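A short sketch of a geographically weighted RMSE under assumed sample data: squared errors are weighted by a Gaussian kernel around each centre, and the local statistic is evaluated on a grid of centres to map its surface. The bandwidth, kernel choice and synthetic error pattern are assumptions, not the paper's settings.
```python
import numpy as np

def gw_rmse(coords, predicted, reference, centre, bandwidth):
    """Geographically weighted RMSE at one kernel centre.
    Weights decay with a Gaussian kernel of the given bandwidth (map units)."""
    d = np.linalg.norm(coords - centre, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    err2 = (predicted - reference) ** 2
    return np.sqrt(np.sum(w * err2) / np.sum(w))

# Hypothetical sample: cell centres, reference values and predictions whose error grows eastwards.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10_000, (2_000, 2))                                # metres
reference = rng.normal(100, 20, 2_000)
predicted = reference + rng.normal(0, 5 + coords[:, 0] / 1_000, 2_000)

# Evaluate the local RMSE on a coarse grid of kernel centres to map its surface.
centres = np.stack(np.meshgrid(np.linspace(0, 10_000, 11),
                               np.linspace(0, 10_000, 11)), axis=-1).reshape(-1, 2)
surface = np.array([gw_rmse(coords, predicted, reference, c, bandwidth=2_000)
                    for c in centres])
print(surface.reshape(11, 11).round(1))
```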
- Published
- 2019
- Full Text
- View/download PDF
194. Data set on current and future crop suitability under the Representative Concentration Pathway (RCP) 8.5 emission scenario for the major crops in the Levant, Tigris-Euphrates, and Nile Basins
- Author
-
Hadi Jaafar and Chafik Abdallah
- Subjects
0303 health sciences ,Coupled model intercomparison project ,geography ,Multidisciplinary ,geography.geographical_feature_category ,Drainage basin ,Climate change ,lcsh:Computer applications to medicine. Medical informatics ,Crop ,Raster data ,Current (stream) ,03 medical and health sciences ,0302 clinical medicine ,Agricultural and Biological Science ,lcsh:R858-859.7 ,Environmental science ,Physical geography ,lcsh:Science (General) ,Scale (map) ,Spatial analysis ,030217 neurology & neurosurgery ,lcsh:Q1-390 ,030304 developmental biology - Abstract
This article describes crop suitability maps (raster data) for thirty-five crops in the Jordan, Litani, Orontes, Nile, and Tigris-Euphrates river basins. Spatial data on crop suitability are provided for two periods: current conditions, as the average of the years 1970–2000, and projected future conditions for the year 2050, as the average of the years 2041–2060. The data were generated by simulating mean monthly climatic data from the Coupled Model Intercomparison Project Phase 5 (CMIP5), downscaled to the 1-km scale from the Intergovernmental Panel on Climate Change 5th Assessment Report. Mean monthly climatic datasets from the WorldClim database were used to generate the suitability datasets with the FAO EcoCrop model under the Representative Concentration Pathway (RCP) 8.5 emission scenario for three General Circulation Models, CCSM4, GFDL-CM3, and HadGEM2-ES, at a spatial resolution of 30 arc-seconds. The findings reveal that many crops in the Levant will see a decrease in their suitability, whereas the suitability of crops in the upper Nile Basin will increase by 2050.
- Published
- 2019
- Full Text
- View/download PDF
195. Determining the occurrence of potential groundwater zones using integrated hydro-geomorphic parameters, GIS and remote sensing in Enugu State, Southeastern, Nigeria
- Author
-
Ogbonnaya Igwe, Stanley Ikenna Ifediegwu, and O. S. Onwuka
- Subjects
Hydrogeology ,Lineament ,Land use ,Renewable Energy, Sustainability and the Environment ,0208 environmental biotechnology ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,020801 environmental engineering ,Raster data ,Thematic map ,Geological survey ,Environmental science ,Drainage density ,Groundwater ,0105 earth and related environmental sciences ,Water Science and Technology ,Remote sensing - Abstract
In many parts of Nigeria, water resources are scarce, and where available, determining their abundance is difficult. In the present paper, groundwater potential zones for the evaluation of groundwater availability in Enugu State have been delineated using remote sensing and GIS techniques. Satellite images and conventional data obtained from the US Geological Survey, the World Soil Resources Office in cooperation with the Land and Water Development Division/FAO, and the Nigerian Geological Survey Agency were used to prepare thematic layers, viz. drainage density, geomorphology, slope, land cover/land use, soil, geology, rainfall and lineament density, which were converted to raster data using the feature-to-raster conversion tool in ArcGIS. The raster maps of the above-mentioned factors were assigned scores and weights computed from a multi-influencing factor approach. Each weighted thematic layer was then combined statistically to generate the groundwater potential zones map of the study area. The groundwater potential zones thus obtained were divided into four categories: super-abundant, moderately abundant, abundant and scarce. The groundwater resources map demonstrates that the study area is dominated by abundant and scarce groundwater potential zones, which cover about 6073.44 km2 (73.27%) of the study area, while the remaining 2294.98 km2 (26.73%) is dominated by super-abundant and moderately abundant groundwater potential. Validation against borehole/well yield data collected from the study area showed good correlation with the groundwater potential zones map. It is concluded that the hydro-geomorphic approach is very efficient for determining groundwater potential zones.
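A simplified weighted-overlay sketch of the kind of analysis described above: reclassified thematic rasters are combined as a weighted sum and the resulting index is sliced into four zones. The scores, weights and quartile-based classification are placeholders, not the paper's multi-influencing-factor values or thresholds.
```python
import numpy as np

# Hypothetical reclassified thematic rasters (scores 1-5 on the same grid); the
# layer names follow the factors listed in the abstract.
rng = np.random.default_rng(0)
shape = (400, 400)
names = ["drainage_density", "geomorphology", "slope", "land_cover",
         "soil", "geology", "rainfall", "lineament_density"]
layers = {name: rng.integers(1, 6, shape) for name in names}

# Hypothetical weights (sum to 1) standing in for a multi-influencing-factor scheme.
weights = {"drainage_density": 0.14, "geomorphology": 0.16, "slope": 0.12,
           "land_cover": 0.10, "soil": 0.10, "geology": 0.14,
           "rainfall": 0.12, "lineament_density": 0.12}

# Weighted overlay: the groundwater potential index is the weighted sum of scores.
gwpi = sum(weights[name] * layers[name].astype(float) for name in names)

# Classify into four zones (here by quartiles, purely for illustration).
q1, q2, q3 = np.quantile(gwpi, [0.25, 0.5, 0.75])
zones = np.digitize(gwpi, [q1, q2, q3])   # 0 = scarce ... 3 = super-abundant
print(np.bincount(zones.ravel()) / zones.size)
```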
- Published
- 2020
- Full Text
- View/download PDF
196. DETERMINATION OF LAND COVER AS LANDSLIDE FACTOR BASED ON MULTITEMPORAL RASTER DATA IN MALANG REGENCY
- Author
-
Gunawan Prayitno, Hyang Iman Kinasih Gusti, and Abdul Wahid Hasyim
- Subjects
Raster data ,Environmental Engineering ,Soil Science ,Cover (algebra) ,Landslide ,Building and Construction ,Land cover ,Physical geography ,Vegetation ,Geotechnical Engineering and Engineering Geology ,Geology - Abstract
Malang Regency is one of the regency administration areas in East Java Province. The complex physical condition of Malang Regency, especially its land relief accompanied by high rainfall, causes the regency to experience frequent floods and landslides. Based on BPBD data from 2017, around 80% of Malang Regency is a disaster-prone area for floods and landslides, especially in the southern part of the regency. This is caused by several natural factors, one of which is slope, and by human factors, namely changes in land cover. This study aims to examine the land cover factors that influence the extent of landslides occurring in southern Malang Regency. The independent variables used are built-up area, dense tree vegetation, sparse tree vegetation, shrubs, grass, open space, slope, and distance. Based on the results of multiple linear regression analysis, it can be concluded that built-up area cover (coefficient 0.062) and open space (coefficient 0.020) are directly proportional to the landslide area: the higher the percentage of built-up and open land cover, the larger the landslide area. In contrast, vegetated land cover is inversely proportional to the extent of landslides, meaning that it can reduce the landslide area if the percentage of vegetated land cover increases.
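A minimal ordinary-least-squares sketch on synthetic data, mirroring the kind of multiple linear regression used above; the design matrix and coefficients are invented for illustration and are not the study's data or estimates.
```python
import numpy as np

# Synthetic observations: columns are the land-cover shares (built-up, dense trees,
# sparse trees, shrubs, grass, open space) plus slope and distance; y is landslide area.
rng = np.random.default_rng(0)
n = 120
X = rng.random((n, 8))
beta = np.array([0.8, -0.5, -0.3, -0.2, -0.25, 0.4, 0.6, -0.1])   # invented "true" effects
y = X @ beta + rng.normal(0.0, 0.05, n)

# Multiple linear regression by ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", coef[0].round(3))
print("land-cover and terrain coefficients:", coef[1:].round(3))
```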
- Published
- 2020
- Full Text
- View/download PDF
197. Creation of Nominal Asset Value-Based Maps using GIS: A Case Study of Istanbul Beyoglu and Gaziosmanpasa Districts
- Author
-
Muhammed Oguzhan Mete and Tahsin Yomralioglu
- Subjects
Pixel ,business.industry ,Geography, Planning and Development ,Real estate ,Terrain ,Grid ,Computer Science Applications ,Education ,Raster data ,Renting ,Expropriation ,Econometrics ,Computers in Earth Sciences ,business ,Valuation (finance) - Abstract
Estimating the value of real estate has applications in fields as diverse as taxation, buying and renting property, expropriation and urban regeneration. It is therefore important to determine the most objective, accurate and acceptable value for real estate by considering spatial criteria. One stochastic method used to determine real estate values is 'nominal valuation'. In this approach, criteria that may affect land value are subjected to various spatial analyses, and pixel-based value maps can be produced using GIS. Land value maps are in raster data format and need to be compared with actual market values, which requires pixel-resolution analyses that depend on the selected grid dimensions. First, nominal value maps were produced using a nominal valuation model with criteria for proximity, visibility and terrain; these were weighted according to the 'Best Worst Method' to produce a nominal asset value-based map. Changes in unit land values were examined for maps at various resolutions; a resolution of 10 metres emerged as the ideal pixel size for valuation maps.
- Published
- 2019
- Full Text
- View/download PDF
198. Rasdaman as a platform for serving space-time raster data
- Author
-
S Ognjen Antonijević
- Subjects
Raster data ,Computer science ,Space time ,Computer graphics (images) ,Rasdaman - Published
- 2019
- Full Text
- View/download PDF
199. Extracting Centerlines From Dual-Line Roads Using Superpixel Segmentation
- Author
-
Yilang Shen, Min Yang, and Tinghua Ai
- Subjects
superpixel segmentation ,010504 meteorology & atmospheric sciences ,General Computer Science ,Computer science ,Feature extraction ,0211 other engineering and technologies ,02 engineering and technology ,Centerline extraction ,01 natural sciences ,Midpoint ,Raster data ,General Materials Science ,Computer vision ,Cluster analysis ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,business.industry ,image data ,General Engineering ,dual-line roads ,Image segmentation ,Line (geometry) ,Artificial intelligence ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Scale (map) ,business ,lcsh:TK1-9971 - Abstract
Extracting centerlines from dual-line roads is very important in urban spatial analysis and infrastructure planning. In recent decades, numerous algorithms for road centerline extraction based on the vector data have been proposed by various scholars. However, with the continual development of computer vision technology, advances in the corresponding theories and methods, such as superpixel segmentation, have provided new opportunities and challenges for road centerline extraction. In this paper, we propose a new algorithm called superpixel centerline extraction (SUCE) for dual-line roads based on the raster data. In this method, dual-line roads are first segmented using a superpixel algorithm called simple linear iterative clustering. Then, the superpixels located at road intersections are merged to generate connection points from their skeletons. Finally, the centerlines of roads are generated by connecting the center points and edge midpoints of each superpixel. To test the proposed SUCE method, the vector data of roads at a scale of 1:50 000 from Shenzhen, China, and the raster data of roads at the 18th level from the Tiandi map are used. Compared with a traditional method in ArcGIS software (version 10.2) based on the vector data and four existing thinning algorithms based on the raster data, the results indicate that the proposed SUCE method can effectively extract centerlines from dual-line roads and restore the original road intersections while avoiding burrs and noises, both for simple and complex road intersections.
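A fragment showing the first stage of a SLIC-based pipeline (superpixel segmentation of a synthetic road tile) and the per-superpixel centre points from which a centerline can later be assembled; the tile, segment count and compactness are arbitrary assumptions, and the connection-point merging of SUCE is not reproduced.
```python
import numpy as np
from skimage.segmentation import slic

# Synthetic RGB road tile: a bright horizontal dual-line road on a dark background.
tile = np.zeros((256, 256, 3), dtype=float)
tile[100:140, :, :] = 0.9

# Stage 1: simple linear iterative clustering (SLIC) superpixels over the raster tile.
labels = slic(tile, n_segments=400, compactness=10, start_label=1)

# Per-superpixel centre points; a centerline can be built by joining these centres
# with the midpoints of shared superpixel edges.
centres = {lab: tuple(np.mean(np.nonzero(labels == lab), axis=1))
           for lab in np.unique(labels)}
first = min(centres)
print(len(centres), "superpixels; centre of label", first, "=", centres[first])
```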
- Published
- 2019
200. Landscape Evaluation Based on Gaofen Satellite in the Southern Part of the Nile Delta, Egypt
- Author
-
Hazem T. Abd El-Hamid, Wenlong Wang, and Qiaomin Li
- Subjects
Raster data ,Mineral exploration ,geography ,geography.geographical_feature_category ,Remote sensing (archaeology) ,business.industry ,Distribution (economics) ,Satellite ,Woodland ,Physical geography ,business ,Grassland ,Natural (archaeology) - Abstract
Landscape segmentation and classification are fundamental to landscape research because they provide an important frame of reference for researchers to communicate and compare their work. Anthropogenic activities are a main driver of landscape change. The present study aims to assess the impact of anthropogenic activities on landscape classification of the Nile Delta using remote sensing and GIS techniques. Field surveys, digital databases and GIS capabilities are applied for landscape classification. Vector data derived from numerous maps and raster data from satellite images both support a clear classification of the landscape. Results showed that anthropogenic impacts negatively affect the landscape classification. Using GF2, landscapes are classified into eight major classes: cultivated land, garden land, woodland, grassland, bare land, urban land, water bodies and mining land. Urban land occupies the highest percentage of the study area; urban construction and development areas centred on the capital Cairo and the city of Giza are dumbbell-shaped towards the east. Bare land occupies the second-largest percentage of the study area and is distributed around the Nile Delta, southeast of Cairo and southwest of Giza. The vegetation cover comprises three classes in the sequence cultivated land > garden land > grassland; these classes depend mainly on water from the Nile River. Mining land occupies the smallest percentage of the study area; the distribution of mines and mineral exploration is very limited and lies at the edges of the cities. Landscape metrics, namely the Fractal Dimension (FRAC) and the Square Pixel (SqP) metric, were applied to validate the segmentation and classification. These metrics indicated that the landscape classification is related to natural and human changes, which in turn are related to the unplanned management of new projects and other anthropogenic activities.
- Published
- 2019
- Full Text
- View/download PDF